Associative Memories in the Feature Space
An autoassociative memory model is a function that, given a set of data points, takes as input an arbitrary vector and outputs the most similar data point from the memorized set. However, popular memory models fail to retrieve images even when the corruption is mild and easy to detect for a human evaluator. This is because similarities are evaluated in the raw pixel space, which does not contain any semantic information about the images. This problem can be easily solved by computing similarities in an embedding space instead of the pixel space. We show that an effective way of computing such embeddings is via a network pretrained with a contrastive loss. As the dimension of embedding spaces is often significantly smaller than the pixel space, we also obtain a faster computation of similarity scores. We test this method on complex datasets such as CIFAR10 and STL10. An additional drawback of current models is the need to store the whole dataset in the pixel space, which is often extremely large. We relax this condition and propose a class of memory models that only store low-dimensional semantic embeddings, and use them to retrieve similar, but not identical, memories. We demonstrate a proof of concept of this method on a simple task on the MNIST dataset.
Introduction
Throughout life, our brain stores a huge amount of information in memory, and can flexibly retrieve memories based on related stimuli. This ability is key to performing intelligently on many tasks. In the brain, sensory neurons detect external inputs and transmit this information to the hippocampus via a hierarchical network. Stored memories that involve a conscious effort to be retrieved are called explicit, and are divided into episodic and semantic memories. Episodic memories consist of experienced events, while semantic memories represent knowledge and concepts. Both kinds of memories are retrieved in a constructive way via a generative network [1].
In computer science, computational models of associative memories are essentially pattern storage and retrieval systems. A standard task is to store a dataset, and retrieve the correct data point when shown a corrupted version of it [10,11]. Popular associative memory models are Hopfield networks [10,11], with their modern continuous-state formulation [20,14], and sparse distributed memories [12]. While these models have a large theoretical capacity, which can be exponential in the case of continuous-state Hopfield networks [20,15], this is not reflected in practice: they fail to correctly retrieve memories such as high-quality images when presented with even medium-size datasets [17,22]. In fact, the similarity between two points is typically computed on the raw pixel space using a simple function (such as a dot product) that is insensitive to the 'semantic' features of images that we wish to discriminate between. The performance drops even more when using stronger corruptions, such as rotations, croppings, and translations, as relations between individual pixels are lost. These problems can be solved by learning a similarity function that is sensitive to the semantics of the stored memories. In essence, we need to embed every data point into a different space, where simple similarity scores can discriminate well between semantic features. This approach resembles kernel methods, where the similarity operation is performed after the application of a feature map ϕ, which sends both the input and the data points to a space where the dot product is more meaningful.
This leads to the problem of finding a map ϕ that embeds different data points in a space where they can all be well discriminated. In this work, we demonstrate that the simple approach of using pre-trained neural networks as feature maps strongly improves the performance of standard Hopfield networks. We first review a recent mathematical formalism that describes one-shot associative memory models present in the literature, called universal Hopfield networks, and extend this framework to incorporate these feature maps. The main contributions of this paper are briefly as follows:
• We define a class of associative memory models, called semantic Hopfield networks, that augment associative memory models with a feature map. As feature maps, we use ResNet18 and ResNet50, pretrained in a contrastive way, as done in SimCLR [3]. What results is a model that stores the original data points as in standard memory models, but computes similarities in an embedding space. This model is able to perform an exact retrieval on complex data points, such as CIFAR10, STL10, and ImageNet images, when presented with queries formed by corrupted and incomplete versions.
• We then address another drawback of current associative memory models, namely the need to store all data points, which is memory-inefficient, by proposing a model that stores only low-dimensional embeddings of the original data points. The retrieved data points are not exact copies of the stored ones, as they are generated via a generative network ψ : R^k −→ R^d. This also adds a degree of biological plausibility, as the data points in this model are stored in a declarative way, and retrieved in a constructive way. We provide a proof of concept of this model on MNIST, using a simple autoencoder.
The rest of this paper is structured as follows. In Section 2, we introduce universal Hopfield networks, providing formal definitions that describe their structure. Sections 3 and 4 introduce the original contributions of this work, the semantic memory model and its fully-semantic variation. In Sections 5 and 6, we end the paper with a summary of the related literature and a concluding discussion.
Preliminaries
In this section, we review universal Hopfield networks [17]. According to this framework, associative memory models can always be represented as decompositions of three parametrized functions: score, separation, and projection, whose parameters depend on the stored memories. Let D = {x_i}_{i≤N} be a dataset, with x_i ∈ R^d for every i. Informally, given any x ∈ R^d, the goal of an associative memory model is to return the data point of D that is most similar to x according to a function κ : R^d × R^d −→ R. Hence, we have the following:

Definition 1. Given a dataset D = {x_i}_{i≤N}, a universal Hopfield network is a function µ_D : R^d −→ R^d that admits the decomposition µ_D = π_D • α • κ_D, where: 1. score: a function κ_D : R^d −→ R^N dependent on the dataset, 2. separation: a function α : R^N −→ R^N not dependent on the dataset, 3. projection: a function π_D : R^N −→ R^d dependent on the dataset.

Ideally, we would like the function µ_D to store the dataset D as an attractor of its dynamics. Informally, an attractor is a set of points that a system tends to evolve towards. In designing associative memories, we typically wish to store data points as attracting points and design a retrieval function that converges to the data points in as few iterations as possible. In a continuous space, however, the attractors may be close to the data points, but not exactly where the data points are. This depends on the choice of the separation function. However, this problem is easily solved by taking the maximum value after computing the separation function. In practice, models able to retrieve data points in one shot are preferable. This is always the case when using max as the separation function, or a continuous approximation given by a softmax with a large inverse temperature β. We now describe the main ideas behind the decomposition of universal Hopfield networks, and show how popular models in the literature can be derived from it.
Score. Given an input vector x, the score function returns a vector with a number of entries equal to the number of data points N. The i-th entry of the vector κ_D(x) represents how similar x is to the data point x_i. Hopfield networks compute the similarity using a dot product, while sparse distributed memories use the negative Hamming distance.
Separation. If the cardinality of the dataset is large, and multiple data points are close to the input x in terms of similarity, the retrieval process may require a large number of iterations of µ. However, we wish to retrieve a specific data point as quickly as possible. The goal of the separation function α is then to emphasize the top score and de-emphasize the rest, to make convergence faster. Popular choices of separation functions are softmax, threshold, and polynomial, used respectively by modern Hopfield networks, sparse distributed memories, and dense Hopfield networks [14,7].
Projection. The projection is a function that, given the vector of scores, already modified by the separation function, returns a vector in the original input space. For exact retrieval, the projection function is set to the matrix of data points. Particularly, consider the matrix P ∈ R^{d×N}, which has its i-th column equal to the data point x_i. Then, given a separated score vector s, we have π_D(s) = P s. If s is a one-hot vector, a perfect copy of a data point is returned.
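To make the three-function decomposition concrete, here is a minimal sketch of one-shot retrieval in PyTorch, using a dot-product score, a softmax separation with inverse temperature β, and projection onto the memory matrix (function and variable names are ours, not from the paper):

```python
import torch

def uhn_retrieve(x, P, beta=100.0):
    """One-shot universal Hopfield retrieval.

    x:    query vector of shape (d,)
    P:    memory matrix of shape (d, N), one stored data point per column
    beta: inverse temperature; a large beta approximates the max separation
    """
    scores = P.T @ x                               # score kappa_D(x): N dot-product similarities
    weights = torch.softmax(beta * scores, dim=0)  # separation alpha: emphasize the top score
    return P @ weights                             # projection pi_D: combination of stored memories

# Usage: store 100 random 3072-dimensional points and retrieve one from a noisy query.
P = torch.randn(3072, 100)
query = P[:, 7] + 0.1 * torch.randn(3072)
print(torch.allclose(uhn_retrieve(query, P), P[:, 7], atol=1e-2))  # True
```

With a sufficiently large β the softmax output is essentially one-hot, so the projection returns an exact copy of the stored data point, matching the one-shot retrieval regime described above.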
This categorization of one-shot memory models has enabled systematic testing and generalization over multiple combinations of similarity functions, showing that, for image datasets, some similarity and separation functions work much better than others. For example, metrics such as negative L1 and L2 distances outperform dot products. However, distances are less biologically plausible than dot products, as they require dedicated computations, while a dot product can be represented as a perceptron layer. As shown in Fig. 1, scoring images in the pixel space is highly impractical, as it suffices to simply rotate or crop an image to trick the memory model. As separation function, we use a softmax with a large inverse temperature β, as it approximates the max function and hence performs one-shot retrievals.
Semantic Memory Model
In this section, we propose a new class of associative memory models. Intuitively, this class is similar to UHNs, but is augmented with an embedding function ϕ that maps memories into a feature space. Here, two embeddings are scored as in UHNs, as if they were the original stored data points. The resulting vector of similarity scores is first separated, and then projected back to the pixel space. We will show that this approach enables powerful associative memory models.

SimCLR. The first problem to address is to find a suitable embedding ϕ to perform associative memory experiments. Ideally, this function should map corrupted versions of the same data point close to each other, and different data points away from each other. A straightforward way of doing this is to train a neural network using a contrastive loss. This has already been done in the literature, as it is an effective way of pre-training a neural network when a large amount of unlabelled data is available [4,5]. Typically, the pretraining procedure works as follows: given a dataset D = {x_i}_{i≤N}, the network is provided simultaneously with a batch of B pairs of data points (x̃_i, x̃_j) that are corrupted versions of the data points x_i, x_j, and trained to minimize the contrastive loss

ℓ(i, j) = −log [ exp(sim(z_i, z_j)/τ) / Σ_{k=1}^{2B} 1_{i≠k} exp(sim(z_i, z_k)/τ) ],

where z_i = ϕ(x̃_i) is the output of the network, 1_{i≠k} is a binary function equal to one if i ≠ k and zero otherwise, τ is a temperature parameter, and sim is a similarity function. When training has converged, the original work adds one or more feedforward layers on top of the output layer where the contrastive loss is defined, to fine-tune using the few labelled data available. This simple framework for contrastive learning of visual representations is known as SimCLR. As we do not need to perform supervised learning, here we simply use the pretrained network to compute similarity scores of pairs of data points embedded into the latent space of the model.
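For concreteness, the following is a compact sketch of this loss (the NT-Xent formulation used by SimCLR) with cosine similarity, written for a batch in which rows 2i and 2i+1 are the two corrupted views of the same data point; this batch layout is our convention:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z, tau=0.5):
    """Contrastive loss over 2B embeddings z, with z[2i] and z[2i+1] positives."""
    z = F.normalize(z, dim=1)             # cosine similarity = dot product of unit vectors
    sim = z @ z.T / tau                   # (2B, 2B) pairwise similarity matrix
    sim.fill_diagonal_(float("-inf"))     # the indicator 1_{i != k} excludes self-similarity
    pos = torch.arange(z.shape[0]) ^ 1    # the positive of row 2i is row 2i+1, and vice versa
    return F.cross_entropy(sim, pos)      # mean of -log softmax probability of the positive

# Usage: a batch of B = 8 pairs of 128-dimensional embeddings.
z = torch.randn(16, 128)
print(nt_xent_loss(z).item())
```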
Set-up. In the following experiments, we test our semantic memory model on two datasets, CIFAR10 [13] and STL10 [6]. The first consists of 60000 32 × 32 colored images, divided into a 50000-10000 train-test split, while the second consists of 105000 96 × 96 colored images, divided into a 100000-5000 train-test split.
As functions ϕ, we use a ResNet18 for CIFAR10 and a ResNet50 for STL10 [8], trained as described in the original SimCLR work [4]. Details about the parameters used can be found in the supplementary material. Then, we use the test sets, never seen by the models, to evaluate the retrieval performance from corrupted memories. Particularly, we use the following six kinds of corruptions, visually illustrated on the left side of Fig. 2:
1. Cropping (Crop): the corrupted image is a zoomed version of the original one;
2. Masking (Mask): half of the image is masked with uniform random noise;
3. Color filters (Color): different color filters are randomly applied to the original images;
4. Rotation: the images are randomly rotated by an angle of 0, π/2, −π/2, or π;
5. Salt and pepper (S&P): a random subset of the pixels of the original images is set to 1 or 0;
6. Gaussian noise (Gauss): Gaussian noise of variance η = 0.1 and different means is added to the original images.
As similarity functions, we tested the dot product, the cosine similarity, the negative Euclidean distance (L2 norm), and the negative Manhattan distance (L1 norm). As separation function, we used a softmax with a large inverse temperature. To make the comparison with UHNs clear, we also report their accuracies using the same corruptions and activation functions.
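For reference, a sketch of the four score functions, vectorized over a matrix M of N stored embeddings (rows) and a query embedding q; the helper name is ours:

```python
import torch

def scores(q, M, kind="dot"):
    """Similarity scores between a query q of shape (e,) and memories M of shape (N, e)."""
    if kind == "dot":
        return M @ q
    if kind == "cosine":
        return (M @ q) / (M.norm(dim=1) * q.norm() + 1e-8)
    if kind == "l2":
        return -(M - q).norm(dim=1)       # negative Euclidean distance
    if kind == "l1":
        return -(M - q).abs().sum(dim=1)  # negative Manhattan distance
    raise ValueError(kind)
```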
Implementation Details. As a loss function, we always used a contrastive loss with cosine similarity, as done in the original work on SimCLR. As parameters, we followed a popular PyTorch implementation, which differs from the official one (available only in TensorFlow) but is equivalent in terms of the pre-training regime.
For the experiments on CIFAR10, we used a ResNet18 with embedding dimension 512 trained for 100 epochs; for STL10, we used a ResNet50 with embedding dimension 2048 trained for 50 epochs.
The hyperparameters used for both models are the same: batch size of 256, learning rate of 0.0003, and weight decay of 1e−4. As it is complex to describe exactly the details of the corruptions used in our associative memory tasks, we refer to the PyTorch code in the supplementary material. For rotations, color filters, and croppings, we used the corresponding torchvision transforms. For Gaussian noise, masking, and salt and pepper noise, the corruption is applied directly to the original data point.
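As an illustration, here is a minimal sketch of these three corruptions, assuming image tensors with values in [0, 1]; the parameter values are examples and may differ from those used to produce Fig. 2:

```python
import torch

def mask_half(img):
    """Replace the bottom half of the image with uniform random noise."""
    out = img.clone()
    h = img.shape[-2]
    out[..., h // 2:, :] = torch.rand_like(out[..., h // 2:, :])
    return out

def salt_and_pepper(img, p=0.1):
    """Set a random fraction p of the pixels to 0 or 1 with equal probability."""
    out = img.clone()
    u = torch.rand_like(img)
    out[u < p / 2] = 0.0
    out[u > 1 - p / 2] = 1.0
    return out

def gaussian_noise(img, mean=0.0, var=0.1):
    """Add Gaussian noise with the given mean and variance, clamped to [0, 1]."""
    return (img + mean + var ** 0.5 * torch.randn_like(img)).clamp(0.0, 1.0)
```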
Results. Detailed results, reporting the percentage of wrongly retrieved images for each task, dataset, and similarity function, are given in Tables 1 and 2. As expected, our models outperform UHNs on corruptions where the position of the pixels is altered. This corresponds to all the considered corruptions besides masking and salt and pepper noise. In fact, when masking an image, 50% of the pixels remain unchanged, allowing similarity functions on the pixel space to return high values. In this task, UHNs outperform our models. A similar reasoning can be applied to salt and pepper noise. Here, however, our method performs better by a small margin. In all the other considered tasks, the margin is large, and the few images correctly retrieved by UHNs belong to particular cases: UHNs were able to retrieve cropped or rotated images only when they had close to uniform colors/backgrounds. In those cases, in fact, it is much more likely that a crop or a rotation leaves the representation of an image in the pixel space mostly unchanged: uniform images are fixed points of those transformations.
In terms of the similarity functions used, semantic models are generally more robust than UHNs, where the final performance of a specific similarity function strongly depends on the corruption used. In most cases, the cosine similarity and the distances obtained completely different performance. While this also happened in some cases for our model, the negative L1 norm always obtained the best (or close to the best) performance. For UHNs, no similarity function has shown to be preferable to the others. This is an advantage of semantic models, as we want to build a memory model that is robust under different kinds of corruption.
Efficiency. In this paragraph, we show the better efficiency of our method compared with standard memory models. As already stated, dot products are slightly faster to compute than distances. However, under some kinds of corruption, the better performance of the L1 norm makes it the best candidate. In Table 3, we compare the running times of the proposed experiments. The results show that semantic models are much faster than UHNs, despite the fact that they have to perform a forward pass to compute the semantic embeddings. This better efficiency is simply a consequence of the smaller dimension of the embedding space with respect to the pixel space, but it may be crucial in some scenarios. Particularly, the dimension of the semantic space is given by the dimension of the output of the embedding function ϕ, in our case 512 for ResNet18 and 2048 for ResNet50. This is a large improvement over the pixel space, as a single CIFAR10 image has dimension 3072 and a single STL10 image has dimension 27648. In tasks where having an efficient model is a high priority, it is possible to speed up the model further by using pre-trained models with a smaller output dimension. This could be important in online applications.
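As a rough illustration of why scoring in a lower-dimensional space is cheaper, the snippet below times the dot-product score alone for the two dimensions; the numbers are hardware-dependent and deliberately exclude the encoder forward pass discussed above:

```python
import time
import torch

def time_scores(d, N=10000, reps=100):
    """Average wall-clock time of dot-product scoring against N stored vectors."""
    M, q = torch.randn(N, d), torch.randn(d)
    t0 = time.perf_counter()
    for _ in range(reps):
        _ = M @ q
    return (time.perf_counter() - t0) / reps

print(time_scores(3072))  # pixel-space scoring for CIFAR10 (3 * 32 * 32)
print(time_scores(512))   # embedding-space scoring for ResNet18 outputs
```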
Changing the Mean. To better study how the two models differ when retrieving images with different levels of noise, we replicate the experiments performed above using as corruptions added Gaussian noise with different means (µ ∈ {0, 0.1, 0.2, 0.3, 0.4, 0.5}) and variance 0.1. Visual examples of the resulting corrupted images are given on the left side of Fig. 3. This kind of noise corrupts the image both by adding random noise and by making it "whiter". UHNs are robust with respect to noise with zero mean [17], but weak when the mean is increased, as nonzero means have a large impact on the position of an image in the pixel space. Making an image "whiter", however, does not alter the semantic information that it contains: from a human perspective, we are easily able to determine that the six images represented on the left side of Fig. 3 are different corrupted versions of the same image. Hence, we expect semantic models to perform better than UHNs when dealing with images corrupted by Gaussian noise of high mean. This is indeed the case, as the results presented on the right side of Fig. 3 show. Here, the performance of the two models is comparable (with UHNs being slightly better) when using a mean of 0.3 or smaller. The performance of UHNs, however, drops significantly for higher means: they were able to retrieve less than 5% of the images when presented with Gaussian noise of mean 0.4, and less than 1% when this mean was further increased to 0.5. Instead, the performance of semantic models was stable, and suffered only a small decrease: they were always able to retrieve more than 70% of the original memories when presented with Gaussian noise of mean 0.5.
Pretraining on ImageNet. We now show that it is possible to drastically improve the results by using more powerful embedding functions. Particularly, we follow the same procedure defined above, but use different models pre-trained on ImageNet, instead of the respective training sets. The considered models are a ResNet50x1, ResNet50x2, and ResNet50x4 [4], all downloaded from the official repository (https://github.com/google-research/simclr). In Table 4, we report the results using the cosine similarity for all models. The results confirm the current trend in machine learning: the larger the model, the better the performance. Particularly, ResNet50x4 obtains the best results that we have achieved in this work with cosine similarity, with a huge improvement with respect to the smaller models presented in Tab. 2. This shows that the proposed method is general, and strongly benefits from large pre-trained models made available for transfer learning.
Fully-semantic Memory Model
From the biological perspective, the family of memory models introduced in the previous section is implausible, as it stores exact copies of the dataset in memory instead of low-dimensional representations.
In fact, our brain performs poorly when it comes to exact retrievals, but it is excellent at recalling conceptual memories [19,21,26]. Here, we provide a memory model that, on the one hand, is coherent with these biological constraints, and on the other hand, is more memory-efficient. The main drawback, however, is the inability to retrieve memories exactly, which is often useful in practical tasks. As both scoring and retrieval are computed in a low-dimensional embedding space, we call this family of models fully-semantic memory models.
Note that both the score and the projection functions defined in the previous section require access to the dataset D. To overcome this, we need two functions ϕ and ψ, where ϕ is conceptually similar to the ones used for the semantic memory model, as it again maps data points to a low-dimensional embedding space R^e, and ψ is a generative function that follows the inverse path, mapping from the embedding space back to images. A formal definition is as follows.
Definition 3. Given a dataset D = {x_i}_{i≤N}, a feature map ϕ : R^d −→ R^e, and a generative map ψ : R^e −→ R^d, a fully-semantic memory model is a function µ_{ϕ(D)} : R^d −→ R^d that admits the decomposition µ_{ϕ(D)} = ψ • π_{ϕ(D)} • α • κ_{ϕ(D)} • ϕ, where both similarity scores and projections are computed in the embedding space R^e.

Note that the dataset itself is not stored, but only its embeddings. If the dimensionality of the embedding space is significantly smaller than the dimensionality of the data, then this results in significant memory savings. However, the parametric functions ϕ and ψ also have to be stored, and hence the effective advantage in terms of memory is a tradeoff between these two quantities.
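A minimal sketch of the resulting retrieval, with phi and psi standing for a pretrained encoder and generator (hypothetical names; the stored state is only the embedding matrix E):

```python
import torch

def fully_semantic_retrieve(x, E, phi, psi, beta=50.0):
    """Fully-semantic retrieval: only the embeddings E of shape (N, e) are stored.

    x:   query in pixel space, shape (d,)
    phi: encoder R^d -> R^e;  psi: generative map R^e -> R^d
    """
    z = phi(x)                                     # embed the (corrupted) query
    scores = E @ z                                 # dot-product scores in the embedding space
    weights = torch.softmax(beta * scores, dim=0)  # separation
    z_ret = E.T @ weights                          # projection onto the stored embeddings
    return psi(z_ret)                              # generate a similar, not identical, image
```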
Learning ϕ and ψ. To make the retrieval of the fully-semantic model effective, we need the functions ϕ and ψ to be meaningful. This means that they again have to be pre-trained on a dataset that has features similar to the ones that we want to store. We will now show an example on a small autoencoder, i.e., a multi-layer perceptron trained to reproduce the same data point used as input. The distinguishing characteristic of an autoencoder is the presence of a bottleneck layer, much smaller than the input layer, which is required to prevent the network from simply learning the identity mapping.
The sequence of layers that maps the input to the bottleneck layer is called the encoder; the remaining part, which maps the output of the encoder back to the input space, is called the decoder. We take the functions ϕ and ψ to be the trained encoder and decoder, respectively. A sketch of this network is shown in Fig. 4(a).
Set-Up. The task that we tackle now is a standard one in the associative memory domain: we present the model with a corrupted version of an image that it has stored in memory as a key, and check whether the model is able to retrieve an image that is semantically equivalent to the original one. As a consequence, the results that we present in this section are purely qualitative, as it does not make sense to score images based on how similar they are to the original with respect to a distance in the pixel space. To learn the functions ϕ and ψ, we trained an autoencoder to generate images of the training dataset, composed of 60000 images. Then, we perform associative memory tasks on the test set, composed of 10000 images. To do that, we first save the embedding of every test image (each embedding has dimension 12), and then corrupt every image with Gaussian noise. As similarity functions, we tested the dot product and the cosine similarity, and as separation functions, we used the softmax with different inverse temperatures β. For completeness, we also report the reconstructions of MHNs, using the dot product as similarity function. In both cases, we did not perform any normalization before scoring the similarities.
Implementation Details. The autoencoder has 8 layers of dimensions 784, 64, 32, 16, 12, 16, 32, 64, 784, and was trained with ReLU activations, a learning rate of 0.001, and a batch size of 250 for 300 epochs. The corruption used is simple Gaussian noise with mean 0 and variance 0.2. Training the functions ϕ and ψ (hence, the autoencoder) takes approximately 5 minutes on an RTX Titan. Note that the experiments proposed for fully-semantic models are not to be considered for practical applications, as we have used a simple and deterministic generative model.
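For illustration, a sketch of an autoencoder with the stated layer widths; the exact placement of the activations and the choice of a sigmoid output (to keep pixels in [0, 1]) are our assumptions:

```python
import torch.nn as nn

class Autoencoder(nn.Module):
    """784-64-32-16-12-16-32-64-784 MLP autoencoder; the 12-unit bottleneck
    provides the embeddings used by the fully-semantic memory model."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(784, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, 12),
        )
        self.decoder = nn.Sequential(
            nn.Linear(12, 16), nn.ReLU(),
            nn.Linear(16, 32), nn.ReLU(),
            nn.Linear(32, 64), nn.ReLU(),
            nn.Linear(64, 784), nn.Sigmoid(),  # assumption: pixel values in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```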
Results. Representations of corrupted keys, as well as the retrieved constructive memories, are given in Fig. 4(b). Particularly, the reconstructions show that the model is able to correctly retrieve memories in the embedding space, even when the cardinality of the dataset is large. However, the retrieval is not perfect, and sporadic errors may occur. These results can be improved, and scaled up to more complex datasets, by using more complex encoders and decoders. In terms of the functions used, the cosine similarity outperforms the dot product, and a softmax with a large inverse temperature (β ≥ 50) is needed for one-shot retrievals, as shown in Fig. 4(c). In fact, a softmax separation function with a small inverse temperature is not enough to discriminate between different stored data points when performing one-shot retrievals.
Related Work
While using the same networks as several computer vision tasks, the final goal of our work is to perform memory tasks, and it is hence mostly related to the associative memory literature. The first model of this kind, called the Lernmatrix [23], dates back to 1961, and was built using the hardware properties of ferromagnetic circuits. The first two influential computational models, however, are the Hopfield network [10,11] and sparse distributed memory models [12]. The first emulates the dynamics of a spin-glass system, and the second was born as a computational model of our long-term memory system. In recent years, associative memory models have regained popularity, as their literature increasingly intersects that of deep learning. A variation of Hopfield networks with polynomial capacity has been introduced to perform classification tasks [14], and a subsequent result showed that this capacity can be made exponential with a simple change of activation function [7]. However, these models were used to perform classification tasks also due to their limitation of dealing only with discrete vectors. The generalization to continuous-valued vectors was developed several years later [20]. There is also a line of research that combines associative memory models with deep architectures, such as deep associative neural networks [16], which augment the storage and retrieval mechanism of dense Hopfield networks using deep belief networks [9], and generative predictive coding networks [22], which rely on the theory of predictive coding to store and retrieve images. Recent lines of work have also focused on implementing forget operations, to remove stored memories that are no longer needed [27,18]. While many works primarily focus on retrieval tasks, recent ones have also used associative memory models to study and understand the popular transformer architecture [25]. It has in fact been shown that the attention mechanism is a special formulation of modern continuous-state Hopfield networks [20], and that its dynamics can also be approximated by a modern formulation of sparse distributed memory models [2]. A similar result has been proven for the fully-MLP architecture [24], able to achieve excellent results in classification tasks despite only using fully connected layers.
Conclusion
In this work, we have addressed the problem of storing and retrieving natural data, such as colored images, in associative memory models. First, we have discussed the problem of computing similarities in the pixel space, which creates a mismatch between human and machine performance when it comes to associating similar stored data points. Because modern associative memory models compute simple similarity scores on raw pixels, it is in fact possible to trick them by simply rotating or translating an image. The same transformations, however, would not be able to trick a human judge. To address this mismatch, we have defined two associative memory models that compute similarity scores in an embedding space, allowing associative memory tasks to be performed in scenarios where corruptions do not alter the conceptual content of the stored data points.
In terms of the generality of the considered benchmarks, we have tested against an associative memory model that is a generalization of most of the models present in the literature, the universal Hopfield network. In detail, it is a generalization of modern Hopfield networks, continuous-state Hopfield networks, as well as Kanerva's associative memories. Hence, we believe that our analysis is rich enough, as it shows that the performance is sometimes orders of magnitude better. In terms of the architectures considered, we have used ResNets, as they are both the most powerful pre-trained models available with a contrastive loss, and the ones expected to achieve the best performance. Hence, we expect the results of almost any other class of models to be worse than the ones obtained in this work. However, our method is highly generalizable: given any state-of-the-art (SOTA) memory model X, we can apply our embedding function to enhance X's retrieval performance for natural images while significantly increasing capacity. This generalizability eliminates the need to test against every individual model, as our method naturally improves performance by leveraging the quality of the embedding from a large pretrained ResNet.
As embedding models, we have used neural networks trained with a contrastive loss. As this is a popular method in the modern literature, it is easy to find pre-trained models suitable for a given task, freeing the user from the burden of training one from scratch. Training your own contrastive model, however, has an interesting advantage for some practical applications, where the original data points are often faced with the same kind of corruption. One example is that of adversarial attacks: if our memory model always gets tricked by one kind of corruption, it is possible to collect multiple examples of this corruption and feed them into the contrastive loss as data augmentations. This would force the model to group together corrupted versions of the same data point, where the corruption is the same one that will be faced by the dataset. The second model that we propose has the goal of making the model lighter and more plausible, as well as generating images similar, but not identical, to the stored ones. It is a fully-semantic model, which performs both similarity computations and reconstructions in the embedding space. We have proposed simple experiments on an autoencoder trained on MNIST. Applications in practice would need more powerful generative models, picked according to the task and data at hand.
Figure 1. (a) Decomposition of a universal Hopfield network into score, separation, and projection. (b) Examples of retrieved data points when given corrupted versions using Gaussian noise. (c) Examples of retrieved data points when given cropped versions.
Definition 2. Given a dataset D = {x_i}_{i≤N} and a feature map ϕ : R^d −→ R^e, a semantic memory model is a function µ_D : R^d −→ R^d such that: 1. µ_D admits the decomposition µ_D = π_D • α • κ_{ϕ(D)} • ϕ, 2. the map π_D • α • κ_{ϕ(D)} is a universal Hopfield network, where similarity scores are computed in the embedding space R^e.
Figure 2. Example of a semantic memory model, where the function ϕ is a ResNet pre-trained using a contrastive loss. On the left, examples of the six kinds of corruptions used in this section; on the right, the original image to be retrieved by the model.
Figure 3. Retrieval accuracies of UHNs and semantic models when presented with images corrupted with Gaussian noise of variance η = 0.1 and different levels of mean µ. On the left, examples of images after this corruption was applied; on the right, retrieval accuracies, plotted considering the best result obtained when testing different similarity functions.
Figure 4. (a) Example of a fully-semantic memory model, where ϕ and ψ are the encoder and decoder parts of a trained autoencoder, and the goal is to retrieve an MNIST image given a corrupted version. (b) Retrieved images when provided with corrupted versions of the first 20 images of the MNIST test set, with Gaussian noise of mean 0 and variance 0.2 (left). The best result is obtained with the cosine similarity, identical to the original reconstructions of the autoencoder when provided with clean data. (c) Examples of retrievals with the cosine similarity when varying the inverse temperature β.
Table 1. Percentage of wrongly retrieved memories on CIFAR10.
Table 2. Percentage of wrongly retrieved memories on STL10.
Table 3. Running times of the experiments (in seconds).
Table 4. Percentage of wrongly retrieved memories on STL10 using models pre-trained on ImageNet.
Protein interaction networks in neurodegenerative diseases: From physiological function to aggregation
The accumulation of protein inclusions is linked to many neurodegenerative diseases that typically develop in older individuals, due to a combination of genetic and environmental factors. In rare familial neurodegenerative disorders, genes encoding for aggregation-prone proteins are often mutated. While the underlying mechanism leading to these diseases still remains to be fully elucidated, efforts in the past 20 years revealed a vast network of protein–protein interactions that play a major role in regulating the aggregation of key proteins associated with neurodegeneration. Misfolded proteins that can oligomerize and form insoluble aggregates associate with molecular chaperones and other elements of the proteolytic machineries that are the frontline workers attempting to protect the cells by promoting clearance and preventing aggregation. Proteins that are normally bound to aggregation-prone proteins can become sequestered and mislocalized in protein inclusions, leading to their loss of function. In contrast, mutations, posttranslational modifications, or misfolding of aggregation-prone proteins can lead to gain of function by inducing novel or altered protein interactions, which in turn can impact numerous essential cellular processes and organelles, such as vesicle trafficking and the mitochondria. This review examines our current knowledge of protein–protein interactions involving several key aggregation-prone proteins that are associated with Alzheimer’s disease, Parkinson’s disease, Huntington’s disease, or amyotrophic lateral sclerosis. We aim to provide an overview of the protein interaction networks that play a central role in driving or mitigating inclusion formation, while highlighting some of the key proteomic studies that helped to uncover the extent of these networks.
Neurodegenerative diseases (NDs) are complex disorders, with multifactorial pathology, that result in progressive damage to neuronal cells and loss of neuronal connectivity, ultimately leading to impaired mobility and/or cognition. Protein aggregation due to misfolding and oligomerization gives rise to extracellular or intracellular inclusions that are a common hallmark for many NDs. Further spreading of these amyloid aggregates in the nervous system (similar to prion-based infections, hence referred to as a prion-like mechanism) is often thought to be a major element in the etiology of NDs (1). In the past few decades, many of the genetic and biochemical causes underlying NDs associated with protein aggregation were uncovered, leading to the distinction between rarer familial forms, where disease-causing mutations are genetically inherited, and the more common sporadic forms, where genetic and environmental risk factors drive the pathogenesis (2). In both cases, the affected proteins are found enriched in pathological aggregates, which highlights their importance in the manifestation of the disease. However, despite the knowledge accumulated and the many clinical trials in which attempts were made to alleviate protein aggregation, to date no therapeutic strategy has been broadly accepted to cure any of the NDs. This has led many scientists to question whether protein aggregation is really central to ND etiology or a mere manifestation of other underlying causes (3,4). Nonetheless, collectively, the work of the past decades generated a more complex understanding of how each aggregation-prone protein engages with many key cellular pathways. In this review, we aim to provide an overview of these intricate connections by bringing together core findings and more recent discoveries.
For each ND, different sets of genes are typically found mutated in the familial forms, and different brain regions and cell types are initially affected. For example, Huntington's disease (HD) and spinocerebellar ataxia type 1 (SCA1) are linked to the expansion of the CAG repeat of the huntingtin (HTT) and ataxin 1 (ATXN1) genes, respectively, resulting in proteins with an unusually long polyglutamine (polyQ) tract that is very prone to aggregation and causes intracellular deposits in striatal neurons (5,6). In Alzheimer's disease (AD), two different types of deposits are observed. The aberrant cleavage products of the transmembrane protein amyloid-β (Aβ) precursor protein (APP) form extracellular plaque deposits in the temporal and parietal brain regions, while the protein tau (MAPT) accumulates intracellularly, in neurofibrillary tangles (7). In Parkinson's disease (PD), the primarily affected brain area is the substantia nigra (SN), where α-synuclein (α-syn; SNCA) aggregates are found to accumulate in dopaminergic neurons (8). In amyotrophic lateral sclerosis (ALS), cellular aggregates of superoxide dismutase 1 (SOD1), RNA-binding protein FUS (FUsed in Sarcoma), and TAR-DNA-binding protein 43 (TDP-43) have been identified in motor neurons of the primary motor cortex, brainstem, and spinal cord (9). It is therefore important to consider each of these proteins independently and in the context of the cells that are most affected. Note that while we will mostly use short protein names in this review, whenever a gene is linked to an ND, the corresponding italicized gene name will also be indicated in brackets if it differs from the protein name.
Protein misfolding and aggregation of disease-associated proteins is facilitated by mutations and posttranslational modifications (e.g., phosphorylation and protein cleavage) that avert formation of the native protein structure, while in some other cases misfolding can also seemingly occur sporadically, without yet a clear explanation. Aggregation is typically initiated by a seed and/or an oligomer, in which sequence-specific elements of the misfolded protein interact to adopt a non-native conformation, which can then convert other proteins into the toxic form. In many cases, the oligomerization of misfolded proteins leads to the formation of amyloid fibrils with a distinctive β-sheet structure that arise when soluble oligomers begin to assemble into small protofibrils (10). When more proteins are converted into the non-native forms, these protofibrils become longer fibrils that can then form larger cellular inclusions visible by light microscopy. Recently, it has been proposed that oligomerization may be favored by liquid-liquid phase separation of aggregation-prone proteins (11) (Box 1). Moreover, it is now also evident that there are different polymorphs for most amyloid fibrils in vitro and in vivo (polymorph is a term used to indicate the capacity of a polypeptide to generate fibrils with different structures) (12,13).
Protein aggregation can cause a series of deleterious events in the cell. First, it will typically lead to a loss of function of the aggregated protein and can then also affect other cellular components that normally form protein-protein interactions (PPIs) with the natively folded protein. In some cases, these interacting proteins will also coaggregate. For each aggregation-prone protein discussed later, we review some of the main known PPIs and briefly discuss their physiological relevance. In several cases, we also underline how some disease-specific mutations may impact these PPIs, especially if this is also linked to aggregation.
In addition, aggregation-prone species can mediate a toxic gain of function due to their ability to engage in aberrant and nonphysiological interactions with different cellular components that would not normally occur (14). Notably, acute or chronic exposure to stress conditions can lead to the unraveling of buried, hydrophobic, and aggregation-prone regions, even in unrelated proteins, thereby inducing their coaggregation with pathogenic polypeptides (15). These gained PPIs can often perturb the normal function of affected proteins. For instance, all the reviewed proteins, in their aggregated form, can interact with mitochondrial proteins resulting in disrupted mitochondrial function. These mitochondrial perturbations are reviewed in more detail in each chapter due to their evident centrality in ND. In addition to interfering with other proteins, aggregated proteins and small protofibrils can also interact with and disrupt cell membranes, which will then exacerbate cytotoxicity (15,16). As the aggregating proteins can be arranged in different polymorphs, each form may impact the cell differentially, depending on their ability to propagate and to interact with other cellular components. Interestingly, the cellular environment may dictate which conformation is favored, which in turn could impact how the disease will spread (17).
To deal with the challenges posed by aggregation-prone proteins, human cells developed coping mechanisms that largely rely on the protein homeostasis network. Molecular chaperones are central components of this network that maintain protein homeostasis (a.k.a. proteostasis) by facilitating protein folding and disaggregation, as well as by targeting misfolded products for degradation (18).
Box 1. Liquid-liquid phase separation
Conventional mechanisms for the aggregation of many disease-related proteins proceed through protein misfolding and oligomerization. Recently, increasing attention has been given to the role of liquid-liquid phase separation (LLPS) in this process. Phase-separated droplets provide a concentrated environment where the aggregation process may be accelerated. LLPS occurs when the interactions between molecules in solution are stronger than their interactions with the solution, to the extent that the entropic cost of demixing is overcome and a condensed phase is formed (393). Beginning with P granules, many cellular membraneless organelles (MLOs) have been shown to exhibit liquid-like properties such as exchanging components with the surrounding environment, deforming under shear force, and fusing (394-396). This led to the hypothesis that MLOs form through LLPS. While LLPS plays a crucial role in cellular processes, changes in the properties of phase-separated granules have been linked to NDs. The ability to phase separate in vitro is emerging as a common property of disease-associated proteins. Over time, phase-separated granules can mature into more solid, glass-like states (397). The vitrified state consists of thioflavin-positive aggregates for many of the disease-associated proteins discussed here. Phase separation of tau occurs in the presence of polyanionic molecules or RNA-binding proteins such as T-cell intracellular antigen 1 (TIA1) (398,399). Phosphorylation or the presence of disease-associated mutations in tau promotes its phase separation and accelerates the conversion from the liquid-like to the solid state, eventually forming thioflavin-T-positive fibrils (398-401). The phase separation of α-syn requires nonphysiological conditions or long periods of time in vitro (402). However, α-syn localizes to droplets formed by tau through the interaction of its negatively charged C-terminal domain with the positively charged proline-rich region of tau (403). The link between phase separation and neurodegeneration is particularly clear in the case of ALS, where mutations in several RNA-binding proteins have been implicated. RNA-binding proteins are components of many MLOs, and it is thought that their structural properties, such as intrinsically disordered regions (IDRs), are important for driving the phase separation of these granules. The interaction between IDRs and RNA can drive phase separation and influence the morphology and dynamics of the resulting protein-RNA granules (404). ALS-associated mutations in IDRs or low-complexity domains of the RNA-binding proteins FUS, TIA1, heterogeneous nuclear ribonucleoprotein A1 (hnRNPA1), and TDP-43 accelerate the liquid-to-solid transition of phase-separated granules in vitro, with many eventually forming thioflavin-T-positive inclusions (405-410). These transitions may influence pathology by disrupting granule functions and trapping RNA-binding proteins and elements of the translational machinery (405). Low-complexity domains are characteristic of HTT, which phase separates through weak hydrophobic interactions between its proline-rich region and the polyQ expansion (411,412). With increasing polyQ length, HTT phase separates more quickly and at lower concentration. The phase-separated inclusions eventually convert to solid and irreversible structures.
Interestingly, the protein profilin interacts with HTT at its proline-rich domain and reduces its ability to phase separate, as well as the rate of fibril formation (413). This highlights how the PPIs and phase-separation behavior of disease-associated proteins are intertwined and potentially play a major role in neurodegeneration.

Different cellular pathways can be used to eliminate toxic misfolded proteins, directing them either to the ubiquitin-proteasome or the lysosomal system. Furthermore, the cell displays a protective mechanism that can drive the formation of larger noncytotoxic, or less toxic, inclusions, hence lowering the cytotoxicity of smaller protofibrils via sequestration (19). Nevertheless, the aberrant interactions of aggregating proteins described previously can also extend to the protein quality control network itself. These can cause deleterious effects by trapping chaperones and reducing the pool of chaperones available for other critical functions, thereby impairing proteasome function and potentially exacerbating the accumulation of aggregated proteins and their cytotoxicity (20,21). Understanding how the components of the proteostasis network are affected during disease progression could reveal strategies for the treatment of NDs.
The functional characterization of many aggregation-prone proteins associated with NDs has so far proven to be a major challenge. In some cases, the function of the aggregation-prone protein remains to be fully elucidated. Furthermore, the contributing role of the genetic modifiers of ND-causing genes is often poorly understood. Few studies have systematically analyzed the similarities between affected proteins causing the main types of NDs. Therefore, we decided to review past work, first by interrogating the PPI networks (PINs) around the proteins associated with NDs and then by carefully examining how each of these aggregation-prone proteins interacts with the protein homeostasis network and the mitochondria, in order to gain a general view of how the progression of neurodegeneration may impact the cells and to determine any commonalities between these diseases.
Building networks
Identification of PPIs of proteins associated with neurodegeneration was at first guided by knowledge of genes associated with the diseases and by the initial characterization of components enriched in protein aggregates, often using immunochemistry. The elucidation of PPIs using targeted approaches (customarily by assessing binding using coimmunoprecipitation) still represents a large portion of our knowledge related to aggregation-prone proteins in neurodegeneration. In most cases, the first unbiased searches to gain better understanding of the role of these aggregation-prone proteins or their binding partners were driven by the yeast two-hybrid (Y2H) method, a technique that emerged over 25 years ago. Advances in PPI identification methodologies in the past two decades, including improvements in mass spectrometry instrumentation, have further unraveled a large portion of human PINs, providing additional insights into the functions of proteins involved in NDs. Major contributions to PINs associated with NDs have come from proteomics studies that rely on protein coimmunoprecipitation experiments, commonly referred to as affinity purification mass spectrometry (AP-MS) (Box 2), while proximity-labeling approaches (Box 3) are becoming increasingly popular.
AP-MS has contributed greatly to the mapping of PPIs, and several large-scale studies have significantly expanded our repositories. For instance, the laboratories of Drs Steve Gygi and Wade Harper have now completed AP-MS experiments for 10,128 baits in a series of landmark studies (22-24). In addition to these monumental efforts, it is important to identify how PINs may adapt in the presence of disease-associated mutations, especially in the context of the NDs that are examined here. Hosp et al. performed an extensive characterization of the interactome of several NDs using quantitative AP-MS to identify interactors of WT proteins and disease-associated mutants from PD, AD, HD, and SCA (25). Notably, they identified a small number of interactors specific to each bait in the presence of a disease-associated mutation that were not found with their WT counterparts. Application of this approach to more cell types and disease-associated proteins will be crucial in uncovering how impacted PINs within affected cells contribute to neurodegeneration.
Proximity-labeling techniques offer complementary approaches to AP-MS, especially when subcellular compartments cannot be maintained or insoluble components cannot be immunoprecipitated. By using an engineered ascorbate peroxidase, Markmiller et al. not only determined the composition of stress granules in neuronal cells, but also identified changes in stress granule composition in induced motor neurons with ALS-associated mutations in C9orf72 and HNRNPA2B1 (26). This study demonstrates the power of proximity labeling in identifying potential mechanisms associated with NDs. Similarly, proximity-dependent biotin identification (BioID) has been used to identify changes in interactions with aggregation-prone proteins. Chou et al. used BioID to identify interactors of TDP-43, including interactors specific to the aggregated form (27). Additionally, BioID has been applied to other disease-associated proteins such as cyclin F (also associated with ALS) and α-syn (28,29). Importantly, interaction studies have also been applied in the absence of disease models to map the underlying proteostasis network. Piette et al. used a combination of BioID and AP-MS to map the interactors of J-domain proteins and heat shock protein (Hsp) 70s (30). This builds on previous work identifying interactors of chaperones by AP-MS (31) and demonstrates the importance of using these complementary techniques in characterizing PINs.
In addition to protein mass spectrometry, Y2H and other protein complementation assays continue to be used for specifically identifying binary interactions. Haenig et al. recently performed an extensive Y2H screen using nearly 500 bait proteins related to NDs (32). Focusing on inherited ND-causing proteins, they constructed disease-specific networks and validated hits from these networks in patient samples. The ability to study interactions of protein fragments in Y2H has led to a specific understanding of differences in PPIs between various disease-associated mutations in α-syn, HTT, and the ataxias (33-35). Proximity ligation assays are also often used to verify that a given PPI occurs in a specific location in the cell, or when one of the components cannot be easily assessed using other biochemical approaches (e.g., coimmunoprecipitation). Protein arrays can also be used to probe PPIs and were used to identify interactors of oligomerized α-syn and Aβ (36,37). While these techniques are not currently widely employed, they offer some unique abilities to probe PPIs in the context of NDs.
In this review, we focus on six well-known disease-associated proteins that drive aggregation in AD, PD, HD, and ALS (Table S1). For each of the selected proteins, we first examined the interactomes (i.e., the ensemble of proteins forming PPIs) available in the BioGRID database, which included over 714,000 nonredundant human PPIs at the time of writing this review (38). Because a large number of PPIs are reported for a given protein, we initially emphasized PPIs that were reported in multiple independent studies or assessed using different methods, to gain confidence in their potential significance and to reduce unspecific hits (Table S1). For instance, there are 554 unique interactors reported for α-syn, but only 18 interactors were characterized in at least four independent observations. We also referenced additional PPIs that were either not yet annotated in BioGRID or placed low in the aforementioned ranking, whenever the PPI's association with ND was particularly relevant to a specific area examined in this review. For this analysis, we considered work done in either human or mice. Some proteins known to aggregate, such as the major prion protein (PrP) and poly-GA polypeptides translated from C9orf72 with expanded GGGGCC repeats, were omitted from this analysis due to the relative paucity of information in comparison to other aggregation-prone proteins. For each main ND, we describe PPIs with selected aggregation-prone proteins, with an emphasis on the elements and events triggering pathogenesis. Whenever possible, we compare both the physiological role of the WT protein and the features distinguishing it from the disease-associated variants. Moreover, we emphasize in each section the PPIs with the proteostasis network and mitochondria. To conclude, we reflect upon the challenges associated with our efforts and outline possible future directions aimed at better handling and integrating the ever-growing PPI databases in the context of NDs.
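As an illustration of this ranking, the sketch below counts distinct publications per interaction partner in a BioGRID tab-delimited export; the column names follow BioGRID's TAB 2.0 format and the file name is hypothetical, so both may need adjusting for a given release:

```python
import pandas as pd

def ranked_interactors(biogrid_tsv, gene="SNCA", min_pubs=4):
    """Rank interaction partners of `gene` by the number of distinct PubMed IDs."""
    df = pd.read_csv(biogrid_tsv, sep="\t", low_memory=False)
    a, b = "Official Symbol Interactor A", "Official Symbol Interactor B"
    hits = df[(df[a] == gene) | (df[b] == gene)].copy()
    # the partner is whichever side of the pair is not the queried gene
    hits["partner"] = hits[b].where(hits[a] == gene, hits[a])
    counts = hits.groupby("partner")["Pubmed ID"].nunique()
    return counts[counts >= min_pubs].sort_values(ascending=False)

# Usage (hypothetical file): ranked_interactors("BIOGRID-ALL.tab2.txt")
```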
Alzheimer's disease
AD is the leading cause of dementia and the most frequent ND characterized by amyloidogenic proteins in the brain (7). To date, there are no treatments that prevent or slow down the disease. The progression of the disease leads to neuronal loss over time. A potential cause is the accumulation of extracellular deposits of Aβ in plaques and of intraneuronal neurofibrillary tangles (NFTs) rich in hyperphosphorylated microtubule-associated protein tau. An initial spike in Aβ levels and a loss of its cellular catabolism are postulated to be the triggering events that lead to the formation of senile plaques and result in the development of the first clinical symptoms within years or decades (7). Recent findings suggest that, although misfolded tau and Aβ are deposited in different brain regions, both phenomena are synergistic in AD (39). However, the exact molecular pathways linking tau and Aβ to neurodegeneration are yet to be fully elucidated.
APP is a cell surface receptor and a type I transmembrane precursor protein proteolytically cleaved into a variety of shorter peptides by the transmembrane proteases α-, β-, and γ-secretases. The majority of peptides are nonamyloidogenic products of APP cleavage by α-secretase (ADAM10) and γ-secretase: sAPPα and p3 are released extracellularly, and the APP intracellular C-terminal domain (AICD) is released in the cytosol (Fig. 1). AICD then forms a transcriptionally active complex with the adapter protein Aβ precursor protein-binding family B member 1 (APBB1) and the histone acetyltransferase KAT5 (lysine (K) acetyltransferase 5, TIP60) to enhance transcriptional activation (24,40,41). Conversely, a small fraction of the precursor protein APP undergoes a different sequential enzymatic processing, generating short amyloidogenic peptides (Aβ), which tend to oligomerize and deposit, forming the base of the amyloid plaques observed in AD patients' brains. The first step in the formation of the Aβ peptide from APP is catalyzed by the β-secretase 1 (BACE1) or its paralog β-secretase 2 (BACE2) (Fig. 1). These cleave APP after residue 671, generating sAPPβ, a long amino-terminal soluble fragment that is released extracellularly, and a corresponding cell-associated carboxy-terminal fragment (β-CTF). β-CTF is further processed by the γ-secretase, which catalyzes its intramembrane cleavage and yields Aβ for secretion and AICD for intracellular release (42-44).
AD is categorized into two subclasses according to the age of onset of the disease: early onset forms of the disease are mostly caused by mutations in the APP or presenilin genes (PSEN1 and PSEN2), while late onset is linked to other genes such as APOE, discussed later, and other environmental factors (45). Increased production of the amyloidogenic form of APP is observed in familial AD-linked mutations of PSEN1 or PSEN2, which encode the catalytic subunit of the γ-secretase complex (46,47). Because PSEN1 and its homolog PSEN2 directly affect the protease activity of the γ-secretase required for the production of Aβ, they represent major candidates for the development of inhibitory or modulating drugs aimed at preventing AD progression (48). Other components of the γ-secretase complex affecting Aβ levels include presenilin enhancer 2 (PEN2) and anterior pharynx-defective 1 (APH1), both essential for the maturation of the presenilins, as well as nicastrin, necessary for binding β-CTF (49-51). Mutations in APP have been linked to the autosomal dominant form of AD, with most of its mutations clustered in regions encoding cleavage sites, which favor the generation of Aβ by faulty processing of APP by β- or γ-secretase (52,53). Other APP mutations within the Aβ sequence increase its self-aggregation tendency, favoring amyloid fibril formation (54,55).
According to the most favored model of Aβ fibrillization, once released, the Aβ monomers can undergo a conformational change shifting from a helical conformation to an abnormal β-sheet structure. In this conformation, monomers start assembling in small oligomers of 3 to 50 Aβ units, which have an increased cytotoxic potential compared to monomeric Aβ and larger insoluble fibrils (56). Some oligomers have seeding activity, which can trigger polymerization of more Aβ into protofibrils.
Figure 1 legend: At the cell membrane, APP is predominantly cleaved into soluble fragments by α-, β-, and γ-secretases. In the presence of disease-associated mutations, APP cleavage into Aβ fragments increases. Full-length APP also undergoes endocytosis, which influences its processing. Aβ monomers can aggregate to form small soluble oligomers, then amyloid fibrils, and ultimately insoluble amyloid plaques. The extensive processing of APP to Aβ in different subcellular localizations can impact the cell, for example through disruption of mitochondrial function. In contrast, interaction with chaperones can modulate oligomerization and aggregation, and several proteolytic systems can promote proteolysis of Aβ extracellularly or of related proteins in the cell. Aβ, amyloid-β.
Although protofibrils are still in equilibrium with smaller oligomers, they can rapidly evolve into thermodynamically stable fibrils, which can finally deposit in larger insoluble aggregates, forming plaques (57,58). The extracellular accumulation of Aβ in neuritic plaques, together with the binding of soluble oligomeric Aβ to different exposed cellular receptors, has been proposed as a cause of neuronal toxicity in AD. Such toxicity is proposed to derive from the disruption of neuronal homeostasis, the internalization of Aβ leading to cellular defects, and the induction of neurotoxic signals, which can trigger mitochondrial dysfunction or the endoplasmic reticulum (ER) stress response (58).
A key interplay between APP or Aβ and the plasma membrane proteins they interact with impacts AD (Fig. 1). Numerous studies aimed at characterizing the transport of Aβ through the membrane identified low-density lipoprotein receptor-related proteins 1, 1B, 2, and 8 (LRP1, 1B, 2, and 8) as major players in AD (59-62). The aforementioned LRPs interact with a wide range of ligands, including Aβ, the nontoxic precursor APP, and apolipoprotein E (also known as apoE) (63,64). By interacting with precursor APP, different LRPs play opposing roles in its endocytosis. Whereas LRP1B retains APP at the cell surface and reduces Aβ peptide production, LRP1 promotes fast endocytosis that results in an increase of APP processing into Aβ (65). Binding of lipid-carrying apoE to LRP8 was shown to recruit the adapter protein Aβ A4 precursor protein-binding family A member 1 (APBA1) and APP, thereby inducing the endocytosis of APP in neuroblastoma cells and leading to increased production of Aβ (Fig. 1) (60). ApoE also plays a central role in AD by directly binding to Aβ. Initially, apoE was shown to bind Aβ and to promote fibrillization of Aβ, particularly isoform 4, which is a major genetic risk factor for AD (66,67). However, searches for Aβ-binding proteins by affinity chromatography by Calero et al. led to the identification of serum amyloid P component (SAP) and apolipoprotein J (also known as clusterin) as the main plasma interactors of Aβ, while apoE was only marginally enriched (68). Subsequent work confirmed that only a small amount of apoE is bound to Aβ under physiological conditions, but that does not preclude the importance of this interaction (69). In fact, apoE has a role in Aβ clearance. Notably, the presence of LRP1 on brain microvascular endothelial cells plays a major role in mediating the clearance of Aβ from the brain tissue through the blood-brain barrier via transcytosis (70). In addition, apoE interaction mediates delivery of Aβ to LRP1 and drives its internalization (71). Nevertheless, other reports show that apoE competes with Aβ for the interaction with LRP1, resulting in the suppression of Aβ cellular uptake and clearance (69). One important element is that different cell types may have different abilities to take up Aβ. For instance, the triggering receptor expressed on myeloid cells 2 (TREM2) receptor on microglial cells, which binds to apoE and clusterin, has recently been found to be involved in Aβ uptake (72). Interestingly, clusterin is a small extracellular chaperone capable of inhibiting Aβ primary and secondary nucleation by suppressing the elongation step of Aβ aggregation in vitro (73,74). The interplay between plasma membrane proteins and APP or Aβ is very complex, and a major challenge is to properly evaluate the importance of each PPI. This is particularly convoluted because multiple genes associated with AD interact with the same protein (e.g., LRP1), the animal models do not perfectly mimic human pathology, and the perturbation of a gene can have different effects in different cell types. For instance, isoform 4 of apoE, which is a major genetic risk factor for AD in humans, is not present in mouse, and the disruption of an LRP can affect APP catabolism in neurons while impacting transcytosis in endothelial cells.
Interactions of APP or Aβ with the proteostasis network also play a major role in regulating the levels of the different components involved in Aβ production and its oligomerization. Alongside the binding of Aβ to plasma transport proteins, extracellular proteolysis of Aβ prevents its accumulation in the brain (Fig. 1). Aβ can be degraded by four major zinc metalloendopeptidases, including neprilysin (MME), endothelin-converting enzymes 1 and 2 (ECE1/2), and insulysin (IDE) (75-77), or by the serine protease plasminogen/plasmin, which can degrade both monomeric and fibrillar Aβ, thereby reducing its toxicity (78). Other proteases with Aβ-degrading activity are the matrix metalloproteases 2 and 9 and the ADAM30-activated cathepsin D (79,80). In addition to generating Aβ, regulation of APP levels by different cellular proteolytic machineries also plays a major role in AD. For example, the ubiquitin ligase ubiquitin-conjugation factor E4 B (UBE4B) can help mediate endocytosis of APP via its ubiquitination, thereby impacting Aβ levels (81). While UBE4B levels are not affected in AD, elements of the endosomal sorting complex required for transport (ESCRT) machinery that further direct endosome trafficking downstream of UBE4B are expressed at lower levels in AD patients. Importantly, downregulation of ESCRT components and UBE4B leads to higher levels of Aβ production in primary neurons. F-box/LRR-repeat protein 2 (FBXL2), which is one of the substrate adapters of the Cullin RING ligase 1 (CRL1) ubiquitin ligase, was also shown to interact with APP, and its overexpression in mice reduced levels of Aβ (82). C terminus of Hsc70-interacting protein (CHIP, also known as STUB1) is an E3 ligase that has been shown to target a variety of misfolded proteins for proteasomal degradation. Notably, CHIP binds to both APP and Aβ (83). In addition, CHIP also influences Aβ metabolism by targeting the β-secretase for degradation (84). Interestingly, knockdown of CHIP does not lead to higher levels of APP but reduces the ability of Hsc70 or Hsp70 to interact with APP (83). Earlier proteomic work had shown that several chaperones can bind to the cytosolic portion of APP and to Aβ, including Hsc70, Hsp90, and especially the small Hsp αB-crystallin, which was strongly enriched following affinity purification from brain lysates (85). Interestingly, αB-crystallin interacts with Aβ fibrils and aggregation-prone misfolded proteins using two different interfaces, indicating that this chaperone has developed distinctive functions (86). A recent study using AP-MS and proximity labeling with molecular chaperones led to the identification of multiple APP interactors, such as several components of the Hsp40 family (DnaJB12, DnaJC25, DnaJC30, and DnaJC5) and Hsp70s (HSPA13 and HSPA5) (30). Note that there are 11 human genes encoding Hsp70s, and whenever possible we also provide their gene names. This work provides a greater view of the extent of possible PPIs and, together with additional studies, portrays a more refined view of how APP is handled by the protein homeostasis network (Fig. 1). For instance, DnaJC30 and DnaJB6 were also identified as interactors of soluble Aβ oligomers in two different studies using human protein microarray technology (37,87). Notably, DnaJB6 can capture Aβ oligomers and prevent primary nucleation (88,89).
In contrast, DnaJB12, which was also identified as an interactor of APP-K670N/M671L, the so-called Swedish mutation linked to familial AD, acts as a cochaperone with Hsc70 (HSPA8), which is also known to interact with APP, to promote protein folding and trafficking and to prevent aggregation (25,90,91). Interaction with the proteostasis network within the cell may be key to alleviating Aβ cytotoxicity, especially in relation to mitochondria and perhaps to tau aggregation (see later).
Mitochondrial dysfunction is considered a hallmark of many NDs, and AD is no exception. Precursor APP is mainly localized to the plasma membrane and the ER via its ER-targeting sequence in amino acids 1 to 35; however, as it contains a cryptic mitochondrial targeting sequence between residues 36 and 67, it can also be recruited to the mitochondria (Fig. 1) (92).
There, it becomes stuck, with the cytosolic carboxy terminus associated with the mitochondrial translocase of the outer membrane 40 (TOM40) while the amino terminus is fully imported into the matrix (93,94). In this conformation, it interacts with other mitochondrial membrane proteins and obstructs the TOM channel, impairing mitochondrial protein import (95). An example of affected mitochondrial proteins is the leucine-rich PPR-motif-containing protein (LRPPRC), which is linked to multiple cellular functions including maturation and export of nuclear mitochondrial mRNA (25). Analysis of amyloid plaques from AD patients confirmed that LRPPRC is not found extracellularly, whereas proximity ligation assays in cells and cell fractionation indicated a distinctive mitochondrial colocalization for both APP and LRPPRC (25,96). Furthermore, LRPPRC has been found to interact intracellularly with early onset AD variants of APP and to induce mitochondrial dysfunction through the dysregulation of mitochondrial gene expression, specifically genes encoding subunits of complex I and complex IV (25,97,98). A similar but more direct effect on inner-membrane complexes is observed with internalized toxic Aβ, which can accumulate within the inner mitochondrial membrane and affect complex IV activity (99). Mitochondrial dysfunction is further confirmed in various cellular, transgenic, and human AD models, where mitochondrial localization of WT or mutant APP disrupts mitochondrial dynamics, reactive oxygen species (ROS) generation, and energy state, as well as causes a loss of membrane potential (95,100,101).
In addition to the major contribution of Aβ in AD, microtubule-associated protein tau is another key player that aggregates. Tau is an intrinsically disordered protein that regulates microtubule stability in neurons. In addition to AD, aggregation of tau is observed in numerous NDs collectively termed tauopathies. Notably, numerous tau autosomal dominant mutations that are located near the microtubule-binding domain and promote aggregation have been associated with frontotemporal dementia (102,103). While phosphorylation plays a role in regulating tau function, tau is found hyperphosphorylated in NFTs (Fig. 2). Notably, phosphorylation events can modulate both the PPIs and the aggregation propensity of tau in a complex manner. For instance, whereas 14-3-3ζ interacts with unphosphorylated tau and promotes its aggregation, tau phosphorylation on S134 increases 14-3-3ζ affinity but, in this case, reduces tau aggregation (104,105). Other phosphorylation events have been shown to promote aggregation, and several of these posttranslational modifications are only identified in the context of neurodegeneration (106). In particular, several kinases have been proposed to promote tau aggregation, such as glycogen synthase kinase-3 (GSK3), casein kinase 1/2 (CK1/2), leucine-rich repeat kinase 2 (LRRK2, PARK8), and Fyn, and many have been considered as candidates for potential AD therapeutics (107-110). For instance, inhibition of cyclin-dependent kinase 5 (CDK5), which interacts with tau, restores long-term potentiation in mice, indicating that it may mitigate some AD traits (111). However, an effective compound has yet to be identified, and other approaches, taking into account other PPIs, may need to be considered to successfully treat AD. Screens for tau-interacting proteins in transgenic mice expressing WT human tau led to the identification of new interactors such as the adapter protein of neuronal nitric oxide (NO) synthase CAPON (carboxyl-terminal PDZ ligand of neuronal NO synthase protein), which can induce tau aggregation and links tau pathology to the production of NO (112). Lowering CAPON levels could suppress tau pathology. Moreover, cell survival could be achieved by preventing tau deposition or by clearing hyperphosphorylated tau intracellularly. Components of the ubiquitin system, such as the E3 ligases CHIP and membrane-associated ring-CH-type finger 7 (MARCH7) and the E2 enzyme ubiquitin-conjugating enzyme E2 W (UBE2W), are in place to promote ubiquitination of tau and the removal of its oligomers by the proteasome (113-115). Hsp70/Hsc70s (HSPA8, HSPA4, HSPA1A) and Hsp90s (HSP90AA1) interact with amyloid structures to, in cooperation with DnaJ proteins or BAG family molecular chaperone regulator 1 (BAG1), modulate their assembly and disassembly or promote degradation of the client tau (116-121). Notably, Hsp70 prevents the formation of tau aggregates alongside the DnaJA2 and DnaJB1 cochaperones (121-123). Interestingly, recent work shows that, while DnaJA2, DnaJB1, and Hsc70 all bind to the same tau region, Hsc70 binds preferentially to monomeric tau, DnaJB1 binds to the oligomerized form, and DnaJA2 binds to both (124). Another recent proteomic study revealed that DnaJC7 is yet another cochaperone that interacts with tau (125). In this case, the cochaperone binds with higher affinity to the natively folded tau, and this interaction is thought to stabilize the native conformation to prevent aggregation.
In addition, the small HSP Hsp27 (HSPB1) can also delay formation of tau aggregates via a weak interaction with the small fibrils (126). These studies illustrate how different elements of the proteostasis network exert distinct but complementary roles to prevent protein aggregation in the cell.
The "relationship" between Aβ and tau in AD is still a matter of debate. Earlier work showed that Aβ can directly induce tau aggregation in vitro and that it may enhance tau phosphorylation (127,128). However, it is still unclear how Aβ promotes the formation of NFTs, especially since aggregate deposition occurs in different brain regions. There are two important elements: (1) the pattern of NFT deposition in AD has a clear correlation with the disease severity and is actually used to define the six different Braak stages to describe the disease progression (Braak stages are also used in PD, where it follows the spread of α-syn aggregation rather than NFT) and (2) there is now increasing evidence that Aβ formation precedes tau aggregation. For instance, specific phosphorylation events on tau coincide with Aβ formation up to 2 decades before other sites are phosphorylated and tau aggregation begins (129). In addition, Aβ plaque favor an environment for rapid spreading of tau aggregates (130). Recent work also suggests that microglial cells activated by Aβ may play an important role in promoting phosphorylation of tau, notably via the activation of the p38 mitogen-activated protein (MAP) kinase (131). In addition, isoform 4 of apoE, associated with increased risk of AD, also aggravates tau aggregation (132). We will likely continue to refine our understanding of the interplay between Aβ and NFTs in the upcoming years. This relationship is key to a better comprehension of AD etiology and strategies for therapeutics.
Parkinson's disease
PD is the second most common ND and is characterized by motor abnormalities and neuropsychiatric disturbances derived from the selective degeneration of dopaminergic neurons in the SN (8). Approximately 85% of PD cases are sporadic, whereas the remaining 15% represent autosomal dominant or recessive familial PD cases linked to genetic factors (133). At present, mutations in over 19 genes have been associated with PD development, including those encoding α-syn (SNCA/PARK1), the parkin (PRKN/PARK2) E3 ubiquitin ligase, PTEN-induced kinase 1 (PINK1, PARK6), the deglycase DJ-1 (PARK7), and LRRK2 (PARK8) (133,134). Among them, α-syn is the most extensively studied, followed by parkin. However, as none of these PD-associated genes has 100% penetrance, one of the prevalent models involves a synergistic action between multiple factors causing both familial and sporadic PD (133). In this review, we will mostly discuss PPIs with α-syn, as well as summarize parkin/PINK1 function.
The presence of Lewy bodies (LBs) in some neuronal tissues is a key feature of PD. LBs are protein-rich inclusions that contain α-syn among potentially more than 100 different proteins (135). Despite the high concentration of α-syn in neurons, these toxic inclusion bodies do not occur in healthy individuals. Several SNCA mutations and gene duplication events are linked to rare cases of familial PD and promote α-syn fibril formation. In addition to PD, dementia with LBs and multiple system atrophy are the main synucleinopathies characterized by the presence of α-syn inclusions. Identifying and understanding the cellular mechanisms in place to reduce accumulation and propagation of misfolded α-syn in the brain will likely play a major role in the design of therapeutics to prevent synucleinopathies and slow down the progression of PD. Recent structural analyses provided new insights into α-syn fibril structures (136,137). Interestingly, PD-linked point mutations in α-syn are all predicted to impact the fibril structure. Notably, α-syn A53T fibrils have a smaller interface than fibrils generated with WT α-syn, which may explain why these fibrils are less stable, thereby promoting seeding propagation (138). Similarly, E46K causes a distinct conformation that is less stable under some experimental conditions (139,140). One key element will be to determine whether some of these polymorphs may cause a gain of cytotoxicity by forming new protein interactions.
While the exact physiological function of α-syn is still debated, there is increasing evidence that it plays a role in regulating synaptic vesicle homeostasis owing to its ability to bind to multiple synaptic proteins and the cell membrane (Fig. 3A). α-Syn is a small intrinsically disordered protein of 140 amino acids that is localized in presynaptic terminals. In its monomeric form, it regulates synaptic vesicle trafficking and subsequent neurotransmitter release by increasing local calcium release from microdomains to enhance ATP-induced exocytosis (141). In its multimeric membrane-bound state, it supports the assembly of the SNARE synaptic fusion components at the presynaptic plasma membrane, a function that becomes crucial during aging (142). α-Syn interacts with several cytoskeleton components. Both α/β tubulins have been found to bind α-syn in AP-MS experiments with human brain samples from PD patients and to colocalize with α-syn-positive pathological structures (143). α-Syn also binds tau, and both proteins have a synergistic effect toward their polymerization into fibrils (144,145). In a recent study, in which PPIs of a subset of LB-localizing proteins were assessed using a combination of nanobead luminescence and two-color coincidence detection assays, tau was shown to bind more strongly to preformed α-syn fibrils than to monomeric and oligomeric α-syn (146). In addition to its association with the microtubular network, α-syn has also been found to associate with dopamine transporter 1 (DAT1) and may thereby regulate dopamine neurotransmission by modulating DAT1 levels at the cell surface through tethering of the transporter to microtubules (147-149). The interaction of DAT1 with the familial PD-linked α-syn A30P mutant is enhanced compared to the WT, thereby decreasing DAT1 surface expression and activity (150). In addition to mutations, posttranslational modifications can also impact PPIs. Notably, α-syn phosphorylated on S129 is found enriched in LBs, where it accumulates upon progression of the disease (151,152). While the impact of S129 phosphorylation on PD is not yet fully elucidated, it can clearly induce multiple effects (153). In a comparative proteomic study using different α-syn peptides, McFarland et al. showed that different subsets of interactors are affected depending on whether α-syn is phosphorylated (154). Nonphosphorylated α-syn coprecipitates with protein complexes enriched for mitochondrial proteins (many in complex I), whereas phosphorylation on either S129 or Y125 leads α-syn to interact with more cytoskeletal and vesicular trafficking proteins. In addition, oligomerization of α-syn can also lead to novel nonphysiological PPIs. For instance, Tanudjojo et al. recently showed that upon formation of fibrils, there is a gain of interaction with DJ-1 (155). Notably, KO of DJ-1 also increases the aggregate-induced cytotoxicity, which illustrates how a gain of interaction may aggravate the outcome at the cellular level.
The interaction between α-syn and the proteostasis network plays a major role both in promoting the normal function of α-syn and in preventing toxic gain-of-function interactions. Interactions with Hsc70/Hsp70, Hsp90, and the small HSP αB-crystallin have been shown to reduce oligomerization of α-syn in vitro (Fig. 3B) (156-159). In addition to preventing oligomerization, other chaperones can also display disaggregase activity and disassemble α-syn fibrils. This is the case for DnaJB1, Hsp70, and the Apg2 nucleotide exchange factor, which convert cytotoxic fibrils to nontoxic monomeric α-syn in an ATP-dependent manner (160). Another molecular cochaperone, DnaJB6, has been shown to suppress the aggregation of seeded α-syn in cells and in animal models of PD (146, 161-163). Different chaperone proteins can bind to different populations of misfolded α-syn. Notably, while the small HSP αB-crystallin preferentially binds to seeding oligomers, DnaJB6 interacts instead with assembled fibrils (146). Strikingly, recent work using in-cell nuclear magnetic resonance indicates that a majority of monomeric α-syn could be bound to chaperone proteins inside the cell (164). Interestingly, the binding of α-syn to chaperones and to cell membranes is mutually exclusive, as it is mediated via the same N-terminal region of α-syn. Protein mass spectrometry confirms that a large portion of the PPIs with that N-terminal region of α-syn involve chaperone proteins. Therefore, inhibition of chaperone interactions increases localization of α-syn at cellular membranes, including mitochondria, and the aggregation of α-syn. The equilibrium between free-state, membrane-bound, and chaperone-bound α-syn is likely to play a major role in PD etiology (Fig. 3B). Notably, phosphorylation of α-syn Y39, a posttranslational modification that increases with the severity of PD and is found in LBs, perturbs both the interactions with chaperones and with cell membranes and could therefore greatly accelerate α-syn aggregation (164-166).
As SNCA gene duplication is linked to PD, regulation of α-syn levels in the cell likely plays a major role in the etiology. A subset of α-syn is ubiquitinated in LBs, indicating that ubiquitination may play an important role in regulating α-syn levels (167). Several ubiquitin ligases have been found to mediate α-syn ubiquitination (Fig. 3A). The parkin E3 ligase was first shown to coimmunoprecipitate with and ubiquitinate glycosylated α-syn (168). Parkin was also subsequently shown to ubiquitinate synphilin-1 (SNCAIP), a protein that interacts with α-syn and is enriched in LBs (169). However, subsequent work on parkin has focused more on its role with PINK1 in regulating mitophagy, a specialized autophagy process in which defective mitochondria are targeted to lysosomes (see later) (170). Instead, more attention was given to other E3 ubiquitin ligases that may potentially conjugate α-syn. The Seven In Absentia Homolog (SIAH) 1 and 2 E3 ubiquitin ligases were shown to ubiquitinate α-syn, which also promotes its aggregation (171). In contrast, the deubiquitinating enzyme ubiquitin-specific peptidase 9 X-linked (USP9X) interacts with ubiquitinated α-syn, and downregulation of USP9X accelerates turnover of α-syn in a proteasome-dependent manner in tissue culture cells, indicating that ubiquitin, in this case, promotes turnover, and not aggregation, of α-syn (172). Similarly, ubiquitin-specific peptidase 8 (USP8) is another deubiquitinating enzyme that interacts with and reduces the turnover of α-syn; USP8 is enriched in LBs, and less ubiquitinated α-syn is found in the SN (173). Downregulation of USP8 promotes the accumulation of K63 polyubiquitin chains on α-syn and its lysosomal degradation (K63 polyubiquitin chains are typically not associated with proteasomal degradation). The neuronal precursor cell-expressed developmentally downregulated 4 (NEDD4) E3 ligase, which binds to the C-terminal region of α-syn, promotes its targeting to the lysosomal pathway, a pathway that was found "druggable" as it mitigates α-syn cytotoxicity (174,175). In addition to the ubiquitination of monomeric α-syn, other E3s have been shown to target or affect α-syn fibrils. The CHIP E3 ligase, which also targets tau and accumulates in LBs, can reduce the accumulation of oligomerized α-syn in tissue culture cells (176). CHIP is recruited to preformed fibrils internalized in tissue culture cells. Crosslinking coupled to mass spectrometry experiments shows that the interaction with the fibrils is at least partly mediated by the CHIP U-box domain, which recruits the E2 ubiquitin-conjugating enzyme (177). The potential role of this E3 enzyme in regulating α-syn is convoluted, as CHIP also binds Hsp70 via its tetratricopeptide repeat domain. For instance, while CHIP can ubiquitinate α-syn in vitro and in transfected cells, this activity is inhibited by BAG5, which is recruited via Hsp70 (178). The impact of CHIP on α-syn regulation will need to be carefully re-evaluated, as many initial experiments relied on transient transfection and overexpression in tissue culture cells. More recently, the CRL1 E3 ligase with F-box/LRR repeat protein 5 (FBXL5) was shown to interact with α-syn fibrils taken up by cells (179). Different components of CRL1 colocalize with internalized α-syn fibrils that form inclusions, especially after rupture of the lysosome.
Cullin-1, the main scaffold component of CRL1, is required to mediate α-syn ubiquitination, and downregulation of different components of CRL1, including the substrate adapter FBXL5, leads to an increased accumulation of internalized α-syn in tissue culture cells. Remarkably, downregulation of FBXL5 or Skp1 (another component of CRL1) in mice allows more spreading of LB-like pathology upon administration of exogenous α-syn fibrils in the ipsilateral brain region, where expression of the E3 components was targeted by a lentivirus. Collectively, past and emerging work indicates that multiple E3 ubiquitin ligases can regulate cellular α-syn levels, whether in its monomeric form or assembled into fibrils.
Several additional pathways have been implicated in the clearance of α-syn in a ubiquitin-independent manner. Chaperone-mediated autophagy (CMA) is a cellular mechanism ensuring normal cellular function by degrading misfolded proteins via the lysosomes (180). It is of particular relevance in cells that have limited or no division capacity, such as neurons, which are therefore incapable of diluting accumulated, damaged, and toxic intracellular components. Both α-syn and LRRK2, the latter encoded by another gene linked to familial forms of PD, contain the CMA-specific recognition motif (KFERQ), which is recognized by Hsc70 (HSPA8) and targets the substrate to lysosomes via the lysosome-associated membrane protein 2 (LAMP2A) membrane receptor prior to degradation by intralysosomal proteases (Fig. 3B) (181-183). Importantly, while the A30P and A53T mutations of SNCA lead to increased binding to LAMP2A, these α-syn mutants are not properly cleared by CMA (181). Similarly, the LRRK2 R1441G PD-linked mutation induced accumulation of α-syn oligomers in aged mutant brains by impairing CMA protein clearance (183,184). These inhibitory effects on CMA could cause toxicity in PD by hindering the degradation of α-syn. In addition to cellular clearance, it is also becoming apparent that cells can expel protein aggregates via exosomes or other specialized mechanisms (185). Recent work has highlighted the contribution of misfolding-associated protein secretion, which relies on the DnaJC5 cochaperone, to the secretion of several ND-associated proteins, such as tau and α-syn (186). Interestingly, using a dual bioluminescence-based two-hybrid technology (termed LuTHy), Trepte et al. report that DnaJC5 binds to α-syn (187). Both α-syn and DnaJC5 localize to neuronal synaptic vesicles, where they support the folding or assembly of SNAREs and promote synaptic function (188,189). This interaction may also suggest that, together with Hsc70, the role of DnaJC5 could extend to triaging misfolded proteins associated with NDs. Secretion of α-syn could both protect cells from cytotoxicity and mediate spreading of the pathology in a prion-like manner.
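To make the KFERQ motif mentioned above concrete, the sketch below implements a deliberately simplified motif checker in Python. The residue-composition rules are a rough paraphrase of published KFERQ-like definitions, and the VKKDQ example reflects the motif commonly cited for α-syn; treat both as illustrative assumptions rather than a validated CMA-substrate predictor.

```python
def is_kferq_like(penta: str) -> bool:
    """Simplified check for a KFERQ-like CMA-targeting pentapeptide.

    Rough rules: a Q at either end, with the remaining four residues
    comprising 1-2 basic (K/R), 1-2 hydrophobic (F/I/L/V), and exactly
    1 acidic (D/E) residue. Real motif definitions are broader (e.g.,
    phosphorylation-generated motifs); this is an illustration only.
    """
    penta = penta.upper()
    if len(penta) != 5 or (penta[0] != "Q" and penta[-1] != "Q"):
        return False
    core = penta[1:] if penta[0] == "Q" else penta[:-1]
    basic = sum(aa in "KR" for aa in core)
    hydrophobic = sum(aa in "FILV" for aa in core)
    acidic = sum(aa in "DE" for aa in core)
    return (1 <= basic <= 2 and 1 <= hydrophobic <= 2
            and acidic == 1 and basic + hydrophobic + acidic == 4)

def scan_kferq(seq: str):
    """Return 1-based positions of candidate motifs in a protein sequence."""
    return [(i + 1, seq[i:i + 5]) for i in range(len(seq) - 4)
            if is_kferq_like(seq[i:i + 5])]

print(is_kferq_like("KFERQ"))  # True: the canonical motif
print(is_kferq_like("VKKDQ"))  # True: the motif reported for alpha-synuclein
```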
Mitochondrial dysfunction has emerged as a common thread in PD pathophysiology, as also highlighted in a recent review (190). We will therefore only emphasize a few key points, first summarizing how other PD-associated genes impact mitochondrial homeostasis before highlighting contributions from α-syn (Fig. 3C). Earlier work characterizing brain tissues from PD patients noted the presence of mitochondrial defects, oxidative stress, and a predominant decrease in complex I activity in the SN (191). Similar PD-like phenotypes can be achieved upon exposure of mouse brains to mitochondrial drugs, which either directly inhibit complex I activity (e.g., the pesticide rotenone) or hamper mitochondrial import (e.g., CCCP or carbonyl cyanide m-chlorophenyl hydrazone, an uncoupler of oxidative phosphorylation) (192). Moreover, several nuclear genes encoding familial PD-linked mutant proteins regulate mitochondrial homeostasis, such as CHCHD2, DJ-1, parkin, and PINK1. Notably, the PINK1 kinase and the parkin ubiquitin ligase play a major role in the regulation of mitophagy, in which defective mitochondria are targeted for lysosomal degradation via macroautophagy. While PINK1 is normally imported and degraded in mitochondria by the presenilins-associated rhomboid-like protein (PARL), low membrane potential impairs its import, thereby allowing the kinase to phosphorylate ubiquitin and parkin, which is in turn activated (193-196). Ubiquitination of a first subset of mitochondrial proteins, including mitofusins, leads to their proteasomal degradation and promotes fission (197,198). Conjugation of other mitochondrial proteins (e.g., VDAC1; voltage-dependent anion-selective channel 1) then mediates the recruitment of autophagy receptors, such as optineurin, that guide the formation of the autophagosome membrane to eliminate the damaged mitochondria (199,200).
Mitochondrial homeostasis is also impacted by α-syn (Fig. 3C). Earlier work demonstrated that overexpression of α-syn in neuronal cells caused a change in mitochondrial morphology, leading to shorter, wider, and more fragmented mitochondria, accompanied by increased levels of free radicals (201,202). Similar effects on mitochondria were also observed in mouse models (203). A cryptic mitochondrial-targeting signal in the first 32 amino acids of α-syn confers mitochondrial localization in human dopaminergic neurons (204). This mitochondrial localization was enhanced for the A53T mutant in comparison to WT α-syn. Similarly, there was more mitochondrial α-syn in the SN of patients with PD than in controls. Subsequent work confirmed that α-syn can directly associate with the mitochondrial membrane in neuronal cells to drive mitochondrial fission (202,205). PD-associated α-syn mutants also reduce mitochondria-ER contact sites (also known as MAMs or mitochondria-associated ER membranes) owing to the PPI of α-syn with vesicle-associated membrane protein (VAMP)-associated protein B (VAPB), which is enhanced by the A30P and A53T mutations (206,207). In mitochondria from PD brains, as well as in cultured cells, WT and A53T α-syn interact with complex I subunits and increase the production of ROS (204). In addition, α-syn interacts with the mitochondrial matrix peptidase caseinolytic protease P (ClpP), which is directly involved in complex I maintenance and mitochondrial energy metabolism (208). α-Syn suppresses ClpP protease activity, thus triggering mitochondrial oxidative damage and neurotoxicity. Aggregating α-syn drives additional PPIs that impact mitochondria. Oligomeric, but not monomeric or fibrillar, α-syn interacts with TOM20 and impairs mitochondrial import (209). In addition, α-syn oligomers are located in close proximity to the ATP synthase and cause its oxidation, which is then linked to the observed increase in transition pore opening that triggers neuronal cell death (210).
The convergence of parkin, PINK1, and α-syn on mitochondrial dynamics, as well as the interconnectivity of all these genes related to familial PD, reveals an intricate network in which it is hard to disentangle the cause and effect of the disease. An apparent common involvement of these PARK genes in the mitochondrial stress response provides a potential physiological basis for the prevalence of α-syn pathology in PD.
Huntington's disease
HD is an autosomal dominant inherited neurodegenerative condition that presents with progressive motor, cognitive, and psychiatric impairments due to neuronal dysfunction and extensive cell loss, mainly in the cerebral cortex and the striatum (211). After the onset of symptoms, the average survival time is about 15 years, and there is currently no effective treatment for HD. A pathological expansion of the trinucleotide CAG in the first exon of the HTT gene is the main cause of the disease, encoding a mutant HTT (mutHTT) with expanded polyQ stretches. In healthy individuals, the CAG sequence is repeated 9 to 35 times, whereas it exceeds 35 repeats in the HD population (212). HTT is ubiquitously expressed and localizes to the cytosol and nucleus, where it is essential for development and is involved in diverse functions, including vesicle transport along axons and transcription (211). The etiology of HD is determined by the combined result of the emerging toxic properties of the mutHTT protein, and potentially its mRNA, and the associated loss of normal HTT functions, with an inverse relationship between expanded repeat length and age of disease onset that can be modulated by additional gene modifiers (213). PolyQ expansions in mutHTT have been shown to drive the formation of pathological amyloid fibrils that are able to perturb the intracellular proteostasis network or deplete factors crucial for basic neuronal cell functionality (214,215). Although mutHTT is typically thought to accumulate and exert its toxicity from within the cell, there is also recent evidence for an extracellular localization and transfer to neighboring cells or other tissues via the blood stream, supporting the idea that mutHTT can propagate in a prion-like fashion (216,217). This may contribute to the HD pathology or its worsening, as blood can serve as a vehicle of propagation for mutHTT outside of the brain, enabling peripheral pathological conditions that were previously observed, such as changes in protein metabolism or mitochondrial function in skeletal muscle, hepatocytes, or immune system cells (218-220).
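As a purely illustrative aside, the repeat-length threshold described above can be expressed in a few lines of Python. This sketch only counts the longest uninterrupted CAG run and applies the 35-repeat cutoff from the text; real allele calling must also handle CAA interruptions, intermediate alleles, and sequencing artifacts, all of which are deliberately ignored here.

```python
import re

def longest_cag_run(dna: str) -> int:
    """Length, in repeat units, of the longest uninterrupted CAG run."""
    runs = re.findall(r"(?:CAG)+", dna.upper())
    return max((len(r) // 3 for r in runs), default=0)

def classify_htt_allele(n_repeats: int) -> str:
    # Thresholds taken from the text: 9-35 repeats in healthy individuals,
    # >35 in the HD population (reduced-penetrance nuances omitted).
    return "HD-range (>35 CAG)" if n_repeats > 35 else "normal range (<=35 CAG)"

seq = "CAG" * 42  # toy exon 1 fragment
n = longest_cag_run(seq)
print(n, classify_htt_allele(n))  # 42 HD-range (>35 CAG)
```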
HTT is a large 3146 amino acid protein (when containing a 23-glutamine polyQ, or 23Q) that is ubiquitously expressed and is hypothesized to act as a scaffold for binding multiple protein assemblies (221). Recent work shows that HTT mainly consists of α-helices arranged in three domains that are stabilized by the interacting protein HTT-associated protein 40 (HAP40): the N- and C-terminal domains consisting of multiple HEAT (HTT, Elongation factor 3, phosphatase 2A, and kinase TOR) repeats that are linked by the bridge domain, which contains tandem α-helical repeats (222). However, large segments of the N-terminal region, such as the polyQ, remain poorly defined structurally, as they are highly dynamic. Preceding the variable polyQ stretch, whose function in normal HTT remains poorly understood, the HTT amino terminus consists of 17 residues arranged in an amphipathic α-helix, which has a nuclear export signal. This region may interact with the cell membrane and is prone to posttranslational modifications (223). Following the polyQ stretch, a proline-rich domain drives the binding of different WW or SH3 domain-containing proteins involved in nucleic acid binding and processing or cellular dynamics (25, 224-226), in addition to maintaining the solubility of mutHTT in vitro and in vivo (227,228). Interaction with cellular proteases, such as caspases and calpains, promotes cleavage of both WT HTT and mutHTT amino termini, with enhanced proteolysis being specifically observed in HD patient brain extracts (229). This processing has been linked to the generation and accumulation of small HTT fragments carrying the polyQ portion, which can translocate into the nucleus, associate with membranes, cause toxicity, and accelerate HTT aggregation (229-232). Aberrant splicing can also lead to the translation of the exon 1 isoform with the pathogenic polyQ segment (233). Following seeding, pathological elongated polyQ stretches form ordered and β-sheet-rich amyloid fibrils (234,235). The β-hairpin-containing β-sheets are connected through hydrogen bond interactions that provide a critical monomeric core fundamental to triggering the aggregation and the formation of amyloid fibrils (236). Due to the high affinity of the β-sheets, higher molecular weight aggregates can be formed by recruiting not only the pathogenic polyQ protein but also other endogenous nonpathogenic proteins, thereby disrupting other cellular networks and their functions (221).
One hypothesis is that HTT acts as a hub to tether multiple partners to promote several major processes in the cell, such as vesicle trafficking and transcription (Fig. 4, A and B). A range of studies have shown that several PPIs can be perturbed by extended polyQ in mutHTT, and several high-throughput experiments have vastly expanded the list of candidate interactors using Y2H screening and AP-MS (225, 237-239). For instance, Shirasaki et al. employed AP-MS to define a spatiotemporally resolved HTT interactome in transgenic mice expressing mutHTT (97Q) and identified interactions that are more specific to some brain regions (cortex, striatum, and cerebellum) and specific ages (2 and 12 months) (238). HTT forms a network of PPIs with motor and motor-associated proteins such as HAP1 and HAP40, HTT-interacting protein 1 (HIP1), optineurin, and zinc finger DHHC-type palmitoyltransferase 17 (ZDHHC17; HIP14), consistent with HTT being an essential integrator of intracellular vesicular trafficking and transport in the cell (224,225, 238-240) (Fig. 4A). HAP1 serves as an adapter protein recruiting and stabilizing the anterograde microtubule motor kinesin and the retrograde motor dynactin-dynein complex (225, 241-243). Dynein, which requires dynactin for its activation, constitutively interacts with HTT (25,225), whereas the interaction with kinesin to recruit vesicles occurs only when HTT is phosphorylated on serine 421 (244,245). Therefore, dynein-dynactin-mediated retrograde motility can occur when HTT is not phosphorylated (246). Interactions with other proteins also determine the association with the actin cytoskeleton. High cytoplasmic levels of HAP40 promote the formation of a complex with HTT and optineurin on Rab5-positive endocytic vesicles (247). Through optineurin, the actin-based motor myosin VI associates the cargo with the actin cytoskeleton for its transport (224,248). Interestingly, a complex of HTT, optineurin, and Rab8 has been shown to regulate exocytic membrane trafficking from the Golgi (248,249). MutHTT leads to mislocalization of optineurin and irregular cargo transport, leading to impaired palmitoylation of ZDHHC17 substrates, with implications in HD pathogenesis (250). The HTT interaction with the palmitoyl transferase is also reduced in the presence of an extended polyQ (251). Through the interaction with HIP1 and adapter-related protein complex 2 subunit beta 1 (AP2B1), which also binds HTT at the amino terminus, HTT may also play a role in clathrin-mediated endocytosis and trafficking, although mutHTT does not seem to directly inhibit clathrin-dependent endocytosis (25,225,252,253). Nonetheless, the sequestration of the Hsc70 chaperone in mutHTT aggregates affects clathrin-mediated endocytosis (254). This phenomenon was also observed when other proteins aggregate, such as mutated SOD1 and ataxin-1. Numerous studies have also shown that HTT binds to transcription factors such as specificity protein 1 (SP1) and other components regulating transcription, such as CREB-binding protein (CBP) (Fig. 4B) (255-257). The presence of an extended polyQ increases SP1 binding, and CBP is recruited to HTT inclusions, which can then lead to transcriptional dysregulation (255-257). Several groups have also integrated multiple omics approaches to identify interactors that can act as gene modifiers.
For instance, knockdowns of the vacuolar protein sorting 35 (Vps35) and the brain-specific angiogenesis inhibitor 1-associated protein 2 (BAIAP2) Rho GTPase, which both interact with HTT, reduce HTT toxicity in a Drosophila model and a tissue culture cell-based model, respectively (238,258). The Drosophila model also indicates that fly HTT genetically interacts with autophagy genes, and work in mammalian cells shows that HTT is required for selective autophagy, notably by acting as a scaffold for several autophagy-regulating proteins, such as p62 (SQSTM1) (259).
Multiple elements of the chaperone system interact with HTT, and several components can reduce mutHTT aggregation or regulate HTT clearance (Fig. 4C). The main components of the human Hsp70-based disaggregase system have been identified in diverse interactome studies of WT and mutant HTT. Members of the Hsp70/Hsc70 family have been shown to interact with mutHTT, including the proteins encoded by HSPA8 and HSPA1A, alongside Hsp40 cochaperones from family A (DNAJA1 and DNAJA2) and family B (DNAJB1 and DNAJB6), as well as small Hsps (HSP27) (25,260,261). Notably, soluble mutHTT-53Q and mutHTT-103Q oligomers colocalize and interact with Hsp70 (HSPA8, HSPA1A/B) and DnaJB1, both in vitro and in rat neuronal model cell lines (262). Hsc70 (HSPA8), DnaJB1, and the nucleotide exchange factor Apg2 cooperatively suppress mutHTT fibrillization in vitro (263). Moreover, their depletion promotes mutHTT aggregation, while overexpression of DnaJB1 alone was shown to be sufficient to strongly reduce aggregation. A previous screen for HD modifiers also identified DnaJB6 and DnaJB8 as suppressors of mutHTT aggregation (264). These two cochaperones were shown to directly bind to the polyQ stretches of mutHTT-Q119 through a serine/threonine (S/T)-rich region in their carboxy terminus, thereby preventing aggregation independently of the Hsp70 machinery (264,265). Notably, only the cytosolic isoform B, and not the nuclear isoform A, of DNAJB6 is able to suppress aggregation of polyQ-containing and other aggregation-prone proteins (266), making it a major focus for future studies on polyQ aggregation. Indeed, DnaJB6 levels are higher in undifferentiated neuronal cells, which could explain why stem cells may be protected from HTT aggregation (267). Hsp90 (HSP90AA1 and HSP90AB1) also interacts with HTT (260,268,269). Interestingly, in one study in which new mouse HD models were generated by expressing HTT fragments that mimic protease cleavages, Hsp90 showed greater interaction with a longer HTT fragment mimicking caspase 6 cleavage than with a shorter and more toxic fragment that can be generated by caspase 2 (268). While overexpression of Hsp90 reduces cytotoxicity, downregulation of Hsp90 leads to more cell death alongside lower levels of the exogenous HTT fragments, indicating that the Hsp90 interaction may regulate mutHTT turnover. In a separate study, Hsp90 was found to specifically interact with the N-terminal amphipathic α-helix of HTT (269). Because Hsp90 also binds to the USP19 deubiquitinating enzyme, it may play a major role in regulating HTT cellular levels by modulating its ubiquitination. Yet another chaperone, the chaperonin T-complex protein ring complex (TRiC, also known as CCT for chaperonin-containing TCP-1), interacts with the N-terminal amphipathic α-helix of HTT, which underscores the importance of this domain in HTT aggregation (270). Previously, TRiC had been shown to promote assembly of nontoxic HTT oligomers in vitro and in tissue culture cells (271,272). In a recent study, levels of TRiC components were shown to be reduced in mouse neural stem and progenitor cells, which could explain why disease onset occurs in older individuals (273).
Clearance of HTT can be mediated by several components of the ubiquitin proteasome system as well as by the lysosome (Fig. 4C). The N-terminal region of mutHTT interacts with the E6-associated protein (E6-AP or UBE3A) ubiquitin ligase (274,275). Levels of this E3 ligase are reduced in older mice, which coincides with the accumulation of HTT associated with K63-linked polyubiquitin in an HD mouse model and could result from a failure to adequately target mutHTT for proteasomal degradation (275). Accordingly, exogenous expression of E6-AP in tissue culture cells and an HD mouse model reduces accumulation of HTT inclusions (274,275). Interestingly, mutHTT does not aggregate in induced pluripotent stem cells derived from HD patients (276,277). Koyuncu et al. showed that the Ubr5 E3 ligase, which binds to mutHTT alongside E6-AP, is expressed at higher levels in induced pluripotent stem cells and targets mutHTT for proteasome degradation (278). Recent work also showed that single nucleotide polymorphisms near UBR5 act as a modifier of the HD age of onset, which suggests that Ubr5 may play a major role in regulating mutHTT levels in neuronal cells (213). Ubiquilin 1 and 2 (UBQLN1/2), which functionally link the ubiquitination machinery to the proteasome, have been shown to bind to polyubiquitin chains on mutHTT (279,280). Several ER-associated ubiquitin ligases have also been implicated in targeting mutHTT for degradation. For instance, the homocysteine-responsive ER-resident protein (Herp, HERPUD1) E3 binds to the N terminus of HTT with an extended polyQ (Q160) and can mediate proteasomal degradation of mutHTT in neuronal tissue culture cells (281). Interestingly, the expression of HERPUD1 is induced upon ectopic expression of mutHTT, and the E3 ligase localizes to the periphery of HTT inclusions, where it could mediate mutHTT conjugation. Human HMG-CoA reductase degradation 1 (Hrd1) is another ER-associated E3 that interacts with an N-terminal fragment of mutHTT and mediates its degradation in tissue culture cells (282). Interestingly, the turnover of the ectopic HTT fragment also requires the VCP/p97 disaggregase. The ATPase VCP can bind to the first 17 residues of HTT and is capable of disassembling mutHTT toxic aggregates via its disaggregase function both in vitro and in HD cellular models (283). In addition to its disaggregase function, VCP also recruits the linear ubiquitin chain assembly complex (LUBAC), an E3 ligase that mediates conjugation of linear ubiquitin chains (i.e., linked via the methionine 1 of ubiquitin instead of a lysine) on aggregated mutHTT in tissue culture cells (284). Notably, an increase in linear ubiquitin reduces the recruitment of SP1 to mutHTT inclusions. Markedly, in addition to HTT, VCP interacts with ALS-related SOD1 and AD-related APP, suggesting that human VCP might act as a general disaggregase for several protein aggregates linked to neurodegeneration (25, 285-289). Several additional posttranslational modifications may regulate HTT clearance. For instance, conjugation of the SUMO ubiquitin-like protein on lysine residues located in the N terminus of HTT, which have also been found ubiquitinated, reduces turnover and promotes aggregation of mutHTT (290). In contrast, phosphorylation by TANK-binding kinase 1 (TBK1) suppresses mutHTT aggregation and toxicity (291). Similarly, inhibitor of κB kinase (IKK) activation and the resulting HTT phosphorylation promote degradation by the proteasome and lysosome (261).
Indeed, CMA has also been shown to regulate mutHTT clearance with the participation of Hsc70 (292). Interestingly, the activation of IKK is induced upon binding of mutHTT, leading to a hyperactivation of the transcription factor NF-κB with repercussions for mitochondrial homeostasis, similar to what was observed in PD (293,294).
Mitochondrial dysfunction also plays a critical role in HD pathogenesis (Fig. 4D). Both full-length mutHTT and its cleavage-associated fragments directly associate with mitochondria, resulting in impaired protein import, membrane permeabilization, impaired energy metabolism, abnormal mitochondrial trafficking, and impaired mitochondrial dynamics (295-298). As observed in human HD-affected brains, trafficking motors and mitochondrial components become sequestered in HTT inclusions (299). MutHTT interacts with mitochondrial Drp1, a mediator of mitochondrial fission, and abnormally enhances its activity (297). This results in impaired mitochondrial dynamics, causing defects in anterograde mitochondrial axonal transport and synaptic deficiencies. In addition, mutHTT also recruits VCP to mitochondria, which induces excessive mitophagy (300,301). Remarkably, inhibition of the HTT-VCP PPI reduces mitochondrial impairment and cell death in HD cell culture and mouse models. Therefore, the interaction between mutHTT and VCP may have different consequences depending on the localization of both proteins. Furthermore, mutHTT affects both the organization and composition of mitochondria by interacting with additional proteins in the intermembrane space. Yablonska et al. showed that both WT HTT and mutHTT-Q111 localize to the mitochondrial intermembrane space, where they interact with the translocase of mitochondrial inner membrane 23 (TIM23) (302). In particular, the increased interaction between TIM23 and mutHTT results in reduced TIM23-dependent import of mitochondrial matrix proteins, drastically altering the mitochondrial proteome. Another interesting mitochondrial partner of mutHTT is mitofilin, the main component of the mitochondrial contact site and cristae organizing system (MICOS), which is involved in forming contact sites with the outer membrane to maintain mitochondrial cristae morphology (25,303). Notably, compromised membrane architecture has been observed both in the striatum and the cortex of HD mice, which has been associated with impaired oligomerization of optic atrophy 1 (OPA1), which interacts with mitofilin (304). A Y2H study focused on disease-associated HTT mutations revealed mutHTT interactions with other mitochondrial proteins with possible additional relevance for mitochondrial impairment in HD pathology (239). For instance, mutHTT interacts with mitochondrial intermediate peptidase (MIPEP) and NME4, a nucleoside diphosphate kinase, both of which are key to mitochondrial function. However, additional studies would be required to evaluate this possibility. Although the mechanisms by which mutHTT mediates mitochondrial pathology in HD are not fully elucidated, numerous studies now provide insight into the consequences of gained mitochondrial PPIs with mutHTT.
The great diversity of novel or altered PPIs with mutHTT, together with the variety of their cellular localizations, suggests that multiple pathways are simultaneously affected during the development of HD. One challenge is to identify a therapeutic approach that can alleviate these different effects simultaneously. In comparison to the other NDs discussed in this review, HD has the particularity of being a genetic disorder essentially driven by the mutation of one gene. Therefore, HD is an ideal candidate for a genetic therapeutic approach, such as antisense oligonucleotides to reduce HTT expression. However, the recent failed antisense oligonucleotide trials suggest that the endogenous HTT function may be important and that more refined approaches to target HTT are needed (305).
There are a large number of SOD1 mutations (over 180) associated with fALS, and many are thought to impact protein folding and promote aggregation. SOD1 is an enzyme that detoxifies superoxide anions by converting them to hydrogen peroxide; it is typically present in the cytosol, nucleus, peroxisomes, and mitochondrial intermembrane space of mammalian cells, where it forms a stable homodimer (316). For its maturation, SOD1 generally undergoes four main steps: zinc insertion, copper insertion, disulfide bond formation, and homodimer formation (Fig. 5A). The copper chaperone for SOD1 (CCS) interacts with SOD1, providing it with copper and facilitating the transition to the holoform (317, 318). More specifically, CCS interacts with the zinc-metalated, disulfide-reduced SOD1. Recent work has shown that copper has to be transferred from CCS to SOD1 before SOD1 homodimerization occurs, following disulfide bond formation (319). Nonetheless, absence of CCS in mouse models in which mutSOD1 is overexpressed does not accelerate disease onset, despite SOD1 remaining mostly inactive in this case (320). These results indicate that failure to complete copper loading of SOD1 and loss of SOD1 function are not at the basis of the pathology. Instead, it has been proposed that there is a gain of toxicity following conformational changes of monomeric SOD1 (321, 322). In fact, many SOD1 mutations destabilize the protein structure, leading to more exposed hydrophobic regions, especially in the monomeric metal-free species (323–325). The reduced capacity to dimerize and the exposure of more hydrophobic regions can lead to new PPIs. For instance, gains of PPI were noted for a truncated mutSOD1 that cannot dimerize due to a two-base-pair deletion at codon 126 (Leu126delTT), notably with several subunits of the Na+/K+ ATPase pump (326). More recently, an independent study that used an antibody specific to misfolded SOD1 for AP-MS confirmed the interaction of the Na+/K+ ATPase pump with fALS-associated SOD1-G93A (327). Importantly, this interaction was detected at early stages and was confirmed for other fALS-associated mutants. Moreover, the levels of Na+/K+ ATPase are reduced in tissues derived from patients with sporadic ALS. Uncoupling of the Na+/K+ ATPase by misfolded SOD1 could be highly cytotoxic in motor neurons and play an important role in ALS etiology.
Since most assessed fALS-associated mutants reduce protein folding, interaction of SOD1 with the proteostasis network is pivotal and may have a major impact on disease progression (Fig. 5B). Proteomic analysis of SOD1 PPIs in a mouse model led to several chief observations (328). First, the mutant protein, but not WT SOD1, shows a marked interaction with Hsc70 (HSPA8) and Hsp70 (HSPA1B), even in 1-month-old mice prior to the presence of any SOD1 inclusions. Interaction of mutated SOD1 with Hsc70/Hsp70 has also been observed in several other studies (326, 329–331). Second, despite the interaction of mutated SOD1 with Hsc70 and Hsp70 in young mice, motor neuron disease still developed in 4- to 12-month-old mice (328). Finally, there is a gain of multiple additional PPIs with other chaperones coinciding with the appearance of aggregated SOD1 at later stages, including with Apg2/Hsp110 (HSPA4), Hsp105 (HSPH1), BiP/Grp78 (HSPA5), and DnajA1. Association with the ER-resident chaperone BiP is perhaps at first puzzling. However, a recent study by Piette et al. also found WT SOD1 associating with BiP (30). In parallel, C9orf72 was found associated with SEC63. In mammalian cells, transport of signal peptide-containing proteins into the ER is mediated by the Sec61 channel complex, which relies on BiP and SEC63 (332). Association of the WT counterparts of fALS-associated proteins with the Sec61 complex and its interactors could hint at a possible involvement of ER dysfunction and protein-folding stress in the progression of sporadic ALS, as previously shown (333). Interestingly, the presence of a mutSOD1 can modify the composition of the client protein pool interacting with Hsc70 and Hsp70, even when mildly overexpressed in tissue culture cells (334). These proteomic experiments relied on engineered chaperone proteins to trap normally transient interactions with client proteins (i.e., proteins folded by a given chaperone). The authors of the study noted that in the presence of misfolded mutSOD1, there is a gain of client proteins that are predicted to be more disordered, whereas proteins with more stretches of residues predicted to be aggregation-prone accumulate in the pellet fraction, perhaps due to competition for Hsp70 binding. These results illustrate how the presence of misfolded proteins can shift the equilibrium within the cell by potentially altering the cellular folding capacity. It will be important to address in future studies how proteostasis is affected in differentiated cells and under physiological conditions. In another recent study, the binding of two mutSOD1 proteins to Hsp70 and Hsp90 was compared using a modified ELISA (335). In this case, the G41S mutant, associated with a severe form of fALS, shows further increased binding to Hsp70 in comparison to WT SOD1, whereas G37R, which is associated with a milder phenotype, instead displays a higher association with Hsp90 but not Hsp70. While only two SOD1 mutants were compared in detail, other disease-associated mutations with known disease severity were assessed and show the same trend. These results imply that the severity of the disease may in part be determined by how a misfolded protein is triaged by the proteostasis network and confirm that Hsp90 can act as a protein-folding buffer in the cell to mitigate the effect of some disease-associated mutations.
In addition to the SOD1 PPIs with chaperone proteins, levels of SOD1 are regulated by proteolysis. Both the ubiquitin-proteasome system and autophagy are reported to mediate degradation of mutSOD1 when transfected into neuronal cells (336). In agreement with a role for autophagy in SOD1 clearance, mutSOD1 mutants, but not WT, bind to p62 (also linked to fALS), which facilitates autophagy (337). In addition, the α-crystallin chaperone HSPB8, which is induced in mouse motor neurons that express SOD1-G93A, can facilitate the autophagic clearance of aggregated SOD1 and TDP-43 in tissue culture neuronal cells (338). Moreover, several studies report that induction of autophagy increases SOD1 clearance or reduces accumulation of aggregated SOD1 and improves some ALS phenotypes (339–341). Interestingly, in another study in which fibroblast cells were derived from ALS patients, levels of mutSOD1 were increased by proteasome inhibition, but not by autophagy suppression (342). One possibility is that different cell types may have different capacities to triage misfolded SOD1, or that autophagy can only efficiently clear aggregated SOD1. Several E3 ubiquitin ligases have been implicated in regulating SOD1 levels, especially those of the fALS-associated mutants (Fig. 5B). As for other ND-associated proteins, CHIP has been shown to interact, together with Hsc70, with mutated but not WT SOD1 (329, 330). Whereas CHIP may be involved in the regulation of SOD1 turnover, several results also indicate that another E3 ligase is involved in ubiquitinating SOD1. Indeed, dorfin, which is a RING between RING (RBR) E3, interacts specifically with mutated but not WT SOD1 and is recruited to SOD1 inclusion bodies (343). The presence of a disulfide crosslink between the cysteine residues at positions 6 and 111 of SOD1 has been suggested to mediate both SOD1 aggregation and recognition by dorfin (344). However, another study, using a different C6 mutation, showed that the disulfide bond between these residues is not required for aggregation, indicating that another mechanism may mediate recognition of misfolded SOD1 by dorfin (345). Other ubiquitin ligases have also been shown to target misfolded SOD1 for proteolysis. For instance, the NEDD4-like ubiquitin ligase 1 (NEDL1) E3, which is specifically expressed in neuronal cells, interacts with and ubiquitinates several fALS mutants, but not WT SOD1, and reduces the half-lives of mutSOD1 when overexpressed (346).

[Figure 5. A, normal folding and maturation of SOD1 is disrupted by disease-associated mutations; misfolded SOD1 becomes incorporated into β-sheet-rich fibrils and inclusion bodies. B, misfolded SOD1 can be degraded by either the proteasome or the lysosome. Soluble mutSOD1 may be specifically targeted by the E3 ligase dorfin. Larger aggregates or inclusion bodies may be degraded through autophagy or become resolubilized by disaggregases such as Hsc70 or VCP and then be subsequently degraded by the proteasome. C, mutSOD1 may disrupt mitochondrial import and thus energy and redox states. Altered interactions between SOD1 and components of the antioxidant system may further contribute to increased ROS. Formation of aggregates sequesters soluble BCL-2, disrupting its antiapoptotic activity. IMS, intermembrane mitochondrial space; ROS, reactive oxygen species.]
In addition, the E6-AP (Ube3A), ER-associated glycoprotein 78 (gp78 or AMFR; autocrine motility factor receptor), and cellular inhibitor of apoptosis protein 1/2 (cIAP1/2) E3s have all been shown to interact with and regulate levels of different mutSOD1 proteins (347–349). While several ubiquitin ligases have been implicated in regulating SOD1 levels, the expression levels of each E3 ligase in different neuronal cell types will need to be more precisely quantified in future studies to establish a more comprehensive model of their possible impact in ALS.
Failure to adequately fold or eliminate SOD1 leads to the accumulation of misfolded SOD1 and its aggregation. Inclusion bodies enriched in SOD1 can be observed in the cytoplasm of motor neurons and astrocytes in mouse models and may play a key role in ALS etiology (321, 350). These inclusion bodies present two distinct areas, an external protease-sensitive coating and a more compact protease-resistant core, and their exact structure varies with different mutations (351). Three main regions of SOD1, each with different physicochemical properties, were identified as fibril cores (352). Synthetic peptides corresponding to these three core regions can form fibrils, either alone or in mixtures with the other peptides. Furthermore, lowering structural stability by reduction of the highly conserved intramolecular disulfide bond between cysteines 57 and 146 increased aggregation of mutSOD1-G93A under physiological conditions, suggesting that exposing fibril-forming core regions through structure destabilization can facilitate fibrillar aggregation of SOD1 (351, 352). Une et al. identified proteins that preferentially bind to the different aggregation cores under normal conditions and upon inhibition of autophagy or of the ubiquitin proteasome system in mice (331). Proteins interacting with the SOD1 cores are involved in the PI3K/AKT and MAP kinase cascades, suggesting that mutant SOD1 could indirectly affect cell growth and cell survival through sequestration of such components. In addition to cellular aggregates, there is growing experimental evidence supporting uptake and prion-like propagation of pathological conformations of SOD1, but the precise mechanism underlying the transfer is still unknown (353–355). In line with this, extracellular matrix proteins and cell surface receptors were also identified as interacting proteins in several studies (356, 357). Interestingly, SOD1 can also form homotrimers, and fALS mutants that promote trimer formation are more cytotoxic in model neuronal cells (358, 359). In contrast, assembly into large insoluble fibrils appears to mitigate neurotoxic effects both in model tissue culture cells and in a mouse model, indicating that large SOD1 aggregates might be cytoprotective (359, 360). It will be important to determine in future work whether small SOD1 oligomers engage in specific PPIs that are detrimental to the cell.
As a fraction of SOD1 is localized in mitochondria, it is perhaps not a surprise that several PPIs with mitochondrial proteins have been reported. In addition, expression of mutSOD1 impacts mitochondrial homeostasis. For instance, overexpression of the copper chaperone CCS in mice leads to the accumulation of mutSOD1-G93A in mitochondria, inducing a severe mitochondrial pathology that accelerates neurological deficits and shortens lifespan (Fig. 5C) (361). Different forms of mutSOD1 preferentially associate with the cytoplasmic side of the mitochondrial outer membrane. Proteomic studies by Li et al. showed that expression of different fALS-associated SOD1 variants leads to a decrease in mitochondrial protein import, despite increased levels of some components of the mitochondrial import machinery (362). Interestingly, this defect is more pronounced in organelles purified from motor neurons than from the liver, suggesting a potential mechanism of motor neuron susceptibility. In addition, both WT and mutant SOD1 have been shown to bind the antiapoptotic, mitochondria-associated protein Bcl-2 in vitro and in vivo in mouse and human spinal cord, providing evidence of a direct link between SOD1 and an apoptotic pathway (363). SOD1 localizes to the intermembrane mitochondrial space, and gains of PPI with one fALS-associated mutant (Leu126delTT) were reported for several intermembrane mitochondrial space proteins, such as members of the solute carrier family (SLC25A4, A5, and A12) (326). Interestingly, additional SOD1 interactors that reside in the matrix were also reported in the same study, such as the α, β, and γ subunits of ATP synthase. This may be particularly relevant, as the β subunit (ATP5B) is reported to be depleted or affected in the mitochondrial fraction of G93A mouse spinal cord, suggesting that SOD1 might normally help stabilize the ATPase complex or promote efficient mitochondrial import, such that the mutant protein compromises the energy and redox levels of the organelle (364–366). In a large proteomic study aimed at generating a high-confidence mitochondrial protein network linked to NDs, Malty et al. identified SOD1 interacting with members of the peroxiredoxin antioxidant system (PRDX2, 4, 5, and 6), as previously reported (288, 367, 368). Disruption of the interaction with PRDX5, a hydrogen peroxide scavenger localized in the mitochondrial matrix, results in an increase in ROS, which highlights the importance of SOD1-PRDX5 binding (288). Interestingly, DJ-1 (PARK7) has also been found to interact with SOD1 as well as with PRDX3 and 6, which are active in protecting mitochondria by clearing ROS. Whether the DJ-1 interaction is related to its antioxidant defense role or its chaperone activity remains unclear, but the shared PPIs between proteins implicated in PD and ALS raise the question of whether DJ-1 and SOD1 might act in the same antioxidant pathway, which may be commonly affected in these two different NDs.
Despite mutations in TDP-43 accounting for less than 1% of ALS cases, cytoplasmic inclusions of TDP-43 are found in the majority of ALS cases (369). TDP-43 aggregates are not limited to ALS and have been found in frontotemporal dementia, AD, HD, and PD. TDP-43 is an RNA-binding protein that is predominantly localized to the nucleus and is involved in RNA processing, such as splicing. The number of reported PPIs is lower than for the other disease-associated proteins in this review. However, TDP-43 interacts with a large number of different RNAs, including its own mRNA, through which it reduces its own expression levels (370). In addition, depletion of TDP-43 in mouse brain tissue results in the downregulation of synapse-related genes, suggesting a potential mechanism of neuronal vulnerability to the changes in TDP-43 levels associated with ALS. If not properly localized, TDP-43 can also impede translation of mitochondrial genes, thereby inducing mitochondrial damage (371). Of the reported PPIs with TDP-43, many are interactions with ribonucleoproteins or RNA-binding proteins. Freibaum et al. identified nuclear and cytosolic interaction networks using AP-MS of tagged TDP-43 (372). In this case, most PPIs are not affected by the presence of disease-associated mutations in TDP-43 (A315T or M337V), but many of these interactions are RNA-dependent, suggesting that they are indirect. The cytosolic PIN comprises translation initiation and elongation factors and ribosomal subunits, including the stress granule component Stau1, which was later shown to interact with TDP-43 in an RNA-dependent manner (373). The nuclear network of interactors includes transcription and splicing factors, such as HNRNPA1 and HNRNPH1. Additional evidence for the interaction of TDP-43 with RNA-binding proteins, including HNRNPH1, comes from the BrainMap published by Pourhaghighi et al. (374). Using cofractionation of proteins obtained from mouse brain tissues, the authors identified a complex of RNA-binding proteins that included several ALS-associated proteins such as TIA1, ataxin 2 (ATXN2), Ewing's sarcoma (EWS), and FUS. Additional studies have found that TDP-43 and FUS together drive the expression of Hdac6 and that their PPI is also RNA-dependent (375, 376). The identification of several ALS-associated proteins as interactors suggests that a common pathway is affected in the development of ALS.
Regulation of TDP-43 interactions with both RNA and the protein homeostasis network likely plays a major role in ALS pathology. Yu et al. recently showed that several mutations or posttranslational modifications that reduce RNA binding drive TDP-43 demixing into atypical droplets that contain both liquid spherical shells and cores (377). While TDP-43 is more concentrated in the shell, it is also present at a lower concentration in the core, where it interacts with various Hsp70s and Apg2. These interactions are required to maintain phase separation and prevent TDP-43 aggregation. Notably, Hsp70 may directly interact with a conserved stretch of the C-terminal low complexity domain that plays a key role in LLPS of TDP-43 (378). Alongside preventing aggregation, chaperone proteins also promote the clearance of TDP-43, especially after activation of heat shock transcription factor 1 (HSF1), the master regulator of the heat shock response (379–381). For example, DNAJB8, a small HSP, inhibits the aggregation of a smaller fragment of TDP-43 that is particularly prone to aggregate in the cytosol, which can then be cleared by the proteasome and autophagy (382).
TDP-43 is ubiquitinated in aggregates found in the motor neurons of ALS patients (383), and several E3 ubiquitin ligases have been shown to interact with TDP-43. The Praja ubiquitin ligase was recently shown to be upregulated by HSF1 and to mediate TDP-43 clearance by binding to its C-terminal domain (384). RING finger protein 112 (RNF112, Znf179) is another ubiquitin ligase that interacts with and ubiquitinates TDP-43 (385). Notably, TDP-43 inclusions are observed in several brain regions of RNF112 KO mice, suggesting that it may play an important role in regulating TDP-43 turnover. In addition to regulating turnover, ubiquitination could also play a role in TDP-43 localization. For instance, an interaction between parkin and TDP-43 has been reported in several studies, and expression of both human proteins in rat cortex results in increased ubiquitination of TDP-43 and its mislocalization to the cytosol, which could promote cytosolic aggregation (386–388). Cyclin F, a CRL1 substrate adapter protein encoded by a gene associated with fALS, binds to the N-terminal region of TDP-43 and mediates its ubiquitination in vitro (389). Remarkably, one ALS-associated cyclin F mutation (S621G) leads to an increase in K48-linked ubiquitin chains conjugated to TDP-43 in vitro, and there is an accumulation of cytosolic inclusions containing both TDP-43 and K48 ubiquitin chains in spinal cord sections of a patient carrying that mutation. These results are surprising, as K48-linked chains are mostly thought to mediate proteasomal degradation, which should in principle hamper TDP-43 aggregation. One possibility is that ubiquitination of TDP-43 may increase its affinity for ubiquilin 2, yet another protein encoded by a fALS-associated gene, which contains a ubiquitin-binding domain and binds to the C-terminal region of TDP-43 (390). Ubiquilin 2 itself undergoes phase separation, localizes to stress granules, and helps mediate proteasomal degradation of misfolded proteins (391, 392). In this case, the tightened interaction between TDP-43 and ubiquilin 2 could favor the solidification of elements in stress granule condensates, which would then undergo a liquid-to-solid transition and nucleate aggregation.
Outlook and future directions
While clearer views are emerging regarding the role of each aggregation-prone protein, many areas still require further clarification. For example, when more than one function has been attributed to a given aggregation-prone protein, it is not always clear which function(s) may be more important in the context of the pathology and how they may each exacerbate or mitigate misfolding and oligomerization. In addition to the perturbation of normal function, the mutations, posttranslational modifications, and oligomerization of aggregation-prone proteins can also affect cellular processes due to deviant PPIs and other cytotoxic events (e.g., damage to the cell membrane). It remains to be determined which of the affected cellular processes are dominant in the progression of the disease, especially in differentiated neurons. One possibility is that a combination of multiple "system failures" within the cell contributes to cell death. Alternatively, distinct "series of unfortunate events" may occur in different neuronal cells, with some cells being more susceptible to a specific type of injury. For instance, while impaired vesicle trafficking in some cells containing HTT inclusions may be particularly detrimental, misregulated transcription may be more damaging in other cells in which an HTT inclusion has formed. This may be particularly true for human neuronal cells that have an average lifespan of over 10 years, during which time they may get exposed to different sets of stimuli or stresses.
To summarize the numerous PPIs discussed here, we assembled a PIN (Fig. 6).

[Figure 6 legend: the complete list of interactions is provided in Table S1. Only proteins that display at least one high-confidence interaction (four or more pieces of PPI evidence in BioGRID) are represented and are depicted with solid lines. Interactions with fewer than four pieces of PPI evidence in BioGRID are represented as dotted lines. The size of the nodes is proportional to the number of interactions with the aggregation-prone proteins, and the nodes are colored as indicated.]

For clarity, we only included proteins for which at least four pieces of experimental evidence are reported in BioGRID for at least one of the six aggregation-prone proteins. Perhaps not surprisingly, multiple components of the proteostasis network interact with several aggregation-prone proteins and are placed at the center of the PIN. In contrast, proteins involved in the physiological functions of the examined proteins tend to be located at the periphery of the PIN. Notably, Hsc70 (e.g., HSPA8) and Hsp70 (e.g., Apg2/HSPA4) play a dominant role in preventing both misfolding and aggregation, by driving the disaggregation of oligomerized and aggregated proteins and the clearance of misfolded proteins. Together with Hsp90, these chaperones occupy a central location in the PIN. Oddly, DNAJ proteins are noticeably absent from the PIN because the number of observations falls below our cutoff. Nonetheless, DnaJA1, A2, B1, and B6 each bind to multiple aggregation-prone proteins.
Because these cochaperones play a key role in mediating Hsc70/Hsp70 interactions, it would be important to examine their expression levels and their activity in human neuronal cells upon aging. This would determine whether a lower folding capacity in the cell is really the main triggering factor for aggregation. Other central elements include proteins associated with the ubiquitin proteasome system such as VCP, p62, and CHIP. One striking observation is that each aggregation-prone protein has been found to be targeted for degradation by more than one pathway, yet the presence of inclusions indicates that cells are not able to adequately prevent the accumulation of misfolded proteins. For example, several E3s can mediate ubiquitination of a given misfolded protein, but it is not clear whether all these pathways are really simultaneously active. One possible issue is that many of the earlier studies were often based on transient transfection experiments, in which both the E3 and the substrate were vastly overexpressed in comparison to their endogenous levels. This is particularly true for CHIP, as the bulk of studies on the topic was performed over a decade ago, when fewer molecular and cell biology tools were available. Therefore, while there are possibly several redundant quality control pathways at play, not all are necessarily active in a given cell type, especially in differentiated neurons. Side-by-side evaluation of some of these pathways will need to be performed in suitable cell models to better assess the role of each of these E3s in neurodegeneration.
One challenging issue for evaluating the role of PPIs in disease progression is that there is no common repository for PPIs altered by a disease-associated mutation or upon oligomerization of a protein. For instance, most PPIs reported in BioGRID are assumed to occur between WT proteins. Some PPIs specific to ND-associated mutants are included in the main database, but others, such as those involving oligomerized fibrils, are often not included. Perhaps the creation of PPI databases that are better structured, with datasets organized by gene and including the ND-associated mutations, could lay the foundation for a greater comprehension of the PINs involved in disease etiology. Another main issue is that most studies rely on coimmunoprecipitation experiments using either overexpressed baits in tissue culture cells or homogenized tissues, which imply nonphysiological conditions or loss of contextual information, respectively. For instance, some PPIs may not occur at lower concentrations or may only occur in specific cell types. Therefore, it remains essential to test the role of each candidate PPI using adequate cell or animal models in order to properly determine whether it plays a role in the disease.
Mitochondrial dysfunction can be triggered by all the aggregation-prone proteins reviewed here and may therefore play a central role in the etiology of neurological disorders. Neurons critically depend on this organelle and its correct localization to enable brain development, differentiation, neurotransmission, and neuronal activity. Notably, many of the mitochondrial PPIs that we cited are not found in our main PIN (Fig. 6). As for the Hsp40 cochaperones, the amount of reported evidence remains low for many of these interacting mitochondrial proteins. Nonetheless, some clear shared paths are emerging. For instance, impaired mitochondrial protein import and maturation of OXPHOS subunits, including the subsequent increase in ROS production and a lower energy state, are a common signature in all the NDs that we reviewed. Additional detrimental defects further diversify the burden on mitochondria for each ND. For instance, accumulation of mutHTT on the outer mitochondrial membrane alters mitochondrial dynamics and promotes mitophagy in addition to disrupting protein import. In the case of ALS, import defects extend to ROS-scavenging enzymes, thereby favoring ROS accumulation, alongside the inactivation of antiapoptotic factors. Although the exact timing between mitochondrial dysfunction and the onset of NDs is yet to be fully elucidated, it is clear that the aberrant interactions of mutant and/or misfolded aggregation-prone proteins lead to failure to maintain functional mitochondria. This often results in aberrant mitochondrial fusion/fission dynamics, impaired axonal transport, and ultimately neuronal death, thereby representing one of the major risk factors in the development and progression of neurodegenerative disorders.
Despite numerous clinical trials and the vast expansion of our knowledge of NDs, no cure has yet been established for these disorders. One important aspect is that each aggregation-prone protein engages in multiple PPIs with various components of different cellular networks. Therefore, multifactor approaches will likely be required to effectively combat the cytotoxicity associated with protein aggregation. In parallel, we will need to better define which PPIs occur in differentiated cells and which elements are most likely altered upon aging, as they might exacerbate neurodegeneration. Importantly, while mutations are great tools with which to start investigating how PINs may be altered in disease, better models will need to be established to understand how PPIs may be affected in sporadic forms of NDs.
Supporting information-This article contains supporting information.
Acknowledgments-We have tried to include as much of the relevant literature as possible, and we apologize to colleagues whose contributions have been omitted.

Funding and additional information-Our work is supported by a CIHR project scheme grant (PJT-148489). G. C. is supported by a DFG Walter Benjamin fellowship (CA 2559/1-1) and a MSFHR Research Trainee fellowship (RT-2020-0517), and C. M. by a UBC scholarship.
Conflict of interest-The authors declare that they have no conflicts of interest with the contents of this article.
Are Quests for a “Culture of Assessment” Mired in a “Culture War” Over Assessment? A Q-Methodological Inquiry
The “Assessment Movement” in higher education has generated some of the most wide-ranging and heated discussions that the academy has experienced in a while. On the one hand, accrediting agencies, prospective and current clientele, and the public-at-large have a clear vested interest in ensuring that colleges and universities actually deliver on the student learning outcomes that they promise. Anything less would be tantamount to a failure of institutional accountability, if not outright fraud. On the other hand, it is no secret that efforts to foster a “culture of assessment” among institutions of higher learning have frequently encountered resistance, particularly on the part of faculty unconvinced that the aspirations of the assessment movement are in fact achievable. One consequence of this tension is the emergence of an embryonic literature devoted to the study of processes that monitor, enhance, or deter the cultivation of a “culture of assessment” with sufficient buy-in among all institutional stakeholders, faculty included. Despite the employment of a wide-ranging host of research methods in this literature, a significant number of large unresolved issues remain, making it difficult to determine just how close to a consensual culture of assessment we have actually come. Because one critical lesson of extant research in this area is that “metrics matter,” we approach the subjective controversy over outcomes assessment through an application of Q methodology. Accordingly, we comb the vast “concourse” on assessment that has emerged among stakeholders recently to generate a 50-item Q sample representative of the diverse subjectivity at issue. Forty faculty and administrators from several different institutions completed the Q-sort, which resulted in two strong factors: the Anti-Assessment Stalwarts and the Defenders of the Faith. Suggestions are offered regarding strategies for reconciling these “dueling narratives” on outcomes assessment.
Introduction
What has been labeled the "Assessment Movement" (Ewell, 2002, 2009) in higher education has generated some of the most wide-ranging and heated discussions that the academy has experienced in quite some time. Indeed, judging from the most audible voices in conversations surrounding outcomes assessment, the general impression conveyed by the tone and tenor of the debate bears less of a resemblance to a vigorous yet disciplined, diplomatic exchange of alternative intellectual views than to an all-out, profoundly polarized, and acrimonious "dialogue of the deaf" between deeply entrenched and seemingly antithetical positions. Occupying one side of the ensuing stand-off are the advocates of outcomes assessment. Among proponents of the assessment movement, it is absolutely essential to gauge the effectiveness of higher education and to hold institutions accountable (Carey, 2010; Glenn, 2010; Havens, 2013; Miller, 2012). On the opposite side of this proposition we find an equally committed cluster of higher education professionals, namely the skeptics or detractors of the outcomes assessment enterprise that has gathered momentum, particularly as a result of public demands for accountability and accrediting agency practices over the past decade. Central to the resistance by members of the skeptics' camp is a range of concerns including, among others, complaints about the "political" nature and origins of the accountability movement, the lack of meaningful faculty input and ownership, and the persistence of troubling epistemological questions bearing on the validity of various measures commonly employed to provide indisputable evidence of real educational progress (Hazelkorn, 2013; Horn & Wilburn, 2013; Nugent, 2008).
Whether these caricatures of the pro and con camps on the value of outcomes assessment efforts are accurate is a question that the present research is designed in part to address. As we shall see, extant research aimed at ascertaining progress in the quest to develop institution-wide "cultures of assessment" by calibrating and monitoring over time the attitudes and practices vis-à-vis outcomes assessment on the part of key stakeholders (accrediting agencies, academic administrators, teaching faculty, and students) is by no means of one piece in terms of methodologies and metrics, no less than substantive findings. Indeed, our review of the literature bearing on assessment and its effective implementation across the universe of higher education in the United States leads us to surmise that one of the biggest deterrents, if not the principal obstacle, encountered in the quest to cultivate campus-wide "cultures of assessment" stems from inadequately understood viewpoints held by key parties to the enterprise. Particularly important in this regard are the perspectives of faculty and administrators as institutions embark on the creation of such cultures in a manner able to satisfy external accrediting agencies while upholding or advancing the quality of the college's core educational mission. If so, before reviewing the results of diverse efforts along these lines and setting the stage for the methodological alternative we employ here, it may prove beneficial to pause briefly in an effort to place the "Assessment Movement" in an abbreviated historical context.
The State of Play in the Outcomes Assessment Enterprise
We can begin by noting that assessment has always been a critical component of the education-learning process. Initially, assessment was the exclusive province of teachers, who designed courses, developed assignments, and then evaluated the extent to which students had mastered the material. Over time, others who had an interest in education and the learning process became involved. For example, in the early 20th century, groups like the Carnegie Foundation for the Advancement of Teaching, with an interest in objective and scientific evaluation of student learning, began developing standardized tests to evaluate specific areas of learning. These efforts were followed in the 1930s by the work of several universities, including the University of Chicago, to expand assessment efforts to measure multi-disciplinary learning and "general education." This period saw the development of tests such as the Graduate Record Examination. This work was followed in the post-World War II era by what Shavelson (2007) calls "The Rise of the Test Providers." As a direct result of the number of returning soldiers using the GI Bill to go to college, several companies, most notably the Educational Testing Service (ETS), were established to assist colleges in the screening and assessment process. Due to the combined effects of these separate developments, the evolving culture of higher education in the United States of the mid-1960s was generally hospitable to a growing emphasis on assessment. More specifically, this acceptance was manifested in the ever-growing reliance on ETS-type instruments such as the SAT, GRE, and the like as widely used and legitimate measures of educational outcomes.
The exact sequence and precise effects of the crucial events to follow are subject to different interpretations, but what is clear from the various accounts is that the assessment process began to change significantly during the 1970s. Prior to this decade, the outside forces that had pushed for greater assessment were interested primarily in enhancing the education-learning process. That began to change, and the assessment movement took on an additional emphasis as a consequence of alterations in the political and economic environment that, in concert, accelerated the demand for college degrees by employers and prospective employees in search of certification. This in turn was accompanied by rapid rises in the cost of college, increases that far outpaced the capacity of federal and state governments to subsidize these costs. Soon, partisan differences on the role of such subsidies would be activated, and this, combined with increased pressure from tuition-strapped families, contributed to elevated concern among political office-holders. The cumulative effect of these developments was a political climate far more hospitable to the exercise of added control of colleges and universities, all defended as part of a legitimate interest in holding higher education accountable.
Universities had always claimed that students were learning, but now the time had come, some argued, for universities to demonstrate that learning was actually taking place. A multitude of forces coalesced to produce this turn of events. Included among these is the growing emphasis on applying private sector management models to the public and education arenas, which meant a greater emphasis on evaluation and accountability (Zumeta, 1998, 2000). In addition, during this period there were substantial funding cutbacks for higher education, tied to a more audible demand from state governments that universities be accountable for what they do with taxpayers' dollars. Some states began to establish funding formulae based on objective indicators of performance by universities (Ewell, 1994, 2001). At the same time, spending cutbacks escalated the cost of higher education which, when coupled with tax revolts across the country, increased the public's demand for greater accountability (Miller, 2012; Zumeta, 2000). This demand for greater accountability was taken by some, especially faculty, as unjustifiably contrary to the respect and deference given to universities in the past. At the same time, however, with the need for more money and the trend toward the application of business models in higher education, boards of directors at many universities had become dominated by members from the business world. As a result, boards came to function much less as buffers and protectors of institutions of higher learning from outside forces, and more as vehicles by which the demands from both the public and private sectors would be implemented. Accrediting agencies, though having been involved for decades, also began to take on a new focus by requiring institutions of higher education to develop measures of institutional effectiveness and clear student learning outcomes (Ewell, 2009; King, 2000; Zumeta, 1998).
Added to these institutional alterations in the environment of American higher education is a discernible increase in the demands for accountability from many critics of higher education. One in particular warrants citation. Margaret Spellings, Secretary of Education under President George W. Bush from 2005 to 2009, in the Commission on the Future of Higher Education report bearing her name (Spellings, 2006), focused on accountability from a self-proclaimed consumerist perspective. The Commission's report called for the creation of a database so that the public could have access to information about individual colleges and universities. This information would serve as performance indicators to demonstrate a given institution's ability to produce results, so that prospective students and their parents could make better choices about where to spend their money. Similarly, higher education could not escape the impact of other social forces operating in the external environment, such as the culture wars. Works such as Allan Bloom's (1987) The Closing of the American Mind, along with books by then-Secretary of Education William Bennett (1989), provided thinly veiled political assaults on the perceived leftward leanings of most institutions of higher education as well as their faculty. This added considerably to the demand for greater accountability. Similar attacks have continued up to the present (Archibald & Feldman, 2011; Arum & Roksa, 2011).
Arum and Roksa's Academically Adrift: Limited Learning on College Campuses warrants special attention due to the fact that it constitutes what is arguably the most thorough, careful, and thoughtful (not to mention sobering) application of assessment per se to key educational outcomes at two dozen highly esteemed colleges and universities of varying sizes and locations within the United States. Spanning several years, Arum and Roksa's research design enabled them to chart progress on several fronts within the institutions they examined (critical thinking, analytical reasoning, and written communication) over time, while also issuing data-based judgments of institutional effectiveness at a global level across the schools they examined. Based primarily on data generated by student responses to the rubric-based Collegiate Learning Assessment (CLA), an instrument fostered and promoted by the Council for Aid to Education, the results were devastating: excluding dropouts, fully 45% of the participants failed to demonstrate any significant gains in the three critical skill areas cited above during the first 2 years of college. Equally depressing, this figure was reduced only marginally, to 36%, over 4 years of college. Not surprisingly, these findings gained wide circulation among mass-media outlets in the United States, adding a growing sense of urgency to the political incentives to shore up accountability. Finally, it bears noting that these growing concerns were gaining increased circulation at the same time that severely elevated student debt levels were garnering widespread attention. The spike in aggregate debt burdens incurred by college graduates stemmed in large measure from the fact that average tuition costs at 4-year public universities circa 2014 had climbed to 225% of their 1984 levels (College Board, 2015, p. 16).
Unresolved Issues From Scholarly Scrutiny
In such an environment, it should come as no great surprise that the outcomes assessment movement is shrouded in controversy. And the controversy catalyzed by divergent perspectives on assessment held by differing constituencies appears to persist even when attention to the issue shifts from popular or policymaking venues to scholarly efforts to monitor meaningful progress in the cultivation of genuinely cooperative cultures of assessment. Repeated surveys of faculty buy-in to institutional assessment regimes, undertaken by Matt Fuller and colleagues, defy holistic narrative interpretation and, despite some evidence of generally declining faculty resistance to outcomes assessment, the percentages of avid supporters among the ranks of teaching faculty fall short of substantial majorities (Fuller, 2011; Fuller, Henderson, & Bustamante, 2015; Fuller & Skidmore, 2014). Additional commentary and studies of an impressionistic nature (Ewell, 2009; Gold, Rhoades, Smith, & Kuh, 2011; Hutchings, 2010; Katz, 2010; Kelly-Woessner, 2011; Lederman, 2010; Praslova, 2013), as well as more empirically oriented studies of samples of critically placed administrators and/or program participants (Farkas, Hinchliffe, & Houk, 2015; Hunt-Bull & Packey, 2007; Kuh & Ikenberry, 2009; Loughman & Thomson, 2006; MacDonald, Williams, Lazowski, Horst, & Barron, 2014; Marrs, 2009; Welsh & Metcalf, 2003a, 2003b), underscore a complementary conclusion in pointing to the sense of faculty buy-in as the pivotal variable in explaining why some efforts to construct cultures of assessment succeed and others fail.
For his part, Fuller (2011) is convinced that research capable of identifying the roots of faculty support for and opposition to outcomes assessment holds the key to success in breeding cultures of assessment. Moreover, it is Fuller's view that until large-sample surveys of the sort he has conducted can generate genuine narratives, in which discrete survey responses assume the properties of a subjectively coherent account, scholarship of the sort needed will remain elusive. Meanwhile, the appearance of Astin and Antonio's (2012) widely regarded, authoritative volume on such matters not only casts doubt on the degree of faculty buy-in within current efforts to cultivate cultures of assessment, it lays at the feet of faculty the clear onus of blame for this condition. Faculty, according to these authors, are inherently resistant to change in their work environments and are deemed guilty of responding to initiatives advanced under the rubric of assessment by engaging in a stylized series of "academic games" that sabotage reasonable cooperation with authorities seeking to institutionalize feedback processes fundamentally aimed at enhancing student learning outcomes.
The dearth of data-based accounts, coupled with the limitations of extant empirical studies, has not prevented other scholars (e.g., Kinzie, 2010; Miller, 2012) from advancing ambitious generalizations speaking to the overall state of play vis-à-vis assessment in the academic community as a whole. Both, for instance, claim that assessment has taken root on the vast majority of campuses across America and that, while faculty still lag behind administrators in their enthusiasm for devising metrics of institutional effectiveness, in many if not most institutions the process of cultivating an authentic culture of assessment is well, and widely, underway. Miller's (2012) conclusions, derived from an effort to track attitudes toward the Assessment Movement by examining the debates about assessment over the past 20 years in the scholarly journal Change, are worth citing in this regard:

In some ways, the assessment movement over the last 25 years is similar to what individuals experience as they move through Kübler-Ross's stages of grief: denial, anger, bargaining, depression, and acceptance. . . . During the initial denial stage, faculty and staff could not understand why assessment was necessary, which led to anger that outside forces were trying to mandate it. However, demands for accountability continued to create pressure for colleges and universities to assess student learning, leading institutions to try bargaining with state officials and regional accreditation agencies. Unflattering national evaluations of American higher education . . . propelled many institutions into depression. But eventually, reluctantly, slowly, and unevenly, many institutions came to an acceptance of assessment and its role in higher education. (p. 3)

For our part, such a characterization seems premature at best. Indeed, our own interest in the state of play in the assessment game was catalyzed in the spring of 2014, when the chair of a political science department posted a negative comment about the assessment process on a department chairs' online discussion list, apsanet.org, operated by the American Political Science Association. This particular post was followed by a flood of comparably hostile comments, and a few relatively supportive comments, from a wide swath of professors and chairs representing political science programs across a diverse selection of colleges and universities. The discovery of this commentary led us to search other material and discussions, particularly a series of articles about assessment in the Chronicle of Higher Education. Many of the articles in the online version of the Chronicle were followed by hundreds of comments, most of them critical of what was currently taking place under the aegis of the Assessment Movement. At the very least, the commentary taken as a whole cast a lengthy shadow of doubt on Miller's contention that the higher education community had come to accept, if not make peace with, the rigors and requirements of assessment. Indeed, if the commentaries we encountered were to any degree representative, it would appear that many stakeholders now occupying the classroom trenches as practitioners in the assessment enterprise are not accurately described as embodying the acceptance stage at this point and are still moored, if not mired, in the anger stage. To be fair, Miller's conclusions are framed in terms of compliance with assessment mandates at the institutional rather than the individual level of administrative or faculty stakeholders.
And she has couched her observations in carefully imprecise language (e.g., "many institutions . . .") that makes the truth-claims she advances difficult to refute. Even so, her claims run precariously close to committing the ecological fallacy by, in effect, assuming that institution-wide compliance with accrediting agencies' emphases on assessment can automatically be taken to imply supportive subjectivity on the part of the individuals comprising those institutions.
Q Methodology
This project exploits the advantages of Q methodology to examine the subjective structure of the discourse regarding assessment in higher education. Originated by William Stephenson (1935, 1953), Q methodology provides for the systematic study of subjectivity. McKeown and Thomas (2013) provide an overview:

Q methodology encompasses a distinctive set of psychometric and operational principles that, conjoined with statistical applications of correlational and factor-analytic techniques, provides researchers with a systematic and rigorously quantitative procedure for examining the subjective components of human behavior. Within the context of Q methodology, "subjectivity" is regarded as a person's communication of a point of view on any matter of personal or social importance. A corollary is the twofold premise that subjective viewpoints are "communicable" and advanced from a position of "self-reference." (Preface, p. ix)

Key to Q methodology is the concept of concourse, by which Stephenson meant all that could be said about a particular topic, which is, of course, theoretically infinite in nature. Concourse is rooted in self-reference, and this universe of statements is made up of statements of opinion. For example, only the delusional would dispute that Barack Obama is president of the United States in 2015, but everyone has an opinion about that fact, and the communication of such opinion, that is, shared communicability, is behavior. Similarly, the expression of opinion about the assessment movement in higher education, when shared, is behavior that can be studied scientifically, using the procedures and principles of Q methodology.
A sample of statements (Q sample) drawn from the concourse is presented to subjects, who rank-order the statements through a process known as Q-sorting. Thus, each statement is placed along a continuum, relative to the other statements in the Q sample, reflecting the sorter's point of view about the topic. Note the substantial difference between Q-sorting and responding to a Likert scale or a battery of survey items. In a Likert scale or survey, each item is independent of the other items. In Q-sorting, the items are compared against one another, with salience attributed to those statements at either end of the continuum. Statements in the middle of the Q-sort have lesser importance to the sorter, and, indeed, the zero point in the continuum signifies a neutral feeling.
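To make the contrast concrete, the following minimal sketch (in Python) represents a completed Q-sort as a mapping from statement to column value under a forced distribution. The column quotas shown are hypothetical: the study reports only the +5 to -5 range for its 50-statement sample, not the shape of its distribution.

COLUMNS = list(range(-5, 6))                # opinion continuum, -5 (most uncharacteristic) to +5 (most characteristic)
QUOTA = [3, 4, 4, 5, 6, 6, 6, 5, 4, 4, 3]   # hypothetical quasi-normal forced distribution; sums to 50

def validate_qsort(sort):
    # sort: dict mapping statement id (1..50) to a column value in -5..+5.
    # Unlike independent Likert items, the quota forces every placement to be
    # made relative to the other 49 statements.
    counts = {c: 0 for c in COLUMNS}
    for column in sort.values():
        counts[column] += 1
    return len(sort) == 50 and [counts[c] for c in COLUMNS] == QUOTA

Because the quota caps each column, a sorter cannot rate every statement as extremely characteristic; salience at the tails is purchased by relegating other statements toward the neutral midpoint.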
Once the Q-sorts are collected, the data are subjected to correlation and factor analysis, which reduce the data by grouping sorts that were performed similarly; these groups of similarly performed sorts form the factors. The factors are operant representations of shared points of view and can be compared and contrasted with one another, allowing the researcher to understand the subjective structure of the views concerning the topic under study.
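Note that persons, not items, serve as the variables at this stage. A minimal sketch of the correlation step, using only numpy (the array name and shape are illustrative):

import numpy as np

def person_correlations(sorts):
    # sorts: array of shape (n_persons, n_statements), one completed Q-sort per row.
    # Because persons are the variables in Q methodology, the result is an
    # n_persons x n_persons matrix; highly correlated rows are candidates for
    # loading on the same factor.
    return np.corrcoef(sorts)

Applied to the present study's data, this step yields the 40 × 40 matrix that is factor analyzed in the Results section below.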
Applying Q Methodology: Concourse, Q Sample, and P-Set
Given the aforementioned limits of the few empirical investigations of academic attitudes toward assessment and the Assessment Movement, this project seeks to redress the relative neglect of stakeholder subjectivity in this research and to explore the meanings and viewpoints of professional academics on the subject. To tap these subjective understandings of assessment and the Assessment Movement, Q methodology was selected (Brown, 1980; McKeown & Thomas, 2013; Stephenson, 1953). Q methodology is particularly effective in dealing with subjective evaluations of various entities as well as in uncovering various viewpoints on policy issues (Brown & Maxwell, 2007; Gargan & Brown, 1993). The first step was a careful examination of the concourse of communication (Stephenson, 1978) about the Assessment Movement. As indicated previously, this included a lengthy discussion on the American Political Science Association's (APSA) department chairs' online discussion list. This was followed by an examination of articles on assessment and the extensive commentary that often followed, particularly in the Chronicle of Higher Education, including extraction of statements from the commentary on assessment cited earlier in this article. In addition, the occasional papers and research reports available from the National Institute for Learning Outcomes Assessment (NILOA) were reviewed, as well as material available from the Association for the Assessment of Learning in Higher Education (AALHE), the Association of American Colleges and Universities (AAC&U), and some regional assessment organizations (e.g., NEEAN, the New England Educational Assessment Network). This was supplemented by interviews with several persons who were actively involved in the assessment process or were known to be critical of it. In the course of this extensive review, the commentary became increasingly redundant, indicating that the concourse on this subject had been adequately covered. The result was over three hundred statements broadly representative of the wide variety of viewpoints on assessment and the Assessment Movement. These ranged from statements about the purpose and consequences of assessment to concerns about the validity of the instruments used in assessment, the role of faculty, and the sources of the pressure for more assessment. Neither the concourse nor the literature suggested a theoretical framework for the selection of the final Q sample. There was a definite tendency for some statements to be positive and supportive of assessment, some to be negative, and some to be ambivalent or neutral. There also were some that were very descriptive of the process, while others were hostile because of the pressure put on faculty to change or because assessment added more work. Accordingly, an effort was made to ensure, to the extent possible, a sample of statements that reflected the diversity contained in the concourse. As a result, 50 statements were eventually selected to comprise the Q sample. The entire statement sample is contained in the Appendix; the following are some illustrative examples of the range of subjectivity displayed within the concourse:

2. The assessment movement in higher education has been driven as much, if not more, by outside political forces determined to exercise greater control over education, than it has been by persons legitimately interested in advancing the quality of learning.

4. To faculty, it usually seems to be a burdensome, pointless extra, grafted onto an already heavy workload.
However, an assessment process embedded in work routines that can be implemented in a way that minimizes extra work might be more acceptable.
14. Assessment of student learning is about inquiry and discovery. It is a systematic, intellectually stimulating way of asking questions about educational goals so that learning can be improved at the level of the student, the course, the program, or the institution.

20. Look at all of the careers made (VP of Assessment, Assessment Czar, whatever) by this industry. Observe all of the vendors hawking their "assessment software" and other "assessment snake oil remedies" for the assessment "problem." Assessment, good or bad, will never go away. There is far too much money to be made and careers to be built by it.
26. When institutions narrow their educational vision to a discrete set of skills and outcomes that can be measured at the end of an undergraduate assembly line, they often do so at the expense of their own broader vision of what they try to cultivate in students. What we measure dictates what we teach and what we do not teach.
29. Designed appropriately, a well-organized sequence of outcomes assessments can provide information vital to tracking student learning over time, and potentially increasing institutional effectiveness.
The fifty statements comprising the final Q sample were provided to respondents, who were asked to rank the statements from +5 (most characteristic of their beliefs about assessment) to −5 (most uncharacteristic) along the following opinion continuum:

Most uncharacteristic (−5) . . . Neutral (0) . . . Most characteristic (+5)

The Q sample, with instructions, was initially sent to all of the individuals who had participated in the original debate on the political science department chairs' discussion list. Added to this were other professional academics who had written or commented on articles in the Chronicle of Higher Education or who were members of national organizations involved in the assessment process. In addition, each author invited as participants colleagues at their respective institutions who were involved in the assessment process, either in some official capacity or as individuals who had at least been affected in some way by a current or developing assessment process. A total of 40 persons eventually responded and, as indicated in Table 1, this P-set reflects a diverse range of respondents in terms of age, gender, faculty status, administrative position, and involvement in the assessment process.
Results: Dueling Narratives on Outcomes Assessment in Higher Education
The data were analyzed in customary Q-technique fashion: All Q-sorts were intercorrelated and the resulting 40 × 40 correlation matrix was factor analyzed, initially by both centroid factor analysis and principal components analysis (Schmolck & Atkinson, 2012). The decision to settle upon a simple two-factor solution produced by principal components analysis (PCA) with a varimax rotation was not difficult: The two factors were defined by the purely significant loadings of 32 of the 40 participants, and all of the remaining eight respondents had significant loadings on both factors. Still, the final two factors were effectively orthogonal, being correlated at only −.11. Factor loadings are presented in Table 1; loadings of .36 or above (two-place decimals have been removed) are significant (p < .01). The decision to retain these two factors and to use PCA with a varimax rotation (over, say, centroid factor analysis and judgmental rotation) was based on the clean, readily interpretable character of the factor structure produced. The eight respondents who had significant loadings on both factors are, in Q-methodology terms, "confounded" or "mixed," meaning that they share some of the sentiments of both factors. Further interviews with these subjects might well shed light on why they are confounded, but it is common practice in Q methodology to focus attention on those views that are shown to be shared.
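For readers who want the mechanics of this pipeline spelled out, the following is a minimal Python sketch (ours, not the study's actual code; analyses of this kind are normally run in PQMethod-style packages such as the Schmolck & Atkinson software cited above, and the `sorts` array below is random placeholder data). It correlates the Q-sorts, extracts and varimax-rotates two principal components, and shows where the .36 significance threshold comes from (2.58/sqrt(50) ≈ .36 for p < .01 with 50 statements).

    import numpy as np

    def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
        # Plain varimax rotation via the standard SVD iteration.
        p, k = loadings.shape
        R = np.eye(k)
        var = 0.0
        for _ in range(max_iter):
            L = loadings @ R
            u, s, vt = np.linalg.svd(
                loadings.T @ (L**3 - (gamma / p) * L @ np.diag((L**2).sum(axis=0)))
            )
            R = u @ vt
            new_var = s.sum()
            if new_var < var * (1 + tol):
                break
            var = new_var
        return loadings @ R

    # Random placeholder data: 40 participants x 50 statements, ranks -5..+5.
    sorts = np.random.randint(-5, 6, size=(40, 50))

    corr = np.corrcoef(sorts)                # 40 x 40 person-by-person correlations
    eigvals, eigvecs = np.linalg.eigh(corr)  # PCA via eigendecomposition
    top2 = np.argsort(eigvals)[::-1][:2]     # keep the two largest components
    loadings = eigvecs[:, top2] * np.sqrt(eigvals[top2])
    rotated = varimax(loadings)              # the two-factor varimax solution

    # Loadings of .36 or above are significant at p < .01: 2.58 / sqrt(50) ~ .36
    threshold = 2.58 / np.sqrt(sorts.shape[1])
    print(np.round(rotated, 2))
    print(f"significance threshold: {threshold:.2f}")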
Factor A: Anti-Assessment Stalwarts
Factor A comprises 17 of the 40 sorters, all faculty members. Five of the sorters also serve as department chairs but see their role as primarily that of a faculty member. All but two of the 17 sorters are male, and 15 teach in either the humanities or the social sciences. Those with high factor loadings on Factor A are steadfast in their critique of the assessment process and find it a burdensome, unnecessary intrusion into their academic life. They are hostile to assessment on a number of different fronts: they see assessment as having been forced upon them by entities outside of academia; they regard the process as inconsistent with the mission of higher education (and, worse yet, as having a deleterious impact on it); they hold that assessment cannot really measure the value of the teacher-student dynamic; they believe the assessment movement neither values faculty nor holds students accountable for their part in the learning process; and they find scant evidence that assessment has led to any meaningful, positive changes in the educational enterprise. In short, Factor A is not only skeptical of assessment, but downright cynical.
These themes are seen clearly by examining those statements most agreed with by Factor A. The following statement received the highest score and points to their belief in the futility of the assessment process to measure the learning that takes place when dealing with higher-level thinking. (The respective factor scores for A and B are given in parentheses following each statement.)

36. The problem is that what is truly learned in college often does not come to fruition until years later; long after the "assessment process" has been completed. (+5, +2)

Factor A is also troubled by the perception that assessment is driven by forces external to higher education, either by those wishing to apply an economic model to college teaching or by those who want to hold faculty "accountable" in various ways to satisfy other constituencies.
The following statements show a concern among Factor A sorters for the role of external demands:

38. I do have a problem when assessment becomes just another hoop we have to jump through to please an outside constituency. More and more, that is what seems to be driving outcomes assessment. (+5, 0)

2. The assessment movement in higher education has been driven as much, if not more, by outside political forces determined to exercise greater control over education, than it has been by persons legitimately interested in advancing the quality of learning. (+5, 0)

49. Although assessment is data driven, it is being driven by those who seek to know the cost and benefit of everything, but know nothing of the values of things taught and accomplished. (+3, −3)

6. Those who are afraid of rubrics and assessment instruments remind me of Luddites who refuse to perceive reality. If we are to rely on our time-tested bold statements that "we are a quality institution," without any evidence, then we deserve to be judged by outside constituencies. (−5, −2)

35. We and the accreditation agencies are on the same side-we are both about student learning. They want us to prove that we are doing what we claim we are doing. We want them to leave us alone-but they won't until we devise valid and reliable measures that demonstrate that learning is taking place. (−3, 0)

In addition, Factor A believes that the narrow focus of measuring learning outcomes is inconsistent with the mission of higher education. For these sorters, a college education is more than the sum of its constituent parts, and it is that larger picture that eludes the grasp of the simplistic application of quantitative measures. Faculty on Factor A thus seem to share sentiments with Laurie Fendrich (2007), whose lamentation over the extent to which assessment efforts have been captured and denigrated by faceless bureaucrats appears as an opening epigram to this article. The following statements reinforce this theme:

26. When institutions narrow their educational vision to a discrete set of skills and outcomes that can be measured at the end of an undergraduate assembly line, they often do so at the expense of their own broader vision of what they try to cultivate in students. What we measure dictates what we teach and what we do not teach. (+4, 0)

32. I have yet to see an assessment protocol that truly measures what we claim to be doing. Where are the measures of the ability to solve major societal problems? The measures of leadership ability? The measures of the potential to become a good citizen?
From the viewpoint of Factor A, not only does assessment miss the integrative nature of higher education; when put into practice, assessment tools hamper the educational mission in at least two ways: first, by adopting the approach of "No Child Left Behind," professors will be led by assessment mandates to teach concepts and ideas whose mastery by students may be easily quantifiable, but will not be measures of a good education; and, second, the bureaucratic infrastructure that assessment has demanded and will continue to demand will divert scarce resources from the classroom, with the attendant negative consequences for student learning:

10. There is an inevitable vicious circle here where much of what we teach cannot be measured so we establish outcomes that can be measured which forces us to teach what we really do not think is what we should be teaching in the first place. (+3, −2)

34. The assessment movement provides an ideological smokescreen acting as a distraction from the real problems of U.S. higher education that relate to issues of inequality, cost, and the out of control expansion of the number of administrators. (+3, −4)

20. Look at all of the careers made (VP of Assessment, Assessment Czar, whatever) by this industry. Observe all of the vendors hawking their "assessment software" and other "assessment snake oil remedies" for the assessment "problem." Assessment, good or bad, will never go away. There is far too much money to be made and careers to be built by it. (+3, −3)

47. The assessment movement offers a fundamental change of our higher education system: learning is now non-negotiable and the claims for learning are clear. This is a profound change and stands to reverse the erosion of quality in higher education. (−5, −2)

Factor A also does not believe that reliable evidence has been produced showing that assessment practices have resulted in improved student learning. Steven Hales's (2013) provocative statement-"How can we be sure whether outcomes assessment really works as advertised or has the accuracy of a Soviet agricultural report?" (p. 2)-surely would resonate with Factor A. Statement 28 was given a high positive score by Factor A:

28. I've almost given up saying this, but good grief, people, how about some evidence! Has there been a single, carefully controlled study that shows assessment produces better-educated graduates? (+4, −1)

Statements in the Q sample that were critical of faculty or suspicious of their motives were, not surprisingly, rejected by Factor A. At the same time, items that questioned why students were not held to more account for their responsibility in their own education resonated positively with this perspective.

42. What happened to the respect for faculty; the belief that they actually know what they are doing? (+3, −3)

41. Faculty resist assessment because they resist everything. They are the most immovable objects on the planet. (−5, −5)

21. We need some type of assessment because too many professors and administrators are failing to hold students accountable, but are letting them slide through college without learning much. (−3, −4)

30. No assessment vehicle I have ever encountered measures the extent to which students are often unwilling to do the work of getting an education. Refining teaching methods puts the onus on faculty, so does the assessment buzzword of the day: engagement. (+4, −2)

Factor A simply does not see assessment as a necessary endeavor, nor as one that would be helpful to them as educators.
They systematically reject a series of statements that trumpet assessment as a way for faculty to think more deeply about their courses, or as a means to measure the quality and significance of what they do:

16. OK, I admit it: I like assessment. I like it because it encourages faculty members to think more carefully about what they do, how they do it, and why they do it that way. (−4, +2)

14. Assessment of student learning is about inquiry and discovery. It is a systematic, intellectually stimulating way of asking questions about educational goals so that learning can be improved at the level of the student, the course, the program, or the institution. (−4, +4)

1. The idea that we ought to be exempt from assessment, from demonstrating the value of our work, smacks of privilege, as though we think everyone ought to dutifully support us without asking us to be accountable to them. (−3, +1)

9. It's not radical doubt about the role or effectiveness of grading as a measuring tool for learning outcomes that motivates assessment. It's just the desire to provide a second-level check on the effectiveness of such tools. (−3, 0)

50. What can be so wrong about asking someone to systematically and empirically demonstrate that they actually do accomplish their stated goals and objectives? (−3, +2)

Finally, Factor A strongly rejects the idea that assessment is a form of scholarship and that faculty should be held to account for having adequately performed assessment during tenure and promotion evaluations:

39. Assessment should be treated as a form of scholarship that is closely linked to teaching and learning, and it should play a role in the tenure and promotion processes. (−4, +2)

In sum, Factor A thoroughly rejects any useful purpose for assessment and contends that the process is fatally flawed, promoted by external forces who do not understand higher education or, worse yet, who actively seek to dramatically change the nature of academia to no good end. Clearly, Factor A participants do not want much, if anything, to do with assessment; however, they exist in an environment in which demands to participate in the process are unrelenting and expanding. This contradiction must create a great deal of conflict for these faculty members as they seek to balance the opportunities and obligations at the core of their professional lives.
Factor B: Defenders of the Faith
Factor B comprises 15 respondents from all three major areas of academia: the humanities, the social sciences, and the natural sciences. In contrast to Factor A, which is made up entirely of faculty members, seven sorters on Factor B are either full-time administrators or faculty who have some administrative duties and consider themselves to be performing dual roles. Also, Factor B is nearly evenly split along gender lines, with defining sorts provided by eight males and seven females. Factor B defends assessment and promotes the idea that outcomes-based education can be helpful as long as the college or university controls the process. Factor B rejects the view that outside forces are dictating assessment as a means to control higher education, and also rejects the idea that colleges should be accountable to outside constituencies. The accountability that Factor B types believe assessment would serve is to make faculty more conscious of the pedagogical decisions they make, in terms of the impact of those decisions on student learning, program development, and, ultimately, institutional effectiveness. Further, Factor B does not believe that involvement in assessment will bring dramatic changes to higher education; rather, it will help everyone to do their jobs better.
Factor B sees assessment as a helpful process in discovering how effective the institution is in preparing students. These sorters see assessment as an organic process that professors are already engaged in, and believe that a more systematic application through well-designed assessment tools will benefit all involved. The following statements were all scored positively by Factor B:

33. The point of assessment is to ask, "What do we want our graduates as a cohort to know and be able to do by the time they graduate?" Are we getting them there? If not, where is the curriculum not serving our goals for our students and what can we do to change that? (−1, +5)

45. It's not like teaching and assessing are some separate, episodic events, but rather they are, or should be, ongoing, interrelated activities focused on providing guidance for improvement. (−1, +5)

18. Executed well, assessment encourages faculty members to articulate their course and assignment goals more clearly and to develop sound rubrics. That helps them think more broadly about overarching program goals, and how to measure students' success in reaching those goals. (−1, +5)

23. Neither the assessment tools of the professor nor of the external assessor are perfectly reliable. Despite that, both can carry valuable information, if their assessments are well designed. (0, +4)

29. Designed appropriately, a well-organized sequence of outcomes assessments can provide information vital to tracking student learning over time, and potentially increasing institutional effectiveness. (0, +3)

Factor B sees assessment as a means by which faculty can think more deeply about their courses and articulate more clearly, for students and themselves, what the learning objectives for a given course might be. It is a process of reflection and careful consideration that supporters believe will lead to more meaningful and productive pedagogical decisions by faculty.

48. It is incumbent on academics to decide for themselves how to assess whether their students are learning, less to satisfy external calls for accountability than because it is the right thing for academics, as professionals who care about their students, to do. (+2, +4)

14. Assessment of student learning is about inquiry and discovery. It is a systematic, intellectually stimulating way of asking questions about educational goals so that learning can be improved at the level of the student, the course, the program, or the institution. (−4, +4)

3. If assessment accomplishes nothing else than to force faculty to sit down and discuss what it is they are trying to do and whether or not they are accomplishing that, then it can be considered a success. (−2, +3)

Factor B seems sensitive to the critique made by Factor A types that education is more than the sum of various course objectives but, unlike Factor A, responds in a way that still embraces assessment.
8. If we want to demonstrate the degree to which a college experience is more than just a collection of gains on disparate outcomes-whether these outcomes are somehow connected or entirely independent of each other-then we have to expand our approach to include process as well as product. (−2, +3)

One of the major concerns of Factor A-the degree to which assessment is being driven by forces outside the institutions-is not seen as an issue for Factor B. These participants reject the idea that assessment is a vehicle to bully higher education, whether perpetrated by vote-seeking politicians or by corporate types attempting to apply a business model that is inappropriate for higher education.
22. Outcomes assessment is not really about gathering knowledge or improving quality, but to bully higher education. From that perspective, it's working pretty well. (+2, −5)

34. The assessment movement provides an ideological smokescreen acting as a distraction from the real problems of U.S. higher education that relate to issues of inequality, cost, and the out of control expansion of the number of administrators. (+3, −4)

19. The history of the assessment movement is that it originates with public scrutiny over the cost of higher education. In a way, we have done this to ourselves. Rather than confront the cost issue, our accreditors and professional organizations decided to demonstrate that the cost was worth it, by proving how much students learn. (+1, −4)

49. Although assessment is data driven, it is being driven by those who seek to know the cost and benefit of everything, but know nothing of the values of things taught and accomplished. (+3, −3)

15. It is a wonder anyone learned anything in the days before we had a formal metric. Assessment is done not for students, but for administrators. Not for faculty, but to faculty. Not for program improvement, but for compliance monitoring. (+1, −3)

However, despite Factor B's rejection of the nefarious motives attributed to external agencies, they do support the idea that assessment needs to remain in the hands of those most knowledgeable, presumably the institution itself.

11. Many if not all of us would agree from our own experiences that assessment, when used properly, can move an educational process forward in positive ways. But what is appropriate and what is proper, and who will decide this, are the important questions. (+1, +4)

Given their generally positive view of assessment, it should come as no surprise that Factor B types do not resist being involved in the process. And while they see assessment as necessary and proper, they do not believe that the entire landscape of higher education will be dramatically changed by its use.

40. I've given up fighting this thing. I just do as minimal a job as allowed and then hope even that time is not wasted. (−1, −5)

44. What goes on in the classroom on a daily basis does not "count." What "counts" is "documented" learning, that is, the product-as-educational-widget. We would do well to push back as hard as we can so that the assessment movement does not gobble up and spit out higher education. (+2, −4)

13. It's easy to imagine a scenario in which the educational structure that currently produces majors and minors in content areas is simply replaced by one that produces majors and minors in some newly chosen learning outcomes. (−1, −3)

Finally, Factor B is fundamentally at odds with Factor A's view that assessment has led to an overload of administrators tasked with carrying out the process. However, Factor B does share Factor A's view that faculty are not intransigent and predisposed to reject assessment simply because they do not want to be held accountable.

20. Look at all of the careers made (VP of Assessment, Assessment Czar, whatever) by this industry. Observe all of the vendors hawking their "assessment software" and other "assessment snake oil remedies" for the assessment "problem." Assessment, good or bad, will never go away. There is far too much money to be made and careers to be built by it. (+3, −3)

41. Faculty resist assessment because they resist everything. They are the most immovable objects on the planet. (−5, −5)

21. We need some type of assessment because too many professors and administrators are failing to hold students accountable, but are letting them slide through college without learning much. (−3, −4)

Factor B offers a strong endorsement of assessment practices from an educational vantage point, believing that there is intrinsic merit to participation in assessment and that the educational product will benefit from a careful examination of what it is that faculty do in the classroom. Factor B does not see assessment as being driven by suspect forces outside the university, and firmly believes that faculty are not resisting assessment because they reject being scrutinized or are temperamentally predisposed to resist any encroachment on their authority. This suggests that Factor B believes faculty can and will "buy in" to assessment when they are sufficiently educated as to its benefits. Factor A shares the view that faculty are not resistant by nature and are not simply allowing students to pass their courses without demonstrating competency. However, as Wendy Weiner (2009) has written, "If the faculty does not own it (assessment), it is not going to happen" (p. 28). It would seem that, given the viewpoints uncovered here, Factor A types are not at all ready to "buy in" to assessment, and this at least raises the possibility that the kinds of benefits seen by supporters of assessment may never be realized.
Concluding Discussion: Is "All-In" Outcomes Assessment Attainable?
Before subjecting these findings to appraisal for their significance and implications, we are obliged to issue the customary disclaimers that tend to accompany Q-methodological studies. Foremost in this respect is the reminder that P-sets (or person samples) in Q studies are typically small and nonrandomly composed compared with large-sample surveys. In the case at hand, 40 respondents (though well within the usual range for Q-based inquiries) cannot be and are not taken as grounds for estimating the larger distribution of opinion on outcomes assessment in contemporary higher education within the United States. However, the faculty-centric nature of Factor A is worth noting. All those who loaded on Factor A are faculty members or department chairs who see their principal role as that of a faculty member. Clearly, not all faculty members are Factor A types, as there are faculty members with significant loadings on Factor B. The administrators in this study loaded on Factor B, which again is not to say that all administrators are Factor B types, but both of these patterns are suggestive. At the same time, it is worth noting that the previously cited empirical studies on opinions toward assessment include articles based on as few respondents as three (Marrs, 2009), 12 (Loughman & Thomson, 2006), and 45 (Kinzie, 2010). More compelling in this connection, however, is the reminder that the "generalizations" sought from Q studies are focused on discovering how stakeholders think about a given matter rather than how many of a certain demographic identity subscribe to a given view (Thomas & Baas, 1992). Seen in this light, the pair of perspectives presented above is perhaps best viewed as constituting a preliminary calibration of two views in their subjective, narrative character that may or may not have been anticipated prior to undertaking this study, though the former possibility does in retrospect seem more likely than the latter. A second disclaimer pertinent to Q-based research is possibly (though doubtfully, for reasons outlined above) in order here: Specifically, it is possible that our sampling of the concourse of subjective communicability surrounding outcomes assessment was somehow deficient in turning up items of a more nuanced or ambivalent character.
Setting this possibility aside for the moment, it still seems doubtful that faculty aligning with Factor A are unaware that Factor B exists. By the same token, administrators with heavy duties in the assessment area are likely well aware of faculty colleagues who fit the Factor A profile to a T. However, there may be some surprise that other, more nuanced factors failed to emerge. Perhaps Factor B types have comforted themselves in the belief that the strident opposition to assessment voiced in Factor A is limited to a small core of faculty who, if adequately educated on the matter, would join their ranks as suggested by the Astin and Antonio (2012) treatment of "games" played by faculty. Such a line of thought is plausible-indeed, reasonable-if many faculty grudgingly comply with assessment demands without voicing their opposition; the limits of the present analysis notwithstanding, the data here at least suggest that a Factor A view may be fairly prevalent among teaching faculty. But if the empirical documentation of "two cultures" rivaling C. P. Snow's (1959) famed treatise on epistemological bifurcation in the Academy occasions no great surprise when attitudes toward assessment are examined, does this automatically portend "bad news" for the dueling narratives and for the inclusive community of higher education of which they are a part? Are we forced to follow the implication of Wendy Weiner's (2009) judgment that the creation of plentiful and productive cultures of assessment exists only in the form of pipe dreams absent adequate faculty buy-in? On the one hand, such a reading seems inescapable if the foregoing findings are shared with a breadth commensurate to their depth. For readers with deep reservations with the assessment movement and its effects on faculty morale, the discovery of Factor A would seem to remove any doubts about the depth and breadth of such concerns among one's colleagues. Likewise, those holding more benign views toward assessment and its advocates would likely be gratified by the appearance of Factor B and the subjective validation it affords for like-minded professionals on a matter of considerable controversy. But if feelings of subjective validation accompany the discovery of kindred spirits on a contentious issue, what are the likely effects of encountering incontrovertible evidence of equally strong believers in the counter-attitudinal viewpoint? How, in other words, would Factor A be expected to react to the operant character of Factor B and vice versa?
Given their strong differences, the reflexive response to the latter question about the ontological status of the alternative viewpoint, for either Factor A or Factor B, is, it seems safe to say, likely to be one of affective consternation. At a minimum, we might hypothesize, those readers having affinities with Factor B would be expected to recognize with dislike the Anti-Assessment Stalwarts of Factor A as the veritable embodiment of the dug-in intransigence to which Factor B assigns blame for the failure to make adequate progress in addressing and aggressively tackling assessment tasks. Similarly, it seems logical to expect that readers inclined to agree with Factor A, while finding gratification borne of confirmation in the existence of fellow believers, would inevitably find themselves taking strong exception to the perspective and the proponents of Factor B, despite the fact that its existence was expected. Like those on Factor B confronted with Factor A, the latter is likely to elicit disdain, however implicit, due to its simple, undeniable failure "to get it" in comprehending, let alone appreciating, the viewpoint it denigrates.
At the risk of appearing devoid of common sense, we are not ready to endorse this particular form of conventional wisdom. Indeed, we would like to advance as a possibility for serious scholarly consideration the counter-proposition that, in circumstances defined subjectively and structurally by "dueling narratives" (a pervasive condition of our polarized politics and culture in the contemporary United States), expectations in the fullest sense are not dashed by the presence of the "other" party to the duel; rather, expectations require the presence of another that is susceptible to blame and demonization in order to keep one's own view energized and viable. And if this is indeed the case, then the aggregate consequence of these dynamics is the persistence of the stand-off at the expense of substantive change.
At one level, what we are proposing here rests on a simple yet preliminary reiteration of the oft-heard adage that what we see is a function of where we sit. In other words, one's viewpoint is often traceable to one's vantage point, and we have drawn attention to the possible applicability of this postulate to the data at hand by identifying the differing composition of Factors A and B in terms of professional roles in relation to the assessment enterprise. Factor A, it will be recalled, was defined entirely by teaching faculty (including five department chairs who elected to describe their principal duties as those of faculty members rather than administrators). Factor B, in contrast, contained a far higher percentage of academic administrators among its ranks, thereby lending circumstantial evidence to the notion that viewpoint and vantage point are potentially indistinguishable in settings defined by subjective polarization.
The broader, more contentious, point we are proposing here-that each side locked in a dueling-narratives dispute lacks sufficient incentives to alter its behavior, including its perspective on the opposition/enemy as the principal locus of conflict-can be illustrated with a brief reference to a comparable dynamic from contemporary American politics. It is no secret that partisan polarization has now reached excessive proportions at the federal level in U.S. politics: Congressional Republicans have "succeeded" in almost every instance in blocking legislative initiatives originating from the Obama White House since the 2010 midterm elections produced a significant partisan majority for the GOP in the House, followed in 2014 by the same change in party control in the Senate. Symptomatic of the ensuing partisan acrimony, then-Speaker of the House John Boehner filed a suit against the President for an alleged violation by Obama of his oath of office. At the same time, more zealous critics of Obama within Boehner's party launched a campaign to instigate impeachment proceedings against the president. In what at first blush appears blatantly paradoxical, both parties utilize and exploit such threats as dramatically effective occasions to raise money through public donations. Neither party, in other words, senses any meaningful incentive to diminish the degree of partisan polarization despite the fact that, in the longer term, such polarization is fundamentally at odds with responsible (and responsive) governing. Something similar, we are suggesting, may be involved in the imbroglio defined by the dueling narratives that now accompany the assessment movement.
Take, for example, the likely subtext for Factor A: that the targets of assessment (faculty in this case) naturally resent the power that this system gives to the assessors (administrators on or off campus), who then use the results to decide the fates of the examinees. Such an arrangement wreaks havoc with systems of faculty governance that at least aspire to the democratic principle of peer review, insofar as the same "contingencies of reinforcement" (Skinner, 1969) do not apply equally to the assessors and the assessed. The recent 40th anniversary of Watergate returned to public light a powerful analogy from American political history. In this case, the parallel event occurred in the form of the Supreme Court's unanimous decision in United States v. Nixon. For his part, the former president argued, in essence, that he was not bound by the same set of rules that applied to all other citizens and therefore did not have to turn over to the Special Prosecutor audiotapes of discussions within the Oval Office pertaining to the attempted cover-up of the Watergate break-in and a host of other illicit activities tied to the White House or the Committee to Re-Elect the President. To be sure, there are obvious differences in the details, motives, and magnitude of Watergate, in its threat to democratic governance under a constitutional republic, and the feelings of faculty in the face of the Assessment Movement. At the same time, however, there are subjective parallels that bear consideration when efforts are made to understand the indignant subtext that underlies and animates Factor A. To keep that sense of indignation alive, Factor A "needs its Nixon," so to speak, and, as it happens, this is conveniently marshaled in the form of Factor B.
If such speculation holds water, are we forced to conclude that a genuinely consensual, "all-in" approach to assessment is an unlikely prospect on campuses containing prominent clusters of both Factor A and Factor B types? The answer, we believe, is that "it depends." It depends, first of all, on decently clear communication between parties to the assessment process, and in the quest to make progress on this front, the example put forward by Gargan and Brown (1993) warrants consideration and emulation in the case at hand. Titled "What is to be Done?" the Gargan-Brown project invites local policymakers along with those having a special interest in those policies to generate off-the-cuff nominees of problems warranting attention and, separately, proposed solutions worthy of immediate agenda-item status for policymaking officials. The process, completed in the course of a single such meeting, serves as a practical and practicable demonstration of a course that could be taken to mitigate the effects of Marrs's (2009) concern that the actual meaning of "assessment" is so ambiguous that it maximizes the chances for confusion and irrelevant affect in discussions of what is assumed to be the same phenomenon.
Finally, the prospects for an all-in culture of assessment are elevated to the extent that its development and application across campus deviate from a one-size-fits-all, top-down approach in favor of an aggressively decentralized strategy that seeks, to the extent possible, to neutralize the subtext of Factor A that assessment is a not-so-subtle sign of professional disrespect, whether intended or not. Toward this end, we would endorse a spirit of liberal experimentation that, ironically, aims for a return to basics. If, as common sense implies, higher education is ultimately "what we make of it," then studies designed to explore aspects of a particular institution's invisible tapestry (Baas & Thomas, 2011; Thomas & Ribich, 2007) or undergraduates' understanding of the liberal arts (Thomas, 1999) at colleges bearing that title become relevant as important first steps in an inevitably long-lived, open-ended, and multi-faceted attempt to better understand how what we do is understood by those we serve. In this spirit, it bears reiterating that contained within, but perhaps obscured by, the dueling-narrative nature of Factors A and B were eight participants (one fifth of our P-set) whose Q-sorts loaded significantly on both factors. To these individuals, the previously described self-reinforcing dynamic of the dueling narratives would quite likely, and quite literally, fall on deaf ears.
Perhaps this is the case in all or most such situations where dueling narratives render inaudible the voices of the ambivalent, thereby leaving the impression that what remains is accurately described as a dialogue of the deaf. It is worth remembering, however, that those with ambivalent attitudes are neither deaf nor dumb, and on occasions such as these they may well have voices well worth hearing. Such an eventuality is of course no guarantee of reconciliation. At the same time, it likely advances the date when we will be poised to answer the all-in question about outcomes assessment in a more genuinely affirmative manner.
Appendix: The Q Sample, With Factor Scores (Factor A, Factor B)

1. The idea that we ought to be exempt from assessment, from demonstrating the value of our work, smacks of privilege, as though we think everyone ought to dutifully support us without asking us to be accountable to them. (−3, +1)

2. The assessment movement in higher education has been driven as much, if not more, by outside political forces determined to exercise greater control over education, than it has been by persons legitimately interested in advancing the quality of learning. (+5, 0)

3. If assessment accomplishes nothing else than to force faculty to sit down and discuss what it is they are trying to do and whether or not they are accomplishing that, then it can be considered a success. (−2, +3)

4. To faculty, it usually seems to be a burdensome, pointless extra, grafted onto an already heavy workload. However, an assessment process embedded in work routines that can be implemented in a way that minimizes extra work might be more acceptable.

6. Those who are afraid of rubrics and assessment instruments remind me of Luddites who refuse to perceive reality. If we are to rely on our time-tested bold statements that "we are a quality institution," without any evidence, then we deserve to be judged by outside constituencies. (−5, −2)

8. If we want to demonstrate the degree to which a college experience is more than just a collection of gains on disparate outcomes-whether these outcomes are somehow connected or entirely independent of each other-then we have to expand our approach to include process as well as product. (−2, +3)

9. It's not radical doubt about the role or effectiveness of grading as a measuring tool for learning outcomes that motivates assessment. It's just the desire to provide a second-level check on the effectiveness of such tools. (−3, 0)

10. There is an inevitable vicious circle here where much of what we teach cannot be measured so we establish outcomes that can be measured which forces us to teach what we really do not think is what we should be teaching in the first place. (+3, −2)

11. Many if not all of us would agree from our own experiences that assessment, when used properly, can move an educational process forward in positive ways. But what is appropriate and what is proper, and who will decide this, are the important questions. (+1, +4)

12. It is striking how quickly assessment can come to be seen as part of "the management culture" rather than as a process at the heart of faculty's work and interactions with students. (0, +2)

13. It's easy to imagine a scenario in which the educational structure that currently produces majors and minors in content areas is simply replaced by one that produces majors and minors in some newly chosen learning outcomes. (−1, −3)

14. Assessment of student learning is about inquiry and discovery. It is a systematic, intellectually stimulating way of asking questions about educational goals so that learning can be improved at the level of the student, the course, the program, or the institution. (−4, +4)

15. It is a wonder anyone learned anything in the days before we had a formal metric. Assessment is done not for students, but for administrators. Not for faculty, but to faculty. Not for program improvement, but for compliance monitoring. (+1, −3)

16. OK, I admit it: I like assessment. I like it because it encourages faculty members to think more carefully about what they do, how they do it, and why they do it that way. (−4, +2)

17. Because the very act of learning occurs in a state of perpetual social interaction, taking stock of the degree to which we foster a robust learning process is at least as important as taking snapshots of learning outcomes if we hope to gather information that helps us improve. (+1, +3)

18. Executed well, assessment encourages faculty members to articulate their course and assignment goals more clearly and to develop sound rubrics. That helps them think more broadly about overarching program goals, and how to measure students' success in reaching those goals. (−1, +5)

19. The history of the assessment movement is that it originates with public scrutiny over the cost of higher education. In a way, we have done this to ourselves. Rather than confront the cost issue, our accreditors and professional organizations decided to demonstrate that the cost was worth it, by proving how much students learn. (+1, −4)

20. Look at all of the careers made (VP of Assessment, Assessment Czar, whatever) by this industry. Observe all of the vendors hawking their "assessment software" and other "assessment snake oil remedies" for the assessment "problem." Assessment, good or bad, will never go away. There is far too much money to be made and careers to be built by it. (+3, −3)

21. We need some type of assessment because too many professors and administrators are failing to hold students accountable, but are letting them slide through college without learning much. (−3, −4)

22. Outcomes assessment is not really about gathering knowledge or improving quality, but to bully higher education. From that perspective, it's working pretty well. (+2, −5)

23. Neither the assessment tools of the professor nor of the external assessor are perfectly reliable. Despite that, both can carry valuable information, if their assessments are well designed. (0, +4)

24. We understand from other areas that the assumption that the simple presence of data invariably leads to improved outcomes and performance, and that those who are presented information under data-driven improvement schemes will know how best to make sense of it and transform their practice, is simply not true. (+2, −1)

25. We live in an age when parents and students are not content to accept our assurances that we are doing a good job educating students. This expectation is especially pertinent because of the increasingly high cost of education.

26. When institutions narrow their educational vision to a discrete set of skills and outcomes that can be measured at the end of an undergraduate assembly line, they often do so at the expense of their own broader vision of what they try to cultivate in students. What we measure dictates what we teach and what we do not teach. (+4, 0)

27. The view that the status quo is the only correct model of teaching and learning is the kind of hubris that makes higher education appear haughty and conceited, rather than as a vehicle for growth and opportunity. (−2, −2)

28. I've almost given up saying this, but good grief, people, how about some evidence! Has there been a single, carefully controlled study that shows assessment produces better-educated graduates? (+4, −1)

29. Designed appropriately, a well-organized sequence of outcomes assessments can provide information vital to tracking student learning over time, and potentially increasing institutional effectiveness. (0, +3)

30. No assessment vehicle I have ever encountered measures the extent to which students are often unwilling to do the work of getting an education. Refining teaching methods puts the onus on faculty, so does the assessment buzzword of the day: engagement. (+4, −2)

31. We do not need mandates or government pressure to explore ways to improve teaching and learning, our institution is interested in getting it right and they have embraced the use of data to diagnose what is and is not working and then changing our practices.

32. I have yet to see an assessment protocol that truly measures what we claim to be doing. Where are the measures of the ability to solve major societal problems? The measures of leadership ability? The measures of the potential to become a good citizen?

33. The point of assessment is to ask, "What do we want our graduates as a cohort to know and be able to do by the time they graduate?" Are we getting them there? If not, where is the curriculum not serving our goals for our students and what can we do to change that? (−1, +5)

34. The assessment movement provides an ideological smokescreen acting as a distraction from the real problems of U.S. higher education that relate to issues of inequality, cost, and the out of control expansion of the number of administrators. (+3, −4)

35. We and the accreditation agencies are on the same side-we are both about student learning. They want us to prove that we are doing what we claim we are doing. We want them to leave us alone-but they won't until we devise valid and reliable measures that demonstrate that learning is taking place. (−3, 0)

36. The problem is that what is truly learned in college often does not come to fruition until years later; long after the "assessment process" has been completed. (+5, +2)

37. I do not understand all the resistance to assessment. It's just a bit more systematic than what we have been doing in the past. That cannot be all bad. (−4, +1)

38. I do have a problem when assessment becomes just another hoop we have to jump through to please an outside constituency. More and more, that is what seems to be driving outcomes assessment. (+5, 0)

39. Assessment should be treated as a form of scholarship that is closely linked to teaching and learning, and it should play a role in the tenure and promotion processes. (−4, +2)

40. I've given up fighting this thing. I just do as minimal a job as allowed and then hope even that time is not wasted. (−1, −5)

41. Faculty resist assessment because they resist everything. They are the most immovable objects on the planet. (−5, −5)

42. What happened to the respect for faculty; the belief that they actually know what they are doing? (+3, −3)

43. Have not we really skipped a step in this process? Is everyone really on the same page when it comes to the purpose of education? And do not we need to resolve this first before we attack assessment? (+1, −1)

44. What goes on in the classroom on a daily basis does not "count." What "counts" is "documented" learning, that is, the product-as-educational-widget. We would do well to push back as hard as we can so that the assessment movement does not gobble up and spit out higher education. (+2, −4)

45. It's not like teaching and assessing are some separate, episodic events, but rather they are, or should be, ongoing, interrelated activities focused on providing guidance for improvement. (−1, +5)

46. No Child Left Behind has shown us the effectiveness of assessment taken away from the school teachers. We now have a generation that is good at taking standardized tests, but cannot do basic arithmetic or write a coherent sentence. (+2, +1)

47. The assessment movement offers a fundamental change of our higher education system: learning is now non-negotiable and the claims for learning are clear. This is a profound change and stands to reverse the erosion of quality in higher education. (−5, −2)

48. It is incumbent on academics to decide for themselves how to assess whether their students are learning, less to satisfy external calls for accountability than because it is the right thing for academics, as professionals who care about their students, to do. (+2, +4)

49. Although assessment is data driven, it is being driven by those who seek to know the cost and benefit of everything, but know nothing of the values of things taught and accomplished. (+3, −3)

50. What can be so wrong about asking someone to systematically and empirically demonstrate that they actually do accomplish their stated goals and objectives? (−3, +2)
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research and/or authorship of this article.
Correlation energy functional and potential from time-dependent exact-exchange theory
In this work we have studied a new functional for the correlation energy obtained from the exact-exchange (EXX) approximation within time-dependent density functional theory (TDDFT). Correlation energies have been calculated for a number of different atoms, showing excellent agreement with results from more sophisticated methods. These results lose little accuracy when the EXX kernel is approximated by its static value, a procedure which enormously simplifies the calculations. The correlation potential, obtained by taking the functional derivative with respect to the density, turns out to be remarkably accurate for all atoms studied. This potential has been used to calculate ionization potentials, static polarizabilities, and van der Waals coefficients, with results in close agreement with experiment.
I. INTRODUCTION
Time-dependent density functional theory (TDDFT) provides a promising and rigorous framework for treating interacting many-electron systems at a reasonable computational cost. Although the overall aim of TDDFT is to describe physical phenomena associated with excited states, strong connections between the latter and ground-state properties make it possible to obtain an improved description of the ground state through the use of TDDFT.
In many interesting cases, it is sufficient to treat systems under the influence of external perturbations weak enough to allow for a description in terms of linear response. Within this realm, the basic quantities of TDDFT are the ground-state exchange-correlation (XC) potential and the corresponding XC kernel (f_xc). 1 In previous work we have studied these quantities at different levels of approximation. The XC potential has been calculated for atoms at the level of the GW approximation 2 and the XC kernel has been presented within the exact-exchange (EXX) approximation. [3][4][5] In all these previous publications the necessary formulas have been derived from the variational formulation of many-body perturbation theory. 6,7 The virtue of this approach is that the obtained results are guaranteed to obey many conservation laws and sum rules, like, e.g., the f-sum rule, of importance to the calculated optical spectra. In the present work we have deviated from this path and instead followed an approach originally suggested by Peuckert. 8 Given some approximation for the density-density response function of a many-electron system, one can obtain an expression for the total ground-state energy by using the Hellmann-Feynman theorem applied to the strength of the inter-particle Coulomb interaction. 9,10 And, by means of TDDFT, from any approximation for the XC potential and the corresponding XC kernel one can obtain an approximation to the density response function of the system. The XC part of the resulting total energy can then be differentiated once with respect to the density to yield a new approximation for the XC potential, and then twice to yield a new approximation for the XC kernel. From these results we can obtain a new total energy, which again can be differentiated to obtain a new potential and a new kernel, and so on. Of course, in practice the resulting expressions quickly become unmanageable and the proposed iterations have to be limited to one or two steps.
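A minimal schematic of one such iteration, in our notation (this is our own summary of the procedure just described, not an equation taken from Ref. 8), reads:

    % One Peuckert-style iteration (schematic, unnumbered):
    % from a given pair (v_xc, f_xc), build the response function,
    % obtain the energy by coupling-constant integration, and
    % differentiate to get an improved potential and kernel.
    \begin{align*}
      \chi &= \chi_s + \chi_s\,[v + f_{xc}]\,\chi
          && \text{(TDDFT Dyson equation)}\\
      \chi &\;\longrightarrow\; E_{xc}[n]
          && \text{(Hellmann--Feynman $\lambda$-integration)}\\
      v_{xc}' &= \frac{\delta E_{xc}}{\delta n},\qquad
      f_{xc}' = \frac{\delta v_{xc}'}{\delta n}
          && \text{(improved potential and kernel)}
    \end{align*}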
In previous work we have seen that the total energy obtained via the Hellmann-Feynman theorem applied to the EXX approximation is very accurate, giving errors of the order of 5% in the resulting correlation energies. 3 These results inspired us to believe that a differentiation of the corresponding expression for the XC energy with respect to the density might give rise to a very accurate XC potential and XC kernel. And this is, indeed, what we have found in the present work, at least as far as the potential is concerned, and within the approximations we have been forced to make in order to obtain tractable expressions.
In the present context, the XC kernel of the EXX approximation (EXXA) has the convenient property of being linear in the Coulomb interaction, allowing us to carry out the integration over the strength of the Coulomb interaction analytically. The result is a closed expression for the XC energy. This expression explicitly contains the EXX kernel f_x, which gives rise to numerical difficulties in the later process of differentiating the XC energy with respect to the density. Motivated by the fact that the EXX kernel for He is independent of the density, we have neglected this density dependence for all atoms. The validity of this approximation must, of course, be verified independently but, encouraged by our excellent results, we have decided to leave this test to a future publication.
We have also employed an additional approximation which we believe to be of even less consequence. The fact that the EXX kernel is frequency dependent for all systems except two-electron ones leads to much longer computational times. We have, however, seen that total energies are very insensitive to this frequency dependence and we thus recommend neglecting it. The frequency dependence will, of course, also affect the calculation of the XC potential, except in the case of He. We have here assumed that the potentials of the heavier atoms are also relatively insensitive to this frequency dependence, but we leave the verification of this assumption to a future publication as well.
The XC potentials that we obtain from the first iteration of the Peuckert procedure, starting from the EXXA, turn out to be much better than any approximate XC potentials previously obtained, and they are actually very close to the exact ones 11 where these are known. For a number of spherical atoms, we have used these new potentials to calculate total energies, static polarizabilities, and van der Waals coefficients using the expression for the XC kernel obtained within the EXXA. In all cases we find excellent agreement with experiment. We thus conclude that we now have an affordable and well-defined way of obtaining accurate results for ground-state properties and low-lying excitations of many-electron systems.
II. CORRELATION ENERGY FUNCTIONAL
A standard expression for the correlation energy can be obtained by introducing a fictitious Hamiltonian H_\lambda with a scaled Coulomb interaction \lambda v and a local multiplicative potential which guarantees that the density is constant for every value of the scaling parameter \lambda. At \lambda = 1, H_\lambda coincides with the fully interacting Hamiltonian and at \lambda = 0 with that of the non-interacting Kohn-Sham (KS) system. Using the Hellmann-Feynman theorem one can show 9,10,12 that

    E_c = -\int_0^1 d\lambda \int_0^\infty \frac{d\omega}{2\pi}\,
          \mathrm{Tr}\left\{ v\left[\chi_\lambda(i\omega) - \chi_s(i\omega)\right] \right\},   (1)

where \chi_s is the non-interacting KS response function and \chi_\lambda is the scaled density response function. We have also used the shorthand notation \mathrm{Tr}\, fg = \int dr\, dr'\, f(r,r')\, g(r',r) for any two-point functions f and g. Within TDDFT the function \chi_\lambda obeys

    \chi_\lambda = \chi_s + \chi_s\left[\lambda v + f_{xc}^\lambda\right]\chi_\lambda.   (2)

The scaled XC kernel f_{xc}^\lambda is a functional of the ground-state density and is defined as the functional derivative of the scaled XC potential v_{xc}^\lambda with respect to the density n.
The simplest approximation to Eq. (1) is the random phase approximation (RPA), obtained by setting f_{xc}^\lambda = 0. The RPA has the advantage of allowing for an analytical evaluation of the \lambda-integral in Eq. (1). The result is

    E_c^{\mathrm{RPA}} = \int_0^\infty \frac{d\omega}{2\pi}\,
        \mathrm{Tr}\left\{ \ln\left[1 - v\chi_s(i\omega)\right] + v\chi_s(i\omega) \right\}.   (3)

In the language of Feynman diagrams, Eq. (3) is equal to an infinite summation of ring diagrams. The RPA correlation potential v_c is obtained as the functional derivative of Eq. (3) with respect to the density. If we let V signify the total KS potential and G_s the non-interacting KS Green function, so that \chi_s = -iG_sG_s, the functional derivative is conveniently obtained via the chain rule \delta E_c/\delta n = \int (\delta E_c/\delta V)(\delta V/\delta n). The result is the well-known linearized Sham-Schlüter (LSS) equation 6,13

    \int d2\, \chi_s(1,2)\, v_c(2) = \int d2\, d3\, \Lambda(3,2;1)\, \Sigma_c(2,3).   (4)

Here, we have used the notation (r_1, t_1) = 1, etc., and introduced \Lambda(3,2;1) = -iG_s(3,1)G_s(1,2). The correlation part of the self-energy \Sigma_c in the RPA is given by

    \Sigma_c(1,2) = iG_s(1,2)\, W_c(1,2),   (5)

where

    W_c = W - v, \qquad W = v + v\,\chi_s\, W.   (6)

Thus, \Sigma_c in the RPA coincides with the GW self-energy, but evaluated with KS Green functions. For the purpose of obtaining the RPA potential, Eq. (4) and Eq. (6) have to be solved self-consistently, which has so far been done only for atoms 2 and for bulk Si, LiF, and solid Ar. 14 The RPA is also sometimes called the linearized time-dependent (TD) Hartree approximation, since \chi_{\mathrm{RPA}} is obtained by allowing the electrons to respond only to the perturbing potential plus the induced Hartree potential. The next level of approximation is obtained by also including exchange effects, which leads to the RPAE approximation or the linearized TD Hartree-Fock (TDHF) approximation. Within TDDFT, the same level of approximation corresponds to the TDEXX approximation, in which the HF potential is replaced by the local EXX potential v_x. The latter potential is obtained from the TD exchange version of the LSS equation, which amounts to replacing, in Eq. (4), v_c by v_x and \Sigma_c by \Sigma_x = ivG_s, i.e., the HF self-energy. A variation of that equation yields an equation for the EXX response kernel f_x:

    \int d3\, d4\, \chi_s(1,3)\, f_x(3,4)\, \chi_s(4,2) = R_x(1,2),   (7)

where R_x(1,2) collects the terms generated by the variation of \Lambda and of \Sigma_x - v_x, and thus involves only the KS orbitals and their eigenvalues. A full analysis of this kernel has been performed recently. 4 It is important to observe that the kernel f_x is an implicit functional of the density through the KS orbitals and their eigenvalues. From Eq. (7) we see that the potential v_x is an ingredient in the kernel f_x but, as discussed above, v_x is also an implicit functional of the density. For future reference we note here that an EXX kernel obtained for any density and used in Eq. (2) will generate an interacting response function which obeys the important f-sum rule. This is due to the fact that the procedure above guarantees that the kernel does not blow up at large frequencies, something that we proved in a previous publication. 3 The fulfillment of the f-sum rule is crucial for the construction of new functionals for the correlation energy, one of which we will now derive. We start by making the observation that the EXX kernel is linear in its explicit dependence on the Coulomb potential. Therefore, the \lambda-integration in Eq. (1) can, just as in the case of the RPA, be carried out analytically, yielding the result

    E_c = \int_0^\infty \frac{d\omega}{2\pi}\,
        \mathrm{Tr}\left\{ v\,(v+f_x)^{-1}\ln\left[1-(v+f_x)\chi_s\right] + v\chi_s \right\},   (8)

with all quantities evaluated at imaginary frequencies. We note that Tr v\chi_s contains a singularity proportional to the Coulomb potential at the origin times the number of particles. This singularity must be cancelled by the first term in Eq. (8), which can only occur if the kernel remains finite at large frequencies, i.e., obeys the sum rule.
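As a quick sanity check (our own algebra, not a result taken from the references): setting f_x = 0 in Eq. (8) collapses the prefactor v(v+f_x)^{-1} to the identity and recovers the RPA functional,

    % RPA limit of Eq. (8): with f_x = 0 the prefactor v(v+f_x)^{-1} -> 1,
    \[
      E_c\big|_{f_x = 0}
      = \int_0^\infty \frac{d\omega}{2\pi}\,
        \mathrm{Tr}\left\{ \ln\left[1 - v\chi_s(i\omega)\right] + v\chi_s(i\omega) \right\}
      = E_c^{\mathrm{RPA}},
    \]
    % i.e., Eq. (3), as it must.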
Correlation energies obtained from this functional were recently presented, 3 albeit in a non-self-consistent fashion using the EXX density for the evaluation. The results were excellent for many closed-shell atoms. A diagrammatic description of Eq. (8), up to second order, is given in Fig. 1. Many higher-order terms do not have a strict diagrammatic representation but can be considered to simulate the higher-order exchange diagrams.
In order to obtain the correlation potential from Eq. (8) we need to differentiate the correlation energy with respect to the density. We then encounter the problem of differentiating the EXX kernel with respect to the density, which is a rather cumbersome task. Such derivatives formally amount to three-point correlation functions $\delta f_x(1,2)/\delta n(3)$. For He, however, this derivative is zero, since $f_x = -\frac{1}{2}v$. Therefore, we have here chosen to see how well we can do by neglecting the density variation of the EXX kernel also for larger atoms. With this assumption, the same procedure used for going from Eq. (3) to Eq. (4) now leads to an equation similar to Eq. (4) with $\Sigma_c$ replaced by a modified self-energy of the form $\tilde\Sigma_c = iG_s\tilde W_c$, in which the effective screened interaction $\tilde W_c$ is built from $v$, $f_x$, and the interacting response function $\chi$ of the TDEXX approximation (Eq. (2) at $\lambda = 1$ and $f_{xc} = f_x$). The evaluation of the correlation energy from Eq. (8) is relatively time consuming due to the frequency dependence of the kernel $f_x$. 3 The computational cost would be substantially reduced if the kernel could be kept at its static value ($f_x(\mathbf{r},\mathbf{r}',\omega) \approx f_x(\mathbf{r},\mathbf{r}',0)$) without seriously affecting the correlation energies. In Table I, correlation energies evaluated using the fully frequency-dependent EXX kernel (TDEXX) are compared to those obtained using the static (adiabatic) kernel (AEXX), and the difference is seen to be relatively small. Notice that in the case of He the static kernel is the full kernel. These results have encouraged us to evaluate the correlation potentials also using the adiabatic approximation.
III. CORRELATION ENERGIES AND POTENTIALS
When evaluating the correlation energy for a system it is natural to use the self-consistent density for that particular system. From now on, the self-consistent density corresponding to the new functional in Eq. (8) is referred to as the RPAX density. Due to the stationary property of the total energy we expect, however, that evaluating correlation energies at a slightly different density will give almost the same result. 15 This is, indeed, the case, as can be seen in Table I, which also seems to demonstrate that the total energy is a minimum at the RPAX density. Clearly, the EXX kernel gives a large improvement over the too large RPA values. It also improves on the MP2 results, which are here the self-consistent results given in Ref. 16. The latter approximation also follows from Eq. (1) if $\chi_\lambda$ is replaced by $\chi_\lambda \approx \chi_s + \lambda\chi_s[v + f_x]\chi_s$, or if the logarithm in Eq. (8) is expanded to second order. The adiabatic approximation is seen to be of little consequence here, yielding energies of the same quality as those obtained with the frequency-dependent kernel. As noticed before, the values in TDEXX, or in AEXX for that matter, are very accurate for these systems.
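To make the connection to MP2 explicit, note (a short check of our own, based on the reconstructed Eq. (8)) that expanding the logarithm to second order cancels the $\mathrm{Tr}\, v\chi_s$ term and leaves

$$E_c^{(2)} = -\frac{1}{4\pi}\int_0^\infty d\omega\, \mathrm{Tr}\{ v\,\chi_s\,(v+f_x)\,\chi_s \},$$

which is precisely what Eq. (1) gives with the linearized response $\chi_\lambda \approx \chi_s + \lambda\chi_s[v + f_x]\chi_s$, i.e., an MP2-like functional in which the exchange effects are carried by $f_x$.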
The XC potential is interesting in its own right. The highest occupied eigenvalue of the KS system exactly corresponds to the ionization potential, 18 and the larger part of the particle-conserving excitation energies consists of KS eigenvalue differences. 19 In Tab. II we present ionization potentials produced by different KS potentials (RPAX, RPA, MP2, and EXX). As noticed previously, the RPA values improve over EXX and are also better than the MP2 values. Here, we see that the RPAX potential yields a further improvement, giving excellent ionization potentials for all atoms. In Fig. 2 we plot the corresponding correlation potentials for He, Be, and Ne. The He RPAX potential almost coincides with the exact potential, 11 and the RPAX potential for Be is much closer to the exact one than any other approximation we have tried, especially in the outer region. The Ne potential is also very accurate, even yielding a qualitative improvement by better describing the 2s shell, which gives rise to an extra structure in the RPAX, and the exact, correlation potential.
IV. STATIC POLARIZABILITIES AND VAN DER WAALS COEFFICIENTS
The static polarizability is defined according to the formula

$$\alpha = -\frac{1}{3}\sum_{i=x,y,z}\int d\mathbf{r}\, d\mathbf{r}'\, r_i\, \chi(\mathbf{r},\mathbf{r}';0)\, r'_i,$$

and the van der Waals coefficient, or $C_6$ coefficient, between ions A and B is given by the Casimir-Polder formula

$$C_6 = \frac{3}{\pi}\int_0^\infty d\omega\, \alpha_A(i\omega)\,\alpha_B(i\omega),$$

where $\alpha_A(i\omega)$ is the dynamic polarizability of ion A calculated at imaginary frequencies. In previous work these quantities were calculated using the response function of the TDEXX approximation. It was then a natural choice to use the corresponding self-consistent EXX density in the evaluation. We found, not so surprisingly, that our results closely resembled those of the TDHF approximation and were, therefore, not overly impressive. We interpreted this partial failure as a consequence of a rather poor description of the ground state. We also found, however, that the results are rather sensitive to the density used in the evaluation. In the present work we have instead evaluated the same EXX formula for the polarizability using the correlated density produced by our new correlation functional. The van der Waals energy is a pure correlation effect, and one can argue that it should be evaluated at a correlated density, e.g., the RPA or the RPAX density, and not just an exchange-only density. This turns out to have a rather drastic effect on the actual values, moving them much closer to the more accurate values found in the literature, as seen in Tab. III and IV. A clear improvement is found when correlated densities are used in the evaluation, with a slight edge for our new RPAX density. It is also noticed that when calculating $C_6$ coefficients the adiabatic approximation is sufficient, as also observed previously.
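As a purely illustrative numerical sketch of how the Casimir-Polder integral above can be evaluated, the following Python snippet uses a hypothetical one-pole model for $\alpha(i\omega)$ (all parameter values are made up) and cross-checks the quadrature against the closed-form London expression that this model admits; it is not the computational scheme of this work.

```python
import numpy as np
from scipy.integrate import quad

def alpha_model(omega, alpha0, omega0):
    # One-pole (London-type) model polarizability at imaginary frequency:
    # alpha(i*omega) = alpha(0) / (1 + (omega/omega0)^2).
    return alpha0 / (1.0 + (omega / omega0) ** 2)

def c6_casimir_polder(alpha_a, alpha_b):
    # C6 = (3/pi) * Integral_0^inf alpha_A(i w) alpha_B(i w) dw
    val, _ = quad(lambda w: alpha_a(w) * alpha_b(w), 0.0, np.inf)
    return 3.0 / np.pi * val

# Hypothetical parameters in atomic units (illustrative only):
# static polarizability alpha(0) and an effective excitation energy.
aA, wA = 1.38, 1.0
aB, wB = 1.38, 1.0

c6_num = c6_casimir_polder(lambda w: alpha_model(w, aA, wA),
                           lambda w: alpha_model(w, aB, wB))
# Closed-form London result for the one-pole model, used as a cross-check.
c6_london = 1.5 * aA * aB * wA * wB / (wA + wB)
print(c6_num, c6_london)  # should agree to quadrature accuracy
```

The same quadrature applies unchanged when $\alpha(i\omega)$ is instead tabulated from a TDEXX or RPAX response calculation.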
V. CONCLUSIONS AND OUTLOOK
In the present work we have decided to temporarily part from our familiar, systematic, and conserving way of constructing improved approximations to XC potentials and kernels within TDDFT in the linear regime. Instead, we have tested the first step in an iterative scheme originally proposed by Peuckert. The starting point has been the previously studied EXX approximation for the XC potential and the corresponding kernel. The method is described in detail in the sections above. The rationale for this deviation has been the extraordinarily accurate potentials obtained from this approach. The potentials are indeed very close to the exact XC potentials of DFT where these are known. Calculated ionization potentials for all atoms are close to experimental results, in accordance with the well-known fact that the highest occupied DFT eigenvalue should equal minus the ionization potential.
In previous work we calculated static polarizabilities, low-lying particle-hole excitation energies, and van der Waals coefficients from the EXX approximation. These turned out to be very similar to those of TDHF theory and not very accurate. We argued then that these less satisfactory results were a consequence of a rather poor description of the ground state within the EXX approximation. Using instead our new potentials to recalculate the mentioned properties, the results are in excellent agreement with experiment for all atoms studied.
We can thus state with confidence that we now have at our disposal an affordable and not too complicated way of obtaining accurate results for a large number of static and low-frequency dynamic properties of physical systems. And all this within a relatively minor modification of the EXX approximation within TDDFT.
Of course, we have, at this stage, no idea about the possible conserving properties of the proposed scheme. This is an interesting project for future research. But, on the other hand, conservation laws and sum rules might not be of such great importance to the properties discussed here.
We should also mention that the results obtained here rely on the assumption that one can neglect the density and frequency dependence of the EXX kernel. Since no such assumption is needed in the case of a two-electron system, we believe in its validity, but this must, of course, be investigated further. In the meantime we propose to use the new method as an effective tool for calculating physical properties within the realm of TDDFT.
A Retrospective Review of Hospital-Based Data on Enteric Fever in India, 2014–2015
Abstract Background Enteric fever remains a threat to many countries with minimal access to clean water and poor sanitation infrastructure. As part of a multisite surveillance study, we conducted a retrospective review of records in 5 hospitals across India to gather evidence on the burden of enteric fever. Methods We examined hospital records (laboratory and surgical registers) from 5 hospitals across India for laboratory-confirmed Salmonella Typhi or Salmonella Paratyphi cases and intestinal perforations from 2014–2015. Clinical data were obtained where available. For laboratory-confirmed infections, we compared differences in disease burden, age, sex, clinical presentation, and antimicrobial resistance. Results Of 267536 blood cultures, 1418 (0.53%) were positive for S. Typhi or S. Paratyphi. Clinical data were available for 429 cases (72%); a higher proportion of participants with S. Typhi infection were hospitalized, compared with those with S. Paratyphi infection (44% vs 35%). We observed resistance to quinolones among 82% of isolates, with cases of cephalosporin resistance (1%) and macrolide resistance (9%) detected. Of 94 participants with intestinal perforations, 16 (17%) had a provisional, final, or laboratory-confirmed diagnosis of enteric fever. Discussion Data show a moderate burden of enteric fever in India. Enteric fever data should be systematically collected to facilitate evidence-based decision-making by countries for typhoid conjugate vaccines.
Typhoid and paratyphoid fever (collectively known as enteric fever) is caused by the organisms Salmonella Typhi and Salmonella Paratyphi (serovars A, B, and C) and is a systemic disease that is endemic in many Asian countries where a large proportion of the population lacks access to safe water, sanitation, and hygiene infrastructure. S. Typhi and S. Paratyphi are estimated to cause nearly 12 million and 4 million annual cases of illness, respectively, and >153 000 annual deaths, although accurate estimates are lacking and inconsistent because of the limited number of well-conducted studies [1,2]. Although enteric fever is rare in industrialized countries, it remains an important and persistent public health problem in low-resource countries. In the countries most affected, however, barriers such as a lack of systematic public health reporting and laboratory infrastructure contribute to substantial knowledge gaps of the disease burden and presentation. In India, where pooled estimates have shown that nearly 10% of isolates from individuals with enteric fever have been identified as S. Typhi [3], there have only been 3 studies in 2 locations that have attempted to determine the incidence of enteric fever, and few hospital-based studies have been performed in recent years to understand the spectrum of disease [4-6]. Since a new typhoid conjugate vaccine (TCV; Typbar-TCV, Bharat Biotech International) has been recently recommended and prequalified by the World Health Organization (WHO) and included in the 2019-2020 funding window of Gavi, the Vaccine Alliance, additional data on the burden and clinical presentation of enteric fever in India is needed for decision-making on the introduction of the new vaccine and to understand its potential impact [7,8].
While antimicrobial therapy is an effective treatment for enteric fever, an increasing rate of resistance to available antibiotics is resulting in higher morbidity, mortality, and cost of treatment [9-11]. The most commonly used diagnostic test is blood culture, which, based on pooled estimates, has been shown to be only 61% sensitive [12]. Further, routine blood culture is not always available in low-resource settings, and physicians commonly rely on clinical symptoms, which are nonspecific and overlap with other febrile illnesses, to empirically treat enteric fever. This can lead to inappropriate treatment and, subsequently, increasing antimicrobial resistance. Results from a 12-year retrospective study in India showed an increase in reduced susceptibility to ciprofloxacin in S. Typhi isolates, which has also been recently shown in other South Asian countries, such as Nepal and Bangladesh [13-15]. Although recent patterns showed a decrease in multidrug-resistant isolates (ie, those resistant to ampicillin, chloramphenicol, and trimethoprim-sulfamethoxazole), emerging resistance to third-generation cephalosporins, the primary antibiotics of choice in recent years, has been increasingly seen on the South Asian subcontinent, severely threatening treatment options while increasing treatment costs [16,17].
A systematic review of studies on enteric fever in India revealed few community-based studies attempting to estimate typhoid and paratyphoid fever incidence and, in the last 10 years, only 7 hospital-based studies [3]. Since many recent studies in India have been characterized by a small sample size and were limited to single-center sites, additional data in India are needed to show burden of disease and provide evidence for the usefulness of TCVs [18,19]. The absence of credible estimates of the disease burden in India has resulted in limited understanding of the impact of the disease and consequently hindered prevention and control efforts. Further, some studies have suggested a seasonal component to typhoid occurrence in India [5]. Elucidating the spectrum, temporality, and burden of disease will help inform typhoid prevention and control strategies through vaccines and other measures in countries where it is endemic.
We conducted this retrospective review to gather data on the enteric fever burden in India and to better explain the epidemiology and clinical profile of enteric fever cases across the country. As part of the Surveillance for Enteric fever in Asia Project (SEAP), this retrospective study aims to describe the clinical profile, severity, antimicrobial resistance, and outcomes of laboratory-confirmed enteric fever cases in India, using existing hospital data. This study also aims to review characteristics of intestinal perforation cases as a marker of disease severity.
Study Design and Site Selection
We conducted a retrospective, cross-sectional study among patients with blood culture-confirmed S. Typhi or S. Paratyphi infection or intestinal perforation in different hospitals across India from 2014 to 2015. We selected hospitals that were secondary or tertiary-care facilities containing laboratory departments capable of diagnosing enteric fever with searchable electronic laboratory records. Five hospitals were identified and agreed to participate in the study: (1) the Postgraduate Institute of Medical Sciences (PGI), a mixed public-private tertiary-care hospital in Chandigarh with 1960 beds mainly serving an urban population; (2) Medanta Hospital (Medanta), a private tertiary-care hospital in Gurugram (previously known as Gurgaon), Haryana, with 1250 beds mainly serving an urban population; (3) Christian Medical College (CMC), a private tertiary-care hospital in Vellore, Tamil Nadu, with 2800 beds mainly serving an urban population; (4) Apollo Gleneagles Hospital (Apollo), a private tertiary-care hospital in Kolkata, West Bengal, with 750 beds mainly serving an urban population; and (5) Kasturba Medical College-Manipal University Hospital (KMC), a private tertiary-care hospital in Manipal, Karnataka, with around 2000 beds mainly serving a peri-urban population.
Data Collection
Our data sources were electronic laboratory records and surgical department registers. The electronic laboratory records were searched to identify patients with laboratory confirmation of S. Typhi or S. Paratyphi infection by blood culture between January 2014 and December 2015. Data on demographic characteristics and hospital admission status of patients with laboratory-confirmed infection were initially extracted from the laboratory database. Study staff then used patient identification numbers of hospitalized cases to find inpatient medical charts.
Surgical department registers were searched to identify patients with an intestinal perforation between January 2014 and December 2015. Study staff used patient identification numbers of intestinal perforation cases to find inpatient medical charts.
For hospitalized patients with laboratory-confirmed infection or intestinal perforation who had available medical charts, staff abstracted laboratory results and clinical data (eg, duration of hospitalization, diagnoses, symptoms, and complications), using standard paper-based data collection forms. Data were entered into a database for analysis using Microsoft Access (Redmond, WA).
Data Analysis
We conducted a descriptive analysis to compare differences in disease burden, age and sex distribution, and antimicrobial resistance between S. Typhi and S. Paratyphi infections, using the Pearson χ² test, the Fisher exact test, or the nonparametric Wilcoxon rank-sum test to determine statistical significance. We examined differences in age distribution using nonparametric Kolmogorov-Smirnov 2-sample tests. We also reviewed the seasonality of case counts, by hospital. All statistical tests were 2-sided and considered statistically significant at a P value of <.05.
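For concreteness, the sketch below shows how tests of this kind can be run with scipy.stats; the 2×2 table and the age samples are synthetic placeholders, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 2x2 table: hospitalization status (rows) by serovar (columns).
table = np.array([[160, 24],    # hospitalized:     S. Typhi, S. Paratyphi
                  [202, 43]])   # not hospitalized: S. Typhi, S. Paratyphi

chi2, p_chi2, dof, _ = stats.chi2_contingency(table)  # Pearson chi-square test
_, p_fisher = stats.fisher_exact(table)               # Fisher exact test

# Synthetic age samples for the two serovars.
ages_typhi = rng.integers(5, 60, size=200)
ages_paratyphi = rng.integers(15, 40, size=60)

_, p_ranksum = stats.ranksums(ages_typhi, ages_paratyphi)  # Wilcoxon rank sum
_, p_ks = stats.ks_2samp(ages_typhi, ages_paratyphi)       # 2-sample K-S test

for name, p in [("chi-square", p_chi2), ("Fisher", p_fisher),
                ("rank sum", p_ranksum), ("K-S", p_ks)]:
    print(f"{name}: P = {p:.4f} ({'significant' if p < .05 else 'not significant'})")
```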
Ethical Considerations
The study protocols were reviewed and approved by the institutional ethics committees at the Translational Health Science and Technology Institute and the participating hospitals.

RESULTS

Among the 1418 patients with laboratory-confirmed infection, 97% had information on age and sex available within laboratory records. The median age of all patients with laboratory-confirmed infection was 24 years (interquartile range [IQR], 18-30 years), and 54% were male (Table 1). The group aged 20-29 years had the highest percentage of both S. Typhi and S. Paratyphi infections (45%). While S. Paratyphi infections had a single-peaked age distribution around 25 years, S. Typhi cases peaked at 10 years and 25 years (Figure 1).
In addition to differing by etiology (typhoid vs paratyphoid fever), the age distribution of confirmed infections also differed by sex (P < .001; Figure 2). Male patients had a broader age distribution curve (kurtosis = 0.79), shown by an interquartile range of 13-30 years, while female patients had a more tightly clustered age distribution (kurtosis = 3.88), shown by an interquartile range of 21-27 years.
We observed increases in the number of cases during the summer monsoon months (May-September) at Medanta in 2014; the number of cases peaked at 72 in August, nearly 3 times (288% of) the 2-year monthly average of 25 cases during 2014-2015 (Figure 3). We also observed an increase in cases at KMC in May 2015, compared with preceding and subsequent months, but observed no seasonal trends at Medanta in 2015 or at Apollo, CMC, and PGI in either year.
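A simple way to screen monthly counts for such peaks is to compare each month against the overall monthly mean; the sketch below uses hypothetical counts shaped like the Medanta pattern described above, not the study data.

```python
import numpy as np

def flag_seasonal_peaks(monthly_counts, factor=2.0):
    # Flag months whose case count exceeds `factor` times the overall
    # monthly mean, a crude screen for seasonal or outbreak signals.
    counts = np.asarray(monthly_counts, dtype=float)
    mean = counts.mean()
    return [(i, int(c)) for i, c in enumerate(counts) if c > factor * mean]

# Hypothetical 24 months of counts for one hospital (Jan 2014 - Dec 2015):
counts = [18, 20, 22, 25, 30, 41, 55, 72, 48, 30, 22, 19,
          17, 19, 21, 23, 26, 28, 30, 33, 27, 24, 20, 18]
print(flag_seasonal_peaks(counts))  # only index 7 (August 2014) is flagged
```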
Clinical Data
Of the 597 hospitalized patients identified at the laboratories of all 5 hospitals, 429 (72%) had available medical charts, including 362 (84%) infected with S. Typhi and 67 (16%) infected with S. Paratyphi.
The most commonly reported symptoms among patients with hospitalized laboratory-confirmed infection included fever (in 97%), nausea/vomiting (in 50%), weakness/malaise (in 38%), headache (in 35%), abdominal pain (in 32%), diarrhea (in 29%), and cough (in 29%; Table 2). A higher proportion of patients infected with S. Typhi presented with gastrointestinal symptoms, compared with patients infected with S. Paratyphi (P = .027 for nausea/vomiting, and P = .011 for diarrhea). The median duration of fever at admission of enteric fever cases was 7 days (IQR, 5-14 days), and previous antibiotic use was reported among 21% of admitted patients with enteric fever. About 50% of enteric fever cases had a provisional diagnosis of enteric fever/typhoid, while fever of unknown origin was the second most common diagnosis (35%).
Among hospitalized patients with laboratory-confirmed infection, 76 (18%) had a diagnosis of at least 1 complication. The case-fatality rate among hospitalized patients with laboratory-confirmed infection was 1.2% (5 of 429). Among fatal cases, the median age was 35 years (IQR, 18-52 years), and 80% were male. All 5 patients who died had ≥2 complications. Resistance to quinolones was observed among 82% of isolates; ceftriaxone resistance was reported in 4 isolates (1%) that were also resistant to ciprofloxacin. Azithromycin resistance was identified in 3 of 33 isolates (9%) tested. PGI reported significantly more isolates resistant to ampicillin than other hospitals (34% vs 0% at Medanta and Apollo and 2% at KMC and CMC; P < .0001), while Medanta and PGI reported a significantly smaller percentage of isolates resistant to ciprofloxacin than the other hospitals (49% and 16%, respectively, vs 90% at CMC and 99% at Apollo and KMC; P < .0001). Statistically significant differences were not observed in antimicrobial resistance patterns between S. Typhi and S. Paratyphi. Of the 94 patients with intestinal perforations, 16 (17%) had a provisional, final, or laboratory-confirmed diagnosis of enteric fever (all laboratory-confirmed cases were due to S. Typhi). Disease in 4 patients was laboratory confirmed, through either blood culture (in 3 [19%]) or histopathologic analysis (in 1 [6%]; Table 3). The median age of these 16 patients was 25.5 years (IQR, 19.5-33.5 years), and 88% were male. Of the 14 patients for whom data on the location of the perforation were available, all (100%) had perforations in the ileum. Some symptoms, including nausea/vomiting, diarrhea, and weakness/malaise, were reported in similar proportions of patients with perforations and a provisional or confirmed diagnosis of enteric fever and of patients with laboratory-confirmed enteric fever. Compared with hospitalized patients with laboratory-confirmed disease, however, a significantly lower proportion had fever (75% vs 97%; P = .001), and significantly higher proportions had abdominal pain (75% vs 32%; P = .0003) or constipation (38% vs 3%; P < .0001).
DISCUSSION
Our retrospective review of hospital records, spanning 2 years and including data from 5 hospitals across the country, indicates that enteric fever is still present in healthcare settings across India and predominantly affects children and young adults. Our study captured 16 confirmed cases of ileal perforation and 16 laboratory-confirmed cases with multiple severe complications, including at least 5 fatalities. Prior to this study, enteric fever surveillance in India had been limited to mainly small single-hospital studies, leading to substantial knowledge gaps [3]. While published literature from studies conducted in South Asia over the past decade has consistently reported increasing resistance to fluoroquinolones, limiting the prescriptive use of drugs from this class, we also documented resistance to third-generation cephalosporins and macrolides, presently the treatments of choice in India [13,20,21]. These evolving antimicrobial resistance patterns should be carefully monitored in prospective studies. Increasing antimicrobial resistance in S. Typhi and S. Paratyphi isolates, such as that observed in the recent outbreak of extensively drug-resistant cases in Pakistan, also highlights the need for ongoing enteric fever surveillance and the potential benefits of rapid deployment of typhoid vaccines.
The age distribution we observed for all enteric fever cases is similar to results from other hospital-based studies in Asia [14,22]. This is in contrast to community-based surveillance in previous studies, which reported a shift toward a higher prevalence in younger populations [4-6]. This difference may be explained by one of the limitations of hospital-based studies, the inability to control for healthcare-seeking behavior, which is also apparent in the sex-associated disparities in our data for both minor and adult populations. Previous hypotheses on sex-associated differences in healthcare-seeking behavior among children included parental decisions to delay treatment and a lower inclination to spend money on treatment for female children, compared with male children [23,24]. These demographic data gaps in hospital-based surveillance are potentially controllable through a low-cost hybrid method using prospective health facility-based surveillance and household surveys to determine community healthcare utilization rates, as outlined by Luby et al [25]. Ascertaining the true burden of disease in the community will be crucial to accurately targeting high-risk populations for the new vaccine.
The age distribution in our study, and in previous studies, differs slightly by etiology (typhoid vs paratyphoid fever), although children and young adults bear the largest burden of both diseases [26-28]. The presentation of hospitalized patients with enteric fever resembled that described in previous studies, including a significantly higher rate of gastrointestinal symptoms in those infected with S. Typhi [29]. Our study found that infections with S. Typhi were more severe than infections with S. Paratyphi, including a higher proportion of S. Typhi-infected patients admitted to the hospital (although the timing of hospitalization, whether before or after culture results were available, is not known), which is similar to other studies in Asia and Africa [30,31]. However, some studies have found that the severity of S. Paratyphi infection is increasing and is comparable with that of S. Typhi infection [32]. Typhoid-related intestinal perforations have been estimated to occur in 0.8%-39% of laboratory-confirmed cases, depending on the socioeconomic status of the country [33]. It can be difficult to isolate S. Typhi from persons with intestinal perforations, owing to the likelihood of antibiotic use before blood culture or surgery; only 49% of our surgical cases had blood culture performed. Future studies should consider using surgical surveillance to strengthen the link between perforation and enteric fever [34].
Last, our study looked for temporal patterns in the burden of enteric fever. The seasonal influence of monsoons on disease burden has been previously documented in tropical countries where enteric fever is endemic [35]. Of the 5 hospital sites, 3 (Apollo, PGI, and CMC) did not experience these seasonal patterns, suggesting that additional investigation is needed to understand the epidemiological and environmental factors that predominantly drive the disease burden in India. Of the 2 sites that experienced an increase in cases during the typical Indian monsoon months in 1 of the 2 study years, 1 site (Medanta) experienced a large and prolonged month-on-month increase in cases. This notable occurrence highlights the need to develop and maintain surveillance systems that can analyze patterns of disease in real time to provide timely information for disease control efforts.
This retrospective study provides insights to inform the design of future surveillance systems for enteric fever in India, including information on the distribution of disease, disease presentation and outcomes, and antimicrobial resistance patterns. However, these study findings should be interpreted with several limitations in mind. First, since the study design was retrospective, the data are subject to the biases associated with any retrospective study, such as inconsistent case definitions and missing data (28% of inpatient charts were not found for review). Second, the study hospitals did not have electronic clinical records, limiting the analysis of clinical data for all cases identified in the laboratory. Third, surgical specimens from intestinal perforation cases were infrequently tested by histopathologic analysis or blood culture, leading to a gap in the data collected. Last, since these data are hospital based, information on enteric fever in the hospitals' geographic areas depends on care-seeking behavior.
South Asia has the highest estimated global burden of enteric fever morbidity and mortality; however, current surveillance capabilities have not permitted an accurate estimate of the full spectrum of the impact that enteric fever has on the region. Further elucidating the link between severe complications and typhoid can also provide information on the potential benefits of typhoid vaccination campaigns. As the newly World Health Organization-recommended TCV has been shown to be 50%-87% efficacious, most if not all of these severe cases and deaths could be preventable with broad use of the vaccine [36]. In addition, broad implementation of TCV may help reduce transmission of typhoid, including resistant strains. Beyond describing the severity of disease and the presence of antimicrobial resistance, national data on the burden of typhoid fever should prompt Indian policymakers to consider including TCV in their immunization programs. Evidence-based decision-making using these types of regional-level data is crucial to reducing the impact of enteric fever in countries of endemicity.
The Impact of Leadership Style and Work Motivation on Employee Performance with Job Satisfaction as an Intervening Variable (Case Study at PT Jaya Abadi Denpasar)
This is a quantitative study conducted on a sample of 55 employees of PT Jaya Abadi Denpasar, obtained through the total/saturated/census sampling approach. The goal of this study was to identify and assess the influence of leadership style and work motivation on job satisfaction and performance among employees of PT Jaya Abadi Denpasar. The validity tests showed the instruments to be valid, the reliability tests showed them to be reliable, and the normality tests showed the study data to be normally distributed.
INTRODUCTION
Intense competition from rivals selling the same vendor's products has a significant impact not only on employee performance but also on sales and corporate profits. There is a need to study the causes of this problem and possible remedies, such as improving management methods and employee motivation.
A company's leadership style, from senior level to middle management, has a significant impact on the performance of teams, especially those responsible for sales and direct customer contact.
All leaders must be able to inspire enthusiasm and motivation in their subordinates. We expect that increased work motivation will boost job satisfaction and staff performance at PT Jaya Abadi Denpasar. Fillmore H. Stanford (1969), as cited in Mangkunegara (2017), defines motivation as a state that moves people toward a particular goal. The leadership of PT Jaya Abadi Denpasar inspires subordinates at every meeting in order to motivate them in their work. Each employee of PT Jaya Abadi Denpasar has different motivations at work, so the leadership needs to reach out to each employee. A successful company requires a leader who can mobilize all employees of PT Jaya Abadi Denpasar to work together to achieve the company's goals, including improving employee performance.
RESEARCH METHOD

Population and Sample
The population is the set of all individuals who can provide data and information for research purposes. The population in this study comprised all 55 employees of PT Jaya Abadi Denpasar.
Because the population is small, all of its members were used as the sample. The sampling technique used in this study was census/total/saturated sampling. According to Sugiyono (2017), census/saturated/total sampling is a sampling technique in which all members of the population are used as the sample. The sample in this study therefore consisted of the 55 employees of PT Jaya Abadi Denpasar.
Data Collection Techniques
The data used in this study are, by source, primary data and secondary data. The data collection techniques used in this study are as follows:
Interview
According to Sugiyono (2017), an interview is a meeting of two persons who exchange information and ideas through question and answer in order to construct meaning about a certain issue. Interviews are used as a data collection technique when the researcher wants to perform a preliminary study to identify problems to be investigated, and when the researcher wants to obtain more in-depth information from respondents and the number of respondents is small.
Questionnaire.
A questionnaire, according to Sugiyono (2017), is a data gathering tool in which respondents are given a set of questions or written statements to answer.

Observation

Sugiyono (2017) cites Sutrisno Hadi (1986) in support of the idea that observation is a complex process made up of a number of biological and psychological processes, the two most significant of which are memory and observation.
RESULTS AND DISCUSSION
The number of employees of PT Jaya Abadi Denpasar can be seen in the following table.
Data Processing and Analysis Techniques

Validity Test
The validity test is used to show the extent to which the measuring instrument measures what it is intended to measure. Ghozali (2018) states that the validity test is used to measure the legitimacy or validity of a questionnaire.
The measurement results are said to be valid if the data collected match the actual data on the object being measured, that is, if the test results match the actual conditions of the person being measured. Based on the SPSS output, the results of the validity test are as follows:
The validity test findings in Table 3 reveal that all study variables (leadership style, work motivation, job satisfaction, and employee performance) have a corrected item-total correlation value greater than 0.3, indicating that they pass the validity test.
Reliability Test
According to Arikunto (2017), the reliability test is used to assess the degree of trust that can be placed in a measuring instrument: an instrument that yields the same result when used at different times to measure the same item can be considered dependable. The following are the outcomes of data processing using SPSS for Windows. Based on Table 4, the questionnaire data fulfill the reliability test requirements because all research variables (leadership style, work motivation, job satisfaction, and employee performance) have a Cronbach's Alpha value greater than 0.60, so they are said to be reliable.
Normality Test
The results of research data processing using SPSS for Windows:
From Picture 1, the SPSS output shows that the processed questionnaire data meet the normal distribution requirements, because the residual points follow and cluster around a straight diagonal line.

Heteroscedasticity Test
Picture 2. Heteroscedasticity Test
Based on Picture 2, the questionnaire data are free from heteroscedasticity, because the scatterplot shows no clear pattern and the points spread above and below zero on the Y axis (Ghozali, 2018: 134).

Multicollinearity Test

Based on Table 5, all independent variables (leadership style, work motivation, and job satisfaction) have a variance inflation factor (VIF) value of less than 10 and a tolerance value greater than 0.1, indicating that the questionnaire data are free of multicollinearity.

Partial Test (t-test): Leadership Style Variable (X1)

The t-test/partial test hypotheses used in this study are as follows:
• H0: Leadership style has no significant effect on employee performance.
• H1: Leadership style has a significant effect on employee performance.
This study uses a confidence level (α) of 5%. From Table 8 it is known that the leadership style variable has a Sig value of 0.043 (smaller than 5%), so H0 is rejected and H1 is accepted, meaning that leadership style has a significant effect on employee performance.
Work Motivation Variable (X2).
The t-test/partial test hypotheses used in this study are as follows: • H0: Work motivation has no significant effect on employee performance.
• H1: Work motivation has a significant effect on employee performance. A confidence level of 5% was used in this research. Given that work motivation has a Sig value of 0.005 (less than 5%), as can be seen in Table 7, we reject H0 and accept H1, which indicates that work motivation significantly affects employee performance.
Variable Job Satisfaction (Z).
The t-test/partial test hypotheses used in this study are as follows: • H0: Job satisfaction has no significant effect on employee performance.
• H1: Job satisfaction has a significant effect on employee performance. In this research, a confidence level of 5% was used. According to Table 8, the job satisfaction variable has a Sig value of 0.000 (less than 5%), so H0 is rejected and H1 is accepted, indicating that job satisfaction has a substantial influence on employee performance. For the indirect effect, the calculated Sobel statistic is 2.951 > 1.96 (the Z-table value at the 5% confidence level used in this study), so the Sobel test result for the indirect effect is significant, meaning that work motivation has a significant effect on employee performance through job satisfaction.
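For reference, the Sobel statistic used here is z = ab / sqrt(b²·s_a² + a²·s_b²), where a and b are the two path coefficients and s_a, s_b their standard errors. A minimal sketch follows; the coefficients and standard errors are hypothetical illustrations (chosen so that a × b ≈ 0.2785, echoing the indirect effect reported below), not values taken from the study's tables.

```python
import math

def sobel_z(a, s_a, b, s_b):
    # Sobel first-order standard error for the indirect effect a*b:
    # z = a*b / sqrt(b^2 * s_a^2 + a^2 * s_b^2)
    return (a * b) / math.sqrt(b**2 * s_a**2 + a**2 * s_b**2)

# Hypothetical path coefficients and standard errors (illustrative only):
# a: motivation -> satisfaction, b: satisfaction -> performance.
a, s_a = 0.619, 0.142
b, s_b = 0.450, 0.095
z = sobel_z(a, s_a, b, s_b)
print(f"Sobel z = {z:.3f}; significant at 5% if |z| > 1.96")
```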
The total magnitude of the influence of work motivation on employee performance is therefore the direct effect plus the indirect effect: 11.02% + 27.85% = 38.87%.
The Magnitude of the Effect of the Job Satisfaction Variable on Employee Performance

From Table 8, the magnitude of the direct influence of job satisfaction on employee performance is ρyz² × 100% = 0.450 × 0.450 × 100% = 20.25%.
Coefficient of Determination / Adjusted R Square, Sub-Structure 2
The Adjusted R² value is 0.936, meaning that leadership style, work motivation, and job satisfaction jointly contribute 93.6% to employee performance, while the remaining 6.4% (100% − 93.6%) is influenced by other variables not included in this study.
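For readers who want to reproduce such a figure, the adjusted R² follows the standard formula sketched below; the raw R² value used here is hypothetical, chosen only to land near the reported 0.936 with n = 55 observations and k = 3 predictors.

```python
def adjusted_r2(r2, n, k):
    # Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1),
    # with n observations and k predictors.
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Illustrative check (hypothetical raw R^2):
print(round(adjusted_r2(0.9396, 55, 3), 3))  # -> 0.936
```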
DISCUSSION
After conducting the data analysis, the discussion in this study is as follows.

Leadership style has a significant effect on job satisfaction

Based on the results of the partial test, leadership style has a Sig value of 0.042 (smaller than 0.05), so H0 is rejected and H1 is accepted, meaning that leadership style has a significant effect on job satisfaction. The results indicate that leaders cooperate harmoniously with their subordinates, so that the work environment supports respondents in completing their jobs well. Compatible collaboration between leaders and subordinates in solving the various problems faced by employees of PT Jaya Abadi Denpasar leaves respondents satisfied with their work environment; this shows that the leadership style applied by a leader affects the level of job satisfaction of his or her subordinates. Job satisfaction is one of the most important things for the leaders and employees of PT Jaya Abadi Denpasar: satisfied employees are loyal, do not harbor resentment, do not experience frustration or anxiety at work, and can finish their work well. The leader therefore applies a leadership style that matches the characteristics of his subordinates so that his leadership is successful.
Work motivation has a significant effect on job satisfaction
Based on the results of the partial test, the work motivation variable has a Sig value of 0.001 (smaller than 0.05), so H0 is rejected and H1 is accepted, meaning that work motivation has a significant effect on job satisfaction. The results indicate that respondents are motivated to work because they want to earn a larger salary, and the compensation received by respondents is in accordance with their responsibilities at work. This means that employees of PT Jaya Abadi Denpasar feel satisfied at work because their desire for a larger salary, which motivates them to work, has been met by compensation that matches their responsibilities; this shows that employee motivation can affect an employee's level of job satisfaction.
Job satisfaction has a significant effect on employee performance
Based on the results of the partial test, job satisfaction has a Sig value of 0.000 (smaller than 5%), so H0 is rejected and H1 is accepted, meaning that job satisfaction has a significant effect on employee performance. The results indicate that the work environment supports respondents in completing their jobs well, so that the amount of work respondents produce always meets the quantity standards set by the company. This shows that respondents are satisfied with the working conditions of PT Jaya Abadi Denpasar and are therefore able to produce work that meets the company's quantity standards; job satisfaction can thus influence employee performance levels.

Leadership style has a significant effect on employee performance

Based on the results of the partial test, leadership style has a Sig value of 0.043 (smaller than 5%), so H0 is rejected and H1 is accepted, meaning that leadership style has a significant effect on employee performance. The results indicate that respondents are able to cooperate with colleagues in completing a job. The leadership of PT Jaya Abadi Denpasar always invites subordinates to participate in solving the problems faced by the company, whether individually or by working together in groups, so that subordinates develop the ability to cooperate with colleagues in completing a job; this shows that leadership style has an influence on the performance of subordinates.
Work motivation has a significant effect on employee performance
Based on the results of the partial test, work motivation has a Sig value of 0.000 (smaller than 5%), so H0 is rejected and H1 is accepted, meaning that work motivation has a significant effect on employee performance. The results indicate that respondents are motivated to work because they want to achieve good work performance, so that the quality of the work they produce always meets the quality standards set by the company. The motivation of PT Jaya Abadi Denpasar employees to excel enables them to produce work that meets the company's quality standards; this shows that work motivation has an influence on employee performance.
Leadership style has a significant effect on employee performance through job satisfaction
Based on the Sobel test, the calculated Z is 2.17 > 1.96 (the Z-table value at the 5% confidence level used in this study), so the Sobel test result for the indirect effect is significant, meaning that leadership style has a significant effect on employee performance through job satisfaction. The results indicate that leaders convey their ideas in a persuasive manner, so that the work environment supports respondents in completing their jobs well and respondents always complete their work on time, in accordance with the time standards set by the company. The leaders of PT Jaya Abadi Denpasar convey their ideas persuasively; communication is important in the work environment because, with good communication, employees can complete their work on time in accordance with the company's time standards. An increase in the job satisfaction of PT Jaya Abadi Denpasar employees with their work environment raises employee performance, and employee job satisfaction cannot be separated from the leadership's role in creating it; this shows that leadership style influences employee performance through job satisfaction.

Work motivation has a significant effect on employee performance through job satisfaction

Based on the Sobel test, the calculated Z is 2.951 > 1.96 (the Z-table value at the 5% confidence level used in this study), so the Sobel test result for the indirect effect is significant, meaning that work motivation has a significant effect on employee performance through job satisfaction. The results indicate that respondents are motivated to work because they want to establish better interpersonal relationships, so that respondents feel at ease at work and are able to work together with colleagues in completing a job. Work motivation aimed at building better relations among the employees of PT Jaya Abadi Denpasar prevents hostility between employees, so that employees feel satisfied with peaceful working conditions and are able to collaborate with colleagues; to improve performance, all work must be completed promptly, whether it can be finished individually or as a team, so it can be said that motivation influences employee performance through job satisfaction.
CONCLUSION
In accordance with the data analysis and discussion, the conclusions of this research are as follows: 1) Leadership style has a significant effect on the job satisfaction of PT Jaya Abadi Denpasar employees. 2) Work motivation has a significant effect on the job satisfaction of PT Jaya Abadi Denpasar employees.
3) Job satisfaction has a significant effect on the performance of PT Jaya Abadi Denpasar employees. 4) Leadership style has a significant effect on the performance of PT Jaya Abadi Denpasar employees.
Imported infectious diseases and surveillance in Japan
Summary Surveillance of imported infectious diseases is important because of the need for early detection of outbreaks of international concern as well as the need to inform travelers of risk. This paper attempts to review how the Japanese surveillance system deals with imported infectious diseases and reviews the trends of these diseases. Cases of infection acquired overseas were extracted from the surveillance data for 1999-2008. The incidence and rate of imported cases were observed, by year of diagnosis and place of acquired infection, for each infectious disease with one or more imported cases. During the period, 10,030 cases that could be considered imported infectious diseases were identified. Shigellosis ranked as the most common imported disease, followed by amebiasis, malaria, enterohemorrhagic Escherichia coli infection, the acquired immunodeficiency syndrome, typhoid fever, dengue fever, hepatitis A, giardiasis, cholera, and paratyphoid fever. The annual trends of these diseases always fluctuated, but not every change was investigated. The study reveals that the situation of imported infectious diseases can be identified in the current Japanese surveillance system, with epidemiologic features of both temporal and geographic distribution of cases. However, more timely investigation of unusual increases in infectious diseases is needed.
Introduction
Because of current global travel and trade, there are no borders for infectious diseases. Even in Japan, which lies in a temperate climate zone, many tropical infectious diseases are found in local hospitals, and there have been several case reports describing the difficulty of early diagnosis and treatment. [1-3] It is important to provide information to travelers on particular risks and to increase protection, as well as information for local clinicians on the current endemicity of infections in foreign countries, in order to facilitate early diagnosis and to avoid nosocomial infection. From the viewpoint of public health, the introduction of new pathogens may result in their establishment in the country.
Public health surveillance is one of the essential components of infectious disease control and no doubt a starting point for control. Under current circumstances, a surveillance system should be designed not only at the national level but also at the global level of infectious disease control. The current National Epidemiological Surveillance for Infectious Diseases (NESID) in Japan requires that all notifiable diseases be reported with the presumptive place of infection. This report summarizes NESID data from 1999 to 2008 on the situation of imported infectious diseases in Japan.
Surveillance of infectious diseases in Japan
The National Epidemiological Surveillance for Infectious Diseases (hereafter referred to as NESID) is conducted based on the Law Concerning the Prevention of Infectious Diseases and Medical Care for Patients of Infections (hereafter referred to as the Infectious Disease Control Law), enacted in April 1999. The infectious disease surveillance system before then is described elsewhere. 4 Infectious diseases included in this law were categorized into Categories I-V, with specific means of control based upon the public health impact of each disease, as shown in Table 1.
All physicians must report cases of Categories I-IV immediately, and Category Va within 7 days after identification, to local public health centers, which are the primary-level institutions for disease control and prevention located strategically throughout the nation. Local public health centers are expected to enter data into the nationwide electronic surveillance system, which enables data to be shared throughout the system, including all local public health centers, local and national governments, quarantine stations, local infectious disease surveillance centers, local public health laboratories, and the central infectious disease surveillance center, which is the Infectious Disease Surveillance Center of the National Institute of Infectious Diseases. Category Vb diseases, which include sentinel reporting diseases, should be reported by designated sentinel medical institutions weekly or monthly, with the number of clinical cases aggregated by sex and age group. All reports should be compatible with the reporting criteria, which are documented in detail for each disease, including clinical and laboratory case definitions for Category Va and the hospital sentinel reporting diseases of Category Vb, and only clinical case definitions for the other Vb sentinel reporting diseases. 5 Cases of Category I-Va diseases should be reported with sex, age, method of laboratory confirmation, symptoms on diagnosis (descriptive), date of onset, date of consultation, date of diagnosis, estimated date of infection, date of death (if the patient died), area of permanent residence (in-country or foreign country), presumptive place of infection (domestic or foreign country), contact with vectors or activities in the field (yes or no), estimated infection route, and other patients among family members, colleagues, or neighbors (cluster or not). The presumptive place where infection was acquired should be described based on a reasonable assessment of the travel history and incubation period according to the interview with the patient.
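As an illustration only, the following sketch renders the reporting fields just listed as a simple record type; the field names and the is_imported rule are our own illustrative choices, not the official NESID schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CaseReport:
    # Sketch of the notification fields listed above (Categories I-Va);
    # field names are illustrative, not the official NESID schema.
    disease: str
    sex: str
    age: int
    lab_confirmation: str
    symptoms: str
    date_of_onset: Optional[date]
    date_of_diagnosis: Optional[date]
    residence: str              # in-country or foreign country
    place_of_infection: str     # "domestic" or a presumed country
    vector_or_field_contact: bool
    estimated_route: str
    cluster: bool               # other cases among family/colleagues/neighbors

def is_imported(report: CaseReport) -> bool:
    # A case is treated as imported when the presumptive place of
    # infection is outside Japan.
    return report.place_of_infection not in ("domestic", "Japan")
```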
Surveillance data and method of analysis
The cases with a presumptive place of infection in a foreign country (hereafter referred to as imported cases) were extracted from the NESID data from April 1999 to March 2008. Data for 1999 are available only for April-December because of the change in the law in April 1999, and data for 2008 are included up to March; in total, nine years of data are reviewed. Annual trends of total, imported, and domestic cases are recorded for each disease with one or more imported cases, and attributable events and causes are investigated using the information in the line-listing data and relevant epidemiological reports. Incidence rates per 1,000,000 population are calculated using the 2002 census population, and rates of imported disease per 1,000,000 outbound travelers are calculated using the 2002 outbound-traveler statistics of the Japan National Tourist Organization.
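A minimal sketch of these rate calculations is given below; the population and outbound-traveler figures are approximate placeholders, not the official 2002 statistics used in the paper.

```python
# Hypothetical denominators (approximate, for illustration only).
POPULATION_2002 = 127_000_000          # Japanese census population, 2002
OUTBOUND_TRAVELERS_2002 = 16_500_000   # outbound travelers, 2002
YEARS = 9                              # April 1999 - March 2008

def domestic_incidence_per_million(domestic_cases):
    # Annual incidence of domestic cases per 1,000,000 population.
    return domestic_cases / YEARS / POPULATION_2002 * 1_000_000

def imported_rate_per_million_travelers(imported_cases):
    # Annual imported-case rate per 1,000,000 outbound travelers.
    return imported_cases / YEARS / OUTBOUND_TRAVELERS_2002 * 1_000_000

def imported_case_rate(imported, domestic):
    # Share of imported cases among cases with a known place of infection.
    return imported / (imported + domestic)

# Example with hypothetical counts for one disease:
print(imported_rate_per_million_travelers(2_000))
print(domestic_incidence_per_million(5_000))
print(f"{imported_case_rate(2_000, 5_000):.1%}")
```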
Results
In the period observed, 10,030 cases that could be considered imported infectious diseases were identified. These include the various infectious diseases listed in Table 2, with the reported numbers of cases (imported, domestic, unknown, and total), the imported case rate among imported and domestic cases, the incidence rate of domestic cases per year per 1,000,000 population, and the incidence rate of imported cases per year per 1,000,000 outbound travelers.
Shigellosis ranked as the most common imported infection, followed by amebiasis, malaria, enterohemorrhagic Escherichia coli (EHEC) infection, the acquired immunodeficiency syndrome (AIDS), typhoid fever, dengue fever, hepatitis A, giardiasis, cholera, and paratyphoid fever. The rate of imported cases of malaria, dengue fever, and rabies is 100%, as these diseases are not endemic in Japan, and it is over 50% for coccidioidomycosis, paratyphoid fever, typhoid fever, cholera, shigellosis, and echinococcosis (Echinococcus granulosus). Although Coccidioides is not considered to be indigenous, a domestic case was identified with no history of overseas travel. However, this case was a dealer in imported cotton and may have acquired the infection from fungi attached to the imported cotton. 6 The annual trends of imported diseases always fluctuate with the local situation, and sometimes there are sudden increases because of clusters within the same tour groups. Amebiasis has tended to increase recently, in both domestic and imported cases, and cases in which infection was acquired through sexual contact represented 50% of the total. 10 There were continuous reports of imported cases of AIDS, syphilis, and hepatitis B.
Dengue fever is increasing year by year, but malaria is decreasing gradually. Typhoid and paratyphoid fever and hepatitis A showed increases and decreases throughout the observation period. Although outbreaks among group tours to endemic countries were reported to account for some increases in imported diseases, 11 investigations of attributable events or causes were not always made in a timely manner. Retrospective investigation could recognize increases in cases returning from certain countries, but it was difficult to identify further risk factors because only limited information was recorded in the line listing.
Discussion
Public health surveillance is defined by the World Health Organization as the "systematic ongoing collection, collation, and analysis of data and the timely dissemination of information to those who need to know so that action can be taken." The basic principle of disease control and prevention is the same no matter where a disease is acquired, but the target groups who need to know differ. Precautionary information should be communicated to travelers with the risk properly assessed. Rapid detection of cases can, of course, lead to rapid response, early containment, and ultimately prevention of indigenous transmission of exotic pathogens. In this respect, effective infectious disease surveillance is essential. In this study it was not difficult to obtain an overview of the situation of imported infectious diseases, because the current Japanese surveillance system requires the presumptive place of infection, including the specific country if possible. However, there have been no studies evaluating the reporting rate. Before April 1999, when the infectious disease control law was revised, 50-80 cases of malaria were reported annually in the framework of the old infectious disease prevention law, but the Research Group on Chemotherapy of Tropical Diseases reported, from its field investigation, that approximately 120 patients were confirmed with malaria annually. 12 This means that cases notified to the Ministry of Health and Welfare represented about 50% in those days. Reports of malaria peaked in 2000, one year after the new law's enactment, and steadily decreased to the same level as before 1999. On the other hand, dengue fever is increasing. As it is not possible to determine whether the reporting rate of malaria has decreased recently, it will be necessary to evaluate the surveillance system, including an assessment of missed opportunities for diagnosis and treatment. It might be better to report febrile illness with a travel history abroad for effective detection and evaluation.
It is natural that reported imported cases of malaria, dengue fever and rabies are complete. The domestic case of coccidioidomycosis is reported to have been caused by imported materials. A proportion of typhoid/paratyphoid fever, cholera and shigellosis cases are undoubtedly acquired inside the country, because there is no history of travel abroad. As there is a report of Vibrio cholerae from imported food, 13 further investigation of the source of infection in each case is required.
Most of the imported cases were reported with the suspected country of infection. Analysis using 2006–2008 data showed the suspected countries of infection (in descending order) to be India, the Philippines, and Indonesia for cholera; India, Indonesia, and China for shigellosis; India, Indonesia and Nepal for typhoid/paratyphoid fever; the Philippines, Thailand and India for dengue fever; and Papua New Guinea, Nigeria, India and Indonesia for malaria. However, these rankings depend upon the number of travelers and the local situation in each country, which may change year by year. A more detailed analysis using country-specific traveler numbers is necessary.
AIDS, syphilis, hepatitis B, and giardiasis also occur among imported infectious diseases. As they share the unique features of sexually transmitted diseases (STDs), it might be better to handle them separately. Nevertheless, it is important to monitor imported STDs, because they could increase local infection rates, and to provide information for travelers.
In the current study, it was noted that unusual increases in reported imported infectious diseases were not fully investigated for attributable events or causes in a timely manner, although several events affecting the number of reports were identified. Retrospective analysis can provide the country of infection, but more timely information is necessary for travelers. The capacity for timely investigation and risk assessment should be enhanced further.
The results of investigating an outbreak among a tour group sharing a common source of infection, or a cluster in the time and place of travel of individually unrelated tourists, may reflect or be involved in a local epidemic, which can be linked to international investigation and control activities. Under the current circumstances of pandemic alert, timely sharing of information on imported infectious diseases at the global level will also be necessary.
Measuring the Wess-Zumino Anomaly in Tau Decays
We propose to measure the Wess-Zumino anomaly contribution by considering angular distributions in the decays $\tau\to\nu_\tau \eta\pi^{-}\pi^{0}$, $\tau\to\nu_\tau K^- \pi^{-} K^+$ and $\tau\to\nu_\tau K^- \pi^- \pi^+$. Radial excitations of the $K^*$, which cannot be seen in $e^+e^-$, should be observed in the $K^-\pi^+\pi^-$ decay channel.
Introduction
With the experimental progress in τ decays, an ideal tool for studying strong interaction physics has been developed. In this paper we show that several decays can be used to test the Wess-Zumino anomaly [1]. It appears that the anomaly violates the rule that the weak axial-vector and vector currents produce an odd and an even number of pseudoscalars, respectively. From its structure it can be seen that the anomaly possibly contributes to τ decays into $\nu_\tau + n$ mesons (with $n \geq 3$). Of course the gold-plated decay is $\tau \to \nu_\tau \eta\pi^-\pi^0$, which has a vanishing contribution from the axial-vector current [2,3,4]. Therefore a detected $\eta\pi^-\pi^0$ final state implies a nonvanishing anomaly. A recent measurement of its width [5] confirms the CVC predictions [3,4]. In this paper we will present the most general angular distribution of the $\eta\pi^-\pi^0$ system in terms of the vector current formfactor for this channel. Furthermore, we demonstrate that other decay modes into three pseudoscalars can also be used to confirm (not only qualitatively) the presence of the Wess-Zumino anomaly. However, the most prominent decay channel into three pseudoscalars, i.e. $\pi^-\pi^-\pi^+$, cannot be used, since G-parity forbids the anomaly to contribute. Besides the $\eta\pi^-\pi^0$ channel, interesting candidates are the decay channels $K^-\pi^-K^+$ and $K^-\pi^-\pi^+$. Recently the corresponding branching ratios have been reconsidered [4], and it appears that the branching ratios alone are not sufficient to determine the presence of the anomaly. In the course of this paper we will show that a detailed study of angular distributions, as defined in [6,7], is well suited to extract the contribution of the anomaly. Our paper is organized as follows: In section 2 we introduce the kinematical parameters, which are adapted to the present experimental situation where the direction of flight of the tau lepton cannot be reconstructed and only the hadrons are detected. Then we present, following [6,7], the most general angular distribution of the three hadrons in terms of hadronic structure functions. The dependence on the τ polarization is included. By considering adequate moments, we show in section 3 that all of the hadronic structure functions can be measured without reconstructing the τ rest frame. Section 4 is devoted to the hadronic model [4] encoded in the structure functions. We present explicit parametrizations of the formfactors for the decays into $\eta\pi^-\pi^0$, $K^-\pi^-K^+$ and $K^-\pi^-\pi^+$. The different parameters of the model have been moved to appendix A.
Finally, numerical results for the hadronic structure functions of the considered channels are presented in section 5, proving that an experimental determination of the anomaly is feasible. We anticipate our results and urge experimentalists to analyse the $K^-\pi^-\pi^+$ channel, which could contain radial excitations of the $K^*$ that cannot be obtained in $e^+e^-$ experiments.
Lepton Tensor and Angular Distributions
Let us consider the following τ decay
$$\tau^-(l,s) \to \nu_\tau(l') + h_1(q_1,m_1)\,h_2(q_2,m_2)\,h_3(q_3,m_3), \tag{1}$$
where $h_i(q_i,m_i)$ are pseudoscalar mesons. The matrix element reads
$$\mathcal{M} = \frac{G}{\sqrt{2}}\left\{\begin{matrix}\cos\theta_C\\ \sin\theta_C\end{matrix}\right\} M_\mu J^\mu, \tag{2}$$
with $G$ the Fermi coupling constant. The cosine and the sine of the Cabibbo angle ($\theta_C$) in (2) have to be used for Cabibbo-allowed $\Delta S = 0$ and Cabibbo-suppressed $|\Delta S| = 1$ decays, respectively. The leptonic ($M_\mu$) and hadronic ($J^\mu$) currents are given by
$$M_\mu = \bar{u}(l')\gamma_\mu(1-\gamma_5)u(l) \tag{3}$$
and
$$J^\mu = \langle h_1(q_1)h_2(q_2)h_3(q_3)\,|\,V^\mu - A^\mu\,|\,0\rangle, \tag{4}$$
where $V^\mu$ and $A^\mu$ are the vector and axial-vector quark currents, respectively. The most general ansatz for the matrix element of the quark current $J^\mu$ in (4) is characterized by four formfactors [7,4]:
$$J^\mu = V_1^\mu F_1 + V_2^\mu F_2 + i\,V_3^\mu F_3 + V_4^\mu F_4, \tag{5}$$
with $V_1^\mu = q_1^\mu - q_3^\mu - Q^\mu\, Q\!\cdot\!(q_1-q_3)/Q^2$, $V_2^\mu = q_2^\mu - q_3^\mu - Q^\mu\, Q\!\cdot\!(q_2-q_3)/Q^2$, $V_3^\mu = \epsilon^{\mu\alpha\beta\gamma}\, q_{1\alpha} q_{2\beta} q_{3\gamma}$, $V_4^\mu = Q^\mu$ and $Q = q_1 + q_2 + q_3$.
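To make the decomposition concrete, here is a minimal numerical sketch in Python (ours, not from the paper) that builds the kinematic vectors $V_1,\dots,V_4$ from three four-momenta, assuming the Kühn–Mirkes-type form quoted above; all function names are our own.

```python
import numpy as np
from itertools import permutations

G_METRIC = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+,-,-,-)

def levi_civita4():
    """Rank-4 Levi-Civita symbol with eps[0,1,2,3] = +1."""
    eps = np.zeros((4, 4, 4, 4))
    for p in permutations(range(4)):
        sign = 1
        for i in range(4):
            for j in range(i + 1, 4):
                if p[i] > p[j]:
                    sign = -sign
        eps[p] = sign
    return eps

EPS = levi_civita4()

def hadronic_basis(q1, q2, q3):
    """Kinematic vectors multiplying F1..F4 in eq. (5).
    q1, q2, q3: four-momenta as arrays [E, px, py, pz]."""
    Q = q1 + q2 + q3
    Q2 = Q @ G_METRIC @ Q
    # transverse projections entering the axial-vector terms
    V1 = (q1 - q3) - Q * ((Q @ G_METRIC @ (q1 - q3)) / Q2)
    V2 = (q2 - q3) - Q * ((Q @ G_METRIC @ (q2 - q3)) / Q2)
    # Wess-Zumino anomaly vector: eps^{mu alpha beta gamma} q1_a q2_b q3_g
    V3 = np.einsum('mabc,a,b,c->m', EPS, G_METRIC @ q1, G_METRIC @ q2, G_METRIC @ q3)
    V4 = Q  # spin-zero piece, multiplied by F4 (neglected below)
    return V1, V2, V3, V4
```

One can check numerically that $Q_\mu V_i^\mu = 0$ for $i = 1, 2, 3$, which is the transversality separating the spin-one part from the spin-zero $V_4$ term.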
The Wess-Zumino anomaly, which is of main interest in the present paper, gives rise to the term proportional to $F_3$. The terms proportional to $F_1$ and $F_2$ originate from the axial-vector current. Together they correspond to a spin-one hadronic final state, while the $F_4$ term is due to the spin-zero part of the axial-vector current. As has been shown in [4], the spin-zero contributions are extremely small and we neglect them in the rest of this paper, i.e. $F_4$ is set equal to zero. The differential decay rate is obtained from the contraction of the lepton and hadron tensors, $d\Gamma \propto L_{\mu\nu} H^{\mu\nu}$, where $L_{\mu\nu} = M_\mu (M_\nu)^\dagger$ and $H^{\mu\nu} \equiv J^\mu (J^\nu)^\dagger$. Reaction (1) is most easily analyzed in the hadronic rest frame, $\vec q_1 + \vec q_2 + \vec q_3 = 0$. The orientation of the hadronic system is characterized by three Euler angles ($\alpha$, $\beta$ and $\gamma$) introduced in [6,7]. In current $e^+e^-\,(\to \tau^+\tau^- (\to \nu_\tau\, 3\ \text{mesons}))$ experiments, two out of the three Euler angles are measurable. The measurable ones are defined by
$$\cos\beta = \hat n_L \cdot \hat n_\perp, \qquad \cos\gamma = -\,\frac{\hat n_L \cdot \hat q_3}{|\hat n_L \times \hat n_\perp|}, \tag{8}$$
where ($\hat a$ denotes a unit three-vector) • $\hat n_L = -\hat n_Q$, with $\hat n_Q$ the direction of the hadrons in the lab frame, • $\hat n_\perp = \widehat{q_1 \times q_2}$, the normal to the plane defined by the momenta of particles 1 and 2.
Note that the angle $\gamma$ defines a rotation around $\hat n_\perp$ and determines the orientation of the three hadrons within their production plane. The definition of the angles $\beta$ and $\gamma$ is shown in fig. 1. Performing the integration over the unobservable neutrino and the unobservable Euler angle $\alpha$, we obtain the differential decay width for a polarized τ [6,7] (eq. (9)). In (9) we have defined the invariant masses in the Dalitz plot, $s_i = (q_j + q_k)^2$ (where $i, j, k = 1, 2, 3$; $i \neq j \neq k$), and the square of the invariant mass of the hadron system, $Q^2 \equiv (q_1 + q_2 + q_3)^2$. The angle $\theta$ is related to the hadronic energy in the lab frame, $E_h$, by eq. (10) [8,6,7]. Another quantity depending on this energy $E_h$ is the angle $\psi$ of eq. (12), which will be of some interest in the subsequent discussion. Finally, in the case where the spin-zero part of the hadronic current $J^\mu$ is zero ($F_4 = 0$ in (5)), $\sum_X \bar L_X W_X$ is given by a sum of nine terms $\bar L_X W_X$ with $X \in \{A, B, C, D, E, F, G, H, I\}$, corresponding to nine density matrix elements of the hadronic system in a spin-one state [7]. In (14), $P$ denotes the polarization of the τ in the laboratory frame, while $\theta$ and $\psi$ are defined in eqs. (10,12). For LEP physics ($Z$ decay), $P$ is given by
$$P = -\frac{2\, v_\tau a_\tau}{v_\tau^2 + a_\tau^2},$$
with $v_\tau = -1 + 4\sin^2\theta_W$ and $a_\tau = -1$; for lower energies $P$ vanishes. In this case (for ARGUS, CLEO), (14) simplifies accordingly (eq. (15)). Note that the full dependence on the τ polarization $P$, the hadron energy (through $\theta$ and $\psi$) and the angles $\beta$ and $\gamma$ is given in eqs. (13) to (15). The hadronic functions $W_X$ contain the dynamics of the hadronic decay and depend in general on $s_1$, $s_2$ and $Q^2$. Let us recall that we are working in the hadronic rest frame with the $z$- and $x$-axes aligned along $\hat n_\perp$ and $\hat q_3$, respectively (see fig. 1). The hadronic tensor $H^{\mu\nu} = J^\mu (J^\nu)^\dagger$ (with $J^\mu$ given in (5)) is calculated in this frame, and the hadronic structure functions $W_X$ are linear combinations of density matrix elements obtained from the polarization vectors $\epsilon^\mu(\lambda)$, $\lambda = \pm 1, 0$, for a hadronic system in a spin-one state, defined with respect to the normal to the three-meson plane in the hadronic rest frame. The pure spin-one structure functions are expressed in terms of quantities $x_i$, which are functions of the Dalitz plot variables and $Q^2$. Here $E_i$ and $\vec q_i$ refer to the components of the hadron momenta in the hadronic rest frame. Note that the structure functions $W_{B,F,G,H,I}$ are related to the anomaly formfactor $F_3$. In the following we will consider these structure functions in more detail.
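As an illustration of eq. (8), the following sketch (our own, not from [6,7]) computes the two measurable Euler angles from the meson three-momenta in the hadronic rest frame and the lab direction of the hadronic system:

```python
import numpy as np

def unit(v):
    """Normalize a three-vector."""
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def measurable_euler_angles(q1, q2, q3, n_Q):
    """cos(beta) and cos(gamma) of eq. (8).
    q1, q2, q3: meson three-momenta in the hadronic rest frame;
    n_Q: direction of flight of the hadronic system in the lab."""
    n_L = -unit(n_Q)                   # n_L = -n_Q
    n_perp = unit(np.cross(q1, q2))    # normal to the (q1, q2) plane
    cos_beta = n_L @ n_perp
    cos_gamma = -(n_L @ unit(q3)) / np.linalg.norm(np.cross(n_L, n_perp))
    return cos_beta, cos_gamma
```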
Definition of Moments
Equation (9) provides the full description of the angular distribution of the decay products from a single polarized τ. It reveals that the measurement of the structure functions $W_X$, and therefore the measurement of the anomaly formfactor $F_3$, is possible in currently ongoing high-statistics experiments. In the following we will concentrate on the $s_1, s_2$-integrated structure functions
$$w_X(Q^2) = \int ds_1\, ds_2\; W_X(s_1, s_2, Q^2). \tag{22}$$
A possible strategy to isolate the various structure functions in (9) is to take suitable moments of the differential decay distribution [7] (eq. (23)), which yields the products $R_{c/s}(Q^2)\,w_X(Q^2)$, where the function $R_{c/s}(Q^2)$ collects the Cabibbo and phase-space factors (eq. (25)). Some comments are in order here: • First note that after integration over the angles $\beta$ and $\gamma$, the preceding expressions still depend on $P$ and $E_h$ (through $\theta$ and $\psi$), while the hadronic structure functions $w_X$ are functions of $Q^2$.
• The sum $w_A + w_B$ is closely related to the spin-one part of the spectral function, and from it we obtain the standard form for the total width. In our figures in section 5 we will present the functions $R_{c/s}(Q^2)\,w_X(Q^2)$ as well as numerical results for the hadronic structure functions themselves.
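To illustrate the moment method, a schematic Monte Carlo projection might look as follows; the angular weights shown in the comments only indicate which structure functions each moment is sensitive to, and the exact coefficients must be taken from [6,7]:

```python
import numpy as np

def angular_moments(cos_beta, gamma, weights=None):
    """Average simple angular functions over events in one Q^2 bin,
    cf. the moments of eq. (23). Inputs are per-event arrays."""
    w = np.ones_like(cos_beta) if weights is None else weights
    avg = lambda f: np.average(f, weights=w)
    return {
        "<1>":               avg(np.ones_like(cos_beta)),     # sensitive to w_A + w_B
        "<(3cos^2b - 1)/2>": avg((3 * cos_beta**2 - 1) / 2),  # separates w_A from w_B
        "<cosb cosg>":       avg(cos_beta * np.cos(gamma)),   # axial/vector interference
        "<cosb sing>":       avg(cos_beta * np.sin(gamma)),   # further interference terms
    }
```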
In the next section we present an explicit parametrization of the formfactors, which is used in our numerical simulations in order to test whether the anomaly can be measured experimentally.
Formfactors
In a recent paper [4] we have given an explicit parametrization of the form factors and compared it successfully with measured widths. The physical idea behind the model for the formfactors can be summarized as follows: the formfactors are dominated by resonances, described by normalized Breit-Wigner propagators (see appendix A). We now present our parametrization for the formfactors $F_i$ defined in (5), which fulfills all these requirements. First we present the formfactors induced by the anomaly. The parametrization of the $\eta\pi^-\pi^0$ channel is obtained from $e^+e^-$ data via CVC [3,4]. It is given as a product of two functions describing the resonances in $Q^2$ and $s_i$. The same $Q^2$ dependence ($T^{(2)}_\rho[Q^2]$) can be used for the $K^-\pi^-K^+$ channel. Of course the two-body channels have to be modified, since they involve a $\rho$ and a $K^*$ as well; we have included these contributions in $T_{\rho K^*}(s_2, s_1, \alpha)$ [4]. The same function $T_{\rho K^*}$ also enters in the $K^-\pi^-\pi^+$ channel. Unfortunately, nothing is known experimentally about the three-body resonances in this channel. In [4] only the $K^*$ resonance ($T^{(1)}_{K^*}$) has been included. In our numerical results we will also use a different parametrization including more $\Delta S = 1$ vector resonances.
Second, the axial-vector current induces two formfactors, $F_1$ and $F_2$, for the $K^-\pi^-K^+$ and $K^-\pi^-\pi^+$ channels. Note that G-parity forbids the axial-vector current to contribute to the $\eta\pi^-\pi^0$ channel. In the axial-vector channel we assume the dominance of a resonance in each channel, i.e. the $A_1$ and the $K_1$ in the $\Delta S = 0$ and $\Delta S = 1$ channels, respectively. The two-body channels are again parametrized by the functions $T^{(1)}_\rho$ and $T^{(1)}_{K^*}$. We have moved explicit expressions for these functions and all numerical parameters (taken from [4]) to appendix A.
Numerical results
In this section we present numerical results for $R_{c/s}(Q^2) \cdot w_X(Q^2)$ (defined in (22,25)) as well as for the hadronic structure functions $w_X(Q^2)$, separately for the different decay channels. We prefer to present both $R_{c/s} \cdot w_X$ and $w_X$ in order to show the effect of the phase space (included in the function $R_{c/s}$), while the hadronic physics is more visible in the plots of $w_X$ alone. Although by integrating over $s_1$ and $s_2$ we have lost information on the resonances in the two-body decays, we still observe interesting structures.
Let us start with the Cabibbo-allowed decay $\tau \to \nu_\tau\, \eta\pi^-\pi^0$. As mentioned before, this channel has a vanishing contribution from the axial-vector current (G-parity), which implies that only $w_B$ is different from zero. A comparison of the data with our prediction for $R_C \cdot w_B$ and $w_B$ in fig. 2a,b would be highly interesting, especially a confirmation of the higher-lying $\rho$ resonances in $T^{(2)}_\rho(Q^2)$ (the shoulder at $Q^2 = 3\,\mathrm{GeV}^2$ in fig. 2a,b).
The next process is the Cabibbo-allowed decay $\tau \to \nu_\tau K^-\pi^-K^+$, which has contributions from both the axial-vector and vector currents. Therefore all nine structure functions are different from zero. In fig. 3a we present the structure function combinations obtained from the $\langle 1 \rangle$ and $\langle (3\cos^2\beta - 1)/2 \rangle$ moments. Note that a measurement of the differential decay width (proportional to $\langle 1 \rangle$) is not enough to separate $w_A$ and $w_B$. We observe a sizeable effect of $w_B$, which makes a determination of $F_3$ possible. Note that this sizeable effect is due to heavy $\rho'$ resonances in the anomaly channel, whose existence is predicted from the description of $e^+e^- \to \eta\pi\pi$. In order to get a feeling for the effects of the phase space in this channel, we present the combinations $w_A + w_B$ and $w_A - 2w_B$ as well as the structure functions $w_A$ and $w_B$ in fig. 3b. The moments which measure the interference of the axial-vector and vector currents are presented in fig. 3c. The size of these moments is comparable to those in fig. 3a, and their very peculiar shape would make them measurable too. For completeness we present the remaining moments in fig. 3d.
Finally, we discuss the Cabibbo-suppressed decay $\tau \to \nu_\tau K^-\pi^-\pi^+$ in figs. 4a-d for the parametrization with $T^{(1)}_{K^*}$ in (30). Note that although this decay is Cabibbo-suppressed, the moments are comparable in size to those of the $K^-\pi^-K^+$ case (the suppression due to the mass of the additional kaon in the phase space is comparable to the Cabibbo suppression). All moments in figs. 4a-d have a shape which shows the strong presence of the $K_1$ resonance in the axial channel. A measurement of the structure functions related to the anomaly seems very hard, since $F_3$ is very small in this parametrization. Of course this unfavorable result could have been anticipated, since the contribution of the anomaly to the rate, as computed in [4], was of the order of 1%. However, we should note that our parametrization of the anomaly form factor [4] includes only a $K^*$, which can never be on mass shell and therefore produces no strong enhancement. On the other hand, in this channel we have no CVC prediction which could tell us whether heavier resonances are present. In order to get a feeling for possible effects of heavier $K^*$ resonances, we propose the following parametrization, which is $\rho$-channel inspired (see eq. (41)). With this parametrization we obtain the results in figs. 5a-c. Note that fig. 4d is not changed. For completeness we present the results for the total decay widths $\Gamma_{\eta\pi^-\pi^0}$, $\Gamma_{K^-\pi^-K^+}$ and $\Gamma_{K^-\pi^-\pi^+}$, normalized to the electronic width $\Gamma_e$ ($\Gamma_e/\Gamma_{tot} \approx 18\%$) [4]. In view of this result we urge our experimental colleagues to study this Cabibbo-suppressed channel carefully.
Conclusions
In this paper we have proposed to measure moments (eq. (23)) which allow one to determine quantitatively the contribution of the Wess-Zumino anomaly to τ decays into three mesons. We have considered the channels $\eta\pi^-\pi^0$, $K^-\pi^-K^+$ and $K^-\pi^-\pi^+$. We have shown that measuring the unique moment of the $\eta\pi^-\pi^0$ channel allows one to verify the CVC prediction, especially the presence of heavy $\rho$ excitations observed in $e^+e^-$ data [10]. In the $K^-\pi^-K^+$ channel we can define many more moments, because of the interference of the anomaly with the axial-vector contributions. In our prediction (fig. 2) the effect of the heavier $\rho$ is again clearly seen.
The interest of the analysis of the $K^-\pi^-\pi^+$ channel is twofold: we learn something about resonances, first in the axial-vector channel and second in the vector channel. We noted that, in contradistinction to the Cabibbo-allowed decays, the vector channel for Cabibbo-suppressed decays cannot be predicted through CVC from $e^+e^-$ data.
A Parameters used in the formfactors
As stated in section 4, the formfactors are dominated by resonances. The effects of these resonances are described by functions $BW_R(s)$, which are normalized ($BW(0) = 1$) Breit-Wigner propagators. We will use two kinds of Breit-Wigners: • with energy-dependent width,
$$BW_R(s) = \frac{M_R^2}{M_R^2 - s - i\sqrt{s}\,\Gamma_R(s)},$$
where the energy-dependent width $\Gamma_R(s)$ is computed from its usual definition;
• with constant width,
$$BW_R(s) = \frac{M_R^2}{M_R^2 - s - i M_R \Gamma_R}.$$
First we define the parameters which arise in the axial-vector three-body channel. In the Breit-Wigner of the $A_1$ we use an energy-dependent width proportional to the function $g(s)$, which has been calculated in [9]. For the $K_1$, in contrast, a constant width is used, because the decay of the $K_1$ is experimentally not well known.
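For illustration, the two Breit-Wigner variants could be coded as below; the specific form of the energy-dependent width (a P-wave two-body width) is our assumption, since the paper only states that it is computed "from its usual definition":

```python
import numpy as np

def bw_const(s, M, Gamma):
    """Normalized Breit-Wigner with constant width; BW(0) = 1."""
    return M**2 / (M**2 - s - 1j * M * Gamma)

def width_pwave(s, M, Gamma0, m1, m2):
    """Assumed energy-dependent width for a P-wave decay R -> m1 m2."""
    if s <= (m1 + m2)**2:
        return 0.0
    p = lambda x: np.sqrt((x - (m1 + m2)**2) * (x - (m1 - m2)**2)) / (2 * np.sqrt(x))
    return Gamma0 * (M**2 / s)**0.5 * (p(s) / p(M**2))**3

def bw_running(s, M, Gamma0, m1, m2):
    """Normalized Breit-Wigner with energy-dependent width; BW(0) = 1."""
    return M**2 / (M**2 - s - 1j * np.sqrt(s) * width_pwave(s, M, Gamma0, m1, m2))
```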
The Cabibbo-allowed vector formfactor is obtained from CVC and yields the expression of [10].

Figure captions:
Fig. 1: Definition of the polar angle $\beta$ and the azimuthal angle $\gamma$. $\beta$ denotes the angle between $\hat n_\perp$ and $\hat n_L$; $\gamma$ denotes the angle between the ($\hat n_L$, $\hat n_\perp$) plane and the ($\hat n_\perp$, $\hat q_3$) plane.
Fig. 2: a) $Q^2$ dependence of $w_B \cdot R_C$ for the decay channel $\eta\pi^-\pi^0$; b) $Q^2$ dependence of $w_B$ for the decay channel $\eta\pi^-\pi^0$.
Fig. 3: a) $Q^2$ dependence of $(w_A + w_B) \cdot R_C$, $(w_A - 2w_B) \cdot R_C$, $w_A \cdot R_C$ and $w_B \cdot R_C$ for the decay channel $K^-\pi^-K^+$; b) $Q^2$ dependence of $(w_A + w_B)$, $(w_A - 2w_B)$, $w_A$ and $w_B$ for the decay channel $K^-\pi^-K^+$; c) $Q^2$ dependence of $w_F \cdot R_C$, $w_G \cdot R_C$, $w_H \cdot R_C$ and $w_I \cdot R_C$ for the decay channel $K^-\pi^-K^+$; d) $Q^2$ dependence of $w_C \cdot R_C$, $w_D \cdot R_C$ and $w_E \cdot R_C$ for the decay channel $K^-\pi^-K^+$.
Fig. 4: a) $Q^2$ dependence of $(w_A + w_B) \cdot R_S$, $(w_A - 2w_B) \cdot R_S$ and $w_B \cdot R_S$ for the decay channel $K^-\pi^-\pi^+$ with the parametrization $T^{(1)}_{K^*}$ in (42); b) $Q^2$ dependence of $(w_A + w_B)$, $(w_A - 2w_B)$ and $w_B$ for the same channel and parametrization; c) $Q^2$ dependence of $w_F \cdot R_S$, $w_G \cdot R_S$, $w_H \cdot R_S$ and $w_I \cdot R_S$ for the same channel and parametrization; d) $Q^2$ dependence of $w_C \cdot R_S$, $w_D \cdot R_S$ and $w_E \cdot R_S$ for the same channel and parametrization.
Prediction of Stature According to Three Head Measurements
Morphometry is a non-invasive method important for assessing the size, proportions, and composition of the human body. Because prediction is an important subject in almost all life sciences, the aim of this research was to find out which cephalofacial variables may predict body height. Three cephalofacial variables and the body height were measured on 288 adult male subjects of the Kosovo Albanian population, aged 18–35 years. Using the procedures of regression analysis, we found that the cephalofacial variables head height, head circumference, and face height may significantly predict body height, and in this way we derived a specific regression equation for the estimation of body height based on these cephalofacial variables. The predicted body height has a relatively high, significant correlation (0.512) with the true body height, and a paired t-test indicates significant similarity between these two variables.
INTRODUCTION
Morphology is a branch of bioscience that studies the form, structure and structural features of organisms. The most widely applied, non-invasive, and inexpensive measurement method of morphology is morphometry, which is important for assessing the size, proportions, and composition of the human body, and may be useful in the field of biometrics for a better understanding of the development of the human body (Guan & Zhuang, 2011).
During embryonic development, the skeletal elements develop through two different embryonic processes. While endochondral ossification gives rise to long bones, facial bones, vertebrae, and the lateral clavicles, intramembranous ossification gives rise to the flat bones that comprise the cranium and the medial clavicles (Ornitz & Marie, 2002).
Except in some pathological cases, or under some ecological factors when the different elements of the skeleton grow at different rates, the human body height almost always has proportional biological relationships with other parts of the body (Jantz & Jantz, 1999; Stinson & Frisancho, 1978).
A great challenge for anthropologists is to observe and compare the relationship between stature and other segments of the body (Coon, 1939; Dhima, 1985; Rexhepi & Meka, 2008). Thus, many of them have attempted, with various methods, to explore, find, and quantify the relationships between craniofacial (cephalofacial) variables and body height. Stature reconstruction is important as it provides a forensic anthropological estimate of the height of a person in the living state, playing a vital role in the identification of individuals from their skeletal remains (Agnihotri et al., 2011; Krishan et al., 2010; Ryan & Bidmos, 2007; Waghmare et al., 2011).
Many studies have been conducted on the estimation of stature from various body parts, such as the hands, trunk, intact vertebral column, upper and lower limbs, individual long and short bones, foot and footprints, as well as the head (Agnihotri et al.; Auyeung et al., 2009; Duyar & Pelin, 2003, 2010; Habib & Kamal, 2010; Krishan, 2008; Ozaslan et al., 2012). According to Ozaslan et al., it is possible to estimate, or predict, human body height based on hand or foot measurements; they also found that length measurements are more reliable than breadth measurements for the predictive purposes of stature.
Even though various studies have recorded changes in craniofacial shape that occur throughout life, as well as changes in the size of various craniofacial measurements, including head length, width, and circumference, bizygomatic breadth, and anterior face height, the estimation of stature from craniofacial dimensions is without doubt important in forensic cases. Regarding these facts, the conclusions of researchers differ: some have found strong correlations between cephalofacial measurements and stature, whereas others have not. Patil & Mody (2005), in order to predict body height in males as well as females, derived regression equations using only the measurement of maximum head length, specified as glabella-opisthocranion.
Ryan & Bidmos found moderate correlations of up to 0.45 between an individual cranial measurement and skeletal height, and up to 0.54 for combinations of cranial measurements chosen using stepwise regression. Kalia et al. (2008) tried to predict body height based on the dimensions of the teeth; according to them, even though tooth dimensions alone may not be useful in the prediction of body height, the skull together with the teeth may provide accurate predictive clues for human body height. Rao et al. (2009), using the lengths of the coronal and sagittal sutures, attempted to derive regression equations for the estimation of body height. They found a significant correlation, with a coefficient of 0.363, between body height and coronal suture length in males. Pelin et al. (2010) measured 286 healthy, living, male Turkish subjects and did not find any strong correlation between their cephalofacial measurements and body height. According to them, cephalofacial dimensions are not good predictors of body height for the Turkish population.
According to Agnihotri et al., Asha & Lakshmi Prabha (2011), Krishan & Kumar (2007) and Krishan (2008), the cephalofacial measurements are strongly and positively correlated with stature. Their regression analyses showed that the cephalic measurements give a better prediction of stature than the facial measurements.
Based on the results of Ilayperuma (2010), cranial dimensions provide an accurate and reliable means of estimating the height of an individual.
Because the prediction is one of the most important subjects in almost all life sciences, the aim of this research was to find out the cephalofacial variables that may predict the body height.
MATERIAL AND METHOD
The present study is a part of the project "Morphological characteristics of the Kosovo Albanian population" that was conducted at the Institute of Sports Anthropology in Prishtina, Kosovo, and received approval from the Ethics Committee of the University Clinical Center in Prishtina.
According to the International Biological Program, three cephalofacial variables were measured, as well as the body height (stature). The measurements were done on 288 adult male subjects of the Kosovo Albanian population during the period 1997-2002. The examined subjects, aged 18–35 years, were chosen randomly, respecting the rule that their psycho-physical, skeletal, dental and soft tissue condition was normal.
The estimation of stature from body parts involves specialized anthropometric techniques applied with great precision (Krishan & Kumar).
Following the definitions of Martin & Saler (1959), the morphometric variables were measured with classical anthropometric instruments: an anthropometer, a cephalometer, a metric tape, as well as a sliding compass, with an accuracy of 1 mm.
Because body height is the most stable longitudinal dimension of the human body, this study was carried out to investigate the possibility of estimating the body height of a person based on their cephalofacial variables, by application of regression analysis.
The following morphometric variables were measured: head height, head circumference, face height, and body height (stature). Statistical analysis was performed using SPSS statistical software (version 17.0). Regression analysis is one of the most commonly used statistical techniques, with applications in almost every scientific field. Whereas canonical correlation simultaneously predicts multiple dependent variables from multiple independent variables, regression analysis predicts a single dependent (criterion) variable from a set of multiple independent (predictor) variables, i.e., it explores their relationships.
Through regression analysis we explored the possibility of predicting body height (stature) from three selected cephalofacial variables: head height, head circumference, and face height.
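As a sketch of this analysis outside SPSS, the same multiple regression can be reproduced with, e.g., Python's statsmodels; the numbers below are synthetic stand-ins for the 288 measurements, so only the procedure, not the output, mirrors the study:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)
n = 288  # sample size of the study

# Synthetic stand-ins (cm) for the three cephalofacial predictors and stature
head_height = rng.normal(13.0, 0.7, n)
head_circ   = rng.normal(56.0, 1.5, n)
face_height = rng.normal(12.0, 0.6, n)
stature = 60 + 2.0*head_height + 1.5*head_circ + 1.0*face_height + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([head_height, head_circ, face_height]))
model = sm.OLS(stature, X).fit()  # ordinary least-squares fit

print(model.rsquared)              # share of stature variance explained (26.2% in the study)
print(model.params)                # intercept and regression coefficients
print(durbin_watson(model.resid))  # residual autocorrelation check (1.555 in the study)
```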
RESULTS AND DISCUSSION
Considering that our aim was to find out whether the three selected cephalofacial variables significantly predict body height, these variables were treated as the predictor (independent) variables, and body height as the criterion (dependent) variable.
Using regression analysis, we tested whether the set of cephalofacial variables explains a statistically significant amount of the variability (p < 0.05) of body height.
The null hypothesis of the regression analysis in this study is that the three selected cephalofacial variables do not explain the variability of body height, while the alternative hypothesis is that they explain a statistically significant amount of the variability.
According to the data of Table I, the system of independent variables (head height, head circumference, and face height) significantly (p < 0.001) predicts 26.2% of the total variance of the dependent variable (body height). The Durbin-Watson test was calculated to evaluate whether autocorrelation, i.e. non-independence of the prediction errors (residuals), was present in the regression model. According to the value of this test (DW = 1.555 > 0.8), it can be concluded that the error deviations (residuals) are uncorrelated, no autocorrelation correction is needed, and further statistical procedures can therefore be carried out.
From the data of Table II, one can determine which independent variables have more influence on the dependent variable. In our study, it was found that all three selected cephalofacial variables may significantly contribute to the prediction of body height.
According to the data of Table II, the specific regression equation for the measured population was derived with the intention of predicting body height. The predicted body height was estimated for each measured subject based on this linear regression equation. The values of the observed body height were compared with the values of the estimated body height, and the difference was found to range from -5.36 cm to 3.69 cm (Table III).
The predicted body height showed a relatively high, significant correlation (0.512) with the true living body height (Table III), and the differences between them were assessed using paired t-tests, which indicate no significant differences between these two variables (Table IV).
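The comparison of predicted and observed stature can be sketched in the same way (continuing the synthetic example above); scipy's ttest_rel plays the role of the paired t-test of Table IV:

```python
import numpy as np
from scipy import stats

predicted = model.predict(X)   # stature estimated from the fitted regression equation
observed = stature             # measured stature

r = np.corrcoef(predicted, observed)[0, 1]     # cf. the reported correlation of 0.512
t_stat, p_value = stats.ttest_rel(predicted, observed)  # paired t-test
print(r, t_stat, p_value)      # p > 0.05 would indicate no significant difference
```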
CONCLUSION
The results of the regression analysis indicate that the selected set of cephalofacial variables (head height, horizontal circumference of the head, and face height) can successfully and significantly predict body height.
The specific regression equation calculated for the prediction of stature in the Kosovo Albanian population shows a high degree of reliability. Using this new formula, applied to each measured subject, we derived one new variable, which indicates the predicted estimate of body height. Then the mean of the body height estimated by the regression equation was compared with the mean of the true body height.
Statistical analysis with the paired t-test revealed that the predicted estimates of body height do not differ from the true height; that is, the two results are very similar.
The derived regression equation is specific to the population upon which it is based, and cannot be used for other populations of the world; nor can it be expected that the predicted body heights will coincide exactly with the original body heights.
The results of this study may be of great practical use to the forensic medicine personnel of our country, especially since, after the war in this region, problems such as thousands of missing persons, many mass graves, and the identification of unknown skeletal remains are still very current.
Table I. Regression Model Summary.
Table II. Regression Coefficients.
Table III. Descriptive Statistics and Correlation.
Table IV. Paired Samples Test.
Stereoselective β-mannosylations and β-rhamnosylations from glycosyl hemiacetals mediated by lithium iodide
Stereoselective β-mannosylation is one of the most challenging problems in the synthesis of oligosaccharides. Herein, a highly selective synthesis of β-mannosides and β-rhamnosides from glycosyl hemiacetals is reported, following a one-pot chlorination, iodination, glycosylation sequence employing cheap oxalyl chloride, phosphine oxide and LiI. The present protocol works excellently with a wide range of glycosyl acceptors and with armed glycosyl donors. The method does not require conformationally restricted donors or directing groups; it is proposed that the high β-selectivities observed are achieved via an SN2-type reaction of an α-glycosyl iodide promoted by lithium iodide.
General Experimental
The reagents and solvents used in the following experiments were bought commercially and used without further purification. Oxalyl chloride from a fresh bottle was immediately stored in a Young's tube under a nitrogen atmosphere. In a glove-box, anhydrous lithium iodide beads were powdered. The powdered LiI was stored in capped vials on the bench for several weeks before use. It should be a free-flowing white solid. Dry solvents were obtained using equipment based on Grubbs' design [1] and stored in Strauss flasks over 4 Å molecular sieves. A Karl Fischer titrator was used to determine the amount of water in dry solvents. For air-sensitive reactions, solvents were added via syringe through rubber septa.
2,3,4,6-Tetra-O-benzyl-α/β-D-mannopyranose 1a
A solution of S2 (2.8 g, 7.1 mmol) in methanol (30 mL) was treated with Na2CO3 (222 mg, 2.09 mmol) and stirred at room temperature for 14 h. The reaction mixture was neutralised with resin IR-120, filtered and concentrated in vacuo to give a brown syrup. Under a N2 atmosphere, a solution of the syrup in anhydrous DMF (18 mL) was cooled to 0 °C and NaH (60% dispersion in mineral oil) (1.6 g, 39 mmol) was added. The reaction mixture was stirred at room temperature for 30 min after which it was again cooled down to 0 °C. Benzyl bromide (4.7 mL, 39 mmol) was added dropwise and the reaction mixture was left to stir at room temperature for 4 h. The reaction was quenched with MeOH and the mixture was concentrated in vacuo to give a yellow slurry. The slurry was diluted with CH2Cl2 and washed with 1 M HCl, followed by saturated NaHCO3 and brine, dried over anhydrous MgSO4 and concentrated in vacuo.
2,3,6-Tri-O-benzyl-4-O-(4-methylbenzyl)-α/β-D-mannopyranose 1c
Based on the literature procedure, [8] a solution of S11 (1.3 g, 2.5 mmol) in MeCN/H2O (4:1, 13 mL) was treated with trifluoroacetic acid (2.0 mL, 26 mmol) at room temperature. After stirring for 7 h at room temperature, the reaction was quenched with saturated NaHCO3 and diluted with CH2Cl2 (150 mL). The aqueous layer was washed with CH2Cl2 (2 x 100 mL). The combined organic layers were washed with brine, dried over anhydrous MgSO4 and concentrated in vacuo to give a bright yellow oil. The crude material was used in the next step without further purification.
Under a N2 atmosphere, a solution of crude mannopyranoside S13 in anhydrous DMF (5 mL) was cooled to 0 °C and NaH (60% dispersion in mineral oil) (462 mg, 11.3 mmol) was added. The reaction mixture was stirred at room temperature for 15 min after which it was again cooled down to 0 °C. Benzyl bromide (1.3 mL, 11 mmol) was added and the reaction mixture was left to stir at room temperature for 3 h. The reaction was quenched with MeOH and diluted with Et2O (100 mL). The aqueous layer was washed with Et2O (2 x 75 mL). The organic layers were combined and dried over anhydrous MgSO4 and concentrated in vacuo to give a yellow oil.
A solution of crude S14 in 9:1 acetone/water (13 mL) was treated with NBS (1.3 g, 7.5 mmol) at room temperature. After 2.5 h the reaction was quenched with saturated Na2S2O3 and diluted with CH2Cl2.
2,3,4-Tri-O-benzyl-6-O-(4-methoxybenzyl)-α-D-mannopyranose 1d
Under a N2 atmosphere, a solution of mannopyranoside S6 (0.50 g, 1.3 mmol) in anhydrous DMF (2.6 mL) was cooled to 0 °C and NaH (60% dispersion in mineral oil) (0.21 g, 5.2 mmol) was added, followed by benzyl bromide (0.62 mL, 5.2 mmol). After stirring the reaction mixture for 11 h, it was quenched with MeOH and diluted with Et2O (40 mL). The aqueous layer was washed with Et2O (2 x 40 mL). The organic layers were combined and neutralised with 1 M HCl. The organic layer was washed with water and brine, dried over anhydrous MgSO4 and concentrated in vacuo to give a yellow oil.
Based on the literature procedure, [8] a solution of crude S7 in MeCN/H2O (4:1, 6.5 mL) was treated with trifluoroacetic acid (0.74 mL, 10 mmol) at room temperature. After stirring for 8 h at room temperature, the reaction was quenched with saturated NaHCO3 and diluted with CH2Cl2 (50 mL). The aqueous layer was washed with CH2Cl2 (2 x 50 mL). The combined organic layers were washed with brine, dried over anhydrous MgSO4 and concentrated in vacuo to give a yellow syrup. The crude material was used in the next step without further purification. Under a N2 atmosphere, a solution of the crude mannopyranoside in anhydrous DMF (2.5 mL) was cooled to 0 °C and NaH (60% dispersion in mineral oil) (130 mg, 3.25 mmol) was added. After 15 minutes, 4-methoxybenzyl bromide (0.45 mL, 3.3 mmol) was added and the reaction mixture was left to stir at room temperature for 2 h. The reaction was quenched with MeOH and diluted with Et2O (100 mL). The aqueous layer was washed with Et2O (2 x 40 mL). The organic layers were combined and neutralised with 1 M HCl. The organic layer was washed with water and brine, dried over anhydrous MgSO4 and concentrated in vacuo to give a yellow oil.
Based on the literature procedure, [8] a solution of crude S7 in MeCN/H2O (4:1, 6.5 mL) was treated with trifluoroacetic acid (0.74 mL, 10 mmol) at room temperature. After stirring for 8 h at room temperature, the reaction was quenched with saturated NaHCO3 and diluted with CH2Cl2 (50 mL). The aqueous layer was washed with CH2Cl2 (2 x 50 mL). The combined organic layers were washed with brine, dried over anhydrous MgSO4 and concentrated in vacuo to give a yellow syrup. The crude material was used in the next step without further purification. Under a N2 atmosphere, a solution of the crude mannopyranoside in anhydrous DMF (2.5 mL) was cooled to 0 °C and NaH (60% dispersion in mineral oil) (130 mg, 3.25 mmol) was added. After 5 minutes, 2-(bromomethyl)naphthalene (719 mg, 3.25 mmol) was added and the reaction mixture was left to stir at room temperature for 1 h. The reaction was quenched with MeOH and diluted with Et2O (100 mL). The aqueous layer was washed with Et2O (2 x 40 mL). The organic layers were combined and neutralised with 1 M HCl. The organic layer was washed with water and brine, dried over anhydrous MgSO4 and concentrated in vacuo to give a yellow oil.
A solution of S16 in 9:1 acetone/water (13 mL) was treated with NBS (0.69 g, 3.9 mmol) at room temperature. After 2 h the reaction was quenched with saturated Na2S2O3 and diluted with CH2Cl2. The aqueous layer was washed with CH2Cl2. The combined organic layers were washed with water and brine, dried over anhydrous MgSO4, filtered and concentrated in vacuo to give a colourless syrup.
2,3,4-Tri-O-benzyl-6-O-tert-butyldiphenylsilyl-α/β-D-mannopyranose 1f
A solution of S2 (500 mg, 1.27 mmol) in methanol (10 mL) was treated with Na2CO3 (40 mg, 0.38 mmol) and stirred at room temperature for 1.5 h. The reaction mixture was neutralised with resin IR-120, filtered and concentrated in vacuo to give a brown paste. Under a N2 atmosphere, a solution of the crude material and imidazole (259 mg, 3.81 mmol) in anhydrous DMF (2.5 mL) was treated with TBDPSCl (0.50 mL, 1.9 mmol) and left to stir at room temperature for 2.5 h. The reaction mixture was diluted with Et2O and washed with water and brine, dried over anhydrous MgSO4 and concentrated in vacuo to give a yellow syrup.
Under a N2 atmosphere, a solution of crude mannopyranoside S17 in anhydrous DMF (2.5 mL) was cooled to 0 °C and NaH (60% dispersion in mineral oil) (228 mg, 5.72 mmol) was added. The reaction mixture was stirred at room temperature for 15 min after which it was again cooled down to 0 °C. Benzyl bromide (0.68 mL, 5.7 mmol) was added and the reaction mixture was left to stir at room temperature for 6 h. The reaction was quenched with MeOH and diluted with Et2O (100 mL). The aqueous layer was washed with Et2O (2 x 75 mL). The organic layers were combined and dried over anhydrous MgSO4 and concentrated in vacuo to give a yellow oil.
A solution of crude S18 in 9:1 acetone/water (13 mL) was treated with NBS (678 mg, 3.81 mmol) at room temperature. After 50 minutes the reaction was quenched with saturated Na2S2O3 and diluted with CH2Cl2. The aqueous layer was washed with CH2Cl2 (2 x 100 mL). The combined organic layers were washed with water and brine, dried over anhydrous MgSO4, filtered and concentrated in vacuo to give a yellow syrup. Purification by column chromatography (3:1 to 0:1; Pentane/Et2O) afforded the hydrolysed product 1f as a yellowish syrup (388 mg, 44% yield over 4 steps, α/β = 85:15).
The following were observed for the α/β anomers:
The crude material was used in the next step without further purification. Under a N2 atmosphere, a solution of the crude material and DMAP (17 mg, 0.14 mmol) in anhydrous CH2Cl2 (2.9 mL) was treated with anhydrous pyridine (0.11 mL, 1.4 mmol) followed by benzoyl chloride (0.33 mL, 2.8 mmol) at room temperature. TLC (CH2Cl2) analysis of the reaction after 50 minutes showed complete consumption of starting material. The reaction was quenched with water and diluted with CH2Cl2. The organic layer was washed with 1 M HCl, saturated NaHCO3 and brine. The organic layer was dried over anhydrous MgSO4, filtered and concentrated in vacuo to give a colourless oil. The crude material S20 was dissolved in 9:1 acetone/water (14 mL) and treated with NBS (1.02 g, 5.72 mmol) at room temperature. After 3 h the reaction was quenched with saturated Na2S2O3 and diluted with CH2Cl2. The aqueous layer was washed with CH2Cl2. The combined organic layers were washed with water and brine, dried over anhydrous MgSO4, filtered and concentrated in vacuo to give a colourless syrup. Purification by column chromatography (2:1 to 1:1; Pentane/Et2O) afforded the hydrolysed product 1h as a colourless syrup (645 mg, 81% yield over 3 steps, α/β = 77:23).
The following were observed for the α/β anomers. NMR data were consistent with literature data. [12]
4-O-Acetyl-2,3,6-tri-O-benzyl-α/β-D-mannopyranose S22
A solution of S5 (800 mg, 1.62 mmol), acetic anhydride (0.31 mL, 1.6 mmol) and DMAP (20 mg, 0.16 mmol) in pyridine (0.13 mL, 1.6 mmol) (a little CH2Cl2 was added to obtain a clear solution) was stirred at room temperature for 2 h. The reaction mixture was diluted with CH2Cl2 and washed with 1 M HCl, saturated NaHCO3 and brine. The organic layer was dried over anhydrous MgSO4, filtered and concentrated in vacuo to give a yellowish syrup. The crude material was used in the next step without further purification. NMR data were consistent with literature data. [4] The crude material S21 was dissolved in 9:1 acetone/water (10 mL) and treated with NBS (865 mg, 4.86 mmol) at room temperature. After 5 h the reaction was quenched with saturated Na2S2O3 and diluted with CH2Cl2. The organic layer was washed with water and brine, dried over anhydrous MgSO4, filtered and concentrated in vacuo. Purification by column chromatography (2:1 to 1:1, Pentane/Et2O) afforded the hydrolysed product S22 as a syrup (632 mg, 79% yield over 2 steps, α/β = 79:21).
The following were observed for the α/β anomers:
Under a N2 atmosphere, 3i (200 mg, 0.37 mmol) was dissolved in anhydrous CH2Cl2 (3.7 mL) and the reaction was treated with anhydrous pyridine (0.11 mL, 1.4 mmol) followed by pivaloyl chloride (0.146 mL, 1.20 mmol) at room temperature. TLC (cyclohexane:EtOAc) analysis of the reaction after 1 h showed complete consumption of starting material. The reaction was quenched with water and diluted with CH2Cl2. The organic layer was washed with 1 M HCl, saturated NaHCO3 and brine. The organic layer was dried over anhydrous MgSO4, filtered and concentrated in vacuo to give a colourless oil.
The following were observed for the α/β anomers:
Synthesis of Rhamnosyl Donors
After stirring the reaction mixture at room temperature for 20 h, TLC analysis (cyclohexane: EtOAc; 6:4; Rf = 0.7) showed full conversion of the starting material into a single product. The reaction mixture was then carefully quenched with saturated NaHCO3. The organic layer was washed with water and brine, dried over anhydrous MgSO4 and concentrated in vacuo to give S27 as a yellow oil (9.00 g, 26.2 mmol). NMR data are consistent with the literature. [19] Thiorhamnoside S27 (9.0 g, 26.2 mmol) was dissolved in MeOH (300 mL). Na2CO3 (0.5 g, 5.0 mmol) was added to the solution and the mixture was left to stir at room temperature for 2 h, after which TLC analysis (EtOAc; Rf = 0) indicated that the reaction had gone to completion. The reaction mixture was neutralised with resin IR-120, filtered and concentrated in vacuo to give S28 as a yellow oil, which was used in the next step without further purification.
Crude thiorhamnoside S31 was dissolved in CH2Cl2 (12 mL) and H2O (0.5 mL) was added. TFA (4.0 mL, 52 mmol) was then added and the reaction was left to stir at RT overnight. TLC analysis (cyclohexane: EtOAc; 7:3; Rf = 0.3) indicated that the reaction had gone to completion. The reaction mixture was quenched with saturated NaHCO3 (20 mL), diluted with CH2Cl2 (50 mL) and the two layers were separated.
The aqueous phase was extracted with CH2Cl2 (100 mL) and the combined organic layers were washed with water (20 mL), brine (20 mL), dried over anhydrous Na2SO4, filtered and concentrated in vacuo.
The crude material was used in the next step without further purification. Under a N2 atmosphere, the crude thiorhamnoside was dissolved in anhydrous DMF (20 mL, 0.2 M). The flask was cooled to 0 °C (50:50 ice/water), and NaH (60% dispersion in mineral oil) (384 mg, 16.0 mmol) was added to the reaction flask. The ice bath was removed, and the reaction was left to stir at room temperature for 30 min. The flask was again cooled to 0 °C, and BnBr (1.2 mL, 10 mmol) was added to the reaction mixture. The ice bath was removed, and the reaction mixture was left to stir at room temperature for 4 h, after which TLC analysis (cyclohexane: EtOAc; 4:1; Rf = 0.6) showed that the starting material had been consumed. The reaction was quenched with MeOH (2 mL), and the solvents were removed using rotary evaporation.
The organic layer was dried over anhydrous Na2SO4, filtered off and concentrated in vacuo.
Crude rhamnoside S32 was dissolved in a 9:1 mixture of acetone:water (20 mL) and NBS (1.1 g, 6.0 mmol) was added. The reaction was left to stir at room temperature for 1 h, when TLC analysis (cyclohexane: EtOAc; 6:4; Rf = 0.6) showed the complete conversion of the starting material to the desired product.
Crude thiorhamnoside S33 was dissolved in CH2Cl2 (12 mL) and H2O (0.5 mL) was added. TFA (4.0 mL, 52 mmol) was then added and the reaction was left to stir at RT overnight. TLC analysis (cyclohexane:EtOAc; 7:3; Rf = 0.4) indicated that the reaction had gone to completion. The reaction mixture was quenched with saturated NaHCO3 (20 mL), diluted with CH2Cl2 (50 mL) and the two layers were separated. The aqueous phase was extracted with CH2Cl2 (100 mL) and the combined organic layers were washed with water (20 mL), brine (20 mL), dried over anhydrous Na2SO4, filtered and concentrated in vacuo. The crude material was used in the next step without further purification. Under a N2 atmosphere, the crude thiorhamnoside was dissolved in anhydrous DMF (20 mL). The flask was cooled to 0 °C (50:50 ice/water), and NaH (60% dispersion in mineral oil) (384 mg, 16.0 mmol) was added to the reaction flask. The ice bath was removed, and the reaction was left to stir at room temperature for 30 min. The flask was again cooled to 0 °C, and BnBr (1.2 mL, 10 mmol) was added to the reaction mixture. The ice bath was removed, and the reaction mixture was left to stir at room temperature for 4 h, after which TLC analysis (cyclohexane: EtOAc; 4:1; Rf = 0.6) showed that the starting material had been consumed. The reaction was quenched with MeOH (2 mL), and the solvents were removed using rotary evaporation.
The organic layer was dried over anhydrous Na2SO4, filtered off and concentrated in vacuo.
The crude product S36 was dissolved in anhydrous DMF (100 mL) and the flask was cooled to 0 °C. NaH (60% dispersion in mineral oil) (3.77 g, 94.3 mmol) was added to the solution and the ice bath was removed. The reaction was stirred at room temperature for 1 h, after which it was again cooled to 0 °C and treated slowly with BnBr (11.2 mL, 94.3 mmol). The ice bath was removed, and the reaction mixture was left to stir at room temperature. TLC analysis (cyclohexane:EtOAc; 9:1) after 12 h showed complete consumption of starting material. The reaction mixture was quenched with MeOH (10 mL) and was extracted with Et2O (3 × 200 mL). The combined organic layers were washed with 1 M HCl (100 mL) followed by saturated NaHCO3 (100 mL) and brine (30 mL). The organic layer was dried over anhydrous Na2SO4, filtered and concentrated in vacuo to give a yellow oil.
The crude product S37 was dissolved in MeOH (10 mL) and 1.25 M HCl in MeOH (10 mL) was added.
Methyl 2,3-di-O-benzyl-4,6-O-benzylidene--D-glucopyranoside S39
Under a N2 atmosphere, a solution of glucopyranoside S38 (5.0 g, 18 mmol) in anhydrous DMF (40 mL) was cooled to 0 °C and NaH (60% dispersion in mineral oil) (2.3 g, 58 mmol) was added. The reaction mixture was stirred at room temperature for 30 min, after which it was again cooled down to 0 °C. Benzyl bromide (6.4 mL, 54 mmol) was added dropwise to the reaction mixture. The reaction mixture was left to stir at room temperature for 9 h. The reaction was quenched with MeOH and the mixture was concentrated in vacuo to give a yellow slurry. The slurry was diluted with CH2Cl2 and washed with 1 M HCl, followed by saturated NaHCO3 and brine, dried over anhydrous MgSO4 and concentrated in vacuo. Purification by column chromatography (90:10; Pentane/Et2O) gave S39 as a white solid (7.9 g, 95% yield). NMR data were consistent with literature data. [25]
Methyl 2,4,6-tri-O-benzyl--D-glucopyranoside 3c
Under a N2 atmosphere, a solution of glucopyranoside S43 (2.1 g, 5.0 mmol) in anhydrous DMF (17 mL) was cooled to 0 °C and benzyl bromide (0.72 mL, 6.1 mmol) was added. After 15 minutes, NaH (60% dispersion in mineral oil) (242 mg, 6.05 mmol) was added in one portion and the reaction mixture was stirred at 0 °C for 30 minutes. The reaction mixture was warmed to room temperature and stirred for 3.5 h. The reaction was quenched with MeOH and the mixture was concentrated in vacuo to give a yellowish slurry. The slurry was diluted with CH2Cl2 and washed with 1 M HCl, followed by saturated NaHCO3 and brine, dried over anhydrous MgSO4 and concentrated in vacuo. A solution of the crude in MeOH was treated with MeONa (0.14 g, 2.5 mmol) and the mixture was left to stir at room temperature for 3 days. Purification by column chromatography (98:2 to 95:5; CH2Cl2/Et2O) gave 3c as a colourless syrup (1.3 g, 56% yield over 2 steps). NMR data were consistent with literature data. [29]
The reaction was quenched with water (5 mL), diluted with CH2Cl2 (200 mL) and the two phases separated. The organic layer was subsequently washed with 1 M HCl, water, saturated NaHCO3 and brine, dried over anhydrous MgSO4, filtered and concentrated in vacuo to give a light brown syrup. The crude material was dissolved in CH3CN and water (100 mL, 7:1) and treated with TFA (16 mL, 0.21 mol). The reaction mixture was stirred at room temperature for 21 hours and monitored by TLC analysis
General Procedure A for Mannosylation Donor Scope
A 25 mL crimp-top vial charged with a stir-bar, hemiacetal (0.10 mmol, 1 eq) and Ph3PO (14 mg, 0.050 mmol, 0.5 eq) was placed under three cycles of vacuum and nitrogen. The solids were dissolved in anhydrous CHCl3 (0.2 mL, 0.5 M), treated with oxalyl chloride (10 µL, 0.12 mmol, 1.2 eq) and left to stir at room temperature. After 1 h, the solvent and excess oxalyl chloride were removed by applying vacuum. Solid acceptor (0.07 mmol, 0.7 eq) and powdered LiI (53 mg, 0.40 mmol, 4 eq) were added to the vial and placed under three cycles of vacuum and nitrogen. The contents were re-dissolved in anhydrous CHCl3 (0.25 mL, 0.4 M w.r.t. donor) and treated with iPr2NEt (42 µL, 0.25 mmol, 2.5 eq).
The reaction was stirred at 45 °C for 24 h or 30 °C for 36 h. The reaction mixture was diluted with CH2Cl2 (1 mL) and treated with 1 M HCl (1 mL). The aqueous layer was washed with CH2Cl2 (2 x 1 mL) and the combined organic layers were dried over anhydrous MgSO4 and concentrated in vacuo to give a yellow or brown syrup. The α/β ratio was determined by 1H NMR spectroscopy of the crude reaction mixture.
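When scaling General Procedure A to a different batch size, the reagent quantities follow directly from the stated equivalents; the helper below is our own convenience sketch (molecular weights are given for Ph3PO and LiI only, the weighed solids):

```python
# Equivalents with respect to the hemiacetal donor, as stated in Procedure A
EQUIV = {"Ph3PO": 0.5, "oxalyl chloride": 1.2, "acceptor": 0.7, "LiI": 4.0, "iPr2NEt": 2.5}
MW = {"Ph3PO": 278.3, "LiI": 133.8}  # g/mol

def scale_procedure_a(donor_mmol):
    """Print mmol (and mg for the weighed solids) for each reagent."""
    for name, eq in EQUIV.items():
        mmol = donor_mmol * eq
        if name in MW:
            print(f"{name}: {mmol:.2f} mmol = {mmol * MW[name]:.0f} mg")
        else:
            print(f"{name}: {mmol:.2f} mmol")

scale_procedure_a(0.10)  # approximately reproduces the 14 mg Ph3PO and 53 mg LiI above
```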
General Procedure B for Mannosylation Acceptor Scope
A 25 mL crimp-top vial charged with a stir-bar, hemiacetal (0.10 mmol, 1 eq) and Ph3PO (28 mg, 0.10 mmol, 1 eq) was placed under three cycles of vacuum and nitrogen. The solids were dissolved in anhydrous CHCl3 (0.2 mL, 0.5 M), treated with oxalyl chloride (10 µL, 0.12 mmol, 1.2 eq) and left to stir at room temperature. After 30 minutes, the solvent and excess oxalyl chloride were removed by applying vacuum. Solid acceptor (0.07 mmol, 0.7 eq) and powdered LiI (53 mg, 0.40 mmol, 4 eq) were added to the vial and placed under three cycles of vacuum and nitrogen. The contents were re-dissolved in anhydrous CHCl3 (0.25 mL, 0.4 M w.r.t. donor) and treated with iPr2NEt (69 µL, 0.40 mmol, 4 eq).
The reaction was stirred at 45 °C for 24 h. The reaction mixture was diluted with CH2Cl2 (1 mL) and treated with 1 M HCl (1 mL). The aqueous layer was washed with CH2Cl2 (2 x 1 mL) and the combined organic layers were dried over anhydrous MgSO4 and concentrated in vacuo to give a yellow or brown syrup. The α/β ratio was determined by 1H NMR spectroscopy of the crude reaction mixture.
General Procedure C for Mannosylation Acceptor Scope
A 25 mL crimp-top vial charged with a stir-bar, hemiacetal (0.10 mmol, 1 eq) and Ph3PO (28 mg, 0.10 mmol, 1 eq) was placed under three cycles of vacuum and nitrogen. The solids were dissolved in anhydrous CHCl3 (0.2 mL, 0.5 M), treated with oxalyl chloride (10 µL, 0.12 mmol, 1.2 eq) and left to stir at room temperature. After 30 minutes, the solvent and excess oxalyl chloride were removed by applying vacuum. Powdered LiI (53 mg, 0.40 mmol, 4 eq) was added to the vial and placed under three cycles of vacuum and nitrogen. A stock solution of the acceptor in anhydrous CHCl3 (0.4 M w.r.t. donor or 0.28 M w.r.t. acceptor) was added, followed by iPr2NEt (69 µL, 0.40 mmol, 4 eq). The reaction was stirred at 45 °C for 24 h. The reaction mixture was diluted with CH2Cl2 (1 mL) and treated with 1 M HCl (1 mL). The aqueous layer was washed with CH2Cl2 (2 x 1 mL) and the combined organic layers were dried over anhydrous MgSO4 and concentrated in vacuo to give a yellow or brown syrup. The α/β ratio was determined by 1H NMR spectroscopy of the crude reaction mixture.
General procedure D for Rhamnosylation Donor Scope
A 25 mL crimp-top vial charged with a stir-bar, hemiacetal (0.10 mmol, 1 eq) and Ph3PO (28 mg, 0.10 mmol, 1 eq) was placed under three cycles of vacuum and nitrogen. The solids were dissolved in anhydrous CHCl3 (0.2 mL, 0.5 M), treated with oxalyl chloride (10 µL, 0.12 mmol, 1.2 eq) and left to stir at room temperature. After 30 minutes, the solvent and excess oxalyl chloride were removed by applying vacuum. Solid acceptor (0.07 mmol, 0.7 eq) and powdered LiI (53 mg, 0.40 mmol, 4 eq) were added to the vial and placed under three cycles of vacuum and nitrogen. The contents were re-dissolved in anhydrous CHCl3 (0.25 mL, 0.4 M w.r.t. donor) and treated with iPr2NEt (69 µL, 0.40 mmol, 4 eq).
The reaction was stirred at 45 °C or 30 °C for 24 h. The reaction mixture was diluted with CH2Cl2 (15 mL) and washed with 1 M HCl (2 × 5 mL) and brine (10 mL), dried over Na2SO4, filtered and concentrated in vacuo. The β/α ratio was determined by 1H NMR spectroscopy of the crude reaction mixture.
General procedure E for Rhamnosylation Acceptor Scope
A 25 mL crimp-top vial charged with a stir-bar, hemiacetal (0.10 mmol, 1 eq) and Ph3PO (14 mg, 0.050 mmol, 0.5 eq) was placed under three cycles of vacuum and nitrogen. The solids were dissolved in anhydrous CHCl3 (0.2 mL, 0.5 M), treated with oxalyl chloride (10 µL, 0.12 mmol, 1.2 eq) and left to stir at room temperature. After 1 h, the solvent and excess oxalyl chloride were removed by applying vacuum. Solid acceptor (0.07 mmol, 0.7 eq) and powdered LiI (53 mg, 0.40 mmol, 4 eq) were added to the vial and placed under three cycles of vacuum and nitrogen. The contents were re-dissolved in anhydrous CHCl3 (0.25 mL, 0.4 M w.r.t. donor) and treated with iPr2NEt (42 µL, 0.25 mmol, 2.5 eq).
The reaction was stirred at 45 °C or 30 °C for 24 h. The reaction mixture was diluted with CH2Cl2 (15 mL) and washed with 1 M HCl (2 × 5 mL) and brine (10 mL), dried over Na2SO4, filtered and concentrated in vacuo. The β/α ratio was determined by 1H NMR spectroscopy of the crude reaction mixture.
Donor and Acceptor Limitations
Peracetylated/4-OAc donors S22, S24 and S35 were disarmed, and no glycosylation was observed. 6-OTIPS donor S23 led to the anhydro sugar. Benzylidene acceptors S40 and S46 were poor nucleophiles and no desired reaction was observed. Benzoylated acceptor S49 led to complex mixtures due to ester migration; we confirmed that ester migration occurred on S49 with iPr2NEt in CHCl3 at 45 °C in the absence of other reagents. Acceptor S43 gave a complex mixture in reactions with 1a; there were trace amounts of the desired β-product (as evidenced by HSQC), and the major product was the donor elimination product. We suspect there was also acyl migration, but it was difficult to be sure because of the complex mixture generated. S43 is a poor nucleophile, so observation of elimination is not surprising. Reaction with cholesterol gave the product S50 in very high yield but with no selectivity. When S35 was used as the donor in the glycosylation reaction with acceptor 3a, transesterification was observed, giving product S51 (see below). [38]
[Table: Entry | Route | Deviation from standard procedure | β/α]
Multilevel analysis of undernutrition and associated factors among adolescent girls and young women in Ethiopia
Background The consequences of undernutrition have serious implications for the health and future reproductive life of adolescent girls and young women aged 15–24 years. In spite of this, they are a neglected age group, and there is limited information about their nutritional status in Ethiopia. Therefore, estimating the extent and associated factors of undernutrition among adolescent girls and young women in a national context using multilevel analysis is essential. Methods A secondary data analysis of the Ethiopian Demographic and Health Survey 2016 was conducted. A total weighted sample of 5362 adolescent girls and young women was included in this study. A multilevel mixed-effect binary logistic regression model with cluster-level random effects was fitted to determine the factors associated with undernutrition among adolescent girls and young women in Ethiopia. Finally, odds ratios along with 95% confidence intervals were generated to determine the individual- and community-level factors of undernutrition. A p-value less than 0.05 was taken as the level of statistical significance. Results Overall, 25.6% (95% CI: 24.5–26.9) of adolescent girls and young women were undernourished. Statistically significant individual-level factors included age 15–19 years (AOR: 1.53, 95% CI: 1.32–1.77), individual media exposure (AOR: 0.82, 95% CI: 0.69–0.97), and an unprotected drinking water source (AOR: 1.24, 95% CI: 1.04–1.48), whereas the Southern Nations, Nationalities, and Peoples' Region (AOR: 0.33, 95% CI: 0.13–0.83) and rural residence (AOR: 1.69, 95% CI: 1.24–2.32) were community-level factors for undernutrition. Conclusion One quarter of Ethiopian adolescent girls and young women were undernourished. Therefore, the Ethiopian government should better engage this age group in different aspects of the food system. To improve nutritional status, public health interventions such as increased media exposure for rural residents and improved access to protected water sources will be critical.
Introduction
The World Health Organization (WHO) defines adolescent girls and young women (AGYW) as females aged 15-24 years; they make up about 20% of the world's population and are characterized by significant physiological, psychological, and social changes that place their lives at high risk [1,2]. Nutritional requirements such as protein and carbohydrates are needed in increased amounts during this age [3].
Worldwide, nutritional deficiencies, suboptimal linear growth, and undernutrition are major public health problems [3]. Globally, the prevalences of stunting, wasting and underweight are 29.1%, 6.3%, and 13.7%, respectively [4]. Sub-Saharan Africa (SSA) is particularly affected by undernutrition among AGYW: 13.7% are stunted and 12% wasted in SSA, compared with 5.2% and 10.4%, respectively, in Europe and Central Asia [5].
Undernutrition has multiple consequences, such as anemia, delayed growth, retarded intellectual development, increased infection, and inadequate bone mineralization, as well as long-term effects such as stillbirths, complicated delivery, and maternal death later in life [3,[6][7][8].
Studies in Ethiopia have shown that the magnitude of undernutrition differs between pregnant mothers and adolescent girls. For example, among pregnant mothers, rates ranged from 14.4% in Gondar [9] to 19.5% in Dessie [10]. In addition, 29.2% of adolescent girls were stunted and 30.4% were wasted [11,12]. A systematic review in Ethiopia also revealed pooled prevalences of stunting and underweight among adolescent girls of 20.7% and 27.5%, respectively [7].
Large family size, rural residence, an unprotected source of drinking water, lack of a latrine, a low dietary diversity score, maternal illiteracy, and food-insecure households have been identified in the literature as contributing factors to adolescent undernutrition [7,13].
To overcome the aforementioned problems, multiple nutritional strategies and political support for nutritional programs are essential for improving the nutritional status of all women of reproductive age [14,15].
Undernutrition among women in Ethiopia has declined significantly over the past two decades (from 30% in 2000 to 22% in 2016), and the country has been implementing a nutrition-focused agenda to catalyze improvements in nutrition corresponding with the UN Decade of Action on Nutrition (2025) and the Sustainable Development Goals (2030) [16]. Moreover, the country recently approved a new food and nutrition policy, including measures such as increasing agricultural production and productive safety net programs, in an effort to reduce undernutrition [17,18].
Although nutrition interventions have focused on children and pregnant women, and on the consumption of protein-rich foods and micronutrient supplementation, AGYW aged 15-24 years have been relatively neglected as a research priority. Many studies have addressed undernutrition among under-five children and adolescent girls [1,[19][20][21][22][23]. Therefore, this study assessed the prevalence of undernutrition and associated factors among AGYW in Ethiopia, considering both individual- and community-level factors. Once the nutritional status of AGYW is known, interventions informed by such findings have been shown to improve birth weight, preterm delivery, and negative long-term outcomes [24,25]. Thus, identifying the prevalence and associated factors of undernutrition among AGYW is crucial, and the evidence from this study will help policymakers, governments, and stakeholders design context-specific interventions.
Study setting
Ethiopia is located in the Horn of Africa and is administratively divided into nine ethnically and politically autonomous regional states (Tigray, Afar, Amhara, Oromia, Benishangul-Gumuz, Gambela, Southern Nations, Nationalities and Peoples' Region, Harari, and Somali) and two administrative cities (Addis Ababa and Dire Dawa). The regions are divided into zones, and zones into districts (the third administrative division). With an estimated population of 118 million, Ethiopia is the 14th most populous country in the world and the second most populous on the African continent [26].
Study design and period
A community-based cross-sectional survey was conducted in 2016 among women of reproductive age in Ethiopia. The 2016 EDHS is the fourth such survey and was conducted from January 18 to June 27, 2016. The Ethiopian Demographic and Health Survey (EDHS) is a national-level study conducted every five years as part of the worldwide Demographic and Health Survey (DHS) program [23,27].
Population and eligibility criteria
All AGYW in Ethiopia at the time of the survey constituted the source population, while the study population comprised the AGYW in the selected enumeration areas who were included in the analysis. Pregnant and postpartum AGYW, and those who gave birth in the two months preceding the interview, were excluded from the analysis because weight changes during pregnancy and the postpartum period bias the BMI estimate.
Data source and sampling procedure
The 2007 Ethiopian population and housing census, conducted by the Central Statistical Agency (CSA) of Ethiopia, was used as the sampling frame for the 2016 EDHS. A total of 645 enumeration areas (EAs) (202 urban and 443 rural) were used. The EDHS employed a two-stage stratified cluster sampling technique, with proportional allocation within each sampling stratum before sample selection at the different levels. In the first stage, 645 EAs were selected with probability proportional to EA size, with samples drawn within each stratum. Household listing operations were implemented to determine the number of residential units in each EA, and the resulting lists of households were used as the sampling frame for selecting households. In the second stage, 28 households from each of the 645 clusters were selected with equal probability. Interviews were conducted only with households that had been preselected. More detailed information about the methodology and sampling design is available in the EDHS 2016 report [28]. All adolescent girls and young women aged 15-24 years who were usual members of the selected households, as well as visitors who slept in the household the night before the survey, were eligible [23].
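The two-stage design can be illustrated with a short simulation. The EA sizes below are synthetic and the variable names are ours, not actual EDHS identifiers; this is a sketch of the described procedure, not the survey's own code.

```python
import numpy as np

rng = np.random.default_rng(2016)

# Synthetic enumeration-area sizes (households per EA); illustrative only.
ea_sizes = rng.integers(100, 500, size=5000)

# Stage 1: select 645 EAs with probability proportional to size (PPS).
pps = ea_sizes / ea_sizes.sum()
selected = rng.choice(ea_sizes.size, size=645, replace=False, p=pps)

# Stage 2: within each selected EA, draw 28 households with equal
# probability from that EA's (listed) households.
households = {ea: rng.choice(ea_sizes[ea], size=28, replace=False)
              for ea in selected}
```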
Study variables
The dependent variable of the study was undernutrition among AGYW, derived from measured weight (kg) and height (m) via the Body Mass Index (BMI). In this study, undernutrition was defined as a BMI of less than 18.5 kg/m2 (which includes either stunting or underweight) [29,30]. The outcome for the i-th adolescent girl or young woman in the j-th cluster (Yij) was dichotomized as follows: Yij = 1 if BMI < 18.5 kg/m2 (undernutrition), and Yij = 0 if BMI ≥ 18.5 kg/m2 (no undernutrition).
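A minimal sketch of this outcome coding (function and variable names are illustrative, not EDHS variable names):

```python
def undernutrition_indicator(weight_kg: float, height_m: float) -> int:
    """Yij = 1 if BMI < 18.5 kg/m^2 (undernutrition), else 0."""
    bmi = weight_kg / height_m ** 2
    return int(bmi < 18.5)

undernutrition_indicator(45.0, 1.60)  # BMI ~17.6 -> 1
```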
Individual-level factors (level one) included AGYW age, educational status, religion, number of family members, individual-level media exposure, and source of drinking water, while community-level factors (level two) included region, residence, community-level media exposure, community-level poverty, and community-level literacy. The aggregate community-level independent variables (community-level poverty, media exposure, and literacy) were constructed by aggregating individual-level characteristics at the community (cluster) level. They were categorized as high or low based on the distribution of the aggregated proportions, which was checked using a histogram for each variable. As the aggregated variables were not normally distributed, the median value was used for categorization. Finally, model three examined both individual- and community-level variables simultaneously.
Data management and analysis
The data extraction, cleaning, recoding, and labeling for further analysis were done using STATA version 14 statistical software and Microsoft Excel.
Age was grouped as 15-19 and 20-24 years. Occupation was coded as employed, sales/merchant, agriculture, and others. Educational level was categorized as no formal education or formal education. Wealth index was recoded as poor, middle and rich. Media exposure was coded as 'yes' for AGYW who read newspapers/magazines, listened to the radio and/or watched television less than once a week or at least once a week, and 'no' otherwise. Family size was coded as < 5 and ≥ 5. Source of drinking water was categorized as protected or unprotected. Community-level variables (community-level media exposure, poverty and literacy) were generated by aggregating the individual-level factors at the cluster level and were categorized as high if the proportion was ≥ 50% and low if it was < 50%, based on the national median value, since these variables were not normally distributed [31].
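The cluster-level aggregation and median split could be reproduced roughly as below; `df`, its column names and the cluster identifier are assumptions made for illustration, not the actual EDHS variable names.

```python
import pandas as pd

# df: one row per AGYW, with a 0/1 'media_exposure' column and a 'cluster' id.
def add_community_media(df: pd.DataFrame) -> pd.DataFrame:
    # Aggregate the individual indicator to a per-cluster proportion.
    cluster_prop = df.groupby("cluster")["media_exposure"].transform("mean")
    # Dichotomize at the national median (the proportions were not normal).
    median = cluster_prop.median()
    df["community_media"] = (cluster_prop >= median).map(
        {True: "high", False: "low"})
    return df
```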
Before analysis, sampling weights were applied to account for the unequal probability of selection between strata and to obtain reliable estimates. Out of 15,683 total eligible households, 6,401 contained AGYW aged 15-24 years. Of these, 418 AGYW who were currently pregnant and 505 who had given birth in the two months preceding the interview were excluded, leaving 5,478 AGYW in the analysis. Overall, a total weighted sample of 5,362 AGYW was included in this study.
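Applying the weights to the prevalence estimate amounts to a weighted mean; a minimal sketch (names illustrative):

```python
import numpy as np

def weighted_prevalence(y, w):
    """Weighted proportion of undernourished AGYW.
    y: 0/1 outcome indicators; w: EDHS sampling weights."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    return float((w * y).sum() / w.sum())

# With the survey weights applied, the paper reports 25.6%.
```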
The second step was a bivariable analysis calculating the proportion of undernutrition across the independent variables together with p-values. All variables with a p-value less than 0.2 in the bivariable analysis were carried forward to the multivariable analysis, in which adjusted odds ratios with 95% confidence intervals and a p-value of less than 0.05 were used to identify factors associated with undernutrition. In the final step, a multilevel logistic regression analysis comprising fixed effects and random effects was performed.
Data eligibility for multilevel analysis was checked before analysis (intra-class correlation coefficient (ICC) greater than 10%; ICC = 12.2%). Owing to the hierarchical nature of the EDHS data, AGYW are nested within communities (clusters), so the assumptions of independence and equal variance in an ordinary logistic regression model may not hold. Therefore, a multilevel binary logistic regression model was used to estimate the effects of individual- and community-level variables on undernutrition [32]. Four models were fitted: the null model (model 0), showing the variation in undernutrition in the absence of any independent variables; model I, adjusted for individual-level variables; model II, adjusted for community-level variables; and model III, adjusted for both individual- and community-level variables. Model fitness was assessed using the deviance (-2 log likelihood). The fixed effects of the models were presented as adjusted odds ratios (AOR), while the random effects were assessed with the intraclass correlation coefficient (ICC). The variance inflation factor (VIF) was used to check for multicollinearity among the independent variables, and none was found (mean VIF for the final model = 1.87) (Table 4).
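The eligibility check rests on the standard latent-variable ICC for a random-intercept logistic model. The formula below is that standard expression; the cluster variance is back-calculated from the reported 12.2% for illustration and is not a number taken from the paper's tables.

```python
import math

def icc_logistic(var_cluster: float) -> float:
    """Between-cluster variance relative to total variance, with the
    individual-level logistic residual variance fixed at pi^2 / 3."""
    return var_cluster / (var_cluster + math.pi ** 2 / 3)

icc_logistic(0.457)  # ~0.122, i.e. the reported null-model ICC of 12.2%
```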
Socio-demographic characteristics of study participants
A total of 5,362 AGYW were included in the final analysis. The median age of the study participants was 19 years (IQR: 17-24). More than half (57.6%) of the study participants were in the 15-19 year age group, and for 55.8% of participants the family size was fewer than five people (Table 1).
Individual and community level factors of undernutrition (fixed-effects)
In the final model (model III), which adjusted for both individual- and community-level factors, the individual-level variables age of AGYW, individual media exposure, source of drinking water, educational level, family size, and wealth index, and the community-level variables region, residence, community-level poverty, community-level media exposure and community literacy, were candidates for the multivariable analysis. Of these, age of AGYW, individual-level media exposure, source of drinking water, region, and residence were significantly associated with undernutrition among AGYW.
Random effects (measures of variation)
There was significant variation in the prevalence of undernutrition among AGYW across the clusters. The intra-cluster correlation coefficient (ICC) for the null model was 12.2%, meaning that 12.2% of the variation in undernutrition among AGYW is attributable to differences between regions/clusters (between-cluster variation). Models were compared using deviance, and the model with the lowest deviance was selected.
To identify factors associated with undernutrition among AGYW, adjusted odds ratios with 95% confidence intervals were calculated; in the multivariable analysis, a p-value below 0.05 was used to declare statistical significance. The median odds ratio (MOR) showed that undernutrition was heterogeneous across clusters. In the first model, the MOR was 1.9, implying that if AGYW were randomly selected from different clusters (EAs), those within a cluster of higher undernutrition had a 1.9 times higher chance of being undernourished than those within a cluster of lower undernutrition. Concerning the proportional change in variance (PCV), 50% of the variability in undernutrition was explained by the final model (Table 4).
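The MOR and PCV follow standard multilevel-analysis formulas. In the sketch below, the cluster variance is the same illustrative value back-calculated from the null-model ICC, not a figure reported in the paper.

```python
import math
from scipy.stats import norm

def median_odds_ratio(var_cluster: float) -> float:
    """MOR = exp(sqrt(2 * var_u) * z_0.75) for a random-intercept model."""
    return math.exp(math.sqrt(2 * var_cluster) * norm.ppf(0.75))

def pcv(var_null: float, var_model: float) -> float:
    """Proportional change in cluster variance relative to the null model."""
    return (var_null - var_model) / var_null

median_odds_ratio(0.457)   # ~1.9, matching the reported MOR for model I
pcv(0.457, 0.457 * 0.5)    # 0.5, i.e. the reported ~50% for the final model
```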
Discussion
Globally, one third of the population suffers from malnutrition [33]. Especially in low- and middle-income countries, women are highly vulnerable to all forms of malnutrition [34,35]. There is no doubt that AGYW nutrition plays a crucial role in maternal, newborn, and child health [36]. Preventable determinants of undernutrition must be identified and reduced in order to meet the global commitment to end malnutrition by 2030 [34]. This study was carried out to assess the prevalence of undernutrition and its associated factors among AGYW in Ethiopia. One fourth, 25.6% (95% CI: 24.5-26.9), of AGYW in Ethiopia were undernourished. This is in line with the 26.4% reported in southern Ethiopia [37]. The finding is higher than the 14.4% reported in Gondar, Ethiopia [9], and higher than the 20.7% reported among adolescent girls [39]. A possible explanation for these differences is the difference in study participants: this study covered AGYW, whereas the other studies covered adolescent girls or pregnant mothers. Furthermore, variations in the magnitude of undernutrition may be explained by differences in data sources and sample sizes. The magnitude of undernutrition is lower than in a study conducted in Ethiopia in which 29.2% and 30.4% of adolescent girls were wasted and stunted, respectively [11,12].
Similarly, this estimate is lower than the 43.8% reported among rural pregnant women in eastern Ethiopia [38]. This is most likely due to differences in the source population, as well as in the study period, sampling technique, and sample size. The above studies were conducted as primary research, whereas our study used secondary data sources. The lower prevalence might also be accounted for by the government's ongoing efforts to reduce undernutrition [17,18].
The outcome was attributed to both individual- and community-level factors. Younger age, individual media exposure, an unprotected drinking water source, region, and residence were significantly associated with undernutrition among AGYW after adjusting for individual- and community-level variables. The current study identified that AGYW aged 15-19 years were 1.53 times more likely to be undernourished than those aged 20-24 years. This is supported by studies in Ethiopia in which early adolescents were twice as likely to develop undernutrition as late adolescents [40]; the same holds true in India [41] and Nigeria [42]. This might be due to the synergistic effect of rapid growth during puberty, when peak height velocity occurs [43]. In addition, as age increases, linear growth is marked by the lengthening of long bones at the growth plate, followed by epiphyseal closure when growth is completed [44]. Moreover, younger girls often have little power in household decision-making about food. This indicates that nutritional strategies and political support for nutritional programs, focused on the lower age group, are essential for improving nutritional status.
AGYW with media exposure had 18% lower odds of being undernourished than those without, in line with another study in Ethiopia [44]. Those with access to media (radio, newspapers, and television) can obtain information about a balanced diet, the importance of a variety of foods, and health programs. Thus, increasing media exposure to health programs has a role in reducing undernutrition among AGYW.
The odds of undernutrition among AGYW in SNNPR were 67% lower than in Dire Dawa city. This agrees with another study in Ethiopia [45] and might be due to differences in socio-demographic and economic status and the availability of diverse foods in SNNPR.
This study revealed that the odds of undernutrition among rural AGYW were nearly 70% higher than among urban residents, in line with studies in Ethiopia, Ghana, and Tanzania [7,[46][47][48]. A possible reason is educational status: most rural residents are illiterate, which is associated with the inaccessibility of health information. In addition, food security in rural areas depends on natural and human resources that are vulnerable to change, including rain and weather patterns, agricultural knowledge, and human capital [49]. Nutritional education, such as on a balanced diet, and food security measures are therefore needed for rural AGYW.
The current study identified that AGYW who used an unprotected drinking water source were 1.24 times more likely to be undernourished than those who used a protected source, in line with other studies conducted in Ethiopia [7,11]. An unprotected source of drinking water can transmit communicable diseases, bacteria, and intestinal parasites, resulting in micronutrient depletion and ultimately leading to undernutrition [50,51].
Strengths and limitations of the study
The strengths of this study include its large sample size and nationally representative data. The study also employed a multilevel modeling technique to obtain more valid results that take the hierarchical nature of the survey data into account. Furthermore, the DHS methodology allows for comparison with other settings. However, because the data are cross-sectional, temporal (causal) relationships cannot be established.
Conclusions
One quarter of Ethiopian AGYW were undernourished. Age 15-19 years, individual media exposure, region, rural residence, and an unprotected drinking water source were significantly associated with AGYW undernutrition. Therefore, considering the intergenerational effect of undernutrition, the Federal Ministry of Health (FMOH) should increase media exposure, particularly for rural residents, and the Ethiopian government should better engage this age group in different aspects of the food system. Moreover, improving access to protected water sources to enhance the safety of drinking water is an important intervention.
Optimal combinations of acute phase proteins for detecting infectious disease in pigs
The acute phase protein (APP) response is an early systemic sign of disease, detected as substantial changes in APP serum concentrations, and most disease states involving inflammatory reactions give rise to APP responses. To obtain a detailed picture of the general utility of porcine APPs for detecting any disease with an inflammatory component, seven porcine APPs were analysed in serum sampled at regular intervals in six different experimental challenge groups of pigs, comprising three bacterial (Actinobacillus pleuropneumoniae, Streptococcus suis, Mycoplasma hyosynoviae), one parasitic (Toxoplasma gondii) and one viral (porcine reproductive and respiratory syndrome virus) infection and one aseptic inflammation. Immunochemical analyses of the seven APPs, four positive (C-reactive protein (CRP), haptoglobin (Hp), pig major acute phase protein (pigMAP) and serum amyloid A (SAA)) and three negative (albumin, transthyretin, and apolipoprotein A1 (apoA1)), were performed on the more than 400 serum samples constituting the serum panel. This was followed by advanced statistical treatment of the data using a multi-step procedure, which included defining cut-off values and calculating detection probabilities for single APPs and for APP combinations. Combinations of APPs allowed the detection of disease more sensitively than any individual APP; the best three-protein combinations were CRP, apoA1, pigMAP and CRP, apoA1, Hp, closely followed by the two-protein combinations CRP, pigMAP and apoA1, pigMAP. For the practical use of such combinations, methodology is described for establishing individual APP threshold values above which, for any APP in the combination, ongoing infection/inflammation is indicated.
Introduction
The acute phase protein (APP) response is an innate reaction towards tissue injury and follows rapidly (6-12 h) after onset of any disease compromising tissue homeostasis, for example infections, trauma, inflammation with various etiologies and some tumors. It can also be induced by injection of microbial molecules (peptidoglycan, lipopolysaccharide) and pro-inflammatory cytokines [1][2][3]. The APP response involves substantial changes in the serum concentrations of numerous proteins, mainly as a result of changes in their hepatic synthesis rates, although other organs and tissues also show local APP responses [4]. The APPs are typically present in the μg/mL to mg/mL range in serum and plasma from an affected individual.
The APP response is thus a robust indicator of disease and is easily measurable in a blood sample. It has been used as a very useful diagnostic tool for many years, especially in human medicine [5][6][7] but also increasingly in veterinary medicine [8], especially for companion and sports animals [9,10]. We have characterized the APP response in pigs undergoing experimental infections [11][12][13][14][15], and the correlation between APP concentrations and disease in pigs in herds of different health status has also been studied [16][17][18][19][20][21][22]. It is well established that some APPs react to a lesser extent than others to the same infection/inflammatory stimulus [8], reflecting different induction sensitivities of different APPs. Conversely, APPs may also react differently to different types of stimuli, as reported for SAA (serum amyloid A), Hp (haptoglobin) and alpha-1-acid glycoprotein in cattle, which were all found to be more sensitive indicators of acute than of chronic inflammation [23,24]. Other examples include porcine Hp reflecting the extent of lung damage in respiratory diseases more closely than CRP (C-reactive protein) [25], pigMAP (pig major acute phase protein), which was reported not to react to infection with PRRSV (porcine reproductive and respiratory syndrome virus) in naturally infected pigs [18], and increased Hp being associated with lesions due to enzootic pneumonia (caused by Mycoplasma hyopneumoniae) but not with lesions due to pleuritis (caused by Actinobacillus pleuropneumoniae) at slaughter [16]. The use of more than one APP to increase the sensitivity of disease detection was indicated in some of these studies [18,25] and generalized in the suggestion by Gruys et al. [26] of using both rapidly and slowly reacting APPs for the detection of disease with increased sensitivity.
Although much useful information has been gained from these studies, here we seek to answer the question of which combination of pig APPs can be used generally to give the most sensitive detection of ongoing disease in the pig. The aim of the study was therefore to define combinations of APPs that can be tested for use in real life monitoring of infections in pig herds where, typically neither the nature of infection nor the course of the infection is known. As explained above, the threshold for initiating an APP response varies between proteins and diseases, as does the speed at which an acute phase reaction resolves. In addition to this, the magnitude of the reaction is also dependent on the APP/disease combination. We therefore studied the acute phase response of a number of specific pig APPs to several relevant infectious agents as well as to aseptic inflammation in experimental groups of pigs. The choice of APPs was based on the following criteria: the acute phase changes of the APPs should be well described, substantial and reproducible, and reliable assays should be generally available for their measurement in serum. Furthermore both positive and negative APPs should be included.
The serum panel employed consisted of more than 400 samples, obtained from three bacterial, one viral, and one parasitic infection and one aseptic inflammation group, including infections with the prevalent pathogenic agents Actinobacillus pleuropneumoniae (A.p.), Streptococcus suis (S. suis), Toxoplasma gondii (T. gondii), Mycoplasma hyosynoviae (M. hyos.), porcine reproductive and respiratory syndrome virus (PRRSV) and as a model inflammation, pigs injected aseptically with turpentine.
Immunochemical analyses of seven different acute phase proteins, four positive (CRP, Hp, pigMAP and SAA) and three negative (albumin (Alb), transthyretin (TTR), and apoA1) were performed on all samples using the best available assays in four different European laboratories. Advanced statistical treatment of the data was performed in a two-step procedure, including defining cut-offs for a positive reaction and calculating detection probabilities for single APPs as well as for all possible APP combinations in order to select the combination of APPs that was most sensitive for detecting any of the infections/inflammation by strictly objective criteria.
By selecting APPs that complement each other during the progression of an infection, we aimed to construct a measure of infection that is more sensitive than any individual APP over a wide period of disease progression.
Animal groups
Test serum samples were obtained at consecutive time points from a number of well-controlled and well-characterized experimental infection experiments, as well as from a group of pigs undergoing induced aseptic inflammation. Serum samples were obtained prior to infection/inflammation and at early, intermediate and late time points during its course. Sampling and experimental groups were defined as follows: pigs should be more than one month of age, the whole infection period (pre-infection, during infection, post-infection) should be covered by the sampling, a statistically adequate number of samples/animals should be included for each infection, data on clinical signs and pathology should be available, and samples from subclinically infected and virus-infected pigs should be included.
All Danish pigs were from specific pathogen free (SPF) herds. Breeds were Danish Yorkshire/Danish Landrace (for the S. suis and T. gondii groups, see below) or crossbreds between Yorkshire/Landrace sows and Hampshire/Duroc boars (A.p. and M. hyos. groups), and the age was from 4-5 weeks upwards (see below). Spanish pigs, used for the inflammation group, were crossbreds between Large White, Landrace and Pietrain pigs and were 20 weeks of age. Before inoculation, pigs were acclimatized for at least 1 week in isolation units in groups of 3-6 animals with free access to water, and were fed commercial feed without antibiotic growth promoters. All animal experiments were conducted in accordance with local legislation (Danish Animal Experiments Inspectorate and the Ethical Committee for Animal Research at the University of Zaragoza) and were executed according to best practice, with veterinary supervision to avoid unnecessary suffering.
Inflammation: Five pigs at 20 weeks of age were subjected to aseptic inflammation by s.c. injection of 0.3 mL of turpentine/kg body weight, distributed equally on each side of the neck [12], and blood samples were obtained at 0, 12 h, 24 h, 36 h, 48 h, 3 days, 4 days, 7 days, 10 days and 14 days pi.
Blood samples
All blood samples were collected without anti-coagulant and allowed to clot (at room temperature for 2 hours or at 4°C overnight), before retrieval of serum by centrifugation. Serum samples were stored below -20°C until use.
Acute phase protein assays
The serum panel was blind-tested for the concentrations of the positive APPs CRP, pigMAP, Hp and SAA and the negative APPs apoA1, TTR and albumin, using immunoassays.
Briefly, albumin, pigMAP and ApoA1 were determined by radial immunodiffusion [32] in 1% agarose gels containing specific rabbit polyclonal antisera, using a porcine serum as a secondary standard. The concentrations of these proteins in the secondary standard had previously been determined by radial immunodiffusion using the purified proteins as standards [12,33,34]. Intra- and inter-assay coefficients of variation were below 5%.
The concentrations of CRP, TTR, SAA and Hp were measured by ELISA. Serum CRP was measured as described in Sorensen et al. [15]. Microtiter plates were coated with phosphoryl choline-coupled BSA (BSA-CP) and blocked with milk powder in saline [35]. Samples and standards were diluted in 50 mM Tris, 0.9% NaCl, 10 mM CaCl2, 0.1% Tween 20 (TBS-CT buffer), and bound CRP was detected using an in-house anti-pig CRP monoclonal antibody, followed by a peroxidase-labelled goat anti-mouse antiserum (Jackson ImmunoResearch Laboratories). All washings and additions of secondary reagents were done in TBS-CT buffer. The ELISA was developed using 50 mM citric acid, pH 4.0, 0.1 mM ABTS (2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid)), 0.01% H2O2 as a color substrate, and the absorbance was read at 405 nm. For the measurement of TTR [11,15], microtiter plates were coated with serum samples and purified human TTR as standard (Sigma-Aldrich, Poole, UK) and blocked with non-fat dried milk in assay buffer (0.12 M NaCl, 0.02 M Na2HPO4, 0.1% (v/v) Tween 20, pH 4.0). Bound TTR was detected with sheep anti-human TTR antiserum, followed by a peroxidase-conjugated anti-sheep IgG (Sigma, Poole, UK). All washing and detection steps were performed using assay buffer. The ELISA was developed using TMB substrate solution and the absorbance was read at 450 nm. Porcine Hp was analysed by sandwich ELISA essentially as described before [15], using an in-house monoclonal antibody against porcine Hp as the capture antibody. A pool of pig serum calibrated against a porcine Hp standard from Saikin Kagaku Co. Ltd. (Japan) was used as the in-plate standard. Samples were run in duplicate and the absorbance was read at 490 nm with subtraction of the 650 nm reading. Finally, the concentration of SAA in the samples was assessed with a sandwich ELISA from Tridelta (Tridelta Development Ltd, Bray, Co. Wicklow, Ireland) in accordance with the manufacturer's instructions.
Statistical treatment of data
(Detailed information on this part of the work may be obtained from the authors.)
The aim of the statistical treatment was to establish a measure of the ability of single APPs, as well as of any combination of APPs, to detect ongoing infection/inflammation. To do this, detection probabilities, based on cut-off values calculated for each APP, were computed and evaluated for each of the five treatment groups separately (sections "Univariate analysis for calculation of single APP detection probabilities" and "Multivariate analysis for calculation of combined APP detection probabilities"), and for all of these weighted together in a performance index for ongoing unknown infection/inflammation (section "A global performance index for unknown infection/inflammation").
Univariate analysis for calculation of single APP detection probabilities
Pre-treatment APP concentrations as derived from the experimental data were used to estimate cut-off values for each APP, i.e. the maximum (minimum for ApoA1, albumin and TTR) value that the APP concentration is expected to attain within a standard, one-sided 95% confidence interval in an animal not undergoing an infection. This was done by approximating the observed pre-infection values with a normal distribution having (estimated) mean μ and variance σ2 for each APP (except SAA, see Results). For a positive APP the cut-off value is then μ + (1 + 1/n)cσ, where c is the 0.95 percentile of the standard normal distribution and n is the number of animals in the data used to estimate μ and σ2. For a negative APP (i.e. ApoA1, albumin and TTR), the cut-off is μ − (1 + 1/n)cσ. An animal is then classified as undergoing infection/inflammation if the concentration of a given positive APP measured in a sample from this animal is above the cut-off value for the APP in question (and below it for a negative APP). This classification does not indicate anything about the nature or the time course of the infection/inflammation.
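A minimal sketch of this cut-off rule, under the normal approximation described above (function and argument names are ours):

```python
import numpy as np
from scipy.stats import norm

def app_cutoff(pre_values, positive=True):
    """One-sided 95% cut-off from pre-treatment measurements of one APP:
    mu + (1 + 1/n) * c * sigma for a positive APP, mu - ... for a negative."""
    x = np.asarray(pre_values, float)
    n, mu, sigma = x.size, x.mean(), x.std(ddof=1)
    margin = (1 + 1 / n) * norm.ppf(0.95) * sigma
    return mu + margin if positive else mu - margin
```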
In principle, the distribution of pre-infection values for any given APP should not vary between the treatment groups. This, however, proved not to be the case here, and specific cut-off values therefore had to be calculated for each treatment group.
Cut-off values allowed the calculation of detection probabilities. The detection probability is defined as the probability, based on APP data, of classifying an animal as undergoing infection/inflammation given that the animal is actually undergoing infection/inflammation as defined by the experimental conditions. Thus, the detection probability equals the sensitivity of the measurement(s) of the given APP in revealing ongoing infection/inflammation, using the experimentally defined infection/inflammation status of the animal as the "gold standard". For an APP measurement at time t after the start of the infection/inflammation, having mean μt and variance st2, the detection probability rt for a positive APP is simply the probability mass of the corresponding normal density above the cut-off value, i.e. rt = 1 − Φ((cut-off − μt)/st), where Φ denotes the standard normal cumulative distribution function.
For negative APPs the expression is modified in the obvious way, taking the probability mass below the cut-off instead.
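In code, the detection probability at time t is just the normal tail mass beyond the cut-off; a sketch under the stated normal approximation:

```python
from scipy.stats import norm

def detection_probability(mu_t, s_t, cutoff, positive=True):
    """r_t = P(measurement beyond the cut-off | infected at time t)."""
    if positive:
        return norm.sf(cutoff, loc=mu_t, scale=s_t)   # mass above the cut-off
    return norm.cdf(cutoff, loc=mu_t, scale=s_t)      # mass below (negative APP)
```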
Multivariate analysis for calculation of combined APP detection probabilities
A combined measurement of more than one type of APP is expected to yield higher detection probabilities. To quantify these gains in sensitivity, combined detection probabilities needed to be calculated for all combinations of two or more APPs. Albumin and TTR were disregarded due to inappropriate data structures for these two APPs (see Results), and thus calculations were based on Hp, CRP, ApoA1 and pigMAP only, using multivariate analysis of pre-infection measurements of these four APPs (using minus the pre-infection measurements for the negative APP ApoA1). As these measurements showed a clear pig effect, they were approximated with a multivariate normal distribution in which the variance-covariance parameters were allowed to vary freely. The interdependence of APP concentrations was remedied by establishing a theoretical transformation making the APP concentrations statistically independent. This was done by arbitrarily sequencing the APPs as CRP first, then Hp, ApoA1 and pigMAP, and then adjusting each APP measurement by the difference between its true mean and its conditional mean given the measurements of the preceding APPs in the sequence. For one-dimensional normal variables X and Y with means μx, μy, variances σx2, σy2 and correlation ρ, these operations correspond to subtracting (ρσx/σy)(Y − μy) from X. For any subset of the four APPs, a similar technique was applied, simply by deleting the APPs not included in the subset from the sequence. This transformation rendered all covariances between APP measurements zero while retaining the original means, and allowed the application of standard techniques for independent stochastic variables. Combined cut-off values were then calculated for each APP in a combination so as to keep the combined probability that any APP in the combination shows a value beyond its combined cut-off below 5%, given that the animal is not undergoing infection/inflammation. Thus, with Yj denoting the j'th transformed APP measurement, with (estimated) mean μj and variance σj2, the decision rule for classifying the animal as 'infected' on the basis of a subset J of the four APPs is that for at least one j in J: Yj > μj + cσj, where μj + cσj is the combined cut-off value for APP j. If J is all four APPs, c equals 2.23; for J consisting of two or three APPs, c equals 1.95 and 2.12, respectively.
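The quoted critical values are recovered by requiring the joint pre-infection non-exceedance probability of k independent APPs to be 95%; the decorrelation step is sketched for two variables. Both functions are illustrations of the described procedure, not the authors' code.

```python
import numpy as np
from scipy.stats import norm

def combined_c(k: int, alpha: float = 0.05) -> float:
    """Per-APP critical value such that P(any of k independent APPs
    exceeds its cut-off | healthy) = alpha."""
    return norm.ppf((1 - alpha) ** (1 / k))

[round(combined_c(k), 2) for k in (2, 3, 4)]  # [1.95, 2.12, 2.23], as stated

def decorrelate(x, y):
    """Two-variable case of the sequential transformation:
    x - (rho * sd_x / sd_y) * (y - mean_y), leaving cov(x', y) = 0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    rho = np.corrcoef(x, y)[0, 1]
    return x - rho * (x.std(ddof=1) / y.std(ddof=1)) * (y - y.mean())
```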
Finally, the combined detection probabilities were corrected for the fact that the independence-giving transformation was estimated through the estimates of variances and covariances from the multivariate normal approximation of data. The transformation thus deviates from the theoretical transformation, making the transformed APP concentration measurements only approximately stochastically independent. Also, there was no longer a fixed number of animals used to estimate the variance, as the different treatment groups had different numbers of animals. To deal with these two issues, 50 000 sets of 4 APP measurements for the same number of animals as in the experiment were simulated from the estimated multivariate distribution, and transformed as described above, using the transformation derived from the empirical variance-covariance of the simulated data. The value of c was then adjusted so that 5% of the simulations were classified as "infected". Based on the simulation study, we adjusted the c value for the AP4 data with a factor of 1.18 while Mycoplasma, Strep. suis, Toxoplasma and Inflammation did not require adjustment of the c value. Combined detection probabilities were then estimated for all time points in the study and for all combinations of the four APPs through simulation studies (10 000 replications per time point and set of APPs).
A global performance index for unknown infection/ inflammation
In practice, it may not be known which, if any, infection/inflammation is present, or at which stage. The detection probabilities for single APPs and for APP combinations computed above all relate to a specific infection/inflammation. To generalise this into an estimate of the overall/global detection probability of a given APP or APP combination, a global performance index was constructed, scoring the overall ability of any APP or APP combination to detect the five types of infection/inflammation considered in this study. This was based on the assumptions that each of the five infection/inflammation types was equally likely to be the one producing significantly changed concentrations of any of the four APPs, and that a sample from any time point in the study period had the same probability of representing an animal undergoing infection/inflammation. We considered the detection probabilities as a function of time from infection/inflammation, and extrapolated the function linearly between time points where data were available and out to the end of the study period (see Additional file 1, Figure S2; note that the end of the study period is defined as a point after the ultimate time point equalling half the distance between the ultimate and the penultimate time points, to put similar weights on all observations). At time 0 (time of infection/start of inflammation), the probability of incorrect detection was set to 0.05, to conform to the 95% confidence limit used for constructing detection probabilities. For a subset J of the four APPs, the performance index IJ is then the probability of detection at a uniformly random time point within the study period, averaged over the five groups: IJ = (1/5) Σg (1/Tg) ∫ rJ,g(t) dt, with the integral taken over the study period [0, Tg] of group g, where rJ,g(t) is the detection probability of the combination J in group g at time t.
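Given piecewise-linear detection probability curves, the index reduces to trapezoidal areas averaged over the groups; a sketch matching the formula above (inputs assumed already extended to the end of each study period as described):

```python
import numpy as np

def performance_index(curves):
    """curves: list of (t, r) pairs, one per treatment group, where t are
    sampling times and r the detection probabilities (r[0] = 0.05 at t = 0).
    Returns the time- and group-averaged detection probability I_J."""
    scores = []
    for t, r in curves:
        t, r = np.asarray(t, float), np.asarray(r, float)
        auc = np.sum((r[1:] + r[:-1]) / 2 * np.diff(t))  # trapezoidal rule
        scores.append(auc / (t[-1] - t[0]))
    return float(np.mean(scores))
```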
Disease development in the experimental groups
The inoculation strain was re-isolated from relevant tissue in all infected animals, and macroscopic lesions typical of the infection in question were found in all animals upon necropsy, except one in the S. suis group and one in the A.p. group. With the exception of the PRRSV group, general clinical signs such as fever and loss of appetite were observed in all groups, as were specific clinical signs in the relevant groups, such as lameness (S. suis, M. hyos.) and coughing and sneezing (A.p.). In short, clinical signs showed up early (within the first 24 h after inoculation) in the S. suis, A.p. and inflammation groups and later in the M. hyos. (most animals at days 9-12 pi) and T. gondii (day 6 pi) groups. In the PRRSV group all animals were asymptomatic even though they all presented with PRRSV viraemia at days 3-4 pi; absence of clinical signs is typical of PRRSV-infected animals of this age (21 weeks). For ethical reasons most of the A.p.-infected animals (11 of 12) were treated with antibiotics at 27 h. As the study objective was to establish the best APP combination for indicating ongoing infection/inflammation, irrespective of clinical signs, all animals were included, even though the severity of both clinical signs and pathology varied between individual pigs.
Acute phase protein response kinetics
Pre-challenge concentrations and the derived cut-off values for Hp differed between the experimental groups (see Table 1), while the within-group animal-to-animal variation was no larger than for the other APPs. Very low pre-challenge concentrations of Hp were seen in the A.p. and M. hyos. groups, while the T. gondii, S. suis and inflammation groups showed much higher pre-challenge values and, consequently, higher cut-off values. This indicates that pre-challenge conditions, such as the age, origin, sanitary/microbial and stress status, and housing of the pigs, had a large effect on Hp concentration. This effect was not as pronounced for the other APPs, although pre-challenge effects were also clearly detectable for CRP (Table 1). Notably, while the inflammation and S. suis groups showed affected pre-challenge concentrations for all proteins, the A.p. group showed a specific pre-challenge elevation of CRP and the T. gondii group a specific elevation of Hp.
As seen in Figure 1a, clear-cut responses of the positive acute phase proteins CRP, pigMAP, Hp and SAA were observed after infection with A.p., S. suis and T. gondii and after aseptic inflammation. Inflammation led to a very quick response, while responses were gradually slower in the order S. suis, A.p. and T. gondii. In addition, some variation between the responses of individual APPs was evident. This was most clearly seen with SAA, which was an "all-or-none" responder, with a large proportion of samples not having SAA above the detection limit and with a very short-lived response. There were also subtle differences between the reactions of CRP, pigMAP and Hp to the different infections; for example, CRP reacted more quickly than pigMAP in S. suis-infected pigs and, as mentioned above, was the only APP induced by PRRSV. Hp and CRP also reacted more strongly to aseptic inflammation than did pigMAP, while Hp showed a lower response to A.p. than to S. suis and T. gondii, to which CRP and pigMAP showed more similar responses (see Figure 1a). Among the negative APPs (Figure 1b), ApoA1 was the only protein showing a clear, transient decrease, occurring rapidly for inflammation, S. suis and A.p. and later for T. gondii. The responses of the negative APPs albumin and TTR were relatively weak, with large between-animal variation. TTR did show a weak transient decrease with S. suis and M. hyos. but was not affected by T. gondii, PRRSV or A.p. infection; during inflammation the serum concentration of TTR decreased rapidly and stayed depressed throughout the experiment. Albumin appeared to decrease in the course of A.p. infection but did not react to any of the other infections.
Data from the PRRSV-infection experiment were excluded from further analyses because this infection evidently failed to induce any APP apart from CRP (see Figure 1) and therefore could not contribute to defining the optimal APPs and APP combinations. The full set of data for individual animals is shown in Additional file 1, Figure S1.
Detection probabilities for single APPs and APP combinations
Estimated detection probabilities for each infection/ inflammation group are listed for CRP, Hp, pigMAP and ApoA1 and for all combinations of these in Table S1 (Additional file 1) and all single APP detection probabilities are shown in Figure S3a (Additional file 1). Albumin and TTR were not included due to their large between-animal variation and inconsistent responses, and SAA was also excluded, as calculation of a statistically meaningful cut-off value for this protein was not possible due to the pre-infection concentrations being below the detection limit of the assay and thus having zero variance.
An example of the correlation between detection probabilities, actual APP concentrations and the clinical phase during infection is shown for haptoglobin in Figure 2 for the S. suis and T. gondii groups. As can be seen, the detection probabilities quite accurately reflect the much narrower clinical phase of the T. gondii infection compared with the S. suis infection.
Combined detection probabilities for APP combinations yielded a broader window of detection for all of the infection/inflammation groups (see Figure S3, Figure S4 and Table S1 (Additional file 1)). Figure 3 shows examples of detection probabilities going from one-dimensional to multivariate, showing the worst and the best one-protein APP and the best two-, three- and four-protein APP combinations for all experimental groups. For A.p., there was not much difference between using the best single APP and using any of the optimal APP combinations, while for the inflammation group there was always an effect of increasing the number of APPs (Figure 4a). Figure S3b (Additional file 1) shows that for M. hyos. there was a big gain in going from one to two APPs but not much gain in increasing from two APPs to three or four, and that for S. suis there was an effect of using two APPs instead of one but little effect of increasing the number to three, although increasing it all the way to four APPs did have an effect. In the same figure it is seen that for T. gondii there was no effect of increasing to two APPs, but there was an effect of increasing the combination to three or four APPs. The full set of detection probability curves for all APP combinations is shown in Additional file 1, Figure S4.
Overall performance of APPs and APP combinations (global performance index)
The global performance indexes are listed in Table S2 (Additional file 1) and shown in Figure 4b. As the detection probability was fixed at 0.05 at time 0, the values in Table S2 (Additional file 1) should be compared with the upper bound of 0.935, which is due to the study design. The average effect of increasing the number of APPs was reflected in the increase in the global detection index for the best single, two-, three- and four-APP combinations, which were 0.63, 0.81, 0.84 and 0.89, respectively (graphically depicted in Figure 4b; also see Additional file 1, Table S2). This global performance of APPs and APP combinations was calculated by summing areas under the detection probability curves for all infection/inflammation groups, for each APP and APP combination, as described above. The best two-protein combination was CRP and pigMAP (0.81), closely followed by apoA1 and pigMAP (0.78), while the best three-protein combination was CRP, apoA1 and pigMAP (0.84), with the four-APP combination only slightly better (0.89). Thus, both of the best three-APP combinations and the four-APP combination were only marginally better than the CRP, pigMAP combination.
Discussion
The data reported here give a wealth of information on the response of different APPs to different infections in the pig, complementing earlier studies in experimental models [11][12][13][14][15] and suggesting ways of using APPs for monitoring infections when both the type of infection(s) and the infection starting points are unknown. The aim was to define the combination of APPs detecting any infection/inflammation with the highest possible sensitivity. The sensitivity of a given APP or APP combination for the general detection of infection/inflammation will depend on the generality of the response of the APP(s) in question (the consistency of the response in a high proportion of animals exposed to a range of different, relevant types of infections and inflammatory states), the kinetics of the response (the rapidity, peak time and extent of the response) and the between-animal variation (the extent to which the significance of the response is affected by variations in pre-infection levels and in response levels between individual pigs).
The approach taken here is general, not incorporating clinical data or taking biological differences between different infections into account, although they clearly give rise to different APP responses. The idea is that it would be beneficial if the APP measurement could also indicate subclinical infection (APPs have the potential to do just that, see for example Karreman et al. [36], Sorensen et al. [15] and Gerardi et al. [37]). It is also assumed that any combination of APP measurements giving a maximum detection probability as defined here, i.e. with no reference to occurrence of clinical signs and/or pathological changes, will also be the most globally sensitive combination for demonstrating any infection/inflammation, be it clinical or subclinical. In addition, the experiments included here are not comparable with respect to frequency and level of detail in recording clinical signs. From this it follows that data were treated under the assumption that all animals included in an experimental group were subject to the same course of infection/ inflammation. Accordingly, the results of the calculations do not indicate to which extent the APPs can differentiate between individual animals being differently affected by the infection/inflammation (as e.g. indicated by differences in clinical responses). In other words, differences in reaction to the (same) stimulus by the individual pigs were incorporated into the calculations and accounted for the majority of the variations in the treatment groups.
The experimental groups covered a broad and relevant range of infections, and data on CRP, Hp, apoA1 and pigMAP concentrations were included. Albumin and TTR concentrations showed large animal-to-animal variation and negligible detection probabilities (not shown) and were thus excluded from further study, and for SAA a cut-off value could not be defined as its pre-infection serum concentration was below the detection limit of the assay (6 μg/mL). Data from the PRRSV group were excluded from the analysis as only CRP showed any substantial response in this group.
The statistical treatment of data comprised a two-step procedure first defining cut-off values for the individual APPs and then deriving detection probabilities for single APPs and for combinations of APPs by multivariate analysis, both of these for each experimental group. As can be seen in Figure 3, the detection probability curves for the single worst (apoA1) and single best (pigMAP) performing APPs and for the best two-APP, three-APP and the single possible four-APP combination, clearly show that detection sensitivities for most challenge groups are much improved when increasing the number of APPs. To generalize this, a measure of overall (global) detection sensitivity for all of the experimental groups involved was constructed based on the summed area under the curve averaged over all of the infections in order to compare the global performance indexes for the different APP combinations. This measure gives the average probability of detecting, using the APP combination in question, any of the infections with all 5 infections equally probable.
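To make the construction of this global index concrete, the following minimal Python sketch computes a normalized average area under the detection-probability curves; the sampling days and probability values are illustrative placeholders standing in for the curves of Figure S4, not the study's data:

```python
import numpy as np

# Illustrative detection-probability curves: one array per infection/inflammation
# group, sampled at the same post-challenge time points (days). All values here
# are invented placeholders.
times = np.array([0.0, 2.0, 4.0, 7.0, 14.0, 21.0])
curves = {
    "A.p.":         np.array([0.05, 0.60, 0.90, 0.95, 0.70, 0.40]),
    "S. suis":      np.array([0.05, 0.40, 0.80, 0.85, 0.60, 0.30]),
    "inflammation": np.array([0.05, 0.70, 0.95, 0.90, 0.75, 0.50]),
}

def global_performance_index(curves, times):
    """Average, over all groups, of the area under the detection-probability
    curve; normalizing by the observation window keeps the index in [0, 1]."""
    window = times[-1] - times[0]
    aucs = [np.trapz(p, times) / window for p in curves.values()]
    return float(np.mean(aucs))   # all infections treated as equally probable

print(round(global_performance_index(curves, times), 2))
```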
This evaluation showed that APP combinations allowed the detection of disease more sensitively than any individual APP (the best individual APP being pigMAP (0.63)). The global performance indexes for the best two-APP combination, the best three-APP combination and the four-APP combination were within a close range (0.81, 0.84 and 0.89, respectively) and close to the upper limit of the index (0.935). Indeed, it seems worthwhile to consider the two-protein combinations (especially CRP, pigMAP (0.81) and apoA1, pigMAP (0.78)), performing almost as well as the best three-protein combination. The benefit of choosing the best three-APP combination is that it includes both negative and positive APPs. The Hp, pigMAP combination had a similar global detection probability index (0.77); however, it might be advisable to avoid Hp as its cut-off differed widely between the different treatment groups. Clearly, pre-challenge history (age of pigs, origin and sanitary/microbial status of pigs, stress and housing) had a bigger effect on Hp than on the other APPs investigated. This confirms data reported on pig Hp in different pig herds [20], showing higher variability than CRP [16,17] and pigMAP [21]. Although cut-off values for CRP also varied considerably between experimental groups, this was mostly due to one group having very low pre-challenge levels (M. hyos.).
While providing suggestions for which APPs to combine for sensitive detection of infection and inflammation in pigs, no generally applicable cut-off values (neither single-APP nor combined cut-off values) can be derived from this study. However, a method is provided for calculating combined cut-off values for each APP in an APP combination (see section "Multivariate analysis for calculation of combined APP detection probabilities"), based on pre-infection concentrations. Evidently, this favours the use of APPs that show little variation between animals (by increasing the detection probabilities for the APP in question) and APPs that show little variation between groups (or herds), by increasing the probability that a given cut-off value calculated from the pre-infection data from a collection of relevant samples is indeed applicable to the set of samples being evaluated.
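As a hedged illustration of how such combination cut-offs could be derived from pre-infection data, the sketch below assumes independent APPs and a Šidák-type adjustment so that the joint pre-infection (false-positive) detection probability stays at 0.05; the actual multivariate procedure of the study may differ, and negative APPs such as apoA1 or albumin would use the lower quantile with a below-cut-off decision rule:

```python
import numpy as np

def combination_cutoffs(pre_infection, alpha=0.05):
    """Per-APP cut-offs for a k-APP combination (rows = animals, columns = APPs,
    all treated as 'positive' APPs here). The Sidak adjustment keeps the chance
    of ANY APP exceeding its cut-off in a non-infected animal at roughly alpha,
    under the (strong) assumption of independence between APPs."""
    k = pre_infection.shape[1]
    alpha_single = 1.0 - (1.0 - alpha) ** (1.0 / k)
    return np.quantile(pre_infection, 1.0 - alpha_single, axis=0)

def flags_infection(sample, cutoffs):
    """Decision rule from the text: signal infection/inflammation if any APP
    in the combination exceeds its combination cut-off."""
    return bool(np.any(sample > cutoffs))

# hypothetical pre-infection concentrations for two APPs (e.g. CRP, pigMAP)
pre = np.random.default_rng(0).lognormal(mean=2.0, sigma=0.3, size=(50, 2))
cuts = combination_cutoffs(pre)
print(cuts, flags_infection(np.array([25.0, 30.0]), cuts))
```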
The approach described here enables the use of the optimal APP combination for sensitive detection of infection/inflammation, by measuring each APP in the preferred combination and observing if any of the APPs are above (or below for negative APPs) their respective combination cut-off values. If specific circumstances make certain APPs more practical and/or advantageous to use than others, the methods presented here can also be used to calculate combination cut-off values for such a set of APPs.
Thus, a decision rule was established for defining an APP serum concentration as indicating infection/inflammation, irrespective of the type of infection or the stage of disease progression; different APP combinations were evaluated and their optimally sensitive combinations were identified. While the methods are general, the results depend on the experimental structure used to obtain the APP data. Furthermore, the data were heterogeneous in the sense that it was not possible to establish a pre-infection distribution independent of infection type for each APP. Thus, for general, practical use, the definition of cut-off values based on relevant pre-infection data is pivotal, necessitating that a relevant group of non-infected animals is available and that herd/management effects can be accounted for, as such effects will also apply to all other pigs in the herd and will vary from one APP to another (Hp being more sensitive to these effects than the other APPs studied, see above) [16,17,20,21]. Thus, although the present study, as well as those of other investigators, for example Parra et al. [18], provides values for normal pre-infection concentrations of a number of useful APPs, it is recommended that group/herd-specific data are always obtained and used to define cut-off values. Such data may be derived by continuous APP surveillance of herds in periods in which the herd is free from disease.
In addition, to further corroborate the conclusions on which APP combinations are generally optimal, future studies should extend to more, relevant infections including different types of (clinical and subclinical) viral infections, infections with helminths and bacterial infections restricted to the mucosal surfaces. This would generate a more complete picture of the possibilities and limitations of the use of APPs for revealing infection and to possibly define APP subsets that are particularly applicable to certain groups of infections and/or situations.
The potential of the method for analyzing APP data from herds in which knowledge of infections is scarce opens up new ways of classifying/certifying pig herds with improved welfare. In addition, this would allow continuous, general screening of herds for health problems, which may be followed up by traditional serological methods if needed. In order to achieve the full potential of this approach, validated and robust APP assays and APP standards need to be generally available.
Additional material
Additional file 1: Supplementary data. All APP concentration data for all treatment groups. All detection probabilities for all treatment groups and for all APPs and their combinations. Performance index for APPs and APP combinations.
In vitro adhesion of Bacillus sp. and Enterobacter sp. probiotics on intestinal epithelial cells of red tilapia (Oreochromis sp.) and the application effects on the fish growth and survival rate
This research aimed to determine the adhesion of Bacillus sp. (PCP1) and Enterobacter sp. (JC10) on intestinal epithelial cells of red tilapia (Oreochromis sp.) and the effect of applying the probiotics in feed on fish growth, survival rate, and feed conversion ratio. The in vitro adhesion test was performed using 10⁸ cells/mL of bacteria and 10⁵ cells/mL of epithelial cells with 1 hour of incubation. The probiotics were added to the fish pellet at a dose of 5 × 10⁴ CFU/g of feed in four treatments: probiotic application every three days, every six days, without probiotic, and commercial probiotic application every three days. Each treatment consisted of three replications. Red tilapia were maintained for 30 days in fiberglass ponds. The feed was given two times per day at a dose of 5% of the biomass. The adhesion experiment results showed that Bacillus sp. (PCP1) and Enterobacter sp. (JC10) have adherence abilities higher than the commercial probiotic. The application of probiotics in tilapia for one month did not affect the fish growth, survival rate, or feed conversion ratio (P > 0.05). Probiotic application over a longer period needs to be addressed.
Introduction
Tilapia is one of the main freshwater fish commodities: it is easy to cultivate, grows relatively fast, and has a high tolerance for the environment [1][2]. Tilapia production in 2014 was 999,695 tons; in 2016 it reached 1.14 million tons and in 2017 1.15 million tons, an increase of 3.6% [3].
Probiotics are live microbes which, when administered in sufficient quantities, have a beneficial effect on the health of the host and can improve the balance of microbes in the digestive tract [4]. Provision of probiotics in feed has been shown to modify the composition of the gut microbiota [5] and to improve feed digestibility [6], feed efficiency [7], and the growth and immunity of tilapia [7][8]. Probiotics are bio-friendly agents that can improve intestinal health, growth, and fish production through activation of nutrient absorption and metabolism [9].
Probiotics can work well if they have good adherence capabilities and can colonize the digestive tract [10]. Bacillus sp. (PCP1) and Enterobacter sp. (JC10) are two candidate fish probiotic bacteria isolated from the digestive tract of fish in Jepara, Central Java [11][12][14]. Both bacteria produce strong proteolytic enzymes, are acid resistant, and are resistant to antibiotics [11]. The optimal dose of probiotics given to tilapia for growth served as a reference for this research: the dose of probiotics for red tilapia that was able to increase growth and lower FCR was 10⁴ CFU/g of feed [12]. Therefore, this study examined the adhesion ability of the bacteria used and the effect of the frequency of probiotic administration on improving the digestive system, as indicated by the growth of tilapia. The selected probiotic bacteria are expected to increase the growth of tilapia through the right frequency of administration.
Bacterial Adhesion Test on Epithelial Cells
The tilapia used weighed 35-40 g. The tilapia samples had previously been fasted for 2 days, so that bacteria in the intestines were minimized. Intestinal epithelial cells of tilapia were obtained by dissecting the fish; the intestines were cut and placed in a sterile Petri dish. The inner surface of the fish intestinal epithelium was scraped using a sterile spatula and suspended in DMEM (Dulbecco's Modified Eagle Medium) supplemented with 10% filtered serum, 100 U/mL penicillin and 100 mg/mL streptomycin. Intestinal epithelial cells were then centrifuged at 1000 rpm for 5 minutes. The supernatant was discarded and DMEM was added. Epithelial cells were counted using a hemocytometer to obtain an epithelial cell density of 10⁵ cells/mL DMEM [12]. Then, the epithelial cell suspension at 10⁵ cells/mL DMEM was plated onto a microplate (125 µL per well) with 3 replications for each bacterium tested, and the plated epithelial cells were incubated overnight in a cell incubator (30 °C; 5% CO2).
After the epithelial cells were incubated, they were tested by the staining method using crystal violet. The supernatant fluid was discarded to remove non-adherent epithelial cells. Isolated probiotic bacteria at a density of 10⁸ cells/mL were added (100 µL) to the microplate wells. The microplate was then incubated at room temperature for 1 hour to allow the bacteria to attach, after which the supernatant liquid was discarded so that non-adherent bacteria were washed away. The microplate was then fixed at 60 °C for 60 minutes with a dry block heater. Each filled well was stained with 0.1% crystal violet at 100 µL per well and left to stand for 45 minutes. Each well was then washed twice with 100 µL of PBS to remove excess stain. Then 100 µL of citrate buffer (20 mmol/L; pH 4) was added as a solvent and incubated for 45 minutes. The absorbance of the solution was read using a microplate reader with the Rapid Test application at a wavelength of 630 nm. The negative control was epithelial cells that were not given bacteria but otherwise received the same treatment, while the positive control was a probiotic preparation.
Application of the probiotic bacteria to Fish Through Feed
Preparation of probiotics was carried out at the Fish and Environmental Health Laboratory of the Department of Fisheries, Faculty of Agriculture, Gadjah Mada University. The study used a completely randomized design (CRD) to determine which treatments differed based on the analysis of variance. There were four treatments, each with three replications. Tilapia were reared in fiberglass tanks, and were acclimatized before the treatments were applied. The treatments were as follows:
P1: feed without the addition of probiotics (negative control);
P2: feed with the addition of probiotics at a dose of 5 × 10⁴ CFU/g every 3 days;
P3: feed with the addition of probiotics at a dose of 5 × 10⁴ CFU/g every 6 days;
P4: feed with the addition of the commercial probiotic Raja Catfish at a dose of 5 × 10⁴ CFU/g (positive control) every 3 days.
The fish rearing tanks measured 50 × 50 × 60 cm, and 12 tanks were used. Before use, each tank was brushed with Baycline solution, rinsed clean and dried in the sun. The tanks were then arranged and filled with water, and each rearing tank was aerated to add dissolved oxygen to the water. Fish were stocked after the tank water was filled and no tank was leaking; each tank was stocked at a density of 32 fish. The fish used in this study were 8-10 cm in size. After the acclimatization process, the fish were treated by giving probiotics with a feed dose of 5% of the biomass. Feed was given twice a day, every morning at 09.00 WIB and in the afternoon at 15.00 WIB.
Probiotic preparation
The bacteria used were Bacillus sp., Enterobacter sp., and the commercial probiotic Raja Catfish. Before mixing the probiotics, the density of bacteria was calculated using the McFarland method. Bacteria were cultured in test tubes containing 10 mL of Tryptone Soya Broth (TSB) medium for 24 hours. One mL of the 24-hour culture was taken with a micropipette, put into a cuvette, and the cuvette was inserted into the spectrophotometer. The blank used was TSB and the wavelength used was 625 nm. The absorbance value was then entered into the McFarland equation, y = 20.955x − 3.6222, where y is the bacterial density and x is the absorbance value [14]. The culture was also kept as stock: the stock was stored in microtubes with a volume of 200 µL, consisting of 100 µL of culture and 100 µL of TSB glycerol, and kept in the freezer.
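A small Python sketch of these calculations follows; the function names are ours, and the density unit returned by the regression is taken to be cells/mL as used in the text:

```python
def mcfarland_density(absorbance_625nm):
    """Bacterial density from OD at 625 nm via the regression used in the
    text: y = 20.955x - 3.6222."""
    return 20.955 * absorbance_625nm - 3.6222

def culture_volume_ml(feed_g, dose_cfu_per_g, culture_cfu_per_ml):
    """Volume of 24 h culture needed so that `feed_g` grams of feed carry
    `dose_cfu_per_g` CFU/g (the culture is then made up with PBS to 10% of
    the feed weight before spraying)."""
    return feed_g * dose_cfu_per_g / culture_cfu_per_ml

# e.g. dosing 100 g of feed at 5e4 CFU/g from the PCP1 culture (2.2e8 cells/mL)
print(culture_volume_ml(100, 5e4, 2.2e8))   # ~0.023 mL of culture
```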
The dose of probiotics used during the study was 5 × 10⁴ CFU/g feed. This dose was studied previously [2], where it was reported to be the best dose compared to other doses (10⁵, 10⁶ CFU/g) or no probiotics. Probiotic bacteria were applied according to the amount of probiotic bacteria needed in the feed: the stock stored in the freezer was cultured in TSB medium for 24 hours, and the volume of bacteria needed was then adjusted based on the amount of feed required.
Mixing probiotics in feed
Mixing probiotics into feed began with weighing the feed according to the needs of each tank; the feed was put in plastic. The cultured probiotics were then adjusted to the required amount and mixed with PBS in a sprayer. The control treatment was given only PBS without probiotics. The amount of PBS was 10% of the weight of the feed; according to previous studies [16][12], the water content in feed ranges from 10-12%. The probiotics were then sprayed onto the feed and stirred evenly to make it homogeneous. After the feed was stirred evenly, it was ready to be given to the test fish.
Observation of growth, survival, total production, feed conversion ratio (FCR), and water quality
Observations of growth, survival, total production, and FCR were carried out based on a previous study [12].
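For reference, the standard definitions of these performance measures are sketched below, following common aquaculture practice; the cited study [12] may use slightly different formulations:

```python
import math

def absolute_growth_g(w_final, w_initial):
    """Absolute weight gain over the rearing period, in grams."""
    return w_final - w_initial

def specific_growth_rate(w_final, w_initial, days):
    """Specific growth rate in % per day, from natural logs of mean weights."""
    return (math.log(w_final) - math.log(w_initial)) / days * 100.0

def survival_rate(n_final, n_initial):
    """Survival rate in %."""
    return n_final / n_initial * 100.0

def fcr(total_feed_g, biomass_gain_g):
    """Feed conversion ratio: feed given per unit of biomass produced."""
    return total_feed_g / biomass_gain_g
```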
Data Analysis
Adhesion data were analyzed by Analysis of Variance (ANOVA) at a 95% confidence level (α = 5%); if there was a significant difference, the Tukey test was used to identify the differences between test treatments, and adherence ability was assessed descriptively against the controls. Data on growth, survival (SR), total production, and Feed Conversion Ratio (FCR) were analyzed by ANOVA at the same confidence level; if there was a significant difference, the Duncan Multiple Range Test (DMRT) was used to determine the differences between treatments. Water quality parameters were analyzed descriptively by comparison with the literature.
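A minimal sketch of this analysis pipeline in Python, using scipy and statsmodels in place of whatever package the authors used; the replicate values are invented for illustration, and Tukey's HSD is shown since DMRT is not available in statsmodels:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented absorbance replicates per adhesion treatment (three replicates each)
groups = {
    "JC10":        [0.220, 0.216, 0.216],
    "PCP1":        [0.204, 0.202, 0.202],
    "RajaCatfish": [0.184, 0.183, 0.183],
    "neg_control": [0.172, 0.171, 0.172],
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

if p_value < 0.05:                          # alpha = 5%, as in the paper
    values = np.concatenate([np.asarray(v) for v in groups.values()])
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```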
Bacterial adhesion to epithelial cells
Based on the data obtained in the adhesion test, the absorbance results of bacterial adhesion to tilapia epithelial cells can be seen in Figure 2. The highest absorbance value was obtained for the treatment with JC10 bacteria (0.2172), followed by PCP1 bacteria (0.2027), the commercial probiotic Raja Catfish (0.1834), and the probiotic positive control (0.1720). The lowest absorbance value was for the negative control, i.e. cells without bacteria (0.1717). The ANOVA test showed a significant difference, and Tukey's test was then used to identify the differences between the test treatments. Based on Tukey's test, the treatment differing most significantly from the control was JC10. Figure 2. In vitro absorbance of adherent bacteria on the epithelial cells of tilapia.
Fish growth performance
The calculated densities of the bacterial cultures of Bacillus sp. (PCP1), Enterobacter sp. (JC10), and Raja Catfish (positive control) used for mixing into the feed were 2.2 × 10⁸ cells/mL, 1.4 × 10⁹ cells/mL and 7.7 × 10⁸ cells/mL, respectively. The bacteria were then diluted and sprayed on the feed at a dose of 5 × 10⁴ CFU/g feed. Feed sprayed with probiotics was given to the fish on the same day.
Fish growth can be seen in Figure 3. Absolute weight growth was highest in the treatment with probiotics every 6 days (17.65 g), followed by the treatment without probiotics (negative control, 16.42 g), the treatment with the Raja Lele probiotic (positive control, 14.9 g), and the treatment with probiotics every 3 days (13.87 g). The ANOVA test showed that the treatments were not significantly different, indicating that the administration of probiotics did not have a significant effect on the absolute weight growth of tilapia. The same can be seen in the specific weight growth: 2.53%/day for probiotics every 6 days, followed by 2.46%/day without probiotics, 2.42%/day for the commercial Raja Lele probiotic (positive control), and 2.28%/day for probiotics every 3 days; again, the treatments were not significantly different. Absolute length growth was 2.7 cm for probiotics every 6 days, followed by 2.46 cm for probiotics every 3 days, 2.44 cm without probiotics (negative control), and 2.43 cm for the Raja Lele probiotic (positive control); the ANOVA test showed no significant differences. Likewise, the specific length growth was 0.85%/day for probiotics every 6 days, 0.81%/day for probiotics every 3 days, 0.8%/day for the Raja Lele probiotic (positive control), and 0.78%/day without probiotics (negative control), with no significant differences between treatments. Thus, the administration of probiotics did not have a significant effect on the weight or length growth of tilapia. Figure 3. The final weight of red tilapia fed with probiotics at different feeding frequencies.
Fish survival rate
Based on the data obtained during rearing, the survival rate of tilapia can be seen in Figure 4. The survival rate was 91.67% in the treatment with probiotics every 6 days, followed by 78.12% for the commercial Raja Lele probiotic (positive control), 76.04% without probiotics (negative control), and 66.67% for probiotics every 3 days. The ANOVA test showed that the treatments were not significantly different, indicating that the administration of probiotics did not have a significant effect on the survival of tilapia.
Total production of fish biomass
Based on the data obtained during rearing, the total production of tilapia can be seen in Figure 5. Total production was 875 g in the treatment with probiotics every 6 days, followed by 837.5 g without probiotics (negative control), 726.96 g for the Raja Catfish probiotic (positive control), and 703.83 g for probiotics every 3 days. The ANOVA test showed that the treatments were not significantly different, indicating that the administration of probiotics did not have a significant effect on the total production of tilapia. Figure 5. Total fish biomass production of red tilapia fed with probiotics at different feeding frequencies.
Feed Conversion Ratio (FCR)
Based on the data obtained during rearing, the FCR of tilapia can be seen in Figure 6. The FCR was 1.56 in both the treatment with probiotics every 3 days and the commercial Raja Lele probiotic (positive control), followed by 1.42 for probiotics every 6 days and 1.29 without probiotics. The ANOVA test showed that the treatments were not significantly different, indicating that the administration of probiotics was not able to reduce the FCR of tilapia. Figure 6. The feed conversion ratio of red tilapia fed with probiotics at different feeding frequencies.
Water quality
Water quality parameters measured during rearing were DO, temperature, and pH; the measured values are summarized in the water quality table.

Both bacterial isolates showed higher adhesion than the probiotic positive control and the negative control (cells without bacteria). This indicates that the probiotic bacterial isolates used can adhere to the epithelial cells/mucus of tilapia. Enterobacter sp. (JC10) showed higher adhesion results than the others. This is because the JC10 bacterial isolate used was sourced directly from the fish digestive tract, so it readily lives on intestinal epithelial cells and has a synergistic relationship in adhesion to fish intestinal epithelial cells. However, the adhesion ability of these bacteria did not significantly affect the growth of tilapia. Comparing the absorbance of adherent bacteria with the bacteria administered in vitro (10⁸ cells/mL), about 1/3.5 of the Bacillus sp. and about 1/2 of the Enterobacter sp. adhered out of the bacteria given. If adhesion in vitro is present but not 100%, then in the application at a dose of 10⁴ CFU/g of feed adhesion is also suspected not to reach 100%. In the adhesion system, the main mechanisms of action of probiotics include increasing the protective function of the epithelium, increasing adhesion to intestinal cells, inhibiting pathogens by occupying adhesion sites, producing antibacterial substances, and regulating immune function [17]. According to a previous study, the attachment of bacteria to intestinal epithelial cells is specific and irreversible (permanent) and is the first step in the colonization process for bacteria [18]. It has also been suggested that the mechanism of bacterial inhibition occurs through competition for attachment sites and for the nutrients needed by pathogenic bacteria to grow [19]. Finally, another factor to be considered in the future is the survival ability of the probiotics in the fish intestine, which also plays an important role in enhancing fish growth [26].
Probiotics can improve growth performance and feed efficiency and minimize mortality. However, the application of probiotics at the frequencies used here (every 3 and every 6 days for one month) did not affect the growth, survival, total production, or FCR of the fish. The present results are similar to previous experiments using Bacillus sp. in the tilapia diet at a dose of 10⁴ CFU/g feed, in which enhancement of fish growth by the probiotics could not be seen on day 30 but only on day 60 [25]. Hence, it is suspected that the observation period in the present study was too short [22][23]. Accordingly, the effects of probiotics on fish growth performance need to be investigated over a prolonged period. Growth occurs only if there is an excess of energy after the available energy has been used for metabolism, digestion, and activities [20]. Based on the cellulolytic and proteolytic enzymatic tests conducted in our previous studies, it was confirmed that Bacillus sp. and Enterobacter sp. have strong enzyme activity [11][12][14]. The results of the present study indicate that this bacterial activity is decreasing: the enzyme activity value is less than 1.5, which indicates weak enzyme activity. It is suspected that the weaker enzyme activity affected the growth rate of the fish, so that growth was not significant. According to other studies, probiotic bacteria work by producing several enzymes that are beneficial for digestion [6][21]. Digestive enzymes relevant to the feed include amylase, protease, cellulase, and lipase; probiotic bacteria can produce such enzymes, which help hydrolyze stored feed nutrients (complex molecules such as carbohydrates, proteins, and fats) into simpler molecules, facilitating the digestion and absorption of feed in the digestive tract. The effect of probiotics on growth also depends on stocking density, feed composition, probiotic concentration, feeding, duration, and the type and source of the probiotics [24]. In addition, growth is influenced by internal and external factors; internal factors largely depend on the condition of the fish's body, for example the fish's ability to utilize the remaining energy and protein after eating.
Conclusion
Bacillus sp. (PCP1) and Enterobacter sp. (JC10) can attach to the intestinal epithelial cells of red tilapia in vitro, but the effects of providing the probiotics in the fish feed at three- and six-day intervals on the fish survival rate and growth performance could not be seen after one month of examination.
Effect of Germination on Proximate, Available Phenol and Flavonoid Content and Antioxidant Activities of African Yam Bean (Sphenostylis stenocarpa)
The proximate composition, available phenol and flavonoid content and antioxidant activities of African yam bean (AYB) before and after germination were investigated. The crude protein, moisture, and crude fiber content of germinated AYB were significantly higher (P<0.05) than those of the ungerminated seed, while the fat, ash and carbohydrate content of the ungerminated seed were higher than those of the germinated seed. Germination increased the phenol and flavonoid content by 19.14% and 14.53%, respectively. The results of the AOA assays showed that the DPPH scavenging, reducing power and FRAP of the germinated AYB seed gave high values: 48.92 ± 1.22 µg/mL, 0.75 ± 0.15 µg/mL and 98.60 ± 0.04 µmol/g, while those of the ungerminated seed were 31.33 µg/mL, 0.56 ± 1.52 µg/mL and 96.11 ± 1.13 µmol/g, respectively. Germinated AYB has phytochemicals with potential AOA for disease prevention.
I. INTRODUCTION
Antioxidants are radical scavengers that inhibit or slow down the oxidation of other molecules by blocking the propagation of oxidizing chain reactions that lead to degenerative diseases such as cancer, inflammation, anaemia, diabetes, neuro-degeneration, cardiovascular disease and ageing [1], [2]. Phenols and flavonoids, which are excellent antioxidants, can scavenge reactive oxygen and nitrogen species, thereby preventing the onset of oxidative diseases in the body.
The use of natural antioxidants is on the increase due to the carcinogenic effects that synthetic antioxidants, such as butylated hydroxytoluene (BHT) and butylated hydroxyanisole (BHA), have on humans [3]. Several researchers have reported that antioxidants of plant origin can protect the human body against oxidative stress [4].
With the aim of improving the bioactive compounds in legumes, preparation techniques have been developed to significantly raise the bioavailability of their antioxidants. Such techniques include germination, during which some seed reserve materials are degraded and used for the synthesis of new cell constituents, causing significant changes in the biochemical, nutritional and sensory characteristics of the modified legumes [5].
Sprouting modifies the phytochemical content into antioxidants that act as protective factors against oxidative damage in the human body [1]. African yam bean is one of the underutilized legumes in Africa, particularly in Nigeria, Togo and Cameroun. This herbaceous climbing vine produces ellipsoid, rounded or truncated seeds, which show considerable variation in size and colour, varying from creamy-white or brownish-yellow to dark brown. Both the seeds and leaves of the plant are edible. The plant also produces tubers, which can be cooked and eaten. They are important sources of starch and protein [6]. There is a scarcity of studies on the effect of germination on the antioxidant activity of African yam bean extract. Therefore, the aim of this work was to evaluate the effect of germination on the proximate composition, total phenol and flavonoid compounds and antioxidant activity of African yam bean.
II. MATERIALS AND METHODS
Dried African yam bean seeds (Sphenostylis stenocarpa) were purchased from Ogbete Main Market in Enugu State, Nigeria. The samples were sealed in plastic and stored before germination.
A. Germination Process
A 300 g sample of African yam bean was soaked in 1 litre of water containing 0.7% sodium hypochlorite solution for 30 minutes at room temperature (28 °C). The water was drained off, the seeds were re-soaked in distilled water for 5 hours, and the water was again drained. The hydrated seeds were placed under wet muslin cloth and left to germinate for 3 days at room temperature (28 °C) without direct contact with sunlight [7]. The sprouted seeds were oven dried (Gallenkamp 1H-100 model, UK) at 60 °C for 4 hours and milled to pass through a 0.18 mm sieve to obtain the flour, which was packaged. The non-sprouted seeds were ground, sieved and packaged; this served as the control.
B. Extraction of the Seed
A 200 g portion of each of the sprouted and non-sprouted flour samples was defatted separately by stirring with 100 mL of 70% acetone at 25 °C for 24 hours and filtering through Whatman No. 4 filter paper, following the method described previously [8]. The residues were further defatted with an additional 50 mL of 70% acetone, as described above, for 3 h. The combined filtrates were concentrated under reduced pressure using a rotary vacuum evaporator (RE 300, Yamato, Tokyo, Japan) at 40 °C, and the remaining water was removed by lyophilization (4KBTxL-75; Virtis Benchtop K, New York, USA). The dry powder obtained was stored in an airtight polythene bag at 0 °C until used.
C. Determination of Proximate Composition
The moisture, crude protein, fat, fibre, and ash content of the samples were determined in triplicate according to the standard methods described elsewhere [9]. The carbohydrate content was determined by difference.
D. Determination of Total Phenol
Total phenol content of the sample was determined using the method described previously [10]. A 50 µL aliquot of the sample extract was put in test tubes and the volume made up to 500 µL with distilled water. Then 250 µL of Folin-Ciocalteu reagent was added to each tube, followed by 1.25 mL of 20% sodium carbonate solution. The tubes were vortexed and then incubated in the dark for 40 minutes. Absorbance was read at 725 nm using a spectrophotometer.
E. Determination of Total Flavonoid
The aluminium chloride colorimetric method was used for flavonoid determination [11]. The sample extract (0.5 mL of 1:10 g/mL) in methanol was mixed with 1.5 mL of methanol, 0.1 mL of 10% aluminium chloride, 0.1 mL of 1 M potassium acetate and 2.8 mL of distilled water. The mixture was kept at room temperature (28 °C) for 30 min, and the absorbance of the reaction mixture was then measured at 415 nm with a double-beam Perkin Elmer UV/Visible spectrophotometer (USA). The calibration curve was prepared with quercetin solutions at concentrations of 12.5 to 100 µg/mL in methanol. The concentration of the sample extract was then extrapolated from the standard curve (absorbance against concentration).
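As an illustration of the extrapolation step, a least-squares calibration line can be fitted to the quercetin standards and inverted; this is a sketch, and the absorbance values shown are invented rather than the study's readings:

```python
import numpy as np

# Quercetin standards: concentration (ug/mL) vs. absorbance at 415 nm
conc = np.array([12.5, 25.0, 50.0, 100.0])
abs_415 = np.array([0.11, 0.21, 0.43, 0.85])     # illustrative values

slope, intercept = np.polyfit(conc, abs_415, 1)  # absorbance = slope*conc + b

def flavonoid_ug_per_ml(sample_abs):
    """Quercetin-equivalent concentration extrapolated from the curve."""
    return (sample_abs - intercept) / slope

print(round(flavonoid_ug_per_ml(0.50), 1))
```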
F. Determination of DPPH Free Radical Scavenging Activity
DPPH scavenging activity was carried out by the method described elsewhere [12]. A 250 µg/mL portion of African yam bean seed extract in methanol was dissolved in DMSO (dimethyl sulfoxide) and pipetted into test tubes in triplicate. Then 5 mL of a 0.1 M ethanol solution of DPPH (1,1-diphenyl-2-picrylhydrazyl) was added to each of the test tubes, which were shaken vigorously and allowed to stand at 35 °C for 20 minutes. The control was prepared without any extract. Methanol was used for baseline correction, and the absorbance (OD) of the samples was measured at 517 nm. Radical scavenging activity was expressed as % scavenging activity and calculated by the following formula:
Radical scavenging activity (%) = [(OD_control − OD_sample) / OD_control] × 100
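In code, this calculation is a one-liner (a sketch; the OD values would be the measured absorbances at 517 nm):

```python
def dpph_scavenging_percent(od_control, od_sample):
    """% DPPH radical scavenging = (OD_control - OD_sample) / OD_control * 100."""
    return (od_control - od_sample) / od_control * 100.0

# e.g. a control OD of 0.80 and a sample OD of 0.41 gives ~48.8% scavenging
print(round(dpph_scavenging_percent(0.80, 0.41), 1))
```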
G. Reducing Power Assay
Reducing power of the sample extract was determined according to the procedure described previously [13]. Aliquots (2.5 mL) of sample extract in phosphate buffer (0.2 M, pH 6.6) were added to 2.5 mL of potassium ferricyanide (10 mg/mL) and the reaction mixture was incubated at 50 °C for 20 min. Trichloroacetic acid (TCA; 2.5 mL of a 100 mg/mL solution) was then added, and the mixture was vortexed and centrifuged at 1000 rpm for 10 min. The resultant supernatant (2.5 mL) was mixed with an equal volume of distilled water, and 0.5 mL of ferric chloride (1 mg/mL solution) was added. Absorbance was measured spectrophotometrically at 700 nm against ascorbic acid as standard; a higher absorbance of the sample indicates greater reducing power.
H. Ferric Reducing/Antioxidant Power (FRAP) Assay
Ferric reducing antioxidant power of the sample extract was determined as described elsewhere [14]. This method is based on the ability of the sample to reduce Fe³⁺ to Fe²⁺. FRAP reagent (900 µL), prepared freshly and incubated at 37 °C, was mixed with 90 µL of distilled water and 30 µL of the methanolic seed extract (or methanol for the reagent blank). The seed extract and reagent blank were incubated at 37 °C for 30 min in a water bath; the final dilution of the test sample in the reaction mixture was 1/34. The FRAP reagent contained 2.5 mL of 20 mmol/L 2,4,6-tripyridyl-s-triazine (TPTZ) solution in 40 mmol/L HCl plus 2.5 mL of 0.3 mol/L acetate buffer (pH 3.6). After incubation for 6 min at room temperature, reduction of TPTZ to the ferrous complex formed a blue colour, which was measured at a wavelength of 593 nm. FeSO4 was used as the standard.
III. STATISTICAL ANALYSIS
Data were subjected to analysis of variance using the Statistical Package for the Social Sciences (SPSS), version 15.0. Results are presented as means ± standard deviations of triplicate experiments. One-way analysis of variance (ANOVA) was used for comparison of the means, and differences between means were considered significant at p<0.05 using the Duncan Multiple Range Test.

IV. RESULTS AND DISCUSSION

The results of the proximate composition of both the germinated and ungerminated AYB are shown in Fig. 1. The crude protein, moisture and crude fibre contents of the germinated seed were significantly higher (p<0.05) than those of the ungerminated seeds, while their fat, ash and carbohydrate contents were lower. The observed increase in the crude protein content of the germinated seed might be attributed to a net synthesis of enzymes (e.g. proteases) by the germinating seed [15]. The observed decrease in total carbohydrate after germination might be due to an increase in α-amylase activity: α-amylase breaks down complex carbohydrates to simpler, more absorbable sugars, which are utilized by the growing seedlings during the early stages of germination. This agrees with a previous report [16] that observed a decrease in carbohydrate content after germination. The observed decrease in fat content of the germinated seeds also corroborates an earlier observation [17] of a decrease in fat after germinating bambara groundnuts. The decrease in fat content of germinated seeds might be due to the increased activities of lipolytic enzymes during germination, which hydrolyze fats to simpler products that can be used as a source of energy for the developing embryo. Thus, the decreased fat content implies an increased shelf-life for the germinated seeds compared to the ungerminated ones.
A. Total Phenol Content
Phenol and other phytochemical found in fruits, vegetables and legumes are bioactive compounds capable of neutralizing free radicals and many play a role in the prevention of certain diseases [18].
Functional foods and nutritional supplements eliminate certain risks and have a preventive effect based on the therapeutic and regulatory effects of nutrients [19]. Phenolic compounds contribute to the overall antioxidant activities of plant foods by acting as free radical terminators. The total phenolic content (TPC) of the extracts of germinated and non-germinated African yam bean (AYB) is shown in Table I. The TPC in germinated AYB is higher (p<0.05) than that of non-germinated AYB. This increase in the amount of phenolic compounds after germination is in accordance with an earlier observation [5] indicating that germination modifies the quantity and quality of phenolic compounds in legumes. Work done previously [20] also agrees with the present work: lupin seeds (Lupinus angustifolius L.) were germinated and a 46% increase in total phenols was observed.
B. Total Flavonoid Content
Research has shown that the intake of foods rich in flavonoids protects humans against diseases associated with oxidative stress. The mechanisms of action of flavonoids are free-radical scavenging or chelating processes and protection against oxidative stress [21]. The flavonoid content of the extracts of germinated AYB (68.31 mg/100 g of dry weight) was higher (P<0.05) than that of non-germinated AYB (59.64 mg/100 g). This can be attributed to the biochemical metabolism of seeds during germination, which may produce secondary plant metabolites such as anthocyanins and flavonoids [22].
C. DPPH Assay
Free radicals are involved in many disorders such as neurodegenerative diseases, cancer and diabetes [18]. Antioxidants, through their scavenging power, are useful for the management of these diseases. Radical scavenging activity using DPPH has been used to survey the antioxidant activity of agricultural produce [23]. The free radical scavenging activity of germinated and non-germinated AYB was tested by measuring their ability to quench the DPPH radical (Table II). The results showed that germinated AYB had a higher DPPH free radical scavenging ability (48.92 µg/mL) than non-germinated AYB (31.33 µg/mL).
The high scavenging property of germinated AYB may be due to the hydroxyl groups in the chemical structure of its phenolic compounds, which can provide the component necessary for radical scavenging. DPPH is a stable free radical with an absorption maximum at 517 nm, and this absorption decreases when it accepts an electron [24]. Therefore, germinated AYB could contain substances that are electron donors, reacting with free radicals to convert them to more stable products and terminate the radical chain reaction.

D. Reducing Power Assay

Table II presents the reducing power of germinated and non-germinated AYB. The results showed that the reducing power of germinated AYB (0.75 ± 0.15 µg/mL) was greater than that of the non-germinated AYB (0.56 ± 1.52 µg/mL). The reducing power assay is used to evaluate the ability of natural antioxidants to donate electrons. It is accepted that the higher the absorbance at 700 nm, the greater the reducing power [25]. Samples with higher reducing power have a better ability to donate electrons to free radicals, forming stable substances and thereby interrupting free radical chain reactions [26]. The results of this assay indicate that germinated AYB, having a higher reducing power than non-germinated AYB, will have a better ability to donate electrons, which is related to its antioxidant activity. This finding agrees with previous work [27] revealing that germination enhances the antioxidant capacity of chickpea. It also agrees with another previous work [5], which studied the effects of varying germination conditions for beans, lentils and peas, at semi-pilot scale, on bioactive compounds and indicated that peas and beans undergo significant increases in antioxidant activity after germination, whereas lentils show a decrease.
E. Ferric Reducing/Antioxidant Power (FRAP) Assay
The FRAP assay measures the antioxidant effect of any substance in the reaction medium; the method is based on the ability of the sample to reduce Fe³⁺ to Fe²⁺ ions. The ferric reducing power of germinated AYB (98.60 ± 0.04 µmol/g) was found to be higher than that of non-germinated AYB (96.11 ± 1.13 µmol/g). The higher antioxidant content of the germinated sample caused its higher reducing power compared to the non-germinated one.
V. CONCLUSION
Based on the findings obtained in this study, the germination process increased the crude protein, moisture and fibre contents, the total phenol and flavonoid contents, and the antioxidant properties of African yam bean.
Porous Carbon Substrate Improving the Sensing Performance of Copper Nanoparticles Toward Glucose
An accurate sensor to rapidly determine the glucose concentration is of significant importance for human health, as diabetes has reached a very high incidence around the world. In this work, copper nanoparticles accommodated in porous carbon substrates (Cu NP@PC), synthesized by calcinating filter papers impregnated with copper ions at high temperature, were designed as the electrode active materials for electrochemical sensing of glucose. During the formation of the porous carbon, the copper nanoparticles spontaneously accommodated into the formed voids and constituted half-covered composites. For electrochemical glucose oxidation, the prepared Cu NP@PC composites exhibit much superior catalytic activity, with a current density of 0.31 mA/cm² at a potential of 0.55 V in the presence of 0.2 mM glucose. Based on this high electrochemical oxidation activity, the present Cu NP@PC composites also exhibit superior glucose sensing performance. The sensitivity is determined to be 84.5 μA/(mmol/L) with a linear range of 0.01-1.1 mM and a low detection limit (LOD) of 2.1 μmol/L. Compared to non-porous carbon-supported copper nanoparticles (Cu NP/C), this improvement can be rationalized by the improved mass transfer and the strengthened synergistic effect between the copper nanoparticles and the porous carbon substrate. Supplementary Information The online version contains supplementary material available at 10.1186/s11671-021-03579-y.
Introduction
In recent years, diabetes has raised great attention worldwide, promoting the rapid and accurate determination of glucose concentration [1]. Various techniques have been developed [2]. With the merits of easy operation, fast response and high sensitivity, electrochemical methods are of particular interest in glucose sensing, and the electrode active materials are of utmost importance for the sensors [3,4]. So far, the reported materials with good glucose response activity include noble metals (gold [4], silver [5], platinum [6], palladium [7]), non-noble metals (copper [8], nickel [9]), metal oxides (zinc oxide [10], manganese oxide [11], nickel oxide [12], iron oxide [13]), and carbon materials (carbon nanotubes [14], carbon nanodots [15], mesoporous carbon [16]), etc. Among these materials, copper-based composites show great potential for constructing an efficient glucose sensing platform, as a result of their low cost [3], good electrical conductivity [17], and controllable specific surface area. Meanwhile, it is reported that the electrochemical performance of copper-based materials is significantly improved by forming composites with carbonaceous substrates such as graphene [18,19], carbon nanofibers [20], carbon nanotubes [21] and mesoporous carbons [22]. For example, Zhang et al. prepared copper nanoparticles on laser-induced graphene composites and successfully developed a flexible enzyme-free amperometric glucose biosensor; benefiting from its simplicity and high sensitivity, the sensor is expected to be used in wearable or implantable biosensors [23]. Using an arc discharge method, composite materials of CuO and single-wall carbon nanotubes were synthesized by Wang's group; the highly conductive network formed by the carbon nanotubes led to high sensitivity and good selectivity in glucose sensing [21]. Because of the good conductivity of copper nanowires and fast electron transfer in two-dimensional reduced graphene oxide (rGO) layers, Ju et al. synthesized a composite of one-dimensional copper nanowires and two-dimensional rGO nanosheets, showing a sensitivity of 1625 µA/(mM·cm²) and a detection limit of 0.2 µM for glucose [3]. Although much performance enhancement of copper-based materials has been achieved, it is still not sufficient for real applications in portable devices, which means it is necessary to search for new templates or partners for copper nanoparticles.
With their special three-dimensional framework structure [24], porous carbons not only possess abundant binding sites to promote the dispersion of metal active centers, but also provide a larger specific surface area that improves the accessibility of electrons and reactive substances [25][26][27]. In recent years, porous carbons have been recognized as a type of promising modification and substrate material that can greatly enhance the electrochemical sensing activity of metal materials. For instance, Li et al. investigated composites of Co7Fe3 alloy nanoparticles embedded in porous carbon nanosheets (Co7Fe3/NPCSs); the results showed a very wide linear range for the detection of glucose (from 0.001 to 14.00 mM), due to the nanoconfinement effect of the porous carbon [28]. Using metal-organic frameworks (MOFs) as self-sacrificial templates to prepare porous carbon materials, the nickel nanoparticles embedded in nanoporous carbon nanorods prepared by Jia et al. presented good glucose sensing properties with fast response times (within 1.6 s) [29]. Song et al. constructed a composite (Cu@C-500) consisting of copper nanoparticles uniformly embedded in a porous carbon bed by using a Cu MOF as the raw material; because of its hierarchical porosity, it exhibited high sensitivity and a low detection limit, and presented great potential for glucose sensor devices [30]. Therefore, with their unique structural and electronic effects, porous carbon materials are anticipated to be excellent partners for further enhancing the electrochemical performance of copper nanomaterials in glucose sensing.
Herein, composites of copper nanoparticles accommodated in porous carbon substrates were designed and synthesized by calcinating cheap filter papers impregnated with copper ions at high temperature. During the synthesis, the formation of the porous carbon and the accommodation of the copper nanoparticles occurred simultaneously, as demonstrated by scanning electron microscopy and transmission electron microscopy. The electrochemical measurements show that the prepared samples (Cu NP@PC) exhibit high electrocatalytic activity for glucose oxidation, with a current density of 0.31 mA/cm² at a potential of 0.55 V in the presence of 0.2 mM glucose, which is much better than that of Cu NP/C. For glucose sensing, the sensitivity is determined to be 84.5 μA/(mmol/L) and the detection limit is calculated to be 2.1 μmol/L, superior to most previously reported materials. Furthermore, the good selectivity of the present materials was demonstrated by anti-interference experiments.
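For orientation, the detection limit of an amperometric sensor is commonly estimated from the baseline noise and the calibration slope as LOD = 3σ/S. The sketch below is an assumption about the procedure (the paper does not give its exact formula), with a blank noise chosen so that the reported numbers are mutually consistent:

```python
import numpy as np

def detection_limit_mM(blank_currents_uA, sensitivity_uA_per_mM):
    """LOD = 3 * sigma(blank) / sensitivity (the common S/N = 3 criterion)."""
    sigma = np.std(blank_currents_uA, ddof=1)
    return 3.0 * sigma / sensitivity_uA_per_mM

# With the reported sensitivity of 84.5 uA/mM, a blank standard deviation of
# ~0.059 uA would reproduce the stated LOD of ~2.1 umol/L (0.0021 mM).
print(3.0 * 0.059 / 84.5)   # ~0.0021 mM
```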
Instruments
X-ray diffraction (XRD) spectra were obtained on an X'Pert PRO MPD multi-purpose powder X-ray diffractometer. Fourier transform infrared (FT-IR) spectra in the range of 1000-4000 cm⁻¹ were recorded on an IS50 FT-IR spectrometer. Raman spectra were measured in an inVia Qontor (Renishaw, UK) system at a wavelength of 532 nm. X-ray photoelectron spectroscopy (XPS) measurements were performed on a Thermo ESCALAB 250XI spectrometer running at 120 W. The morphologies of the samples were characterized on a Hitachi S4800 scanning electron microscope (SEM) with a working accelerating voltage of 20 kV. The transmission electron microscopy (TEM) images were collected on a Tecnai G2 F20. The Brunauer-Emmett-Teller (BET) measurements were performed on a specific-surface-area physical adsorption apparatus (ASAP2020M).
Synthesis of Cu NP@PC and Cu NP/C
Typically, the synthesis of Cu NP@PC was completed by a two-step high-temperature pyrolysis. First, commercial filter papers were pre-treated at 250 °C for 1 h in a tube furnace under a nitrogen atmosphere. Next, a piece of treated pale yellow filter paper with a size of 10 mm × 50 mm was soaked in a blue transparent copper nitrate solution with a concentration of 0.1 M, and was taken out after 10 min. After drying at room temperature, the filter paper was put into a clean porcelain boat and successively treated at 180 °C, 240 °C, and 900 °C for 2 h, 2 h, and 1 h, respectively, in a tubular furnace under nitrogen protection. Finally, the Cu NP@PC product was collected when the system had cooled to room temperature, and was ground before the electrochemical tests. For the control samples, the synthesis of Cu NP/C and pure carbon was carried out through the same procedure, except that the concentration of copper nitrate was 0.2 M and 0 M, respectively.
Electrochemical Measurements
In this work, all electrochemical tests were performed on a CHI 760E electrochemical workstation with a standard three-electrode system at room temperature. Before the experiments, several pieces of carbon paper (5 mm × 5 mm), used as current collectors, were rinsed with water and ethanol and dried overnight at 60 °C. For the preparation of the catalyst ink, 10 mg of sample (Cu NP@PC, Cu NP/C or pure carbon powder) was mixed with ethanol, water, and Nafion (5%) solution in a proportion of 10:10:1 to form a uniform dispersion. Then, 40 μL of the catalyst ink was dropped onto a clean carbon paper at a loading of 1.6 mg/cm², which was used as the working electrode. An Ag/AgCl (saturated KCl) electrode and a graphite rod were used as the reference electrode and counter electrode, respectively. For the electrochemical experiments, cyclic voltammetry and linear sweep voltammetry were adopted to qualitatively examine the potential performance of the prepared materials for glucose oxidation, and chronoamperometry was used to quantitatively evaluate the sensing performance. Throughout, 0.1 M KOH solution was used as the electrolyte.
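As a quick back-of-envelope consistency check of the stated loading (a sketch; the total ink volume is inferred here, not given in the text):

```python
# 40 uL of ink on a 5 mm x 5 mm carbon paper gives 1.6 mg/cm^2, so each drop
# carries 1.6 * 0.25 = 0.4 mg of catalyst; with 10 mg of sample in the ink,
# the implied total ink volume is 10 / (0.4 / 40) = 1000 uL (~1 mL).
sample_mg, drop_uL, area_cm2, loading_mg_cm2 = 10.0, 40.0, 0.5 * 0.5, 1.6
ink_uL = sample_mg * drop_uL / (loading_mg_cm2 * area_cm2)
print(ink_uL)   # 1000.0
```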
Results and Discussion
As shown in Fig. 1a, for the synthesis of the target materials, the preheating treatment removed unstable impurities and moisture from the filter paper, with the color changing to light yellow. Then, to support the metal nanoparticles, the treated filter papers were infiltrated with the copper ion solution. During the high-temperature calcination in a tubular furnace, copper atoms and tiny crystallites formed. Because the nucleation and growth rate of the copper nanoparticles is lower than the pyrolysis rate of the carbon, these initial copper microcrystals can catalyze the decomposition and evaporation of carbon, leading to the formation of holes [31]. Finally, the brown-black Cu NP@PC samples were obtained. Note that an excessive concentration of copper ions increases the nucleation rate, causing the formation of non-porous carbon materials. To identify the components of the prepared samples, X-ray diffraction (XRD) patterns were collected, as shown in Fig. 1b. Both Cu NP@PC and Cu NP/C present the diffraction peaks of copper and carbon. The three sharp characteristic peaks located at diffraction angles of 43.2°, 50.3°, and 73.9° can be attributed to the (111), (200) and (220) lattice planes of the copper nanoparticles (PDF#04-0836) [32,33]. The broad peak centered around 25° corresponds to the (002) crystalline face of graphitized carbon, which promotes electron transport in the subsequent electrochemical reactions [3,25,34]. To analyze the specific composition of the carbon, the Raman spectra of Cu NP@PC and Cu NP/C were collected. As shown in Fig. 1c, the D band and G band can be unambiguously identified by the peaks around 1350 cm⁻¹ and 1600 cm⁻¹, respectively [35]. As reported, the G band is caused by the relative motion of sp² carbon atoms, while the D band is connected with the breathing mode of carbon rings [36]. Herein, the calculated D/G band ratio of Cu NP@PC was 0.899, the same as the value from Cu NP/C. Therefore, the distributions of amorphous carbon and nanocrystalline graphite are similar in the two samples, indicating almost identical compositions, i.e., both Cu NP@PC and Cu NP/C consist of copper nanoparticles and carbon frameworks. To further reveal the microstructure information, the FTIR spectra of Cu NP@PC and Cu NP/C were investigated. As presented in Fig. 1d, signals located at 1734 cm⁻¹ and 1628 cm⁻¹ appear in Cu NP@PC, which can be attributed to the stretching vibration of C=O [39] and the stretching vibration of C-O [40], respectively. Compared to Cu NP/C, the band at 2363 cm⁻¹ in Cu NP@PC is attributed to carbon dioxide in the air. A slight absorption band observed at 3466 cm⁻¹ in both Cu NP@PC and Cu NP/C can be assigned to the O-H stretching vibration of water molecules [37].
To observe the morphologies and structures of the prepared materials, scanning electron microscopy (SEM) was conducted. For the Cu NP@PC sample, the SEM image in Fig. 2a shows abundant holes randomly distributed over the surface of the carbon layer, with the copper nanoparticles residing in these holes. Figure 2b shows that almost all copper nanoparticles sit half inside and half outside the holes. As reported, electrochemical reactions usually involve both electron and mass transport: the half inside the hole is conducive to electron transfer with the carbon substrate, while the half outside can act as active sites interacting with the analyte, ultimately improving the efficiency of the electrochemical reaction. In Fig. 2c, no porous carbon is found in the Cu NP/C sample, and all the copper nanoparticles are supported on the carbon surface; some agglomeration is even visible in Fig. 2d. In addition, based on measurements of one hundred metal nanoparticles, the mean copper nanoparticle sizes of the two samples were 0.406 and 0.398 μm, respectively. Thus, the copper nanoparticles grown under the two different copper ion concentrations differ little in size, indicating that increasing the copper ion concentration only controls the morphology of the carbon. Moreover, the TEM image in Fig. 2e shows that the enlarged copper nanoparticles have a size similar to that of the holes and are partially encapsulated in them, again indicating the successful formation of the target composites. To further reveal the porous properties of the prepared materials, the nitrogen adsorption isotherms of Cu NP@PC and Cu NP/C were studied. As shown in Fig. 2f, the calculated BET surface area of the Cu NP@PC nanomaterial was 309.95 m²/g, much higher than that of Cu NP/C. This is consistent with the SEM and TEM results.
To investigate the electronic structure of the samples, X-ray photoelectron spectroscopy (XPS) was carried out. Figure 3a and b display the full XPS survey spectra of Cu NP@PC and Cu NP/C, respectively, which show the presence of Cu, C and O. For the Cu element, Fig. 3c presents the deconvoluted Cu 2p XPS spectra of Cu NP@PC and Cu NP/C. Both signals appear at the same peak positions, hinting at the same composition of the two samples. Two obvious peaks at 932.8 eV and 952.5 eV are attributed to the Cu 2p 3/2 and Cu 2p 1/2 of metallic Cu(0) [38]. The binding energies at 934.8 eV and 953.7 eV are assigned to the Cu 2p 3/2 and Cu 2p 1/2 of Cu(II) [39-41]. The presence of Cu(II) is also confirmed by the weak satellite peaks at 944.2 eV and 941.4 eV [10]. From the fitted peaks corresponding to Cu(0) and Cu(II), the Cu(0)/Cu(II) ratios in Cu NP@PC and Cu NP/C are estimated to be 2.2 and 1.8, respectively. This can be explained by the fact that the surface copper atoms in Cu NP@PC are less easily oxidized because of the encapsulating porous carbon layer; meanwhile, the larger fraction of metallic copper atoms may play an important role in glucose sensing. For the C 1s spectra of the two samples in Fig. 3d, three signals at 289 eV, 286 eV and 284.8 eV correspond to C=O, C-O and C-C/C-H, respectively, indicating the existence of oxygen-containing functional groups such as carboxyl groups [42,43], consistent with the FTIR results.
Given the advantages of porous carbon, the electrochemical sensing properties of Cu NP@PC and Cu NP/C toward glucose were investigated in 0.1 M KOH solution, with the pure carbon material without copper nanoparticles as the reference sample. As shown in Fig. 4a, the cyclic voltammetry (CV) curves show the largest current response for Cu NP@PC in the presence of 0.2 mM glucose in the electrolyte, compared to Cu NP/C and the pure carbon sample. Specifically, a current density of 0.31 mA/cm² was obtained at a potential of 0.55 V. This indicates that the prepared Cu NP@PC is the best catalyst for glucose oxidation, which can be attributed to its porous structure. As reported, porosity promotes mass transport [29]. To demonstrate the enhanced mass transport, the effect of scan rate on glucose oxidation was investigated on the Cu NP@PC-modified electrode. As shown in Fig. 4b, the current density increases in a gradient as the scan rate changes from 20 through 40 and 60 to 80 mV/s. Figure 4c shows the fitted curve between the current density (J p) and the square root of the scan rate (v 1/2). The linear relationship can be expressed as J p = 0.00254 v 1/2 − 0.00359 (correlation coefficient R² = 0.995), indicating a diffusion-controlled glucose oxidation process on the Cu NP@PC-modified electrode [44]. Furthermore, in Fig. 4d, the electrochemical impedance spectra (EIS) show that the charge transfer resistance of Cu NP@PC is lower than that of Cu NP/C. Therefore, combining the promoted mass transport and the enhanced electron transfer, the catalytic oxidation of glucose on the Cu NP@PC-modified electrode can be sketched as in Fig. 4e. Cu(II) is first oxidized to Cu(III), which subsequently accepts an electron and is reduced back to Cu(II); in this process, the glucose molecule donates an electron and is oxidized to gluconolactone. Benefiting from the material's porosity, the formed gluconolactone can be rapidly transferred into the solution and then hydrolyzed to gluconic acid [3,45].
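The diffusion-control test above amounts to a linear regression of the peak current density on the square root of the scan rate. The following Python sketch illustrates the computation with invented current densities chosen to be consistent with the reported fit; the actual data are in Fig. 4c:

```python
import numpy as np

# Illustrative values only: scan rates used in the paper (mV/s) and
# hypothetical peak current densities (mA/cm^2) read off the CV curves.
scan_rates = np.array([20.0, 40.0, 60.0, 80.0])
j_peak = np.array([0.0078, 0.0125, 0.0161, 0.0192])

x = np.sqrt(scan_rates)                    # v^(1/2), per the diffusion-control test
slope, intercept = np.polyfit(x, j_peak, 1)

# Coefficient of determination for the linear fit
j_fit = slope * x + intercept
ss_res = np.sum((j_peak - j_fit) ** 2)
ss_tot = np.sum((j_peak - j_peak.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"J_p = {slope:.5f} * v^1/2 + {intercept:.5f},  R^2 = {r2:.3f}")
# A near-unity R^2 for J_p vs v^(1/2) is taken as evidence of a
# diffusion-controlled oxidation process.
```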
Given this superior electrochemical catalytic oxidation performance, the potential sensing performance of Cu NP@PC toward glucose was then examined.
To qualitatively study the current response of Cu NP@PC toward glucose concentration, cyclic voltammetry was carried out at concentrations of 2, 4, 6, 8 and 10 mM. As shown in Fig. 5a, the current density of the Cu NP@PC-modified electrode gradually increases with increasing glucose concentration, hinting at excellent sensing performance. To quantitatively reveal the glucose sensing properties of Cu NP@PC, chronoamperometry (I-t) was performed at a chosen potential of 0.55 V. As shown in Fig. 5b, the current density of the Cu NP@PC-modified electrode increases stepwise as the glucose concentration increases from 0.01 to 1.1 mM. From the I-t curves, the calibration curve between glucose concentration and response current can be fitted linearly (Fig. 5d), and the sensitivity was determined to be 84.5 μA (mmol/L)−1. According to the formula LOD = 3σ/q [46] (where σ is the standard deviation of the blank response and q is the slope of the linear regression curve), the detection limit was calculated to be 2.1 μmol/L. These two indexes are much better than those of most previous reports, as shown in Fig. 6b [47-52]. For comparison, the current density of the I-t curve of the Cu NP/C-modified electrode also changes in a gradient with increasing glucose concentration, as shown in Fig. 5c; however, the magnitude of the change is significantly reduced. As shown in Fig. 5d, the fitted linear calibration curve is y = 0.007 x + 0.0017 (correlation coefficient R² = 0.998), with a sensitivity of 1.75 μA (mmol/L)−1 and an estimated detection limit of 10 μmol/L. Therefore, compared to Cu NP/C, the sensing performance of the Cu NP@PC sample is clearly improved by the porous carbon substrate.
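The sensitivity and detection limit follow from a linear fit of the calibration data and the LOD = 3σ/q formula. The sketch below uses synthetic currents and an illustrative blank standard deviation (chosen so the numbers land near the reported values); the real data are in Fig. 5d:

```python
import numpy as np

# Hypothetical I-t calibration data: glucose concentrations (mM) and
# steady-state current responses (uA); the measured values are in Fig. 5d.
conc = np.array([0.01, 0.05, 0.1, 0.3, 0.5, 0.7, 0.9, 1.1])
current = 84.5 * conc + np.random.default_rng(0).normal(0, 0.5, conc.size)

slope, intercept = np.polyfit(conc, current, 1)  # slope q = sensitivity (uA/mM)

# Standard deviation of the blank response (sigma), e.g. from repeated
# measurements of the electrolyte without glucose. Illustrative value.
sigma_blank = 0.06  # uA

lod = 3 * sigma_blank / slope  # LOD = 3*sigma/q, in mM
print(f"sensitivity: {slope:.1f} uA/mM, LOD: {lod * 1000:.1f} umol/L")
```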
As is well known, anti-interference ability is another key factor in evaluating a material's sensing performance. To investigate the selectivity of the Cu NP@PC-modified electrode toward glucose, several interfering substances, including ammonium acetate (NH4OAc), sodium chloride (NaCl), urea (UA), and citric acid (CA) at a concentration of 0.01 mM, were injected successively into the electrolyte [53]. The current density changes caused by the interfering substances were negligible; only when 0.01 mM glucose was injected did the current density increase significantly, regardless of the above interferences, as shown in Fig. 6a. Moreover, using urine as the substrate, the proposed system could still achieve sensitive detection of glucose, comparable to a commercial test strip (Additional file 1: Figures S3 and S4). Therefore, the Cu NP@PC material possesses excellent electrochemical catalytic oxidation and sensing ability toward glucose.
Conclusion
A composite consisting of copper nanoparticles and a porous carbon substrate was designed and synthesized by calcinating commercial filter papers impregnated with copper ions. Owing to its porosity, the prepared Cu NP@PC showed an excellent ability for electrochemical glucose oxidation and sensing. The sensitivity was determined to be 84.5 μA mM−1 and the limit of detection was calculated to be 2.1 μM, much superior to most previous reports. Furthermore, the Cu NP@PC-modified electrode also exhibited good selectivity for glucose. Therefore, the composite prepared in this work provides not only a new candidate for constructing portable glucose sensors, but also a new route for the preparation of porous carbon materials.
Effects of Implementing Personalized Health Education in Ambulatory Care on Cardiovascular Risk Factors, Compliance and Satisfaction with Treatment
Aim and Methods: Data from the CARDIOPLUS study (a prospective, multicenter, non-interventional study conducted among patients and physicians from ambulatory patient care in Poland) were used to assess whether primary care behavioral counseling interventions to improve diet, increase physical activity, stop smoking and reduce alcohol consumption improve outcomes associated with cardiovascular (CVD) risk factors, metabolic parameters, compliance and satisfaction with treatment in adults. The study was carried out throughout Poland in the period from July to December 2019. Results: The study included 8667 patients (49% women and 51% men; mean age 63 ± 11 years) and 862 physician-researchers. At the 3-month follow-up, there was a significant reduction in body weight (p = 0.008); a reduction in peripheral arterial pressure, both systolic (p < 0.001) and diastolic (p < 0.001); and reductions in total cholesterol (p < 0.001), triglycerides (p < 0.001), and LDL cholesterol (p < 0.001). The percentage of respondents who fully complied with the doctor's recommendations increased significantly, and the respondents rated their own satisfaction with the implemented treatment as higher (by about 20%). Conclusions: As a result of pro-health education in the field of lifestyle modifications, a significant reduction of risk factors for cardiovascular diseases, as well as improved compliance and satisfaction with pharmacological treatment, was observed. Thus, appropriate personalized advice on lifestyle habits should be given to each examinee in a positive, systematic way during periodic health check-ups in order to reduce the person's risk and improve the effectiveness of the treatment.
Introduction
Cardiovascular diseases (CVD) are responsible for more than 4 million deaths each year across Europe [1]. The number of deaths from CVD is higher in women (2.2 million) than in men (1.8 million), with CVD accounting for 49% of all deaths in women and 40% of all deaths in men [1]. These data emphasize the need for prophylaxis both in the general population and in individual patients. Studies show that the implementation of preventive actions can prevent more than 80% of CVD cases. Education and prevention should be tailored to the needs and capabilities of the particular patient, and the physician plays a key role in this process. As cardiovascular risk increases, lifestyle modification and counseling should be intensified, and pharmacotherapy should be initiated, especially in very high-risk patients [2].
Studies show that lifestyle-change programs that include nutritional education and physical exercise can be effective in achieving the proposed treatment goals for metabolic syndrome (MetS) [3,4]. In the study of Saboya P et al., the authors examined whether behavioral counseling improves the metabolic condition of 72 patients with MetS aged 30-59 years. Lifestyle-modifying interventions resulted in a significant reduction in body mass index, waist circumference, and systolic blood pressure at 3 months, and an improvement in QOL [5].
The most recent guidelines of the American College of Cardiology (ACC)/American Heart Association (AHA) [6] and the European Society of Cardiology [2] recommend the following non-pharmacological interventions for the control of diseases such as hypertension, diabetes, and lipid disturbances: restricted intake of alcohol, weight loss, intensification of physical activity with a structured exercise program, use of the Dietary Approaches to Stop Hypertension (DASH) diet, high intake of fruit and vegetables [6,7].
Positive health-promoting behavior, including lifestyle factors (healthy diet, smoking cessation, regular exercise, weight control), should be strongly advised [2].
Our focus on counseling interventions that took place in, or were considered feasible for, primary care among adults without risk factors for CVD or known CVD is relatively narrow. Thus, the aim of this study was to test whether a simple program of education and motivation about lifestyle change in primary care is effective in reducing cardiovascular risk factors, including metabolic parameters, and in improving compliance and satisfaction with treatment in the population of Poland.
Data from the CARDIOPLUS study were used to assess whether primary care behavioral counseling interventions to improve diet, increase physical activity, stop smoking, and reduce alcohol consumption improve outcomes associated with CVD risk factors, metabolic parameters, compliance, and satisfaction with treatment in adults.
CARDIOPLUS was a prospective, multicenter, non-interventional study conducted among patients and physicians from ambulatory patient care in Poland, assessing tolerability and satisfaction with treatment with acetylsalicylic acid (ASA, 100 mg). The study was carried out throughout Poland in the period from July to December 2019. Aspirin treatment was not initiated as part of the study; the included patients were on acetylsalicylic acid regardless of study participation.
Inclusion criteria were defined as:
• Age ≥ 18 years,
• Psychophysical state of health, which promises compliance with medical recommendations,
• Treatment for up to 4 weeks with acetylsalicylic acid at a dose of 100 mg.
Exclusion criteria were as follows:
• Chronic mental illnesses,
• Alcohol and/or drug addiction,
• Contraindications to the use of drugs containing acetylsalicylic acid.
The study began in May 2019, when the materials were distributed to the physicians and the recruitment process started; the second visit took place between September and October 2019, and completion of the patients' data was in October 2019.
Each of the doctor-researchers participating in the study conducted it with at least 10 patients who met all inclusion criteria for the study.
In order for the study to be considered completed by a given doctor-investigator, it was required to conduct the study in its entirety, i.e., with the entire group of patients enrolled in it and within the prescribed period.
The tool used to carry out the study was a two-visit questionnaire. Study data were collected through a standardized list of questions ("questionnaires") completed by a physician based on patient information, personal observations, physical examination, and lipid profile results. The study involved both doctors and patients from all over the country. It consisted of two parts: an interview questionnaire completed during the patient's first visit to the doctor's office, and a second questionnaire completed during the next visit (approximately 12 weeks (±2 weeks) after the first visit, depending on the physician's decision).
The "first visit" interview questionnaire consisted of questions specifying the patient's health condition at the time of inclusion in the study and questions regarding the therapy: tolerance, satisfaction with therapy, compliance, and lipid profile results. In addition, during the first visit, the researcher provided non-pharmacological recommendations to the patient. The "second visit" interview questionnaire consisted of questions specifying the patient's health condition during the study and questions about the therapy: satisfaction with therapy, therapy tolerance, compliance, the degree of patient implementation of the non-pharmacological recommendations on CVD risk factors, and control lipid profile results. At the second visit, we assessed compliance with the non-pharmacological recommendations provided to patients by the researcher during the first visit.
Ambulatory care physicians conducted individual personalized consultations during the first and second visits, focusing on the prevention of cardiovascular diseases according to the guidelines, including increasing physical activity (to at least the recommended level of 150 min/week), diet modification, and weight reduction, as well as limiting alcohol consumption and smoking cessation [2,8-10]. The patients were also given educational leaflets covering health issues to read at home, concerning a hypolipemic diet with menu suggestions, appropriate physical activity, and methods of reducing cardiovascular risk factors.
The diet program was based on the healthy diet model, which included the consumption of 30-45 g of fiber, 200 g of fruit, and 200 g of vegetables per day. Patients were advised to consume 2-3 servings of fruit and vegetables per day and fish at least twice a week [8,11].
The general practitioners (GPs) advised patients to avoid alcohol; those who drank regularly were advised to consume no more than 20 g per day for men and 10 g per day for women.
Patients were also asked to rate their satisfaction with the treatment on a 10-point scale, where 1 meant a very low grade and 10 very high. A score of 8-10 meant high to very high satisfaction with the treatment. As part of the questionnaire, GPs determined the level of achievement of the goals set for the currently used pharmacotherapy by selecting one of the three responses: full implementation, partial implementation, or no implementation.
All patients gave informed consent prior to the procedure. The study protocol complies with the Declaration of Helsinki and was approved by the Bioethics Committee of the Polish Mother's Memorial Hospital Research Institute in Lodz (opinion number 56/2019 (RNN///KE)).
Statistical Analysis
The statistical analyses were performed using Statistica software (v13.1, Statsoft, Poland). Continuous data were analyzed by the two-tailed Student t-test between two different groups, or the t-test for dependent groups (first versus second visit). Categorical data were analyzed by the chi-square test. All results were considered significant for p < 0.05.
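For illustration, the two test types used here can be reproduced with scipy on invented data (the per-patient measurements are not published); this is a sketch of the analysis approach, not the authors' script:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical paired measurements: systolic blood pressure (mmHg) for the
# same patients at visit 1 and visit 2 (the study compared dependent groups).
visit1 = rng.normal(138, 14, 200)
visit2 = visit1 - rng.normal(6, 8, 200)   # assumed average 6 mmHg reduction

t_stat, p_paired = stats.ttest_rel(visit1, visit2)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.4g}")

# Categorical outcomes (e.g., full/partial/no compliance at the two visits)
# were compared with a chi-square test; the counts below are invented.
table = np.array([[90, 80, 30],    # visit 1: full, partial, none
                  [140, 50, 10]])  # visit 2
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi-square: chi2 = {chi2:.2f}, dof = {dof}, p = {p_chi:.4g}")
```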
Results
City residents constituted 84.7% of the total study population (Figure 1). Among them, the highest percentage lived in cities with >100,000-500,000 inhabitants (28.6%). Almost half of the patients were retirees/pensioners (46.2%) (Figure 2). The mean body mass index (BMI) was 29 ± 4.7 kg/m². The detailed characteristics of age and BMI in the analyzed population are presented in Table 1. The age groups and BMI of the respondents according to gender are presented in Figure 3.
Arterial hypertension was diagnosed in 65.70% of the study participants over the years 1955-2019, including half of the cases in the past 5 years. Mean systolic blood pressure in the whole study population was 138 ± 14 mmHg, and diastolic blood pressure was 84 ± 9 mmHg. Regarding other comorbidities and disturbances, 8.5% had a history of stroke, 14.6% of heart attack, and 12.3% of heart failure. Being overweight was the most common condition in the studied population (46.6%) (Figure 4). About one-third of patients had osteoarthritis, diabetes, or obesity (36.7%, 34.8%, and 29%, respectively) (Figure 5). Twenty-seven percent of patients reported current smoking at the first visit, and 25% used to smoke in the past. The most common drugs used in the study were statins, beta-blockers, ACE inhibitors, and diuretics, without significant differences between the first visit and the follow-up visit (Figure 6). At the follow-up visit, there was a significant reduction in body weight (55.06% versus 45%; p = 0.008) (Figure 7).
There was also an observed reduction in peripheral arterial pressure, both systolic (138 ± 14 versus 132 ± 11 mmHg; p < 0.001) and diastolic (84 ± 10 versus 80 ± 8 mmHg; p < 0.001) (Figure 8). At the follow-up visit, the concentrations of total cholesterol (196 ± 38 versus 209 ± 44 mg/dL; p < 0.001), low-density lipoprotein cholesterol (LDL-C) (111 ± 33 versus 121 ± 39 mg/dL; p < 0.001) and triglycerides (141 ± 49 versus 155 ± 69 mg/dL; p < 0.001) had also decreased (Table 2). There was no significant difference in high-density lipoprotein cholesterol (HDL-C) after follow-up. The parameters of the lipid profile at baseline and 3 months after lifestyle counseling are presented in Table 3. The percentage of respondents who reported a high to very high level of satisfaction with the treatment provided was 62.8% (score of 8-10) (Figures 9 and 10). The percentage of respondents who fully complied with the doctor's recommendations increased significantly, and the respondents assessed their own satisfaction with the implemented treatment as higher (by about 20%).
[Figure 9/10 caption: Comparison of patient satisfaction with medical care between the first visit (blue) and second visit (orange). Graph (A) shows the percentage of satisfied patients per subjective satisfaction level (1 = lowest, 10 = highest); graph (B) shows the percentage of satisfied patients according to the physician's assessment, calculated separately for the full, partial, and low implementation subgroups.]
Factors Influencing Medical Adherence to Lifestyle Modification
In general, the inhabitants of small towns (50-100,000 inhabitants) followed the recommendations best (over 85% vs. 80% in villages and larger towns; p = 0.01), whereas adherence was worst in patients aged 70-90 years (75% vs. 85% in younger age groups; p = 0.00001) and in old-age and retired patients (79% vs. 85% among the employed; p = 0.0002). Gender and type of work performed had no statistical significance.
Physical Activity
Regarding the recommended physical activity, the best adherence was observed in patients with normal body weight or overweight, but not in obese patients (83% vs. 79%, p = 0.0006). Higher values of TCh and LDL cholesterol motivated patients to increase physical activity (patients with TCh > 190: almost 83% vs. 80% in patients with TCh < 190, p = 0.019; patients with LDL > 100: over 83% vs. 80% in patients with LDL < 100, p = 0.006).
Body Weight-Diet Reduction
Patients over 50 years of age, and especially those over 70, presented poor compliance with dietary recommendations (67.7% of patients aged over 70 years vs. 79% of patients under 49 years of age, p = 0.007). Gender, smoking, arterial hypertension, and lipid profile values had no effect on dietary adherence. The best adherence to the dietary recommendations was observed among inhabitants of cities of 50-100,000, and the worst among residents of cities with over 100,000 inhabitants and rural residents (80% of urban residents of cities of 50-100,000 vs. 72% of urban residents of cities of 100-500,000 vs. 76% of rural residents, p = 0.008). People with obesity and overweight willingly followed the recommendations on diet modification, as opposed to people with normal body weight (overweight people 77% vs. people with normal body weight 72%, p = 0.022).
Limiting Alcohol Consumption
People over 40 reduced their alcohol consumption to a lesser extent after the intervention of a primary care physician than younger people (people over 40 years of age: below 89% vs. people 20-39 years of age: above 92%, p = 0.018). Women and people with arterial hypertension more often followed this recommendation (women vs. men: 89% vs. 84%, p = 0.003; people with hypertension vs. without hypertension: 87% vs. 83%, p = 0.023). Place of residence, lipid profile values, and body weight had no impact on the extent of alcohol consumption. Physically working people reduced their alcohol consumption less frequently (manual workers vs. other types of professional activity: 80% vs. more than 84%, p = 0.0001).
Smoking
People under 50 and over 80 years of age had the greatest difficulty in quitting smoking (<36% and 31%, respectively, p = 0.008). Obese and hypertensive people were more likely to quit or reduce smoking (obese people vs. normal body weight 49% vs. 39%, p < 0.0001; people with hypertension vs. without 44% vs. 38%, p = 0.009). The occupation and the lipid profile values had no effect.
Discussion
Our study, based on 8667 patients and 862 physician-researchers, revealed that even 3 months of personalized behavioral interventions resulted in benefits across a variety of important intermediate health outcomes, with a significant reduction in metabolic risk factors such as body weight, total cholesterol, triglycerides, and LDL cholesterol, as well as lowering of peripheral arterial pressure, both systolic and diastolic. The percentage of respondents who fully complied with the doctor's recommendations and were satisfied with the implemented treatment increased significantly after the behavioral interventions. In general, the inhabitants of small towns (50-100,000) and younger persons followed the recommendations best; gender and type of work performed had no statistical significance. The best adherence to the instructions for physical activity was observed in patients with normal body weight or overweight, but not in obese patients. Higher values of TCh and LDL cholesterol motivated patients to increase physical activity. People with obesity and overweight willingly followed the recommendations on diet modification, as opposed to people with normal body weight. People under 50 and over 80 years of age had the greatest difficulty in quitting smoking. Physically working people reduced their alcohol consumption significantly less frequently. In our population, patients over 50 years of age, and especially those over 70, presented poor compliance with dietary recommendations, and the worst adherence to lifestyle modification was seen in patients aged 70-90 years compared to younger patients. People over 40 reduced their alcohol consumption to a lesser extent after the intervention of a primary care physician than younger people. Perhaps one reason is that the recollection of risk factor information decreases with age: patients aged ≥ 65 years were fifty percent less likely to recollect risk factor information compared to younger patients [12].
Obese and hypertensive people were more likely to quit or reduce smoking, and hypertensive women more often reduced alcohol consumption. Lifestyle intervention programs comprise the first-choice therapy to reduce cardiovascular risk factors and therefore the risk of heart disease, one of the main public health problems nowadays [13]. The twelve-week clinical trial of Piovesan et al. included 125 adults who presented at least three of the criteria defined by the revised NCEP ATP III (National Cholesterol Education Program Adult Treatment Panel III) for metabolic syndrome (MetS) [13]. The Group Intervention and Individual Intervention patients presented a significant decrease in body mass index, abdominal circumference, and diastolic and systolic arterial pressure after the intervention, and the number of diagnostic criteria for MetS decreased significantly. In this study, similarly to ours, the non-pharmacological strategies for changing lifestyle led to a reduction of cardiovascular risk factors [14].
In a randomized controlled trial, Saboya et al. included 72 individuals with MetS aged 30-59 years and randomized them into three groups of multidisciplinary intervention (standard, group, and individual) for 12 weeks [5]. The primary outcome was a change in the metabolic parameters and, secondarily, the improvement in quality of life (QOL) measures. Group and individual interventions resulted in a significant reduction in waist circumference, body mass index, and systolic blood pressure at 3 months, and an improvement of QOL, although the latter was significantly associated only with the physical functioning domain. However, in contrast to our study, the authors did not observe statistically significant effects on triglycerides and diastolic blood pressure across all interventions. In the study of Eriksson et al., 123 subjects from a lifestyle intervention group obtained a significant increase in maximal oxygen uptake, better physical activity, improvement in quality of life, and a significant decrease in body weight, waist and hip circumference, body mass index, waist-hip ratio, systolic and diastolic blood pressure, triglycerides, and glycosylated hemoglobin [15]. Lin et al. performed a systematic review of randomized controlled trials published from January 1985 to June 2014 to evaluate the effectiveness of lifestyle-modification programs on metabolic risks [16]. Among the five trials included, the most commonly applied interventions were diet plans, supervised exercise, health education, individual counseling, behavioral modification, and motivational interviewing. Three-fifths of the studies were nurse-led, and only one of the selected trials was theory-guided. The lifestyle-modification programs effectively reduced triglyceride levels, waist circumference, and systolic blood pressure. However, few trials consistently confirmed the benefits on metabolic risks, and none revealed a significant effect on high-density lipoprotein or fasting blood glucose. The duration of the lifestyle modification programs in the included trials ranged from 4 to 24 weeks, and durations of at least 12 weeks significantly improved quality of life [16]. In order to develop an effective counseling system for the prevention of cardiovascular diseases, the association of a favorably changed lifestyle with improved risk factors was examined in 7321 office workers aged 30-69 years from in and around Nagoya city; those who began to eat breakfast and increased their vegetable intake normalized their previously abnormal diastolic blood pressure with more than twice the likelihood [17]. Patnode et al. conducted a systematic review to support the U.S. Preventive Services Task Force (USPSTF) in updating its 2012 recommendation on behavioral counseling to promote a healthful diet and physical activity for the primary prevention of CVD in adults without known CVD risk factors [17]. The authors included 88 trials reported in 145 publications and, similarly to our study, found evidence of statistically significant improvements in systolic blood pressure. In accordance with previous reports, our study demonstrated that lifestyle intervention produced beneficial effects on cardiovascular risk factors, especially on weight loss, blood pressure, and lipid profile.
A prevention program in primary healthcare with a focus on physical activity, quitting smoking, and diet counseling can favorably influence several risk factors for cardiovascular diseases as well as compliance and satisfaction with treatment. In our study, the implementation of lifestyle counseling also significantly improved compliance and satisfaction with treatment. It is inconclusive whether this improvement is related to weight loss, blood pressure normalization, quitting smoking, a healthy diet, improvement in physical condition [18], or a combination of these [19]. Maintaining a healthy lifestyle in the secondary prevention of coronary heart disease is thought to be as important as pharmacotherapy, serving as an independent factor in reducing cardiovascular morbidity and mortality [20]. Findings have suggested that adhering to a combination of healthy behaviors (non-smoking, moderate alcohol intake, physical activity, and fruit and vegetable consumption) is associated with a lower risk of CVD mortality [21].
Limitations of the Study
Our review was limited to interventions that were conducted in primary care, or those that we felt may be feasible for primary care. Our study represented an unselected population of patients treated in primary care because of CVD. The lack of randomization is a study limitation; however, using real-world data allows us to understand how individuals who would be excluded from randomized controlled trials react to interventions, which provides broader insight into the studied population. Another limiting factor concerns the relatively short follow-up period of 12 weeks. Although this is the period normally used in other trials, the improvement of metabolic parameters might have been better maintained if the intervention had lasted longer. The questionnaire and educational intervention did not cover all aspects of lifestyle modification or information about the SCORE score, so it is unknown how variables such as resting, sleeping, and mental or social activities could change risk factor profiles.
Conclusions
As a result of pro-health education in the field of lifestyle modifications, a significant reduction of risk factors for cardiovascular diseases, as well as improved compliance and satisfaction with pharmacological treatment, was observed. Thus, appropriate personalized advice on lifestyle habits should be given to each examinee in a positive, systematic way following the periodic health check-ups in order to reduce the person's risk and improve the effectiveness of the treatment.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board (or Ethics Committee) of NAME OF INSTITUTE (protocol code 93-338 and date of approval) for studies involving humans.
Initial results with [18F]FAPI-74 PET/CT in idiopathic pulmonary fibrosis
Abstract Idiopathic pulmonary fibrosis (IPF) is a chronic fibrosing interstitial lung disease with a poor prognosis. Owing to the crucial role of activated fibroblasts in fibrosis, 68Ga-labeled FAP ligands have shown highly promising results in fibrosis imaging of the lung. However, 18F-labeled FAP ligands might provide qualitatively superior images, with accompanying economic benefits due to large-scale production. We therefore sought to investigate the potential of [18F]FAPI-74 prospectively in a small patient cohort. Methods Eight patients underwent both [18F]FAPI-74-PET/CT and HRCT scans and were compared with a control group without any fibrosing pulmonary disease. The tracer uptake of fibrotic lung areas was analyzed in synopsis with radiological and clinical parameters. Results We observed correlations between the fibrotic active volume, the Hounsfield scale, and the vital and diffusing capacity of the lung. Conclusion These initial results support our assumption that [18F]FAPI-74 offers a viable non-invasive assessment method for pulmonary fibrotic changes in patients with IPF.
Introduction
Idiopathic pulmonary fibrosis (IPF) is a chronic fibrosing interstitial lung disease of unknown cause with a very poor prognosis, characterized by abnormal collagen accumulation in the lung parenchyma, leading to progressive impairment of regeneration and repair [1,2]. Various genetic and environmental factors have been considered to trigger the cascade of inflammatory events, leading to permanent loss of normal lung tissue [3]. High-resolution computed tomography (HRCT) and pulmonary function tests currently represent the mainstay of diagnosis, of which HRCT is considered the gold standard due to its reported positive predictive value of 90% [4,5]. However, since these examinations reveal fibrotic changes rather late in the course of the disease, lung areas with minor fibrotic changes may be difficult to detect in the short term during therapy, impairing clinical decision-making and optimal dynamic disease management.
The introduction of fibroblast activation protein (FAP) ligands as fibrosis monitoring agents offers great potential to fulfill this unmet clinical need. Several studies investigating the efficacy of 68Ga-labelled FAP ligands provide highly promising insights into the mechanism and clinical course of fibrosis in lung parenchyma [6-10]. However, due to the well-known drawbacks of 68Ga-labelling, such as a high radiation burden with suboptimal imaging quality and poor cost-effectiveness, 18F-labelled FAP ligands draw great attention in routine clinical care [11,12]. To the best of our knowledge, this is the first study analyzing the efficacy of [18F]FAPI-74-PET/CT prospectively in a small patient cohort.
Patients
Eight patients suffering from IPF were recruited at the National Thorax Institute, Santiago (Chile) between March and July 2021. IPF was diagnosed based on clinical parameters, functional tests (spirometry and DLCO), the radiologic pattern on CT, and, in some cases, pathology results, assessed by an experienced pulmonology physician in conjunction with a radiologist, both with over 20 years of experience in the interpretation of lung disease. The cohort included 4 males and 4 females with a median age of 71 years (range 66-77) (Table 1). The time since initial diagnosis ranged from 5 to 10 years. The control group included six male patients who had undergone [18F]FAPI-74 within the diagnostic workup of a cancer entity (pancreas cancer n = 2; rectum cancer n = 1; sarcoma n = 3) with no clinical or radiological sign of a pathologic pulmonary finding between March and June 2021 (median 70 years, range 68-78). The study was approved by the regional ethics committee board (CEC SSM Oriente/05012021) and conducted in accordance with the Declaration of Helsinki, Good Clinical Practices, and national regulations. Written informed consent was obtained from all patients before undergoing any intervention and on an individual basis. The imaging and clinical data were then anonymized and analyzed retrospectively at the University Hospital of Duesseldorf (UKD).
Image acquisition
All PET scans were performed 60 min after intravenous tracer administration using a Biograph mCT Flow scanner (Siemens, Erlangen, Germany). Imaging data were acquired in 3-dimensional mode (matrix, 220 × 220) with an acquisition time of 3 min per bed position. Attenuation correction was performed using CT data (170 mAs, 100 kV, 2-mm slice thickness). The injected activity of [18F]FAPI-74 was 199-239 MBq.
Image analysis
Tracer uptake in the pulmonary fibrotic lesions was quantified with mean and maximum standardized uptake values (SUVmean and SUVmax) and the fibrotic active volume (FAV). FAV was determined in volumes of interest (VOI) with the isocontour set at 45% of the maximum tracer uptake within the respective region of interest (ROI), using the automated lung segmentation protocol of a dedicated software package (Hermes Hybrid Viewer, Affinity 1.1.4; Hermes Medical Solutions, Stockholm, Sweden). The Hounsfield scale (HU) in the area corresponding to the FAV was quantified as HUmean and HUmax.
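Conceptually, the FAV computation reduces to a threshold-and-count operation on the SUV volume. The following is a simplified re-implementation on a toy volume, not the Hermes Hybrid Viewer algorithm; all names and values are our own:

```python
import numpy as np

def fibrotic_active_volume(pet_volume, lung_mask, voxel_volume_ml, threshold=0.45):
    """Sketch of the FAV idea: within the segmented lung, keep voxels whose
    uptake exceeds 45% of the maximum uptake in the region of interest, and
    sum their volume. Simplified; not the vendor's implementation."""
    roi = np.where(lung_mask, pet_volume, 0.0)
    suv_max = roi.max()
    active = roi >= threshold * suv_max        # 45% isocontour
    fav_ml = active.sum() * voxel_volume_ml
    suv_mean = roi[active].mean() if active.any() else 0.0
    return fav_ml, suv_max, suv_mean

# Toy example: a 3D SUV map with a "fibrotic" hot region inside the lung mask.
pet = np.zeros((20, 20, 20))
pet[5:10, 5:10, 5:10] = 1.4
pet[7, 7, 7] = 2.0
mask = np.ones_like(pet, dtype=bool)
print(fibrotic_active_volume(pet, mask, voxel_volume_ml=0.064))
```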
Statistical analysis
Statistical analysis was performed using SigmaStat Version 3.5 (Systat Software, Inc., San Jose, CA, USA), with SigmaPlot Version 11.0 (Systat Software, Inc., San Jose, CA, USA) for graphical visualization. The comparison of tracer uptake between IPF patients and the control group was performed using a two-sided t-test. A p-value of less than 0.05 was considered statistically significant. The correlation between tracer uptake and clinical parameters was determined using Pearson's correlation analysis.
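On invented data of the same scale as the medians reported later, the two analyses look as follows in Python (a sketch only; the actual per-patient values are not given):

```python
from scipy import stats

# Hypothetical SUVmean values, consistent in scale with the reported
# medians (IPF 1.40 vs controls 0.90); not the measured data.
suv_ipf = [1.10, 1.25, 1.35, 1.40, 1.45, 1.60, 1.75, 1.90]
suv_ctrl = [0.60, 0.75, 0.85, 0.95, 1.10, 1.40]

t, p = stats.ttest_ind(suv_ipf, suv_ctrl)        # two-sided by default
print(f"IPF vs controls: t = {t:.2f}, p = {p:.4f}")

# Pearson correlation between tracer uptake and a clinical parameter,
# e.g. SUVmean vs Hounsfield units (invented paired values).
hu = [-820, -790, -760, -750, -730, -700, -660, -620]
r, p_r = stats.pearsonr(hu, suv_ipf)
print(f"Pearson r = {r:.3f}, p = {p_r:.4f}")
```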
Tracer uptake in the fibrotic lung
Fibrotic changes in the pulmonary parenchyma showed markedly elevated [18F]FAPI-74 uptake on visual analysis (Fig. 1). The FAV clearly delineated the fibrotic changes in the lung parenchyma (Fig. 2), with correspondingly higher, statistically significant SUVmax and SUVmean values compared with the control group (Fig. 3a and b).
Correlation between tracer uptake and radiological and clinical parameters
We analyzed the correlation of SUVmean with the Hounsfield scale (HU) obtained from CT data in the area corresponding to the FAV, which showed a significant correlation (R = 0.887, p < 0.005) (Fig. 4a). In addition, FAV showed a significant negative correlation with forced vital capacity (FVC) (R = −0.759, p < 0.05), and a mild but not significant negative correlation with CO diffusing capacity (R = −0.593, p = 0.121) (Fig. 4b and c).
Discussion
IPF is a relatively rapidly progressing lethal disease characterized by the abnormal accumulation of collagen in lung tissue, impairing its ability for repair and regeneration [1]. These changes in the extracellular matrix enhance the migration and activation of quiescent fibroblasts, which further accelerates the fibrosing process [2]. Thus, early detection and initiation of antifibrotic therapy, as well as a reliable therapy-monitoring tool, appear essential for therapy management. Based on this unmet clinical need, several research groups have investigated the novel tracer family of FAP ligands in the assessment of fibrosing processes of the lung, with highly interesting, promising results [6-9].
In light of the encouraging results obtained with 68Ga-labeled FAPI [13-15], and to overcome some drawbacks of 68Ga-labeling, we hypothesized that the quantification of lung fibrosis using the fibrotic active volume in [18F]FAPI-74 might correlate with clinical severity and thus could serve as a non-invasive evaluation method. To the best of our knowledge, this is the first study to investigate [18F]FAPI-74 in pulmonary fibrosis.
In the current study, we could demonstrate that patients with clinically impaired lung function showed a higher fibrotic active volume with a corresponding pulmonary area of positive FAPI uptake. Because FAP tracers are known to accumulate in non-quiescent, activated fibroblasts, our results indicate that FAPI-PET/CT can quantify ongoing tissue remodeling, which might predict disease progression. [18F]-FAPI uptake (median SUVmean) in our study was 1.40 (1.10-1.90) in IPF vs. 0.90 (0.60-1.40) in the control group, similar to the results of Bergmann et al. [7] using [68Ga]-FAPI-04, with a median SUVmean of 0.80 (0.60-2.10) in ILD vs. 0.50 (0.40-0.50) in controls. The slightly higher tracer uptake observed in our study might result from the longer half-life of 18F, with more tracer remaining in the region of interest at the time of the PET scan. Interestingly, Bergmann et al. [7] have previously shown in the mentioned prospective study in patients with interstitial lung disease (ILD) in systemic sclerosis that [68Ga]-FAPI-04 accumulation at baseline was associated with the risk for ILD progression over a follow-up period of 6-10 months. The authors observed this disease progression independently of other known risk factors, such as the extent of involvement on HRCT at baseline and FVC at baseline. Considering this, our data as a one-point assessment might carry further prognostically relevant information, which should be verified over the long term. We found that the fibrotic active volume of the fibrotic lung correlates significantly with the Hounsfield scale on CT. This result is consistent with the previous work of Röhrich et al. [8], which demonstrated that the tracer uptake of [68Ga]-FAPI-46 correlates with radiographic parameters such as the fibrosis (FIB) and ground-glass opacity (GGO) indices. Bergmann et al. [7] also reported a minor correlation of [68Ga]-FAPI-04 with the extent of ground glass opacities. Because radiographic signs like ground-glass opacity can easily be affected by morphological changes of other causes (e.g., alveolitis or edema of infectious background) and can be rather unspecific, these findings further support the utility of FAPI-PET/CT in the clinical setting, as it combines radiographic and fibrotic information, which might allow insight into current disease activity with possible prognostic implications. Regarding general image quality, 18F-labeled compounds offer higher spatial resolution, allowing the detection of smaller lesions compared to 68Ga-labeled compounds, combined with several other advantages including cost-effective production and centralized supply. Whereas 18F-labeled compounds allow larger batch production in facilities equipped with cyclotrons at lower cost and can be delivered to remote centers (satellite concept), the typical batch size of a 68Ge/68Ga generator allows the daily clinical scanning of only a few patients per elution and requires on-site and on-time synthesis of the radiotracer, also because of the shorter half-life (68 min).
Further, we demonstrated an investigator-independent, readily reproducible evaluation method using a dedicated software package (Hermes Hybrid Viewer, Affinity 1.1.4; Hermes Medical Solutions, Stockholm, Sweden) that provides a clear delineation of fibrotic or chronic inflammatory changes of lung parenchyma in terms of fibrotic active volume. Although fibrotic active volume, as already mentioned, displayed a statistically significant negative correlation with forced vital capacity, the correlation with CO-diffusing capacity did not reach statistical significance, most probably due to the small cohort size.
The main limitation of our study is the small cohort size. Additionally, PET images were acquired without respiratory gating, so there might be some difference in tracer uptake between respiratory phases.
Conclusion
These encouraging results pave the way for further, multicentric, large-scale trials for the evaluation of fibrosing processes in lung parenchyma with [ 18 F]FAPI-74.
Fig. 3 [18F]FAPI-74 uptake in patients with IPF. SUV data obtained in the patients and in controls are displayed as whisker plots. a SUVmax of [18F]FAPI-74 uptake of IPF patients vs. controls. b SUVmean of [18F]FAPI-74 uptake of IPF patients vs. controls
Fig. 4 Correlation between [18F]FAPI-74 uptake and radiological and clinical parameters. a Correlation between SUVmean and Hounsfield scale on CT. b Correlation between fibrotic active volume (FAV) and pulmonary function (FVC, forced vital capacity). c Correlation between fibrotic active volume (FAV) and CO diffusing capacity (DLCO)
Table 1 Patient characteristics. FVC, forced vital capacity; DLCO, diffusing capacity for carbon monoxide; CO, carbon monoxide
LABORATORY EVALUATION OF ORGANIC AND CHEMICAL WARM MIX ASPHALT TECHNOLOGIES FOR SMA ASPHALT
Warm mix asphalt (WMA) technologies allow significant lowering of the production and paving temperature of conventional hot mix asphalt (HMA), which promises various benefits, e.g. lower greenhouse gas emissions, reduced energy consumption, improved working conditions, and better workability and compaction. However, in order to reach widespread implementation of WMA, it is necessary to prove that it has the same or better mechanical characteristics and long-term performance as HMA. This article presents a laboratory study conducted to evaluate two different WMA technologies, chemical (using Rediset WMX) and organic (using Sasobit), for use with stone matrix asphalt (SMA). The properties of two types of bitumen after modification with two different dosages of each WMA additive were tested by traditional empirical test methods and with the Dynamic Shear Rheometer over a wide temperature range. Asphalt testing was performed for an SMA11 type mixture. First, the necessary changes in testing conditions were determined by means of asphalt stiffness; the results suggested that, for adequate comparison with the reference HMA, at least two hours of asphalt aging are essential before preparing test specimens. The properties of the asphalt were determined for specimens prepared at four different compaction temperatures by means of two compaction methods, the Marshall hammer and the gyratory compactor. The test results show that it is possible to reduce the compaction temperature from 155 °C for HMA to at least 125 °C for both WMA products while maintaining similar density and mechanical characteristics at intermediate to high temperatures.
Introduction
The modern warm mix asphalt (WMA) technologies have the potential to reduce the production temperature by 20 °C up to 40…50 °C relative to conventional hot mix asphalt (HMA), and to do so without affecting the performance of the asphalt (Čygas et al. 2009; Hurley, Prowell 2006). The different WMA production technologies are categorized into three groups (Zaumanis et al. 2012):
1. Foaming technologies;
2. Organic or wax technologies;
3. Chemical additives.
However, not all of these techniques provide asphalt performance similar to HMA; therefore, the WMA design process should involve not only empirical characterization of the asphalt but also a careful analysis of the bitumen and of the performance properties of the asphalt at different temperatures. The test methods and evaluation criteria may require adjustment in some cases. This laboratory study has been conducted to evaluate a Fischer-Tropsch wax and one of the available chemical additives. Sasobit (Fig. 1a) is a Fischer-Tropsch process wax that reduces the viscosity of bitumen above the melting point of the wax (~90 °C), thus improving the coating of aggregates and the workability of the mix (Zaumanis 2011).
Rediset WMX (Fig. 1b) is a chemical additive in flaked form with a melting point of 110 °C. It is a combination of cationic surfactants and a rheology modifier, based on organic additives. It modifies the bitumen chemically and encourages active adhesion, which improves the coating of the aggregates by the binder. Other components of the additive reduce the viscosity of the binder at the production temperature (Zaumanis 2011).
Tasks of the research
The aim of the research is to investigate the changes in bitumen consistency after modification with WMA additives, to determine the physical-mechanical properties of asphalt after reduction of the compaction temperature, and to compare the characteristics of WMA with those of conventional HMA. To achieve this aim, the following tasks have been set:
1. Investigation of the changes in bitumen consistency at different temperatures after modification with WMA additives.
2. Determination of the necessary adjustments in mixture preparation, testing conditions and compaction method for evaluation of WMA properties and their adequate comparison with HMA.
3. Determination of the physical and mechanical properties, including stiffness, resistance to deformations and compactibility, of asphalt modified with WMA additives, and comparison of the results with conventional HMA.
Methodology
In order to determine the visco-elastic behaviour of bitumen after modification with WMA additives, testing was performed with conventional test methods and the Dynamic Shear Rheometer (DSR), according to the testing plan provided in Fig. 2.
Testing with empirical test methods
The test results (Table 1) after addition of Rediset WMX show that this additive has a small effect on the bitumen consistency characteristics at any temperature, suggesting that a difference in viscosity is not the explanation of the warm mix effect. Compared to pure bitumen, the binder containing Sasobit shows a tendency of consistency reduction at temperatures above the melting point of the additive and an increase after crystallization of the wax. As expected, the degree of viscosity change depends on the amount of the additive in the bitumen. If the different types of Sasobit modified binders are compared, the conclusion is that, relative to the pure bitumen, the influence on the tested properties is similar, with the exception of dynamic viscosity. The results for the initially softer bitumen (50/70) in this test show a comparatively greater increase in viscosity than for the harder bitumen (40/60). However, interpretation of the consistency results for determination of the optimum mixing temperature shows that only about a 5 °C reduction is attained. This suggests that viscosity reduction is not the only property allowing the temperature to be reduced and that another parameter, bitumen lubricity (Hanz et al. 2010), should be evaluated for describing the effects of these additives on the bitumen properties. The Fraass breaking point temperature is significantly increased by using WMA additives. However, in general, the properties of the original bitumen are irrelevant because it oxidizes during the production process. It is more important to evaluate the bitumen in the state in which it occurs in the mixture. The aging process was simulated by the RTFOT, and the influence of this procedure on the Fraass temperature is significantly different for pure and modified bitumen. The breaking point temperature of the reference bitumen increased by a notable 5 °C after the RTFOT. That of the Sasobit modified binder increased by only 1 °C, and it even dropped by 2 °C for bitumen modified with Rediset WMX, which suggests some anti-aging effect of the chemical additive on bitumen. This shows that the general concern that wax technology significantly worsens the low temperature behaviour may not be true for all types of bitumen and has to be verified. It must also be taken into consideration that the effect of oxidative hardening in the actual production process would be smaller for bitumen in WMA than for HMA because lower temperatures would be applied; therefore, possibly even greater flexibility of bitumen in WMA is attained.
The analysis of the consistency results after RTFOT suggests that aging has a similar effect on the change of mass and retained penetration for both WMA modified binders and pure bitumen. The changes in softening point for Sasobit are relatively smaller than for the other binders, but this is logical considering that it already had a significantly higher initial value in this test.
Performance-related testing
The DSR was used to measure the rheological properties of the binder. The test parameters (complex shear modulus (G*) and phase angle (δ)) are used to characterize both the viscous and the elastic behaviour at intermediate to high temperatures, which are the main ranges affected by the WMA additives.
The relative comparison of G* for modified and unmodified binders (Fig. 3) shows that after crystallization Sasobit increases the stiffness of the binder and improves the resistance to deformation. The relative comparison between the two Sasobit modified bitumens shows a logarithmic increase when the wax content changes from 2% to 3%, which indicates that 3% is the best alternative for further testing. The illustration (Fig. 3) also demonstrates the crystallization range of the wax, which is between 80 °C and 90 °C, meaning that on the construction site the compaction should be finalized before this temperature is reached. At this temperature the additive creates a shear-sensitive binder, whose consistency depends on both the temperature and the frequency of loading. The evaluation of G* for Rediset WMX, however, suggests that this chemical additive has almost no effect on this property.
The summary of changes in phase angle, in comparison with pure bitumen, provided in Fig. 4 shows that binders containing Sasobit have improved elasticity; however, the addition of Rediset WMX, as with G*, shows almost no effect on δ at any given temperature. The large phase angle variations between 80 °C and 70 °C, compared to other temperature ranges, for 3% Sasobit may be attributed to the process of wax crystallization while the test was being performed.
Fig. 3. Ratio of G* for the modified binders relative to pure bitumen
Both of the results (G* and δ) somewhat explain the reports of increased resistance to rutting for the Sasobit modified bitumen, which is especially important for high in-service temperatures (~60 °C) and the short loading times that are typical for traffic.
Mixture testing methodology
Testing of the mixture has been performed on an SMA-11 mixture with granite coarse aggregates according to the testing plan in Fig. 5. Based on the bitumen testing results, 2% Rediset WMX and 3% Sasobit were used for mixing the WMA. Three different WMA compaction temperatures were compared with the reference HMA temperature, which was determined according to EN 12697-35 Laboratory Mixing for the 40/60 grade bitumen. All of the testing was performed according to the respective EN standard procedures listed in Fig. 5.
The differences in WMA production temperature and technology include modification of bitumen consistency, different bitumen and aggregate interaction, and changes in the binder aging processes (Bueche, Dumont 2011). This may result in a different strength gain of the WMA compared to HMA during a short period of time (Chowdhury, Button 2008); therefore, a part of the testing plan was to determine whether short-term aging is necessary before performing tests. Short-term hardening simulates the initial strength gain processes that would occur during actual asphalt storage in the silo and transportation of the mix to the paving site (Perkins 2009). Asphalt aging was performed according to AASHTO PP2-2001: Standard Practice for Short and Long Term Aging of Hot Mix Asphalt, in a forced draft oven at the proposed compaction temperature. The mechanical effect of asphalt aging was examined by means of the indirect tensile test, which characterizes the stiffness of asphalt and is proven to be sensitive to the stiffness of the binder, the length of short-term aging, the compaction temperature, and anti-stripping treatments (Aschenbrener 1995).
Compaction was performed by means of two different methods – Marshall hammer and gyratory compactor. Impact (Marshall) compaction was performed according to EN 12697-30 Specimen Preparation by Impact Compactor at the desired temperature with 50 blows from each side. The gyratory compactor allows evaluation of mixture compaction over the whole densification range, which is especially important for the assessment of WMA properties. However, there are concerns that it is insensitive to temperature changes (Hurley, Prowell 2006). To evaluate a wide range of compaction effort, 200 gyrations at 600 kPa with a 1.25° external angle were applied. Moulds of 100 mm diameter were used. The maximum density of the mixture was determined for unaged reference samples according to EN 12697-5 Determination of the Maximum Density, procedure A (volumetric), using water.
Asphalt aging
The densification data from the gyratory compactor were expressed as a function of density at a particular number of gyrations, with a reference maximum density of 2532.2 kg/m³. The results show significant changes in densification at different times of aging. The compactibility data for specimens with no aging confirm that compaction requires less energy for both WMA mixes compared to HMA. However, after hardening for two and four hours, the compaction characteristics level out and are very similar for both WMA and HMA.
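As a rough illustration of this representation (the bulk-density readings below are hypothetical placeholders, not the study's logged data), the densification curve is simply each reading divided by the reference maximum density:

rho_max = 2532.2  # kg/m3, reference maximum density used in the study
# hypothetical bulk densities (kg/m3) at selected gyration counts
readings = {1: 2100.0, 10: 2280.0, 70: 2420.0, 100: 2450.0, 200: 2480.0}
for n, rho in sorted(readings.items()):
    print(f"gyration {n:4d}: {100.0 * rho / rho_max:5.1f} % of maximum density")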
The stiffness modulus and the air voids content at different aging times are presented in Fig. 6. The results show an increase in stiffness after extending the aging time for all specimens, except for Sasobit at 4 h, which is probably due to the excessive density of this core. The strength gain, however, is different for the WMA products compared to the reference HMA. Whereas the specimens initially have a similar stiffness modulus, already after two hours of aging the stiffness shows a variation of 2089 MPa between the lowest (Rediset WMX) and the highest (Sasobit) of the obtained results.
The stiffness test results suggest that initial aging is essential for an adequate comparison of mixes; however, further research is required to determine the optimum oxidation time. All subsequent samples for the purposes of this research were compacted after two hours of aging, because this is considered an average time for mixture storage and transportation.
Density
The results of bulk density for the two compaction methods, shown in Figs 7 and 8, do not correlate. The density of the reference HMA at 155 °C for gyratory specimens was lower than for WMA, whilst for Marshall specimens it was higher in all cases. This is probably due to the different compaction energies used, but the different temperature sensitivity of each compaction method is another explanation. However, numerically the difference between all the WMA specimens and HMA, except for Marshall at 115 °C, is minor, and the cores are considered to have a similar density.
Fig. 8. Bulk density at different compaction temperatures for gyratory specimens
The compaction data from the gyratory compactor, in percent of maximum density, for both WMA products and the control mix at different temperatures are illustrated in Fig. 9. It is obvious that the compactibility at temperatures of 125 °C and 135 °C is similar to the reference mix for both WMA products. WMA at 115 °C, however, has noticeably different compaction characteristics for both products. The density in the first part of the compaction is significantly higher than for the other samples and reaches its final compaction level at about 100 gyrations for Sasobit and 70 gyrations for Rediset WMX. The compaction energy of about 70 gyrations is considered to relate to the actual field compaction, meaning that with this compaction effort, a higher in-situ density than for HMA would be achieved. This behaviour is attributed to the reduced hardening of the binder due to the lower aging temperature.
Stiffness
The comparison between the stiffness moduli of Marshall and gyratory cores has shown a poor correlation (Fig. 10) in relation to the control mix at 155 °C. Therefore, the evaluation of the stiffness modulus of WMA depends not only on the type of additive used and the compaction temperature, but also on the compaction method and/or the applied compaction force. Nonetheless, the results show that the stiffness of Sasobit is higher than that of Rediset WMX at all compaction temperatures for both compaction methods. It is also clear that the difference between the stiffness of both WMA at 135 °C and 125 °C is not significant, thus allowing the assumption that it is possible to lower the temperature to at least 125 °C while maintaining the relatively highest possible stiffness modulus for both WMA products. Further lowering of the temperature is considered to reduce the stiffness of the mixture.
Permanent deformations
The Marshall test results are presented in Table 2. The Marshall stability results show a tendency to decrease with the reduced temperature and are generally lower than for the control mix at 155 °C, meaning that the rutting resistance is worse than for the reference mix at 155 °C. The results of Marshall flow also show a tendency to decrease with lowering of the temperature. This means less deformation in the pavement under the critical stability load. The Marshall quotient values are calculated as the ratio of stability to flow and represent an approximation of the ratio of load to deformation under the particular test conditions. Therefore, the results can be used as a measure of the resistance of materials in service to shear stresses, permanent deformation and, hence, rutting. The results show that the WMA at 125 °C has approximately the same value as the reference. However, although the Marshall test is widely used for mix design, it is important to recognize its limitations. Research (Brown 1993) for conventional HMA shows that the Marshall test is a poor measure of the permanent deformation of asphalt, especially for SMA, which was evaluated in this research.
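Since the Marshall quotient is just the ratio described above, it is straightforward to compute from the stability and flow readings; the values below are hypothetical placeholders, not the results of Table 2:

samples = [
    ("HMA 155 C", 9.8, 3.5),          # (label, stability in kN, flow in mm)
    ("Sasobit WMA 125 C", 9.1, 3.2),
    ("Rediset WMA 125 C", 8.9, 3.1),
]
for label, stability, flow in samples:
    # a higher quotient (kN/mm) is read as better resistance to permanent deformation
    print(f"{label}: Marshall quotient = {stability / flow:.2f} kN/mm")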
The dynamic creep test has been performed only for the WMA samples that, according to the previous test results, were considered to have the best ratio of temperature reduction to performance. Consequently, samples compacted at 125 °C were used. The maximum strain results at the end of the test (3600 s) are presented in Fig. 11 and show similar levels for WMA specimens compacted with both compaction methods, but the results of the reference samples differ by 30%. These differences for the HMA specimens are attributed to different compaction levels resulting from the different compaction methods and densification force. Nonetheless, in general, the results are considered to show good resistance to rutting, proving that a reduction in the compaction temperature by 30 °C for both WMA products is possible without an increased susceptibility to permanent deformations. Elastic behaviour, which is measured as the recovery after the relaxation period, has shown proportionally almost identical data for WMA and HMA, meaning that both WMA products are capable of recovering after the applied stress as well as the control mix.
Conclusions
1. Addition of Sasobit reduces viscosity of bitumen at high temperatures and increases it at intermediate temperatures. At in-service temperatures, Sasobit provides higher resistance to deformations and improved elasticity of bitumen. Addition of Rediset WMX has relatively minor effect on the viscosity properties of bitumen.
2. The low temperature properties after RTFOT aging are similar for pure and modified bitumen. The use of Rediset WMX reduced oxidative hardening compared to other samples and decreased the Fraass breaking point temperature after RTFOT.
3. Oxidative hardening has different effects on WMA and HMA. Therefore, for the laboratory mixed samples the changes in mix preparation method should be considered by performing asphalt aging before carrying out compaction.
4. The use of both tested WMA products allows the compaction temperature to be reduced by at least 25 °C while the density remains similar to that of HMA. The compactibility at this temperature is similar to that of HMA.
5. The analysis of mechanical properties of asphalt showed that reduction of compaction temperature by at least 25 °C for both WMA products is possible while maintaining similar stiffness and without having an increased susceptibility to permanent deformations.
How small can an over-spinning body be in general relativity?
The angular momentum of the Kerr singularity must not be larger than a threshold value in order for it to be enclosed by an event horizon: a Kerr singularity with angular momentum exceeding the threshold value is naked. This fact suggests that, if cosmic censorship holds in our Universe, an over-spinning body cannot collapse to a spacetime singularity without releasing its angular momentum. A simple kinematical estimate of two particles approaching each other supports this expectation and suggests the existence of a minimum size for an over-spinning body. However, this does not imply that the geometry near the naked singularity cannot appear. By analyzing initial data, i.e., a snapshot of a spinning body, we see that an over-spinning body may produce a geometry close to that of the Kerr naked singularity around itself, at least as a transient configuration.
I. INTRODUCTION
It is a well known fact that the Kerr singularity of mass M is enclosed by an event horizon if and only if its angular momentum J is not larger than a threshold value J_max := GM²/c, where G and c are Newton's gravitational constant and the speed of light, respectively: the Kerr singularity with J > J_max is necessarily naked (see e.g., Ref. [1]). If the cosmic censorship conjecture, which states that a spacetime singularity produced by physically reasonable gravitational collapse is enclosed by an event horizon [2,3], is true, an over-spinning body cannot collapse to spacetime singularities if it does not release its angular momentum. A simple kinematical estimate supports this expectation: if we impose the condition J > J_max on the total angular momentum, the impact parameter b of two non-interacting test particles in Minkowski spacetime is bounded below as b > 2GE/c⁴, where E is the total energy of the system. However, it is a very non-trivial question whether an over-spinning body can be so small, even for a moment, that the geometry around it is almost equal to that of the domain very near the naked singularity in the over-spinning Kerr spacetime. There are several studies on the high-speed collision of two black holes with non-vanishing total angular momentum, but these studies have not paid attention to this issue [4][5][6][7]. In order to get an answer to this problem, we do not need to investigate dynamical processes; it is sufficient to study only the initial data of the Cauchy problem in general relativity. In this paper, we set up the initial data of an axi-symmetric infinitesimally thin shell with the topology of S² by numerically solving the constraint equations in the Einstein equations. We assume that the outside of the shell is identical to a spacelike hypersurface of the Kerr spacetime; such initial data were discussed by Corvino and Schoen [8]. We assume that the inside of the shell is a vacuum regular space.
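The kinematical bound quoted above can be sketched as follows (a heuristic estimate, under the simplifying assumption of two ultra-relativistic particles that each carry half of the total energy E and pass each other with mutual impact parameter b):

J \simeq 2 \cdot \frac{b}{2} \cdot \frac{E/2}{c} = \frac{bE}{2c},
\qquad
J > J_{\max} = \frac{GM^2}{c} = \frac{G}{c}\left(\frac{E}{c^2}\right)^2
\;\Longrightarrow\;
b > \frac{2GE}{c^4}.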
The shell is assumed to be located at the constant radial coordinate r = R of the Boyer-Lindquist coordinates, which cover the Kerr domain outside the shell. We investigate how small R can be in the case of the over-spinning shell, J > J_max, under the weak, strong and dominant energy conditions, which seem to be reasonable for macroscopic matter fields.
Hereafter, we adopt the geometrized units G = c = 1. In this paper, Greek indices represent spacetime components, whereas Latin indices denote spatial components.
II. CONSTRAINT EQUATIONS
A set consisting of the intrinsic metric γ_ij and the extrinsic curvature K_ij of a spacelike hypersurface Σ_0, together with the energy density and the momentum density of the matter fields, can serve as the initial data of the Cauchy problem in general relativity (see e.g. [9]). We may regard this set as a snapshot of the system.
The intrinsic metric γ_ij determines the intrinsic geometry of Σ_0, whereas the extrinsic curvature K_ij determines how Σ_0 is embedded in the spacetime manifold, as shown below.
The future directed unit normal to Σ_0 is denoted by n_μ = (−α, 0, 0, 0), where α is called the lapse function. The projection operator to Σ_0 is defined as

B^μ_ν := δ^μ_ν + n^μ n_ν.

Note that γ_ij = B_ij. The extrinsic curvature is defined as

K_μν := −B^α_μ B^β_ν ∇_α n_β.

From this definition, we can see that K_μν is a spatial tensor, i.e., K_μν n^ν = 0 = n^μ K_μν, and that it can be rewritten in the form

K_ij = (1/2α)(D_i β_j + D_j β_i − ∂_t γ_ij),

where D_i is the covariant derivative with respect to γ_ij, and β_i := g_0i is called the shift vector. The energy density ρ and the momentum density J_i for normal line observers are defined as

ρ := T_μν n^μ n^ν,  J_i := −T_μν n^μ B^ν_i,

where T_μν is the stress-energy tensor of matter or radiation fields.
The initial values must satisfy the constraint equations, which are the time-time component and the time-space components of the Einstein equations; the former is called the Hamiltonian constraint, and the latter the momentum constraint. They are written in the form

³R + K² − K_ij K^ij = 16πρ,  (6)
D_j (K^ij − γ^ij K) = 8π J^i,  (7)

where ³R is the Ricci scalar of γ_ij, and K := γ^ij K_ij.
III. INITIAL DATA: A SNAPSHOT OF A RAPIDLY ROTATING SHELL
As mentioned, we set up the initial data of a rapidly rotating infinitesimally thin shell with the spherical topology S². The energy density and the momentum density confined on the shell are not fixed prior to solving the constraint equations (6) and (7) in the prescription we adopt. In this section, we show how to obtain the initial data of γ_ij and K_ij by using the conformal decomposition [10][11][12].
We assume that the system is axi-symmetric and that its infinitesimal line element is written in the conformally decomposed form

dl² = φ⁴ [A dr² + r² B dθ² + r² C sin²θ dϕ²],

where 0 ≤ r < ∞, 0 ≤ θ ≤ π and 0 ≤ ϕ < 2π are spherical polar coordinates. We assume that the infinitesimally thin shell is located at r = R = constant. Hereafter we call λ_ij, defined by λ_ij dx^i dx^j := A dr² + r² B dθ² + r² C sin²θ dϕ², the conformal metric.
In order to define the "size" of the shell without any ambiguity, we assume that the outside of the shell is exactly the initial data of the over-spinning Kerr spacetime, whereas the inside of it is merely regular space. We adopt the Boyer-Lindquist coordinates for the outside Kerr domain, r > R, and hence, by defining the three functions

Δ(r) := r² − 2Mr + a²,  Σ(r, θ) := r² + a² cos²θ,  Π(r, θ) := (r² + a²)² − Δ(r) a² sin²θ,

the metric functions outside the shell, r > R, are given by

A = Σ/Δ,  B = Σ/r²,  C = Π/(r²Σ),

where M and a are the ADM mass and the Kerr parameter, respectively. Note that the ADM mass corresponds to the total energy, whereas Ma is the total angular momentum.
The only non-vanishing components of the extrinsic curvature of Σ_0 in the outside Kerr domain, r > R, are K_rϕ and K_θϕ. Of course, the above intrinsic metric and extrinsic curvature satisfy the constraint equations (6) and (7) with ρ = 0 = J_i. As is well known, there is a ring singularity at r = 0 in the Kerr spacetime with Boyer-Lindquist coordinates. Hereafter we assume R > 0 so that no spacetime singularity exists outside the shell.
We assume that the inside of the shell, r < R, is vacuum and regular. By introducing a smoothed step function with a constant L which satisfies 0 < L < R, we specify the metric functions A, B and C inside the shell in terms of this step function and a positive constant Ψ, whereas the conformal factor φ(r, θ) will be determined by solving the constraint equations.
Since the trace of the extrinsic curvature vanishes in the outside region r > R, we assume the same in the inside region r < R. We write the extrinsic curvature in terms of a vector potential X_i, where the stroke | denotes the covariant derivative with respect to the conformal metric λ_ij, and indices are raised with λ^ij, i.e., X^{|j} = X_{|i} λ^ij.
Substituting the above expression into the momentum constraint (7) with J_i = 0, we obtain Eq. (24). Here, we impose the assumption (25), which leads to non-trivial components similar to those outside the shell, Eq. (26). In accordance with Israel's formalism [13], the derivative of the spacetime metric normal to the shell does not have to be single valued on the shell but only has to be finite there, and the same is true for the extrinsic curvature K_ij. Hence, the non-trivial components (26) do not have to be continuous at the shell r = R. From the assumption (25), the momentum constraint (24) takes the form (27), which is an elliptic type differential equation for X_ϕ. In order to obtain a meaningful solution of Eq. (27), we should impose an appropriate boundary condition at r = R. If we impose the continuity of the extrinsic curvature across r = R, we obtain from Eqs. (16) and (26) two conditions, (28) and (29). The former comes from the continuity of K_rϕ, whereas the latter comes from K_θϕ. Since the other components vanish identically both inside and outside of the shell, the continuity of those components is trivially guaranteed. The condition (28) is of Neumann type, whereas the condition (29) is of Dirichlet type since, by integrating (29) with respect to θ, we obtain (30), where we have chosen the integration constant so that X_ϕ(R, 0) = 0. We cannot impose both of them at once; we will adopt the boundary condition (29), or equivalently (30). In the next section, the reason why we adopt the condition (30) will be made clear.
Substituting Eqs. (8) and (23) into Eq. (6) with ρ = 0, we obtain an elliptic type differential equation (31) for the conformal factor φ, in which R is the Ricci scalar of the conformal metric λ_ij. By solving Eqs. (27) and (31), we can determine the initial values of γ_ij and K_ij in r < R.
Since the intrinsic metric should be continuous at r = R, the boundary condition on Eq. (31) should be of Dirichlet type, fixed by the continuity of the metric across the shell. We should note that the first order derivative of φ with respect to r will be discontinuous at r = R, even though every non-vanishing component of the conformal metric λ_ij is a C⁵ function.
IV. THE SURFACE STRESS-ENERGY TENSOR
In this section, we show how to obtain the surface stress-energy tensor of the shell at r = R through Israel's formalism [13].
The world volume of an infinitesimally thin shell will be a singular timelike hypersurface Σ_s. We assume that the initial data corresponds to a moment at which the "size" of the shell is an extremum: it may be the moment of a bounce due to its large angular momentum.
This assumption implies that the timelike unit normal n^μ of the spacelike hypersurface Σ_0 is tangent to Σ_s (see Fig. 1). Here note that this assumption restricts not only the initial situation but also, in part, the time evolution. As a consequence of this assumption, the stress of the matter field composing the shell is partly determined in the present setting of the initial data, although information about the stress of the matter field is not necessary merely for setting up the initial data.
The projection operator to Σ_s is defined as

h^μ_ν := δ^μ_ν − r^μ r_ν,

where r^μ is the unit normal to Σ_s. The extrinsic curvature of Σ_s is defined with respect to r^μ in the same manner as K_μν. Then, Israel's condition of the metric junction is given in the form of Eq. (37), where, denoting a quantity evaluated just outside the shell by a symbol with + and that evaluated just inside the shell by a symbol with −, we have defined [Q] := Q⁺ − Q⁻ for any quantity Q. The quantity S_μν on the right hand side of Eq. (37) can be regarded as the surface stress-energy tensor of the shell through Einstein's equations.
We introduce a tetrad basis on the shell. In order to write its components explicitly with respect to the coordinate basis, we need to fix the spacetime coordinates, i.e., the lapse function and the shift vector. We adopt a coordinate condition in which g_0μ is equal to (−1, 0, 0, 0), i.e., Gaussian normal coordinates, and define the tetrad basis accordingly; the projection operator can then be written in terms of this basis. Through straightforward manipulations, we obtain the tetrad components of S_μν corresponding to the energy density and the momentum density. Information about the stress of the material fields on the shell is not necessary for fixing the initial value. However, as mentioned above, since we have assumed that the unit vector n^μ normal to Σ_0 is tangent to Σ_s, the stress for normal line observers has partly been determined. From Eq. (37) and the above results, we find that the surface stress-energy tensor takes the form of Eq. (51), characterized by a surface energy density σ, a surface momentum density j and a stress p. It should be noted that p has not yet been determined, since Q_(n)(n) is not determined by the present initial data [see Eq. (49)].
From the above results, we can see how the junction condition (37) determines the surface quantities. The obtained solutions should satisfy the following conditions: the conformal factor φ should be positive and finite in r < R. In Appendix A, we discuss the weak energy condition (WEC), the strong energy condition (SEC) and the dominant energy condition (DEC) [1] in the case of the present surface stress-energy tensor (51). All of WEC, SEC and DEC reduce to a single inequality, Eq. (54). As long as this inequality holds, an appropriate choice of p guarantees all of the energy conditions. If the equality in Eq. (54) is satisfied, p should be equal to σ/2, and S_αβ takes a correspondingly restricted form. The details of the derivation are shown in Appendix A.
V. NUMERICAL RESULT AND DISCUSSION
We numerically solve the constraint equations (27) and (31). We solve the momentum constraint (27) first and then, after substituting its solution into the Hamiltonian constraint (31), we solve Eq. (31). We adopt a finite difference method of second order accuracy. We denote the grid number for the domain 0 < r < R by N_r and that for the domain 0 < θ < π/2 by N_θ. We take N_r = 1000 and N_θ = 100 in a typical run, but N_r = 2000 and N_θ = 200 in cases where Ψ, L/R or R/M is small. The reason why the grid number in the r-direction is much larger than that in the θ-direction is that the Ricci scalar R is a very steep function of r. We invoke the ILUCGS method for the matrix inversions required to solve the elliptic type differential equations. In order to check the numerical code, we have confirmed the convergence of solutions as the grid number is increased: see Fig. 2.
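The structure of such a solve can be illustrated with a short sketch. The code below is not the authors' ILUCGS code: it relaxes a generic axisymmetric flat-space Poisson-type equation (a stand-in for Eqs. (27) and (31)) with plain Gauss-Seidel iteration on an (r, θ) grid with placeholder boundary data; the second-order discretization pattern is the same, only the iterative scheme is much slower than a preconditioned conjugate-gradient method.

import numpy as np

def relax(S, r, theta, u_outer, tol=1e-8, max_sweeps=20000):
    # Gauss-Seidel sweeps for u_rr + (2/r) u_r + (u_tt + cot(theta) u_t)/r^2 = S
    Nr, Nt = len(r), len(theta)
    dr, dt = r[1] - r[0], theta[1] - theta[0]
    u = np.zeros((Nr, Nt))
    u[-1, :] = u_outer                       # Dirichlet data at the outer edge r = R
    for _ in range(max_sweeps):
        diff = 0.0
        for i in range(1, Nr - 1):
            for j in range(1, Nt - 1):
                cot = np.cos(theta[j]) / np.sin(theta[j])
                crr, ctt = 1.0 / dr**2, 1.0 / (r[i]**2 * dt**2)
                new = (crr * (u[i+1, j] + u[i-1, j])
                       + (u[i+1, j] - u[i-1, j]) / (r[i] * dr)
                       + ctt * (u[i, j+1] + u[i, j-1])
                       + cot * (u[i, j+1] - u[i, j-1]) / (2 * r[i]**2 * dt)
                       - S[i, j]) / (2 * (crr + ctt))
                diff = max(diff, abs(new - u[i, j]))
                u[i, j] = new
        if diff < tol:                        # converged when sweeps stop changing u
            break
    return u

# example run on a small grid with zero source and unit outer boundary value
r = np.linspace(1e-3, 1.0, 48)
theta = np.linspace(1e-3, np.pi / 2, 24)
u = relax(np.zeros((48, 24)), r, theta, u_outer=1.0)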
The surface energy density σ is positive in all of our numerical calculations. In Fig. 3, the ratio of j to σ is depicted as a function of θ for various radii R with a = 2M, L = 0.5R and Ψ = 0.2. We see from this figure that the maximal value of |j|/σ is equal to the value of j/σ at the equator θ = π/2; this is true in all of our numerical calculations. The other important tendency is that the smaller the radius R of the shell is, the larger the maximal value of |j|/σ is. As shown below, this tendency can be moderated by taking adequate values of Ψ or L.
In Fig. 4, we depict the maximal value of |j|/σ, i.e., j/σ at θ = π/2, as a function of Ψ for the three cases R = 0.25M, 0.5M and 0.75M, where we assume L = 0.5R and a = 2M. We see from this figure that the smaller Ψ is, the smaller the maximal value of |j|/σ is. It is worthwhile to notice that the energy condition (54) is satisfied even in the case of R = 0.25M if we choose Ψ ≲ 9 × 10⁻². In Fig. 5, we depict j/σ at θ = π/2 as a function of L for the three cases R = 0.25M, 0.5M and 0.75M, where we assume Ψ = 10⁻¹. We can see from this figure that the smaller L is, the smaller j/σ at θ = π/2 is.
In Appendices B and C, we discuss the behavior of the solutions of the momentum and Hamiltonian constraints in the limit of L → 0 and show that σ may become arbitrarily large in this limit, whereas |j| is bounded above. This means that the energy conditions may hold irrespective of the values of R, a and Ψ, as long as R > 0, a > M and 0 < Ψ < 1, if L takes a sufficiently small value. Our numerical results are consistent with these estimates, but due to the limitation of the numerical resolution, we have not yet seen the asymptotic behavior expected from the discussions in Appendices B and C. Hence, strictly speaking, it is still an open question whether the energy conditions necessarily hold for sufficiently small L, but no lower bound on R has been found in our numerical results. In the case of a = M(1 + ε) with 0 < ε ≪ 1, the value of Δ/M² at r = M is equal to ε(2 + ε), which is much less than unity. This implies that, as pointed out by Patil and Joshi [14], collisions of test particles with trans-Planckian energy in their center of mass frame occur at r ≃ M in the Kerr domain, since the collision energy is proportional to Δ^(−1/2) at the collision event. Here we should note that, in contrast with the situation supposed by Patil and Joshi, the initial data we consider will not be a snapshot of a stationary configuration, and hence the trans-Planckian collisions of test particles may be allowed for only a short time interval in the present case.
Finally, it is worthwhile to notice that the geometrical size of the shell is not necessarily small even in the case of R ≪ M. Even in the limit of R → 0, the geometrical size of the shell is finite: see Appendix D. Our result is consistent with the hoop conjecture, which states that a black hole with a horizon forms when and only when a mass M gets compacted into a region whose circumference C measured in every direction satisfies C ≤ 4πM [15].
VI. SUMMARY AND DISCUSSION
We studied how small a rapidly rotating body can be by investigating initial data which are a snapshot of a rotating infinitesimally thin shell with spherical topology: the exterior of the shell is set up so that its intrinsic and extrinsic geometries are exactly those of the Kerr spacetime with over-threshold angular momentum a > M, whereas the interior of the shell is determined by numerically solving the constraint equations. In this set of initial data, the shell is located at r = R, where r is the radial coordinate of the Boyer-Lindquist coordinate system and R is a positive constant.
In the present numerical results, no lower bound on R of the over-spinning shell has been found. This result suggests that it is physically meaningful to study the dynamics of test particles and test fields in the domain close to the naked singularity of the over-spinning Kerr spacetime, even if the cosmic censorship conjecture is true: phenomena similar to the Patil-Joshi process may occur even without Kerr naked singularities.
Since the exterior domain of the shell is the same as a spacelike hypersurface of the over-spinning Kerr spacetime, the Kretschmann invariant K in the exterior of the shell is given by

K ≡ R_abcd R^abcd = C_abcd C^abcd = 48M² (r⁶ − 15a²r⁴cos²θ + 15a⁴r²cos⁴θ − a⁶cos⁶θ) / (r² + a²cos²θ)⁶,  (57)

in the Boyer-Lindquist coordinates. Just outside the shell, K diverges in the limit of R → 0 at the equator θ = π/2: this corresponds to the well known ring singularity of the Kerr spacetime [19]. The complementary set of the causal future of the shell is equivalent to that in the over-spinning Kerr spacetime (see Fig. 7); the region just outside of the shell is not enclosed by an event horizon, however large the Weyl invariant is there. Our present results imply the possibility of the formation of a spacetime border [16] by an over-spinning body.
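At the equator the invariant (57) reduces to 48M²/r⁶, so the curvature just outside the shell indeed grows without bound as R → 0; a quick numerical check of this (the parameter choice M = 1, a = 2 is illustrative, not from the paper):

import numpy as np

def kretschmann(r, theta, M=1.0, a=2.0):
    # Eq. (57) in Boyer-Lindquist coordinates
    c2 = np.cos(theta)**2
    num = 48 * M**2 * (r**6 - 15*a**2*r**4*c2 + 15*a**4*r**2*c2**2 - a**6*c2**3)
    return num / (r**2 + a**2 * c2)**6

for R in (1.0, 0.1, 0.01):
    print(R, kretschmann(R, np.pi / 2), 48 / R**6)   # the two agree at theta = pi/2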
Appendix A: Energy Conditions
We can rewrite Eq. (51) in a diagonalized form with eigenvalues λ_± and corresponding eigenvectors v^α_±, which are expressed through σ, j and p by means of the quantities ω and µ introduced in Eqs. (A2)-(A4). Since it is believed that the stress-energy tensor of a physically reasonable material field, except for a null fluid, has real eigenvalues, we assume that λ_± are real. We can then easily see from Equation (A7) that λ₊ corresponds to the energy density in the case of ω > 0, whereas λ₋ corresponds to the energy density in the case of ω < 0.
We now examine what restrictions are imposed on σ, j and p by the weak, strong and dominant energy conditions (see, e.g., Ref. [1] for the energy conditions).
Weak energy condition
We consider the case of ω < 0 first. As mentioned, in this case λ₋ is the energy density, and hence the weak energy condition (WEC) is equivalent to a set of inequalities on the eigenvalues. From Eqs. (A2), (A3), (A4) and the assumption ω < 0, we find that this set cannot be satisfied: the inequality ω < 0 contradicts WEC.
Hereafter we assume ω ≥ 0, in which case Eq. (A6) simplifies. Since λ₊ corresponds to the energy density in the case of ω > 0, WEC is equivalent to a set of inequalities which, from Eqs. (A2), (A3) and (A4), reduce to the conditions (A14) and (A15); since both ω and µ are positive, these conditions are necessarily satisfied.
Strong energy condition
In the case of ω < 0, the strong energy condition (SEC) is equivalent to the set of inequalities (A6), (A9), (A10) and one further condition. As in the case of WEC, the condition (A10) contradicts the assumption ω < 0, and hence ω ≥ 0 should hold. Then SEC is equivalent to the set of inequalities (A12), (A14), (A15) and (A18); one finds that µ + p ≥ 0 should be satisfied in order for the condition (A18) to hold. Since Eqs. (A14) and (A15) are trivially satisfied, SEC reduces to the remaining set of inequalities.
Intersection of WEC, SEC and DEC
As a result, we can see that all of WEC, SEC and DEC are satisfied at once if and only if the set of inequalities (A25) and (A26) holds. One half of Eq. (A25), i.e., −µ ≤ p, leads to p ≥ 0 or, if p < 0, to an additional restriction, and hence the intersection of Eq. (A26) and −µ < p is obtained. The other half of Eq. (A25), i.e., p ≤ (ω + µ)/4, leads to σ > 3p or, if σ ≤ 3p, then, since ω ≤ 4p, p is necessarily non-negative, and hence the intersection of Eq. (A26) and p ≤ (ω + µ)/4 is obtained as well. As a result, WEC, SEC and DEC are satisfied if and only if the set of inequalities (A33)-(A35) is satisfied. We find that the minimal value of σ is given by the positive minimum of the function appearing in these inequalities.
Positivity of the stress
As mentioned, if we assume ω > 0, −λ₋ is the stress. We now show the condition under which the stress is non-negative: we have (σ − p)² ≤ µ², or equivalently, pσ ≥ j². The intersection between pσ ≥ j² and Eqs. (A33)-(A35) gives the allowed region.

Appendix B: Momentum constraint of L → 0

In the limit of L → 0, the momentum constraint (27) takes a very simple form in the domain 0 ≤ r < R. If we solve the resulting equation (B2) by assuming X_ϕ|_{r=R−0} = X_ϕ|_{r=R} and imposing the boundary condition (30), we get a regular solution in the domain 0 ≤ r ≤ R with finite ∂X_ϕ/∂r|_{r=R−0}. Furthermore, since even in the limit of L → 0 all metric variables, X_ϕ and their derivatives with respect to θ will be finite in the domain R − L < r < R, the integral in the last equality of Eq. (B1) vanishes in the limit of L → 0. Thus we obtain Eq. (B3), which implies that ∂X_ϕ/∂r|_{r=R} is finite, and hence the surface angular momentum density j is finite even in the limit of L → 0: see Eq. (53).
Since ∂X_ϕ/∂r is finite in the neighborhood of r = R, X_ϕ is continuous there. This is consistent with our assumption X_ϕ|_{r=R−0} = X_ϕ|_{r=R}.
Appendix C: Hamiltonian constraint of L → 0

By integrating Eq. (31) from r = R − L to r = R, we obtain Eq. (C1), whose right hand side consists of three integrals T₁, T₂ and T₃. The integrals T₂ and T₃ vanish in the limit of L → 0, since their integrands are finite, but T₁ does not, as shown below. Decomposing T₁ further, it should be noted that the piece T₁₂ vanishes in the limit of L → 0.
By performing the integration by parts twice in Eq. (C5), we obtain an estimate involving ln[r⁴ B(r, θ) C(r, θ)/A(R, θ)], where in the last inequality we have used the Hamiltonian constraint (31). Then, from Eq. (C7), we obtain the limiting behavior as L → 0, where ϑ(x) is Heaviside's step function with ϑ(0) = 1/2, and δ(x) is Dirac's delta function.
However, on the grounds of dimensional analysis, this assumption seems to be reasonable.
In the situation 0 < L ≪ R, we will have an asymptotic form in which F₃ is a function of θ. Hence, the consistency condition (C18) may hold.
From the above considerations, σ may become arbitrarily large in the limit of L → 0, since σ is proportional to ∂_r φ|_{r=R} from Eq. (52). Hence, if we adopt a sufficiently small L, the energy conditions may be satisfied irrespective of the values of R and a, as long as R > 0 and a > M.
Appendix D: The geometrical size of the shell

The circumference C_e of the equator θ = π/2 of the shell is given by

C_e(R) = 2πR √C(R, π/2) = 2π √(R² + a² + 2Ma²/R).  (D1)

We can easily see that C_e takes the minimum value 2π√(3(Ma²)^(2/3) + a²) at R = (Ma²)^(1/3). If a > M, C_e is larger than 4πM; this is consistent with the hoop conjecture [15]. The circumferential length in the meridian direction, C_m, is given by

C_m(R) = 4 √(R² + a²) E(a/√(R² + a²)),

where E(k) is the complete elliptic integral of the second kind. Since E(k) is monotonically decreasing with respect to k, the minimal value of C_m is equal to 4a, achieved at R = 0. The area of the shell, A_s, is given by

A_s = 4π ∫₀^(π/2) R² √(B(R, θ) C(R, θ)) sinθ dθ
    = 2π [ R² + a² + (R(R³ + a²R + 2Ma²))/(2a√Δ(R)) · ln( (R² + a² + a√Δ(R))/(R² + a² − a√Δ(R)) ) ].

The minimal value of A_s is equal to 2πa², achieved at R = 0. Hence, in this sense, the size of the shell is bounded below in the present case: see Fig. 8.
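The limiting values above are straightforward to verify numerically from the closed-form expressions; an illustrative check (the sample choice M = 1, a = 2 is not from the paper):

import numpy as np

M, a = 1.0, 2.0                     # an over-spinning choice, a > M

def C_e(R):                         # equatorial circumference, Eq. (D1)
    return 2 * np.pi * np.sqrt(R**2 + a**2 + 2 * M * a**2 / R)

def A_s(R):                         # shell area in the closed form above
    k = a * np.sqrt(R**2 - 2 * M * R + a**2)          # a * sqrt(Delta(R))
    c = R * (R**3 + a**2 * R + 2 * M * a**2)
    return 2 * np.pi * (R**2 + a**2
                        + c / (2 * k) * np.log((R**2 + a**2 + k) / (R**2 + a**2 - k)))

R = np.linspace(1e-4, 5.0, 200001)
print(R[np.argmin(C_e(R))], (M * a**2) ** (1 / 3))    # minimizer of C_e
print(C_e(R).min(), 2 * np.pi * np.sqrt(3 * (M * a**2) ** (2 / 3) + a**2))
print(A_s(1e-8), 2 * np.pi * a**2)                    # A_s -> 2*pi*a^2 as R -> 0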
The effects of caffeine on olfactory function and mood: an exploratory study
Caffeine has been demonstrated to enhance olfactory function in rodents, but to date, the sparse research in humans has not shown any equivalent effects. However, due to the methodological nature of those human studies, a number of questions remain unanswered, which the present study aimed to investigate. Using a double-blind experimental design, participants (n = 40) completed baseline mood measures, standardised threshold and identification tests and were then randomly allocated to receive a capsule containing either 100 mg of caffeine or placebo, followed by the same olfactory tests and mood measures. Results revealed that despite a trend toward elevated arousal following caffeine for habitual caffeine consumers, there were no changes in odour function. In contrast, for non-caffeine consumers, caffeine acted to enhance odour (threshold) sensitivity but reduce odour identification. Overall, these findings demonstrate a complex profile of effects of caffeine on odour function and, given the evidence from the wider caffeine literature, it is proposed that the effects of caffeine might be limited to older populations.
Introduction
Caffeine is contained in a number of common beverages such as tea, coffee and energy drinks and has been consumed for over 2000 years (Barone and Roberts 1984). Caffeine is classified as a psychostimulant and has been extensively researched for its effects over the years, which include increases in arousal and enhanced performance in tasks requiring vigilance (see review, Temple et al. 2017). However, there is still debate about the veracity of such effects in those who regularly consume caffeine, i.e. whether the observed effects mainly reflect the reversal of caffeine withdrawal symptoms (James and Rogers 2005). At doses routinely consumed by humans, the main mechanism of action is the antagonism of adenosine receptors (Patocka et al. 2019), and of particular interest here are its effects on the adenosine A2a receptors. Research has shown that the stimulatory effects of caffeine are largely achieved via the blockade of adenosine A2a receptors (Svenningsson et al. 1999), and separately, we know that A2a receptors are found in the olfactory bulb (Kaelin-Lang et al. 1999). Evidence for the link between adenosine and odour function was also demonstrated in an elegant rodent experiment where both caffeine and, separately, an A2a receptor antagonist enhanced olfactory function (Prediger et al. 2005). Since no effects were found for an adenosine A1 receptor antagonist, that study suggested that the enhancements observed were due to caffeine's action as a partial A2a receptor antagonist.
Due to the importance of our sense of smell (Stevenson 2010; Philpott and Boak 2014) and the ineffectiveness of interventions for anosmic/hyposmic individuals (Philpott and Boak 2014; Lill et al. 2006), there is growing interest in the possibility that caffeine might have positive effects on olfaction in humans. This has also been bolstered by one study which found that, among first-degree relatives of Parkinson's disease patients, olfactory function was higher for those who consumed more caffeine (Siderowf et al. 2007). To date, only two studies have tested the effects of caffeine on olfactory function in humans. One study found that caffeine administration had no effect in a group of hyposmic (impaired sense of smell) individuals (Meusel et al. 2016). The second study tested individuals without any known smell impairments and, despite caffeine improving attention (fewer errors), found no differences in odour function (Han et al. 2020).
Though these two studies suggest that caffeine has no effect on odour function, some aspects remain unclear. Both of the previous studies used coffee (caffeinated versus decaffeinated) as the method of administering caffeine, which, although having high ecological validity, also introduces issues in terms of the additional active compounds found in coffee (Arnaud 2011), and hence does not answer the question of whether caffeine alone might influence odour function. It is also likely that there would be expectancy effects from individuals receiving such beverages (whether caffeinated or decaffeinated), which may have affected subsequent behaviour; such effects have been found in caffeine research (e.g. Dawkins et al. 2011). Additionally, both of those studies used a relatively modest and similar dose of caffeine, estimated at 65 mg (Meusel et al. 2016) and 72 mg (Han et al. 2020), and it is therefore unclear whether a larger dose might yield any differences in odour function. Finally, neither of the studies measured individual mood, and hence it was uncertain whether the stimulant effects of caffeine were present. To answer these questions, the current experiment examined the effects of a dose of caffeine (100 mg) shown to have mood effects (e.g. Smit and Rogers 2000; Stafford and Yeomans 2005) on odour function and mood in a healthy sample. We also took measures of caffeine craving (West and Roderique-Davies 2008) to verify whether individuals who were deprived of caffeine overnight differed in the subsequent caffeine and placebo conditions, and how this related to olfaction.
Method
Participants

Participants (n = 40; age M = 19.3, SD = 1.9 years, range 18-29; 34 females, 6 males) were university students. The sample size (N = 40) was predetermined by a combination of power analysis calculations and previous work in this area. Power analysis (G*Power 3.1) for ANOVA, given f = 0.50, power 0.8 and α = 0.05, recommends N = 34 participants, but we opted for a more cautious N = 40. The effect size was based on previous work testing olfactory function (Stafford and Welbeck 2010; Stafford et al. 2019). Participants were advised not to take part if they had respiratory problems (e.g. asthma), problems in their ability to smell and/or allergies to certain odourants. Additionally, due to the administration of caffeine, we specified that individuals with any known aversions to food additives (aspartame, saccharin, fructose, glucose, sucrose, caffeine, natural food colouring, maltodextrin) should not take part. The study was advertised as 'Understanding our sense of smell', and the protocol (see Table 1) was given ethical approval by the University's Science Faculty Ethics committee (SFEC 2018-095); all participants gave informed consent.
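For illustration, essentially the same a priori calculation can be reproduced outside G*Power; a minimal sketch using statsmodels, under the assumption that the two-group ANOVA setup corresponds to the study's Condition factor:

from statsmodels.stats.power import FTestAnovaPower

# Cohen's f = 0.50, alpha = .05, power = .80, two groups (caffeine vs placebo)
n_total = FTestAnovaPower().solve_power(effect_size=0.50, alpha=0.05,
                                        power=0.80, k_groups=2)
print(n_total)  # roughly 34 participants in total, matching the G*Power estimate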
Design
The study used a between-subjects design, where individuals were randomly assigned to a caffeine or placebo condition and the main dependent variables were odour sensitivity (threshold), odour identification and mood.
Olfactory threshold
The odour used for the threshold test was n-butanol (Fisher Scientific, UK), which was diluted in distilled water. The odourant was prepared using fifteen 50-ml amber glass bottles, in 16 dilution steps, starting at 1% (step 1), with each successive step diluted by a factor of two using serial dilution down to the lowest concentration (step 16). In addition to the odour-containing bottles, for each dilution step two 'blank' bottles (containing dilutant only) were used in the threshold test. Testing commenced by asking participants to smell the bottle with the highest concentration to familiarise themselves with the target odour. They were then presented with the triplet containing the weakest concentration. Following presentation of the last bottle of the triplet (counterbalanced), participants were asked which bottle contained the odour (1, 2 or 3). If the participant answered correctly (and it was the lowest concentration), they were presented with the same triplet again (in a different order) and the task was repeated until they made a mistake, which resulted in the triplet containing the next concentration step being presented. Using a single up-down staircase system (as used widely in olfactory research, e.g. Kobal et al. 1996; Hummel et al. 2007), this was then repeated until there were seven 'turning points', with the mean of the last four points determining the threshold for the individual. Each bottle was held under the participant's nose (≈ 2 cm) and gently waved between the nostrils to ensure optimal inhalation. Participants wore a blindfold to prevent visual identification of the odour-containing bottle. The experimenter wore cotton gloves (Boots, Portsmouth) to reduce any cross-contamination of odours.
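The staircase scoring rule can be made concrete with a small simulation; the sketch below uses a deliberately simplified observer model (perfect detection at or below a true threshold step, chance guessing otherwise) and is illustrative only, not the study's procedure code:

import random

def run_staircase(true_threshold, weakest=16, strongest=1, n_turns=7):
    # Single up-down staircase over dilution steps; step 1 is the strongest
    # concentration and step 16 the weakest. Threshold = mean of the last
    # four of seven turning points, as in the procedure described above.
    step, direction, turns = weakest, None, []
    while len(turns) < n_turns:
        detected = step <= true_threshold or random.random() < 1 / 3
        move = 1 if detected else -1          # detected -> weaker triplet next
        if direction is not None and move != direction:
            turns.append(step)                # a direction change = turning point
        direction = move
        step = min(weakest, max(strongest, step + move))
    return sum(turns[-4:]) / 4

random.seed(1)
print(run_staircase(true_threshold=9))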
Odour identification
This task was closely modelled on the Sniffin' Sticks identification test (Hummel et al. 2007). In this version, we used fifteen different odourants: lavender (essential oil, Holland and Barret, 3 drops), glue (PVA, 3 drops), sandalwood (essential oil, Mia Roma, 3 drops), nutmeg (Tesco, small section), oil (WD40, 1 spray), vanilla extract (Tesco, 3 drops), star anise (Tesco, small section), cinnamon (Schwartz, 1 ml), pear (isoamyl acetate, Fisher Scientific, 1 ml), tea leaves (Tesco, 2 ml), chocolate (Dale Air, 1 ml), thyme (Sainsburys, 2 ml), frankincense (essential oil, Holland and Barret, 3 drops), caraway (Tesco) and oregano (Tesco, 2 ml). For each odourant, the respective amount was placed on a cotton ball (Boots) if a liquid, or under the cotton ball otherwise; all odourants were then placed in individual amber glass bottles (50 ml). Participants were presented with one odour at a time and asked to identify which odour they had smelled from a form listing four possible odours. They were instructed to make a choice even if they were unsure or did not detect an odour. To minimize practice effects, there were two different versions of the task, which varied in the order in which the odours were presented and also in the order in which the odours appeared on the form. We piloted the test to ensure performance was neither at floor nor at ceiling levels.
Profile of Mood States
We used a briefer version of the original 72-item Profile of Mood States (POMS) questionnaire (McNair et al. 1971), comprising 39 items from the original inventory with the addition of 'jittery/nervous/shaky', 'headache', 'hungry' and 'calm', which were included to measure withdrawal and general effects of caffeine. The rationale for using a shorter version was based on the premise that a number of factors, i.e. 'Anger', 'Depression' and 'Elated', were not relevant to caffeine research; this shortened version was used in previous work (Stafford and Yeomans 2005). Subjects rated the 43 items on a 5-point scale from 'Not at all' to 'Extremely'. From these responses, POMS permits five factors to be extracted ('anxiety', 'fatigue', 'vigour', 'confusion' and 'friendliness'), plus the additional factor of 'arousal' = (anxiety + vigour) − (fatigue + confusion).
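For concreteness, the derived arousal factor is just the following arithmetic (the factor scores below are hypothetical, not study data):

def poms_arousal(anxiety, vigour, fatigue, confusion):
    # 'arousal' = (anxiety + vigour) - (fatigue + confusion)
    return (anxiety + vigour) - (fatigue + confusion)

baseline = poms_arousal(anxiety=8, vigour=12, fatigue=10, confusion=5)
post_capsule = poms_arousal(anxiety=9, vigour=15, fatigue=7, confusion=4)
print(post_capsule - baseline)   # change from baseline, as analysed later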
Caffeine Craving Questionnaire
The current study used the Questionnaire of Caffeine Craving (QCC; West and Roderique-Davies 2008) which was based on the Questionnaire of Smoking Urges (QSU) (Tiffany and Drobes 1991). The QCC is a 21-item measure, yielding three factors: factor 1 (Desires and intention), factor 2 (General reinforcement) and factor 3 (Negative reinforcement).
General Health Questionnaire
This form contained five questions concerning the frequency of consuming tea, coffee and soft drinks, as used in previous work, followed by questions on the frequency of smoking/vaping and alcohol consumption.
Caffeine administration
Pre-weighed quantities of caffeine hydrochloride and of a white powder used as a placebo (maltodextrin) were stored in small transparent non-gelatine vegetarian capsules (size 4; all items supplied by Bulk Powders UK), kept in coded plastic boxes to ensure double-blind testing. The quantity of caffeine/placebo was 100 mg.
Procedure
Participants were instructed to refrain from consuming any food/drinks containing the following substances for 12 h before their allocated session: alcohol, taurine, caffeine, glucose and aspartame. On arrival, participants were asked what they had consumed in the last 12 h, and any participants who had consumed any of the listed substances were rescheduled to another session; the remainder then completed the baseline measures (POMS, odour threshold, odour identification). They were then given the capsule with a glass of water. This was followed by a rest period (30 min) to allow the caffeine to be absorbed. Next, they completed the same tasks in the same order; the version of the odour identification task was different from the first presentation. Following the completion of these tasks, they completed the QCC and the general health form and were asked two questions: (a) What did they think was the aim of the study? (b) Did they think the capsule they consumed contained caffeine (Y/N)? Finally, they were given a full debriefing.
Data analyses
Preliminary analyses revealed that one participant did not achieve a threshold score even at the highest concentration and was therefore excluded from further analyses. The sample characteristics (Table 2) showed that some participants (n = 8) were not habitual caffeine consumers, and although this was not a primary aim of the study, we decided to allocate these participants to a Non-consumer group to compare against Consumers. There were equal numbers (n = 4/4) of Non-consumers in the caffeine and placebo conditions, and there were no differences in age in any of the Condition (Caffeine/Placebo) and Group (Consumer/Non-consumer) comparisons. Gender was evenly spread across the Condition/Group combinations (Table 2).
Data for odour threshold, odour identification and mood were calculated as differences from baseline. These data were then analysed using a multivariate ANOVA with the between-subjects factors of Condition (Caffeine/Placebo) and Caffeine status (Consumer/Non-consumer). Caffeine craving data were also analysed with the same multivariate ANOVA. Preliminary analyses of the data revealed that Box's Test of Equality of Covariance Matrices was violated, Box's M = 53.25, F = 1.99, p = .009, which was due to differences in variability in the Non-consumer group, particularly in the Desires and intention factor.
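The structure of this analysis can be sketched as follows (synthetic random data stand in for the study's difference scores; this is not the authors' analysis script):

import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 39                                        # one participant excluded
df = pd.DataFrame({
    "condition": rng.choice(["caffeine", "placebo"], n),
    "group": rng.choice(["consumer", "nonconsumer"], n, p=[0.8, 0.2]),
    "threshold_diff": rng.normal(0, 1, n),    # changes from baseline
    "identification_diff": rng.normal(0, 1, n),
    "arousal_diff": rng.normal(0, 1, n),
})
m = MANOVA.from_formula(
    "threshold_diff + identification_diff + arousal_diff ~ condition * group",
    data=df)
print(m.mv_test())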
Mood
The analyses revealed no main effects of Condition on any of the mood measures (all Fs < 1.3); however, when analysed separately for each group, differences did emerge. For Consumers only, arousal ratings increased for those receiving caffeine but declined for those in the placebo group, F(1,29) = 3.03, p = .09, η² = .09. Tense/anxiety ratings declined more sharply for placebo compared with caffeine, F(1,29) = 3.64, p = .06, η² = .11 (Table 3). The mood data therefore show nonsignificant trends for the stimulant effects of caffeine in caffeine consumers but not in non-consumers.
Discussion
The study found that, overall, there were no significant effects of caffeine on odour threshold or identification, which is consistent with previous work (Meusel et al. 2016; Han et al. 2020). Both of those studies utilized coffee as the vehicle for delivering caffeine, and since coffee also contains other active compounds (Arnaud 2011), the effects of caffeine alone on odour function were unknown. By using pure caffeine contained in capsules, the current study was able to overcome that limitation and additionally examine the effects of a larger dose of caffeine. The findings here suggest that neither coffee containing caffeine nor caffeine alone has any overall effect on odour function. Interestingly, however, there were differences between habitual caffeine consumers and non-consumers, which were not investigated in the earlier work (Meusel et al. 2016; Han et al. 2020). For non-consumers only, caffeine had divergent effects, leading to higher odour sensitivity (threshold test) compared with consumers but, in contrast, reduced odour identification. In trying to account for these differences, it could be that the stimulatory effects of caffeine were particularly beneficial for the threshold test, a task longer in duration and possibly monotonous for some individuals, whereas the same arousing effects were not beneficial in the identification test, a shorter task demanding higher order cognitive function. This account would also seem to fit the pattern that caffeine has a more reliable effect on attention and vigilance than on memory and more demanding cognitive tasks (Stafford et al. 2007).
The reason that consumers' odour function did not follow the same pattern could be that consumers would be less sensitive to the effects of caffeine, and therefore, following a period of caffeine abstinence, ingestion of caffeine by consumers may have simply reversed caffeine withdrawal (James and Rogers 2005). Nevertheless, it was curious that the best evidence of any changes in mood was for the consumers only, in terms of tense/anxiety and arousal, though these effects only approached significance. Such differences do, however, link to the wider caffeine literature, which has shown differences in the effects of caffeine in consumers and non-consumers. For instance, one study found that caffeine benefitted performance (vigilance) more in non-consumers than in consumers, whereas for mood, consumers derived more benefit from caffeine than non-consumers (Haskell et al. 2005). Drawing on these separate areas, it could be, therefore, that any caffeine-induced alterations in odour function are not dependent on observable changes in mood.
The absence of caffeine effects on olfaction needs to be considered in the wider context. Previous human work that suggested positive effects was based on first-degree relatives of Parkinson's disease patients, where increasing lifetime estimates of caffeine intake were associated with higher olfactory (UPSIT) function (Siderowf et al. 2007). One obvious difference between that work and the study here is that no caffeine was administered in that study, which instead relied on individuals' accounts of routine caffeine intake. It is also worth noting that the participants in that study were all over 50 years of age and hence substantially older than those in the present study (M = 19 years) and the previous study in normosmics (27 years; Han et al. 2020). This fact could be important in that the enhancing effects of caffeine on olfaction in animal work were based on older (12- and 18-month) rodents (Prediger et al. 2005), suggesting that the beneficial effects of caffeine on odour function might be restricted to older humans. The suggestion from that work was that the age-related increase in adenosine A2A receptors may play a role in declining odour function, which can be temporarily reversed by caffeine's antagonism of those receptors. One consequence of this blockade is increased transmission of a range of neurotransmitters, including dopamine, noradrenaline, and glutamate, in brain areas related to cognitive function (see review, Patocka et al. 2019). In summary, in accordance with the wider research on the effects of caffeine in ageing (e.g. Van Gelder et al. 2007), caffeine may influence olfactory function, but this influence may be largely limited to older individuals.

In terms of the study limitations, it is important to acknowledge that whilst the overall sample size was adequate in terms of the pre-study power calculations, the number of non-consumers (n = 8) was rather small, and the findings relating to that group therefore need to be treated as preliminary. It is also worth reflecting that although the dose of caffeine used here was larger than in previous work on odour function (Meusel et al. 2016; Han et al. 2020), it is uncertain whether a still larger dose would lead to different effects, which, given caffeine's rather inconsistent effects (Stafford 2004; Stafford et al. 2007), would be worth examining. Finally, the odour identification test used in this study was a custom-built test used for the first time, and even though modelled closely on the Sniffin' Sticks identification test (Kobal et al. 1996), it was not a validated test (see considerations, Hsieh et al. 2017). Nevertheless, pilot testing was completed before the study to ensure that the test was sensitive enough to detect effects and that performance was not at floor or ceiling levels.
In conclusion, we found no overall effect of caffeine on odour function, but found evidence that, for non-consumers only, caffeine had beneficial effects on odour threshold while impairing odour identification.
Compliance with ethical standards
The study was advertised as 'Understanding our sense of smell', and the protocol (see Table 1) received ethical approval from the University's Science Faculty Ethics Committee (SFEC 2018-095); all participants gave informed consent.
Conflict of interest
The authors declare that they have no conflicts of interest.
“Company management decision-making based on the analysis of events after the reporting period”
The study aims to discuss the impact of the analysis of events after the reporting date (subsequent events) on management decision-making. In the interval between the end of the reporting period and the publication of the annual financial report, company management may learn about events that either occurred during the reporting period but were previously unknown or occurred when the financial report was already prepared but not approved. The consequences of these events can be so serious that they require adjustments to the financial statements, changes in the company's strategy and tactics, and radical management transformations. The paper structures such events depending on their impact on business performance and the procedure for reporting and identifies the determinants and mechanisms for their analysis and correct accounting. To assess the complex impact of events after the reporting period on the financial results of a company, an integral indicator is proposed, a set of management measures is defined in accordance with the values of this indicator, and the mechanism for its calculation and use is demonstrated on the example of a hypothetical scenario. The sensitivity analysis of this indicator to fluctuations in the weighting coefficients of its components was performed using the Monte Carlo method. In an environment where transparency, accountability, trust between key stakeholders, adaptability, and proactivity are crucial for effective management, this indicator can be used as an effective metric that is taken into account by auditors, regulators, clients, investors, company management, etc.
INTRODUCTION
In business management, there is always a time lag between the end of the reporting period and the publication (approval) of the annual financial report. During this period of time, the company's management and business owners may receive additional information about certain favorable and unfavorable events (from unforeseen changes in the market to new global trends), which in the established international terminology are called subsequent events, i.e., events after the reporting date (IAS 10, 2023). These events may significantly affect the company's development trajectory, sustainability, efficiency, and strategic vision of its future development.
If these events existed as of the reporting date but were not known before the end of the reporting period, the financial statements must be adjusted. Sometimes, such adjustments may fundamentally change previous analytical conclusions about the business, significantly affect the strategy and tactics of the company's management, and require the application of radical management measures.
Examples of such events include situations when a court has confirmed the existence of financial obligations of the company to its counterparties; fraud or errors have been detected, confirming that the financial statements contain inaccurate information; a key counterparty went bankrupt after the reporting date, which usually confirms the existence of bad debts at the end of the reporting period; or an asset whose usefulness unexpectedly changed significantly after the reporting date was sold or purchased, requiring a revised accounting estimate. If these events did not exist as of the reporting date but occurred later, before the date of publication (approval) of the report (for example, the company's property was significantly damaged between the end of the reporting period and the approval of the financial statements for issue, leading to a significant impairment of assets), the financial statements are not adjusted. However, the notes to the statements analyze these events and their impact on business performance in detail, and management decisions may also change significantly on this basis. The company must describe the nature of such events and preliminarily estimate their financial impact (based on the facts available as of the date of approval of the financial statements, not on forecasts and general statements), or state that such an estimate is impossible.

Thus, the correct, timely, and adequate consideration of events after the reporting date in preparing financial statements is essential for the company's tactical and strategic management. It allows owners and management to obtain information about the real state of affairs and to make economic decisions based on the financial statements. Moreover, it provides a basis for more informed, adaptive, and strategically aligned management practices in a changing business landscape. The proper disclosure of the material impact of events after the reporting date is one of the key issues in the audit of financial statements and may affect the auditor's opinion. Stakeholders, including investors, regulators, and clients, increasingly demand transparency and foresight, forcing companies to strengthen their management strategies by applying a systematic approach to events occurring after the reporting period.
Although the importance of events after the reporting period is widely recognized, the lack of theoretical and methodological consensus poses challenges for companies seeking to interpret these events holistically and integrate them into their management strategies.
THEORETICAL BASIS
A crucial starting point for understanding events after the reporting period is the regulatory framework and reporting standards established by accounting bodies and regulatory authorities. Michels (2017), Olowookere et al. (2022), Dechow et al. (2011), and I. Makarenko and S. Makarenko (2023) have emphasized the importance of adherence to accounting standards, such as International Financial Reporting Standards (IFRS) and Generally Accepted Accounting Principles (GAAP), in ensuring the proper recognition and disclosure of events occurring after the reporting period. The key legislative document regulating the recognition of events after the reporting period is IAS 10 "Events after the Reporting Date" (IAS 10, 2023). According to this standard, the recognition of such events depends on whether they are adjusting or non-adjusting (Appendix A).
Attention should be drawn to the fact that IAS 10 (2023) prohibits an entity from preparing financial statements on a going concern basis if events after the reporting period indicate that such an assumption is inappropriate. Also, if the company does not maintain its accounting under IFRS, it can make changes to its accounting policy and reflect similar events in the reporting differently, in line with the company's internal policy on the reliability and substantiation of accounting.
Identifying and analyzing events after the reporting period pose challenges for companies, auditors, and standard-setting bodies. Scholars have explored the difficulties associated with timely information gathering, assessing the materiality of events, and determining their impact on financial statements. Various events have been identified as having the potential to influence financial statements. Changes in legislation, financial risks, lawsuits, and economic downturns are among the critical events explored in the literature (Carson & Dowling, 2012; Allegrini & Monteduro, 2018; Czerney et al., 2020). Researchers have delved into the mechanisms through which these events exert their influence, providing valuable insights for practitioners. For example, Czerney et al. (2020) examine the relevance of changes in the business environment and sustainable development for enterprise development, shedding light on factors that could influence the analysis of events after the reporting period. Chung et al. (2013) delve into socially relevant factors affecting the organizational mortality of enterprises, providing a broader context for understanding corporate sustainability, a factor that could affect post-reporting period events. Vasilyeva et al. (2019) assess the dynamics of bifurcation transformations in the economy, which may provide a theoretical basis for understanding economic shifts that could affect a company's financial performance.
A summary of scientific approaches to events that may be considered in the analysis after the reporting period is provided in Appendix B, Table B1.
Understanding the perspectives of various stakeholders, including investors, analysts, and regulators, is crucial for assessing the significance of events after the reporting period. Dźwigoł (2022) conducted a systematic bibliometric review of artificial intelligence technology in organizational management, which could be relevant for understanding technological advancements that may influence the analysis of events after the reporting period. Skrynnyk and Vasilyeva (2020) delve into neuro-genetic hybrid systems and machine learning for organizational development, potentially offering innovative approaches for analyzing the impact of events after the reporting period. Kwilinski (2019) explored the implementation of blockchain technology in the accounting sphere, which could affect how companies manage their financial data and reporting processes. Dzwigol (2020), Skrynnyk and Vasilyeva (2022), Dzwigol (2022), and Mandryka et al. (2023) offered insights into methodological platforms and research methodologies in management science, including the concept of triangulation, which may be relevant for developing analytical approaches in the context of events after the reporting period.
The practical implementation of analysis of events after the reporting period has garnered significant attention. Bentley-Goode et al. (2017) highlighted the necessity of adjusting financial indicators and providing comprehensive disclosures in the appendixes to financial statements. Sivaruban (2023) and Herda and Lavelle (2014) explored the role of events after the reporting period in corporate risk management. Research has examined how companies can proactively identify, assess, and mitigate risks associated with unforeseen events. The literature underscores the strategic importance of event analysis in safeguarding a company's financial stability.
Scholars have proposed a range of methodologies and tools for evaluating the impact of events after the reporting period. Based on the results of the analyzed studies, these methodologies can be summarized in the groups shown in Figure 1.
The literature emphasizes the importance of selecting appropriate methods based on the nature and complexity of the events.
As seen in Figure 1, in addition to the standard financial methods (based on the analysis of the dynamics of the indicators presented in the financial statements or notes), companies may calculate additional indicators specifically designed to assess the impact of events after the reporting period. These include sensitivity analysis, statistical models, and scenario planning. These supplementary methods are designed to capture nuances that may not be fully represented by standard financial measures, thereby providing stakeholders with a deeper understanding of the company's financial position and resilience in light of events after the reporting period. Appendix B, Table B2 provides methods for analyzing events after the reporting period. The choice of method depends on the nature of the event, the availability of data, and the specific objectives of the analysis. Effective management plays a crucial role in selecting and implementing these methods, ensuring that they are consistent with the company's strategic goals and objectives. Some empirical studies have attempted to quantify the actual impact of specific types of post-reporting period events on financial statements using various non-standard methods. For example, Skrynnyk (2023) focuses on predicting convergent and divergent determinants of organizational development, offering a predictive approach that may be relevant to analyzing the impact of post-reporting period events.
It should be noted that any of the methods listed in Table B2, Appendix B must include the following steps:

1. Identification of events: these events may include both external factors (e.g., changes in legislation or economic conditions) and events specific to the company (e.g., litigation, mergers, or significant asset impairment).
2. Assessing materiality: not all events after the reporting period will have a material impact on the financial statements. It is crucial to assess the significance of each event in relation to the overall financial position and performance of the company.
3. Recognition assessment: for events that are considered material, the next step is to determine whether they should be recognized in the financial statements. This involves assessing whether the event meets the recognition criteria, such as reliable measurement and future economic impact.
4. Disclosure requirements: even if an event is not recognized in the financial statements, it may still need to be disclosed in footnotes or supplementary information. Such disclosures provide transparency to stakeholders about the nature and potential impact of the event.

Figure 1. Areas of methodologies for analyzing events after the reporting period: based on the analysis of the dynamics of indicator changes; based on the assessment of the consequences of threats to the company's financial security through the determination of material damage; and indicative, based on the calculation of an integral indicator, on deviations of the actual value from the threshold value, or on expert opinions
5. Quantify the impact: for recognized events, it is important to quantify their impact on the financial statements. This may involve adjusting specific items, such as assets, liabilities, revenue, or expenses, to reflect the effects of the event.
6. Reconciliation of subsequent events: in some cases, events that occurred after the reporting period may provide additional information about conditions that existed during the reporting period. These subsequent events may require adjustments to the financial statements or additional disclosures.
7. Stakeholder communication: the results of the analysis should be effectively communicated to relevant stakeholders, including investors, analysts, regulators, and other interested parties. This communication ensures that stakeholders know the potential impact of events after the reporting period.

Despite the existing research on the importance of accounting for the impact of events after the reporting period, there is a lack of research that summarizes the theoretical and methodological information on the identification, analysis, and reporting of such events. Therefore, the purpose of the study is to create a cohesive theoretical and methodological framework for analyzing events after the reporting period, determining their impact on business performance, and developing an integral metric to assess their aggregate impact on the company's financial results.
Index development
It is crucial to consider and reflect in the financial statements the impact of each event after the reporting period. However, for effective management and operational decision-making, a company sometimes needs to assess the overall impact of these events on its financial position rather than focusing on a single event, such as war or international economic sanctions. An integral indicator is a comprehensive approach to assessing the cumulative impact of events after the reporting period on the overall financial condition of an enterprise, taking into account their interdependence and synergy. The value of the integral indicator varies from 0 to 1; a value close to 1 indicates the need for an immediate reaction from the company's management and tactical actions to stabilize its financial position. In general, the algorithm for determining and analyzing the integral indicator is proposed in Figure 2.
Based on Figure 2, developing an integral index for post-reporting period events involves several steps:

• identification of events: a thorough examination of events occurring after the reporting period;
• categorization and prioritization: events are categorized based on their nature and potential impact; events with a larger influence on the company's financials and prospects receive higher weighting;
• data collection and analysis: relevant data about each identified event are collected and analyzed, drawing on financial data, market research, regulatory documents, and other sources;
• weight assignment: assigning appropriate weights to each category of events based on their perceived significance;
• normalization and aggregation: normalizing the data to ensure comparability and then aggregating it to compute the integral index;
• interpretation and reporting: the computed index is interpreted and reported, which might involve providing context, explaining the methodology, and offering insights into the implications of the index;
• feedback and iteration: stakeholder feedback is necessary for refining the index over time, for example by adjusting the weights assigned to different event categories or modifying the criteria for inclusion.
In the computation of the integral index, a normalization procedure is implemented, as the indicators have different dimensions and may even have different directions: for some indicators an increase is desirable (S, stimulators), while for others a decrease is preferred (D, de-stimulators). Normalization transforms indicator values onto a common, dimensionless scale. Three ranges of change are distinguished for the integral indicator: small, medium, and critical. The proposed ranges, the characteristics of the corresponding changes, and the company's response to such changes are shown in Table 1.
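The paper does not spell out its normalization formula; a standard min-max scheme that handles both stimulators and de-stimulators (an assumption on our part) could look as follows:

```python
def normalize(values, kind="S"):
    """Min-max normalize a list of indicator values to [0, 1].

    kind="S" (stimulator): higher raw values map to higher scores.
    kind="D" (de-stimulator): higher raw values map to lower scores.
    """
    lo, hi = min(values), max(values)
    if hi == lo:  # degenerate case: no variation in the indicator
        return [0.5 for _ in values]
    if kind == "S":
        return [(v - lo) / (hi - lo) for v in values]
    return [(hi - v) / (hi - lo) for v in values]

# Example: a liquidity ratio (stimulator) and a debt load (de-stimulator).
print(normalize([1.2, 1.5, 1.8, 2.0], kind="S"))  # rises towards 1
print(normalize([0.9, 0.7, 0.5, 0.4], kind="D"))  # falls raw, rises normalized
```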
By categorizing changes in this manner, companies can prioritize their responses based on the magnitude of the impact.This approach allows for a more efficient allocation of resources and ensures that the most critical changes receive the highest level of scrutiny and disclosure.It also provides stakeholders with a clear understanding of the relative significance of each change, aiding in their decision-making processes.
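The gradation of Table 1 maps directly onto a small classification helper; in this sketch, the 0.75-1.00 bound for the critical range is inferred from the table, since the medium range ends at 0.74 and the index is bounded by 1:

```python
def classify_change(index_value):
    """Map an integral index value in [0, 1] to Table 1's response category."""
    if not 0.0 <= index_value <= 1.0:
        raise ValueError("integral index must lie in [0, 1]")
    if index_value <= 0.30:
        return "small change: minimal adjustments, limited disclosure"
    if index_value <= 0.74:
        return "medium change: thorough analysis, possible adjustments and disclosures"
    return "critical change: comprehensive analysis, extensive disclosures"

print(classify_change(0.367))  # falls in the medium band
```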
Hypothetical example
Embarking on a hypothetical scenario, the study demonstrates how the proposed formulations and algorithm are employed to assess the influence of events occurring after the reporting period on the financial stability of a company. Four indicators (x1, x2, x3, x4) relevant to the evaluation have been identified (Table 2). The objective is to compute an integral index (I) to quantify the overall impact.
Objective specification O(t) involves clearly defining the specific goals and scope of the evaluation, considering both the direct and indirect consequences of events. This step lays the foundation for the entire evaluation process.
In formula 1, t represents time, or more precisely, the point in time at which the estimate is made. Here t = 5, which means that the evaluation is performed at the fifth moment after the reporting period, counted from the known initial moment t = 0.

It is important to note that the value of t may vary depending on the specific study or analysis. In this case, since this is only a conditional example, the value t = 5 was chosen to illustrate the calculations. In a real analysis, t would be chosen according to the specific context and data.
dt denotes the differential of the variable t. In this context, the quantity of interest is a definite integral: the accumulation of a function f(t) over t from an initial time t0 up to the specific moment t. The integral allows the accumulated change of this function over this time interval to be calculated.
e^(-b_i·t) is an exponential decay function, where e is Euler's number, approximately equal to 2.71828, and b_i is a parameter that can be specified for each individual case. This function accounts for the exponential decrease in the impact of a specific event over time t: e^(-b_i·t) determines how significant the impact of an event that occurred at time t remains, with b_i defining the rate of this exponential decay.

Table 1. Ranges of change of the integral indicator

Small change [0-0.30]. Characteristics: a relatively minor alteration in the indicator's value; the change is within an acceptable threshold and is not expected to significantly impact financial reporting. It may represent normal fluctuations or minor adjustments that do not materially affect the company's financial position or performance. Response: companies may make minimal adjustments to account for this change if necessary; it may not warrant extensive disclosure.

Medium change [0.31-0.74]. Characteristics: a noticeable but not drastic alteration in the indicator's value; the change is substantial enough to warrant attention and consideration in financial reporting. It may result from a moderate-impact event that could influence the company's financial position or performance to a notable extent. Response: companies should conduct a thorough analysis to understand the implications of this change, which may involve adjustments to financial statements and additional disclosures.

Critical change [0.75-1.00]. Characteristics: the change is substantial and has the potential to materially affect the company's financial position or performance; it may result from a major event or circumstance that requires immediate attention and thorough assessment. Response: companies should undertake a comprehensive analysis to fully understand the impact of this change; significant adjustments to financial statements and extensive disclosures are likely warranted.
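As a small numerical illustration of this time-decay weighting (the impact profile f and the parameter values below are our own assumptions, not the paper's), the accumulated decay-weighted impact of an event can be evaluated as a definite integral:

```python
import numpy as np
from scipy.integrate import quad

def decayed_impact(f, b, t0=0.0, t=5.0):
    """Definite integral of f(tau) * exp(-b * tau) from t0 to t.

    f: the raw impact profile of the event over time.
    b: decay parameter (larger b means the event's impact fades faster).
    """
    value, _abs_err = quad(lambda tau: f(tau) * np.exp(-b * tau), t0, t)
    return value

# Example: a constant raw impact of 1.0, evaluated at t = 5 with b = 0.4.
# Analytically this is (1 - exp(-b*t)) / b, roughly 2.162, a sanity check.
print(decayed_impact(lambda tau: 1.0, b=0.4))
```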
For this example, plugging the chosen parameter values into the decay function yields the time weight attached to each of the four indicators.
Multidimensional integral calculation: for this step, f(x) = x1·x2 + x3² + x4³ over a defined volume V in the (x1, x2, x3, x4) space. The integral can be computed based on the chosen function and volume.
Composite integral index computation (formula 8):

I = (Z(x1)·ω1 × Z(x2)·ω2 × … × Z(xn)·ωn)^(1/n),

where n is the number of considered indicators. In this context, the formula means that the values Z(xi)·ωi for each indicator i from 1 to n are multiplied together and the product is raised to the power 1/n. Each indicator thus enters the composite integral index through its normalized value and weight, expressed as the product Z(xi)·ωi.

Substituting the normalized values Z(xi) of 0.67, 0.33, 0.67, and 0.5 and the weights ωi of 0.4, 0.33, 0.2, and 0.1 gives I = 0.367. This process allows for the weighting of indicators and considers their impact on the composite index, which reflects an overall assessment of the influence of events after the reporting period.
This example demonstrates a hypothetical evaluation process using the provided algorithm and formulas.Each step involves complex calculations based on the specified context, weights, normative values, and indicator data.The composite integral index (I) is computed to provide an overall assessment of the impact of events after the reporting period on the company's financial stability.
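A compact sketch of this aggregation step follows; it implements the weighted geometric mean as formula 8 describes it. The input numbers mirror the hypothetical scenario, but the paper's reported value of 0.367 depends on normalization details not fully recoverable from the text, so the printed result should be read as illustrative only:

```python
def composite_index(z, w):
    """Geometric-mean composite of normalized indicator values z and weights w.

    Each indicator contributes the product z_i * w_i; the n products are
    multiplied together and the result is raised to the power 1/n.
    """
    assert len(z) == len(w)
    product = 1.0
    for zi, wi in zip(z, w):
        product *= zi * wi
    return product ** (1.0 / len(z))

z = [0.67, 0.33, 0.67, 0.50]   # normalized indicators, in [0, 1]
w = [0.40, 0.33, 0.20, 0.10]   # assigned weights
# Note: the paper reports 0.367 for its scenario; reproducing that exact
# value depends on normalization conventions the text does not state.
print(composite_index(z, w))
```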
The value of the composite integral index (formula 9) of 0.367 indicates that, based on the considered model and the provided parameters, the impact of events after the reporting period on the company's financial stability is moderate. Given that the index lies in the range from 0 to 1, where 0 signifies minimal impact and 1 signifies maximum impact (Table 3), a value of 0.367 suggests that the influence of events after the reporting period is moderate and that some positive or negative dynamics may be occurring.
It is important to remember that the specific value of 0.367 in the context of a particular study may require further analysis and comparison with other indicators or data to determine its true significance and relevance to a specific situation or company.
An uncertainty and sensitivity analysis was performed as an example of further analysis. With the uncertainty assessment, one can evaluate how sensitive the composite integral index is to changes in the weights assigned to each indicator. This additional step provides insights into the stability and reliability of the assessment process. A similar approach to assessing the sensitivity of the obtained indicator was used in the studies of Lyeonov et al. (2023) and Brychko et al. (2023).
In this step, the robustness of the composite integral index (I) is evaluated in response to variations in the weights assigned to each indicator. A Monte Carlo simulation with 1,000 iterations is conducted, with weights randomly sampled from a specified distribution. Monte Carlo simulation is a powerful technique for assessing the impact of uncertainty in model inputs on model outputs; in this case, the aim is to understand how variations in the weights affect the robustness of the composite index.
Monte Carlo simulation allows accounting for uncertainty by repeatedly sampling from a specified distribution.It provides a range of possible outcomes and insights into the model's sensitivity to changes in inputs.
Weights for each indicator are randomly sampled from a normal distribution. Using the previously calculated normalized indicators and normative values, the integral index (I) is computed for each set of randomly sampled weights using the formula

I_i = (Z(x1)·ω(1,i) × Z(x2)·ω(2,i) × … × Z(xn)·ω(n,i))^(1/n),

where ω(j,i) represents the randomly sampled weight for indicator xj in the i-th iteration.
Similarly, the values of the integral index are computed for each iteration. The final step is to calculate the average integral index, I_uncertainty, over all iterations:

I_uncertainty = (1/1,000) × Σ (over i = 1 to 1,000) I_i.

This provides an estimate of the robustness of the integral index to variations in the assigned weights.
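A minimal sketch of this Monte Carlo loop follows; the distribution's standard deviation and the renormalization rule are our assumptions, since the paper specifies neither:

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # fixed seed for reproducibility

z = np.array([0.67, 0.33, 0.67, 0.50])       # normalized indicators
w_base = np.array([0.40, 0.33, 0.20, 0.10])  # baseline weights
n_iter = 1_000

indices = np.empty(n_iter)
for i in range(n_iter):
    # Perturb each weight; sigma = 0.05 is an assumption, not from the paper.
    w = rng.normal(loc=w_base, scale=0.05)
    w = np.clip(w, 1e-6, None)       # keep weights strictly positive
    w = w / w.sum() * w_base.sum()   # renormalize to the original total
    indices[i] = np.prod(z * w) ** (1.0 / len(z))

print("mean index over iterations:", indices.mean())
print("std of index (sensitivity):", indices.std())
```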
DISCUSSION
Comparing this paper to other academic studies in the field, it is evident that the proposed integral indicator goes beyond traditional metrics used for assessing the impact of events after the reporting period. While existing research often focuses on specific event categories or individual financial indicators, this approach provides a holistic, management-driven framework for synthesizing multiple data points. This study stands out from the work conducted by Lyeonov et al. (2023) in several key aspects. Lyeonov et al. (2023) focus on exploring information openness as a factor in business leadership within the digital environment. In contrast, this study takes a different and specialized approach by concentrating on developing and applying an integral metric specifically designed to comprehensively evaluate the overall impact of events after the reporting period. Lyeonov et al. (2023) may contribute to a theoretical understanding of factors influencing business leadership; this paper, in contrast, provides a directly applicable decision support tool. The integral metric is designed to empower management with the information needed to respond effectively to financial challenges stemming from events occurring after the reporting period.
This innovative approach aligns with the evolving financial reporting landscape and strategic management practices, offering stakeholders a more comprehensive and actionable tool.
The study also distinguishes itself from Michels (2017) and Filatova et al. (2022), who primarily focus on changes in the business environment and the impact of sustainable development on enterprise development. While these studies contribute valuable insights into the broader context of business evolution and sustainability, this work serves as a complementary and crucial addition by offering a more precise and refined methodology for assessing the specific impact of these changes on a company's financial stability. These scholars explore the overarching trends and effects of changes in the business environment, providing a macro-level understanding of the challenges and opportunities companies may face.
In contrast, this study hones in on the financial implications of these changes, presenting a granular and detailed assessment framework.Going beyond the generalities of environmental and sustainable impacts, the focus is on providing a tool that precisely measures and evaluates how these factors influence a company's financial standing.
Also, this study provides an additional layer of analysis to Mursalov et al. (2023) for assessing a company's financial stability and management responses in digitalization. While the integral indicator holds great promise, it is essential to acknowledge potential limitations and considerations for its application. Factors such as the accuracy of event impact assessments and the need for clear communication regarding the methodology used in its calculation should be considered.
Additionally, ongoing research and refinement of this framework will be crucial to ensure its effectiveness in providing valuable insights for decision-makers in corporate management and investment analysis.
Agreeing with Tajani et al. (2022) and Skrynnyk (2023), it should be noted that calculating the integral index may be less effective for several reasons. One such consideration is the potential scarcity or inaccessibility of data about events after the reporting period, making it challenging to conduct a thorough analysis. Additionally, the subjective nature of assessing the impact of these events can introduce variability in the results, as different evaluators may have varying perspectives on their significance. Furthermore, there may be a lag in obtaining pertinent information, causing delays in the evaluation process and potentially reducing the timeliness and relevance of the integral index. Collectively, these factors underscore the need for cautious interpretation and utilization of the integral index in the context of events occurring after the reporting period.
CONCLUSION
The purpose of the study was to investigate the theoretical and methodological foundations of the analysis of events after the reporting period and to develop an integral index for assessing the consolidated impact of such events on the company's financial results, so that effective management decisions can be made on its basis.
Using a hypothetical scenario, the study demonstrated the suitability of the proposed methodology for calculating an integral index to assess the consolidated impact of events after the reporting period on the financial stability of a company. Based on the proposed methodology and the specified parameters, the value of the integral index was 0.367. According to the proposed gradation, this value falls in the medium range: the change is noticeable but not drastic, warranting attention and consideration in financial reporting rather than signalling a material threat to the company's financial position or performance. The index's sensitivity to changes in the input parameters and assumptions was assessed using the Monte Carlo method. The results demonstrate the ability of the index to provide valuable information about the financial stability of companies following events that occurred after the reporting period.
Thus, the developed methodology for calculating the integral impact of events after the reporting period can be a universal tool for stakeholders, company management, and financial analysts seeking a comprehensive understanding of a company's resilience to changing economic conditions.Its adaptability to different contexts makes it a valuable financial analysis and risk assessment tool.
Appendix B

Table B1. Events that may be considered in the analysis after the reporting period

FAVORABLE EVENTS

Business developments. Examples: securing a significant contract, successful product launches, entering into strategic partnerships, legal settlements, intellectual property gains, successful mergers, government grants, increased demand, favorable currency exchange rates, asset sales, investment returns, etc. Impact: these planned events and financial windfalls can significantly enhance a company's financial performance, market position, and long-term sustainability. They not only contribute to improved investor confidence and stakeholder relations but also offer opportunities for strategic growth, increased profitability, and a stronger competitive position in the market. Managing these events wisely is crucial for maximizing their positive effects and ensuring sustained success.

Financial windfalls. Examples: unexpected gains, asset sales, or favorable legal settlements. Impact: these events can boost financial reserves, improve liquidity, and provide additional resources for strategic initiatives. Properly accounting for such windfalls is crucial for accurate financial reporting.

Key executive changes. Impact: a change in leadership can lead to shifts in strategic direction, which can affect a company's financials.

Market opportunities. Examples: identifying and capitalizing on emerging market trends, changing consumer preferences, or global economic conditions. Impact: seizing market opportunities can contribute to revenue growth, profitability, and competitiveness. Companies need to assess and incorporate these developments for effective strategic planning.

UNFAVORABLE EVENTS

Operational challenges. Examples: supply chain disruptions, production issues, or regulatory hurdles. Impact: these challenges can negatively affect operational efficiency, decreasing revenues and increasing costs. Companies must address such issues promptly to mitigate their adverse effects on financial performance.

Financial setbacks. Examples: unexpected expenses, losses on investments, or adverse currency fluctuations. Impact: financial setbacks can erode profitability and financial stability. It is crucial to promptly assess and disclose such events to give stakeholders a realistic picture of the company's financial health.

Legal and regulatory issues. Examples: lawsuits, compliance violations, or changes in regulations affecting the industry. Impact: legal and regulatory challenges can result in financial penalties, reputational damage, and operational disruptions. Managing and disclosing such events is vital for maintaining compliance and minimizing negative consequences.

Political instability. Examples: political unrest, coups d'état, or major geopolitical shifts. Impact: these events can affect a company's operations, especially in global markets.

Labor strikes or disputes. Examples: labor-related events such as strikes or disputes. Impact: recognizing such events as events after the reporting period is crucial for ensuring the accuracy and completeness of financial reporting. They can significantly affect a company's operations, leading to disruptions, production delays, and increased costs. By acknowledging these events, financial statements can provide a more comprehensive and realistic view of the company's financial position, allowing stakeholders to make informed decisions based on the most up-to-date information.

Environmental incidents. Examples: events like oil spills or chemical leaks. Impact: these events can have substantial financial implications in terms of clean-up costs and legal liabilities. By considering them after the reporting period, companies can accurately reflect the potential financial impact in their financial statements, providing transparency to stakeholders regarding potential future expenses.

Pandemics or health crises. Examples: events like the Covid-19 pandemic. Impact: these events can cause wide-ranging disruption to operations, supply chains, and demand. Recognizing them after the reporting period allows companies to disclose the financial consequences of the crisis, helping stakeholders understand the potential risks and uncertainties associated with its aftermath.
Table B2 (cont.). Methods and approaches to assess events after the reporting period

Market reaction analysis. Description: this method involves studying how financial markets respond to the disclosure of events after the reporting period; changes in stock prices, trading volumes, and other market indicators can provide valuable information about investor sentiment. Formula: MR = CSP / MIC, where MR is the market reaction, CSP the change in stock price, and MIC the market index change. This analysis measures the market's reaction to events after the reporting period, assessing how a company's stock price changes relative to overall market movements.

Qualitative assessment. Description: this approach relies on expert judgment and industry knowledge. Formula: N/A. Qualitative assessment of events after the reporting period involves subjective judgment, considering factors such as the nature of the event, industry trends, and expert opinions to evaluate potential impacts.

Stress testing. Description: subjecting financial models to extreme scenarios to assess how resilient a company's financial position is to adverse events; it helps identify vulnerabilities and potential areas of concern. Formula: there is no specific formula; stress testing generally focuses on assessing the company's performance in difficult conditions after the reporting period.
Figure 1 also demonstrates that one of the acceptable methodologies for analyzing the impact of events after the reporting period is the calculation of an integral indicator. Researchers emphasize the far-reaching benefits of the integral indicator for various stakeholders: investors, regulators, and financial analysts receive valuable information about the current state of the company and its prospects. However, Beasley et al. (2013) and Tajani et al. (2022) only emphasize the possibility of using this indicator rather than developing it.
Estimating the supply of oilseed acreage for sustainable aviation fuel production: taking account of farmers’ willingness to adopt
Continued progress towards reducing greenhouse gas emissions will require efforts across many industries. Though aviation is estimated to account for modest portions of global greenhouse gas emissions, these shares may grow as the industry expands. The use of biomass- and crop-based sustainable aviation fuels can help reduce emissions in the industry. However, limited feedstock supplies are a barrier to increased use of these fuels. This study examines the potential supply of feedstock from oilseeds and farmer willingness to produce oilseed crops under contract for sustainable aviation fuel production with a focus on canola and similar oilseed feedstocks (e.g., rapeseed). Stated-choice survey data is used to examine the contract and crop features that drive contract acceptance in six states located in the U.S. Great Plains and Pacific Northwest and then acreage supply curves are estimated for canola using secondary data. The estimated number of acres supplied under contract varies considerably across states and scenarios. Relatedly, estimated supply curves exhibit high degrees of price responsiveness. Of the states analyzed, oilseed acreages supplied under contract are generally found to be greatest in Kansas and North Dakota. Results suggest that in the absence of favorable contract and crop scenarios canola and other oilseed prices will need to considerably increase from typical levels to induce higher levels of supplied acres. The presence of crop insurance, shorter contract lengths that provide cost sharing and the availability of particular crop attributes are shown to diminish the need for higher canola and other oilseed prices.
Introduction
Global greenhouse gas (GHG) emissions have steadily marched upwards over the past several decades [1]. Air travel has been a contributing factor: passenger-kilometers flown, for example, have increased from about 3.6 billion in 2004 to about 8.3 billion in 2018 [2,3]. Though aviation is estimated to account for modest portions of global GHG emissions, these shares may grow as the industry expands. For example, though international aviation was estimated to account for only 1.3% of global CO2 emissions as recently as 2012, this share could potentially grow to 22% by 2050 [4]. Moreover, achieving significant reductions in global GHG emissions will require a holistic approach that results in reductions across a broad spectrum of sectors, such as heating, chemicals, road transport, and electricity [5], including some that may currently be viewed as minor contributors, such as aviation. Potentially, these various sectors will be competing for the same bioenergy feedstocks [5], though some assessments have shown that use of sustainable aviation fuels (SAF) can reduce carbon emissions in the aviation sector without significant impacts on the rest of the bioenergy sector [6]. Some studies suggested that median greenhouse gas emissions may be reduced by as much as 63% when using SAF (which included those derived from rapeseed and an edible variety of rapeseed, canola, as feedstock options) compared to the use of low sulfur jet fuel, based on life cycle assessments [7].

As concerns surrounding climate change and its potential impacts have grown, regulatory bodies have taken steps to curb emissions in the aviation industry. In the U.S., the Federal Aviation Administration (FAA) in 2012 introduced the United States Aviation Greenhouse Gas Reduction Plan. As part of this plan, the FAA set a goal for U.S. annual use of alternative jet fuel of one billion gallons by 2018, which it hoped to meet by supporting SAF research and development [8]. The U.S. Department of Defense has made efforts to incorporate alternative fuel usage into its jet and ship fleets, but with minimal success: between 2007 and 2014, only 2 million gallons of alternative fuel were purchased compared to 32 billion gallons of petroleum-based fuels [9]. More recently, the Sustainable Skies Act, introduced in the U.S. House in May 2021 and in the U.S. Senate in June 2021, seeks to cut aviation emissions by 50%. In the E.U., aviation emissions were brought into the Emissions Trading Scheme (ETS) in 2008 as part of an overall goal of reducing GHG emissions to at least 20% below 1990 levels [10]. However, SAF usage in the E.U. has been and is expected to continue to be minimal [11]. A draft proposal from the European Commission could change this by imposing a tax on aviation fuels, which had been exempt from previous fuel taxes [12]. Under the draft proposal, sustainable fuels would not be subject to the new taxes [12]. Recent resolutions adopted by the International Civil Aviation Organization (ICAO), a specialized United Nations agency that sets aviation standards for member countries, suggest emission-cutting efforts will increase moving forward. Specifically, the Carbon Offsetting and Reduction Scheme for International Aviation has set a goal of zero global net CO2 emissions above the 2020 level that is to be enforced via the purchase and cancellation of emissions units [13]. To date, 121 ICAO member countries representing 97.5% of revenue tonne kilometers (revenue load in tonnes multiplied by kilometers) have submitted action plans establishing long-term strategies for reducing emissions in the aviation sector [14].
To meet current and future emissions requirements, it has been suggested (by Kousoulidou and Lonza [15] and Gegg et al. [16]) that the most attractive option for airlines may be a switch to drop-in-type SAF, which can be used without infrastructure or engine modifications, rather than an overhaul of fleets for increased fossil-fuel efficiency or for use with non-drop-in-type SAF. Moreover, Wang et al. [17] stated that a switch to low-emission fuels is the only way to meet emissions requirements in the aviation industry due to the limited reductions that can be achieved through other technological updates. In general, biomass-based transportation fuels are increasingly being considered as alternatives to fossil fuels [18,19]. With respect to SAF in particular, research across biochemistry, bioengineering, and economics suggests oilseeds, such as rapeseed (which includes canola), and camelina are leading candidates [20].
Despite the potential for SAF, barriers to large-scale utilization remain. For SAF in general, uptake has been limited in part due to difficulties in providing them in a cost-effective and reliable manner [21,22]. In interviews with aviation biofuel stakeholders in Europe and North America by Gegg et al. [16], every interviewee identified the high production cost of aviation biofuels as a key constraint on market development. Several stakeholders attributed these high costs, in turn, to a lack of sufficient feedstock supply [16]. For SAF derived from field crops (e.g., corn or soybean), additional concerns are present, such as the "food versus fuel" debate and the GHG emissions associated with direct or indirect land-use change [20].
Production of SAF using oilseeds may help to alleviate some of these concerns. First, if increased production of SAF via oilseeds represents a net SAF increase (i.e., it is not replacing production from other sources or areas), prices should drop simply through the supply-demand mechanism. In addition, if enough oilseed was produced as feedstock, it may allow for cost savings via economies of size and/or scale at the point of SAF production. Second, in areas such as the Great Plains in the U.S., oilseeds can be incorporated into traditional rotations (such as with wheat) by replacing a fallow period [20,23]. This should help satisfy food versus fuel concerns, as replacing a fallow period is not replacing production that would have gone into the food or animal feed system. This should also alleviate some concerns regarding competition for feedstocks between biofuels, such as biodiesel and renewable diesel in the case of canola or rapeseed oil: if a fallow period is replaced, this represents an increase in the total feedstock supply rather than a diversion of current supply to new uses. Such diversions could still occur if sufficient "new" supply could not be contracted to make plant operation feasible. However, this may be a short-to-medium-term concern, as the transport sector has the technological ability, and faces a societal push, for increased adoption of electric vehicles, which could decrease the demand for all liquid fuels in that sector. Furthermore, Shi et al. [20] estimated that there is potential for a net GHG reduction associated with the resulting SAF even when taking the changed land uses into consideration.
Some previous research has looked at the feasibility and potential for SAF feedstock supplies. Murphy et al. [24] provide a framework for assessing the feasibility of a SAF industry within a region along with a Queensland case study. The analysis assumes a long-term supply contract, though the authors note that a variety of arrangements would likely be required and that additional research is needed on acquiring contracts with farmers. Trejo-Pech et al. [25] estimate the farm-level breakeven prices as well as potential profitability and locations for crushing facilities and refineries in an analysis of the potential for pennycress as a SAF feedstock. In a related study, Zhou et al. [26] found that for farmers considering growing pennycress for aviation fuel, key concerns included market access for pennycress as a bioenergy crop and profitability of growing pennycress. The most important benefit for consideration was found to be additional farm income.
However, the potential benefits of using oilseed crops for SAF production will not be realized without farmer buy-in. Initial market supply will likely rely on contracting between producers and refineries, as has been established in other biofuel markets [27,28]. Yet little research exists on oilseed-feedstock supply, particularly on how contractual conditions affect farmers' willingness to produce oilseed crops. This gap is addressed in this study using farmer survey data to examine willingness to incorporate oilseeds in rotation with traditional wheat under different contractual conditions. The analysis focuses on the use of canola (a variety of rapeseed) as the oilseed of choice, given its crop and oil yield potential, as well as existing production in the region of study [20,29]. Oilseed crops have been shown to be beneficial for replacing fallow in wheat rotations and as a break crop, helping to improve wheat yields and soil health by reducing problems due to continuous cereal production [20,30]. This analysis provides insights into the feasibility of large-scale oilseed production as a SAF feedstock. The empirical analysis utilizes standard econometric techniques that could easily be transferred to other parts of the world, pending the availability of (or the ability to collect) the necessary data, such as survey responses and production-economics parameters. In addition, the analyses advance studies of biofuel feedstock supply by directly incorporating producers' willingness to grow these crops under contract.
Data and methods
Primary and secondary data were used (i) to estimate farmers' willingness to grow oilseeds in rotation with wheat (replacing fallow or another crop); (ii) to estimate the amount of land in the "Wheat Belt" that may feasibly be put into contracted oilseed production; and (iii) to provide an estimate of the supply of oilseed feedstock under different contractual and profitability conditions. Primary data was obtained via a survey of producers in the study region (study region and producer survey details provided below). Secondary data was obtained from various U.S. state-and federal-government entities, such as the U.S. Department of Agriculture's (USDA) Economic Research Service (ERS), Farm Service Agency (FSA), and National Agricultural Statistics Service (NASS).
Study region
The study region, depicted in Fig. 1, comprises parts of the Prairie Gateway, Northern Great Plains, and Fruitful Rim (Pacific Northwest) farm resource regions [31]. The Prairie Gateway and the Northern Great Plains regions accounted for 74% of all wheat acres planted in 2019 [32]. The Prairie Gateway region experiences wide extremes in both temperature and precipitation, having bitterly cold air masses during winter and hot, humid summers. This region is susceptible to floods, severe thunderstorms, summer drought, heat waves, and winter storms [33]. Climate in the Northern Great Plains region is semiarid, with longer and colder winters and shorter, hotter summers. Land management in this region is a mixture of dryland cropping systems and livestock production based on rangeland, pastures, and hay production [34]. In the Fruitful Rim (Pacific Northwest), about two-thirds of rainfall comes between October and March, and it is fairly dry during the remainder of the year [35].
Producer survey
An agricultural producer survey was administered to 10,089 non-irrigated wheat growers in the study region to assess farmers' willingness to adopt specialized oilseed crops under contract for utilization as a feedstock for SAF production. Contact information for the 10,089 wheat farmers was obtained from Farm Market ID (www.farmmarketid.com). Focus groups within the region, as well as experts in the field, were consulted for questionnaire development and testing¹. Survey responses from producers were anonymous, and a letter accompanying the survey explained the purposes of the survey, indirect benefits to participants, confidentiality and anonymity, and that the survey was strictly voluntary.
The survey was mailed to farmers in April 2013. Reminder postcards were sent to non-responders 10-12 days after the first survey packets were mailed. A second survey packet was mailed 14-16 days after the reminder postcard. A total of 971 responses were received (a response rate of 9.7%) in 2013. Due to the lower than expected response rate, the survey was sent again to non-respondents (using the same process) in January and February 2014. The low response rate in 2013 may be attributable to the timing of survey administration². Across the two mailings, 9,723 surveys were sent to deliverable addresses and 1,444 surveys were completed, providing a response rate of 15%. Usable surveys from five other states were not utilized in analyses due to smaller samples (17, 87, 169, 47, and 47, respectively) and a decision to focus analysis on the larger wheat-producing states. Analysis was conducted separately for the states of KS, OK, ND, and SD, while WA and OR were combined into a single analysis due to the smaller number of observations from each state. Demographics reported by farmers in the survey are compared to the statistics for each state as reported in the 2017 Census of Agriculture [36]. Table 1 shows the comparison between the survey statistics and the census. The survey sample is representative with respect to average age and the percentage of producers who are white. The survey was less representative with respect to the percentage of farmers who are male and with respect to total sales, with both being higher on average than in the Census of Agriculture. The total sales result may be attributable, to some extent, to the use of total sales ranges in the survey rather than actual total sales. Average total sales were thus calculated by assigning each farmer total sales equal to the midpoint of the selected range. Nevertheless, the surveyed farmers likely represent larger operations. In addition, the sample excludes the part of the farming population that does not grow wheat. This could also contribute to the difference between the survey figures and the Census of Agriculture if this population tends to have smaller total sales.
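A minimal sketch of the midpoint assignment for sales ranges (the bracket boundaries shown are hypothetical; the survey's actual brackets are not listed in the text):

```python
import pandas as pd

# Hypothetical sales brackets (USD); the survey's actual brackets differ.
range_midpoints = {
    "<50k": 25_000,
    "50k-250k": 150_000,
    "250k-500k": 375_000,
    "500k-1m": 750_000,
}

responses = pd.Series(["50k-250k", "<50k", "250k-500k", "50k-250k"])
sales = responses.map(range_midpoints)  # assign each farmer the bracket midpoint
print("average total sales estimate:", sales.mean())
```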
Stated choice experiment
The primary usage of the survey was a stated choice experiment examining farmers' willingness to enter into contracts to produce specialized oilseeds as a feedstock for SAF production. Each choice situation consisted of nine attributes to reflect differing contract and growing conditions. Four attributes were related to oilseed characteristics: shatter resistance, pest tolerance and herbicide resistance, winter hardiness, and extended window to direct combine. The remaining five attributes describe contract features: net returns, length of contract, crop insurance, cost share, and presence of an "Act of God" clause. The attributes used in the experiment represent the crop traits and contract features that focus-group participants, through discussions and surveys, indicated were the most important. For crop variety attributes, shatter resistance, pest tolerance, and winter hardiness were important for ensuring the viability and yield of the crop, while an extended direct combine window was important for the flexibility it provides for including oilseed crops in rotation with small grains. Farmers have indicated that the length of contract, crop insurance, and net returns are highly important when considering the adoption of a crop [37,38]. Bergtold, Fewell, and Williams [27] showed that the length of contract, net returns, presence of crop insurance, and financial incentives are important contract considerations in a similar context for production of cellulosic feedstocks for ethanol production.
Survey respondents were asked to consider each contractual scenario and choose if they would enter the contract to grow oilseeds in rotation with wheat or "opt out". Contract attributes were defined in the stated choice experiment and an example question is provided in Fig. 2. In conjunction with the oilseed farmer survey, a supplemental information sheet was provided that highlighted the information about specific oilseed crops, such as costs and potential returns relative to wheat production.
As per the survey instructions, farmers were also asked to take into consideration that oilseed crops would be designated for SAF production and grown in rotation with spring or winter wheat under dryland conditions. Net returns were presented in the survey as the expected percent gain above the net returns for producing an acre of wheat. Four levels of net returns were considered: -5, 5, 15, and 25 percent above wheat net returns (not including cost-sharing). The cost-share attribute was described as the percentage of the input costs that the biorefinery or processor agrees to pay. Three levels of the cost-share attribute were considered in the survey: 0, 15, and 30 percent. Two levels were considered for contract length: 1 year or 3 years. The 3-year contract was considered because an oilseed crop is typically only rotated once every 3 years in a crop rotation with small grains. It was assumed that some portion of the farmer's land would be planted to an oilseed crop each year to meet contract obligations. Oilseed characteristics, crop insurance, and the "Act of God" clause are binary attributes: 1 = Yes (present) and 0 = No (not present). A 2⁷ × 3 × 4 fractional factorial design was used to find the combinations needed to construct the set of stated choice questions, based on the approach from Louviere, Hensher, and Swait [39]. PROC OPTEX was used in SAS (version 9.3) to develop the design and blocking of choice sets. The D-optimality criterion was used to obtain the optimal design, and a D-efficiency score of 99.13 was obtained. The procedure developed 48 choice sets, which were randomly assigned into 12 blocks of 4 choice questions, yielding 12 survey versions that were randomly distributed to survey respondents following standard stated choice techniques [39]. That is, each respondent answered 4 choice questions on the survey, providing additional within-respondent variation (of their preferences), in addition to variation across respondents.
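A minimal Python sketch of this design step follows (the study itself used PROC OPTEX in SAS). It enumerates the 2⁷ × 3 × 4 candidate profiles and uses a crude random search over 48-profile subsets scored by the D-criterion, then blocks them into 12 versions of 4 questions. The attribute levels mirror those above; the attribute ordering and the random search are stand-ins for OPTEX's exchange algorithm, not a reproduction of it:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Full 2^7 x 3 x 4 candidate set: seven binary attributes, cost share (3 levels),
# net returns (4 levels). The ordering of attributes is illustrative.
binary = list(itertools.product([0, 1], repeat=7))
cost_share = [0.0, 0.15, 0.30]
net_returns = [-0.05, 0.05, 0.15, 0.25]
candidates = np.array([b + (c, r) for b in binary for c in cost_share for r in net_returns])
print(candidates.shape)  # (1536, 9)

# Crude random-search stand-in for a D-optimal search: pick the 48-profile
# design maximizing log det(X'X) over many random draws.
def log_d(design):
    X = np.column_stack([np.ones(len(design)), design])  # intercept + main effects
    sign, logdet = np.linalg.slogdet(X.T @ X)
    return logdet if sign > 0 else -np.inf

best = max((rng.choice(len(candidates), 48, replace=False) for _ in range(2000)),
           key=lambda idx: log_d(candidates[idx]))
design = candidates[best]

# Block the 48 profiles into 12 survey versions of 4 choice questions each.
blocks = rng.permutation(48).reshape(12, 4)
```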
Estimation of oilseed supply
Estimation of oilseed supply for SAF production began by first estimating farmer willingness to enter contractual obligations to grow the oilseed. The survey instrument presented farmers with four contract scenarios with varying oilseed and contract attributes (Fig. 2). For each scenario, farmers were asked to respond "Yes" or "No" to the statement "I would probably be willing to grow an oilseed crop under contract for this scenario. " Oilseed attributes included shatter resistance, pest tolerance and resistance, winter hardiness, and extended direct combine window; contract attributes included net returns (as percent above or below net returns to wheat), length of contract, availability of crop insurance (yes or no), cost sharing by the biorefinery/processor (as percentage of the input costs), and the inclusion of an "Act of God" clause (yes or no).
It is assumed that producers maximize expected utility when deciding whether to enter a contract for oilseed production. Letting y = 1 if the farmer enters the contract and y = 0 if the farmer does not, then following Hanneman [40] farmers are assumed to have a utility function given by u = u(y, x, z), where x is a vector of contract and oilseed attributes and z is a vector of variables that impact utility but are not associated with the production contract or oilseed. While the utility function may be known to the farmer, it is treated as random by the researcher and written as

$u(y, x, z) = \nu(y, x, z) + \varepsilon_y \quad \text{for } y = 0, 1,$ (1)

where $\varepsilon_y \sim iid(0, \sigma^2_\varepsilon)$. Then, the farmer will accept the contract if

$\nu(1, x, z) + \varepsilon_1 \geq \nu(0, x, z) + \varepsilon_0,$ (2)

and the probability that the farmer enters the contract can be written as

$P(y = 1) = P[\nu(1, x, z) + \varepsilon_1 \geq \nu(0, x, z) + \varepsilon_0]$ (3)

or

$P(y = 1) = P[\varepsilon_0 - \varepsilon_1 \leq \nu(1, x, z) - \nu(0, x, z)].$ (4)

Defining $\eta = \varepsilon_0 - \varepsilon_1$, $\Delta\nu = \nu(1, x, z) - \nu(0, x, z)$, and $F_\eta(\cdot)$ as the cumulative distribution function (CDF) for η, Eq. (4) then becomes

$P(y = 1) = F_\eta(\Delta\nu).$ (5)

Specifying Δν as a linear index in the attributes gives

$P(y = 1) = F_\eta(\beta_0 + \beta' x + \delta' z).$ (6)

Assuming $F_\eta(\cdot)$ follows a logistic CDF, the model given by Eq. (6) can be estimated using logistic regression techniques [41]. However, it is assumed that the same contract can be viewed with differing levels of favorability by different farmers due to unobserved heterogeneity in farm and/or farmer characteristics. As such, a random parameters logistic regression model is used to capture this unobserved heterogeneity [39]. Specifically, for farmer i in county k the model allows for farmer-specific intercept terms such that Eq. (6) becomes:

$P_{i,k} = F_\eta(\beta_{i,k,0} + \beta' x + \delta' z_{i,k}),$ (7)

where

$\beta_{i,k,0} = \beta_0 + \theta' z_{i,k} + \sigma_\beta u_{i,k}.$ (8)

The term $\beta_0 + \theta' z_{i,k}$ represents the conditional mean of the distribution of the intercept; $z_{i,k}$ is a vector containing a dummy variable for the year of the survey (2013 or 2014), as well as a set of sub-region dummy variables; $\sigma_\beta$ is the standard deviation of the distribution of the intercept; and $u_{i,k}$ is assumed to be mean zero and standard normally distributed [42]. Additional spatial heterogeneity is captured by estimating a separate model for each region $r \in \{KS, ND, OK, PNW, SD\}$. Thus, for farmer i in county k in sub-region s in region r, Eqs. (7) and (8) can be expressed as:

$P_{i,k,s,r} = P(y_{i,k,s,r} = 1 \mid x, z_{s,r}) = F_\eta(\beta_{i,k,s,r,0} + \beta'_r x)$ (9)

and

$\beta_{i,k,s,r,0} = \beta_{r,0} + \theta'_r z_{s,r} + \sigma_{\beta,r} u_{i,k,s,r},$ (10)

where $z_{i,k}$ has been replaced by $z_{s,r}$, because these vectors are identical for all farmers and counties in sub-region s in region r. The variables in $z_{s,r}$ are used only in the distribution for $\beta_{i,k,s,r,0}$ and thus the term $\delta' z_{i,k}$ drops out in Eq. (9). Equations (9) and (10) serve as the estimable adoption or willingness-to-grow models for each region. Following estimation of (9) and (10), for a given contract and crop variety with associated attribute vector $x_j$, adoption probabilities for county k in sub-region s of region r are estimated as:

$\hat{P}_{s,r,j} = \frac{\exp(\hat{\beta}_{s,r,0} + \hat{\beta}'_r x_j)}{1 + \exp(\hat{\beta}_{s,r,0} + \hat{\beta}'_r x_j)},$ (11)

with the sub-region intercept evaluated at its conditional mean,

$\hat{\beta}_{s,r,0} = \hat{\beta}_{r,0} + \hat{\theta}'_r z_{s,r},$ (12)

where $x'_j = [S_j\; T_j\; W_j\; C_j\; R_j\; L_j\; I_j\; O_j\; G_j]$, $S_j$ denotes improved shatter resistance, $T_j$ is pest tolerance, $W_j$ is winter hardiness, $C_j$ is extended combine window, $R_j$ is percent returns above wheat, $L_j$ is contract length, $I_j$ is the availability of insurance, $O_j$ is cost share, and $G_j$ is the "Act of God" clause. All attributes are binary except for $L_j$, $O_j$, and $R_j$. For all binary variables, a value of 1 indicates the presence of the attribute in the contract or crop, while 0 indicates its absence. For sub-region s in region r, the vector $z'_{s,r} = [t_{2013}\; d_1\; d_2 \cdots d_{S_r}]$, where $t_{2013}$ is the survey-year dummy variable and $d_s$ for $s = 1, \ldots, S_r$ are the dummy variables for sub-regions within region r. The i and k subscripts have been dropped from $\hat{P}_{s,r,j}$ and $\hat{\beta}_{s,r,0}$ to note that this approach uses the same value for all farmers and counties in sub-region s of region r.
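Given the estimates, Eq. (11) is a plain logistic transform of the attribute vector. A minimal sketch with made-up coefficients (in the $x_j$ ordering above, not the paper's Table 5 estimates):

```python
import numpy as np

# Hypothetical estimated coefficients for one region, ordered [S, T, W, C, R, L, I, O, G];
# values are illustrative, not the paper's estimates.
beta = np.array([0.4, 0.3, 0.2, 0.25, 2.0, -0.3, 0.5, 1.5, 0.6])
beta0_sr = -1.2  # sub-region intercept from Eq. (12), evaluated at the estimates

def adoption_prob(x_j):
    """Eq. (11): logistic adoption probability for attribute vector x_j."""
    eta = beta0_sr + beta @ x_j
    return 1.0 / (1.0 + np.exp(-eta))

# A "Medium"-favorability-style contract: all crop traits present, R = 15% above
# wheat, 1-year contract, insurance, 15% cost share, Act-of-God clause (illustrative).
x_med = np.array([1, 1, 1, 1, 0.15, 1, 1, 0.15, 1])
print(adoption_prob(x_med))
```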
One approach for estimating regional crop acreages is to assume that the share of acres devoted to a crop is equal to the probability that any given field in the region is devoted to that crop [43]. Acreages can then be estimated as the total potential acreage (e.g., total cropland in the region) multiplied by the field-level probability. Because the survey asked if respondents would be willing to grow oilseeds in rotation with wheat, the total potential acreage is defined as the total wheat acreage in a county. The amount of oilseed grown annually for SAF production in county k in sub-region s of region r under scenario j, $\hat{A}^O_{k,s,r,j}$, is then estimated as:

$\hat{A}^O_{k,s,r,j} = \tfrac{1}{3}\, \delta_r\, \hat{P}_{s,r,j}\, A^W_{k,s,r},$ (13)

where $A^W_{k,s,r}$ is the total area planted to wheat in the county based on USDA FSA planted acreage data from 2019, $\delta_r$ is the adjusted survey response rate for region r, and the 1/3 scaling is applied to account for an assumed 3-year rotation, where oilseeds enter only once every 3 years, following best management practices. By including $\delta_r$ it is assumed that farmer participation, and thus the proportion of traditional wheat acreage offered for participation, is capped at the survey response rate for the region. This helps to indicate initial interest in this type of farm enterprise based on survey response in the region, providing a conservative estimate of initial adoption in the study region as the market develops. Sub-regional and regional supplies can be obtained by summing $\hat{A}^O_{k,s,r,j}$ across the counties in the (sub-)region.
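Eq. (13) in code, with hypothetical inputs:

```python
# Eq. (13) as a function: county-level contracted oilseed acreage under scenario j.
# The response rate and wheat acreage below are illustrative, not from the data.
def oilseed_acres(p_adopt, wheat_acres, delta_r):
    """1/3 rotation factor x response-rate cap x adoption probability x wheat base."""
    return (1.0 / 3.0) * delta_r * p_adopt * wheat_acres

# Example: a county with 60,000 wheat acres, a 15% regional response rate,
# and a 0.4 adoption probability (all numbers hypothetical).
print(oilseed_acres(0.4, 60_000, 0.15))  # -> 1200.0 acres
```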
For this analysis, a straightforward approach to constructing supply curves would be to estimate $\hat{A}^O_{k,s,r,j}$ for all counties across a range of values for $R_j$ and then simply chart regional supplies as a function of $R_j$ (e.g., [44]). However, a more useful analysis is to provide estimated supply curves in the traditional way, as a function of oilseed price. This approach is adopted here even though it is complicated by $R_j$, which expresses oilseed net returns as a percentage of net returns to wheat. To operationalize this, the adopted approach is as follows. First, a range of oilseed prices is selected, $p^O_n \in \{p^O_1, p^O_2, \ldots, p^O_N\}$. Then, for each county k, the net return variable $R_k$ is estimated as:

$R_k = \frac{p^O_n Y^O_k - (1 - O_j)\, VC^O_k - FC_k - NR^W_k}{NR^W_k},$ (14)

where $Y^O_k$, $VC^O_k$, $FC_k$, and $NR^W_k$ are the oilseed yield per acre (cwt/ac), variable costs ($/ac) for oilseed production, fixed costs ($/ac), and net returns to wheat ($/ac) in county k, and $O_j \in \{0, 0.15, 0.30\}$ is the cost share associated with scenario j. It is assumed fixed costs are the same under the oilseed and wheat production. Due to data availability for the terms in Eq. (14), this analysis looks at the potential supply of canola, a type of rapeseed, for SAF production, which is primarily produced along the wheat belt and can also act as a proxy for other potential oilseeds being considered as feedstocks for SAF production, such as industrial rapeseed [28]. The assumed values and their sources for the terms in Eq. (14) are found in Tables 2 and 3. Once $R_k$ is calculated for a given canola price ($/cwt), county-level acreages are calculated using Eqs. (11)-(13).

For policy and industry, there exists a significant interest in the elasticity of this oilseed supply. In this analysis, acreage elasticities are estimated by treating the points along the simulated supply curves as observational data arising from an underlying supply function given by:

$A^O_{r,j,n} = \gamma\, p_n^{\alpha},$ (15)

where $A^O_{r,j,n}$ is the simulated canola acreage in region r under scenario j, $p_n$ is the price of canola, and γ and α are parameters to be estimated. The functional form was chosen because, following a natural log transformation, the model can be estimated using simple linear regression and elasticity estimates are easily obtained. Thus, elasticity estimates are obtained via the simple regression given by:

$\ln A^O_{r,j,n} = \ln(\gamma) + \alpha \ln(p_n) + \varepsilon_n,$ (16)

where the parameter α represents the acreage elasticity with respect to price. Equation (16) is estimated using ordinary least squares. The acreage elasticity provides a measure of the potential volatility in the market while accounting for the responsiveness to contract, plant, and market conditions, as volatility will be dependent upon the adoption probabilities and market conditions.
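The full pipeline (price sweep, $R_k$ via Eq. (14), adoption probability via Eq. (11), acreage via Eq. (13), elasticity via Eq. (16)) can be sketched as follows. All budget values and logit coefficients below are placeholders rather than the values from Tables 2, 3, and 5:

```python
import numpy as np

# End-to-end sketch: sweep canola prices, build a county supply curve, then
# recover the acreage elasticity via the log-log OLS of Eq. (16).
yield_cwt, var_cost, fixed_cost, nr_wheat = 36.0, 120.0, 60.0, 55.0  # per acre
cost_share, wheat_acres, delta_r = 0.15, 60_000, 0.15

prices = np.arange(5.0, 50.0, 0.10)  # $/cwt
R = (prices * yield_cwt - (1 - cost_share) * var_cost - fixed_cost - nr_wheat) / nr_wheat

eta = -1.2 + 0.5 * R + 1.5 * cost_share          # toy linear index for Eq. (11)
p_adopt = 1.0 / (1.0 + np.exp(-eta))
acres = (1.0 / 3.0) * delta_r * p_adopt * wheat_acres  # Eq. (13)

# Eq. (16): ln A = ln(gamma) + alpha * ln(p); the slope alpha is the elasticity.
mask = acres > 0
alpha, ln_gamma = np.polyfit(np.log(prices[mask]), np.log(acres[mask]), 1)
print(f"estimated acreage elasticity alpha = {alpha:.2f}")
```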
Sensitivity analyses
Understanding how well this model will hold up and the impacts from changes to the assumptions that have been made requires sensitivity analyses. These analyses show how the potential supply of canola (or rapeseed) may change due to contract, market, or external conditions. The following sensitivity analyses are included in this paper:

1. Supply estimation for "Low", "Medium", and "Highly" favorable scenarios. Separate analyses were conducted for "low-", "medium-", and "high-" favorability scenarios, where favorability is considered from the farmer's perspective. The attribute vectors associated with these scenarios are presented in Table 4. It could be argued that from a purely profit and risk perspective, the ranking of contract favorability may be the opposite for SAF producers with respect to the contract attributes.

2. Wheat net returns scenarios. Because so much is assumed with respect to returns to wheat and there is little spatial heterogeneity in the assumed returns, sensitivity analysis is conducted with respect to this variable. To limit the set of results, analyses are restricted to just the "medium" favorability scenario (see Table 4). The sensitivity analysis is conducted by re-estimating supply curves for this contract with net returns to wheat that are 25% greater and less than in the "baseline" case (see Table 3).

3. Cost-share scenarios. The percentage of oilseed input costs that are paid by the SAF production facility will have a significant impact on the refinery's bottom line. Thus, the degree to which this cost share may impact the supply of feedstock is an important measure. To gain insights here, supply curves are re-estimated across a cost-share range of 0-30%. This analysis is limited to consideration of only the "highly" favorable scenario, where this attribute is included.
Stated-choice regressions
Results of the stated-choice regressions for each region are presented in Table 5. The sign of an estimated coefficient for an oilseed or contract attribute indicates whether the attribute has a positive or negative impact on the probability a farmer is willing to grow the oilseed under the proposed scenario. In general, results in this regard were as expected. Except for one instance, all oilseed attributes (shatter resistance $S_j$, pest tolerance $T_j$, winter hardiness $W_j$, and extended combine window $C_j$) increased the likelihood a farmer would enter the proposed contract. The exception was for winter hardiness in North Dakota, but this result was not statistically significant. Increases in net returns relative to wheat net returns $R_j$, the availability of crop insurance $I_j$, increases in the cost-share level $O_j$, and the presence of an "Act of God" clause $G_j$ had positive impacts across all regions. Longer contract lengths $L_j$ generally decreased the probability of entering a contract, except in Oklahoma. These results are in line with economic theory-based expectations and with the limited prior research conducted in this area (e.g., [27]).
Supply estimation for "Low", "Medium", and "Highly" favorable scenarios
Estimated probabilities of entering a canola-production contract under the "Low", "Medium", and "Highly" favorable scenarios are presented by state in Fig. 3 as a function of the percentage increase or decrease in net returns relative to wheat. These probabilities were estimated by varying $R_j$ in Eq. (11). The remaining variables for each scenario were set according to Table 4. For each scenario, probabilities were estimated across canola prices ranging from $0 to $50 by increments of $0.10. In general, estimated probabilities behave as expected, with increases seen (1) when moving from "Low" to "Medium" to "High" favorability and (2) as canola net returns increase (relative to wheat net returns). Regional similarities are also present. The probability functions for KS, ND, OK, and SD tend to (approximately) reach their upper and lower bounds at net returns of approximately ±100% of net returns relative to wheat. In contrast, the probability functions of OR and WA reach their bounds at around ±200% of net returns relative to wheat. The implication of these results is that a given change in R is likely to have a greater impact in KS, ND, OK, and SD than will be seen in OR or WA. The probability functions in Fig. 3 form the basis for the acreage supply estimates under these scenarios, which are depicted in Fig. 4. Immediately evident is the variation in maximum acreage supplies across states, ranging from about 40,000 acres in OR to about 375,000 acres in KS. This is of interest, given OK and ND, the largest canola-producing regions, have maximum estimated acreages of about 123,000 and 309,000, respectively. Some of this is driven by the underlying probabilities of adoption, and maximum acreages are also limited by wheat acreages, survey response rates, and the rotation adjustment, as seen in Eq. (13). Survey responses from oilseed producers were lower than expected in these regions, potentially indicating a lack of interest in this potential enterprise that would compete with canola or oilseed production for food markets. Because the associated probabilities at these extremes are, for all intents and purposes, equal to 1, these maximum values reflect the variation in wheat acreages and the adjustment factors from Eq. (13). Differences are also seen in the prices at which upper and lower acreage thresholds are met. For example, in SD and under the "High" favorability scenario, canola acreage is maximized at a price of $21.50/cwt and does not exceed 1,000 acres until the canola price is about $10.00/cwt. In contrast, canola acreage in OK for the "High" favorability scenario is maximized at a price of $47.10/cwt and exceeds 1,000 acres at a price of about $8.70/cwt. Additional price-quantity combinations for these and the remaining states and scenarios are presented in Table 6 and Figs. 7-12. Overall, the results suggest that a highly favorable scenario may be needed for oilseed SAF production to be feasible in any of the study region states. Across 2017-2019, the average price received for canola across the study region was about $14.72/cwt (USDA-NASS, 2020). As shown in Table 6, at a price of $15/cwt and under the low-favorability scenario and baseline wheat net returns, KS, OK, and OR are all estimated to supply less than 1,000 acres of canola for SAF production. Washington exceeded 1,000 acres, but only marginally at 1,219. Thus, in the absence of more favorable scenario characteristics, supplies are likely to be negligible in these states at recent market prices.
Production may be more feasible in ND and SD under this scenario, which had estimated acreages of 90,816 and 16,022, respectively. Under the high-favorability scenario, however, SAF may be more feasible across each of the states. In this case, the lowest estimated acreage is seen in OR at 7,898, and the remaining states all have estimated acreages of greater than 15,000. To put the above results in perspective, Archer et al. [20] report it would take about 2.1 kg of rapeseed oil to produce 1 kg of SAF. Assuming 44% oil content in its feedstock, a small SAF refinery with a 100-million-kg-per-year capacity would require approximately 477 million kg of feedstock. Assuming a standard canola yield of 3,600 lbs. (1,633 kg) per acre and the same 44% oil content, this would require approximately 292,000 acres of canola production within the vicinity of the refinery. Based on the estimates from this analysis, only KS and ND could meet this requirement (see Table 6). In KS, this acreage could be attained at a canola price of around $20/cwt and a highly favorable scenario. It could also be met under the "Low" and "Medium" favorability scenarios, but it would require a higher canola price. For ND, the acreage requirement is met for each of the favorability scenarios provided the canola price is at least $20/cwt. Smaller scale refineries, about 40 million kg per year requiring about 117,000 acres, could be supported in OK and WA, though this would again require market prices significantly higher than recent levels. Moreover, because supplies are estimated at the state level, there is no guarantee enough canola could be contracted within a distance acceptable to the refinery. These results highlight the need to consider producers' willingness to produce oilseed crops for SAF production under contract and the potential volatility in starting up this market.
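The refinery arithmetic in this paragraph can be checked in a few lines, using only the conversion factors stated above:

```python
# Back-of-envelope feedstock requirement: 2.1 kg rapeseed oil per kg SAF,
# 44% oil content, and 1,633 kg of canola per acre (values from the text).
def acres_required(saf_kg_per_year, oil_per_saf=2.1, oil_content=0.44,
                   yield_kg_per_acre=1633):
    feedstock_kg = saf_kg_per_year * oil_per_saf / oil_content
    return feedstock_kg / yield_kg_per_acre

print(f"{acres_required(100e6):,.0f} acres")  # ~292,000 for a 100-million-kg refinery
print(f"{acres_required(40e6):,.0f} acres")   # ~117,000 for a 40-million-kg refinery
```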
Wheat net returns scenarios
To examine the sensitivity of results to the assumptions made regarding net returns to wheat, two additional scenarios were simulated for each state. Using the "Medium" favorability scenario as a starting point, acreages are re-estimated under a 25% increase and a 25% decrease in the baseline net returns to wheat (Table 3). Results for these simulations are depicted in Fig. 5 and presented for select prices in Table 6. In general, the results were as expected: As net returns to wheat increase (decrease), the estimated canola-acreage supply decreases (increases). The largest impacts (in gross acreage) were seen towards the middle of the price range. For example, at a canola price of $15/cwt, the total estimated acreage across all six states decreases by about 53,000 when net returns to wheat are increased by 25%. In contrast, when net returns to wheat are decreased by 25%, the estimated acreage increases by about 58,000. Given the spatial and temporal variability of wheat net returns, these results suggest that identifying areas which, on average, have lower net returns to wheat could play an important role in determining the feasibility of future biorefinery and SAF processing facility locations.
Cost-share scenarios
The results from varying the cost-share level between 0%, 15%, and 30% for the "High" favorability scenario are presented in Fig. 6 and Table 6. These results were also as expected: As the cost-share percentage increases (decreases), the estimated acreage supply also increases (decreases). Though not directly comparable ("Medium" versus "High" favorability), varying the cost-share level had a greater impact than varying the net returns to wheat. For example, looking again at a canola price of $15/cwt, changing the cost share from 0% to 15% increases estimated acreage by about 87,000 acres across all states. An additional increase in the cost share from 15% to 30% adds roughly 125,000 acres. It should be noted that these estimates are the result of two components. First, there is the direct impact on the probability of adoption via $O_j$ in Eq. (11). Second, there is an indirect effect on the probability via $O_j$ in Eq. (14) through net returns. A more conservative approach would be to remove this indirect effect. However, given the overall conservative nature of these estimates, due to the response rate adjustment (see Eq. (13)), the inclusion of the indirect effect is not believed to be of major concern. Thus, it may be in the interest of SAF biofuel producers and refineries or government to offer cost-share incentives to promote production under contract, especially when the market for these feedstocks is beginning to grow. The contracting literature on biofuel production has shown that cost-share incentives may increase farmers' willingness to grow [27], which is further supported by the cost-share results from the stated choice regressions.
Elasticity estimates
Acreage elasticities were estimated using Eq. (16). For estimation purposes, acreage and price values of 0 were set equal to 0.001. Results are presented in Table 7 for all states and scenarios. The elasticity scenarios are the same as those presented in Table 4, except that wheat net returns and the cost-share percentage were varied across the low-, medium-, and high-favorability scenarios. In all cases, acreage elasticities were greater than one, suggesting supplies will be sensitive to prices. The mean elasticity across all scenarios was about 4.4, which would imply a 1% increase (decrease) in the canola price would increase (decrease) contracted canola acres by about 4.4%. The smallest estimated elasticity, of about 1.5, was seen in Oklahoma when scenario favorability was "High", wheat net returns were decreased by 25%, and the cost-share level was 30%. The largest elasticity, of about 6.4, was in Kansas for "Low" favorability, wheat net returns that were increased by 25%, and a cost-share level of 15%. On average, estimated elasticities were highest in Kansas (about 5.9) and lowest in Washington (about 3.2). Elasticities tended to decrease as scenario favorability changed from "Low" to "Medium" to "High", though these decreases were relatively minor, with elasticities moving only from about 4.5 to 4.2. With respect to wheat net returns, elasticities increased from about 3.4 under the "25% Decrease" scenario to about 5.5 for the "25% Increase" scenario. Conversely, elasticities tended to decrease as the cost-share level increased, going from about 4.8 at 0% cost share to about 4.0 at the 30% level. The highly elastic estimates are not unexpected, given this would be a nascent market that could be highly variable until a more mature market is established [45]. However, it should be noted that because the analysis uses a 1-year contract length, these supply curves essentially represent the supply curve for a single growing season and the number of acres provided via new or renewed contracts. If biorefineries can obtain multi-year contracts, they may be able to help smooth this volatility.
The results above imply that the ability of biorefineries to contract with farmers could be highly dependent upon market prices for canola and/or similar substitutes. This has both good and bad consequences for the biorefineries, especially if operating with 1-year contracts. On one hand, a decrease in price could drastically reduce the number of acres enrolled in contracts. Conversely, the same change but as an increase in price could significantly increase enrollable acres. Depending on whether a higher degree of responsiveness is favored by biorefineries, the results here suggest actions that can be taken to move this responsiveness in the desired direction. For example, if lower responsiveness is desired, it may be best to locate in states such as OK, OR, and/or WA as opposed to KS, ND, and/or SD. Oilseed for SAF production likely faces competition from other oilseed production in these latter states. With respect to contract attributes, biorefineries could decrease price responsiveness by increasing cost-share levels. Other scenario-favorability factors that may impact supply volatility, such as the availability of insurance, are likely to be outside the control of the biorefineries.
Discussion
This work examined the viability of oilseed supply for SAF by (1) identifying the factors that may impede or aid efforts at contracting oilseed supplies for SAF production needed to establish a viable supply chain and (2) estimating the potential acreages that could be contracted in select U.S. states under various scenarios (assuming the oilseed would be rotated with a wheat crop). Expanded use of SAF will likely be crucial to reducing emissions in the aviation industry [5,46,47], but the limited availability of SAF feedstock could hinder progress [5,16,47]. Feasibility of SAF production may depend on the ability of a refiner to enter contracts with farmers for feedstock supply, particularly if these feedstocks do not have established markets in the area [48]. Moreover, contract attributes and scenarios have been shown to impact farmer willingness to accept them [27,28,48]. Similarly, results in this study indicated that canola-acreage supplies will be heavily influenced by location, contract attributes, and scenario context. With respect to location, potential supplies are, on average, estimated to be largest in Kansas and North Dakota. Across all states, however, the canola prices needed to induce the higher levels of estimated acreages would have to increase considerably from typical levels, which may not be realistic in all scenarios. The results with respect to scenario attributes, however, indicate the potential to alleviate these concerns. In general, and particularly across a realistic range of canola prices, estimated acreage supplies increase as scenario favorability moves from "Low" to "High", as wheat profitability decreases (making canola a stronger substitute crop), and as cost-share levels increase. Thus, to maximize potential supplies, biorefineries may want to (1) target areas where wheat is relatively less profitable and (2) consider compensating producers for a share of the variable costs of production. To the extent possible, biorefineries should also target locations where more desirable oilseed varieties are feasible to produce. The need for canola (and other oilseed) prices well above what has been seen historically suggests that policy actions may be needed to achieve sufficient oilseed feedstock supplies. Options could include tax incentives for production of SAF using oilseed feedstocks, such as those that have been proposed in the United States for biofuel production. These incentives are not provided at the farmer level, so it may require SAF producers to ensure that some of the benefits are passed on to farmers through favorable contract prices above what market prices may suggest (or indexing contract prices to market prices and setting a minimum guaranteed price). Though this analysis focused on the creation of new supplies through contracting, such policies may also create enough new demand to increase market prices to requisite levels to induce production in a spot market. Another option could be monetary incentives directed at farmers, though policymakers would need to be cognizant of any trade-based liabilities and conflicts with other federal programs.
While this study provides an important component in determining the feasibility of oilseed markets and supply chain for SAF production, additional research is needed. One issue that needs addressing is the updating of existing and creation of missing enterprise budgets for potential oilseed feedstocks. This analysis focused on canola because of the presence of reliable state-level budgets for this crop. However, canola budgets were not always available for every state and for those states with budgets some were not current. Updating and expanding the set of oilseed budgets would provide a more realistic model of the economic situation farmers may face. In addition, analysis for a particular oilseed could be improved with survey instruments focused solely on that crop. For this analysis, the primary objective of the survey instrument was to gain insights regarding a suite of potential feedstocks rotated with wheat. As such, stated-choice questions were framed in general for a nonspecific feedstock with respect to net returns to wheat. A more detailed analysis could pose stated-choice questions with respect to market prices for specific oilseed feedstocks of interest. In addition, a more narrowly focused survey could possibly bring in other oilseed specifics, such as yields, costs of production, etc.
Conclusions
This study examined the factors that drive willingness to produce oilseed feedstocks under contract for sustainable aviation fuel (SAF) and how different combinations of these factors could translate into actual acreage supplies. Analysis was conducted using farmer survey data from Kansas, North Dakota, Oklahoma, Oregon, South Dakota, and Washington. Factors included attributes of the oilseed under consideration and production-contract attributes. Farm-level probabilities for entering these contracts under various scenarios were estimated via random-parameter logistic-regression models. Separate models were estimated for Kansas, North Dakota, Oklahoma, and South Dakota, and for a Pacific Northwest region comprised of Oregon and Washington. Estimated probability models were then used to estimate canola-acreage supply curves for each state under multiple scenarios to examine the consequences of changing particular attributes. First, the analysis examined overall scenario favorability ("Low", "Medium", or "High"), which employed simultaneous changes to multiple attributes to make scenarios more (or less) favorable from the farmer perspective. Additional analyses examined the impacts of fluctuations in (1) wheat profitability and (2) the level to which biorefineries cost-share the variable costs of production. Finally, acreage-supply elasticities were estimated for all scenarios. Results from each model indicated that net returns to canola will have a crucial impact on the supply of contracted acreage. Results also suggested that refiners may be able to induce contract acceptance by offering farmer-friendly contract attributes, such as input cost sharing. Moreover, elasticity estimates indicated contract and scenario attributes would also impact the responsiveness of supplied acreages to canola prices. Overall, the results suggest that acquiring sufficient feedstock is likely to be the most feasible in Kansas and North Dakota, though smaller-scale SAF operations may be feasible in Oklahoma and Washington at historically high canola prices.
CFD analysis of heat transfer enhancement by wall mounted flexible flow modulators in a channel with pulsatile flow
The aim of the present study is to explore heat transfer and pressure drop characteristics in a pulsating channel flow due to wall-mounted flexible flow modulators (FFM). Cold air in pulsating fashion is forced to enter through the channel having isothermally heated top and bottom walls with one/multiple FFMs mounted on them. The dynamic conditions of pulsating inflow are characterized by Reynolds number, non-dimensional pulsation frequency, and amplitude. Applying the Galerkin finite element method in an Arbitrary Lagrangian-Eulerian (ALE) framework, the present unsteady problem has been solved. Flexibility (10⁻⁷ ≤ Ca ≤ 10⁻⁴), orientation angle (60° ≤ θ ≤ 120°), and location of FFM(s) have been considered in this study to find out the best-case scenario for heat transfer enhancement. The system characteristics have been analyzed by vorticity contours and isotherms. Heat transfer performance has been evaluated in terms of Nusselt number variations and pressure drop across the channel. Besides, power spectrum analysis of thermal field oscillation along with that of the FFM's motion induced by pulsating inflow has been performed. The present study reveals that a single FFM having flexibility of Ca = 10⁻⁵ and an orientation angle of θ = 90° offers the best-case scenario for heat transfer enhancement.
Introduction
Recent developments in computational power have made it considerably simpler to predict fluid-flow phenomena based on the conservation laws of fluid motion. In addition, there has been an increase in numerous innovative techniques for thermal enhancement. Fluid-structure interaction (FSI), which accounts for the hydrodynamic effect of fluid and material deformation, is one such technique. Lately, the research community has been quite interested in heat exchangers due to their widespread usage in industrial systems such as solar thermal technology, chemical processing, electrical device cooling, refrigeration systems, and so on. These systems, however, are not very thermally efficient on their own. Hence, various approaches for active or passive flow modulation have been introduced to facilitate heat transfer. FSI aids in this regard by modulating local vortices formed within the system and thereby promoting heat exchange. Meanwhile, pulsating flow provides another alternative means for thermal augmentation in case of flow through channels, pipes, or open/vented cavities. An overview of contemporary literature regarding application of flow modulators as well as flow pulsation for thermal performance enhancement of systems has been presented below.
Study on heat transfer enhancement incorporating rigid modulation
Thermal augmentation in a channel has received extensive study in numerous research works. Li et al. [1] explored heat transfer enhancement of a heated surface in an airflow channel with a vibrating piezo fan at Reynolds number, Re = 4000-30,000. They reported that when the Reynolds number was around Re = 13,000, thermal enhancement up to 147% was possible. Thermo-hydraulic assessment for Newtonian fluid flow in a corrugated channel under forced convection was carried out recently by Mehta and Pati [2]. They were able to achieve heat transfer enhancement as much as 428.93% from their corrugated channel when the dimensionless amplitude (α) and the wavelength (λ) of the wavy profile were α = 0.7 and λ = 0.5 respectively, at Re = 100. As a way to increase fluid mixing and subsequently heat transfer by vortex generation, various studies incorporated rigid modulation [3,4]. Consequently, rigid fins [5] and baffles [6,7] were added in two-or three-dimensional channel systems to analyze the impacts of such rigid devices on heat transfer. Thermal augmentation of about 10 times along with an increased friction factor of 100 times were achieved from the rigid fin-channel system described in Ref. [5] at Re = 500. Similarly, the addition of baffles in a channel system of [7] accrued an increased pressure drop around 183.3% despite improving heat transfer by 350% at a blockage ratio of 0.5.
Study on heat transfer enhancement incorporating flexible modulation
Even though the introduction of rigid modulators resulted in an appreciable amount of thermal improvement, a substantial pressure loss followed, rendering the above-mentioned thermal systems ineffective. Therefore, to counteract this pressure drop effect, researchers turned to FSI investigations using flexible flow modulators. Inevitably, a multitude of flexible devices, including flexible baffles [8] and oscillating flexible fins [9-13], were added in stationary/lid-driven enclosures in recent years for comprehensive study. Thermal enhancement up to 101.69%, 200%, 280%, 8.5%, and 3.29%, respectively, was possible from the systems of [8-12] when compared to the rigid modulator systems. Hence, these studies established that flexible flow modulators boosted heat transfer by stimulating vortex formation, which could be attributed to their inertial effects. Afterwards, Park et al. [14] were also able to enhance heat transfer by 160% when installing a flexible flag in a Poiseuille flow-channel, at the expense of considerable mechanical energy loss (around 246%). Eventually, membranes or partitions were introduced in square [15-17], circular [18], and trapezoidal [19] enclosures, and it was discovered that the increased FSI forces caused by the flexible membrane's deformation promoted forced convection. While exploring mixed convection in a cavity-channel arrangement, Sabbar et al. [20] discovered that the inclusion of flexible wall(s) facilitated heat transfer (around 17%). To assess the overall effectiveness of a partially compliant channel system, Ismail [21] opted to employ the widely used criterion known as the thermal-hydraulic enhancement criterion (TEC). For such a channel system he reported a maximum value of TEC around 1.278 at Cauchy number Ca = 10⁻⁷ and Re = 250.
Study on heat transfer enhancement incorporating flexible modulator's inclination
In addition to installing flexible modulators, their orientation plays a vital role in periodic vortex shedding as well as bulk fluid motion. Hence, Park [22] examined the influence of a flexible vortex generator's (VG) inclination angle and deduced that the increase in angle caused instabilities inside the system. This led to thermal enhancement of 117% accompanied by an increased energy loss of 179%. Following this, Song et al. [23], in their flat-plate film cooling study, found that cooling effectiveness became significant with the inclination of the VG. They reported a 27.4% increase in cooling effectiveness when the inclination angle of the VG was reduced from 40° to 20°. Ali et al. [24], while investigating the performance of a channel consisting of several inclined self-oscillating flexible vortex generators, found a 56% increase in thermal performance factor and a 134% increase in the overall heat transfer.
Study on heat transfer enhancement utilizing pulsation flow
Adding pulsation to the flow is another active approach to increase heat transfer besides the insertion of rigid/flexible flow modulation devices [25]. The impact of pulsating flow on heat removal from two heated blocks installed in a channel was studied by Kim and Kang [26]. They found that an increase in pulsating frequency, characterized by the Strouhal number (St), heavily affected thermal enhancement, and obtained up to 21% enhancement at St = 0.8 for the second block. Using a perturbation technique, Nield and Kuznetsov [27] explored forced convective heat transfer in a channel with a pulsatile flow and found that the phase and magnitude of the fluctuating Nusselt number changed with the dimensionless frequency. The effect of pulsating flow in channels of varying configurations [28-31] has also piqued the curiosity of several researchers recently. Later, from their investigation of forced convection in a wavy channel, Jafari et al. [32] reported that both the oscillation frequency and amplitude (A_pulse) played a crucial role in configuring flow and thermal fields. Consequently, they were able to obtain thermal enhancement up to 20% at St = 0.25 and A_pulse = 0.25. Selimefendigil and Oztop [33] studied the influence of pulsating flow over an adiabatic fin mounted on a backward-facing step and noticed heat transfer enhancement of 21% as well as an increased dimensionless pressure drop dP_m = 65 at St = 0.1 and A_pulse = 1.0. Later, Joshi et al. [34] incorporated twin flexible fins directly opposite to each other in a laminar pulsating channel flow and found the maximum Nusselt number with the flexible plate to be about 100% larger as compared to the channel without a fin. Eventually, a magnetic field was introduced by Kolsi et al. [35] to demonstrate its influence on pulsating nanofluid flows in a corrugated bifurcating channel. Similarly, Hamzah and Sahin [36] analyzed pulsating SWCNT-water nanofluid flow in a wavy channel. Recently, in an effort to experimentally study heat transfer augmentation in corrugated ducts, Tokgoz and Sahin [37] employed the Particle Image Velocimetry (PIV) technique to determine the influence of phase shifts (0° and 90°) on the flow behavior. Afterwards, to experimentally explore convective heat transfer performance and hydrodynamics using the PIV method, Zontul and Sahin [38] introduced pulsating flow to their grooved channel. They inferred that pulsating flow improves interaction between the wake flow in the groove and the mainstream, which resulted in increased heat transfer capability of the system due to better flow entrainment.
Scope of the present work
Although numerous studies discretely delineate the impact of flexible elements and flow pulsation on thermal enhancement, the cumulative effects of these phenomena have received little to no attention. Current investigation explores the impact of the orientation, flexibility, and various configurations of the flexible modulator under pulsating channel flow conditions. Moreover, influence of the pulsation frequency of the flow on the thermal as well as tip deflection frequency has been thoroughly analyzed with the help of power spectrum analysis. Also, the influences of flexibility, orientation, and configurations of the modulator on overall system performance have been given paramount importance.
The present computational fluid dynamics study incorporates the application of pulsating flow along with flow modulation by a wall-mounted FFM. To accurately capture the motion of the FFM, the ALE framework [39,40] along with finite element analysis has been implemented. Fig. 1 depicts the setup of the current problem alongside the boundary conditions in a two-dimensional Cartesian system of coordinates represented by (x*, y*). In essence, the channel under study is a conduit of length L*_c = 16H [22,32] and width W*_c = 2H. The computational rectangular domain spans from (0, −H) to (L*_c, H), where H is the characteristic length and denotes half of the channel width. A FFM of length L*_f = 0.8H is installed on the bottom wall at various orientation angles (θ). The base of the modulator is secured at a distance d* = 2H, and its free end is susceptible to deflection because of the fluid's dynamic action by the incoming flow. It should be noted that air is considered as the working fluid for the present study.
Governing equations and assumptions
The fluid flow is considered to be laminar, two-dimensional, incompressible, Newtonian, and unsteady, and the thermo-physical properties of the working fluid are presumed to be constant. The influences of buoyancy, magnetic field, thermal radiation, Joule heating, and viscous dissipation are ignored. By implementing the well-known ALE approach and applying the foregoing assumptions, the dimensional governing equations (1)-(4) are as follows.

For the fluid domain,

Continuity equation:

$\nabla^* \cdot \mathbf{u}^* = 0$ (1)

Momentum equation:

$\partial \mathbf{u}^*/\partial t^* + ((\mathbf{u}^* - \mathbf{w}^*) \cdot \nabla^*)\,\mathbf{u}^* = -(1/\rho_f)\,\nabla^* p^* + \nu_f \nabla^{*2}\mathbf{u}^*$ (2)

Energy equation:

$\partial T^*/\partial t^* + (\mathbf{u}^* - \mathbf{w}^*) \cdot \nabla^* T^* = \alpha_f \nabla^{*2} T^*$ (3)

For the elastodynamic domain,

Displacement equation:

$\rho_s\, \partial^2 \mathbf{d}^*_s/\partial t^{*2} - \nabla^* \cdot \boldsymbol{\sigma}^* = \mathbf{F}^*_v$ (4)

where σ* denotes the stress tensor and F*_v represents the body force vector acting on the mass of the modulator. Since the FFM is sufficiently thin and free from the effect of buoyancy force, the associated body force can be ignored [8,10]. Besides, w* indicates the velocity of the moving coordinate, u* = (u*, v*) denotes the fluid velocity vector, and d*_s indicates the solid displacement vector. Also, p*, T*, α_f, ν_f, and t* represent the fluid's pressure, temperature, thermal diffusivity, kinematic viscosity, and time respectively. The stress tensor σ* exerted on the FFM due to the fluid flow can be calculated by equation (5) as follows:

$\boldsymbol{\sigma}^* = J^{-1}\,\mathbf{F}\,\mathbf{S}\,\mathbf{F}^{T}$ (5)

where F = (I + ∇*d*_s), J = det(F), I represents the identity matrix, and S is the Piola-Kirchhoff stress tensor that can be derived from the induced strain ε from equation (6) as follows:

$\mathbf{S} = \mathbf{C} : \boldsymbol{\varepsilon}, \qquad \boldsymbol{\varepsilon} = \tfrac{1}{2}\big(\nabla^* \mathbf{d}^*_s + (\nabla^* \mathbf{d}^*_s)^T + (\nabla^* \mathbf{d}^*_s)^T\, \nabla^* \mathbf{d}^*_s\big)$ (6)

where C = C(E, ν) denotes the elasticity tensor, with E and ν indicating the elastic modulus and Poisson's ratio respectively. Also, the colon ':' represents the double-dot tensor product. By implementing dimensional analysis, the nondimensional governing equations (7)-(10) can be derived as follows [11].

For the fluid domain,

$\nabla \cdot \mathbf{u} = 0$ (7)

$\partial \mathbf{u}/\partial t + ((\mathbf{u} - \mathbf{w}) \cdot \nabla)\,\mathbf{u} = -\nabla p + (1/Re)\,\nabla^2 \mathbf{u}$ (8)

$\partial T/\partial t + (\mathbf{u} - \mathbf{w}) \cdot \nabla T = (1/(Re\,Pr))\,\nabla^2 T$ (9)

For the elastodynamic domain,

$(\rho_s/\rho_f)\, \partial^2 \mathbf{d}_s/\partial t^2 - \nabla \cdot \boldsymbol{\sigma} = \mathbf{F}_v$ (10)

Dimensionless parameters and variables have been derived through the normalization given in equation (11):

$x = x^*/H,\; y = y^*/H,\; \mathbf{u} = \mathbf{u}^*/u_o,\; \mathbf{w} = \mathbf{w}^*/u_o,\; t = t^* u_o/H,\; p = p^*/(\rho_f u_o^2),\; T = (T^* - T^*_{in})/(T^*_w - T^*_{in}),\; Re = u_o H/\nu_f,\; Pr = \nu_f/\alpha_f,\; Ca = \rho_f u_o^2/E$ (11)
System boundary and initial conditions
For proper depiction of the studied geometry, the boundary conditions of flow, thermal and displacement fields have been tabulated in Table 1 in non-dimensional form. The dimensionless flow frequency is modelled by Strouhal number St = fH/u o , where f is the dimensional pulsatile frequency. Additionally, the term A represents the amplitude of inflow oscillation.
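As a concrete illustration of the pulsatile inflow, a sinusoidal modulation of the mean inlet velocity is a common choice. The exact waveform is not restated here, so the form below is an assumption consistent with the stated St and A:

```python
import numpy as np

# Sketch of a pulsatile inlet: mean velocity u_o modulated at Strouhal number St
# with amplitude A, i.e. u(t) = u_o * (1 + A * sin(2*pi*St*t)). This sinusoidal
# form is an assumption; the paper defines St = f*H/u_o and A but not the waveform.
u_o, A, St = 1.0, 0.5, 0.20   # dimensionless mean inlet velocity, amplitude, frequency

def inlet_velocity(t):
    return u_o * (1.0 + A * np.sin(2.0 * np.pi * St * t))

t = np.linspace(0.0, 10.0, 1001)
u_in = inlet_velocity(t)
print(u_in.min(), u_in.max())  # oscillates between 0.5 and 1.5 for A = 0.5
```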
Vorticity
To study the hydrodynamics and flow structure of the system, the vorticity field of the fluid flow has been analyzed [37,38]. The z-component of vorticity, ω_z, has been evaluated using equation (12), given as:

$\omega_z = \partial v/\partial x - \partial u/\partial y$ (12)
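A minimal sketch of evaluating Eq. (12) on a uniform grid via central differences; the grid dimensions and the parabolic test field below are made up for illustration:

```python
import numpy as np

# z-vorticity from velocity components sampled on a uniform grid.
def vorticity_z(u, v, dx, dy):
    dv_dx = np.gradient(v, dx, axis=1)   # central differences in x
    du_dy = np.gradient(u, dy, axis=0)   # central differences in y
    return dv_dx - du_dy

ny, nx = 64, 512
y = np.linspace(-1, 1, ny)[:, None]
u = 1.0 - y**2 + np.zeros((ny, nx))      # parabolic channel profile as a test field
v = np.zeros((ny, nx))
omega = vorticity_z(u, v, dx=16/(nx-1), dy=2/(ny-1))  # expect omega ~ 2*y
```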
Performance parameters
To investigate the heat transfer characteristics of the model, the following Nusselt numbers have been evaluated utilizing equations (13)-(15):

$Nu_x(x, t) = -\left.\partial T/\partial y\right|_{\text{wall}}$ (13)

$Nu(t) = \frac{1}{L_c}\int_0^{L_c} Nu_x(x, t)\, dx$ (14)

$Nu_{avg} = \frac{1}{t_p}\int_{t}^{t+t_p} Nu(t)\, dt$ (15)

Here, Nu_x(x,t) is the local Nusselt number, Nu(t) is the spatially averaged Nusselt number, and Nu_avg is the time-averaged Nusselt number over a period of oscillation t_p.
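A minimal sketch of Eqs. (13)-(15): a one-sided wall-normal temperature gradient gives the local Nusselt number, and trapezoidal rules perform the spatial and temporal averages. The inputs (grid temperatures and coordinates) are assumed to come from the solver output:

```python
import numpy as np

def nu_local(T_adjacent, T_wall, dy):
    """Eq. (13): Nu_x = -dT/dy at the heated wall (first-order one-sided)."""
    return -(T_adjacent - T_wall) / dy

def nu_space_avg(nu_x, x):
    """Eq. (14): average of Nu_x over the channel length."""
    return np.trapz(nu_x, x) / (x[-1] - x[0])

def nu_time_avg(nu_t, t):
    """Eq. (15): average of Nu(t) over one pulsation period."""
    return np.trapz(nu_t, t) / (t[-1] - t[0])
```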
The FFM experiences varied pressure drops as it deflects under fluctuating flow conditions. This pressure drop is evaluated by equations (16) and (17) as follows:

$P_{in,avg} = \frac{1}{t_p}\int_{t}^{t+t_p} P_{in}(t)\, dt, \qquad P_{out,avg} = \frac{1}{t_p}\int_{t}^{t+t_p} P_{out}(t)\, dt$ (16)

$\Delta P = P_{in,avg} - P_{out,avg}$ (17)

Here, P_in(t) is the inlet pressure, P_in,avg and P_out,avg are time-averaged inlet and outlet pressures respectively, and ΔP is the pressure drop.
By considering this significant pressure drop, the overall thermal performance can be computed utilizing the enhancement ratio (ER), pressure ratio (PR), and performance factor (PF), following [2,21,24], by implementing equation (18):

$ER = \frac{Nu_{avg}}{Nu_{avg,o}}, \qquad PR = \frac{\Delta P}{\Delta P_o}, \qquad PF = \frac{ER}{PR^{1/3}}$ (18)

where the subscript 'o' represents a system without any FFM.
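The three metrics of Eq. (18) in code; the cube-root normalization of PR follows the standard thermal-hydraulic performance criterion cited above, and the numbers passed in are illustrative:

```python
# Overall-performance metrics of Eq. (18).
def performance(nu_avg, nu_avg_o, dp, dp_o):
    er = nu_avg / nu_avg_o          # enhancement ratio
    pr = dp / dp_o                  # pressure ratio
    pf = er / pr ** (1.0 / 3.0)     # performance factor
    return er, pr, pf

print(performance(9.5, 7.0, 0.30, 0.22))  # hypothetical FFM vs. bare-channel values
```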
Numerical methodology
The Galerkin finite element approach [39] has been utilized to model the nonlinear governing partial differential equations (1)-(4) and their respective dimensionless boundary conditions. A non-uniform grid incorporating boundary mesh refinement and composed of triangular and quadratic elements is used to discretize the computational domain, as portrayed partially in Fig. 2. The Arbitrary Lagrangian-Eulerian [40] method has been employed to adopt the moving mesh condition for capturing the oscillations and displacement of the FFM. To iteratively solve the residual finite element equations in each time-step (a constant time-step of 10⁻²), the Newton-Raphson method is applied. The numerical solution is assumed to have converged when the sum of the residuals for all parameters is less than or equal to 10⁻⁷. The numerical solution to the current problem is obtained using the finite element method-based software package "COMSOL Multiphysics 6.0". From Table 2, it can be seen that with the increase of domain elements, the value of average Nusselt number becomes almost constant. However, taking the accuracy and cost of computation into account, a non-uniform grid of 40,712 elements has been established as the optimum grid arrangement to acquire results throughout the study.
Validation of model
To justify the FSI phenomenon, the current framework has been validated against the outcomes of Sabbar et al. [20] in terms of the variation of average Nusselt number, as depicted in Fig. 3(a) for a channel-cavity assembly. The cavity of that model consists of an elastic side wall and a heating source at the bottom wall. It is fairly evident that the findings from the current model and those documented by Sabbar et al. [20] are in good agreement. Moreover, a second validation against the corrugated channel model of Mehta and Pati [2] has justified the forced convective heat transfer in the current model. That study examined forced convective heat transfer in a channel having isothermally heated wavy upper and lower walls. It can be observed in Fig. 3(b) that the average Nusselt number evaluated using the current model completely conforms with that of Mehta and Pati [2].
Results and discussions
This study has been carried out to investigate the influence of a FFM in a channel with pulsating air (Pr = 0.71) flowing through it. Hence, the modulator's flexibility, which is denoted by Ca, has been varied within the range of 10⁻⁷ ≤ Ca ≤ 10⁻⁴. The influence of the orientation angle of the FFM with respect to the bottom wall has also been investigated. Three orientation angles of θ = 60°, 90°, and 120° have been considered in the current study. Moreover, this study has explored the impact of double FFM on the flow field as compared to single FFM. Overall, three different configurations based on the presence of FFM have been explored, as shown in Table 3. Throughout the study, the dynamic parameters of the flow field have been kept constant at Reynolds number Re = 200, nondimensional pulsating frequency in terms of Strouhal number St = 0.20, and flow amplitude A = 0.5. All the outcomes have been evaluated at t > 30, since the system reaches a statistically steady-state condition after t = 30 for all the cases under consideration, as presented in Fig. 4.
Impact of the flexibility of FFM
Flexibility of a flexible flow modulator is defined by the Cauchy number, which is inversely proportional to the elastic modulus. To investigate the influence of the flexibility of the FFM, Ca has been varied from 10⁻⁷ to 10⁻⁴ for a single bottom-wall-mounted FFM oriented at θ = 90°. Vortices and isotherms for various values of Ca have been portrayed in Fig. 5. As can be seen, periodic vortices are generated in the downstream of the channel as a result of the interaction between the modulator and the pulsating fluid flow. Since the FFM is highly flexible at Ca = 10⁻⁴, the dynamic force of the flow field prevails over the restoring force of the modulator. As a result, the modulator substantially leans toward the bottom wall. Furthermore, this flexible FFM weakly shears the incoming fluid, resulting in the development of weak vortices. These vortices sweep along the bottom wall without promoting enough mixing and fade out very quickly. Consequently, a thicker thermal boundary layer is observed in the corresponding isotherm for Ca = 10⁻⁴. As the value of Ca decreases to 10⁻⁵, and owing to the higher restoring force, the FFM bends less toward the bottom wall and shears the incoming fluid strongly. Stronger vortices are generated at the mid-section of the channel, which promotes fluid mixing both at the upper and the lower walls. Hence, heat transfer increases and thinner thermal boundary layers are formed. Further decrease of Ca to 10⁻⁶ makes the FFM almost rigid. Therefore, very strong vortices are generated at the downstream and heat transfer enhances greatly, as depicted by the corresponding isotherm.
To demonstrate the influence of the FFM on heat transfer and structural deformation, temporal variations of Nu(t) along with y_tip (dimensionless vertical displacement of the FFM's tip) have been portrayed in Fig. 6. Both parameters are oscillatory in nature due to the pulsating flow condition. Nu(t), as expected, becomes higher at a lower Ca, which can be justified by the presence of stronger vortices. In addition, fluctuations of y_tip also diminish at a lower value of Ca due to the increased rigidity of the FFM. Power spectrum analysis aided by fast Fourier transform (FFT) has been implemented for the corresponding thermal and tip deflection frequencies, as shown in Fig. 7. From the observation of the highest peak of the FFT plots, it can be stated that both thermal and tip deflection frequencies are dominated by the pulsation frequency at St = 0.20. Since the FFM generates hardly any oscillations at Ca = 10⁻⁶, the dominant peak is barely visible.
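A minimal sketch of this power-spectrum step: an FFT of a uniformly sampled series such as Nu(t) or y_tip(t), reported against dimensionless frequency so the pulsation peak at St = 0.20 stands out. The synthetic signal below stands in for the actual solver output:

```python
import numpy as np

dt = 1e-2                                   # constant time step used in the study
t = np.arange(30.0, 80.0, dt)               # statistically steady window (t > 30)
signal = 8.0 + 0.6 * np.sin(2 * np.pi * 0.20 * t)  # toy Nu(t) with St = 0.20 content

s = signal - signal.mean()                  # remove mean so the peak is not at f = 0
freq = np.fft.rfftfreq(s.size, d=dt)
power = np.abs(np.fft.rfft(s)) ** 2
print(f"dominant frequency = {freq[power.argmax()]:.2f}")  # ~0.20
```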
Since installation of an auxiliary device results in increased pressure drop, an overall performance analysis has been presented in Fig. 8 in terms of ER, PR, and PF. From Fig. 8(a), it can be observed that the value of ER rises with lower values of Ca until the increment becomes negligible beyond Ca = 10⁻⁶. Therefore, beyond Ca = 10⁻⁶, the modulator effectively becomes rigid, and the elasticity of the modulator has no further effect on heat transfer enhancement. The PR plot in Fig. 8(b) shows a similar trend, indicating that pressure drop increases as the flexibility of the FFM reduces. Since a flexible FFM bends while oscillating, the region above it gets bigger. Hence, fluid flows through the channel with less obstruction, which leads to lower pressure loss. On the other hand, for Ca beyond 10⁻⁶, fluid flows through the narrower region above the rigid FFM, which results in a significant drop in pressure. ER and PR have essentially opposite impacts on the system performance. Therefore, the performance factor has been evaluated considering both heat transfer enhancement and pressure drop, as illustrated in Fig. 8(c). Although greater heat transfer enhancement is achieved when the FFM becomes more rigid, pressure drop increases dramatically, which leads to the requirement of more pumping power. Thus, the requirement of more mechanical energy offsets the enhancement in thermal performance. Consequently, a lower PF is noticed when the FFM gets more rigid. On the other hand, due to very poor heat transfer, a highly flexible FFM also has a lower PF. Hence, the maximum performance factor is obtained at an optimum value of flexibility of about Ca = 10⁻⁵.
Influence of FFM's orientation
The influence of the FFM at orientation angles of θ = 60°, 90°, and 120° has been explored in this section while keeping Ca fixed at 10⁻⁵. From the visualization of vortices and isotherms in Fig. 9, it is apparent that when θ increases from 60° to 120°, the induced vortices get stronger. At θ = 60°, the FFM is oriented along the flow direction and the obstruction to fluid flow caused by the FFM is relatively small. Therefore, the generated vortices are comparatively weaker. When the value of θ increases to 90°, the region above the FFM becomes narrower. The incoming fluid is strongly sheared by the tip of the FFM, resulting in stronger vortices. When θ = 120°, the FFM is oriented directly against the flow direction of the incoming fluid. As a result, the incoming fluid is severely impeded, leading to greater shearing and stronger vortices. Fluid mixing intensifies with the generation of stronger vortices, which in turn causes thinning of the thermal boundary layer. From the isotherms in Fig. 9, it can be observed that the thinnest thermal boundary layer is achieved at θ = 120°, and higher heat transfer is obtained at a larger value of θ.
To further investigate the trend of system performance observed in Fig. 9, temporal variations of Nu(t) and y_tip are portrayed in Fig. 10 for different values of θ. Both Nu(t) and y_tip show their usual oscillatory nature. When the value of θ increases, the curve of Nu(t) shifts upward, since the heat transfer rate is enhanced as the FFM tilts more towards the entrance of the channel. In the case of y_tip, the amplitude of fluctuation for θ = 60° is maximum since the FFM is placed along the direction of the fluid flow. However, when θ = 120°, as the FFM starts moving downward from its maximum position, a secondary fluctuation is introduced by the incoming fluid for a brief moment. Hence, the fluctuations for θ = 120° deviate strongly from the sinusoidal pattern compared to the others. This phenomenon leads to the formation of stronger vortices at θ = 120° and thus the highest heat transfer. To observe the characteristics of the thermal and flow-induced frequencies of the FFM, power spectrum analysis corresponding to the plots of Nu(t) and y_tip has been performed and is demonstrated in Fig. 11. For both Nu(t) and y_tip, the dominant peak is prominent at the pulsating flow frequency of 0.20, whereas the non-dominant peaks are very insignificant. However, there is an exception in the y_tip plot for θ = 120°, where a significant non-dominant peak at a frequency of 0.40 can be observed. This is because, for θ = 120°, a secondary fluctuation is induced by the oscillation of the tip deflection of the FFM.
Overall performance analysis of the FFM at various orientation angles is presented in Fig. 12. It is observed that both ER and PR show an ascending trend with increasing value of θ. The increasing trend of ER in Fig. 12(a) can be justified by the presence of stronger vortices at larger values of θ, as portrayed in Fig. 9. Two different rising trends of PR are observed for θ between 60° and 120° in Fig. 12(b). When θ goes from 60° to 90°, PR rises slowly at first, but it then rises dramatically as θ increases to 120°. When the FFM is positioned vertically (θ = 90°), the area above it becomes smaller. However, it oscillates at a lower angle due to its flexibility, which makes the area above it larger. Hence, the pressure drop is about the same as in the case of θ = 60°. When the FFM is installed at the θ = 120° position, it substantially restricts the flow of the incoming fluid; consequently, there is a substantial drop of pressure at θ = 120°. Fig. 12(c) demonstrates the overall performance factor after taking into consideration both the improvement of heat transfer and the induced pressure drop. The value of PF rises until θ = 90° and then sharply decreases at θ = 120°. The highest value of PF is obtained at about θ = 90°, since a higher ER is obtained with a comparatively lower PR. On the other hand, because of the enormous drop in pressure, the value of PF is found to be minimum when the FFM is placed at θ = 120°.
Comparisons of various FFM configurations
In this section, the channel with the various FFM configurations listed in Table 3 has been investigated. For all cases, a modulator with Ca = 10⁻⁵ has been installed vertically (θ = 90°). Vortices and isotherms for the different cases are depicted in Fig. 13. For Case 1, only a thin sheet of vorticity can be observed, since there is no FFM in the channel to facilitate vorticity generation. Hence, fluid mixing does not occur significantly, which leads to the formation of a very thick thermal boundary layer. In Case 2, the FFM placed at the bottom wall sheds vortices near the bottom wall, which in turn induce counter vortices near the top wall. For this case, a very thin thermal boundary layer is observed at both the top and bottom walls due to bulk fluid mixing. However, the thermal boundary layer at the bottom wall is comparatively thinner than that at the top wall. Therefore, it can be said that the channel wall on which the FFM is installed has a greater temperature gradient.
From the discussion so far, it can be inferred that the installation of a single FFM in a channel improves fluid mixing and heat transfer by a significant amount. For further investigation, Case 3 has been considered to assess the effects of simultaneously installing top and bottom wall-mounted FFMs. In the case of a single FFM, the vortices it sheds induce counter vortices at the opposing wall. However, in a channel with top and bottom wall-mounted modulators, the vortices at the walls are formed by direct solid-fluid interactions. As a result, vortices of equal strength are formed at both the top and bottom walls. Consequently, fluid mixing is enhanced equally on both walls. Hence, thin thermal boundary layers of the same thickness can be noticed in the corresponding isotherm plots. However, a thick thermal boundary layer is observed near the FFM placed at the farther position in Case 3. This is due to the previously produced vortex of the prior FFM obstructing the vortex shed by the later FFM, thereby impeding fluid mixing.
Overall performance analysis of these three configurations is presented in Fig. 14 in terms of ER, PR, and PF. Fig. 14(a) shows that channels with modulators always achieve higher heat transfer than the channel without any. Among all cases, the channel with the double FFM offers the maximum heat transfer enhancement. However, a significant pressure drop offsets this outcome, as can be seen in Fig. 14(b). One can notice that the pressure drop in the double-FFM configuration is almost twice that of the single-FFM case. Since the fluid flow is highly impeded by the top and bottom FFMs, a significant amount of pumping work would be required to deliver the fluid through the channel. From the PF plot shown in Fig. 14(c), it can be asserted that the best thermal performance, about 10% above the plain channel, is attained with a bottom-wall-mounted FFM. Due to the huge pressure drop in Case 3, its contribution to heat transfer enhancement becomes expensive. In fact, PF drops below 1 for Case 3, which indicates very poor efficiency of the system. Therefore, it can be concluded that a single bottom-wall-mounted FFM with a flexibility of Ca = 10⁻⁵ and an orientation angle of 90° offers the best-case scenario for heat transfer enhancement.
Conclusions
The current study explores the impact of a flexible flow modulator on heat transfer enhancement in a straight channel with isothermally heated upper and lower walls subjected to pulsating air flow. The fluid flow is considered to be two-dimensional, laminar, incompressible, and Newtonian, and the thermo-physical properties of the working fluid are presumed to be constant. The non-dimensional flow amplitude and pulsation frequency have been kept constant at A = 0.5 and St = 0.20, respectively. The Galerkin finite element method in an Arbitrary Lagrangian-Eulerian framework has been applied to numerically solve the present unsteady problem. This study has thoroughly investigated the impacts of flexibility (10⁻⁷ ≤ Ca ≤ 10⁻⁴), orientation angle (60° ≤ θ ≤ 120°), and different configurations of the FFM. The flow field has been visualized with the help of vorticity contour and isotherm plots. Moreover, power spectrum analysis has been performed for interpreting the induced thermal and tip-deflection frequencies. Overall performance analysis has also been carried out by accounting for both the heat transfer enhancement and the pressure drop across the channel.
The following remarks can be drawn from the present study:
• An FFM with an optimum flexibility of Ca = 10⁻⁵ is most suitable for thermal augmentation in a channel. A highly rigid or highly flexible modulator deteriorates the performance of the system.
• The frequencies of the thermal field and the induced tip deflection of the FFM conform to the imposed pulsating inflow frequency (St = 0.20).
• Among the various orientation angles of the FFM, a vertically oriented FFM (θ = 90°) offers better heat transfer enhancement with a reasonable pressure drop.
• Although the channel with the double FFM achieves the highest heat transfer, the corresponding pressure drop is huge. As a result, the overall performance of the system drastically deteriorates.
• The channel with a single bottom-wall-mounted FFM achieves the best overall performance among the various configurations, which is about 10% higher compared to a channel without a modulator.
Author contribution statement
Arpita Das: Fahim Tanfeez Mahmood: Rabeya Bosry Smriti: Performed the experiments; Analyzed and interpreted the data; Wrote the paper.
Sumon Saha: Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. Mohammad Nasim Hasan: Conceived and designed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Data availability statement
Data will be made available on request.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Antimicrobial resistance patterns of Staphylococcus species isolated from cats presented at a veterinary academic hospital in South Africa
Background Antimicrobial resistance is becoming increasingly important in both human and veterinary medicine. This study investigated the proportion of antimicrobial-resistant samples and the resistance patterns of Staphylococcus isolates from cats presented at a veterinary teaching hospital in South Africa. Records of 216 samples from cats that were submitted to the bacteriology laboratory of the University of Pretoria academic veterinary hospital between 2007 and 2012 were evaluated. Isolates were subjected to antimicrobial susceptibility testing against a panel of 15 drugs using the disc diffusion method. Chi-square and Fisher's exact tests were used to assess simple associations between antimicrobial resistance and age group, sex, breed, and specimen type. Additionally, associations between Staphylococcus infection and age group, breed, sex, and specimen type were assessed using logistic regression. Results Staphylococcus spp. isolates were identified in 17.6% (38/216) of the samples submitted, and 4.6% (10/216) of these were unspeciated. The majority (61.1%, 11/18) of the isolates were from skin samples, followed by otitis media (34.5%, 10/29). Coagulase-positive Staphylococcus (CoPS) comprised 11.1% (24/216) of the samples, of which 7.9% (17/216) were S. intermedius group and 3.2% (7/216) were S. aureus. Among the coagulase-negative Staphylococcus (CoNS) (1.9%, 4/216), S. felis and S. simulans each constituted 0.9% (2/216). There was a significant association between Staphylococcus spp. infection and specimen type, with the odds of infection being higher for ear canal and skin compared to urine specimens. Higher proportions of samples were resistant to clindamycin (34.2%, 13/38), ampicillin (32.4%, 12/37), lincospectin (31.6%, 12/38), and penicillin-G (29.0%, 11/38). Sixty-three percent (24/38) of Staphylococcus spp. were resistant to one antimicrobial agent and 15.8% were multidrug resistant (MDR). MDR was more common among S. aureus (28.6%, 2/7) than S. intermedius group isolates (11.8%, 2/17). One S. intermedius group isolate was resistant to all β-lactam antimicrobial agents tested. Conclusion The S. intermedius group was the most common cause of skin infections, and antimicrobial resistance was not widespread among cats presented at the veterinary academic hospital in South Africa. However, the presence of MDR Staphylococcus spp. and isolates resistant to all β-lactams is of both public health and animal health concern.
Background
Although Staphylococcus spp. are commensals of the skin, mucous membranes, and alimentary and urogenital tracts of a diverse group of mammals and birds, they have been implicated in clinical infections of humans and animals [1-3]. Transmission of Staphylococcus between animals and humans is known to occur [1,4]. Cats have been reported as carriers of both coagulase-positive (CoPS) and coagulase-negative Staphylococcus species (CoNS) [2,3,5-7]. However, coagulase-positive Staphylococcus infections seem to be more prominent in feline medicine than CoNS infections [1]. Among the CoPS species in cats, S. pseudintermedius is the most common, followed by S. aureus [5,8]. These infections have been associated with pyoderma, postoperative wound infections, and otitis [9]. In addition, S. felis is a cause of urinary tract infections [10].
Although resistance to β-lactam antimicrobials among Staphylococcus isolates from cats has been reported [6,8], other antimicrobial agents such as gentamycin, enrofloxacin and doxycycline have been reported to be effective against Staphylococcus infections in cats [5,11,12]. However, information on the proportion of antimicrobial resistant isolates and resistance patterns of Staphylococcus species in clinical cases of cats in developing economies in general and South Africa in particular is very limited. Therefore, the objective of this study was to investigate the proportion of antimicrobial resistant isolates and resistance patterns among Staphylococcus species isolates from cat samples submitted to a veterinary academic hospital in South Africa between 2007 and 2012.
Data collection
Data containing records of cat samples submitted to the University of Pretoria Bacteriology Laboratory at the Veterinary Teaching Hospital in South Africa between January 2007 and December 2012 for routine diagnostic tests were evaluated. The following variables were captured: breed, age, sex, specimen type, Staphylococcus species isolated, antimicrobials included in the antimicrobial susceptibility test panel, and the susceptibility profile of the isolates.
Staphylococcus identification and antimicrobial susceptibility testing
Culture of samples was done using sheep blood agar incubated at 37°C for at least 24 h. All media used were quality controlled using S. aureus ATCC 25923. Suspected Staphylococcus colonies were identified based on phenotypic characteristics including colony characteristics, catalase, D-mannitol, maltose, deoxyribonuclease (DNase) tests, polymyxin-B and Gram-staining as described by Quinn [13]. S. intermedius and S. delphini were classified as S. intermedius group (SIG) as described by Sasaki et al. [14].
Data analysis
All statistical analyses were performed using the SAS 9.4 (SAS Institute Inc., Cary, NC, USA) statistical package. The dataset was assessed for missing data and inconsistencies such as improbable values. The Shapiro-Wilk test of normality was used to evaluate the distribution of age, which was found to be non-normally distributed; hence, the median and interquartile range were reported. Age was also categorized into two categories: <2 years and ≥2 years. The frequencies and proportions of all categorical variables were calculated and presented in a table. Associations between antimicrobial resistance of Staphylococcus spp. isolates and a number of host factors (breed, age, sex, specimen type) and other categorical variables were assessed using the Chi-square and Fisher's exact tests. Statistical significance was assessed using a critical p-value of 0.05. The variables specimen type and breed had too many categories to include in the model in their original form and hence were re-coded (Table 1).
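As an illustration of these simple association tests, the sketch below runs a chi-square and a Fisher's exact test on a 2×2 table with SciPy; the counts are hypothetical, not the study's data.

```python
# Sketch of the univariate association tests described above, using SciPy.
# The 2x2 counts are hypothetical, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# rows: resistant / susceptible; columns: <2 years / >=2 years (hypothetical)
table = np.array([[8, 5],
                  [12, 13]])

chi2, p_chi2, dof, _ = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)   # preferred when cell counts are small

print(f"chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}")
```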
Univariable and multivariable models
Investigation of the predictors of Staphylococcus spp. infection was done in two steps. In the first step, univariable logistic regression models were fitted to assess the relationships between sex, age, specimen type, and breed, and the outcome variable, Staphylococcus status. Potential predictors of Staphylococcus spp. infection at this stage were identified using a relaxed α ≤ 0.20; thus, variables with p ≤ 0.20 in the univariable models were considered for inclusion in the multivariable model. The second step involved fitting a multivariable logistic regression model using a manual backwards selection method with the significance level set at α ≤ 0.05.
Confounding was assessed by comparing the change in model coefficients with and without the suspected confounders. If the removal of a suspected confounding variable resulted in a 20% or greater change in another model coefficient, the removed variable was considered a confounder and retained in the model regardless of its statistical significance. In addition, two-way interaction terms between the variables in the final main-effects model were assessed.
Odds ratios (ORs) and their 95% confidence intervals were computed for variables included in the final model. The differences between categories of statistically significant predictors for Staphylococcus spp. were also assessed by changing the reference categories of the predictors. Hosmer-Lemeshow goodness-of-fit test was used to assess model fit.
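A minimal sketch of this two-step screening with statsmodels follows; the data are simulated, the relaxed α = 0.20 filter mirrors the description above, and the variable names are placeholders.

```python
# Sketch of the two-step screening: univariable fits at a relaxed alpha <= 0.20,
# then a multivariable model; data are simulated, not the study's records.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 216
df = pd.DataFrame({
    "staph": rng.integers(0, 2, n),                       # 0/1 outcome
    "sex": rng.choice(["M", "F"], n),
    "age_group": rng.choice(["<2", ">=2"], n),
    "specimen": rng.choice(["urine", "skin", "ear"], n),
})

candidates = []
for var in ["sex", "age_group", "specimen"]:
    fit = smf.logit(f"staph ~ C({var})", data=df).fit(disp=0)
    if fit.pvalues.drop("Intercept").min() <= 0.20:       # relaxed screening alpha
        candidates.append(var)

if candidates:
    final = smf.logit("staph ~ " + " + ".join(f"C({v})" for v in candidates),
                      data=df).fit(disp=0)
    print(np.exp(final.conf_int()))   # 95% CIs on the odds-ratio scale
```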
Predictors of Staphylococcus infections
Based on the univariable logistic models, only sex and specimen type stood out as potential predictors of Staphylococcus spp. infection based on the liberal α ≤ 0.20 (Table 5). Thus, only these two variables were assessed in the multivariable model. In the final model, only specimen type was significantly associated with Staphylococcus spp. infection based on α ≤ 0.05. The odds of testing positive for Staphylococcus spp. infection were significantly higher among ear canal (p = 0.0002) and skin samples (p < 0.0001) than urine samples (Table 6). However, there was no significant difference in the odds of Staphylococcus spp. infection between skin and ear canal samples (Table 7).
Discussion
The aim of this study was to investigate the proportion of antimicrobial-resistant isolates and the resistance patterns of Staphylococcus spp. isolated from clinical samples obtained from cats admitted to a veterinary academic hospital; most past studies have focused on carriage rather than infections [8,17]. Similar to findings from other studies [8,17], in this study we observed that skin and ear canal samples had significantly higher odds of testing positive for Staphylococcus spp. than other samples. These results suggest that Staphylococcus spp. are a major cause of skin-related infections in cats [18-20]. Although there tended to be a higher proportion of Staphylococcus spp. isolated from domestic short hair breeds, the final model indicated no significant association between breed and the odds of Staphylococcus spp. infection. However, the lack of a significant association might be due to the small sample size involved in this study. It is worth noting that there is evidence that certain diseases are more common in certain breeds of cats, and we suspect that this might be the case with Staphylococcus infections [17,21].
Consistent with other studies [3,5,17,19], we observed a higher percentage of CoPS than CoNS. This is mainly due to the higher percentage of S. intermedius group isolates, which are CoPS, observed in this study. On the contrary, Abraham et al. [7] reported nearly equal proportions of S. aureus and S. pseudintermedius [22-25].
The observed higher percentage of resistance towards β-lactam and lincosamide antimicrobial agents among Staphylococcus isolates in cats has previously been reported [6,8,23]. Of particular concern is the one S. intermedius group isolate that was resistant to all β-lactam antimicrobial agents tested in this study. MRSA have an intrinsic resistance to β-lactams by virtue of the acquired low-affinity penicillin-binding protein 2A (PBP2A); therefore, it is possible that this isolate was MRSA [26,27]. Unfortunately, we could not assess this since the laboratory that supplied the data used in this study did not test for methicillin resistance. Almost 16% (15.8%) of the Staphylococcus isolates in this study were MDR. This is close to the 14.8% reported by Gandolfi-Decristophoris et al. [23] in Switzerland.
Since this is a retrospective study, these findings should be interpreted with caution. The history of previous use of antimicrobial agents was not included in the analysis, and this could have affected the recovery rates of Staphylococcus species. The study also suffers from a small sample size, which affected the precision of some of the estimates. Nonetheless, the results provide a useful preliminary indication of the burden and antimicrobial resistance patterns of Staphylococcus spp. infections in cats presented to the academic veterinary hospital in South Africa.
Conclusions
As has been observed in other studies, this study suggests that the S. intermedius group is the most common cause of skin infections in the cats investigated. It also suggests that antimicrobial resistance is not widespread among cats presented at the veterinary academic hospital in South Africa. Considering the risk of cross-transmission of resistant organisms between cats and humans, the observed level of resistance to β-lactams is of great concern from both a public health and an animal health point of view. However, given the limited scope of this study, there is a need for larger and more detailed primary studies to specifically assess the extent of antimicrobial-resistant infections in cats in South Africa and their role in the spread of antimicrobial drug resistance to humans.
Availability of data and materials
The data that support the findings of this study are available from the bacteriology laboratory of the University of Pretoria that has legal ownership of the data. The data are not publicly available and should be requested and obtained from the above legal owner.
Authors' contributions
DNQ was involved in study design and data management and performed all statistical analyses and interpretation as well as preparation of the manuscript draft. AO was involved in study design, data analysis and interpretation as well as extensive editing of the manuscript. JWO was involved in study design and editing of the manuscript. DS was involved in data collection and interpretation of results of the manuscript. All authors read and approved the final manuscript.
Ethics approval
The study was approved by the University of Pretoria Ethics Committee (reference number S4285-15).
Consent for publication
The study does not involve human subjects and therefore no consent was required. However, the lab that supplied the study data provided consent for study results to be published.
Competing interests
The authors declare that they have no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Pharmacogenetic association with early response to intravitreal ranibizumab for age-related macular degeneration in a Korean population.
PURPOSE
To determine whether genetic factors that influence age-related macular degeneration (AMD) have an early pharmacogenetic effect on treating exudative AMD with ranibizumab in a Korean population.
METHODS
A retrospective study of 102 patients (70 with typical neovascular AMD and 32 with polypoidal choroidal vasculopathy) with exudative AMD treated with intravitreal ranibizumab monotherapy was conducted. Optical coherence tomography and fluorescein and indocyanine green angiography were performed at baseline. The best-corrected visual acuity (BCVA) and the central subfield macular thickness (CSMT) were recorded at baseline and at each monthly visit. The genotypes of polymorphisms in the known AMD susceptibility loci (CFH, ARMS2, HTRA1, VEGFA, and KDR) were determined, and associations between their frequencies and the changes in the BCVA and the CSMT were evaluated.
RESULTS
The mean baseline visual acuity was 0.96 ± 0.59 logMAR (approximately 20/200 in the Snellen equivalent), and the mean number of injections was 3.87 before the month 6 visit. No association was observed between the change in BCVA and each genotype. For the changes in the CSMT, a significant difference was observed only with the VEGF-A (rs833069) gene. The decrease in the CSMT at month 3 for the major allele homozygote AA genotype, the heterozygote AG genotype, and the risk allele homozygote GG genotype was 25.66 ± 85.40, 86.93 ± 92.31, and 85.30 ± 105.30 μm, respectively (p=0.012, p=0.044, and p=0.002 for AG, GG, and combined AG or GG genotype, respectively, compared to the AA genotype). This trend was maintained until month 6.
CONCLUSIONS
The VEGF-A (rs833069) polymorphism showed a significant association with the anatomic response to intravitreal ranibizumab. No significant difference was found between the genotype of the potential risk polymorphism for development of AMD and the early visual improvement after intravitreal ranibizumab.
The kinase insert domain receptor (KDR, also known as VEGFR-2) binds VEGF-A and mediates most of the endothelial growth and survival signals from VEGF-A. An association between polymorphisms in VEGF and its receptor gene and the development of AMD has also been reported [13,14]. Galan et al. [14] reported that two polymorphisms (rs833069 in intron 2 of the VEGF-A gene and rs2071559 in the promoter of the kinase insert domain receptor [KDR] gene) were significantly associated with the development of AMD. In particular, for VEGF-A rs833069 the AMD risk was increased fivefold for G homozygotes compared with homozygous carriage of the A allele. For KDR rs2071559, the AMD risk was increased threefold for T homozygotes compared with homozygous carriage of the C allele.
Recently, several pharmacogenetic studies among Caucasian populations reported a relationship between genetic characteristics and the response to anti-VEGF treatment. However, the phenotypic and genotypic characteristics of Asian AMD differ from those of Caucasian AMD, including the proportion of polypoidal choroidal vasculopathy (PCV) and the frequency of CFH Y402H [15-17]. Therefore, identifying the genetic associations in Asian populations that may predict the response to ranibizumab, the current standard treatment for exudative AMD, is important. Accordingly, this study evaluated the association of the genotypes of the CFH, ARMS2, HTRA1, VEGF-A, and KDR genes with the change in visual acuity and macular thickness after 6 months of intravitreal ranibizumab therapy for exudative AMD in a Korean population.
METHODS
Patients: This study was approved by the Institutional Review Board of Yeungnam University Medical Center. All subjects provided written informed consent before participation. The research adhered to the tenets of the Declaration of Helsinki. All individuals were recruited from the Department of Ophthalmology, Yeungnam University Medical Center, and underwent a clinical examination by two retina specialists.
All patients were examined with best-corrected visual acuity (BCVA), fundus photography, fluorescein and indocyanine green angiography, and optical coherence tomography (Stratus OCT; Carl Zeiss, Jena, Germany). The BCVA was measured at the initial presentation and at each follow-up visit and was converted to the logarithm of the minimum angle of resolution (logMAR) for calculation. Fluorescein and indocyanine green angiography (HRA2, Heidelberg Engineering, Heidelberg, Germany) were performed at the initial presentation for all patients. PCV was diagnosed primarily based on the indocyanine green angiographic findings of a branching vascular network and terminating polypoidal lesion(s). The central subfield macular thickness (CSMT) was measured with the fast macular thickness map protocol at baseline and at each follow-up visit. The inclusion criteria were age 60 years or more, a diagnosis of exudative AMD in one or both eyes, and a minimum of 6 months of monthly follow-up after the first intravitreal ranibizumab injection. Eyes with subfoveal atrophy, CNV secondary to pathologic myopia, angioid streaks, or a previous history of photodynamic therapy or anti-VEGF injection were excluded. We included the second eye only if both eyes met the inclusion criteria. All patients underwent three consecutive intravitreal ranibizumab injections in the loading phase and further injections as required in the maintenance phase. Retreatment was prompted only if signs of lesion activity were present (i.e., persistent or recurrent subretinal fluid, intraretinal cysts or thickening on OCT, or new subretinal hemorrhage on fundus examination).
Genotype determination: Genomic DNA was extracted from a buccal swab using a Qiagen QIAamp Mini Kit (Qiagen, Valencia, CA), and the DNA concentration was determined using a NanoDrop ND1000 (Wilmington, DE) spectrophotometer. The purity of the DNA was assessed based on the 260/280 nm absorbance ratio ranging from 1.7 to 2.1. Genotyping was undertaken using the Sequenom (San Diego, CA) iPLEX platform, according to the manufacturer's instructions. Five single nucleotide polymorphisms (SNPs; rs1061170, rs10490924, rs11200638, rs833069, and rs2071559) were detected by analyzing the primer extension products generated from previously amplified genomic DNA using a Sequenom chip-based matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry platform. Multiplex SNP assays were designed using SpectroDesigner software (Sequenom). Polymerase chain reaction (PCR) amplification took place in a 5 µl mixture containing 10 ng of genomic DNA, 100 nM of each amplification primer, a 500 mM dNTP mix, 1.625 mM MgCl₂, and 5.5 units of HotStarTaq DNA Polymerase (Qiagen). The mixture was subjected to the following PCR conditions: a single denaturation cycle at 95 °C for 15 min, followed by 45 cycles at 94 °C for 20 s, 56 °C for 30 s, and 72 °C for 60 s, and a final extension at 72 °C for 3 min. The unincorporated nucleotides in the PCR product were deactivated using shrimp alkaline phosphatase. The allele discrimination reactions were conducted by adding the allele-specific extension primers (UEP), DNA polymerase, and a cocktail mixture of deoxynucleotide triphosphates and dideoxynucleotide triphosphates to each well. MassExtend clean resin (Sequenom Inc.) was added to the mixture to remove extraneous salts that could interfere with the MALDI-TOF analysis. The primer extension products were then cleaned and spotted onto a SpectroChip (Sequenom Inc.). The genotypes were determined by spotting an aliquot of each sample onto a 384 SpectroChip, which was then read with the MALDI-TOF mass spectrometer. Table 1 lists the primer sequences.
Data analysis: An independent Student's t-test was conducted to determine the statistical significance of the potential predictor variables. The association between genotype and the changes in the BCVA and the CSMT was assessed with one-way analysis of variance. Comparisons between the genotypes were adjusted for multiple comparisons using Tukey's multiple comparison test. The predetermined level of statistical significance was p<0.05. All analyses were performed using commercially available software (SPSS v18.0K; SPSS Inc., Chicago, IL).
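For readers who want to reproduce this kind of comparison outside SPSS, here is a hedged sketch in Python: a one-way ANOVA across the three rs833069 genotypes followed by Tukey's test. The CSMT changes are simulated around the reported group means and are not patient data.

```python
# Sketch of the genotype comparison: one-way ANOVA across the three VEGF-A
# rs833069 genotypes followed by Tukey's test. Values are simulated around
# the reported group means, not patient data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
aa = rng.normal(-25.7, 85.4, 30)    # delta CSMT, AA genotype (simulated)
ag = rng.normal(-86.9, 92.3, 45)    # AG genotype (simulated)
gg = rng.normal(-85.3, 105.3, 27)   # GG genotype (simulated)

print(f_oneway(aa, ag, gg))          # overall F-test across genotypes

values = np.concatenate([aa, ag, gg])
groups = ["AA"] * len(aa) + ["AG"] * len(ag) + ["GG"] * len(gg)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```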
RESULTS
This study evaluated 102 patients with AMD treated with ranibizumab across the five known genetic risk factors for AMD. Table 2 lists the baseline demographics, lesion subtype, BCVA, and CSMT. The mean baseline visual acuity was 0.96±0.59 logMAR (approximately 20/200 in the Snellen equivalent), and the mean number of injections was 3.87 before the month 6 visit. The associations of smoking status, lesion subtype, and baseline BCVA with the changes in the BCVA and the CSMT at 6 months were analyzed (Table 3 and Table 4). Table 5 and Table 6 show the changes in the BCVA and the CSMT from baseline according to the genotypes examined at 3 and 6 months after the intravitreal ranibizumab injection, respectively. There was no statistically significant association between the change in BCVA and any genotype. For the change in the CSMT, however, a significant difference was observed for the VEGF-A gene. The decrease in the CSMT at month 3 for the major allele homozygote AA genotype, the heterozygote AG genotype, and the risk allele homozygote GG genotype was 25.66±85.40, 86.93±92.31, and 85.30±105.30 μm, respectively (p=0.012, p=0.044, and p=0.002 for the AG, GG, and combined AG or GG genotypes compared with the AA genotype). This trend was also observed at month 6 (Table 7). No association was observed between the CSMT changes and the genotypes of the CFH, ARMS2, HTRA1, and KDR genes.
DISCUSSION
In this study, we found a significant association between VEGF-A (rs833069) genotype variants and the anatomic response to intravitreal ranibizumab in patients with AMD. However, there was no association with visual improvement for any of the genes studied: CFH (rs1061170), ARMS2 (rs10490924), HTRA1 (rs11200638), VEGF-A (rs833069), and KDR (rs2071559). Previous studies have investigated a possible association between the CFH genotype and the anti-VEGF treatment response, mostly in Caucasian populations. With intravitreal bevacizumab, Brantley et al. [18] and Nischler et al. [19] reported a trend toward worse visual outcomes and a lower chance of improving visual acuity for the CC genotype. With intravitreal ranibizumab, Lee et al. [20] reported that the CFH genotype did not affect the post-treatment visual outcome, but the CC genotype was more likely to require more injections than the TT genotype. However, Francis [21] and McKibbin et al. [22] reported different results, showing that the CC genotype required fewer injections and had a more favorable visual outcome. Menghini et al. [23] recently reported in a long-term study that the CT genotype had an approximately three times higher probability of experiencing a significant long-term gain in visual acuity at 12 and 24 months of follow-up. In addition to these conflicting results, another study of Caucasians by Orlin et al. [24] reported no association between the CFH genotype and the response to anti-VEGF. In our study, there was no significant visual outcome difference between the TT genotype and the CT genotype.
ARMS2/HTRA1 is the second major polymorphism associated with AMD and is localized on chromosome 10q26. Brantley et al. [18] and Orlin et al. [24] reported no association of the anti-VEGF treatment outcome with an ARMS2 polymorphism, but McKibbin et al. [22] observed a better visual outcome with the risk allele G in the HTRA1 gene, with +2.2 and +7.5 ETDRS letter score changes in the GG and GA genotypes, respectively. Unlike the CFH gene, the incidence of the ARMS2/HTRA1 polymorphism appears to be similar in Caucasians and Asians, and this polymorphism is strongly associated with PCV and AMD [15,16,25-27]. A high incidence of this polymorphism was observed in the present study population, with more than half of the subjects homozygous for each risk allele, and there was no significant association with the response to ranibizumab.
The VEGF gene does not appear to be a major genetic risk factor for developing exudative AMD. Considering the role of VEGF in angiogenesis and hyperpermeability (the major characteristics of exudative AMD), the polymorphism in the VEGF and VEGF receptor gene might be a factor that affects the pharmacological mechanism of anti-VEGF. With this theoretical base, we selected two SNPs, VEGF-A (rs833069) and KDR (rs2071559), which Galan et al. recently reported were associated with the development of AMD [14]. Recently, Park et al. [26] reported that an rs833069 polymorphism in VEGF-A was significantly associated with the risk of PCV in a Korean population. They revealed a 2.29-fold increased risk in the risk allele G compared to the major allele A and a 6.25-fold increased risk of PCV in the GG genotype compared to the major allele AA genotype. Several studies reported an association of the VEGF gene with the treatment response. McKibbin et al. [22] reported an association between a variation of the VEGF-A gene (rs1413711) and the response to ranibizumab, and Nakata et al. [28] observed a better visual outcome with the VEGF-A rs699946 GG genotype after intravitreal bevacizumab in a Japanese population. These findings suggest that many components of the VEGF system are involved in the treatment outcome. In this Korean population-based study, the risk allele G of rs833069 was significantly associated with a better anatomic outcome and had a tendency to be associated with improvement in the mean BCVA after ranibizumab loading, but the improvement was not statistically significant. Most prior studies did not use the CSMT as an outcome measure. The visual outcome is an integrated outcome that reflects the functional ability of the visual system, but is still a subjective measure for assessing the pharmacogenetic response, particularly in a retrospective study. Moreover, the CSMT, particularly measured at month 3, is relatively objective because the CSMT data measured with OCT are not biased by drug dosage and ocular conditions other than AMD.
The marker rs833069 is located in intron 2 of the VEGF-A gene. There are no functional data on this polymorphism. However, the VEGF-A rs833069 polymorphism has been reported to be associated with the development of AMD in Caucasians [14] and PCV in Asians [26]. Lima et al. [29] reported that PCV might be a subset of AMD with similar demographic and genetic risk factors and clinical features through a Caucasian patient-based genetic study. In the present study, there was a significant association between the rs833069 polymorphism and the change in CSMT in the heterogeneous patient group, which includes typical neovascular AMD and PCV.
With recent advances in OCT technology, studies of exudative AMD have reported that foveal thickness measurements are useful in evaluating the treatment response to intravitreal ranibizumab [30,31]. OCT is extremely useful in detecting intraretinal and subretinal fluid and is thus important for evaluating the treatment response. However, the relationships between visual acuity and central retinal thickness were inconsistent. According to a recent report [32], improvement in the inner segment/outer segment photoreceptor junction (IS/OS) line after anti-VEGF injection may be a good indicator for predicting the initial response to anti-VEGF treatment. However, the present study, based on Stratus OCT, was limited in analyzing the IS/OS line in detail, and thus we used the CSMT as the only OCT parameter, which might explain the discrepancy between the visual and anatomic outcomes for the VEGF-A rs833069 polymorphism.
To our knowledge, no other group has investigated the influence of genotype on the response to ranibizumab therapy in an Asian population, and the CSMT was used as a secondary parameter to assess the treatment outcome to overcome the limitations of a retrospective study when evaluating the pharmacogenetic response. The change in the CSMT should be considered another good indicator of the treatment response that eliminates subjective bias, particularly the interval change after the loading phase. However, this study is limited by its small sample size, the relatively low minor allele frequencies of some SNPs, and the short follow-up period.
In conclusion, this study evaluated the potential association between the selected SNPs in the CFH, ARMS2, HTRA1, VEGF-A, and KDR genes, and the response to an intravitreal ranibizumab injection for exudative AMD in a Korean population. A significant association between the VEGF-A (rs833069) genotype variants and the anatomic response to intravitreal ranibizumab was observed, but no significant difference was found between the genotype of the potential risk polymorphism for the development of AMD and the visual acuity change after treatment with intravitreal ranibizumab. A larger cohort study including more potential risk SNPs for the development of AMD is needed to evaluate the pharmacogenetic association with the response to an anti-VEGF treatment.
Unusual Artifacts Encountered during Routine Studies on Positron Emission Tomography-Computed Tomography and Gamma Camera
Artifacts in nuclear medicine imaging are not uncommon. We are aware of some of these, for which we follow the necessary protocols to avoid them. However, there are some unusual and unavoidable artifacts that we come across in daily imaging, which may be of concern and need to be detected and corrected on time. Hence, we share a few such unusual artifacts that we encountered while performing routine studies on positron emission tomography-computed tomography and gamma cameras, evaluating their causes and possible precautions.
Introduction
Artifacts in nuclear medicine are atypical findings observed during imaging that may lead to misinterpretation of a physiologic process or an anatomical structure as pathological. Artifacts may obscure the visualization of structures and result in false positive or false negative interpretations [1]. The physician and operator must hence be aware of and be able to identify such artifacts and differentiate them from real image interpretation or normal variants, with the help of additional clinical information or examination whenever possible. However, it is also important to determine the cause of the artifacts so that measures can be taken to avoid or correct them [3,4]. While performing routine scans on positron emission tomography-computed tomography (PET-CT) and gamma camera, we encountered three uncommon artifacts, which raised concerns of interference in the interpretation of scan images. We present three cases with unusual artifacts in nuclear medicine imaging and determine the cause of each of them.
Case reports
Case 1
Our first case is a PET-CT scan performed for an oncologic indication. On viewing the reconstructed images, we identified two unusual artifacts on the PET image. One appeared as multiple equidistant hot spots in a linear fashion along the length of the patient's body [Figure 1, black arrow], and the other was multiple horizontal photopenic lines perpendicular to the body plane at the level of these hot spots [Figure 1, red arrow].
Preliminarily, we thought it was due to a normalization error, but all other scans of the day were fine, so we ruled out any possibility of contamination on the scanner bed. However, on probing further, these hot spots appeared to originate posterior to the scanner bed. We then examined the PET gantry thoroughly and found an isolated cap of an intravenous cannula accidentally dislodged on the camera gantry near the protective covering of the collimator. The multiple equidistant hot spots in a linear fashion appeared to be caused by this cap of the intravenous cannula, which was confirmed on reacquiring the scan after removing the cap, hence suggesting faulty normalization. Thus, we need to ensure an absolutely clean surface over the gantry, as even the tiniest of objects can create large interferences in interpretation. As is known, normalization is a correction done to eliminate differences in sensitivity between detectors (or detector pairs). This correction in PET is obtained from a recording that provides a uniform irradiation of all detectors, which could be by a rotating pin source or a uniform cylinder phantom. Normalization scans should reflect the underlying performance of the detector elements of the PET scanner. When a normalization scan is being performed, one must ensure that no objects other than the recommended source of radiation are present in the field of view (FOV) of the scanner. Normalization scans should be performed on a regular basis (preferably every quarter) or when detectors or signal processing boards are tuned or replaced [5].
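To make the correction concrete, here is a hedged numerical sketch: per-detector-pair normalization factors are derived from a uniform-irradiation scan and applied multiplicatively to raw counts. The array shapes and values are illustrative only, not the scanner's actual data path.

```python
# Conceptual sketch of detector-pair normalization: factors are derived from
# a uniform-irradiation scan and applied multiplicatively to raw coincidence
# counts. Array shapes and values are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
true_uniform = 1000.0                               # expected counts per detector pair
measured_uniform = true_uniform * rng.uniform(0.85, 1.15, size=(64, 64))

norm_factors = true_uniform / measured_uniform      # per detector-pair factor

raw_scan = rng.poisson(200, size=(64, 64)).astype(float)
corrected = raw_scan * norm_factors                 # sensitivity-corrected data

# An object left in the FOV during the normalization scan would bias
# measured_uniform, hence norm_factors, for the affected detector pairs.
print(corrected.mean())
```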
Case 2
In our second case, on the acquired PET-CT scan images, we observed linear dotted artifacts on the sagittal and coronal CT sections [Figure 2a and 2b], which appeared as concentric thin ring-shaped densities on the axial images [Figure 2c]. After analyzing the artifact, we probed into the gantry and software. Preliminarily, it was thought to be a fault in the hardware. However, after careful assessment, we found a spillage of intravenous contrast on the Mylar window and a small amount even over the CT tube aperture. We cleaned the entire spillage area and reacquired the scan to see if all the artifacts had disappeared; hence, we concluded that the spillage of contrast was the cause of this artifact. Proper fitting of the Mylar window and proper care while handling contrast to avoid any spillage can prevent this occurrence.
It has been seen in the literature that ring artifacts can appear in transaxial images of computed tomography due to a defective detector element or set of detector elements, which can be eliminated by replacement of the defective CT detector module followed by CT number calibration [6]. Another reported cause of ring artifacts on transaxial CT images is loose electronic contact, which can be avoided by maintaining the strict environmental conditions prescribed by the manufacturer [7].
Case 3
In the third case, we have two different scan images, a planar three-phase bone scan and a myocardial perfusion single-photon emission computed tomography (SPECT) scan, on a gamma camera, where we observed a similar artifact: an irregular, significantly hot area within the FOV that was seen intermittently during the scan [Figure 3a and b]. After ruling out all possibilities of contamination, we further explored the possibility of a fault with the gamma camera hardware or software.
Further analysis revealed a fault in the power distribution and organization control (PDOC) board [Figure 4]. The PDOC board includes positioning and summing circuits, and its function is to sort the data from the photomultiplier tubes (PMTs) and provide the x and y event positions and the z energy information. Unfortunately, this artifact, although rare, is unavoidable; whenever identified, it must raise suspicion of a PDOC board flaw and be reported so that the board can be repaired or replaced.
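For intuition about what the positioning and summing circuitry computes, here is a hedged sketch of Anger-type event positioning: the event position is the signal-weighted centroid of the PMT outputs and z is their sum. The PMT layout and signal values are toy assumptions, not the vendor's implementation.

```python
# Sketch of Anger-type positioning and summing, the kind of computation the
# PDOC board performs: the event position is the signal-weighted centroid of
# the PMT outputs and z is their sum. PMT layout and signals are toy values.
import numpy as np

pmt_xy = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], dtype=float)  # PMT centers
signals = np.array([0.10, 0.25, 0.15, 0.50])                          # one event

z = signals.sum()                                    # energy (summing circuit)
x, y = (signals[:, None] * pmt_xy).sum(axis=0) / z   # weighted centroid (positioning)

print(f"x = {x:.2f}, y = {y:.2f}, z = {z:.2f}")
# A fault in the summing/positioning circuitry corrupts (x, y, z) and can
# surface as the intermittent hot regions seen in Figure 3.
```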
Ideally, artifacts should be prevented and, if they are not, should be identified at the time of imaging or reporting. Artifacts in nuclear imaging can be identified in many ways, not only by the reporting specialist physician but also by other staff involved. A few artifacts can be completely avoided with the help of proper precautions, awareness, and routine mandatory check-ups, while a few others may not be under the user's control. In PET-CT and SPECT-CT scans, we come across multiple artifacts due to high-attenuation bodies, truncation, respiratory motion, and misregistration, for which multiple software tools are now available [8,9]. Artifacts due to intravenous cannulas can occur due to radiopharmaceutical residues in the catheter lumen or venous system, and proper flushing with saline after administration is good practice to prevent them. Similarly, urinary catheter tubes and bags, and nephrostomy bags should be properly positioned away from the interpretable FOV. Metallic objects such as buttons, coins, and keys, and even internal metallic devices/prostheses such as pacemakers, fixator rods and screws, and metallic plates, as well as previously ingested barium contrast, can cause attenuation artifacts. Thus, proper instructions to remove all possible metallic objects should be given to the patient, and the reporting physician should be informed about the presence of other interfering metallic objects that cannot be removed. Contamination artifacts, most commonly urine contamination artifacts, are seen on clothing and skin, and can easily be confirmed and eliminated or corrected by washing the skin, removing contaminated clothing, or acquiring different image views when required [2]. Physicians and technologists, as well as paramedical staff, together share the responsibility of providing a quality nuclear medicine service to our patients. Good compliance with guidelines and protocols and good communication among medical team members make for the best outcomes [1].
Conclusion
Careful observation is the key to discerning artifacts from images, which enables better treatment. We must be well aware of the different possible unusual artifacts we may come across so that they can be detected on time; on-site problem solving and hence efficient time management are then achievable while reducing radiation exposure. Furthermore, if any abnormality is noted while images are being acquired or after acquisition, the responsible staff member must be informed immediately so that timely actions can be taken and errors corrected to avoid such artifacts and improve patient care.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
Figure 1: Positron emission tomography image in the sagittal plane (a) and maximum intensity projection image (b). Artifacts observed are the multiple equidistant hot spots in a linear fashion along the length of the patient's body (black arrow) and multiple horizontal photopenic lines perpendicular to the body plane at the level of these hot spots (red arrow).
Figure 2: Computed tomography (CT) images of the positron emission tomography-CT scan: artifact (yellow arrows) seen as linear dotted artifacts on coronal (a) and sagittal (b) sections, which appeared as concentric ring-shaped densities on axial sections (c).
Figure 3: Artifact appearing as intermittent hot spots in the perfusion-phase image of a three-phase bone scan (blue arrow, a) and in a myocardial perfusion single-photon emission computed tomography image (white arrow, b).
Figure 4: Power distribution and organization control board.
Inhibitory effects of Streptococcus salivarius K12 on formation of cariogenic biofilm
Background/purpose Streptococcus salivarius (S. salivarius) K12 is known to be a probiotic bacterium. The purpose of this study was to investigate the anti-cariogenic effects of S. salivarius K12 on cariogenic biofilm. Materials and methods S. salivarius K12 was cultured in M17 broth. The antimicrobial activity of its spent culture medium (SCM) against Streptococcus mutans was investigated. S. salivarius K12 was co-cultivated with S. mutans using a membrane insert. When biofilm was formed using salivary bacteria and S. mutans, the K12 strain was inoculated every day. The biomass of the biofilm was investigated by confocal laser scanning microscopy. Also, bacterial DNA from the biofilm was extracted, and the bacterial proportions were analyzed by quantitative PCR using specific primers. The expression of the gtf genes of S. mutans in the biofilm with or without S. salivarius K12 was analyzed by RT-PCR. Results The SCM of S. salivarius K12 inhibited the growth of S. mutans. Also, S. salivarius K12 reduced S. mutans growth in co-cultivation. The formation of cariogenic biofilm was reduced by adding S. salivarius K12, and the count of S. mutans in the biofilm was also decreased in the presence of S. salivarius K12. The gtfB, gtfC, and gtfD expression of S. mutans in the biofilm was reduced in the presence of S. salivarius K12. Conclusion S. salivarius K12 may inhibit the formation of cariogenic biofilm by interrupting the growth and glucosyltransferase production of S. mutans.
Introduction
Dental caries is the demineralization of the tooth surface by acid and is closely related to cariogenic bacteria such as Streptococcus mutans (S. mutans), Streptococcus sobrinus (S. sobrinus), and Lactobacillus species, which are characterized by vigorous acid production and aciduricity. 1 In particular, S. mutans is considered to be more closely related to dental caries because of its action in the microbial ecosystem of the oral cavity rather than the characteristics of this bacterium alone. S. mutans produces water-soluble or -insoluble glucans from sucrose using glucosyltransferases (Gtf) and plays an important role in the development of biofilm on the tooth surface through the production of glucans. 2,3 Furthermore, in the formed oral biofilm, S. mutans creates a strongly acidic environment inside the biofilm through its continuous acid production under sugar-rich conditions. 3,4 This low-pH environment increases the growth of aciduric, cariogenic bacteria, while the proliferation of non-mutans streptococci is suppressed. 5 The increased duration of low pH leads to demineralization of the tooth, eventually inducing caries.
Streptococcus salivarius (S. salivarius) K12 is an early colonizer in the microbial ecology of the oral cavity and was isolated from a tongue swab. 6 The K12 strain produces bacteriocins, and its safety characteristics have been demonstrated in tests in human and animal models. 7,8 Therefore, this bacterium is considered a probiotic bacterium. Probiotics are 'live microorganisms, which when administered in adequate amounts, confer a health benefit on the host', 9 and probiotics have been used for the prevention and treatment of human diseases. Among probiotics, S. salivarius K12 secretes two bacteriocins, salivaricin A2 and salivaricin B. 10 Since S. salivarius K12 is isolated from the oral cavity and secretes antibacterial agents, it is better suited to oral bacterial diseases than other probiotics.
The aim of this study was to investigate the anti-cariogenic effects of S. salivarius K12 on cariogenic biofilm containing S. mutans and the effect of the K12 strain on biofilm formation-related factors of S. mutans.
Materials and methods
Bacterial strain and culture condition
S. salivarius K12 was kindly donated by Green Store Inc. (Bactoblis; Seongnam, Gyeonggi, Korea) and used in this study. The bacterium was cultivated in M17 broth (BD Bioscience, Sparks, MD, USA) supplemented with 1% glucose for maintenance and for making bacterial stocks. To investigate the antimicrobial activity of S. salivarius K12, S. salivarius and S. mutans ATCC 25175 were cultivated in brain heart infusion (BHI) broth (BD Bioscience).
Antimicrobial activity of S. salivarius K12 against S. mutans
The antibacterial activity of S. salivarius K12 against S. mutans was evaluated as a minimum inhibitory concentration using a microdilution method according to the methods recommended by the Clinical and Laboratory Standards Institute (CLSI). 11 One milliliter of S. salivarius (1 × 10^7 bacteria/ml) was inoculated into 10 ml of M17 broth, and the bacterial suspension was incubated for 24 h under aerobic conditions. The K12 suspension was centrifuged at 5000 × g for 10 min, and the supernatant was transferred into a new 15-ml conical tube (SPL Life Sciences, Gyeonggi, Korea). The supernatant was filtered through a polyvinylidene fluoride filter (Millipore, Billerica, MA, USA). The filtered supernatant, used as the spent culture medium (SCM), was applied in the susceptibility assay. M17 broth (180 µl) was dispensed into a 96-well plate (SPL Life Sciences). The SCM was added to the first well containing the fresh medium, and 2-fold serial dilutions were performed to the 11th column. S. mutans was counted with a bacterial counting chamber (Hausser Scientific, Horsham, PA, USA) and adjusted to 2 × 10^6 bacteria/ml with fresh M17 broth. The prepared S. mutans suspension (20 µl) was inoculated into the wells containing the mixed media. The plate was incubated at 37 °C in an aerobic incubator. Bacterial growth was measured as optical density at a wavelength of 660 nm with a microplate reader (BioTek, Winooski, VT, USA). In a separate experiment, S. mutans (1 × 10^6 bacteria/ml) and S. salivarius K12 (1 × 10^7 or 1 × 10^8 bacteria/ml) were co-cultivated using Transwell™ inserts (pore size 0.4 µm; Corning Co., Corning, NY, USA) in a 12-well plate. The bacterial suspensions of S. mutans and S. salivarius were inoculated inside and outside the Transwell™ insert, respectively, and the plate was incubated at 37 °C for 24 h. The growth of S. mutans was measured as optical density at 660 nm, and images of S. mutans were taken with the CMOS camera of a phase contrast microscope (Nikon Co., Tokyo, Japan).
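To make the growth-inhibition readout concrete, below is a minimal sketch of how the endpoint could be computed from the plate readings. The OD values, the blank correction, and the 10% growth cutoff are illustrative assumptions, not the authors' exact analysis.

```python
import numpy as np

# Hypothetical OD660 readings for one row of the 96-well plate:
# column 1 holds the least-diluted SCM, columns 2-11 hold successive
# 2-fold dilutions; a growth control without SCM defines "full growth".
od_scm = np.array([0.05, 0.06, 0.07, 0.35, 0.62, 0.78, 0.85, 0.88, 0.90, 0.91, 0.92])
od_growth_control = 0.95
od_blank = 0.04  # medium-only background

dilution_factors = 2 ** np.arange(11)  # 1x, 2x, 4x, ..., 1024x

# Percent growth relative to the control after background subtraction.
growth = (od_scm - od_blank) / (od_growth_control - od_blank) * 100

# Call a dilution "inhibitory" if growth stays below an assumed 10% cutoff.
inhibited = growth < 10
for factor, g, inh in zip(dilution_factors, growth, inhibited):
    print(f"{factor:>5}x diluted SCM: {g:5.1f}% growth {'(inhibited)' if inh else ''}")

# Highest dilution that still fully inhibits growth (cf. the paper's finding
# of complete inhibition at or above the 4-fold dilution).
max_inhibitory = dilution_factors[inhibited][-1] if inhibited.any() else None
print("Highest fully inhibitory dilution:", max_inhibitory, "x")
```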
Biofilm formation and observation
Biofilm was formed using a method reported by Lee SH. 3 First, to form a pellicle on the plates, unstimulated saliva was collected from 10 healthy persons, and the pooled saliva was centrifuged at 7000 × g for 10 min at 4 °C to remove debris and bacteria. The prepared saliva was dispensed into 12-well polystyrene plates (SPL Life Sciences) and 8-well culture slides (Corning Co.) and dried in a dry oven at 40 °C. This step was repeated 5 times, and the plates were then UV-sterilized. To form the saliva biofilm, the pooled saliva was mixed with BHI broth containing 2% sucrose and 1% mannose and vortexed for 20 s. S. mutans (1 × 10^6 bacteria/ml) was added to the saliva-BHI mixture to form a cariogenic biofilm, and the prepared suspension was inoculated into the saliva-coated 12-well plates and 8-well culture slides. The plates were incubated under aerobic conditions for 7 days, and the media were changed every day. To investigate the effects of S. salivarius on the biofilm, one hundred microliters of S. salivarius K12 suspension (1 × 10^8 or 1 × 10^9 bacteria/ml) was inoculated with each daily change of fresh medium during biofilm formation. The biofilm was washed three times with phosphate-buffered saline (PBS, pH 7.2), and BHI broth (1 ml) was added to the biofilm formed in the 12-well plates. The biofilms were disrupted with a scraper (Corning Co.), and the biofilm suspensions were transferred into 1.5-ml tubes. After vortexing for 30 s, the suspensions were serially diluted 10-fold (up to 10^6-fold) with BHI broth. The diluted suspensions were spread on Mitis-salivarius agar plates and Mitis-salivarius bacitracin agar plates (BD Bioscience) to count oral streptococci and S. mutans, respectively. The plates were incubated at 37 °C for 48 h, and the colonies of oral streptococci and S. mutans were counted. In a separate experiment, the biofilm on the 8-well culture slides was washed three times with PBS and stained with a Live/Dead bacterial viability kit (Invitrogen, Eugene, OR, USA) for 1 h at room temperature. After washing with PBS, the biofilm was observed with a confocal laser scanning microscope (CLSM) (LSM 700; Carl Zeiss, Oberkochen, Germany). For 3D scanning of the biofilm, z-stack scans were performed (0–30 µm), and the images of the biofilm were analyzed with the ZEN program (Carl Zeiss). To measure biofilm mass, each slice of the saved 3D image was processed, and the area of fluorescence (bacteria) in each slice was measured with ImageJ software (National Institutes of Health, Bethesda, MD, USA). The percentage of bacterial biomass in each slice was calibrated against the image dimensions (160 µm × 160 µm), and the values for all slices were summed to calculate the biofilm mass.
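As a rough illustration of the biomass calculation described above, the following sketch sums the fluorescence-covered fraction across z-stack slices. The thresholding step and the synthetic slice data are hypothetical stand-ins for the ImageJ workflow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stack of 30 grayscale slices (one per micron of depth),
# each 160 x 160 pixels, standing in for the exported CLSM z-stack.
stack = rng.random((30, 160, 160))

threshold = 0.8  # assumed intensity cutoff separating bacteria from background

biomass_per_slice = []
for img in stack:
    covered_fraction = (img > threshold).mean()  # fraction of the 160 um x 160 um field
    biomass_per_slice.append(covered_fraction * 100)  # percent coverage in this slice

# Summing percent coverage over all slices gives a simple biomass index,
# analogous to the per-slice summation described in the methods.
biomass_index = sum(biomass_per_slice)
print(f"Biofilm biomass index (sum of % coverage over {len(stack)} slices): {biomass_index:.1f}")
```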
Investigation of bacterial proportion in the biofilm
To examine the bacterial proportions in the biofilm, the biofilm was formed with or without S. salivarius K12 as described in the Biofilm formation and observation section, and total bacterial DNA was extracted with a bacterial genomic DNA extraction kit (iNtRON Biotechnology, Gyeonggi, Korea). To generate standard curves, S. mutans and S. salivarius were counted with a bacterial counting chamber (Hausser Scientific), and suspensions of 1 × 10^7 bacteria/ml were serially diluted to 10^5 bacteria/ml. Each diluted bacterial suspension was harvested by centrifugation at 7000 × g for 10 min, and the supernatant was removed. Genomic DNA was extracted from the bacterial pellets with the kit. Extracted DNA (4 µl) was mixed with 25 µl of 2× TB Green™ Premix Ex Taq™ (Takara Co., Kyoto, Japan), 0.2 µM of each primer, ROX reference dye, and distilled water in a final volume of 50 µl. PCR was carried out with a 10-min template denaturation step at 94 °C and 40 cycles of amplification (denaturation at 95 °C for 10 s, annealing at 60 °C for 10 s, and extension at 72 °C for 33 s) on a 7500 real-time PCR system (Applied Biosystems, Foster City, CA, USA). The primers for real-time PCR are shown in Table 1. Bacterial levels were calculated from the critical threshold cycle (Ct) by comparison with the standard curve generated from the Ct values of the standard bacterial counts.
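The quantification step maps a sample's Ct value onto a standard curve fitted to known cell counts. A minimal sketch of that calculation follows; the Ct values and the resulting curve parameters are invented for illustration.

```python
import numpy as np

# Hypothetical standard curve: Ct values measured for serial dilutions
# of a counted bacterial suspension (10^7 down to 10^5 cells/ml).
std_counts = np.array([1e7, 1e6, 1e5])
std_ct = np.array([18.2, 21.6, 25.1])

# Ct is approximately linear in log10(count): Ct = slope * log10(N) + intercept.
slope, intercept = np.polyfit(np.log10(std_counts), std_ct, 1)

# PCR efficiency implied by the slope (100% corresponds to a slope of -3.32).
efficiency = (10 ** (-1 / slope) - 1) * 100
print(f"Slope {slope:.2f}, intercept {intercept:.1f}, efficiency {efficiency:.0f}%")

def count_from_ct(ct: float) -> float:
    """Invert the standard curve to estimate cells/ml from a sample Ct."""
    return 10 ** ((ct - intercept) / slope)

# Example: a biofilm sample whose S. mutans-specific primers give Ct = 22.8.
print(f"Estimated count: {count_from_ct(22.8):.2e} cells/ml")
```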
Figure 1 Antimicrobial activity of S. salivarius K12 against S. mutans. After collecting the spent culture medium from S. salivarius K12, S. mutans was cultivated in various concentrations of the SCM, and bacterial growth was measured as optical density with a spectrophotometer. The experiment was performed three times in duplicate, and data are represented as mean ± standard deviation. * (asterisk) indicates statistically significant differences compared to control.

Change of virulence factors of S. mutans by S. salivarius K12
Total RNA from bacteria in the biofilm was extracted with a TRIzol® Max bacterial RNA isolation kit (Invitrogen Life Tech, Carlsbad, CA) according to the manufacturer's recommended protocol. cDNA was synthesized with gene-specific primers as follows: 5′-CAT AAG GCG TTA ATT TCC CTT CA-3′ for gtfB, 5′-CCT GTG AAG TTA GCT TGC TAT TG-3′ for gtfC, and 5′-ATA GGC TGT CTT ATC GCT GTT GCT A-3′ for gtfD. 12 One microgram of total RNA, 2 µM of gene-specific primer, 10 mM dNTP mix, and 200 U of SuperScript™ IV Reverse Transcriptase (Invitrogen, Carlsbad, CA, USA) were mixed, and the mixture was incubated at 50 °C for 10 min and then heated at 80 °C for 10 min to inactivate the reaction. cDNAs were mixed with 10 µl of 2× TB Green™ Premix Ex Taq™ (Takara Co.), 0.4 µM of each primer, ROX reference dye, and distilled water in a final volume of 20 µl. PCR was carried out with a 5-min template denaturation step at 95 °C and 40 cycles of amplification (denaturation at 95 °C for 10 s, annealing at 60 °C for 10 s, and extension at 72 °C for 33 s) on a 7500 real-time PCR system (Applied Biosystems). The specificity of each amplification product was verified by melting-curve analysis. The primers for real-time PCR are shown in Table 1; the recA gene was used as the housekeeping gene. 12,13

Statistical analysis
The distribution of the data was examined using the Kolmogorov–Smirnov test. Significant differences between groups were analyzed by the Kruskal–Wallis test and the Mann–Whitney U test using IBM SPSS Statistics ver. 23 (IBM, Armonk, NY, USA). P-values less than 0.05 were considered statistically significant.
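The gene-expression readout and the group comparison can be combined in a few lines. The sketch below normalizes gtf Ct values to the recA housekeeping gene via the common 2^-ΔCt convention (an assumption; the paper does not state its normalization formula) and compares groups with a Mann–Whitney U test. All Ct values are made up.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical triplicate Ct values for gtfB and the recA housekeeping
# gene, in control biofilm and in S. salivarius K12-treated biofilm.
ct = {
    "control": {"gtfB": [21.0, 21.3, 20.8], "recA": [18.1, 18.0, 18.2]},
    "k12":     {"gtfB": [24.2, 23.9, 24.5], "recA": [18.0, 18.3, 18.1]},
}

def rel_expression(group: str) -> np.ndarray:
    """Per-replicate 2^-dCt of gtfB relative to recA (assumed normalization)."""
    d_ct = np.array(ct[group]["gtfB"]) - np.array(ct[group]["recA"])
    return 2.0 ** (-d_ct)

ctrl, treated = rel_expression("control"), rel_expression("k12")
fold_change = treated.mean() / ctrl.mean()
print(f"gtfB expression fold change (K12 vs control): {fold_change:.2f}")

# Nonparametric comparison between groups, as in the paper's statistics.
stat, p = mannwhitneyu(ctrl, treated, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, P = {p:.3f}")
```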
Results
Antibacterial activity of S. salivarius K12 against S. mutans
In the antimicrobial experiment using the SCM of S. salivarius K12, the growth of S. mutans was significantly inhibited by the 8-fold-diluted SCM and completely inhibited at or above the 4-fold dilution (P < 0.05) (Fig. 1). Next, to investigate the antibacterial activity when the two species coexist, as in the oral cavity, S. mutans was co-cultured with S. salivarius K12 at 10- and 100-fold concentrations using Transwell™ inserts. The growth of S. mutans was significantly inhibited by the 100-fold concentration of S. salivarius K12 (Fig. 2).

Figure 2 The growth of S. mutans in co-cultivation with S. salivarius. S. mutans was inoculated into the inside of a cell culture insert (A), and 10- or 100-fold S. salivarius K12 was inoculated into the outside of the cell culture insert (B and C). The plate was incubated for 24 h, and images of the S. mutans suspensions were obtained by a camera on the microscope (A, B, and C). The growth of S. mutans was measured with a spectrophotometer (D). The experiment was performed three times in duplicate, and data are represented as mean ± standard deviation. * (asterisk) indicates statistically significant differences compared to control.
Bacterial proportion in the biofilm
Next, to investigate whether S. salivarius K12 merely inhibits biofilm formation or also alters the bacterial proportions in the biofilm, genomic DNA from the total bacteria of the biofilm was extracted, and quantitative PCR was performed using the DNA. The average amount of total bacteria in the control biofilm was 2.05 × 10^8 cells/biofilm, whereas in the biofilms treated with 10^7 and 10^8 cells of S. salivarius K12, the average amounts were 1.90 × 10^8 and 1.47 × 10^8 cells/biofilm, respectively (Fig. 4A). When the numbers of oral streptococci in the biofilm were investigated, the control and the S. salivarius K12 (10^7)- and (10^8)-treated samples showed average levels of 1.86 × 10^8, 1.56 × 10^8, and 1.32 × 10^8 cells/biofilm, respectively (Fig. 4B). Total bacteria and oral streptococci in the biofilm were significantly decreased by S. salivarius K12 at a concentration of 10^8 cells (P < 0.05). The amount of S. mutans in the biofilm was significantly reduced at both concentrations of S. salivarius K12 compared to control (P < 0.05) (Fig. 4C). On the other hand, the amount of S. salivarius in the biofilm was significantly increased at both treatment concentrations of S. salivarius K12 (Fig. 4D).

Figure 3 3D image of salivary biofilm. When the biofilm was formed with salivary bacteria and S. mutans, S. salivarius K12 was inoculated every day at a concentration of 10^8 (B) and 10^9 cells (C). The 3D image of the biofilm was acquired by CLSM. The biomass of the biofilm was measured with ImageJ software (D). The experiment was performed three times in duplicate, and data are represented as mean ± standard deviation. * (asterisk) indicates statistically significant differences compared to control.
Reduction of virulence factors of S. mutans in the biofilm by S. salivarius K12
Finally, it was investigated whether the inhibition of biofilm formation by S. salivarius K12 is caused only by the reduction of S. mutans growth or whether S. salivarius K12 also inhibits the expression of factors related to biofilm formation by S. mutans. Thus, the gtf expression of S. mutans in the biofilm was investigated. The expression of gtfB, gtfC, and gtfD was significantly reduced in S. mutans from the S. salivarius K12-treated biofilm (P < 0.05) (Fig. 5).
Discussion
Recently, attempts have been made to treat and prevent diseases caused by pathogenic bacteria using beneficial bacteria. 14–16 These attempts have led to more research on probiotics. In dentistry, although many studies have been conducted to apply probiotics to bacteria-related diseases, 9 most probiotics are aciduric and can induce dental caries. 17,18 S. salivarius is an early colonizer of the epithelial surfaces of the oral cavity in infants 5 and produces bacteriocins. 19 Among S. salivarius strains, S. salivarius K12 is known to be a probiotic bacterium and has strong antimicrobial activity against various bacteria and fungi. 20,21 The present study investigated the effect of S. salivarius K12 on cariogenic biofilm and its mechanism for inhibiting the formation of cariogenic biofilm. First, S. mutans is considered to be a bacterium closely associated with cariogenic biofilm, which creates a localized low-pH environment. 3 Therefore, the susceptibility of S. mutans to the spent culture medium (SCM) of S. salivarius K12 was investigated. The SCM of S. salivarius K12 inhibited the growth of S. mutans at or above the 8-fold dilution. Compared with the SCM of other probiotics, the SCM of S. salivarius K12 showed weak antimicrobial activity. 22 Therefore, co-cultivation was used to evaluate at which concentration of S. salivarius K12 the growth of S. mutans was inhibited. In co-cultivation of S. mutans and S. salivarius K12, S. mutans may inhibit the growth and metabolism of S. salivarius K12 through its production of abundant lactic acid, 23 while S. salivarius K12 may inhibit the growth of S. mutans through its salivaricin, a bacteriocin. 24 S. salivarius K12 inhibited S. mutans growth at 100-fold the initial inoculating concentration. These results show the potential of S. salivarius K12 as a candidate probiotic for the prevention of dental caries induced by S. mutans. Therefore, the effects of S. salivarius K12 on cariogenic biofilm formed with salivary bacteria and S. mutans were examined.

Figure 4 Bacterial proportions in the biofilm. When the biofilm was formed with salivary bacteria and S. mutans, S. salivarius K12 was inoculated every day at a concentration of 10^8 and 10^9 cells. After extracting total DNA, total bacteria (A), oral streptococci (B), S. mutans (C), and S. salivarius (D) in the biofilm were analyzed by qPCR using specific primers. The experiment was performed three times in triplicate, and data are represented as mean ± standard deviation. * (asterisk) indicates statistically significant differences compared to control.
When salivary bacteria collected from healthy donors and S. mutans were allowed to form a biofilm for 7 days, the biofilm was considered cariogenic because it showed a pH low enough (below pH 5.5) to induce dental caries, 25 and the effect of S. salivarius K12 on this cariogenic biofilm was tested. To evaluate the effective amount of S. salivarius K12 for application against cariogenic biofilm, 10^7 and 10^8 cells of S. salivarius K12 were added daily during biofilm formation (the concentrations that showed an antimicrobial effect in the co-cultivation). Biofilm formation was reduced by treatment with S. salivarius K12. Notably, S. salivarius K12 inhibited the biofilm despite the sucrose-rich environment. These results indicate the possibility that dental caries can be prevented by continuous use of S. salivarius K12 products. Marsh proposed that whether a biofilm is healthy or cariogenic depends on its microbial ecosystem. 26 When the distribution of S. mutans in the biofilm was examined relative to oral streptococci, the level of S. mutans averaged 12.3% in the control group, whereas the ratio of S. mutans was 4.5% and 1.9% in the S. salivarius K12-treated groups, in a dose-dependent manner. Furthermore, the level of S. salivarius in the biofilm was increased in the S. salivarius K12-treated groups compared to the control group. Together, these data suggest that S. salivarius K12 may convert a cariogenic biofilm into a healthy biofilm.
The glucosyltransferases of S. mutans play an important role in biofilm formation. 27 Therefore, the gtf gene expression of S. mutans in the biofilm was investigated using real-time RT-PCR. When 10^7 and 10^8 cells of S. salivarius K12 were added daily during biofilm formation, the expression of gtfB, gtfC, and gtfD of S. mutans in the biofilm was significantly reduced. Thus, S. salivarius K12 may inhibit the expression of gtf genes as well as S. mutans growth.
In conclusion, S. salivarius K12 inhibited the formation of cariogenic biofilm as well as the growth of S. mutans within the biofilm. These effects may arise from inhibition of the growth and glucosyltransferase production of S. mutans by the antimicrobial activity of S. salivarius K12. Furthermore, S. salivarius K12 may colonize the oral biofilm. Therefore, among probiotics, S. salivarius K12 may be considered an effective candidate for the prevention of dental caries.
Declaration of competing interest
The authors have no conflicts of interest relevant to this article.
How Much are Car Purchases Driven by Home Equity Withdrawal?
Previous research indicates that changes in housing wealth affect consumer spending on cars. We find that home equity extraction plays only a small role in this relationship. Consumers rarely use funds from equity extraction to purchase a car directly, even during the mid-2000s housing boom; this finding holds across three nationally representative household surveys. We find in credit bureau data that equity extraction does lead to a statistically significant increase in auto loan originations, consistent with equity extraction easing borrowing constraints in the auto loan market. This channel, though, accounts for only a tiny share of overall car purchases.
INTRODUCTION
House prices in the U.S. rose dramatically from 1998 to 2006 and then plunged thereafter, bottoming out in 2011. Several studies, which we review below, have connected the changes in housing wealth during this period to the patterns in consumer spending on other goods, in particular automobiles. Less is known, however, about how households deploy their home equity gains in order to purchase autos.
Narratives in the popular press suggest that it is quite common for households to use the proceeds from home equity extraction to fund auto purchases and that this practice was especially popular during the housing boom in the mid-2000s (e.g., Dash 2008, Harney 2015, Singletary 2007). The economics behind these narratives can seem a bit puzzling, however, as it is usually more cost effective for households to finance car purchases with auto loans than with home equity loans, even during housing booms. To better understand these narratives and assess how important home equity extraction actually is to funding auto purchases, in this paper we assess the two ways in which homeowners might use home equity to purchase cars. First, homeowners might use equity extraction proceeds directly to purchase cars outright. Second, households might use equity extraction proceeds indirectly to facilitate purchasing a car with an auto loan. In particular, credit-constrained households might use home equity proceeds to alleviate down payment constraints in the auto loan market or to pay down high-interest debt and thereby free up space in their budgets to take out an auto loan.
We find evidence that both pathways play some role in the relationship between house prices and car purchases, but neither pathway appears to have been a quantitatively important part of car purchases during the mid-2000s housing boom. We first show that very few households report purchasing cars primarily with funds from home equity lines of credit or the proceeds of cash-out refinancing, even during the housing boom years. This result is consistent across three nationally representative household surveys. We then use credit bureau data to explore whether home equity extraction indirectly supports car purchases by facilitating auto loans, and we find relatively strong evidence that this is the case. We explore the data a bit further and find that this relationship more likely reflects the role of equity extraction in easing down payment requirements in the auto loan market than an interaction between equity extraction and high-interest debt. But our estimates imply that the quantitative impact of home equity extraction on car purchases through the indirect auto loan channel is also quite small.
We use an event study setup in the analysis of the credit bureau data and identify the effects of home equity extraction on auto loan originations by looking for a discontinuous increase in auto loan originations shortly after equity extraction. The setup allows us to distinguish the role of equity extraction in easing auto loan credit constraints from other factors that might cause equity extraction and auto lending to move together, such as house prices and interest rates, and to assert that the relationship that we find between equity extraction and auto lending is likely causal.
Our results provide mixed support for studies in the existing literature that find that housing wealth primarily supported consumption during the 2000s by increasing the ability of households to borrow. Some patterns in the credit bureau data are consistent with this narrative, such as the stronger relationship we find between home equity extraction and subsequent auto loan origination for borrowers with low to moderate credit scores than for other borrowers.
However, individuals in the household surveys who report using home equity as the primary source of funds for purchasing a car do not appear to be particularly borrowing constrained.
Because so few car purchases are funded through home equity, though, we hesitate to generalize too broadly about the implications of our findings for housing wealth and consumption.
Our results cast some doubt on the narrative that home equity extraction was an important source of funds for auto purchases during the housing boom in the mid-2000s, but they do not imply that housing wealth was inconsequential for these purchases. The wealth effects of the changes in house prices could have been large, and some of the indirect effects of home equity extraction on auto purchases that we cannot explore in the data could also have been important.
In the conclusion we discuss whether households may purchase other goods and services with home equity and free up space in a household balance sheet to buy a car.
RELATED LITERATURE
Our paper contributes to two literatures: (1) studies of the relationship between house prices and consumption, and (2) studies of credit constraints in the auto loan market. Turning first to house prices and consumption, one key question in this vast literature is whether increases in house prices spur consumption primarily because households are wealthier (the wealth channel) or because lenders are willing to extend more credit to households after their house values rise (the borrowing constraints channel). 1 The studies that have examined this relationship using data from the 2000s generally conclude that borrowing constraints are the more important of the two channels (e.g., Aladangady 2017, Bhutta and Keys 2016, Cooper 2013, and Cloyne et al. 2017). Consistent with this general finding, several studies also indicate that borrowing constraints in the mortgage market are an important part of the link between house prices and auto sales. Mian, Rao, and Sufi (2013) and Mian and Sufi (2014) find that the relationship between the changes in house prices and auto sales is strongest in zip codes where the share of residents with high debt burdens or low incomes is high. Brown, Stein, and Zafar (2015) show that increases in house prices in the 2002 to 2006 period were associated with increases in borrowing on home equity lines of credit and auto loans; the response of auto debt to the changes in house prices was strongest for subprime borrowers. Gabriel, Iacoviello, and Lutz (2017) show that auto sales increased more between 2008 and 2010 in counties where California's foreclosure prevention programs were especially successful in stabilizing house prices after the 2007-09 recession; they attribute this result to the rise in housing wealth, which eased credit constraints.
Other studies find that auto loan originations increase when changes in mortgage finance conditions allow more households to tap their home equity. Beraja et al. (2017) find that the drop in mortgage rates that ensued after the start of the Federal Reserve's large-scale asset purchase program resulted in the largest increase in auto purchases in MSAs with the highest median home equity. They also find that auto loan originations increased more for individuals who had a cash-out refinancing than a non-cash-out refinancing. Laufer and Paciorek (2016) find that looser credit standards on mortgage refinancing are associated with an increase in auto loan originations among subprime mortgage borrowers.
Our contribution to this literature is to ask whether households who experience large house price gains subsequently use home equity extraction to fund car purchases. Other than the Beraja et al. (2017) study, this particular question has not been investigated very thoroughly in the extant literature. We also consider whether the households who appear to purchase cars with home equity have characteristics that suggest borrowing constraints were a key factor in their choice of payment method.
Turning to the literature on credit constraints, several studies have documented that borrowing constraints are an important feature of the auto loan market, including Attanasio, Goldberg, and Kyriazidou (2008) and Adams, Einav, and Levin (2009); the latter study shows that minimum down payments matter a great deal to borrowers in the subprime auto loan market.
Consistent with this result, Cooper (2010) finds in some waves of the Panel Study of Income Dynamics a positive relationship between home equity extraction and automobile costs, which include down payments on loans and leases.
A piece of empirical evidence that is commonly used to support the importance of borrowing constraints is the high contemporaneous sensitivity of auto purchases to predictable changes in income. Some studies demonstrate this sensitivity by using changes in mortgage market conditions, which affect the income that is available for non-housing purchases. For example, Agarwal et al. (2017) and DiMaggio et al. (2017) find an increase in auto loan originations after a drop in household mortgage payments due to the Home Affordable Modification Program and mortgage rate resets, respectively. DiMaggio et al. (2017) find a stronger response for homeowners with lower incomes and higher loan-to-value ratios. Other examples of predictable changes in income that appear to affect car sales contemporaneously include tax refunds (Adams, Einav, and Levin 2009; Souleles 1999); economic stimulus payments (Parker et al. 2013); an increase in the minimum wage (Aaronson, Agarwal, and French 2012); an increase in Social Security benefits (Wilcox 1989); and expansions of health insurance (Leininger, Levy, and Schanzenbach 2010). We add to this literature by documenting that car purchases are responsive to increases in available liquidity in the form of equity extraction. We believe that we are also the first authors to explicitly link an easing of borrowing constraints in the mortgage market to an easing of borrowing constraints in the auto loan market.
HOME EQUITY AS A DIRECT SOURCE OF FUNDS FOR CAR PURCHASES
We begin by measuring the share of auto purchases that are funded directly by home equity. Using household surveys, we define a car purchase as funded directly with home equity if a respondent indicates that she bought a new or used car and that home equity was a source of funding. Our analysis is based on three surveys: The Reuters/University of Michigan Survey of Consumers (Michigan Survey), the Federal Reserve's Survey of Consumer Finances (SCF), and the Bureau of Labor Statistics' Consumer Expenditure Survey (CE). As described in Appendix A, the three surveys ask about home equity extraction and auto purchases in different ways but nonetheless show a similar relationship between these two events.
As shown in Table 1, households rarely report using home equity to purchase cars.
Results from the three surveys suggest that home equity extraction funds about 1 to 2 percent of both new and used car purchases. When we run these tabulations on the SCF and CE using only data for homeowners, as renters cannot have home equity, the shares of car purchases funded with home equity are only about ½ percentage point higher. 2 The surveys show that households typically fund new car purchases with auto loans, which finance around 70 percent of new car purchases and a somewhat smaller share of used car purchases (around 40 to 50 percent). Cash or some other source of funds is used to finance the remaining 25 percent or so of new car purchases and 50 to 60 percent of used car purchases. 3 Although home equity appears to directly fund only a very small share of car purchases, its use might have picked up during the housing boom and then dropped off during the financial crisis. To assess this possibility, we calculated from the CE the share of car purchases funded by a home equity loan for each year between 1997 and 2012 (Figure 1). The share of cars purchased with home equity was low over the entire period; it averaged 0.7 percent both during the housing boom (1997 to 2006) and after it (2007 to 2012). 4 There are a few reasons why it may not be surprising that the share of car buyers that report home equity as the funding source, even during the housing boom, is so low. First, personal finance professionals would generally advise against using a home equity loan to purchase a car, as these loans extend maturities beyond the lengths typically recommended for cars and thus may increase the total interest paid by consumers (Singletary 2008; The Wall Street Journal). Second, the transaction costs of extracting home equity with a second lien or mortgage refinancing generally exceed those of originating an auto loan; doing so only makes sense if the homeowner plans to extract a lot of equity at once and use much of it for another purpose. Third, the primary advantage of using home equity rather than an auto loan to finance a car purchase (the tax deductibility of the interest for loans up to $100,000) is most likely not relevant for the approximately one-third of homeowners who end up taking the standard deduction (Poterba and Sinai 2008). 5 Finally, auto loans were an attractive financing choice during much of the housing boom period: auto credit appears to have been widely available, and interest rates on new car loans were generally low and often heavily discounted by the manufacturers, especially for households with low credit risk.
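A tabulation like the one behind Table 1 and Figure 1 is straightforward to express in code. The sketch below computes the share of car purchases by funding source and year from a generic survey extract; the column names are placeholders, not the surveys' actual variable names.

```python
import pandas as pd

# Hypothetical survey extract: one row per car purchase, with the year of
# purchase and a funding-source label already coded as in the appendix
# (home_equity / auto_loan / cash_other).
purchases = pd.DataFrame({
    "year":   [2004, 2004, 2005, 2005, 2005, 2006, 2006, 2007],
    "source": ["auto_loan", "cash_other", "home_equity", "auto_loan",
               "auto_loan", "cash_other", "auto_loan", "home_equity"],
})

# Share of purchases by funding source, pooled over all years.
overall = purchases["source"].value_counts(normalize=True).mul(100).round(1)
print("Overall shares (%):\n", overall, sep="")

# Share funded with home equity by year, as in Figure 1.
by_year = (purchases.assign(he=purchases["source"].eq("home_equity"))
           .groupby("year")["he"].mean().mul(100).round(1))
print("\nHome-equity share by year (%):\n", by_year, sep="")
```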
So who uses home equity to buy cars? To answer this question and explore whether borrowing constraints are a factor, we compare the income, wealth, and credit history characteristics of households who purchase cars with home equity with those who purchase cars with auto loans or with cash or other means. We use data from the SCF for this exercise, and we limit the sample to homeowners who purchase new cars to eliminate the differences between homeowners and renters, and between new car purchasers and used car purchasers. The comparisons, which are shown in Table 2, suggest that homeowners who report using home equity to buy a car do not appear to be lacking in terms of income, wealth, or access to credit. Among new car buyers, the table shows a clear ordering by method of funding an auto purchase: households who use cash have the most wealth and access to credit, followed by households who use home equity and then households who use auto loans. Most of the differences among the three groups are statistically significant even with the very small sample of households who use home equity.
Beginning with wealth, the median of liquid assets is $42,000 for homeowners who purchase new cars with cash, $22,000 for those who use home equity, and $10,500 for those who use an auto loan. 6 Likewise, median net worth is a bit greater than $1,000,000 for cash purchasers, nearly $600,000 for home equity purchasers, and nearly $300,000 for auto loan purchasers. The ordering of median income among the groups is the same as for wealth, but the differences are not statistically significant.
Turning to access to credit, the share of homeowners who purchase new cars and answered "yes" to the survey question "Was there any time in the past five years that you thought of applying for credit at a particular place, but changed your mind because you thought you might be turned down?" is low: only 2 percent for cash and home equity purchasers and 10 percent for auto loan purchasers. The share who answered "yes" to the survey question "In the past five years, has a particular lender or creditor turned down any request you made for credit?" is somewhat higher, at 7 percent for cash purchasers, 15 percent for home equity purchasers, and 20 percent for auto loan purchasers. But the differences among the groups are not statistically significant for this measure. By both measures, homeowners who purchase new cars with home equity do not appear to be credit constrained.
Demographic characteristics that are correlated with credit access (age, education, and stock ownership) similarly suggest that households who purchase new cars with home equity do not stand out as being credit constrained. Home equity purchasers are around 50 years old, on average, somewhat younger than cash purchasers (60 years old) and about the same age as auto loan purchasers. The share of home equity purchasers with a college education is about 43 percent, below the share of cash purchasers (54 percent) and about the same as the share of auto loan purchasers. The share of home equity purchasers that own stock is 39 percent, below the share of cash purchasers (48 percent) and above the share of auto loan purchasers (24 percent).
HOME EQUITY EXTRACTION AS A FACILITATOR OF AUTO LOANS
Although few households report directly using home equity to purchase a car, a larger number of households might indirectly use home equity to purchase a car by using the proceeds of a recent equity extraction to overcome down payment requirements or other credit constraints in the auto loan market. In this section, we use an event study setup to examine this indirect channel and estimate whether homeowners are more likely to take out an auto loan right after extracting home equity. 7 As described in the literature review, a common way to detect the presence of borrowing constraints is to test whether auto purchases rise after a household receives a predictable boost in income. Using a similar logic, we use an event study setup to estimate the effect of home equity extraction on auto loan originations via the route of alleviating borrowing constraints in the auto loan market. Specifically, we measure the additional increase in the probability of originating an auto loan after home equity is extracted relative to the probability observed before equity is extracted.
The identification strategy of the event study setup assumes that the effects of common shocks to home equity extraction and auto loan originations (such as a wealth effect associated with a rise in house prices or a price effect associated with a change in interest rates) are equally relevant for auto loan originations before and after home equity is extracted. In contrast, when equity extraction facilitates an auto loan origination because it eases a constraint in the auto lending market, the auto loan origination must follow the extraction.
Our analysis uses credit bureau data from the Federal Reserve Bank of New York Consumer Credit Panel (CCP). 8 The panel is a randomly selected anonymized 5 percent sample of credit records from the credit bureau Equifax. The data include individuals' credit scores, debt balances, payment histories, age, and geographic location (down to the Census block level).
Individuals are followed over time with quarterly snapshots of their data, although the sample is periodically refreshed so that it remains representative of all individuals with a credit record and a social security number. We use data from 1999 to 2015, and for computational ease we select a 20 percent subsample; all told, our dataset is a 1 percent sample of the universe of credit records. An observation i in our sample is the data for a given individual in a given quarter.
We construct a sample of individuals who could plausibly have extracted equity at any time in an event window that spans three quarters before and three quarters after the quarter in which we observe the individual. Those individuals are borrowers who have mortgage debt and are current on that debt throughout the event window. For each event window we drop from the sample households who appear to have purchased a new residence (as determined by a change in the census block of residence from quarter to quarter) or who appear to have been property investors (as determined by the presence of more than one first lien mortgage or home equity line of credit on the credit bureau file). 9 The resulting dataset has approximately 31.7 million person-quarter observations. Auto loan originations and home equity extractions are not directly reported in the CCP data, and so we infer these extensions of new credit from the number of open accounts for each borrower and their loan balances. For auto loans, we infer that a new loan was originated when the number of open auto loan accounts for a borrower increases from one quarter to the next or when the borrower's total indebtedness on non-delinquent auto loans rises. 10 As with mortgages, we do not count a balance increase on delinquent accounts as a loan origination because it may reflect overdue interest or fees being rolled into the loan balance.
To infer that a home equity extraction took place, we search our dataset for borrowers with mortgage debt in two consecutive quarters and with an increase in total mortgage debt of at least 5 percent (and at least $1,000) from the first to the second quarter. Because our dataset includes no borrowers who purchase new residences, appear to be property investors, or have a delinquent mortgage, none of the increases in mortgage balances in our dataset are associated with these activities. In addition, we flag apparent changes in the loan servicer, which can result in the reported balance on the loan dropping to zero for a quarter until the new servicer begins reporting to the credit bureau. In these cases, we replace the zero balance with the average of the balances from the prior and subsequent quarters and therefore do not record these servicing transfers as equity extractions. 11 Data limitations preclude us from following a similar procedure for servicing transfers associated with auto loans. 12 The reason we drop borrowers who purchase residences, are property investors, or are delinquent on their mortgages from our dataset, despite the fact that borrowers in these situations may extract home equity, is that retaining these borrowers would bias our estimates downward. The source of the bias is the uncertainty present in these situations about whether increases in mortgage balances imply that home equity was extracted. Therefore, keeping borrowers with these situations in our sample and assuming that all increases in mortgage balances are not equity extractions would bias downward the relationship we estimate between equity extraction and auto loan origination. We judge the simplest solution to be to drop these households entirely.
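The origination and extraction flags described above are simple rule-based transformations of quarterly balances. A minimal pandas sketch is below; the frame layout, column names, and the one-quarter interpolation are simplified assumptions following the text, not the authors' code.

```python
import pandas as pd

# Hypothetical quarterly credit panel: one row per person per quarter.
df = pd.DataFrame({
    "person": [1, 1, 1, 1, 1],
    "quarter": pd.PeriodIndex(["2005Q1", "2005Q2", "2005Q3", "2005Q4", "2006Q1"], freq="Q"),
    "n_auto_accounts": [1, 1, 2, 2, 2],
    "auto_balance": [8000, 7500, 22000, 21000, 20000],  # non-delinquent auto debt
    "mortgage_balance": [150000.0, 149000.0, 158000.0, 0.0, 157000.0],  # 2005Q4 zero: servicer transfer
}).sort_values(["person", "quarter"])

# Smooth apparent servicer transfers: a single-quarter zero balance is
# replaced by the average of the prior and subsequent quarters, so the
# rebound is not misread as an equity extraction.
zeroed = df["mortgage_balance"].mask(df["mortgage_balance"].eq(0))
df["mortgage_balance"] = zeroed.interpolate(limit=1)

g = df.groupby("person")
prev_n = g["n_auto_accounts"].shift()
prev_auto = g["auto_balance"].shift()
prev_mtg = g["mortgage_balance"].shift()

# Auto loan origination: more open auto accounts than last quarter, or a
# rise in (non-delinquent) auto balances.
df["auto_orig"] = (df["n_auto_accounts"] > prev_n) | (df["auto_balance"] > prev_auto)

# Home equity extraction: mortgage balance up by at least 5 percent and
# at least $1,000 from the prior quarter.
growth = df["mortgage_balance"] - prev_mtg
df["extraction"] = (growth >= 1000) & (growth / prev_mtg >= 0.05)

print(df[["quarter", "auto_orig", "extraction"]])
```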
As a baseline, we use our final dataset to estimate equation (1) and determine the likelihood that an individual takes out an auto loan, conditional on whether she has extracted home equity recently or will do so in the near future. The dependent variable Auto_i equals 1 if she originated an auto loan in the quarter associated with observation i and 0 otherwise. The independent variables include an intercept and a sequence of seven indicator variables that correspond to the three quarters before the reference quarter associated with observation i (q = -3, -2, or -1), the reference quarter itself (q = 0), and the three quarters after the reference quarter (q = 1, 2, or 3). Each indicator variable equals 1 if the individual extracted home equity in that quarter and zero otherwise.
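The displayed equation did not survive extraction; based on the variable definitions above, a plausible reconstruction of equation (1) is (with Extract_{i,q} as a placeholder name for the extraction-timing indicators):

```latex
\mathrm{Auto}_{i} \;=\; \alpha \;+\; \sum_{q=-3}^{3} \beta_{q}\,\mathrm{Extract}_{i,q} \;+\; \varepsilon_{i} \qquad (1)
```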
We estimate equation (1) as a linear probability model and report the coefficient estimates in the first column of Table 3. As indicated by the estimate of the intercept, about 3.6 percent of individuals who did not extract home equity at any point in the relevant seven-quarter window originated an auto loan in the reference quarter. The estimates of the βq coefficients, when q < 0, measure the additional probability that an individual takes out an auto loan in the reference quarter if they extracted home equity q quarters ago; when q > 0, these estimates measure the additional probability that an individual takes out an auto loan in the reference quarter if they will extract home equity q quarters in the future. Individuals are about 1.1 percentage points more likely to take out an auto loan if they extracted home equity three or two quarters ago (β-3 and β-2). Individuals are 1.4 and 1.8 percentage points more likely to originate an auto loan if they extracted equity one quarter earlier or in the same quarter (β-1 and β0).
Individuals are about 1.1 percentage points more likely to originate an auto loan if they will extract home equity either 1, 2, or 3 quarters in the future (β1, β2, and β3); estimates of these coefficients are essentially identical to those for β-3 and β-2.
Estimates of all seven β coefficients are positive and statistically different from zero, indicating that individuals who have extracted home equity recently or will do so in the near future are more likely to take out an auto loan than are other individuals. This relationship may reflect factors such as rising housing wealth or low interest rates overall, which boost the likelihood of both equity extraction and auto loan origination, or characteristics of the borrowers that affect the likelihood of both activities. 13 In addition, the β coefficients that correspond to subsets of the sample that extracted home equity during the reference quarter or one quarter before it are larger than the other β coefficients, consistent with equity extraction easing credit constraints in the auto loan market.
As described earlier, our identification stems from the timing of events: if borrowers use the proceeds of home equity extraction to overcome credit constraints in the auto loan market, they cannot originate the auto loan before receiving home equity proceeds. Assuming that other factors that affect auto loan originations do not change systematically around the point of extraction, we interpret the incremental rise in the probability of originating an auto loan after the equity extraction as the causal effect of extraction on auto loan origination. In equation (1), the β-1 coefficient estimate is 0.3 percentage point higher than that for the other βs, and the estimate of β0 is 0.7 percentage point higher, yielding a total effect of 1.0 percentage point. The difference between the β0 and β-1 estimates is statistically significant, and the implied total effect is large relative to the 4 percent unconditional probability in this sample of originating an auto loan in a typical quarter. 14 The magnitude of the effect that we measure is also similar to the increase in the probability of taking out an auto loan after equity extraction measured in Beraja et al. (2017). 15 One possible concern with using this event study setup with these data is that quarterly observations may be too coarse to assert that home equity extractions predated the auto loan originations when both occurred in the same quarter. The results in Beraja et al. (2017) assuage this concern. In their credit bureau data, which unlike ours are measured at a monthly frequency, auto loan originations begin to rise in the month after equity extraction, with the peak occurring two months after the extraction.
Next, we add person fixed effects to the probability model to control for each individual's innate probability of taking out an auto loan. Other than the intercept that now varies across individuals, this model, shown in equation (2), is the same as equation (1).
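As a concrete sketch, the fixed-effects linear probability models in equations (2) and (3) could be estimated along the following lines. This is a minimal illustration using the linearmodels package with synthetic data; the variable names and the use of quarter-level (rather than year) time effects are simplifying assumptions, not the authors' actual code.

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(1)

# Hypothetical person-quarter panel with the auto-origination outcome and
# the seven equity-extraction timing indicators (q = -3, ..., 3).
n_people, n_quarters = 500, 20
idx = pd.MultiIndex.from_product(
    [range(n_people), pd.date_range("2005-01-01", periods=n_quarters, freq="QS")],
    names=["person", "quarter"],
)
df = pd.DataFrame(index=idx)
df["auto_orig"] = rng.binomial(1, 0.04, len(df)).astype(float)
for q in range(-3, 4):
    df[f"extract_q{q}"] = rng.binomial(1, 0.01, len(df)).astype(float)

# Linear probability model with person (entity) and time fixed effects,
# standard errors clustered by person.
exog = df[[f"extract_q{q}" for q in range(-3, 4)]]
model = PanelOLS(df["auto_orig"], exog, entity_effects=True, time_effects=True)
res = model.fit(cov_type="clustered", cluster_entity=True)
print(res.params)
```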
The coefficient estimates from this specification are shown in the second column of Table 3. The estimates of β-3, β-2, β1, β2, and β3 are around 0.4 percentage point, β-1 is 0.6 percentage point, and β0 is 1.0 percentage point. These β coefficient estimates are all about 0.7 percentage point below the corresponding estimates in equation (1), a comparison that indicates that individual heterogeneity explains much of the correlation between auto loan origination and home equity extraction. However, even with the addition of the fixed effects, individuals are still 0.8 percentage point more likely to originate an auto loan in the reference quarter if they extract home equity in that quarter or the one before it, and this increase remains statistically significant.
So the conclusion that equity extraction eases borrowing constraints in the auto loan market is robust to the inclusion of person fixed effects.
In equation (3) we add year fixed effects to the model in equation (2) to capture omitted factors that vary over time but not across individuals; examples of such factors include the level of interest rates or the national unemployment rate, which affect both equity extraction and auto loan originations. Coefficient estimates from this specification are in the third column of Table 3. The estimates of the β-3, β-2, β1, β2, and β3 coefficients are even lower in this specification than in equation (2), by around 0.2 percentage point. Although the coefficient estimates are still statistically significantly different from zero, for practical purposes home equity extraction affects the probability of originating an auto loan in this specification only if the extraction occurs in the reference quarter or the quarter before it. The probability of originating an auto loan in the reference quarter is 0.7 percentage point higher if equity is extracted in the same quarter than if it is extracted in the next quarter, and the probability is 0.3 percentage point higher if equity is extracted one quarter earlier. 16 Finally, in equation (4) we add the one-year change in a house price index for the borrower's Zip code, ΔHPI_i, to the model in equation (3). Changes in local house prices can vary considerably across the country and therefore are only partly captured by the year fixed effects. The Zip code house price indexes are from CoreLogic, and we are able to match these indexes to 72 percent of the borrowers in the sample. 17 We include this specification to take into account the tendency of some households to make a number of home-price-appreciation-related financial decisions at one time. If paying attention is costly, for example, households might react to an increase in house prices by extracting equity and originating an auto loan in the same quarter. In this case, it would be the fixed cost of paying attention rather than the presence of borrowing constraints that explains the pattern of the relevant beta coefficient estimates in equations (1) through (3). Coefficient estimates from this house-price-augmented specification are in the fourth column of Table 3. The estimate of γ indicates that the association of regional house price changes with auto loan originations is statistically significant but very small; the 0.00007 coefficient estimate means that even a fairly large one-year house price increase of 19 percent (the 95th percentile of one-year house price increases in our sample) is associated with an increase in the probability of originating an auto loan of only 0.1 percentage point. In a more flexible nonlinear specification (not shown), we allow γ to vary across six increments of house price increases and similarly find that living in a Zip code with the largest increase (a 10 percent or greater increase from the previous year) is associated with only a 0.1 percentage point increase in the probability of originating an auto loan.
Importantly, the estimates of the sequence of β coefficients are essentially unchanged in this specification relative to equation (3). As before, the coefficient estimates imply that borrowers are 0.7 percentage point more likely to originate an auto loan in the reference quarter if they extract home equity in the same quarter and 0.3 percentage point more likely to do so if they extract equity one quarter earlier. Similarly, characterizing house prices with the nonlinear transformation described above has little effect on the β coefficient estimates, and the same is true if we use three year changes in the regional house price indexes in place of one year changes.
The various robustness checks support our conclusion that the rise in auto loan originations that occurs during and shortly after a home equity extraction stems from an easing of credit constraints.
HOW EQUITY EXTRACTION EASES CREDIT CONSTRAINTS IN THE AUTO LOAN MARKET
If home equity extraction boosts the likelihood of taking out an auto loan because it eases credit constraints in the auto loan market, we would expect the relationship to be stronger for borrowers with lower credit scores. We look for this corroborating evidence by adding variables to the event study probability model that allow the intercept and the coefficients on the equity extraction time indicators to vary across six credit score categories, indexed by c. 18 As in equation (3), this specification includes person and year fixed effects. The βq,c coefficient estimates are graphed in Figure 2. 19 The probability of taking out an auto loan in the reference quarter is higher for borrowers in all of the credit score groups who extract home equity in the same quarter or one quarter earlier, but the magnitudes of the increases vary substantially. These probabilities are shown in the second column of Table 4.
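The display for this specification (presumably equation (5) in the paper's numbering, to judge from the jump between equations (4) and (6) in the text) is also missing. From the description, one plausible form, keeping the Extract placeholder from the reconstruction of equation (1), is:

```latex
\mathrm{Auto}_{i} \;=\; \alpha_{c(i)} \;+\; \sum_{c=1}^{6}\sum_{q=-3}^{3} \beta_{q,c}\,\mathrm{Extract}_{i,q}\,\mathbf{1}[c(i)=c] \;+\; \text{person and year fixed effects} \;+\; \varepsilon_{i} \qquad (5)
```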
For comparison, the first column shows the probability that a borrower from each group originates an auto loan in the reference quarter if they extract home equity during the next quarter. As in the earlier exercises, this probability represents the rate at which borrowers take out an auto loan if they extract equity but do not face borrowing constraints. To gauge the contribution of the role of home equity in easing credit constraints for each group, column 3 shows the percent increase in the probability of originating an auto loan associated with having extracted equity before or during the reference quarter as opposed to after it.
Individuals with subprime credit scores who extract home equity after the reference quarter have a 5.0 percent probability of originating an auto loan in the reference quarter. If these individuals instead extract equity during the reference quarter or the quarter before it, that probability is 7.1 percent, which represents a 42 percent increase in the probability of originating an auto loan. In contrast, individuals with the highest (ultra-prime) credit scores who extract home equity after the reference quarter have a 5.8 percent probability of originating an auto loan in the reference quarter, and that likelihood only edges up to 5.9 percent if the home equity extraction instead occurs during the same quarter or one quarter earlier. In relative terms, borrowing constraints have barely any impact on the rate at which this group originates auto loans. For individuals with middle credit scores, extracting equity before or during the reference quarter increases the probability of originating an auto loan by between 22 and 27 percent. The larger effect observed for subprime individuals relative to other groups corroborates our conclusion that credit constraints underlie the relationship between equity extraction and auto loan originations identified by the event study.
Although our data do not reveal much about how home equity extraction eases credit constraints in the auto loan market, one theory we can test is whether borrowers who extract home equity appear to use the proceeds to pay down high-interest consumer debt. Bhutta and Keys (2016) show that credit card debt decreases only slightly after a home equity extraction, on average, but the decreases are larger and more persistent for individuals with lower credit scores.
Such a maneuver could make a household a better credit prospect by reducing its credit utilization rate (which counts toward a borrower's credit score) and its debt service relative to income (which might be a factor in auto loan underwriting). 20 Independent of the lender's determination, the borrower might feel a greater capacity to take out an auto loan after paying down higher-interest debt.
To look for evidence of consumer debt paydown, we construct an indicator variable CC_Pay_i that equals 1 if the individual pays down half or more of their existing uncollateralized consumer debt in the quarter associated with observation i, and 0 otherwise. 21 We then assess whether individuals who took this action are more likely than other equity extractors to purchase cars. The exercise is shown as equation (6), which includes a term that interacts CC_Pay_i with an indicator of whether equity was extracted in the reference quarter. 22 A positive and significant estimate of η would suggest that consumer debt paydown is part of the relationship between home equity extraction and auto loan originations; we find no evidence of such an effect. This result is not surprising, as the existing literature suggests that the constraint most likely eased by equity extraction is down payment requirements. These studies identify down payments as a major credit constraint in the auto lending market (Adams, Einav, and Levin 2009) and find a relationship between equity extraction and increased spending on auto loan down payments (Cooper 2010).
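The display for equation (6) is likewise missing; from the description, a plausible reconstruction (again using the Extract placeholder, with α_i and δ_t denoting the person and year fixed effects) is:

```latex
\mathrm{Auto}_{i} \;=\; \alpha_i \;+\; \delta_t \;+\; \sum_{q=-3}^{3} \beta_{q}\,\mathrm{Extract}_{i,q} \;+\; \eta\,\bigl(\mathrm{CC\_Pay}_{i}\times \mathrm{Extract}_{i,0}\bigr) \;+\; \varepsilon_{i} \qquad (6)
```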
HOW IMPORTANT IS EQUITY EXTRACTION TO THE AUTO LOAN MARKET?
Having established that home equity extraction has a statistically significant effect on auto loan originations, we now ask whether equity extraction plays a quantitatively important role in aggregate auto loan originations. To begin, we first estimate the volume of home equity extractions in our data. To calculate the effect of these equity extractions on car purchases, we apply to the extraction volumes the coefficients from our preferred specification in equation (3), which imply that home equity extraction raises the probability of an auto loan origination by 0.9 percentage point. (This is the incremental probability of an auto loan origination in a given quarter associated with households who extract equity in the same quarter (β0 - β1 = 0.65) plus the incremental probability associated with those who extract equity in the preceding quarter (β-1 - β1 = 0.25).) 25
CONCLUSIONS
In this paper, we demonstrate that home equity extraction does not appear to be the direct source of funding for many car purchases. Estimates from three nationally representative surveys indicate that very few households purchase cars directly with home equity. Further, the share of those who report doing so does not appear to vary with the housing cycle.
However, home equity extraction is associated with an increase in auto loan originations.
Using an event study framework with credit bureau data, we show that home equity extraction increases the likelihood of originating an auto loan in a statistically significant and causal way.
We also show that this increase is distinct from the effects of other factors that cause equity extractions and auto loan originations to move together, such as the changes in house prices and interest rates, and that the effect of home equity extraction on auto loan originations is more pronounced for borrowers with lower credit scores. Our results suggest that home equity extraction increases auto loan originations by easing down payment and other credit constraints in the auto loan market. In contrast, we find no evidence that equity extraction increases auto loan originations by allowing households to pay down high-interest debt and thereby free up space in their budgets for auto loan payments. Nonetheless, when we put the effects we estimate into the context of the U.S. auto loan market, the number of additional auto loan originations in recent years that we can attribute to home equity extraction is very small.
Our results cast doubt on the narrative that home equity extraction was an important source of funds for auto purchases during the housing boom in the mid-2000s, but they do not imply that housing wealth was inconsequential for these purchases. At least two other (not mutually exclusive) channels contribute to the relationship between auto purchases and home equity. First, households are wealthier when their homes increase in value, and their demand for cars should also increase. Rising housing wealth may have boosted car purchases considerably, even if these households did not report purchasing cars directly with home equity; different types of wealth are, to some extent, interchangeable. For example, paying for other goods and services with home equity may free up balance sheet space to purchase a car with cash or an auto loan.
Second, home equity might indirectly facilitate auto loans if lenders are more willing to extend credit to households in neighborhoods with rising house prices. Home equity is typically not considered directly in the underwriting of auto loans, but lenders may take into account local economic conditions, which can be correlated with house prices. Alternatively, lenders may have an easier time raising capital in areas of the country that are booming. Households in these markets may also be more likely to retain their good credit standing when their income is disrupted, because they can more easily refinance their mortgages or sell their homes. Ramcharan and Crowe (2013) show that peer-to-peer lenders were less willing to extend unsecured credit to homeowners in areas with declining house prices; a similar dynamic may occur in the auto credit market, although we are not aware of any research on this topic.
A.1 The University of Michigan Surveys of Consumers (Michigan survey)
The Michigan survey data come from a special module that the Federal Reserve has sponsored three times per year since 2003. Survey respondents are asked if they purchased a car in the previous six months, and if so, whether they borrowed money to purchase the car or paid cash. If the answer is "cash," respondents are asked whether the source of the cash was savings or investments, a home equity loan, a mortgage refinancing, or "somewhere else." 27 Respondents can cite multiple sources of the cash, although this is rare. We define the car purchase as a home equity extraction if the respondent identifies a home equity loan or mortgage refinancing as the source of the cash. We define the purchase as an auto loan if the respondent indicates that a car was purchased with borrowed money. We define all other purchases as cash/other. The data span the 2003 to 2014 period and include 2,388 purchases of new and used cars.
A.2 Consumer Expenditure Survey (CE)
In the CE, households are asked about the vehicles that they currently own. We focus on cars purchased in the survey year. For each car owned, households are asked whether any portion of the purchase price was financed. 28 If so, they are asked whether the source of credit was a home equity loan. Households are not asked if the car was purchased with the proceeds from a cash-out refinancing, and so we will miss these purchases.
We define the purchase as a home equity extraction if the respondent identifies a home equity loan as a source of credit. We define the purchase as an auto loan if the respondent financed the purchase but does not indicate they used a home equity loan. We define all other purchases as cash/other. The data cover the 1997 to 2012 period and include 28,290 car purchases.
A.3 Survey of Consumer Finances (SCF)
In the SCF, as in the CE, households are asked about the cars that they own at the date of the interview. We focus on cars that were likely purchased recently. Unlike the CE, the SCF does not ask households whether their cars were purchased with home equity, and so we infer these purchases when an SCF respondent both appears to have purchased a car recently and reports having used the proceeds from a recently originated cash-out refinancing, second or third lien, or HELOC to buy a car. 30 If a household does not appear to have used home equity but does report having an auto loan outstanding, we assume the car was purchased with an auto loan. All other purchases are defined as cash/other.
One potentially important consequence of using the definitions described above is that households who buy the newest models early in the model year are likely overrepresented in our SCF sample of new car purchases. And, as noted earlier, we also miss a few purchases of older car models. All told, these factors may bias upward some of the sample statistics on new car buyers, such as average income and wealth, because new car prices decline over the course of the model year (Aizcorbe, Bridgman, and Nalewaik, 2009) and can drop when newer models are introduced.

The shares for these states averaged 0.4 percent from 1997 to 2006 and 0.8 percent from 2007 to 2012. These tabulations are based on smaller samples than the overall shares.
5 The Tax Cuts and Jobs Act of 2017 suspends the tax deductibility of this interest from 2018 to 2026. Under the provisions of the law, the interest on home equity loans is only tax deductible if the loan is "used to buy, build or substantially improve the taxpayer's home that secures the loan." See https://www.irs.gov/newsroom/interest-on-home-equity-loansoften-still-deductible-under-new-law for a summary of the changes.
27 According to the Michigan survey staff, some respondents who purchase autos with home equity appear to consider these purchases as funded with "borrowed" money rather than "cash." If so, the survey instrument will miss some car purchases funded by home equity extraction. The survey staff catch many of these instances and recode the answers as cash/home equity. We do not think that this aspect of the question structure leads to a significant understatement of home equity funded purchases because the Michigan results are in line with the results from the other two surveys, which have different question structures.
28 The CE asks households a separate set of questions about the vehicles they purchased during the reference period. Our analysis is based on the set of questions about vehicles owned (in the EOVB files) because these data include questions about how the purchases were financed. 30 We consider the origination of a cash-out refinancing or second lien to be recent if it occurred in the survey year or in the year prior. We include the prior year because, as described earlier, our sample of recent vehicle purchases likely includes some cars purchased in the previous year, and because there may be a lag between the cash-out refinance and the purchase of the car. We assume that a HELOC funded a recent car purchase if the proceeds of the most recent draw were used for a car. The SCF does not ask when that draw took place; depending on the timing, our definition could either understate or overstate the share of vehicle purchases funded with HELOCs.
31 The SCF and CE samples also miss vehicles purchased during the calendar year but sold (or scrapped) before the date of the survey. We assume, given our short lookback period, that this bias is small.

Table note: The table reports estimates of equation (5) estimated with FRBNY CCP data. The first column in the table shows estimates of α_c + β_{1,c} for credit score groups 2 through 6. The second column shows estimates of α_c + β_{1,c} + (β_{−1,c} − β_{1,c}) + (β_{0,c} − β_{1,c}) for credit score groups 2 through 6. The estimates in the first and second columns also include the intercept generated by STATA's areg procedure. The third column is the percent change of the second column from the first column. The credit score is the Equifax 3.0 risk score.
Figure 1: Share of Cars Purchased with a Home Equity Loan
Note: Authors' calculations based on data from the Consumer Expenditure Survey.
Vascular endothelial growth factor modified macrophages transdifferentiate into endothelial-like cells and decrease foam cell formation
Macrophages are largely involved in the whole process of atherosclerosis, from the initial lesion to the advanced lesion. Endothelial disruption is the initial step, and macrophage-derived foam cells are the hallmark of atherosclerosis. Promotion of vascular integrity and inhibition of foam cell formation are two important strategies for preventing atherosclerosis. How can we inhibit, or even reverse, the negative role of macrophages in atherosclerosis? The present study was performed to investigate whether overexpressing endogenous human vascular endothelial growth factor (VEGF) could facilitate transdifferentiation of macrophages into endothelial-like cells (ELCs) and inhibit foam cell formation. We demonstrated that VEGF-modified macrophages which stably overexpress human VEGF (hVEGF165) display a high capability to alter their phenotype and function to those of ELCs in vitro. Exogenous VEGF could not replace endogenous VEGF in inducing the transdifferentiation of macrophages into ELCs in vitro. We further showed that VEGF-modified macrophages significantly decreased cytoplasmic lipid accumulation after treatment with oxidized LDL (ox-LDL). Moreover, down-regulation of CD36 expression in these cells was probably one of the mechanisms of the reduction in foam cell formation. Our results provide in vitro proof of VEGF-modified macrophages as atheroprotective therapeutic cells, acting by both promotion of vascular repair and inhibition of foam cell formation.
Introduction
Atherosclerosis is the primary cause of mortality and morbidity in cardiovascular disease, which is the leading cause of death in industrialized societies [1,2]. It is generally recognized that endothelial dysfunction or disruption is the initial process of atherosclerosis [3][4][5]. Endothelial progenitor cells (EPCs) are capable of facilitating re-endothelialization through direct differentiation into endothelial cells and/or via paracrine mechanisms [6,7]. Moreover, recent research indicates that endothelial-to-mesenchymal transition (EndMT) contributes to atherosclerotic pathobiology and is associated with complex plaques that may be related to clinical events [8]. Therefore, maintenance of endothelial homeostasis and integrity by promoting early repair is an important strategy for preventing atherosclerosis [9,10].
Monocytes are recruited from peripheral blood and attach to the activated/damaged endothelium, then migrate to the subendothelial space and differentiate into macrophages. The uptake of oxidized LDL (ox-LDL) by monocyte-derived macrophages induces foam cell formation, which is a hallmark of the development of atherosclerosis [11,12]. Inhibition of foam cell formation at an early stage is another promising approach for preventing the progression of atherosclerosis.
Cells of the monocyte-macrophage lineage are characterized by considerable diversity and plasticity [13]. Monocytes are not only the precursors of lipid-laden foam cell macrophages, but also display high developmental plasticity to differentiate under appropriate stimulation into different cell types, including endothelial lineage cells [14]. Can we take advantage of the potential developmental relationship between monocytes/macrophages and endothelial lineage cells to alter some properties of macrophages and thereby inhibit, or even reverse, their negative role in atherosclerosis?
In our previous research, we transiently transfected mouse primary macrophages with a human vascular endothelial growth factor 165 (hVEGF 165 ) plasmid and found that they could transdifferentiate into endothelial-like cells (ELCs) and incorporate into newly formed vessels [15,16]. This indicated that macrophages overexpressing VEGF might have a promising role like EPCs for cell-based therapy. However, the primary macrophages transiently transfected with hVEGF 165 only expressed the target protein for a few days. To further investigate the effect of stable endogenous VEGF on macrophages, we have successfully established hVEGF 165 -ZsGreen1-RAW264.7 cells, a mouse macrophage cell line stably overexpressing hVEGF 165 via a lentiviral vector [17].
Then, the aims of the present study were to: (i) identify the phenotype and function of hVEGF 165 -ZsGreen1-RAW264.7 cells to determine whether they transdifferentiate into ELCs; (ii) investigate the capability of hVEGF 165 -ZsGreen1-RAW264.7 cells to become foam cells and the underlying mechanism.
qRT-PCR
To detect the level of some endothelial marker genes, mRNA expression of VEGF receptor-2/fetal liver kinase 1 (FLK-1), von Willebrand factor (vWF), endothelial NO synthase (eNOS), vascular endothelial-cadherin (VE-cadherin) and Tie-2 was evaluated by qRT-PCR using the THUNDERBIRD SYBR qPCR Mix (Toyobo, Japan) with gene-specific primers on a 7500 Fast Real-Time PCR system (Applied Biosystems, Alameda, CA, U.S.A.). The expression of CD36 mRNA in ox-LDL-induced hVEGF 165 -ZsGreen1-RAW264.7 cells was also detected as above. Specific fragments were amplified, and β-actin was also amplified to serve as an internal standard. The results were normalized to β-actin and presented as fold differences relative to the RAW264.7 control. All primer sequences are shown in Supplementary Table S1. All experiments were repeated three times, and the representative data are shown.
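The text reports fold differences normalized to β-actin relative to the RAW264.7 control but does not spell out the quantification formula. A minimal sketch assuming the common 2^(−ΔΔCt) method is shown below; the Ct values are hypothetical.

```python
import numpy as np

def fold_change_ddct(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
    """Relative expression by the 2^-ddCt method (an assumed choice)."""
    dct_sample = np.asarray(ct_target) - np.asarray(ct_actin)        # normalize to beta-actin
    dct_control = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_actin_ctrl))
    ddct = dct_sample - dct_control                                  # relative to RAW264.7 control
    return 2.0 ** (-ddct)                                            # fold difference

# Illustrative Ct triplicates (hypothetical numbers)
print(fold_change_ddct([22.1, 22.3, 22.0], [17.5, 17.6, 17.4],
                       [26.0, 25.8, 26.1], [17.4, 17.5, 17.6]))
```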
Western blot analysis
To detect the level of some endothelial marker proteins, expression of FLK-1, vWF and eNOS was examined by Western blot analysis. Briefly, cells were lysed in RIPA buffer (Sigma-Aldrich, MO, U.S.A.), followed by protein quantitation using the Pierce BCA Protein Assay Kit (Thermo Fisher Scientific, Pittsburgh, PA, U.S.A.). Cell lysates containing the same amount of protein were separated by SDS/PAGE, followed by transfer onto nitrocellulose membranes. After non-specific blocking with 5% non-fat milk in TBS containing 0.05% Tween 20 at room temperature for 1 h, the membranes were incubated with specific antibodies against mouse FLK-1 (Invitrogen; 1:1000), vWF (Santa Cruz Biotechnology, Santa Cruz, CA, U.S.A.; 1:600) and eNOS (Abcam, Cambridge, MA, U.S.A.; 1:500) at 4°C overnight. Subsequently, membranes were incubated with the appropriate secondary antibody, and protein bands were visualized using ECL (Thermo Fisher). Bands were quantitated by densitometry and normalized to those of β-actin (Abcam; 1:1000). The expression of CD36 protein in ox-LDL-induced hVEGF 165 -ZsGreen1-RAW264.7 cells was detected with a specific antibody (Abcam; 1:1000) as above. All experiments were repeated three times and the representative data are shown.
In vitro angiogenesis assay
The formation of tubular-like structures was assessed using Matrigel (BD Biosciences, San Jose, CA, U.S.A.) in an in vitro angiogenesis assay. A 96-well plate was precoated with Matrigel and incubated at 37°C for 2 h prior to the addition of 2 × 10^4 cells/well suspended in 100 μl conditioned medium. Following an additional incubation for 24 h, three fields were chosen at random and the formation of tubular-like structures was observed using an inverted microscope (IX51; Olympus, Japan).
ELISA for VEGF concentration
After 72 h of culture, VEGF concentration in supernatants was measured in triplicate using the VEGF human ELISA kit and VEGF mouse ELISA kit (both from Abcam), respectively. Briefly, VEGF standards and samples were pipetted into wells and VEGF present in a sample was bound to the wells by the immobilized antibody. The wells were washed and biotinylated anti-human or anti-mouse VEGF antibody was added. After washing away unbound biotinylated antibody, peroxidase (HRP)-conjugated streptavidin was pipetted into the wells. The wells were again washed, a TMB substrate solution was added to the wells and color developed in proportion to the amount of VEGF bound. The Stop Solution changed the color from blue to yellow, and the intensity of the color was measured at 450 nm. VEGF concentrations were calculated (in pg/ml) with the standard curve.
Foam cell formation assay
An in vitro foam cell formation assay was performed as described previously with minor modification [18]. Briefly, hVEGF 165 -ZsGreen1-RAW264.7, ZsGreen1-RAW264.7, or RAW264.7 cells were cultured in 12-well plates in serum-free medium and treated with 100 μg/ml ox-LDL (Yiyuan Biotechnologies, Guangzhou, China) for 24 h to induce foam cell formation. Oil red O powder (Sigma-Aldrich, MO, U.S.A.) was dissolved in isopropanol (0.5%; Sigma-Aldrich). The stock was then diluted to a 0.3% oil red O solution with distilled H2O and filtered through a 0.22-μm filter. After fixation with 4% paraformaldehyde for 1 h at room temperature, cells were stained with the oil red O solution for 5 min to detect lipid accumulation. The cells were then observed with a microscope, and those containing oil red O-positive fat droplets were considered foam cells.
Quantitation of total lipid content
Quantitation of lipid accumulation in cells was measured based on a previously published protocol [18]. Cells stained with oil red O were treated with 1 ml of 60% isopropanol for 1 h to redissolve the oil red O and absorbance was detected at 518 nm through a spectrophotometer.
Statistical analysis
Data are presented as means ± S.D. The statistical significance of differences between groups was analyzed using one-way ANOVA with Tukey's post hoc test. Values of P<0.05 were considered significant.
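As a sketch of this analysis, the snippet below runs a one-way ANOVA followed by Tukey's post hoc test on hypothetical absorbance readings for three groups; the numbers are illustrative, not the study's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical A518 absorbance readings (total lipid content) per group
raw = [0.82, 0.79, 0.85]
zs = [0.80, 0.83, 0.78]
vegf = [0.41, 0.38, 0.44]

f, p = stats.f_oneway(raw, zs, vegf)       # one-way ANOVA across the groups
print(f"F = {f:.2f}, p = {p:.4f}")

values = np.concatenate([raw, zs, vegf])
groups = ["RAW264.7"] * 3 + ["ZsGreen1"] * 3 + ["hVEGF165"] * 3
print(pairwise_tukeyhsd(values, groups))   # Tukey's post hoc pairwise test
```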
Stable overexpression of VEGF induces macrophages to acquire phenotypic characteristics of ELCs
Compared with ZsGreen1-RAW264.7, untransfected RAW264.7, or VEGF-treated RAW264.7 cells, all of which are small and round in shape, hVEGF 165 -ZsGreen1-RAW264.7 cells appeared elongated and spindle-shaped, like endothelial cells (Supplementary Figure S1). To investigate the impact of autocrine VEGF on macrophages, the expression of endothelial cell markers in hVEGF 165 -ZsGreen1-RAW264.7 cells was investigated by qRT-PCR and Western blot. ZsGreen1-RAW264.7 or untransfected RAW264.7 cells were used as controls. Meanwhile, RAW264.7 cells treated with 50 ng/ml VEGF for 48 h served as another control, to determine whether exogenous recombinant hVEGF 165 could replace the endogenous protein and have a similar effect. qRT-PCR analysis indicated that, in hVEGF 165 -ZsGreen1-RAW264.7 cells compared with the other groups, the expression of endothelial marker genes, such as FLK-1, vWF, eNOS, VE-cadherin, and Tie-2, was dramatically increased, by 11-fold, 48-fold, 13-fold, 10-fold, and 13-fold, respectively (all P<0.01) (Figure 1A). There was no difference amongst the other groups. Western blot confirmed correspondingly higher protein expression of FLK-1, vWF, and eNOS in hVEGF 165 -ZsGreen1-RAW264.7 cells than in the other groups (all P<0.01) (Figure 1B). There was no difference amongst the other groups.
Stable overexpression of VEGF induces macrophages to acquire functional characteristics of ELCs
In order to further elucidate the functional role of stable overexpression of VEGF on the angiogenic potential of macrophages, the cells were cultured in Matrigel and tube formation was investigated in vitro. hVEGF 165 -ZsGreen1-RAW264.7 cells formed several obvious tubular-like structures after 24 h of culture in Matrigel. In contrast, no tubular structure was detected in untransfected RAW264.7, ZsGreen1-RAW264.7, and exogenous VEGF-treated RAW264.7 cells (Figure 2).
Stable overexpression of VEGF reduces macrophage foam cell formation
We next tested the capability of VEGF-modified macrophages to become foam cells. Incubation of untransfected RAW264.7 or ZsGreen1-RAW264.7 cells with 100 μg/ml ox-LDL for 24 h led to abundant cytoplasmic lipid droplet accumulation, detected by oil red O staining. In contrast, hVEGF 165 -ZsGreen1-RAW264.7 cells showed only a little lipid accumulation (Figure 4A).
Stable overexpression of VEGF down-regulates expression of CD36 in macrophages
Therefore, after revealing that stable overexpression of VEGF inhibited lipid droplets accumulation in macrophages, we explored whether this effect was dependent on influx of lipids by analyzing the expression of CD36. As shown in Figure 5, after treating with 100 μg/ml ox-LDL for 24 h, CD36 mRNA ( Figure 5A) and protein expression ( Figure 5B) were visibly reduced in hVEGF 165 -ZsGreen1-RAW264.7 cells compared with untransfected RAW264.7 or transfected control ZsGreen1-RAW264.7 cells (both P<0.05).
Discussion
Macrophages are crucially involved in the whole process of atherosclerosis, from early atherogenesis to advanced plaque progression [14,19,20]. During the initiation and formation of an atherosclerotic plaque, inflammatory signals lead to monocyte recruitment into the damaged intima, where monocytes differentiate into macrophages and internalize native and modified lipoproteins, resulting in foam cell formation. Moreover, foam cells can contribute further to, and thus amplify, lipoprotein modification and retention. In advanced plaques, macrophages can contribute to vulnerable plaque formation through the secretion of cytokines, proteases, and procoagulant/thrombotic factors [19]. During the healing of atherosclerotic complications (e.g. cardiac repair), monocytes might promote myofibroblast accumulation, angiogenesis, and myocardial healing and remodeling, and thus exert either a beneficial or a detrimental influence on post-acute coronary syndrome recovery [14]. Given that macrophages are widely involved and play an important role in the whole process of atherosclerosis, they can be taken as a potential future therapeutic target in atherosclerosis. Our hypothesis is that modified macrophages have new properties that can promote repair of damaged endothelial cells by acting as EPCs in early atherosclerosis and can inhibit foam cell formation during the progression of atherosclerosis.
Diversity and plasticity are the hallmarks of cells of the monocyte-macrophage lineage [13]. Monocytes are characterized by an extremely high developmental plasticity, being able under experimental conditions to differentiate into many kinds of cells, ranging from epithelial cells and cartilage cells to fibroblasts, cardiomyocytes, and neuronal cells, including endothelial lineage cells [14,[21][22][23][24]. On the basis of our previous studies, in which we found that mouse macrophages transiently transfected with hVEGF 165 might have a promising role like EPCs for cell-based therapy [15,16], we successfully established hVEGF 165 -ZsGreen1-RAW264.7 cells, a mouse modified macrophage cell line stably overexpressing hVEGF 165 via a lentiviral vector [17]. ZsGreen1 is a bright GFP, which can be used for tracking cells in future animal experiments. In the present study, we tested whether hVEGF 165 -ZsGreen1-RAW264.7 cells can transdifferentiate into ELCs in vitro. The phenotypic features and functions used to identify ELCs are similar to those used by Hu et al. [25]. qRT-PCR and Western blot analysis showed that the expression levels of classic endothelial markers were up-regulated in hVEGF 165 -ZsGreen1-RAW264.7 cells. The Matrigel assay further supported the notion that hVEGF 165 -ZsGreen1-RAW264.7 cells exhibited the characteristics of angiogenesis. Together, these findings demonstrated that VEGF-modified macrophage cells can transdifferentiate into ELCs in vitro.
The potential of macrophages to transdifferentiate into ELCs could be greatly beneficial for vascular repair. The use of hVEGF 165 -ZsGreen1-RAW264.7 cells might overcome limitations in cell numbers reported in tissue engineering and cell-base therapy using EPCs.
Autocrine VEGF may help to unravel the mechanism of transdifferentiation of macrophages to ELCs. In the present study, RAW264.7 cells treated with exogenous recombinant hVEGF 165 protein increased neither the mRNA expression nor the protein levels of endothelial markers. Also, no tubular structure was detected in the Matrigel assay. Likewise, RAW264.7 cells cultured with exogenous recombinant mouse VEGF 165 protein, or with supernatant harvested from hVEGF 165 -ZsGreen1-RAW264.7 cells, showed neither morphological changes nor significant changes in mRNA expression of the endothelial markers eNOS and vWF (Supplementary Figure S2). These results suggested that addition of exogenous VEGF does not induce transdifferentiation of ELCs from macrophages. Lee et al. [26] reported that genetic deletion of vegf specifically in the endothelial lineage (VEGF-ECKO) leads to progressive endothelial degeneration, and that addition of 100 ng/ml exogenous VEGF did not rescue the increased cell death exhibited by isolated VEGF-ECKO endothelial cells. Activation of VEGFR2 in wild-type cells was suppressed by intracellular small-molecule antagonists (SU4312) but not by extracellular blockade of VEGF (Avastin). Guangqi et al. [27] further demonstrated that endogenous VEGF-A forms a complex with VEGFR2 in endothelial cells and maintains a basal phosphorylation level of VEGFR2 as well as of its downstream signaling proteins. This complex is localized within the early endosome antigen 1 (EEA1) endosomal compartment. In the present study, stable transfection of macrophages with the hVEGF 165 gene produced endogenous hVEGF protein in an autocrine manner. In addition, ELISA showed that mouse VEGF production in the culture medium from hVEGF 165 -ZsGreen1-RAW264.7 cells was approximately three-fold higher than from ZsGreen1-RAW264.7 or untransfected RAW264.7 cells, which suggested that stable overexpression of hVEGF 165 in RAW264.7 cells promoted increased autocrine mouse VEGF 165 . Both endogenous hVEGF 165 and mouse VEGF 165 may contribute to maintaining the homeostasis and phenotype of the transdifferentiated ELCs.
Since macrophages are one of the major sources of lipid-laden foam cells in atherosclerosis, the foam cell formation assay is used as a biological indicator of the therapeutic effect of an anti-atherogenic treatment [28]. Our results demonstrated that stable overexpression of VEGF decreased the number of intracellular lipid droplets and the total lipid content in RAW264.7 macrophages, and inhibited ox-LDL-induced foam cell formation.
To further attempt to unveil the possible mechanism, we measured the mRNA abundance and protein levels of CD36 on macrophages to see whether endogenous VEGF inhibits foam cell formation via reduced cholesterol influx. CD36 is one of the scavenger receptors responsible for macrophage uptake of ox-LDL [29] and accounts for approximately 60-70% of macrophage-derived foam cell formation [30]. Yao et al. [30] offered a new mechanism to explain the unrestricted macrophage uptake of ox-LDL by demonstrating that CD36-mediated ox-LDL uptake in macrophages triggered an endoplasmic reticulum (ER) stress response, which, in turn, up-regulated CD36 mainly at the protein level, enhancing foam cell formation through the uptake of more ox-LDL [30].
Our results in the present study indicated that CD36 mRNA and protein expression as well as lipid-droplet accumulation were attenuated in hVEGF 165 -ZsGreen1-RAW264.7 cells, suggesting that down-regulation of CD36 expression in VEGF-modified macrophages is probably one of the mechanisms of reduction in foam cell formation. We also detected the gene and protein expression of ATP-binding cassette transporter A1 (ABCA1), a critical regulator of lipid efflux from cells; however, there was no significant difference amongst the four groups (results not shown).
Our results raised the prospect that macrophages stably overexpressing VEGF may attenuate the progression of atherosclerosis by reducing foam cell formation, as well as inhibit the initiation of atherosclerosis by repairing the injured arterial endothelial cells in early stage. These in vitro data provided a solid basis for further in vivo investigation of atheroprotective effect of hVEGF 165 -ZsGreen1-RAW264.7 cells as a cell-based therapy.
Modulating Language Models with Emotions
Generating context-aware language that embodies diverse emotions is an important step towards building empathetic NLP systems. In this paper, we propose a formulation of modulated layer normalization -- a technique inspired by computer vision -- that allows us to use large-scale language models for emotional response generation. In automatic and human evaluation on the MojiTalk dataset, our proposed modulated layer normalization method outperforms prior baseline methods while maintaining diversity, fluency, and coherence. Our method also obtains competitive performance even when using only 10% of the available training data.
Introduction
Building interactive systems that can understand and express human emotions has been a long-term goal of artificial intelligence (Shen and Feng, 2020;Salovey and Sluyter, 1997). Given a context, an intelligent agent ought to be able to generate responses that not only consider the context but also reflect a specified emotion, a task called emotional response generation. One common representation of emotions is through emojis, which often convey the underlying emotions in an utterance (Zhou and Wang, 2018). Table 1 shows an example generation in this formulation.
To tackle this problem, prior work has proposed a number of different models, including variants of sequence-to-sequence (Seq2Seq) models (Serban et al., 2016;Li et al., 2016a), variational autoencoders (VAE) (Gu et al., 2019;Shen et al., 2017; and adversarial networks (Kong et al., 2019;. Their generated responses are often dull or generic, partially due to the limited training data for diverse emotions . More recent studies have tried to
Table 1. Example generation of our method for four different emojis. Context is an actual random tweet, and emotion is specified by emojis.
Context: good game start morning off tigers v eagles.
Responses: (1) good luck to all the eagles; (2) i m not a tigers fan but we ve got a win; (3) we ve got to wait for tommorrow for the game; (4) hope you enjoyed the match with your team.
pre-train language models (LMs) on specific domain data to pivot generation towards certain direction Keskar et al., 2019). However, training a LM from scratch can be costly, and collecting sufficient pre-training data in diverse emotions is also challenging, especially for low-resource emotions (Yang et al., 2019a).
In this work, we present a simple and easy-to-deploy technique that can enable pre-trained large-scale LMs to generate fine-grained emotional responses. Specifically, we inject emotional signals specified by 64 commonly used emojis via Modulated Layer Normalization (Mod-LN), a technique widely adopted in computer vision but whose potential has not yet been well studied in NLP. The main advantages of our method are: • Instead of designing or re-training models from scratch, our method is plug-and-play. In this work, we show its effectiveness on BERT (2019) and GPT-2 (2019), but one can easily extend our method to other Transformer-based LMs.
• By fully exploiting the transfer learning ability of pre-trained LMs, we achieve comparable emotional response generation performance as prior best-performing work with only 10% of the training data, which is especially beneficial for low-resource scenarios.
Given a context text and a specified emoji as a target emotion, we aim to generate responses that both reflect the emotion associated with the emoji and the semantic information in the context. In this work, we demonstrate how to inject target emotions through a modulation module of layer normalization (§2.1). We also provide data preparation and model adaptation strategies on two typical LMs (BERT and GPT-2) to aid reproduction and extension (§2.2).
Modulated Layer Normalization
Layerwise normalization (LN) is commonly used in Transformer-based (Vaswani et al., 2017) language models (LMs) (Devlin et al., 2019; Radford et al., 2019; Yang et al., 2019b) to stabilize hidden state dynamics and reduce training time (Ba et al., 2016). In the vanilla implementation (Figure 1(a)), data are normalized by their own mean μ and standard deviation σ without relying on external inputs. In contrast to vanilla LN, which only regularizes the data itself, Mod-LN introduces an external modulation module shared across the whole dataset, which is independent of the individual data samples and able to modulate the regularization towards external inputs c (Figure 1(b)). Specifically, an input hidden state tensor x in layer l is normalized by Mod-LN as

Mod-LN^(l)(x; c) = γ^(l)(c) ⊙ (x − μ)/(σ + ε) + β^(l)(c), with γ^(l)(c) = MLP^(l)_γ(c) and β^(l)(c) = MLP^(l)_β(c),

where ε is the smoothing parameter to avoid dividing by zero, and MLP^(l)_γ and MLP^(l)_β are two trainable modulation modules for a certain layer l. They are computed by

MLP^(l)(c) = W^(l,2) Swish(W^(l,1) c) + b,

where W^(l,1) and W^(l,2) are dense layers belonging to layer l, with weight sizes of [64, (1/2)·dim_h] and [(1/2)·dim_h, dim_h], respectively. Dense layers connect the 64 emoji classes to the output hidden states of the language model, and b is a bias added to γ. We use the Swish activation (Ramachandran et al., 2017), which has been shown to outperform ReLU (Xu et al., 2015) on several challenging datasets. Though conceptually simple, such MLP-based modules have been shown to be a faster and more efficient alternative to vanilla dot-product self-attention in NLP (Tay et al., 2021) and CV (Tolstikhin et al., 2021). Our work uses MLPs as a plug-and-play modulator rather than a replacement for self-attention, allowing us to shift the hidden states towards a given target emotion.
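A minimal PyTorch sketch of this module follows. The paper does not specify initialization or the exact placement of the module inside each Transformer layer, so this is an illustration of the mechanism rather than the reference implementation.

```python
import torch
import torch.nn as nn

class ModulatedLayerNorm(nn.Module):
    """Mod-LN sketch: layer normalization whose gain and bias are predicted
    from a 64-dim emoji one-hot vector by small Swish MLPs with shapes
    [64, d/2] and [d/2, d], as described in the text."""

    def __init__(self, dim_h, n_emojis=64, eps=1e-5):
        super().__init__()
        def mlp():
            return nn.Sequential(
                nn.Linear(n_emojis, dim_h // 2),
                nn.SiLU(),                          # Swish activation
                nn.Linear(dim_h // 2, dim_h),
            )
        self.mlp_gamma = mlp()
        self.mlp_beta = mlp()
        self.eps = eps

    def forward(self, x, c):
        # x: (batch, seq_len, dim_h); c: (batch, n_emojis) one-hot emotion
        mu = x.mean(-1, keepdim=True)
        sigma = x.std(-1, keepdim=True)
        gamma = self.mlp_gamma(c).unsqueeze(1)      # broadcast over positions
        beta = self.mlp_beta(c).unsqueeze(1)
        return gamma * (x - mu) / (sigma + self.eps) + beta

ln = ModulatedLayerNorm(dim_h=768)
x = torch.randn(2, 16, 768)
c = torch.eye(64)[torch.tensor([3, 41])]            # two emoji ids as one-hots
print(ln(x, c).shape)                               # torch.Size([2, 16, 768])
```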
Data Preparation and Model Adaptation
For the text input, we concatenate ground-truth context with corresponding response as a whole input to feed into LMs. We add a pre-defined separator token ([SEP] for BERT and [UNK] for GPT-2) between context and response, to make LMs aware of the range of each part. We also pad both context and response to a max sequence length with the padding token.
Encoder-Decoder models have been successful in many text-to-text generation tasks, such as question answering (Chen et al., 2017;Seo et al., 2017), news summarization (Chopra et al., 2016;Rush et al., 2015), and style transfer (Li et al., 2018;Liu et al., 2021). For the response generation task, the encoder encodes the context text into a fixed-length vector in latent space, while the decoder decodes the generated response tokens step-by-step, given the encoded context vector and the ground truth token from the previous step; this method is also known as teacher-forcing (Zhang et al., 2019c;Cho et al., 2014).
In this work, we consider leveraging the transfer learning power of large-scale LMs-using LMs as encoder and decoder-to better capture the complicated relationship between context and response (Rothe et al., 2020). Auto-regressive LMs (ARLMs), such as GPT-2 are trained to iteratively predict the next step token given the past, while Masked Language Models (MLM), such as BERT, are trained to predict missing tokens given both the preceding and subsequent text. In contrast to the uni-directional attention flow in ARLM, the attention flow of MLM is bi-directional, and thus if we directly use MLM as decoder, the prediction of tokens in the response will also attend to (i.e., have the context of) future tokens; this could potentially lead to exposure bias (Schmidt, 2019). Inspired by recent text-to-text LMs such as T5 (Raffel et al., 2020) and BART , for MLM decoder, we modify the original bi-directional attention mask to make it uni-directional.
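To make the mask modification concrete, a minimal sketch is shown below; how the causal mask is merged with BERT's padding mask is an implementation detail assumed here.

```python
import torch

def unidirectional_mask(seq_len):
    """Lower-triangular (causal) mask: position i may attend only to j <= i,
    turning a bi-directional MLM decoder into a uni-directional one."""
    return torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

print(unidirectional_mask(5).int())
# Positions where the mask is False are set to a large negative value
# before the attention softmax, so future tokens receive no weight.
```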
We experiment with two encoder-decoder models built on MLM and ARLM: 1) BERT-to-BERT: using bi-directional BERT as both encoder and decoder, but forcing the decoder BERT to attend to past context with uni-directional mask, and 2) GPT2-to-GPT2: using uni-directional GPT-2 as both encoder and decoder.
Experimental Setup
Dataset. For all the experiments, we use the MojiTalk (Zhou and Wang, 2018) dataset, a large Twitter conversation corpus (N ≈ 700k) of responses that each contain one or more of 64 popular emojis. Following the original paper, we split the corpus into training, validation, and test sets of 596,959, 32,600, and 32,600 conversation pairs, respectively. We fine-tune the two LM-based encoder-decoder models on this dataset and generate responses given contexts and all possible emotions using top-k random decoding (Fan et al., 2018) on a machine with four RTX 2080 GPUs.
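A minimal sketch of the top-k random decoding step follows; the value k = 40 is an assumption for illustration, since the decoding hyperparameters are not stated here.

```python
import torch

def top_k_sample(logits, k=40, temperature=1.0):
    """Sample the next token id from the k highest-probability candidates
    (top-k random decoding, Fan et al., 2018)."""
    logits = logits / temperature
    topk_vals, topk_idx = torch.topk(logits, k)
    probs = torch.softmax(topk_vals, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)
    return topk_idx.gather(-1, choice)

logits = torch.randn(1, 50257)   # e.g., GPT-2 vocabulary size
print(top_k_sample(logits))
```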
Evaluation
Good emotional responses should accurately reflect the intended emotion, be diverse, and have coherent language. We thus evaluate three aspects of generated responses: emotion control ( §4.1), response diversity ( §4.2), and coherence and fluency ( §4.3). We also use Amazon Mechanical Turk (MTurk) to run a manual evaluation of emotion control and readability in generated responses ( §4.4).
Emotion Control
First, we evaluate whether intended emotions were reflected in the responses generated by various models. We choose DeepMoji (Felbo et al., 2017) as the judgment classifier. DeepMoji was trained on a large-scale emoji dataset containing 1,246 million tweets and 64 distinct emojis, and as far as we know, is state-of-the-art for 64-emoji classification tasks. Since the meanings of different emojis can overlap with subtle differences, we compute Hits@k (k = {1, 3, 5}) classification accuracy to describe the performance of models under different criteria. As shown in Table 1, our proposed models outperform R-CVAE by a large margin. Of note, LM-based models show more robust performance in extreme data scarcity cases: our models achieve comparable performance with R-CVAE even when using only 10% of the training data. Between BERT and GPT-2, GPT-2 shows superior performance, partially because its weights come from auto-regressive pre-training.
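To make the metric concrete, the sketch below computes Hits@k from classifier probabilities; the random arrays stand in for DeepMoji outputs and gold emoji labels.

```python
import numpy as np

def hits_at_k(probs, targets, k):
    """Fraction of examples whose target emoji is among the top-k classes
    predicted by the judgment classifier (probs: (n, 64), targets: (n,))."""
    topk = np.argsort(-probs, axis=1)[:, :k]
    return np.mean([t in row for t, row in zip(targets, topk)])

rng = np.random.default_rng(0)
probs = rng.random((100, 64))                 # stand-in for DeepMoji scores
targets = rng.integers(0, 64, size=100)
for k in (1, 3, 5):
    print(f"Hits@{k}: {hits_at_k(probs, targets, k):.3f}")
```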
Generation Diversity
As shown in Table 2, we evaluate the diversity of responses generated by each model in terms of unigram and bigram type-token ratios, average length, and percent of stop words in generated responses, with values for the human-generated responses shown for reference. As measured by the type-token ratio for both uni- and bi-grams, our proposed models generate more diverse responses. In addition, compared with R-CVAE, the responses generated by our models are longer and use fewer stop words. This improvement can be attributed to the use of large-scale language models as base models.
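A small sketch of the type-token ratio over pooled n-grams follows; whitespace tokenization is assumed, as the exact tokenization is not specified.

```python
from collections import Counter

def type_token_ratio(responses, n):
    """Distinct n-grams divided by total n-grams, pooled over responses."""
    ngrams, total = Counter(), 0
    for r in responses:
        toks = r.split()
        for i in range(len(toks) - n + 1):
            ngrams[tuple(toks[i:i + n])] += 1
            total += 1
    return len(ngrams) / max(total, 1)

responses = ["good luck to all the eagles", "hope you enjoyed the match"]
print(type_token_ratio(responses, 1), type_token_ratio(responses, 2))
```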
Fluency and Coherence
Moreover, we evaluate the fluency and coherence of the machine-generated text. For fluency, we trained a standalone language model on the human-generated responses using KenLM (Heafield, 2011) to measure the perplexity of generated texts. To evaluate coherence between the context and the generated responses, we compute the similarity between the generated text and human-generated responses using BERTScore (Zhang et al., 2019b), with the human-generated responses as reference. We configure BERTScore with 24-layer RoBERTa-large, as is standard for English tasks. Table 3 shows these results. For perplexity and BERTScore, our Mod-LN models outperform the R-CVAE in both the 10% and 100% training data cases.
Human Evaluation
In total, 120 MTurk participants manually evaluated the emotion control and readability of responses from our proposed models and the original human-generated reference data. The average age of participants was 38.40 years old (SD = 12.26, Median = 34.50). More than half (65.8%) of participants were male, and 34.2% were female. The average completion time of each survey was 4.53 minutes. Participants were paid $1 per survey, averaging to a more than $13 per hour wage for each participant, significantly above the U.S. federal minimum wage.
Procedure Each participant was assigned to read five randomly selected context-response pairs without being informed of the sources of the responses.
They were asked to rate 1) emotion control: "How well does the emotion conveyed in the response agree with the specified emoji?" (1-very well to 7-not at all), and 2) readability: "Please rate the readability of the response on a 7-point scale" (1-very low to 7-very high). The readability measure included five items adapted from a previous study (Graefe et al., 2018), specifically, well-written, concise, comprehensive, coherent, and clear. Since the five measures had very high agreement (Cronbach's α = .91), we averaged the five measures into one as a general readability index.
Results
The participants' averaged ratings (μ) and standard errors (SE) are reported in Table 4.

Table 4: Humans manually evaluated the emotional control and readability of responses from the original data (human reference), the baseline, and the proposed models on a 7-point scale (1: low quality, 7: high quality). We also take the generative LM, vanilla GPT-2, as the ablation reference.
As shown in the table, the standard error of the mean among all annotators is .10, which is very low for a 7-point scale, indicating large agreement between annotators. Responses generated by Mod-LN MLM (BERT), Mod-LN ARLM (GPT-2), and the human-generated references had no statistically significant differences in emotion control and readability. All were rated significantly higher than plain GPT-2 and R-CVAE in both emotion control and readability (p < .001 for one-way repeated measures ANOVA). We also conducted pairwise multiple comparisons as a post hoc analysis. In terms of emotion control, both of our proposed models and the original reference data were rated significantly better than vanilla GPT-2 (p < .007). For readability, both our models, vanilla GPT-2, and the original reference data were rated significantly more readable than R-CVAE (p < .001).
Related Work
Emotional Text Generation. VAE-based models (Park et al., 2018; Shen et al., 2017; Serban et al., 2017), adversarial networks (Kong et al., 2019; Yu et al., 2017) and reinforcement learning systems (Li et al., 2016b) have dominated sentiment-aware dialogue models. Other methods have been developed using LSTMs (Song et al., 2019) and GRUs. All these methods, however, are built on relatively coarse emotion types, partially due to the limited modeling ability of RNNs. Our model outperforms the current state-of-the-art R-CVAE (Zhou and Wang, 2018) in the same 64-emoji settings.
Modulated Normalization. Though not common in NLP, modulated normalization has been previously used in computer vision. In addition to work mentioned in the introduction (De Vries et al., 2017), adversarial networks such as CGAN (Miyato and Koyama, 2018), self-attention GAN (Zhang et al., 2019a) and Style GAN (Karras et al., 2019) have used modulated normalization to inject external signal into their models. In NLP, previous studies have tried to modulate normalization for classification tasks (Houlsby et al., 2019) and multilingual machine translation (Bapna and Firat, 2019), however, both these methods require architecture-level modifications. Our method, on the other hand, is plug-and-play, requiring minimal modifications to the architecture and thus easier to deploy for a diverse set of applications.
Conclusions
We have proposed a modulated layer normalization approach to generating responses of varying specified emotions. Our approach allows us to leverage large pre-trained models, while remaining simple and easily-extendable. In empirical experiments, our approach substantially outperforms prior work and achieves comparable results using only 10% of the available training data, all while maintaining diversity, fluency, and coherence.
Concentration inequalities from monotone couplings for graphs, walks, trees and branching processes
Generalized gamma distributions arise as limits in many settings involving random graphs, walks, trees, and branching processes. Peköz, Röllin, and Ross (2016, arXiv:1309.4183 [math.PR]) exploited characterizing distributional fixed point equations to obtain uniform error bounds for generalized gamma approximations using Stein's method. Here we show how monotone couplings arising with these fixed point equations can be used to obtain sharper tail bounds that, in many cases, outperform competing moment-based bounds and the uniform bounds obtainable with Stein's method. Applications are given to concentration inequalities for preferential attachment random graphs, branching processes, random walk local time statistics and the size of random subtrees of uniformly random binary rooted plane trees.
Introduction
Stein's method is used to obtain distributional approximation error bounds in a wide variety of settings in applied probability where normal, Poisson, gamma and other limits arise. The method was introduced by Stein for normal approximation [Ste72]. Chen adapted it for Poisson approximation [Che75], and since then it has been developed in many directions. Introductions to Stein's method can be found in [Ste86,BHJ92,CGS11]; we also refer to the surveys [Ros11,Cha14,BC14].
One variant of the approach (see [GR97], [PRR16] and the references therein) starts with a distributional fixed point equation that the limit distribution satisfies and obtains error bounds in terms of how closely both sides of the fixed point equation can be coupled together, i.e., their Wasserstein distance. The fixed point equation often has a probabilistic interpretation that can be leveraged to achieve a close coupling. To illustrate, we define a generalization of the size-bias transform for a random variable.
Definition 1. If X is a nonnegative random variable with 0 < E[X^β] < ∞, we say that X^(β) is the β-power bias transform of X if

E[f(X^(β))] = E[X^β f(X)] / E[X^β]

holds for all bounded measurable f. If X^s =_d X^(1) is the familiar size-bias transform of X, the exponential distribution is characterized by the fixed-point equation X =_d U X^s for an independent Uniform(0, 1) variable U. Under the assumption that X has a finite second moment, Stein's method can be used to show that the Wasserstein distance between X and an exponential distribution is at most twice the Wasserstein distance between X and U X^s [PR11b, Theorem 2.1].
The bounds above allow for accurate approximations of the law of X in the bulk of the distribution. Tail bounds can also be obtained when the two sides of the distributional fixed-point equation can be coupled together with some sort of monotonicity. This is carried out for the Poisson distribution in [GG11, AB15]. For example, if there exists a coupling of X^s with X so that X^s − 1 ≤ X a.s., then X satisfies a Poisson-like tail bound. [CGJ18] loosened this monotonicity condition to P[X^s − 1 ≤ X | X^s ≥ x] ≥ p and applied the results to bounding the second largest eigenvalue of the adjacency matrix of a random regular graph.
The monotonicity condition we study in the current article involves the following definition from [PRR16].

Definition 2. For α, β > 0, we say that X* has the (α, β)-generalized equilibrium distribution of X if X* =_d V_α X^(β), where V_α has density αx^(α−1) on [0, 1] and is independent of X^(β).

Throughout the paper, we use the notation X* to mean a random variable with the (α, β)-generalized equilibrium distribution of X, with α and β specified as necessary. The familiar equilibrium distribution X^e of X from renewal theory is the case where α = β = 1. It is well known that the fixed points of the equilibrium transform, i.e., the distributions satisfying X^e =_d X, are the exponential distributions. Generalizing this fact, for any α, β > 0 let GG(α, β) denote the (α, β)-generalized gamma distribution, which has density function βx^(α−1) e^(−x^β)/Γ(α/β) dx for x > 0. The fixed points of the generalized equilibrium transform are the generalized gamma distributions, up to scaling. That is, for any choice of α, β > 0, the random variable X satisfies X* =_d X if and only if X =_d cZ where Z ∼ GG(α, β) [PK92, Theorem 5.1].
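Since Definition 2 expresses X* as V_α X^(β), the transform can be sampled empirically by β-power-biased resampling. The Monte Carlo sketch below (illustrative, not from the paper) also checks the fixed-point property X* =_d X when X ∼ GG(α, β).

```python
import numpy as np

rng = np.random.default_rng(1)

def generalized_equilibrium(samples, alpha, beta, size):
    """Sample the (alpha, beta)-generalized equilibrium transform X* from an
    empirical sample of X: resample with weights x^beta (beta-power bias),
    then multiply by independent V_alpha with density alpha*x^(alpha-1)."""
    w = samples ** beta
    biased = rng.choice(samples, size=size, p=w / w.sum())   # X^(beta)
    v = rng.random(size) ** (1.0 / alpha)                    # V_alpha
    return v * biased

alpha, beta = 2.0, 2.0
z = rng.gamma(alpha / beta, 1.0, 200_000) ** (1.0 / beta)    # Z ~ GG(alpha, beta)
z_star = generalized_equilibrium(z, alpha, beta, 200_000)
print(np.quantile(z, [0.5, 0.9]))        # quantiles of Z ...
print(np.quantile(z_star, [0.5, 0.9]))   # ... should match those of Z*
```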
The main result of [PRR16] is a bound on the Kolmogorov distance between a properly rescaled version of X and Z ∼ GG(α, β) when X and X* are not identical but are close in the Lévy–Prokhorov metric. This yields a bound on |P[cX > t] − P[Z > t]| that can be used to estimate probabilities in the bulk of the distribution of X. But since this bound is uniform in t, it is too large to be useful for small tail probabilities in applications. This is the launching point for the current article.
Let X ⪯ Y denote that X is stochastically dominated by Y in the usual stochastic order. We introduce two stochastic orders as follows. For a constant p ∈ (0, 1], we write X ⪯_p Y to denote that P[X > t] ≤ P[Y > t]/p for all t ∈ R. Similarly, X ⪯^p Y denotes that P[X ≤ t] ≥ p P[Y ≤ t] for all t ∈ R. In the p = 1 case, both orders are the usual one. For p < 1, they represent two different relaxations of the usual order and have alternate characterizations given in Lemma 8.
We mention that the α = β = p = 1 case of Theorem 3 was already proven by Mark Brown [Bro06, Theorem 3.2]; see Section 2.2.
As an application of these concentration theorems, we prove several results for graphs, walks, trees and branching processes. We next define several quantities from such models and show that the above inequalities apply.
Preferential attachment random graphs. Consider a preferential attachment random graph model (see [BA99] and [PRR16]) that starts with an initial "seed graph" consisting of one node, or a collection of nodes grouped together, having total weight w. Additional nodes are added sequentially and when a node is added it attaches l edges, one at a time, directed from it to either itself or to nodes in the existing graph according to the following rule: each edge attaches to a potential node with chance proportional to that node's weight right before the moment of attachment, where incoming edges contribute weight one to a node and each node other than the initial node when added has initial weight one. The case where l = 1 is the usual Barabasi-Albert tree with loops but started from a node with initial weight w. Let W be the total weight of the initial "seed graph" after an additional n edges have been added to the graph.
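As an illustration (not code from the paper), the sketch below simulates the l = 1 case of this model and tracks the seed's total weight W; the convention that the new edge may attach to the arriving node itself follows the description above.

```python
import numpy as np

rng = np.random.default_rng(2)

def seed_weight(w, n):
    """Simulate l = 1 preferential attachment: each arriving node (weight 1)
    attaches one edge to itself or an existing node with probability
    proportional to current weight; the chosen node gains weight 1.
    Returns the seed's weight after n edges."""
    seed, total = float(w), float(w)
    for _ in range(n):
        total += 1.0                       # new node arrives with weight 1
        if rng.random() < seed / total:    # edge lands on the seed
            seed += 1.0
        total += 1.0                       # the new edge contributes weight 1
    return seed

samples = np.array([seed_weight(w=2, n=500) for _ in range(2000)])
print(samples.mean(), np.quantile(samples, 0.99))
```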
Random binary rooted plane trees. Let U be the number of vertices in the minimal spanning tree spanned by the root and k randomly chosen distinct leaves of a uniformly chosen binary, rooted plane tree with 2n − 1 nodes, that is, with n leaves and n − 1 internal nodes.
Random walk local times. Consider the one-dimensional simple symmetric random walk S_n = (S_n(0), . . . , S_n(n)) of length n starting at the origin. Define L_n = #{0 ≤ k ≤ n : S_n(k) = 0} to be the number of times the random walk visits the origin by time n. Let L^b_{2n} ∼ [L_{2n} | S_{2n}(0) = S_{2n}(2n) = 0] be the local time of a random walk bridge. Here we use the notation [X | E] to denote the distribution of a random variable X conditional on an event E with nonzero probability.
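A naive Monte Carlo sketch of L_n and the bridge-conditioned L^b_{2n} follows (illustrative only; the bridge is sampled by rejection, which is feasible only for small n).

```python
import numpy as np

rng = np.random.default_rng(3)

def local_time_at_origin(n, bridge=False):
    """Visits to the origin (including time 0) of a simple symmetric walk of
    length n; with bridge=True, resample until S(n) = 0 (n must be even)."""
    while True:
        path = np.concatenate([[0], np.cumsum(rng.choice([-1, 1], size=n))])
        if not bridge or path[-1] == 0:
            return int(np.sum(path == 0))

two_n = 400
L = np.array([local_time_at_origin(two_n) for _ in range(5000)])
Lb = np.array([local_time_at_origin(two_n, bridge=True) for _ in range(5000)])
# After dividing by sqrt(2n), the walk's local time is approximately
# half-normal, while the bridge local time has a GG(2, 2)-type limit.
print(L.mean() / np.sqrt(two_n), Lb.mean() / np.sqrt(two_n))
```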
The next result shows that the above concentration inequalities hold for these quantities.
Theorem 5. With the above definitions, the generalized equilibrium relation X* ⪯ X holds for each of (a) W, (b) U, (c) L_n, and (d) L^b_{2n}, the last with α = 2, β = 2; the conditions and conclusions of Theorem 3 and Theorem 4 hold for W, U, L_n, and L^b_{2n} with p = 1 and the corresponding values in (a)-(d) for α, β.
We also give two tail bounds for Galton-Watson branching processes. See Section 4.5 for a review of previous bounds.
Theorem 6. Let Z_n be the size of the nth generation of a Galton-Watson process, and let μ = E Z_1, the mean of its child distribution. Consider Z*_n with α = β = 1. If Z*_1 ⪯ Z_1, then Z*_n ⪯ Z_n for all n, and the concentration bounds of Theorem 3 and Theorem 4 hold for Z_n for all t > 0. Theorem 6 requires the child distribution to be supported on {1, 2, . . .}, since the condition Z*_1 ⪯ Z_1 fails if P[Z_1 = 0] > 0. The next result relaxes this requirement and allows us to consider Galton-Watson trees with a nonzero extinction probability, with our concentration result applying conditional on nonextinction. We impose a condition on the child distribution that comes from reliability theory. For a random variable X taking values in the positive integers, we say that X is D-IFR (which stands for discrete increasing failure rate) if P[X = k | X ≥ k] is increasing in k for k ≥ 1. As we will discuss in Section 4.1, if X is D-IFR then X* ⪯ X with α = β = 1.
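A small Monte Carlo sketch of this setting (hypothetical parameters, not from the paper): it simulates Z_n for a child distribution supported on {1, 2}, as Theorem 6 requires, and reports an empirical tail probability of Z_n relative to its mean μ^n.

```python
import numpy as np

rng = np.random.default_rng(4)

def gw_generation(n, child_probs):
    """Size Z_n of generation n of a Galton-Watson process whose child
    distribution puts mass child_probs[k-1] on k children, k = 1, 2, ..."""
    z = 1
    for _ in range(n):
        kids = rng.choice(np.arange(1, len(child_probs) + 1),
                          size=z, p=child_probs)
        z = int(kids.sum())
    return z

probs = np.array([0.7, 0.3])    # children in {1, 2}, mean mu = 1.3
mu, n, t = 1.3, 8, 3.0
zs = np.array([gw_generation(n, probs) for _ in range(20_000)])
print((zs > t * mu ** n).mean())  # empirical tail P[Z_n > t * E Z_n]
```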
In the following theorem and onward, for a random variable X we use X^> to denote a random variable with distribution [X | X > 0].
Theorem 7. Let Z_n be the size of the nth generation of a Galton-Watson tree, and suppose that Z^>_1 is D-IFR. Then Z*_n ⪯ Z^>_n with α = β = 1, and the bounds of Theorem 3 and Theorem 4 hold for Z^>_n with mean m(n) = E Z^>_n. This theorem holds in subcritical, critical, and supercritical cases alike. The difference comes only in the mean m(n), which grows exponentially for a supercritical tree, grows linearly for a critical tree, and remains bounded for a subcritical tree.
Theorems 5, 6, and 7 apply the p = 1 cases of Theorems 3 and 4 with the usual stochastic order. We use the p < 1 case in this paper only in Remark 28 to sketch a way to simplify the proof of Theorem 5 at the cost of a weaker concentration inequality. Nonetheless, we expect that the p < 1 case will prove useful. For the analogous concentration bound in [CGJ18] based on the Poisson distributional fixed point equation X^s =_d X + 1, the p < 1 case is essential to applications on random regular graphs (see [CGJ18, Proposition 2.3] and [Zhu22, Theorem 4.1]) and on interacting particle systems [JJLS20, Proposition 5.17].
The tail bounds we produce in this paper are sharp in many circumstances. The results of Theorems 3 and 4 in the α = β case are sharp, which was known already in the previously proven case α = β = p = 1 for the upper tail [Bro13]. When α ≠ β, we expect that the factor t^α in the upper tail bound is not sharp. We discuss this further in Section 2.4. The applications of Theorems 3 and 4 given in Theorem 5 with α = β seem likely to be sharp (see Remark 18), and our results on Galton-Watson processes are sharp as well (see Section 4.4).
Our main concentration results, Theorems 3 and 4, are proven in Section 2. In Section 3, we consider an urn model and prove that its counts N satisfy N* ⪯ N with α and β depending on the model's parameters. The random variables W, U, L_n, and L^b_{2n} are expressed in terms of this urn model in [PRR16], and Theorem 5 follows. One part of the proof, a regularity property for the urn model similar to log-concavity, is shown by a very technical argument in Appendix A. Section 4 gives the proofs of Theorems 6 and 7 on Galton-Watson trees. After some background material on reliability theory and on forming the equilibrium transform, Theorem 6 is easy to prove (and in fact is essentially proven in [WDC05]). The proof of Theorem 7 is more difficult and requires us to establish some delicate properties of the D-IFR class of distributions. In Appendix B, we give some proofs that are well known in reliability theory but hard to find in the literature.
Proof of concentration theorems
Recall that X ⪯_p Y and X ⪯^p Y mean that P[X > t] ≤ P[Y > t]/p and P[X ≤ t] ≥ p P[Y ≤ t], respectively, for all t ∈ R. We start by characterizing these orders in terms of couplings, along the same lines as the standard fact that X ⪯ Y if and only if there exists a coupling of X and Y so that X ≤ Y a.s. We have not seen these stochastic orders defined before, but they are used in form (ii) in [CGJ18] and [DJ18].
Lemma 8. The following statements are equivalent:
(i) there exists a coupling of X and Y such that P[X ≤ Y | X] ≥ p almost surely;
(ii) there exists a coupling of X and Y such that P[X ≤ Y | X ≥ s] ≥ p for all s with P[X ≥ s] > 0;
(iii) X ⪯_p Y;
as are the following statements:
(i′) there exists a coupling of X and Y such that P[X ≤ Y | Y] ≥ p almost surely;
(ii′) there exists a coupling of X and Y such that P[X ≤ Y | Y ≤ s] ≥ p for all s with P[Y ≤ s] > 0;
(iii′) X ⪯^p Y.

Proof. It is clear that (i) implies (ii) in both sets of statements. To go from (ii) to (iii) in the first set of statements, observe that for any s ∈ R,

P[Y ≥ s] ≥ P[X ≤ Y, X ≥ s] ≥ p P[X ≥ s],

applying (ii) in the second inequality. Now let s approach t downward to prove (iii). A similar argument proves that (ii) implies (iii) for the second set of statements. Now we show that (iii) implies (i) for the first set of statements. Let B ∼ Bernoulli(p) be independent of X, and define a random variable X′ taking values in [−∞, ∞) by setting X′ = X if B = 1 and X′ = −∞ if B = 0. Then P[X′ > t] = p P[X > t] ≤ P[Y > t] by (iii). Thus X′ is stochastically dominated by Y in the standard sense, and therefore there exists a coupling of X′ and Y such that X′ ≤ Y a.s. (The standard fact that P[U > t] ≤ P[V > t] for all t is equivalent to the existence of a coupling for which U ≤ V a.s. holds when U and V take values in [−∞, ∞), and in fact under considerably more general conditions [Str65, Theorem 11].) Under this coupling, given X, it holds with probability at least p that X′ = X ≤ Y. To show that (iii) implies (i) for the second set of statements, we use the same idea but define Y′ to be equal to Y with probability p and equal to ∞ with probability 1 − p. Then X ⪯ Y′, yielding a coupling of X and Y′ such that X ≤ Y′ a.s., and under this coupling, given Y, we have Y = Y′ ≥ X with probability at least p.
Before we get started with our concentration estimates, we make an observation that allows us to rescale α and β.
Lemma 9. The relation $X^* \preceq_p X$ is preserved when α and β are both rescaled by a common factor.

Proof. For $a > 0$, let $V_a$ denote a random variable with density $a x^{a-1}\,dx$ on $[0, 1]$; the claim follows by a direct computation with $V_a$.

2.1.

Upper tail bounds for α ≠ β. We start with a technical lemma.
Lemma 10. Suppose that $X^* \preceq_p X$ and $EX^\beta = \alpha/\beta$, and let $G(t) = P[X > t]$. Then a differential inequality for $G$ holds for all $t > 0$.

Proof. Since $X^* \preceq_p X$, the tail of $X^*$ is controlled by that of $X$, and the claim follows from the definition of the β-power bias.

Next, we apply this lemma to deduce bounds on $E[X \mid X > t]$ in the β − α = 1 case and on $E[X^{-1} \mid X > t]$ in the β − α = −1 case.
Lemma 11. Suppose that $X^* \preceq_p X$ and $EX^\beta = \alpha/\beta$. Then the bounds (8) and (9) hold for all $t \ge 0$ such that $P[X > t] > 0$.

Proof. When β − α = 1, we apply Lemma 10 and then switch the order of integration; canceling the $G(t)$ factors and rearranging terms gives (8). When β − α = −1, the same approach yields (9).
To give a sense of the purpose of the preceding lemma, recall that the mean residual life of a random variable $X$ is the function $m(t) = E[X - t \mid X > t]$. In general, the distribution of a random variable can be recovered from its mean residual life function [Mei72, Lemma 2]. In Lemma 11, we bound the mean residual life of $X$ when β − α = 1, and of $-X^{-1}$ when β − α = −1. (A similar approach would bound the mean residual life of $\log X$ when β = α, but a different technique used in Section 2.2 gives better results in that case.) To prove Theorem 3 when α ≠ β, we will first use Lemma 9 to rescale α and β so that α − β = ±1, and then we apply the bounds on mean residual life from Lemma 11 to derive tail bounds on $X$.
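As an aside for readers who want to experiment, the following is a minimal numerical sketch (in Python; our own code, not part of the paper's argument) of the mean residual life function for a discrete distribution. The helper name and the geometric test case are our own choices.

```python
import numpy as np

def mean_residual_life(pmf, t):
    """Mean residual life m(t) = E[X - t | X > t] for a pmf on {0, 1, 2, ...}."""
    k = np.arange(len(pmf))
    tail = k > t
    p_tail = pmf[tail].sum()
    if p_tail == 0:
        return np.nan
    return ((k[tail] - t) * pmf[tail]).sum() / p_tail

# Sanity check: Geometric(q) on {1, 2, ...} is memoryless among discrete
# distributions, so its mean residual life is constant (equal to 1/q) in t.
q = 0.3
k = np.arange(1, 200)
pmf = np.zeros(200)
pmf[1:] = q * (1 - q) ** (k - 1)
print([round(mean_residual_life(pmf, t), 4) for t in range(5)])
```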
2.2.
Upper tail bounds for α = β. As we mentioned in the introduction, Theorem 3 in the case α = β = p = 1 was first proven in [Bro06]. The general α = β case with p = 1 then follows by an application of Lemma 9, and it is not difficult to modify Brown's argument to allow p < 1. To save the reader the effort of going back and forth between Brown's paper and this one, and to highlight his elegant proof, we present the full argument here.
If $U \sim \mu$ is a random variable, let $Z(U)$ denote a random variable whose distribution is the mixture governed by $\mu$.

Proposition 12. Let $\pi$ be the law of $X$, and define $(U, W)$ as the random variables with the joint density given below, where $V_\alpha$ has density $\alpha x^{\alpha-1}$ on $[0, 1]$ and is independent of $W$; statements (a)-(e) then hold, with (d) holding for any Borel set $B \subset \mathbb{R}$.

Proof. The density of $W$ is $(w^\beta / EX^\beta)\,d\pi(w)$, proving (b). Looking at the conditional density of $U$ given $W = w$, we see that (c) holds. Taking expectations in (c), we have $U \stackrel{d}{=} V_\alpha W$, and together with (b) this implies (a). Fact (d) is proven by observing the form of the conditional density of $W$ given $U = u$.

Lemma 13. For any $p \in (0, 1]$, the corresponding comparison between $Z(X^*)$ and $Z(X)$ holds.

Proof. First, we observe that $Z(u)$ is stochastically increasing in $u$. Thus we can couple the random variables $(Z(u))_{u \ge 0}$ so that $Z(u) \le Z(v)$ whenever $u \le v$ (for example, by coupling all $Z(u)$ to the same $\mathrm{Uniform}(0, 1)$ random variable by the inverse probability transform). Under such a coupling, $X^* \le X$ implies that $Z(X^*) \le Z(X)$. Now, suppose $X^* \preceq_p X$. By Lemma 8, there exists a coupling of $(X, X^*)$ realizing this order; together with Proposition 12(e), this proves the lemma when $X^* \preceq_p X$. The proof when $X^* \succeq_p X$ is identical except we take conditional expectations given $X$ rather than $X^*$.
The next lemma will be used to prove a tail estimate for Z(X) when α = β.
Lemma 14. Let $\mu$ be a probability measure on $[0, \infty)$. Then the following bound holds for any $t > 0$.

Proof. Assume without loss of generality that $\mu[t, \infty) > 0$. For some partition $0 = x_0 < \cdots < x_n = t$, let $\varphi(x)$ be the associated step function, for which the inequality (18) holds by a direct comparison. Now, consider any sequence $\varphi_n(x)$ of such step functions where each partition refines the last and the mesh size of the partition goes to zero. Then $\varphi_n(x)$ converges upward to the limiting integrand by the monotone convergence theorem. This proves the lemma by (18).
Lemma 15. For α = β and any random variable $X$, a bound on the tail of $Z(X)$ holds; hence, by Lemma 14, we obtain the corresponding estimate.

Proof of Theorem 3 for α = β. As in the α ≠ β cases, it suffices to prove the theorem under the assumption µ = 1, or equivalently $EX^\beta = \alpha/\beta = 1$. From the definition of $X^{(\beta)}$, together with Lemma 13 followed by Lemma 15, we obtain the bound involving $p t^\beta$ claimed in the theorem.
2.4. Sharpness of bounds. Theorems 3 and 4 are nearly optimal when α = β but seem to be missing a factor of $t^{-\beta}$ when α ≠ β. First, let us assume that p = 1. In [Bro13], it is proven that Theorem 3 is sharp when p = α = β = 1, including the constant factor of $e$. By Lemma 9, the theorem is sharp whenever α = β. For the reader's convenience, we present a family of examples demonstrating that Theorem 3 cannot be improved in this case; it is a discrete counterpart to the example given in [Bro13]. Choose integers µ and n, let p = 1/µ, and let $X$ have a capped version of the geometric distribution with success probability p. Then $X^* \preceq X$ with α = β = 1 (easy to check with Proposition 19(b)), and $EX = µ$. Now, set n = (t − 1)µ for some integer t ≥ 2; the resulting probability converges to $e^{1-t}$ as µ → ∞, confirming that there exist examples in which $P[X \ge µt]$ comes arbitrarily close to $e^{1-t}$. When α ≠ β, one would hope for an upper tail bound of $O(t^{\alpha-\beta} e^{-pt^\beta})$ rather than the $O(t^\alpha e^{-pt^\beta})$ achieved in Theorem 3, which would match the tail of the generalized gamma distribution. But the best tail bound via moments for the generalized gamma distribution loses a factor of $t^{-\beta/2}$ (the calculation is similar to the one carried out in Remark 18), and the Chernoff approach of bounding the moment generating function used in the proof of Theorem 3 in the α ≠ β case is always inferior to the moment bound [PN95]. Thus a new approach would be needed for the proof if the optimal tail bound is to be achieved. Perhaps Brown's proof for the α = β case could be adapted to α ≠ β, though we are not sure what the replacement for Lemma 15 would be.
As for lower tail bounds, if $X$ has the (α, β)-generalized gamma distribution, then Theorem 4 applies to $X$ with µ = p = 1. Up to constants, the $O(t^\alpha)$ bound for $P[X \le t]$ shown in Theorem 4 matches the true tail behavior of $X$ as $t \to 0$. Now, we show that the dependence on p in the theorem is nearly optimal. Suppose that $X^* \preceq X$ for some α, β > 0, and define $Y = X$ with probability p and $Y = 0$ with probability 1 − p, for some 0 < p ≤ 1. Let $\mu_X = (\frac{\beta}{\alpha} EX^\beta)^{1/\beta}$ and $\mu_Y = (\frac{\beta}{\alpha} EY^\beta)^{1/\beta} = p^{1/\beta} \mu_X$. For any $t \ge 1$, we can apply the p = 1 case of Theorem 3 to $X$ and compare with the bound from applying Theorem 3 to $Y$. Thus the bound on $Y$ is as sharp as the bound on $X$ provided by the p = 1 case of Theorem 3, besides losing a factor of $p^{1+\alpha/\beta}$ when α = β.
For an example showing optimal dependence on p in the lower tail bound, for the sake of simplicity take α = β = 1. Choose some b < 1 < a, and define $X$ as the mixture that equals $\mathrm{Exp}(a)$ with probability $\frac{a(1-b)}{a-b}$ and $\mathrm{Exp}(b)$ with probability $\frac{(a-1)b}{a-b}$, and observe that $EX = 1$. We can compute the lower tail of $X$ directly, yielding (19). The equilibrium transform of a mixture is the mixture of the equilibrium transforms, with the new mixture governed by the old governor reweighted by expectation (see Lemma 21). Together with the fact that exponential distributions are fixed points of the equilibrium transform, this identifies the law of $X^*$. With a bit of work, one can show that $X^* \preceq_p X$ with p = 1/(a + b − ab). Thus Theorem 4 yields a bound matching (19).
Concentration for urns, graphs, walks, and trees
Each of the random variables $W$, $U$, $L_n$, and $L^b_{2n}$ in Theorem 5 can be expressed in terms of an urn model that we describe now. An urn starts with black and white balls, and draws are made sequentially. After a ball is drawn, it is replaced and another ball of the same color is added to the urn. Also, after every $l$th draw an additional black ball is added to the urn, for some $l \ge 1$. As defined in Section 1.2 of [PRR16], let $P^l_n(b, w)$ denote the distribution of the number of white balls in the urn after n draws have been made when the urn starts with $b \ge 0$ black balls and $w > 0$ white balls, and let $N^{[l+1]}_n(b, w)$ be a rising-factorial-biased version of $N_n(b, w) \sim P^l_n(b, w)$, as defined in Lemma 4.2 of [PRR16]. We will use this fact to prove concentration for $N_n(1, w)$, but first we relate the rising factorial bias to the power bias $N^{(l+1)}_n(b, w)$.
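Before turning to the proofs, here is a small Monte Carlo sketch of the urn model just described (our own code; the function name is illustrative, and the statistic printed is only a sanity check, not part of any proof).

```python
import random

def sample_urn(n, b, w, l, rng=random):
    """One sample from P^l_n(b, w): the white-ball count after n draws, where a
    drawn ball is replaced along with a copy of itself, and an extra black
    ball is added after every l-th draw."""
    black, white = b, w
    for draw in range(1, n + 1):
        if rng.random() < white / (black + white):
            white += 1   # a white ball was drawn: add another white
        else:
            black += 1   # a black ball was drawn: add another black
        if draw % l == 0:
            black += 1   # the scheduled extra black ball
    return white

samples = [sample_urn(n=1000, b=1, w=2, l=1) for _ in range(2000)]
print(sum(samples) / len(samples))  # grows on the order of sqrt(n) for l = 1, w = 2
```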
Lemma 16. For $n \ge l$, $N^{[l+1]}_{n-l}(b, w) + l \preceq N^{(l+1)}_n(b, w)$.

Proof. We will show that the ratio of the two probability mass functions at $k$ can be written as $C$ times an increasing function of $k$, where $C$ is a value that does not depend on $k$. The expression $(k - l) \cdots k / k^{l+1}$ is increasing in $k$, as is the remaining factor for $w \le k \le w + n$, since it is 0 for $w \le k < w + l$ and each factor on the right-hand side in the product is increasing in $k$ by Lemma 25 for $w + l \le k \le w + n$ (in this range of $k$, the denominators of the fractions in the product are all nonzero). This proves the desired stochastic domination by [SS07, Theorem 1.C.1].
Proposition 17. The count $N_n(1, w)$ satisfies $N^*_n(1, w) \preceq N_n(1, w)$ with α = w and β = l + 1.

Proof. Let $Q_w(n)$ have the distribution of the number of white balls in a regular Polya urn after n draws starting with 1 black and $w$ white balls. For now, assume that $l \le n$. We argue along the chain of relations (20), where $V_w$ has density $w x^{w-1}\,dx$ on $[0, 1]$ and is independent of $N_n(1, w)$. The first line is Lemma 4.5 from [PRR16]. In the second line, we use the trivial relation $N_n(0, w + 1) \stackrel{d}{=} N_{n-l}(1, w + 1 + l)$.
In the third line, we use the identity for $N^{[l+1]}_n(1, w) + r$ which is the statement of Lemma 4.2 of [PRR16]. The final line uses a bound which follows from the fact, taken from the proof of Lemma 4.4 of [PRR16], that for independent and identically distributed $\mathrm{Uniform}(0, 1)$ variables $U_1, U_2, \ldots, U_{w-1}$ we can write $Q_w(n)$ in terms of them, and this implies $Q_w(n) \preceq \max_{i=0,1,\ldots,w-1}(\cdots)$. We now apply Lemma 16 to (20) to obtain (21). When $l > n$, the quantity $N_n(1, w)$ is the number of white balls in a regular Polya urn after n draws starting with 1 black and $w$ white balls. The claim then follows using (21) and observing that the maximum of $N_n(1, w)$ is $n + w$.

Proof of Theorem 5. We have $W \sim P^l_n(1, w)$, $U \sim P^1_{n-k-1}(1, 2k)$, and $L_{2n} \sim P^1_n(1, 1)$, respectively, from Remark 1.3, Proposition 2.1, and Proposition 3.4 in [PRR16]. From Proposition 3.2 in [PRR16], we have $L^b_{2n} \sim P^1_n(0, 1)$, and $P^1_n(0, 1)$ is the same distribution as $P^1_{n-1}(1, 2)$. The result then follows from Proposition 17, noting that the conditions of Theorems 3 and 4 hold.
Remark 18. The rising factorial moments of $N_n(1, w)$ are explicitly computed in [PRR16, Lemma 4.1]. When w = l + 1, the concentration bound given in Theorem 5 is better than the one obtained from these moments via Markov's inequality. For the sake of simplicity, we illustrate with the case l = 1, w = 2. The result of Part (a) of Theorem 5 along with Theorem 3 yields the bound (22), where $\gamma_n^2 = E N_n(1, w)^2$. From [PRR16, Theorem 1.2], we know that $\gamma_n \sim 2\sqrt{n}$.
Now, we compute the concentration inequality given by the rising factorial moments of $N_n(1, w)$. Using the notation $x^{[n]} = x(x + 1) \cdots (x + n - 1)$, from [PRR16, Lemma 4.1] we can write these moments explicitly. The bound given by applying Markov's inequality to them is a product of factors of the form $\frac{(2i + 2)(2n + 2i + 3)}{(\gamma_n t + 2i)(\gamma_n t + 2i + 1)}$. Take t to be fixed with respect to n. From the asymptotics for $\gamma_n$, the right-hand side of this inequality converges to $t^2 - 1$ as $n \to \infty$. Hence either $m^* = \lceil t^2 \rceil - 1$ or $m^* = \lceil t^2 \rceil$ when n is sufficiently large. The optimal tail bound obtained from the rising factorial moments then converges as $n \to \infty$ to a limit that can be estimated by applying Stirling's approximation. Thus this bound is worse than (22) by a factor of t, when n and t are large. A more involved calculation for the general case w = α, l = β − 1 shows that the tail bound from moments is on the order of $t^{\alpha - \beta/2} e^{-t^\beta}$. Outside of the α = β case, our bound is on the order of $t^\alpha e^{-t^\beta}$ and is outperformed by the moment bound.
Concentration for Galton-Watson processes
We adopt the terminology from reliability theory that a random variable satisfying $X^* \preceq X$ with α = β = 1 is NBUE, which stands for "new better than used in expectation" (see Proposition 19(b) for the source of this name). Since we will often be applying the equilibrium transform to discrete random variables (e.g., the child distribution of a Galton-Watson tree), we will use the notation $X^e$ to denote the discrete version of the α = β = 1 equilibrium transform, which we can define by setting $X^e = \lceil X^* \rceil$ with $X^*$ the standard α = β = 1 equilibrium transform. Equivalently, we can define $X^e$ to be chosen uniformly at random from $\{1, 2, \ldots, X^s\}$, where $X^s$ is the size-bias transform of $X$. Observe that for a random variable $X$ taking values in the nonnegative integers, it is a consequence of the coupling interpretation of stochastic dominance that $X^e \preceq X$ if and only if $X^* \preceq X$.

4.1.

Some concepts from reliability theory. We consider three classes of discrete probability distributions; we will state the relationship between the three classes and give some characterizations of them. All are standard in the reliability theory literature, sometimes with varying notation.
To define the first class, the log-concave distributions, we first define a sequence $t_0, t_1, \ldots$ as log-concave if (i) $t_n^2 \ge t_{n-1} t_{n+1}$ for all $n \ge 1$, and (ii) $t_0, t_1, \ldots$ has no internal zeroes (i.e., if $t_i > 0$ and $t_k > 0$ for some $i < k$, then $t_j > 0$ for all $i < j < k$). For $X$ taking nonnegative integer values, we say that $X$ is log-concave if the sequence $P[X = k]$ for $k \ge 0$ is log-concave.
Next, we recall the class of distributions on the positive integers with the D-IFR property, which stands for discrete increasing failure rate. As we defined in the introduction, the distribution of a positive integer-valued random variable X is in this class if P[X = k | X ≥ k] is increasing for k ≥ 1, and in that case we say that X is D-IFR. Sometimes in the literature, such random variables are just said to be IFR, with it understood to use the above definition rather than the continuous version when considering a discrete distribution. Sometimes the notation DS-IFR is used to refer to a random variable X on the nonnegative integers for which P[X = k | X ≥ k] is increasing for k ≥ 0; see for example [PCW06].
As we mentioned in the introduction, a nonnegative random variable $X$ is said to be NBUE if $X^* \preceq X$ with α = β = 1. In the reliability theory literature, a random variable $X$ taking positive integer values is sometimes said to be D-NBUE if the inequality (24) below holds for all $n \in \{0, 1, 2, \ldots\}$.
But since the left-hand side of (24) is equal to $P[X^e > n]$ (see (39)), equation (24) is equivalent to the assertion that $X^e \preceq X$, which holds for $X$ taking positive integer values if and only if $X$ is NBUE.
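The three reliability classes discussed here are easy to test numerically for a finitely supported pmf. The following sketch (our own helper functions; the NBUE test uses the mean residual life criterion of Proposition 19(b) below) will also be reused for the counterexamples in Section 4.4.

```python
import numpy as np

def is_log_concave(p, tol=1e-12):
    """p[k] = P[X = k] on {0, 1, ...}: t_n^2 >= t_{n-1} t_{n+1}, no internal zeros."""
    supp = np.nonzero(p)[0]
    if np.any(p[supp[0]:supp[-1] + 1] == 0):
        return False  # internal zero
    return all(p[n] ** 2 + tol >= p[n - 1] * p[n + 1] for n in range(1, len(p) - 1))

def is_difr(p, tol=1e-12):
    """Discrete IFR on {1, 2, ...} (index 0 unused): P[X = k | X >= k] increasing."""
    tails = np.array([p[k:].sum() for k in range(len(p))])
    rates = [p[k] / tails[k] for k in range(1, len(p)) if tails[k] > tol]
    return all(a <= b + tol for a, b in zip(rates, rates[1:]))

def is_nbue(p, tol=1e-12):
    """NBUE via the mean residual life: E[X - k | X > k] <= EX for all k >= 0."""
    k = np.arange(len(p))
    mean = (k * p).sum()
    for t in range(len(p) - 1):
        tail = p[t + 1:].sum()
        if tail > tol and ((k[t + 1:] - t) * p[t + 1:]).sum() / tail > mean + tol:
            return False
    return True
```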
Proposition 19. For a positive integer-valued random variable $X$: (a) $X$ is D-IFR if and only if the distributions $[X - k \mid X > k]$ are stochastically decreasing in $k$; (b) $X$ is NBUE if and only if $E[X - k \mid X > k] \le EX$ for all integers $k \ge 0$; (c) if $X$ is log-concave then it is D-IFR, and if it is D-IFR then it is NBUE.
These properties are often stated in the reliability theory literature (see [PCW06, Fig. 2] and [RSZ05, Lemma 2]). Since we have had trouble digging up proofs of some of them, we have provided them in Appendix B. Now, we introduce a class of distributions on the nonnegative integers that we call NBUEZT, with ZT standing for zero-truncated. For $X$ taking nonnegative integer values, we say that $X$ is NBUEZT if $X^>$ is NBUE, or equivalently if $X^e \preceq X^>$ (recall from the introduction that $X^>$ denotes a random variable with the distribution $[X \mid X > 0]$). In the language defined here, Theorem 6 states that if the child distribution of a Galton-Watson process is NBUE, then all generations are NBUE. This is a simple consequence of the statement that a random sum of NBUE-many i.i.d. NBUE summands is NBUE, which was proven in [WDC05, Corollary 2.2] (though we provide a more conceptual proof). Theorem 7 states that with $L$ the child distribution, if $L^>$ is D-IFR then all generations of the process are NBUEZT. This raises a number of questions (for example, if $L$ is only assumed to be NBUEZT, are successive generations NBUEZT?) that we address in Section 4.4.
4.2.
Forming the equilibrium transform. First, we give a recipe for forming the equilibrium transform of a sum:

Lemma 20. Let $X_1, \ldots, X_n$ be i.i.d. nonnegative random variables, and let $S = X_1 + \cdots + X_n$. Then
$$S^* \stackrel{d}{=} X_1 + \cdots + X_{I-1} + X_I^*, \qquad (25)$$
where $I$ is chosen uniformly at random from $\{1, \ldots, n\}$, independent of all else, and $X^*$ denotes the α = β = 1 equilibrium transform of $X$. If $X_1, \ldots, X_n$ are integer-valued, then
$$S^e \stackrel{d}{=} X_1 + \cdots + X_{I-1} + X_I^e. \qquad (26)$$

Proof. Equation (25) is the special case of [PR11a, Theorem 4.1] in which $X_1, \ldots, X_n$ are i.i.d. When $X_1, \ldots, X_n$ are integer-valued, applying the ceiling function to both sides of (25) gives (26).
Next, we consider the equilibrium transform of a mixture. To give notation for a mixture, let $h$ be a probability measure on the real numbers. Suppose that for each $b$ in the support of $h$, we have a random variable $X_b$ with distribution $\nu_b$ and mean $m_b \in [0, \infty)$. Also assume that $b \mapsto m_b$ is measurable. The random variable $X$ is the mixture of $(X_b)$ governed by $h$ if $Eg(X) = \int Eg(X_b)\,dh(b)$ for all bounded measurable functions $g$. The basic recipe for the equilibrium transform $X^e$ is that it is a mixture of the equilibrium transforms $X^e_b$, governed by a biased version of $h$. The analogous recipe works for forming the size-bias transform of a mixture, and this result follows from that.
Lemma 21. Let $X$ be the mixture of $(X_b)$ governed by $h$ as described above, and assume that $EX < \infty$. Define the measure $h^s$ by its Radon-Nikodym derivative $\frac{dh^s}{dh}(b) = \frac{m_b}{EX}$. Then the distribution of $X^e$ is the mixture of $X^e_b$ governed by $h^s$.
Proof. By [AGK19, Lemma 2.4], the size-bias transform $X^s$ is distributed as the mixture of $X^s_b$ governed by $h^s$. With $U \sim \mathrm{Uniform}(0, 1)$ independent of all else, the equilibrium transform $\lceil U X^s \rceil$ is thus the mixture of $\lceil U X^s_b \rceil$ governed by $h^s$.
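As a quick numerical illustration of the discrete equilibrium transform (our own sketch, using the identity $P[X^e = j] = P[X \ge j]/EX$, which follows from the uniform-over-$\{1, \ldots, X^s\}$ description at the beginning of this section):

```python
import numpy as np

def discrete_equilibrium_pmf(pmf):
    """pmf of X^e for X on {0, ..., len(pmf)-1}. Since X^e is uniform on
    {1, ..., X^s} with X^s size-biased, P[X^e = j] = P[X >= j] / EX for j >= 1."""
    k = np.arange(len(pmf))
    mean = (k * pmf).sum()
    tails = np.array([pmf[j:].sum() for j in range(1, len(pmf))])  # P[X >= j]
    return tails / mean  # entry i is P[X^e = i + 1]

# Sanity check: Geometric(q) on {1, 2, ...} is a fixed point, X^e =_d X.
q, N = 0.4, 60
pmf = np.zeros(N)
pmf[1:] = q * (1 - q) ** (np.arange(1, N) - 1)
print(np.allclose(discrete_equilibrium_pmf(pmf)[:10], pmf[1:11], atol=1e-6))
```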
4.3.
Proofs of the concentration theorems for Galton-Watson trees. Theorem 6 is a simple consequence of the following statement, namely that a sum of an NBUE number of i.i.d. NBUE summands is NBUE. This fact was previously proven in [PCW06, Corollary 2.2]. We include our proof here, as it takes a very different approach from theirs.
Proposition 22. Let $X, X_1, X_2, \ldots$ be i.i.d. nonnegative random variables, and let $L$ be a positive integer-valued random variable independent of all else. Suppose that $X$ and $L$ are NBUE. Then the random sum $S = \sum_{k=1}^{L} X_k$ is NBUE as well.
Proof. We construct $S^*$ using Lemmas 20 and 21, as was done in [PR11b, Theorem 3.1]. Let $S_n = \sum_{k=1}^{n} X_k$, so that $S = S_L$, and let $T_k \stackrel{d}{=} S_k^*$. By Lemma 20, $T_k \stackrel{d}{=} X_1 + \cdots + X_{I_k - 1} + X^*_{I_k}$, where $I_k$ is chosen uniformly at random from $\{1, \ldots, k\}$.
By Lemma 21, the equilibrium transform of $S$ is a mixture of $T_k$ governed by a distribution whose Radon-Nikodym derivative with respect to $L$ is $\frac{k\,EX}{ES} = \frac{k}{EL}$, which is exactly the Radon-Nikodym derivative of $L^s$ with respect to $L$. Hence
$$S^* \stackrel{d}{=} X_1 + \cdots + X_{I_{L^s}-1} + X^*_{I_{L^s}} \preceq X_1 + \cdots + X_{I_{L^s}},$$
with the second relation following because $X$ is NBUE. Since $I_{L^s}$ is a uniform selection from $\{1, \ldots, L^s\}$, it is the discrete equilibrium transform of $L$. Hence $I_{L^s} \preceq L$, and $S^* \preceq X_1 + \cdots + X_L = S$.
Proof of Theorem 6. Let $L$ be a random variable whose distribution is the child distribution of the tree. Each generation of the Galton-Watson process is the sum of $L$ independent copies of the previous generation, i.e.,
$$Z_{n+1} \stackrel{d}{=} \sum_{j=1}^{L} Z_n^{(j)},$$
where $Z^{(j)}_n$ for $j \ge 1$ denote independent copies of $Z_n$ and $L$ is independent of $(Z^{(j)}_n, j \ge 1)$. Observing that $Z^{(j)}_1 \stackrel{d}{=} L$ is NBUE, we can apply Proposition 22 inductively to conclude that $Z_n$ is NBUE for all $n$. The concentration inequalities (3) and (4) then follow from Theorems 3 and 4 with α = β = 1. For Theorem 7, it would be nice to argue that an NBUEZT quantity of NBUEZT summands remains NBUEZT, but we have not been able to prove or disprove this (see Section 4.4). But we can show the following weaker statement. For a random variable $X$ taking nonnegative integer values, we write $\mathrm{Bin}(X, p)$ to denote the distribution obtained by thinning $X$ by $p$ (i.e., the sum of $X$ independent $\mathrm{Bernoulli}(p)$ random variables).
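To see Theorem 6 in action numerically, here is a small simulation sketch (ours, not the paper's; the child law and parameters are chosen purely for illustration) of a supercritical Galton-Watson process with a geometric child distribution, which is NBUE, so every generation is NBUE and the upper tail should decay roughly like $e^{1-t}$ after normalizing by the mean.

```python
import numpy as np

rng = np.random.default_rng(0)

def gw_generation(n, p, rng):
    """Z_n for a Galton-Watson process with Geometric(p) child law on {1, 2, ...}."""
    z = 1
    for _ in range(n):
        z = rng.geometric(p, size=z).sum()  # children of the current generation
    return z

p, n, reps = 0.5, 8, 5000
zs = np.array([gw_generation(n, p, rng) for _ in range(reps)])
mu = zs.mean()  # close to (1/p)^n = 256
for t in (1, 2, 3, 4):
    print(t, (zs >= t * mu).mean(), np.exp(1 - t))  # empirical tail vs. e^{1-t}
```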
Proposition 23. Let $X, X_1, X_2, \ldots$ be i.i.d. NBUEZT random variables with $p = P[X > 0]$, and let $L$ be a nonnegative integer-valued random variable, independent of all else, such that $\mathrm{Bin}(L, p)$ is NBUEZT. Then $S = X_1 + \cdots + X_L$ is NBUEZT.

Proof. Let $S = X_1 + \cdots + X_L$, and let $M$ be the number of the random variables $X_1, \ldots, X_L$ that are nonzero. Then
$$S \stackrel{d}{=} \sum_{i=1}^{M} X_i^>,$$
where $M \sim \mathrm{Bin}(L, p)$ is independent of $(X^>_i)_{i \ge 1}$. Since $S$ is then a sum of $M$ many strictly positive random variables, it is positive if and only if $M$ is positive. Hence $S^> \stackrel{d}{=} \sum_{i=1}^{M^>} X_i^>$. Since $M$ and $X_k$ are NBUEZT, their conditioned versions $M^>$ and $X^>_k$ are NBUE. Thus $S^>$ is NBUE by Proposition 22, and hence $S$ is NBUEZT.
To apply Proposition 23, the NBUEZT property for $L$ must be preserved under thinning. We now show that this holds when $L^>$ is D-IFR.
Lemma 24. Let $L$ be a random variable taking nonnegative integer values. If $L^>$ is D-IFR, then $\mathrm{Bin}(L, p)^>$ is D-IFR for all $0 < p \le 1$.
Proof. Let $(B_k)_{k \ge 1}$ be i.i.d. $\mathrm{Bernoulli}(p)$ for arbitrary $p \in (0, 1)$, and let $M = B_1 + \cdots + B_L$, so that $M \sim \mathrm{Bin}(L, p)$. Our goal is to show that $P[M = n \mid M \ge n]$ is increasing for $n \ge 1$. Define
$$\varphi(t) = E\big[(1 - p)^{L - t} \,\big|\, L \ge t\big],$$
which is the conditional probability that $B_{t+1} = \cdots = B_L = 0$ given that $L \ge t$. Let $T_n$ be the smallest index $t$ such that $B_1 + \cdots + B_t = n$. We make the following claims: (i) $P[M = n \mid M \ge n] = E[\varphi(T_n) \mid L \ge T_n]$; (ii) the function $\varphi(t)$ is increasing for integers $t \ge 1$; (iii) the distributions $[T_n \mid L \ge T_n]$ are stochastically increasing in $n$.
To prove (i), we start by observing that $M = n$ holds if and only if $L \ge T_n$ and $B_{T_n + 1}, \ldots, B_L$ are all zero. Thus, since $L$ and $T_n$ are independent, we obtain $P[M = n] = E\big[\mathbf{1}\{L \ge T_n\}\,\varphi(T_n)\big]$. Observing that the events $\{M \ge n\}$ and $\{L \ge T_n\}$ are the same and dividing both sides of the above equation by its probability yields (i). Now we prove (ii). Since $L^>$ is D-IFR, the distributions $[L - t \mid L \ge t]$ are stochastically decreasing by Proposition 19(a). Thus $\varphi(t)$ is obtained by taking the expectation of the decreasing function $x \mapsto (1 - p)^x$ under a stochastically decreasing sequence of distributions, showing that $\varphi(t)$ is increasing in $t$.
To prove (iii), it suffices (see [SS07, Theorem 1.C.1]) to show the ratio condition (28). Thus we consider the relevant ratio of probabilities, with the second line following from the independence of $T_n$ and $T_{n+1}$ from $L$. The final bit is to compute probabilities for the negative binomial distribution; the resulting ratio is increasing in $k$ for $k \ge n$, which proves (28). Now, statements (i)-(iii) combine to prove the lemma: the quantity $E[\varphi(T_n) \mid L \ge T_n]$ is increasing in $n$ by (ii) and (iii), and hence $M^>$ is D-IFR by (i).
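The thinning operation in Lemma 24 can also be checked by exact computation on a truncated pmf. The sketch below (our own code, reusing the is_difr helper defined earlier) computes the pmf of Bin(L, q) directly and tests the D-IFR property of its zero-truncation.

```python
import numpy as np
from math import comb

def thinned_pmf(p_L, q):
    """Exact pmf of Bin(L, q): P[M = m] = sum_k P[L = k] C(k, m) q^m (1-q)^(k-m)."""
    out = np.zeros(len(p_L))
    for k in range(len(p_L)):
        for m in range(k + 1):
            out[m] += p_L[k] * comb(k, m) * q ** m * (1 - q) ** (k - m)
    return out

def zero_truncate(p):
    """pmf of X^>: index 0 set to zero and the rest renormalized."""
    return np.concatenate(([0.0], p[1:] / p[1:].sum()))

# L ~ Geometric(0.5) on {1, 2, ...}: constant failure rate, hence L^> is D-IFR.
p_L = np.zeros(40)
p_L[1:] = 0.5 ** np.arange(1, 40)
for q in (0.2, 0.5, 0.8):
    print(q, is_difr(zero_truncate(thinned_pmf(p_L, q))))  # expect True
```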
Proof of Theorem 7. Let $L$ be the child distribution of the tree. By Proposition 19(c) and Lemma 24, all thinnings of $L$ are NBUEZT. Hence Proposition 23 applies and shows that $\sum_{k=1}^{L} X_k$ is NBUEZT whenever $(X_k)_{k \ge 1}$ is an i.i.d. family of NBUEZT random variables. Applying this inductively to each generation $Z_n$ of the Galton-Watson process shows that $Z_n$ is NBUEZT for all $n$. Therefore Theorems 3 and 4 apply to $Z^>_n$ and prove (5) and (6).
4.4.
On the sharpness and optimality of these results. Consider a Galton-Watson process whose child distribution is geometric with success probability $p$ on $\{1, 2, \ldots\}$, which is NBUE. The size of the $n$th generation is geometric with success probability $p^n$, so its tails can be computed exactly and compared with the bounds of Theorems 3 and 4. We can also ask whether the conditions of Theorems 6 and 7 could be weakened, and more broadly what properties of the child distribution are preserved for all generations. Theorem 6 states that if the child distribution of a Galton-Watson process is NBUE, then all its generations are NBUE. Log-concave and D-IFR distributions are not preserved in this way. For a counterexample, consider a child distribution placing probability 1/8 on 1, probability 49/64 on 2, and probability 7/64 on 3. This distribution is log-concave and D-IFR, but the size of the second generation is neither.
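This counterexample can be verified by an exact computation of the second-generation pmf (a sketch using the helpers defined earlier; convolution powers give the law of a sum of i.i.d. copies):

```python
import numpy as np

child = np.zeros(4)
child[1], child[2], child[3] = 1 / 8, 49 / 64, 7 / 64
print(is_log_concave(child), is_difr(child))  # True, True for the child law

def convpow(p, j, size):
    """pmf of the sum of j i.i.d. copies of p, truncated to length size."""
    out = np.zeros(size)
    out[0] = 1.0
    for _ in range(j):
        out = np.convolve(out, p)[:size]
    return out

# Z_2 = sum of Z_1 i.i.d. children: P[Z_2 = k] = sum_j P[Z_1 = j] p^{*j}(k)
size = 16
z2 = sum(child[j] * convpow(child, j, size) for j in range(1, 4))
print(is_log_concave(z2), is_difr(z2))  # both fail for the second generation
```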
Lemma 24 states that if $L^>$ is D-IFR, then $\mathrm{Bin}(L, p)^>$ is D-IFR. The NBUEZT property is not preserved under thinning in this way. Let $L$ take the value 1 with probability 89/100, 2 with probability 109/1000, 3 with probability 9/10000, 4 with probability 1/11250, and 5 with probability 1/90000. We leave it as an exercise that this distribution is NBUEZT (in fact, NBUE) but that all of its thinnings fail to be NBUEZT.
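The exercise can be explored numerically with the same machinery (our own sketch, reusing the is_nbue, thinned_pmf, and zero_truncate helpers from earlier; the grid of thinning parameters is illustrative, not exhaustive):

```python
import numpy as np

L = np.zeros(6)
L[1:] = [89 / 100, 109 / 1000, 9 / 10000, 1 / 11250, 1 / 90000]
print(is_nbue(L))  # True: L is NBUE, hence NBUEZT

for q in np.arange(0.1, 1.0, 0.1):
    m = zero_truncate(thinned_pmf(L, q))
    print(round(float(q), 1), is_nbue(m))  # the text asserts these all fail
```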
It seems more natural (and would be a weaker condition) to assume only that the child distribution $L$ is NBUEZT in Theorem 7, rather than that $L^>$ is D-IFR. But since the NBUEZT property is not preserved by thinning, our proof of Theorem 7 does not go through with this change. In fact, we are genuinely unsure whether the theorem holds with the weaker condition.

4.5.

Previous concentration results for Galton-Watson processes. Let $Z_n$ be the $n$th generation of a Galton-Watson process whose child distribution has mean µ > 1. Let $W$ be the almost sure limit of $Z_n / \mu^n$, which exists and is nondegenerate when $E[Z_1 \log Z_1] < \infty$. In Theorems 6 and 7, properties of the child distribution continue to hold for $Z_n / \mu^n$ at all generations. This is in a similar spirit to many results linking properties of the child distribution to those of $W$. For example, for α > 1 it holds that $EZ_1^\alpha$ is finite if and only if $EW^\alpha$ is finite [BD74]. Similarly, $Z_1$ has a regularly varying distribution with index α > 1 if and only if $W$ does [DM82].
One line of results concerns the right tail when the child distribution is bounded. Let $d$ be its maximum value, and let $\gamma = \log d / \log \mu > 1$. Biggins and Bingham [BB93] used a classic result of Harris [Har48] to derive an asymptotic for the tail of $W$ in which $N(x)$ is a continuous, multiplicatively periodic function. Hence, in the limit the tail of $Z_n / \mu^n$ decays faster than exponentially. Fleischmann and Wachtel give a more precise version of this result [FW09, Remark 3], showing the rate at which the tail of $W$ decays, where $N(x)$ and $N_2(x)$ are continuous, multiplicatively periodic functions. Biggins and Bingham give a version of their result that applies directly to $Z_n$ rather than its limit, and more detailed results on the right tail of $Z_n$ in this situation can also be obtained from combinatorial results of Flajolet and Odlyzko [FO84, Theorem 1].
Results on the right tail are also available when the child distribution is heavy-tailed. Under suitable assumptions on the tail of $Z_1$, the tails of $Z_n$ satisfy a uniform bound with constants $c_1 > 0$ and $c_2 < \infty$ independent of $x$ and $n$ [VDK13, Theorem 1]. This result applies, for instance, when the tail of $Z_1$ has polynomial decay. A similar result [VDK13, Theorem 3] holds when the tail of $Z_1$ behaves like $e^{-x^\alpha}$ for $0 < \alpha < 1$.
For the left tail of $Z_n$, the behavior depends on the weight that the child distribution places on 0 and 1. It is known as the Schröder case when positive weight is placed on those values and as the Böttcher case when it is not. Roughly speaking, the left tail in the Schröder case behaves similarly to the right tail in the heavy-tailed case, while the left tail in the Böttcher case behaves similarly to the right tail in the bounded-child-distribution case. For example, suppose that the child distribution places no weight on 0 and weight $p_1$ on 1.
Our results apply best to distributions that are unbounded but have exponential tails, a case that seems poorly covered by the existing literature. Our bound is also more explicit than any we have encountered, with no limits or unspecified constants.
Appendix A
Recall that $P^l_n(b, w)$ is the distribution of the number of white balls after $n$ draws in the urn model defined in Section 3, and let $N_n(b, w) \sim P^l_n(b, w)$. This appendix is dedicated to proving the following result, which is used in Lemma 16 to compare the rising factorial bias transform of these distributions to their power bias transforms. The property proven in the following lemma is something like log-concavity of the sequence $P[N_n(b, w) = k]$ for fixed $n$ (which is proven along the way, in Lemma 26), but it involves varying both $k$ and $n$. We cannot give much intuition for the proof; it seems to us to be a technical fact that happens to be true and can be proven by pushing symbols around in the right way.
Lemma 25. For all $n, k \ge 0$, the inequality (31) below holds.

We will in fact prove Lemma 25 for a slightly generalized version of the urn process. As with that process, start with $b \ge 1$ black balls and $w \ge 1$ white balls, and after each draw add an extra ball with the same color as the ball drawn. Instead of adding an additional black ball after every $l$th draw, we allow black balls to be added at arbitrary but predetermined times. Thus the total number of balls in the urn after $n$ draws, denoted by $B_n$, is an arbitrary but deterministic strictly increasing sequence with $B_0 = b + w$. Let $N_n$ be the number of white balls in the urn after $n$ draws, and let $t^{(n)}_k = P[N_n = k]$. The dynamics of the urn process give the recursion
$$t^{(n+1)}_k = \frac{k - 1}{B_n}\, t^{(n)}_{k-1} + \frac{B_n - k}{B_n}\, t^{(n)}_k. \qquad (32)$$
First, we show that $t^{(n)}_k$ is log-concave in $k$ for each fixed $n$:

Lemma 26. For all $n \ge 0$ and all $k$, the log-concavity inequality (33) holds.

Proof. We prove this by induction. For the base case, we have $t^{(0)}_k = \mathbf{1}\{k = w\}$, and hence the right-hand side of (33) is always zero when $n = 0$. Now, we expand $t^{(n+1)}$ via (32), where we have simplified notation by writing $t_k$ for $t^{(n)}_k$. Applying the inductive hypothesis to (34) and (35) bounds two of the resulting terms. To bound $A_3$, we note that the inductive hypothesis implies $t_{k-2} t_{k+1} \le t_{k-1} t_k$, which together with (36) gives the remaining bound. Combining these bounds completes the induction. Next, we establish a variant of log-concavity with a similar but more complicated proof.
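The recursion (32) (as reconstructed above) makes the distribution exactly computable, so Lemma 26 can be spot-checked numerically. The following is our own sketch, with arbitrary parameters.

```python
import numpy as np

def urn_pmf(n, b, w, l):
    """t^{(n)}_k computed from the recursion (32), with the ball count B updated
    by one per draw plus the scheduled extra black ball after every l-th draw."""
    size = w + n + 1
    t = np.zeros(size)
    t[w] = 1.0
    B = b + w
    k = np.arange(size)
    for step in range(1, n + 1):
        new = (B - k) / B * t           # a black ball was drawn: k stays
        new[1:] += (k[1:] - 1) / B * t[:-1]  # a white ball was drawn: k-1 -> k
        t = new
        B += 1 + (1 if step % l == 0 else 0)
    return t

t = urn_pmf(n=30, b=1, w=2, l=1)
print(all(t[j] ** 2 + 1e-15 >= t[j - 1] * t[j + 1] for j in range(1, len(t) - 1)))
```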
Lemma 27. For all $n \ge 0$ and all $k$, the inequality (37) holds.

Proof. We proceed by induction. Let $E^{(n)}_k$ be the left-hand side of (37). It suffices to verify the inductive step under the assumption that $B_{n+1} = B_n + 1$, because $B_{n+1}$ is at least this large, and we can see that $E^{(n+1)}_k$ is increasing in $B_{n+1}$ by rewriting it suitably and applying Lemma 26.
For the sake of readability, we write $t_k$ for $t^{(n)}_k$ and $B$ for $B_n$ in this proof. We apply (32) to expand the relevant terms. Now, under the assumption that $B_{n+1} = B + 1$, we expand $E^{(n+1)}_k$ as $(A_1 + A_2 + A_3)/B^2$, where each $A_i$ is bounded using the inductive hypothesis; in particular, $A_3 \ge -2(k - 2)(B - k)\, t_{k-2} t_k$. Combining these bounds completes the induction.

Proof of Lemma 25. First, we dispense with the case that any of the four terms appearing in (31) vanishes, which happens when the corresponding index lies outside the support $\{w, \ldots, w + n\}$; in this case both sides of (31) are zero. Thus we assume from now on that these four terms are all nonzero. Now, proving the lemma reduces to the nonnegativity of an expression of the form appearing in Lemma 27, which is nonnegative by that lemma.
Remark 28. It is possible to avoid all the work of this appendix, at the cost of a slightly inferior concentration bound for $N_n(1, w)$. The result of this appendix (Lemma 25) is used to prove that $N^{[l+1]}_{n-l}(b, w) + l \preceq N^{(l+1)}_n(b, w)$ (Lemma 16), which is then applied in the proof of Proposition 17. An alternate path is to invoke a stochastic inequality, (38), between the factorial and power bias transformations, which holds for any nonnegative random variable with
$$p = \frac{EX^{l+1}}{E[X(X + 1) \cdots (X + l)]}.$$
Modifying the derivation in Proposition 17 slightly, we get
$$N_n(1, w) \stackrel{d}{=} Q_w(N_{n-l}(1, w + 1 + l) - w - 1) \succeq Q_w(N_n(1, w + 1 + l) - l - w - 1),$$
with the second relation holding since at most $l$ white balls can be added from steps $n - l$ to $n$. Then, following the same steps as in Proposition 17, we obtain the analogue of (21) for $N_n(1, w)$.
Finally, invoking (38), we obtain the corresponding comparison with $N^*_n(1, w)$. The concentration bounds obtained from this are worse because of the factor of $p$ in the exponent, but this does illustrate how the $p < 1$ versions of our concentration bounds can be used.
Appendix B
In this appendix, we prove Proposition 19 for the convenience of the reader. See also [BP75, Chapters 4 and 6] for more background material.
Proof of Proposition 19(a).
Suppose that $X$ is D-IFR and let $p_n = P[X = n \mid X \ge n]$. To show that $[X - k \mid X > k]$ is stochastically decreasing in $k$, construct a random variable $T$ by the following procedure: fix some $k \ge 0$, start at 1 and halt with probability $p_{k+1}$; otherwise advance to 2 and halt with probability $p_{k+2}$; otherwise advance to 3, and continue like this, letting $T$ be the value where we halt. It is evident that $T \sim [X - k \mid X > k]$. Since $p_n$ is increasing, we are more likely to halt at each step when $k$ is increased. By a simple coupling, this demonstrates that $[X - k \mid X > k]$ is stochastically decreasing in $k$.
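The halting construction is easy to implement, and the coupling intuition can be seen directly in simulation (our own sketch; the hazard values below are illustrative):

```python
import random

def residual_sample(hazard, k, rng=random):
    """Sample T ~ [X - k | X > k]: advance j = 1, 2, ..., halting at j with
    probability hazard[k + j], where hazard[n] = P[X = n | X >= n]."""
    j = 1
    while rng.random() >= hazard.get(k + j, 1.0):
        j += 1
    return j

hazard = {n: min(1.0, 0.1 + 0.05 * n) for n in range(1, 40)}  # increasing: D-IFR
m0 = sum(residual_sample(hazard, 0) for _ in range(5000)) / 5000
m5 = sum(residual_sample(hazard, 5) for _ in range(5000)) / 5000
print(m0, m5)  # larger k halts sooner: the second mean should be smaller
```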
Proof of Proposition 19(c). Suppose $X$ takes values in the positive integers and is log-concave. Let $p_n = P[X = n]$, and let $N$ be the highest value such that $p_N > 0$, with $N = \infty$ a possibility. From the definition of log-concavity, $p_{n-1}/p_n \le p_n/p_{n+1}$ for all $1 \le n \le N$. This implies that for any fixed $k$, the relevant likelihood ratio is increasing in $n$, and this condition implies that $[X - k \mid X > k]$ stochastically dominates $[X - k - 1 \mid X > k + 1]$ (see [SS07, Theorem 1.C.1]). Hence $X$ is D-IFR by Proposition 19(a). Now, suppose that $X$ is D-IFR. It follows from Proposition 19(a) that $E[X - k \mid X > k]$ is decreasing in $k$ for integers $k \ge 0$, proving that $E[X - k \mid X > k] \le E[X \mid X > 0] = EX$. By Proposition 19(b), this shows that $X$ is NBUE.
Electronic Effects of Substituents on fac-M(bpy-R)(CO)3 (M = Mn, Re) Complexes for Homogeneous CO2 Electroreduction
Synthesis and characterization of 14 new 2,2′-bipyridine metal complexes fac-M(bpy-R)(CO)3X (where M = Mn, X = Br or M = Re, X = Cl and R = -CF3, -CN, -Ph, -PhOH, -NMe2) are reported. The complexes have been characterized by NMR, IR spectroscopy, and elemental analysis. Single-crystal X-ray diffraction structures have been solved for Re(dpbpy)(CO)3Cl (dpbpy = 4,6-diphenyl-2,2′-bipyridine) and Re(hpbpy)(CO)3Cl (hpbpy = 4-(2-hydroxy-phenyl)-6-phenyl-2,2′-bipyridine). The electrochemical behavior of the complexes in acetonitrile under Ar and their catalytic performance for CO2 reduction with added water and MeOH have been investigated by cyclic voltammetry and controlled potential electrolysis. The role of the substituents in the electrochemical properties and the related overpotentials required for CO2 transformation has been analyzed. The complexes carrying only electron-withdrawing groups such as -CF3 and -CN completely lose their catalytic activity toward CO2 reduction, whereas the symmetric -NMe2-substituted and push-pull systems (containing both -NMe2 and -CF3) still display electrocatalytic current enhancement under a CO2 atmosphere. The complexes carrying a phenyl or a phenol group in position 4 show catalytic behavior similar to that of simple M-bpy systems. The only reduction product detected by GC analysis is CO: for example, fac-Re(bpy-4,4′-NMe2)(CO)3Cl gives CO with high faradaic efficiency and a TON of 18 and 31, in the absence of an external proton source and with 5% MeOH, respectively. DFT calculations were carried out to highlight the electronic properties of the complexes; the results are in agreement with the experimental electrochemical data.
INTRODUCTION
Nowadays the CO2 concentration in the atmosphere is continuously increasing, alongside the overall world energy demand. Converting carbon dioxide via electrochemical reduction into useful chemicals and fuels for energy storage is an attractive and promising approach. CO2 reduction is a competition between thermodynamics and kinetics: the one-electron reduction in water occurs at a very negative potential (−1.90 V vs. SCE at pH 7) (Hammouche et al., 1988; Saveant, 2008) because it requires a drastic change in geometry, from the linear CO2 molecule to the bent CO2•− radical anion. The reason for such a high negative overpotential is the slow kinetics of the electron transfer, which is associated with the different geometries of the neutral and reduced species, respectively. However, reduction reactions involving multiple electron transfers coupled with proton transfers significantly lower the thermodynamic barrier. To avoid CO2•− as an intermediate and to lower the energy cost of the reduction process, key catalytic strategies have been developed with the aim of obtaining the various products selectively. The best electrocatalysts currently studied work at a potential 100 mV more negative than E0(CO2/P) (where P generically indicates the reduction products CO, HCOOH, HCHO, CH3OH, CH4) (Francke et al., 2018; Franco et al., 2018). Despite the numerous advantages of heterogeneous electrocatalysis (Sun et al., 2016; Rotundo et al., 2019a), clever integration with the homogeneous counterpart allows a rational design of the catalysts, by tuning both the metal center and/or the ligand. One of the greatest challenges of the homogeneous approach lies in the search for stability, durability, and improved turnover number (TON) efficiencies (Grice, 2017; Takeda et al., 2017; Wang et al., 2019). Bipyridine transition metal complexes have represented one of the most studied classes of molecular electrocatalysts since the 1980s (Stanbury et al., 2018). Bipyridine has been extensively studied as a ligand in the field of electro- and photocatalysis because of its capability to store electrons and subsequently delocalize electronic density on its π orbitals (Vlček, 2002; Elgrishi et al., 2017). Both fac-Re(bpy)(CO)3Cl and fac-Mn(bpy)(CO)3Br are capable of reducing CO2 to CO with high faradaic efficiency (Hawecker et al., 1984; Bourrez et al., 2011). Comparing these two complexes, Mn exhibits a catalytic peak that is shifted more anodically (by around 300 mV, Ep = −1.51 V vs. Ag/AgCl in MeCN) in comparison with the second reduction of the Re analog (Ep = −1.8 V vs. Ag/AgCl in MeCN). Another important difference is that Mn-bpy complexes usually show their catalytic activities only in the presence of external proton sources. To shed more light on this uniqueness, our research group synthesized Mn(pdbpy)(CO)3Br (Franco et al., 2014, 2017), in which two pendant phenolic groups act as a local proton source (Costentin et al., 2012), capable of reducing CO2 even in anhydrous acetonitrile. In this case a considerable amount of HCOOH was also detected. Conversely, the analogous complex in which the OH groups were replaced by methoxy groups did not show any catalytic activity without the addition of a Brønsted acid.
The electrochemical behavior of the two well-known Re and Mn-bpy complexes can reasonably be altered by varying the bipyridine moiety, i.e., introducing electron-withdrawing and electron-donating groups (Machan et al., 2014; Walsh et al., 2015; Stanbury et al., 2018). Kubiak and his group investigated the effect of 4,4′-di-tert-butyl-2,2′-bipyridine (tBu2-bpy) first on rhenium carbonyl complexes and later on manganese (Smieja and Kubiak, 2010; Smieja et al., 2013). In other works they studied the role of modifications in the 6,6′ positions of the bipyridine (Sampson et al., 2014; Sampson and Kubiak, 2016). A similar approach has already been applied to Mo and W-bpy complexes (Franco et al., 2017; Rotundo et al., 2018), to both Mn and Re-bpy complexes by some of us (Franco et al., 2017; Rotundo et al., 2019b), and in the current work.
The target of the modification is the reduction potential of the catalyst, namely E0cat: usually a less negative E0cat corresponds to a decreased rate of CO2 conversion (Francke et al., 2018). Electronic properties of organic groups are commonly described by the inductive (±I) and mesomeric (±M) effects. Substituents with electron-withdrawing groups, like -CF3 (strong -I effect) and -CN (weaker -I effect), and with electron-donating groups, like -N(Me)2 (strong +M effect), -Ph, and -PhOH (weaker +M effect), were placed in the 4,4′, 4,6, and 5,5′ positions of the bipyridine ligand coordinated to the metals. Combining both push and pull effects in a so-called "push-pull" system, an electronic gradient is forced through the bipyridine. In this paper we explore the electrocatalytic properties of novel Mn and Re bpy-type complexes, bearing 7 differently substituted ligands (Scheme 1). More generally, electron-donating groups are expected to convey greater nucleophilicity to the metal center, although catalysis should then require higher overpotentials. DFT calculations have been used as a complementary tool to better correlate the experimental electrochemical data, whereas Controlled Potential Electrolysis (CPE) experiments are useful to elucidate the stability and durability of the catalysts in acetonitrile solutions.
The ligands and the corresponding Mn and Re complexes have been synthesized according to the procedures reported in the experimental section. The complexes have been characterized by NMR, IR spectroscopy, and elemental analysis. Single-crystal X-ray diffraction structures have been solved for 2e and 2f (for XRD data see Tables S1-S6).
The complex Re(dpbpy)(CO)3Cl (2e) crystallized from both acetonitrile and benzene solutions by slow evaporation, forming prismatic orange platelets of a phase of the pure molecular product (structure A, Figure S1a) and a solvate with two benzene molecules (structure B, Figure S1b).

SCHEME 1 | Chemical sketches of the investigated complexes, where M = Mn, X = Br, or M = Re, X = Cl.

The first presents the monoclinic centrosymmetric P21/a space group (Figure S1c), while the second presents the monoclinic centrosymmetric P2/n space group (Figure S1d). The complex Re(hpbpy)(CO)3Cl (2f) crystallized in the dark by slow evaporation of toluene and benzene solutions as yellow platelets, and from ethyl acetate as orange prisms, both stable to air (Figure S2a). The crystal structure has been obtained from a platelet grown from the toluene solution and has the monoclinic P21/n space group type (Figure S2b). The structures of both 2e and 2f present a quite distorted octahedral geometry around the rhenium center, as can be seen from the values in Table 1. The coordination bond of N1 is 0.1 Å longer than that of N2 (see Figure 1 for numbering), and a similar asymmetry can be observed in the coordination of the CO ligands trans to the nitrogens. This asymmetric coordination, also present in other terpyridine derivatives (Anderson et al., 1990; Civitello et al., 1993; Wang et al., 2013; Klemens et al., 2016), is completely different from the very symmetrically bonded bpy derivatives (with N-Re average distances equal in the two coordinating pyridyl rings and equal to 2.17 Å). At the same time, while most of the 2,2′-bipyridine derivatives of the fac-Re(CO)3Cl complex unit are almost planar with respect to the basal OC-Re-CO plane (Kurz et al., 2006; Smieja and Kubiak, 2010; Bullock et al., 2012; Machan et al., 2014; Manbeck et al., 2015), in the case of 2e and 2f the ligand is distorted out of this plane (see Table 1). This behavior can be detected in all the terpyridines, in which the third pyridine ring is not coordinated to the metal center and is equivalent to a phenyl ring, and in 6-phenyl-substituted 2,2′-bipyridine derivatives. The reason for this distortion becomes clear when considering the steric hindrance of the vicinal phenyl ring (the distance between the nearest CO group and the centroid of the phenyl ring ortho to N1 is about 3.2 Å), which pushes the whole framework of the organic ligand upward, modifying the Re environment. This phenyl group is rotated to follow the shape of the carbonyls, with a torsion angle of about 130° with respect to the central pyridine ring.
The distortion effects observed in the solid state are predicted also by molecular DFT calculations. Indeed, the optimized geometries of 2e and 2f perfectly overlap with the experimentally determined structures (Figure 1D and Figure S3), confirming that these anomalies originate from the coordination of 6-phenyl-substituted 2,2′-bipyridine ligands to carbonyl complexes and are not induced by crystal-packing contributions. The crystal packing of 2e (both structure A and structure B) and 2f is dominated by weak C-H···Cl, C-H···O, and π···π stacking interactions. In the case of 2f, the presence of the -OH group on the organic ligand interacting with the chloride induces the formation of hydrogen-bonded molecular chains (see Figure 1C and Figure S2c). For a more detailed analysis of the crystal packing, consult the Supplementary Material (see Figures S4-S9).
Cyclic Voltammetry Under Ar
Cyclic voltammetries (CVs) of all the manganese complexes are reported in Figure 2. The CV of Mn(bpy)(CO)3Br under our experimental conditions (in black) is included for comparison. This complex undergoes two successive irreversible reduction reactions and shows two reoxidation peaks (Bourrez et al., 2011). The first (Ep1 = −1.29 V vs. Ag/AgCl) and the second (Ep2 = −1.51 V vs. Ag/AgCl) reduction processes lead to the formation of the dimer and the mononuclear pentacoordinated anion species, respectively. Reoxidations of the pentacoordinated anion and of the dimer are located at −1.09 and −0.21 V vs. Ag/AgCl, respectively. The newly synthesized complexes are expected to display similar electrochemical behavior. Table 2 reports the peak potentials of the first and second reductions. As expected, the presence of -CF3 (1a and 1b) and -CN (1c) shifts the first and second potentials toward more positive values when compared to Mn(bpy)(CO)3Br. In a recent paper (Rawat et al., 2019), DFT calculations suggested that electron-withdrawing substituents like -CF3 stabilize the radical anion; furthermore, the formation of all Mn-Mn dimers was indicated as unfavorable. However, the electrochemical mechanism outlined above is commonly accepted, and the Mn dimer is strongly favored. We experimentally found that all complexes undergo chemically irreversible first and second reductions. For example, 1a shows two chemically irreversible processes followed by the reoxidations of the pentacoordinated radical anion and of the dimer (Figure S10), even at high scan rates (1 V/s, Figure S11), thus confirming the general mechanism. The CV of 1g confirms that the strong electron-donating properties of the dimethylamino group result in more negative reduction potentials with respect to Mn(bpy)(CO)3Br. The push-pull system 1d and the complexes 1e and 1f display potential values similar to those of the unsubstituted bpy complex. In some complexes, decomposition processes occurring after reduction generate small peaks (e.g., at Ep = −1.65 and −1.63 V vs. Ag/AgCl for 1e and 1f, respectively). The very negative peak around −3 V vs. Ag/AgCl is commonly assigned to a ligand-centered reduction.
CVs of all the rhenium complexes are reported in Figure 3. The CV of the reference Re(bpy)(CO)3Cl under our experimental conditions is included for comparison. This complex exhibits a first reversible reduction, due to the formation of the radical anion, which is more stable than the manganese analog, thus resulting in electrochemical reversibility and no reoxidation peak of the dimer (Hawecker et al., 1984). The second reduction leads to the pentacoordinated anion. Intrinsically, rhenium complexes require slightly higher overpotentials than the corresponding manganese ones. The first reversible reduction of Re(bpy)(CO)3Cl is located at E1/2 = −1.35 V vs. Ag/AgCl, whereas the second, chemically irreversible reduction is at Ep2 = −1.80 V vs. Ag/AgCl. Similarly to manganese, the newly synthesized rhenium complexes show no significant difference in the electrochemical pathway under Ar when compared to the reference Re(bpy)(CO)3Cl (Table 2). Electron-withdrawing groups (2a, 2b, and 2c) shift the reduction processes toward less negative values; electron-donating groups, as in 2g, essentially merge the first and second reductions into a single peak. The push-pull system 2d, in analogy with 1d, does not significantly alter the positioning of the reduction potentials. For these complexes too, the third peak, around −2.5 V, is generally attributed to a bpy-centered reduction.
The reduction potentials estimated by DFT calculations are in excellent agreement with the experimental values obtained for the first reduction potential of all rhenium complexes (Figure 4). This confirms the reversible nature of the first reduction process, leading to stable radical anions characterized by a very small increase (about 0.05 Å) in the Re-Cl bond length (Figure S12). Conversely, in the case of manganese complexes, DFT calculations, which compute thermodynamic reduction potentials, are not suitable for estimating the irreversible electrochemical behavior of the compounds (Figure S13). Indeed, the high instability of the radical anion leads to the weakening of the Mn-Br bond, as clearly evidenced by the significant increase of the Mn-Br bond length in the anion structures (Figure S14). Our DFT calculations, which include weak interactions, are in agreement with the chemical irreversibility of the first reduction and with dimer production, even in the case of electron-withdrawing substituents. For example, the formation of the dimer [Mn(bpy)(CO)3]2 from its radical anion is favored by 143.5 kJ/mol, and in the case of 1a by 102.6 kJ/mol.
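For readers who want to reproduce this kind of estimate, the following is a hedged sketch of one common conversion from computed free energies to reduction potentials. This is a generic scheme, not necessarily the exact protocol used for Figure 4, and all numbers below are hypothetical.

```python
F = 96.485  # Faraday constant in kJ/(mol·V)

def reduction_potential(G_neutral, G_reduced, E_abs_ref, n=1):
    """E_calc = -DeltaG_red / (nF) - E_abs(ref), with solution-phase free
    energies in kJ/mol and DeltaG_red = G(reduced) - G(neutral)."""
    dG = G_reduced - G_neutral
    return -dG / (n * F) - E_abs_ref

# Hypothetical free energies (kJ/mol) and an absolute reference potential (V):
print(reduction_potential(0.0, -320.0, 4.28))  # about -0.96 V vs. the reference
```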
Cyclic Voltammetry Under CO2
The electrochemical behavior of the manganese complexes under CO2 and with H2O (5% v) is reported in Figure 5 for 1d to 1g and in Figure S15 for 1a, 1b, and 1c (these complexes are catalytically inactive toward CO2 reduction). CVs of 1a-1g under CO2 with 5% v MeOH are included for comparison in Figure S15 too. All complexes show no significant current increase on switching from Ar to CO2 atmosphere. While this is expected for manganese complexes, 1g shows a current increase, though limited, even in the absence of added Brønsted acid. The electrochemical behavior of the rhenium complexes under CO2 and with 5% v MeOH is reported in Figure 6 for 2d to 2g and in Figure S16 for 2a, 2b, and 2c. CVs of 2a-2g under CO2 with 5% v H2O are included for comparison in Figure S16 too. All complexes exhibit catalytic current on switching from Ar to CO2 atmosphere, even in the absence of Brønsted acid, as expected for rhenium complexes.
Controlled Potential Electrolysis Experiments of the Complexes
Bulk electrolysis experiments on all the manganese and rhenium complexes under CO2 were performed by setting the potential at values slightly more negative than the second reductions, with and without externally added Brønsted acids (water and methanol, 5%). A CO2 flow of 50 mL min−1 was kept constant during the experiments; gaseous products were determined by gas chromatography, and formate, if present, was assessed by NMR spectroscopy at the end of the experiments. Table 3 summarizes the results obtained during these CPE experiments. A general trend can be outlined: the addition of water results in increased TONs for the Mn complexes (Table 3 and Figure S17), differently from methanol, which drops them to lower values (Table S7). On the other hand, for the Re complexes the addition of methanol (Table 3 and Figure S18) seems to enhance the catalytic activity with respect to water, except for complex 2d, which is catalytically inactive despite the promising current increase in the presence of MeOH under CO2.
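For completeness, the bookkeeping behind TON and faradaic efficiency in such CPE experiments is just Faraday's law; a minimal sketch (our own code, with hypothetical numbers rather than values from Table 3) is given below.

```python
F = 96485.0  # Faraday constant, C/mol

def faradaic_efficiency(mol_product, charge_C, n_e=2):
    """Fraction of the passed charge stored in the product (n_e = 2 for CO2 -> CO)."""
    return n_e * F * mol_product / charge_C

def turnover_number(mol_product, mol_catalyst):
    return mol_product / mol_catalyst

Q = 10.0        # coulombs passed during the electrolysis (hypothetical)
n_CO = 4.8e-5   # moles of CO detected by GC (hypothetical)
n_cat = 2.5e-6  # moles of catalyst in the cell (hypothetical)
print(faradaic_efficiency(n_CO, Q), turnover_number(n_CO, n_cat))
```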
CONCLUSIONS
In summary, a systematic study of the effect of the electronic properties of the substituents on 2,2′-bipyridine Mn and Re complexes was conducted. Electron-withdrawing substituents shift the reduction potentials to more positive values and eventually inhibit the catalytic activities of the corresponding Mn and Re complexes toward CO2 reduction. In the case of electron-donating substituents the opposite trend is observed. These observations are in agreement with the induced electron density localized on the metal, which strongly influences the reactivity toward the weak electrophile CO2: increasing or decreasing the electron density on the metal should facilitate or prevent, respectively, the formation of an intermediate in which CO2 is coordinated to the metal. Another interesting effect of varying the electronic properties of the substituents is the merging of the first and second reduction processes, observed in some Re and Mn complexes. A judicious selection of bpy substituents provides an alternative to the use of bulky substituents to prevent dimer formation (Sampson et al., 2014), with the aim of transforming two 1e− reductions into a single 2e− reduction process (CO2-to-CO reduction requires 2e−).
It is interesting to note that, in spite of the presence of an -OH group in 1f, no formic acid is detected, probably because, in contrast with Mn complexes carrying local proton sources (Franco et al., 2014, 2017), the hydroxyl group is located far from the metal center, so the generation of the metal hydride, commonly considered the catalyst for formate production, is no longer entropically favored. It is also interesting how the CVs under CO2 of the push-pull systems 1d and 2d show enhanced catalytic currents; during CPE the Mn derivative 1d displays the highest TON value, while the corresponding Re derivative 2d, although it appears from CV to be a potentially highly active catalyst, undergoes decomposition. While Mn catalysts suffer from the presence of MeOH, they appear to work better in water, which seems to react promptly with reduced Mn. In fact, even if the reduction potential of 1g is rather negative (Table 2), no hydrogen is produced. This is in line with the high TON values observed for CO2 electrochemical reduction in pure water by Mn electrocatalysts supported on electrode surfaces (Walsh et al., 2015; Reuillard et al., 2017; Rotundo et al., 2019b). DFT calculations performed with dispersion correction agree with the experimental data for both Mn and Re complexes. The Mn-Br bonds computed for the Mn radical anions undergo a significant elongation, around 0.2 Å, which indicates non-negligible weakening of the Mn-Br bonds. The release of the halide is also very probably favored by the polar and coordinating solvent MeCN; indeed, Br substitution by MeCN has been experimentally observed not only in the radical anion, but also in neutral Mn species (Franco et al., 2017). We have demonstrated here how the appropriate choice of the electronic properties of the ligands is of critical importance in the design of more effective bipyridine Mn and Re electrocatalysts for CO2 reduction.
General Considerations
CV and CPE experiments were performed using a Metrohm Autolab 302n potentiostat. CO and H2 as CO2 reduction products were detected and quantified by an Agilent 490 Micro GC. NMR spectra were recorded on a JEOL ECP 400 FT-NMR spectrometer (1H operating frequency 400 MHz) or on a JEOL ECZR 600 FT-NMR spectrometer (1H operating frequency 600 MHz) at 298 K. 1H and 13C chemical shifts are reported relative to TMS (δ = 0) and referenced against solvent residual peaks.
IR-ATR spectra were collected on a Fourier transform Equinox 55 (Bruker) spectrophotometer equipped with an ATR device; resolution was set at 2 cm−1 for all spectra. A spectral range of 400-4,000 cm−1 was scanned, using KBr as a beam splitter. GC-MS spectra were obtained on an Agilent 5970B mass selective detector operating at an ionizing voltage of 70 eV, connected to a HP 5890 GC equipped with a HP-1 MS capillary column (25 m length, 0.25 mm I.D., 0.33 µm film thickness). Elemental analyses (C, H, N) were performed on a Fisons Instruments EA-1108 CHNS-O Elemental Analyzer. ESI-MS spectra were recorded with a Thermo Advantage Max spectrometer equipped with an ion trap analyzer and an ESI ion source.
Synthesis of 4,4′-bis(trifluoromethyl)-2,2′-bipyridine: the method of O'Donnell et al. (2016) was slightly modified to synthesize 4,4′-bis(trifluoromethyl)-2,2′-bipyridine, first reported by Furue et al. (1992). In a degassed 20 mL screw-cap vial, a solution of 2-bromo-4-(trifluoromethyl)pyridine (500 mg, 2.2 mmol) in 7.5 mL of anhydrous DMF was poured and degassed for 5 min. Pd(OAc)2 (25 mg, 0.11 mmol) was then added and the mixture degassed for an additional 5 min. TBAI (815 mg, 2.2 mmol), anhydrous K2CO3 (460 mg, 3.3 mmol), and i-PrOH (0.35 mL, 4.4 mmol) were then added to the mixture, which was subsequently heated at 100 °C for 20 h. The heating was suspended, and the reaction mixture was filtered through a pad of celite. The filtrate was diluted in 25 mL of DCM and washed with deionized water (3 × 25 mL). The organic phase was collected, and the water layer was further extracted with 10 mL of DCM. The collected organic phase was dried with anhydrous MgSO4 and filtered, and the solvent was removed under reduced pressure to afford the crude solid, which was purified by column chromatography on silica gel (eluent PE/AcOEt 10/1) to afford 220 mg of white solid as product (yield = 80%).
Synthesis of [2,2′-bipyridine]-4,4′-dicarbonitrile: the procedure reported by Losse et al. (2008) was followed with slight modifications. 4-Cyanopyridine (700 mg, 6.7 mmol) and 10% Pd/C (50 mg) were added to a 25 mL round-bottomed flask, five vacuum/nitrogen cycles were run to minimize the amount of oxygen, and the flask was then connected to a reflux condenser under nitrogen atmosphere. The mixture was heated to 230 °C in a sand bath in order to reflux the 4-cyanopyridine. After 24 h the mixture was cooled to room temperature, CHCl3 (15 mL) was added, and the black suspension was filtered through a frit. Solvent was removed under reduced pressure from the pale-yellow solution obtained until the product started to crystallize. Pentane (15 mL) was then added, and the concentrated solution was cooled at 4-6 °C for 16 h. The precipitate was filtered on a Büchner funnel, washed with cold EtOH, and dried to yield 100 mg of the product as a yellow-orange solid (yield 15%).
General procedure for the synthesis of 2-substituted N,N-dimethylpyridin-4-amine derivatives: the previously reported procedure of Cuperly et al. (2002) was adapted according to the electrophile used. 2-(Dimethylamino)ethanol (0.8 mL, 8.0 mmol) was added to a three-necked round-bottomed flask and dissolved in hexane (10 mL) under a N2 atmosphere. The solution was cooled to −5 °C, BuLi (2.5 M, 6.4 mL, 16.0 mmol) was added dropwise over 10 min, and the resulting mixture was stirred for 40 min at −5 °C. 4-DMAP (488 mg, 4.0 mmol) was then added and stirring continued for an additional 60 min at 0 °C; the reaction medium was then cooled to −78 °C and a solution of the appropriate electrophile (10.0 mmol) was added dropwise by means of a dropping funnel with pressure equalization. Once the addition of the electrophile was complete, the temperature was allowed to rise to 0 °C (1.5 h) and the reaction was quenched with deionized water at this temperature.

Synthesis of N,N-dimethyl-2-(tributylstannyl)pyridin-4-amine: 2-(dimethylamino)ethanol (0.8 mL, 8.0 mmol) in hexane (10 mL), BuLi (2.5 M, 6.4 mL, 16.0 mmol), and 4-DMAP (0.488 g, 4.0 mmol) were reacted as previously described, then a solution of Bu3SnCl (3.255 g, 10.0 mmol) in 15 mL of hexane was added dropwise over 20 min. The reaction was quenched with deionized water (15 mL), then the aqueous phase was extracted with DCM (2 × 20 mL) and AcOEt (2 × 20 mL). The collected organic phase was dried with anhydrous Na2SO4 and filtered, and the solvent was removed under reduced pressure to afford a crude orange oil, which was used without further purification in the cross-coupling reactions (NMR-calculated yield = 69%).
Synthesis of 2-iodo-N,N-dimethylpyridin-4-amine: 2-(dimethylamino)ethanol (1.6 mL, 16.0 mmol) in hexane (25 mL), BuLi (2.5 M, 12.8 mL, 32.0 mmol), and 4-DMAP (0.980 g, 8.0 mmol) were reacted as described above, then a solution of resublimed I2 (5.080 g, 20.0 mmol) in 50 mL of freshly distilled Et2O was added dropwise over 35 min. The reaction was quenched with a saturated solution of Na2S2O3 (25 mL) and stirred for an additional 20 min at 0 °C. The organic phase was separated and washed again with Na2S2O3 solution (2 × 15 mL) and brine (2 × 15 mL). The combined organic phase was dried over anhydrous Na2SO4, filtered, and the solvent removed under reduced pressure to afford a crude brown solid that was purified by column chromatography on silica gel (eluent: PE/AcOEt 5/5) to afford 1.683 g of the product as a white solid (yield = 86%).
General procedure for the Stille cross-coupling of N,N-dimethyl-2-(tributylstannyl)pyridin-4-amine with 2-halopyridines. Freshly distilled toluene (25 mL) was added to a degassed 50 mL screw-cap vial and degassed for 10 min, then Pd(OAc)2 (22 mg, 0.1 mmol) and PPh3 (52 mg, 0.2 mmol) were added in one portion. The resulting mixture was stirred and degassed until the solution turned red. Subsequently, N,N-dimethyl-2-(tributylstannyl)pyridin-4-amine (370 mg, 0.9 mmol), LiI (40 mg, 0.3 mmol), and CuI (57 mg, 0.3 mmol) were added to the mixture, which was degassed for an additional 5 min. Finally, the appropriate 2-halopyridine (1.1 mmol) was poured into the reaction medium, which was kept under a N2 atmosphere and heated to reflux for 16 h. After cooling to room temperature, the resulting mixture was diluted with EtOAc (30 mL) and washed with a NH4OH solution (10 M) until the aqueous layer no longer turned blue, indicating that all the copper had been extracted. The organic phase was filtered through a pad of celite, diluted with DCM (30 mL), and dried over anhydrous Na2SO4. The solvents were then removed under reduced pressure to afford the crude solid, which was subsequently purified to yield the desired product.
Synthesis of N-(2-pyridylacetyl)pyridinium iodide (1): in a three-necked flask, 2-acetylpyridine (82.5 mmol) and iodine (90.5 mmol) were dissolved in 100 mL of pyridine. The solution was refluxed for 3 h. The shiny black precipitate was then filtered and washed with diethyl ether (20 mL). The solid was recrystallized from hot ethanol to give fine-scaled golden crystals (overall yield: 54%).
Elemental Analysis of the Complexes
The samples for microanalyses were dried in vacuum to constant weight (20 °C, ca. 0.1 Torr). Elemental analyses (C, H, N) were performed on these samples.
Single-Crystal X-Ray Diffraction
The single-crystal data were collected with a Gemini R Ultra diffractometer with graphite-monochromated Mo-Kα radiation (λ = 0.71073 Å) by the ω-scan method. The cell parameters were retrieved with the CrysAlisPro (Agilent, 2015) software, and the same program was used to perform data reduction with corrections for Lorentz and polarization effects. Scaling and absorption corrections were applied through the CrysAlisPro multi-scan technique. The structures of complex 2e (both structures A and B) were solved with direct methods, whereas in the case of 2f a meaningful initial guess for the electron density was obtained only with the Patterson function, using SHELXS-14 (Sheldrick, 2008, 2015). All the structures were refined with full-matrix least-squares techniques on F² with SHELXL-14 (Sheldrick, 2008, 2015) using the program Olex2 (Dolomanov et al., 2009). All non-hydrogen atoms were refined anisotropically. Hydrogen atoms were calculated and riding on the corresponding bonded atoms. The graphics of the crystal structures were generated using Mercury 3.9 (Macrae et al., 2006). CCDC codes 1891407-1891409 contain the supplementary crystallographic data for 2e (structure A), 2e (structure B), and 2f. These data can be obtained free of charge via https://www.ccdc.cam.ac.uk/conts/retrieving.html, or from the Cambridge Crystallographic Data Centre, 12 Union Road, Cambridge CB2 1EZ, UK; fax: (+44) 1223-336-033; or e-mail: deposit@ccdc.cam.ac.uk.
CV and CPE Experiments
Acetonitrile used for the experiments was freshly distilled over calcium hydride and purged with Ar before use. Solutions of the complexes (0.5-1 mM) were prepared with tetrabutylammonium hexafluorophosphate (TBAPF6, Sigma-Aldrich, 98%) as supporting electrolyte (0.1 M). A single-compartment cell was employed for CV measurements, equipped with a glassy carbon working electrode (GCE, Ø = 1 mm), a Pt counter electrode, and a Ag/AgCl (KCl 3 M) reference electrode. Ar- and CO2-saturated conditions were achieved by purging the gases for 5 min before each potential sweep. A double-compartment H-type cell was used for CPE measurements, in which a glass frit separates the anodic compartment (housing a Pt wire counter electrode) from the cathodic one. A glassy carbon rod was employed as working electrode together with the Ag/AgCl reference electrode. A controlled and constant flow of CO2 (50 mL min−1) was maintained during the CPE measurements by means of a Smart Trak 100 (Sierra) flow controller. Under these experimental conditions, the ferrocenium/ferrocene redox couple (Fc+/Fc) is located at E1/2 = 0.35 V.
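As a side note, the half-wave potential used to reference the Fc+/Fc couple is simply the midpoint of the anodic and cathodic peak potentials of the voltammogram. The following minimal Python sketch illustrates that calculation on arbitrary potential/current sweep arrays; it is an illustration of the underlying arithmetic, not the authors' analysis pipeline.

    import numpy as np

    def half_wave_potential(e_fwd, i_fwd, e_rev, i_rev):
        # Anodic peak: maximum current on the forward (oxidative) sweep.
        e_pa = e_fwd[np.argmax(i_fwd)]
        # Cathodic peak: minimum current on the reverse (reductive) sweep.
        e_pc = e_rev[np.argmin(i_rev)]
        # E1/2 is the midpoint of the two peak potentials.
        return 0.5 * (e_pa + e_pc)

For a reversible couple such as Fc+/Fc measured under the conditions above, this midpoint should fall near +0.35 V versus the Ag/AgCl reference.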
Quantitative Analysis of CO2 Reduction Products

µGC measurements were used to detect CO and H2. Two modules equipped with CP-Molsieve 5 Å columns were kept at 105 °C and at pressures of 30 and 28 psi, each with a thermal conductivity detector. The carrier gases were Ar for H2 detection and He for CO detection, respectively. The backflush vent option time was set to 7 s. The gas inside the measurement cell was sampled for 30 s every 3 min to fill the Micro GC 10 µL sample loop, and eventually 500 nL was injected into the column for the analyses. Instrument calibration was carried out by measuring two different certified standards of CO and H2 in an Ar matrix (Rivoira). Formate production was assessed by NMR spectroscopy.
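To make the calibration step concrete, the sketch below shows how detector peak areas could be converted into gas concentrations with a linear fit through the two certified standards. All numerical values are placeholders, not the actual calibration data.

    import numpy as np

    # Hypothetical two-point calibration for one gas (e.g. H2 in Ar).
    std_conc_ppm = np.array([100.0, 1000.0])   # certified standard concentrations
    std_peak_area = np.array([1.2e4, 1.2e5])   # measured detector peak areas

    slope, intercept = np.polyfit(std_peak_area, std_conc_ppm, 1)

    def area_to_ppm(area):
        # Map an unknown sample's peak area onto the calibration line.
        return slope * area + intercept

    print(f"Sample at area 5.0e4 -> {area_to_ppm(5.0e4):.0f} ppm")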
Computational Details
All the calculations were performed with the Gaussian 16 Revision B.01 (G16) program package (Frisch et al., 2016), employing density functional theory (DFT). Calculations were run using the Becke three-parameter hybrid functional (Becke, 1993) combined with the Lee-Yang-Parr gradient-corrected correlation functional (B3LYP) (Lee et al., 1988). Dispersion effects were added as semiempirical corrections with the Becke-Johnson damping approach (GD3BJ) (Grimme et al., 2010, 2011). The solvent effect was included using the conductor-like polarizable continuum model (CPCM) with acetonitrile as solvent (Miertus et al., 1981). The def2TZVP basis set and the corresponding effective core potential were used for the Mn, Br, and Cl atoms, and the def2-SVP basis set was used for all the other atoms (Weigend and Ahlrichs, 2005). Unrestricted open-shell calculations were performed on the radical anions. Geometry optimizations were carried out without any symmetry constraints. The nature of the stationary points on the potential energy hypersurface was characterized by harmonic vibrational frequency calculations. No imaginary frequencies were found, indicating that the located structures are minima on the potential-energy surfaces. Molecular graphics were produced using the UCSF Chimera package from the Resource for Biocomputing, Visualization, and Informatics at the University of California, San Francisco (Pettersen et al., 2004).
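For readers wishing to reproduce a comparable setup, a plausible Gaussian 16 route section implied by the description above is sketched below. The exact keywords and options used by the authors may differ, so treat this strictly as an illustrative template.

    # Illustrative G16 route (assumed, not the authors' verbatim input):
    # B3LYP with GD3BJ dispersion, CPCM acetonitrile, optimization + frequencies.
    route = (
        "#P B3LYP/GenECP EmpiricalDispersion=GD3BJ "
        "SCRF=(CPCM,Solvent=Acetonitrile) Opt Freq"
    )

    # In the Gen/GenECP section, def2TZVP (with its ECP) would be assigned to
    # Mn, Br and Cl, and def2-SVP to every remaining element; an unrestricted
    # reference (UB3LYP) would be requested for the open-shell radical anions.
    print(route)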
AUTHOR CONTRIBUTIONS
RG and CN, as corresponding authors, wrote and revised the manuscript. EP determined the X-ray crystal structures. EA, AD, and PQ carried out the synthesis of the ligands. LR, RR, and LN synthesized the organometallic catalysts and performed the electrochemical and GC measurements. CG and CN performed the DFT calculations.
Asymmetric positioning of Cas1–2 complex and Integration Host Factor induced DNA bending guide the unidirectional homing of protospacer in CRISPR-Cas type I-E system
CRISPR–Cas system epitomizes prokaryote-specific quintessential adaptive defense machinery that limits the genome invasion of mobile genetic elements. It confers adaptive immunity to bacteria by capturing a protospacer fragment from invading foreign DNA, which is later inserted into the leader proximal end of the CRISPR array and serves as immunological memory to recognize recurrent invasions. The universally conserved Cas1 and Cas2 form an integration complex that is known to mediate the protospacer invasion into the CRISPR array. However, the mechanism by which this protospacer fragment gets integrated in a directional fashion into the leader proximal end is elusive. Here, we employ CRISPR/dCas9 mediated immunoprecipitation and genetic analysis to identify Integration Host Factor (IHF) as an indispensable accessory factor for spacer acquisition in Escherichia coli. Further, we show that the leader region abutting the first CRISPR repeat localizes IHF and the Cas1–2 complex. IHF binding to the leader region induces bending by about 120° that in turn engenders the regeneration of the cognate binding site for the protospacer-bound Cas1–2 complex and brings it into proximity with the first CRISPR repeat. This appears to guide the Cas1–2 complex to orient the protospacer invasion towards the leader-repeat junction, thus driving the integration in a polarized fashion.
INTRODUCTION
Archaea and Bacteria defend themselves from the assault of phages and plasmids by employing the CRISPR-Cas adaptive immune system (1-5). CRISPR constitutes an array of direct repeats (each of ∼30-40 bp) that are interspersed with similarly sized variable spacer sequences. The spacer sequences are captured from the invading foreign DNA and serve as immunological memory, akin to antibodies in higher organisms, to mount retaliation during recurrent infection (6,7). Several studies in the recent past revealed that CRISPR interference proceeds via three stages: (i) adaptation, (ii) maturation and (iii) interference. Immunological memory is generated during adaptation, wherein short stretches of DNA from invaders (protospacers) are acquired and incorporated into the CRISPR locus. This is followed by the transcription and processing of the pre-CRISPR RNA transcript that generates the mature CRISPR RNA (crRNA), onto which several Cas proteins assemble to form a ribonucleoprotein (RNP) surveillance complex. The crRNA within the RNP guides the target recognition by base complementarity, whereas the protein components facilitate the cleavage of the target DNA (2-5,8).
While adaptation constitutes the cornerstone of the CRISPR-Cas system by expanding the immunological memory, it is also less well understood than the other two stages (8,9). The adaptation process can be envisaged to encompass two subsets of events: the uptake of protospacer fragments from the foreign DNA and their subsequent insertion into the CRISPR array. The generation of protospacer fragments from the foreign DNA in Escherichia coli (Type I-E) involves the RecBCD nuclease activity (10); however, it appears that only those fragments of about 33 bp that border the protospacer adjacent motif (PAM) are captured and integrated into the CRISPR array (11-15). In Type-I, Type-II and Type-V CRISPR-Cas systems (4), the PAM comprises a short stretch (2-5 nucleotides) of conserved sequence present either upstream or downstream of the acquired protospacer element (9,13,16-19). This sequence varies among different species and assists in discriminating self from non-self during the interference step. Point mutations in the PAM and protospacer of invading nucleic acid elements lead to imperfect pairing and abrogate target cleavage by the interference complex. Such mismatched priming leads to acquisition of new spacers more rapidly and efficiently from the mutated invader by a process termed 'primed acquisition'. This feedback loop mechanism, in addition to naïve adaptation (or non-primed adaptation), effectively aids the bacteria in countering mutated phages (9,11,19,20).
Two of the highly conserved Cas proteins, Cas1 and Cas2, form a complex (Cas1-2 complex) that captures the protospacer element and promotes its insertion into the CRISPR array. Here, Cas1 is shown to function like an integrase, and Cas2 provides a structural scaffold that stimulates the catalytic activity of Cas1 (21-23). This complex structure acts as a molecular ruler that appears to determine the length of the acquired protospacer element (21,23). Nucleophilic attack mediated by the free 3′-OH ends of protospacers integrates them into the repeat-spacer array (21,22,24). In Type I-E, the Cas1-2 complex alone is sufficient to mediate naïve adaptation, whereas the active interference complex is indispensable for primed acquisition (11,13). On the contrary, Type-IB, Type-IF and Type-II systems require all the Cas proteins (including maturation and interference proteins) for the incorporation of new spacers in vivo (25-28). In addition to the involvement of Cas proteins, recent studies have highlighted the importance of host-encoded proteins in CRISPR immunity. The nucleoid protein H-NS was shown to control CRISPR immunity by regulating cas operon expression (29). A recent study also demonstrated the requirement of genome stability proteins such as the RecG helicase and PriA in E. coli primed acquisition (30). Physical and genetic interaction studies performed on E. coli Cas1 revealed its interaction with various DNA repair pathway proteins, viz., RuvB, RecB, RecC and others (31).
While the Cas1-2 complex seems to be essential, it is not sufficient for spacer uptake in vivo. Sequences upstream of the first CRISPR repeat (referred to as the leader) are shown to harbor DNA elements critical for the adaptation process (13,32,33). Despite the presence of several repeat-spacer units in the CRISPR array, the site of integration of a new protospacer has always been at the leader-repeat junction, resulting in the integration of the protospacer and concomitant duplication of the first repeat (11-15). This polarization preserves the chronology of the integration events such that the newest protospacer is closest to the leader proximal end and the oldest protospacer at the distal end. Intriguingly, while the Cas1-2 complex alone is sufficient for the integration of protospacer elements in E. coli and is shown to have intrinsic sequence specificity in vitro (24), it lacks homing site specificity towards the leader-repeat junction, leading to integration at all CRISPR repeats (22). This hints at the involvement of accessory factors that bring in specificity towards the integration site for the invading protospacers. Recently, Integration Host Factor (IHF) was shown to act as an essential accessory factor that determines the specificity of protospacer acquisition in E. coli (34). Here, based on CRISPR/dCas9 mediated immunoprecipitation (35) and biochemical analysis, we were independently led to identify IHF as an essential factor in protospacer acquisition. We further show that the leader region harbours a binding site for IHF and that it participates in the protospacer acquisition by bending the leader region by about 120° to produce a reversal in the direction of the DNA. This brings the Cas1-2 complex, which is also localized adjacent to IHF, into proximity with the first CRISPR repeat, favouring the nucleophilic attack of the invading protospacer on the leader-repeat junction.
Construction of bacterial strains and plasmids
Descriptions of the strains, plasmids and oligonucleotides are listed in Supplementary Tables S2-S4, respectively. Escherichia coli IYB5101 (referred to as Wt) (13) was used as the parental strain for all the genomic manipulations, unless specified otherwise. Knock-out strains of ihfα (ΔIHFα) and ihfβ (ΔIHFβ) were created using λ Red recombineering (36). Keio collection strains (37) carrying deletions of ihfα and ihfβ were used as templates for amplification of kanamycin-resistance cassettes along with 100-130 bp of flanking sequence. The amplified cassettes were used to transform λ Red recombinase-expressing E. coli IYB5101 to create the ΔIHFα and ΔIHFβ strains.
pdCas9-bacteria (38) was modified with a construct encoding 3XFLAG-dCas9-StrepII. Overlap extension PCR was used to generate a 166 bp DNA fragment encoding a gRNA complementary to a region that is 86 bp upstream of the first CRISPR repeat in E. coli BL21-AI (NCBI accession: NC 012947.1, nucleotide positions: 1002800-1003800). This region was inserted between the SpeI and HindIII sites of pgRNA-bacteria (38) to create pgRNA-leader.
To generate pBend-Wt and pBend-CBS2, 81 bp complementary oligos (encompassing 69 bp of leader sequence) corresponding to the Wt and CBS2 leader sequences were annealed and end-filled by PCR, phosphorylated using T4 polynucleotide kinase, and inserted into pBend5 using the HpaI site (40).
Escherichia coli K-12 MG1655 genomic DNA was used as template to amplify the genes encoding IHFα, IHFβ, Cas1 and Cas2. To generate p8R-IHFα and p1R-IHFα, a bicistronic cassette encoding IHFα and IHFβ was amplified and inserted into p8R and p1R using the SspI site, whereas p13SR-Cas1 and p1S-Cas2 were generated by inserting the regions encoding Cas1 and Cas2 into p13SR and p1S, respectively, using the SspI site. All constructs were verified by sequencing.
Expression and purification of proteins
Escherichia coli BL21(DE3) harbouring p1R-IHFα was grown in terrific broth supplemented with 100 µg/ml kanamycin at 37 °C until the OD600 reached 0.6. At this point, IHF expression was induced with the addition of 0.5 mM IPTG and the cells were allowed to grow for 4 h at 37 °C. Thereafter, the cells were harvested and resuspended in IHF binding buffer (20 mM Tris-Cl pH 8, 150 mM NaCl, 10% glycerol, 1 mM PMSF and 6 mM β-mercaptoethanol). The cells were then subjected to lysis by sonication and the clarified soluble extract was loaded onto a 5 ml StrepTrap HP column (GE Healthcare). After loading, the column was washed with IHF binding buffer and proteins were eluted with IHF binding buffer containing 2.5 mM D-desthiobiotin (Sigma). Eluted protein fractions were pooled and loaded onto a 5 ml HiTrap Heparin HP column (GE Healthcare). The column was washed with IHF binding buffer and bound proteins were eluted with a linear gradient of 0.15-2 M NaCl in IHF binding buffer. Purified fractions were pooled and dialyzed against IHF binding buffer. The dialyzed protein was concentrated, flash frozen and stored at −80 °C until required.
In order to express Cas1, E. coli BL21(DE3) harbouring p13SR-Cas1 was grown until OD600 = 0.6 at 37 °C in auto-induction media supplemented with 100 µg/ml spectinomycin. Thereafter, growth and induction were continued for a further 16 h at 16 °C. Subsequently, cells were harvested, resuspended in Buffer 1A (20 mM HEPES-NaOH pH 7.4, 500 mM KCl, 10% glycerol, 1 mM PMSF and 1 mM DTT) and lysed by sonication. The clarified soluble cell extract was loaded onto a 5 ml StrepTrap HP column, which was then washed with Buffer 1A. Proteins were eluted with Buffer 1A containing 2.5 mM D-desthiobiotin. Eluted protein fractions were dialyzed against Buffer 1B (20 mM HEPES-NaOH pH 7.4, 50 mM KCl, 10% glycerol and 1 mM DTT) and loaded onto a 5 ml HiTrap Heparin HP column. The protein-loaded column was washed with Buffer 1B and bound proteins were eluted with a linear gradient of 0.05-2 M KCl in Buffer 1B. Purified fractions were pooled and dialyzed against buffer containing 20 mM HEPES-NaOH pH 7.4, 150 mM KCl, 10% glycerol and 1 mM DTT. The dialyzed protein was concentrated, snap frozen and stored at −80 °C until required.
For purification of C-terminal Strep-II tagged Cas2, E. coli BL21(DE3) harbouring p1S-Cas2 was grown until OD600 = 0.6 in auto-induction media supplemented with 100 µg/ml kanamycin at 37 °C. Thereafter, growth and induction were continued for a further 16 h at 16 °C. Subsequently, the cells were harvested, resuspended in Buffer 2A (20 mM HEPES-NaOH pH 7.4, 500 mM KCl, 10% glycerol, 10 mM imidazole, 1 mM PMSF and 1 mM DTT) and lysed by sonication. The clarified soluble extract was loaded onto a 5 ml HiTrap IMAC HP column (GE Healthcare). After loading, the column was washed with Buffer 2A and proteins were eluted using a linear gradient of imidazole (0.01-0.5 M) in Buffer 2A. Purified fractions were pooled and mixed with TEV protease (in a 10:1 ratio of His-SUMO-Cas2-Strep:TEV), and incubation was continued during dialysis against Buffer 2A at 4 °C overnight.
The dialyzed protein mixture was loaded onto a 5 ml HiTrap IMAC HP column five times to allow binding of the His-tagged SUMO-Cas2-Strep, SUMO and TEV protease. Subsequently, a 5 ml StrepTrap HP column was connected in tandem and the protein mixture was passed through five more times. The C-terminal Strep-tagged Cas2 was later eluted with Buffer 2B (20 mM HEPES-NaOH pH 7.4, 500 mM KCl, 10% glycerol, 2.5 mM D-desthiobiotin and 1 mM DTT). Eluted fractions were pooled and dialyzed against buffer containing 20 mM HEPES-NaOH pH 7.4, 150 mM KCl, 10% glycerol and 1 mM DTT. The dialyzed protein was concentrated, snap frozen and stored at −80 °C until required.
CRISPR/dCas9 mediated immunoprecipitation
Escherichia coli BL21-AI was transformed with p3XF-dCas9, pgRNA-leader and pCas1-2[K] (14) and was allowed to grow in a shaker operated at 180 rpm until OD600 = 0.6 at 37 °C in LB media supplemented with 0.2% L-arabinose, 0.1 mM IPTG, 25 µg/ml chloramphenicol, 100 µg/ml ampicillin and 50 µg/ml spectinomycin. Anhydrotetracycline (100 ng/ml) was added to induce the expression of 3×FLAG-tagged dCas9 and growth was continued for four more hours to allow the dCas9-gRNA complex to anchor on its target site, i.e. the upstream region of the CRISPR leader. Chemical crosslinking and cell lysis were performed as described previously (41) with a few modifications. Formaldehyde was added to a final concentration of 1% to crosslink proximally interacting nucleic acids and proteins. Crosslinking was continued for 20 min at 25 °C with gentle rocking. Glycine was added to a final concentration of 0.5 M and incubation was continued for 5 min at 25 °C to quench the crosslinking reaction. 10 ml of cells were centrifuged at 2500 × g at 4 °C for 5 min and the pellet was washed twice with an equal volume of buffer W (20 mM Tris-Cl pH 7.5 and 150 mM NaCl). Pelleted cells were resuspended in 1 ml of buffer L (10 mM Tris-Cl pH 8.0, 20% sucrose, 50 mM NaCl, 10 mM EDTA, 10 mg/ml lysozyme) and incubated at 37 °C for 30 min. The lysate was resuspended in 4 ml of buffer R (50 mM HEPES-KOH pH 7.5, 150 mM NaCl, 1 mM EDTA, 1% Triton X-100, 0.1% sodium deoxycholate, 1 mM PMSF and 0.1% SDS). The cells were subjected to sonication for four rounds of 15 × 1 s pulses with a 2 min pause between each round in a Vibra-cell probe sonicator set at 33% amplitude. The clarified supernatant containing the sheared DNA-protein complexes was separated by centrifugation. 800 µl of supernatant was mixed with 200 µl of Protein G Dynabeads (Life Technologies) conjugated with 20 µg of anti-FLAG M2 antibody (Sigma) and rocked gently at 4 °C overnight. The incubated beads were separated by centrifugation and washed twice each with 1 ml of Low Salt Wash Buffer (20 mM Tris-Cl pH 8.0, 150 mM NaCl, 2 mM EDTA, 1% Triton X-100, 0.1% SDS), High Salt Wash Buffer (20 mM Tris-Cl pH 8.0, 500 mM NaCl, 2 mM EDTA, 1% Triton X-100, 0.1% SDS), LiCl Wash Buffer (10 mM Tris-Cl pH 8.0, 250 mM LiCl, 1 mM EDTA, 0.5% Nonidet P-40 (NP-40), 0.5% sodium deoxycholate), and TBS Buffer (50 mM Tris, pH 7.5, 150 mM NaCl) with 0.1% NP-40, as described previously (35). In the final step, beads were separated by centrifugation and resuspended in 100 µl of buffer containing 20 mM Tris-Cl pH 8 and 150 mM NaCl. 30 µl of the resuspended beads were mixed with 10 µl of 4× SDS sample buffer and heated at 95 °C for 30 min to reverse the crosslinks and denature the proteins. The heated mixture was loaded onto SDS-PAGE and electrophoresed until it entered the stacking gel. The part of the stacking gel containing the proteins was sliced out and analyzed by mass spectrometry for the identification of protein factors in the sample.
Spacer acquisition assays
In vivo acquisition assays were performed as described earlier (13). Briefly, three cycles of growth and induction were performed with E. coli IYB5101 (Wt) or its variants carrying pCas1-2[K] (14) at 37 °C for 16 h in LB media supplemented with 50 µg/ml spectinomycin, 0.2% L-arabinose and 0.1 mM IPTG. Between each cycle, cultures were diluted 1:300 with fresh LB media containing the aforementioned supplements and growth was continued for 16 h. For IHF complementation experiments, the ΔIHFα and ΔIHFβ strains were transformed with p8R-IHFα and pCas1-2[K] and three cycles of induction were performed as discussed above. To monitor CRISPR array expansion, 200 µl of induced cells were collected after cycle 3, washed thrice and resuspended in distilled water. These cells were used as template for PCR to monitor CRISPR array expansion either in the CRISPR 2.1 array (in the case of the Wt, ΔIHFα and ΔIHFβ strains) or in the P21 locus-integrated CRISPR DNA (in the case of the Wt, ΔIBS, IBS, CBS1-3, CBS2(L), CBS2(C) and CBS2(R) strains). All the PCR-amplified samples were separated on 1.5% agarose gels to identify parental and expanded arrays (parental array + 61 bp).
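Since each integration event duplicates a repeat and adds one spacer, every acquired unit enlarges the amplicon by 61 bp. A minimal sketch of how observed band sizes map onto spacer counts is given below; the parental amplicon size is a placeholder, as the real value depends on the primers used.

    # Hypothetical parental amplicon size (depends on the primer pair).
    PARENTAL_BP = 350
    UNIT_BP = 61  # one repeat-spacer unit in the E. coli type I-E array

    def n_new_spacers(observed_bp):
        # Round to the nearest whole repeat-spacer unit.
        return max(0, round((observed_bp - PARENTAL_BP) / UNIT_BP))

    for band in (350, 411, 472):
        print(band, "bp ->", n_new_spacers(band), "new spacer(s)")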
FRET-based monitoring of DNA bending
A 35 bp DNA encompassing the leader sequence (−4 to −38 from the leader-repeat junction) of Wt (or Wt without quencher, or IBS) was assembled from three oligos by annealing and end-labeled with 6-FAM at the 3′ end and Iowa Black at the 5′ end (IDT). 222 nM of DNA was incubated with increasing concentrations of purified IHF (0, 0.3, 0.6, 0.9, 1.2, 1.5 and 1.8 µM) in buffer containing 0.5× TBE, 100 mM KCl, 10% glycerol and 5 µg/ml BSA for 20 min at 25 °C. Post-incubation, samples were excited at 495 nm and emission was monitored from 500 to 600 nm, averaging over three scans in a FluoroMax-4 spectrofluorometer (Horiba Scientific, Edison, NJ, USA). The slit widths used for excitation and emission were 2 and 7 nm, respectively. In order to further ascertain that the enhanced quenching is due to IHF-mediated DNA bending, a fluorescence recovery assay was designed. In this assay, buffer (0.5× TBE, 100 mM KCl, 10% glycerol and 5 µg/ml BSA) containing 222 nM DNA was excited at 495 nm and emission was captured for 200 s at 520 nm. To this sample, IHF was added to a final concentration of 1.8 µM and fluorescence emission was recorded until 600 s. Thereafter, IHF degradation and DNA release were initiated by the addition of proteinase K to a final concentration of 1 mg/ml and emission was monitored for another 400 s. After background correction, the fluorescence intensity of DNA in the presence of IHF was normalized relative to that of DNA alone.
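The normalization in the last step is a simple ratio of background-corrected intensities; a minimal sketch is shown below, with illustrative variable names rather than the actual analysis script.

    import numpy as np

    def normalized_intensity(trace_with_ihf, trace_dna_alone, background):
        # Subtract the buffer-only background from both emission traces,
        # then express the IHF-containing signal relative to DNA alone.
        corrected_complex = np.asarray(trace_with_ihf) - background
        corrected_free = np.asarray(trace_dna_alone) - background
        return corrected_complex / corrected_free  # ~1.0 means no quenching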
Estimation of bending angles by circular permutation gel retardation assay
pBend-Wt and pBend-CBS2 were digested with HindIII and EcoRI to produce a 329 bp DNA fragment. This fragment was gel purified as per the manufacturer's instructions (Qiagen) and digested with BamHI, KpnI, SspI, EcoRV, SpeI, BglII and MluI in separate reactions. All the digested DNA samples were further purified (Qiagen) and 21 nM of each DNA was incubated individually with 0.7 µM IHF in buffer containing 0.5× TBE, 100 mM KCl, 10% glycerol and 5 µg/ml BSA for 30 min at 25 °C. Post-reaction samples were directly loaded on an 8% native acrylamide gel and electrophoresed in 1× TBE at 4 °C. Gels were post-stained with EtBr and DNA bands were visualized in a gel documentation system. IHF bending angles were calculated as described previously (42). Mobilities of the IHF-bound DNA complex (Rb) and the respective free DNA (Rf) were calculated for all the restriction-digested fragments. Rb values were normalized to the respective Rf values and plotted against the flexure displacement (length from the middle of the binding site to the 5′ end of the restriction fragment / total restriction fragment length). The resulting plot was fitted to a quadratic equation, y = ax² + bx + c, where x and y denote the flexure displacement and Rb/Rf, respectively. The bending angle (α) was calculated using the relationship a = −b = 2c(1 − cos α). Here, we represent the bending angle (α) as the average of the values calculated from the parameters a and b.
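To make the fitting step concrete, the sketch below reproduces the calculation on hypothetical mobility data chosen to mimic a bend of roughly 120°. It follows the stated relation and is not the authors' analysis code.

    import numpy as np

    # Hypothetical circular-permutation data: flexure displacement (x) and
    # normalized mobility Rb/Rf (y), slowest when the bend sits mid-fragment.
    x = np.array([0.07, 0.20, 0.35, 0.50, 0.65, 0.80, 0.93])
    y = np.array([0.77, 0.50, 0.31, 0.25, 0.31, 0.50, 0.77])

    # Fit y = a*x^2 + b*x + c (np.polyfit returns coefficients [a, b, c]).
    a, b, c = np.polyfit(x, y, 2)

    # Stated relationship: a = -b = 2c(1 - cos(alpha)); solve for alpha
    # separately from a and from -b, then report the average.
    alpha_a = np.degrees(np.arccos(1.0 - a / (2.0 * c)))
    alpha_b = np.degrees(np.arccos(1.0 - (-b) / (2.0 * c)))
    print(f"bending angle ~ {(alpha_a + alpha_b) / 2.0:.0f} degrees")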
Circular permutation gel retardation assays in the presence of Cas1-2 were performed as described above with a few modifications. 210 nM of Cas1-2 was incubated with 21 nM of the digested pBend-Wt fragments in the presence or absence of 0.7 µM IHF in buffer containing 20 mM HEPES-NaOH pH 7.5, 25 mM KCl, 10 mM MgCl2 and 1 mM DTT for 30 min at 25 °C. Post-reaction samples were directly loaded on an 8% native acrylamide gel and electrophoresed in 1× TBE at 4 °C. Gels were post-stained with EtBr and DNA bands were visualized in a gel documentation system.

In vitro integration assay

Wt or mutant leader DNA (ΔIBS, IBS and CBS1-3) was PCR amplified from the strain carrying the respective construct integrated into the P21 locus. The constructs for the repeat variants (Wt and Rep1-2) were synthesized (Genscript) and extracted from the plasmid by restriction digestion using the respective restriction sites (EcoRI/BamHI/XhoI). The various types of protospacers (Supplementary Figure S7A) were prepared by annealing the corresponding oligos. In vitro integration assays employing Cas1 and Cas2 were performed as previously described (22). The first set of reaction mixtures was directly loaded and electrophoresed on an 8% native acrylamide gel in 1× TBE at 4 °C, whereas the second set of reaction mixtures was treated with 1 mg/ml proteinase K for 30 min at 37 °C prior to electrophoresis. Electrophoresed gels were post-stained with EtBr and imaged in a gel documentation system.
The sizes of the integrated products were analyzed by denaturing capillary electrophoresis. DNA with 6-FAM labeled at the 5′ end of the leader (L*) or at the 5′ end of spacer2 (R*) was used as substrate (Supplementary Figure S5). Integration reactions were performed as described above. Post-reaction samples were treated with 1 mg/ml proteinase K and separated by capillary electrophoresis. The intensities of the fragments were visualized using GeneMapper (Thermo Fisher Scientific) after loading the corresponding .fsa files.
Spacer disintegration assay
The reaction mixture from the integration assay was purified using a PCR purification kit (Qiagen) as per the manufacturer's instructions. 210 nM of Cas1 and Cas2 were mixed and incubated at 4 °C for 15 min. To this complex, 21 nM of the purified integration product was added, with or without 0.7 µM IHF, and incubated at 37 °C for 60 min in buffer containing 20 mM HEPES-NaOH pH 7.5, 25 mM KCl, 10 mM MgCl2 and 1 mM DTT. Subsequently, proteinase K was added to a final concentration of 1 mg/ml and the mixture was incubated for 30 min at 37 °C. The sample was mixed with 6× DNA loading dye and electrophoresed on an 8% native acrylamide gel in 1× TBE at 4 °C. Electrophoresed gels were post-stained with EtBr and imaged in a gel documentation system.
Genome analysis
The lists comprising type I-E and other type I (excluding type I-E) organisms were compiled from a previous study (4). Using IHF-α as a query, we initiated a blastp (43) search against the genomes harbouring type I-E and other type I (non-type I-E) CRISPR systems. Hits were considered bona fide if the E-value was less than 0.005 and the alignment coverage with respect to the query was at least 60%. Based on these criteria, we estimated the distribution of IHF across the species. Since HU and IHF are structurally similar and also share similarity at the sequence level (44), we relied on the annotation to distinguish between the two. The multiple sequence alignment corresponding to the leader region of the type I-E CRISPR system was obtained from the CRISPRleader database (45). The conservation profile was generated using WebLogo 3 (46).
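The hit-filtering criteria translate directly into a few lines of code. The sketch below assumes blastp was run with a tabular output containing the query length (e.g. -outfmt "6 qseqid sseqid pident length evalue qlen"); the file name and column choice are illustrative assumptions, and the coverage estimate (alignment length / query length) is an approximation of query coverage.

    def parse_hits(path):
        # Yield (subject id, E-value, approximate query coverage) per hit line.
        with open(path) as fh:
            for line in fh:
                qseqid, sseqid, pident, length, evalue, qlen = line.rstrip("\n").split("\t")
                yield sseqid, float(evalue), int(length) / int(qlen)

    bona_fide = {
        subject
        for subject, evalue, coverage in parse_hits("ihf_alpha_vs_typeIE.tsv")
        if evalue < 0.005 and coverage >= 0.60
    }
    print(f"{len(bona_fide)} genomes retain a putative IHF-alpha homolog")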
CRISPR/dCas9-based immunoprecipitation detects the participation of IHF as an accessory factor for adaptation in vivo
In order to identify the potential host factors that are likely to promote the directional insertion of the protospacer fragment, we exploited CRISPR/dCas9-based immunoprecipitation (35). Here, we expressed the Cas1-2 complex together with the inactive form of FLAG-tagged Cas9 (dCas9) and a gRNA targeted towards the leader region of the CRISPR array in E. coli BL21-AI (NCBI accession: NC 012947.1, nucleotide positions: 1002800-1003800). Subsequent to chemical crosslinking, the DNA-bound protein factors localized to the leader region were selectively pulled down using anti-FLAG coated beads directed against the FLAG-tagged dCas9 (Figure 1A and Supplementary Figure S1). The immunoprecipitated protein factors were analyzed using mass spectrometry to identify the factors associated with the DNA. We hypothesized that, since the Cas1-2 complex shows integrase-like activity, host factors previously characterized to facilitate the integration of DNA elements would be prospective candidates. Most of the identified factors belong to ribosomal proteins and translational factors, an aspect characteristic of their omnipresence due to their housekeeping functions.
Remarkably, a few of the identified factors mapped to Cas proteins, including Cas1 and Cas2, which bolstered the utility of this approach. Among others, we also noted the presence of DNA architectural proteins such as H-NS and IHF and DNA repair proteins such as RecA (vide Supplementary Table S1). We filtered out factors if they were previously shown not to be involved or essential in determining protospacer integration (10,12,29) or were functionally unrelated, such as chaperones, proteases, metabolic enzymes, etc. For example, though the DNA architectural protein H-NS was identified with a higher score than Cas1 and Cas2, it was previously shown that it is not essential for CRISPR adaptation and that it acts as a repressor of the cas operon in E. coli (12,29). Therefore, we did not pursue H-NS further, and a similar rationale was exercised to exclude other factors. On the other hand, though another architectural protein, IHF, scored lower than H-NS, its role in site-specific recombination championed by integrases as well as in DNA transposition is well characterized (47,48). Given that the Cas1-2 complex functions like an integrase, and since the role of IHF in CRISPR adaptation had not been previously characterized, we were tempted to probe the involvement of IHF in protospacer acquisition.
IHF is essential for protospacer acquisition into the leader proximal end in vivo
IHF is a heterodimer comprising α and β subunits. To test the involvement of IHF in protospacer acquisition, we created null mutants of IHF devoid of either the α or the β subunit in E. coli IYB5101. It was found that deletion of either the α or the β subunit abrogates the acquisition of protospacer elements (Figure 1B). In order to reinforce this, we complemented the null mutants with plasmid-borne IHFα and IHFβ, which restored the expansion of the CRISPR 2.1 array in E. coli IYB5101 (Figure 1B). This strengthened our conjecture that the acquisition of protospacers requires the participation of IHF in vivo.
Unlike some related DNA architectural proteins such as HU, IHF exhibits sequence-specific DNA binding. It recognizes the consensus sequence 5′-WATCAANNNNTTR-3′ (where W = A/T, N = A/T/G/C, R = A/G). Therefore, we searched for a potential IHF binding site abutting the CRISPR 2.1 locus in E. coli IYB5101 as well as in related strains (Supplementary Figure S2). This search led to the identification of a putative binding site adjacent to the first CRISPR repeat (Figure 1C). We wondered whether this region could act as a potential binding site for IHF. To test this, we deleted the binding site partially (ΔIBS in Figure 1C) and assayed for acquisition. Interestingly, we found no expansion of the array (Figure 1D). Similarly, mutation of the key binding nucleotides (IBS in Figure 1C) also abolished the acquisition (Figure 1D). This suggests that the identified site for IHF binding indeed impacts the adaptation process, and these findings are also in concurrence with the recent report (34).
IHF binding induces bending of the leader region
The structure of the IHF-DNA complex shows that the IHF α and β subunits form an intertwined compact body from which two β structures protrude out, clamping the DNA (49). This induces bending of the DNA by about 160°, leading to a reversal of the direction of the DNA (Figure 2A). This prompted us to investigate whether IHF binds and bends the putative site (IBS) in the leader region. Towards this, we purified IHF from E. coli and tested its DNA binding using EMSA (Figure 2B). This showed retardation of DNA mobility in the presence of IHF, indicating that IHF indeed binds the leader region (Figure 2B). In line with the recent study (34), substitution or deletion of the key binding nucleotides drastically reduced IHF binding (Supplementary Figure S3).
Motivated by the IHF binding to the leader region, we wondered whether the binding leads to bending of the DNA. To assess this, we designed a FRET-based assay wherein one end of the IHF binding region is tagged with a fluorophore (6-FAM) and the other end with a quencher (Iowa Black). In the linear DNA, the fluorophore and the quencher are kept apart, and hence the fluorescence is not quenched. However, if IHF bends the DNA, both the fluorophore and the quencher are brought into proximity, leading to quenching of the fluorescence. Indeed, we observed that the addition of IHF led to a drastic reduction in the fluorescence intensity (solid line in Figure 2C and Supplementary Figure S4). However, in the same reaction, when a protease was added to remove IHF, the fluorescence was restored (solid line in Figure 2C). On the contrary, a similar experiment performed with the 6-FAM labeled DNA, albeit without the quencher, showed that the fluorescence intensity remained constant despite the addition of IHF (dotted line in Figure 2C). This allows us to exclude the possibility that the quenching of fluorescence is caused by IHF binding alone; it is indeed the DNA bending effected by IHF that brings the two ends into proximity, leading to quenching of the fluorescence.
Having established that IHF indeed bends the leader region, we were interested in investigating the extent to which it bends the leader DNA. To address this, we utilized the bending vector pBend5, which contains circularly permuted duplicated restriction sites (40). Cloning of the IHF binding site (IBS) into pBend5 and subsequent digestion with the restriction enzymes yields fragments of the same length but with the binding site distributed at different positions, either in the middle or towards the end (Figure 2D). When the DNA undergoes bending due to protein binding, the fragment that harbors the binding site in the middle migrates more slowly than the one with the binding site at the end. From these mobility differences, it is possible to estimate the bending angle, which is defined as the angle by which the DNA deviates from linearity (vide methods). We estimated that IHF bends the DNA by ∼120°, suggesting that this sharp deformation could result in a reversal of the DNA direction (Figure 2E and F).
IHF-induced bending of the linear DNA facilitates protospacer integration
Since IHF deforms the linear DNA, we were interested in deciphering the mechanism by which it influences the integration of the protospacer into the CRISPR locus. Further, our analysis of CRISPR leader sequences from organisms harbouring the type I-E system showed that the identified IHF binding site (−9 to −35 nt; boxed in solid line in Figure 3A), along with another region (−44 to −59 nt; boxed in dotted line in Figure 3A), is highly conserved across other species as well. Therefore, we designed a linear DNA construct encompassing the above-mentioned leader region and two units of repeat-spacer segments (Figures 1C and 3A). When IHF was added to the CRISPR DNA, we noted a single slow-migrating band, suggesting that the IHF-induced DNA bending retards the mobility of the CRISPR DNA (lane 4 in Figure 3B). Subsequent addition of the Cas1-2 complex and protospacer fragment resulted in the appearance of a super-shifted band (lane 12 in Figure 3B). Strikingly, this band was not observed in the absence of IHF (lane 11 in Figure 3B). This hints that the Cas1-2 complex associates with the CRISPR DNA only when IHF is present. When the DNA-bound proteins were removed by proteinase K treatment, we spotted a slow-migrating band whose size seemed to be larger than the CRISPR DNA (lane 12 in Figure 3C). Remarkably, this band appeared only from the proteinase K-treated reaction mixture consisting of CRISPR DNA, protospacer fragment, IHF and Cas1-2 complex (lane 12 in Figure 3C). To further probe the requirement of IHF for the formation of the super-shifted band, we performed the experiment with IHF binding site variants. This showed that either deletion (ΔIBS) or mutation of the IHF binding site (IBS) completely abolished the appearance of the super-shifted band (lanes 6 and 9 in Figure 3D and E). This suggests the possibility that the slow-migrating band represents the protospacer integrated into the CRISPR DNA.
In order to ascertain the protospacer integration, we designed 5′ 6-FAM-labeled linear CRISPR DNA constructs and repeated the aforementioned experiment. The proteinase K-treated reaction mixture corresponding to the slow-migrating band was resolved by denaturing capillary electrophoresis. The fragments were analyzed in comparison with fluorescently labeled standards to estimate their sizes. This showed that when the label was at the 5′ leader proximal end, the sizes of the fragments were estimated to be ∼161 and 63 nt, whereas when the label was at the 5′ distal end, the size of the fragment was ∼168 nt (Supplementary Figure S5). Since the 63 nt fragment maps the cleavage to around the leader-repeat junction, this suggests that the protospacer integration has taken place proximal to the leader region in the top strand, leading to a half-site integration intermediate (vide Supplementary Figure S5). In agreement with recent work (34), this further suggests that IHF indeed stimulates the protospacer incorporation into the linear CRISPR DNA. Further, since it was reported that the half-site integration intermediate is selectively excised by the Cas1-2 complex (22,24), we reasoned that this could serve as an additional diagnostic for the existence of the half-site integration intermediate. Therefore, we purified the reaction mixture containing the half-site integration intermediate and monitored disintegration in the presence of the Cas1-2 complex. Indeed, we observed that the presence of the Cas1-2 complex led to the disappearance of the integrated product (Supplementary Figure S6). Interestingly, the disintegration activity of the Cas1-2 complex is significantly inhibited in the presence of IHF (Supplementary Figure S6). Taken together, it is possible to reiterate that the protospacer integration occurs at the leader-repeat junction in the top strand. Further, it appears that once the site of invasion is marked, in line with earlier reports (21,22,24), it is the 3′-OH of the protospacer with a 3′ overhang that mounts the nucleophilic attack on the CRISPR DNA (Supplementary Figure S7).
Cas1-2 complex is localized upstream of IHF binding site
It was shown that up to a 60 bp leader segment adjoining the first CRISPR repeat is essential for spacer acquisition (13). However, the IHF binding region falls within 35 bp of the first CRISPR repeat (boxed in solid line in Figure 3A). Given the importance of this region, we wondered what the function of the remaining 25 bp of the leader region could be. Intriguingly, we also noted high conservation of the sequence upstream of the IHF binding site (boxed in dotted line in Figure 3A). Therefore, we randomly mutated the 36 bp leader region upstream of the IHF binding site, 12 bp at a time (CBS1-3), and tested whether the modified regions could support acquisition (Figure 4A). We observed that though CBS1 [−34 to −45] and CBS3 [−58 to −69] had no effect on the spacer acquisition, surprisingly, no expansion was seen for CBS2 [−46 to −57] (Figure 4A and B). This led us to wonder whether any of these mutated sequences affects the binding of IHF, thereby leading to the abrogation of acquisition. Hence, we tested the binding of IHF to leader regions containing CBS1-3. We noted that while CBS1 reduces IHF binding, perhaps owing to its marginal overlap with the cognate binding site (Figure 4A), the other two showed only a minor effect on IHF binding (Supplementary Figure S9). Interestingly, we also found that the IHF-mediated DNA bending is not affected for CBS2 (Supplementary Figure S10). This suggests that the impairment of spacer acquisition due to CBS2 is not effected through IHF.
Since CBS2 does not significantly impact the IHF binding to the leader region, we hypothesized that CBS2 could be a binding site for the Cas1-2 complex. Therefore, we conducted integration experiments involving CBS1-3.
This showed that the super-shifted band seen with the wild type CRISPR DNA (lane 3 in Figure 4C) appeared in CBS1 and CBS3 too (lanes 6 and 12 in Figure 4C). Intriguingly, this band was absent when CBS2 was utilized (lane 9 in Figure 4C). Moreover, the IHF-dependent mobility shift was prominently seen for Wt and CBS2, albeit weakly for CBS1. For CBS3, the IHF-dependent mobility shift was not prominent despite the presence of the super-shifted band. This suggests that, except for CBS1, which overlaps partly with the IHF binding site, IHF binding is not impaired in the others. In line with this, all except CBS2 showed the presence of the integration product (Figure 4D).
Highly conserved sub-motif region within the CBS2 is crucial for protospacer integration
In order to probe CBS2 further, we made three constructs, viz., CBS2(L), CBS2(C) and CBS2(R). In each of these constructs, 4 bp were mutated with respect to CBS2 (Figure 5A). We tested each of these constructs for its ability to support protospacer acquisition in vivo. Remarkably, we found that except for CBS2(C), the other two constructs showed expansion of the CRISPR array, suggesting that the 4 bp (GTGG) in the middle of CBS2 (−50 to −53 nt) are crucial for protospacer acquisition (Figure 5B). To assess how these residues impact the protospacer acquisition, we conducted integration assays involving these constructs. In line with the in vivo acquisition assay, we observed that both CBS2(L) and CBS2(R) showed a shift in the mobility of the CRISPR DNA in the presence of IHF and Cas1-2 complex (lanes 9 and 15 in Figure 5C). Surprisingly, in the case of CBS2(C), there was no super-shifted complex even in the presence of IHF and Cas1-2 complex (lane 12 in Figure 5C). In line with this, the integration product was absent from the proteinase K-treated CBS2(C) sample (lane 12 in Figure 5D). On the contrary, integration products were observed for CBS2(L) and CBS2(R) (lanes 9 and 15 in Figure 5D). This suggests that the CBS2(C) residues are essential for the integration of the protospacer fragment into the leader-repeat junction. Consistent with this, we also noted high conservation of the residues corresponding to CBS2(C) in organisms harbouring the type I-E system (boxed in dotted line in Figure 3A). Taken together, the disappearance of the super-shifted band despite the presence of the IHF-related band in CBS2 and CBS2(C) led us to reason that CBS2(C) is likely to harbour the binding site for the Cas1-2 complex (see Discussion). We refer to the residues (−50 to −53 nt) corresponding to CBS2(C) as the integrase anchoring site (IAS).
DISCUSSION
An outstanding question regarding CRISPR adaptation pertains to the mechanism regulating the specific integration of new protospacers into the leader-repeat junction amidst the presence of several repeat regions. Unlike in vivo, the integration of the protospacer in vitro occurs in other repeats too (22). This observation led us to hypothesize the involvement of specific host proteins in defining the site of protospacer invasion in vivo. Our genome-wide search employing CRISPR/dCas9-based immunoprecipitation (35) led us to recognize the participation of the DNA architectural protein IHF in specifying the directional insertion of protospacer elements into the leader proximal end of the CRISPR array. IHF is known to specifically recognize its binding region and induce sharp DNA bends, thereby facilitating site-specific recombination and DNA transposition (48,50,51). IHF-mediated positioning of the distantly oriented low-affinity core site and high-affinity attachment site of the integrase into proximity facilitates bacteriophage integration into the genome of E. coli (51). Here, DNA deformation is utilized by IHF to bring remotely located recognition sites into proximity. Indeed, our experiments with bending vectors showed that IHF bends the linear CRISPR DNA (Figure 2). Supercoiled plasmids carrying a CRISPR DNA were shown to act as in vitro substrates for integration, whereas no integration was observed when linearized CRISPR-encompassing plasmids were used (22). In comparison to linear DNA, supercoiled plasmids are inherently compact and bent, and therefore it is intrinsically possible to bring remotely located recognition sites into juxtaposition. In the case of linear DNA, however, we identify that IHF is indispensable and may facilitate a favourable conformation of the DNA for integration (Figures 1-3). Further, since some transposases such as Tn10 prefer deformed target DNA (52), it is possible that the IHF-mediated bent DNA conformation could become a substrate for the Cas1-2 complex. In addition to this, the fact that the presence of IHF results in reduced disintegration of the protospacer implies that IHF-induced DNA bending appears to stabilize the integration intermediate by modulating the integrase/excisionase activity of the Cas1-2 complex (Supplementary Figure S6). This resembles how IHF, along with integrase, promotes integration over excision (51,53).
The involvement of IHF in protospacer acquisition has recently been reported (34); however, the precise connection between IHF-induced DNA bending and the directional integration of the protospacer remains elusive. Further, while IHF binding to linear DNA was shown earlier, the extent to which it deforms the leader region was not clear (34). Our findings suggest that IHF bends the linear CRISPR DNA by ∼120°, which is likely to prompt a reversal in the DNA direction (Figure 2). One possible consequence of this bending could be to bring the leader region into proximity with the first repeat. While pursuing this hypothesis, we discovered that, in addition to the IHF binding site, the leader region also harbours a binding site for the Cas1-2 complex (referred to as IAS) that is located just upstream of the IBS (Figures 4 and 5). We also observed the IAS to be highly conserved within the leader region among the type I-E organisms that harbour IHF (boxed in dotted line in Figure 3A). This presents an attractive proposition that the IHF-induced DNA bending is likely to facilitate the proximity between the Cas1-2 complex and the leader-repeat junction. The higher-order nucleoprotein complex (vide super-shifted band in Figures 3-5) that appears in the presence of the Cas1-2 complex and IHF is also noted in the case of site-specific recombination catalyzed by integrase and IHF (53,54). However, since the CRISPR DNA is not bound by the Cas1-2 complex in the absence of IHF, it is likely that IHF-induced DNA bending precedes the loading of the Cas1-2 complex onto the CRISPR DNA (Figure 3). Moreover, we observed no appreciable changes in the bending angle even in the presence of the Cas1-2 complex, suggesting that the loading of the Cas1-2 complex does not introduce further DNA deformation (Supplementary Figure S11).
Figure 6. Model depicting the factor-dependent and factor-independent protospacer integration into the CRISPR locus. Based on the proximity between the IAS and the leader-repeat junction, the requirement of accessory factor(s) may be predicted. In type I-E, where the IAS and leader-repeat junction are segregated, IHF is required to bring them into proximity (shown in the left panel). In this case, IHF (cyan and blue ovals) binds to the IBS (grey) within the CRISPR leader (black), which leads to bending of the DNA. This brings the IAS and leader-repeat1 (R1) junction into close proximity, thereby regenerating the cognate binding site for the Cas1-2 integrase complex. This enables the loading of the protospacer-bound Cas1-2 complex, which orients the 3′-OH end of the protospacer for nucleophilic attack on the leader-repeat1 junction, producing the half-site integration intermediate. Subsequent nucleophilic attack by the 3′-OH on the other end of the protospacer generates the full-site integration. DNA repair by polymerases and ligases seals the nicks to generate an expanded CRISPR array. On the other hand, as evidenced by type II-A, the requirement of accessory factor(s) may be precluded if the IAS and leader-repeat junction lie juxtaposed (shown in the right panel).

Cas1 is reported to have an intrinsic specificity towards the sequences spanning the leader-repeat junction (24). In the vast genome sequence, it is not infrequent for Cas1 to encounter such a nucleotide preference, and hence this is unlikely to be the principal specificity determinant. Therefore, the role of IHF could be attributed to biasing the preference of the Cas1-2 complex towards shape-based recognition, as exhibited by homing endonucleases (55). In this context, it is tempting to propose that the Cas1-2 complex prefers a bipartite binding site that is complemented by a part of the leader region (IAS) and the leader-repeat junction. This is akin to the distantly located low-affinity core site and high-affinity attachment site in the case of integrase (51). Proximity of these complementary sites, the IAS and the leader-repeat junction, mediated by the IHF-induced DNA bending is aptly poised to regenerate the cognate binding site for the Cas1-2 complex. The following observations appear to bolster this conjecture. First, the formation of the higher-order nucleoprotein complex requires IHF-induced DNA bending, akin to the 'intasome' in the case of bacteriophage integration, suggesting that the loading of the Cas1-2 complex onto the CRISPR DNA is contingent upon the proximity of the aforementioned complementary sites (Figures 3-5). Therefore, in the absence of such proximity-induced regeneration of the cognate binding site, the Cas1-2 complex is unlikely to facilitate the protospacer integration into the leader proximal end. Second, in line with the above, we could observe IHF binding onto the linear CRISPR DNA in the absence of the Cas1-2 complex and not vice versa (Figures 3-5). Third, in conjunction with the acquisition assay, we noted that the presence of IHF abolishes the non-specific nicking activity of the Cas1-2 complex (Supplementary Figure S5). Given this, it is possible to reiterate that the loading of the Cas1-2 complex onto the CRISPR DNA is governed by the IHF-mediated regeneration of the distantly located bipartite binding site.
While the type I-E system requires an accessory factor for protospacer acquisition, it was shown in vitro that the type II-A system exhibits robust polarized protospacer incorporation into linear CRISPR DNA in the absence of any host factor (56). Further, another study showed that substitution or deletion of the leader region (−1 to −5 from the repeat) bordering the leader-repeat junction (termed the leader-anchoring sequence, or LAS) in Streptococcus pyogenes (type II-A) induces ectopic spacer incorporation at the fifth repeat, where the sequence derived from the fourth spacer acts as the LAS (57). In Sulfolobus solfataricus (type I-A), it was observed that CRISPR locus E alone exhibits ectopic spacer incorporation, whereas polarized acquisition was observed in loci C and D (58). CRISPR locus E encompasses a deletion of −47 to −70 in the leader region (58,59), which could possibly disrupt the accessory factor/Cas1-2 binding site. This in turn may impair bipartite site formation, and since ssoCas1 is shown to have intrinsic sequence specificity (24), it could favour integration at regions that closely resemble the leader-repeat junction, thus tuning it towards ectopic acquisition. These studies lend credence to our hypothesis that the distance between the IAS and the leader-repeat junction (the bipartite site for Cas1-2 binding) governs the requirement of accessory factor(s) for protospacer incorporation (see below).
In addition to the sequences bordering the leader-repeat junction, modification of the repeat sequences or structure in vivo is also reported to inhibit protospacer integration (32,60,61). On the contrary, we noticed that protospacer integration remains unaltered by such modifications in vitro (Supplementary Figure S8). This suggests that the bipartite binding site of the Cas1-2 complex is unlikely to extend deep into the CRISPR repeat region. Based on fragment analysis, under our experimental conditions, we deciphered that integration of the protospacer occurs into the top strand, and we found no integration into the bottom strand (Supplementary Figure S5). This allows us to infer that since the underlying leader region harbouring the IBS and IAS remains intact, in such a scenario, the modification of repeat sequences or structure is not expected to inhibit the top-strand invasion. On the other hand, we speculate that such modification could reduce the efficiency of bottom-strand invasion--wherein the integrity of the repeat sequences or structure could play a leading role in determining the specificity towards the repeat1-spacer1 junction--leading to unproductive full-site integration, in agreement with the spacer integration assay (32,60,61).
Based on our data and previous reports (13,22,24,32,34,56-58), we present an updated model for CRISPR adaptation (Figure 6). This model can be dichotomized based on the proximity between the IAS and the leader-repeat junction, which allows us to predict the requirement of accessory factor(s). In cases where the IAS and leader-repeat junction are segregated, accessory factor(s) may be required to bring them into proximity for Cas1-2 binding. As exemplified by type I-E, this role is adopted by IHF in E. coli. IHF binding to the leader region of the CRISPR locus (IBS) leads to DNA bending. This deformed conformation brings the distantly located IAS and leader-repeat junction into proximity, which leads to the regeneration of the cognate binding site for the Cas1-2 integrase complex. Subsequently, this allows the Cas1-2 complex to orient the 3′-OH end of the protospacer fragment suitably for nucleophilic attack on the leader-repeat junction, thus producing the first nick on the top strand. This is followed by a second nucleophilic attack on the bottom strand, leading to full integration of the protospacer. We analyzed the distribution of IHF in organisms possessing type I CRISPR systems (type I-E and non-type I-E systems). Out of 76 organisms encompassing the type I-E CRISPR system, we found that 56 possess IHF (about 73%), and its distribution is predominant among enteric bacteria (Supplementary Table S5). In the case of non-type I-E systems, 104 out of 242 organisms (∼43%) carry IHF (Supplementary Table S5). Interestingly, wherever the IBS is conserved in type I-E systems, we also noted a strong correlation with the existence of the IAS, suggesting that these two sites co-evolve to keep CRISPR adaptation active (Figure 3A). However, since some organisms that harbour the type I-E system in our analysis lack IHF (27%), it is possible to envisage the participation of other DNA architectural proteins such as HU, or of other auxiliary Cas proteins, to facilitate protospacer integration (62,63). It may be noted that HU is structurally similar to IHF; however, unlike IHF, it binds DNA non-specifically. Further inspection of the type I-E organisms that lack IHF showed that a few of them lack the cas operon, suggesting that they are non-functional, similar to E. coli BL21. A few others co-exist with other CRISPR subtypes, including other type I (non-type I-E), type II, and type III systems, which suggests that the acquisition machinery may be shared across subtypes. On the contrary, if the IAS and leader-repeat junction lie juxtaposed, as observed in the type II-A system (33,56,57), the requirement of accessory factor(s) may be precluded (Figure 6). Nevertheless, co-opting host proteins during adaptation epitomizes just the tip of the iceberg of the functional diversity embodied in the CRISPR-Cas system.
Interactive cohort exploration for spinocerebellar ataxias using synthetic cohort data for visualization
Motivation: Visualization of data is a crucial step in understanding and deriving hypotheses from clinical data. However, for clinicians, visualization often comes with great effort due to the lack of technical knowledge about data handling and visualization. SCAview offers an easy-to-use solution with an intuitive design that enables various kinds of plotting functions. The aim was to provide an intuitive solution with a low entrance barrier for clinical users: little to no onboarding is required before creating plots, while the complexity of questions can grow up to specific corner cases. To allow for an easy start and testing with SCAview, we incorporated a synthetic cohort dataset based on real data of rare neurological movement disorders: the most common autosomal-dominantly inherited spinocerebellar ataxias (SCAs) type 1, 2, 3, and 6 (SCA1, 2, 3 and 6). Methods: We created a Django-based backend application that serves the data to a React-based frontend that uses Plotly for plotting. A synthetic cohort was created to deploy a version of SCAview without violating any data protection guidelines. Here, we added normally distributed noise to the data, thereby preventing re-identification while keeping distributions and general correlations. Results: This work presents SCAview, a user-friendly, interactive web-based service that enables data visualization and intuitive graphical handling in a clickable interface. The service is deployed and can be tested with a synthetic cohort created based on a large, longitudinal dataset from observational studies in the most common SCAs.
Introduction
Visualizing data can significantly contribute to understanding and active interpretation of the data at hand. However, the preparation of data visualization is often tedious and demanding, as it involves data handling and pre-processing, for which certain technical skills are required. Moreover, to finally plot the data, one needs the ability to work with the appropriate libraries such as Pyplot (https://matplotlib.org/stable/tutorials/introductory/pyplot.html) or Seaborn (https://seaborn.pydata.org/). Pre-built software exists, such as the i2b2 analysis tool [1]. i2b2 is a powerful solution that allows filtering for patient sets and plotting various clinical data. However, extensive technical knowledge is already required for installation and usage, which is a significant barrier for non-technical researchers to use such tools for initial data analysis. A visualization tool should meet the everyday needs of clinicians, in particular a quick and easy visualization of data. In addition to ease of use, the availability of datasets is often limited. Especially for rare diseases, individual centers usually have only limited data available, and access to multicentric data is rather limited due to particular data protection regulations. As a use case and for the implementation, cohort data of ataxia studies and longitudinal data from European and US natural history studies of spinocerebellar ataxia types 1, 2, 3, and 6 were used. This outstandingly large dataset formed the basis for a synthetic cohort that is integrated into the public SCAview web service, allowing a broad community of researchers and clinicians to browse a reasonable number of rare disease cases on their own. SCAview was developed in close cooperation with clinicians (non-specialists as well as specialists for ataxias), and their feedback was continuously included in the tool development. In summary, SCAview's goal is to provide a lightweight open-source visualization tool that can be deployed behind the firewall of protected areas of a hospital or research institute (such as virtual machines that align with the clinic's data protection guidelines), with a special focus on ease of use and the opportunity to browse a large synthetic dataset of rare neurological diseases.
Methodology and Implementation
The SCAview visualization tool is based on a data service and a visualization service. The data service acts as a backend for processing and providing the data, while the visualization service functions as a frontend interacting with the user. The backend is developed as a Django (https://www.djangoproject.com/) application. The frontend uses React (https://reactjs.org/) and the React version of Plotly (https://plotly.com/javascript/react/) to create interactive plots. The data itself as well as the user sessions are persisted in a RedisDB (https://redis.io/), which allows instant data access. The tool stores every plot and its settings in a user session, allowing users to seamlessly continue working with their plots even after closing and re-opening the browser. Adopting Plotly, the application implements scatter plots, histograms, bar plots as well as timeline plots. The whole stack of components used in SCAview has little performance requirements, as it was tested on standard personal laptops and ran smoothly throughout various demonstrations. The visualizer was intentionally created to visualize and analyze an ataxia dataset that was assembled from various different sources comprising SCAregistry [2], EUROSCA [3], CRC-SCA [4], and RISCA [5]. This dataset is harmonized with respect to the individual data variables and contains 115 variables that were measured over up to 9 visits in a total of 1,417 patients. Here, we define a visit as a patient coming to the hospital/medical institution/research center in the scope of the study. A visit includes standardized clinical assessments of commonly used scales to assess the severity of the core symptom ataxia, neurological symptoms other than ataxia, as well as information on the disease stage. Characterizing data such as demographic and genetic information are assessed at baseline. Multiple visits at the hospital are merged into one if they occur in a time span of less than 28 days. The viewer is accessible via a public URL restricted by a password. In order to address privacy concerns as well as protect patients' personal information, a synthetic cohort was derived from the original cohort that preserves global linear correlations, but changes critical data points sufficiently to mask the original values and thereby reduce the risk of re-identification to a negligible minimum. The synthetic cohort was generated patient-wise, where a synthetic value $f(x)$ for a variable $v$ (one measurement for a particular patient at one visit) of the original cohort is derived via the model $f(x) = x + \epsilon$, $\epsilon \sim \mathcal{N}(0, \sigma)$, $\sigma = \mathrm{Var}(v) \cdot c$, where $\mathrm{Var}(v)$ is the empirical variance of all data points available for the variable $v$. Further, note that the term $c \in (0, 1]$ is a scalar necessary to regularize the variances of the synthetic variables (Supplementary Text 1). Vividly speaking, this method adds normally distributed "noise" to the original data. This method is applied throughout the entire dataset for each patient and all critical variables corresponding to them. For categorical variables, such as test scores, the method is applied equally, followed by a mapping step that maps the output $f(x)$ to the respective category (Supplementary Text 2). The relationship between the correlation coefficients reveals that in order to preserve linear correlations as well as possible, $c$ should be chosen as small as possible. However, smaller values for $c$ add less noise to the original dataset, which might not alter the variables enough. Hence, the exact value for $c$ needs to be selected based on the extent to which the data needs to be noised. To the best of our knowledge, the applied method alters the data enough such that no re-identification of patients is possible with reasonable efforts.
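As a rough illustration of this noising scheme, the sketch below adds variance-scaled Gaussian noise to one variable and maps categorical outputs back to admissible values. The choice of c, the interpretation of the noise scale, and the nearest-category mapping are assumptions made for illustration; the paper's exact procedure is described in its supplement.

```python
import numpy as np

def synthesize_variable(values, c=0.3, categories=None, rng=None):
    """Add zero-mean Gaussian noise with scale Var(v) * c to one variable."""
    rng = rng or np.random.default_rng()
    sigma = np.var(values) * c                      # sigma = Var(v) * c
    noisy = values + rng.normal(0.0, np.sqrt(sigma), size=len(values))
    if categories is not None:                      # categorical variables:
        categories = np.asarray(categories)         # map each noisy value back
        idx = np.abs(noisy[:, None] - categories[None, :]).argmin(axis=1)
        noisy = categories[idx]                     # to the nearest category
    return noisy

# Example: noise a clinical score measured on a 0-40 scale.
scores = np.array([12.0, 18.0, 25.0, 31.0])
print(synthesize_variable(scores, c=0.3))
```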
Results
The frontend of SCAview allows the exploration of clinical data in an interactive viewer that enables data visualization and exploration of a synthetic cohort derived from a large dataset of standardized clinical assessments in rare neurological diseases. The backend handles all of the data logic, which comprises sub-filtering of data, adjusting the data to different plot types, storing and querying sub-groups, as well as session management. With that backend functionality implemented, the frontend is able to provide the following features. General filtering of items: The user can select the data to be filtered either patient-wise or visit-wise. The patient-wise view includes all patient visits where the value suits the selected boundaries during at least one visit ("any"), whereas the visit-wise view selects only the exact visit that matches the current filter. Plot types: The tool allows plotting four different types of plots, namely histogram, scatter plot, bar plot, and timeline plot. Certain plot types are limited to a particular type of data; e.g., timeline plots can only display data with a time dimension such as age (related to the variables date of birth and visit date) or the reported disease duration. All plots created by the user are displayed centered in a grid-like arrangement, selectable by clicking. On the right-hand side of the application, the plot type, data variables to be plotted, and further options, such as for linear regression fits, are located. Subgroup definition: Subgroups can be defined graphically or via a filter panel. On the left-hand side, the user can set filters, which allows them to manually edit stratification parameters to define subgroups. Furthermore, subgroups can be defined graphically by drawing a rectangle on a selected area in a plot that outlines the desired values. The latter, in particular, enables setting filters intuitively based on data distributions. Several filter settings can be combined, and in addition, any filter or filter set can be saved as a tagged subgroup. Thus, further analyses of subgroups of interest can be performed. Easy reset: Buttons located at the top allow resetting of the current session, which deletes all plots, filters, and named subgroups and clears the whole view (Figure 1). Finally, the software is currently deployed with a synthetic cohort in order to protect data privacy while still allowing the exploration of realistic data. This cohort is equal in the number of patients to the original spinocerebellar ataxia (SCA) dataset; hence, it includes 1,417 unique patients with 255,027 data points, ready to be explored by ataxia researchers and clinicians. General distributions and correlations were visually inspected by ataxia experts and compared to the corresponding plots in the real dataset.
Discussion and Future Work
SCAview is a browser-based visualization tool that provides an easy-to-use interface which can, in a few steps, create plots and combinations of plots with great expressive power, enabling a convenient way of highly interactive data exploration. Various plot types and intuitive creation of subgroups allow visualization of particular points of interest, allowing one to gain deeper knowledge and generate hypotheses. The low entrance barrier, with almost no technical skills necessary for implementation and usage, is a big advantage of SCAview, in contrast to tools with complex interfaces that usually have a steeper learning curve, such as i2b2 [1]. The creation of a synthetic cohort based on real data of more than 1,400 patients suffering from rare neurological movement disorders, namely spinocerebellar ataxia types 1, 2, 3, and 6, was necessary due to data protection. However, the chosen approach preserves means and relations, thus allowing the study of an almost realistic dataset. SCAview with integrated analysis of the original as well as the synthetic cohort has been successfully introduced as a resource in the "Ataxia Global Initiative" (Uebachs et al. [6]), a worldwide platform for clinical research. The acceptance in the ataxia community will show whether SCAview is a useful tool for clinical research. We chose a rare neurological disease as a use case since access to and availability of data in rare diseases is particularly challenging. However, the tool functions implemented in SCAview are able to handle data from various sources and can easily be modified for general-purpose visualizations.
Finally, SCAview nevertheless has some limitations. For example, it currently allows only linear regression of data. We plan to include further functionalities in a next version, e.g., an attribute workbench that would enable users to craft custom variables, or to add other methods, e.g., data interpolation, in a user-friendly way. Moreover, the next steps in increasing SCAview's capabilities would include the improvement of already existing functionality, such as adding more options for data fitting. Finally, the quality of the synthetic cohort could be improved by applying more sophisticated methods to create synthetic patient data, such as GANs (Generative Adversarial Networks) or VAEs (Variational Autoencoders).
Figure 1: SCAview interface with two plots in parallel view
A full-process intelligent trial system for smart court
In constructing a smart court, to provide intelligent assistance for achieving more efficient, fair, and explainable trial proceedings, we propose a full-process intelligent trial system (FITS). In the proposed FITS, we introduce essential tasks for constructing a smart court, including information extraction, evidence classification, question generation, dialogue summarization, judgment prediction, and judgment document generation. Specifically, the preliminary work involves extracting elements from legal texts to assist the judge in identifying the gist of the case efficiently. With the extracted attributes, we can justify each piece of evidence’s validity by establishing its consistency across all evidence. During the trial process, we design an automatic questioning robot to assist the judge in presiding over the trial. It consists of a finite state machine representing procedural questioning and a deep learning model for generating factual questions by encoding the context of utterance in a court debate. Furthermore, FITS summarizes the controversy focuses that arise from a court debate in real time, constructed under a multi-task learning framework, and generates a summarized trial transcript in the dialogue inspectional summarization (DIS) module. To support the judge in making a decision, we adopt first-order logic to express legal knowledge and embed it in deep neural networks (DNNs) to predict judgments. Finally, we propose an attentional and counterfactual natural language generation (AC-NLG) to generate the court’s judgment.
Introduction
During the COVID-19 pandemic, online trials based on the intelligent trial system have become ubiquitous. The smart court relies on Internet courts to turn offline litigation activities into online activities. Online trials reduce the flow of personnel and keep trials in working order. The smart court has successfully implemented full-service online processing and built a comprehensive, multi-functional, and intensive online litigation platform, which has alleviated pressing judicial issues. The Supreme People's Court promptly issued the "Notice on Strengthening and Standardizing Online Litigation during the COVID-19 Prevention and Control Period," which provided a comprehensive deployment of online litigation for the courts to conduct proceedings with the smart court. The smart court has formulated clear regulations for judicial tasks, such as online court hearings, electronic service, identity authentication, and material submission, and provided full judicial services and guarantees for the promotion and regulation of online litigation. According to statistical data from the COVID-19 period (from February 3 to November 4, 2020), the people's courts at four levels filed 6.501 million online cases, held 778 000 online court sessions, conducted 3.23 million online mediations, and completed 18.15 million electronic services.
To make the smart court operate efficiently and improve trial efficiency in simple cases, Zhejiang Higher People's Court, Zhejiang University, and the Alibaba Group have jointly developed a full-process intelligent trial system (FITS), which provides strong technical support for constructing a smart court for the Zhejiang Provincial People's Court. FITS has played an essential role in financial lending and private lending cases, which moves the trial procedures of the court to the network platform, supports judicial trials in a highly informative manner, and assists judges in making judicial decisions. As shown in Fig. 1, the intelligent trial system implements the following judicial tasks: (1) extracting essential information from the legal text (indictment, lending contract, court debate transcript, etc.) to help the judge promptly grasp the key case information; (2) summarizing the controversy focuses from the court debate transcript recorded during the trial; (3) verifying the authenticity, legality, and relevance of the evidence; (4) recommending candidate questions to the judges to assist in the necessary trial procedures and discover facts related to the case; (5) retrieving the most similar cases from the historical data, and leveraging the knowledge of legal experts to predict case facts and help judges make judicial decisions; (6) generating a judgment document with complete structure, complete elements, and rigorous logic after confirming the facts of the case and applying laws and regulations.
Zhejiang University and the Alibaba Group have conducted much research on the above judicial tasks. Zhao et al. (2018) proposed a named entity recognition model based on the BiLSTM-CRF architecture, with two novel techniques of multi-task data selection and constrained decoding. Liu XJ et al. (2018) introduced a graph convolution based model to combine textual and visual information presented in visually rich documents (VRDs). Zhou et al. (2019) studied a novel research task of legal dispute judgment (LDJ) prediction for e-commerce transactions, which connects two isolated domains, e-commerce data mining and legal intelligence. Duan et al. (2019) introduced a delicately designed multi-role and multi-focus utterance representation technique and provided an end-to-end solution specializing in controversy focus based debate summarization (CFDS) via joint learning. Wang et al. (2020) investigated dialogue context representation learning with various types of unsupervised pretraining tasks, where the training objectives were given naturally according to the nature of the utterance and the structure of multi-role conversation. Wu et al. (2020) proposed a novel attentional and counterfactual natural language generation (AC-NLG) method, in which counterfactual decoders were employed to eliminate the confounding bias in data and generate judgment-discriminative court views by incorporating a synergistic judgment predictive model. Ji et al. (2020) proposed a novel network architecture, cross copy networks (CCNs), for content generation by simultaneously exploring the logical structure of the current dialogue context and similar dialogue instances.

Fig. 1 Overview of the full-process intelligent trial system (FITS) (ASR: automatic speech recognition; OCR: optical character recognition; NLP: natural language processing)
FITS is designed by following the trial process and by emulating the way in which the judge makes judicial decisions. We adopt a combination of the knowledge-guided method and the big-data-driven method. The knowledge-guided method simulates judges based on knowledge and uses logical reasoning to make judgments. The big-data-driven approach simulates judges making judgments based on the principle of "treating like cases alike." Most of the technologies in these papers directly serve FITS. Many new technologies were born in developing this system, and their original purpose was to perform the judicial tasks in trial practice. FITS applies these technologies to reengineer the existing case trial process and promote the intelligence of all nodes of the judicial process. In practice, FITS also provides judges and parties with intelligent assisting services at each node of the case trial procedure. Based on these works, we will show the operation process of the intelligent trial system. To summarize, we make several noteworthy contributions as follows:
1. We are the first to propose an FITS that serves the primary phases of the trial procedure in the smart court.
2. We convert central judicial tasks of the trial procedure into corresponding natural language processing (NLP) problems, and adopt a combination of knowledge-based models and data-driven models.
3. Based on our FITS, we have developed an artificial intelligence (AI) judge assistant robot called Xiaozhi (micro intelligence) and achieved satisfactory results that have already assisted several courts in Zhejiang Province in financial lending cases and private lending cases.
The rest of this paper is organized as follows: In Section 2, we introduce a BiLSTM-CRF neural architecture and use it for legal text (indictments, judgment documents, etc.) information extraction. In Section 3, we justify the validity of evidence based on historical data and logical knowledge graphs. In Section 4, we propose an automatic questioning system to help judges ask procedural and factual questions. In Section 5, we summarize the focuses of the dispute during a trial by employing a multitask learning framework called CFDS and propose a framework of dialogue inspectional summarization (DIS). In Section 6, we combine first-order logic and deep neural networks to discover the facts of the case. In Section 7, we propose the AC-NLG method to generate the court's judgment-discriminative view. In Section 8, we introduce the results achieved by FITS in the application to smart court. Section 9 discusses related research work and the last section concludes this paper.
Information extraction from legal documents
Information extraction (IE) aims to extract structured information from unstructured documents. It has been explored extensively due to its significant role in NLP. Legal information extraction includes the extraction of legal ontology, legal relations, and legal named entities. Earlier research studied the extraction of legal case information (Jackson et al., 2003), combining information retrieval and machine learning to extract the correlation between current cases and precedent texts using support vector machines (SVMs) and other algorithms. A transfer learning approach using neural networks (Elnaggar et al., 2018) has been trained for linking named entities to legal documents. Recently, the popular neural structure for IE, BiLSTM-CRF (Lample et al., 2016), has shown excellent performance on numerous sequence-labeling tasks with high robustness and low computational complexity. We have collected more than 70 million judgment documents to build the corpus, including more than 360 000 court records and more than 100 000 evidence samples.
BiLSTM-CRF
The model of long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) is a type of recurrent neural network (RNN) architecture which, in conjunction with an appropriate gradient-based learning algorithm, addresses the vanishing/exploding gradient problem of learning long-term dependencies by introducing a memory cell with self-connections that stores the temporal state of the network. Although numerous LSTM variants have been described, we employ the version proposed by Google (Sak et al., 2014). LSTM takes as input a sequence of vectors $x = (x_1, x_2, ..., x_n)$ and returns another sequence $y = (y_1, y_2, ..., y_n)$; the network is then computed iteratively using the following equations:

$i_t = \sigma(W_{ix} x_t + W_{im} m_{t-1} + b_i)$,
$f_t = \sigma(W_{fx} x_t + W_{fm} m_{t-1} + b_f)$,
$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_{cx} x_t + W_{cm} m_{t-1} + b_c)$,
$o_t = \sigma(W_{ox} x_t + W_{om} m_{t-1} + b_o)$,
$m_t = o_t \odot \tanh(c_t)$,
$y_t = W_{ym} m_t + b_y$,

where $W$ is the weight matrix, $b$ is the bias vector, $\sigma$ is the logistic sigmoid function, and $i$, $f$, $o$, and $c$ are, respectively, the input gate, forget gate, output gate, and cell activation vectors, all of which are of the same size as the cell output activation vector $m$, and $\odot$ is the element-wise product of the vectors.
The LSTM model takes past information into account but ignores future information, because conventional RNNs are only able to make use of the previous context. Bidirectional LSTM (BiLSTM) can better exploit context in forward and backward directions. BiLSTM (Graves and Schmidhuber, 2005) combines bidirectional RNNs (BRNNs) with LSTM. BRNNs present each training sequence forward and backward to two separate recurrent nets by processing the data in both directions with two separate hidden layers that are fed forward to the same output layer. The hidden state of BiLSTM at time $t$ generates the forward hidden sequence $\overrightarrow{h}_t$ and the backward hidden sequence $\overleftarrow{h}_t$. A popular probabilistic method for structured prediction, conditional random fields (CRFs), is widely applied to segment and label sequence data. The advantage of CRFs is to avoid a fundamental limitation of maximum entropy Markov models (MEMMs) based on directed graphical models (Lafferty et al., 2001). We describe the definition of a general CRF (Sutton and McCallum, 2007) based on a general factor graph. Let $G$ be a factor graph over $X$ and $Y$. Then $(X, Y)$ is a conditional random field if for any value $x$ of $X$, the distribution $p(y|x)$ factorizes according to $G$. If $F = \{\Psi_a\}$ is the set of factors in $G$, then the conditional distribution for a CRF has the form

$p(y|x) = \frac{1}{Z(x)} \prod_{a=1}^{A} \exp\Big(\sum_{k} \theta_{ak} f_{ak}(y_a, x_a)\Big)$,

where $A$ is the number of factors in the collection, both feature functions $f_{ak}$ and weights $\theta_{ak}$ are indexed by factor index $a$ to emphasize that each factor has its own set of weights, and $Z(x)$ is a normalization factor over all state sequences for sequence $x$.
BiLSTM-CRF is a widely adopted neural architecture for sequence labeling problems, including entity recognition. It is a hierarchical model, and the architecture is illustrated in Fig. 2. The network can effectively obtain two-way input features through the BiLSTM layer and sentence-level tags through the CRF layer. Note that the CRF layer has a state transition matrix as a parameter, and we can effectively use past and future tags to predict the current tag. The first layer of the model maps words to their embeddings. $X = (x_1, x_2, ..., x_n)$ is a sentence composed of $n$ words in a sequence, regarded as input to a BiLSTM layer. In the second layer, word embeddings are encoded and the output is $h = (h_1, h_2, ..., h_n)$. We record the features extracted from the linear layer as a matrix $P = (p_1, p_2, ..., p_n)$, in which the element $p_{ij}$ corresponds to the score of the $j$th tag of the $i$th word in a sentence. We introduce a tagging transition matrix $T$, where $T_{ij}$ represents the score of transitioning from tag $i$ to tag $j$ in successive words. The score of the sentence $X$ along with a sequence of predictions $Y = (y_1, y_2, ..., y_n)$ is then given by the sum of transition scores and network scores:

$\mathrm{score}(X, Y) = \sum_{i=1}^{n} \big(T_{y_{i-1}, y_i} + P_{i, y_i}\big)$.

A softmax over all tag sequences obtains the normalized probability:

$p(Y|X) = \frac{\exp(\mathrm{score}(X, Y))}{\sum_{Y' \in \mathcal{Y}_X} \exp(\mathrm{score}(X, Y'))}$,

where $\mathcal{Y}_X$ represents all possible tag sequences for a sentence $X$. The model is trained by maximizing the log-probability with a log-likelihood function (Lample et al., 2016). From this, BiLSTM-CRF obtains the sequence of output tags. In decoding the prediction, we seek the optimal path to obtain the maximum score driven by $y^* = \arg\max_{y' \in \mathcal{Y}_X} \mathrm{score}(X, y')$.
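To make the scoring concrete, the following minimal sketch computes score(X, Y) for a given tag sequence from precomputed emission scores P and a transition matrix T. The tensor layout and the omission of special start/stop tags are simplifying assumptions for illustration.

```python
import torch

def sequence_score(P: torch.Tensor, T: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """score(X, y) = sum_i (T[y_{i-1}, y_i] + P[i, y_i]).

    P: (n, K) emission scores from the BiLSTM's linear layer.
    T: (K, K) tag transition scores.
    y: (n,)  integer tag sequence.
    """
    emission = P[torch.arange(len(y)), y].sum()   # network scores P[i, y_i]
    transition = T[y[:-1], y[1:]].sum()           # transition scores T[y_{i-1}, y_i]
    return emission + transition

# Example: a 3-word sentence with 4 possible tags.
P = torch.randn(3, 4)
T = torch.randn(4, 4)
print(sequence_score(P, T, torch.tensor([0, 2, 1])))
```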
Domain adaptation maps the source domain with labels and the target domain with different data distributions to the same feature space (embedding manifold). BiLSTM-CRF is combined with domain adaptation to explore external datasets (Zhao et al., 2018), as illustrated in Fig. 3, in which the fully connected layer maps the distributed feature representation to the sample label space. The CRF features can be computed separately, i.e., $\phi_T(x) = G_T \cdot h$ and $\phi_S(x) = G_S \cdot h$ for the target and source datasets, respectively. The loss functions of $p(y|x; \theta_T)$ and $p(y|x; \theta_S)$ are optimized in alternating order.
Multi-task BiLSTM-CRF for IE
BiLSTM-CRF has been widely used in neural entity recognition (Lample et al., 2016; Liu XJ et al., 2018) and information extraction (Yang ZL et al., 2017; Zhao et al., 2018) in the legal domain. FITS applies it to financial lending cases and private lending cases. Taking the financial lending case (Zhao et al., 2018) as an example, the coverage of the extraction includes 45 types of documents (loan contract, loan extension contract, guarantee contract, mortgage contract, credit contract, pledge contract, pledge registration certificate, joint repayment commitment documents, loan vouchers, guarantor industrial and commercial registration materials, etc.), involving about 550 kinds of elements (plaintiff, defendant, defendant's ID card, litigation claims, facts and reasons, loan amount, loan contract number, signing date, the content of the indictment, etc.). On average, there are at least seven elements (fields) to be extracted for each document.
The BiLSTM-CRF model first matches each input character to a word vector that is pre-trained on a large corpus (usually based on word2vec, GloVe, BERT, and other language models). Then the model uses BiLSTM to perform encoding on the word vector sequence, and obtains the BiLSTM word encoding after concatenation. The BiLSTM word encoding is used as input to the top CRF layer to obtain the final beginning, inside, and outside (BIO) labels, thereby obtaining the result of information extraction. For the example in Fig. 2, the information "joint and several liabilities" in the input will be marked and extracted. Meanwhile, many original materials are obtained through optical character recognition (OCR) or automatic speech recognition (ASR). Missing information and noise exist in the recognition process, so we use regular expression rules to extract some particular information fields as a supplement.
In practice, we divide all information into two categories: general fields and specific fields. General fields refer to fields that are included in every case, such as party information. Specific fields are fields unique to each case, such as the date of contract signing for financial loan cases. For any case, general fields will be extracted by a common model shared by all cases, and the corresponding proprietary model will extract the specific fields for this type of case. In other words, a legal case text will be extracted by two models to extract corresponding fields.
To avoid supervised learning requiring a large amount of data annotation, we also adopt a transfer learning method. We use the annotation data of one case reason to improve the information extraction ability for another case reason via transfer learning. The diagram of the transfer learning model for a "financial lending case" and a "private lending case" is shown in Fig. 3. The model adds a fully connected layer (FCL) for each domain between the BiLSTM layer and the CRF sequence output layer, thereby enhancing the model's transfer learning ability.
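As a rough sketch of this setup, the snippet below shares one BiLSTM encoder between the two lending domains and gives each its own fully connected head. The CRF layer is abstracted to per-token tag scores for brevity, and all sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SharedBiLSTMTagger(nn.Module):
    """One shared encoder, one fully connected head per lending domain."""
    def __init__(self, vocab_size: int, emb: int = 100, hid: int = 128, n_tags: int = 9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.bilstm = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
        self.heads = nn.ModuleDict({
            "financial_lending": nn.Linear(2 * hid, n_tags),
            "private_lending": nn.Linear(2 * hid, n_tags),
        })

    def forward(self, tokens: torch.Tensor, domain: str) -> torch.Tensor:
        h, _ = self.bilstm(self.embed(tokens))   # shared contextual features
        return self.heads[domain](h)             # domain-specific tag scores

# Training would alternate between batches annotated for each domain.
model = SharedBiLSTMTagger(vocab_size=5000)
scores = model(torch.randint(0, 5000, (2, 20)), domain="financial_lending")
print(scores.shape)  # (2, 20, 9)
```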
Evidence analysis
In the trial, evidence analysis plays an essential role in determining the facts of the case. The primary task is to classify the evidence, which aims to divide each piece of evidence into different categories, and its purpose is to study the characteristics of different types of evidence and its application rules. The evidence materials discussed here are texts or images (for example, evidence in private lending cases includes loan agreements, guarantee conditions, payment delivery, repayment conditions, etc.). The second task of evidence analysis is to justify each piece of evidence's authenticity, legality, and relevance. These three aspects determine whether the evidence is probative.
Evidence classification
We classify different types of evidence through multi-modal analysis. The preliminary step of evidence classification is to extract text evidence from the original evidence materials through OCR technology. We then use the NLP engine to understand the text content and extract the semantic features at the text level. For the part of the evidence materials from which OCR cannot identify or accurately extract useful information, we introduce the method of visual feature recognition to improve the effect of evidence recognition. The text features and visual features are finally merged to classify the evidence. For simplicity, we here introduce mainly the classification of the evidence after it is extracted as text.
We propose a classifier by representing the evidence as a vector. Specifically, we employ the BiLSTM model introduced in the previous section to build a classifier to perform evidence classification. We apply a hierarchical attention network (Yang ZC et al., 2016) for evidence classification. The model constructs a hierarchical structure of "word-sentence-evidence text" and has attention mechanisms applied at two levels, the word level and the sentence level. We follow the idea that the model uses the attention mechanism twice under the hierarchical structure. We embed evidence in a vector representation by first using word vectors to represent sentence vectors and then using sentence vectors to represent evidence vectors.
We first encode words by embedding them in vectors through a matrix $W$, and then use the BiLSTM model to obtain annotations of words by summarizing information from both directions. Afterward, we obtain an annotation for a given word $w$ by concatenating the hidden states, $h = [\overrightarrow{h}; \overleftarrow{h}]$, which summarizes the information of the whole sentence centered around $w$. Then we introduce the attention mechanism to extract words that are important to the meaning of the sentence and aggregate the representations of those informative words to form a sentence vector. We have $u_w = \tanh(W_w h + b_w)$ as a hidden representation of $h$ and obtain a normalized importance weight $\alpha$ through a softmax function. We obtain the sentence vector as a weighted sum of the word annotations based on these weights.
After we have the vector of the sentence, we similarly obtain a vector of the evidence. We again use BiLSTM to encode the sentences, again use the attention mechanism, and introduce a sentence-level context vector $u_s = \tanh(W_s h + b_s)$; we then have $v = \sum_i \alpha_i h_i$, which indicates the evidence vector that summarizes all the information of the sentences in the evidence text. The evidence vector $v$ is a high-level representation of the evidence and can be used as features for evidence classification. An overview of evidence classification is shown in Fig. 4. Evidence analysis also contributes to the formation of the evidence chain, which can visually show the case fact structure. This helps the judge sort out the details of the case and grasp the trial's progress. Evidence confirmation ensures that every piece of evidence in the evidence chain is legal and credible. Evidence classification automatically identifies different types of evidence and provides structured input for the components of the evidence chain.
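A minimal sketch of the attention pooling used at both the word and sentence levels follows; the projection, context vector, and shapes are chosen for illustration and correspond to the learnable parameters described above.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Pool a sequence of annotations into one vector via learned attention."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)              # u_i = tanh(W h_i + b)
        self.context = nn.Parameter(torch.randn(dim))  # context vector (u_w or u_s)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        u = torch.tanh(self.proj(h))                       # (seq, dim)
        alpha = torch.softmax(u @ self.context, dim=0)     # importance weights
        return (alpha.unsqueeze(-1) * h).sum(dim=0)        # weighted sum -> vector

# Used once over word annotations to form a sentence vector, then once over
# sentence vectors to form the evidence vector v.
pool = AttentionPool(dim=64)
print(pool(torch.randn(10, 64)).shape)  # torch.Size([64])
```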
Evidence justification
The justification of evidence is the prerequisite of legal reasoning and fact-finding. The attributes of evidence are reflected in three aspects: (1) authenticity of the evidence, including authenticity of the form and authenticity of the content; (2) legality of the evidence, including legality of the source and legality of the state; (3) relevance of the evidence, that is, whether the evidence is related to the facts to be proved. Two novel methods are proposed to characterize these three attributes.
First, we evaluate the authenticity and the legality of evidence based on the analysis of historical data. In practice, it is not appropriate to determine the attributes of evidence from the legal text itself. The judge determines the authenticity and legality of evidence depending on the state of the evidence and the procedure of obtaining the evidence. The technical proposal is to mine massive evidence materials from real cases and then to calculate the prior probabilities of certain types of materials. On this basis, we build a knowledge base composed of different kinds of evidence with prior probability. According to the relevant evidence in the historical data, we evaluate the attributes by adopting the Bayesian theory to assess the probability that the evidence is real or legally obtained.
Second, we evaluate the relevance of evidence by analyzing the relationship between evidence and relevant knowledge. We adopt a logical knowledge graph based reasoning method to automatically determine the relevance of evidence. For example, in response to the "financial borrowing case," we sort out the correlations between various types of evidence based on the judge's experience and form a logical map of correlation review. For all relevant evidence materials, if there is a direct or indirect relevance between the elements of any two sets of evidence, we believe that the evidence's relevance is valid. We apply a logical graph $G = \langle E, R \rangle$ to represent the relevance of evidence, where $E$ is a set of nodes representing the types of evidence, and $R$ is a set of links representing the relationships between two pieces of evidence.
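To illustrate the direct-or-indirect relevance test on such a graph, the sketch below checks whether two evidence types are connected by any path; the edge set and type names are purely illustrative assumptions, not the system's actual logical map.

```python
from collections import deque

def relevant(edges: set, a: str, b: str) -> bool:
    """Breadth-first search for a path between evidence types a and b in G = <E, R>."""
    adj = {}
    for u, v in edges:                       # build an undirected adjacency map
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, queue = {a}, deque([a])
    while queue:
        node = queue.popleft()
        if node == b:
            return True                      # direct or indirect relevance holds
        for nxt in adj.get(node, ()) - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

# Example: relevance via an intermediate evidence type.
edges = {("loan_contract", "loan_voucher"), ("loan_voucher", "repayment_record")}
print(relevant(edges, "loan_contract", "repayment_record"))  # True (indirect)
```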
Automatic questioning in trial
During the trial process, we design an automatic questioning robot to assist the judge in presiding over the trial. The trial is a particular multi-agent dialogue situation. The participants include the judge, the plaintiff, and the defendant. The judge is the trial organizer, while the plaintiff and the defendant ask questions to understand the facts. They also need to maintain order in the court trial and promote the trial process. The automatic questioning system for the judge contains multiple modules: First, the judge's original speech is converted into text with ASR, and then the text is transformed into the context and state of the questioning system with semantic understanding. Second, a module for question management (QM) is constructed and the candidate questions are generated within this module. Finally, automatic questioning is realized with a text-to-speech (TTS) technique that transforms the text into speech. According to the question's content, we divide the judge's questions into two categories: procedural questioning and factual questioning.
Procedural questioning
Procedural questioning refers mainly to some relatively fixed questions used by judges to organize and promote court trials, such as "identity information of the plaintiff and the defendant" and "the plaintiff and the defendant read the indictment and the defense." Procedural questioning is closely related to the procedures of the trial procedure, which has strong regularity. The system of procedural questioning focuses on solving the problem of questioning automatically in the trial procedure. The following is a sequence diagram (Fig. 5) of an automatic questioning system, where fact stands for the node of factual questioning, while procedure identifies the node of a procedural questioning node. Factual questioning is inserted in the process of procedural questioning, and multiple fact nodes can be inserted. It can be seen that an essential function of process questioning is state management.
The process of procedural questioning can be defined as a natural language generation problem, and the solutions include rule-based methods and abstractive generation methods. The rule-based approach has the advantages of accuracy and practicability, but it requires a large number of custom rules. The abstractive generation method currently has technical bottlenecks; the generated text usually suffers from incomplete utterances, repetition, and faulty wording. The automatic questioning system innovatively proposes a scheme combining a finite state machine (FSM) and an affair map. The finite state machine is responsible for state management, and the affair map is responsible for selecting subsequent actions; it can also flexibly configure templates for downstream text generation.
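The following sketch shows one way such a questioning FSM could be organized, with each state holding an optional question template and a successor state, and a slot where factual questioning can be inserted. The states and templates here are invented for illustration and are not the system's actual configuration.

```python
# Each state maps to (question template or None, next state or None).
PROCEDURE_FSM = {
    "identity_check":  ("Please state the identity information of the parties.", "read_indictment"),
    "read_indictment": ("Plaintiff, please read the indictment.",                "read_defense"),
    "read_defense":    ("Defendant, please read the statement of defense.",      "fact_inquiry"),
    "fact_inquiry":    (None,                                                    "closing"),  # factual questions inserted here
    "closing":         ("The court investigation is concluded.",                 None),
}

def run_trial_questions(start: str = "identity_check"):
    """Walk the FSM and yield each procedural question in order."""
    state = start
    while state is not None:
        question, nxt = PROCEDURE_FSM[state]
        if question:
            yield question      # handed to TTS in the real system
        state = nxt

for q in run_trial_questions():
    print(q)
```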
Factual questioning
The judge's factual questioning is aimed mainly at the factual elements of the plaintiff's petitions and the defendant's defenses, and it also refers to the factual questions that the judge has asked before. Factual questioning is treated as a text generation problem. We obtain factual questions raised by the judge in the trial's historical dialogues using joint learning of classification and retrieval. Therefore, we first need to define dialogue in the trial and then give an encoder to delicately represent the hierarchical information in the dialogue context.
Let $D = \{U_1, U_2, ..., U_n\}$ denote a dialogue containing $n$ utterances, where each utterance $U_i$ is composed of a sequence of words (namely a sentence) $S_i$, which constitutes the text content of $U_i$. We employ BiLSTM to encode the semantics of the utterance. BiLSTM has been widely recognized for encoding an utterance's semantics while maintaining its syntax (Wang et al., 2020). We use BiLSTM to learn a feature representation of the dialogue by masking and recovering its unit elements, such as evidence and laws in the legal domain for trial dialogue.
In the utterance layer, the input source is a set of dialogue information obtained from the speech transformation of the judge's factual questions, denoted as a sequence $\{\text{utterance}_1, \text{utterance}_2, ..., \text{utterance}_n\}$, where each utterance is composed of the questions asked by the judge. It contains $L$ utterances, where each utterance $U_i$ is composed of a sequence of $l$ words (namely a sentence) $S_i = \{w_{i1}, w_{i2}, ..., w_{il}\}$ and the associated role (the judge) $r_i$. We employ BiLSTM to encode the semantics of the utterance. Note that the judge's role information should be embedded in the utterance. We concatenate the judge's role information with each word in the sentence so that the same word can be projected into different dimensional spaces. The representation of BiLSTM is obtained by concatenating its left and right context representations.
To strengthen the relevance between words in an utterance, the attention mechanism is employed to obtain $U_i$, which can be interpreted as a local representation of an utterance:

$\alpha_{ij} = \mathrm{softmax}(Q_u^\top h_{ij}), \quad U_i = \sum_j \alpha_{ij} h_{ij}$,

where $Q_u$ are learnable parameters.
In the dialogue layer, to represent the global context in the dialogue, we use BiLSTM again to encode the dependencies between utterances and obtain a global representation of each utterance, expressed as $\tilde{U}_i \in \mathbb{R}^{2\,\mathrm{dim}_h}$, where $\mathrm{dim}_h$ refers to the dimensionality of the hidden state $h$. We next perform word segmentation on the judge's utterances in the dialogue and compute a word vector representation for each word segment to obtain $X = \{x_1, x_2, ..., x_n\}$; we then employ BiLSTM and other neural network units to encode $X$ and conduct automatic feature selection. Because the judge's questions in the dialogue contain many utterances, this generates a new vector sequence $V^J = \{v_1, v_2, ..., v_n\}$. We further use the attention mechanism to perform a secondary representation of $V^J$. These neural network units can enhance information interaction between different levels of the dialogue. After the hierarchical representation, we obtain a mapping from $V^J$ to $V^J_h = \{h_1, h_2, ..., h_n\}$, where $v$ and $h$ have a one-to-one correspondence.
Because the judge's factual questions are related to the case's facts in the dialogue between the plaintiff and the defendant, it is also necessary to segment the plaintiff's litigation request and the text of the defendant's defense. We first compute a word vector representation for each word segment to obtain $Y = \{y_1, y_2, ..., y_n\}$. We then employ the attention mechanism to encode $Y$ to form an encoding vector $V^W$ for each combination. The function of $V^W$ is to encode the information of the plaintiff's request and the defendant's defense in the encoded text. We combine the elements $y$ in $V^W$ and the elements $h$ in $V^J_h$ one by one according to their serial numbers, and the combined result is recorded as $\tilde{V}^J_h = \{h^t_1, h^t_2, ..., h^t_n\}$. The new representation contains the prosecution and defense information of the plaintiff and the defendant, as well as information about the judge's questions in the dialogue.
We employ a classification task to recommend the most likely question categories. We first predefine a number of question categories. Under each question category, there are several standard question templates. For example, "recovery of debt" and "the spouses' joint debt" belong to different question categories. When recommending questions to the judge, the system takes the indictment and pleading, as well as the historical questions raised by the judge, as input and returns the top $K$ most likely question categories according to the steps mentioned earlier. Finally, within the top $K$ question categories, it returns the standard question template with the highest probability. An example of factual questioning is shown in Fig. 6.
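A minimal sketch of this recommendation step is given below: a linear head scores the predefined categories from a dialogue representation, the top K categories are kept, and a template is returned per category. The template-level probability ranking is abstracted away, and all names and categories are illustrative assumptions.

```python
import torch
import torch.nn as nn

def recommend_questions(dialogue_vec: torch.Tensor, category_head: nn.Linear,
                        templates: dict, k: int = 3) -> list:
    """Return one standard question template for each of the top-K categories."""
    probs = torch.softmax(category_head(dialogue_vec), dim=-1)
    _, top_cat = probs.topk(k)               # top-K most likely question categories
    # Template-level scoring is abstracted: take each category's first template.
    return [templates[c.item()][0] for c in top_cat]

# Example with 4 hypothetical question categories.
head = nn.Linear(16, 4)
templates = {0: ["Has the loan principal been repaid?"],
             1: ["Was the debt incurred during the marriage?"],
             2: ["Please state the agreed interest rate."],
             3: ["Was a written loan contract signed?"]}
print(recommend_questions(torch.randn(16), head, templates, k=2))
```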
Trial summarization
Trial summarization consists of two tasks. The first task is to summarize the court debate transcript during the trial stage. The other task is to summarize the controversial focuses of the dialogue in the trial. Summarization-based algorithms have enabled a broad spectrum of applications, such as auto-abbreviated news and retrieval outcomes (Gerani et al., 2014), to assist users in consuming lengthy documents effectively. Thanks to the development of ASR techniques, dialogue summarization (Goo and Chen, 2018; Liu CY et al., 2019) has also attracted much attention in recent years, with exemplar applications such as judicial trials, customer service, and meeting summarization. Different from a plain document, multi-role dialogue is more complicated due to the interactions among various parties. Enhanced representation of the atomic components (e.g., utterance and role) of the dialogue is a prerequisite for optimizing summary generation.
Summarization of controversy focus
During the trial process, the judge needs to discover the common focus of the dispute between the plaintiff and the defendant in the debate and identify how the two sides defend and refute the other party's arguments. The summary of the dialogue during the trial is vital in helping the judge grasp the critical information in the dialogue between the two parties. They include both useful information that appears during the dialogue (for example, private lending cases include the names of the parties, loan amounts, repayment records, etc.) and the focal point of the case (for example, the fact that both parties have repeatedly defended and questioned). The judge finally completes the case trial by analyzing the focus of the dialogue between the two parties and combining the judgment logic.
We have realized the automatic generation model of court trial abstracts in the intelligent trial system, mainly the automatic abstracts of dispute focuses. This task includes (1) extracting dialogue fragments related to the dispute focus in the dialogue and (2) classifying the dispute focus corresponding to each dialogue. Through the generation and processing of the court trial summary, the judge can obtain important dispute fragments in the court trial dialogue, to understand and deal with the court trial more efficiently.
The Alibaba Group proposed a multi-task learning framework called CFDS (Duan et al., 2019;Wang et al., 2020) to summarize the focus of court disputes, which includes mainly the following parts: (1) Using a sequence encoder, we model the text of the trial, semantic information of dispute focus, the role related to utterances, and the node sequence in the corresponding legal knowledge graph, and obtain the vector representation of context information through an attention mechanism. (2) According to the different dispute focuses, the focus classifier takes the category of the dispute focus involved in each utterance as the target, and obtains the label of the dispute focus.
(3) For the court record summary extraction task, the objective of the summary extraction classifier is whether each utterance is extracted.
We adopt a multi-task learning strategy including the following parts: (1) the prediction of the controversy focus, (2) the highlighted sentence, and (3) the recognition of sentence elements. To distinguish between different roles in the dialogue, such as judge, plaintiff, and defendant, we use different embeddings to represent different roles. We apply word embedding to express an utterance in the dialogue through a convolutional neural network and pooling mechanism, and then use a CNN with an attention mechanism to express the entire dialogue. The process of trial summarization is shown in Fig. 7.
Fig. 7 The process of trial summarization

Controversy focus assignment

The first task is to assign a controversy focus to each utterance. Different debate dialogues may have various controversy focuses, and the judge concludes each controversy focus according to the content of the debate dialogue $D$. Because the number of controversy focuses varies in different debate dialogues and each controversy focus differs in semantics and syntax, we can hardly cope with this task using text classification. We calculate the relevance between utterance $u_i$ and each controversy focus $f_m$ in $F$ with respect to the debate dialogue $D$.
To do so, we need to compute the embedding of each controversy focus. As both controversy focuses and sentences in the debate are natural language, we use the BiLSTM encoder to obtain the controversy focus embedding $f_m$. In addition, not every utterance $u_i$ is assigned a controversy focus. Some utterances do not belong to any controversy focus and can be regarded as irrelevant content, namely noise. Thus, a category Noise is created for every debate dialogue and a dense vector is used to represent it. Then we calculate the attention score $\alpha^f_{ij}$ of utterance $u_i$ with $f_j$:

$\alpha^f_{ij} = \frac{\exp(u_i^\top f_j)}{\sum_m \exp(u_i^\top f_m)}$.

The controversy focus with the highest normalized score $\alpha^f_{ij}$ is the controversy focus assigned to $u_i$.
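As a small illustration of this assignment (with a dot-product scorer assumed, since the exact score function is not reproduced here), the sketch below scores each utterance against all focus embeddings plus the Noise vector and picks the argmax:

```python
import torch

def assign_focus(utterances: torch.Tensor, focuses: torch.Tensor,
                 noise: torch.Tensor) -> torch.Tensor:
    """utterances: (n, d); focuses: (m, d); noise: (d,). Returns a focus index per utterance."""
    candidates = torch.cat([focuses, noise.unsqueeze(0)], dim=0)   # (m+1, d)
    scores = torch.softmax(utterances @ candidates.T, dim=-1)      # alpha^f_{ij}
    return scores.argmax(dim=-1)     # index m denotes the Noise category

# Example: 5 utterances, 3 controversy focuses.
idx = assign_focus(torch.randn(5, 32), torch.randn(3, 32), torch.randn(32))
print(idx)
```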
Utterance extraction
The second task aims to extract the crucial utterances from the debate dialogue for the different controversy focuses and to form multiple summarizations. The utterance extractor considers two aspects: utterance content and controversy focuses. To enhance utterance representation learning, we employ the normalized controversy focus distribution $F_i = (\alpha^f_{i1}, \alpha^f_{i2}, ...)$ as an input to this task. Then $F_i$ and $u_i$ are concatenated and fed into the fully connected layers as follows:

$o_i = \sigma\big(W_{fc_2}\, g(W_{fc_1} [F_i; u_i])\big)$,

where $W_{fc_1}$ and $W_{fc_2}$ are two weight matrices, $g$ is a nonlinear activation, and $o_i \in [0, 1]$ is the output of the utterance extractor, which indicates the probability of extracting utterance $u_i$.
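A minimal sketch of this extractor follows; the hidden size and the ReLU activation between the two fully connected layers are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class UtteranceExtractor(nn.Module):
    """Score whether an utterance should be extracted, given its focus distribution."""
    def __init__(self, focus_dim: int, utt_dim: int, hidden: int = 128):
        super().__init__()
        self.fc1 = nn.Linear(focus_dim + utt_dim, hidden)   # W_fc1
        self.fc2 = nn.Linear(hidden, 1)                     # W_fc2

    def forward(self, F_i: torch.Tensor, u_i: torch.Tensor) -> torch.Tensor:
        x = torch.cat([F_i, u_i], dim=-1)                   # [F_i; u_i]
        return torch.sigmoid(self.fc2(torch.relu(self.fc1(x)))).squeeze(-1)  # o_i in [0, 1]

extractor = UtteranceExtractor(focus_dim=4, utt_dim=64)
print(extractor(torch.rand(4), torch.randn(64)))  # extraction probability
```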
Dialogue inspectional summarization
In the court debate scenario, the judge summarizes the case narrative based on facts recognized from the court debate during the trial and relies on the evidence or materials submitted by the litigants. We particularly propose a framework of DIS, which includes four parts: (1) For the text of the trial transcript, the multi-role dialogue encoder hierarchically and serially models the semantics of the court trial transcript, and obtains vectorized representations at the word level, utterance level, and dialogue level, respectively. (2) The decoder uses the attention mechanism and the copy mechanism to generate the sequence results identified by the court. (3) The target fact element regularizer classifies the relevance of fact elements, so that the element level of the generated text remains consistent with the content of the court trial. (4) The missing fact entity discriminator uses the classification of missing fact entities to predict the inconsistency between the decoder state representation and the dialogue encoding representation in fact entity classification.

We design a hierarchical dialogue encoder involving role information to accommodate extended context and multiple turns among the multiple roles. Rather than directly aligning the input dialogue and its summary, within the generation framework, we propose two additional tasks in the manner of joint learning: expectant factual aspect regularization (EFAR) estimates the factual aspects to be contained in the summary to make the model emphasize the factual coverage of logical reasoning, and missing factual entity discrimination (MFED) predicts the missing aspects, which discovers and flags the factual gap between the input and the output. Specifically, the DIS framework is shown in Fig. 8.
Inspectional decoder
We propose an inspectional decoder for generating summaries. The inspectional decoder generates the summary via a pointing mechanism, while the expectant factual aspect regularizer ensures factual consistency from the aspect level.
Much as humans tend to write a draft before checking factual details, we treat the inspectional decoder as a drafter, whose states are further regularized by the aspect-aware module.
With the pointing mechanism integrated, the decoder can directly copy tokens from dialogue, making the generated summary more accurate and relevant in factual details.
Expectant factual aspect regularizer
When writing formal documents like the legal verdict, people always carefully review their drafts to ensure that there are no inconsistencies in the expected aspects. Inspired by this process, we propose an expectant factual aspect regularizer to verify the aspect level's consistency.
For each aspect e_i, we use the aspect encoder to obtain its semantic embedding a_i. The encoder Enc_A is a single-layer bidirectional LSTM over the aspect description text. We then produce a weighted sum of the decoder hidden states, known as the aspect-aware decoder state s_a, where K is the number of factual aspects and the score function uses additive attention: score(a_i, s_t) = v^T tanh(linear(a_i, s_t)). A sketch follows.
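A minimal sketch of this attention, assuming the unspecified linear layer has the additive form W_a a_i + W_s s_t + b and that scores are averaged over the K aspects before the softmax over decoder steps (the excerpt does not fix this aggregation):

    import numpy as np

    def aspect_aware_state(aspects, states, W_a, W_s, b, v):
        # aspects: K aspect embeddings a_i; states: T decoder hidden states s_t
        T = len(states)
        scores = np.zeros(T)
        for a_i in aspects:   # score(a_i, s_t) = v^T tanh(W_a a_i + W_s s_t + b)
            scores += np.array([v @ np.tanh(W_a @ a_i + W_s @ s_t + b)
                                for s_t in states])
        scores /= len(aspects)                # assumed aggregation over K aspects
        w = np.exp(scores - scores.max())
        w /= w.sum()                          # attention over decoder steps
        return sum(w_t * s_t for w_t, s_t in zip(w, states))   # s_a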
Finally, we feed s_a into a three-layer classifier to predict the expectant aspects, where F_a denotes the stack of linear layers and y_a ∈ R^K gives the predicted probability of each of the K aspects.
Missing factual entity discriminator
There are often factual inconsistencies between the dialogue and the reference summary. In the Seq2Seq framework, such inconsistencies mislead the decoder into generating incorrect factual details. The missing factual entity discriminator tries to detect these inconsistencies and thus mitigate the problem: it classifies whether a factual entity is missing from the conversation. In real applications, human summarizers can refer to its predictions to complete the generated text with additional information. We view inconsistency as the factual divergence between source and target content and use a bilinear layer as the classifier.
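A sketch of the bilinear discriminator, with src as a pooled dialogue-encoder representation and tgt as a pooled decoder-state representation (both pooling choices and the bias term are assumptions not fixed by this excerpt):

    import numpy as np

    def missing_entity_probability(src, tgt, W_bilinear, bias=0.0):
        # factual-divergence score between source (dialogue) and target (draft)
        logit = src @ W_bilinear @ tgt + bias
        return 1.0 / (1.0 + np.exp(-logit))   # probability the entity is missing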
Judgment prediction
Legal judgment prediction (LJP) is one of the most attractive research topics in the field of legal AI (Xiao et al., 2018; Chao et al., 2019; Zhong et al., 2020a, 2020b). LJP aims to predict the legal judgment based on a legal text including the description of the case facts. Most previous works treated LJP as a text classification task and generally adopted DNN-based methods to solve it. Zhong et al. (2018) and Yang WM et al. (2019) used multi-task learning to capture the dependencies among subtasks by considering their topological order. Zhong et al. (2020b) applied a question-answering task to improve the interpretability of LJP through reinforcement learning. Luo et al. (2017) formulated legal documents as a knowledge base and used attention mechanisms to aggregate representations of relevant legal texts to support judgment prediction.
We combine DNNs with a symbolic legal knowledge module, in which legal knowledge is expressed as a set of first-order logic (FOL) rules. The application of FOL to represent domain knowledge has already demonstrated its effectiveness on many other tasks, including visual relation prediction (Xie et al., 2019), natural language inference (Li et al., 2019), and semantic role labeling (Li et al., 2020). Representing legal knowledge as FOL rules makes judgment prediction more interpretable and provides models with an inductive bias, reducing their dependency on the neural network alone.
The proposed model unifies the gradient-based deep learning module with the non-differentiable symbolic knowledge module via probabilistic logic. Specifically, we build a deep learning module based on a co-attention mechanism, which benefits the information interaction between fact descriptions and claims. Afterward, the output of the deep learning module, a predicted probability distribution over judgments, is fed into the symbolic module.
Legal knowledge represented as logic rules
Before presenting how to integrate legal knowledge into DNNs, we briefly introduce how FOL expresses legal knowledge. To preserve the advantages of the gradient-based end-to-end training schema, we convert the Boolean operations of FOL into probabilistic logic, expressed in a continuous real-valued space.
Specifically, we associate each variable X in a precondition with the corresponding neural output x. The Lukasiewicz T-norm and T-conorm (Klement et al., 2000) are then used to relax the logic rules to a softened version based on the associated outputs of the deep learning module. A set of functions Γ(·) maps the discrete outputs of FOL into continuous real values as follows: 1. Γ(X_i) = x_i, with X_i denoting a variable in FOL and x_i the associated output of the neural networks.
2. Γ(X_1 ∧ ⋯ ∧ X_n) = max(0, Σ_i x_i − n + 1) for a conjunctive precondition, and 3. Γ(X_1 ∨ ⋯ ∨ X_n) = min(1, Σ_i x_i) for a disjunctive one.

In designing qualified mapping functions, when the precondition holds, the mapping function should generate a predefined maximum positive score to lift the original score produced by the neural networks. The mapping functions should also reflect the semantics of the propositional connectives. For example, a conjunctive precondition's mapping score becomes zero if even one of the conjuncts is false. For a disjunctive precondition, the mapping score becomes zero only when all the disjuncts are false; moreover, the mapping score increases with the number of true disjuncts.
In addition to the functions listed above, two mapping functions are used for negated predicates. One of them is for negated predicates in preconditions, e.g., ¬X_i, whose softened output is denoted as 1 − x_i. The other is for a negated consequent ¬Y, designated as −y_i to reduce the neural networks' original outputs.
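The following sketch spells out these relaxations with the standard Lukasiewicz forms, which match the semantics described above:

    def luk_not(x):
        # negated predicate in a precondition: ~X_i softens to 1 - x_i
        return 1.0 - x

    def luk_and(xs):
        # Lukasiewicz T-norm: zero as soon as any conjunct is false
        return max(0.0, sum(xs) - len(xs) + 1)

    def luk_or(xs):
        # Lukasiewicz T-conorm: zero only if all disjuncts are false,
        # growing with the number of (soft-)true disjuncts
        return min(1.0, sum(xs))

    # Example: a conjunctive precondition with one false conjunct maps to 0.
    assert luk_and([1.0, 0.0]) == 0.0
    assert luk_or([0.0, 0.0]) == 0.0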
Three typical types of legal knowledge
We investigate compiling three specific types of legal knowledge into FOL rules, which are frequently referred to by legal experts in private loan cases.
The first legal logic rule comes from article 28 of the Supreme People's Court's Provisions on Several Issues Concerning the Application of Law in the Trial of Private Loan Cases (http://www.court.gov.cn/fabuxiangqing-15146.html). In short, it states that the law shall not support an interest rate agreed by the lender and the borrower that exceeds four times the quoted interest rate on the one-year loan market when the contract was established. We formulate this legal knowledge as FOL rule K_1, in which X_TIR is a variable indicating whether the current claim is for interest and X_RIO indicates whether the claimed interest rate exceeds four times the quoted interest rate on the one-year loan market. This rule reflects the reduction of the illegitimate interest rate. The second legal logic rule comes from article 29 of the same provisions. In short, it states that if neither the interest rate during the loan period nor the overdue interest rate has been agreed upon, the people's court shall support the unpaid interest from the date of overdue repayment. We formulate this legal knowledge as FOL rule K_2, in which X_RIA indicates whether the borrower and the lender have agreed on the interest rate, and X_DIL indicates whether the date of overdue repayment is legitimate.
In private loan law cases, the plaintiff often proposes multiple claims, and the judgments on these claims are not independent. For example, when a plaintiff proposes two claims, one for the principal and the other for the interest, if the judge does not support the principal claim, then the interest claim should not be supported either. Such prior knowledge should be injected into the deep learning module as well. Another example showing the dependency among multiple claims is that the losing party shall bear the litigation costs. The third FOL rule, K_3, is formulated with X_TIC indicating whether the current claim is for litigation fees; this rule affects the claims for litigation costs.
Injecting legal knowledge into DNNs
We first build a co-attention network as our base model, which can enrich the representations by exchanging information between fact descriptions and claims. Formally, the co-attention network can be denoted abstractly as an encoder followed by deep neural layers, where σ and W denote the activation function and the model parameters, respectively. Note that the softmax outputs of the co-attention network are input into the logic module and adjusted accordingly. As shown in Fig. 9, the proposed model consists of a deep learning module based on co-attention networks and a symbolic legal knowledge module. We first input the fact descriptions and the multiple claims into the co-attention network to obtain contextual representations for both. The predicted probability distribution of the deep learning module is then re-weighted by the first-order logic rules in the symbolic module. The logic rules represent professional legal knowledge, which is essential for making correct judgments. The co-attention model can fuse the claim representations and fact descriptions to perform implicit reasoning. However, the related legal knowledge used by legal experts (e.g., lawyers or judges) can hardly be learned by the co-attention network alone. For example, the rule that a private loan interest rate exceeding 2% per month is not protected by law may not always be followed by the neural network. Thus, it is crucial to explicitly inject such declarative legal knowledge into neural networks, so that they make interpretable judgment predictions.
Before introducing substantial legal knowledge related to our private loan scenario, we first show how to inject symbolic FOL rules into the deep learning module using the above mapping functions Γ (·). In short, the core idea of this legal knowledge injection is to re-weight the output y of co-attention networks as introduced in the previous subsection so that when the facts in the text satisfy conditions in the legal knowledge, the associated value of y increases. Otherwise, the value of y decreases.
Specifically, given the softmax outputs y of Eq. (15) and an FOL rule X → Y, the FOL rule and the DNN are combined by regulating the outputs of the deep learning module through Eq. (16), where ρ is a hyper-parameter denoting the importance of each rule. Through Eq. (16), we can directly regulate the deep learning module's outputs, as sketched below.
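Since Eq. (16) itself is not reproduced in this excerpt, the sketch below uses a hypothetical additive form consistent with the mapping functions above: a satisfied precondition lifts the associated output by ρ·Γ(X), and a negated consequent lowers it.

    def apply_rule(y, gamma_x, rho, negated_consequent=False):
        # y: softmax output of the co-attention network for one judgment
        # gamma_x: softened truth value of the rule precondition, in [0, 1]
        # rho: hyper-parameter weighting the rule's importance
        delta = rho * gamma_x
        return y - delta if negated_consequent else y + delta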
Given a set of samples, the model is trained by maximizing the corresponding objective function.
Judgment document generation
Judgment document generation is based mainly on the judge's view, which is often regarded as a "court view" in the judgment document (Ye et al., 2018), and its content includes mostly the determination of the case facts and the matching of laws and regulations. Therefore, the core task of judgment document generation is the generation of the court's view. Details about the proposed algorithms and experimental results on court's view generation can be found from our previous conference paper published in EMNLP 2020 (Wu et al., 2020).
Due to the popularity of machine learning, especially NLP techniques, many legal assistant systems have been proposed to improve the effectiveness and efficiency of the legal system from different aspects. The court's view can be regarded as the interpretation of the sentence in a case. As an important portion of the verdict, the court's opinion is difficult to generate due to the logical reasoning required in the content. Therefore, the generation of the court's view is regarded as one of the most critical functions in a legal assistant system. The court's view consists of two main parts, the judgment and the rationales, where the judgment responds to the plaintiff's claims in civil cases or charges in criminal cases, and the rationales are summarized from the fact description to derive and explain the judgment.
In this work, we focus on the problem of automatically generating the court's view in civil cases given the plaintiff's claim and the fact description (Fig. 10). In such a context, generating the court's view can be formulated as a text-to-text NLG problem: the input is the plaintiff's claim and the fact description, and the output is the corresponding court's view, which contains the judgment and the rationales. Because claims vary widely, for simplicity the judgment of a civil case is defined as supported if all its requests are accepted and non-supported otherwise.
Fact description
The plaintiff A claimed that the defendant B should return the loan of $29 500 [Principal claim] and the corresponding interest [Interest claim].
After the hearing, the court held the facts as follows: defendant B borrowed $29 500 from plaintiff A and agreed to return it after one month. After the loan expired, the defendant failed to return it [Fact].
Court's view
The court concluded that the loan relationship between plaintiff A and defendant B is valid. The defendant failed to return the money on time [Rationale]. Therefore, the plaintiff's claim on the principal was supported [Acceptance] according to law. The court did not support the plaintiff's claim on interest [Rejection] because the evidence was insufficient [Rationale].

Fig. 10 An example of the court's view from a legal document (Wu et al., 2020)

Although classical NLG models have been applied to many text-generation tasks, such techniques cannot be applied directly to generating the court's view, for the following reasons: (1) The "no claim, no trial" principle exists in civil legal systems; the judgment is the response to the claims declared by the plaintiff, and its rationales summarize the corresponding facts. (2) The distribution of judgment results in civil cases is very imbalanced. Such an imbalance would blind the model's training by focusing on the supported cases while ignoring the non-supported cases, leading to incorrect judgments in the generated court's view.
To address these challenges, we propose the AC-NLG method, jointly optimizing a claim-aware encoder, a pair of counterfactual decoders that generate judgment-discriminative court views (both supportive and non-supportive), and a synergistic judgment prediction model. Comprehensive experiments show the effectiveness of our method under both quantitative and qualitative evaluation metrics.
Backdoor adjustment
Causal inference (Pearl, 2009; Kuang et al., 2020) is a powerful statistical modeling tool for explanatory analysis that removes confounding bias from data. Such bias might create a spurious correlation or confounding effect among variables. Recently, many methods have been proposed to remove confounding bias in the causal inference literature, including do-operations based on a structural causal model (Pearl, 2009) and counterfactual outcome prediction based on the potential outcome framework (Imbens and Rubin, 2015). Building on the do-operation, the backdoor adjustment (Pearl et al., 2016) has been proposed for data debiasing. In this study, we sketch the structural causal model of our problem, as shown in Fig. 11, and adopt the backdoor adjustment for confounding bias reduction.

Fig. 11 Confounding bias from the data generation mechanism (Wu et al., 2020)

In this subsection we introduce the effect of the mechanism confounding bias on the generation of the court's view and propose a backdoor-inspired method to eliminate that bias. We then describe our AC-NLG model in detail; Fig. 12 shows the overall framework.
As shown in Fig. 11, u refers to the unobserved mechanism (i.e., plaintiffs sue when they have a high probability of being supported) that causes the judgments in dataset D(J) to be imbalanced. D(J) → I denotes that the imbalanced data D(J) have a causal effect on the representation of input I (i.e., plaintiff's claim and fact description), and D(J) → V denotes that D(J) has a causal effect on the representation of the court's view V. Such imbalance in D(J) leads to a confounding bias: the representations of I and V tend toward the supportive side and blind the conventional training on P(V|I), a problem that current sequence-to-sequence models struggle to solve. For a particular case, given the input I = (c, f) and using the Bayes rule, conventional training would model the court's view V as

P(V | I) = Σ_j P(V | I, j) P(j | I).

The backdoor adjustment creates a do-operation on I, which promotes the posterior probability from passive observation to active intervention. It addresses the confounding bias by computing the interventional posterior P(V | do(I)) and controlling the confounder:

P(V | do(I)) = Σ_j P(V | I, j) P(j).

Because the backdoor adjustment cuts the dependence between D(J) and I, we can eliminate the confounding bias from the data generation mechanism and learn an interventional model for debiased court's view generation.
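At the level of the output distribution, the interventional posterior amounts to mixing the two counterfactual decoders with the debiased prior P(j). A minimal sketch follows; whether the deployed system mixes the decoders or selects the one matching the predicted judgment is an implementation choice not fixed by this excerpt.

    import numpy as np

    def backdoor_vocab_distribution(p_vocab_sup, p_vocab_nonsup, p_support):
        # P(V | do(I)) = sum_j P(V | I, j) P(j), j in {support, non-support}
        return (p_support * np.asarray(p_vocab_sup)
                + (1.0 - p_support) * np.asarray(p_vocab_nonsup))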
As shown in Fig. 12, to optimize Eq. (19), we use a pair of counterfactual decoders to learn the likelihood P(V|I, j) for each j. At inference, we use a predictor to approximate P(j). Note that our implementation of the backdoor adjustment can easily be extended to multi-valued confounders by using multiple counterfactual decoders.
Model architecture
Our model is trained in a multi-task learning manner and consists of a shared encoder, a predictor, and a pair of counterfactual decoders. Note that the predictor and the decoders take the output of the encoder as input.
1. Claim-aware encoder Intuitively, the plaintiff's claim c and the fact description f are sequences of words. The encoder first transforms the words into embeddings. Then the embedding sequences are fed to BiLSTM, producing two sequences of hidden states h c and h f corresponding to the plaintiff's claim and the fact description, respectively.
After that, we use a claim-aware attention mechanism to fuse h_c and h_f. For each hidden state h^f_i in h_f, e^i_k is its attention weight on h^c_k, and the attention distribution q_i is computed from these weights, where v, W_c, W_f, and b_attn are learnable parameters. The attention distribution can be regarded as the importance of each word in the plaintiff's claim. Next, the new representation of the fact description is produced from these attention weights and, after feeding it to another BiLSTM layer, we obtain the claim-aware representation of fact h.

Fig. 12 Architecture of the attentional and counterfactual natural language generation (AC-NLG) method (Wu et al., 2020)

2. Judgment predictor Given the claim-aware representation of fact h, the judgment predictor produces the probability of support P_sup through a fully connected layer and a sigmoid operation; the prediction result j is obtained by thresholding P_sup, where 1 means support and 0 means non-support. A sketch of both components follows.
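In the sketch below, the additive form e^i_k = v^T tanh(W_c h^c_k + W_f h^f_i + b_attn), the fusion by concatenation, and the 0.5 decision threshold are assumptions where the excerpt omits the equations.

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def claim_aware_fusion(h_c, h_f, v, W_c, W_f, b_attn):
        # h_c: (Tc, d) claim states; h_f: (Tf, d) fact states
        fused = []
        for h_f_i in h_f:
            e_i = np.array([v @ np.tanh(W_c @ h_c_k + W_f @ h_f_i + b_attn)
                            for h_c_k in h_c])
            q_i = softmax(e_i)                  # importance of each claim word
            c_i = q_i @ h_c                     # claim context for this fact word
            fused.append(np.concatenate([h_f_i, c_i]))
        return np.vstack(fused)                 # fed to another BiLSTM -> h

    def predict_judgment(h_pooled, w, b):
        p_sup = 1.0 / (1.0 + np.exp(-(w @ h_pooled + b)))
        return int(p_sup > 0.5), p_sup          # 1 = support, 0 = non-support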
Counterfactual decoder
To eliminate the effect of data bias, we use a pair of counterfactual decoders: one for supported cases and the other for non-supported cases. The two decoders have the same structure but aim to generate the court's view under different judgments. We name them counterfactual decoders because at any time only one of the two generated court views is correct. We again apply the attention mechanism: at each step t, given the encoder's output h and the decoder state s_t, the attention distribution a_t is calculated in the same way as q_i in Eq. (21), but with different parameters, and the context vector h*_t is a weighted sum of h. The context vector h*_t, which can be regarded as a representation of the input for this step, is concatenated with the decoder state s_t and fed to linear layers to produce the vocabulary distribution p_vocab, where V, V', b, b' are all learnable parameters. We then add a generation probability to handle the out-of-vocabulary (OOV) problem: given the context h*_t, the decoder state s_t, and the decoder's input (the word embedding of the previous word) x_t, the generation probability P_gen is calculated with learnable parameters w_h*, w_s, w_x, and b_ptr, where σ is the sigmoid function, and the final probability for a word w at each time step mixes generation and copying. How the two decoders are made to differ is described in the training part below.
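The copy step follows the standard pointer-generator formulation that this description matches; the sketch below omits extended-vocabulary handling for OOV source words for brevity.

    import numpy as np

    def final_word_distribution(p_vocab, attn, p_gen, src_token_ids):
        # P(w) = p_gen * p_vocab(w) + (1 - p_gen) * sum_{i: w_i = w} a_t[i]
        p = p_gen * np.asarray(p_vocab, dtype=float)
        for a_ti, w_id in zip(attn, src_token_ids):
            p[w_id] += (1.0 - p_gen) * a_ti     # copy mass from source positions
        return p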
Training
For the predictor, we use cross-entropy as the loss function, where ĵ is the real judgment.
For the decoders, the previous word during training is the word in the real court's view, and the loss for time step t is the negative log-likelihood of the target word w*_t. The overall generation loss averages these losses over the T steps, where T is the length of the real court's view.
Because we aim to make the two decoders generate two different court views, we use a mask operation when calculating the loss of each decoder, so that each decoder is trained only on the samples matching its judgment: the loss for the support decoder is computed on supported samples, and the loss for the non-support decoder, L_nsup, is obtained in the opposite way. The total loss combines the generation and prediction losses, where we set λ to 0.1 in our model; a sketch follows.
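A sketch of the masked training objective; the exact combination formula is not reproduced in this excerpt, so a simple sum with λ = 0.1 applied to the predictor term is assumed.

    import numpy as np

    def total_loss(nll_sup, nll_nonsup, ce_pred, j_true, lam=0.1):
        # nll_sup / nll_nonsup: per-sample generation losses of the two decoders
        # j_true: real judgments (1 = support, 0 = non-support)
        mask = (np.asarray(j_true) == 1)
        l_sup = np.where(mask, nll_sup, 0.0).mean()       # support decoder only
        l_nsup = np.where(~mask, nll_nonsup, 0.0).mean()  # non-support decoder only
        return l_sup + l_nsup + lam * ce_pred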
Application and results
To investigate the effectiveness of FITS, we conducted experiments on a real private loan dataset. We developed an AI-judge assistant, named Xiaozhi, based on FITS. We also applied FITS in real courts and achieved satisfactory results.
Experiment
Due to the page limitation, here we show only the comparison results of judgment prediction, which is the most important task of a smart trial. We compare our method with other deep learning baselines on the collected private loan dataset and discuss the role that legal knowledge plays in its performance.
Experimental setting
We collected a total of 61 611 private loan law cases. Each instance in the dataset consists of a factual description and the plaintiff's multiple claims. We will release all the experimental data to motivate other scholars to investigate this problem further. Macro F1 and Micro F1 (Mac.F1 and Mic.F1 for short) were adopted as the primary metrics for algorithm evaluation. We denote the co-attention-based method with injected legal knowledge as CoATT+LK.
Overall performance
We evaluated our model and the baselines on the private loan dataset. In addition to Mac.F1 and Mic.F1, we used macro-precision (Mac.P) and macro-recall (Mac.R) to evaluate the methods. The performance on the test set is summarized in Table 1. We can draw the following conclusions from the results: First, the performance of the deep learning based methods, e.g., TextCNN, BiLSTM+ATT, and HARNN, significantly exceeded the traditional machine learning method TF-IDF+SVM, which shows the success of applying neural networks for LJP. Second, LSTM-based methods gave better results than the CNN-based approach, demonstrating the advantages of extracting contextual features using LSTM. Third, BERT outperformed all the deep learning based methods, which shows the pre-trained language model's strong representation abilities, even for the legal domain.
Finally, the co-attention model gave a 4.8% absolute increase in performance (the average of Mac.F1 and Mic.F1) compared with BERT, which leads to two conclusions. First, directly applying pre-trained models to specific domains still has room for improvement. Second, it verifies our assumption that the bi-directional attention flows of information between facts and claims help locate crucial facts. Most importantly, injecting legal knowledge into co-attention networks gave another 1% absolute increase compared with the co-attention model and achieved the best results among all methods.
Application
The full-process smart trial system has played an important role in the construction of the smart court in Zhejiang Province. We developed a substantive AI-judge assistant robot, called Xiaozhi, based on FITS, which has already assisted seven Zhejiang Provincial courts in financial lending and private lending cases. Xiaozhi moved the full-procedure trial mode from the experimental stage to practical application. As a judge's assistant, Xiaozhi demonstrates the advantages of AI in the judicial field. FITS can understand legal documents, extract case information, justify evidence, and record the parties' speeches. It assists the judge by automatically questioning, advancing the trial process independently, summarizing the focus of disputes, predicting the outcome of the judgment, and generating judgment documents. If the judge's judgment deviates from similar cases, the system also reminds the judge of the risk.
Compared with the traditional court, FITS has enabled a new "human-machine integration" mode of intelligent trial in real applications. The litigation procedures in China consist of four phases: (1) In the trial preparation phase, Xiaozhi can push the pre-trial report to the judge and analyze the report's elements. (2) In the investigation stage, Xiaozhi synchronously conducts semantic recognition and text conversion, automatically helps the judge with questioning, and justifies the validity of evidence.
(3) In the debate stage, Xiaozhi can convert the dialogue between the parties into text in real time, and summarize the dispute's focus from the dialogue and extract its elements. (4) In the judgment stage, Xiaozhi helps predict the outcome of the case and generate judgment documents in real time, which enables the judge to pronounce judgment in court after review and confirmation.
FITS breaks through geographical limitations and avoids the inefficiency of traditional courts. It has brought "networking," "digitization," and "intelligence" to the smart court. The application of FITS has achieved satisfactory results: (1) In the automatic questioning task, the accuracy rate of procedural questioning can reach 96%, and the hit rate of factual questioning can reach 70%. (2) In the high-frequency private lending and financial borrowing cases, the accuracy of court trial record summarization can reach 90%, the accuracy rate of generated dispute focuses can reach 70%, and the factor prediction accuracy rate can reach 80%. (3) The accuracy of financial loan evidence determination is 92%, and that of private lending is 95%; the accuracy of evidence classification can reach 90%. (4) FITS predicts the trial's outcome by combining the legal knowledge graph and big data analysis, with an accuracy rate of 96%. (5) With the help of our system, the rate of sentence pronouncement in court has improved from 40% (traditional judge system) to 90%, and the proposed system shortens the trial time from 2-3 h (traditional judge system) to 20-30 min. Moreover, the average number of trial days for initial financial loan cases has been shortened from 98 in 2017 to 66 in 2020, and no case has been revised or remanded for retrial.
Related works
This paper attempts to cover the primary process of adjudication; the essential steps/stages for a trial pipeline include making judgments and writing judgment documents. The technologies of text classification and legal prediction are often used to assist in these tasks. In the history of AI and law, there have been many research works. Basically, the legal text classifier is the fundamental technology of our work. Dahbur and Muscarello (2003) gave a classification system for serial criminal patterns. Ashley and Brüninghaus (2009) proposed a model of SMILE+IBP to automatically classify textual facts in terms of a set of classification concepts that capture stereotypical fact patterns. Passage-based text summarization was used to investigate how to categorize text excerpts from Italian normative texts (Kanapala et al., 2019). Liu CL and Chen (2019) applied machine learning methods, including gradient boosting, multilayer perceptrons, and deep learning methods with LSTM units, to extract the gist of Chinese judgments of the supreme court.
Concerning the works of legal prediction, remarkable results have been achieved (Arditi et al., 1998). In the early stages, machine learning, such as argument based machine learning (Možina et al., 2005), was applied to the legal domain. Machine learning has also been applied to predict decisions of the European Court of Human Rights (Aletras et al., 2016;Medvedeva et al., 2020). A time-evolving random forest classifier was designed to predict the behavior of the Supreme Court of the United States (Katz et al., 2017). Recently, Chao et al. (2019) improved the interpretability of charge prediction systems and improved automatic legal document generation from the fact description. They further proposed an interpretable model for charge prediction for criminal cases using a dynamic rationale attention mechanism (Ye et al., 2018). Hu et al. (2020) studied the problem of identifying the principals and accessories from the fact description with multiple defendants in a criminal case.
Conclusions
This paper presents a full-process intelligent trial system. The technical route adopts mainly a combination of knowledge-based models and data-centric models. Knowledge expression and reasoning formalize the judge's legal knowledge and implement logical reasoning according to the judge's logical rules. Big-data-driven technology realizes the tasks of classification, summarization, and prediction through analysis of massive legal texts. Several deep learning models are proposed for legal information extraction, evidence justification, trial summarization, outcome prediction, and judgment document generation.
Note that the application of FITS has not been extended to criminal cases. Application to criminal cases must be very cautious, because the standard of judicial proof in criminal cases is "beyond a reasonable doubt," while the prediction results of an intelligent system cannot be guaranteed to be 100% correct. The predictive model contains machine learning algorithms that are uninterpretable or have "black box" problems, meaning that the process from data input to result output is non-transparent. Therefore, the use of FITS in criminal case trials will be very cautious.
The system explores the in-depth application of big data, modern logic, and AI in the full trial process. The AI trial system also has shortcomings. Even though the existing technologies are good at handling simple cases (such as financial lending and private lending cases), for complex cases the determination of the facts and the application of the law are inseparable from the judge's experience, especially regarding ethics and morality. It is difficult for AI to accurately predict the outcome of complex cases while taking such empirical factors into account. Therefore, we need to operate the AI trial system in a human-machine interaction mode and enable judges to provide real-time feedback on algorithm results.
Contributors
Bin WEI, Kun KUANG, Changlong SUN, and Jun FENG discussed the organization of this paper from different aspects, including the views of both law and computer science. Bin WEI drafted mainly Sections 1, 3, 4, and 10.
Kun KUANG drafted mainly Sections 6 and 7. Changlong SUN drafted mainly Sections 2 and 9 and provided judicial big data and technical models for experiments in Section 8.
Jun FENG drafted mainly Section 5 and conducted the experiments in Section 8. Fei WU, Xinli ZHU, and Jianghong ZHOU guided the research. All authors revised and finalized the paper.
FAILURE RATE MEASUREMENT ON SILICON DIODES REVERSE POLARIZED AT HIGH TEMPERATURE
This paper measures the failure rate of reverse-polarized silicon diodes with the aim of experimentally justifying the rules of the European Space Agency (ESA) that extend component life, increase reliability, and enhance end-of-life performance by using oversized devices (derating rules). In order to verify the derating rules, 80 silicon diodes are reverse polarized in a high-temperature environment. The diodes are divided into 4 groups of 20 diodes, with a different voltage applied to each group, in order to relate the failure rate to the applied derating. The experiment described in this paper is a temperature-accelerated test of the leakage current under reverse polarization (HTRB, High Temperature Reverse Bias), so that an acceleration factor can be applied to reduce the test duration. By using a thermal model of the whole system and the equations that describe reverse-polarized diode behaviour, the 80 diodes can be stressed to a very high temperature while avoiding the runaway effect. Finally, the failure rate is calculated and a revision of the derating rules is proposed based on the experimental results obtained.
INTRODUCTION
ESA (European Space Agency) uses rules to extend the life of devices and thus guarantee proper operation for the duration of a mission [1]. This practice, which oversizes the devices and makes them work below their maximum ratings, is known as "derating". These rules, mandatory in space missions, increase the economic cost, the weight, and the size of the devices selected for a design. They also reduce the list of available devices, complicating their suitable selection (such devices are not always available in space-qualified versions). In the end, selecting the needed devices becomes a difficult task and increases the cost of development; of course, it also assures higher reliability. For example, the derating rules established by ESA for a Schottky diode define maximum values not to be surpassed, as follows:
• 75 % of maximum forward current,
• 75 % of maximum reverse voltage,
• 50 % of maximum dissipated power, and
• a maximum junction temperature of 110 °C or 40 °C below the manufacturer's maximum rating, whichever is lower.
Thus, in the case of the reverse voltage, the maximum value permitted under derating conditions would be 75 % of the maximum reverse voltage specified in the manufacturer's datasheet. Additionally, NASA (National Aeronautics and Space Administration) also proposes derating rules for devices, although in some cases they differ from the rules proposed by ESA. For example, the 2003 NASA rules [2] for the same Schottky diode establish the following deratings:
• 50 % of maximum forward current,
• 70 % of peak inverse voltage (PIV),
• 50 % of maximum peak current,
• 80 % of maximum junction temperature (not exceeding 125 °C or 40 °C below the manufacturer's maximum rating, whichever is lower).
Some years earlier, in 1991, NASA had published derating rules [3] that resembled those currently used by ESA, but differ from those NASA uses nowadays. For the Schottky diode example, the compulsory derating then required by NASA was:
• 75 % of maximum reverse voltage,
• 50 % of maximum dissipated power, and
• a maximum junction temperature of 110 °C.
Other references [4] also propose that the failure rate is related to the stress and follows the 5th power law, which states, for example, that a 15 % increase in the voltage applied to a device doubles its failure rate. Summarising, the derating rules proposed by ESA and NASA, and other derating-related laws applied to electronic devices, are all based on mathematical models and on practical experience. However, a 10-year mission should arguably have different requirements than a mission lasting only 6 months, and thus different derating rules. Also, when the ESA and NASA rules are compared, differences appear that suggest the reliability of electronic devices is still not fully understood.
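The claim about the 5th power law can be verified with a one-line check:

    # 5th power law: failure rate scales as (V2 / V1)^5, so a 15 % voltage
    # increase gives a factor of 1.15**5 = 2.01, i.e., a doubled failure rate.
    print(1.15 ** 5)   # -> 2.011...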
More in-depth research is therefore needed to better understand the reliability of semiconductor devices, together with experimental results to verify it.
The following questions should be answered: are the derating rules applicable to any mission? And could these rules be updated with new results in order to optimise the cost, mass, and volume of space missions? [5] [6] The experimental results obtained, relating the failure rate to the applied derating, can be used to justify and/or update the existing derating rules applied in space missions.
EXPERIMENT DEVELOPMENT
This paper follows these steps: A) selection of the device under test; B) selection of the variable to analyse; C) experimental setup; D) auxiliary components; E) proposed thermal model; F) analysis of the acceleration factor. The first task consists of selecting the device and the variable to test. The next task is to propose an experimental setup; then all additionally needed components of the setup are identified. Finally, the thermal model and the acceleration factor are calculated in order to predict the results of the experiment. These steps should guarantee experimental results showing the failure rate of the devices.
Selection of the device under test
In order to simplify the experiment, a diode is considered as the device under test (DUT), because other devices, like transistors, imply additional complexity. The final selection of the DUT is based on the diode list found in the EPPL (European Preferred Parts List) [7].
A commercial equivalent for each diode of this list is identified, and a selection is then made considering availability, electrical characteristics, package, and price of the equivalent diode (see Table 1). After discarding the diodes that have no commercial equivalent or come in a package other than SMD (needed to solder the device onto a copper plane of a PCB), the diode STPS1045 is selected as the best option for this test due to its size, package, and price. Additionally, this diode is a European low-voltage Schottky commonly used in space missions in 3.3 V and 5 V output rectifiers. It has a maximum reverse voltage of 45 V and a reverse leakage current of 1.8 mA at 100 °C (45 V).
Selection of the variable to analyse
ESA's diode derating rules consider forward current, reverse voltage, dissipated power, and junction temperature.
In this first analysis, the dissipated power and the junction temperature are not studied directly, because these characteristics can be calculated as a function of the forward current and/or reverse voltage. The reverse voltage (V_R) is therefore chosen as the characteristic to analyse. The reverse current will be measured using a shunt resistor connected in series with each DUT, independently of the other diodes.
Additionally, the influence of the self-heating effect is less important in reverse polarization than in forward polarization.
Experimental setup
Once the diode and the characteristic to analyse are chosen, an accelerated test with increased temperature [8], [9] and [10], also known as an HTRB (High Temperature Reverse Bias) test, is proposed. The derating rules will be applied to the reverse-biased diodes, limiting the maximum reverse voltage, and it will be studied how this extends the life of the devices.
In order to guarantee the thermal uniformity of all DUTs, they must be soldered to the copper plane of the PCB as close to each other as possible and placed inside a climatic chamber, which controls the ambient temperature surrounding the devices and the setup (see Fig. 1).
During the test, a continuous reverse voltage stress is applied to the diodes and the temperature is kept constant.
The total population of diodes used in this experiment is 80, divided into 4 groups of 20 diodes; each group is reverse biased with a different voltage (V_R).
In order to obtain results showing which derating is the most convenient to apply, the following reverse voltages (V_R) are used for the four groups: 45 V (the maximum rating), 35 V, 24 V, and 12 V.
Auxiliary components
To measure the reverse leakage current (I_R) that appears when the diode is reverse polarized, which is the variable to analyse, a shunt resistor is used in series with each diode. The value of this shunt resistor must maximize the measurement resolution while its voltage drop must not interfere with the experiment (V_Rshunt < 2 % of the voltage V_R); the shunt resistor value is selected accordingly for each voltage group.
Proposed thermal model
With the aim of guaranteeing that the DUT does not reach the maximum junction temperature and/or thermal runaway due to self-heating during the experiment, the operating temperature for the HTRB test must be chosen using an appropriate thermal model of the setup.
To do this, the thermal resistances from junction to case (Rth_j-c = 3 °C/W) and from case to ambient (Rth_c-a = 56 °C/W) are obtained from the DUT's datasheet; the latter depends on the copper area the DUT is soldered to. These constants are included in the thermal diagram of the setup, yielding the thermal model (see Fig. 2). On the other hand, from the I_R vs. V_R curve of the DUT's datasheet (see Fig. 3), the I_R values can be obtained at several junction temperatures T_j (25 °C, 50 °C, 75 °C, 100 °C, 125 °C and 150 °C) for the four values of V_R used (45 V, 35 V, 24 V and 12 V) (see Fig. 4). Next, with these values, 5th-order regression curves are fitted with curve-fitting software (see Fig. 4 and equations 1, 2, 3 and 4), one for each of the four reverse voltages of this test. With these four equations, V_R = 45 V being the voltage whose self-heating has the largest influence on the junction temperature, a recursive simulation is carried out, using the expression for the dissipated power of the diode, P_D = V_R · I_R(T_j) (Eq. 5), and the expression relating T_j to this dissipated power, T_j = T_a + (Rth_j-c + Rth_c-a) · P_D (Eq. 6), in order to verify that the diode does not thermally run away and to determine the maximum operating temperature at which the diodes can be stressed before thermal runaway occurs. The result of the recursive simulation (see Fig. 5) is that, at an operating temperature of about 110 °C, diodes blocking V_R = 45 V reach thermal runaway (T_j > 150 °C), destroying the silicon junction of the semiconductor. Therefore, the temperature of the experiment is reduced by 10 °C and fixed at 100 °C by programming the climatic chamber used.
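The recursive simulation can be reproduced with the sketch below. The fitted 5th-order polynomials (Eqs. 1-4) are not listed in this excerpt, so a hypothetical leakage model is substituted: I_R doubling every 10 °C, anchored at the datasheet point of 1.8 mA at 100 °C and 45 V. With this assumed model, a 100 °C ambient converges to a safe T_j while a 110 °C ambient diverges, matching the reported conclusion.

    def junction_temperature(t_amb, v_r, rth_total=3.0 + 56.0, t_max=150.0):
        # Iterate T_j = T_a + Rth_total * P_D with P_D = V_R * I_R(T_j).
        def i_r(t_j):  # hypothetical leakage model (assumption, see text)
            return 1.8e-3 * 2.0 ** ((t_j - 100.0) / 10.0)
        t_j = t_amb
        for _ in range(1000):
            t_j_new = t_amb + rth_total * v_r * i_r(t_j)
            if t_j_new > t_max:
                return None                    # thermal runaway: T_j > 150 C
            if abs(t_j_new - t_j) < 1e-6:
                break                          # converged operating point
            t_j = t_j_new
        return t_j

    print(junction_temperature(100.0, 45.0))   # ~108.8 C, stable
    print(junction_temperature(110.0, 45.0))   # None: runaway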
Analysis of the acceleration factor
As said before, the experiment shown in this paper stresses the diodes within their derated limits [1], which makes failures harder to obtain; to observe a failure, an accelerated test is performed. Using the Arrhenius equation (Eq. 7) to calculate this acceleration factor, in the form customized by NASA (Eq. 8), with an activation energy constant E_A = 0.2656 eV [11] and the operating temperature of the experiment (T_a = 100 °C), the obtained acceleration factor is 8. For example, 1000 hours of this test are equivalent to 8000 hours of operation of the DUT at 25 °C. Here A_T (also written π_T) is the acceleration factor, T_1 and T_2 are the two temperatures, λ_T1 and λ_T2 are the failure rates at those temperatures, λ_Tj is the failure rate at the operating temperature, and T_j is the junction temperature of the semiconductor in K.
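The factor can be verified with the standard Arrhenius form (Eqs. 7-8 are not reproduced in this excerpt, but the stated result is recovered):

    import math

    K_B = 8.617e-5          # Boltzmann constant, eV/K
    E_A = 0.2656            # activation energy from [11], eV

    def acceleration_factor(t_use_c, t_stress_c):
        # AF = exp[(E_A / k_B) * (1/T_use - 1/T_stress)], temperatures in kelvin
        t_use = t_use_c + 273.15
        t_stress = t_stress_c + 273.15
        return math.exp((E_A / K_B) * (1.0 / t_use - 1.0 / t_stress))

    print(round(acceleration_factor(25.0, 100.0)))   # -> 8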
RESULTS
Until now, the results (see Fig. 6) show quite stable behaviour of the reverse leakage current for the four derated diode groups. In a first transient stage, an increase of the reverse leakage current is observed until it reaches steady state after the climatic chamber temperature settles at 100 °C. The standard calculation of the failure rate is λ = x / (T_DH · AF) (Eq. 9), where λ is the failure rate in failures / 10^6 hours, x is the number of failures, T_DH is the summation of the number of units in operation multiplied by their time of operation, and AF is the acceleration factor. Additionally, the mean time to failure is defined as MTTF = 1/λ (Eq. 10), where MTTF is the mean time to failure (the time for 63 % of the population to fail) and λ is the failure rate.
Although no degradation has been detected and no diode has failed after 4887 hours of test, so the exact value of the failure rate cannot be obtained, it is still possible to bound the failure rate by assuming that one diode fails just one hour after the last measurement of the devices. The failure rate can then be calculated using x = 1, T_DH = 80 devices multiplied by 4887 hours, and AF = 8 (as previously calculated) in Eq. 9: λ < 1 / (80 · 4887 · 8) = 0.320/10^6 failures per hour (Eq. 11). The mean time to failure, using Eq. 10, is therefore MTTF > 3.13 × 10^6 hours. This points out that the derating rules should be reviewed to take into account the duration of the mission, for instance by increasing the allowed derating values at least for 6-month missions.
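The bound follows directly from Eqs. 9 and 10:

    x = 1                       # assumed single failure just after the test
    t_dh = 80 * 4887            # device-hours under test
    af = 8                      # acceleration factor
    lam = x / (t_dh * af)       # failures per hour
    print(lam * 1e6)            # -> 0.3197... failures / 10^6 h
    print(1.0 / lam)            # MTTF bound: ~3.13e6 h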
Figure 3. Relationship I_R vs. V_R found in the DUT's datasheet.

Figure 4. Values of I_R extracted as a function of T_j and V_R, and fitted to 5th-order polynomials.

Table 1. Diode selection.
This paper studies the effect of the derating rules applied to reverse-polarized Schottky diodes on the failure rate, with the aim of comparing the derating rules published by ESA and NASA with experimental results. A high-temperature reverse-bias experiment was developed using a population of 80 diodes divided into 4 groups (4 different derating values) of 20 diodes each, measuring the leakage current of every single reverse-polarized diode. After 4887 h under test at 100 °C, which is equivalent to more than 39 000 h at 25 °C, no degradation was detected and no diode failed, giving a maximum failure rate of 0.320 failures / 10^6 h. The absence of failures in the diodes after 4887 h at 100 °C with deratings between 100 % and 30 % means that, in the short term (within the scope of this work, with only 4887 h under test), the derating has not measurably reduced the failure rate.
On Magnetic Field Screening and Expulsion in Hydride Superconductors
Reference [1] presents evidence for magnetic field screening and “subtle” evidence for magnetic field expulsion in hydrides under high pressure, which is argued to support the claim that these materials are high temperature superconductors. We point out here that data presented in different figures of Ref. [1] are inconsistent (i) with one another, (ii) with other work by the same authors on the same samples [2, 3], and (iii) with the expected behavior of standard superconductors. This suggests that these magnetic phenomena reported for these materials are not associated with superconductivity, undermining the claim that these materials are high temperature superconductors.
In 2015, Eremets and coworkers reported high temperature superconductivity in sulfur hydride (hereafter H3S) under pressure [4], starting the hydride superconductivity epoch. Since then, considerable evidence for superconductivity in various pressurized hydrides has been presented based on resistance measurements [5]; however, little magnetic evidence for superconductivity has been reported so far. In the original paper [4] some magnetic evidence based on SQUID measurements was presented. After a 7-year hiatus, new magnetic evidence was presented by Minkov et al. in [1]. That evidence is the focus of this paper. We discuss here the magnetic measurements reported for sulfur hydride (H3S), but exactly the same considerations apply to the same measurements reported for lanthanum hydride (LaH10) in Ref. [1], the only other hydride material for which magnetic measurements have been reported to date.
We raised the issues discussed in this paper with the authors of Ref. [1] through emails on repeated occasions beginning October 2022, with no response from the authors. We also submitted a Comment on Ref. [1] to Nature Communications raising these issues in November 2022. Five months later, the journal decided that the technical details which explain the discrepancies we point out would not be of sufficient interest to the wide readership of Nature Communications. Therefore, we bring these issues here to the attention of researchers in the field who we believe could benefit from this information.

The top left and right panels of Figure 1 reproduce Fig. 3a and Fig. 3e of Ref. [1], respectively. To the best of our understanding, according to the description provided in the paper, both panels show, in their light blue and blue curves respectively, the same quantity: magnetic moment versus magnetic field, for the same sample at the same temperature (100 K) and same pressure (155 GPa). The center blue curve in the top right panel is the virgin curve, which starts (when properly shifted vertically, as shown in Fig. S10 of [1]) with zero moment at zero applied field. It should be the same as the light blue curve labeled 100 K in the top left panel. Yet the curves look very different: the left panel curve shows an upturn for magnetic fields beyond 95 mT, while the right panel curve shows no upturn. When both curves are plotted on the same scale in the bottom panel of Fig. 1, it is apparent that they differ greatly in magnitude and shape.
It should also be noted that the rapid decrease in the magnitude of the magnetic moments beyond the minimum points of the curves shown in the top left panel of Fig. 1 is inconsistent with what is expected for a type II superconductor with very large upper critical field [6], estimated in Ref. [1] to be H_c2(T = 0) ∼ 97 T. For example, at T = 100 K, H_c2(T) should be above 60 T. When corrected for the demagnetization factor, estimated as 1/(1 − N) ∼ 8.5 in Ref. [1], this implies that the curve labeled T = 100 K should evolve smoothly from its value attained at H ∼ 95 mT to reach zero at H_c2(T)(1 − N) ∼ 7 T. This is qualitatively inconsistent with the behavior seen in the top left panel of Fig. 1, which shows that the magnetic moment magnitude has already decreased to less than 15% of its maximum value within the small measured field range.

Figure 2 shows as a green curve the magnetic moment versus magnetic field for a hysteresis cycle at temperature 100 K for the same sample at the same pressure, reported in Fig. 4a of Ref. [2]. In the same figure we show the magnetic moment versus magnetic field at the same temperature from the left panel of Fig. 1, i.e., Fig. 3a of Ref. [1]. The blue curve in the left panel of Fig. 2 should be the virgin curve for this hysteresis cycle, joining the green curve smoothly, as is universally seen in such measurements on superconductors. One such typical example is shown in the right panel of Fig. 2, from Ref. [7]. It can be seen that the blue curve in the left panel shows no hint of joining the green curve. In other words, these results, measured on the same sample at the same temperature and pressure in the same laboratory, are completely incompatible with each other under the assumption that they arise from superconductivity in the sample. Note that the results from the hysteresis cycles of Fig. 4a of Ref. [2], for temperatures T = 100 K, 140 K, 160 K, and 180 K, were used to infer the values of critical current versus temperature plotted in Fig. S5 of Ref. [1] (and Fig. 4c of Ref. [2]). For these higher temperatures, the magnetic moment curves of Fig. 3a of Ref. [1] are equally incompatible with the hysteresis cycle curves of Fig. 4a of Ref. [2].
In Fig. 3 we consider the magnetic moment measurements of Ref. [1] at lower temperature in relation to the flux trapping results under zero field cooling (ZFC) reported in Ref. [3] for T = 30 K.

Fig. 1 caption (panels b and c): The middle blue curve in (b) is presumably the virgin curve, which should be identical to the light blue curve in (a) labeled 100 K. (c) Quantitative comparison of the virgin curves for 100 K from (a) (Fig. 3a of Ref. [1]) and (b) (Fig. 3e of Ref. [1]).

We show the curve for the magnetic moment from Fig. 1 for T = 20 K; the curve for T = 40 K is very similar, as seen in Fig. 1. The ZFC flux trapping data at low fields, shown in the inset of Fig. 3, show an onset of field trapping at H_tp = 42 mT. That necessarily implies that the sample is allowing the magnetic field to penetrate. Yet the magnetic moment behavior shown in the main panel of Fig. 3 shows that the diamagnetic moment at 20 K continues to grow linearly up to H_p = 95 mT, indicating that the field is not penetrating. H_p was identified in Ref. [1] as the field corresponding to H_c1 when corrected for demagnetization, i.e., the field at which the magnetic field starts to penetrate the sample. However, there can be no flux trapping at fields below the lowest field at which the field starts to penetrate, namely H_p. Hence H_tp < H_p is an impossibility. Thus, these reported measurements are in clear contradiction with each other.
Furthermore, the ZFC trapped moment measurements found a saturation moment m_s ∼ 16 × 10^-9 Am^2 [8]. This is larger than the largest diamagnetic moment induced under an applied field, −12.5 × 10^-9 Am^2, as shown in Fig. 3. It is usually the case that the remnant (trapped) moment obtained after a magnetic field is applied and then removed is smaller than the largest diamagnetic moment generated while the field is applied, as seen, e.g., in the right panel of Fig. 2. While the data for the diamagnetic moment are shown only up to a field of 200 mT, it is clear from the behavior seen in Fig. 3 that the induced diamagnetic moment would certainly not increase again at larger fields (if it did, it would be completely inconsistent with standard superconductivity).
The trapped flux measurements of Ref. [3] were interpreted according to the Bean model, in which the applied field penetrates partially when it exceeds the lower critical field and is prevented from penetrating further by strong pinning centers. The model assumes that a critical current j_c flows that is independent of the field magnitude, estimated in Ref. [3] to be j_c ∼ 7.3 × 10^10 A/m^2. When the external field is removed, Faraday's law implies that a reverse current is induced, and field remains trapped due to the same pinning. The maximum trapped moment under ZFC conditions was found in Ref. [3] to be m_s ∼ 16 × 10^-9 Am^2, attained for an applied field H_M ∼ 1.7 T, from which it was inferred that the field magnitude necessary for the applied field to reach the center of the sample was H* ∼ 0.8 T.

Fig. 3 caption: Magnetic moment versus field (Fig. 3a of Ref. [1]) at T = 20 K. Inset: trapped field under ZFC (zero field cooling) and FC (field cooling) protocols at T = 30 K, from Ref. [3]. H_tp = 0.042 T is the threshold field for onset of trapping under ZFC conditions. The curve shows the behavior of the induced moment expected from the Bean model, which was used to interpret the field-trapping results in Ref. [3]. It also shows (blue curve) the qualitative behavior expected for an ideal type II superconductor with no pinning, where the moment would reach zero at the upper critical field (corrected for demagnetization).

With those parameters, the Bean model predicts [9] the behavior of magnetic moment versus applied field shown in Fig. 3, qualitatively different from the measured behavior. Allowing for variations of the critical current with magnetic field would somewhat change this behavior [11], e.g., to what is shown in the right panel of Fig. 2, but its magnitude would never fall below what is expected for an ideal type II superconductor, shown as the blue curve in Fig. 3. Thus, the reported behavior of magnetic moment versus field, shown as the red curve in Fig. 3, is inconsistent with the ZFC trapped flux results reported in Ref. [3]. We have previously also pointed out that the reported ZFC linear behavior of moment versus field shown in the inset of Fig. 3 is inconsistent with what is expected for a superconductor [9].
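For reference, the Bean-model virgin curve used in that comparison can be sketched as follows for a slab with full-penetration field H* = 0.8 T (arbitrary units; demagnetization and field-dependent j_c corrections are omitted, so this is only a qualitative sketch):

    def bean_virgin_moment(h, h_star=0.8):
        # Full shielding (slope -1) at low field; once the field reaches the
        # sample center at H = H*, the moment saturates at -H*/2.
        if h <= h_star:
            return -h + h * h / (2.0 * h_star)
        return -h_star / 2.0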
Ref. [1] uses a background subtraction procedure that involves "recent improvements in the background subtraction procedure (that) have greatly expanded the scope of the method" [3]. However, neither the background signal nor the improved procedure is given in Ref. [1]. Perhaps information on the data processing that was performed, together with the raw data obtained in the measurements [10], would help explain some of the anomalies pointed out above. But even with such clarification, we believe the above analysis indicates that the various reported magnetic measurements are incompatible with one another under the assumption that they originate in superconductivity. Instead, we suggest that they must originate in localized magnetic moments associated with the samples, the diamond anvil cell environment, and/or the measuring apparatus.
The signature property of superconductors, one that cannot be mimicked by localized magnetic moments, is the Meissner effect: the ability to expel magnetic fields when cooled in a field (FC). In Ref. [1], the authors claim to find a "subtle Meissner effect in FC measurements at 2 mT," indicated by the light blue smoothed curve shown in Fig. 4 middle panel. However, in the absence of the blue curve no such evidence is apparent, as the left panel of Fig. 4 shows. In addition, before laser heating the precursor sample is not expected to be superconducting, yet the same measurements show an unexpected divergence between FC and ZFC moments around the supposed critical temperature, as shown in the right panel of Fig. 4. Similar behavior to that in Fig. 4 is also shown for a field of 4 mT in Ref. [1].
While for some standard superconductors with strong pinning the percentage of flux expulsion (Meissner fraction) can be very small at larger fields, it rapidly increases at small fields, as shown, e.g., in Refs. [12-15]. The Meissner fraction is expected to depend on the ratio H/H_c1 [16], and for H3S, H_c1 is estimated to be 0.82 T [1], more than an order of magnitude larger than the lower critical fields of standard superconductors with high T_c such as cuprates and pnictides. So the 2 mT field of Fig. 4 is equivalent to a field of less than 2 Oe for those materials, for which a sizable Meissner fraction is found [12-15]. Additionally, the Meissner fraction is expected to increase as the thickness of the sample decreases [14,17], and the samples used in these high-pressure experiments are rather thin. The absence of any evidence of flux expulsion in hydride materials under pressure, contrary to all other known superconductors, is incompatible with the claim that these materials are superconductors.
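The field-scaling argument in this paragraph is simple arithmetic; a sketch, where the H_c1 value for cuprates/pnictides is an illustrative round number consistent with "more than an order of magnitude" smaller:

```python
H_c1_h3s = 0.82   # T, lower critical field estimated for H3S in Ref. [1]
H_c1_std = 0.08   # T, illustrative upper bound for cuprates/pnictides
H_applied = 2e-3  # T, the FC field of Fig. 4

# The Meissner fraction is expected to scale with H / H_c1 [16], so the
# field equivalent to 2 mT for a standard high-Tc superconductor is:
H_equiv = H_applied * (H_c1_std / H_c1_h3s)
print(f"equivalent field: {H_equiv*1e3:.3f} mT = {H_equiv*1e4:.2f} Oe")
# -> about 0.2 mT, i.e. < 2 Oe, a regime where sizable Meissner fractions
#    are routinely observed [12-15]
```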
In summary, the considerations in this paper, together with other analyses of the magnetic evidence for superconductivity in sulfur hydride and lanthanum hydride discussed recently [9,18-21], strongly indicate that the magnetic measurements interpreted to show superconductivity in hydrides under pressure do not originate in superconductivity. Acknowledgements JEH is grateful to R. Prozorov for illuminating discussions. FM was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) and by an MIF from the Province of Alberta.
Interaction of NADPH-adrenoferredoxin reductase with NADP+ and adrenoferredoxin. Equilibrium and dynamic properties investigated by proton nuclear magnetic resonance.
NADPH-adrenoferredoxin reductase, a flavoprotein from bovine adrenocortical mitochondria, has been investigated to elucidate the equilibrium and dynamic properties of its interaction with NADP+ and adrenoferredoxin (adrenodoxin) using proton NMR spectroscopy. The line width of the signals from NADP+ depends on the presence of the reductase. The off rate constant of NADP+ from the reductase is estimated to be about 15-20 s-1 on the basis of line width measurements. No appreciable difference in off rate is detected between the adenine and nicotinamide moieties of NADP+. Transferred nuclear Overhauser effect experiments for NADP+ show time-dependent magnetization transfer profiles with a long lag phase. The proton NMR spectra during titration of the reductase with adrenodoxin reveal that the reductase possesses distinct binding sites for both NADP+ and adrenodoxin. The sharp resonances in the aromatic region due to His-10 and His-62 of adrenodoxin were utilized as a probe to explore the interaction with the reductase. In a mixture of adrenodoxin and the reductase at a mol ratio of 6:1, the T1 values of the histidine residues of adrenodoxin were measured by the inversion recovery method. At low ionic strength, the T1 values of the resonances are not affected by the presence or absence of the reductase. In the presence of the reductase, the T1 values of resonances resulting from the histidine residues become shorter as the concentration of KCl increases because of rapid exchange between bound and free states. At low ionic strength (10 mM phosphate buffer), the off rate from the reductase is estimated to be less than about 4 s-1. The off rate of adrenodoxin from the reductase could be the rate-limiting step in cytochrome c reductase activity at low ionic strength.
Interaction of NADPH-Adrenoferredoxin Reductase with NADP+ and Adrenoferredoxin
EQUILIBRIUM AND DYNAMIC PROPERTIES INVESTIGATED BY PROTON NUCLEAR MAGNETIC RESONANCE* (Received for publication, May 18, 1993, and in revised form, September 30, 1993) Shigetoshi Miura and Yoshiyuki Ichikawa From the Department of Biochemistry, Kagawa Medical School, Miki-cho, Kita-gun, Kagawa 761-07, Japan
NADPH-adrenoferredoxin oxidoreductase (adrenodoxin reductase)¹ is an FAD-containing flavoprotein with a molecular mass of 54,000. It is located on the inner membrane of adrenal mitochondria (1). The primary structure of bovine adrenodoxin reductase has been deduced from the nucleotide sequence of cDNA (2,3). It functions in the mitochondrial electron transport system supporting cytochrome P-450-dependent steroidogenic hydroxylation reactions (4).
* This work was supported by Grants-in-aid 04225224 and 05209222 for Scientific Research on Priority Areas from the Ministry of Education, Science, and Culture of Japan. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
Adrenodoxin reductase receives two reducing equivalents from NADPH at once and then delivers one reducing equivalent to adrenoferredoxin (adrenodoxin), an iron-sulfur protein containing a 2Fe-2S cluster. This two- to one-electron step-down process via a semiquinone form was thought to be the major function of the electron-transferring system composed of adrenodoxin reductase and adrenodoxin (5 and references therein). It was found that adrenodoxin forms a tight one-to-one complex with adrenodoxin reductase (6). At first, the complex of adrenodoxin and adrenodoxin reductase was regarded as the active species for electron transfer to cytochrome c. Later, Lambeth et al. (7) proposed that adrenodoxin transfers electrons as a mobile shuttle between adrenodoxin reductase and cytochrome P-450scc (8).
Studies by chemical modification and oligonucleotide-directed mutagenesis of adrenodoxin proved that the binding sites on adrenodoxin for the reductase and cytochrome P-450scc are nearly identical and are located in the sequence spanning the negatively charged amino acid residues between Asp-72 and Asp-86 (9-11). Recently, proton NMR studies of adrenodoxin demonstrated that a conformational change occurring upon reduction of adrenodoxin lies in the sequence of negatively charged amino acid residues assigned to the interaction site for the redox partners (12,13). The conformational change of adrenodoxin, depending on its redox state, could control the binding affinity of adrenodoxin for the redox partners. This may provide a structural basis for adrenodoxin to work as a moving shuttle in the electron transfer mechanism.
Kinetic properties of adrenodoxin reductase in cytochrome c reductase activity have been explored extensively (8). It was shown that the maximum turnover number of the reductase (kcat) in adrenodoxin-mediated electron transfer to cytochrome c was about 10 s-1 under optimum conditions. The rate-limiting step in NADPH-dependent cytochrome c reductase activity was thought to be the electron transfer process from FAD of adrenodoxin reductase to the iron-sulfur cluster of adrenodoxin within the complex between adrenodoxin and the reductase (14). In contrast, an analogous electron transfer system, ferredoxin-NADP+ reductase and ferredoxin, exhibits a much larger turnover number in cytochrome c reductase activity (100 cytochrome c reduced per second) (15,16). So far, however, no direct measurement of the electron-transferring rate within the complex has been reported.
The present study was conducted to elucidate the interaction of adrenodoxin reductase with adrenodoxin and NADP+ in dynamic and equilibrium states. The use of proton NMR spectroscopy for small molecules interacting with a macromolecule has provided insight into their static and dynamic binding modes (17-21). We measured the proton NMR spectra of adrenodoxin and NADP+ in the presence of adrenodoxin reductase. The interactions of adrenodoxin and NADP+ with the reductase were characterized by measuring NOEs, T1 values, and line widths of the resonances. The present results revealed the binding mode of NADP+ to the reductase and the dissociation rates of adrenodoxin and NADP+ from the reductase.
MATERIALS AND METHODS
Chemicals-NADP+ and NADPH were obtained from Sigma. Deuterated water and sodium 3-trimethylsilylpropionate-2,2,3,3-d4 were purchased from MSD Isotopes (Montreal, Canada). Other chemicals were of the highest quality obtainable from Wako Pure Chemicals (Osaka, Japan).
Preparation of Adrenodoxin and Adrenodoxin Reductase-Bovine adrenodoxin and NADPH-adrenodoxin reductase were prepared from bovine adrenocortical mitochondria. Adrenodoxin was prepared as described previously. Samples for 1H NMR experiments were prepared by repeated concentration and dilution with 99.9% deuterated potassium phosphate buffer using an ultrafiltration membrane cone (Centriflo CF-25, Amicon). The pH values of the samples were measured directly in the NMR sample tubes on a Beckman pH meter equipped with a combined glass electrode. No correction for the deuterium isotope effect on the glass electrode was made, and pH* denotes a meter reading. Anaerobic samples for NMR measurement were prepared in NMR sample tubes sealed with a rubber septum by repeated evacuation and flushing with oxygen-free nitrogen gas. Titrations under a nitrogen atmosphere were performed using an air-tight syringe washed with deoxygenated buffer under nitrogen gas.
1H NMR Methods-1H NMR spectra were acquired with a JEOL GX-400 NMR spectrometer equipped with an array processor. The temperature of the sample was controlled using a JEOL GVT temperature control unit. One-dimensional 1H NMR spectra were recorded over 5,000 Hz with a digital resolution of 0.61 Hz/point unless otherwise stated. An exponential line broadening of 1 Hz was applied to increase the signal-to-noise ratio. The spin-lattice relaxation time (T1) was measured using an inversion recovery pulse sequence, and T1 values were calculated by a nonlinear curve-fitting method. Two-dimensional nuclear Overhauser effect spectra (NOESY) were obtained in the phase-sensitive mode with a mixing time of 150 ms (25,26). Two-dimensional spectra were collected over 5,000 Hz with quadrature detection in both dimensions. For the t1 (evolution period) dimension, the time-proportional phase incrementation method (27) was employed. The carrier frequency was placed on the residual HDO signal. Solvent suppression for NOESY spectra was achieved through the observation channel with a DANTE (delays alternating with nutations for tailored excitation) pulse sequence (28). Normally, 32 scans were accumulated for each value of t1, and 512 t1 values were recorded with free induction decays of 2,048 complex points. A Gaussian window function for t2 (detection period) and a shifted sine-bell for t1 were used for resolution enhancement. Spectra were zero-filled to 2048 × 1024 points with digital resolutions of 2.44 Hz/point in t2 and 4.9 Hz/point in t1. All spectra were referenced to internal sodium 3-trimethylsilylpropionate-2,2,3,3-d4.
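The digital resolutions quoted above follow directly from the spectral width divided by the number of points in each dimension; a quick arithmetic check (the point counts are inferred from the quoted resolutions, not stated explicitly in the text):

```python
# Digital resolution (Hz/point) = spectral width / number of points.
sw = 5000.0  # Hz, the spectral width used throughout

print(sw / 8192)   # ~0.61 Hz/point, 1D spectra (implies 8192 points)
print(sw / 2048)   # ~2.44 Hz/point, NOESY t2 after zero filling to 2048
print(sw / 1024)   # ~4.88 Hz/point, NOESY t1 after zero filling to 1024
```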
The time-dependent transferred NOE on the proton resonances of the substrate was measured using selective irradiation of the resonances from the free substrate (29,30). The irradiation power was attenuated to 32 decibels below 0.5 watt but was sufficient to saturate the resonance of interest within 30 ms. The resonance intensity with irradiation on the resonance of the free substrate, I(t), was measured as a function of the duration of irradiation. The resonance intensity with control irradiation at 2.25 ppm, Ic(t), was also measured as a function of irradiation time. The ratio between them is expressed as the relative intensity.
RESULTS
Adrenodoxin reductase is a membrane-associated flavoprotein with a relatively large molecular mass, and little 1H NMR work on the reductase has been reported. Thus, we first measured the 1H NMR spectra of the reductase and characterized their pH*-dependent spectral changes. The 1H NMR spectra of adrenodoxin reductase in the aromatic region are shown in Fig. 1. Resonances showing pH-dependent chemical shifts could be tentatively assigned to the C2H or C4H protons of histidine residues.
Interaction of NADP+ with Adrenodoxin Reductase-1H NMR spectra of the reductase during its titration with NADP+ indicate no major spectral changes in the region of the aromatic resonances. No signals arising from NADP+ appear in the spectra until more than the stoichiometric amount of NADP+ is added, indicating that there is essentially no free NADP+ when a substoichiometric amount of NADP+ is present (Fig. 2). This suggests that the dissociation constant (Kd) of NADP+ is much lower than the concentration of the reductase employed in this NMR study (0.25 mM), consistent with the reported Ki value of 5.32 μM for NADP+ in a cytochrome c reductase activity assay (6,31). During titration of the reductase with more than the stoichiometric amount of NADP+, the resonances due to NADP+ become sharper as the concentration of NADP+ increases. Under fast exchange conditions, the observed line widths of the substrate would be the weighted average of the line widths for bound and free substrate. As the resonances due to bound NADP+ are broadened beyond detection, averaging of the line width between free and bound NADP+ in the sample of the reductase with 1.3 eq of NADP+ would cause extensive broadening of the resonances from free NADP+. The observed line width of the resonances from free NADP+ indicates no appreciable averaging with those of bound NADP+, suggesting that the exchange rate is much slower than the line width of bound NADP+. Thus, under this slow exchange condition, the dependence of the line width on the mol ratio of NADP+ to the reductase can be attributed to lifetime broadening. As the Kd of NADP+ is much lower than the concentrations employed under the present experimental conditions, the binding site for NADP+ is fully occupied, and the lifetime of free NADP+ is dominated by the off rate constant (koff) of NADP+ from the reductase. Under these conditions, the observed line widths at the various concentrations of NADP+ are described as follows (32): 1/T2,obs = 1/T2,free + koff/(x − 1), where T2,obs is the observed spin-spin relaxation time, T2,free is the spin-spin relaxation time of free NADP+, and x is the mol ratio of NADP+ to the reductase. The off rates of NADP+ from the reductase were calculated from the line widths of the N2H proton of the nicotinamide moiety and the A8H proton of the adenine moiety, and are plotted as a function of the mol ratio between NADP+ and the reductase in Fig. 2B. The calculated koff remains constant over the wide range of mol ratios examined, suggesting that the equation is valid under the conditions employed and that no other significant effects interfered. The resonances from both the adenine and nicotinamide moieties of NADP+ indicate a similar off rate of about 15-20 s-1; no significant difference in the off rates between the nicotinamide and adenine moieties of NADP+ was found.
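As a sketch of this lifetime-broadening analysis, the slow-exchange relation above can be inverted for koff given the observed and intrinsic linewidths (linewidth Δν = 1/(πT2)); the numerical linewidths below are placeholders, not the measured values:

```python
import math

def k_off_from_linewidth(lw_obs_hz, lw_free_hz, mol_ratio_x):
    """Estimate the ligand off rate from exchange broadening of the
    free-ligand resonance under slow exchange with a saturated binding site.

    From 1/T2,obs = 1/T2,free + k_off/(x - 1) and linewidth = 1/(pi*T2):
        k_off = pi * (lw_obs - lw_free) * (x - 1)
    """
    if mol_ratio_x <= 1.0:
        raise ValueError("need excess ligand (x > 1) for a free resonance")
    return math.pi * (lw_obs_hz - lw_free_hz) * (mol_ratio_x - 1.0)

# Placeholder linewidths (Hz) for the N2H resonance at a 1.5:1 mol ratio:
print(f"k_off ~ {k_off_from_linewidth(14.0, 3.0, 1.5):.1f} s^-1")
# -> ~17 s^-1, i.e. in the 15-20 s^-1 range quoted in the text
```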
NOE Measurements-NOESY spectra of NADP+ in the phase-sensitive mode were measured in the presence of adrenodoxin reductase. Fig. 3 shows a two-dimensional phase-sensitive NOESY spectrum with a mixing time of 150 ms. Negative NOEs were observed between the N1'H/N2H and N2'H/N6H protons of the nicotinamide-ribose moiety of NADP+, and, for the adenosine-ribose moiety, between the A2'H/A8H protons, along with a faint signal between the A1'H/A8H protons. These negative NOEs were not observed in the system without adrenodoxin reductase (data not shown), indicating that the observed NOESY cross-peaks in Fig. 3 arose from magnetization transfer within the NADP+ molecule complexed with the reductase. In contrast, NAD+ did not exhibit any negative NOESY cross-peaks, even in the presence of adrenodoxin reductase (data not shown). This could be due to the low affinity of the reductase for NADH (Km = 5.56 mM) as determined in the cytochrome c reductase activity assay (6).
Time-dependent transferred NOEs between specific pairs of resonances were measured by one-dimensional NOE. Only resonances with cross-peaks in the NOESY spectrum show time-dependent negative NOEs (data not shown). The time-dependent profiles for development of the NOE exhibit a lag phase as long as 80 ms. Usually, a long lag phase is observed in systems with a slow cross-relaxation rate and is taken as an indication of spin diffusion, which is no longer directly connected with the distance between the specific pair of protons (29,30,33). However, the observed negative NOEs are highly specific to the pairs of resonances that form cross-peaks in the NOESY spectrum. A recent simulation demonstrated such an exchange lag phase for a system with an intermediate exchange rate comparable to the cross-relaxation rate (19). This implication is consistent with the off rate of NADP+ from the reductase determined in this work.
Unfortunately, we could not determine interproton distances from the time-dependent transferred NOE measurements. Still, it is possible to estimate qualitatively the conformation of the bound NADP+ from the NOESY spectrum. Within the adenine-ribose moiety of bound NADP+, the intense NOESY cross-peak between the A2'H/A8H protons and the weaker signal between the A1'H/A8H protons suggest that the adenine-ribose glycosidic torsional angle is in the anti conformation. Likewise, the NOESY cross-peaks between the N1'H/N2H and N2'H/N6H protons of the nicotinamide-ribose moiety confine the nicotinamide-ribose glycosidic torsional angle to the anti conformation.
Interaction of Adrenodoxin Reductase with Adrenodoxin-1H NMR spectra recorded during titration of the reductase with adrenodoxin are shown in Fig. 4. [Fig. 4 caption: spectra of the reductase with 0.5 eq (b), 1.0 eq (c), and 1.5 eq (d) of adrenodoxin in 50 mM phosphate buffer at pH* 7.6, 28 °C; the resonances of the His-10 and His-62 side chains, assigned previously (39), are indicated over the spectra.] Several of the resonances of the reductase (resonances A, I, and J in Fig. 1A) are broadened and/or shifted upon complex formation with adrenodoxin. When the stoichiometric amount of adrenodoxin was added, these resonances were completely abolished in the spectrum. This indicates that the interaction between adrenodoxin and the reductase is highly specific and that a tight one-to-one complex is formed between them. It is interesting to note that the resonance J at 8.75 ppm is broadened away not only upon
complex formation with adrenodoxin but also upon reduction of FAD with sodium dithionite.
Noncompetitive binding of adrenodoxin and NADP+ to adrenodoxin reductase is demonstrated in Fig. 5. In the system of the reductase with 2 eq of NADP+, further addition of adrenodoxin abolished the above-mentioned signals arising from the reductase without changing the intensity of the resonances due to NADP+. This indicates that the reductase does not release NADP+ upon complex formation with adrenodoxin, suggesting that the reductase possesses distinct binding sites for both NADP+ and adrenodoxin. Close inspection of the spectra further reveals that the line width of the resonances due to free NADP+ becomes sharper upon complex formation with adrenodoxin. This may imply that binding of adrenodoxin to the reductase slows the off rate of NADP+ from the reductase.
Nonselective T1 values for the C2H protons of the His-10 and His-62 residues of adrenodoxin were measured in a system containing adrenodoxin reductase and adrenodoxin. Complex formation with the reductase affects the relaxation times of these resonances. Chemical modification of adrenodoxin with diethyl pyrocarbonate indicated that the side chains of His-10 and His-62 are not directly involved in the site of interaction with redox partners (13). Thus, the effects on the relaxation time can be attributed to the change in rotational correlation time upon complex formation. In the presence of the reductase at one-sixth the concentration of adrenodoxin, the T1 values of the histidine residues of adrenodoxin become shorter as the concentration of NaCl increases. In contrast, without adrenodoxin reductase, the addition of 0.2 M NaCl does not change the T1 values of the histidine residues significantly (Fig. 6).
At low NaCl concentration, the T1 values of the histidine residues of adrenodoxin are not affected by the presence of the reductase. Under fast exchange conditions, the observed relaxation rate is the population-weighted average, 1/T1o = pB/T1B + pF/T1F, where T1o is the observed spin-lattice relaxation time, T1B and T1F are those for the bound and free states, respectively, and pB and pF are the corresponding fractions. In contrast, under slow exchange conditions the observed spin-lattice relaxation is expected to exhibit a biphasic time dependence, with the fast phase from adrenodoxin in the bound state and the slow phase from free adrenodoxin. Adrenodoxin in complex with the reductase contributes only one-sixth of the observed signal intensity; furthermore, as the line width of the signal from adrenodoxin in the bound state is much broader than that of free adrenodoxin, its contribution to the signal intensity measured as the signal height must be even smaller. [Fig. 7 caption: the koff of NADP+ is about 15-20 s-1; the isoalloxazine ring of FAD is located at a site close to both NADP+ and adrenodoxin.] Indeed, the semilogarithmic plots of the signal intensities were apparently monophasic under these experimental conditions. Assuming the slow exchange condition at low ionic strength, the upper limit of the off rate is estimated to be less than 6/T1F, approximately equal to 4 s-1. The estimated off rate constant is comparable with the kcat of cytochrome c reductase activity at low ionic strength. In the presence of 0.2 M NaCl, assuming the fast exchange condition allows us to estimate T1B to be about 0.6 s, which is comparable with the T1 of the spectral envelope of the reductase complexed with a stoichiometric amount of adrenodoxin at low ionic strength. This supports our interpretation of the present relaxation time results.
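A minimal sketch of the fast-exchange averaging used in this argument, assuming the two-state population-weighted rate average with the 1:6 reductase:adrenodoxin stoichiometry of the experiment; the free-state T1 is a placeholder:

```python
def t1_fast_exchange(t1_bound, t1_free, frac_bound):
    """Observed T1 under fast exchange: relaxation rates average by population.
    1/T1,obs = pB/T1B + pF/T1F.
    """
    rate = frac_bound / t1_bound + (1.0 - frac_bound) / t1_free
    return 1.0 / rate

# One-sixth of the adrenodoxin is bound (reductase at 1/6 the adrenodoxin conc.).
t1_free = 1.2   # s, placeholder for the free adrenodoxin His C2H T1
t1_bound = 0.6  # s, comparable to the T1B estimated for the complex in the text
print(f"T1(obs) ~ {t1_fast_exchange(t1_bound, t1_free, 1/6):.2f} s")
```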
DISCUSSION
Adrenodoxin reductase transfers electrons from NADPH to adrenodoxin. Dynamic and equilibrium properties of the interactions among adrenodoxin reductase and its redox partners were monitored by measuring the 1H NMR spectra of NADP+ and adrenodoxin. Titration of the reductase with adrenodoxin and NADP+ followed by 1H NMR indicates that they form a stable ternary complex. The koff of NADP+ from the complex with the reductase is shown to be as slow as 15-20 s-1. This could be the reason for the long lag phase found in the time-dependent transferred NOE measurements (19). The slow off rate of NADP+ from the complex with adrenodoxin reductase prevented us from reliably calculating distances between specific protons. Fig. 7 is a schematic presentation of the ternary complex among the reductase, NADP+, and adrenodoxin. Asp-76 and Asp-79 of adrenodoxin have been shown to be located at the interface for protein-protein interaction and to participate in the electrostatic interaction with the reductase (10). The resonance J at 8.75 ppm (Fig. 1) is probably located at the interaction site with adrenodoxin and is also close to the isoalloxazine ring of FAD.
Qualitative interpretation of the NOESY spectrum demonstrates that the nicotinamide-ribose glycosidic torsional angle is in the anti conformation in the binary complex with adrenodoxin reductase. Light and Walsh (34) investigated the stereochemistry of hydrogen transfer using [4(S)-2H,4(R)-1H]NADPH and observed a large isotope effect, indicating that the bond between the C4 carbon and the pro-S hydrogen is broken upon reduction of adrenodoxin reductase with NADPH. This implies that the pro-S hydrogen of the nicotinamide ring faces the top of the isoalloxazine ring of FAD with an anti nicotinamide-ribose glycosidic torsional angle. Karplus et al. (35) determined the three-dimensional structure of spinach ferredoxin-NADP+ reductase by x-ray crystallography. They revealed the binding mode of 2'-phosphoadenosine monophosphate and further
extended the discussion to a hypothetical complex with NADPH, which indicates that the pro-R hydrogen of the nicotinamide faces the top of the isoalloxazine ring with an anti nicotinamide-ribose glycosidic torsional angle. This conformation is in agreement with the stereospecificity of the reaction mediated by ferredoxin-NADP+ reductase. The ferredoxin-NADP+ reductase structure with 2'-phosphoadenosine monophosphate was compared with the flavin and NADPH of glutathione reductase after the two flavins had been superimposed, showing that the nicotinamide of glutathione reductase approaches FAD from the direction opposite to that in ferredoxin-NADP+ reductase. Glutathione reductase catalyzes pro-S hydrogen-specific transfer with an anti nicotinamide-ribose glycosidic torsional angle (36,37). Thus, the present results indicate that the relative arrangement of FAD and the nicotinamide-ribose moiety of NADP+ in adrenodoxin reductase is similar to that in glutathione reductase rather than that in ferredoxin-NADP+ reductase, which has long been believed to be analogous to adrenodoxin reductase. This may imply that adrenodoxin reductase folds in a manner different from that found in ferredoxin-NADP+ reductase. Considering the key residues identified in the related enzymes as functionally important for both substrate recognition and formation of the catalytic center, it was concluded that the ferredoxin-NADP+ reductase family does not include adrenodoxin reductase (35). This agrees with our conclusion based on the present 1H NMR study.
The mechanism proposed by Lambeth et al. (7), in which adrenodoxin transfers an electron as a moving shuttle between NADPH-adrenodoxin reductase and cytochrome P-450scc, has been well recognized; that is, adrenodoxin is released from the reductase after being reduced, and reduced adrenodoxin then forms a tight complex with cytochrome P-450scc. After transferring electrons to cytochrome P-450scc, oxidized adrenodoxin binds less tightly to cytochrome P-450scc and is released to complete the cyclic process. A similar mechanism is assumed for electron transfer to cytochrome c as a terminal electron acceptor. The present results demonstrate that the off rate of adrenodoxin from the complex with the reductase becomes slower as the ionic strength of the solution is lowered. The rate is estimated to be less than 4 s-1 and is comparable with kcat for cytochrome c reductase activity at low ionic strength. This suggests that the off rate from the complex could be a rate-limiting step in cytochrome c reductase activity, especially at low ionic strength. The rate of adrenodoxin reductase activity was shown to exhibit a bell-shaped dependence on the ionic strength of the solution (8,38); i.e., the rate is reduced at both high and low ionic strength, with a maximum at about 100 mM NaCl. This is in agreement with our conclusion that the off rate of adrenodoxin from the complex with the reductase depends on ionic strength and is comparable with the turnover number at low ionic strength. Lambeth et al. (14), however, reported that the complex between adrenodoxin and the reductase must have an off rate constant much greater than that of the slowest step in the electron-transferring chain; this was regarded as a requisite condition for the "shuttle mechanism" to work properly. Indeed, they showed that the first-order rate constant for dissociation of the complex between adrenodoxin and the reductase was approximately 300 s-1 and was independent of ionic strength. We have no definite explanation for this discrepancy at the moment. Even if the off rate of adrenodoxin from the complex with the reductase is the rate-limiting step in this electron transfer system, the electron transfer mechanism in which adrenodoxin transfers electrons as a moving shuttle is still valid and the most plausible.
High resolution fabrication of nanostructures using controlled proximity nanostencil lithography
Nanostencil lithography has a number of distinct benefits that make it an attractive nanofabrication process, but the inability to fabricate features with nanometer precision has significantly limited its utility. In this paper, we describe a nanostencil lithography process that provides sub-15 nm resolution even for 40-nm thick structures by using a sacrificial layer to control the proximity between the stencil and substrate, thereby enhancing the correspondence between nanostencil patterns and fabricated nanostructures. We anticipate that controlled proximity nanostencil lithography will provide an environmentally stable, clean, and positive-tone candidate for high-resolution fabrication of nanostructures.
Techniques for fabricating nanoscale features include electron beam lithography 1, direct ion beam milling 2, ultraviolet interference lithography 3, beam-induced material deposition 4, nanoimprint lithography 5, and nanostencil lithography 6, and each has its own process compatibility and advantages.
Nanostencil lithography, a process in which nanostructures are deposited onto, or etched into, a substrate through a stencil mask, provides a number of distinct benefits. First, nanostencil lithography allows for fabrication of nanostructures on substrates with topography. Second, stencils are patterned independently of the target substrate, enabling parallel processing of the stencil and substrate which, in turn, allows for higher process yields and increases in production throughput 7 . The independent patterning of the stencil also ensures that the nanostructure deposited through the stencil is free from any ion-beam contamination and ion-beam induced sputtering. Furthermore, nanostencil lithography decouples the patterning resolution of the mask from the thickness of the structure, making it particularly useful for thick nanostructures. Finally, nanostencil lithography does not require treatment with acids, bases, or high temperatures, thereby making the technique particularly suitable for sensitive substrates 8 .
Over the past few years, significant advances in nanostencil lithography have enabled the routine fabrication of ~200 nm feature sizes 6. However, the inability of nanostencil lithography to fabricate features with nanometer precision has significantly limited its utility. In one instance in which nanostencil lithography achieved sub-20 nm dot resolution 9, the large distance between the stencil and substrate led to significant feature broadening, limited the thickness of patterned structures to 10 nm, and would make achieving sub-20 nm nanogap resolution exceedingly challenging. In this paper, we report a process for performing nanostencil lithography down to a feature and gap resolution of 15 nm for the patterning of nanogap structures. We chose to demonstrate the effectiveness of the developed nanostencil lithography process by fabricating nanostructures with nanogaps, which are typically more challenging to fabricate than stand-alone structures.
Nanogap structures are important in a broad range of devices, including surface enhanced Raman spectroscopy (SERS) 10, single molecule fluorescence sensing 11,12, transverse electrodes for nanopore sensing 13, spin 14 and ultrafast 15 transistors, molecular electronics 16,17, magnetic tunneling junctions 18, and nanomagnetics 19. In many of these applications, device performance depends critically on both the gap size and the feature dimensions 20, and is sensitive to geometric and surface imperfections that often lead to a significant deterioration in performance 21. Most methods for achieving sub-15 nm gaps, such as electromigration break junctions 22, direct ion beam milling 23, and controlled electroplating 24, require active feedback control to achieve the desired gap size, thereby significantly limiting throughput and process compatibility. Alternatively, sub-10 nm resolution has been achieved directly with electron beam lithography; however, obtaining this patterning resolution requires use of the high-contrast resist hydrogen silsesquioxane (HSQ) 1. There are a number of drawbacks associated with the use of HSQ, including its environmental stability issues 25 as well as dose and development sensitivity 26.
Furthermore, as a negative tone resist, HSQ may not be suitable for the patterning of some structures.
Herein, we demonstrate the ability of our nanostencil lithography process to achieve sub-15 nm resolution on nanogap structures. The improvement in resolution is achieved by addressing the dominant limitations of nanostencil lithography, foremost of which are geometric blurring of deposited features due to insufficient proximity of the stencil mask and the substrate, and the patterning resolution of the stencil mask itself.
We first describe the nanostencil lithography process flow used to pattern nanostructures onto a target silicon substrate (Fig. 1). A photolithographically patterned 100 μm-thick SU-8 layer was first used as a mask to etch a raised square platform into the target silicon substrate (Surface Technology Systems, inductively coupled plasma reactive ion etch). A piranha bath (3 H2SO4 : 1 H2O2, at 80 °C for 60 min) removed the remaining SU-8 from the wafer. A 200 nm-thick sacrificial poly(methyl methacrylate) layer (PMMA 950 A4) was spin-coated onto the raised square platform on the substrate (Fig. 1a, left); the thickness of this sacrificial PMMA layer determined the final distance between the stencil and the substrate. In parallel with substrate preparation, ultrathin freestanding silicon nitride membranes (TEMwindows) were patterned using a Ga+ focused ion beam (FEI Helios 600 DualBeam), with the desired nanostructures corresponding to milled-through areas in the silicon nitride (Fig. 1a, right). The focused ion beam was then used to mill a set of cuts 27 (Fig. 2a,b) in the membrane, enclosing the desired features in a much larger 50 μm square area defined by the cuts. The stencil mask was then aligned over, brought into contact with, and detached onto the raised platform on the silicon substrate (Fig. 1b). The PMMA exposed through the stencil pattern was then removed by treating the substrate with oxygen plasma for 20 min at a pressure of 800 mTorr of air and a power of 29.6 W (Harrick Plasma; Fig. 1c).
The patterns in the stencil were converted into metallic nanostructures on the target substrate by evaporating metal through the stencil apertures, followed by lift-off of the sacrificial PMMA layer. The nanostencil lithography method described above improves the resolution by controlling the proximity of the stencil to the substrate through direct deposition of the stencil mask on top of a sacrificial PMMA layer. Owing to the direct transfer of the stencil, the thickness of the spin-coated PMMA spacer allows for precise control of the gap between the stencil and the substrate. The gap size has been reported to have a significant impact on the achievable resolution with nanostencil lithography, which is traditionally limited by geometric blurring (blurring from the non-normal angle of incidence of material evaporated through the stencil), halo blurring (blurring from surface diffusion of deposited material), and stencil clogging (reduction in stencil aperture size due to deposition of material onto the stencil) 28,29. For example, feature sizes of ~100 nm, and feature resolution down to 25 nm, were reported when the stencil was contacted directly with a flexible substrate, indicative primarily of the potential benefits of reducing the gap between stencil and substrate 30.
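For intuition about geometric blurring, the small-angle estimate blur ≈ s·G/D (source size s, stencil-substrate gap G, source-substrate distance D) can be sketched as follows; the evaporator geometry here is illustrative and does not appear in the paper:

```python
def geometric_blur_nm(source_size_mm, source_distance_mm, gap_nm):
    """Edge broadening of a stencil-deposited feature caused by the finite
    source size, in the small-angle approximation:
        blur ~ source_size * gap / source_distance
    """
    return source_size_mm * gap_nm / source_distance_mm

# Illustrative evaporator: 5 mm source, 500 mm throw distance.
for gap in (200, 1000, 10_000):   # stencil-substrate gap in nm
    print(f"gap = {gap:6d} nm -> blur ~ {geometric_blur_nm(5, 500, gap):.1f} nm")
```

Under these assumed numbers, a 200 nm spacer contributes only ~2 nm of blurring per edge, while a micrometer-scale gap would already dominate a sub-15 nm feature budget, which is consistent with the emphasis on controlling stencil-substrate proximity.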
The pre-patterned set of cuts in the stencil mask (Fig. 2a,b) facilitates the stencil mask's detachment from its own carrier substrate onto the PMMA layer with high yield. Deposition of the stencil onto the substrate is further assisted by aligning and contacting it with the PMMA-coated raised platform on the target silicon substrate (Fig. 2b) 27. The raised platform reduces the sensitivity of the deposition process to the intrinsic curvature of, and orientational misalignments between, the stencil and the substrate. However, deposition of the stencil does not critically rely on transfer to raised platforms: an alternative method for high-yield transfer is to mount the stencil on a flexural stage that can correct for orientational misalignments 31. The membrane transfer process is also extremely effective when used with flexible substrates 32.
Following transfer of the stencil, areas of the sacrificial PMMA layer exposed through the stencil mask are selectively removed using an oxygen plasma etch. Since the oxygen plasma etch is isotropic, it leads to an undercut of the PMMA under each feature in the stencil mask. This undercut allows for an additional (optical) confirmation that the ion beam milling of the stencil mask penetrated completely through the membrane (Fig. 2d).
To test the achievable resolution with the controlled proximity stencil mask, a 20 nm-thick stencil was patterned with an array of holes varying from 20 nm to 40 nm in diameter (geometric mean of the major and minor axis lengths). A comparison of transmission electron microscope (TEM) images of the stencil holes with scanning electron microscope (SEM) images of the gold nanodots formed after evaporation suggests minimal geometric blurring (Fig. 3a,b). However, small gold particles, attributed to surface diffusion of gold following deposition, are visible 6,33. If needed, previous reports have demonstrated the ability to selectively remove these smaller gold particles with a short dry etch 34. The gold nanodot size was found to be consistently 2-3 nm smaller than the corresponding hole in the stencil mask (Fig. 3c). As deposition of metal onto the stencil reduces the size of the holes, the difference in size is likely attributable to stencil mask clogging. Nonetheless, the correspondence between the stencil hole and the metallic nanodot structure down to ~20 nm indicates that the dominant limitation in resolution when using controlled proximity nanostencil lithography is the patterning resolution of the stencil mask itself.
Patterning of the stencil masks was performed by direct Ga+ focused ion beam (FIB) milling of silicon nitride (SiNx) membranes. The patterning resolution of the Ga+ ion beam is limited not only by the diameter of the focused ion beam (8-10 nm full width at half maximum 35) but by a non-trivial interplay of multiple effects that include sputtering, atomic recoil, redeposition, and ion implantation 36,37. While holes with diameters as small as 3 nm have been reported with Ga+ ion beams, the fabrication of holes with diameters smaller than the beam width is often less repeatable owing to a significant increase in the sensitivity of the hole diameter to the ion beam dose 38. To explore the trade-off between feature resolution and reproducibility with the Ga+ ion beam, we measured the ion dose-hole diameter profile for holes milled in a 20 nm-thick SiNx stencil mask (green line, Fig. 3b). At the chosen ion dose step size of 0.25 fC/nm (the ion dose is normalized by the stencil thickness), the change in the milled hole diameter with each dose step was typically 1-3 nm. However, decreasing the dose from 1.75 fC/nm to 1.5 fC/nm led to a sharp transition from a hole diameter of 22 nm to an incompletely milled-through hole, indicating that hole diameters produced at ion doses below the threshold dose of 1.75 fC/nm are extremely dose sensitive. We refer to the hole diameter just above the threshold dose as the effective reproducible patterning resolution of ion beam milling; it defines not only the minimum feature size but is also likely to determine the resolution with which curved elements or sharp corners can be fabricated on larger structures. We therefore further investigated methods for improving the effective reproducible patterning resolution.
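The threshold-based definition of effective reproducible patterning resolution can be extracted programmatically from a dose array; a sketch, with a hypothetical dose-diameter table that mirrors the 20 nm-membrane numbers above:

```python
# Hypothetical (dose, diameter) pairs from a dose array; None = not milled through.
# Doses in fC/nm (normalized by stencil thickness), diameters in nm.
dose_array = [(1.25, None), (1.50, None), (1.75, 22.0), (2.00, 24.0), (2.25, 27.0)]

def effective_resolution(profile):
    """Effective reproducible patterning resolution: the hole diameter at the
    lowest dose that still mills completely through the membrane."""
    milled = [(dose, d) for dose, d in profile if d is not None]
    threshold_dose, diameter = min(milled)  # tuples sort by dose first
    return threshold_dose, diameter

dose, diam = effective_resolution(dose_array)
print(f"threshold dose {dose} fC/nm -> effective resolution {diam} nm")
```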
Simulations of ion trajectories and atomic recoil in materials clearly suggest that the effective diameter over which the ion beam damages the substrate increases with distance from the top surface 39 .
We therefore reasoned that reducing the SiNx membrane thickness would allow higher-resolution patterning of the stencil masks. To test this hypothesis, the Ga+ FIB was used to make a dose array of holes in 20 nm-, 10 nm-, and 5 nm-thick SiNx membranes, and the resulting ion dose-hole diameter profiles were measured in a TEM (Fig. 3b, green, blue, and red lines, respectively). The results indicate that thinner stencil masks have a slightly finer effective reproducible patterning resolution (22, 19, and 17 nm for the 20, 10, and 5 nm-thick membranes, respectively; indicated by circles in Fig. 3d). As thickness-related beam-broadening effects are suppressed in the 5 nm-thick membrane, further improvements in patterning resolution, potentially down to the 5 nm regime, likely require the use of lighter ion beams, such as Ne+ 40 or He+ 41.
The improved patterning resolution of the stencil was further verified by characterizing the fabrication of bowtie nanogap structures. Bowtie nanoantennas are a geometry that exhibits particularly advantageous plasmonic properties for the collection of SERS spectra 42. Arrays of bowties were fabricated using both 20 nm-thick and 5 nm-thick silicon nitride stencils (Fig. 4a). SEM images of the fabricated bowtie stencil mask (Fig. 2c) were taken directly after milling with the ion beam in the same instrument, without the need to unload and reload the sample. The in situ SEM visualization allows verification of the milling process and enables immediate optimization of the milling parameters, including the ion beam dose and the programmed nanogap, δ, between the two triangles of the bowtie. A comparison between the TEM images of the stencil (Fig. 4c,e) and the SEM images of the structures after gold deposition (Fig. 4d,f) indicates excellent correspondence between the stencil shape and the resulting metal structure for both the 20 nm- and 5 nm-thick stencils. Similarly, a direct comparison between gold structures fabricated from stencils of different thicknesses revealed that the thinner 5 nm stencil more accurately resolved the corner (11 nm radius of curvature, compared to 21 nm for the 20 nm-thick stencil). For both membrane thicknesses, sub-15 nm gaps were achieved (Fig. 4c,e, with gaps of 12 nm and 14 nm, respectively).
Controlled proximity nanostencil lithography readily allows deposition of nanostructures of at least 70 nm thickness, as the PMMA spacer layer thickness can be varied to ensure successful lift-off without affecting the patterning resolution of the stencil mask. The use of the sacrificial PMMA layer is reminiscent of bilayer HSQ-PMMA electron beam lithography techniques 43,44, which allow for patterning of nanostructures with similar thickness and have recently been reported to achieve <10 nm resolution 45. However, HSQ-PMMA bilayer electron beam patterning still relies on the negative-tone HSQ as the high-resolution electron beam resist. In contrast, the presented controlled proximity nanostencil lithography process provides patterning resolution approaching that of HSQ electron beam lithography while retaining several distinct advantages of nanostencil lithography, including parallel processing of the stencil, environmental stability of the stencil to fluctuations in humidity or temperature, the lack of contamination from ion beam deposition and implantation, and the absence of any harsh acid or solvent treatment. Controlled proximity nanostencil lithography provides users with an environmentally stable, dose-insensitive, positive-tone fabrication method that is likely to be preferable to HSQ-PMMA bilayer electron beam patterning in many circumstances. The presented method also allows the precision patterning step to be performed in parallel with any processing that the target substrate requires, so higher throughput can be achieved. For applications with particularly tight patterning tolerances, the stencils can also be inspected by SEM or TEM prior to deposition.
While the controlled proximity nanostencil lithography method presented herein achieves higher gap resolution than other nanostencil lithography methods, it does so at the expense of using a sacrificial PMMA resist. Resist-free alternatives with high point resolution (sub-20 nm) for thin (sub-15 nm) metallic nanostructures do exist 9 and should be seriously considered if the cleanliness of the substrate is critical. However, the advances in stencil mask fabrication with ion beam milling presented in this paper are generally compatible with all nanostencil lithography procedures.
In conclusion, we report a process that advances the achievable resolution for nanostencil lithography. The achievable size, shape, and gap resolutions were characterized through the deposition of thick gold nanodots and bowtie nanogap structures. The fabricated structures revealed that controlled proximity of the stencil and the use of ultrathin stencils allowed the patterning resolution to approach the limit of the Ga + ion beam diameter. We anticipate that the present fabrication strategy will enable nanostencil lithography to be an effective technique for fabrication of high-precision nanostructures.
Cost-effectiveness analysis of neonatal screening of critical congenital heart defects in China
Abstract Background: Pulse oximetry screening is a highly accurate tool for the early detection of critical congenital heart disease (CCHD) in newborn infants. As the technique is simple, noninvasive, and inexpensive, it has potentially significant benefits for developing countries. The aim of this study is to provide information for future clinical and health policy decisions by assessing the cost-effectiveness of CCHD screening in China. Methods and Findings: We developed a cohort model to evaluate the cost-effectiveness of screening all Chinese newborns annually using 3 possible screening options compared to no intervention: pulse oximetry alone, clinical assessment alone, and pulse oximetry as an adjunct to clinical assessment. We calculated the incremental cost per averted disability-adjusted life year (DALY) in 2015 international dollars to measure cost-effectiveness. One-way sensitivity analysis and multivariate probabilistic sensitivity analysis were performed to test the robustness of the model. Of the three screening options, we found that clinical assessment was the most cost-effective strategy compared to no intervention, with an incremental cost-effectiveness ratio (ICER) of Int$5,728/DALY, while pulse oximetry plus clinical assessment, despite having the highest ICER, yielded the best health outcomes. Sensitivity analysis showed that when the treatment rate increased to 57.5%, pulse oximetry plus clinical assessment showed the best expected value among the three screening options. Conclusion: For neonatal CCHD screening at the national level in China, clinical assessment is a very cost-effective preliminary choice, and pulse oximetry plus clinical assessment is worth considering for the long term. Improvement in accessibility to treatment is crucial to expanding the potential health benefits of screening.
Introduction
Congenital heart defects (CHD) are the most common type of birth defect and a leading cause of infant mortality in China, where approximately 216,000 infants with CHD are born every year [1]. The worldwide prevalence of CHD is estimated at 4 to 10 per 1000 neonates; of these, 1 to 2 per 1000 have critical CHD (CCHD), which can cause death or the need for surgical or catheter-based intervention in the neonatal period. [2] Population-based studies in Europe and America have shown the accuracy and value of adding pulse oximetry screening to the routine clinical assessment of neonates to aid in the detection of CCHD. [3-6] Early detection is critical to preventing infant morbidity and mortality. Combined with advances in therapeutic interventions, early detection can enable the majority of children born with CCHD to lead normal, productive lives. [7] In 2011, after being widely advocated by major medical societies, pulse oximetry screening of newborns for CCHD was included in the US-recommended uniform screening panel. [2,7,8] CCHD screening reportedly reduced the number of apparently healthy infants who might have died or suffered cardiovascular collapse without CCHD detection. [9-11] The impact of screening for CCHD in developing countries, however, is less certain. Owing to delays in timely diagnosis and case management, infant and child mortality related to CCHD remains high in developing countries. [12] As pulse oximetry screening provides an accurate, noninvasive approach that is simple, inexpensive, and less resource-intensive, it could be very beneficial for developing countries as long as access to treatment is available after detection.
A large-scale, multicenter, prospective screening study conducted in China confirmed the feasibility and accuracy of pulse oximetry screening for the detection of CCHD in neonates before discharge and recommended its widespread use in maternity hospitals. [13] This screening method is considered feasible for the majority of Chinese neonates because the pulse oximeter is readily available in most secondary and tertiary hospitals, screening can also be provided by outreach services, and the proportion of neonates delivered at hospitals exceeds 90%. However, there are tremendous variations in socioeconomic status, access to antenatal screening and pediatric cardiological care, and performance and quality of healthcare across regions and facilities. Neonatal screening for CCHD is still at the pilot stage and has not yet been widely adopted in most Chinese hospitals. No study has yet evaluated the cost-effectiveness of CCHD screening in a developing country. Therefore, this study aimed to evaluate the cost-effectiveness of neonatal CCHD screening for neonates in China.
Decision model
A decision-analytic cost-effectiveness model was built in TreeAge Pro 2015 (Fig. 1) and was programmed for a hypothetical annual birth cohort of 16 million neonates. The aim of the screening was to detect neonates with CCHD whose condition had gone undiagnosed during antenatal care, before they were discharged from the birth hospital, so that timely treatment could be administered before cardiac collapse. The primary outcomes were the number of lives saved during infancy and the disability-adjusted life years (DALYs) averted as a result. The time horizon was the lifetime.
Three screening options, namely, clinical assessment alone, pulse oximetry screening alone, and pulse oximetry screening as an adjunct to clinical assessment for early detection of CCHD, were compared to no intervention (status quo). Clinical assessment has always been fundamental to routine clinical practice in China where it encompasses 4 components: family history, particular facial features, heart murmurs, and extra cardiac malformation and is carried out before discharge (depending on the human and technical capacities of the hospital). [13] Because neither clinical assessment nor pulse oximetry alone can detect all CCHD, a combination of the 2 is ideal. [14] Infants in whom CCHD was diagnosed by fetal ultrasound during antenatal care were excluded from postnatal screening. Pulse oximetry measures the oxygen saturation of arterial blood 24 to 48 hours after birth. Whenever a positive result was identified by screening, the neonate underwent a diagnostic echocardiogram at the birth hospital or was referred to another hospital as needed. Furthermore, neonates with a CCHD diagnosis were expected to receive pediatric cardiological care including surgery or catheterization generally at the tertiary level before cardiovascular collapse.
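Each branch of the screening arm reduces to multiplying the cohort through prevalence, antenatal detection, and test accuracy; a sketch with illustrative parameter values (the model's actual inputs are in Tables 1 and 2 and are not reproduced here):

```python
def screening_outcomes(births, prevalence, antenatal_rate, sensitivity, specificity):
    """Expected true/false positives among neonates screened before discharge."""
    postnatal_cchd = births * prevalence * (1.0 - antenatal_rate)  # undiagnosed CCHD
    healthy = births * (1.0 - prevalence)
    true_pos = postnatal_cchd * sensitivity
    false_pos = healthy * (1.0 - specificity)   # these receive a diagnostic echo
    return true_pos, false_pos

# Illustrative values only: 16 million births, CCHD prevalence 1.5 per 1000,
# 30% antenatal detection, and assumed test sensitivity/specificity.
tp, fp = screening_outcomes(16_000_000, 0.0015, 0.30, 0.93, 0.994)
print(f"expected true positives: {tp:,.0f}; false positives needing echo: {fp:,.0f}")
```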
Costs
Cost estimates were based on the societal perspective and discounted at 3%. [15] Data were first collected in Chinese yuan in 2015 and then converted to international dollars using purchasing power parities and gross domestic product (GDP) deflators. [16] The estimates included 3 items: cost of screening by either clinical assessment or pulse oximetry, the cost of diagnostic echocardiography, and the cost of treatment.
The cost of pulse oximetry screening was estimated based on the salaries of doctors and nurses and the average screening time, reported as 1.6 minutes. [13] We also considered equipment and maintenance costs and the program costs of implementing screening. Figures for the salaries of medical staff and the direct medical costs of clinical assessment, echocardiography, surgery, and catheterization were obtained from tertiary hospitals and local health services, as infants with, or suspected of having, CCHD in rural areas are referred to tertiary hospitals in urban centers for diagnosis and treatment (pediatric cardiac surgery or catheterization). Considering the diversity in the cost of screening and diagnosis across health facilities at different levels, a variation of ±50% was used in the sensitivity analysis to examine these uncertainties (Table 1).
Screening performance and diagnostic follow-up
Data on screening accuracy were obtained from the largest multifacility investigation in the developing world, which was conducted by Zhao et al. [13] As screening performance was likely to vary across different health facility levels, sensitivity and specificity were reduced by 50% in the sensitivity analysis, taking into account the probability that lower level and remote hospitals would perform more poorly than major urban hospitals. Generally, newborns can receive a diagnostic echocardiogram either at the birth hospital or a nearby tertiary hospital, and confirmed cases receive pediatric cardiological care at a tertiary hospital. Table 2 shows the base-case values and plausible ranges used for the sensitivity analysis.
Estimates of health impacts
The model assessed the number of additional neonates with CCHD detected at the birth hospital before discharge, the number of lives saved, and the number of DALYs averted. The willingness-to-pay (WTP) threshold was set at 34,857 international dollars (Int$) per DALY averted, or 3 times the GDP per capita, based on the WHO guidelines for cost-effectiveness analysis of interventions. [16] In China, access to healthcare, especially advanced-level pediatric cardiological care, varies widely across the country. Unlike in developed countries, a significant proportion of infants with CCHD are unable to receive any treatment before cardiovascular collapse. Given the lack of data on infant mortality without treatment, the adverse outcomes owing to poor access to pediatric cardiological care were estimated using a proxy of the natural history derived from a study conducted in the 1950s, when cardiac surgery was not commonly available worldwide. We also used recent reports on infant mortality owing to CCHD in China to reflect health outcomes with treatment, under the assumption that the probability of death would be reduced if CCHD was detected before discharge and the neonate received advanced-level pediatric cardiological care. The average and incremental costs per DALY averted were calculated.
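The WTP rule described above reduces to a threshold comparison. Below is a minimal sketch, assuming the WHO convention that an ICER below 1x GDP per capita is "very cost-effective" and below 3x is "cost-effective"; the GDP figure is back-calculated from the Int$34,857/DALY threshold quoted above.

gdp_per_capita = 34_857 / 3  # implied by the 3x-GDP threshold above

def classify(icer, gdp=gdp_per_capita):
    # WHO-style decision rule for cost-effectiveness analysis.
    if icer < gdp:
        return "very cost-effective"
    if icer < 3 * gdp:
        return "cost-effective"
    return "not cost-effective"

print(classify(9_000))    # -> very cost-effective
print(classify(57_000))   # -> not cost-effective at the base-case WTP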
Sensitivity analysis
For the base case, univariate sensitivity analyses were conducted to explore the impact of the parameters listed in Table 2 on the cost, health outcomes, and cost-effectiveness of the 3 screening options. Monte Carlo simulations were then applied in the multivariate sensitivity analyses to test the robustness of the model while taking into account simultaneous changes in the key parameters whose variations had the greatest impact on cost-effectiveness.
Ethical consideration
As our study was a modeling-based approach and data for cost estimates did not include individual information, no ethical approval was necessary.
Table 2
Baseline values and ranges used for sensitivity analysis.
Sensitivity analysis
The results of the 1-way sensitivity analysis by parameter for the cost-effectiveness of the 3 screening options are shown in a tornado diagram (Fig. 3). The parameter with the greatest range was the proportion of patients receiving pediatric cardiological care, followed by infant mortality averted by timely treatment, the unit cost of pulse oximetry screening, the prevalence of CCHD, and the proportion of suspected cases that were diagnosed. Because the treatment rate, that is, the proportion of patients receiving pediatric cardiological care, was found to be the most influential parameter, its impact on cost-effectiveness and the expected value of the different screening options was explored, as shown in Figure 4. When the treatment rate was increased to 57.5%, pulse oximetry plus clinical assessment acquired the best expected value among the 3 options at the threshold of Int$34,857/DALY. The cost-effectiveness acceptability curve indicated the robustness of the cost-effectiveness of the different options at different WTP thresholds. At a threshold of Int$34,857/DALY, clinical assessment alone was very cost-effective with a probability of 100%. The probability of cost-effectiveness of pulse oximetry plus clinical assessment gradually increased with the WTP threshold and exceeded that of clinical assessment when the threshold reached Int$57,000/DALY (Fig. 5).
Discussion
To the best of our knowledge, this study provides the first cost-effectiveness evaluation of the universal application of CCHD screening in maternity hospitals in China, as well as in the developing world. Screening makes it possible to detect CCHD before discharge, potentially reducing infant deaths caused by late case management. Our analysis found that under base-case assumptions, clinical assessment was a very cost-effective preliminary choice. Pulse oximetry plus clinical assessment, however, yielded the best health outcomes in terms of DALYs averted and became the dominant option as the WTP threshold and the proportion of patients receiving pediatric cardiological care increased. Clinical assessment is a basic practice for detecting CCHD and can be implemented immediately after delivery. It is not yet a routine practice in China owing to varying human and technical capacities across regions and institutions. Training is necessary for physicians at lower-level and remote hospitals to recognize typical signs such as heart murmurs, tachypnea, and overt cyanosis. Compared with pulse oximetry, clinical assessment demonstrated a higher detection rate for critical left heart syndrome, critical coarctation of the aorta, interrupted aortic arch, and critical aortic stenosis, whereas pulse oximetry was more likely to detect total anomalous pulmonary venous connection, transposition of the great arteries, pulmonary atresia, and double outlet right ventricle. [13] Therefore, the combination of pulse oximetry with clinical assessment is likely to improve performance significantly and is an ideal option for achieving the best screening results. Once the treatment rate, that is, the proportion of children with CCHD who are able to access pediatric cardiological care, increases to 57.5%, this combined approach will become cost-effective and practicable for universal use in hospitals in the long term.
The findings of our study highlight the impact of accessibility to pediatric cardiological care on the health and economic effects of the screening strategies. It is reasonable to predict that the timely treatment of infants with positive screening results for CCHD would improve significantly if screening were universally introduced. However, in light of China's low "ceiling" levels, post-payments, and various restrictions on reimbursements, the current medical insurance system is failing to fulfill its protective function against catastrophic payments and impoverishment owing to serious illness, including CHDs, particularly for rural and rural-to-urban migrant children. Medical expenditures for pediatric cardiological care rely principally on out-of-pocket payments and charities, with an actual reimbursement rate of 20% to 45%, or even <20% for payments exceeding 200,000 Chinese yuan, a figure far removed from the Ministry of Health's ambitious target of 90%. [17] Questions also remain regarding the highly concentrated distribution of advanced medical technology for diagnosis and treatment in urban hospitals. This situation leads to difficulties not only in access but also in reimbursement, because healthcare obtained outside of one's area of residence is subject to much lower reimbursement and complicated procedures. Furthermore, significant geographical gaps remain in facilities' technical capacity for pediatric cardiological care across the country. Timely treatment after early detection is often hindered by catastrophic out-of-pocket payments and by the lack of advanced medical technology in lower-level and remote facilities, which negatively affect the potential benefits of the screening strategies, especially those of pulse oximetry as an adjunct to clinical assessment. Previous economic evaluations of neonatal CCHD screening have been published, but all were conducted in the developed world. [18][19][20][21] Although these studies suggested that screening by pulse oximetry plus clinical assessment was cost-effective in light of the accepted thresholds in high-income countries, in China and other developing countries the health system and socioeconomic environment influencing clinical and policy decision-making differ from those of developed countries. First, access to pediatric cardiological care is limited by technical, geographical, and financial factors; an access rate of >10% has been reported for the developing world. [12] Second, China has huge gaps in socioeconomic status as well as in the quality, capacity, and accessibility of medical care. Last but most importantly, the WTP threshold varies significantly across regions in China, causing correspondingly greater variation in cost-effectiveness results than in developed countries.
This study has some limitations. A major limitation is the lack of precise population-based information on long-term childhood mortality and morbidity outcomes under timely treatment, delayed treatment, and no treatment, and on the impact of early detection by neonatal or prenatal screening on improving those outcomes. We primarily considered DALYs from infant mortality averted, based on currently available information. However, the potential health benefits are not limited to infant mortality; they also include morbidities avoided in the long term and better-informed pediatric cardiological care. Additionally, we did not investigate the impact of secondary life-threatening neonatal conditions that may be detected by pulse oximetry, such as pneumonia and sepsis. In the developing world, the detection of these conditions may be of more benefit than the detection of CCHD. Therefore, the potential benefits of neonatal CCHD screening may be largely underestimated. Moreover, data on screening performance were derived from urban tertiary hospitals; real-world accuracy in lower-level hospitals is likely to be poorer and largely dependent on physicians' clinical experience and facility capacity. To adjust for this uncertainty, we reduced sensitivity by 50% in the sensitivity analysis. Finally, owing to the lack of information, the proportion of infants with CCHD receiving pediatric cardiological care was based on the opinion of an expert panel.
To adjust for this uncertainty, we set a wide range in the sensitivity analysis to accommodate the huge geographical and socioeconomic diversity within the country.
Conclusion
In China, for neonatal screening of CCHD at the national level, clinical assessment is a very cost-effective preliminary choice and pulse oximetry plus clinical assessment is worth considering for the long term as accessibility to timely treatment improves and the WTP threshold increases with socioeconomic development. Public investment and insurance coverage for children with CCHD are crucial for exploiting the health benefits of the screening.
|
2018-04-03T04:02:11.988Z
|
2017-11-01T00:00:00.000
|
{
"year": 2017,
"sha1": "9cc428df4b57c27e53745552761c74bb56f15522",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1097/md.0000000000008683",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9cc428df4b57c27e53745552761c74bb56f15522",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
9343593
|
pes2o/s2orc
|
v3-fos-license
|
Reproducibility of oligonucleotide arrays using small samples
Background Low RNA yields from small tissue samples can limit the use of oligonucleotide microarrays (Affymetrix GeneChips®). Methods using less cRNA for hybridization or amplifying the cRNA have been reported to reduce the number of transcripts detected, but the effect on realistic experiments designed to detect biological differences has not been analyzed. We systematically explore the effects of using different starting amounts of RNA on the ability to detect differential gene expression. Results The standard Affymetrix protocol can be used starting with only 2 micrograms of total RNA, with results equivalent to the recommended 10 micrograms. Biological variability is much greater than the technical variability introduced by this change. A simple amplification protocol described here can be used for samples as small as 0.1 micrograms of total RNA. This amplification protocol allows detection of a substantial fraction of the significant differences found using the standard protocol, despite an increase in variability and the 5' truncation of the transcripts, which prevents detection of a subset of genes. Conclusions Biological differences in a typical experiment are much greater than differences resulting from technical manipulations in labeling and hybridization. The standard protocol works well with 2 micrograms of RNA, and with minor modifications could allow the use of samples as small as 1 microgram. For smaller amounts of starting material, down to 0.1 micrograms of RNA, differential gene expression can still be detected using the single-cycle amplification protocol. Comparisons of groups of four arrays detect many more significant differences than comparisons of three arrays.
Background
The ability to measure the expression of thousands of genes at once using microarrays has opened new areas of research, including global examination of the effects of perturbations on cells or animals and the classification of tumors by their pattern of gene expression. Microarrays using cDNAs [1,2] and oligonucleotides [3][4][5] have both proven valuable.
Commercially available oligonucleotide microarrays provide a standardized tool that allows assay of thousands of mRNAs at one time. Affymetrix GeneChips® contain pairs of 25-nucleotide sequences (probe pairs) synthesized on silica wafers; one of each pair exactly matches the sequence of interest and the other contains a single mismatching nucleotide in the center [6,7]. A single sequence is queried by a group of 8 to 16 probe pairs that constitute a probe set. RNA from the sample is converted to double-stranded cDNA and then labeled by in vitro transcription with biotinylated nucleotides. The biotinylated cRNA is hybridized to the GeneChip®, unhybridized material is washed off, and the signal is detected using fluorescein-labeled streptavidin [8].
The standard Affymetrix protocol [8] uses as starting material 10 µg of total RNA, from which biotinylated cRNA is synthesized. This can limit the use of this system for small samples from biopsies, laser capture microdissection, or tissues from model organisms such as mice. Mahadevappa and Warrington [9] examined the effect of using less biotinylated cRNA in the hybridization. Hybridization reactions that contained the recommended 10 µg of cRNA (from human endometrium adenocarcinoma cells) detected 35% of the 1779 transcripts on the GeneChip®. Reducing the amount of cRNA in the hybridization to 5 µg reduced the fraction detected to 30%, and further reducing the cRNA to 2.5 µg allowed detection of only 27% of the sequences [9]. Ohyama et al. [10] tested a modified protocol for synthesizing biotinylated cRNA from very small amounts of starting material. Total RNA from laser capture microdissected human oral cancer tissues was converted into cDNA and transcribed in vitro; the resulting cRNA was converted into cDNA and transcribed in vitro a second time to generate more cRNA; this cRNA was again converted into cDNA, and biotinylated cRNA was synthesized by a third in vitro transcription. This procedure produced 10 µg of biotinylated cRNA from 0.1 µg of starting total RNA. Hybridization with 10 µg of biotinylated cRNA generated by this amplification protocol allowed detection of 30% of the 7000 transcripts on the HuGeneFL GeneChip® [10]. In contrast, hybridization with 10 µg of biotinylated cRNA generated by the standard protocol from total RNA extracted from similar tissues resulted in detection of 35% of the transcripts [10].
Rather than focusing upon the number of transcripts detected, the real test of a microarray protocol is the extent to which it allows differences in expression levels to be reliably detected. The biological variability inherent in most experimental models, including both genetic and environmental differences between animals or even replicate cell cultures, limits the detection of such differences. Additional variability in the treatment and handling of the models and in the RNA extraction typically occur outside the microarray laboratory, and can be reduced (but not eliminated) by careful experimental design. There is the potential for introducing additional technical variability during the synthesis of biotinylated cRNA. In evaluating a new protocol or comparing existing protocols, measures such as the yield of cRNA or the fraction of probe sets detected can be useful, but the key measure is the extent to which differences in gene expression can be detected in a realistic experiment.
We have systematically explored the use of smaller amounts of starting material (total RNA) in a model experiment that retains the individual-to-individual biological variation of a real experiment. This allowed us to compare technical variability to the biological variability in a typical experiment. We started with total RNA from individual rats exposed to two different nutritional regimens and used serial dilutions of the RNA to simulate experimental systems that provide smaller quantities of total RNA. We used the Rat Genome RGU34A GeneChip ® for all of the experiments. Our first goal was to determine a reasonable lower bound for total RNA that could successfully be used in the standard protocol. Second, we wanted to test a modified version of the Ohyama amplification procedure [10] that we thought would be faster, simpler and less likely to skew results due to truncation of the labeled cRNA. We examined the variability and the ability to detect significant changes between animals fed the 2 different dietary regimens when different amounts of starting material and different protocols were used.
Yields and quality of cRNA
The standard protocol uses 10 µg of total RNA to produce biotinylated cRNA [8]. In our experiment, the average yield of biotinylated cRNA was 97 µg (± 41 µg; mean ± standard deviation) when we started with 10 µg of RNA and used one-half of the double stranded cDNA product in the in vitro transcription reaction. Starting with 2 µg of total RNA, we obtained an average of 50 µg (± 20 µg; standard deviation) of biotinylated cRNA after using all of the double stranded cDNA product in the in vitro transcription reaction. The 1 µg pooled samples produced an average of 5.6 µg of biotinylated cRNA (yields ranged from 2.4 µg to 7.7 µg). No difference in yield was observed between the samples prepared with the ENZO and Epicentre T7 polymerases. This lack of difference led us to select the Epicentre Ampliscribe™ T7 High Yield Kit for the extra in vitro transcription step to decrease expense.
The standard Affymetrix protocol uses 15 µg of biotinylated cRNA to make a 300 µl hybridization cocktail, of which 200 µl is injected into the chip for hybridization. The yield of cRNA from both 10 µg and 2 µg of starting RNA (above) were more than sufficient for this. Because yields of cRNA starting from 1 µg total RNA were too low, we decided to use an additional amplification step for RNA samples less than 1 µg. Our modified protocol uses the initial in vitro transcribed cRNA as starting material for a second round of cDNA synthesis and in vitro transcription (Methods). The cRNA yields from the 0.5 µg samples with our protocol averaged 32 µg (± 13 µg), more than enough for the standard hybridizations. The cRNA yields from the 0.1 µg samples using the same protocol averaged 10 µg with a range from 5 to 21 µg, so some samples had too little to prepare a hybridization mixture at the same concentration. Therefore, the 0.1 µg samples were hybridized to GeneChips ® using 5 µg to 7.5 µg of cRNA to assess results using these limited amounts.
Aliquots of the biotinylated cRNA samples were analyzed by agarose gel electrophoresis to check the quality and length. The cRNA for both the 10 µg and 2 µg samples ranged from 200 to over 2,000 bases (before fragmentation). The cRNA from the 0.5 µg sample prepared by the amplification protocol ranged from 200 to 850 bases, a considerable decrease in maximum length. The 0.1 µg samples were too faint to judge their size range.
Changes in sensitivity as measured by detection of probe sets
For a particular tissue or cell type, the percent present and the scaling factors should be similar among all arrays in the same experiment in the absence of variability introduced by preparation, labeling, and handling of the individual samples. Because we created groups of samples diluted from the same individual RNA preparations, any group differences reflect differences in labeling and handling of the samples. We used the percent of probe sets called present and the scaling factor (see Methods) for an initial comparison among the groups (Table 1). The 2 µg samples were essentially equivalent to the 10 µg samples by these measures (Table 1). For the amplified samples (0.5 µg and 0.1 µg starting material), the percent present was decreased and the scaling factor increased compared to the non-amplified samples. The percent present dropped to 30.2% in the 0.5 µg group (78% of that detected in the 10 µg group) and even further, to 24.5%, for the 0.1 µg samples. Two other quality control measures, noise and background, were similar across all 30 GeneChips® (data not shown).
If the decrease in the starting amount of RNA or the differences in protocol had no effect on the outcome, the signals from all of the reduced RNA sample size groups would be distributed similarly to the 10 µg group. SAS was used to analyze the signals from each RNA sample size group. Probe sets were separated by detection call (absent, present, and marginal) and analysis was performed separately for each group (only 2% of the probe sets were called marginal; these were omitted from Table 2). Signals for the 2 µg samples are distributed similarly to the 10 µg samples (Table 2). For the amplified samples, the range of signals is increased and the distribution is shifted toward higher signal values. The decrease in the percent of probe sets called present on the GeneChips® from the amplified samples has the effect of lowering the average signal on the chip, requiring a higher scaling factor (Table 1). This results in an inflation of the signal values for all of the probe sets on these arrays.
Table 2
For each starting amount of RNA, the signals corresponding to the 25th percentile, 50th percentile, 75th percentile, and 90th percentile are shown, along with the maximum signal.
To examine the effects of starting with smaller amounts of RNA on the variability in detection of probe sets, we examined the number of probe sets that changed from present to absent or from absent to present when comparing the 10 µg sample to the smaller RNA samples from the same animal (Table 3). The average numbers of probe sets called present on the 10 µg chip and absent on the 2 µg chip (P10 to A) and called absent on the 10 µg chip and present on the 2 µg chip (A10 to P) were similar. The bulk of these changes were for probe sets with lower levels of expression (Table 3). This distribution is consistent with the greater variability seen in probe sets with low signals (see below); there is no significant loss of low-level transcripts in the 2 µg samples.
Because the amplified samples (from 0.5 µg and 0.1 µg starting material) have a decrease in the percent of probe sets called present (Table 1), the number of probe sets called present in the 10 µg samples and absent in the amplified sample from the same animal must be greater than the number called absent in the 10 µg sample and present in the amplified samples (Table 3). Loss of signal is expected in low-level transcripts for the amplified samples because of the decrease in starting material, but probe sets present in the 10 µg and absent in the amplified samples are not confined to those probe sets with low signals. Forty-three percent of the probes not detected in the amplified samples have a signal greater than 600 in the 10 µg sample, and 5% have a signal of at least 3200. In comparison, for the 2 µg samples, only 14% of the probe sets present in the 10 µg sample and absent in the 2 µg sample had signals greater than 600, and none had signals over 3200.
The probe sets changing from absent in the 10 µg samples to present in the amplified samples (Table 3) were mostly those with low signals in the 10 µg samples, reflecting the greater noise found in probe sets with low signals (Figure 1).
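In code, the Table 3 bookkeeping amounts to counting detection-call transitions between paired arrays. Below is a minimal sketch; the dictionaries calls_10, calls_small, and signal_10 (MAS5 calls and signals keyed by probe-set ID) are hypothetical stand-ins, and the 600-signal cutoff mirrors the one used in the text.

def call_transitions(calls_10, calls_small, signal_10, cutoff=600):
    # Probe sets present at 10 ug but absent in the smaller sample, and vice versa.
    p_to_a = [ps for ps in calls_10
              if calls_10[ps] == 'P' and calls_small.get(ps) == 'A']
    a_to_p = [ps for ps in calls_10
              if calls_10[ps] == 'A' and calls_small.get(ps) == 'P']
    # Fraction of lost probe sets that had substantial signal at 10 ug:
    # a high fraction points to systematic loss rather than low-signal noise.
    high = sum(signal_10[ps] > cutoff for ps in p_to_a)
    frac_high = high / len(p_to_a) if p_to_a else 0.0
    return len(p_to_a), len(a_to_p), frac_high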
Biological and technical variability
The Affymetrix MAS5 comparison analysis tool was used to compare expression levels between pairs of GeneChips®. This analysis directly compares two arrays at each individual probe pair rather than merely comparing the signal computed from all of the probe pairs for a probe set [11]. Comparisons among the 10 µg samples from different animals within each of the treatment groups showed that the biological (between-animal) plus random technical variability is considerable (Table 4, 10 vs. 10). There was an average of over 700 apparent differences in expression level between pairs of GeneChips® within a single treatment group, 12% of which were 2-fold or greater. The numbers of increases and decreases were comparable, suggesting random rather than systematic changes. Increases and decreases were randomly distributed across probe sets with different levels of expression. This can be visualized as a scatter plot comparing two different 10 µg samples, both from the vitamin-deficient group (Figure 1a and 1b); results are similar for a pair of control samples. There is noticeable scatter from the expected diagonal, and the scatter is greatly exacerbated at low signal levels. This shows variability between animals within a single treatment group (plus the technical variability in handling two samples, even with the same protocol). Figure 1b is the same pair of GeneChips® but is restricted to probe sets that were called present in the first sample (Sample A, x-axis). Analysis of the signals for all of the 10 µg arrays shows that 59% of the probe sets have signals below 300, and that 89% of these are called absent. Removing the probe sets called absent from further analyses removes most of the variability seen in this low signal range. The data in Table 4 were limited to probe sets that were called present in the baseline sample for each comparison, to avoid the noise of low-signal absent and marginal calls (cf. Figure 1).
GeneChips® from the lower RNA sample size groups were compared to the 10 µg chip from the same animal; these comparisons represent technical variation only, because the RNAs were dilutions from the same RNA preparations. To compare variability introduced by the same labeling protocol with different amounts of starting material, in Figure 1c and 1d we compared a 2 µg sample to a 10 µg sample from the same animal (sample A, x-axis in a and b). As can be seen, the variation due to differences in sample size plus the technical variability is less than the between-animal variation shown in Figure 1a and 1b. The 2 µg arrays have an average of 134 decreases and 136 increases when compared to the 10 µg samples from the same animals (Table 5, 2 vs. 10); this reflects the technical variability of samples prepared by the same protocol from different amounts of starting material. Only 4% of these changes had a magnitude of 2-fold or greater, for an average of 10 changes per chip. This variability was much smaller than the biological variation seen between pairs of arrays from different animals (10 µg arrays, Table 4). The numbers of increases and decreases were balanced.
The changes in the 2 µg samples appear random: different probe sets change in different comparisons. Only 8 probe sets changed consistently (in at least 5 of the 6 comparisons).
Table 4
The average number of changes detected by MAS5 per comparison between 2 GeneChips® (among probe sets called present in the baseline sample). Comparisons were made between animals in the same treatment group for both the 10 µg and 0.5 µg sample size groups to measure technical plus biological variability. "Total" is the number of changes regardless of magnitude; "≥ 2-fold" is the number of probe sets with changes of 2-fold or more. "Marginal Increases" and "Marginal Decreases" are included.
Table 5
The average number of changes detected by MAS5 per comparison between 2 GeneChips® (among probe sets called present in the baseline sample). Comparisons were made between samples from the same animal in two different RNA sample size groups to measure technical variation. "Total" is the number of changes regardless of magnitude; "≥ 2-fold" is the number of probe sets with changes of 2-fold or more. "Marginal Increases" and "Marginal Decreases" are included.
To examine variability introduced by the amplification protocol, we compared a 0.5 µg sample to a 10 µg sample from the same animal (Figures 2a and 2b). This comparison includes both systematic and random variability introduced by the amplification protocol used for the 0.5 µg sample. There was much more variability between the 10 µg and 0.5 µg samples than between 10 µg and 2 µg samples or between 10 µg samples (compare Figures 1 and 2). Only probe sets that were called present in the 10 µg sample have been plotted in Figure 2b; the variability is still high. This remaining variability extends over a wider range of signals than for the comparisons in Figure 1.
More of the probe sets decreased in signal than increased in signal. Many more of these changes are at least 2-fold. The comparison of 0.1 µg to 10 µg data (not shown) is very similar to the 0.5 µg to 10 µg comparison.
The number of changes observed from the comparison analysis of the amplified samples to the 10 µg samples is significantly higher than for the 2 µg samples, and there are many more decreases than increases (Table 5), another indication of the increased variability also seen in Figures 2a and 2b. The changes for the amplified samples were spread evenly across the range of signals, except that there were fewer increases among the low-level transcripts of the 0.1 µg samples than of the 0.5 µg samples, reflecting the loss of more low-level transcripts in the 0.1 µg samples. Decreases in the amplified samples were more consistent, with 727 and 794 probe sets that decreased in at least 7 of the 8 samples for the 0.5 µg and 0.1 µg groups, respectively. Of the 727 probe sets that consistently changed in the 0.5 µg samples, 712 decreased in at least 6 of the 0.1 µg samples. This indicates that a group of probe sets is being systematically affected in both of the amplified groups (see below). A percentage of these decreases actually result in loss of detection of probe sets: 33% for the 0.5 µg and 43% for the 0.1 µg samples.
To measure the level of variability within an amplified group, the 0.5 µg arrays within a treatment group were compared (Table 4, 0.5 vs. 0.5). The number of changes within the 0.5 µg arrays was greater than within the 10 µg arrays, 932 per chip compared to 709 for the 10 µg. Not only were there more changes, a larger percentage of the changes were 2-fold or greater, 30% vs. 12% for the 10 µg samples. This indicates that additional noise was introduced by the amplification.
The 0.1 µg and 0.5 µg samples from the same animal were similar to each other (Figures 2c and 2d, Table 5). Only 16% of the differences between the 0.1 µg and 0.5 µg groups were ≥ 2-fold, compared to 65% that were ≥ 2-fold when comparing the amplified samples to 10 µg samples. The variation between the amplified groups (0.1 µg and 0.5 µg) is greater than the variation between the non-amplified groups (2 µg and 10 µg), and decreases outnumber increases because of the greater loss of signal in the 0.1 µg samples.
Amplification truncates the 5' ends of the RNA
A likely cause for the consistent decreases of particular probe sets in the amplified samples, not related to low signal level, is the loss of the 5' end of the transcript. Synthesis of cDNA from the cRNA initially prepared is expected to lead to some truncation of the 5' ends of the original mRNA, due to the requirement for priming during synthesis of the second strand. Indeed, the cRNA prepared by amplification from the 0.5 µg samples was noticeably shorter than that prepared by the standard protocol, as detected by agarose gel electrophoresis (above). Another way to detect such potential shortening of the transcripts is to compare the signals from the Affymetrix control probe sets. There are 3 probe sets each for GAPDH and β-actin, designated 3', Middle, and 5' based on their relative distance from the 3' end of the transcript. The average 3'/5' ratio for both 10 µg and 2 µg samples was 1.7 or below (Table 1), representing good samples (http://www.affymetrix.com). The 3'/5' ratios of the amplified samples all exceeded 3 and were as high as 14, with the average near 6 for GAPDH and 8.5 for β-actin (Table 1). These ratios indicate a differential loss of the 5' ends of the transcripts for the amplified samples (see below).
Examination of the Affymetrix comparison analyses for the GAPDH and β-actin probe sets gives an even better picture of the 5' loss. None of the comparisons of the 2 µg samples to the 10 µg samples from the same animals showed a significant change in signal for these probe sets. All 8 of the 0.5 µg and 0.1 µg samples had significant decreases as compared to the 10 µg sample from the same animal, the magnitude of which increased as the distance of the probe sequence from the 3' end of the transcript increased (Table 6). The amplified samples both show a similar progressive loss of the more 5' sequences. For example, both amplified samples have an average of 33% of the GAPDH 3' signal (mean log2 ratio −1.6), but only 11% as much GAPDH 5' signal (log2 ratio −3.1).
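In code, the control-probe-set check is simply a ratio of 3' to 5' signals. The GAPDH values below are made up, chosen only so the ratio lands near the reported average of about 6.

# Hypothetical MAS5 signals for the three GAPDH control probe sets.
signals = {"GAPDH_3": 4800.0, "GAPDH_M": 3900.0, "GAPDH_5": 820.0}

ratio_3_to_5 = signals["GAPDH_3"] / signals["GAPDH_5"]
# Ratios up to ~1.7 indicate intact transcripts; ratios above 3, as seen
# in the amplified samples here, indicate differential loss of 5' sequence.
print(f"GAPDH 3'/5' ratio: {ratio_3_to_5:.1f}")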
To determine whether truncation of the 5' ends of the RNA may be a significant problem for many of the sequences on the GeneChip®, the distance of the target sequence (from which probe sets were designed) from the 3' end of the transcripts was determined for as many probe sets as possible by BLASTing the target sequence against the nr database (Figure 3). We then compared the percent of probe sets that decreased in signal at different 3' distances.
The differences between the 2 µg and 10 µg samples were evenly distributed across the 3' distances (Figure 4), reinforcing the idea that these are random differences. In contrast, the amplified samples (0.1 µg and 0.5 µg) show a marked increase in the percent of probe sets that decreased as the target sequence moves farther from the 3' end (Figure 4).
Both sets of amplified samples were affected in the same manner by this 5' truncation. The decreases seen when comparing the 0.1 µg to the 0.5 µg samples were not associated with distance from the 3' end and were not consistent for particular probe sets (only 26 probe sets decreased in at least 7 of the 8 samples). This indicates that the truncation is a result of the single cycle of amplification, rather than the amount of starting material.
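The Figure 4 analysis can be sketched as binning probe sets by the 3' distance of their target sequence and tabulating the fraction called decreased per bin. The two input dictionaries are hypothetical stand-ins for the BLAST-derived distances and the MAS5 decrease calls.

from collections import defaultdict

def percent_decreased_by_distance(dist_3p, decreased, bin_width=200):
    counts, dec = defaultdict(int), defaultdict(int)
    for ps, d in dist_3p.items():
        b = int(d // bin_width) * bin_width   # bin start: 0, 200, 400, ...
        counts[b] += 1
        dec[b] += bool(decreased.get(ps, False))
    # A trend rising with distance indicates 5' truncation; random
    # variability would be flat across the distance bins.
    return {b: 100.0 * dec[b] / counts[b] for b in sorted(counts)}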
Ability to detect differences in expression between treatment groups
The main goal in an experiment comparing two treatment groups is to find genes whose expression differs significantly. To assess whether the lower starting amounts of total RNA can be used successfully, a comparison of results from standard t-tests was performed. Based upon the data in Figure 1, we filter out those probe sets that are not detected in at least one of the treatment groups to be compared before performing statistical comparisons. To be conservative, we only eliminated probe sets that are not called "present" on at least half of the GeneChips® in either of the treatment groups (rather than demanding that a probe set be present on all of the GeneChips® in a set; others can choose different fractions), and call this a "detection filter." This does not eliminate probe sets that are either turned on or off, because these would be present in one of the two treatment groups.
Table 6
Average log2 ratios of expression in the amplified samples to that in the control sample (which started with 10 µg RNA) from the same animal. Target range is the position of the target sequence from the 3' end of the transcript.
(Figure 3 schematic: the target sequence and its distance from the 3' end of the transcript.)
Table 7 gives the number of probe sets that met our criteria for significance: they passed the "detection filter" and were significant at p ≤ 0.01 for the t-test or at the appropriate level for the Wilcoxon rank sum test (non-parametric) [13]; both tests give generally parallel results. (Table 7 notes: the number significant is the number of probe sets significantly different between vitamin-sufficient and -deficient animals among those present in at least half of the arrays in at least one of the groups; 1: number of probe sets that passed the detection filter; 2: p-value ≤ 0.01, t-test; 3: p-value ≤ 0.0143, Wilcoxon test; *: the number in parentheses is the number of GeneChips® in each treatment group for the comparison.) Even though both the 0.5 µg and 0.1 µg comparisons are 4 × 4 comparisons, there is a 50% drop in the number of significant probe sets as compared to the 10 µg samples. Only part of this drop can be attributed to the decrease in the percent of probe sets that met the "detection filter" for these samples (Table 7); the extra noise introduced by the amplification, as seen in the 0.5 µg group (cf. Figure 2, Tables 4 and 5), contributes substantially, since the t-test is sensitive to this added within-group variance. Because of the loss of two of the 2 µg arrays, Table 7 also contains data for the subset of six 10 µg arrays that match the six remaining 2 µg arrays. Note the sharp decline in the number of probe sets meeting the p ≤ 0.01 significance criterion (t-test) when the number of samples in the 10 µg class is reduced from a 4 × 4 comparison to a 3 × 3 comparison (Table 7); this attests to the additional power gained by using the additional array. The best (lowest) p-value that can be achieved in the Wilcoxon test with four samples in each treatment group is 0.0143; for three samples per group it is 0.05 [13]. The 2 µg samples also produced a similar number of significant probe sets for the Wilcoxon at a 0.05 p-value as the 10 µg samples with the same number of arrays: 585 for the 2 µg samples and 620 for the corresponding 10 µg samples. These numbers can be compared to the 869 found in the complete set of 10 µg samples (sample size of 4) with a p-value of 0.0571 for the Wilcoxon.
As expected, probe sets with low expression level (low signal) were less reproducible in comparisons between the different sample groups, as were probe sets with very low fold-changes. Reproducibility for the 2 µg samples was best for probe sets with a fold change ≥ 1.7 (log 2 ratio ≥ 0.75). For the amplified samples, good reproducibility was achieved for probe sets with fold changes ≥ 2 (log 2 ratio ≥ 1). Calculated fold changes for the concordant probe sets were reasonably stable across the different groups.
Estimate of technical false positive rate
The 2 µg and 10 µg groups were from the same original RNA extractions and were labeled by the same protocol. Because they were so comparable in all of our measures, these 2 groups were used to estimate the number of false positives due to technical variability to be expected using our standard t-tests. The three 2 µg samples from the normal diet animals were compared to the three 10 µg samples from the same animals using the t-test, and the three 2 µg samples from the diet-deficient animals were similarly compared to the 10 µg samples from the same animals.
Since both of these comparisons are between samples from the same set of similarly treated animals, one should expect no changes. Therefore, any probe sets that were found to be significantly different between the 2 µg and 10 µg samples from the same animals would be false positives. For each comparison, normal and deficient, 14 probe sets were found to be significantly different between the 2 µg and 10 µg samples, which is a false positive rate of 0.4% of the probe sets that met the "detection filter." Of these 28, only 3 of the normal group and 4 of the deficient group had a fold-change greater than 1.5 and only one in each group had a fold-change that exceeded 2. Fold-changes larger than 1.5 were seen only in probe sets with average signals < 900. Those probe sets with signals over 900 had smaller fold changes, most less than 1.3.
In comparison, for those probe sets that did not meet the "detection filter", the false positive rate was approximately 1% at a p-value of 0.01. This set of false positives was equally balanced between increases and decreases (53 increases and 58 decreases in the two treatment groups), indicating random noise and not an effect of using 2 µg instead of 10 µg of RNA. The fold changes ranged from 1.2 to 12.5. The large fold changes result from very small denominators used in the fold change calculations for these probe sets.
Discussion
This experiment demonstrated that technical variability is much smaller than biological variation when using the standard protocol. The number of differences between samples from the same animal that resulted from using the lower amount of starting material was much smaller than the biological variation between animals treated the same and labeled by the standard protocol. The standard protocol worked well with samples as small as 2 µg, producing results very similar to those of the 10 µg sample from the same animal. Although in this experiment starting with less than 1 µg of total RNA did not produce enough biotinylated cRNA for hybridization under the normal protocol, a minor change (using vacuum evaporation to concentrate samples before cRNA synthesis, and mixing the minimum 200 µl hybridization volume) should allow use of 1 µg samples with the standard protocol. This extends the range of samples that can readily be analyzed on Affymetrix GeneChips® using the standard protocol.
We have demonstrated here that a single cycle of amplification sufficed to produce cRNA from samples as low as 0.1 µg of total RNA. The amplification protocol uses the cRNA from the initial protocol as starting material for a second round of cDNA synthesis and in vitro transcription. We hypothesized that each round of cDNA synthesis would lead to some truncation of the molecules, due both to the possibility of priming the second strand from an internal site and to cleavage of the relatively labile RNA. For this reason, we limited our amplification to a single round, rather than using two cycles as had previously been reported [10]. The hypothesized shortening was observed: the cRNA extended to about 850 nt, compared with about 2000 nt for the standard protocol. This shortening was also demonstrated by differential loss of signal from probe sets further away from the 3' end of the RNA, as shown with the Affymetrix control probe sets (Tables 1 and 6), and by the progressive loss in probe sets detected as a function of their distance from the 3' end (Figure 4). The amplification systematically affected over 700 probe sets in at least 7 of 8 samples for both the 0.1 µg and 0.5 µg groups. Indeed, of the probe sets with consistent decreases across the 0.1 µg and 0.5 µg samples, about 90% have target distances greater than 400 nucleotides from the 3' end of the measured transcript, with 26-28% in the 400-600 range and 60-62% over 600. Because the effect of the loss of the 5' end is systematic, it renders a group of probe sets designed from sequences further from the 3' end undetectable. Therefore, we recommend using the standard protocol instead of the amplification strategy for samples down to 1 µg of total RNA, and our simplified amplification protocol for smaller samples, at least down to 0.1 µg. This extends the range of samples that can be usefully analyzed by oligonucleotide microarrays. This experiment used the Affymetrix RGU34A GeneChip®, which was designed using version 34 of UniGene for the rat (November 1998). For newer arrays such as the human U133 GeneChip®, designed using better sequence information and improved probe designs (Affymetrix technical report "Array Design for the GeneChip® Human Genome U133 Set"), we expect that the problem with loss of the 5' end of the transcript will be lessened, but not eliminated, because the targets are more likely to be near the real 3' ends of the mRNAs. The truncation that results from amplifying samples means that the same protocol should be used for all samples in a given study.
New protocols that increase the production of cDNA [14,15] using primers attached to the 5' end of the transcript could increase cDNA yields from small amounts of RNA. But Iscove [14] states that "only a few hundred bases of extreme 3' sequence" are amplified by their method. This procedure would therefore be expected to greatly exacerbate the loss of signal due to shortened transcripts.
There was additional variation in the amplified samples (Table 5) that is probably due to the extra steps required in the protocol. This extra noise is partially responsible for the decrease in the number of probe sets that differ at a p-value ≤ 0.01. This, coupled with the decrease in the percent of probe sets present, reduces the ability to find transcripts that differ significantly in expression when using the amplification protocol. The problem is exacerbated for transcripts with small differences in expression (low fold changes). Fold changes ≥ 2 were much more likely to be identified in the amplified samples. It may be especially helpful to increase the number of arrays used in these amplification experiments to gain more power to detect changes. Our experiment also provided a false positive estimate of technical variability for the t-test, with 14 found in each treatment group when comparing the 10 µg samples to the 2 µg samples (0.4% of the present probe sets). These false positives came disproportionately from the genes expressed at lower levels. Genes expressed at these low levels often show high fold-changes because the denominator is so low (often near background); this points out the danger in emphasizing high fold-changes rather than reproducible changes. Therefore, for genes expressed at lower levels, it might be reasonable to use a more restrictive p-value; none of the false positives had a p-value less than 0.001. Restricting probe sets to those minimally present (our "detection filter") dramatically decreases the number of false positives, from an average of 56 down to 14. Restricting analysis to genes called present in a higher fraction of the arrays from one of the comparison groups could further reduce false positives, but at the cost of missing some true positives. The tradeoff can be chosen by an investigator based upon the relative cost of false positives and the value of detecting differences in genes expressed at low levels.
The statistical power to detect differences was much reduced when 3 samples per group were analyzed instead of 4. The 2 µg samples and the matched 10 µg samples were able to detect 83-86 differences at p ≤ 0.01 as compared to 150 differences when using all of the 10 µg samples. This decrease was expected, but illustrates that a 25% decrease in expense may result in a much greater loss of information.
Conclusions
This experiment explored the effects of using less than the standard 10 µg of total RNA for an Affymetrix GeneChip® experiment, and examined how biological and technical variation affect the ability to detect biological differences in a typical experiment comparing gene expression in two conditions. The overall conclusions are that (1) small amounts of RNA can be used effectively in the standard protocol, (2) even very small amounts of RNA (0.1 µg) can be used with our simplified amplification protocol to detect differential gene expression, (3) biological variation is larger than technical variation, (4) very low-level signals are prone to false positives and to less reliable fold-changes, and false positives can be reduced by filtering out probe sets not reliably detected before statistical comparisons, and (5) using 4 independent biological samples is much better than using 3 samples in allowing detection of consistent changes and reducing false positives.
Labeling test
Total RNA was extracted from the livers of 4 rats fed a normal diet (untreated) and 4 fed a vitamin-deficient diet (treated) using the RNeasy® kit (Qiagen Inc, Valencia, CA). The RNA was resuspended and re-extracted using the same protocol to reduce DNA contamination. For the labeling test, two pools were created, one treated and one untreated, by mixing equal aliquots from each of the 4 RNA samples. The final concentration was adjusted to 1 µg/2 µl, and each pool was divided into 4 aliquots of 1 µg each. These pooled samples were used to determine the average cRNA yield from 1 µg of total RNA and to compare the yields of the T7 polymerases from two different in vitro transcription kits. Biotinylated cRNA was prepared using the standard Affymetrix protocol [8], except that the Epicentre AmpliScribe™ T7 polymerase (Epicentre, Madison, WI) was substituted for the ENZO T7 polymerase (BioArray High Yield RNA Transcript Labeling Kit, ENZO Diagnostics, Inc., Farmingdale, NY) for 2 of the treated and 2 of the untreated pooled samples. The yield of cRNA was estimated from absorbance at 260 nm, using an Amersham Pharmacia Biotech Ultrospec 3100 pro spectrophotometer. This part of the experiment was the only time RNAs from different animals were pooled; the cRNAs from these pooled samples were not hybridized to arrays.
Labeling for microarrays
Aliquots of each of the original 8 samples of total RNA (one from each rat) were treated as individual samples for all hybridization experiments. Each sample was serially diluted to yield 10 µl samples containing 10 µg, 2 µg, 0.5 µg and 0.1 µg total RNA (32 samples). For the 10 µg and 2 µg samples, cRNA was prepared using the standard Affymetrix protocol [8]. We made slight modifications for the 2 µg samples to increase the concentration by decreasing added water where that was possible. Because the cRNA yield from the 1 µg pooled sample test was low, we decided to amplify the smaller samples. The 0.5 µg and 0.1 µg samples were amplified by a modification of the protocol of Ohyama et al. [10], using only a single round of amplification. In short, double-stranded cDNA was synthesized from the total RNA using the SuperScript II kit from Invitrogen and the Affymetrix T7-(dT)24 primer, which contains a T7 promoter attached to a poly-dT sequence: 5'-GGCCAGTGAATTGTAATACGACTCACTATAGGGAGGCGG-(dT)24-3'. The Epicentre AmpliScribe T7 High Yield Transcription kit was used to produce unlabeled cRNA by in vitro transcription with unbiotinylated NTPs. This in vitro transcription step was followed by a second round of double-stranded cDNA synthesis, and finally by in vitro transcription using the T7 RNA polymerase with the Enzo BioArray High Yield RNA Transcript Labeling Kit with biotinylated NTPs in the usual manner. Yields of cDNA and cRNA were measured using an Amersham Pharmacia Biotech Ultrospec 3100 pro spectrophotometer in order to adjust concentrations for subsequent steps. Measurements of the amplified samples were made after the second round of cDNA synthesis and in vitro transcription, to limit the loss of sample. After the final round of cRNA synthesis, aliquots of biotinylated cRNA from each sample were electrophoresed on 1% agarose gels in TBE buffer to check their quality and length; the buffer and gel both contained ethidium bromide at 500 ng/ml. The Invitrogen 1 Kb Plus DNA ladder was used to provide a relative measure of length.
Since it was not feasible to label this many samples at one time, a balanced experimental design was used: groups of 10 µg and 2 µg samples from the same animal were processed together, and an equal number of normal and vitamin-deficient samples was always labeled at one time. For the amplification protocol, the 0.5 µg and 0.1 µg samples from the same animal were processed together for each step, again with balanced numbers of normal and vitamin-deficient samples processed together.
Array hybridization
Each sample was hybridized to a separate Affymetrix RGU34A GeneChip®. For most samples, hybridization cocktails of 300 µl contained 15 µg of fragmented cRNA; 200 µl of the cocktail were injected into the GeneChip® for hybridization. One 2 µg sample had a lower yield, so a 200 µl hybridization cocktail containing 10 µg of cRNA was made to keep the concentration fixed. For the 0.1 µg samples, the cRNA yield was less than 15 µg, so 5 to 7.5 µg of cRNA was used for the hybridization cocktail. Data from two of the 2 µg GeneChips® (1 from each treatment group) and their hybridization cocktails were unusable due to a bad lot of BSA (used as a blocking agent during hybridization). The analyses for the 2 µg group were completed using the remaining 6 GeneChips®. For some comparisons with this group, only the corresponding 6 GeneChips® from the 10 µg samples were used.
Scanning and analysis
Each GeneChip® was scanned and analyzed using Affymetrix Microarray Analysis Suite (MAS) version 5.0 [11]. Each sample was scaled to a target intensity of 1000 using the "all probe sets" scaling option; this option scales the trimmed mean target intensity to the specified value [11]. The Affymetrix MAS5 expression report provides statistics for each chip that can be used for quality control purposes. Included in this report are noise, background, the percent of probe sets called present, the scaling factor calculated by the absolute analysis algorithm, and the ratio of 3' to 5' signal for GAPDH and β-actin. These measures were used to judge the quality and similarity of data from the various RNA sample size groups.
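For illustration, the scaling step can be approximated as computing the factor that brings a trimmed mean signal to the target intensity of 1000. This is only a sketch of the idea, not the MAS5 algorithm itself; in particular, the 2% trim fraction is an assumption.

import numpy as np

def scaling_factor(signals, target=1000.0, trim=0.02):
    s = np.sort(np.asarray(signals, dtype=float))
    k = int(len(s) * trim)                    # assumed trim fraction per tail
    trimmed = s[k:len(s) - k] if k > 0 else s
    return target / trimmed.mean()

signals = np.random.lognormal(mean=5.0, sigma=1.5, size=8799)  # fake chip
sf = scaling_factor(signals)
scaled = signals * sf  # trimmed mean of `scaled` is now ~1000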
MAS5 "absolute" and "comparison" expression analyses were performed [11]. The RGU34A chip contains 8799 probe sets (series of probe pairs that query parts of the same gene or EST) [7]. For each probe set, absolute analysis generates a signal value (expression level), a detection call of absent, present, or marginal, and a p-value associated with the detection call [11]. Comparison analysis examines 2 GeneChips ® and indicates for each probe set whether there is a significant difference in the signal between the two arrays. The output is a change call of increase, marginal increase, decrease, marginal decrease or no change, a p-value associated with the change call, and the magnitude of the difference (as the signal log ratio, the log 2 (log base 2) ratio of the signal from first chip to the signal on the second or baseline chip). Comparisons were made between the vitamin-deficient and normal diet animals in each sample size group using the normal diet samples as the baseline for each comparison. All possible comparisons were made, this equaled 16 comparisons each (4 × 4) for the 10 µg, 0.5 µg and 0.1 µg groups and 9 comparisons for the 2 µg samples. Comparisons were also made between each of the 2 µg, 0.5 µg and 0.1 µg samples and the 10 µg sample from the same animal, to look for apparent differences in gene expression that are really due to the differences in the amount of starting material and sample processing. For these comparisons, the 10 µg sample was used as the baseline.
In addition, comparisons were made between the 10 µg arrays within each treatment group to measure variability, both between-animal and technical. This analysis was done for the 0.5 µg group as well. The 0.1 µg arrays were compared to the 0.5 µg arrays from the same animal, to determine how well these two amplified samples corresponded with one another.
Data from each RNA sample size group were exported and each probe set analyzed in the following ways: 1. The number of GeneChips® on which a probe set was detected within each of the treatment groups (normal vs. deficient diet) was calculated. Each present call was assigned the value 1 and each marginal call was assigned 0.51. Only probe sets that are present in at least one-half of the samples from one of the treatment groups (either normal or deficient diet) were used for further analysis; we call this the "detection filter." Note that we did not require the probe set to be present in both treatment groups, just in one. For this experiment, we required a score of at least 2 for the 10 µg, 0.1 µg and 0.5 µg groups (4 GeneChips® in each treatment group) and 1.51 for the 2 µg and corresponding 6-sample subset of the 10 µg groups (3 GeneChips® in each treatment group).
2. The sum of the calls from the comparison analyses was calculated, using 1 for increase, 0.5 for marginal increase, 0 for no change, -0.5 for marginal decrease, and -1 for decrease.
3. The mean, standard deviation and coefficient of variation of the signal for each probe set within each treatment group (normal and deficient diet) were calculated.
4. Student's t-test for equal means with the assumption of unequal variance [12] was calculated to test for significant differences in signal (expression level) between the normal and vitamin-deficient groups. This test was applied separately to the signals and to the log2 transformation of the signals.
5. The Wilcoxon rank-sum non-parametric test for equal means between normal and deficient diet groups [13] was performed.
6. The log2 ratio (fold change) of the mean signals for the normal vs. deficient diet groups was calculated: log2 ratio = log2((mean signal for normal) / (mean signal for deficient)). Changes within an RNA sample size group between the normal and deficient diet groups were assessed as significant if the probe set passed the "detection filter" (#1 above) and the p-value was less than 0.01 for the t-test using either the original signal or the log-transformed signal (#4 above). A sketch of this filtering and testing pipeline is given below.
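The per-probe-set computations in steps 1, 4, and 6 can be sketched in a few lines of Python. This is a minimal illustration with randomly generated stand-in data, not the original analysis code; the call scores (1, 0.51, 0), the score threshold of 2 (four chips per group), and the p < 0.01 rule follow the text above.

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical per-probe-set data: rows = probe sets, columns = GeneChips.
# `calls` hold detection calls ('P' present, 'M' marginal, 'A' absent).
signal_normal = np.random.lognormal(mean=6, sigma=1, size=(8799, 4))
signal_deficient = np.random.lognormal(mean=6, sigma=1, size=(8799, 4))
calls_normal = np.random.choice(list("PMA"), size=(8799, 4))
calls_deficient = np.random.choice(list("PMA"), size=(8799, 4))

call_score = {"P": 1.0, "M": 0.51, "A": 0.0}
score = np.vectorize(call_score.get)

# Step 1: detection filter -- score >= 2 in at least one treatment group
detected = (score(calls_normal).sum(axis=1) >= 2) | \
           (score(calls_deficient).sum(axis=1) >= 2)

# Step 4: Welch's t-test (unequal variances) on raw and log2 signals
_, p_raw = ttest_ind(signal_normal, signal_deficient, axis=1, equal_var=False)
_, p_log = ttest_ind(np.log2(signal_normal), np.log2(signal_deficient),
                     axis=1, equal_var=False)

# Step 6: log2 ratio of group mean signals (fold change)
log2_ratio = np.log2(signal_normal.mean(axis=1) / signal_deficient.mean(axis=1))

# Significance rule from the text: passes the detection filter and
# p < 0.01 on either the raw or the log-transformed signal.
significant = detected & ((p_raw < 0.01) | (p_log < 0.01))
print(significant.sum(), "probe sets flagged; example log2 ratio:", log2_ratio[0])
```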
Exposure–response analyses for the MET inhibitor tepotinib including patients in the pivotal VISION trial: support for dosage recommendations
Purpose Tepotinib is a highly selective MET inhibitor approved for treatment of non-small cell lung cancer (NSCLC) harboring METex14 skipping alterations. Analyses presented herein evaluated the relationship between tepotinib exposure, and efficacy and safety outcomes. Methods Exposure–efficacy analyses included data from an ongoing phase 2 study (VISION) investigating 500 mg/day tepotinib in NSCLC harboring METex14 skipping alterations. Efficacy endpoints included objective response, duration of response, and progression-free survival. Exposure–safety analyses included data from VISION, plus four completed studies in advanced solid tumors/hepatocellular carcinoma (30–1400 mg). Safety endpoints included edema, serum albumin, creatinine, amylase, lipase, alanine aminotransferase, aspartate aminotransferase, and QT interval corrected using Fridericia’s method (QTcF). Results Tepotinib exhibited flat exposure–efficacy relationships for all endpoints within the exposure range observed with 500 mg/day. Tepotinib also exhibited flat exposure–safety relationships for all endpoints within the exposure range observed with 30–1400 mg doses. Edema is the most frequently reported adverse event and the most frequent cause of tepotinib dose reductions and interruptions; however, the effect plateaued at low exposures. Concentration-QTc analyses using data from 30 to 1400 mg tepotinib resulted in the upper bounds of the 90% confidence interval being less than 10 ms for the mean exposures at the therapeutic (500 mg) and supratherapeutic (1000 mg) doses. Conclusions These analyses provide important quantitative pharmacologic support for benefit/risk assessment of the 500 mg/day dosage of tepotinib as being appropriate for the treatment of NSCLC harboring METex14 skipping alterations. Registration Numbers NCT01014936, NCT01832506, NCT01988493, NCT02115373, NCT02864992. Supplementary Information The online version contains supplementary material available at 10.1007/s00280-022-04441-3.
Introduction
Tepotinib is an oral, highly selective MET inhibitor approved in Brazil, Canada, Great Britain, Japan, Switzerland, Taiwan, and the USA for the treatment of patients with unresectable, advanced or metastatic non-small cell lung cancer (NSCLC) and MET exon 14 (METex14) skipping alterations. Recent guidelines for the treatment of NSCLC recommend tepotinib as a preferred first-line monotherapy option for patients with metastatic NSCLC and METex14 skipping alterations [1,2].
MET is a tyrosine kinase receptor expressed by epithelial cells, neurons, hepatocytes, and hematopoietic cells [3,4]. Activation by the ligand, hepatocyte growth factor (HGF), induces MET receptor dimerization and phosphorylation of tyrosine residues in the cytoplasmic tail of the receptor that engages with intracellular signaling pathways [3,4]. Mutations in the MET splicing regions for exon 14 can lead to exon 14 skipping and the resulting translation of a shortened MET receptor, which lacks the juxtamembrane domain of the cytoplasmic tail [3][4][5]. The resulting aberrant HGF-MET signaling is involved in oncogenesis, promoting tumor proliferation, invasive growth, and angiogenesis.
Clinical evaluation of tepotinib 500 mg/day in patients with advanced NSCLC and confirmed METex14 skipping alterations is continuing with the ongoing phase 2 VISION study (NCT02864992) [6]. In the primary analysis of the VISION study, which assessed efficacy in patients with ≥ 9 months' follow-up as of January 1, 2020, the objective response (OR) rate (by independent review) was 46% (95% confidence interval [CI] 36, 57) and the median duration of response (DOR, based on Kaplan-Meier [KM] analysis) was 11.1 months (95% CI 7.2, not estimable) [6]. In this study, 28% of all patients receiving tepotinib had grade ≥ 3 treatment-related adverse events (AEs) and 11% of patients had treatment-related AEs that led to permanent discontinuation of treatment. Treatment-related peripheral edema was the most common grade ≥ 3 toxicity, occurring in 7% of patients, and leading to treatment discontinuation in 5% of patients.
Tepotinib has shown activity in preclinical models of cancer [7][8][9][10] and promising anti-cancer activity in patients with MET-driven tumors [11][12][13]. In NSCLC with MET amplification-driven resistance to epidermal growth factor receptor (EGFR) inhibitors, the combination of tepotinib plus gefitinib showed anti-tumor activity in the INSIGHT study [13], and clinical activity of tepotinib plus osimertinib is being evaluated in the INSIGHT 2 study (NCT03940703).
The maximum tolerated dose of tepotinib was not reached in the first-in-human dose-ranging study, which evaluated the safety profile of tepotinib at doses between 30 and 1400 mg/day [11]. A tepotinib dose of 500 mg/day (as hydrochloride hydrate, equivalent to 450 mg/day tepotinib free base) was subsequently established as the recommended phase 2 dose, using a translational modeling approach that integrated clinical and non-clinical pharmacokinetic (PK) and tumor pharmacodynamic (PD) data, and non-clinical efficacy data [14]. This model suggested that a once-daily tepotinib dose of 500 mg would achieve plasma concentrations at or above the PD threshold of close-to-complete tumor phospho-MET inhibition (≥ 95%) in at least 90% of patients.
The aims of the present analyses were to evaluate the relationship between tepotinib exposure, and efficacy and safety outcomes following tepotinib administration. The relationship between the exposure of the major circulating human metabolite MSC2571109A and safety outcomes was also evaluated. MSC2571109A is not thought to contribute to the efficacy of tepotinib, based on preclinical PK/efficacy and clinical PK profiling. The influence of potential covariates on exposure-efficacy relationships (OR, DOR, and progression-free survival [PFS]) was evaluated in patients with NSCLC harboring METex14 skipping alterations from the VISION study [6]. Taken together, these analyses are intended to provide a holistic view of the benefit/risk profile across the attained exposure range, confirming the relevance of the clinical dose in the target population.
Exposure-efficacy analyses
Study design and patient population VISION (NCT02864992) is an ongoing, multicenter, phase 2, single-arm study of patients with histologically or cytologically confirmed advanced (stage IIIB/IV) NSCLC with measurable disease (confirmed by independent review committee per Response Evaluation Criteria in Solid Tumors [RECIST] 1.1) and METex14 skipping alterations (cohorts A and C) or MET amplification (cohort B), based on liquid or tumor biopsy [6]. All patients are receiving oral tepotinib 500 mg once daily until disease progression, death, or undue toxicity. The primary endpoint is OR assessed by an independent review committee; secondary endpoints include investigator-assessed OR, DOR, and PFS. Overall survival is also a secondary endpoint, but this endpoint was not included in the current analyses. The present exposure-efficacy analyses were based on all patients from cohort A who had completed a minimum of 9 months' follow-up from start of treatment at the time of data cut-off (July 1, 2020) [15].
Analyses
The relationship between tepotinib exposure and the efficacy outcomes, OR, DOR, and PFS was evaluated using tepotinib 24-h area under the concentration-time curve at steady state (AUC τ,ss ) as the exposure metric. Individual tepotinib AUC τ,ss was predicted from a tepotinib population PK model [14]. Other exposure metrics, AUC 0-24 on day one of treatment and mean daily AUC until the first confirmed best overall response, were explored in an earlier analysis version based on a subset of the patients, but did not result in meaningful differences in the exposure-response association (data on file).
Efficacy endpoints were stratified by tepotinib exposure quartile. The relationship between OR and AUC τ,ss was examined graphically by estimating OR and the corresponding 2-sided exact Clopper-Pearson 95% CIs for each quartile of tepotinib AUC τ,ss . Relationships between tepotinib exposure and DOR or PFS were visualized using KM curves stratified by exposure quartile. The influence of covariates and tepotinib exposure AUC τ,ss on the OR was assessed using the full fixed effects model approach [16].
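As a rough illustration of the quartile-wise response-rate summary, the exact Clopper-Pearson interval can be computed from the beta distribution. The sketch below uses hypothetical responder counts per exposure quartile and is not the study code.

```python
import numpy as np
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Two-sided exact (Clopper-Pearson) CI for a binomial proportion."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Hypothetical responders / patients in each AUC quartile (Q1..Q4)
quartiles = {"Q1": (16, 37), "Q2": (17, 36), "Q3": (16, 37), "Q4": (17, 36)}
for q, (k, n) in quartiles.items():
    lo, hi = clopper_pearson(k, n)
    print(f"{q}: ORR = {k / n:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```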
Study design and patient population
The data for safety analyses were based on all patients who received 30-1400 mg/day tepotinib monotherapy in four completed studies and the ongoing VISION study (Table 1). These studies included two phase 1 dose-finding studies in patients with advanced solid tumors (referred to herein as studies 001 and 003) [11,12]; two phase 1b/2 studies that were conducted in patients with advanced hepatocellular carcinoma (HCC) (studies 004 and 005) [17,18], and the ongoing phase 2 VISION study in patients with advanced NSCLC and METex14 skipping alterations or MET amplification (study 022) [6]. All completed studies used the data cut-off for the final analyses and the VISION study used the January 1, 2020 data cut-off. Data from serial samples were also assessed to evaluate the time course of change in serum creatinine in healthy volunteers following a single dose of tepotinib 500 mg (study 007) [19].
Analyses
Safety endpoints for the identified risks assessed were: edema (time-to-first event and maximum severity grade, based on a composite endpoint that included the terms face edema, edema, edema peripheral, localized edema, edema genital, periorbital edema, scrotal edema, peripheral swelling, and abdominal wall edema), serum albumin, creatinine, amylase, lipase, alanine aminotransferase (ALT), aspartate aminotransferase (AST), and QTc interval. The relationship between tepotinib exposure and edema was evaluated using different exposure metrics: mean tepotinib AUC 0-24 in the week before the edema event or mean AUC 0-24 until the event for the visual exploratory analyses, and time-varying daily AUC 24h for the edema time-to-event (TTE) and longitudinal albumin modeling. The relationships between exposure of tepotinib or MSC2571109A, and grade ≥ 3 AE (as defined by Common Terminology Criteria for Adverse Events [CTCAE] v4.03 [20]), treatment discontinuation due to an AE, and dose reduction due to an AE were also evaluated.
An exploratory graphical analysis was performed for each safety endpoint of interest to evaluate the potential association with tepotinib exposure quartile. The relationships between exposure quartile of tepotinib or MSC2571109A and grade ≥ 3 AE, treatment discontinuation due to an AE, and dose reduction due to an AE were also evaluated.
KM plots, boxplots, or spaghetti plots of the safety endpoints, stratified by tepotinib exposure quartiles, were visually inspected to determine the feasibility of a model-based analysis.
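A minimal sketch of this kind of stratified Kaplan-Meier inspection, using the lifelines package and entirely hypothetical event times and exposure quartiles, might look as follows.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "time_days": rng.exponential(120, n),   # hypothetical time to first event
    "event": rng.integers(0, 2, n),          # 1 = event observed, 0 = censored
    "auc_quartile": pd.qcut(rng.lognormal(3, 0.4, n), 4,
                            labels=["Q1", "Q2", "Q3", "Q4"]),
})

ax = plt.subplot(111)
kmf = KaplanMeierFitter()
for q, grp in df.groupby("auc_quartile", observed=True):
    kmf.fit(grp["time_days"], event_observed=grp["event"], label=str(q))
    kmf.plot_survival_function(ax=ax)  # one curve per exposure quartile
plt.xlabel("Days since first dose")
plt.ylabel("Event-free probability")
plt.show()
```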
Time-to-event model for edema
The occurrence of the first edema event was described using a TTE model. Exponential (constant hazard), Weibull, and Gompertz (hazard changes over time) distributions were tested, and the impact of drug exposure (time-varying AUC 24h) on the hazard was modeled according to Eq. 1, where h 0 (t) is the base hazard. Covariates (risk factors) acting on the base hazard were evaluated as shown in Eq. 2, where EFF drug is the drug effect and θ i is the coefficient describing the impact of covariate (risk factor) cov i.
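Equations 1 and 2 did not survive text extraction. A plausible reconstruction from the surrounding definitions, assuming the usual multiplicative (proportional-hazards-style) parameterization, is:

$$h(t) = h_0(t)\cdot \mathrm{EFF}_{\mathrm{drug}}(t) \quad (1)$$

$$h(t) = h_0(t)\cdot \mathrm{EFF}_{\mathrm{drug}}(t)\cdot \exp\Big(\sum_i \theta_i\,\mathrm{cov}_i\Big) \quad (2)$$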
Covariates tested in the edema TTE model were sex, age, body weight, race, tumor type, number of lesions, Eastern Cooperative Oncology Group performance status (ECOG PS) score, metastatic status, number of prior systemic anticancer therapies in the locally advanced/metastatic setting, concomitant diuretic use, and creatinine clearance. The covariate effect was illustrated using Forest plots after 100 bootstraps.
Serum albumin model
The time course of the changes in serum albumin was modeled using an indirect response model, assuming zero-order production and first-order degradation. The impact of tepotinib was assumed to affect the zero-order production rate constant (k in) of albumin. The structural model is described in Eqs. 3 and 4.
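Equations 3 and 4 were likewise lost in extraction. A standard indirect-response reconstruction, consistent with the symbol definitions that follow (inhibition of production, first-order loss, and a baseline fixed by the production/degradation ratio), would be:

$$\frac{d\,\mathrm{Albumin}}{dt} = k_{\mathrm{in}}\cdot\big(1-\mathrm{EFF}_{\mathrm{drug}}(t)\big) - k_{\mathrm{out}}\cdot \mathrm{Albumin} \quad (3)$$

$$\mathrm{Albumin}(0) = \mathrm{Albumin}_{\mathrm{Baseline}} = \frac{k_{\mathrm{in}}}{k_{\mathrm{out}}} \quad (4)$$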
k out is the first-order degradation rate constant of albumin, and Albumin Baseline is the baseline albumin concentration. Covariates considered in the exposure-serum albumin model were body weight, body mass index, AST, bilirubin, hematocrit, erythrocyte count, hemoglobin, and albuminuria/proteinuria.
Drug effect
The exposure metrics for tepotinib and MSC2571109A, used in the graphical analyses and in model-based analysis, were derived from a tepotinib population PK model [14], including individual predictions of longitudinal, time-varying AUC 24h (for the edema TTE and albumin modeling), and steady-state AUC (AUC τ,ss ), AUC 24h immediately preceding the event, the time-averaged AUC 24h during the week prior to the event, or the time-averaged AUC 24h during the 2 weeks prior to the event (for the graphical analyses). Linear, log-linear, and E max (or I max ) models of drug effect (EFF drug ) were evaluated in the exposure-response models, as indicated by the exploratory graphical analysis (Eqs. 5-8).
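The drug-effect forms referenced as Eqs. 5-8 are also missing from the extracted text. With the symbols defined in the next paragraph, the standard linear, log-linear, E max, and I max parameterizations would read:

$$\mathrm{EFF}_{\mathrm{drug}} = \mathrm{Slope}_{\mathrm{drug}}\cdot \mathrm{AUC}_x \quad (5)$$
$$\mathrm{EFF}_{\mathrm{drug}} = \mathrm{Slope}_{\mathrm{drug}}\cdot \log(\mathrm{AUC}_x) \quad (6)$$
$$\mathrm{EFF}_{\mathrm{drug}} = \frac{E_{\max}\cdot \mathrm{AUC}_x}{\mathrm{AUC}_{50} + \mathrm{AUC}_x} \quad (7)$$
$$\mathrm{EFF}_{\mathrm{drug}} = \frac{I_{\max}\cdot \mathrm{AUC}_x}{\mathrm{AUC}_{50} + \mathrm{AUC}_x} \quad (8)$$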
Slope drug is the slope of the linear drug-response relationship, AUC x is the individually predicted tepotinib area-under-the-curve exposure metric, E max is the maximum effect, I max is the maximum inhibition, and AUC 50 is the AUC x at half the maximum effect.
Covariate modeling
A stepwise covariate model (SCM) building procedure was performed with a forward inclusion phase and a backward elimination phase [21]. The forward selection p-value was set to 0.01 and the backward elimination p-value to 0.001. Adaptive scope reduction (ASR) [22] was added to the model to reduce the defined search scope during the forward search (see Supplementary Materials for additional information). Continuous covariate relationships were coded as linear (for logit-transformed parameters) (Eq. 9a) or power models (Eq. 9b), and categorical covariates were coded as a fractional difference to the most common category (Eq. 9c), where Cov ref is a reference covariate value for covariate m, to which the covariate model is normalized (usually the median or mode). Serial-sampled serum creatinine data from study 007 were used to illustrate the time profile of serum creatinine.
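Equations 9a-c themselves are missing from the extracted text. Plausible standard forms consistent with the description above (an assumption rather than the authors' exact notation, with P_i an individual parameter and θ_pop its population value) are:

$$P_i = \theta_{\mathrm{pop}}\cdot\big(1 + \theta_m\cdot(\mathrm{Cov}_m - \mathrm{Cov}_{\mathrm{ref}})\big) \quad (9a)$$
$$P_i = \theta_{\mathrm{pop}}\cdot\left(\frac{\mathrm{Cov}_m}{\mathrm{Cov}_{\mathrm{ref}}}\right)^{\theta_m} \quad (9b)$$
$$P_i = \theta_{\mathrm{pop}}\cdot\big(1 + \theta_m\cdot \mathbb{1}[\mathrm{Cov}_m \neq \text{most common category}]\big) \quad (9c)$$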
Concentration-QTc analysis
PK time-matched electrocardiograms (ECGs) collected in studies 001, 003, 004, and 005 (tepotinib multiple doses ranging from 30 to 1400 mg) contributed to an integrated concentration-QT interval corrected using Fridericia's formula (QTcF) analysis [23]. A second concentration-QTcF analysis was performed on centrally read, PK time-matched, triplicate 12-lead digital ECGs collected in cohort A of VISION (tepotinib 500 mg/day). In both analyses, concentrations of tepotinib and its metabolite MSC2571109A were evaluated as the predictor using linear mixed effects models in SAS (v7, Cary, NC). See Supplementary Methods for additional details.
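The SAS models are not reproduced here, but an analogous concentration-QTcF mixed model, with a random intercept per patient and fully hypothetical data, can be sketched in Python with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_pat, n_obs = 60, 4
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_pat), n_obs),
    # hypothetical concentrations in the Cmax-range of interest (ng/mL)
    "conc_ng_ml": rng.lognormal(6.5, 0.5, n_pat * n_obs),
})
# Hypothetical shallow concentration effect (~3 ms per 1000 ng/mL) plus noise
df["dQTcF_ms"] = 0.003 * df["conc_ng_ml"] + rng.normal(0, 6, len(df))

# Linear mixed effects model: fixed slope for concentration,
# random intercept per patient
fit = smf.mixedlm("dQTcF_ms ~ conc_ng_ml", df, groups=df["patient"]).fit()
print(fit.summary())

# Predicted mean dQTcF at a concentration of interest (e.g., geometric mean Cmax)
b0, b1 = fit.params["Intercept"], fit.params["conc_ng_ml"]
print("Predicted mean dQTcF at 1000 ng/mL:", b0 + b1 * 1000)
```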
Exposure-efficacy analyses
A total of 146 patients with NSCLC harboring METex14 skipping alterations who received tepotinib 500 mg/day in the pivotal phase 2 VISION study were included in this analysis. The characteristics of patients from the VISION study, according to tepotinib exposure quartile, are shown in Supplementary Table S1. Covariates were generally balanced across tepotinib AUC τ,ss exposure quartiles, with the exception of minor trends towards a lower body weight (< 10% relative difference vs overall mean) and a higher proportion of females (~ 10% difference vs overall mean) with increasing tepotinib exposure quartile. Mean tepotinib AUC τ,ss in the overall population was 25.3 µg·h/mL (range 4.7-51.1 µg·h/mL).
All 146 patients were included in the exposure-efficacy analyses for OR and PFS, and the 66 patients who attained a response were included in the exposure-efficacy analyses for DOR. Graphical analysis indicated that increasing tepotinib exposure was not associated with higher OR according to investigator and independent assessments, with OR 95% CIs that overlapped across exposure quartiles (Fig. 1a). Increasing tepotinib exposure was also not associated with DOR or PFS as assessed by independent evaluation (Fig. 1b-c).
Exposure-safety analyses
A total of 499 patients from five clinical trials who received multiple doses of tepotinib monotherapy ranging from 30 to 1400 mg/day were included in the exposure-safety analyses (Supplementary Table S2).
Edema and serum albumin
Of the 499 patients in the pooled safety analysis set, 239 patients (47.9%) had at least one edema event. KM analysis of edema incidence indicates a longer time-to-first edema event within the lowest tepotinib AUC 24h quartile (0.05-12.1 µg·h/mL) relative to tepotinib AUC 24h > 12.1 μg·h/mL (i.e., quartiles 2-4) (Fig. 2a). The distribution of tepotinib exposure (defined as mean AUC 24h during the week prior to the edema event) was similar across all edema severity grades (Fig. 2b). This observation also remained consistent when mean tepotinib AUC 24h up to the time of the event and mean tepotinib AUC 24h during the 2 weeks prior to the edema event were employed as metrics of tepotinib exposure (data not shown). A model-based evaluation of the relationship between tepotinib exposure and the first occurrence of edema was performed using a TTE model (Supplementary Table S3). A constant hazard (exponential distribution) was found to provide the best description of the base hazard. Tepotinib exposure, expressed as time-varying AUC 24h, did not have a discernible impact on the hazard, with a drop in objective function value (OFV) of less than 1.6 for all tested exposure-response models. A visual predictive check of the base model confirmed that it adequately described the probability of edema during tepotinib treatment (Supplementary Fig. S1). The final TTE model also revealed that advanced age was associated with an increased risk of edema, independent of tepotinib exposure. The median hazard ratio for risk of edema was estimated to be 1.3 (90% CI 1.2, 1.5) for a 75-year-old patient relative to a typical reference 66-year-old patient (median age in the analysis population) (Fig. 2c). There was no discernible association between any other variables and risk of edema (a full list of the variables included in the TTE model is provided in Supplementary Table S2).
Median baseline serum albumin concentration was 38.4 g/L (range 16.0-72.0 g/L). There was a trend toward decreasing serum albumin concentrations with time in all studies. The time course of serum albumin concentration was described using an indirect response model, with tepotinib exposure-related inhibition of the formation of albumin (i.e., an exposure-related decrease in the formation rate constant k in in a Michaelis-Menten fashion). The visual predictive check plot of the indirect response model across the full population is shown in Fig. 2d, with parameter estimates in Supplementary Table S4. The estimated exposure at half-maximal inhibition represents 1% of the AUC τ,ss at clinical dose, suggesting that the time course of the effect on albumin was likely driven by the turnover rate of albumin, rather than by an accumulation of tepotinib exposure.
The association between change in serum albumin levels and risk of edema was graphically evaluated. The risk of edema within 72 days of initiating treatment appeared to be slightly lower in patients within the highest quartile of baseline serum albumin (> 41 g/L) (Fig. 2e). However, there was no clear association between the time to the first edema event and the mean serum albumin concentrations (data not shown). The magnitude of decrease in serum albumin over time also appeared to be positively associated with maximum severity of edema (Fig. 2f). This trend was apparent when change in serum albumin was assessed based on all observations up to the time of the most severe edema events (as shown in Fig. 2f), and when change in serum albumin was based on all reported serum albumin observations (data not shown). However, there was no clear association between baseline albumin concentration and the severity of edema. It is important to note that there was substantial variability in serum albumin levels at baseline and that approximately 25% of patients had baseline albumin concentrations that were lower than the lower limit of the normal range.

[Fig. 1 caption, continued: Bars are exact Clopper-Pearson 95% CIs; points are observed OR per AUC quartile (dark gray, independent evaluation; light gray, investigator review). In panels b and c, shaded areas represent 95% confidence intervals. AUC τ,ss, area under the curve at steady state; CI, confidence interval; NSCLC, non-small cell lung cancer; OR, objective response; ORR, objective response rate; PFS, progression-free survival]

[Fig. 2 caption: Relationship between tepotinib AUC 24h quartile and edema events and change in serum albumin levels. Panel a: time-to-first edema event stratified by tepotinib AUC 24h quartile on the day of the edema event or day of censoring. Panel b: distribution of mean tepotinib AUC 24h during the week prior to an edema event, by maximum edema severity per participant. Panel c: impact of age on the predicted risk of edema from the final TTE model (hazard ratios relative to a typical 66-year-old participant; whiskers, 90% CI from 100 bootstrap datasets). Panel d: visual predictive check of the indirect response model of serum albumin with an inhibitory effect of tepotinib exposure on albumin formation; in panels a and d, shaded areas are 95% CIs, and observed medians and percentiles are shown with model-predicted intervals. Panel e: Kaplan-Meier analysis of time-to-first edema event stratified by quartiles of baseline serum albumin. Panel f: mean change from baseline serum albumin according to edema severity. AUC 24h, 24-h area under the curve; CI, confidence interval; TTE, time-to-event]
Serum creatinine
Graphical analysis indicates a consistent trend of increasing serum creatinine concentration over time, which reached a plateau with continued tepotinib exposure (Fig. 3a). The maximum increase in serum creatinine was on average approximately 30 μmol/L and appeared to saturate at a tepotinib AUC 24h of approximately 10 μg·h/mL (representing 45% of the tepotinib AUC τ,ss at clinical dose), or an MSC2571109A AUC 24h of approximately 5 μg·h/mL (representing 66% of the metabolite AUC τ,ss at clinical dose) (Fig. 3b and Supplementary Fig. S2). To explore the reversibility of the increase in serum creatinine following tepotinib administration, serial-sampled serum creatinine levels were assessed following single-dose administration of tepotinib in healthy volunteers (Fig. 3c). The time course suggests a rapid reversal of serum creatinine changes, returning toward baseline concentrations approximately 10 h following a single dose.
QTc interval
Linear mixed effects modeling was used to quantitatively assess the effect of tepotinib concentration on QTcF. The model-based regression line of the population mean ΔQTcF and its two-sided 90% CIs, obtained by bootstrapping of 1000 datasets, is shown in Fig. 4. There was a slight increase in ΔQTcF with increasing tepotinib exposure. The upper bounds of the 90% CIs of the predicted mean ΔQTcF were 3.57 ms at the observed geometric mean steady-state C max at the proposed clinical dose of 500 mg, and 7.54 ms at the geometric mean steady-state C max at the highest administered dose of 1400 mg (Supplementary Table S5). The impact of MSC2571109A was assessed in patients for whom matched tepotinib and MSC2571109A concentrations and ΔQTcF data were available, using multivariate regression. At the proposed tepotinib clinical dose of 500 mg, the mean predicted ΔQTcF was 3.1 ms at a tepotinib C max of 1000.2 ng/mL and an MSC2571109A C max of 319.3 ng/mL. At a tepotinib dose of 1000 mg, the mean predicted ΔQTcF was 5.2 ms at a tepotinib C max of 1199.4 ng/mL and an MSC2571109A C max of 384.4 ng/mL. The upper bound of the 90% CIs of the predicted mean ΔQTcF at the observed geometric mean steady-state C max was 4.3 ms at the proposed clinical dose of 500 mg, and 6.8 ms at the highest administered dose of 1000 mg.
Similarly, an additional QTc analysis of tepotinib at the clinical dose of 500 mg in cohort A of the VISION study (N = 107 patients) showed that the upper bound of the 90% confidence interval of the estimated population mean ΔQTcF was 7.9 ms (Supplementary Table S5). Similar results were obtained in VISION cohort A with matched tepotinib and MSC2571109A concentrations; timepoint and categorical analyses for both the integrated population and the VISION cohort did not show any clinically significant changes (Supplementary Tables S6-9).
Lipase, amylase, ALT, and AST
Trends toward a treatment-emergent increase in amylase, and transient increases in AST and ALT, were noted. There was no discernible association between tepotinib exposure and the median observed increases, or the median relative change from baseline, for lipase, amylase, ALT, or AST (data not shown).
Severe AEs/dose reductions or treatment interruption due to AEs
In updated safety analyses of the VISION study, comprising all patients in cohorts A and C who received tepotinib by July 1, 2020 (N = 255), treatment-related AEs led to dose reductions in 71 patients (27.8%), to temporary treatment discontinuations in 90 patients (35.3%), and to permanent treatment discontinuation in 27 patients (10.6%) [24]. The median dose intensity corresponded to 99.6% of the target dose intensity. There was no clear association between tepotinib or MSC2571109A AUC τ,ss and grade ≥ 3 AE, or dose reduction, treatment interruption, or permanent treatment discontinuation due to an AE (Fig. 5 and Supplementary Figs. S3 and S4). Furthermore, there is no indication that exposure to MSC2571109A is a more accurate predictor of safety endpoints than tepotinib exposure.

[Fig. 4 caption: Relationship of ΔQTcF interval versus tepotinib plasma concentration. The model-derived predicted population ΔQTcF from baseline is shown as a continuous blue line, with two-sided 90% bootstrapped confidence limits as broken lines, for pooled study patients. Vertical red lines mark the geometric mean C max at steady state at the 500 mg and 1400 mg dose levels; brown horizontal lines mark the regulatory threshold of potential concern (10 ms) and an additional 20 ms reference line as a threshold of potential clinical relevance for oncology drugs. Open symbols are observed data. CI, confidence interval; QTcF, QT interval corrected using Fridericia's formula]
Discussion
Rational dose selection and pharmacologic contextualization of the benefit/risk profile of the recommended clinical dosage, including dose modifications for treatment-emergent toxicities, are critical components of anticancer drug development [25,26]. This is particularly crucial in the development of molecularly targeted agents, where dosing at or near the maximum tolerated dose without appropriate pharmacologic contextualization can compromise the overall benefit/risk profile due to poor long-term tolerability [27,28]. This raises important opportunities for PD biomarker and PK/PD model-informed approaches to rational dose selection [29,30]. Tepotinib, a highly selective inhibitor of the MET receptor tyrosine kinase, was developed using a fully biomarker-driven and model-informed approach to dose selection in early development, with the recommended phase 2 dose of 500 mg/day selected to provide sustained maximal target inhibition in tumor tissue, based on integrated modeling of preclinical PK/PD relationships, clinical PK, and tumor PD data evaluating inhibition of tumor MET phosphorylation in the first-in-human study [14,31]. Efficacy and overall benefit/risk of the 500 mg/day dosage for the treatment of NSCLC harboring METex14 skipping alterations have been demonstrated in the pivotal phase 2 VISION trial [6]. Herein, we report exposure-response analyses of the efficacy of tepotinib in the VISION trial in patients with NSCLC with METex14 skipping alterations, and integrated exposure-safety analyses for key safety/tolerability outcomes across multiple clinical studies of tepotinib, aimed at quantitative pharmacologic contextualization of the benefit/risk profile of the recommended clinical dosage.
Clinical efficacy
Graphical and model-based analyses suggest a flat exposure-efficacy relationship for OR, DOR, and PFS in the VISION study. This is in agreement with our dose selection rationale that the 500 mg/day regimen of tepotinib is expected to achieve close-to-complete (≥ 95%) intra-tumoral phospho-MET inhibition in the majority (> 90%) of treated patients [14], rendering clinical efficacy independent of individual factors that may influence exposure. At a reduced tepotinib dose of 250 mg/day, which is recommended to manage AEs, targeted sustained nearly-complete MET inhibition (≥ 95%) would still be expected in ≥ 80% of patients. The 90% prediction interval of AUC τ,ss at the 250 mg/day dose (8.1-26.9 µg·h/mL) falls within the observed tepotinib AUC range achieved in the VISION study, in which a flat exposure-efficacy relationship was observed. These data, therefore, indicate that efficacy would be maintained in patients who require a temporary dose reduction to 250 mg/day for the management of AEs. This is also supported by the observation that patients with dose reductions in the VISION study remained on treatment and continued to benefit from tepotinib for prolonged periods [32]. However, the majority of AEs reported in patients receiving tepotinib in clinical trials to date did not require dose modification.
Edema
The most frequently reported AE for tepotinib is edema. While graphical analyses indicated that the risk of edema appeared to be lower in patients with tepotinib exposures within the lowest quartile, no readily apparent association between tepotinib exposure and edema grade was observed, and model-based analysis did not identify a discernible relationship between tepotinib exposure and the risk of edema. In summary, development of edema is clearly associated with the administration of tepotinib, but the effect seemed to plateau at low tepotinib exposures and the underlying exposure-response relationship therefore could not be fully quantified in the present analyses. Edema was also the most frequent cause of dose reductions and treatment interruptions in the VISION study, with a median time-to-first onset of 7.9 weeks (range 0.1-58.3) [6,24]. The present TTE model indicated that advanced age was associated with an increased risk of edema, independent of tepotinib exposure, and consistent with an age of > 70 years typically seen in patients with METex14 skipping alterations [33].
Edema is a commonly reported AE among patients receiving MET inhibitors [34][35][36], suggesting that the underlying pathology is possibly a target-mediated effect. Some evidence points to a role for the MET/PI3k/Akt pathway and the MET ligand, HGF, in the modulation of endothelial permeability [37,38]. Inhibition of the HGF/MET signaling axis may, therefore, lead to a reduction in the integrity of the endothelial barrier and subsequent fluid accumulation and edema. Therefore, it is not surprising that both clinical efficacy and development of edema follow the same flat exposure-response relationship in this MET-driven tumor indication. This also suggests that temporary treatment interruptions, rather than dose reduction, may be a more effective approach to managing patients who develop edema.
[Fig. 5 caption: Relationship between tepotinib exposure and grade ≥ 3 adverse event, and dose reduction due to an adverse event. Panel a: Kaplan-Meier analysis of time-to-first grade ≥ 3 AE stratified by tepotinib exposure quartile. Panel b: Kaplan-Meier analysis of time-to-first dose reduction due to an AE stratified by tepotinib exposure quartile. Shaded areas represent 95% confidence intervals. AE, adverse event; AUC τ,ss, area under the curve at steady state]
Serum albumin
A trend towards decreasing serum albumin concentrations over time was noted in patients receiving tepotinib, with a decrease of 26% at steady state. Model-based analysis suggests that the inhibitory effect saturates at low exposure of tepotinib. Furthermore, both the risk and severity of edema were associated with serum albumin, with high baseline albumin likely providing some protection against early development of edema. There was also an apparent trend for positive association between magnitude of decrease in serum albumin and maximum severity of edema, with more severe edema seen in patients with the greatest reduction in serum albumin. This highlights an opportunity for further analyses to quantitatively evaluate the link between time course of changes in albumin and time course of edema. However, both the risk of edema and effect on serum albumin appeared to plateau at low exposure of tepotinib.
The underlying mechanism(s) are poorly understood. Treatment-emergent hypoalbuminemia has also been observed with other MET inhibitors [39,40]. Owing to albumin's physiologic role in maintaining oncotic pressure, decreases in serum albumin may be an independent factor in edema pathogenesis.
Serum creatinine
Tepotinib treatment was associated with, on average, an approximately 30 μmol/L maximum increase in serum creatinine levels. This increase in serum creatinine plateaued with time and continued drug exposure, and was found to be reversible, based on data from healthy volunteers receiving single-dose administration. No other laboratory or clinical findings suggested a relationship to kidney injury.
A potential explanation for the increase in serum creatinine is that tepotinib or MSC2571109A inhibits the elimination of creatinine through inhibition of the organic cation transporter 2 (OCT2) or the multidrug and toxin extrusion (MATE) transporters. At clinical doses, tepotinib reaches a steady-state free peak plasma concentration of 0.05 μM whilst inhibiting MATE1 with an IC 50 of 3.6 μM and MATE2 with an IC 50 of 1.1 μM, and MSC2571109A reaches a steady-state free peak concentration of 0.01 μM whilst inhibiting OCT2 with an IC 50 of 0.04 μM. This hypothesis is supported by a recent report from Mathialagan and colleagues [41], who found that serum creatinine can be increased by inhibition of renal transporters, including OCT2 and MATE1, without renal toxicity. Furthermore, a case report from Mohan and Herrmann showed that, despite elevations in serum creatinine levels after treatment with the MET inhibitor capmatinib, the estimated glomerular filtration rate (eGFR) derived from cystatin C and renal iothalamate clearance was stable [42]. OCT2 and MATE1/2 inhibition may also contribute to treatment-emergent transient increases in creatinine with tucatinib [43], and can be described using physiologic modeling [44]. From a practical standpoint, the available data with tepotinib suggest that renal function markers that rely solely on serum creatinine levels (creatinine clearance, eGFR) should be treated with some caution when measured during tepotinib pharmacotherapy, and that careful consideration should be given prior to basing dose adjustment recommendations on such data. Based on the absence of clinical signs or other laboratory markers of renal toxicity (e.g., electrolytes, urea), the observed increases in creatinine have no apparent causal relationship to edema.
QTc interval
Concentration-QTc analyses for tepotinib and MSC2571109A showed no evidence of a clinically significant prolongation effect on QTcF interval. Based on linear mixed effects modeling, QTc prolongation did not exceed the threshold of 10 ms for either the proposed clinical dose of 500 mg, or for the highest administered dose of 1400 mg. In vivo and in vitro safety pharmacology data also suggest no anticipated risk for QT prolongation at the clinical dose of tepotinib (data on file). In these studies, tepotinib inhibited Kv11.1 (hERG) with an IC 50 of 1.2 µM, which is 24-fold higher than the mean unbound steady-state C max of 0.05 µM achieved with the 500 mg clinical dose. There was also no meaningful effect of tepotinib on other key cardiac ion channels (hNav1.5, hKv1.5, hKv4.3/hKChIP2, Cav1.2, hKCNQ1/hminK, hHCN4, and hKir2.1) up to the highest tested concentration of 10 µM. MSC2571109A had no effect on key cardiac ion channels, and no effect of tepotinib, or MSC2571109A was seen in dedicated cardiovascular safety pharmacology studies in rats and dogs.
The present exposure-safety analysis included a large number of patients pooled from different phase 1/2 trials, providing a robust assessment of the underlying exposure-response relationships. The flat exposure-efficacy relationship is consistent with the quantitative understanding of target modulation and, therefore, represents a classic case study of a model-informed dosing strategy confirmed by clinical data. A limitation of this analysis is that the clinical presentation of edema varies in anatomic location and intensity, and that countermeasures were mixed and combined, including overlapping dose reductions and temporary treatment interruptions plus other supportive therapies. The longitudinal profiles of edema, including the onset/offset and severity of each episode, and its response to dose reduction/treatment interruption are still under investigation. The association between edema and serum albumin decrease was observed in the exploratory graphical analysis, and further model-based characterization may provide some insight to support the hypothesis of a causal relationship. Such data will be necessary to inform the development of more comprehensive pharmacometric models that may help elucidate the complex inter-relationships between the time course of tepotinib exposure, serum albumin, and the onset, severity, and offset of edema.
In conclusion, a flat exposure-efficacy relationship was observed within the exposure range achieved after administration of tepotinib 500 mg/day in patients with advanced NSCLC harboring METex14 skipping alterations. The relationships between tepotinib exposure and edema, serum albumin, creatinine, lipase, amylase, AST, and ALT were also flat within the observed exposure range at the 500 mg daily dose. Concentration-QTc analyses indicate that tepotinib does not produce clinically relevant increases in the QTcF interval at the 500 mg daily dose. Taken together, these exposure-response analyses provide important quantitative pharmacologic support for benefit/risk assessment of the 500 mg once-daily dosage of tepotinib as being appropriate for the treatment of NSCLC harboring METex14 skipping alterations.
Surgical stabilization versus nonoperative treatment for flail and non-flail rib fracture patterns in patients with traumatic brain injury
Purpose Literature on outcomes after surgical stabilization of rib fractures (SSRF), stratified by rib fracture pattern, is scarce in patients with moderate to severe traumatic brain injury (TBI; Glasgow Coma Scale ≤ 12). We hypothesized that SSRF is associated with improved outcomes as compared to nonoperative management, without hampering neurological recovery, in these patients. Methods A post hoc subgroup analysis of the multicenter, retrospective CWIS-TBI study was performed in patients with TBI, stratified by having sustained a non-flail fracture pattern or flail chest between January 1, 2012 and July 31, 2019. The primary outcome was mechanical ventilation-free days, and secondary outcomes were in-hospital outcomes. In multivariable analysis, outcomes were assessed, stratified by rib fracture pattern. Results In total, 449 patients were analyzed. Of the patients with a non-flail fracture pattern, 25 of 228 (11.0%) underwent SSRF, and of the patients with a flail chest, 86 of 221 (38.9%). In multivariable analysis, ventilator-free days were similar in both treatment groups. For patients with a non-flail fracture pattern, the odds of pneumonia were significantly lower after SSRF (odds ratio 0.29; 95% CI 0.11-0.77; p = 0.013). In patients with a flail chest, the ICU LOS was significantly shorter in the SSRF group (beta, -2.96 days; 95% CI -5.70 to -0.23; p = 0.034). Conclusion In patients with TBI and a non-flail fracture pattern, SSRF was associated with a reduced pneumonia risk. In patients with TBI and a flail chest, a shorter ICU LOS was observed in the SSRF group. In both groups, SSRF was safe and did not hamper neurological recovery.
Introduction
Traumatic brain injury (TBI) and thoracic trauma are, respectively, the first and second leading causes of trauma-related mortality annually [1,2]. In the Intensive Care Unit (ICU), rib fractures and TBI are the most prevalent injuries, and up to 25% of patients with multiple rib fractures have concomitant TBI [3,4]. Both injuries are associated with prolonged mechanical ventilation requirement and ICU days, and combined they have been shown to increase the risk of pneumonia, which is a strong independent predictor of mortality after trauma [1,3,5].
Utilization of surgical stabilization of rib fractures (SSRF) has increased significantly over the last two decades [6][7][8]. In patients with a flail chest, SSRF has been associated with a reduced pneumonia rate, and shorter duration of mechanical ventilation and hospital and ICU length of stay (HLOS and ICU LOS), as compared to nonoperative management [9][10][11][12][13]. Studies specifically evaluating outcomes after SSRF in patients with a non-flail fracture pattern are scarce [14]. A recent randomized controlled trial indicated less pain at 2-week follow-up and fewer pleural space complications after SSRF in these patients [15]. Other injury characteristics for which SSRF has been recommended include ≥ 3 bi-cortically displaced rib fractures or a hemithorax volume loss of ≥ 30% [16]. The exact effect of SSRF in these populations remains uncertain, however, as these patients are often evaluated together with patients with flail and non-flail fracture patterns [17].
The presence of TBI has been considered a relative contraindication for surgery, including SSRF, and was often used as an exclusion criterion for rib fracture-related research [15,[18][19][20]. Recently, however, the multicenter, retrospective Chest Wall Injury Society (CWIS)-TBI study reported SSRF to be safe in the presence of moderate to severe TBI (Glasgow Coma Scale [GCS] score ≤ 12) and associated with a reduced odds ratio of pneumonia and 30-day mortality [21]. This study was the first to specifically assess SSRF in the TBI population with rib fractures, but it did not stratify by rib fracture pattern. As the established grounds for SSRF have expanded, a small number of studies have assessed the flail chest and non-flail fracture pattern separately due to their injury-related dissimilarities [14,22]. Therefore, the aim of this study was to evaluate the effect of SSRF versus nonoperative management on ventilator-free days in patients with TBI and either a flail chest or a non-flail fracture pattern. Secondary aims were to assess in-hospital outcomes, such as pneumonia rate, motor neurological status, HLOS, ICU LOS, and mortality. We hypothesized that SSRF is associated with improved outcomes, including more ventilator-free days, shorter ICU LOS, and a lower pneumonia rate, as compared to nonoperative management, without hampering neurological recovery, in patients with both flail and non-flail rib fracture patterns.
Design and participants
This CWIS-TBI study was a multicenter, retrospective cohort study involving 19 trauma centers conducted through the Chest Wall Injury Society (http://www.cwisociety.org) [21]. The study was approved by each center's local medical research ethics committee or institutional review board, and informed consent was exempted. Eligible patients were identified through the hospitals' electronic medical records and by searching their trauma registries for admitted patients with a registered Abbreviated Injury Scale (AIS) for rib or sternal fractures in combination with an AIS ≥ 3 of the head. Figure 1 lists the inclusion and exclusion criteria. Patients were stratified by having sustained a flail chest or a non-flail fracture pattern. A flail chest was defined as having sustained ≥ 3 bi-cortical consecutive ribs fractured in two or more locations on chest computed tomography (CT; radiographic flail segment) or ≥ 3 ribs fractured with paradoxical chest wall respiratory motion (physiologic flail chest). A non-flail fracture pattern was defined as the absence of a radiographic flail segment on chest CT or a physiologic flail chest.
Data collection and outcome measures
The primary outcome measure was the number of ventilator-free days during primary hospital admission, defined as the number of days the patient breathed without assisted (non)invasive ventilation. Secondary outcome measures were ICU LOS, HLOS, the occurrence of thoracic complications (i.e., pneumonia within 30 days as defined according to the Centers for Disease Control and Prevention (CDC) guidelines [23], and pleural empyema within 30 days as diagnosed on CT scan and/or by pus evacuation [24]), SSRF-related complications (i.e., superficial and deep wound infection, post-operative bleeding, implant failure requiring removal, and perioperative intracranial pressure increase requiring [non]invasive intervention), neurological outcome (rate of and time to a motor GCS [mGCS] score of 6), and < 30-day and in-hospital mortality.

[Fig. 1 caption: Study inclusion and exclusion criteria. CPR, cardiopulmonary resuscitation; CT, computed tomography; GCS, Glasgow Coma Scale; HD, hemodynamic; TBI, traumatic brain injury]
In addition to the outcome measures, patient characteristics and injury-related variables were collected. TBI severity at hospital admission was defined as moderate (GCS score 9-12) or severe (GCS score ≤ 8). Intracranial hypertension was defined as an intracranial pressure (ICP) of > 20 mmHg. Treatment- and outcome-related variables were also collected. Therapy for reducing ICP consisted of having received or undergone ≥ 1 of the following: mannitol, hypertonic saline, pentobarbital, ventriculostomy, craniotomy, or placement of a subdural evacuation port system.
Statistical analysis
Data were analyzed using the Statistical Package for the Social Sciences (SPSS) version 25 or higher (SPSS, Chicago, Ill., USA). Normality of continuous variables was tested with the Shapiro-Wilk test, and homogeneity of variances was tested using the Levene's test. A p value lower than 0.05 was considered statistically significant and all tests were two-sided. Descriptive analysis was performed to report the data for the entire flail chest and non-flail fracture pattern population and for the treatment groups. For continuous data, the median and percentiles (non-parametric data) were reported. Statistical significance of differences between treatment groups was assessed using Mann-Whitney U test (non-parametric data). For categorical data, numbers and frequencies are reported per treatment group and compared using Chi-squared or Fisher's exact test, as applicable.
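A minimal sketch of these between-group comparisons with SciPy, using hypothetical stand-in data rather than the study dataset, might look as follows.

```python
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency, fisher_exact

rng = np.random.default_rng(2)

# Continuous, non-parametric outcome (e.g., ICU LOS in days) per treatment group
icu_los_ssrf = rng.exponential(9, 86)
icu_los_nonop = rng.exponential(12, 135)
u_stat, p_cont = mannwhitneyu(icu_los_ssrf, icu_los_nonop, alternative="two-sided")

# Categorical outcome (e.g., pneumonia yes/no) as a 2x2 table: rows = treatment
table = np.array([[20, 66],    # SSRF: events, non-events (hypothetical counts)
                  [55, 80]])   # nonoperative
chi2, p_chi2, dof, _ = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # used when expected counts are small

print(f"Mann-Whitney U p = {p_cont:.3f}; chi-squared p = {p_chi2:.3f}; "
      f"Fisher p = {p_fisher:.3f}")
```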
In multivariable analysis, a regression model was developed to control for potential confounders, as described in the main study manuscript [21]. The final regression model for the non-flail fracture pattern group consisted of the covariates number of fractured ribs, chest tube requirement, and intracranial hypertension presence. The model for the flail chest group consisted of BMI, COPD, number of fractured ribs, chest tube requirement, and intracranial hypertension presence. Given the multicenter design of the study, participating center was also considered as a confounder. Study center was, however, not included in the final model, as it did not statistically correlate with outcomes. The final crude regression model included the outcome measure as the dependent variable and SSRF as a covariate. In the adjusted analysis, the covariates mentioned above were added. For binary regression analysis, the OR for SSRF over nonoperative treatment is reported with 95% confidence interval (CI) and p values. For linear regression analysis, the beta value with 95% CI and p value is reported.
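An adjusted binary regression of this kind can be sketched with statsmodels; the variable names and data below are hypothetical stand-ins for the covariates named above, not the study code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 221
df = pd.DataFrame({
    "pneumonia": rng.integers(0, 2, n),
    "ssrf": rng.integers(0, 2, n),          # 1 = SSRF, 0 = nonoperative
    "n_rib_fx": rng.integers(3, 12, n),     # number of fractured ribs
    "chest_tube": rng.integers(0, 2, n),
    "intracranial_htn": rng.integers(0, 2, n),
})

# Adjusted logistic regression: SSRF effect controlled for the model covariates
fit = smf.logit("pneumonia ~ ssrf + n_rib_fx + chest_tube + intracranial_htn",
                df).fit(disp=False)
or_ssrf = np.exp(fit.params["ssrf"])
ci_low, ci_high = np.exp(fit.conf_int().loc["ssrf"])
print(f"OR for SSRF vs nonoperative: {or_ssrf:.2f} "
      f"(95% CI {ci_low:.2f}-{ci_high:.2f}), p = {fit.pvalues['ssrf']:.3f}")
```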
Results
In total, 449 (55.2%) patients with multiple rib fractures and TBI were included (Fig. 2). For each study center, the number of included patients with multiple rib fractures and TBI ranged from 2 to 65. The percentage of these patients who underwent SSRF ranged from 0 to 67%.
In multivariable adjusted analysis, ventilator-free days did not differ between the treatment groups (Table 2). Odds of developing pneumonia were significantly lower in patients who underwent SSRF (OR 0.29; 95% CI 0.11-0.77; p = 0.013). Other outcomes, including mortality, were similar across the treatment groups.
Discussion
This study investigated the effect of SSRF versus nonoperative management on in-hospital outcomes in patients with a flail or non-flail fracture pattern and concomitant TBI. For both types of rib fracture patterns, no beneficial effect of SSRF on the primary outcome of ventilator-free days was demonstrated. In patients with a flail chest, a 3-day decrease in ICU LOS was observed in patients who underwent SSRF. In patients with a non-flail fracture pattern, SSRF was associated with three times lower odds of pneumonia. In both rib fracture groups, SSRF was safe with a low complication rate and no pre-or postoperative neurological deterioration.
Patients with multiple rib fractures and TBI are often not considered candidates for SSRF, regardless of pulmonary abnormalities [12,13]. The reasons are likely multifactorial: the perioperative setting might cause increased intracranial pressure, and patients with TBI are often expected to have lengthy mechanical ventilation requirements and ICU LOS, making it difficult to distill an effect of the severe rib fractures and SSRF on in-hospital outcomes. This dogma was challenged by the CWIS-TBI study, which showed that SSRF did not impair neurological recovery, had a low perioperative risk, and was associated with a lower risk of pneumonia and mortality [21]. As follow-up to this study, CWIS-TBI data were used to evaluate whether more specific rib fracture patterns benefit from SSRF. Patients with a non-flail fracture pattern who underwent SSRF had relatively similar thoracic injuries as compared to the nonoperative group. Patients with a flail chest had more severe thoracic injuries in the SSRF group and more severe brain injuries in the nonoperative group. This finding might reflect the decision-making of surgeons who consider TBI a contraindication for SSRF and are subsequently more likely to offer SSRF to patients with more severe rib fracture patterns and less severe TBI characteristics or an improved neurologic prognosis. For both rib fracture pattern groups, the current study indicates that SSRF is safe and might be of benefit in these patients.
In patients with a flail chest, SSRF has previously been associated with decreased ICU LOS as compared to nonoperative treatment [18,20,25,26]. Several of these studies, however, including two randomized controlled trials, specifically excluded patients with TBI [5,18,20]. In the current study, a shorter ICU LOS was observed in the SSRF group of patients with a flail chest, and SSRF was safe without signs of peri-procedural neurologic deterioration in the patients with TBI. This ICU LOS decrease did not result in shorter HLOS or increased ventilator-free days on multivariable analysis. This might be due to, for example, the effect of TBI extent or another unaccounted confounder which impacted ventilator-free days more strongly than chest wall injury severity or SSRF. This is supported by the increased ventilator-free days on univariate analysis for the SSRF group, which were similar on multivariable analysis after correcting for intracranial hypertension presence. Also, with no data on mechanical ventilation mode, SSRF might have improved respiratory mechanics, assisted in stabilizing the patient, and allowed for a quicker wean and more rapid discharge from the ICU after complete ventilation liberation. A shorter ICU stay is also beneficial for cost-effectiveness, as SSRF has been shown to be economically more beneficial regarding hospital charges [26,27].
Literature on the effect of SSRF versus nonoperative treatment in patients with a non-flail fracture pattern is scarce [14]. Only three studies have assessed the outcome pneumonia, and these either excluded patients with TBI or did not provide insight into patient selection [15,28,29]. This study is the first to specifically assess pneumonia rates following SSRF or nonoperative treatment in patients with a non-flail fracture pattern and TBI. On multivariable analysis, SSRF was associated with three times lower odds of developing pneumonia. Interestingly, this lower risk did not appear to translate into more ventilator-free days or a shorter ICU LOS or HLOS. Furthermore, as has been corroborated by the previous CWIS-TBI study, SSRF is a safe procedure in patients with TBI, also when specifically evaluated in chest wall injury subgroups. With high rates of mGCS score recovery to 6 and a low complication rate, SSRF and the consequent perioperative setting are safe and do not hamper neurological recovery. This is of importance as early SSRF (≤ 48-72 h after trauma) is associated with shorter HLOS, ICU LOS, and mechanical ventilation duration, and lower rates of pneumonia [30][31][32]. With a median time from trauma to SSRF of 2 and 3 days in patients with a non-flail fracture pattern and a flail chest, respectively, this benefit of early SSRF might already be present. The optimal timing of SSRF in this population requires further evaluation. The benefit of early SSRF and the demonstrated safe perioperative SSRF setting might assist surgeons in decision-making in the acute setting, when neurological prognosis is often unsure.
The results of this study should be interpreted acknowledging several limitations. First, the inclusion criterion of TBI based on a single GCS score at admission has known limitations (e.g., in intoxicated patients) and might be of less clinical significance than ongoing GCS score assessment or the GCS score on the day of SSRF. To minimize the impact of this limitation, the presence of intracranial injuries on brain CT was required. In addition, patients were required to have a head AIS of ≥ 3 besides rib fractures, thus excluding patients with minor TBI and a lowered GCS. Also, the GCS score is the most commonly used parameter to assess TBI severity and, in contrast to the AIS, is readily available in the acute setting [33,34]. Furthermore, the regression model corrected for TBI severity through the variable intracranial hypertension, which was more strongly associated with outcomes than the individual intracranial injuries. Future research should prospectively evaluate acute and long-term outcomes in patients with TBI, use standardized treatment protocols across centers, consider ongoing GCS scores or the score on the day of SSRF instead of at admission, evaluate whether intracranial hypertension, rather than the general umbrella term TBI, should be a contraindication for SSRF, and assess TBI improvement post-SSRF through CT scans instead of the mGCS.
Second, the observational non-randomized study design might have introduced selection bias. Patients who are selected for SSRF often have more severe thoracic injuries but are also younger and have fewer comorbidities than those treated nonoperatively, which requires adjustment when assessing outcomes [35,36]. In the current study, the treatment groups were relatively similar regarding thoracic injury severity but differed significantly in the severity of TBI and the rate of associated intracranial injuries, both being higher in the nonoperative group. Previously, the recommendation of SSRF has been shown to be significantly impacted by the presence and degree of TBI; the more severe the TBI, the less likely SSRF was recommended [37]. Prognosis assessment in patients with TBI remains difficult, and a standardized treatment protocol regarding SSRF in this population is lacking [12,38]. This might have resulted in SSRF being performed in patients with a better neurological status or in those expected to have improved outcomes in terms of (neurological) recovery and during hospitalization, confounding the observed outcomes, which might consequently be affected more strongly by the associated injuries than by the treatment effect. To mitigate this effect, multivariable analysis was performed adjusting for intracranial hypertension. However, the extent to which the individual intracranial injuries or other uncaptured confounders might have affected outcomes or the selection for SSRF remains unknown.
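To illustrate the kind of adjustment described above, the sketch below fits a multivariable logistic regression for a binary outcome with treatment and one confounder as covariates. It is a minimal, hypothetical example: the variable names (pneumonia, ssrf, intracranial_htn) and the toy data are invented for illustration and do not reproduce the CWIS-TBI data or the exact model used in this study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data, invented for illustration: outcome, treatment, confounder.
df = pd.DataFrame({
    "pneumonia":        [0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1],
    "ssrf":             [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1],
    "intracranial_htn": [0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0],
})

# Multivariable model: the SSRF coefficient is adjusted for intracranial
# hypertension instead of reflecting the crude (univariate) association.
model = smf.logit("pneumonia ~ ssrf + intracranial_htn", data=df).fit(disp=0)
print(np.exp(model.params).round(2))  # adjusted odds ratios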
Third, the multicenter design might have impacted outcomes, as both the number of included patients and the rate of SSRF performed varied significantly between centers. Also, since there was no standardized (non)operative treatment protocol, heterogeneity in the management of rib fractures across centers or potential confounding by within-center covariates might be present [39,40]. However, the variable "study center" did not correlate significantly with outcomes, and this design makes the results more generalizable to daily practice. The large variability in the rate of patients with TBI who underwent SSRF shows that there is currently no consensus on the optimal treatment of this patient group. The retrospective nature of this study might have resulted in missing data or underreporting, but the rate of missing data was < 4% for all variables except BMI and smoking status.
In conclusion, SSRF did not impact the number of ventilator-free days in patients with a flail or a non-flail rib fracture pattern and TBI. In patients with TBI and a non-flail fracture pattern, SSRF was associated with a reduced pneumonia risk. In patients with TBI and a flail chest, a shorter ICU LOS was observed in the SSRF group. In addition, SSRF was a safe procedure in both rib fracture groups and did not hamper neurological recovery. The presence of TBI in patients with a specific severe rib fracture pattern that possibly necessitates SSRF should not be considered a contraindication for this treatment. In the setting of TBI, the decision to perform SSRF should be made by carefully weighing the risks of surgery against the benefits for both pulmonary and overall recovery.
|
2022-02-22T14:58:56.855Z
|
2022-02-22T00:00:00.000
|
{
"year": 2022,
"sha1": "62ffaefe94ba7ec0d6ed17b0578826761ea05ef1",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00068-022-01906-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "62ffaefe94ba7ec0d6ed17b0578826761ea05ef1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
11738612
|
pes2o/s2orc
|
v3-fos-license
|
Esophageal, pharyngeal and hemorrhagic complications occurring in anterior cervical surgery: Three illustrative cases
Background: The incidence of esophageal and pharyngeal perforations occurring in anterior cervical surgery ranges from 0.25% to 1% and from 0.2% to 1.2%, respectively. Symptoms usually appear postoperatively and are attributed to local infection, fistula, sepsis, or mediastinitis. Acute postoperative hematoma, although very rare (<1%), is the first complication to rule out due to its life-threatening consequences (e.g., acute respiratory failure). Case Description: Over a 36-year period, the authors describe three severe esophageal/pharyngeal complications attributed to anterior cervical surgery. As these complications were appropriately recognized/treated, the patients had favorable outcomes. Conclusions: Anterior cervical spine surgery is a safe approach and is associated with few major esophageal/pharyngeal complications, which most commonly include transient dysphagia and dysphonia. If symptoms persist, patients should be assessed for esophageal/pharyngeal defects utilizing appropriate imaging studies. Notably, even if the major complications listed above are adequately treated, optimal results are in no way guaranteed.
INTRODUCTION
Although anterior cervical spine surgery is complex, it may be performed safely and effectively. It has been performed since the 1950s [2], and many different anterior cervical spine fixation techniques have evolved over the subsequent decades. [3,4,16,24] Major complications of anterior cervical discectomy/fusion (ACDF) include dysphagia/dysphonia, postoperative hematoma, and esophageal or pharyngeal perforation, all of which may lead to respiratory decompensation. Here the authors discuss the diagnosis, treatment, and management of three cases of esophageal/pharyngeal and hemorrhagic complications attributed to ACDF utilizing a Smith-Robinson approach.
Case 1
A 45-year-old female presented with myelopathy/cord compression attributed to C3 to C7 disk herniations and ossified posterior longitudinal ligament (OPLL). She underwent anterior corpectomy/fusion (ACF) from C3-C5 utilizing a cadaveric tricortical iliac crest bone graft, and ACDF at the C5-C6 and C6-C7 levels utilizing intersomatic tricalcium phosphate grafts [Table 1]. During bone milling, the external side of the lower pharyngeal wall was lacerated; the lesion was closed with a single suture. A few hours after the surgery, the patient developed dysphagia, which became increasingly severe over the next few days. When the ear/nose/throat (ENT) physician performed a fiber endoscopy, no abnormalities were found. However, an esophageal contrast swallow study documented a leak of iodinated contrast involving the lower peripharyngeal area [Figure 1]. The patient underwent intravenous antibiotic therapy and nasogastric tube placement for 3 weeks, but did not show improvement. The cervical spine computed tomography (CT) scan showed an abscess-like mass shifting the trachea in the posterior aspect of the piriform sinus [Figure 2].
Despite ENT drainage of the collection, the patient aspirated in the intensive care unit (ICU) 5 days later. A few hours later, this led to pneumonia and septic shock, with a near-fatal outcome. Following treatment with intravenous antibiotics, parenteral nutrition, and nasogastric tube placement for 2 weeks, the patient improved and was then transferred. Fifteen days later, the patient was discharged home on intravenous antibiotics. Ultimately, she was asymptomatic.
Case 2
A 23-year-old male was injured in a motor vehicle accident that led to a cervical C2 fracture with traumatic grade II listhesis over C3 (CT-documented). Notably, there was no radiographic evidence of spinal cord and/or root injury, and the patient remained neurologically asymptomatic. The patient underwent a C2-C3 ACDF utilizing a titanium plate, screws, and a tricalcium phosphate graft. Multiple X-rays and a CT scan performed postoperatively documented the known malalignment but a stable graft. He was discharged home neurologically intact.
Four years after the surgery, he was referred by ENT due to the recent onset of dysphagia and visualization of the plate in the posterior pharyngeal wall. The plate had perforated the posterior wall of the pharynx and was removed utilizing a routine anterolateral cervical approach (a transoral approach was not chosen, as the lesion was too caudal) [Figure 5]. Following plate removal, the patient was admitted to the ICU and placed on intravenous antibiotics; a nasogastric tube was also utilized for the ensuing month to allow the pharyngeal injury to heal. The patient was ultimately discharged without any neurological deficits or other sequelae, and returned to his former active lifestyle.
Case 3
A 49-year-old male had disc herniations treated with ACDF procedures at the C4-C5 and C5-C6 levels utilizing a titanium plate and polyether ether ketone (PEEK) intersomatic grafts. No drain was placed during the procedure. Twenty-three hours postoperatively, the patient developed anterior cervical swelling accompanied by respiratory distress/dysphagia. The cervical CT showed an acute prevertebral hematoma [Figure 6]. He was immediately taken to the OR, where, prior to surgery, an emergency tracheotomy was performed, as orotracheal intubation was not feasible. Oozing and organized hematoma were found in the surgical bed; hemostasis was ultimately achieved, and a drain was placed. The patient remained in the ICU for 2 days and was then transferred to the medical ward. Six days later, several hours after removal of the tracheotomy, the patient again deteriorated from a CT-documented recurrent hematoma [Figure 7]. After emergent removal of the clot, the patient was again admitted to the ICU. A hematology consult was called due to the repeated hemorrhagic events; a platelet aggregation disorder was diagnosed. He was treated with Amchafibrin (tranexamic acid) during the following days of his ICU stay. The drain was removed 72 h later, and the tracheotomy was subsequently discontinued 4 days after that. The patient developed no further bleeding complications and currently has no residual neurological deficits.
DISCUSSION
Dysphagia and dysphonia are the most common postoperative complications (transient: 5-30%; persistent: 0.8-5%) following anterior cervical surgery. [14,18,22] The incidence of esophageal perforation is between 0.25% and 1%, [1,12,20,24] and that of pharyngeal perforation between 0.2% and 1.2%. [9,15,17,26] Most of these complications are attributed to esophageal traction or recurrent laryngeal nerve injury (3.5%) leading to postoperative edema and neurapraxia. [12] Other symptoms of transient esophageal/pharyngeal injuries include cough, fatigue while talking, aphonia, and bronchoaspiration. Symptoms usually appear postoperatively from a local infection, fistula, sepsis, or mediastinitis, and may lead to septic shock, airway obstruction, and even death. It is recommended to address these complications with a multidisciplinary approach that includes ENT and/or General Surgery.
If dysphagia and/or dysphonia persist, an organic defect should be ruled out utilizing fiberoptic endoscopy, barium swallow, or CT scans. Interestingly, Gaudinez et al. [10] found that only 72% of patients with esophageal perforation had positive results on an imaging study, and endoscopy was required in 64% of cases to correctly establish the diagnosis. Laceration of the esophagus/pharynx typically occurs during surgery (e.g., during dissection/placement of instrumentation) or in the immediate postoperative period. Only rarely does this complication appear years later (e.g., case 2). In case 2, the upper portion of the plate was not in close contact with the cortical vertebral body, leading to microtrauma of the posterior wall of the pharynx and resulting in delayed esophageal erosion. [23,25]
Treatment options for esophageal/hypopharyngeal perforation repair
There are several treatment options to repair esophageal or hypopharyngeal perforations. [21,27] For small defects, conservative treatment (e.g. enteral nutrition, antibiotics, and observation) may suffice. Others, however, may require direct repair of the surgical perforation utilizing a muscle flap (e.g. sternocleidomastoid or omohyoid muscles). [5,28]
Respiratory decompensation
Another rare complication of this surgery is postoperative respiratory failure; such decompensation may be variously attributed to laryngospasm, vocal cord paralysis, allergic reaction, foreign bodies, and/or hematoma (0.52%) (case 3). [6,7,9,13,22] If a postoperative clot is suspected (5.6%), an emergent/urgent cervical spine CT should be performed to look for bleeding and tracheal compression/deviation. Surgical decompression should then be performed emergently; [8] occasionally, this even warrants opening the surgical wound at the patient's bedside, followed by completion of clot removal in the operating room.
Postoperative hematoma/recurrent hematoma
Meticulous hemostasis is required during anterior cervical surgery to minimize postoperative hemorrhagic complications. However, when postoperative hemorrhages recur, a hematology consult is warranted to rule out an underlying bleeding disorder/coagulopathy. Another open question concerns the role of postoperative drainage. Some surgeons, including vascular, general, or ENT surgeons, routinely place drains following neck surgeries. However, postoperative hemorrhages are not reliably avoided even if a drain is placed. [11,19]
CONCLUSIONS
Anterior cervical spine surgery is a safe surgical procedure with few complications, provided it is properly indicated and performed. Nevertheless, these disorders and their treatment should be presented and discussed, apart from their medico-legal consequences, so that they are reflected not only in the general literature but also in medical publications and the history of neurosurgery.
|
2018-04-03T01:22:51.298Z
|
2014-04-16T00:00:00.000
|
{
"year": 2014,
"sha1": "b14ec29253756448a8b59e314183f599b8d3ce19",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc4023006",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "b14ec29253756448a8b59e314183f599b8d3ce19",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
119812061
|
pes2o/s2orc
|
v3-fos-license
|
Solitary and periodic solutions of the generalized Kuramoto-Sivashinsky equation
The generalized Kuramoto-Sivashinsky equation in the case of a power nonlinearity of arbitrary degree is considered. New exact solutions of this equation are presented.
Introduction
In this paper we present exact solutions of the generalized Kuramoto-Sivashinsky equation

u_t + u^m u_x + α u_{xx} + β u_{xxx} + γ u_{xxxx} = 0.    (1.1)

The nonlinear evolution equation (1.1) at m = 1 has been studied by a number of authors from various viewpoints. This equation has drawn much attention not only because it is interesting as a simple one-dimensional nonlinear evolution equation including the effects of instability, dissipation, and dispersion, but also because it is important for the description of engineering and scientific problems. In [1], equation (1.1) was used to explain the origin of persistent wave propagation through media of reaction-diffusion type. In [2], equation (1.1) was obtained at m = 1 for the description of the nonlinear evolution of a disturbed flame front. Equation (1.1) has also been applied to the study of the motion of a viscous incompressible fluid flowing down an inclined plane [3][4][5]. A mathematical model for dissipative waves in plasma physics by means of equation (1.1) was presented in [6]. Elementary particles as solutions of the Kuramoto-Sivashinsky equation were studied in [7]. Equation (1.1) at m ≠ 1 can also be used for the description of physical applications; for example, this equation was derived for the description of nonlinear long waves in a viscoelastic tube [8].
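To make the standard solution strategy concrete, the following sympy sketch substitutes the traveling-wave ansatz u(x, t) = y(z), z = x - C0*t, into equation (1.1) and prints the resulting ODE in z. The ansatz and the symbol names are assumptions of this illustration (they follow the usual approach for equations of this type), not a reproduction of the paper's lost derivation.

import sympy as sp

x, t, C0 = sp.symbols('x t C0')
m, alpha, beta, gamma = sp.symbols('m alpha beta gamma')
y = sp.Function('y')

z = x - C0*t                     # traveling-wave variable
u = y(z)

# Left-hand side of equation (1.1)
pde = (sp.diff(u, t) + u**m*sp.diff(u, x)
       + alpha*sp.diff(u, x, 2)
       + beta*sp.diff(u, x, 3)
       + gamma*sp.diff(u, x, 4))

zz = sp.Symbol('z')
ode = pde.subs(z, zz).doit()     # rewrite as an ODE in z alone
print(sp.simplify(ode))
# Expected (up to notation): -C0*y' + y**m*y' + alpha*y'' + beta*y''' + gamma*y''''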
Exact solutions of equation (2.2) in the form of a solitary wave at m = 1 and β = 0 were obtained in [1,9]. Solitary wave solutions of equation (2.2) at m = 1 in the case β ≠ 0 were found in works [10][11][12] for several cases; other forms of these solutions were presented in works [13][14][15][16][17]. Periodic solutions of equation (2.2) were found in [18,19] at α = β²/(16γ). Exact solutions of equation (2.2) at m = 2 were recently obtained in [8] for four cases. The aim of this paper is to search for exact solutions of equation (1.1) at arbitrary m.

A study of the analytical properties of equation (2.4) shows that, in the general case, the meromorphic solutions of equation (2.4) can be found by taking two cases into account. Consider the first case. The pole order of the solution of equation (2.4) is equal to three; therefore, we look for exact solutions of equation (2.4) in the form (2.6), where Y(z) is a solution of an equation with a first-order pole. Let us take the simplest equation in the form (2.7) [20,21]. Substituting (2.6) into (2.4) and taking into account equation (2.7), we obtain the values of the coefficients in (2.6); they are given in (2.9). The solutions of equation (2.7) take the form (2.10), and the corresponding solution of equation (2.2) follows. In the second case we obtain the coefficients A_3, A_2, A_1, C_0 and A_0 as given in (2.13); using these coefficients, the solution of equation (2.2) takes the form (2.14). Other forms of this solution were found in works [22][23][24]. At m = 1, (2.14) yields one of the solitary wave solutions of work [10]; in the case m = 2 we recover the solution of [8].
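The polynomial ansatz described above can be automated. The sketch below applies the simplest-equation method at m = 1: it assumes the once-integrated traveling-wave reduction γy''' + βy'' + αy' + y²/2 - C0*y + C1 = 0 (this reduced ODE is an assumption of the illustration, since the displayed equations were lost from the text) together with a cubic ansatz in Y, where Y solves the Riccati equation Y' = Y² - Y, and prints the algebraic system obtained by collecting powers of Y.

import sympy as sp

zv = sp.Symbol('z')
alpha, beta, gamma, C0, C1 = sp.symbols('alpha beta gamma C0 C1')
A0, A1, A2, A3 = sp.symbols('A0 A1 A2 A3')
Y = sp.Function('Y')(zv)

# Ansatz: pole of order three => cubic polynomial in Y, with Y having
# a first-order pole (Y' = Y**2 - Y is one standard simplest equation).
w = A3*Y**3 + A2*Y**2 + A1*Y + A0
riccati = {sp.Derivative(Y, zv): Y**2 - Y}

def d(expr):
    # differentiate, then use Y' = Y**2 - Y to stay polynomial in Y
    return sp.expand(sp.diff(expr, zv).subs(riccati))

w1 = d(w); w2 = d(w1); w3 = d(w2)
ode = sp.expand(gamma*w3 + beta*w2 + alpha*w1 + w**2/2 - C0*w + C1)

# Collecting powers of Y yields the algebraic system for A0..A3, C0, C1;
# the leading equation forces A3 = -120*gamma, as expected for this method.
for eq in sp.Poly(ode, Y).coeffs():
    print(sp.factor(eq))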
Periodic waves of equation (1.1)
Let us find periodic solutions of equation (2.2) using formula (3.1) of [18] for w(z). We assume that R(z) is a solution of the equation for the Weierstrass function.
Substituting (3.1) into equation (2.3) and taking into account equations (3.3) and (3.4), we obtain the coefficient relations (3.5). The solution of equation (2.2) in this case takes the corresponding form, where R(z) satisfies the equation for the Weierstrass function whose cubic polynomial has two equal roots. However, there are periodic solutions of equation (1.1) at m = 1 and m = 3. The parameter C_0 is in these cases an arbitrary constant; the other coefficients can be found from expressions (3.5). Let us present these solutions using formula (3.1).
In the case m = 1 we have α = β²/(16γ), and the solution of equation (1.1) takes the form [18] y(z) = C_0 − (5/4)βa − (1/64)…, where R(z) satisfies the equation for the Weierstrass function. An analogous representation holds at m = 3, where R(z) again satisfies an equation for the Weierstrass function of the corresponding form.
|
2019-04-18T13:12:38.399Z
|
2008-06-12T00:00:00.000
|
{
"year": 2011,
"sha1": "6b9d58b932c023faf6d4e6e3c5f9fb5defb3afd3",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1112.5707",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b2ab919da69d5e6422f7c73a2cc58152fc308f2a",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
}
|
59937508
|
pes2o/s2orc
|
v3-fos-license
|
CONFRONTING THE INEVITABLE: ISO 14001 IMPLEMENTATION AND THE DURBAN AUTOMOTIVE CLUSTER
The aim of the article is to explore the complexities of the ISO 14001 implementation process, with the objective of identifying the barriers to its implementation, the factors that influence these barriers, and finding possible solutions to address these barriers. The theoretical basis of ISO 14001, the implementation process, and its strategic implications were established by reviewing previous research. Based on this theoretical review, a self-administered questionnaire was designed to serve as a measuring instrument for the empirical research conducted among members of the Durban Automotive Cluster (DAC). The specific objectives of the empirical study were: to determine the reasons for seeking ISO 14001 certification, to determine the perceived and experienced barriers to its implementation, and to investigate the strategic implications of an Environmental Management System (EMS) such as ISO 14001. Finally, the findings, recommendations, caveats, and suggestions for further research are summarised.
INTRODUCTION
The re-emergence of South Africa into the global automotive market after 1994 has had a major impact on domestically-based Original Equipment Manufacturers (OEMs) and their component suppliers. While South Africa's increased exposure to the global market has brought about opportunities for firms to gain access to global markets, it has brought with it a vast number of pressures, including the pressure to comply with local and international environmental standards. The notion of protecting the environment has been noticeable since the 1960s, but it only gained prominent international acknowledgement at the 1992 Earth Summit in Rio de Janeiro, when Agenda 21 was adopted. This document encouraged businesses to adopt codes that establish "best environmental practice" [1]. During this period, the establishment of the Strategic Action Group on the Environment (SAGE) by the International Organisation for Standardisation (ISO) in 1991 and the Eco-Management and Audit Scheme (EMAS) by the European Union in 1993 added additional prominence to efforts to establish an improved international standard for environmental management. In 1993 ISO formed the Technical Committee (ISO/TC) 207 in order to draft the ISO 14000 series. From its inception, ISO/TC 207 worked closely with ISO/TC 176, the technical committee responsible for the ISO 9000 family of quality management systems, in order to ensure compatibility between the ISO quality management standards and the ISO environmental standards [1].
A comprehensive set of ISO 14000 standards, guides, and reports was established. ISO 14001, a specific standard in the ISO 14000 series, can be regarded as the cornerstone of the series. It provides a good model for the development of an environmental management system, and is the only standard in the series under which an organisation can become certified. ISO 14001 can also be used as the starting point for organisations that use other environmental management tools developed by ISO/TC 207 [2]. QSI-Afrocare stated that the basic philosophy behind the ISO 14001 standard centred on the environmental management of all processes and activities, the management of the infrastructure addressing environmental issues, and the application of ISO 14001 to all industry and organisational sectors in a country [3].
In this article the complexities of the ISO 14001 implementation process are explored with the aim of gaining a better understanding of them. An overview of the implementation process and its strategic implications is provided in the literature review, while the empirical dimension of the research is based on a survey conducted among the members of the DAC, with the objective of determining: the reasons for seeking ISO 14001 certification; the perceived and experienced barriers to its implementation; and the strategic implications of an Environmental Management System (EMS) such as ISO 14001.
LITERATURE REVIEW
The literature review will address issues regarding the rationale for implementing ISO 14001 and its potential benefits, as well as the implementation process and its barriers. Some strategic issues and international trends will also be dealt with.
Reasons for implementing ISO 14001
According to South African research conducted by Keogh [4], the main reasons for implementing ISO 14001 were business conditions, public pressure, and government regulations. This was similar to findings in Europe that identified the main reasons as being increasing legal requirements, the desire for a competitive advantage, and the need to satisfy customer requirements [5]. Tibor and Feldman found that the implementation of an ISO 14001-compliant EMS and achieving third-party certification could become a de facto requirement for conducting business [6]. Yadav drew a similar conclusion, emphasizing the influence of customer demands with respect to environmentalism [7]. In this regard, the desire to satisfy customer requirements - the pressure from major customers to deal only with ISO 14001 certified companies - can be regarded as the major driving force behind the DAC's attempt to gain ISO 14001 certification. Murray found internationally that more and more companies were becoming certified to ISO 14001, as it has become a precondition for doing business [1]. Since the 1990s environmental regulations have begun to move away from the penalty-driven approach, towards incentive-driven voluntary self-regulation [4]. Care has to be taken not to lose sight of customer pressure when considering this observation.
Benefits of ISO 14001
While some critics may question the monetary advantages of gaining ISO 14001 certification, others claim that the potential benefits outweigh the cost in the long term. Miles and Russell identified seven potential benefits of ISO 14001 certification, including the ability to increase price due to differentiation, to use certification as a barrier to entry for potential competitors, to enhance corporate image, to gain protection against claims of environmental negligence, to pre-empt government regulations, and to have opportunities to make an input on standards [8]. Sissell and Mullin regarded the most important value of ISO 14001 as its ability to promote better management controls and more clearly defined targets [9]. Cochran identified focus and discipline that could result in cost savings, as well as the provision of a systematic structure to comply with environmental regulations, as the main advantages of ISO 14001 [10]. Morrison et al regarded the systematic structure of ISO 14001 as a platform for continuous improvement in organisations [11]. Curkovic et al listed factors such as improved environmental performance and management methods, and the gaining of competitive advantages, as some of the likely benefits [12]. Alberti et al identified two broad categories of benefits that could be achieved from an effective environmental management system such as ISO 14001: Economically Quantifiable Benefits and Economically Non-Quantifiable Benefits [13], as shown in Table 1.
Considering the economic and non-economic benefits of an EMS, it is not difficult to understand the potential value to politicians of propagating the benefits of its implementation for the good of both the local community and industry. Owing to increased international pressure for organisations to implement an EMS, in order to preserve and grow employment in the industrial sector, its potential long-term value versus the short-term cost of implementation must be emphasised by political and statutory bodies.
The implementation process and its barriers
ISO 14001 provides basic requirements for firms implementing an environmental management system (EMS). The implementation process involves taking the organisation from the stage of having no EMS in place to the point where it has a fully functional EMS certified by an accredited certification body. (Self-certification is possible, but does not have the credibility offered by an accredited body.) There are many similarities between ISO 14001 and ISO 9000 in terms of the structure of the code. In fact, ISO 14001 has been described as a 'Quality Management System' where the customer is the natural environment, its regulators, and the community at large. However, as Curkovic et al [12] warn, an existing quality management system (QMS), such as ISO 9001, cannot be transformed into an EMS by merely replacing the word 'quality' with the word 'environmental'. With this comparison in mind, the implementation process may be viewed as a process of evaluating the organisation and its activities, establishing how it interacts with the environment, and putting in place a system to manage these interactions in order to control their environmental impacts. This evaluation will result in a whole series of policies, procedures, general instructions, and protocols that the organisation will adopt in order to minimise the effect of the organisation's activities on the environment. Figure 1 illustrates a typical implementation of an EMS.
Figure 1: Diagram of a typical environmental management system
Source: Anon [14]

Many organisations are still reluctant to implement an EMS, citing barriers such as lack of funds, availability of skills, and resource constraints. Biondi et al [5] found that these limitations applied to SMEs in particular. In order to overcome these barriers, organisations should break the implementation process down into a series of steps and address each step one at a time [2], as shown in Figure 1.
ISO 14001 requires that the organisation's management demonstrate a commitment to its EMS, as senior management's commitment is essential to ensure successful implementation and operation of the EMS [14]. Graves [15] found that the ISO 14001 certification process required a broad spectrum of support from the organisation and a strong internal commitment from its employees, particularly from management. It has to be emphasized that ISO 14001 is often focused on technical considerations of the organisational components, and is thus seldom efficiently implemented [16]. In this regard Welford [17] advised that a corporate culture conducive to change be established in order to support the paradigm shift needed for the successful implementation of ISO 14001.
Challenges that organisations have to address when implementing ISO 14001 are to be found in the very nature of contemporary capitalist structure, which stresses competition, the maximisation of profits, and the reduction of costs -fundamental barriers to the adoption of ethical practices such as environmentalism in business [17].
Stephens [18] suggested that environmental performance equates economic performance with social benefits, and thus organisations should look beyond their immediate economic performances when motivating and implementing a system such as ISO 14001. For Kershav [19] the solution to this predicament lies in understanding environmental economics and making a paradigm shift. In order to make this paradigm shift, organisations need to demolish certain myths - for example, the myth that anything concerning environmental improvement is a cost and will drain the organisation's profits, or the myth that current practices are so close to perfection that changes are unnecessary [17].
Strategic considerations
It could be argued that in order to survive, the fundamental purpose of any business is to make a profit - and in order to do this in a competitive environment, the business needs to 'stay ahead of the game'. In order to do that, businesses need to take a holistic view of their organisations to avoid the risk of compartmentalising the organisation and having separate stand-alone management systems. Should this holistic approach be applied to an EMS, the result could be a proactive organisation that attunes the entire organisation to its business environment - inclusive of environmental and strategic objectives - in such a manner that the competitive standing of the organisation is improved [2]. Apart from adopting a holistic approach towards the implementation of an EMS, organisations should take note of global trends among automakers and suppliers with regard to environmental issues. In this regard, trends such as the increase in environment-related regulations in Europe and Japan in particular, and challenges with regard to material development, waste reduction, alternative power usage, post-use, and supply chain management systems should be considered [20].
RESEARCH METHODOLOGY
The research conducted in this study was in essence descriptive. Dane [21] defined descriptive research as research that endeavours to define or measure a particular phenomenon, usually by attempting to estimate the strength or intensity of behaviour or the relationship between two behaviours, while Parasuraman [22] defined descriptive research as a form of conclusive research intended to generate data describing the composition and characteristics of relevant groups or units. For descriptive research, Dane [21] recommended a survey as the most appropriate data collection method, as it may include a variety of questions and allows for a variety of concepts to be described.
Population and sampling
"The basic idea of sampling is that by selecting some elements of the population, conclusions may be made about the entire population" [23].In this research the population was the DAC.The objective of the research was to survey the entire DAC population, and therefore no specific sampling methodology was needed.A limitation of restricting the survey to the DAC only was that inter-regional and crosssectoral variables within South Africa could not be addressed.Therefore the outcomes of the research can not be generalised to other populations.
Questionnaire design
The survey instrument was a self-administered questionnaire. The questionnaire design was based on the literature review. Since the population size was small - namely 37 - a pre-test was carried out by conducting a focused interview with an ISO 14001 consultant in order to get a balanced perspective on the issues involved with ISO 14001 implementation. Kohne [24] found that a pre-test was vital, as it ensured that the questionnaire performed the various intended functions. The negative side of conducting a pre-test when working with small populations is that the population size could be jeopardised by removing members from the population to be used in the pre-test. By using a consultant (who in essence is an external party to the population) to review the questionnaire, the final sample size was not affected: no members had to be removed for the pre-test sample [2].
The covering letter provided the background to and purpose of the research. It also explained how information would be obtained, and assured respondents of confidentiality. The questionnaire consisted of six sections. Section A: This dealt with a profile of the responding organisation and information about the respondent.
Section B: This dealt with the organisation and environment-related issues.
Section C: This perused the reasons for having sought ISO 14001 certification, as well as the benefits and barriers.
Section D: This dealt with the respondents' opinions and attitudes on specific ISO 14001 issues.
Section E: This dealt with opposition to and criticism of ISO 14001.
Section F: This was an optional section that allowed for the respondents' opinions and views on any issues raised in the previous sections.
Data analysis
The responses were segregated along the lines of firms with and without ISO 14001, and their stage in the implementation process. The results of this analysis were used to facilitate two-way tabulation of the survey results. Due cognisance was given to the warning of Parasuraman [22] that, although two-way tabulation is helpful in uncovering relationships, it has pitfalls that could lead to unwarranted conclusions being drawn, since it does not always tell the whole story about the relationships between the sets of variables. One of the pitfalls would be to focus on percentages and ignore the size of the raw totals involved. For this reason the sizes of each segment were given.
Having segmented the respondents into the three categories, descriptive analysis techniques were applied to the data in order to provide insight into the opinions of the various segments about particular issues related to ISO 14001 and its implementation. Differences of opinion between the various segments were established by determining the statistical significance of these differences. By conducting the significance testing, the areas of the survey that provided conclusive evidence were highlighted, thereby allowing the research to focus on particular issues. In order to determine the statistical significance of the data, the method of hypothesis testing was used. Hypothesis testing is a form of inferential analysis that goes beyond descriptive analysis, as it involves verifying specific statements or hypotheses about the population [22]. The hypothesis tested was that organisations with ISO 14001 would respond differently from those in the process of implementing the standard, and from those that had not yet started the implementation process. Cooper [25] recommended the Kruskal-Wallis Test as an appropriate test for data collected using an ordinal scale where there are three or more independent samples.
As a result of the high response rate (76%), it can reasonably be expected that the responses were representative of the whole population. However, the characteristics of random sampling are not claimed. As the research is of an exploratory nature, the emphasis is on determining the opinions of those organisations that are certified to ISO 14001 and those that are not. As differences may occur in their responses, it was decided to use the Kruskal-Wallis Test where appropriate, as three related groups were identified [26].
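As an illustration of the test used here, the sketch below runs a Kruskal-Wallis test on ordinal scores from three groups (certified, implementing, not started). The scores are invented for illustration and are not the survey data.

from scipy import stats

# Hypothetical 5-point Likert scores for the three segments.
certified    = [5, 4, 5, 4, 4, 5, 3, 4, 5, 4, 4]
implementing = [4, 3, 4, 4, 3, 5, 3, 4, 3]
not_started  = [2, 3, 2, 1, 3, 2, 3, 2]

h, p = stats.kruskal(certified, implementing, not_started)
print(f"H = {h:.2f}, p = {p:.4f}")
# With three groups (2 degrees of freedom), H is compared against the
# chi-square critical value of 5.99 used in the article.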
RESULTS
Twenty-eight of the thirty-seven firms in the DAC responded, giving a total response rate of 76%. The certification status of the responding firms is provided in Table 2, and Table 3 shows a relationship between size (based on number of employees) and certification status. The larger organisations, with an average of 599 employees, have all been certified; the medium-sized ones, with an average of 462 employees, were in the process of certifying; while the small organisations, with an average of 171 employees, have not yet started. This trend could be attributed to two factors: first, the larger pool of resources present in larger organisations; and second, the larger the organisation, the greater its need for systems to help it manage its activities. All the organisations with ISO 14001 had other management systems in place, including ISO 9001 (the quality management system recently replaced by ISO/TS 16949 as mandatory for all first-tier original equipment (OEM) suppliers in the automotive industry).
In order for environmental management programmes to become effective, sufficient representation at senior management level is needed. This was reflected in organisations with ISO 14001 and organisations currently implementing an EMS, as both had 45% of environmental managers in senior management and 55% in middle management positions. The results showed that environmental management was the domain of other functional areas within the organisations, such as Health and Safety, Quality, and even Maintenance. None of the surveyed organisations had managers dedicated solely to environmental management. Forty-eight percent of the respondents felt that ISO 14001 would have an impact on their management structures.
The perceived and real impact on management structures could have been the reason why environmental managers were appointed from relatively senior management positions. Seventy-three percent of organisations with ISO 14001 had staff dedicated to EMS implementation and maintenance. This was much higher than in the organisations in the process of implementation. The fact that most organisations with ISO 14001 were larger could have played a role. All organisations with ISO 14001 had set environmental performance measures, which is to be expected, as this is a requirement of the EMS. In addition, 55% of all respondents had a cultural change programme in place, which was indicative of their willingness to accept change.
Section C: Motivation for and barriers to implementing ISO 14001
Section C was used to determine what motivated the organisation to seek ISO 14001 certification; what the major benefits of certification were; and what the barriers to gaining certification were. Section C1 dealt with the statement that best described the organisation's reason for seeking ISO 14001 certification (an option was also given to add other reasons). The findings are given in Table 4 below.
It is apparent from Table 4 that the main reason for seeking ISO 14001 certification is pressure of customer requirements.This applied to all three groups, and is in accordance with the finding of Keogh [4] in the literature survey.
Section C2 dealt with the benefits that would be derived from ISO 14001 certification. The five most important benefits for all firms, in order of importance, were: 1. Certification results in the adoption of sound environmental practices that will lead to cost savings.
2. Certification is a mandatory requirement of existing customers, and is thus required in order to protect current business. 3. Certification would enhance the company's corporate image, which may allow some special considerations when dealing with customers. 4. Certification results in better management controls and more clearly defined targets and responsibilities; and 5. Certification provides a systematic structure for complying with environmental regulations. The most important benefits mentioned were directly profit-related (1-3) and management-related (4 and 5). It therefore appears that profit and management considerations carried more weight than environmental concerns. Once again this finding is in accordance with Welford's finding [17] in the literature survey, which stated that the maximisation of profits and the reduction of costs could become a major barrier to the adoption of ethical practices such as environmentalism. Section C3 offered options for additional benefits, but nothing substantial was reported. Section C4 dealt with the difficulty of the stages of implementing ISO 14001. The Kruskal-Wallis Test was applied to the results of Section C4, as shown in Table 5. Using a significance level of 5% (namely, a critical value of 5.99), only one stage of the implementation process was accepted by the alternate hypothesis - that is, there is a difference in the responses obtained from the organisations in the three different segments - and that was stage 9, writing the management manual. Looking at stage 9 in more detail, as shown in Table 6, and interpreting the results as per Table 5, it appeared that ISO 14001 certified organisations believed that writing the manual was difficult; organisations that had not started the implementation process had no opinion; while organisations that had started the implementation believed that it would be easy to write the management manual. Expanding on Table 5, but excluding stage 9, it appears as though organisations had no problems with getting commitment from top management (stage 1), conducting the initial environmental review (stage 3), preparing the organisation's environmental policy (stage 4), listing environmental aspects and impacts (stage 5), and the internal audit of the organisation against ISO 14001 (stage 12). Organisations had problems, or foresaw problems, with establishing a register of all pertinent legislation (stage 6) and establishing the management programme and structure (stage 8). No opinion could be gained on the issues of obtaining commitment from middle and lower management (stage 2), establishing operational controls and procedures (stage 10), or the training of company personnel (stage 11).
Table 6: Implementation process stages accepted by the alternate hypothesis

Section D: Opinions on ISO 14001 issues
Section D was used to gather the respondents' opinions and attitudes on specific ISO 14001 issues. Questions were designed to collect information on issues such as changes in the organisation, resource allocations, environmental awareness, and organisational expertise. Respondents were asked to rate how strongly they agreed or disagreed with the given statements pertinent to ISO 14001, its implementation, or the organisation's strategic points of view on environmental management. The Kruskal-Wallis Test was applied to the results of Section D. Using a significance level of 5% (namely, a critical value of 5.99), only five points were accepted by the alternative hypothesis - that is, there was a difference in the responses obtained for organisations in the three different segments - and they were points D8, D9, D15, D16, and D23 (see Table 7). On the issue of technical knowledge of environmental matters (D8), organisations with ISO 14001 felt that they had very good technical knowledge, as would be expected. On the other hand, organisations that had not yet implemented the standard felt that they did not have very good knowledge. These findings are closely tied to those of D15 and D23, as both ultimately came down to the issue of technical capability. Issue D15 asked the question about reliance on external expertise to cover any shortfalls in internal expertise. Organisations with ISO 14001 felt that they would not have to rely on external experts, while those organisations that had not yet been certified felt that they would. Issue D23 focused on the issue of internal auditor skills or expertise, and again the same trend was observed: organisations with ISO 14001 felt they had the expertise, while organisations not yet certified felt they did not have expertise among their internal auditors.
In terms of financial resources available for the implementation of ISO 14001, organisations with ISO 14001 felt that they had sufficient funds to cover all the costs involved, while organisations in the process of implementation also believed they had sufficient funds, but were not sure. Organisations that had not started felt they did not have enough funds. The opinion of organisations that had not started the implementation process was in accordance with the literature survey (Biondi et al [5]). The expectations of what an environmental management system could deliver (D16) were another area of major difference. Organisations with ISO 14001, and those in the process of implementation, felt that the expectations were not unrealistic or exceptionally high, while those who had not started were negative about an EMS and felt that the expectations were unrealistic and exceptionally high. These two issues were both raised in the literature review as reasons why organisations had chosen not to certify, as the system is allegedly expensive to implement and does not deliver what it claims to. Section E was used to gather information about opposition to and criticism of ISO 14001. Respondents were asked to rate how strongly they agreed or disagreed with a number of given statements about ISO 14001. The Kruskal-Wallis Test was applied to the results, as shown in Table 8. Two points of criticism were accepted by the alternative hypothesis - namely, that there was a difference in the responses obtained for organisations in the three segments. They were the accusation that if organisations improved their environmental performance substantially, regulators would just impose more stringent regulations (E10), and the notion that the ends do not justify the means (E3). Organisations with ISO 14001 and organisations in the process of implementing it felt that the ends did justify the means (E3) and that improvements to their environmental performance would not result in more stringent regulations being imposed (E10). However, organisations that had not yet started implementation felt that the ends did not justify the means (E3), and that improvements to their environmental performance would result in more stringent regulations being imposed (E10). Both these findings are closely correlated to D9 and D8. The results of the survey in terms of criticism of ISO 14001, as shown in Table 8, showed that organisations felt ISO 14001 was criticised because organisations were reluctant to comply with environmental best practices and seek certification (E1); ISO 14001 placed high demands on organisational resources (E2); ISO 14001 did not materially alter the quality of the organisation's products (E5); and the current system of environmental regulations did nothing to encourage companies to do more than merely comply with minimum regulatory requirements (E7). However, organisations disagreed with the statements that management was too busy doing business to worry about environmental considerations (E4); that ISO 14001 did not necessarily guarantee improvements in environmental performance and regulatory compliance (E6); that information uncovered by the EMS could be used as a roadmap to prosecution (E9); and that environmental regulations eroded competitiveness (E11). No opinion could be obtained on the topic of maintaining continuous compliance with environmental legislation being problematic and requiring serious managerial effort (E8).
CONCLUSIONS
The research problem was to determine the strategic implications of implementing an Environmental Management System (EMS). The initial literature survey in general, and the empirical survey in particular, addressed this problem. In this regard the empirical finding that customer requirements and profit incentives were the most important reasons for implementing ISO 14001 was not only in agreement with the literature review, but emphasized that management regarded ISO 14001 certification as a strategic pre-condition for sustainable profit and long-term survival. The fact that the majority of respondents in the Durban Automotive Cluster (DAC) had either implemented ISO 14001 or were in the process of implementing it supported the notion that ISO 14001 certification was regarded as a high priority in the DAC. As ISO 14001 certification has become a pre-condition for doing business in many countries, as mentioned in the literature survey, it may be reasonably predicted that it will only be a matter of time before all members of the DAC will have gained ISO 14001 certification, much as they gained certification to the ISO 9000 Quality Management System.
The research objectives were individually addressed with regard to:
• Reasons for seeking ISO 14001 certification. (The findings agreed with the major reasons provided in the literature survey, namely customer requirements and profit motives. Strangely enough, fear of not meeting regulatory requirements was not given as an important reason.)
• Perceived barriers to implementation prior to starting the implementation process. (The findings did not strongly support most of the barriers mentioned in the literature survey, as most of these barriers were regarded as relatively easy to overcome.)
• Perceived barriers to implementation during the implementation process. (The findings indicated that the only significant difference between the three groups was the perceived difficulty of writing the management manual.)
• Strategic implications of an EMS - this objective was addressed by all sections, inclusive of Section D (Opinions on ISO 14001 issues) and Section E (Criticism of ISO 14001). (The findings in this regard can be summarised as follows: (1) An EMS is necessary in order to retain current customers, attract new customers, and survive in global markets. (2) Perceived barriers to implementing an EMS can be overcome, and are not as serious as they appear to be. Proof of this can be found in the 'incorrect' assumption of firms that had not started the process that lack of funds was a major barrier, while certified or certifying firms held an opposing view. (3) The negative perceptions with regard to the potential benefits of an EMS held by firms that had not started the implementation process pointed to the possibility that lack of knowledge or unwillingness to change could be the reasons. Considering this opposition by a small group, however, it could still take some time before all firms in the DAC are ISO 14001 certified.)
RECOMMENDATIONS
Owing to the geographical and branch-of-industry limitations of the study, recommendations can only be made for the Durban Automotive Cluster specifically.
In this regard, it appears as though the reality of market forces will require all firms in the DAC to acquire ISO 14001 certification as soon as possible. The implications are that firms that already have ISO 14001 certification should ensure that they retain it; firms in the process of implementing ISO 14001 should speed up the process; and firms that have not started the implementation process should start immediately or consider getting out of the business. A recommendation that applies to all groups is that a paradigm shift is needed on the part of management to include environmental management as a strategic option in their strategic planning process.
CAVEATS
The major limitations of the study are that it was limited to the geographical area of Durban, which made inter-regional comparisons impossible; and that it was confined to the automotive industry only.
SUGGESTIONS FOR FURTHER RESEARCH
Expand the study to include automotive clusters throughout South Africa. Expand the study to other countries as well, to allow for international comparisons.
Expand the study to include other branches of industry at both national and international level.
Table 2: Certification status of surveyed firms

As mentioned in the section on questionnaire design, Section A dealt with the categorisation of the organisations, and Section B with general environmental issues.

Table 3: Results of Sections A and B - General organisational information and environment-related matters
|
2019-02-11T08:10:44.707Z
|
2012-01-15T00:00:00.000
|
{
"year": 2012,
"sha1": "5fe24c27ba4b77276a840b0f14f5a44c60460be2",
"oa_license": "CCBY",
"oa_url": "http://sajie.journals.ac.za/pub/article/download/117/113",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "5fe24c27ba4b77276a840b0f14f5a44c60460be2",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
253594132
|
pes2o/s2orc
|
v3-fos-license
|
Magnetic-Sparseness and Schrödinger Operators on Graphs
We study magnetic Schrödinger operators on graphs. We extend the notion of sparseness of graphs by including a magnetic quantity called the frustration index. This notion of magnetic-sparseness turns out to be equivalent to the fact that the form domain is an ℓ² space. As a consequence, we get criteria of discreteness for the spectrum and eigenvalue asymptotics.
In this paper, we study estimates on the quadratic forms associated with magnetic Schrödinger operators. These estimates yield spectral bounds as well as asymptotics of the eigenvalues in the case of purely discrete spectrum. The concept we use, called magnetic-sparseness, is composed of various components arising from the magnetic Schrödinger operator. The first ingredient is the geometry of the graph, which enters via a combination of sparseness and isoperimetry. The second ingredient is the positive part of the potential of the Schrödinger operator. The third ingredient enters via the magnetic field as the so-called frustration index.
Let us illustrate this concept in the context of the literature in the most basic setting of combinatorial graphs. In Sect. 2, we then introduce the setup of weighted graphs in full detail and generality. For now, let X be a discrete set with an adjacency relation ∼ and let

Q(ϕ) = (1/2) Σ_{x,y∈X, x∼y} |ϕ(x) − ϕ(y)|²

be the quadratic form associated with the Laplacian on ℓ²(X) with the scalar product ⟨·,·⟩. A first step towards estimates on Q is given by so-called isoperimetric constants. These constants measure the area of the boundary of sets in relation to the volume. For graphs, there are numerous ways to define such a constant. For the context of this work, the area and volume with respect to the number of edges are most relevant. Specifically, an isoperimetric constant is the smallest a such that for every finite vertex set W

|E(W)| ≤ a (|E(W)| + |∂W|),

where E(W) are the directed edges within W and ∂W are the directed edges with one vertex in W and the other in X\W. (To count only the edges leaving W, the term on the right-hand side is multiplied by 1/2.) Building on the seminal work of Alon/Milman [1], Dodziuk [7] and Dodziuk/Kendall [8], it was shown by Fujiwara [10] that for compactly supported ϕ

Q(ϕ) ≥ (1 − √(a(2 − a))) ‖ϕ‖²_deg,   where ‖ϕ‖²_deg := Σ_{x∈X} deg(x)|ϕ(x)|².

This directly allows for spectral estimates via the min-max principle. Another classical concept is sparseness, where one asks that |E(W)| ≤ k|W| for all finite vertex sets W. This concept was merged with the concept of isoperimetric constants in [5] into the so-called (a, k)-sparse graphs. For such graphs, one assumes that there are nonnegative a and k such that for all finite W

|E(W)| ≤ a|∂W| + k|W|.

It was shown in [5] that the validity of such estimates is equivalent to the existence of ã ∈ (0, 1) and k̃ ≥ 0 such that

(1 − ã)‖ϕ‖²_deg − k̃‖ϕ‖² ≤ Q(ϕ) ≤ (1 + ã)‖ϕ‖²_deg + k̃‖ϕ‖²    (E)

for compactly supported ϕ. Again this directly allows for spectral estimates via the min-max principle. These concepts of isoperimetry and sparseness consolidate the geometric ingredients into the form estimates for the form Q of the Laplacian Δ.
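For intuition, the following sketch checks an (a, k)-sparseness inequality of the above type on every vertex subset of a small graph. The graph, the parameter values, and the undirected edge-counting convention are illustrative choices, not data from the paper.

import itertools
import networkx as nx

G = nx.petersen_graph()      # small 3-regular test graph
a, k = 1.0, 3.0              # illustrative sparseness parameters

def edges_inside(W):
    # undirected edges with both endpoints in W
    return sum(1 for u, v in G.edges() if u in W and v in W)

def boundary(W):
    # undirected edges with exactly one endpoint in W
    return sum(1 for u, v in G.edges() if (u in W) != (v in W))

ok = all(
    2*edges_inside(W) <= a*boundary(W) + k*len(W)
    for r in range(1, len(G) + 1)
    for W in map(set, itertools.combinations(G.nodes(), r))
)
# True: any 3-regular graph satisfies 2|E(W)| <= 3|W| for all W.
print("Petersen graph is (1, 3)-sparse:", ok)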
If one now has a Schrödinger operator Δ + v with positive v ≥ 0, then one can incorporate v into the sparseness assumption by asking for nonnegative a and k such that the corresponding inequality holds for all finite W, where v(W) = Σ_{x∈W} v(x). The observation that a positive potential can be thought of as weighted boundary edges of a set was already made in [16]. In [13], it was shown that this sparseness assumption is equivalent to form estimates as in (E) for the form Q + v. Furthermore, estimates for forms where v is not necessarily nonnegative were then achieved by perturbation theory. Finally, we discuss the last ingredient, which is a magnetic field. A magnetic potential enters a Schrödinger operator Δ + v via an antisymmetric function θ on the edges, resulting in a magnetic Schrödinger operator of the form H_θ = Δ_θ + v. In [5], it was shown that (a, k)-sparseness yields a form estimate as in (E) for the form Q_θ associated with H_θ. However, these estimates are in general no longer equivalent to (a, k)-sparseness.
The purpose of this work is to incorporate the magnetic field into the form estimate. This way the influence of the magnetic field on the existence of a spectral gap or on eigenvalue asymptotics is made transparent. Furthermore, we pursue the question whether such a form estimate as above is equivalent to a sparseness assumption which includes the magnetic field. To achieve this, we integrate a concept called the frustration index into our notion of sparseness. This will then be called magnetic-sparseness. Specifically, in [18] the following frustration index was considered for a finite set W: ι_θ(W) = min_τ (1/2) Σ_{x,y∈W, x∼y} |τ(x) − e^{iθ(x,y)}τ(y)|, where the minimum is taken over all functions τ from W into the complex unit circle T and is attained due to compactness of T. Now, assume the existence of a such that the corresponding magnetic isoperimetric inequality holds for all finite W. Although it is not explicitly stated in [18], one can obtain from their techniques a lower bound for Q_θ for v = 0 as it appears in (E) for some ã ∈ (0, 1) depending on a, and k̃ = 0. In this work, we merge all these concepts (isoperimetry, sparseness, the positive part of the potential and the frustration index) into one concept called magnetic-sparseness. Specifically, we ask for the existence of nonnegative a and k such that the inequality below holds for all finite W. In Theorem 3.1, we show that such an inequality is equivalent to a lower bound in the form estimate of (E). However, an upper bound as it appears in (E) is not necessarily valid. In view of the results in [5], this is a surprising new phenomenon. Recall that in [5], one shows that (a, k)-sparseness without the frustration index is equivalent to (E) for non-magnetic Schrödinger operators. Furthermore, it is shown in [5] that for the magnetic form Q_θ a two-sided estimate as in (E) holds under the assumption of sparseness.
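Written out under the weighted-graph conventions introduced in Sect. 2 (so this is a sketch; the authoritative form is the displayed inequality of the definition of magnetic-sparseness below), the condition combines all four ingredients as:

```latex
b(W \times W)
\;\le\;
a\Big(\tfrac{1}{2}\, b(\partial W) + v_{+}(W)\Big)
\;+\; (1+a)\,\iota^{(p)}_{\theta}(W)
\;+\; k\, m(W)
\qquad \text{for all finite } W \subseteq X .
```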
Here, we improve the lower bound (by including θ in the estimate), but we lose the upper bound. This phenomenon can be explained as follows: in [5], the results for magnetic operators were deduced from the non-magnetic case using Kato's inequality to deduce the bounds for H_θ from H_0. However, in general Kato's inequality fails to be an equality, so one cannot infer a lower bound for H_0 from a lower bound of H_θ.
In summary, we obtain lower bounds on Q_θ which involve the geometry, the potential as well as the magnetic field. Indeed, our estimates make the influence of each of these components transparent and allow for a very conceptual understanding of the phenomena as well as for various applications. In this sense, our approach not only generalizes all the results above, but also allows for a structural understanding of the recent considerations of [12].
Furthermore, other variants of the frustration index appear in the literature. One prominent example is the frustration index ι^{(2)}_θ of [3], defined with the square of the summand. Thus, it is natural to ask whether the frustration index from [18] discussed above (which under this perspective has to be denoted by ι^{(1)}_θ) is special in that it characterizes the validity of a lower bound in (E). It turns out (and this may come as another surprise) that this is not the case. Indeed, replacing 2 by any p ∈ [1, ∞) (and Q_0 by Q^{(p)}_0), one can show that magnetic-sparseness with respect to p is also equivalent to the lower bound in (E) (of course with ã and k̃ now depending on p). However, to prove this one has to open a whole other Pandora's box. Specifically, one now has to consider the functionals of the magnetic p-Schrödinger operator. So, as a side product one also obtains an ℓ^p-version of the lower bound in (E) for Q^{(p)}_θ. Although our main interest is in the case p = 2, these functionals come up naturally in the proof and therefore we included them in the statement of the main theorem, Theorem 3.1, as well. The case p = 2 is then again separately discussed in two corollaries, Corollaries 3.5 and 3.7. Let us stress that having the inequality for one p ∈ [1, ∞), one obtains it for all p including p = 2 (of course with different constants).
The paper is structured as follows. In the next section, we introduce the setup which deals with the more general case of weighted graphs instead of combinatorial graphs only as considered above. In Sect. 3.1, we present the main results and in Sect. 3.2 we prove a magnetic co-area formula that is the core of our proof. Furthermore, the main theorem is proven in this section. Finally, we present examples in Sect. 4. In particular, in Sect. 4.1, we consider products of graphs and discuss how the results of [12] can be embedded in our context. Furthermore, we discuss magnetic cycles in Sect. 4.2 and give a criterion for magnetic-sparseness in terms of subgraphs. Subsequently, we apply this criterion to tessellations in Sect. 4.3.
Magnetic Forms
Let X be a discrete set. We denote the complex-valued functions with finite support by C_c(X). Any function m : X → (0, ∞) extends to a measure of full support on X, and we denote by ℓ^p(X, m), 1 ≤ p < ∞, the complex Banach space with norm ‖f‖_p = (Σ_{x∈X} |f(x)|^p m(x))^{1/p}. We say x, y ∈ X are connected by an edge if b(x, y) > 0. We say a graph b has standard edge weights if b takes values in {0, 1}. An electric potential is a function v : X → R and a magnetic potential is an antisymmetric function θ : X × X → R/2πZ.
In the following, m always denotes a measure, b a graph, v an electric potential and θ a magnetic potential, and we refer to the quadruple (b, θ, v, m) as a magnetic graph.
We define the forms Q^{(p)} for ϕ ∈ C_c(X) and p ∈ [1, ∞). Since the focus of this paper is to study the influence of the magnetic potential, we highlight θ in the notation. We have to bound the negative part v_− of v, where v_± = (±v) ∨ 0. To this end, we introduce the dual pairing ⟨f, g⟩ for functions f ∈ C_c(X) and g : X → C. Let p ∈ [1, ∞), b, v_+, θ and m be given. We say the negative part v_− of an electric potential belongs to the class K^{p,θ}_α if it is form-bounded with bound α. Again we write K^θ_α := K^{2,θ}_α and K^θ_{0+} := K^{2,θ}_{0+}. In [13, Proposition 2.8], it is shown that in the case p = 2 the forms Q^{(2)}_{b,θ,v,m} are closable in ℓ²(X, m) for any v_− ∈ K^θ_α with α ∈ (0, 1), and we denote the closure by Q_θ. Note that, in the case v = 0, even if the value of Q_θ(f) does not depend on m for f ∈ C_c(X), its domain D(Q_θ) does depend on m. Moreover, by [13, Theorem 2.12] there is a self-adjoint operator H_θ associated with Q_θ. We define the weighted vertex degrees Deg and deg accordingly. Observe that Deg^{(+)} = Deg + v_− and deg^{(+)} = deg + m v_−.
Frustration Indices
Physically, two magnetic fields θ₁ and θ₂ act in the same way if H_{θ₁} and H_{θ₂} are equivalent. From the perspective of the magnetic field, this fact can be characterized in several equivalent ways, see, e.g., [6,12,15,18,19]. In this article, we put forward the notion of the frustration index.
where T := {z ∈ C | |z| = 1} and the minimum is attained by the compactness of T. We summarize the basic properties of the frustration indices in the following proposition.
The following statements are equivalent; if additionally v = 0, then also a further statement is equivalent. Proof. Statement (a) follows directly from the fact that |τ(x) − e^{iθ(x,y)}τ(y)| ≤ 2, and (b) and (c) are clear from the definition.
Let us turn to the equivalence in (d). The equivalence (i) ⇐⇒ (ii) is trivial since W is assumed to be finite. The equivalence (i) ⇐⇒ (iii) can be seen using [18, eq. (3.3)]. To see the equivalence (i) ⇐⇒ (iv), recall the variational characterization of min σ(H_{θ,W}). This yields the implication (i) =⇒ (iv). On the other hand, assume (iv), i.e., ker(H_{θ,W}) = {0}. Then, there exists a non-trivial f with the properties required to conclude. A common and direct way to understand the magnetic field is through fluxes, i.e., the sum of the θ's around the edges of a cycle in a graph. Moreover, ι^{(2)}_θ(W) is obtained by minimizing the quadratic form of H_{θ,W} over all gauges. This gives another interpretation of the frustration index for p = 2.
Remark 2.4. On cycles the frustration indices for p = 1 and p = 2 can be explicitly calculated, see Proposition 4.6.
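To make the frustration index concrete, the following small numerical sketch (our illustration, not code from the paper; the restart heuristic and parameter values are arbitrary) approximates ι^{(p)}_θ(W) on a magnetic cycle by minimizing over the phases of τ:

```python
import numpy as np
from scipy.optimize import minimize

def frustration_index(b, theta, p=2, n_restarts=5, seed=0):
    """Approximate iota^{(p)}_theta(W) = min over tau: W -> T of
    (1/2) * sum_{x,y} b(x,y) * |tau(x) - exp(i*theta(x,y)) * tau(y)|^p."""
    n = b.shape[0]
    rng = np.random.default_rng(seed)

    def objective(phi):
        tau = np.exp(1j * phi)
        diff = np.abs(tau[:, None] - np.exp(1j * theta) * tau[None, :])
        return 0.5 * np.sum(b * diff ** p)

    best = np.inf
    for _ in range(n_restarts):  # random restarts: the problem is non-convex
        res = minimize(objective, rng.uniform(0, 2 * np.pi, n),
                       method="Nelder-Mead")
        best = min(best, res.fun)
    return best

# Cycle C_n with standard edge weights and total flux delta
n, delta = 6, np.pi
b = np.zeros((n, n))
theta = np.zeros((n, n))
for x in range(n):
    y = (x + 1) % n
    b[x, y] = b[y, x] = 1.0
    theta[x, y], theta[y, x] = delta / n, -delta / n

for p in (1, 2):
    print(f"iota^({p})_theta(C_{n}) ~= {frustration_index(b, theta, p):.4f}")
```

For a cycle with total flux δ, the output can be checked against the explicit values discussed in Proposition 4.6.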
Magnetic-Sparseness
The boundary of a set W ⊆ X is defined as ∂W := (W × (X\W)) ∪ ((X\W) × W). To define quantities like the measure of the boundary or the potential of a set, we will use the convention that a nonnegative function on a discrete set extends to a measure via additivity, i.e., given a set A ⊆ X and f : X → [0, ∞), we set f(A) := Σ_{x∈A} f(x). We turn to the central notion of the paper.
Definition (Magnetic-sparseness). Let a, k ≥ 0 and p ∈ [1, ∞). We say the magnetic graph (b, θ, v, m) is (a, k)_p-magnetic-sparse if b(W × W) ≤ a((1/2) b(∂W) + v_+(W)) + (1 + a) ι^{(p)}_θ(W) + k m(W) for all finite W ⊆ X. Remark 2.5. Let us discuss the ingredients which go into the definition of magnetic-sparseness. Fundamentally, it is an assumption that the edge weights within finite sets can be bounded by various other quantities of the graph. The edge weights within W appear as the term b(W × W) on the left-hand side.
Reading the terms which bound b(W × W) from the right, the first term that appears is km(W). If everything else on the right-hand side were equal to zero, then the graph would be k-sparse, i.e., the edge weight of W is bounded by the measure of W.
The next term that appears on the right-hand side is a((1/2) b(∂W) + v_+(W)). Assuming for a moment that v_+ = 0 as well as θ = 0 and k = 0, this condition relates directly to an isoperimetric inequality. Specifically, having such a bound for some positive a > 0 is equivalent to positivity of the classical Cheeger constant. (Note that the 1/2 in front of the boundary measure stems from the fact that we only count the edges leaving W.) In the case of non-trivial v, it was observed already in [16] that the positive part v_+ of v has the effect of boundary edges in isoperimetric considerations. Indeed, one can think of a virtual vertex at infinity which is connected to the graph via v_+ and on which we put Dirichlet boundary conditions, i.e., we ask all functions to vanish on this virtual vertex.
The remaining term on the right-hand side is the frustration index ι^{(p)}_θ(W). It is easily observed that in the case v = 0, positivity of the isoperimetric quantity is equivalent to (a, 0)_p-magnetic-sparseness with some a < ∞, for p ≥ 1. Moreover, a and h^{(p)}_θ can be related explicitly (whenever the constant a is chosen optimally, one even has equality). The constant h^{(1)}_θ was considered in [18]. The constant h^{(2)}_θ appears in the work of [3]. Note that for finite and connected X, h^{(1)}_θ and h^{(2)}_θ can be related as well. The reason to choose the parameter (1 + a) in front of the frustration index is that it arises as the natural choice when proving the functional inequality for the Laplacian (see Theorem 3.1 below), which is the main result of the paper.
Remark 2.6. If the graph is (a, k)_p-magnetic-sparse, Proposition 2.1 (a) ensures that the graph is also (a′, k)_1-magnetic-sparse with a′ = (1 + a)2^{p−1} − 1. We will prove below that the converse is also true. Namely, if a graph is (a, k)_1-magnetic-sparse for some a, k ≥ 0, it is also (a(p), k(p))_p-magnetic-sparse for all p ∈ [1, ∞) and some a(p), k(p) ≥ 0, see Theorem 3.1.
Trivially, every graph over a finite set is (0, k) p -bi-magnetic-sparse for all p ∈ [1, ∞) and some k ≥ 0. However, there is also the following simple example which shows that a graph can become magnetic-sparse due to the magnetic field. This example is also discussed in [12].
Example 2.7. We consider the combinatorial graph which arises as the Cayley graph of X = Z×Z/3Z with the generators (±1, 1); see Fig. 1. We put standard weights on this graph, i.e., we choose the weights b to take values in {0, 1}. Now, we put a finite measure m on X, i.e., m(X) < ∞, and we let v = 0.
For θ = 0, the graph is obviously not magnetic-sparse for any choice of parameters a and k. However, choosing θ to be non-trivial, it is not hard to check that this allows us to obtain magnetic-sparse graphs by virtue of the magnetic potential alone. For details and a more involved example, we refer the reader to Sect. 4.
The next proposition deals with magnetic bi-partite graphs.
Figure 1: The Cayley graph of Z × Z/3Z, where the decay of the size of the triangles of 3Z indicates the decrease in the measure m. Proof. The map τ ↦ τ̃ is clearly a bijection; thus the two minima coincide. The last point is clear from the definition.
For further examples, we refer the reader to Sects. 4.2 and 4.3.
Main Results
Before we state the main result of the paper, let us recall the fundamental notions that appear. For a graph b over (X, m), an electric potential v and a magnetic potential θ, the graph is called (a, k)_p-magnetic-sparse for a, k ≥ 0 and p ∈ [1, ∞) if the defining inequality with the p-frustration index ι^{(p)}_θ holds for all finite W. Observe that (a, k)_p-magnetic-sparseness depends only on the positive part v_+ of v. On the other hand, to deal with non-positive potentials as well, we introduced the classes K^{p,θ}_α which put a requirement on the negative part of a potential. Thus, these two requirements are independent and can be asked for different p as well, i.e., the graph being (a, k)_p-magnetic-sparse but the negative part v_− of v being in K^{q,θ} for p ≠ q. Furthermore, recall that Deg^{(+)} = Deg + v_− and deg^{(+)} = deg + m v_−.
Theorem 3.1. Let (b, θ, v, m) be a magnetic graph. The following assertions are equivalent: (i) the graph is (a, k)_1-magnetic-sparse; (i′) its p-version holds; (ii) a lower bound as in (E) holds for Q^{(p)}_{b,θ,v_+,m}(f). Moreover, given p_0 ≥ 1, α ∈ (0, 1) and v_− ∈ K^{p_0,θ}_α, the previous points are also equivalent to (iii): there are ã ∈ (0, 1) and k̃ ≥ 0 such that the corresponding lower bound holds for all f ∈ C_c(X). If v_− ∈ K^{2,θ}_α for some α ∈ (0, 1), the above assertions are also equivalent to a statement (iv) on the form Q_θ.
Remark 3.2. All the constants that appear in the different equivalences can be tracked explicitly, see Lemmas 3.13, 3.14 and 3.15. For instance, in (i′) and p ≥ 1, the constant a′ can be chosen depending on the a and k given in (i) and involves 2^{1−p}(p² + 1), and k′ is a constant such that k′ = 0 if k = 0. In (ii) and p ≥ 1, explicit choices of ã and k̃ in terms of the a and k given in (i) are possible as well. In (iii), for v_− ∈ K^{p_0,θ}_α, one starts from the a and k given in (i); the estimate holds on F(X), and via Green's formula statement (iii) in the theorem above can be seen to be equivalent to a corresponding operator bound.
From Proposition 4.8 and Example 4.12 in the next section, it can be seen that there exist magnetic-sparse graphs which are not bi-magnetic-sparse. That is, a corresponding upper bound for Q_θ does not necessarily hold even if Q_θ is lower-bounded. Nevertheless, for bi-magnetic-sparse graphs we can obtain lower and upper bounds in the case p_0 = 2.
Corollary 3.5. Let (b, θ, v, m) be a magnetic graph such that v_− ∈ K^{2,θ}_α for some α ∈ (0, 1). Then bi-magnetic-sparseness is equivalent to a two-sided estimate as in (E). Proof. Apply Theorem 3.1 with θ and θ + π and condition (iii′) of Remark 3.4, comparing Q_{θ+π}(f) and ⟨Deg, |f|^p⟩ for all f ∈ C_c(X).
Next, we turn to the corresponding eigenvalue asymptotics in the case p = 2. When H_θ has purely discrete spectrum, i.e., the spectrum of H_θ consists of discrete eigenvalues with finite multiplicity, we denote the eigenvalues counted with multiplicity in increasing order by λ_n, n ≥ 0. Furthermore, set D_∞ := liminf_{x→∞} Deg(x), where x → ∞ means that x converges to the point ∞ in the one-point compactification X ∪ {∞} of X, i.e., the liminf is taken over all sequences of vertices which eventually leave every finite set. In the case D_∞ = ∞, we order the vertices N_0 → X, n ↦ x_n bijectively such that D_n := Deg(x_n) ≤ Deg(x_{n+1}).
Corollary 3.7. Assume the magnetic graph (b, θ, v, m) with v_− ∈ K^θ_α for some α ∈ (0, 1) is (a, k)_1-magnetic-sparse. Then, the following statements are equivalent:
In this case, we have corresponding asymptotics for the eigenvalues λ_n of H_θ. Proof. The statement follows directly from the two-sided estimate in Theorem 3.1 (iii) (confer Corollary 3.5 (ii) as well), the formula for ã in Remark 3.2 and the min-max principle.
Remark 3.8. In the case where v_− ∈ K^θ_{0+}, one can take α = 0 in Corollary 3.7.
Magnetic Isoperimetry
We prove the main theorem via isoperimetric techniques. For a function f : X → R, one defines the level sets Ω_t := {x ∈ X | f(x) > t}, t ∈ R. In the non-magnetic case, the corresponding area and co-area formulas are well known. The key ingredient of the proof of our main theorem is the following magnetic co-area inequality.
Proof. For finite graphs and p = 2, this formula has been shown in [18, Lemmas 4.3 and 4.7], which can also be extracted from the proof of [3, Lemma 3.2]. The ideas carry over directly to our setting.
For a given function f : X → C, we define an auxiliary complex-valued function and calculate the resulting identity; note that in the last equality we used Tonelli's theorem. For two vertices x, y ∈ X, we assume w.l.o.g. that |f(x)| ≤ |f(y)| and calculate further.
Now, we apply Lemma 3.12 below with the appropriate choice of parameters.
Combining this with the estimate in the beginning, the statement of the lemma follows.
Lemma 3.12. For all v, w ∈ T and for all β, p ∈ [1, ∞) one has
Proof. The claim is obvious for β = 1. Thus, we can assume β > 1. Similar to [3, Proposition A.1], we set t := |v − w| and s := t/(β^p − 1). We obtain a first estimate, using |β^p − 1| ≥ p|β − 1| in the last step, and a second estimate; taking the two inequalities together we obtain the desired bound. This finishes the proof.
If ⟨Deg, |f|^p⟩ < k‖f‖_p^p, then the announced values of ã and k̃ work in this case. Otherwise, note that the following area formula holds, and in the resulting inequality we use the (a, k)_1-magnetic-sparseness. Applying Lemma 3.11 and the Hölder inequality, we obtain, with q = p/(p − 1), a further estimate. Since the left-hand side of this inequality is nonnegative by assumption, we can take the p-th power on both sides. Due to p/q = p − 1, this shows the statement with the choice of (ã, k̃) given in the lemma.
Lemma 3.14. If the magnetic graph is (a, k)_1-magnetic-sparse, a, k ≥ 0, and v_− ∈ K^{p,θ}_α, then a corresponding form estimate holds, where C_α is the constant from the bound ⟨v_−, |f|^p⟩ ≤ αQ^{(p)}(f) + C_α‖f‖_p^p. Proof. By the lemma above we have, for all f ∈ C_c(X), the inequality with constants ã_0 and k̃_0. Hence, together with the bound for v_−, we obtain the statement by a straightforward calculation (similar to [5, Lemma A.3]). With the specific constants of Lemma 3.13, the statement follows.
If for some 0 < ã < 1 the magnetic graph satisfies the lower bound in the form estimate, then it is magnetic-sparse. Proof. Let W ⊆ X be a finite set and let τ_0 : W → T be the function that attains the minimum in the definition of ι^{(p)}_θ(W). We define a test function f_0 built from τ_0 and supported on W and calculate its energy. Applying the assumed inequality to the function f_0, we obtain the sparseness inequality for W. This proves the lemma.
To prove Theorem 3.1, we apply the lemmas above and the closed graph theorem.
Proof of Theorem 3.1. First, we prove the theorem for a nonnegative potential v ≥ 0, in which case deg = deg^{(+)} := deg + v_− m. Let p ≥ 1 and let us denote by (i′)_p and (ii)_p the statements (i′) and (ii) with this p, respectively. We then consider the case p = 2 and prove (ii)_2 ⇔ (iv). We now turn to the case where v is not necessarily positive but v_− ∈ K^{p_0,θ}_α. Since, from the definition of magnetic-sparseness, v_− appears neither in (i), (i′) nor in (ii), each of the assertions (i), (i′)_p, (ii)_p holds for (b, θ, v, m) if and only if it holds for (b, θ, v_+, m). The equivalence between (i), (i′) and (ii) follows.
The equivalence between (ii) and (iii) is easy. Indeed, if (iii) holds for Q_{θ,v}, then (ii) holds for Q_{θ,v_+} with the same constants. The converse (ii) ⇒ (iii) is true with a change in the constants, using the defining bound of the class K^{p_0,θ}_α, namely ⟨v_−, |f|^{p_0}⟩ ≤ αQ^{(p_0)}(f) + C_α‖f‖_{p_0}^{p_0}, and the proof of Lemma 3.14.
In the case p 0 = 2, that is v − ∈ K 2,θ α , the domains of Q θ,v and Q θ,v+ are the same. As a consequence, (iv) in the case v − ∈ K 2,θ α holds for (b, θ, v, m) if and only if it holds for (b, θ, v + , m).
Examples of Magnetic-Sparseness
In this section, we consider products of graphs, magnetic cycles and tessellations. In the section of products of graphs, we provide a structural description of part of the results of [12]. In the section of magnetic cycles, we compute the frustration index for magnetic cycles for p = 1, 2. Finally, we use these results to conclude magnetic-sparseness for tessellations under the assumption that the magnetic strength of cycles within the tessellation is large with respect to their length.
Products of Graphs
In this section, we apply our results to Cartesian products of graphs. Let two magnetic graphs (b_1, θ_1, v_1, m_1) over X_1 and (b_2, θ_2, v_2, m_2) over X_2 together with their magnetic forms Q_{θ_1} = Q_{b_1,θ_1,v_1,m_1} and Q_{θ_2} = Q_{b_2,θ_2,v_2,m_2} be given. Furthermore, let μ be a positive weight entering the product construction. This product is a natural choice, as the following lemma shows.
There are several ways to prove that the essential spectrum of the magnetic Laplacian is empty, e.g., [6,12]. Here, we use Lemma 4.1 and the sparseness of only one of the graphs in order to prove the discreteness of the spectrum of H_θ. That is the spirit of the techniques developed in [12] (where the authors use a slightly different product). Let (b_1, θ_1, v_1, m_1) be a magnetic graph over X_1 with v_1 ≥ 0. Let (b_2, θ_2, v_2, m_2) be an (a, 0)_1-magnetic-sparse graph over X_2, with v_2 ≥ 0 and inf Deg_2 > 0. Let μ be as above, tending to zero when leaving every compact set.
If X_2 is finite and connected and v_2 ≥ 0, note that (b_2, θ_2, v_2, m_2) satisfies this assumption automatically. Proof. Let 0 < D_2 ≤ Deg_2, where Deg_2 is the weighted vertex degree of (b_2, θ_2, v_2, m_2). Using Lemma 4.1 and Theorem 3.1 for (b_2, θ_2, v_2, m_2), we infer a lower bound for all f ∈ C_c(X) with ã < 1. The discreteness of the spectrum of H_θ follows from the min-max principle and the fact that μ tends to zero when leaving every compact set.
We now compute the magnetic-sparseness constants and the frustration indices of products of graphs.
In order to prove Theorem 4.3, we show a lemma, which is interesting in its own right.
Proof. Let p ∈ [1, ∞) be fixed. Let τ_0 : W → T be the function that attains the minimum in the definition of ι^{(p)}_θ(W). On the other hand, let τ_1 : W_{x_2} → T and τ_2 : W_{x_1} → T be the functions that attain the minima in the definitions of the frustration indices of the factors. By a direct calculation using the (a, k)_p-magnetic-sparseness, we obtain the desired estimate. Invoking m = m_μ/μ, the statement follows by the assumption k/μ ≤ k_μ.
Remark 4.5. Instead of Cartesian products, one can consider also sub-Cartesian products. The considerations are almost identical.
Magnetic Cycles
We study the notion of frustration indices and of magnetic-sparseness in the case of a cycle. We start with a definition.
(c) The strength of the magnetic field of a cycle C, denoted s_θ(C), is defined via the flux F_θ(C). Note that while the sign of F_θ(C) still depends on the choice of Φ, the value of s_θ(C) is independent of Φ.
We turn to the computation of the frustration indices.
Proof. For (a) and (b), it is enough to consider the case W = Y by Proposition 2.1 (d).
(a) This follows easily from [20, Theorem 4.10]. There it is proven that ι^{(1)}_θ(Y) is attained by a function τ that is supported on a spanning tree and that satisfies τ(x) = e^{iθ(x,y)}τ(y) for neighbors x and y on this spanning tree. (b) In [6, Lemma 2.3], the bottom of σ(H_θ) is computed for m = 1 to be |1 − e^{iδ/n}|². Since the eigenfunctions have constant absolute value (say, equal to 1), the minimizer of the Rayleigh quotient also minimizes ι^{(2)}_θ(X), and the claim follows. Remark 4.7. (a) The minimizers of ι^{(1)}_θ and of ι^{(2)}_θ are very different. For ι^{(2)}_θ, all edges have the same contribution, whereas the contribution for ι^{(1)}_θ is concentrated solely on one edge.
(b) Note that ι^{(1)}_θ is bounded by 2 and is independent of the length of the cycle; it depends only on the magnetic flux. (c) In the case of a general magnetic cycle C over Y (not necessarily with standard edge weights), the same proof as above gives an analogous bound for ι^{(1)}_θ(Y). The exact value of ι^{(2)}_θ(Y) in this situation is not clear.
In the case when l(C) is even, C is bi-partite and by Proposition 2.8, C is (a, 0)-bi-magnetic-sparse if and only if it is (a, 0)-magnetic-sparse; the result follows. In the case when l(C) is odd, one has: F θ+π (C) ≡ F θ (C) + π mod (2π) and the result follows. Remark 4.9. Using Remark 4.7 (c), it can be shown that a similar statement holds in the case of general magnetic cycles (with non-necessarily standard edge weights).
Subgraph Criterion and Tessellations
In this section, we give a useful criterion for magnetic-sparseness using subgraphs. Furthermore, we estimate the sparseness-constant for tessellations. At the end, we show that regular triangulations with θ = π are magnetic-sparse graphs, but not bi-magnetic-sparse.
The results of this section are based on the following proposition, where the subgraphs can be thought of as cycles. Proposition 4.10 (Subgraph criterion for magnetic-sparseness). Let (b, θ, v, m) be a magnetic graph over X and let p ∈ [1, ∞). Let J be a set, a, k ≥ 0, C > c > 0, and M > 0. Suppose (b_j, θ, v_j, m_j)_{j∈J} is a family of (a, k)_p-magnetic-sparse graphs satisfying, for all x, y ∈ X, comparability conditions (with constants c, C and M) between the family and (b, θ, v, m). Then, (b, θ, v, m) is (aC/c + C/c − 1, Mk/c)_p-magnetic-sparse. Moreover, if k = 0, then conditions (a) and (b) are sufficient to conclude (aC/c + C/c − 1, 0)_p-magnetic-sparseness of (b, θ, v, m).
Proof. Let W ⊂ X be finite and fix p ∈ [1, ∞). We write ι^{(p)}_{θ,j} for the frustration index of (b_j, θ, v_j, m_j). Since (b_j, θ, v_j, m_j) is (a, k)_p-magnetic-sparse for all j ∈ J, we obtain the claimed bound by summing over j. This finishes the proof of the first statement. The statement about k = 0 is clear.
Here, a tessellation is a planar graph admitting a set of subgraphs that are cycles such that every edge belongs to exactly two of these cycles. (A subgraph is a restriction of the corresponding maps to a subset of the space X.) For a more restrictive notion of planar tessellations, see, e.g., [4].
We will apply Proposition 4.10 to tessellations using the faces as subgraphs. We show that every tessellation is magnetic-sparse whenever the face degree is upper-bounded and the magnetic strength of the faces is lowerbounded from zero.
Let 𝓕 be the set of faces of the graph. For F ∈ 𝓕, we denote by X_F the vertices which belong to F. We also define b_F := b · 1_{X_F×X_F} and θ_F := θ · 1_{X_F×X_F}. The graph C_F := (b_F, θ_F, 0, 1) over X_F is a magnetic cycle; see Sect. 4.2. Corollary 4.11 (Magnetic-sparseness of tessellations). Let a magnetic tessellation (b, θ, 0, 1) over X with standard edge weights be given. If the face degrees are bounded from above and the magnetic strengths of the faces are bounded away from zero, then the tessellation is (a, 0)_1-magnetic-sparse.
Proof. Due to Proposition 4.6, the graph C_F is (a_F, 0)_1-magnetic-sparse with a_F = 2l(C_F)/s(C_F) − 1. By the tessellation property, every edge belongs to exactly two cycles; hence, the subgraph criterion applies with c = C = 2. Thus, by Proposition 4.10 we conclude the statement.
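Unwinding the proof (a sketch, not a statement from the paper: with c = C = 2 in Proposition 4.10, the combination aC/c + C/c − 1 collapses back to a), the resulting sparseness constant for the tessellation is plausibly:

```latex
a \;=\; \sup_{F \in \mathcal{F}} a_F
\;=\; \sup_{F \in \mathcal{F}} \left( \frac{2\, l(C_F)}{s_\theta(C_F)} - 1 \right)
\;<\; \infty ,
```

which is finite exactly when the face lengths are bounded and the magnetic strengths are bounded away from zero, matching the hypothesis of Corollary 4.11.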
We give the example of triangulation Cayley graphs (b, θ, v, m) which turn out to be magnetic-sparse, but not bi-magnetic-sparse. These triangulation Cayley graphs can be understood as a generalization of the triangulation of the plane by equilateral triangles.
Example 4.12 (Magnetic-sparse, but not bi-magnetic-sparse). Let (G, ·) be an infinite abelian group which is finitely generated by at least two elements with neutral element e and symmetric generating set S such that e ∈ S. By choosing a possibly bigger S, we can furthermore assume that for every s ∈ S there exists r = r(s) ∈ S such that rs ∈ S. Because of the latter condition, we refer to the corresponding Cayley graph as a triangulation, i.e., every edge {g, sg} in the corresponding Cayley graph belongs to at least one triangle, namely {g, sg, rsg} with r chosen as above. Then, 1 ≤ c := min s∈S #{r ∈ S | rs ∈ S} ≤ max s∈S #{r ∈ S | rs ∈ S} =: C < ∞.
Food Allergens and Essential Oils in Moisturizers Marketed for Children in Japan
Introduction: Personal skincare leave-on products can increase the risk of food allergies. Parents therefore need greater awareness of the allergenic potential of pediatric skincare products. Material and methods: We examined what the labeling and promotional material of these products indicates about their potential to elicit skin sensitization. This study investigated the relationship between food allergen and essential oil ingredients and the highlighted marketing terms, product prices, and ratings of moisturizers for children sold on Amazon, Japan. We recorded the product labels and website marketing terms, price (per gram or milliliter), the number of reviews, and allergens, and investigated the relationship between the percentage of food allergens in those products and marketing terms, price, and the number of Amazon reviews. Results: Among the 164 pediatric skincare products we included, products manufactured in Japan (144, 87.8%) were the most common; 7 (4.3%), 15 (9.1%), 23 (14.0%), 24 (14.6%), and 54 (32.9%) contained the eight regulated food allergens, grain, nut, fruit, and essential oils, respectively. Products with marketing terms emphasizing "natural/organic" were more likely to contain grain allergens and essential oils and were more expensive than products without "organic" labeling, whereas those labeled with marketing terms emphasizing "hypoallergenic" were less likely to contain fruit allergens or essential oils. Products with fewer Amazon reviews were more likely to use the marketing term "natural/organic" and had a higher grain allergen content. Conclusion: In Japan, 4.3% of children's skincare products sold on Amazon contain the eight food allergens that must be labeled when included in food products. In addition, more than 10% of these children's skincare products contain ingredients derived from nuts, while more than 30% contain fruit extracts or essential oils.
Introduction
Worldwide, the number of children with food allergies is increasing: 8% of children in the USA and 5% of young children in Japan have food allergies [1,2]. In a single Japanese urban center, the incidence of food allergies has been shown to escalate annually [3]. Childhood food allergy decreases family quality of life and confers nutritional and financial burdens [4]. Therefore, avoiding risk factors for food allergies is essential. Furthermore, food allergens, such as peanuts and oats, are risk factors for food allergies even on skin contact [5,6]. Essential oils derived from fruits and plants can induce sensitization following skin contact [7,8]. As advanced by the dual allergen exposure hypothesis, topical exposure to food allergens is widely acknowledged to increase susceptibility to food allergies.
Many personal care products for children contain food ingredients, even though percutaneous sensitization increases the risk of food allergies [5,6]. Moreover, various marketing terms suggest that these food ingredients have beneficial effects on the skin, and products purchased on e-commerce sites are more valued by consumers, thereby reinforcing consumers' purchasing behavior [9]. Thus, consumers' emphasis on marketing terms and e-commerce site reviews may constitute risk factors for selecting personal care products that pose a risk of allergen sensitization.
We analyzed the relationship between food allergen and essential oil content and the emphasized marketing terms, product prices, and ratings of personal care products for children sold on Amazon, one of the largest e-commerce sites in Japan. We investigated the impact of these factors on the content of allergens and essential oils in personal skin care products for children. We believe that examining the correlation between allergen and essential oil ingredients in children's skin care products and their marketing terms provides crucial information to help parents make informed purchasing decisions.
Materials And Methods
On June 27 and 28, 2022, we searched for products on Amazon using the terms "moisturizer," "baby," and "children," which were separated by one-byte spaces. After recording all products displayed, we clicked "Next" and checked all products on all pages. This study was exempted from the need for ethical review because it is not a survey of human subjects.
The web browser used for the search was Google Chrome (version: 102.0.5005.115, 64-bit), and incognito browsing was used to avoid the effects of personalized data retrieval. Product label and website marketing terms, price (per gram or milliliter), and the number of reviews and ratings were recorded.
In Japan, the "Code of Fair Competition for the Labeling of Cosmetics" sets forth guidelines for general consumers to make informed selections of products [10]. However, there are no strict regulations for "names by type," which are descriptions of names that describe uses and names that describe dosage forms.
We defined 10 product categories as personal-care skin-moisturizing products for children: "cream," "ointment," "body butter," "body balm," "oil," "lotion," "gel," "powder," "spray," and "foam." Additionally, we categorized and summarized marketing terms believed to have the same meaning. The marketing terms we searched for and their possible suggested meanings are listed in Table 1.
Marketing terms | Theme the marketing terms may emphasize
"Contains no additives"; "Contains no fragrance"; "Contains no coloring matter" | The product has few additives
"Minimal skin irritation"; "Low allergenicity"; "Suitable for sensitive skin" | The product is less irritating to the skin
"Organic"; "Natural"; "Nature" | The product is made with natural and organic ingredients
"Allergy testing is in place"; "Skin irritation tests are performed" | The product is less allergenic

TABLE 1: Classification of marketing terms based on product packaging and product web pages
We excluded products that appear in Amazon's search results but are no longer in production and were inaccessible at the time of the search, products not labeled for children, and products intended to treat specific conditions (e.g., sweat rash). The manufacturer's website and product photos were used to obtain information on raw materials. However, in cases in which the raw materials in a product could not be determined after searching for sufficient information, the corresponding product was purchased and the ingredients were confirmed.
The number of ratings and rating scores were recorded based on Amazon reviews. Products with zero ratings were excluded from the statistical evaluation regarding Amazon reviews. Additionally, we examined whether the number of Amazon reviews was associated with food allergen content and marketing terms. We categorized the products into groups based on the number of reviews and compared the quartile of products with the highest Amazon review counts to the quartile with the lowest review counts, in order to examine the relationship between "organic" labeling and the number of reviews.
In the selected personal-care skin-moisturizing products, we identified the eight food allergens regulated by Japanese food allergen-labeling requirements. Representative grain, fruit, and essential oil allergens were selected, and this information was entered into a Microsoft Excel for Microsoft 365 (64-bit) MSO Version 2207 (16.0.15427.20182) (Microsoft Corp., Redmond, WA, USA) spreadsheet (Table 2). Fisher's exact test was used to determine differences in continuous variables, and values are presented as means and SDs. Differences in categorical variables were evaluated with the chi-square test, and P-values lower than 0.05 were considered significant. Analyses were performed using IBM SPSS Statistics for Windows, Version 17 (Released 2008; IBM Corp., Armonk, New York, United States).
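For illustration only (the counts below are hypothetical placeholders, not study data), the kind of contingency-table comparison described above can be reproduced in Python with scipy:

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = "natural/organic" marketing term (yes/no),
# columns = contains a grain allergen (yes/no). All counts are placeholders.
table = np.array([[20, 53],   # organic-labeled: with / without grain allergen
                  [ 4, 87]])  # not organic-labeled

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

print(f"chi-square p = {p_chi2:.4f} (dof = {dof})")
print(f"Fisher exact p = {p_fisher:.4f}, odds ratio = {odds_ratio:.2f}")
```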
Results
During the initial search, 164 pediatric personal-care skin products met the criteria, which were defined to include leave-on skincare products. Products manufactured in Japan accounted for the largest share (n=144 products, 87.8%), whereas 20 products were manufactured outside Japan: five products (3.0%) in Malaysia, three (1.8%) in the People's Republic of China, two (1.2%) in Thailand, two (1.2%) in Germany, one (0.6%) in New Zealand, one (0.6%) in Italy, one (0.6%) in Switzerland, one (0.6%) in France, and four (2.4%) with unknown country of origin.
Marketing theme | Number of products (%)
Emphasizes that the product is low in additives | 141 (86.0)
Emphasizes that the product is less irritating to the skin | 114 (69.5)
Emphasizes that the product is made with natural and organic ingredients | 73 (44.5)
Emphasizes that the product is less allergenic | 76 (46.3)

TABLE 6: Relevance of allergen content in skin care products and marketing terms
Products with marketing terms emphasizing "natural and organic" were more likely to contain grain allergens and essential oils and had a higher cost per milliliter (gram): 26.8 ± 45.4 vs. 13.1 ± 14.0 yen for products with and without the term "organic" in the labeling, respectively. In contrast, products with marketing terms emphasizing "hypoallergenic" were less likely to contain fruit allergens or essential oils.
Discussion
The risk of percutaneous sensitization from personal-care products became widely recognized after Lack et al. reported, in 2003, that children who used peanut-containing personal care products during infancy developed a high rate of peanut allergy [5]. This generated considerable attention to food allergens in over-the-counter skin care products meant to be left on the skin. However, few reports have examined the extent to which food allergens and essential oils are present in moisturizing personal-care products to be left on children's skin. To our knowledge, only one report has examined food allergens in children's skin care products [11]. Its authors stated that the most common food allergens in personal skin care products for children are almonds, wheat, soy, oats, and sesame. However, our study found that macadamia nuts, oranges, rice, and various essential oil components were more common ingredients in skin care products for children in Japan.
In Japan, nuts and fruits are common causes of new food allergies after the age of 1 year, and the prevalence of egg, milk, nut, and wheat allergies is substantial [12]. Furthermore, the prevalence of nut allergy in Japan is increasing [2,13]. For example, owing to the increased prevalence of walnut allergy, walnuts were added to Japan's list of food allergen-labeling control foods in 2022 [13]. In Japan, these foods are not introduced early in complementary diets. Accordingly, transdermal sensitization by these products before the complementary diet is initiated would be undesirable.
Adomaite et al. showed that significantly more food allergens were present in personal care skin products when marketing terms emphasizing "natural and organic" were used; almonds and wheat were the most common allergens [11]. This may be because caregivers widely accept the impression that products labeled organic are safe skincare products for children, making such terms more likely to be chosen for marketing [14]. However, our research showed that when marketing terms emphasizing "natural and organic" were included, grains and essential oils were often included. Therefore, the types of food allergens in skin care products may vary by country or region, even if the same marketing terms are used. In contrast, products with marketing terms that emphasized "low allergenicity" had low fruit and essential oil content. Accordingly, we infer that these marketing terms were applied with consideration for fruits and essential oils, but not for nuts and grains.
The prevalence of food allergies to hens' eggs, milk, wheat, and nuts is high in Japan [12]. However, in this study, only small amounts of these food allergens were present in children's skin cosmetics. In a well-known case in Japan, patients developed wheat allergy owing to percutaneous sensitization from wheat-containing soap [15]. The majority of products in this study were manufactured in Japan; therefore, manufacturers may have avoided including well-known food allergens in their skin care products. However, the content of nuts, grains, fruits, and essential oils is not minimal.
Many consumers refer to product prices and Amazon reviews when selecting products, and these factors were examined in this study. Products labeled with the marketing term "natural and organic" were more expensive and more likely to contain grain-based allergens and essential oils. Moreover, products with more Amazon reviews had less "organic" labeling and less grain allergen content. Thus, products with fewer reviews may have structured their marketing strategy by emphasizing the "natural and organic" label, which was associated with a higher food allergen content. Therefore, caution regarding marketing terms is warranted.
Conversely, topically applied food ingredients and essential oils may confer skin-beneficial effects. Some ingredients, such as oats, calendula, and aloe vera, are effective in treating diaper dermatitis and other skin problems of infancy [16,17]. However, oats cause sensitization [6], and calendula cross-reacts in individuals with sensitization to Asteraceae pollen [18]. The increasing rate of pollen sensitization in Japanese children necessitates caution concerning these ingredients [19].
A limitation of this study is its cross-sectional design, which precludes the determination of a causal relationship between personal skin care products that can act as food allergen sensitizers and food allergies. Another limitation is the reliance on raw material and product labeling verification based on information obtained from websites, which precludes definitive conclusions. However, previous studies have shown that these food allergens and essential oils can cause sensitization; therefore, it is important to determine the extent to which food allergens and essential oils are present in children's skin care products. Furthermore, it is possible that not all food allergens were identified from the product labels. However, in this study, the researcher visually verified all labels; thus, it is unlikely that any allergens were overlooked. A final limitation is that the only e-commerce site searched was Amazon. However, Amazon is one of the largest e-commerce sites in Japan, and we believe that it captures most products sold in Japan and the trends in personal skin care products.
Conclusions
In conclusion, food allergens and essential oil ingredients in personal skin care products for children sold in Japan were investigated: more than 10% of products contained nuts, grains, or fruits, whereas more than 30% contained essential oils. Marketing terms that emphasized "natural and organic" were associated with products with a higher percentage of grain and essential oil content, whereas terms emphasizing "hypoallergenic" were associated with lower fruit and essential oil content. Furthermore, the risk of products containing food allergens might be lower for products with more Amazon reviews and lower prices. The results of this study may help caregivers select personal skin care products for their children. Parents should be vigilant when purchasing skin care products for their children, as some of these products may elicit sensitization, and they should ask manufacturers to update product labeling so that informed choices can be made in future purchases.
Additional Information Disclosures
Human subjects: All authors have confirmed that this study did not involve human participants or tissue. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
LogStamp: Automatic Online Log Parsing Based on Sequence Labelling
Logs are among the most critical data for service management. They contain rich runtime information about both services and users. Since logs are often enormous in size and have free, handwritten constructions, a typical log-based analysis needs to parse logs into a structured format first. However, we observe that most existing log parsing methods cannot parse logs online, which is essential for online services. In this paper, we present an automatic online log parsing method named LogStamp. We extensively evaluate LogStamp on five public datasets to demonstrate the effectiveness of our proposed method. The experiments show that our proposed method can achieve high accuracy with only a small portion of the training set. For example, it can achieve an average accuracy of 0.956 when using only 10% of the data for training.
I. INTRODUCTION
Logs are one of the most valuable data sources for large-scale service maintenance [1,2]: they report service runtime status and help operators trace workflows. Logs have been widely applied to a variety of service management and diagnostic tasks. Prior research has proposed automated approaches to analyze logs, such as status monitoring [3], anomaly detection [4,5], failure prediction [6] and root cause analysis [7]. The fast-emerging AIOps (Artificial Intelligence for IT Operations) solutions also utilize operation logs as their input data [8].
Logs are designed by developers and generated by logging statements (e.g., printf(), log.info()) in the source code [9]. As shown in Fig. 1, a logging statement is composed of a log level (i.e., info), constant parts (i.e., "Interface" and "change state to down"), and variables (i.e., "InterfaceID"). Services and systems generate raw logs by printing unstructured text that contains constant text and specified variables (e.g., "te-1/1/50"). Usually, the constant parts sketch out the event and summarize it, while variables vary from one log to another of the same template.
Since logs are often extensive in size (e.g., Google and Facebook respectively generate 100 petabytes and 10 petabytes of log data per month [9]) and have free handwritten constructions [1], log analysis remains a significant challenge. To address the challenge of the large size of logs, researchers have proposed approaches for log compression [10]. However, most log compression approaches only aim to save storage space and do not assist when analyzing logs in practice. To address the challenge of log analysis, using rules (e.g., source code [11] and regular expressions [12]) is a simple yet effective approach. However, the source code is not always available, and designing regular expressions relies on domain knowledge, which is impractical. Therefore, automatic log parsing methods are getting attention. Researchers have proposed many approaches for automatically parsing raw logs into structured forms [13]. The main aim of log parsing is to find templates (constant parts) in logs and replace variables with variable placeholders. Recently, many data-driven log parsing approaches have been proposed, based on techniques such as clustering [14], longest common subsequence [15], frequent pattern mining [15,16], heuristic parsing [17] and others [9]. However, log parsing still faces two challenges. First, operators continuously conduct software/firmware upgrades on services/systems to introduce new features, fix bugs, or improve performance [9], which can generate new types of logs. Most of the existing approaches do not support online analysis. The small number of approaches (e.g., FT-tree [16], LogParse [9]) that support online parsing also have shortcomings: either they cannot handle new types of words, or they need to be combined with other log parsing algorithms. Therefore, newly generated logs are difficult to process online.
Besides, most existing log parsing approaches group similar logs and extract a template for each group by keeping the parts shared by the logs and replacing the differing parts with placeholders. By default, log parsing is an unsupervised process: parsers extract templates based on the provided data instead of domain knowledge. Therefore, they only produce accurate results with sufficient historical log data, and, technically, the more data provided, the more accurate the result. However, when a brand new service goes online, there are usually not enough historical logs to generate accurate templates. Therefore, it is challenging to train a parsing model with small amounts of log data.
To address the above challenges, we propose LogStamp. The key intuition is based on the following observation: when operators read a log, they mentally mark the words in the log to identify the template. In LogStamp, we turn the log parsing problem into a sequence labelling problem and find templates from logs online. LogStamp's contributions can be summarized as follows:
• LogStamp is an accurate online log parsing method. It can parse logs one by one with extremely high accuracy.
• LogStamp can train an accurate log parsing model based on a small amount of log data, which ensures that it can analyze online logs. Experiments show that it can achieve an average accuracy of 0.956 when using only 10% of the data for training.
The rest of the paper is organized as follows: we discuss related work in Section II and propose our approach in Section III. The evaluation is shown in Section IV. In Section V, we discuss LogStamp's limitations and future work. Finally, we conclude our work in Section VI.
II. RELATED WORK
Logs play an important role in service management. Log parsing usually serves as the first step towards automated log analysis [1].
The most straightforward approach is to use rules to parse logs, such as regular expressions. Rule-based log parsing methods rely on handcrafted rules provided by domain knowledge. Though straightforward, this kind of method requires a deep understanding of the logs, and considerable manual effort is needed to write different rules for different kinds of logs, which is not general. Commercial log analytic platforms (e.g., Splunk, ELK, Logentries) also allow operators to efficiently manage and analyze large-scale logs via pre-defined rules, but they are only applicable to certain types of logs and are not universal.
Utilizing source code allows parsing logs accurately. For example, [11] employs source code to extract log templates for system problem detection. However, the source code is not always available, especially for commercial services.
To achieve the goal of automated log parsing, many data-driven approaches have been proposed, falling into several categories [1,18]. The first category is cluster-based approaches, where a log template forms a natural pattern for a group of log messages; from this view, log parsing can be modeled as a clustering problem, as in LogSig [19]. Next is the longest common subsequence: for example, Spell [15] uses the longest common subsequence algorithm to parse logs in a stream. Iterative partitioning is used in IPLoM [17]. Some methods use heuristics to extract templates: as opposed to general text data, log messages have some unique characteristics, and consequently Drain [20] proposes heuristics-based log parsing. The next category is frequent item mining, which is straightforward: tokens that regularly appear together in different log entries are built into frequent itemsets, and the parser obtains templates by looking up those itemsets; log templates can thus be seen as sets of constant tokens that frequently occur in logs, as in FT-tree [16]. The final category is combined approaches. LogParse [9] combines existing unsupervised log parsing approaches and supervised machine learning approaches to generate templates for online logs. The idea of LogParse is similar to that of our paper. However, it is a pipeline workflow, which is affected by the accuracy of traditional log parsing approaches, because most log parsing approaches cannot achieve high accuracy based on a small number of logs.
III. DESIGN
In this section, we introduce the overall framework of our proposed LogStamp. An overview of the framework is shown in Fig. 2. We first present the offline part of our workflow in Section III-A, then we describe the online part in detail in Section III-B.
A. Offline workflow
Given a set of historical logs, our goal is to build a tagger that identifies whether each word in an incoming log belongs to the template or is a variable. Previous works [9] use a template extraction method to obtain the templates from the logs. Then, a word classifier (i.e., an SVM classifier) is adopted to label each word in the logs. There are two drawbacks to such methods. First, the accuracy of labeling depends on the quality of the extracted template. If the template extraction method fails to separate the templates from the raw logs, the pseudo labels assigned by the classifier would be meaningless and cause log parsing to fail. Second, prior works only utilize templates to train the word classifier. Generally, log data contains critical information at both the word level and the sentence level (i.e., sentence order). For example, in the log anomaly detection task, a common way to detect anomalous logs is to check whether the log orders are correct. If we have received a log saying "Vlan-Interfenerce ae, change state to up" and no message "Vlan-Interfenerce ae, change state to down" follows within a certain time period, we recognize such a log as anomalous. Because word-level embeddings only focus on single words, they fail to effectively parse such logs. Therefore, learning log features at the sentence level is important.
In this paper, we introduce a coarse-to-fine framework to generate accurate pseudo labels. First of all, a pretrained bidirectional transformer is adopted to extract the feature representation of log data. Because the structure of raw logs differs from natural language, we need to fine-tune BERT [21] on our data. Note that fine-tuning BERT does not need any labels. Then we use a dual-path framework to obtain both coarse-level and fine-level embeddings. At the coarse level, we expect the sentence embedding to reflect the nature of different logs. For instance, the two logs in the above example have a similar structure, and most of the words in the two logs are the same; however, their meanings are completely different. The coarse-level feature learns the inherent relations between the words and thus outputs two embeddings with low similarity. The sentence embeddings can then be grouped into a number of clusters.
In general, one can exploit any clustering algorithm that splits the sentences into clusters according to their embedding features. Our approach is to use DBSCAN [22]. After we obtain the clusters, we count the frequency of each word's appearance in each cluster. We mark a word as template if its number of appearances is larger than a threshold, and as variable otherwise. As such, we obtain labels for each word. For the fine-grained level, we use the fine-tuned BERT to output word embeddings. For each word embedding, we have its corresponding label from the step above. Given a set of word embeddings and word labels, we can train a classifier that serves as a tagger. As it is trained via a deep neural network, this tagger can accurately parse the logs without the interference introduced by wrong pseudo labels.
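A minimal sketch of the offline stage just described (our illustration, not the authors' released code; the BERT encoder is assumed to exist elsewhere, and `freq_ratio`, `eps` and the logistic-regression tagger are stand-ins for unspecified details):

```python
import numpy as np
from collections import Counter
from sklearn.cluster import DBSCAN
from sklearn.linear_model import LogisticRegression

# `sent_emb` (one vector per log line) and `word_embs`/`word_tags` (one
# vector and 0/1 pseudo label per word) are assumed to come from a
# fine-tuned BERT encoder, which is not shown here.

def pseudo_label(logs, sent_emb, freq_ratio=0.5, eps=0.5):
    """Cluster sentence embeddings, then mark frequent words as template."""
    labels = DBSCAN(eps=eps, min_samples=2).fit_predict(sent_emb)
    word_label = {}  # (cluster, word) -> 1 (template) or 0 (variable)
    for c in set(labels):
        lines = [logs[i].split() for i in range(len(logs)) if labels[i] == c]
        counts = Counter(w for line in lines for w in set(line))
        thresh = freq_ratio * len(lines)  # word must appear in enough lines
        for w in counts:
            word_label[(c, w)] = int(counts[w] >= thresh)
    return labels, word_label

def train_tagger(word_embs, word_tags):
    # word_embs: (n_words, dim) array; word_tags: 0/1 pseudo labels per word
    return LogisticRegression(max_iter=1000).fit(word_embs, word_tags)
```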
B. Online workflow
In real-time systems, new log templates may be generated online; building a robust online workflow is therefore critical for real-world deployment. Our online workflow is simple. Given real-time logs, which can be either a single piece of logging information or a set of new logs, we reuse the BERT model to extract word embeddings from the new logs. The tagger trained in the offline stage then predicts a label for each word, so we immediately know whether specific words are templates or variables. We will show that our online framework is simple yet surprisingly effective under most circumstances. A sketch of this stage follows.
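A minimal sketch of the online stage, assuming `tagger` is the word classifier trained offline and `embed_words` produces the BERT word embeddings; masking variables with "<*>" follows the usual log-parsing convention rather than anything specified in the paper, and the example log line is hypothetical.

```python
def parse_online(log_line, embed_words, tagger, mask="<*>"):
    """Tag each word as template ('T') or variable ('V') and rebuild the template."""
    words = log_line.split()
    labels = tagger.predict(embed_words(words))  # one 'T'/'V' label per word
    return " ".join(w if lab == "T" else mask for w, lab in zip(words, labels))

# e.g. "Received block blk_3587 of size 67108864"
#   -> "Received block <*> of size <*>"
```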
IV. EVALUATION
In this section, we evaluate our approach using public log datasets and aim to answer the following research questions:
• RQ1: How effective is LogStamp in log parsing?
• RQ2: Can LogStamp achieve accurate results based on a small amount of log data?
• RQ3: How much can the BERT and tagger contribute to the overall performance?
A. Experiment Setting
In this section, we evaluate the performance of LogStamp. The datasets, baselines, evaluation metrics and experimental setup of the experiments are as follows.
1) Datasets: We conduct experiments on five public log datasets from distributed systems: BGL [1], HDFS [23], ZooKeeper [24], Proxifier [1] and Hadoop [14]. The detailed information of these datasets is listed in Table I. For each dataset, the authors of [1] sampled logs and manually labelled each log's template, which serves as the ground truth for our evaluation.
2) Baselines: To demonstrate the performance of LogStamp, we have implemented seven template extraction methods: FT-tree [16], Drain [20], Spell [15], LogSig [19], LogParse [9], MoLFI [25] and IPLoM [17]. The parameters of these methods are all tuned for best accuracy.
3) Evaluation Metrics: We apply RandIndex [26] to quantitatively evaluate the accuracy of template extraction. RandIndex is a popular method for evaluating the similarity between two data clusterings or multi-class classifications. Moreover, RandIndex has been applied to evaluate existing template extraction methods in the literature, such as in [9].
For each template extraction method, we evaluate its accuracy by calculating the RandIndex between the manual classification results and the templates it learns. Specifically, among the template learning results of a given method, we randomly select two logs, x and y, and define TP, TN, FP, FN as follows. TP: x and y are manually classified into the same cluster and have the same template; TN: x and y are manually classified into different clusters and have different templates; FP: x and y are manually classified into different clusters but have the same template; FN: x and y are manually classified into the same cluster but have different templates. RandIndex is then calculated as

RandIndex = (TP + TN) / (TP + TN + FP + FN).
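This pairwise definition translates directly into code; the following sketch enumerates all log pairs and compares ground-truth cluster membership with predicted template membership.

```python
from itertools import combinations

def rand_index(true_clusters, pred_templates):
    """RandIndex = (TP + TN) / (TP + TN + FP + FN) over all log pairs."""
    tp = tn = fp = fn = 0
    for i, j in combinations(range(len(true_clusters)), 2):
        same_true = true_clusters[i] == true_clusters[j]
        same_pred = pred_templates[i] == pred_templates[j]
        if same_true and same_pred:
            tp += 1
        elif not same_true and not same_pred:
            tn += 1
        elif not same_true and same_pred:
            fp += 1
        else:
            fn += 1
    return (tp + tn) / (tp + tn + fp + fn)
```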
4) Experimental Setup: We conduct experiments on a Linux server with an Intel Xeon 2.40 GHz CPU and 64 GB of memory.
1) RQ1: How effective is LogStamp in log parsing?
We first compare the accuracies of existing log parsing methods and LogStamp when extracting templates from historical logs. The comparison results are shown in Fig. 3. We find that most log parsing methods are highly accurate at extracting templates from historical logs. However, the accuracy of existing parsers is not always consistent; in other words, the selection of log data impacts parsing accuracy [1]. A parser may achieve a good result of up to 90% accuracy on one dataset and an unacceptably bad outcome down to 50% on another (e.g., Proxifier). Meanwhile, LogStamp achieves high accuracy (average accuracy above 0.999) across all datasets. Therefore, we can directly use the label results to train a tagger.
To demonstrate the performance of LogStamp in supporting online parsing and to simulate the launch of new services, for each dataset we apply each log parsing method to extract templates from 10% of its logs. Fig. 4 shows the comparative results. LogStamp achieves the best performance; specifically, its average accuracy across the datasets is 0.956.
2) RQ2: Can LogStamp achieve accurate results based on a small amount of log data?
As shown in [1], the accuracy of existing parsers is not always consistent, across both datasets and the percentage of training data. To demonstrate how robust LogStamp is to the scale of training data, Fig. 5 shows the log parsing accuracy of LogStamp on the five datasets as the percentage of training data increases from 10% to 90%. The results show that LogStamp is stable across different scales of training logs and can achieve high log parsing accuracy when trained on a small amount of training data.
3) RQ3: How much can the BERT and tagger contribute to the overall performance?
LogStamp incorporates two modules: BERT and the tagger. In this RQ, we evaluate the effectiveness of different versions of each module. First, we compare LogStamp with BERT-base, BERT-small and BERT-tiny. Table II and Table III show the performance of LogStamp in the offline and online stages, respectively. We find that the three versions of BERT achieve similar performance, which means LogStamp does not need extra effort spent tuning the choice of BERT. Then, in Table IV, we compare LogStamp with different taggers, i.e., GCN, RNN, LSTM and CNN. We find that LSTM achieves the best performance on all datasets, because LSTM is well suited to natural language processing and sequence labelling is a natural language processing problem.
V. DISCUSSION AND FUTURE WORK
LogStamp relies on BERT's powerful ability to capture both sentence embeddings of log sentences for clustering and word embeddings for distinguishing between templates and variables in logs. However, during experiments we observed that the syntactic structure and semantic information contained in log sentences often vary considerably from the sentences used to train BERT. One deduction is that if the BERT model were fine-tuned on log datasets with masked language modeling, it might understand log sentences better and thus achieve higher accuracy in offline and online log parsing. Yet the experimental results do not confirm this deduction: after fine-tuning the BERT model with the log sentences of each system for 1-3 epochs, the online clustering Rand index is not steadily improved.
We will continue to study how to apply better pre-trained language models to log template extraction in our future work. More abundant logs will be used to fine-tune BERT or to train a BERT from scratch, instead of directly loading the weights of a model pre-trained on dissimilar vocabularies, e.g., from Wikipedia or books. Besides, as log sentences usually have a more unified structure, we will also attempt to design a more concise model structure based on BERT to achieve higher efficiency in online log parsing and handle higher concurrency.
VI. CONCLUSION
In this paper, we propose LogStamp, an online log parsing approach. Different from prior log parsing approaches, LogStamp takes semantics into consideration and turns log parsing into a sequence labelling problem. LogStamp supports training a model on a small number of historical logs. Experimental results on public log datasets validate the accuracy and stability of LogStamp.
Climate change mitigation in cities: a systematic scoping of case studies
A growing number of researchers and stakeholders have started to address climate change from the bottom up: by devising scientific models, climate plans, low-carbon strategies and development policies with climate co-benefits. Little is known about the comparative characteristics of these interventions, including their relative efficacy, potentials and emissions reductions. A more systematic understanding is required to delineate the urban mitigation space and inform decision-making. Here, we utilize bibliometric methods and machine learning to meta-analyze 5635 urban case studies of climate change mitigation. We identify 867 studies that explicitly consider technological or policy instruments, and categorize these studies according to policy type, sector, abatement potential, and socio-technological composition to obtain a first heuristic of their pattern. Overall, we find 41 different urban solutions with average GHG abatement potentials ranging from 5.2% to 105%, most of them clustering in the building and transport sectors. More than three-fourths of the solutions are on the demand side. Less than 10% of all studies were ex-post policy evaluations. Our results demonstrate that technology-oriented interventions in the urban waste, transport and energy sectors have the highest marginal abatement potential, while system-wide interventions, e.g. urban-form-related measures, have lower marginal abatement potential but wider scope. We also demonstrate that integrating measures across urban sectors realizes synergies in GHG emission reductions. Our results reveal rich evidence of techno-policy choices that together enlarge the urban solution space and augment actions currently considered in global assessments of climate mitigation.
Introduction: summary of evidence gap and research question
The role of urban areas in contributing to climate mitigation and adaptation, the global Sustainable Development Goals (SDG) and the New Urban Agenda (NUA) is undisputed (UN Habitat 2011, IPCC 2014, UN-United Nations 2015). In the last few decades, a growing number of cities and local governments have committed to climate action. Case-study evidence is used primarily in an anecdotal fashion, leaving a large, untapped potential for systematic learning on urban climate solutions. There are at least two paths to upscale and systematize the study of urban-scale climate solutions (Creutzig et al 2019).
One is data driven, starting with city-scale datasets combined with harmonized remote sensing or other land-use information to develop data-based typologies of cities and climate change (e.g. Creutzig et al 2015, Baiocchi et al 2015, Ahmed et al 2019, Nangini et al 2019). The other is evidence-driven synthesis, starting with case studies to systematically compare and aggregate policy insights (Broto and Bulkeley 2013, Kivimaa et al 2015, Reckien et al 2018). Both can eventually be combined to match experience from case studies to urban drivers of energy use and climate change (Lamb et al 2019, Creutzig et al 2019).
What useful information can one derive from urban case studies? A systematic scoping of these studies can reveal a spectrum of urban solutions available to policy makers: instruments, targeted sectors, expected (or documented) mitigation potentials, and social outcomes. This information could support fast learning among peer cities, particularly those responsible for large segments of global greenhouse gas (GHG) emissions (Creutzig et al 2015, Baiocchi et al 2015, Lamb et al 2018). In particular, there is a pressing need to identify solutions for smaller and medium-sized cities, especially in developing countries. These cities will host the majority of future population growth, energy consumption and GHG emissions, yet are most underequipped in financial and human resources to study and implement local climate action (GEA 2012, Seto et al 2014, Sethi and Puppim de Oliveira 2015). A growing number of studies model climate mitigation potential in cities. Emission inventory exercises identify key priority areas for urban mitigation across multiple sectors, particularly when carried out in a comparative context (e.g. ICLEI 2009, Kennedy et al 2009, Chavez and Ramaswami 2013). For mid- and large-n samples of city inventories, parametric and non-parametric statistical approaches explain variations in urban CO2/GHG emissions due to socio-demographics, industrial structure, urban form, local geography and climatic conditions (Brown et al 2008, Glaeser and Kahn 2010, Minx et al 2013, Baiocchi et al 2015). Further refining such analysis, Creutzig et al (2015) use hierarchical regression trees to endogenously cluster cities according to their GHG emission drivers and to estimate a global urban mitigation wedge. These studies are assimilative explorations into key drivers, and thus into potential areas on which to focus mitigation initiatives, but they do not identify city-specific policy options that are directly available to urban policymakers.
Other studies have examined the ambition, focus, and regional distribution of urban climate actions (Broto and Bulkeley 2013, Reckien et al 2014). Yet, these studies abstract away from specific options and fall short of evaluating actual policy performance. As such, policy learning remains limited and insights are not actionable. This contrasts with a wealth of urban mitigation case studies available in the scientific literature (Lamb et al 2019) that offers the opportunity to systematically review this more granular evidence base and learn from experiences in pursuing technological solutions and urban policy instruments. We acknowledge the difficulties of such an undertaking, with inherent inconsistencies in methods, system boundaries, available data and desired outputs (Seto et al 2014, Sethi 2017). Yet, in the absence of comprehensive and consistent evidence, working towards an initial heuristic for the urban climate solution space is a justifiable goal.
In this research, we apply a systematic scoping review methodology. A scoping review is guided by principles of transparency and reproducibility that follow a clear methodological protocol to analyse quantitative, qualitative or mixed evidence found in the scientific literature (Arksey and O'Malley 2005). As in other systematic evidence-synthesis approaches, it involves the following steps: (a) clearly defining the research question; (b) systematically searching defined literature databases over a defined time period; (c) justifying and making a transparent selection of the literature; (d) assessing the quality of the selected evidence; and (e) synthesizing the evidence based on a clear and transparent method (Berrang-Ford et al 2015, Minx et al 2017). In this scoping review, we assess the urban case study literature pursuing four distinct but inter-related research objectives: (1) to map global urban interventions, capturing the contributions across different mitigation sub-sectors; (2) to survey key urban mitigation solutions being practiced, along with their GHG abatement potential; (3) to examine ex-post policy studies for the specificity of opted policies and their governing mode; and (4) to capture trends and the focus of the latest research and innovations in urban climate mitigation. In section 2, we outline our review methodology, and in section 3 we describe analytical findings. In section 4, we conclude with recommendations for future research.
Methodology
Climate change assessments, such as those by the Intergovernmental Panel on Climate Change (IPCC), have gained status for evidence-based scientific policy advice. The progress in international climate governance would not have been possible without systematic learning in the scientific community. However, there has been little systematic learning on climate solutions from ex-post evidence. Systematic review methods as developed in the health and educational sciences provide an adequate methodological toolkit for such learning, but have generally been neglected in climate and energy research. Only recently has a growing number of researchers started applying systematic review methods in climate studies more widely (Berrang-Ford et al 2015, Fuss et al 2018, Nemet et al 2018). Such systematic reviews are challenging in that they deal with a vast and fast-growing evidence base. We call this new phenomenon 'big literature': resource-intensive systematic review methods are pushed to the brink of feasibility (Minx et al 2017). Employing data science methods to assist the systematic review process, by lifting the burden of some of the most repetitive and resource-intensive tasks from human reviewers, is a promising and crucial development in the field of evidence synthesis (Minx et al 2017, Westgate et al 2018, Nakagawa et al 2019). In a recent experiment, Lamb et al (2019) apply data science and unsupervised machine learning (ML) methods to automatically map out the case study landscape on urban climate change mitigation. Rather than the dozens or hundreds of case studies analysed by urban climate change assessments (Seto et al 2014), it identifies more than 4000 cases, covering a broad range of topics from emission accounting to technology studies to scenario analysis to policy impact evaluations. We update this with an expanded database of 20 166 studies for our systematic review of technology and policy options in urban climate change mitigation.

The detailed methodology for the review process is explained in annex 1 (available online at stacks.iop.org/ERL/15/094067/mmedia) and summarized as a flowchart in figure 1. As a first step, we search the Web of Science and Scopus with a broad query comprising synonyms for climate mitigation and urban policies (annex 1.1, table A1). We filter the resulting documents using a data bank of worldwide city names, resulting in 5635 case studies that mention cities in their titles and abstracts. Next, we read a random sample of 250 papers to develop inclusion and exclusion criteria for our scoping review (annex 1.2, table A2), and then tested inclusion/exclusion on a further set of 200 papers (annex 1.2, table A3). We then used the coded papers as input for a supervised machine learning algorithm that calculates relevance rates for the remaining 5635 case studies; a minimal sketch of this screening step is given below. For the final review, we include all studies with a relevance rate of 0.6 or higher, resulting in 867 papers (annex 1.3). In the final stage, post-ML analysis and synthesis involves systematic coding and tagging of the content (annex 1.4), finding that 644 out of the 867 studies matched our inclusion criteria, followed by an array of results. Section 3 reports our analytical findings sequentially for each research objective.
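For illustration, a minimal sketch of such a supervised screening step follows, using TF-IDF features and logistic regression purely as placeholders for whatever model the APSIS platform actually employs; only the 0.6 relevance threshold is taken from the text.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def screen(coded_texts, coded_labels, remaining_texts, threshold=0.6):
    """Train on hand-coded abstracts; keep documents with relevance >= threshold."""
    vec = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
    X = vec.fit_transform(coded_texts)
    clf = LogisticRegression(max_iter=1000).fit(X, coded_labels)
    probs = clf.predict_proba(vec.transform(remaining_texts))[:, 1]
    return [t for t, p in zip(remaining_texts, probs) if p >= threshold]
```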
Results and discussion
The systematic review of case study literature leads to the following major outcomes: (3.1) Mapping of urban interventions globally, capturing the contribution of different mitigation sectors; (3.2) Exploring key urban mitigation solutions being practiced along with their GHG abatement potential; (3.3) Examining ex-post policy studies for specificity of policy mode opted in different urban mitigation solutions; (3.4) Identifying the focus of recent trends and innovations in urban climate mitigation.
Mapping of literature for GHG mitigation sectors
We map case study articles to the following sectors: buildings, energy, transport, waste, agriculture, forestry and other land uses (AFOLU), and industry. In the paper set, we find 548 studies focusing on a single sector, with buildings (249) and transport (148) most frequently investigated. There are 77 studies scrutinizing two sectors simultaneously, most often building and energy (21). A total of 19 studies cover 3 or more sectors (figure 1, details in annex 2). Visualizing the intensity pattern of sectoral interactions using a chord diagram (figure 2-left) reveals a notable paucity of evidence between the buildings-waste, industry-AFOLU/land, industry-waste and transport-industry sectors. Systematic reviews can be prone to inaccuracy in reporting if left unchecked for consistency of results; we hence hand-checked the relevance of post-ML results and validated these against the tested precision level (annex 3).
Only 88 case studies across 46 solution typologies provide quantitative data to estimate GHG abatement potential. Some cities, like Vancouver, New York, Toronto, San Francisco, London, Barcelona, Turin, Beijing and Tokyo, provide multiple studies in urban mitigation. These report results either as GHG mitigation, energy reduction or cost savings (essentially GHG abatement potential), all in percentage points and enumerated in annex 4. This enables us to evaluate the relative opportunities offered by these solutions. Out of the 46 urban mitigation solutions, quantifiable data was available for only 41. We rank demand-side potential for climate change mitigation (figure 3), benchmarked against the business-as-usual (BAU) scenario as defined in each individual study (see the sketch after the sector breakdown below). Different studies can have a range of baselines and end-points for reporting percentage GHG reductions (Erickson and Broekhoff 2017). In this analysis, we consider studies with baselines ranging from 1979 to 2019 and report results on the basis of project vs non-project percentage variation, drawing intersectoral comparisons as has been the method in the recent AR5 (IPCC 2014, p 92). Several variables contextualize the results. (a) Geographical origin: many studies originated in Europe and China (annex 5). The most notable heterogeneity is evident in cool-roof performance because of location: cool roofs perform less well in higher latitudes (13%) and better in lower latitudes (28%). These results suggest that cities can prioritize solutions with high marginal impact while simultaneously pursuing more system-wide approaches. A further disaggregation of sectors for quantifiable solutions, in decreasing order of their average mitigation potential, indicates the following. Waste (50%): the GHG abatement potential of 5 climate solutions in the waste sector ranges from biomass and biomass gasification (21%) to waste-to-energy (87%) in project versus non-project scenarios. The surge in climate mitigation potential as one moves up the technology ladder is evident. On average, the waste sector offers the maximum demand-side GHG mitigation potential in cities with the most concentrated yet smallest number of measures, thus offering a low-hanging fruit to urban local bodies.
Transport (43%): the mitigation potential of 8 climate solutions in the transport sector ranges from intelligent transportation systems (ITS) (20%) to EVs and hybrid EVs (HEVs) in public and private vehicles (94%), with the average being 43%. The results indicate that GHG savings from travel demand management, fuel shift and ITS plateau at 28%, beyond which deep mitigation can be attained only through pan-city expansion of the public transportation system, particularly by introducing EVs/HEVs. Most of these interventions are supply-driven and controlled by urban transport authorities and local governments.
Energy (38%): the mitigation potential of 14 climate solutions in the energy sector ranges from expanding district heating/cooling (12%) to PV-thermal and solar tri-generation (CPVT) solutions (73%), with the average being 38%. The energy sector demonstrates a range of solutions, more of which are associated with the supply side than in any other sector: district heating/cooling, PV-thermal, solar tri-generation (CPVT), etc. A few demand-side energy measures to reduce GHGs are also observed, such as energy-efficiency and conservation measures, consumer demand response models, optimization models modulating energy consumption at the local (community) level and in water systems, heat pumps, street lighting optimizing energy demand with solar substitution and/or energy storage, and demand adjustment for district heating.
Buildings (35%): the relative mitigation potential of 13 climate solutions in the building sector ranges from cool roofs/facades and roof gardens in higher latitudes (13%) to NZEB and carbon-neutral buildings (105%), averaging 35% with all other variables being the same. Excluding NZEB, the mitigation potential in this subsector tops out at 50% with building retrofit. The doubled savings of NZEB over retrofit signify substantial untapped mitigation potential in creating new infrastructure or redeveloping old precincts to NZEB districts rather than pursuing incremental retrofits.
Conventional insulation and thermal comfort solutions incorporated into the building during construction are twice as effective in reducing energy demand as operational/performance measures like automated building information systems (BIS), intelligent controls, smart meters, or user-driven energy-efficiency measures. At the same time, urban bodies need to use these results with prudence, keeping in view that the gross mitigation potential of NZEB versus retrofits depends on multiple local factors, for instance: (a) the relative prevalence of new buildings versus old building stock; (b) how 'retrofitting' is locally defined or interpreted; and (c) the relative cost-effectiveness of each solution, among others.
AFOLU (5.2%): there is only one urban solution, afforestation/greening, with a mitigation potential of around 5%.
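As referenced above, the normalization underlying these percentages can be sketched as a generic project-vs-BAU calculation; this is an illustration of the reporting convention, not code from the study.

```python
def abatement_pct(bau_emissions: float, project_emissions: float) -> float:
    """Percentage GHG reduction of an intervention relative to its own
    business-as-usual (BAU) baseline. Values above 100% can arise for
    net-negative outcomes, e.g. carbon-neutral buildings with on-site generation."""
    return 100.0 * (bau_emissions - project_emissions) / bau_emissions

# e.g. abatement_pct(100.0, 57.0) -> 43.0  (cf. the transport-sector average)
```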
Review of ex-post policy studies
One of the key aims of this research is to examine ex-post policy studies for the specificity of the policy-governance instruments adopted in different urban mitigation solutions. Firstly, only 73 (8.5%) out of 867 cases are ex-post policy studies, most abundant in the buildings sector (26), followed by transport (16), energy (7), waste (5) and AFOLU/land (2). As the chord diagram of this evidence shows (figure 2-right), there are few cases observed in the nexus of buildings-energy (5), buildings and AFOLU/land (4), and transport and AFOLU/land (2), while only seven urban solutions span multiple sectors. The results suggest that the industry and waste sectors are the most isolated and need integration with the rest of urban functions, through innovations and policy convergence, to accrue greater GHG mitigation and climate co-benefits. Upon tagging these cases according to four normative policy-governance modes, including overlapping ones (annex 6), we find that most of the urban solutions conform to enabling measures (46), regulatory instruments (45), voluntary, behavioural, awareness and education measures (37), followed by market/economic interventions (35). The following key observations emerge: (a) there is a preponderance of regulatory instruments that rely on legislation, standards/codes, certifications, etc across almost all GHG mitigation sectors, most frequently observed in the buildings and transport sectors; (b) enabling and voluntary measures are not observed at all in the waste sector, substantiating its isolation in urban GHG mitigation; (c) there are only two cases where all policy instruments are simultaneously employed in urban climate solutions. The case of Toronto highlights a mixed-methods approach combining infrastructure provision, public acceptance, industry participation, regulating gasoline prices, tax incentives, and subsidies for expanding EVs (Ing 2011). Also, local authorities like Leicester demonstrate that different stakeholders can use a multiple-benefits approach, with energy savings, job creation and community engagement, to proactively meet national carbon reduction targets (Lemon et al 2015).
Surprisingly, initiatives such as car-free cities, Fridays for Future, odd-even car days, and congestion charges that capture active public interest, participation and media attention are missing in the peer-reviewed scientific literature that we sampled. That contrasts with the high potential of transport-related lifestyle solutions to reduce individual carbon footprints (Ivanova et al 2020). A key reason might be that many urban-scale transport policies are primarily motivated by local concerns, such as congestion, air pollution, and quality of life, and thus may not occur in our literature database. Secondly, policies need to be evaluated in terms of their relative effectiveness: for instance, in building projects, total renovation may not be optimal in all cases, while zero-cost measures like information campaigns could produce significant performance improvements. Ex-post policy evaluation in AFOLU/land demonstrates that expanding park area was the most appropriate initiative in Bangkok when considering both its effectiveness in reducing emissions and its implementation cost (Kiewchaum et al 2017). In the waste sector, different local conditions and waste compositions were known to influence the choice of solution between landfill, incineration and composting (Assamoi and Lawryshyn 2012, Hutton et al 2013). Urban mitigation across multiple sectors is scarce and spans buildings, energy and transport. It involves technologies used for demand-side management, including natural-gas-based residential and commercial building heat pumps and chillers, cooking and water-heating appliances developed for restaurant applications, and automobiles, buses, and trucks that use natural gas instead of gasoline (Wang et al 1995). These predominantly hinge on fuel-shift-based rapid efforts or building-retrofit-related sustained efforts, yet with significant health impacts (Tuomisto et al 2015).
In addition to the above-cited evidence, there are certain plausible cross-cutting mitigation interventions, viz. (a) municipal waste-industry: demonstration of circular economy, biogas digestion, biomethanation and CO2e certificates; (b) AFOLU/land and waste: landfill site restoration to expand green cover and GHG mitigation; (c) energy-transport: instruments for bulk purchase of green energy by transport companies; (d) building-energy: power purchase agreements between renewable energy plants, regional power grids and local electricity distribution companies on one end, and townships, special zones, municipal councils and residential communities, including prosumers, on the other; and, last but not least, (e) eco-city/smart city developments: integrated planned solutions encompassing solar PV, building energy-efficiency measures, e-mobility, waste-to-energy and/or other combinations of the above interventions.
Recent trends, focus and innovations in urban mitigation
The past few years have witnessed advances in models, technologies and policies for urban climate mitigation, spanning all major GHG sectors except industry and AFOLU/land (table 1). Innovations have advanced in the building sector (real-time BIS, smart controls, roof-integrated solar technologies, efficient cooling and heating), albeit with little evidence of their policy application. There is considerable use of technology in urban energy through heat pumps, solar PV, energy storage solutions, biomass gasification and an energy-recovery demonstrator in district heating. Meanwhile, policy innovations utilize community energy plans for utility-scale wind turbines, hybrid renewables and measures to re-evaluate national and local energy-efficiency design standards. There is a fair mix of research, technological solutions and policy application evident in transportation too. Complex computational models are built into apps, storage devices, braking-energy recovery, and hybrid-fuel platforms, supported by policy studies that optimize travel demand, improve street design, and strategize integrated and low-carbon transport planning. Other than EVs/HEVs, most of these measures enable incremental changes with no significant breakthrough from the status quo. The urban waste sector is well positioned, with research on life cycle assessment (LCA) of waste materials, analysis of the optimal mix of different treatment and disposal technologies, and WTE applications. In addition, cross-sectoral interventions are experimenting with modelling and technologies in the building-energy sectors (smart cities, green districts), land-transport-related emissions, as well as mechanisms to integrate climate goals with city master plans and the setting up of demonstration projects. However, the cross-sectoral projects are few and need up-scaling to include non-contributing sectors. Also, policy innovations in individual sectors require expansion to apply models and technologies showing positive results for GHG mitigation.
Urban mitigation overwhelmingly presents demand-side solutions, yet it is still unsaturated
Out of 41 quantifiable urban solutions, 33 (80.5%) exhibit demand-side interventions. Our findings support the prevailing literature (Lamb et al 2018, Creutzig et al 2019) that topics like TDM, BEE, urban form and waste management dominate the urban climate landscape, with irrefutably measurable evidence on the available mitigation choices and their relative efficacy. Our research pinpoints technological and policy options and their GHG potential. For instance, most ex-post policy studies and current experiments are concentrated in the buildings and transport sectors, followed by energy and waste. The dearth of evidence on carbon sequestration initiatives indicates that (a) this topic is understudied in the urban literature, and/or that (b) cities assign insufficient importance to urban greening. However, most 'forward looking' studies (with futuristic scenarios) primarily deal with supply-side technologies in energy, CO2 emission accounting, transportation and air pollution (Lamb et al 2018). Advancing research should thus focus more on unexplored demand-optimizing technologies and policies in urban industries, land and other cross-sectoral activities.
Technology coupling and synergistic interventions can upscale urban mitigation
Disruptive and synergetic technologies demonstrate that, when it comes to GHG mitigation potential, the whole is greater than the sum of its parts (see figure 5).
Expanding extra-regulatory and non-governmental actions is imperative in local climate governance
Technology is necessary but not a sufficient element to deepen urban mitigation. Our scoping review is limited in certain ways. For example, it encounters a lot of heterogeneity and variability across cases. The unavailability of consistent data makes it hard to account for the costs of mitigation options; we expect costs to vary with the situation, depending on the availability and price of the concerned resources and technologies, their indirect costs, socio-economic costs, trade-offs, etc. These nuances need further exploration through a full systematic review and ought to be reasonably assessed when applying the results to develop concerted urban policies and projects in different country and local contexts. Nevertheless, the research findings bear significant implications for analysing the efficacy of diverse demand-side climate solutions, which will receive special reporting in the next IPCC report (AR6).
Acknowledgments
The views presented by the authors are independent, without any influence or conflict of interest. The first author acknowledges the Alexander von Humboldt Foundation for the research fellowship. The internal review platform, the APSIS scoping software, is provided by the Mercator Research Institute on Global Commons and Climate Change, Germany. Thanks are due to Max Callaghan for orientation to, and troubleshooting in, APSIS operations. Chord diagrams were prepared using the open-access tool at https://sites.google.com/site/e90e50fx/home/talenttraffic-chart-with-chord-diagram-in-excel.
Data availability statement
All data that support the findings of this study are included within the article (and any supplementary information files).
Wild whale faecal samples as a proxy of anthropogenic impact
The occurrence of protozoan parasites, bacterial communities, organic pollutants and heavy metals was investigated in free-ranging fin (Balaenoptera physalus, n. 2) and sperm (Physeter macrocephalus, n. 2) whales from the Pelagos Sanctuary, Corsican-Ligurian Provencal Basin (Northern-Western Mediterranean Sea). Out of the four faecal samples investigated, two from fin whales and one from a sperm whale were found positive for Blastocystis sp. A higher number of sequences related to Synergistetes and Spirochaetae was found in sperm whales compared with fin whales. Moreover, As, Co and Hg were found exclusively in sperm whale faecal samples, while Pb was found only in fin whale faecal samples. The concentrations of both PAHs and PCBs were always below the limit of detection. This is the first report in which the presence of these opportunistic pathogens, bacteria and chemical pollutants has been investigated in faecal samples of free-ranging whale species, and the first record of Blastocystis in fin and sperm whales. Thus, this study may provide baseline data on a new anthropozoonotic parasite, bacterial records and heavy metals in free-ranging fin and sperm whales, probably as a result of increasing anthropogenic activity. This survey calls for more integrated research to establish regular monitoring programs, supported by the national and/or international authorities responsible for the preservation of these still vulnerable and threatened whale species in the Mediterranean Sea.
The Mediterranean Sea, the largest and deepest enclosed sea on Earth, represents an ideal habitat for different species of marine animals, contributing greatly to biodiversity at the global level 1. The Northern-Western Mediterranean Sea, which includes an international marine protected area (the Pelagos Sanctuary, Corsican-Ligurian Provencal Basin; Fig. 1), is known to be regularly inhabited by eight species of cetaceans 2,3. Among them, the Mediterranean subpopulations of fin (Balaenoptera physalus) and sperm (Physeter macrocephalus) whales are ranked as vulnerable and endangered species, respectively, by the International Union for Conservation of Nature Red List 4. In the Pelagos Sanctuary area, these species are particularly exposed to both infectious diseases and anthropogenic activities that represent a potential threat to their long-term survival 5. Parasites, bacteria, as well as organic and inorganic pollutants, are considered among the main causes of whale death 6,7 or factors predisposing whales to other pathologies 8,9. In detail, whales can be affected by a wide range of naturally occurring endo- and ectoparasites, most of which are highly pathogenic 10. Furthermore, several parasite species have gained importance as opportunistic pathogens in the marine environment. The introduction of these new emerging and neglected parasites (e.g., protozoan parasites), most of which are transmitted by ingestion of contaminated food and water, probably occurs through terrestrial contamination and is generally due to intense human activities 6,11,12. Whales, like other mammals, host diverse bacterial and archaeal symbiont communities that play important roles in digestive and immune system functioning. Erwin et al.13, investigating pygmy and dwarf sperm whale (Kogia sima) gut microbiomes, showed that host identity plays an important role in structuring cetacean microbiomes, even at fine-scale taxonomic levels. Therefore, understanding whether the gut microbiota could also be affected by diet, environmental pollution and the presence of gut pathogens and, in turn, influence the health status of cetaceans 14, is of paramount importance. Finally, over the last decades, numerous studies have demonstrated elevated exposure of marine mammals to both persistent organic pollutants (including polychlorinated biphenyls (PCBs) and polycyclic aromatic hydrocarbons (PAHs)) and heavy metals (e.g., Hg, Pb, As), which are generally associated with increased mortality, incidence of diseases and/or impaired reproduction 15-18.
Fin and sperm whale populations strongly need a variety of conservation and monitoring measures, which would benefit from physiological and pathophysiological information, such as pathogen infections or chemical pollutants, gathered from free-ranging rather than stranded or caught animals. Moreover, due to sample collection
Results
Coprological and molecular analyses. Cysts of Blastocystis (Fig. 2) were identified in the faecal samples of fin and sperm whales by coprological examination. No other cysts/oocysts/eggs referable to other parasites were found. The DNA samples subjected to SSU-rDNA PCRs were successfully amplified and, after sequencing, good-quality sequences of about 600 bp were obtained. The aligned sequences revealed the absence of any stop codons, with 100% identity to each other. The alignment with the homologous sequences of Blastocystis sp. available in GenBank showed a mean percentage identity of 99% with B. hominis. The phylogenetic analyses using SSU-rDNA data sets were concordant in confirming the identity of the specimens examined here as Blastocystis, and the sequences cluster with Blastocystis ST3 in a monophyletic group distant from the other Blastocystis subtypes.

Bacterial pathogens. Among pathogens of human and zoonotic origin, Salmonella spp. and enterohaemorrhagic E. coli were checked in all samples by PCR and found to be negative in all cases. The absence of these pathogens was confirmed by high-throughput sequencing analyses, since no homologous sequence at the species or genus level was found. As for fish pathogens that commonly cause pathologies in marine mammals 20, Brucella spp., Staphylococcus spp., Leptospira spp., Nocardia spp. and Actinomyces spp. were not found in the specimens surveyed. A relatively low number of sequences related to marine mammal opportunistic pathogens were found and, in all cases, they were exclusive to one species: Mycobacterium spp. and Fusobacterium spp. were found only in the sperm whale, whereas Erysipelothrix spp. and Helicobacter spp. were found only in the fin whale. Figure 3 reports the phylum-level distribution of bacterial taxa in the sampled whale faeces. A clear host-specific pattern is visible between the two species. In both cases Firmicutes and Bacteroidetes were the dominant phyla, and this finding agrees with previous studies 13,21,22. Sperm whales were characterized by a higher number of sequences related to Synergistetes and Spirochaetae, as well as Verrucomicrobia and Actinobacteria, compared with fin whales. Figure 4 shows in detail the most relevant differences among the gut microbiomes of the sampled whales: OTUs with an average number of reads above a 0.1% threshold and that were significantly different (p < 0.05) were included in this comparison (a minimal sketch of this filter is given below).
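A minimal sketch of the OTU filter just described follows; the text does not name the statistical test, so a Mann-Whitney U test stands in as a placeholder, and the "fin"/"sperm" species labels are hypothetical.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

def filter_otus(abund: pd.DataFrame, species: pd.Series,
                min_mean=0.001, alpha=0.05):
    """abund: samples x OTUs relative abundances; species: host label per sample.
    Keep OTUs with mean relative abundance > 0.1% whose abundances differ
    between the two host species (test choice is a placeholder)."""
    keep = []
    for otu in abund.columns:
        if abund[otu].mean() > min_mean:
            fin = abund.loc[species == "fin", otu]
            sperm = abund.loc[species == "sperm", otu]
            if mannwhitneyu(fin, sperm).pvalue < alpha:
                keep.append(otu)
    return abund[keep]
```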
Microbial community composition.
In the Firmicutes phylum, fin whales had higher numbers of Ruminococcaceae, Lachnospiraceae, Oscilligranullum, Ruminiclostridium and Subdoligranulum, while the most representative Firmicutes in sperm whales belonged to Christensenellaceae and a different clade of Ruminococcaceae (NK4A214 group).

Among Bacteroidetes, Bacteroides spp. and Alloprevotella spp. were typical of fin whale microbiomes, while sperm whales were dominated by Rikenellaceae and an OTU belonging to a different clade of Bacteroidales. Also, the most relevant Proteobacteria differed between the two species, with Sutterella spp. representative of fin whales and Desulfovibrio spp. of sperm whales.

Within the Spirochaetae phylum, most sequences in sperm whales belonged to Sphaerochaeta spp., while in fin whales Spirochaeta spp. and Treponema spp. were the most representative genera.

Among the sequences belonging to Synergistetes, which were almost exclusively found in sperm whales, only one OTU was identified to the family level, as Synergistaceae, and one OTU to the genus level, as Pyramidobacter spp. Verrucomicrobia were highly prevalent in sperm whales compared to fin whales: among the retrieved OTUs in sperm whales, one showed high similarity with an uncultured, taxonomically undefined bacterium and a second was identified at the genus level as Akkermansia spp. Finally, sperm whales also showed higher numbers of Actinobacteria; more specifically, the OTUs found belonged to the Coriobacteriaceae family.
Elemental composition, and occurrence of organic and inorganic pollutants. The elemental composition of the faecal samples strongly differed between the two whale species. In particular, faecal samples from sperm whales were characterized by significantly higher average concentrations of carbon (49.8 vs. 21.5%), nitrogen (13.6 vs. 3.4%) and sulphur (2.5 vs. 0.8%).
The concentrations of the 16 United States Environmental Protection Agency (US EPA) priority PAHs and of the 29 PCBs (Table 2) were always below the limit of detection (LOD) of the method, namely 2 µg kg−1.
The concentration and occurrence of the heavy metals investigated were extremely variable; in particular, some (i.e., Be, Cr(VI), Sb, Sn, Tl and V) were always below the limit of quantification (LOQ), i.e., 10 µg kg−1, while others occurred mainly or exclusively in one species (Table 3). In detail, As, Co and Hg (7.24, 0.16 and 1.49 mg kg−1, respectively; average values from two samples) were found only in sperm whale faecal samples, while Pb (65 µg kg−1) was found only in faecal samples from fin whales. The average concentrations of Cd and Se in sperm whale faecal samples (0.45 and 10.6 mg kg−1, respectively) were one order of magnitude higher than in fin whale faecal samples, while the average concentration of Zn (97 mg kg−1) was 1.5-2× higher (Table 3). By contrast, fin whale faecal samples showed Cu and Ni average concentrations (61.3 and 1.14 mg kg−1, respectively) twofold and threefold those of sperm whale faecal samples (Table 3).
Discussion
Fin and sperm whales residing or circulating in the Mediterranean Sea are exposed to biological and chemical hazards due to the increasing anthropogenic impact. In particular, most of the coastal area bordering the Sanctuary is heavily populated and full of commercial, touristic and military ports and industrial areas. As a consequence, a range of diverse human activities exerts several actual and potential threats on cetacean populations in the Sanctuary, including habitat degradation; urban, tourist, industrial and agricultural development; intense maritime traffic; military exercises; and oil and gas exploration, to mention only the most important ones.
This study provides background information on the occurrence and concentration of parasite and bacterial infections/communities, as well as a first investigation of heavy metals and organic pollutants, in faecal samples from the fin and sperm whale Mediterranean subpopulations within the Pelagos Sanctuary.
Here, a modified MINI-FLOTAC technique in combination with FILL-FLOTAC was used for the parasitological detection of cysts in the faecal samples of fin and sperm whales. Although this technique has never been used before on whale faecal samples, it has been used successfully in previous coprological surveys for the detection of gastrointestinal parasites in other marine animals, such as loggerhead sea turtles (Caretta caretta) 23,24. The MINI-FLOTAC can be considered one of the most accurate methods for the coprological diagnosis of endoparasite infections and cyst/egg counting available in veterinary medicine today 25. It allowed an accurate and reliable detection of Blastocystis cysts in both fin and sperm whale faecal samples. Molecular analysis, sequencing and phylogenetic analysis confirmed the obtained results.
Blastocystis is a common intestinal protozoan parasite reported in several animals, e.g., humans, livestock, dogs, amphibians, reptiles, birds and even insects 26-28. Although it possesses pathogenic potential, its virulence mechanisms in humans are still not well understood 29. Blastocystis seems to be linked to irritable bowel syndrome, i.e., a functional disorder mainly consisting of chronic or recurrent abdominal pain due to altered intestinal habits 30. Studying the small subunit ribosomal RNA (SSU-rDNA) gene, several authors have identified at least 22 different Blastocystis subtypes (ST) in a variety of animals, humans included, i.e., ST1 to ST17, ST21, and ST23 to ST26 (Ref. 26). To date, human Blastocystis isolates are classified into 10 ST (i.e., ST1-ST9 and ST12) which, with the sole exception of ST9, have also been identified in other animals 31. According to Parkar et al.32, Blastocystis has the potential to spread through human-to-human, animal-to-human, and human-to-animal contact.
Few similar parasitological investigations have been conducted in the past and are currently available in the literature. Hermosilla et al.33 detected three protozoan parasites (i.e., Giardia sp., Balantidium sp., Entamoeba sp.) and helminth parasites in individual faecal samples from wild fin (n. 10), sperm (n. 4), blue (Balaenoptera musculus; n. 2) and sei (Balaenoptera borealis; n. 1) Atlantic whale subpopulations from the Azores Islands, Portugal. Protozoan parasites (Giardia sp., Balantidium sp., Cystoisospora-like indet.) and helminth parasites were also found in individual faecal samples of wild sperm whales inhabiting the Mediterranean Sea waters surrounding the Balearic Archipelago, Spain 34. Of these, three of the detected parasites clearly bear anthropozoonotic potential, i.e., Anisakis, Balantidium and Giardia 34.
In the present work, Blastocystis has been found in fin and sperm whale samples and, to the best of our knowledge, this is the first time that this protozoan genus has been reported for any cetacean species. This finding therefore represents a new host record for fin and sperm whales. Blastocystis ST3 was the only subtype found in fin and sperm whales. Molecular studies on human samples have shown the occurrence of ST1-ST9, with ST3 as the most prevalent subtype 35,36. Indeed, ST3 is the Blastocystis subtype with the highest prevalence in humans worldwide and probably represents the human species-specific ST (Ref. 37). Consequently, animals harbouring ST3 may mirror environmental contamination by humans, confirming the zoonotic potential of animals for human Blastocystis infections. Unlike previous studies 33,34, no eggs of helminths were found in our faecal samples.
Variations in parasite composition and prevalence might be related to several factors, such as dietary differences, the parasite life cycle, the availability of hosts necessary to complete the life cycle, interactions between parasite species, the host immune response, and host population density 23. Moreover, parasites can spread in different ways in wild animal populations, particularly when they act together with ecological, biological and anthropogenic factors 38.
The occurrence in whales of parasites with zoonotic potential like Giardia or Balantidium, most probably due to coastal waters contaminated by sewage and agricultural and urban run-off, has already been reported elsewhere 39-43. Furthermore, human excretions from the increasing number of pleasure boats, fishing boats and whale-watching boats could be an additional form of contamination. Finally, the intense maritime traffic in the Mediterranean Sea, the proportion of which is higher than in other oceans 44, represents another source of contamination. In all cases, the results highlight that human activities play an important role in the spread of these pathogens.
No bacterial pathogen of human or terrestrial animal origin was detected by either targeted PCR or Illumina high-throughput sequencing. This difference could be due to the lower survival rate of bacteria in the marine environment compared to protozoan parasites 45.
Previous works reported the occurrence of human pathogens in a stranded common minke whale (Balaenoptera acutorostrata) from the Philippines 46 and in the killer whale respiratory microbiome in the North Pacific 47. Although the relatively low number of samples cannot exclude a potential risk of transmission of human and zoonotic pathogenic bacteria to cetaceans in the surveyed area, our results suggest focusing microbiological analyses aimed at tracking potential internal waterborne pathogens on those able to form cysts (like parasites) or other forms of resistance (like spore-forming bacteria), which are more likely to survive for longer periods in seawater.
The dominance of Bacteroidetes and Firmicutes (in common with other terrestrial mammals), the baleen-specific higher number of Spirochaetes and the lower number of Proteobacteria characterized both species, as also reported elsewhere 21. Moreover, differences in some taxa related to the diverse diets were confirmed: in the case of the sperm whale, whose nutrition is based on cephalopods, a higher proportion of Synergistetes was observed in faecal samples, whereas faecal samples from fin whales had a higher level of Spirochaetes compared to those from sperm whales. These findings are in agreement with Erwin et al.13. The Synergistetes phylum includes gram-negative, anaerobic, rod-shaped bacteria, widely distributed in terrestrial and aquatic environments, including host-associated with mammals 48. Within this phylum, OTUs of the Synergistaceae family and the Pyramidobacter genus were particularly dominant in the sperm whale microbiome (Fig. 4). However, no correlation with potential pathogenicity could be drawn from the presence of these specific OTUs, considering their ubiquity in the oral and gut mucosa of marine and terrestrial animals, although some genera belonging to the Synergistaceae family (e.g. Cloacibacillus spp.) are considered opportunistic pathogens 49. Regarding the potential health implications of Spirochaetes, a similar conclusion to that for Synergistetes can be drawn: Treponema spp., found as the dominant genus in fin whales, were found in healthy baleen whales by Sanders et al.21 as well as among the more dynamic OTUs in stranded right whales 50, and Sphaerochaeta spp. were associated with the oral cavity microbiome of monitored healthy cetaceans 14. Moreover, fin whale faeces also showed a higher proportion of taxa that are also enriched in terrestrial herbivores, like Lentisphaerae, Verrucomicrobia, Actinobacteria and Tenericutes, as also reported elsewhere 21. Despite differences in the species sampled and habitats compared to previous studies, we found confirmation of both species- and diet-influenced gut microbiota composition. Notably, Akkermansia (one of the dominant Verrucomicrobia OTUs) and Coriobacteriaceae (the dominant family within the Actinobacteria phylum) include typical holobionts of terrestrial and marine mammals, but also some pathobionts, so far confirmed only for humans 51. Given the wide distribution of some members of the Synergistetes and Spirochaetes phyla, it is not possible to establish whether their presence can be ascribed exclusively to an anthropogenic impact; however, it is worth noting that some of the genera found in both whale species and belonging to these phyla include opportunistic pathogens whose virulence for marine mammals still needs to be confirmed. Interestingly, some archaeal sequences related to the Thermoplasmatales order were also found. This confirms what was already reported by Sanders et al.21, i.e., that archaea belonging to this order may have a role as methane producers from methylated amines in the baleen whale gut, differently from the methanogenic archaea of other orders that typically colonize the gut of terrestrial mammals, including humans. Therefore, the two sampled species harboured gut microbiomes typical of the fin whale and sperm whale groups. These data extend the spectrum of surveyed whale gut microbiomes to previously unsampled species and confirm that NGS analyses can be a useful tool to retrieve information on the health status of wild whales.
While the concentrations of the 16 U.S. EPA priority PAHs and of the 29 PCBs, being always < LOD (Table 2), did not provide useful information, the concentration and occurrence of some heavy metals was extremely useful for speculating about their background values as well as their potential as a proxy to distinguish between the two whale species. In fact, the data reported in Table 3 can be set against pollutant levels reported for the whales' prey and environment, with carcinogenic PAHs ranging from 60.3 to 141.7 µg kg−1 and PCBs ranging from 84.6 to 210.2 µg kg−1; the highest values were generally detected at the station closest to the Ligurian coast.
By contrast, cephalopods belonging to the Histioteuthidae family represent the main diet of sperm whales 54,55, followed by those belonging to the Architeuthis genus. Interestingly, Bustamante et al.56 found high concentrations of Cd, Co, Cu and Se bioaccumulated in the digestive gland of Architeuthis dux from Mediterranean and Atlantic Spanish waters, whereas high concentrations of As, Co, Hg, Ni and Se were also found in the branchial hearts.
Therefore, the occurrence of a metal exclusively in the faeces of one whale species (i.e., As, Co and Hg detected only in sperm whales, and Pb detected only in fin whales), and/or significant differences in the concentrations of other metals (i.e., Cd, Cu, Ni, Se and Zn), may mirror their diverse diets (krill vs. cephalopods), as also suggested by the elemental, coprological and microbiological analyses, and, in turn, the bioaccumulation potential of specific heavy metals through the different diets. The absence of PAHs and PCBs in the faecal samples is probably due to their lipophilicity; as a consequence, ingestion of these organic pollutants leads to bioaccumulation in fatty tissues rather than to excretion through the faeces. Considering the relatively low number of samples surveyed in the present work, we cannot exclude that organic pollutants are present in the free-living whales of the area; therefore, the use of faecal samples as an indicator of PAHs and PCBs remains an open question that needs further investigation.
Conclusions
The present study confirms that the Mediterranean fin and sperm whale subpopulations are exposed to anthropogenic pressure, emphasizing the relevance of constant surveillance of marine mammals to prevent pathogen transmission to humans and vice versa, as well as exposure to chemical pollutants. Among the microbiological and parasitological health risks, the latter seem more relevant in the investigated individuals, and different species may be exposed to specific chemical pollutants and opportunistic pathogens according to their diet.
Considering that faecal samples can be collected easily and without disturbing the animals, their use as a proxy of anthropogenic pressure has proven to be valid, at least for pathogens and heavy metals.
New insights into these topics in whale populations and other marine animals in the wild will contribute to a better understanding of human-related impacts on marine ecosystem health and to the development of proper conservation tools.
In conclusion, this survey provides baseline data on the occurrence and background concentrations of a newly reported anthropozoonotic parasite, of bacterial communities and of heavy metals in free-ranging fin and sperm whales, and calls for more integrated research and regular monitoring programs supported by the national and/or international authorities responsible for the preservation of these still vulnerable and threatened whale species in the Mediterranean Sea.
Material and methods
Study area. The Pelagos Sanctuary is a marine protected area extending over > 87,500 km2 in the North-Western Mediterranean Sea between the Italian peninsula, France and the island of Sardinia, encompassing Corsica and the Tuscan Archipelago (Fig. 1)57. The Sanctuary waters include the Ligurian Sea and parts of the Corsican and Tyrrhenian Seas, and comprise the internal maritime (15%) and territorial waters (32%) of France, Monaco and Italy, as well as the adjacent high seas (53%). Within the Sanctuary area the continental shelf is wide only in correspondence with the limited coastal plains, whereas elsewhere it is mostly narrow and cut by steep, deeply incised submarine canyons57.
High levels of primary production, with chlorophyll concentrations exceeding 10 mg m−3 (Refs. 58,59), support a conspicuous biomass of highly diversified zooplankton, including gelatinous macrozooplankton57, and cephalopods belonging mainly to the Histioteuthidae family54,55 and to the Architeuthis genus. Zooplankton, in turn, attracts various predators to the area, cetaceans included.
Sampling.
In the framework of a research project on the ecology of whales, faecal samples were collected in the Pelagos Sanctuary (Fig. 1) from sperm and fin whales (Table 1). During the summer boat surveys, photo-identification was carried out and floating faeces were collected from individual whales using a fine nylon mesh net, avoiding direct contact with the animals. Faecal samples were immediately placed in sterile Falcon tubes, labelled for whale identification and stored for subsequent parasitological, bacteriological and chemical analyses.

The PCR fragments were run on a 1.2% agarose gel, and positive samples were purified with Exonuclease I (EXO I) and Thermosensitive Alkaline Phosphatase (FAST AP) (Fermentas) enzymes according to the manufacturer's instructions61.
Purified amplicons were directly sequenced in both directions using the ABI PRISM BigDye Terminator v. 3.1 Cycle Sequencing Kit (Applied Biosystems, Foster City, California, USA) with the same primers as the respective PCR reaction, according to the manufacturer's instructions. Sequences were resolved on an ABI PRISM 3130 Genetic Analyzer (Applied Biosystems, USA), and the chromatograms were inspected by eye using the FinchTV software. Primer regions and low-quality regions were removed61.
Once the sequences had been cleaned up, each sequence was compared with the homologous Blastocystis nucleotide sequences available in the GenBank database using the Blastn program (https://blast.ncbi.nlm.nih.gov). The obtained sequences corresponding to the Blastocystis SSU-rDNA gene portion were then gathered in a fasta file and aligned with each other using the ClustalW implementation of the BioEdit software v7.0.5, and the alignment was adjusted manually when necessary61. To attribute the subtypes, a phylogenetic analysis of the obtained sequences and of the homologous GenBank sequences representing the Blastocystis SSU-rDNA subtypes ST1-17 was performed using the maximum likelihood method in MEGA v7.0.9. The tree was rooted using a Blastocystis lapemi sequence as the outgroup (accession number: AY590115). Bootstrap confidence values for branching reliability were calculated with 10,000 replicates.
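To illustrate the sequence-comparison step in a reproducible form, the following minimal Python sketch (not the pipeline used in this study) submits one cleaned SSU-rDNA sequence to a remote blastn search with Biopython and lists the top GenBank hits; the input file name is hypothetical.

```python
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

# One cleaned Blastocystis SSU-rDNA sequence (file name hypothetical).
record = SeqIO.read("blastocystis_ssu.fasta", "fasta")

# Remote blastn search against the GenBank nt database (requires internet access).
handle = NCBIWWW.qblast("blastn", "nt", record.seq)
blast_record = NCBIXML.read(handle)

# The closest hits suggest the candidate subtype, to be confirmed by the
# maximum likelihood phylogeny described above.
for alignment in blast_record.alignments[:5]:
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{alignment.title[:70]}  {identity:.1f}% id, E = {hsp.expect:.2e}")
```

In practice, subtype attribution rests on the phylogenetic placement rather than on the BLAST hit alone; the sketch only makes the screening step explicit.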
Microbiological analyses. The detection of the pathogenic bacteria Salmonella spp. and E. coli O157:H7 was performed by duplex PCR analyses targeting the invA and oriC genes for Salmonella62,63 and the rfb and fliC genes for E. coli O157:H7 (Ref. 64). The PCR protocols used have already been described in detail in a previous work65.
DNA samples were sent to Stab Vida Lda. (Caparica, Portugal) for amplification, library construction and multiplexed sequencing of partial (V3-V4) 16S rRNA gene sequences on an Illumina MiSeq platform. Specifically, library construction was performed using the Illumina 16S Metagenomic Sequencing Library preparation protocol. The generated DNA fragments were sequenced with the MiSeq Reagent Kit v3, using 300 bp paired-end reads. The raw sequence data were analysed using QIIME2 v2018.6.0 (Ref. 66), and the reads were denoised using the DADA2 plugin67. After denoising, a total of 1142 unique features (OTUs) were identified. The scikit-learn classifier68 was trained on the SILVA (release 132, QIIME) database, with a clustering threshold of 97% similarity. For classification purposes, only OTUs containing at least 10 sequence reads were considered significant.
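The ≥ 10-read significance threshold amounts to a simple row filter on the exported feature table. A minimal pandas sketch is shown below; the file name is hypothetical, and this is not part of the QIIME2 run itself.

```python
import pandas as pd

# Hypothetical export of the denoised feature table: rows = OTUs, columns = samples.
otu_table = pd.read_csv("feature_table.tsv", sep="\t", index_col=0)

reads_per_otu = otu_table.sum(axis=1)          # total reads per OTU across all samples
significant = otu_table[reads_per_otu >= 10]   # keep OTUs with at least 10 reads

print(f"{len(otu_table)} OTUs denoised, {len(significant)} retained for classification")
```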
Chemical analyses. Total carbon, nitrogen and sulphur concentrations were determined by flash combustion using an elemental analyser (CHNS vario MACRO cube, Elementar, Germany). Sulfanilic acid was used as a standard. Samples were analysed in duplicate, and the coefficient of variation for all elements was always < 2%.
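The < 2% acceptance criterion is the coefficient of variation of the duplicate measurements; a minimal sketch with illustrative (hypothetical) values:

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Hypothetical duplicate total-carbon readings (%); acceptance threshold is CV < 2%.
duplicates = [41.2, 41.8]
cv = coefficient_of_variation(duplicates)
print(f"CV = {cv:.2f}% -> {'accepted' if cv < 2.0 else 'repeat the analysis'}")
```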
A Critical Examination of Rural Out-Migration Studies in Ethiopia: Considering Impacts on Agriculture in the Sending Communities
Labor migration is a complex phenomenon, yet while much attention has been paid to understanding the drivers of migration, there is a huge knowledge and policy gap regarding the effects of migration on the people and communities left behind. We sought to explore the impacts of rural outmigration on migrant-sending communities in Ethiopia, a topic that remains understudied in research on migration in Ethiopia. Our investigation is based on a critical review of the migration literature pertaining to Ethiopia and, more broadly, to other developing countries. We pursued a holistic analysis of the multidimensional aspects of migration. There are indications that the impacts of rural outmigration involve issues related to remittances, household food security, agricultural labor use, farmland management, and rural infrastructure development. Our analysis revealed that there have been few systematic studies and limited analyses regarding the impacts of outmigration on agriculture and on the livelihoods of the rural people and households left behind. Instead, Ethiopia's migration literature largely deals with migration's causes, including environmental factors, climate variability, agricultural pressures, changing aspirations, and social networks.
Introduction and Background
In Ethiopia, migration has become a rapidly growing phenomenon. This article analyzes the multidimensional aspects of migration in Ethiopia in the context of wider migration research trends. Labor migration is a complex phenomenon, yet migration researchers have been paying much attention to understanding the drivers of migration, while there is a huge knowledge gap regarding the effects of migration on people and communities left behind [1,2]. Migration studies, as well as policies, have been shaped by the political wish to deter migration from "poor" countries to "rich" countries [3,4].
Policymakers and officials, particularly in high-income, destination countries, have tended to see a lack of development in low-income, migrant-origin countries as one of the root causes of migration [5]. This is based on the assumption that if development opportunities in economically less developed countries are improved, outmigration will decrease. Such assumptions drive development policy responses aimed at deterring migration flows from poor countries. They have promoted the role of development aid in addressing the "root" causes of migration and reducing migration pressures [2][3][4][5][6].
Similarly, the concern regarding internal rural-to-urban migration also focuses on the causes of migration, particularly associating a lack of "development" with rural outmigration [7]. Governments and their development partners perceive outmigration as a result of poverty and consider that rural development interventions will reduce rural-urban migration, thereby keeping people in rural areas instead of out-migrating [7,8]. They also tend to regard outmigration as a sign of failure and a negative consequence of rural development gaps.
While appreciating the scholarly and policy concerns underlying investigations that focus on the causes of migration, we argue for the importance of considering a holistic approach beyond migration causes. It is important to consider the multidimensional aspects of migration, including its effects on migrant-sending regions, communities, and households. There has been limited research on the impacts of rural outmigration on home communities in developing countries. Some studies highlighted the impacts of outmigration in terms of enhancing non-agricultural income sources of rural households [9], improving livelihoods [10], reducing the supply of agricultural labor [11], leading to the feminization of agriculture [12,13], and increasing inequality and differentiation [11]. Some studies in Africa also highlighted the role of outmigration in improving household food security [14,15], facilitating development projects [16], resulting in the loss of agricultural labor [17], and intensifying income inequality [18]. It is possible that rural outmigration can involve several other impacts on migrant-sending communities. The mention of migration as a development issue in the 2030 Agenda for Sustainable Development [19] is an encouraging signal for examining migration in the wider context of the development agenda.
In Ethiopia, agriculture is the mainstay of the economy. It is particularly characterized by small-scale, rainfed farming which is dependent on unreliable rainfall. It accounts for about 85% of employment and 42% of gross domestic product (GDP) and provides about 90% of export earnings [20]. Ethiopia is the second most populous country in Africa after Nigeria, with a population exceeding 100 million people [21].
Labor migration in Ethiopia has been increasing in recent decades, driven by structural factors in the country's economy and society. Rural-to-urban migration (rural outmigration to urban areas) increased from 24% to 33% from 2005 to 2013 [22]. A recent survey that does not include one of the federal government regions, namely Tigray, puts rural-to-urban migration at 29% [23]. It is estimated that about two million Ethiopians live and work abroad, but this could be a significant underestimation of the actual number, especially since outmigration has recently increased [24]. The international migrant stock of Ethiopians living abroad has increased over the years, from 662,444 people in 2000 to 1,072,949 in 2015 [24]. A national survey conducted in 2021 also indicated an increasing trend of international migration from 4.9% in 2010/2011 to 14.1% in 2018/2019 [23]. It is estimated that around 460,000 Ethiopians legally migrated to Saudi Arabia, Kuwait, and Dubai between September 2008 and August 2013 [24]. However, the number of irregular migrants living and working in the Gulf states is assumed to be much higher than the number of officially registered migrants [25].
The increasing trend of labor migration in Ethiopia and the country's predominantly agrarian economy and population growth provide an important context for this study. The practices and incidences of migration can have crucial and interrelated impacts on diverse livelihood and development outcomes. Understanding and dealing with these issues will significantly contribute to development processes in Ethiopia, thereby involving wider global implications relating to other developing countries. We will conduct a holistic analysis of the multidimensional aspects of migration, including the drivers and impacts of migration. In particular, we seek to explore the impacts of rural outmigration on migrantsending communities. This remains an understudied topic when it comes to research on migration in Ethiopia and other developing and agricultural communities. We will also identify linked research gaps in migration studies.
Methodology
This article presents a comprehensive review of the literature on migration in Ethiopia. We reviewed articles that discuss migration patterns and provide information on the link between migration and agricultural activities in the form of drivers and impacts. The relevant documents were identified through the Google and Google Scholar search engines using both keyword searching and the snowball method. Keywords used were: migration, Ethiopia, agriculture, rural migration, international migration, outmigration, labor migration, and rural-urban migration. Additionally, we reviewed papers that provide background information relevant to Ethiopia's labor migration context. We also reviewed broader theoretical perspectives related to labor migration. The discussions in this paper are grouped into four major topics. These include theoretical perspectives on labor migration, drivers of outmigration in the Ethiopian migration literature, migration impacts, and female migration to the Gulf countries. We summarize key findings, identify research gaps, and make recommendations for future research.
Theoretical Perspectives on Labour Migration
Different theoretical perspectives are used in explaining migration. One of the influential theoretical perspectives on labor migration relates to the Neoclassical economic approach. This approach conceives labor migration in terms of wage differentials between origin and destination areas, explaining that differences in the supply and demand for labor cause the migration of workers from low-wage, labor-surplus regions to high-wage regions facing labor shortage [1,2,26,27]. The Neoclassical economic approach considers labor migration a means of income maximization guided by individual decisions. According to this approach, when individuals decide to out-migrate, they are mainly influenced by the income they expect to gain. It ignores other potential factors that interact with and shape migrants' decision-making.
The historical-structural approach provides another important perspective to explain migration phenomena. As Wood [28] indicated, the historical-structural approach to the study of migration is found in a variety of models, including dependency theory and world systems theory. From the historical-structural point of view, migration is part and parcel of broader processes of structural transformations involving socio-economic and political changes [28,29]. Investigations along the lines of historical-structural analysis conceive rural-to-urban migration in terms of specific historical contexts and transformations in the economic structure of rural and urban areas [30]. Analyses based on the orientations of the historical-structural approach seek to explain migration as a structural outcome of market expansions within the global political hierarchy [30]. This perspective considers international migration an outcome of capitalist market formation and the penetration of the global economy into peripheral regions, along with ensuing disruptions and dislocations that occur in the processes of capitalist development [26,30]. Accordingly, migration is a product of a broader structural process rather than a mere individual decision-making process.
The new economics of labor migration provides a different perspective on migration. It considers that migration is practiced to maximize income and minimize risks [1,2,26]. It also maintains that migration decisions are not made by individuals but by family members collectively [1,2,26]. According to this approach, income maximization and risk spreading constitute the motivation behind migration. The household, rather than the individual, is the main decision-making unit.
The migration networks perspective highlights the role of social networks in facilitating migration processes. Migration networks constitute interpersonal linkages that interconnect migrants, former migrants, and non-migrants across geographical boundaries through kinship, friendship, and community relations and make migration easier by reducing the costs and risks of movement [26,30]. They provide a foundation for the dissemination of information as well as for patronage and assistance [31]. Social networks have multiplier effects that could result in a migration chain: when the size of network connections in sending areas reaches a critical threshold, migration becomes self-perpetuating [26,31]. Thus, they are an important determinant of migration plans and decisions [31].
The theoretical perspectives discussed above maintain different positions in explaining labor migration. However, they mostly deal with the motivations of labor migration and largely focus on the causes of migration. The causal focus underlying these migration theories has entailed significant implications for existing migration research, leading to a major focus on motivations that propel people's mobility. We consider that no single cause can wholly explain the motivations underlying labor migration. Instead, a combination of different factors may operate in tandem or at different levels to shape migration decisions and processes. Thus, it is likely that a mix of the different causal conditions associated with the theoretical perspectives discussed above can manifest in various ways and degrees in labor migration decisions, processes, and instances.
After reviewing the literature, we have categorized the possible impacts of migration as impacts on household food security [14,15], impacts on income, remittances, and differentiation [9,11,18,32], impacts on agricultural labor supply, the composition of the agricultural labor force, and patterns of land use [11,13,32], and impacts on asset accumulation, business ownership, and entrepreneurial/investment activities [32][33][34]. We consider that the impacts of outmigration on sending communities, regions, households, and left-behind family members can be complex and involve a wide range of outcomes.
Drivers of Outmigration in the Ethiopian Migration Literature
In this section, we discuss factors that drive rural outmigration in Ethiopia. Based on a review and analysis of the migration literature, we have identified several factors that can be seen as drivers of outmigration in the Ethiopian context. These relate to agroecological, livelihood, socio-economic and social factors that can explain the processes of rural outmigration.
Climate and Environmental Factors
In international migration research, there has been a growing interest in the effects of environmental factors on migration decisions and patterns [35][36][37][38][39]. Environmental problems such as droughts and soil degradation are mostly identified as important drivers of outmigration in sub-Saharan Africa [35,40].
The migration literature in Ethiopia also reveals that environmental shocks have significant consequences for population mobility in rural Ethiopia [41][42][43][44][45]. Rainfall and environmental factors vary over time and space and become unpredictable, often resulting in environmental shocks that undermine household well-being [41]. These studies emphasize that climate variability has long posed a major challenge to the Ethiopian agrarian economy and livelihoods. This is because most Ethiopians live in rural areas and heavily depend on small-scale rainfed agriculture for their livelihoods. However, drought, lack of rain, and erratic rainfall induced by climate variability have often threatened the viability of such small-scale rain-dependent subsistence farming. This situation drives rural outmigration processes.
Drought and related food security disasters have been common in Ethiopia [46,47], although the distribution of these occurrences varies by geographical zones. Climate shocks undermine the periods of short rains, referred to as the Belg season (February/March-April), and the periods of long rains, known as the Kiremt season (June-September). Rural people heavily rely on these rains for food production and keeping livestock. The problem of drought induced by climate and environmental crises is manifested in different ways. It may occur in the form of no rain, shortage of rain, or late rain. Besides, these manifestations of drought may occur in successive agricultural seasons, thereby resulting in massive crop failure, loss of valuable livestock, and major livelihood crises [47]. Some authors consider that in rural Ethiopia, migration serves as a strategy to secure alternative livelihoods in the face of climate shocks such as droughts [41,44,48]. Drought results in a decline in agricultural production, which can lead to food insecurity, loss of grazing land for livestock production, and loss of agricultural employment, thereby becoming a cause of migration in Ethiopia [43,44]. However, drought may not necessarily determine migration decisions. Even in adverse climatic conditions, outmigration depends on the degree of people's vulnerability and adaptation capacities [41,48]. It does not necessarily mean that all rural households will respond to drought and environmental shocks similarly and out-migrate uniformly. Gray and Mueller [41] indicated that drought increases men's labor migration particularly, and land-poor households are most vulnerable to the impacts of drought. They noted that men's labor movements and outmigration in rural highland Ethiopia increased twofold under severe drought conditions [41].
While environmental factors such as climate variability can affect people's migratory behavior, scholars tend to distinguish between short-distance and long-distance migration drivers. For example, Black et al. [43] associate short-distance and circulatory migration with a response to shocks while linking long-distance and more permanent migration with a planned household's investment strategy. This classification of migration can be useful in analyzing migration patterns, drivers, and responses. However, it leads to associating climate-induced migration with a short-term response to shocks in contrast to a more permanent migration and decision pattern. In the case of Ethiopia, where climate shocks have been recurrent phenomena, it can be difficult to clearly distinguish between shock responses and long-term planned mobility. This is because repeated shocks and related responses can end up driving permanent migration decisions and responses.
While our focus is mainly on labor migration, it is worth noting that recurrent droughts and related environmental crises provided an important impetus for government-imposed migration schemes. During the previous pre-1991 Derg regime of Ethiopia, the government carried out a massive resettlement program whereby peasant households from the highlands and northern part of the country were moved to the lowlands, western and southwestern parts of the country [49]. The state-imposed and sponsored migration in the form of resettlement was seen as a strategy for addressing the food security crisis in drought-prone, densely populated, and environmentally degraded areas by relocating people to other areas considered to be fertile, sparsely populated, and 'under-exploited.' The overall assessment of the Derg government resettlement program was largely recognized as a failure, disastrous, unpopular, and coercive [49].
Donor agencies and western development partners of the Ethiopian government have often been against state-imposed migration programs, i.e., resettlement. However, climate-induced droughts and ensuing food crises again provided a similar impetus for state-implemented resettlement during the post-1991 Ethiopian People's Revolutionary Democratic Front (EPRDF) regime of Ethiopia. With increasing numbers of people facing food insecurity in the early 2000s, the government took up resettlement as a crucial component of its food security strategy [49,50]. The EPRDF government considered the resettlement program a reliable and efficient way of achieving food security, arguing that resettling people from drought-prone and land-scarce areas to areas with reliable rainfall and fertile land facilitates household food security [51].
Climate variability can contribute to migration processes. However, we suggest that it alone may not necessarily induce outmigration. Its impact should be seen along with other contributing driving factors. We thus argue that the effects of climate and environmental stress on migration should be considered along with other political-ecological and sociocultural processes affecting migration.
Agricultural and Livelihood Stresses
Ethiopia largely remains an agrarian country where most of the population lives in rural areas. Factors constraining agricultural activities and rural livelihoods can have significant implications for rural outmigration. The land is one of the major productive resources with a tremendous impact on agriculture and rural livelihoods. As pointed out next, access to land, including land availability and individual plot size, can significantly influence decisions related to migration.
Studies conducted in different districts in Northern Ethiopia revealed that farmland shortage, landlessness, and a lack of sufficient means of subsistence are among the major driving factors of rural outmigration [52,53]. Another study conducted in several districts in southern Ethiopia also emphasized that limited access to agricultural land is a major factor pushing the rural youth away from agricultural livelihoods [54]. Similarly, research in several rural areas in northwest Ethiopia pointed out that rural outmigration is predominantly driven by landlessness or small land endowments [55]. In some studies, however, the role of land shortage in driving outmigration appears minimal. For example, a study of migrants from southern Ethiopia to South Africa documented that the lack of employment opportunities at home is the main reason for migration, while land shortage has minimal influence [56]. Yet, even in this study, a closer look at respondents' answers indicates that land shortage remains an important factor for outmigration among young people between the ages of 15 and 25 [56]. Land shortage is a growing problem in Ethiopia: over half of rural households hold less than 1 hectare of land [57].
All rural land has been under government control following the 1975 land reform. In 1975, the socialist government of Ethiopia decreed a land reform that nationalized all rural lands and prohibited private ownership of land and the sale of land [58]. In effect, the government put rural land under its control while allowing peasants only usufruct access. Most importantly, the government became the only land source, and farmland access was available through land redistribution. The reform stipulated that any person, once they reached 18 years of age, would have the right to have access to farmland through redistributions [58]. Accordingly, periodic land redistributions had to be undertaken to fulfill the demands of land claimants. The EPRDF regime, which took over power in 1991, pursued political and economic reforms, departing from the previous socialist regime. However, it followed a similar land policy in many respects. The ownership of land has continued to be under government control while allowing rural people the right to use rural land and prohibiting the sale of land [59]. While, in principle, the government is still the only source for land provisions, there has been no new land allocation to youth for over two decades. The ensuing lack of land has posed a major livelihood stress for the youth. In the absence of other sources of land, the youth largely depend on inheritance from parents [60,61], thereby leading to progressive divisions of already smaller family landholdings as each generation passes.
Kosec et al. [62] pointed out that the amount of land the youth expect to inherit in rural Ethiopia affects their migration behavior, i.e., whether to out-migrate or remain in agriculture. Their analyses indicated that a 10% increase in the size of inherited land reduces rural-to-urban migration. By implication, the shortage of land experienced by rural youth drives outmigration. In the most densely populated areas of southern Ethiopia, for example, rural land available to the youth through inheritance is often too small to establish a meaningful livelihood, thereby leading to an increasing trend of youth outmigration [54]. The shortage of family land to address the land needs of aspiring children is a source of conflict and tension among family members. Sibling competition for resources in a situation of increasing rural farmland scarcity is an important driver of outmigration [63].
The pursuit of education, which is often more expensive outside rural areas, also creates significant livelihood stress that drives outmigration. Young people, including adolescents, migrate from rural areas to big urban centers in pursuit of educational and work opportunities [64]. In this case, the desire to seek education and the need to cover the financial cost necessitates the search for work opportunities through outmigration. A study in a rural district in northern Ethiopia observed that owing to the prevailing poverty situation in rural households, accessing and completing education is highly difficult and involves a major investment cost [65]. A World Bank report also indicated that the search for educational opportunities and the related need for work are very significant reasons for many people out-migrating from rural areas to the capital city, Addis Ababa [66].
However, we would like to stress that the impacts of access to education on outmigration should be interpreted carefully. If the lack of access to education drives outmigration, as indicated above, this may imply that having access to education limits migration. However, there are other ways in which education plays a role in people's decisions on whether or not they migrate. For example, Schewel and Fransen [67] indicate that young people's changing aspirations and expectations resulting from the expansion of formal education in Ethiopia entail a significant force that drives outmigration to urban areas away from agriculture. Thus, education induced the changing aspirations among young people away from agriculture, which can lead to outmigration.
While the drivers of outmigration are diverse, migration studies in Ethiopia tend to categorize migrants and their households by social and economic groups. In identifying the characteristics of migrant households, some studies state that female-headed households are more prone to send migrants, as they often possess fewer productive assets than male-headed households [55]. Other studies suggest that wealthier households are less likely to send migrants, while poor households are more likely to have migrant family members [68]. Such studies, in both cases, emphasize the economically disadvantaged position of households that participate in migration practices. In fact, economic problems are only one of the many livelihood stresses that drive outmigration. People who opt for migration for economic gains do not necessarily represent a distinct lower-income group. For example, a study in southwest Ethiopia indicated that rural people practice outmigration as a livelihood strategy regardless of their households' economic status [61]; that is, migrants come from different wealth and socio-economic groups. While families of any socio-economic background may decide to migrate (or send one member as an outmigrant), the consequences and vulnerabilities for these families can be very different: wealthier migrants are more resilient to sudden shocks.
Social Networks
Social networks, including ethnicity, kinship ties, 'community' links, and 'community identity', play significant roles in facilitating migration processes by providing relevant information, creating awareness, and reducing the costs of migration. These networks help to inform and facilitate international migration, particularly regarding where and how to migrate. Migration aspirations and actual movements depend on potential migrants' awareness of existing migration corridors and opportunities, and on connections between would-be migrants and already-migrated friends and relatives [5]. Such aspects of social networks constitute an important driver of migration decisions along the different migration routes and destinations pursued by Ethiopian migrants.
Labor outmigration from Ethiopia has been practiced through three major routes and transits: the Eastern, Northern, and Southern routes [69]. The Eastern route transits through Djibouti, Somaliland, Puntland, and Yemen into Saudi Arabia, other Gulf countries, and the Middle East, while the Northern route transits through Sudan, Egypt, and Libya into Europe and the Southern route passes through Kenya, Tanzania, and other countries within that route to South Africa [69].
Migration networks, opportunities, and experiences developed over time along these migration routes have shaped the movement of Ethiopian migrants to different destination countries. For example, migration to South Africa has been predominantly practiced by people from certain areas of Southern Ethiopia [56,70]. The success of an initial group of youth who got the opportunity to migrate for work to South Africa-through contacts with compatriots based there-and who then invested their earnings back at home spurred the movement of other people to South Africa [56,70]. As these studies indicate, the subsequent migration of people from these areas to South Africa has been further facilitated by access to knowledge and information related to smuggling, traveling experiences, and job opportunities, which are more widely available in these areas than in other parts of Ethiopia.
Similarly, social networks and knowledge of migration opportunities along the northern route have facilitated the movement of migrants from northern Ethiopia to the West. Adugna [70] indicated that following the path of early migrants (who were largely political refugees) from parts of Northern Ethiopia to the USA, North America has become a popular destination for later migrants originating from these areas. Over the years, through migrant networks, including sponsorship systems, migrants from parts of Northern Ethiopia have continued to move to the USA. Labor migration to the Gulf and Arab countries through the Eastern migration route has also been intensified through social networks, connections with recruiting agents, and observed migration outflows from local communities [71]. Most of the migrants crossing the Eastern migration route come from three rural regions of Ethiopia: Oromia, Amhara, and Tigray [72].
Social linkages and networks also play a significant role in internal rural-to-urban migration processes. As Baker [73] noted about northeastern Ethiopia, migration does not involve a sudden movement to another location; instead, it involves acquiring thorough information about a particular location. Other studies looking at rural-urban migration in Ethiopia indicated that at the destination, the presence of people of similar kinship, ethnicity, community, and local background greatly facilitates rural-to-urban migration [53,61]. Would-be migrants acquire information about the destination area from distant and close kin relatives and friends who have already passed through a similar experience of migration. They also rely on such social ties for economic, social, and psychological support [61]. Thus, social networks, personal contacts, and information available through such contacts and relatives significantly influence the patterns of rural-urban migration and migrants' choices of preferred destinations [66].
Migration Impacts
In this section, we will discuss the impacts of migration on source communities, households, and families left behind. Our review of the migration literature in Ethiopia revealed that there had been little research on the impact of outmigration on communities and households left behind. Most of the literature presented here does not directly deal with the impact of migration on source communities. However, through a critical review and analysis of the available literature, we have drawn important insights signifying the impacts of migration. We will particularly discuss issues related to remittances, investments, household welfare, labor and land management, and inequalities.
Remittances, Household Welfare and Investments
The impact of migration on sending regions is primarily conceived in terms of remittances, which are considered the main link between migration and development [1]. In Ethiopia, remittances are a significant source of foreign exchange, and the country ranks among the top ten remittance-recipient countries in Africa [74,75]. Most of the data on remittance inflows to Ethiopia are based on information available through formal remittance channels such as banks. However, a large proportion of remittances are informal, with over 75% of remittances to the country sent through informal channels [76]. Official remittance transfers have steadily increased in recent years, from USD 53 million in 2000 to USD 1.796 billion in 2014 [77]. Informal remittance inflows, however, are likely to be much higher than what is officially recorded [76]. Informal remittance transfers made by migrants are a significant source of income for many families left behind. Migrants residing in South Africa, the Gulf States, and the Middle East largely rely on informal channels, including informal agents and interpersonal networks, to transfer remittances to family members in Ethiopia [70].
There are indications about the positive impacts of remittances for remittance-receiving households in Ethiopia. Some studies indicate that remittances and outmigration enhance household food security by reducing experiences of insufficient quantity of food intake, enabling access to adequate quality food, and reducing the food poverty gap [78,79]. Others suggest that remittances can improve the well-being of left-behind migrant households in that remittance-receiving households are better off in terms of well-being than households with no migrant family members [80]. Migration may also improve the living standard and welfare of family members who are left behind, as remittances sent by migrants help the left behind family members increase their consumer expenditure [81]. Similarly, remittances can increase households' income and thus enhance households' economic well-being and consumer asset accumulation [82].
Regarding the impact of rural outmigration and remittances on productive investments, however, there has been not only limited information, but the limited information also conveys mixed views. On the one hand, some assert that in rural Ethiopia, remittances have no effect on productive asset investments while strongly affecting consumer asset accumulation [82]. A few other studies note the role of outmigration and remittances for agricultural investments, but they differ in terms of the situations they focus on. One situation is that farmers seasonally out-migrate for wage-earning employment during the low agricultural season, return home during the peak agricultural season, and invest the income they have acquired on productive farm assets such as cattle, land, and water pumps [52]. Another situation is that rural households receive remittances from outmigrant family members and use remitted income for purchasing agricultural inputs such as fertilizer, seeds, pesticides, herbicides, agricultural tools, and livestock [79,83]. Remittance-receiving farm households are considered to have a better capacity to overcome their financial problems and engage in high-return production activities compared to households receiving no remittance [68]. Thus, remittances can play a significant role in alleviating financial constraints that discourage agricultural production.
In addition, remittances available through migration can indirectly contribute to agricultural investments. They help rural households maintain their valuable agricultural productive assets, which otherwise could be lost through distress sales in times of environmental crises and linked misfortunes. For example, Mohapatra et al. [84] indicated that Ethiopian rural households that receive remittances refrain from selling productive assets such as livestock to cope with drought and related food shortages, as they rely on cash reserves from remittances. Little et al. [47] observed that farmers in northern Ethiopia responded to drought shocks during drought events by selling their livestock at a very low price. The distress sale of livestock not only diminishes rural households' productive asset holdings but, most importantly, it results in the loss of oxen serving as the main farming implement in ox-plow-based agriculture. Thus, the role of remittances in furnishing income and preventing the loss of valuable livestock through desperate sales fulfills an indirect form of agricultural investment.
There are also some indications that migrant remittances help families in Ethiopia invest in other business activities such as hotels, public transport, and housing [56,70]. Upon returning home, migrants can contribute to the rural economy by investing in entrepreneurial activities, depending on the financial and non-financial assets they have accumulated during their migration [53,56]. For example, they may run hotels, shops, transport vehicles, and grain mills. The benefits of migration can also feed into rural development through community development activities. For example, the Gurage people in Ethiopia have long practiced rural-urban migration as a livelihood strategy. Community members working in urban areas have developed self-help community development traditions by using their money, skills, social networks, and associations, significantly contributing to the development of missing rural infrastructure in home areas, such as roads, schools, and health and communication facilities [61,85,86].
Labor and Land Management
Rural outmigration entails diverse implications for land and labor management. An ethnographic study on natural resource management in southwest Ethiopia indicated that in rural societies where there is an acute land shortage and high population pressure, outmigration could reduce the pressure on scarce land resources [61]. On the other hand, outmigration can reduce household farm labor supply. Some suggest that the impact of outmigration on farm labor and farm outcomes differs depending on the type of migration: permanent vs. temporary. Based on a study in Northern Ethiopia, Redehegn et al. [68] hold that permanent migration induces an extended loss of labor and negatively affects crop income, while temporary outmigration allows workers to return home regularly and mitigates the negative impact of labor losses on crop production.
For some researchers, the labor impact of rural outmigration can be insignificant, depending on the magnitude of the prevailing population pressure and farmland scarcity. For example, a study conducted in two districts in northern and southern Ethiopia maintained that where there is ever-growing population pressure and an increasing shortage of farmland, youth outmigration has little impact on labor shortage, as the youth are already underemployed on the family farm [83]. This view assumes that there is already excess labor on the farm, so youth migration will have little impact on labor availability. On the other hand, Dessalegn [60] observed in southwest Ethiopia that rural outmigration can lead to agricultural labor shortage even in a situation of high population pressure and land shortage. However, despite the potential negative impact of outmigration on rural labor, labor problems are mitigated through rural labor institutions that facilitate mutual labor exchange groups and through hired wage labor [60].
Mueller et al. [87] indicated that youth outmigration in Ethiopia imposes additional labor on other family members, forcing female heads and spouses to spend more time on the farm. Nevertheless, they state that the migration of youth will not compromise agricultural income; instead, it positively contributes to household income [87].
Changing Inequalities
Remitted income available through migrants entails differential impacts and intra-'community' inequalities. For example, in migrant-sending areas, remittance-receiving households enjoy higher purchasing power than others and often tend not to resist unfair prices of goods and services [70]. This situation contributes to rising costs of living, thereby putting non-migrant families at a disadvantage. Remitted income creates inequalities even between migrant households. This particularly relates to differences in the amount of remitted income available to migrant households. For example, households receiving more remittances or regular remittances have more access to agricultural wage labor than other households who receive remittances in small amounts or intermittently [61]. This is because better access to remittances enables rural households to rely on diversified income sources.
Female Migration to the Gulf Countries
While female labor migration to the Gulf countries generates remittances that support the welfare of families left behind in Ethiopia, it involves challenging circumstances. Understanding this situation helps us appreciate how migration's benefits may come at the cost of individual suffering. This particularly relates to the case of young women migrating to the Middle East and the Gulf States, including Saudi Arabia, Lebanon, the United Arab Emirates, Kuwait, Oman, and Bahrain. These women migrants are largely employed as domestic workers. Since the early 1990s, contract migration has increased steadily to meet the rising demand for domestic workers throughout the Middle East and the Gulf States [71,88]. The Gulf countries, particularly Saudi Arabia and Kuwait, have emerged as the top destinations for Ethiopian women migrants [89].
Much of the research on this labor migration focuses on the difficult conditions in host countries, the often extremely damaging psychological consequences of abuse by employers, and on the struggle that some of the return migrants face upon return to Ethiopia [90][91][92][93]. However, despite the difficult conditions under which they operate and their limited income, they send remittances back home. Remittances are largely used for loan repayment, schooling of children, childcare provision, and household expenses [91,93].
There has been little information regarding the impact of this form of labor migration on productive investments. There is an indication that such labor migrants find it difficult to make savings because of their limited income and the responsibility of regularly sending remittances for several expenses [93]. However, there are successful women labor migrants who managed to make some productive investments in the form of owning small shops, restaurants, and taxis and who were able to purchase/build a house [91]. This may have been possible through very prudent savings and restrained allocation of remittances.
What makes "productive investment" of remittances can vary depending on labor migrants' individual aspirations and family commitments. For example, for some female migrants, sending monetary remittances for paying a sibling's university education at home is an important means of achieving betterment in that upon graduation and employment, this sibling can sponsor the education of other siblings [94]. The intent underlying such multiplying effects of remittance use could be more appreciated when considering the situation of many Ethiopian households who rely on family support. The success or failure of a family member entails significant implications for the rest of the family members. Upon return from labor migration, women often find it difficult to involve themselves in productive investments due to a lack of savings and capital, experience in business activities, and reintegration support [90,91].
The labor arrangement governing the employer and migrant domestic worker relations in the Gulf has its own adverse impacts on the success and freedom of women labor migrants. The migration of Ethiopian domestic workers to the Gulf states has been largely governed through the Kafala labor system, whereby the employee's visa and residence permit are sponsored through the financial and legal responsibility of a specific employer who maintains control over the passport and movement of the domestic worker [95,96]. According to this sponsorship arrangement, the migrant worker cannot change jobs and leave the household without the consent of the employer. The arrangement increases the migrant's vulnerability to economic and human rights violations, thereby leading to distress escapes often seen as 'illegal' by employers and authorities. Most Ethiopian migrant workers in the Gulf states face severe labor rights violations and abuses, including forced labor, working for long hours without rest and overtime pay, confiscation of passports, irregular salary payment, or no payment at all [24]. The vulnerability of Ethiopian migrant workers in the Gulf was harshly manifested in the mass deportation of migrants from Saudi Arabia. Around 165,000 Ethiopian migrants were expelled from Saudi Arabia between November 2013 and March 2014 alone [71,97].
Conclusions, Gaps Identified and Research Recommendations
This paper has reviewed rural outmigration studies in Ethiopia, with a particular focus on labor migration. We sought to examine the impact of migration patterns on agriculture and the livelihoods of rural people left behind. We pursued a holistic approach to analyze the multidimensional aspects of migration, including the drivers and impacts of migration. The migration literature in Ethiopia largely deals with the causes of migration. These include environmental factors, including climate variability, drought, and soil degradation; agricultural and livelihood stresses, including problems related to access to land; young people's changing aspirations; and migration-facilitating social networks. There is a tendency in the literature to focus on a single factor or individual factors as the driver of outmigration. There is a lack of systematic analysis regarding how a mix of different factors could interact and shape migration processes.
Our analysis revealed that there had been few systematic studies on the impacts of outmigration on agriculture and the livelihoods of rural people left behind. There are indications that migration-induced remittances enhance household food security and the welfare of left-behind family members. Rural outmigration facilitates the diversification of income sources. Rural households receiving remittances from outmigrant family members gain more purchasing power. They can use remitted income to purchase agricultural inputs essential for crop production. In agricultural communities where there is an acute land shortage and high population pressure, rural outmigration can alleviate the pressure on scarce land resources. On the other hand, outmigration can reduce household farm labor supply and impose additional labor on left-behind family members. The benefits of migration can feed into rural infrastructure development through self-help community development activities. However, most of the literature does not directly deal with the impact of migration on source communities and households left behind.
We suggest that future research should include an in-depth analysis of the impacts of outmigration on sending communities and households left behind. An in-depth analysis should be conducted on the diverse impacts of outmigration on agriculture, rural livelihoods, and rural transformations. Likewise, we suggest that future research should consider the multifaceted aspects of outmigration, its diverse drivers, and how multiple drivers of outmigration interact with each other to influence households' and individuals' decision-making processes of outmigration.
Surgical outcomes of Korean ulcerative colitis patients with and without colitis-associated cancer
AIM: To determine the clinicopathologic characteristics of surgically treated ulcerative colitis (UC) patients, and to compare the characteristics of UC patients with colitis-associated cancer (CAC) to those without CAC. METHODS: Clinical data on UC patients who underwent abdominal surgery from 1980 to 2013 were collected from 11 medical institutions. Data were analyzed to compare the clinical features of patients with CAC and those of patients without CAC. RESULTS: Among 415 UC patients, 383 (92.2%) underwent total proctocolectomy, and of these, 342 (89%) underwent ileal pouch-anal anastomosis. CAC was found in 47 patients (11.3%). Adenocarcinoma was found in 45 of these patients; the others had either neuroendocrine carcinoma or lymphoma. Compared with UC patients without CAC, those with CAC were characteristically older at the time of diagnosis, had a longer disease duration, more frequently underwent laparoscopic surgery, and were less frequently given preoperative steroid therapy (P < 0.001-0.035). During the 37-mo mean follow-up period, the 3-year overall survival rate was 82.2%. CONCLUSION: Most Korean UC patients experience early disease exacerbation or complications. Approximately 10% of UC patients had CAC, and UC patients with CAC were diagnosed later, had a longer disease duration, and received less steroid treatment than UC patients without CAC.
INTRODUCTION
Despite the growing use of medical salvage treatment, surgery, including total proctocolectomy (TPC), remains a cornerstone for managing ulcerative colitis (UC). Surgery should be regarded as a life-saving procedure for patients with acute severe colitis and must be seriously considered in any medically intractable patient or patient with colonic dysplasia or malignancy [1] . A recent study of ileal pouch-anal anastomosis (IPAA) showed excellent quality of life and a good functional outcome in UC patients treated with this modality [2] . Colitis-associated cancer (CAC) is a well-recognized complication of UC [3] . The overall prevalence of CAC in UC patients is 3.7%, and cases of CAC in UC patients account for only 1% of all colorectal cancer (CRC) cases observed in the Western population [3,4] . There is also a general consensus that patients with longstanding, extensive UC have an increased risk of developing CAC [3,5] .
Although the prevalence of UC is lower in South Korea than in Western countries, the number of patients with UC as well as those with UC and CAC has increased steadily since 1980 [6,7] . There are clear ethnic differences in inflammatory bowel disease (IBD) between Asian and Western populations [8] . The present study is primarily intended to fulfill the current lack of information on the clinicopathologic characteristics of UC patients who undergo surgical treatment in South Korea. We also attempted to compare the clinical characteristics of surgically treated UC patients with and without CAC.
Data collection
The data for biopsy-proven UC patients who underwent abdominal surgery from January 1980 to July 2013 were collected retrospectively. The surgeries were performed at 11 different medical institutions, i.e., ten university hospitals and one colorectal clinic. The data of 419 patients were initially collected, and four patients were excluded because they had not undergone surgery for UC. Thus, data from a total of 415 UC patients were analyzed to compare the clinical variables of UC patients with cancer to those of patients without cancer. The variables were gender, family history, age at diagnosis, symptom duration before diagnosis, preoperative medication, the indication for surgery, the presence of primary sclerosing cholangitis (PSC), the extent of colonic involvement, the type of surgery, the postoperative complications, and mortality. In the patients with cancer, additional data were gathered, including preoperative identification of cancer or dysplasia, preoperative serum carcinoembryonic antigen (CEA) level, pathologic data, adjuvant chemotherapy or radiation therapy, recurrence of cancer, and survival status at the time of the last follow-up. Histologically, tumors were classified as either low-grade (well-differentiated or moderately differentiated adenocarcinoma) or high-grade (poorly differentiated adenocarcinoma or mucinous or signet-ring cell carcinoma). An early complication was defined as one occurring within 90 d after the main surgical intervention. The study protocol was approved by the Institutional Review Board of each medical institution. The mean follow-up was 68.4 mo (range: 0-286 mo).
Statistical analysis
A cross-table analysis using Pearson's χ² test or Fisher's exact test, as appropriate, was used to compare the discrete variables of patients with cancer to those of patients without cancer. The Student's t-test was used for between-group comparisons of continuous variables. Among UC patients with cancer, recurrence and overall survival were used to evaluate the clinical outcome. Survival outcomes were compared using the Kaplan-Meier method with a log-rank test. All reported P values are two-sided, and P < 0.05 was considered to indicate statistical significance. SPSS software version 18.0 (SPSS, Chicago, IL, United States) was used for statistical analysis.
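The group comparisons described above can be sketched in a few lines of Python; the snippet below is a minimal illustration on synthetic data (all variable names, group sizes, and numbers are hypothetical stand-ins, not the study data), using scipy and lifelines in place of SPSS.

```python
import numpy as np
from scipy import stats
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical 2x2 table for a discrete variable (e.g. preoperative steroid
# use): rows = CAC / no CAC, columns = yes / no.
table = np.array([[20, 27], [295, 73]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Hypothetical continuous variable (e.g. age at diagnosis) by group.
age_cac = rng.normal(50, 10, 47)
age_no_cac = rng.normal(38, 12, 368)
t_stat, p_t = stats.ttest_ind(age_cac, age_no_cac)

# Hypothetical survival comparison between two groups via the log-rank test.
dur_a, dur_b = rng.exponential(60, 38), rng.exponential(20, 7)
obs_a, obs_b = rng.integers(0, 2, 38), rng.integers(0, 2, 7)
res = logrank_test(dur_a, dur_b, event_observed_A=obs_a, event_observed_B=obs_b)

print(f"chi2 p={p_chi2:.3f}, t-test p={p_t:.3f}, log-rank p={res.p_value:.3f}")
```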
Clinical characteristics of the patients
Table 1 shows the clinical characteristics of the 415 UC patients who underwent surgical treatment. The mean preoperative medication period was 41.9 mo. Most of the patients (n = 368, 88.7%) were treated with 5-aminosalicylic acid (5-ASA). Steroids were administered to 315 patients (75.9%) as a first-line treatment for acute severe colitis before colectomy. In patients who failed to respond to steroids, infliximab (n = 33), cyclosporine (n = 26), and 6-mercaptopurine (n = 7) were used as second-line treatments. The most common reason for performing surgery was medical intractability, followed by dysplasia or malignancy, and bleeding. Mucosectomy was more frequently performed in patients with cancer (27.2%) than in those without cancer (14.7%), although the difference was not significant. Laparoscopic-assisted procedures were performed in 46 patients (11.1%). Complications occurred in 144 patients (34.7%), 79 of whom had early complications and 65 of whom had late complications. The most common early complication was ileus (n = 21), followed by bleeding (n = 16), anastomotic leakage (n = 15), intra-abdominal abscess (n = 8), and major wound dehiscence (n = 6). Late complications were pouchitis (n = 48), fistula (n = 9), and anastomotic stricture (n = 6). Preoperative steroid therapy was more frequently used in open surgery than in laparoscopic surgery (79% vs 50%, P < 0.001). The complication rate in patients undergoing preoperative steroid therapy was higher than that in patients who did not undergo preoperative steroid therapy (38% vs 26%, P = 0.04). There was no significant difference in the complication rates between open and laparoscopic surgery. Among the UC patients with CAC, the 3-year overall survival rate during the 37-mo mean follow-up period (range: 1-138 mo) was 82.2%.
The only patient with local recurrence had stage Ⅱ CAC.
She was diagnosed with recurrent pelvic lymph-node metastasis 1 year postoperatively and eventually died of cancer progression. Among the seven patients with stage Ⅳ CAC, five died, and two were alive, albeit with disease, at the end of the study period. In patients with stage 0, Ⅰ, and Ⅲ CAC, no recurrences or deaths were observed (Figure 2).
DISCUSSION
This multi-center study is the first nationwide report on the surgical outcomes of Korean UC patients and describes the recent status of surgically treated Korean UC patients. As our study involved most of the high-volume, tertiary-care medical institutions in South Korea, we calculated that the patient cohort (approximately n = 5800 patients) at these 11 institutions corresponded to approximately 45% of all recorded Korean patients with UC. This was estimated indirectly by counting the number of follow-up UC patients at each institution and using the population data of a previous KASID study [4]. Recently, colorectal surgeons have reported encountering increasing numbers of UC patients with CAC in their clinical practice. Therefore, our study was designed primarily to determine the incidence and characteristics of Korean UC patients with CAC. Unexpectedly, the number of these patients was relatively small, and their follow-up periods were too short to analyze survival outcomes.
CAC in patients with UC
Forty-seven patients (11.3%) had colorectal malignancies: 45 had adenocarcinomas, and the remaining two had a neuroendocrine carcinoma and a lymphoma, respectively. Malignancy was suspected before surgery in 44 patients (93.6%). Although there was no difference in the annual cumulative number of surgeries performed for UC, surgery for cancer with UC has been increasing recently (Figure 1). Compared with the UC patients without cancer, the UC patients with cancer were older at the time of diagnosis and surgery, and had a longer disease duration, more frequent laparoscopic surgery, less frequent preoperative steroid therapy, and a slightly lower rate of early postoperative complications (Table 1). Two patients were diagnosed with rectal adenocarcinomas 7 and 11 years after total colectomy with end ileostomy and underwent completion proctectomy. None of the patients had malignancy around the pouch or anal transitional zone (ATZ) after IPAA. Table 4 summarizes the characteristics of the 45 UC patients with colorectal adenocarcinoma. Two patients with rectal cancer underwent preoperative chemoradiation therapy. Adjuvant chemotherapy was administered to 24 patients (53.3%) with advanced-stage disease.
In the clinical course of UC, approximately 4% to 9% of UC patients will require colectomy within the first year of diagnosis, whereas the risk of requiring colectomy after the first year is 1% per year [9] . A European population-based study revealed a 7.5% colectomy rate during the 5-year follow-up period [10] . In our study, two-thirds of the UC patients underwent surgery due to exacerbation of their disease (severe colitis), and approximately 20% of the UC patients underwent surgery due to complicating disease including massive bleeding or perforation. These patients had an average 4.3-year interval between diagnosis and surgery. Conversely, only 10% of the UC patients who underwent surgery had CAC. These patients had different clinical characteristics, such as a later diagnosis, disease duration of more than 10 years, and a lower rate of preoperative steroid therapy than patients without CAC.
Needless to say, the standard-of-care surgery for UC is TPC with IPAA, which most of the patients in this study received. IPAA is a curative and well-tolerated procedure, although it is technically demanding and has a high morbidity rate. A recent study of IPAA demonstrated early complications in 33% and late complications in 29% of patients, thus resulting in an overall pouch excision rate of 5% [2]. Our study showed slightly lower complication rates than those seen in Fazio's study. However, this is not surprising, considering that their database was prospectively well-maintained at a single medical institution and that our databases were collected at 11 medical institutions over a short period of time. IPAA can be performed in one, two, or three stages. In our study, 84% of the IPAAs were performed using a two-stage procedure, which gave similar results to those reported in a recent, large-scale cohort study [2]. In patients undergoing a three-stage procedure, completion proctectomy or rectal surveillance is very important. A recent study reported that only 65% of patients completed IPAA after subtotal colectomy, 40% complied with rectal surveillance, and two patients developed rectal cancers, which is consistent with our study results [11]. The "double stapled" ileal J pouch-anal anastomosis is the most popular standard pouch-anal anastomosis method, while mucosectomy and hand-sewn anastomosis are reserved for patients with dysplasia or cancer [2,12]. However, whether mucosectomy protects against the development of ATZ and pouch cancer is unclear, and controversy exists over whether the beneficial effect of mucosectomy in preventing neoplasia is outweighed by its negative effect on ileal pouch function [13]. Although mucosectomy was frequently performed on our UC patients with cancer, it was difficult to verify the benefits of mucosectomy for preventing ATZ cancer due to the short follow-up period in our study. A recent review also showed that 32 UC patients had cancers in the ATZ; of these patients, 28 underwent mucosectomy. The study concluded that mucosectomy does not necessarily eliminate cancer risk in the ATZ [14]. Laparoscopic IPAA for UC is feasible; however, to date, the evidence in the literature is still inconclusive. Current data suggest that it allows a shorter hospital stay, a shorter ileus, faster recovery, and less postoperative pain, along with better cosmesis when minimally invasive surgery is employed. Significantly longer operative times are universally reported when laparoscopy is employed [15]. In our study, only 11% of the UC patients underwent laparoscopic-assisted surgery. Among these, the complication rates did not differ from those of open surgery and were closely correlated with the infrequent use of preoperative steroid therapy. Many studies have suggested that patients who are taking high-dose steroids are at an increased risk of early complications after IPAA [16,17]. As cumulative evidence shows that laparoscopic surgery for CRC is not inferior to open surgery with respect to patient survival and cancer recurrence rates [18], laparoscopic IPAA surgery might be feasible in selected UC patients with cancer.
Long disease duration, male sex, a young age at diagnosis of UC, extensive colitis, and PSC are well-known risk factors for developing CAC [5,14,19] .
Disease duration is the most important risk factor for UC-associated CRC, with incidence rates corresponding to cumulative probabilities of 2% by 10 years, 8% by 20 years, and 18% by 30 years [3].
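As a back-of-the-envelope illustration (our addition, not part of the cited study), one can convert these cumulative probabilities into average annual hazards under a simplifying constant-hazard model; the numbers show the implied annual risk accelerating with disease duration.

```python
import math

# Average annual hazard implied by cumulative incidence p over t years,
# assuming a constant-hazard model: p = 1 - exp(-lam * t).
for p, t in [(0.02, 10), (0.08, 20), (0.18, 30)]:
    lam = -math.log(1 - p) / t
    print(f"{p:.0%} by {t} y -> ~{lam:.4f} per year")
# Output: ~0.0020, ~0.0042, ~0.0066 per year, i.e. the implied annual risk
# roughly triples from the first decade of disease to the third.
```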
As our study included UC patients who underwent surgery, it was difficult to determine the risk factors for UC-associated CRC. We also found that the duration of UC was longer in patients with CAC than in patients without CAC. A previous KASID study of UC-associated CRC in South Korea revealed that the overall prevalence of CRC was 0.37%, the mean age at diagnosis was 49.6 years, and the mean duration of UC was 11.5 years, all of which are consistent with our study results [4] . During the KASID study period of 1970 to 2005, Kim et al [4] found 26 UC patients with CAC. However, 80% of the UC patients with CAC in our study were identified after 2005 and had earlier disease stages than those in the KASID study.
Although it was difficult to identify changes in the management and preventive strategies for CAC during the study period, our findings might be explained by the increase in the UC cohort, as well as by the recent increase in the use of surveillance colonoscopy for the prevention of CAC. In addition, a very low incidence of PSC was consistently found in both studies. By comparing our results with those of the previous KASID study, we also verified that the incidence of UC-associated CRC is rapidly increasing. Compared with sporadic CRC, the carcinogenesis of UC-associated CRC is different, as it develops from dysplasia in a carcinogenic pathway known as the dysplasia-carcinoma sequence [14]. Interestingly, P53 mutations occur earlier in IBD-associated cancer than in sporadic CRC, whereas APC mutations, a key initiating event, occur later in IBD-associated cancer than in sporadic CRC [20]. Furthermore, microsatellite instability (MSI) is frequently observed in UC patients [21], although MSI in IBD shows infrequent MLH1 hyper-methylation, which is a dominant feature of sporadic CRC [22]. These molecular genetic differences between IBD-associated cancer and sporadic CRC might be responsible for their different clinicopathologic features. UC-associated CRC more frequently presents with a mucinous or signet-ring-cell histology (17%-21%) than sporadic CRC [23,24], which is consistent with our results (20%). A previous Japanese study indicated that the frequent mucinous or signet-ring-cell histology in UC-associated cancer contributed to its poorer prognosis compared with that of sporadic CRC [23].
Whether the survival rate of UC-associated CRC is poorer than that of sporadic CRC is controversial. Earlier studies showed similar survival rates for UC-associated CRC and sporadic CRC [25,26]. However, recent well-designed Danish and Japanese studies revealed slightly poorer survival rates for UC-associated CRC patients [23,27]. In Norwegian and Swedish population-based studies, the prognosis of IBD-associated CRC was poorer than that of sporadic CRC (a mortality rate ratio of 3.71 for Norwegians and an overall hazard ratio of 1.26 for Swedes) [28,29]. As all of these studies had the common limitation of a small patient cohort, it is difficult to obtain an accurate prognosis for UC-associated CRC patients. Although our study also had the same limitations of a small number of cases of UC-associated CRC as well as a short follow-up period, the survival of UC patients with CAC in our study was much better than that seen in recent studies. Except for stage Ⅳ patients, among the 38 patients with stage 0 to Ⅲ disease, only the one with a recurrence died. Long-term follow-up and further patient enrollment might help provide an accurate prognosis for Korean UC patients with CAC in the future.
This study had some of the limitations of a retrospective study. There were differences in the reliabilities of the databases, which differed from institution to institution. Although a few institutions had prospectively well-maintained databases, others did not. As we previously mentioned, our study population was very limited as it only included patients who underwent surgery. Therefore, it is difficult to determine the risk factors for UC-associated CRC from this cohort.
In conclusion, Korean UC patients who underwent surgery had two distinct features. Most of the treated patients had early disease exacerbation or complications, and approximately 10% of the surgically treated UC patients had CAC; the latter group was characterized by a later diagnosis, a longer disease duration, and less preoperative steroid treatment compared with those without CAC.
|
2018-04-03T00:11:00.623Z
|
2015-03-28T00:00:00.000
|
{
"year": 2015,
"sha1": "6911c73909b1c43442e3d7fc9f51ecff98cd57ae",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v21.i12.3547",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "3934cba32abe065a8552f040ce9d4b71f67d212a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
270763590
|
pes2o/s2orc
|
v3-fos-license
|
Difluoroester solvent toward fast-rate anion-intercalation lithium metal batteries under extreme conditions
Anion-intercalation lithium metal batteries (AILMBs) are appealing due to their low cost and the fast intercalation/de-intercalation kinetics of graphite cathodes. However, the safety and cyclability of existing AILMBs are constrained by the scarcity of compatible electrolytes. Herein, we showcase that a difluoroester can be applied as an electrolyte solvent to realize high-performance AILMBs; it not only endows high oxidation resistance, but also efficiently tunes the solvation shell to enable a highly reversible and kinetically fast cathode reaction beyond the trifluoro counterpart. The difluoroester-based electrolyte demonstrates nonflammability, high ionic conductivity, and electrochemical stability, along with excellent electrode compatibility. The Li||graphite AILMBs reach a high durability of 10000 cycles with only a 0.00128% capacity loss per cycle under fast cycling at 1 A g−1, and retain ~63% of room-temperature capacity when discharging at −65 °C, meanwhile supplying stable power output under deformation and overcharge conditions. The electrolyte design paves a promising path toward fast-rate, low-temperature, durable, and safe AILMBs.
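As a quick arithmetic check of the cycling claim (our illustration, not from the paper), a per-cycle capacity loss of 0.00128% over 10000 cycles implies roughly 87-88% capacity retention, depending on whether the loss is taken as linear or compounding:

```python
loss_per_cycle = 0.00128 / 100   # 0.00128% expressed as a fraction
cycles = 10_000

linear_retention = 1 - loss_per_cycle * cycles       # ~0.872
compound_retention = (1 - loss_per_cycle) ** cycles  # ~0.880

print(f"linear: {linear_retention:.1%}, compounded: {compound_retention:.1%}")
```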
|
2024-06-28T06:17:12.746Z
|
2024-06-26T00:00:00.000
|
{
"year": 2024,
"sha1": "dfba167e2e06e53ca72c00a0febf726cef055db3",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1038/s41467-024-49795-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "47593c7ed1738d83d87ca2cde755567e3d5c5625",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
252438703
|
pes2o/s2orc
|
v3-fos-license
|
On PFH and HF spectral invariants
In this note, we define the link spectral invariants by using the cylindrical formulation of the quantitative Heegaard Floer homology. We call them HF spectral invariants. We deduce a relation between the HF spectral invariants and the PFH spectral invariants by using closed-open morphisms and open-closed morphisms. For the sphere, we prove that the homogenized HF spectral invariants at the unit are equal to the homogenized PFH spectral invariants. Moreover, we show that the homogenized PFH spectral invariants are quasimorphisms.
Introduction and main results
Let Σ be a closed surface of genus g and ω a volume form of volume 1 (of course, the number 1 can be replaced by any positive number). Given a volume-preserving diffeomorphism ϕ : Σ → Σ, M. Hutchings defines a version of Floer homology for (Σ, ω, ϕ) which he calls periodic Floer homology [18,20], abbreviated as PFH. PFH gives rise to a family of numerical invariants called PFH spectral invariants [4] (also see [6,16] for the non-Hamiltonian case).
A link L on Σ is a disjoint union of simple closed curves. Under certain monotonicity assumptions, D. Cristofaro-Gardiner, V. Humilière, C. Mak, S. Seyfaddini and I. Smith show that the Lagrangian Floer homology of the Lagrangian pair (Sym^d ϕ_H(L), Sym^d L) in Sym^d Σ, denoted by HF(Sym^d Σ, Sym^d L, Sym^d ϕ_H), is well-defined and non-vanishing [7]. Here ϕ_H is a Hamiltonian symplectomorphism. They call the Floer homology HF(Sym^d Σ, Sym^d L, Sym^d ϕ_H) quantitative Heegaard Floer homology, abbreviated as QHF. For any two different Hamiltonian symplectomorphisms, the corresponding QHF are canonically isomorphic to each other. Let HF(Sym^d L) denote an abstract group that is the union of all the QHF defined by Hamiltonian symplectomorphisms, modulo the canonical isomorphisms. Using R. Leclercq and F. Zapolsky's general results [31], they define a set of numerical invariants parameterized by the classes of QHF, denoted by c^link_{L,η}, where η is a fixed nonnegative constant. These numerical invariants are called link spectral invariants. Even though these two kinds of spectral invariants come from different Floer theories, they satisfy many parallel properties, so it is natural to study whether they are related. To this end, our strategy is to construct morphisms between the two Floer homologies. Because these two Floer theories are defined by counting holomorphic curves in manifolds of different dimensions, it is hard to define the morphisms directly. To overcome this issue, the author follows R. Lipshitz's idea [29] to define a homology by counting holomorphic curves in a 4-manifold, denoted by HF(Σ, ϕ_H(L), L) [13]. Moreover, the author proves that there is an isomorphism (1.1). A version of (1.2) has also been constructed by V. Colin, P. Ghiggini, and K. Honda [15] for a different setting. Using these morphisms, we obtain a partial result on the relation between PFH spectral invariants and HF spectral invariants [13].
In this note, we define the quantum product structures and spectral invariants for HF(Σ, ϕ_H(L), L), viewed as a Lagrangian Floer homology. Similar to the QHF, for any ϕ_H, the group HF(Σ, ϕ_H(L), L) is canonically isomorphic to an abstract group HF(Σ, L). The spectral invariants defined by HF(Σ, ϕ_H(L), L) are denoted by c_{L,η}. To distinguish them from the link spectral invariants c^link_{L,η}, we call c_{L,η} the HF spectral invariants instead. Via the isomorphism (1.1), we know that c_{L,η} is equivalent to c^link_{L,η} in a certain sense (see (2.16)).
The purpose of this paper is to study the properties of c_L and to try to understand the relations between c_L and c^pfh_d. Before we state the main results, let us recall the assumptions on a link. Definition 1.1. Fix a nonnegative constant η. Let L = ∪_{i=1}^d L_i be a disjoint union of d simple closed curves on Σ. We call L a link on Σ. We say a link L is η-admissible if it satisfies the following properties: A.1 The integer d satisfies d = k + g, where g is the genus of Σ and k > 1. The union ∪_{i=1}^k L_i is a disjoint union of contractible simple closed curves. For k + 1 ≤ i ≤ d, L_i is the cocore of a 1-handle, and for each 1-handle we have exactly one corresponding L_i.
A.2 We require that Σ − L = ∪_{i=1}^{k+1} B̊_i. Let B_i be the closure of B̊_i. Then B_i is a disk for 1 ≤ i ≤ k, and B_{k+1} is a planar domain with 2g + k boundary components. For 1 ≤ i ≤ k, the circle L_i is the boundary of B_i.
A picture of an admissible link is shown in Figure 1.
Figure 1: The red circles form the admissible link.
Note that if L is admissible, so is ϕ(L), where ϕ is any Hamiltonian symplectomorphism. We assume that the link is η-admissible throughout. Remark 1.1. To define HF(Σ, ϕ_H(L), L), we need stronger assumptions on L than in [7] for technical reasons.
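As a sanity check on conditions A.1-A.2 (our addition, not in the original text), the Euler characteristics of the complementary regions add up correctly:

```latex
% Euler-characteristic check for an admissible link: the complement consists
% of k disks B_1, ..., B_k (each with \chi = 1) and one planar domain B_{k+1}
% with 2g + k boundary circles, so \chi(B_{k+1}) = 2 - (2g + k). The gluing
% circles have \chi = 0, hence by inclusion-exclusion
\[
  \chi(\Sigma) = \underbrace{k \cdot 1}_{\text{disks}}
  + \underbrace{\bigl(2 - (2g + k)\bigr)}_{B_{k+1}} = 2 - 2g,
\]
% which is indeed the Euler characteristic of the closed genus-g surface.
```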
In the first part of this note, we study the properties of the spectral invariants c_{L,η}. The results are summarized in the following theorem; these properties are parallel to the ones in [7,31]. Remark 1.2. At this moment, we have not confirmed whether the isomorphism (1.1) is canonical, but we believe that it is true from the viewpoint of the tautological correspondence. Also, we do not know whether the product μ^2 agrees with the usual quantum product in monotone Lagrangian Floer homology. So we cannot deduce Theorem 1 from (1.1) and the results in [7,31] directly, but the methods in the proof of Theorem 1 are basically the same as in [7,31].
In [13], we define the closed-open morphisms (1.2). We use the same techniques to construct a "reverse" of the closed-open morphisms, called open-closed morphisms. Theorem 2. Let L be an admissible link and ϕ_H a d-nondegenerate Hamiltonian symplectomorphism. Fix a reference 1-cycle γ_0 with degree d and a base point x ∈ L. Let Z_ref ∈ H_2(W, x_H, γ_0) be a reference relative homology class. Let (W, Ω_H, L_H) be the open-closed symplectic cobordism defined in Section 5. Then for a generic admissible almost complex structure J ∈ J(W, Ω_H), the triple (W, Ω_H, L_H) induces a homomorphism (the open-closed morphism) satisfying the following properties: • (Partial invariance) Suppose that ϕ_H, ϕ_G satisfy the following conditions (see Definition 2.1): ♠.1 Each periodic orbit of ϕ_H with degree less than or equal to d is either d-negative elliptic or hyperbolic.
♠.2 Each periodic orbit of ϕ_G with degree less than or equal to d is either d-positive elliptic or hyperbolic.
Fix reference relative homology classes
where A_ref is the class defining the continuous morphism I^{H,G}_{0,0}. Then for any generic admissible almost complex structures J_H ∈ J(W, Ω_H) and J_G ∈ J(W, Ω_G), we have a commutative diagram relating the open-closed morphisms, in which the PFH side is the cobordism map induced by the symplectic cobordism (2.9) and the QHF side is the continuous morphism I^{H,G}_{0,0} defined in Section 3.
• (Non-vanishing) There are nonzero classes e_♦ ∈ HF(Σ, L) and σ^x_{♦H} ∈ PFH(Σ, ϕ_H, γ^x_H) such that if ϕ_H satisfies the condition (♠.2), then the open-closed morphism matches e_♦ with σ^x_{♦H} via the canonical isomorphism j^x_H of (2.14). In particular, the open-closed morphism is non-vanishing.
There is a special class e ∈ HF(Σ, L) called the unit (see Definition 3.7). Suppose that the link L is 0-admissible, and define the spectral invariants at the unit accordingly. Then for any ϕ ∈ Ham(S², ω), the homogenized HF spectral invariant at the unit equals the homogenized PFH spectral invariant, where μ_{L,η=0} and μ^pfh_d denote the homogenized spectral invariants (see Section 6 for details). In particular, for any two 0-admissible links L, L′ with the same number of components, we have μ_{L,η=0}(ϕ, e) = μ_{L′,η=0}(ϕ, e) = μ_{L,η=0}(ϕ, e_♦) = μ_{L′,η=0}(ϕ, e_♦). Remark 1.3. For technical reasons, the cobordism maps on PFH are defined by using Seiberg-Witten theory [28] and the isomorphism "PFH=SWF" [30]. Nevertheless, the proof of Theorem 2 needs a holomorphic curve definition. The assumptions ♠.1, ♠.2 are used to guarantee that the PFH cobordism maps can be defined by counting holomorphic curves. According to the results in [11], the Seiberg-Witten definition agrees with the holomorphic curve definition in these special cases. We believe that the assumptions ♠.1, ♠.2 can be removed if one could define the PFH cobordism maps by purely holomorphic curve methods.
By Proposition 3.7 of [11], the conditions ♠.1, ♠.2 can be achieved by a C¹-perturbation of the Hamiltonian functions. More precisely, fix a metric g_Y on S¹ × Σ. For any δ > 0 and any Hamiltonian function H, there is a Hamiltonian function H′, δ-close to H in the C¹-topology, satisfying the conditions. Even though Theorem 2 relies on the conditions ♠.1, ♠.2, these estimates and the Hofer-Lipschitz property imply that Corollaries 1.2, 1.3 hold for general Hamiltonian functions.
From [7], we know that the homogenized link spectral invariants are homogeneous quasimorphisms. We show that this is also true for μ^pfh_d(ϕ). Recall that a homogeneous quasimorphism on a group G is a map μ : G → R such that sup_{g,h∈G} |μ(gh) − μ(g) − μ(h)| < ∞ and μ(g^n) = nμ(g) for every g ∈ G and n ∈ Z.
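For context, here is the standard homogenization construction (a sketch under the assumption that the underlying spectral invariant c has uniformly bounded defect; the note's precise normalization may differ):

```latex
% Homogenization of a spectral-type invariant c:
\[
  \mu(\phi) := \lim_{n \to \infty} \frac{c(\phi^n)}{n}.
\]
% If |c(\phi\psi) - c(\phi) - c(\psi)| \le D for all \phi, \psi (bounded
% defect), then the sequence c(\phi^n) + D is subadditive, so the limit
% exists by Fekete's lemma; by construction \mu(\phi^k) = k\,\mu(\phi), and
\[
  \sup_{\phi, \psi} \bigl| \mu(\phi\psi) - \mu(\phi) - \mu(\psi) \bigr| < \infty,
\]
% so \mu is a homogeneous quasimorphism.
```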
Relevant results
The Calabi property in Theorem 1 is in fact an analogue of the "ECH volume property" for embedded contact homology; it was first discovered by D. Cristofaro-Gardiner, M. Hutchings, and V. Ramos [3]. Embedded contact homology (abbreviated as ECH) is a sister version of periodic Floer homology. The constructions of ECH and PFH are the same; the only difference is that they are defined for different geometric structures. If a result holds for one of them, then one could expect a parallel result for the other. The Calabi property also holds for PFH. This was proved by O. Edtmair and Hutchings [16], and independently by D. Cristofaro-Gardiner, R. Prasad and B. Zhang [6]. The Calabi property for QHF was discovered in [7].
Recently, D. Cristofaro-Gardiner, V. Humilière, C. Mak, S. Seyfaddini and I. Smith showed that the homogenized link spectral invariants satisfy a "two-term Weyl law" for a class of autonomous Hamiltonian functions on the sphere [8]. We believe that the HF spectral invariants agree with the link spectral invariants. If one could show that this is true, Corollary 1.3 would imply that the homogenized PFH spectral invariants agree with the homogenized link spectral invariants. This suggests that the homogenized PFH spectral invariants should also satisfy the two-term Weyl law.
Periodic Floer homology
In this section, we review the definition of twisted periodic Floer homology and PFH spectral invariants. For more details, please refer to [20,21,4,16].
Suppose that Σ is a closed surface and ω is a volume form of volume 1. Given a Hamiltonian function H : S¹_t × Σ → R, we have a unique vector field X_{H_t}, called the Hamiltonian vector field, satisfying the relation ω(X_{H_t}, ·) = d_Σ H_t. Let ϕ^t_H be the flow generated by X_{H_t}, i.e., ∂_t ϕ^t_H = X_{H_t} ∘ ϕ^t_H and ϕ⁰_H = id. Fix a symplectomorphism ϕ. Define the mapping torus by Y_ϕ := ([0, 1]_t × Σ) / ((1, x) ∼ (0, ϕ(x))). There is a natural vector field R := ∂_t and a closed 2-form ω_ϕ on Y_ϕ induced from the above quotient. The pair (dt, ω_ϕ) forms a stable Hamiltonian structure, and R is the Reeb vector field. Let ξ := ker π_* denote the vertical bundle of π : Y_ϕ → S¹.
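As a one-line check (our addition), the flow generated by X_{H_t} is indeed area-preserving, by Cartan's formula:

```latex
% The Hamiltonian flow preserves the volume form \omega:
\[
  \mathcal{L}_{X_{H_t}} \omega
  = d\,\iota_{X_{H_t}} \omega + \iota_{X_{H_t}}\, d\omega
  = d\,(d_\Sigma H_t) + 0 = 0,
\]
% using \iota_{X_{H_t}} \omega = d_\Sigma H_t and the fact that \omega is
% closed (it is a top-degree form on the surface \Sigma).
```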
A periodic orbit is a map γ : R/qZ → Y ϕ satisfying the ODE ∂ t γ(t) = R • γ(t). The number q ≥ 0 is called the period or degree of γ. Note that q is equal to the intersection number [γ] · [Σ].
A periodic orbit is called nondegenerate if the linearized return map does not have 1 as an eigenvalue. Nondegenerate periodic orbits are classified as either elliptic or hyperbolic according to the eigenvalues of their linearized return maps. The symplectomorphism ϕ is called d-nondegenerate if every closed orbit of degree at most d is nondegenerate.
Let γ be an elliptic periodic orbit with period q. We can find a trivialization of ξ such that the linearized return map is a rotation e^{2πiθ_t}, where {θ_t}_{t∈[0,q]} is a continuous function with θ_0 = 0. The number θ = θ_t|_{t=q} ∈ R/Z is called the rotation number of γ (see [21] for details). The following definitions explain the terminology in the assumptions ♠.1, ♠.2.
• γ is called d-positive elliptic if the rotation number θ lies in (0, q/d) mod 1.
• γ is called d-negative elliptic if the rotation number θ lies in (−q/d, 0) mod 1.
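The dichotomy above is easy to test mechanically; the following toy function (entirely our illustration, with hypothetical inputs, and assuming q ≤ d so that q/d ≤ 1) classifies an elliptic orbit from its rotation number.

```python
# Classify an elliptic orbit from its rotation number theta, period q,
# and degree bound d (assumes q <= d, as for orbits of degree at most d).

def classify_elliptic(theta: float, q: int, d: int) -> str:
    t = theta % 1.0              # representative of theta mod 1 in [0, 1)
    bound = q / d
    if 0.0 < t < bound:
        return "d-positive elliptic"   # theta in (0, q/d) mod 1
    if 1.0 - bound < t < 1.0:
        return "d-negative elliptic"   # theta in (-q/d, 0) mod 1
    return "neither (for this d)"

print(classify_elliptic(0.1, 1, 5))    # d-positive elliptic
print(classify_elliptic(-0.1, 1, 5))   # d-negative elliptic
print(classify_elliptic(0.5, 1, 5))    # neither (for this d)
```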
For our purpose, we assume that ϕ is Hamiltonian throughout (although the construction of PFH works for a general symplectomorphism). Under the Hamiltonian assumption, we have a diffeomorphism Y_{ϕ_H} ≅ S¹ × Σ (2.4). Let H_2(Y_ϕ, γ_+, γ_-) denote the set of 2-chains Z in Y_ϕ with ∂Z = γ_+ − γ_-, modulo boundaries of 3-chains. We call an element Z ∈ H_2(Y_ϕ, γ_+, γ_-) a relative homology class. This is an affine space over H_2(Y_ϕ; Z).
PFH complex
For a relative homology class Z ∈ H_2(Y_ϕ, γ_+, γ_-), Hutchings defines a topological index called the J_0 index [19] that measures the topological complexity of the curves. Fix a trivialization τ of ξ. The J_0 index is given by the formula J_0(Z) = −c_τ(ξ|_Z) + Q_τ(Z) + CZ^J_τ(Z), where c_τ(ξ|_Z) is the relative Chern number, Q_τ(Z) is the relative self-intersection number, and CZ_τ denotes the Conley-Zehnder index terms. There is another topological index called the ECH index, defined by I(Z) = c_τ(ξ|_Z) + Q_τ(Z) + CZ^I_τ(Z). We refer readers to [18,19] for more details on I and J_0. Fix a reference 1-cycle γ_0 that is positively transverse to ξ, and assume that [γ_0] · [Σ] > g(Σ) throughout. An anchored orbit set is a pair (γ, [Z]), where γ is an orbit set and [Z] ∈ H_2(Y_ϕ, γ, γ_0). The chain complex PFC(Σ, ϕ, γ_0) is the set of formal sums (possibly infinite) Σ a_{(γ,[Z])} (γ, [Z]), where a_{(γ,[Z])} ∈ Z/2Z and each (γ, [Z]) is an anchored PFH generator. Also, for any C ∈ R, we require that there are only finitely many (γ, [Z]) such that ∫_Z ω_{ϕ_H} > C and a_{(γ,[Z])} ≠ 0.
Let Λ = {Σ_i a_i λ^{d_i} : a_i ∈ Z/2Z, d_i ∈ R, d_i → ∞} be the Novikov ring. Then PFC(Σ, ϕ, γ_0) is a Λ-module, where the Λ-action on the anchored generators is defined via (2.6).
Differential on PFH
To define the differential, consider the symplectization X := R_s × Y_ϕ. An almost complex structure on X is called admissible if it preserves ξ, is R-invariant, sends ∂_s to R, and its restriction to ξ is compatible with ω_ϕ. The set of admissible almost complex structures is denoted by J(Y_ϕ, ω_ϕ). Given J ∈ J(Y_ϕ, ω_ϕ) and orbit sets γ_+ = {(γ_{+,i}, m_i)}, γ_- = {(γ_{-,j}, n_j)}, let M^J(γ_+, γ_-, Z) be the moduli space of punctured holomorphic curves u : Ḟ → X with the following properties: u has positive ends at covers of γ_{+,i} with total multiplicity m_i, negative ends at covers of γ_{-,j} with total multiplicity n_j, and no other ends. Also, the relative homology class of u is Z. Note that M^J(γ_+, γ_-, Z) admits a natural R-action.
The differential ∂_J on PFC(Σ, ϕ, γ_0) is defined by counting the I = 1 holomorphic curves in M^J(γ_+, γ_-, Z) modulo the R-translation. The homology of (PFC(Σ, ϕ, γ_0), ∂_J) is called the twisted periodic Floer homology, denoted by PFH(Σ, ϕ, γ_0)_J. By Corollary 1.1 of [30], PFH is independent of the choice of almost complex structure and of the representative in the Hamiltonian isotopy class of ϕ. Note that PFH(Σ, ϕ, γ_0) is a Λ-module because the action (2.6) descends to the homology.
The U-map
There is a well-defined map U : PFH(Σ, ϕ, γ_0) → PFH(Σ, ϕ, γ_0). The definition of the U-map is similar to that of the differential: instead of counting I = 1 holomorphic curves modulo the R-translation, the U-map is defined by counting I = 2 holomorphic curves that pass through a fixed point z, modulo the R-translation. A homotopy argument shows that the U-map is independent of the choice of z. For more details, please see Section 2.5 of [26].
Cobordism maps on PFH
Let (X, Ω_X) be a symplectic 4-manifold. Suppose that there exists a compact subset K ⊂ X such that, outside K, (X, Ω_X) agrees with the union of a positive symplectization end over Y_{ϕ_+} and a negative symplectization end over Y_{ϕ_-}; we refer to this condition as (2.8). Fix a reference homology class Z_ref ∈ H_2(X, γ_0, γ_1). The symplectic manifold (X, Ω_X) induces a homomorphism PFH^sw_{Z_ref}(X, Ω_X) between the PFH groups of its ends; this homomorphism is called a PFH cobordism map. Following Hutchings-Taubes's idea [23], the cobordism map PFH^sw_{Z_ref}(X, Ω_X) is defined by using Seiberg-Witten theory [28] and Lee-Taubes's isomorphism [30]. Even though the cobordism maps are defined via Seiberg-Witten theory, they satisfy some nice properties called the holomorphic curve axioms, meaning that the PFH cobordism maps count holomorphic curves in a certain sense. For the precise statement, we refer readers to [11] and the Appendix of [13].
In this paper, we will focus on the following special cases of (X, Ω X ).
1. Given two Hamiltonian functions H_+, H_-, define a homotopy H_s := χ(s)H_+ + (1 − χ(s))H_-, where χ is a cutoff function such that χ = 1 for s ≥ R_0 > 0 and χ = 0 for s ≤ 0. Define the associated cobordism (X, Ω_X) over R_s × S¹ × Σ; this is the cobordism (2.9), and it is a symplectic cobordism if R_0 is sufficiently large. Note that we identify Y_{ϕ_{H_±}} with S¹ × Σ implicitly by using (2.4). Fix a reference relative homology class Z_ref ∈ H_2(X, γ_0, γ_1). Then we have a cobordism map that depends only on H_+, H_-, and the relative homology class Z_ref.
2. Let (B_-, ω_{B_-}, j_{B_-}) be a sphere with a puncture p. Suppose that we have a neighbourhood U of p that can be identified with a half-cylinder end, on which the complex structure maps ∂_s to ∂_t. Let χ : R → R be a cutoff function such that χ(s) = 1 when s ≥ R_0 and χ(s) = 0 when s ≤ R_0/10. Using χ, take the associated pair (X_-, Ω_{X_-}) as in (2.10). For sufficiently large R_0 > 0, (X_-, Ω_{X_-}) is a symplectic manifold satisfying (2.8).
In the case of (2.9), if H_+ satisfies ♠.1 and H_- satisfies ♠.2, the author shows that the cobordism map PFH^sw_{Z_ref}(X, Ω_X) can be defined alternatively by purely holomorphic curve methods [11]. The holomorphic curve definition will be used to prove Theorem 2; that is why we need the assumptions ♠.1, ♠.2 in the statement.
Filtered PFH
We define a functional A^η_H on the anchored orbit sets, obtained from the usual action functional by a deformation involving the J_0 index; we write A^η_H(γ, [Z]), or A^η_H(γ) for short. Even though we add a perturbation term to the usual action functional, this still gives a filtration on the PFH complex.
If ϕ_H is degenerate, we define the spectral invariants by approximation: take d-nondegenerate ϕ_{H_n} such that {H_n}^∞_{n=1} C^∞-converges to H, let σ_n ∈ PFH(Σ, ϕ_{H_n}, γ_0) be the class corresponding to σ, and define the invariant of σ as the limit of the invariants of the σ_n.
One could define the PFH spectral invariants using A^η_H for η > 0; however, we then cannot prove the Hofer-Lipschitz property by the methods in [4], because Lemma 2.2 is not true for holomorphic currents in the symplectic cobordisms.
Quantitative Heegaard Floer homology
In this section, we review the cylindrical formulation of QHF defined in [13].
Cylindrical formulation of QHF
Fix an admissible link L = ∪_{i=1}^d L_i and a Hamiltonian symplectomorphism ϕ_H. We always assume that ϕ_H is nondegenerate in the sense that ϕ_H(L) intersects L transversely.
A Reeb chord of ϕ_H is a d-union of paths associated to the intersection points of ϕ_H(L) and L. Let L be a union of Lagrangian submanifolds in (E, Ω), and let y_± be two Reeb chords. Then we have the notion of a d-multisection in E. Roughly speaking, this is a map u : Ḟ → E which is asymptotic to y_± as s → ±∞ and satisfies the Lagrangian boundary conditions, where Ḟ is a Riemann surface with boundary punctures. If a d-multisection is holomorphic, we call it an HF curve. The set of equivalence classes of the d-multisections is denoted by H_2(E, y_+, y_-). The ECH index and the J_0 index can also be generalized to the current setting, denoted by I(A) and J_0(A) respectively. The definitions of relative homology classes, HF curves, and the ECH index are postponed to Section 3, where we define these concepts in a slightly more general setting. Given a Reeb chord y, a capping of y is an equivalence class [A] in H_2(E, x_H, y)/ker(ω + ηJ_0). Define the complex CF(Σ, ϕ_H(L), L, x) to be the set of formal sums of cappings such that a_{(y,[A])} ∈ Z/2Z and, for any C ∈ R, there are only finitely many (y, [A]) with nonzero coefficient above the action level C. Together with the trivial strip at the base point, the same construction gives maps u^x_i and u′^x_i, which differ only near the ends. Let R be the Novikov ring defined analogously; to distinguish it from the one for PFH, we use different notation for the ring and the formal variable. Let M^J(y_+, y_-, A) denote the moduli space of HF curves that are asymptotic to y_± as s → ±∞ and have relative homology class A. Fix a generic J ∈ J_E. The differential is defined by counting the index-one HF curves modulo the R-translation. The homology of (CF(Σ, ϕ_H(L), L, x), ∂) is the cylindrical formulation of QHF; again, the Floer homology is an R-module. By Proposition 3.9 of [13], the homology is independent of the choices of J and H. For different choices of (J, H), there is an isomorphism between the corresponding QHF called a continuous morphism; more details about this point are given in Section 3. For two different choices of base points, the corresponding homologies are also isomorphic. Let HF(Σ, L) be the direct limit of the continuous morphisms. For any H, we have an isomorphism (2.14), and combining the isomorphism (1.1) with Lemma 6.10 of [7], we can identify HF(Σ, L) with HF(Sym^d L). Remark 2.1. Even though we only define the QHF for a Hamiltonian symplectomorphism ϕ_H, the above construction also works for a pair of Hamiltonian symplectomorphisms (ϕ_H, ϕ_K). Because ϕ_K(L) is also an admissible link, we just need to replace L by ϕ_K(L). The result is denoted by HF(Σ, ϕ_H(L), ϕ_K(L), x).
Filtered QHF and spectral invariants
Similarly to [31,7], we define an action functional on the generators, including a term η J_0(A). We remark that the term J_0(A) corresponds to ∆ · [ŷ] in [7], where ∆ is the diagonal of Sym^d Σ and ŷ is a capping of a Reeb chord y. This viewpoint is justified in Proposition 4.2 of [13].
Consider the subcomplex generated by the cappings of action at most a given level, together with the homomorphism induced by the inclusion.
Then the spectral invariant can be expressed alternatively as the infimum of the action levels at which the class lies in the image of this inclusion-induced homomorphism. Let HF(Sym^d Σ, Sym^d L, Sym^d ϕ_H) denote the QHF defined in [7]. Because QHF is independent of the choices of ϕ_H and x, we have an abstract group HF(Sym^d L) and a canonical isomorphism between them. Since the isomorphism (1.1) also preserves the action filtrations, we obtain the comparison (2.16) between the HF spectral invariants and the link spectral invariants.
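To make the min-max description concrete, here is a toy computation (entirely our illustration; the generators, actions, and boundaries are made up, and we work with a finite Z/2 complex): the spectral invariant of a class is the smallest action level at which some representing cycle appears.

```python
from itertools import combinations

# Toy filtered Z/2 "complex": each degree-1 generator has a real action, and
# we list the boundaries of the degree-2 generators. Chains are frozensets of
# generators; adding a boundary does not change the homology class.
actions = {"a": 1.0, "b": 2.0, "c": 3.0}
boundaries = [frozenset({"a", "b"})]

def spectral_invariant(cycle: frozenset) -> float:
    """Min over representatives of [cycle] of the max generator action."""
    best = float("inf")
    for r in range(len(boundaries) + 1):
        for combo in combinations(boundaries, r):
            rep = cycle
            for bd in combo:
                rep = frozenset(rep ^ bd)   # Z/2 addition = symmetric diff
            if rep:                          # skip the empty chain
                best = min(best, max(actions[g] for g in rep))
    return best

# [b] = [a] since a + b is a boundary, so the invariant is min(2.0, 1.0):
print(spectral_invariant(frozenset({"b"})))  # 1.0
```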
Morphisms on HF
In this section, we define the continuous morphisms, quantum product and unit on HF (Σ, L).
Moduli space of HF curves
In this subsection, we give the definition of HF curves, relative homology class, and the ECH index.
Let D_m be a disk with boundary punctures (p_0, p_1, ..., p_m), ordered counter-clockwise; see Figure 2. Let ∂_i D_m denote the boundary arc of D_m connecting p_{i−1} and p_i for 1 ≤ i ≤ m, and let ∂_{m+1} D_m be the boundary arc connecting p_m and p_0.
Fix a complex structure j_m and a Kähler form ω_{D_m} over D_m throughout. We say that D_m is a disk with strip-like ends if for each p_i we have a neighborhood U_i of p_i with a strip-like identification of U_i − {p_i}. Let π denote the projection of the fibration E_m → D_m; we call the part of (E_m, Ω_{E_m}) over such a neighborhood a (strip-like) end of (E_m, Ω_{E_m}) at p_i.
Let L = (L_1, ..., L_{m+1}) be a chain of d-disjoint unions of Lagrangian submanifolds in ∂E_m satisfying conditions C.1-C.3; in particular: C.3 Over the end at p_0 (under the identification (3.18)), the corresponding links are η-admissible and they are Hamiltonian isotopic to each other.
Let (E_m, Ω_m, L_m) and (E_n, Ω_n, L_n) be two symplectic fibrations. Suppose that the negative end of (E_m, Ω_m, L_m) agrees with the i-th positive end of (E_n, Ω_n, L_n); then we can glue them along these ends with a gluing parameter R. Most of the time, the number R is not important, and we suppress it from the notation.
We also require that the preimage of each Lagrangian in the chain consists of exactly one component of ∂Ḟ.
Let H_2(E_m, y_1, ..., y_m, y_0) be the set of continuous maps satisfying the conditions 1), 2), 3), modulo the relation ∼, where u_1 ∼ u_2 if and only if their compactifications are equivalent in the homology of the compactification. An element A ∈ H_2(E_m, y_1, ..., y_m, y_0) is called a relative homology class. An easy generalization is that one could replace the Reeb chords by the reference chords x_H in the above definition.
Definition 3.2. An almost complex structure J on E_m is called adapted to the fibration if the projection π is (J, j_m)-holomorphic. Let J_tame(E_m) denote the set of the almost complex structures adapted to the fibration. Fix an almost complex structure J. If a d-multisection u is J-holomorphic, then u is called an HF curve.
Using the admissible 2-form ω_{E_m}, we have a splitting of TE_m into horizontal and vertical subbundles. With respect to this splitting, an almost complex structure J ∈ J_tame(E_m) can be written in the lower-triangular block form J = ( J^hh 0 ; J^hv J^vv ). Therefore, J is Ω_{E_m}-compatible if and only if J^hv = 0. Let J_comp(E_m) ⊂ J_tame(E_m) denote the set of almost complex structures which are adapted to the fibration and Ω_{E_m}-compatible. Later, we will use the almost complex structures in J_comp(E_m) for computations.
Fredholm index and ECH index
We begin to define the index of an HF curve.
There are two types of index defined for an HF curve, called the Fredholm index and the ECH index.
To begin with, fix a trivialization of u*TΣ as follows. Fix a non-singular vector field v on L. Then (v, j_Σ(v)) gives a trivialization of TΣ|_L, where j_Σ is a complex structure on Σ. We extend the trivialization arbitrarily along the y_i. Such a trivialization is denoted by τ.
Define a real line bundle L over ∂F as follows. Take L|_{∂Ḟ} := u*(TL ∩ TΣ). Extend L to ∂F − ∂Ḟ by rotating in the counter-clockwise direction from u*TL^i_{p_{j−1}} to u*TL^i_{p_j} by the minimal amount. Then (u*TΣ, L) forms a bundle pair over ∂F. With respect to the trivialization τ, we have a well-defined Maslov index μ_τ(u) := μ(u*TΣ, L, τ) and a relative Chern number c_1(u*TΣ, τ). The number 2c_1(u*TΣ, τ) + μ_τ(u) is independent of the trivialization τ, and the Fredholm index of an HF curve is defined in terms of these quantities; the index formula can be obtained by the doubling argument in Proposition 5.5.2 of [15]. Given A ∈ H_2(E_m, y_1, ..., y_m, y_0), an oriented immersed surface C ⊂ E_m is a τ-representative of A if 1. C intersects the fibers positively along ∂C; 2. π_{[0,1]×Σ}|_C is an embedding near infinity; 3. C satisfies the τ-trivial conditions in the sense of Definition 4.5.2 in [15].
Let C be a τ-trivial representative of A. Let ψ be a section of the normal bundle N_C such that ψ|_{∂C} = Jτ, and let C′ be a push-off of C in the direction of ψ. Then the relative self-intersection number Q_τ(A) is defined by counting the intersections of C with C′. Suppose that A ∈ H_2(E_m, y_1, ..., y_m, y_0) admits a τ-representative. We then define the ECH index I(A) of a relative homology class in terms of the relative Chern number, the relative self-intersection number, and the Maslov index. Using the relative adjunction formula, we have the following result. Proof. By the same argument as in Lemma 4.5.9 of [15], we obtain one equation for the normal-bundle contribution, where N_u is the normal bundle of u and ∂_t gives a trivialization of TD_m that agrees with ∂_t over the ends. On the other hand, the relative adjunction formula gives a second equation. Combining the above two equations, we obtain the ECH equality.
J_0 index
We imitate Hutchings to define the J_0 index. The construction of J_0 here more or less comes from the relative adjunction formula; the J_0 index for the usual Heegaard Floer homology can be found in [27]. Fix a relative homology class A ∈ H_2(E_m, y_1, ..., y_m, y_0). The J_0 index is defined by an analogous formula. The following lemma summarizes the properties of J_0.
Suppose that an HF curve u represents the class A. Then the following properties hold:
3. If the class A supports an HF curve, then J 0 (A) ≥ 0.
Proof. The first item follows directly from the definition and the relative adjunction formula. The second item also follows directly from the definition. Since an HF curve has at least one boundary component, we have −χ(F) + d ≥ 0. By the first two items, the third item holds.
Using the computations in Lemma 3.4 of [13], we obtain the last item. A quick way to see this is that adding disks along boundaries will not change the Euler characteristic of the curves.
Cobordism maps
With the above preliminaries, we now define the product structure on HF. To begin with, let us define the cobordism maps on QHF induced by (E_m, Ω_m, L_m). Assume that L_{p_i} = ϕ_{H_i}(L). Define reference chords by δ_i(t) := ϕ_{H_i}(x_{H̄_i#H_{i−1}}(t)) for 1 ≤ i ≤ m and δ_0(t) = ϕ_{H_m}(x_{H̄_m#H_0}(t)), where H̄_t(x) = −H_t(ϕ^t_H(x)). Here H#K denotes the composition of two Hamiltonian functions, (H#K)_t(x) = H_t(x) + K_t((ϕ^t_H)^{−1}(x)); by the chain rule, we have ϕ^t_{H#K} = ϕ^t_H ∘ ϕ^t_K. There is another operation on Hamiltonian functions called the join, obtained by reparametrizing the two Hamiltonians in time via ρ and concatenating them, where ρ : [0, 1] → [0, 1] is a fixed non-decreasing smooth function that is equal to 0 near 0 and equal to 1 near 1. As with the composition, the time-one map of the join agrees with that of the composition.
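As a worked check (our addition, using the standard composition convention assumed above), the chain rule for the composed flow can be verified directly:

```latex
% Verification that \phi^t_{H \# K} = \phi^t_H \circ \phi^t_K for the
% convention (H \# K)_t = H_t + K_t \circ (\phi^t_H)^{-1} (assumed to match
% the note's normalization). Let \psi_t := \phi^t_H \circ \phi^t_K. Then
\[
  \frac{d}{dt}\psi_t
  = X_{H_t} \circ \psi_t + (\phi^t_H)_* X_{K_t} \circ \psi_t
  = \bigl( X_{H_t} + X_{K_t \circ (\phi^t_H)^{-1}} \bigr) \circ \psi_t
  = X_{(H \# K)_t} \circ \psi_t,
\]
% using that a symplectomorphism \phi pushes X_G forward to X_{G \circ \phi^{-1}}.
% Since \psi_0 = \mathrm{id}, uniqueness of flows gives \psi_t = \phi^t_{H \# K}.
```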
The following proposition is similar to the result in Section 4 of [14]. Proposition 3.5. 1. (Invariance) The cobordism maps are well defined; in particular, they are independent of the choice of almost complex structures.
2. (Composition rule) Suppose that the negative end of (E_m, Ω_m, L_m) agrees with the j-th positive end of (E_n, Ω_n, L_n). Then the corresponding cobordism maps compose, where (E_{m+n−1}, Ω_{m+n−1}, L_{m+n−1}) is the composition of (E_m, Ω_m, L_m) and (E_n, Ω_n, L_n) defined in (3.19).
Proof. On the chain level, define the map by counting I = 0 HF curves; here A_0 is determined by its relation to the reference class. To see that the above map is well defined, first note that the HF curves are simple because they are asymptotic to the Reeb chords. Therefore, the transversality of the moduli space can be obtained by a generic choice of almost complex structure. By Theorem 3.3, the ECH indices of HF curves are nonnegative.
Secondly, consider a sequence of HF curves {u_n : Ḟ → E_m}^∞_{n=1} in M^J(y_1, ..., y_m; y_0, A). Apply Gromov compactness [2] to {u_n}^∞_{n=1}. To rule out the bubbles, our assumptions on the links play a key role here. Note that the bubbles arise from pinching an arc or an interior simple closed curve in F_n. If a bubble v comes from pinching an arc, then its image is a disk; similarly, if v comes from pinching an interior simple closed curve in F_n, then the image of v must be a fiber Σ. The index formula in Lemma 3.3 of [13] can be generalized to the current setting easily. As a result, a disk bubble v contributes at least 2 to the ECH index; roughly speaking, this is because the Maslov index of a disk is 2. Also, adding a Σ increases the ECH index by 2(k + 1). This violates the condition that I = 0. Hence, the bubbles can be ruled out, and therefore M^J(y_1, ..., y_m; y_0, A) is compact. Similarly, bubbles cannot appear in the moduli space of HF curves with I = 1. The standard neck-stretching and gluing argument [29] then shows that CF_{A_ref}(E_m, Ω_m, L_m) is a chain map.
The invariance and the composition rule follow from the standard homotopy and neck-stretching arguments. Again, the bubbles can be ruled out by the same index reasons as above.
Reference relative homology classes
Obviously, the cobordism maps depend on the choice of the reference relative homology class A_ref. The cobordism maps defined by two different reference homology classes differ by a shift (2.13). To exclude this ambiguity, we fix a reference relative homology class in the following way: Let χ_+(s) : R_s → R be a function such that χ_+ = 1 when s ≤ −R_0 and χ_+ = 0 when s ≥ −1. This defines a diffeomorphism F_+ on the end of E_0; we extend F_+ by (z, x) → (z, ϕ_K(x)) over the rest of E_0. Let L_+ := F_+(∂D_0 × L) ⊂ ∂E_0 be a submanifold. The surface F_+(D_0 × {x}) represents a relative homology class A_+ ∈ H_2(E_0, ∅, ϕ_K(x_{K̄#(K#H)})). For any Hamiltonian functions H_1, H_2, we can find a suitable H such that H_1 = H#K and H_2 = K. So the above construction gives us a class A_+^{H_1,H_2} ∈ H_2(E_0, ∅, ϕ_{H_2}(x_{H̄_2#H_1})). Now let D_0 be a disk with a strip-like positive end and define E_0 := D_0 × Σ. By a similar construction, we have a fiber-preserving diffeomorphism F_−, which gives a relative homology class A_−^{H_1,H_2} in H_2(E_0, ϕ_{H_2}(x_{H̄_2#H_1}), ∅).
Using A_±^{H_1,H_2}, we determine a unique reference homology class A_ref ∈ H_2(E_m, δ_1, ..., δ_m, δ_0) as follows: For the i-th positive end of (E_m, L_m), we glue it with (E_0, L_+) as in (3.19), where L_+ is determined by H_{i−1}, H_i. Similarly, we glue the negative end of (E_m, L_m) with (E_0, L_−). This gives us a pair (E = D × Σ, L), where D is a closed disk without punctures. Note that H_2(E, L; Z) ≅ H_2(E, ∂D × L; Z). Under this identification, we have a canonical class A_can = [D × {x}] ∈ H_2(E, L; Z). We pick A_ref ∈ H_2(E_m, δ_1, ..., δ_m, δ_0) to be the unique class whose gluing with the classes A_±^{H_1,H_2} is A_can.
Recall that the reference class A_ref is the unique class defined in Section 3.2.1.
Lemma 3.6. The naturality isomorphisms make the following diagram commute.
By the invariance property in Proposition 3.5, the cobordism map HF
In particular, we have the corresponding identity for I_{H_1}.
Proof. To prove the statement, we first split the diagram into two. For the first diagram, we define a diffeomorphism F_{H_1}. Let (R × [0, 1] × Σ, Ω_1, L) be a Lagrangian cobordism from (ϕ_{K_1}(L), L) to (ϕ_{K_2}(L), L). Note that if u ∈ M^J(y_+, y_−) is an HF curve in (R × [0, 1] × Σ, Ω_1) with Lagrangian boundaries L, then F_{H_1}(u) is an (F_{H_1})_*J-holomorphic HF curve in (R × [0, 1] × Σ, (F_{H_1}^{−1})^*Ω_1) with Lagrangian boundaries F_{H_1}(L). This gives a 1-1 correspondence between the curves in (E_1, Ω_1, L) and the curves in (E_1, (F_{H_1}^{−1})^*Ω_1, F_{H_1}(L)). Note that F_{H_1}(u) is a holomorphic curve contributing to the corresponding cobordism map, so the first diagram commutes. For the second diagram, involving the continuation morphism I, we just need to take K_2 = K_1 and H_2 = 0 in the diagram.
Unit
In this subsection, we define the unit of the quantum product µ 2 .
These data induce a cobordism map; again, A_ref is the reference class in Section 3.2.1. Define e_{H,K} to be the image of 1 under this cobordism map. By Proposition 3.5, we have identities in which a ∈ HF(Σ, ϕ_{H_1}(L), ϕ_{H_2}(L)). These identities imply that the following definition makes sense.
Definition 3.7. The class e H,K descends to a class e ∈ HF (Σ, L). We call it the unit. It is the unit with respect to µ 2 in the sense that µ 2 (a ⊗ e) = a.
We now describe the unit when H is a small Morse function. Fix perfect Morse functions f_{L_i} : L_i → R with a maximum point y_i^+ and a minimum point y_i^−. Extend ∪_i f_{L_i} to a Morse function f : Σ → R satisfying the following conditions:
M.1 (f, g_Σ) satisfies the Morse–Smale condition, where g_Σ is a fixed metric on Σ.
M.2 f|_{L_i} has a unique maximum y_i^+ and a unique minimum y_i^−.
M.3 {y_i^+} are the only maximum points of f. Also, f ≤ 0 and f(y_i^+) = 0 for 1 ≤ i ≤ d.
Take H = εf, where 0 < ε ≪ 1. By Lemma 6.1 in [13], the set of Reeb chords of ϕ_H can be described explicitly. For each y = [0, 1] × (y_1, ..., y_d), we construct a relative homology class A_y as follows: the maps u_i combine to a d-multisection u = ∪_{i=1}^d u_i, which gives rise to a relative homology class A_y ∈ H_2(E, x_H, y). Its basic invariants are easy to compute (see Equation (3.18) of [13]). Let A_ref be the reference homology class defined in Section 3.2.1. Then we have a suitable pair (Ω_{E_0}, L_0) such that, for a generic J ∈ J_comp(E_0), the count of I = 0 curves can be evaluated. In particular, (y_♥, [A_{y_♥}]) is a cycle that represents the unit.
The idea of the proof is to use index and energy constraints to show that the union of horizontal sections is the only I = 0 holomorphic curve contributing to the cobordism map CF_{A_ref}(E_0, Ω_{E_0}, L_0)^J(1). Since the proof of Lemma 3.8 is the same as that of Lemma 6.6 in [13], we omit the details here. From Lemma 3.8, we also know that the definition of the unit in Definition 3.7 agrees with Definition 6.7 of [13].
Proof of Theorem 1
In this section, we study the properties of the spectral invariants c_{L,η}. These properties and their proofs are parallel to those in [7,31].
The HF action spectrum
For different base points x, x', we have an isomorphism preserving the action functional (see Equation 3.17 of [13]). In particular, the action spectrum is independent of the base point. So we omit x from the notation. A Hamiltonian function H is called mean-normalized if ∫_Σ H_t ω = 0 for any t.
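As a small aside (a standard fact, not a displayed formula of this paper), any Hamiltonian can be mean-normalized by subtracting its fiberwise mean:

% Standard normalization on the closed surface \Sigma:
\widetilde{H}_t := H_t - \frac{1}{\int_\Sigma \omega}\int_\Sigma H_t\,\omega,
\qquad \int_\Sigma \widetilde{H}_t\,\omega = 0 \quad \text{for all } t.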
F_t^s is unique if we require that F_t^s is mean-normalized. Note that X_{F_t^s} = 0 along t = 0, 1 because ϕ^{s,0} = id and ϕ^{s,1} = ϕ_H = ϕ_K = ϕ. By the mean-normalized condition, we obtain the corresponding identity. Since u is a disjoint union of strips, we have J_0(A) = J_0(A_0#A). By a direct computation (see (18.3.17) of [32]), the claim follows.
Proof of Theorem 1
Proof.
1. Suppose that ϕ_H is nondegenerate. Then Spec(H : L) is a discrete subset of R. The spectrality follows directly from the expression (2.15). For the case that ϕ_H is degenerate, the statement can be deduced from the limit argument in [31].
2. To prove the Hofer–Lipschitz property, we first need to construct a Lagrangian cobordism so that we can estimate the energy of holomorphic curves.
Let χ(s) : R_s → R be a non-decreasing cut-off function, and set Ω_E = ω_E + ds ∧ dt.
Then L ⊂ R × {0, 1} × Σ is a disjoint union of Ω_E-Lagrangians. Take a generic J ∈ J_comp(E_1). Then we have a cobordism map HF_{A_ref}(E_1, Ω_E, L)^J, and it is the continuation morphism I. Let u ∈ M^J(y_+, y_−) be an HF curve in (E_1, Ω_E, L). The energy of u satisfies an estimate whose inequality in the second step follows from the same argument as in Lemma 3.8 of [4].
On the other hand, we have the reverse estimate. Fix a ≠ 0 ∈ HF(Σ, L). For any fixed δ, take a cycle c_+ ∈ CF(Σ, ϕ_{H_+}(L), L) representing (j_{x_{H_+}})^{−1}(a) and satisfying the corresponding action bound. On the other hand, c_{L,η}(H_s, a) is continuous with respect to s and takes values in the action spectrum, which has measure zero; therefore, it must be constant. By the assumption A.4, the claim follows. Define a family of Hamiltonian functions {H_s := sH}_{s∈[0,1]}. By the spectrality, c_{L,η}(H_s, a) lies in the corresponding spectrum. Here m_0 must be a constant due to the Hofer–Lipschitz continuity. We know that m_0λ = c_L(0, a) by taking s = 0. Then the Lagrangian control property follows from taking s = 1.
6. Let a, b ∈ HF(Σ, L). Let us first consider the following special case: suppose that there is a common base point x. In particular, x is a non-degenerate Reeb chord of ϕ_H, ϕ_K and ϕ_H ∘ ϕ_K. Also, the reference chords become (4.29). Take J ∈ J_comp(E_2). Then ∫ u^*ω ≥ 0. By Lemma 3.4, J_0(u) ≥ 0. Combining these facts with (4.28) and (4.29), we obtain the estimate in this case. Since the above construction works for any δ, we can take δ → 0.
Since the normalizations of the join of H and K and of H#K are homotopic, we can replace the join in the triangle inequality by H#K.
8. The proof of the Calabi property relies on the Hofer–Lipschitz and the Lagrangian control properties, which we have already established. One can follow the same argument as in [7] to prove the Calabi property, so we skip the details here.
Open-closed morphisms
In this section, we prove Theorem 2. Most of the arguments here are parallel to [15] and to their counterparts for the closed-open morphisms [13]. Therefore, we will just outline the construction of the open-closed morphisms and the proof of partial invariance. We will focus on proving the non-vanishing of the open-closed morphisms.
To begin with, let us introduce the open-closed symplectic manifold and the Lagrangian submanifolds. The construction follows [15]. Define a base surface B ⊂ R_s × S_t^1. The symplectic form Ω_H on W_H is defined to be the restriction of ω_{ϕ_H} + ds ∧ dt. Note that W_H is diffeomorphic (preserving the fibration structure) to B × Σ. So we denote W_H by W instead when the context is clear. We place a copy of L on the fiber π_W^{−1}(3, 1) and take its parallel transport along ∂B using the symplectic connection. The parallel transport sweeps out an Ω_H-Lagrangian submanifold L_H in W. Then L_H consists of d disjoint connected components. Moreover, a d-multisection u is required to satisfy: 1. u(∂Ḟ) ⊂ L_H, and the preimage of each component of L_H consists of exactly one component of ∂Ḟ.
2. u is asymptotic to y as s → ∞.
A J-holomorphic d-multisection is called an HF-PFH curve. We remark that the HF-PFH curves are simple because they are asymptotic to Reeb chords. This observation is crucial in the proof of Theorem 2. We denote by H_2(W, y, γ) the equivalence classes of continuous maps u : (Ḟ, ∂Ḟ) → (W, Z_{y,γ}) satisfying 1), 2), 3) in the above definition. Two maps are equivalent if they represent the same element in H_2(W, Z_{y,γ}; Z). Note that H_2(W, y, γ) is an affine space over H_2(W, L_H; Z). The difference of any two classes can be written in terms of the classes [B_i], represented by the parallel transport of B_i, and a class [S]. Fix a nonvanishing vector field on L. This gives a trivialization τ of TΣ|_L. We extend it to TΣ|_{L_H} by using the symplectic parallel transport. We then extend the trivialization of TΣ|_{L_H} in an arbitrary manner along {∞} × y and along {−∞} × γ. Then we can define the relative Chern number c_1(u^*TΣ, τ). This is the obstruction to extending τ to u.
Define a real line subbundle L of TΣ along L_H ∪ {∞} × y as follows. We set L|_{L_H} := TL_H ∩ TΣ. Then extend L across {∞} × y by rotating in the counterclockwise direction from Tϕ_H(L) to TL in TΣ by the minimal amount. With respect to the trivialization τ, we have a Maslov index for the bundle pair (u^*L, u^*TΣ), denoted by μ_τ(u).
The Fredholm index of an HF-PFH curve is expressed in terms of χ(Ḟ), c_1(u^*TΣ, τ), μ_τ(u) and μ_τ^{ind}(γ). The notation μ_τ^{ind}(γ) is explained as follows. Let γ = {(γ_i, m_i)}. Suppose that for each i, u has k_i negative ends, each asymptotic to a cover γ_i^{q_j}; then the total multiplicity is Σ_{j=1}^{k_i} q_j = m_i, and μ_τ^{ind}(γ) is the corresponding sum of Conley–Zehnder indices, where CZ_τ is the Conley–Zehnder index. Given Z ∈ H_2(W, y, γ), the ECH index is defined as in Definition 5.6.5 of [15], where |γ| is a quantity satisfying |γ| ≥ 1 provided that γ is nonempty. Moreover, I(u) = ind u holds if and only if u satisfies the ECH partition condition. If u = ∪_a u_a is an HF-PFH curve consisting of several (distinct) irreducible components, then the index inequality holds for each component. In particular, J_0(u) ≥ 0 for an HF-PFH curve.
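For orientation, a hedged sketch of the expected shapes of these indices, in standard ECH-type conventions (the precise terms, including the |γ| correction in the ECH index, are as in Definition 5.6.5 of [15]):

% Sketch only; boundary/chord terms may be normalized differently here.
\operatorname{ind}(u) = -\chi(\dot{F}) + 2\,c_1(u^*T\Sigma, \tau) + \mu_\tau(u) - \mu_\tau^{\mathrm{ind}}(\gamma),
I(Z) = c_\tau(Z) + Q_\tau(Z) + \mu_\tau(Z) - \sum_i \sum_{k=1}^{m_i} \mathrm{CZ}_\tau(\gamma_i^k).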
In this paper, we do not need the details of the ECH partition condition or of |γ|. Readers interested in them may refer to [18,19]. The proof of Theorem 5.2 is basically a combination of the relative adjunction formula and Hutchings's analysis in [19]. We omit the details here.
Construction and invariance of OC
In this subsection, we outline the construction of the open-closed morphisms. Also, we will explain why it satisfies the partial invariance.
To begin with, we need the following lemma to rule out the bubbles.
Proof. Using the same argument as in Lemma 3.3 of [13], we know that adding B_i to a relative homology class Z will increase the ECH index by 2 because the Maslov index of B_i is 2. Also, adding disks will not change the topology of the curves; hence, J_0 stays the same. If we add [Σ] to Z, then the ECH index will increase by 2(k + 1); see the index ambiguity formula in Proposition 1.6 of [18]. Similarly, adding [S] changes neither the ECH index nor the J_0 index.
The class Z is characterized by A#Z#Z = Z_ref. The arguments in [15] (see also the relevant argument for closed-open morphisms in [13]) show that this is well defined and is a chain map. The main difference from [15] is that bubbles may appear here, but they can be ruled out by Lemma 5.3 and the argument in Proposition 3.5.
To prove the partial invariance, the argument consists of the following key steps:
1. If we deform the open-closed morphism smoothly over a compact set of W (the deformation needs to be generic), then the standard homotopy arguments show that the open-closed morphism is invariant.
2. Assume that ϕ_H satisfies ♠.1 and ϕ_G satisfies ♠.2. Let (E_1, Ω_1, L_1) be a Lagrangian cobordism from (ϕ_G(L), L) to (ϕ_H(L), L). Let (X, Ω_X) be a symplectic cobordism from (Y_{ϕ_H}, ω_{ϕ_H}) to (Y_{ϕ_G}, ω_{ϕ_G}) defined by (2.9). Consider the R-stretched composition of (E_1, Ω_1, L_1), (W_H, Ω_H, L_H) and (X, Ω_X), denoted by (W_R, Ω_R, L_R). As R → ∞, the I = 0 HF-PFH curves in (W_R, Ω_R, L_R) converge to a holomorphic building. Under assumptions ♠.1 and ♠.2, the holomorphic curves in (X, Ω_X) have nonnegative ECH index (see Section 7.1 of [11]). Combining this with Theorems 3.3 and 5.2, the holomorphic curves in each level have nonnegative ECH index. As a result, these holomorphic curves have zero ECH index. They are either embedded or branched covers of trivial cylinders. By Hutchings–Taubes's gluing argument [24,25], the open-closed morphism thus defined factors through the PFH cobordism map defined by counting embedded holomorphic curves in X. By Theorem 3 in [11], we can replace the latter by PFH^{sw}_{Z_ref}(X, Ω_X). Finally, by the homotopy invariance in step 1, we get the partial invariance.
For more details, we refer the readers to [13].
Computations of OC
In this subsection, we compute the open-closed morphism for a special Hamiltonian function H satisfying ♠.1. Using partial invariance, we deduce the non-vanishing result under the assumption ♠.2. The main idea here is also the same as in [13].
Suppose that H is the autonomous Hamiltonian function determined by the Morse function f in Figure 1. This is a nice candidate for computation because we can describe the periodic orbits and Reeb chords in terms of the critical points, and the indices of holomorphic curves are computable. However, H does not satisfy ♠.1 or ♠.2. We need to follow the discussion in Section 6.1 of [13] to modify H. Fix numbers 0 < ε_0 ≪ 1 and δ, δ_0 > 0. By [13], we have a function ε : Σ → R such that 0 < ε ≤ ε_0 and the new autonomous Hamiltonian function H_ε = −εf satisfies the following conditions: F.4 For each local maximum p, ϕ_{H_ε} has a family of periodic orbits γ_{r_0,θ}(t) that foliates S_t^1 × ∂U_p^{r_0}, where δ + δ_0 ≤ r_0 ≤ δ + 2δ_0. Moreover, the period of γ_{r_0,θ}(t) is strictly greater than d. By Proposition 3.7 of [11], we perturb H_ε to a new Hamiltonian function (still denoted H_ε; it may depend on t) such that it satisfies the following properties: 2. H_ε still satisfies F.4 and F.5.
4. The periodic orbits of ϕ_{H_ε} with period less than or equal to d are either hyperbolic or d-negative elliptic. In other words, ϕ_{H_ε} is d-nondegenerate and satisfies ♠.1.
Remark 5.1. Because we take H_ε = −εf, the maximum points {y_i^+} of f are the minimum points of H_ε. We use {y_i^−} to denote the minimum points of H_ε from now on.
Let y be a critical point of H_ε. Let γ_y denote the constant simple periodic orbit at the critical point y. We define PFH generators and a Reeb chord as follows: 1. Let I = (i_1, ..., i_d). Here we allow i_j = i_k for j ≠ k. Let α_I = γ_{y_{i_1}^−} ... γ_{y_{i_d}^−}. When I = (1, 2, ..., d), we denote α_I by α_♦. Here we use multiplicative notation to denote an orbit set.
Take a J ∈ J(W, Ω_{H_ε}). Let u_{y_i} be the restriction of R × γ_{y_i} to W. Obviously, it is a J-holomorphic curve in M^J(y_i, γ_{y_i}). It is called a horizontal section of (W, Ω_{H_ε}, L_{H_ε}, J). Moreover, it is easy to check from the definition that ind u_{y_i^−} = 0. Proof. The proof is the same as that of Lemma 6.6 in [13].
The horizontal sections ∪_{i=1}^d u_{y_i^−} represent a relative homology class Z_hor. We take the reference relative homology class to be the one determined by Z_hor. Lemma 5.5. For a generic J ∈ J(W, Ω_{H_ε}), the chain-level open-closed morphism sends (y_♦, [A_{y_♦}]) to a cycle c whose leading term is (α_♦, Z_{α_♦}). Proof. Consider the moduli space of HF-PFH curves M_0^J(y_♦, α_♦) with I = 0. Let u ∈ M_0^J(y_♦, α_♦). By Lemma 5.3, J_0(u) = 2m(d + g − 1). Also, I(u) = 0 implies that Σ_{i=1}^k c_i + m(k + 1) = 0. On the other hand, Theorem 5.2 and Lemma 5.4 imply that u = ∪_i u_{y_i^−} is a union of horizontal sections. In other words, the union of horizontal sections is the unique element in M_0^J(y_♦, α_♦). If u is an HF-PFH curve in M_0^J(y_♦, β) and β ≠ α_♦, then E_{ω_{ϕ_{H_ε}}}(u) > 0; otherwise, u is horizontal and u must be asymptotic to α_♦. By Theorem 5.2, we obtain the corresponding estimate. Note that the above intersection numbers are well defined because γ_{r_0,θ} and α_I are disjoint. Because R × γ_{r_0,θ} is holomorphic by the choice of J_X, the above equality implies that C does not intersect R × γ_{r_0,θ}. In particular, C is contained in the product region R × S_t^1 × (Σ − U_{δ+δ_0}). Then ∫_C ω_X = 0 implies that C is a union of trivial cylinders (Proposition 9.1 of [18]). Thus we must have α_I = α_♦.
Lemma 5.8. Let J_X be a generic almost complex structure in J_comp(X, Ω_X) that is R-invariant in the product region R × S_t^1 × (Σ − U_{δ+δ_0}). Then the value of PFC^{sw}_{Z_ref}(X, Ω_X)^{J_X} on (α_♦, Z_{α_♦}) can be computed. Proof. By the holomorphic axioms (Theorem 1 of [11] and the Appendix of [13]) and Lemma 5.7, we know the leading term. Assume that ⟨PFC^{sw}_{Z_ref}(X, Ω_X)^{J_X}(α_♦, Z_{α_♦}), (β', Z')⟩ = 1 for some (β', Z') with β' ≠ α_I. Again by the holomorphic axioms, we have a holomorphic curve C ∈ M_0^{J_X}(α_♦, β'). It is easy to check the index identity, where h(β') is the total multiplicity of all the hyperbolic orbits in β' and e_+(β') is the total multiplicity of all the elliptic orbits at local maxima of H_ε. Because β' ≠ α_I, we have h(β') + 2e_+(β') ≥ 1. Therefore, we obtain a contradiction. Lemma 5.9. Let (β, Z) be a factor of c given in Lemma 5.5. Let J_X be the almost complex structure in Lemma 5.8. Then ⟨PFC^{sw}_{Z_ref}(X, Ω_X)^{J_X}(β, Z), (α_I, Z_I)⟩ = 0. Proof. First, we show that β cannot be α_I. Assume the contrary. Then we have a broken holomorphic curve (C, C_0), where C ∈ M_0^J(y_♦, β) is an HF-PFH curve and C_0 ∈ M_0^{J_X}(β, α_I). The holomorphic curve gives us a relative homology class Z ∈ H_2(W, y_♦, α_I).
Reintroduce the periodic orbits γ^i_{r_0,θ_0} near the local maxima of H_ε. The superscript i indicates that the local maximum lies in the domain B_i, where 1 ≤ i ≤ k + 1. In particular, γ^i_{r_0,θ_0} lies in S^1 × B_i. Define a curve v_i := (R × γ^i_{r_0,θ_0}) ∩ W. Note that it is J-holomorphic and ∂v_i is disjoint from the Lagrangian L_{H_ε}. Then for any relative homology class Z ∈ H_2(W, y_♦, α_I), we have a well-defined intersection number Z · v_i. Writing Z in terms of the classes above, pairing with the curves v_i yields the constraint Σ_i c_i q + (k + 1)mq = 0.
To show that c is non-exact, it suffices to prove PFC^{hol}_{Z_ref}(X_−, Ω_{X_−})^{J_{X_−}}(c) ≠ 0. In [12], the author computes the map PFC^{hol}_{Z_ref}(X, Ω_X)^{J_X} for the elementary Lefschetz fibration (a symplectic fibration over a disk with a single singularity). The current situation is an easier version of [12]. By the argument in [12], we have PFC^{hol}_{Z_ref}(X_−, Ω_{X_−})^{J_{X_−}}(β', Z') = 0 for (β', Z') ≠ (α_I, Z_I).
(5.36) Therefore, Lemmas 5.8 and 5.9 imply that PFC^{hol}_{Z_ref}(X_−, Ω_{X_−})^{J_{X_−}}(c) = 1. Let us explain a little more about how to get (5.36). Basically, the idea is the same as in Lemma 3.8. From the computation of the ECH index, we know that I = 0 implies that the holomorphic curves must be asymptotic to α_I. Also, the energy is zero. Therefore, the unbranched covers of the horizontal sections are the only curves that contribute to PFC^{hol}_{Z_ref}(X_−, Ω_{X_−})^{J_{X_−}}, and this leads to (5.36). Even though these holomorphic curves may not be simple, they are still regular (see [9]).
Homogenized spectral invariants
Let H̃am(Σ, ω) be the universal cover of Ham(Σ, ω). Thus, the HF spectral invariants descend to invariants of φ̃ ∈ H̃am(Σ, ω); by Theorem 1, this is well defined. But in general, the spectral invariants cannot descend to Ham(Σ, ω). This is also true for PFH spectral invariants. To obtain numerical invariants for the elements in Ham(Σ, ω) rather than its universal cover, we need the homogenized spectral invariants. It is well known that H̃am(Σ, ω) = Ham(Σ, ω) when g(Σ) ≥ 1. Therefore, we only consider the case Σ = S^2. Fix ϕ ∈ Ham(Σ, ω). We define the homogenized HF spectral invariant as a limit of normalized spectral invariants over iterates. The resulting two inequalities imply that μ_{L,η=0} is a quasimorphism with defect 1, and so is μ^{pfh}_d.
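A hedged sketch of the standard homogenization (the paper's own definition should take this shape up to normalization):

% Homogenized spectral invariant of \phi \in Ham(\Sigma,\omega),
% for a lift \tilde{\phi} of \phi and a fixed class a:
\mu_{L,\eta}(\phi) := \lim_{k \to \infty} \frac{c_{L,\eta}(\tilde{\phi}^{\,k}, a)}{k}.
% The triangle inequality gives subadditivity, so the limit exists by
% Fekete's lemma; the quasimorphism statement above is the bound
\bigl| \mu_{L,\eta=0}(\phi\psi) - \mu_{L,\eta=0}(\phi) - \mu_{L,\eta=0}(\psi) \bigr| \le 1.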
Two decades since the fetal insulin hypothesis: what have we learned from genetics?
In 1998 the fetal insulin hypothesis proposed that lower birthweight and adult-onset type 2 diabetes are two phenotypes of the same genotype. Since then, advances in research investigating the role of genetics affecting insulin secretion and action have furthered knowledge of fetal insulin-mediated growth and the biology of type 2 diabetes. In this review, we discuss the historical research context from which the fetal insulin hypothesis originated and consider the position of the hypothesis in light of recent evidence. In summary, there is now ample evidence to support the idea that variants of certain genes which result in impaired pancreatic beta cell function and reduced insulin secretion contribute to both lower birthweight and higher type 2 diabetes risk in later life when inherited by the fetus. There is also evidence to support genetic links between type 2 diabetes secondary to reduced insulin action and lower birthweight, but this applies only to loci implicated in body fat distribution and not those influencing insulin resistance via obesity or lipid metabolism by the liver. Finally, we also consider how advances in genetics are being used to explore alternative hypotheses, namely the role of the maternal intrauterine environment, in the relationship between lower birthweight and adult cardiometabolic disease.
Introduction
Lower birthweight is associated with a higher risk of adult cardiometabolic disease, including type 2 diabetes [1]. This relationship was first observed in a study from 1991 linking birthweight records to results of glucose tolerance tests performed in adult men [2], and multiple epidemiological studies have since confirmed this association [3]. The 'thrifty phenotype' hypothesis was put forward as an explanation in 1992, suggesting that maternal malnutrition led to poor fetal growth, with adaptation to a nutritionally depleted intrauterine environment resulting in abnormal pancreatic beta cell function and reduced capacity to secrete insulin extending into adult life [4]. The thrifty phenotype hypothesis has since expanded to include preconceptual, periconceptual and other intrauterine exposures and postnatal outcomes, and is now known as the Developmental Origins of Health and Disease (DOHaD) hypothesis [5].
An alternative explanation (the fetal insulin hypothesis) was put forward in 1998, proposing that lower birthweight and adult-onset type 2 diabetes are two phenotypes of the same genotype (Fig. 1) [6,7]. Jørgen Pedersen identified fetal insulin as a key intrauterine growth factor in 1952 [8] and this, together with the observation that monogenic diseases affecting insulin secretion and action were accompanied by lower birthweight, formed the premise of the fetal insulin hypothesis. It proposed that insulin secretion and resistance, genetically determined and present from conception, also affect intrauterine growth and explain the relationship between lower birthweight and adult-onset type 2 diabetes observed in epidemiological studies [1–3].
In the two decades since the fetal insulin hypothesis was founded, advances in research encompassing the genetics of type 2 diabetes and birthweight have made it possible to test the hypothesis and answer important questions about the relationship between fetal growth and development of type 2 diabetes in later life. In this review, we evaluate the evidence for and against the fetal insulin hypothesis, considering recent evidence from genetic and epidemiological studies. We also consider how genetics could be utilised to explore the complex relationships between the intrauterine environment, fetal genotype and adult-onset type 2 diabetes. The scope of the review does not encompass evaluation of the position of the DOHaD hypothesis in relation to type 2 diabetes risk, as this has been considered in detail in another recent review [9].
The fetal insulin hypothesis from the perspective of monogenic research
The role of fetal genotype in determining insulin-mediated growth in utero: studies in families affected by GCK-MODY
A study of birthweights from pregnancies affected by MODY due to a heterozygous mutation in the glucokinase gene (GCK) [6] provided important insights into how the fetal genotype determines insulin-mediated growth in utero. These mutations result in reduced sensing of glucose by the pancreatic beta cell, so individuals with GCK-MODY regulate glucose at a higher setpoint (fasting plasma glucose 5.5-8 mmol/l [10]) and have stable, mild hyperglycaemia throughout life [11]. An analysis of birthweights in 23 families with GCK-MODY found that where the mother had GCK-MODY and her fetus did not, birthweight was approximately 600 g higher than average due to higher fetal insulin secretion in response to maternal hyperglycaemia. However, when the fetus had inherited the GCK mutation from their mother, birthweight was no different from average because in such pregnancies glucose is sensed by both mother and fetus at the same level and a normal amount of insulin is secreted. In contrast, where the mother did not have GCK-MODY and the fetus had inherited a mutation in GCK from the father, birthweight was reduced by approximately 500 g (Table 1). In this case, maternal glucose crossing the placenta is sensed at a higher threshold by the fetus, resulting in less insulin secretion. This work contributed important knowledge of the relationship between maternal blood glucose levels and fetal genotype in regulating intrauterine growth, prompting the proposal of the fetal insulin hypothesis [7].
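To make the combinational logic of maternal and fetal GCK genotypes explicit, here is a minimal illustrative Python sketch; the function name is hypothetical, and the +600/-500 g offsets are only the approximate effect sizes quoted above, not exact predictions:

def gck_birthweight_effect(mother_has_gck: bool, fetus_has_gck: bool) -> int:
    """Approximate birthweight offset (grams) relative to the population
    average, per the GCK-MODY family analysis described above.
    Illustrative only; real effects vary between pregnancies."""
    if mother_has_gck and not fetus_has_gck:
        # Maternal hyperglycaemia drives extra fetal insulin secretion.
        return 600
    if mother_has_gck and fetus_has_gck:
        # Mother and fetus sense glucose at the same raised setpoint,
        # so a normal amount of insulin is secreted.
        return 0
    if fetus_has_gck:
        # The fetus senses normal maternal glucose at a higher threshold
        # and secretes less insulin.
        return -500
    return 0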
Studying the genetics of GCK-MODY pregnancies to gain knowledge of birthweight has been clinically important, as it has informed obstetric care. Historically, these at-risk pregnancies were monitored with serial ultrasound scans and the fetus was assumed not to have inherited the maternal mutation if there was evidence of fetal overgrowth (abdominal circumference >75th percentile for gestational age). In this case, treatment of maternal hyperglycaemia was trialled, followed by planned delivery at 38 weeks' gestation to mitigate the intra- and postpartum risks of having a large-for-gestational-age (LGA) baby. More recently, non-invasive prenatal diagnostic testing of cell-free fetal DNA in maternal blood has become available [12] and has the potential to prevent unnecessary treatment of maternal hyperglycaemia in fetuses who have inherited a GCK mutation.
Single-gene mutations that result in reduced insulin secretion typically reduce birthweight
The discovery that neonatal diabetes is commonly caused by mutations in single genes affecting insulin secretion has lent further support to the fetal insulin hypothesis (Table 1) [6,13–23]. These cases are rare and represent a severe phenotype, but the principle that genetics determines both fetal growth and postnatal insulin secretion is supported by the observation that infants with neonatal diabetes have very low birthweights (median SD score (SDS) for sex and gestational age −1.7 [24]). Furthermore, the severity of fetal growth restriction depends on the amount of fetal insulin secretion, as infants with complete absence of fetal insulin secretion due to loss-of-function mutations in the insulin gene or pancreatic agenesis are half of normal birthweight by term gestation (median SDS for sex and gestational age <−3.0; unpublished data from A. Hughes et al). This is in contrast to other animal species, where absent fetal insulin secretion reduces birthweight to a much lesser extent than in humans [25]. Therefore, human birthweight is a bioassay of inherent insulin secretory capacity, and monogenic disorders of insulin secretion provide unique insights into the genetic link between lower birthweight and diabetes resulting from reduced insulin secretion.
Birthweights in HNF4A-MODY and HNF1A-MODY are not consistent with the fetal insulin hypothesis
Not all instances of monogenic diabetes secondary to reduced insulin secretion are associated with lower birthweight. Heterozygous mutations in the genes encoding the transcription factors hepatic nuclear factor-4α and -1α (HNF4A and HNF1A, respectively) result in reduced insulin secretion [26,27] and mutation carriers develop diabetes in childhood or early adulthood [28]. The fetal insulin hypothesis would predict that affected individuals have a low birthweight, yet individuals with HNF1A-MODY have normal birthweights and inheritance of HNF4A-MODY is associated with fetal and neonatal hyperinsulinism and macrosomia (Table 1) [29]. It has been proposed that fetal hyperinsulinism causes accelerated postnatal pancreatic beta cell apoptosis, which subsequently predisposes to early-onset diabetes [30]. However, it has recently been found that higher birthweight is associated with reduced penetrance of HNF4A-MODY (unpublished data from J. Locke and K. Patel). Therefore, higher birthweight in HNF4A-MODY is likely to represent a greater inherent capacity to secrete insulin, and differential expression of HNF4A isoforms in the fetus and in later life [31,32] may provide an alternative explanation for these contrasting effects of HNF4A mutations.
Monogenic diseases resulting in severe insulin resistance have heterogeneous effects on birthweight
The relationship between birthweight and monogenic diabetes secondary to impaired insulin action is unclear (Table 1). Consistent with the fetal insulin hypothesis, infants with severe congenital insulin resistance secondary to loss-of-function mutations in the insulin receptor gene, INSR, have very low birthweights [33–35]. Single-gene mutations resulting in either congenital generalised or familial partial lipodystrophy are characterised by peripheral insulin resistance due to an absence of subcutaneous adipose tissue, and affected individuals typically develop diabetes in adolescence [36]. However, birthweights of infants with congenital generalised lipodystrophy have been reported to be normal [37] and, though there are reports of low birthweight in familial partial lipodystrophy [38,39], this has not been widely reported as a typical clinical feature in the literature [40–42].
The fetal insulin hypothesis from the perspective of epidemiological research
Paternal type 2 diabetes is associated with lower offspring birthweight but is not clearly related to heritable insulin resistance
Observational studies of paternal diabetes status and offspring birthweight have provided evidence for a shared genetic predisposition to lower birthweight and type 2 diabetes [43,44]. The study of paternal diabetes is important, since maternal diabetes leads to higher birthweight [45] and masks the effect of fetal genes predisposing to diabetes inherited from the father. This was clearly shown by a study of 236,030 participants (UK Biobank study) wherein paternal diabetes was associated with a 45 g lower birthweight compared with birthweights of infants who had no parent with diabetes. In contrast, birthweight in offspring of parents who both had diabetes was not different from birthweight of infants for whom neither parent had diabetes (Fig. 2) [43]. The fetal insulin hypothesis proposed a possible role for heritable insulin resistance, and there has been evidence for a relationship between low birthweight and higher levels of paternal insulin resistance in case-control (n=119) [46] and cross-sectional (n=2788) [47] studies. However, paternal insulin resistance was not independently associated with offspring birthweight in a birth-cohort study of 986 UK parent-offspring trios [48], and there was a positive correlation between paternal HOMA-IR and umbilical cord insulin levels in 644 fathers and babies [49]. Together, this suggests that in utero there may in fact be a compensatory rise in insulin levels in the face of insulin resistance to maintain fetal growth.
The fetal insulin hypothesis from the perspective of polygenic research
Type 2 diabetes risk loci are associated with lower birthweight
The first genome-wide association studies (GWAS) transformed the landscape of research into the genetics of type 2 diabetes [50–52] and allowed us to test the fetal insulin hypothesis. Initially, variants at type 2 diabetes risk loci affecting insulin secretion were tested for their association with birthweight and it was found that fetal risk alleles at the CDKAL1 and HHEX-IDE loci were associated with a lower birthweight [53,54]. The effect was also important; the reduction in birthweight in a fetus carrying four risk alleles was equivalent to that seen in a fetus whose mother smoked three cigarettes per day in the third trimester of pregnancy.
The first GWAS for birthweight shortly followed [55] and one of the first variants identified was at the known type 2 diabetes risk locus in ADCY5, which plays a key role in coupling glucose to insulin secretion from the pancreatic beta cell [56]. Since then, successively larger GWAS of birthweight, with the latest including data on >400,000 individuals, have identified a total of 190 loci associated with birthweight [57–59]. Using a recently developed method [59,60], the statistical power from these large samples could then be harnessed to estimate the independent maternal and fetal effects at each locus. To date, 11 variants with fetal effects both on birthweight and on type 2 diabetes risk have been identified (Table 2).
There is heterogeneity in the relationship between birthweight and type 2 diabetes risk loci
Type 2 diabetes risk alleles associated with pancreatic beta cell function
The strongest associations between type 2 diabetes risk alleles and lower birthweight are at loci that primarily affect pancreatic beta cell function (e.g. ADCY5 and CDKAL1; Fig. 3). However, not all risk alleles at beta cell loci are associated with lower birthweight. For example, the fetal risk allele at TCF7L2, which has a relatively large effect on type 2 diabetes risk, has no effect on birthweight, and the fetal risk allele at the ANK1 locus is associated with a higher birthweight [59] despite its role in regulating NKX6-3 [61], a vital transcription factor involved in pancreatic beta cell development [62]. These emerging patterns of association are consistent with the heterogeneous birthweight effects of monogenic causes of diabetes secondary to reduced insulin secretion and suggest that different susceptibility loci exert their effects on beta cell function at different stages in the life course.
Fig. 2 Birthweight according to parental diabetes status in the UK Biobank study [43]. **p<0.001 vs birthweight where neither parent was reported to have diabetes. Figure adapted from Tyrell et al [43] under the terms of the Creative Commons Attribution 3.0 Unported License.
Type 2 diabetes risk alleles associated with insulin resistance, obesity or liver lipid metabolism
Certain type 2 diabetes risk alleles associated with insulin resistance secondary to a metabolically unfavourable lipodystrophy-like fat distribution (e.g. IRS1) are associated with lower birthweight, but those implicated in obesity or liver lipid metabolism are not. Consistent with this, recent evidence shows that fetal carriage of variants associated with adult adiposity and a favourable metabolic profile (including higher insulin sensitivity) [63] is associated with higher birthweight [64]. This could mean that a genetic predisposition to lower insulin sensitivity results in a lower birthweight but, in keeping with the monogenic and epidemiological data, the different pathways affecting insulin action are not consistently shared between birthweight and type 2 diabetes risk (Fig. 3).
Quantifying the relationship between lower birthweight and type 2 diabetes that can be attributed to genetic risk
While there is now clear support for the fetal insulin hypothesis, the question remains as to how much of the association between lower birthweight and type 2 diabetes is explained by the genetic associations. Most variants in the type 2 diabetes risk loci do not appear to be associated with birthweight and the finding that a fetal genetic score for birthweight predominantly influences pathways independent of fetal insulin secretion [65] suggests that a substantial proportion of the fetal genetic variability underlying birthweight does not overlap with underlying susceptibility to type 2 diabetes. However, it remains uncertain how much of the relationship (the covariance) between lower birthweight and type 2 diabetes could be explained by the genetic factors that do overlap. To date, using genome-wide data, shared genetic effects of common variants have been estimated to explain 36% (15-57%) of the negative covariance between birthweight and type 2 diabetes risk [59], although this comes with the important caveat of uncertainty introduced by the likely non-linear relationship between the two phenotypes [57].
Mendelian randomisation studies exploring the role of the intrauterine environment in determining relationships between lower birthweight and adult cardiometabolic disease
While there is accumulating evidence for the relationship between lower birthweight and type 2 diabetes having a shared genetic aetiology, long-lasting effects of the intrauterine environment on early development are thought to play an important role. Many studies of animal models have shown this to be the case [66] and the most convincing evidence in humans has come from studies of offspring born during periods of famine, showing that they are at a higher risk of disorders of glucose metabolism and type 2 diabetes in adulthood (reviewed in detail in [67]). In addition, monozygotic twins discordant for type 2 diabetes have a lower birthweight [68], a finding which supports an effect of the intrauterine environment on both restricted fetal growth and developmental programming of metabolism.
Genetics can be used to test whether there is a causal relationship between an intrauterine exposure and adult type 2 diabetes by analysing genetic variants specifically associated with the exposure, in a technique called Mendelian randomisation [69] (a minimal numerical sketch of the core estimator is given at the end of this section). It is akin to a randomised controlled trial, since genetic variants are randomly assigned at birth and, as the genes are specific to the exposure, it is not generally subject to confounding from other factors that may mediate the relationship between the exposure and outcome. There have been attempts to use Mendelian randomisation to show that lower birthweight is causally related to type 2 diabetes [70–72] but the results were difficult to interpret as they did not appropriately differentiate between maternal and fetal effects [73–75]. Methods have been established to account for maternal and fetal effects and test for causal associations between pregnancy exposures and offspring traits [59,60,76]. A recent, large study of genotyped parent-offspring pairs (n=45,849) showed no evidence for a causal relationship between maternal intrauterine exposures that influence birthweight and offspring quantitative cardiometabolic traits (glucose, lipids, BP, BMI) [76]. A specific example tested by Mendelian randomisation and relevant to the fetal insulin hypothesis is the relationship between maternal systolic BP (SBP) and offspring birthweight and SBP. This showed that while high maternal SBP results in reduced fetal growth, it is not causal for high offspring SBP but instead reflects a shared genetic predisposition to higher SBP (Fig. 4) [59,76].
Table 2 note: Birthweight SNPs [59] at these loci are in linkage disequilibrium (R2 > 0.3) with a primary or secondary signal type 2 diabetes SNP [61]; a 1 SD change in birthweight is equivalent to ~450 g.
Fig. 3 The effect of fetal type 2 diabetes (T2D) risk alleles on birthweight (BW), clustered by their likely underlying biology (beta cell function, proinsulin secretion and insulin resistance secondary to obesity, lipodystrophy-like fat distribution or disrupted liver lipid metabolism) [81]; the y-axis shows the change in BW Z score per fetal T2D risk allele. SNPs within each cluster are ordered from top to bottom by highest to lowest T2D risk (established from a genome-wide association study of participants of European ancestry [61]). SNPs that appear in more than one cluster (ADCY5, CCND2, CDC123/CAMK1D, HSD17B12, HNF4A) are shown with an accompanying number in parentheses. There are two distinct signals at ANKRD55 (shown as ANKRD55_1 and ANKRD55_2). The error bars show the 95% CIs for the estimated fetal effect on birthweight in Europeans (independent of any maternal effect [59]), with a 1 SD change in birthweight being equivalent to ~450 g.
This example demonstrates a key underlying premise of the fetal insulin hypothesis: that the fetal genotype can explain observational relationships between lower birthweight and adult traits. However, unlike the fetal insulin hypothesis, the relationship between lower birthweight and higher adult SBP may be explained by a combination of maternal intrauterine effects on birthweight and fetal genetic susceptibility to higher adult SBP.
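As referenced above, here is a minimal Python sketch of the core Mendelian randomisation arithmetic: the Wald ratio for a single variant, combined across independent variants by inverse-variance weighting. All names are hypothetical, and this simplified form ignores the maternal/fetal effect partitioning that the text emphasises:

import numpy as np

def wald_ratio(beta_gx, beta_gy, se_gy):
    """Causal-effect estimate of exposure X on outcome Y from one
    variant: the SNP-outcome association divided by the SNP-exposure
    association. First-order SE (ignores uncertainty in beta_gx)."""
    return beta_gy / beta_gx, abs(se_gy / beta_gx)

def ivw(estimates, ses):
    """Inverse-variance-weighted average of per-variant estimates."""
    w = 1.0 / np.asarray(ses) ** 2
    est = np.sum(w * np.asarray(estimates)) / np.sum(w)
    return est, np.sqrt(1.0 / np.sum(w))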
Conclusion
In the two decades since the fetal insulin hypothesis was first proposed, advances in genetic research have shed light on what contributes to fetal insulin-mediated growth and its implications for long-term risk of type 2 diabetes. Strong evidence from monogenic studies has been supported by epidemiological observations and discoveries arising from large-scale GWAS of type 2 diabetes and birthweight. Taken as a whole, it is clear that both lower birthweight and type 2 diabetes reflect, in part, a shared genetic predisposition to reduced insulin secretion. However, while impaired insulin action was considered a key part of the original fetal insulin hypothesis, studies of birthweight relating to monogenic lipodystrophies, paternal insulin resistance and the biology underlying shared birthweight and type 2 diabetes risk loci suggest this may be a less important factor in mediating the relationship between lower birthweight and type 2 diabetes risk.
Research investigating the premise of the fetal insulin hypothesis will continue to be important as type 2 diabetes becomes more prevalent globally. As this is predominantly associated with rising levels of obesity, it is possible that the variance in adult type 2 diabetes risk that can be explained by genes which also reduce insulin-mediated fetal growth will become less important. This is because risk variants associated with high BMI are not strongly represented in birthweight GWAS, and mothers with higher BMIs are at risk of diabetes in pregnancy, which leads to higher birthweights. Addressing this and other important challenges, including diversifying research to include non-European populations and exploring non-linear relationships and gene-environment interactions, will provide further insights into the genetics of insulin-mediated fetal growth and its implications for health and disease across the life course.
Fig. 4 Principles of using Mendelian randomisation to explore the roles of pregnancy exposures and fetal genetics in the relationship between birthweight and risk of adult cardiometabolic disease. The example in this figure shows that the relationship between lower birthweight and higher offspring SBP is mediated by a combination of intrauterine effects on birthweight and fetal genetic susceptibility to higher adult SBP. Figure adapted from Lawlor et al [75] under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Contribution statement All authors were responsible for drafting the review and revising it critically for important intellectual content. All authors approved submission. RMF is the guarantor of this work and accepts full responsibility for its content and the decision to publish.
Structural Evolution in the RE(OAc) 3 · 2AcOH Structure Type. A Non-Linear, One-Dimensional Coordination Polymer with Unequal Interatomic Rare Earth Distances
Abstract: The existing range of the centrosymmetric, triclinic RE(OAc)3 · 2AcOH structure type has been extended for RE = Eu and Gd, while the structure data of the Nd and Sm compounds have been revised and corrected, respectively, using low-temperature (T = 100 K), well resolved (2θmax = 56°), highly redundant SCXRD data in order to evaluate the structural evolution within this class of acetic acid solvates by statistical methods. Within the nine-fold mono-capped square-antiprismatic coordination spheres of the RE3+ ions, RE-O distances decrease as a result of lanthanide contraction, some with different rates depending on the coordination modes (2.11/2.21) of the acetate ions. The experimental data show that the internal structural parameters of the acetate ions also correlate with their coordination modes. Both acetic acid molecules act as hydrogen bond donors but only one as a monodentate ligand. The geometries of the hydrogen bonds reveal that they are strongly influenced by the size of the rare earth atom. The non-linear, one-dimensional coordination polymer propagates with unequal RE···RE distances along the a-axis. Rods of the coordination polymer are arranged in layers congruently stacked above each other with the hydrogen bonded acetic acid molecules as filler in between. In most cases, data fitting is best described in terms of a quadratic rather than a linear regression analysis.
Introduction
Rare earth triacetates are widely used as precursors in the synthesis of NaREF4 and LiYF4 core and core/shell nanocrystals. Usually, rare earth triacetate hydrates, RE(OAc)3 · nH2O, are employed since they are commercially available or can easily be prepared by dissolving the appropriate rare earth oxide in acetic acid. When nanocrystals of this kind are doped with lanthanide ions emitting in the near infrared, anhydrous reaction conditions are advantageous since the luminescence is quenched by hydroxyl groups in the crystal lattice [1–3]. An example is NaYF4:Yb,Er/NaYF4 core/shell nanocrystals showing upconversion emission with high quantum yield when anhydrous rare earth triacetates are used in the synthesis of the core particle and the shell [4].
In principle, RE(OAc)3 · nH2O can be dried by removing the water content under vacuum at elevated temperatures, or by reaction of the hydrates with acetic anhydride, Ac2O, in glacial acetic acid, AcOH. With regard to a common, reproducible, easily upscaled protocol for the preparation of a large number of various anhydrous rare earth triacetates, RE(OAc)3, we started our attempts with the second method.
We became rapidly aware, however, that extensive drying under vacuum is required after synthesis to obtain a product of constant mass. This indicated that, in addition to the desired anhydrous triacetates, solvates with acetic acid and/or acetic anhydride are formed during the reaction. The formation of such compounds does not seem implausible, as several acetic acid solvates, RE(OAc)3 · mAcOH, and acetic acid solvate hydrates, RE(OAc)3 · mAcOH · nH2O, are already known in addition to various hydrates, RE(OAc)3 · nH2O (Figure 1). All attempts, however, to identify our products by PXRD failed.
We therefore decided to study the reaction products by single crystal X-ray diffraction (SCXRD) in order to identify the different phases formed as a result of the applied preparation and crystallization conditions. Last but not least we hoped to find unknown phases and compounds in the three phase system shown in Figure 1.
While exploring the reaction products with SCXRD we obtained the structural data for many new compounds, some of new composition opening new structure types, some extending or completing the existence range of well-known structure types, and others improving or correcting former results. Here we present our results on the crystal structure determinations of compounds belonging to the class of rare earth triacetate acetic acid solvates with composition RE(OAc)3 · 2AcOH belonging to structure type 6, a non-linear one-dimensional coordination polymer with unequal interatomic RE···RE distances, observed in the case of the earlier lanthanides RE = Nd, Sm, Eu and Gd. Preparation and solid state structures of two compounds (RE = Nd, Sm) of this structure type have been formerly described in a doctoral thesis [5] but have never been published. In the Cambridge Structural Database [33] their structural data are deposited under the CSD numbers 653,311 for RE = Nd (database identifier: JIPLOK) and 653,320 for RE = Sm (database identifier: JIPNIG). Although it was measured at the same temperature, the deposited data set of the Sm compound exhibited a significantly larger unit cell volume (793.01 Å3) than the Nd compound (779.15 Å3), in sharp contrast to the so-called lanthanide contraction [34], the decrease of the RE3+ ionic radius with increasing atomic number of the rare earth element.
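Since this comparison turns on triclinic unit-cell volumes, here is a minimal Python sketch of how such a volume follows from the six lattice parameters; the formula is the standard crystallographic one, while the function name and example values are placeholders, not taken from the paper:

import math

def triclinic_volume(a, b, c, alpha, beta, gamma):
    """Unit-cell volume in cubic Angstroms from axis lengths (Angstrom)
    and angles (degrees): V = abc*sqrt(1 - cos^2(alpha) - cos^2(beta)
    - cos^2(gamma) + 2*cos(alpha)*cos(beta)*cos(gamma))."""
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * math.sqrt(1 - ca**2 - cb**2 - cg**2 + 2*ca*cb*cg)

# Hypothetical example (placeholder parameters, not the refined values):
print(round(triclinic_volume(8.0, 9.5, 10.8, 70.0, 80.0, 85.0), 2))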
Materials and Methods
Single crystals of the compounds described here have been obtained alongside crystals of other compounds by dissolving the corresponding rare earth oxides in acetic acid and using acetic anhydride to remove all water. In the cases of Eu and Gd, 15 mmol of RE2O3 was dissolved in 100 mL of aqueous acetic acid (50% acetic acid by volume) by heating the suspension to reflux temperature. The solvent of the resulting clear solution was then removed with a rotavap, avoiding prolonged drying of the resulting solid. Subsequently, 50 to 60 mL of glacial acetic acid was added and the solid dissolved by heating at 90 °C. After cooling the clear solution to 40 °C, 30 mL of acetic anhydride was added and the flask tightly closed. In the cases of Nd and Sm, similar solutions could be prepared in a simpler way by directly dissolving the rare earth oxide in a mixture of glacial acetic acid and acetic anhydride. In this case, 10 mmol of RE2O3 was refluxed in 35 mL of acetic acid and 15 mL of acetic anhydride under nitrogen atmosphere until a clear solution was obtained. After cooling to 40 °C, 30 mL of acetic anhydride was added and the flask tightly closed.
Parts of these solutions were mixed with additional acetic anhydride in a ratio, by volume, of 1:1, 1:2 or 1:3. All solutions were left for crystallization at room temperature. In cases where no crystals were detected after one week, the solution was concentrated on a rotavap by a factor of two. No attempts were made to further optimize the reaction conditions to exclusively obtain the substances described here. In the case of Gd, for instance, crystals of Gd(OAc)3 · 4AcOH also form, in addition to the title compound.
The morphology of the crystals was characterized by a prismatic habit (Figure 2), most often resulting from the combination of the pinacoids {100}, {010} and {001}. Crystals are elongated along the a-axis, the propagation direction of the coordination polymer (see below). Besides the great influence of the reaction and crystallization conditions on the compounds formed, isolation from the mother liquor also turned out to be problematic, as in mother liquor most of the compounds are very sensitive towards moisture, whereas they are stable at ambient conditions over periods of hours and days when free of remaining solvents. In most cases we could meet this challenge by taking the crystals out of the mother liquor with a spoon spatula and pouring them onto a filter paper.
Single crystals suitable for X-ray measurements were selected under a microscope and mounted on a 50 µm MicroMesh MiTeGen Micromount™ using FROMBLIN Y perfluoropolyether (LVAC 16/6, Aldrich) before they were centered on a Bruker Kappa APEX II CCD-based 4-circle X-ray diffractometer using graphite-monochromated Mo Kα radiation (λ = 0.71073 Å) of a fine-focus molybdenum-target X-ray tube operating at 50 kV and 30 mA. The crystal-to-detector distance was 40 mm and the exposure time was 5 s per frame for all samples, with a scan width of 0.5°. Samples were cooled down to 100 K by use of a Kryoflex low-temperature device.
Data were integrated and scaled using the programs SAINT and SADABS [35] within the APEX2 software package of Bruker [36]. Special care was taken regarding the data collection strategy in order to reduce absorption effects. A maximum reduction of absorption effects and remaining electron density was achieved by a high data redundancy and numerical absorption corrections.
Structures were solved by direct methods with SHELXS [37] and completed by difference Fourier synthesis with SHELXL [38]. Structure refinements were carried out on F² using full-matrix least-squares procedures, applying anisotropic thermal displacement parameters for all non-hydrogen atoms.
All H atoms including those of the acetic acid molecules were clearly identified in difference-Fourier syntheses. Hydrogen atoms of the methyl groups were refined with idealized positions and allowed to ride on their parent carbon atoms with d(C-H) = 0.98 Å and common isotropic temperature factors for all hydrogen atoms of each methyl group. Hydrogen atoms of the carboxyl groups were refined with a common O-H distance of 0.96 Å before they were fixed and allowed to ride on the corresponding oxygen atom with one common isotropic temperature factor.
Details on the crystallographic data, data collection parameters and structure refinement results are summarized in Table 1, where n is the number of reflections and p is the total number of parameters refined.
The listed CCDC numbers (Table 1) contain the supplementary crystallographic data for this paper. These data can be obtained free of charge from the Cambridge Crystallographic Data Centre via www.ccdc.cam.ac.uk/structures (accessed on 28 June 2021). Molecular graphics were prepared using DIAMOND [39], Mercury [40] and POV-Ray [41], respectively.
In order to compare bond lengths and angles in a simple manner, atoms of all four compounds were labeled according to the common numbering scheme depicted in Figure 3.

Figure 3. Unit cell and labelling scheme of the atoms in the asymmetric unit of the RE(OAc)3 · 2AcOH structure type, visualized for RE = Nd. All atoms are drawn as thermal displacement ellipsoids at the 40% probability level. Hydrogen bonds are indicated as broken red sticks; additional RE-O bonds to RE atoms outside the asymmetric unit are shown as shortened sticks.
Results and Discussion
Our measurements of the RE(OAc)3 · 2AcOH structure type confirmed the previous results of a triclinic unit cell with two formula units therein but, in contrast to the deposited data (see above), the unit cell volume of the Gd compound was, in accordance with the lanthanide contraction, smaller than the unit cell volume of the Nd compound.
Unit Cell
As expected, the unit cell volumes decreased continuously from Nd to Gd (Table 1, Figure 4). The mathematical correlation of the unit cell volume with the atomic number of the rare earth element can be calculated by a linear regression analysis (y = ax + b) with a goodness of fit factor R² of 0.9931. The data, however, are better fitted by use of a quadratic expression (y = ux² + vx + w) with R² = 1.000.
With four data points the reliability of these calculations is limited, but the trend is in accordance with the observations of Greis and Petzel [42], whose data on the unit cell volumes of isostructural REF3 compounds (orthorhombic YF3 structure type, RE = Sm−Lu, 10 data points; hexagonal LaF3 structure type, RE = Ce−(Pm)−Eu, 5 data points) support the fit of the unit cell volume against the atomic number of the rare earth element by use of a quadratic equation instead of a linear one.
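As a minimal sketch of this kind of analysis (not the authors' own computation), the linear and quadratic fits and their R² values can be reproduced with NumPy; the atomic numbers are those of Nd, Sm, Eu and Gd, while the unit cell volumes below are illustrative placeholders rather than the measured values from Table 1.

```python
import numpy as np

# Atomic numbers of Nd, Sm, Eu, Gd; the volumes are hypothetical
# placeholders standing in for the measured unit cell volumes (Table 1).
z = np.array([60.0, 62.0, 63.0, 64.0])
vol = np.array([1003.0, 990.0, 984.0, 979.0])  # Å^3, illustrative only

def r_squared(y, y_fit):
    """Coefficient of determination of a fit."""
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

lin = np.polyfit(z, vol, 1)    # y = a*x + b
quad = np.polyfit(z, vol, 2)   # y = u*x^2 + v*x + w

print("linear    R^2 =", r_squared(vol, np.polyval(lin, z)))
print("quadratic R^2 =", r_squared(vol, np.polyval(quad, z)))
```

With only four points the quadratic fit will always match at least as well as the linear one; the interest, as the authors note, is that the same parabolic trend recurs across several isostructural rare earth series.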
RE Coordination

Rare earth atoms of the RE(OAc)3 · 2AcOH structure type are ninefold, mono-capped square-antiprismatically coordinated (Figure 5) with a narrow RE-O bond length distribution (Table 2), with the mean value decreasing from RE = Nd to Gd. Statistically, a linear regression analysis of the nine different RE-O distances results in R² values in the range 0.9996 to 0.9838 but, as in the case of the unit cell shrinkage, the decrease of the bond lengths is best described in all cases by a quadratic regression analysis, with R² values in the range of 1.000 to 0.9984. The coefficients a, b (linear regression) and u, v, w (quadratic regression) of the different approaches are summarized in Table S1.

Figure 5. (a) Distorted mono-capped square antiprismatic coordination polyhedron of the rare earth atoms in the RE(OAc)3 · 2AcOH structure type; example RE = Nd. All atoms are drawn as thermal displacement ellipsoids at the 40% probability level. Bonds from oxygen to carbon are drawn in two colors, bonds from oxygen to neighboring rare earth atoms as short sticks. (b) Evaluation of RE-O distances as a function of the atomic number of the RE element with trend lines; an empty row for RE = Pm is included in order to obtain an adequate representation. Symmetry codes to generate equivalent atoms: 1 1 − x, 1 − y, 1 − z; 2 −x, 1 − y, 1 − z.
There are, however, some remarkable exceptions from this parallelism in bond length shrinkage: (i) the RE-O(11) distance decreases more slowly than the RE-O(12) distance.

Besides this global view on the RE-O bond lengths, their detailed inspection in terms of the acetate coordination modes gives a deeper insight into the structural evolution of this structure type as a consequence of lanthanide contraction. In the following we will use the Harris symbol as described by Coxall et al. [43] for the different acetate coordination modes to distinguish the three crystallographically different acetate groups shown in Figure 6. The first one (atoms labelled 1n) exhibits a bridging 2.11 (1κO; 2κO') coordination mode. In addition, this acetate ligand acts as a hydrogen acceptor in a hydrogen bridge bond.
Both the second and the third acetate group (atoms labeled 2n and 3n, respectively) belong to the tridentate, bridging-chelating coordination mode 2.21 (1:2κ2O; 1κO'), but only the third one behaves as a hydrogen acceptor too.
2.21-Coordination Mode
The influence of lanthanide contraction on the structural parameters associated with this kind of coordination mode differs for both 2.21-coordinating acetate groups. While all RE-O distances decrease in both cases when the atomic number of the rare earth element increases (see above), the corresponding bonding angles seem to be unaffected for the third acetate group but vary in the case of the second one, indicating a change of their orientation in relation to the coordinated rare earth atoms. Thus the C-O···RE angle of the bridge decreases from 148.7(1)° (RE = Nd) to 148.0(1)° (RE = Gd), just as the chelating C-O···RE angle decreases on the opposite side from 96.0(1)° (RE = Nd) to 95.5(1)° (RE = Gd).
In contrast to the foregoing coordination mode, Janiki et al. [44] report, for the subset of nine-fold coordinated rare earth atoms in the 2.21-coordination mode, mean RE-O distances which are to some extent greater than those found in the present study.
Internal Structural Parameters and Hydrogen Bonding of the Acetate Groups
The present SCXRD data not only give us a deeper insight into the acetate coordination modes and the structural evolution of the rare earth coordination sphere as a result of lanthanide contraction but also allow a detailed look at the internal structural parameters of the acetate groups (Table 3). For C-C bonds of the type C(sp²)-C(sp³) in RCOO−, Allen et al. [45] reported an overall value of 1.520(11) Å. This value is somewhat longer than the C-C bonds in the present study [1.498(1)-1.506(1) Å], which may be ascribed to the fact that our compounds exclusively contain acetate groups. There is an indication that the C-C bond is longer for the bridging 2.11-coordination mode [1.506(1) Å] compared with the corresponding bond length in the 2.21-coordination mode [1.498(1) Å], but the available data are too limited for a clear statement.
Coordination Modes, Internal Structural Parameters and Hydrogen Bonding of the Acetic Acid Molecules
While both crystallographically different acetic acid molecules (Figure 7) of the RE(OAc)3 · 2AcOH structure type act as hydrogen donors in hydrogen bonds, only the first one (atoms labeled 4n) also acts as an electron donor towards RE in a monodentate fashion (1.10 coordination mode). In the latter case, the RE-O distances [RE-O(41), 2.497(1)-2.445(1) Å; Table 2, Figure 5] are of medium strength. In comparison with the uncoordinated acetic acid molecule (atoms labeled 5n), the C=O bond of the coordinated one is significantly longer (+0.036 Å), while the C-OH bond is shorter (−0.009 Å) (Table 4). In their review on bond lengths in organic compounds, Allen et al. [45] recorded a value of 1.308(19) Å for the C-OH bond. Donor-acceptor distances are summarized in Table 5. Although the oxygen atoms of the OH groups are not involved in rare earth coordination, the structural parameters of both hydrogen bonds show a strong correlation with the size of the rare earth atom. As we could localize the hydrogen atoms of the OH groups from difference Fourier synthesis (see above), the analysis of the structural evolution of the hydrogen bonds as a function of the size of the rare earth element reveals some remarkable features (Table 5, Figure S1).
Packing
The {REO9} building units are connected with each other into a non-linear, one-dimensional coordination polymer along the a-axis with unequal interatomic RE···RE distances (Table 6, Figure 8). Within the series of the investigated compounds, both distances decrease continuously with increasing atomic number (Table 4), as the interatomic RE···RE···RE bond angle decreases from 138.15(1)° (RE = Nd) to 137.80(1)° (RE = Gd).
In spite of the zig-zag arrangement of the rare earth atoms, the overall one-dimensional coordination polymers exhibit, neglecting the uncoordinated acetic acid molecules, a remarkable circular, rod-like shape with a diameter of about 1.22 nm (Figure 9). In this context, the uncoordinated, only hydrogen-bonded acetic acid molecules behave as knobs that fill the space in the square primitive arrangement of the rods (Figure 10) in a zipper-like way.

Figure 10. Arrangement of the one-dimensional coordination polymers (visualized in simplified form as rods) in the crystal structure of the RE(OAc)3 · 2AcOH structure type, looking down the a-axis; example RE = Nd. Uncoordinated, but hydrogen bonded acetic acid molecules filling the space between the rods in a zipper-like manner are shown as a ball-and-stick model.
Conclusions
Our low-temperature, well-resolved single-crystal X-ray data of the rare earth triacetate acetic acid solvates belonging to the RE(OAc)3 · 2AcOH structure type allow a more detailed insight not only into the coordination behavior of the rare earth element but also into the influence of their coordination on the internal structural parameters of the acetate ligands and acetic acid molecules, and vice versa.
For the nine-fold mono-capped square-antiprismatic coordination of the rare earth atoms, lanthanide contraction represents the most prominent factor for the RE-O bond lengths and RE-O-C bond angles in the different coordination modes of the acetate groups, as the specific values most often decrease with the size of the rare earth atom. The corresponding relationships can be fitted by use of a linear regression analysis but are more often better described by a quadratic equation, an observation that was formerly made in the case of some isostructural rare earth trifluorides, REF3 [42], tris-ethylsulfate nonahydrates, RE(C2H5SO4)3 · 9H2O [47], and tris-trifluoromethanesulfonate nonahydrates, RE(CF3SO3)3 · 9H2O [48,49]. Based on these data, the parabolic decay of structural parameters associated with lanthanide contraction has been revisited [50] and theoretically reinforced [51]. Our data also show that not all RE-O bond lengths decrease uniformly. The exceptions show that the corresponding acetate groups occupy a somewhat different orientation in space in order to optimize their interactions with the rare earth atoms.
For the acetate ligands, our data indicate that the internal structural parameters strongly depend on their coordination modes (2.11, 2.21) and on the hydrogen bonds they are involved in; in particular, the strongly different bond angles between the oxygen atoms of the different acetate groups constitute important indicators for this assumption. Unexpected results came from the hydrogen bridging bonds of the acetic acid molecules, as their structural parameters strongly correlate with the size of the lanthanide atom, an observation that has to be confirmed by further experiments. With respect to the extrapolation of these data, it seems possible that the hydrogen bridging bonds confine the existence range of the RE(OAc)3 · 2AcOH structure type of the compounds described herein.
Effects of Binaural Spatialization in Wireless Microphone Systems for Hearing Aids on Normal-Hearing and Hearing-Impaired Listeners
Little is known about the perception of artificial spatial hearing by hearing-impaired subjects. The purpose of this study was to investigate how listeners with hearing disorders perceived the effect of a spatialization feature designed for wireless microphone systems. Forty listeners took part in the experiments. They were arranged in four groups: normal-hearing, moderate, severe, and profound hearing loss. Their performance in terms of speech understanding and speaker localization was assessed with diotic and binaural stimuli. The results of the speech intelligibility experiment revealed that the subjects presenting a moderate or severe hearing impairment better understood speech with the spatialization feature. Thus, it was demonstrated that the conventional diotic binaural summation operated by current wireless systems can be transformed to reproduce the spatial cues required to localize the speaker, without any loss of intelligibility. The speaker localization experiment showed that a majority of the hearing-impaired listeners had similar performance with natural and artificial spatial hearing, contrary to the normal-hearing listeners. This suggests that certain subjects with hearing impairment preserve their localization abilities with approximated generic head-related transfer functions in the frontal horizontal plane.
Introduction
Binaural hearing is a fundamental property of the human auditory system. Rather than simply replicating the information at each ear, it provides additional capabilities resulting from the analysis of the binaural sound differences. The different times of arrival of the acoustic signal (the interaural time difference), as well as the difference of sound pressure levels (SPLs; interaural level difference) between the left and right ears make possible the localization of sounds in the horizontal plane. They are combined with the monaural (spectral) cues, which occur at high frequencies and correspond to the effect of the torso, the head, and especially the pinna. These three localization cues are encapsulated in the head-related transfer function (HRTF), as described by Cheng and Wakefield (1999).
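As an illustrative aside, the two binaural cues described above can be estimated from a pair of ear signals in a few lines; this is a minimal sketch with hypothetical signal names, not a method used in the study.

```python
import numpy as np

def itd_ild(x_left, x_right, fs):
    """Estimate the interaural time difference (ITD, in seconds) via
    cross-correlation and the interaural level difference (ILD, in dB)
    via the RMS ratio of the two ear signals."""
    n = len(x_left)
    xcorr = np.correlate(x_left, x_right, mode="full")
    lag = np.argmax(xcorr) - (n - 1)   # positive lag: left ear arrives later
    itd = lag / fs

    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    ild = 20 * np.log10(rms(x_left) / rms(x_right))
    return itd, ild
```

For a source on the right, the left-ear signal is delayed and attenuated, so the sketch returns a positive ITD and a negative ILD; the spectral (monaural) cues encoded in the HRTF are not captured by these two scalar measures.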
It is also well-established that binaural hearing contributes to speech intelligibility in complex and noisy conditions (Carhart, 1965). This is referred to as the cocktail party effect (Bronkhorst, 2000; Hawley, Litovsky, & Culling, 2004). The spatial release from masking denotes the intelligibility gain that is observed when a spatial separation is introduced between the targeted speech signal and the masker(s) (Dirks & Wilson, 1969; Freyman, Helfer, McCall, & Clifton, 1999). It includes two components, binaural switching and binaural unmasking. The first designates the selection of, and focus on, the ear with the highest signal-to-noise ratio (SNR), due to the head shadow effect (Bronkhorst & Plomp, 1988; Culling & Mansell, 2013), while the second denotes the noise suppression mechanism resulting from the analysis of the noise pattern at both ears (Dillon, 2012; Gallun, Mason, & Kidd, 2005). Additionally, binaural localization helps identify the speaker and gives access to lip reading.
A typical example of that tradeoff can be found in FM technology (in this article, the expression "FM systems" refers likewise to devices with the old analog transmission and to the most recent ones based on a digital modulation). A usual FM unit consists of a small transmitter microphone, which picks up the voice of a speaker and sends the clean speech to a radio-frequency (RF) receiver plugged into the hearing aid (HA) of a listener via a wireless connection. Many studies have evidenced the strong intelligibility enhancements obtained with FM systems (Crandell & Smaldino, 1999; Hawkins, 1984; Lewis, Crandell, Valente, & Horn, 2004; Thibodeau, 2010, 2014), even for disorders other than hearing loss, for example, autism and hyperactivity (Schafer et al., 2013). FM systems rely on the full binaural summation of a diotic signal captured at the speaker's place. The reproduction of twice the same signal at both ears is known to increase speech intelligibility (Dillon, 2012), and FM systems take advantage of it. The counterpart is that no spatial cues are reproduced. The lack of spatial information can be partially solved in the so-called FM+M mode (as opposed to the FM-only mode), where the clean transmitted speech is mixed with the potentially noisy signal from the HA microphone, at the cost of a lower intelligibility.
The authors addressed this issue with a novel solution that preserves the high quality of the remote microphone signal, while artificially restoring the cues required for sound localization (Courtois, Marmaroli, Lissek, Oesch, & Balande, 2015a, 2015b). This process is referred to as the spatial hearing restoration feature (SHRF) in this article and is summarized in Figure 1. This figure represents a typical use case of FM systems, with two speakers wearing a remote microphone and one hearing-impaired (HI) listener equipped with hearing aids and RF receivers. The algorithm first detects and localizes the current talker, with a spatial resolution of five areas in the frontal horizontal plane (FHP). Then, the speech from the wireless microphone is spatialized in the determined position. Thus, binaural hearing can be reintroduced without mixing the FM-transmitted speech with a noisier signal. Nevertheless, it is unknown whether the change from a full binaural summation (i.e., diotic presentation) to a partial binaural summation (i.e., spatial presentation) has an effect on speech understanding.
Participants
Forty subjects took part in this study. They were arranged in four groups:

- 10 young adult NH subjects (NH, or control group) with hearing thresholds lower than or equal to 20 dB HL at both ears (125 Hz to 8 kHz),
- 10 moderate HI subjects (HI-MOD group) with pure-tone averages (PTAs, over 0.5 to 4 kHz) between 41 and 60 dB HL (all PTAs were computed at the better ear),
- 10 severe HI subjects (HI-SVR group) with PTAs between 61 and 80 dB HL,
- 10 profound HI subjects (HI-PFD group) with PTAs greater than 81 dB HL.
These categories are in agreement with the ones defined by the World Health Organization (2016). All HI subjects presented a symmetrical hearing loss that did not differ by more than 20 dB between the left and right PTAs. They were all regular users of bilateral Phonak HAs with an available direct audio input (DAI). All participants provided written informed consent and were paid for their participation. Table 1 reports statistics related to the four groups. The average hearing loss in the better ear was well centered in the intervals of each category, so as to avoid overlapping hearing losses between groups. Note that the difference in PTAs between the HI-MOD and the HI-SVR groups was approximately 20 dB, while the difference between the HI-SVR and HI-PFD groups was around 30 dB. This is because all subjects presenting a hearing loss higher than 81 dB HL were included in the HI-PFD group. The individual PTAs in this group range from 83 to 115 dB HL.

Figure 1. Principle of the SHRF that allows recovering spatial hearing in FM systems. The process combines a localization algorithm in charge of determining the position of the current talker and a binaural spatialization block to restore the corresponding spatial cues.

The average audiograms (better ear) in each group are depicted in Figure 2. Some sloping hearing losses are present in all HI groups. The overlap between the categories is small.
Hardware
A pair of HAs (Phonak Naida IX SP) with fittings matching an audiogram with no hearing loss (linear amplification) was available for the NH subjects. Once the HAs were worn, the subjects' conchae were filled with impression material in order to reduce the contribution of the direct sound in the ear canal. The HI subjects used their own HAs, with their usual settings of dynamic and frequency compression. For safety reasons, the feedback canceller was kept active as well. All the other algorithms (adaptive processing such as reverberation cancellation, noise cancellation, etc.) were switched off, and the microphone directivity was set to omnidirectional. The HI subjects kept their usual earmolds.
Before starting the experiments, the HA of the better ear of each HI subject was submitted to a short calibration, so as to characterize its IN/OUT behavior. This procedure was in agreement with the American National Standards Institute (2003), paragraph 6.15.1 ("Input-output characteristics"), except that the signal used was either a speech sequence or a speech-shaped noise (SSN), instead of a pure-tone signal. Figure 3 shows the curves of the averaged HA dynamics in the four groups, measured in the 2 cc coupler for a speech signal, as a function of the root mean square (RMS) value of the electrical signal input. The working level (black line) is around 2 mV RMS (6 dB mV), which is the standard electrical voltage at the input of the DAI when FM systems are used. The knowledge of the IN/OUT characteristics was required to ensure accurate SNR values during the intelligibility experiment. For example, in order to deliver an SNR of +3 dB, the gain applied to the masker must be −3 dB for the NH group (linear amplification), while for the HI-PFD group, a gain of −15 dB is required, due to the dynamic compression.
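To make the role of the measured IN/OUT characteristic concrete, here is a minimal sketch of how the required masker gain could be derived from such a curve; the function and array names are hypothetical and the curve is assumed monotonic, so this is not the exact procedure used in the study.

```python
import numpy as np

def masker_gain_db(target_snr_db, speech_in_dbmv, io_in_dbmv, io_out_dbspl):
    """Electrical gain (dB) to apply to the masker so that the acoustic
    SNR at the HA output matches target_snr_db.

    io_in_dbmv / io_out_dbspl: sampled IN/OUT characteristic of the aid
    (input voltage in dB mV -> output level in dB SPL); hypothetical data,
    assumed monotonically increasing.
    """
    speech_out = np.interp(speech_in_dbmv, io_in_dbmv, io_out_dbspl)
    noise_out = speech_out - target_snr_db
    # Invert the IN/OUT curve to find the input level producing noise_out.
    noise_in = np.interp(noise_out, io_out_dbspl, io_in_dbmv)
    return noise_in - speech_in_dbmv
```

With a linear aid (slope 1) a +3 dB SNR needs a −3 dB masker gain, whereas with compression the same curve inversion yields the larger electrical cut described above.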
A MOTU 896 mk3 soundcard was used to output the audio signals from the computer. These signals were transmitted to a Denon AVR 3300 amplifier via an optical connection, so as to reduce the voltage down to 2 mV RMS. A Samson S-com plus compressor/limiter was then inserted to prevent any accidental excessive SPL. It was set to trigger at voltage levels higher than the ones normally encountered in the procedure of the experiments. The attack and release times were equal to 0.3 ms and 5 s, respectively, and the ratio was turned to its "infinite" value (limiter operating mode). Finally, the signals were sent to the DAI of the HAs or to five Tannoy Reveal Active loudspeakers. The electric path stood for the FM-transmitted sound, while the loudspeakers (acoustic path) served for the experiments in the FM+M configuration. The electrical path was delayed to partly compensate for the acoustic time of flight, so that the delay between the FM-transmitted signal and the HA microphone signal was between 2 and 4 ms, as measured in the 2 cc coupler. The FM-transmitted speech was always rendered first, so that the localization was dominated by the spatialization processing (see Cranford, Andres, Piatz, & Reissig, 1993, for a review of the precedence effect). In real applications of FM systems, it might happen that the sound picked up by the HA microphone arrives first, in the case of short speaker-to-listener distances, but the FM-transmitted speech is always reproduced louder (typically 10 to more than 20 dB higher, the so-called FM advantage; Thibodeau, 2014). This is to guarantee a high intelligibility even in very noisy situations. The consequence is that the speech localization is usually governed by the FM-transmitted speech.
Stimuli
The SHRF splits the FHP into five so-called spatial sectors. This is the actual spatial resolution of the localization and spatialization processing. One sector stands for the central positions of the speaker (−20° to 20°), while two lateral sectors are available on the left and right sides. These are the "intermediate" sectors that cover the speaker positions from 20° to 40° (resp. −20° to −40° on the left) and the "extreme" sectors corresponding to the speaker positions between 40° and 90° (resp. −40° to −90°). The FM-transmitted speech is spatialized at 0° in the central sector, at ±30° in the intermediate sectors, and at ±65° in the extreme sectors, using 10th-order minimum-phase finite impulse response filters. Figure 4 displays the gain of those filters as a function of the frequency (blue and red solid lines), for a sound spatialized at −65°. Their frequency-gain responses are compared with the original HRTFs that were used to design the filters at this azimuth (dashed lines). These HRTFs were taken from the CIPIC database by Algazi, Duda, Thompson, and Avendano (2001), using Subject 21 (KEMAR with large pinna) at 0° elevation. The bandwidth has been reduced to 8 kHz (the usual bandwidth of FM systems). To avoid excessive binaural gain differences that would reduce the benefits coming from the full binaural summation in conventional FM systems, some amplitude limitation was included in the spatial filters, so that the interaural gain difference reached 20 dB at most. This processing has been described and validated by Courtois et al. (2016) on 38 NH listeners. Two French databases of speech content were used: the HINT database by the Collège National d'Audioprothèse (2006) for the intelligibility experiment and the SUS database by Raake and Katz (2006) for the localization experiment. The first consists of meaningful and phonetically balanced sentences with four to seven words. The postulate was to resort to meaningful material in order to get closer to speech understanding in real life, where listeners can exploit their cognitive abilities to guess possible missing words. On the contrary, the second database is composed of semantically unpredictable (i.e., meaningless) sentences. This material was preferred to ensure that the subjects did not concentrate on the content, but rather on the direction of arrival (DOA) of the voice. The speech stimuli were spatialized with the spatial filters included in the SHRF. Prior to the experiments, the long-term RMS value of an SSN had been measured in the 2 cc coupler when it was either diotic or spatialized and played via the DAI. The levels had been adjusted until the respective computed loudnesses were the same, according to the model of Glasberg and Moore (2002). The other spatialized directions were supposed to yield the same binaural SPL as the one at 0° (Begault & Erbe, 1994). This procedure ensured an equal loudness of the diotic and spatialized speech.
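For illustration only (this is not the authors' implementation), the following sketch limits the interaural gain difference of two HRTF magnitude responses to 20 dB and derives a short minimum-phase FIR filter from each magnitude via the standard real-cepstrum construction; the FFT grid, the synthetic magnitudes, and the symmetric application of the limit are assumptions.

```python
import numpy as np

def limit_interaural_gain(mag_far, mag_near, max_diff_db=20.0):
    """Clip the interaural gain difference of two linear-magnitude
    spectra to +/- max_diff_db, pulling both ears toward each other."""
    diff_db = 20 * np.log10(mag_far / mag_near)
    excess = np.maximum(np.abs(diff_db) - max_diff_db, 0.0)
    corr = 10 ** (np.sign(diff_db) * (excess / 2) / 20)
    return mag_far / corr, mag_near * corr

def minimum_phase_fir(mag, n_taps):
    """Short minimum-phase FIR whose magnitude approximates `mag`
    (sampled on a full, Hermitian-symmetric FFT grid of length N)."""
    n = len(mag)
    cep = np.fft.ifft(np.log(np.maximum(mag, 1e-8))).real
    w = np.zeros(n)                    # cepstrum folding window
    w[0] = 1.0
    w[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        w[n // 2] = 1.0
    h = np.fft.ifft(np.exp(np.fft.fft(w * cep))).real
    return h[:n_taps]                  # truncation = extra approximation

# Example: synthetic magnitude responses on a 256-point grid.
f = np.fft.fftfreq(256)
mag_near = np.ones(256)                 # near ear: flat response
mag_far = 10 ** (-50 * np.abs(f) / 20)  # far ear: up to -25 dB head shadow
mag_far, mag_near = limit_interaural_gain(mag_far, mag_near)
h_far = minimum_phase_fir(mag_far, n_taps=11)   # 10th order = 11 taps
h_near = minimum_phase_fir(mag_near, n_taps=11)
```

A minimum-phase design is a natural choice here because it concentrates the filter energy at the start of the impulse response, which keeps the added latency low; the interaural time difference can then be imposed separately as a plain delay.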
The masker was a mixture of five uncorrelated SSNs having the same long-term spectrum as the HINT corpus. Each of these five noises was spatialized in one of the five sectors. Hence, the spatialized speech signal was always colocated with one of the five noise signals. This was to ensure a limited contribution of the spatial release from masking in the intelligibility assessment, so that the potential effect of the processing on speech understanding would be primarily attributed to the change from a diotic to a dichotic presentation.
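A masker of this kind could be assembled as sketched below, reusing per-sector binaural filters such as the hypothetical ones from the previous sketch; all names are placeholders.

```python
import numpy as np
from scipy.signal import lfilter

def build_masker(ssn_list, h_left, h_right):
    """Binaural masker: five uncorrelated speech-shaped noises (SSNs),
    each spatialized in one spatial sector and summed per ear.

    ssn_list: five mono noise arrays of equal length (uncorrelated).
    h_left, h_right: five FIR filters per ear, one per sector
    (e.g., the hypothetical minimum-phase filters sketched above).
    """
    left = sum(lfilter(h, [1.0], x) for h, x in zip(h_left, ssn_list))
    right = sum(lfilter(h, [1.0], x) for h, x in zip(h_right, ssn_list))
    return left, right
```

Because one noise component sits in every sector, the target speech is always co-located with part of the masker, which is what limits the contribution of spatial release from masking in the intelligibility scores.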
For the NH group, the listening level was set to 65 dB SPL. The gain of the HAs had thus been adjusted so that an electrical input at 2 mV RMS corresponded to 65 dB SPL in the ear canal. For the HI subjects, the same input voltage was used, and the listeners' personally fitted gain delivered the speech at a comfortable acoustic level. A variation of ±8 dB around the 2 mV voltage was possible if requested by the subject. When the FM+M configuration was considered, the stimuli were simultaneously played via the DAI and through the loudspeakers. Both paths contributed equally to obtain 65 dB SPL at the output of the HA.
Procedure
Intelligibility experiment. The goal of this experiment was to evaluate the impact of the SHRF on the understanding of speech. More precisely, it was intended to study the effect of the change from a diotic (i.e., full binaural summation) to a spatialized (partial binaural summation) sound reproduction. It was performed in two different configurations, corresponding to the FM-only and FM+M modes, respectively. In the first configuration (FM-only), some stimuli were randomly spatialized in one of the five spatial sectors, while some others were left unprocessed (i.e., diotic). In the second configuration (FM+M), some stimuli were simultaneously spatialized in one of the five locations (FM path) and played in the corresponding loudspeaker, to be captured by the HA microphones (M path). When the stimuli were rendered diotic, a random loudspeaker played the sound at the same time.
The listeners sat in the center of the room (background noise < 25 dBA, RT60 = 0.17 s, volume = 125 m³).
They were asked to look straight ahead, and their head was immobilized by a chin rest. For both configurations, the sentences were played at three different SNRs. The masking noise was always input via the DAI, and all SNRs were adjusted by varying the noise level, while keeping the speech level fixed. The same masking noise (i.e., the mixture of the five spatialized SSNs) was used for both the diotic and spatialized sentences in the two configurations. There were two sentences per direction and three diotic ones, giving a total of 39 sentences (3 SNRs × [3 diotic + 2 × 5 directional conditions]). The processing of every sentence (i.e., diotic or spatialized locations) was randomized within and across subjects. After each sentence, the listeners were asked to repeat what they understood, and the correct words were collected to derive the speech recognition score (SRS), that is, the percentage of correctly repeated words in the sentence. The NH and HI subjects were not tested with the same SNRs. The NH group experienced SNRs of −10 dB, −13 dB, and −16 dB. For each HI listener, the procedure suggested by Lewis et al. (2004) was adopted and adapted to the experiment. The examiner fixed a starting SNR for the 13 first sentences, equal to −6 dB for the HI-MOD listeners, −3 dB for the HI-SVR listeners, and 0 dB for the HI-PFD listeners. Then, depending on the results, two other SNRs were derived in 3-dB steps, such that:

- SNR_HIGH yielded the best intelligibility score,
- SNR_MID yielded an intermediate intelligibility score,
- SNR_LOW yielded the worst intelligibility score.
For example, if a moderate HI listener reached an SRS of about 50% after the 13 first sentences (SNR at −6 dB), the two other tested SNRs were −3 dB and −9 dB. In the case where a listener provided an intelligibility score close to 0% at the first SNR, he was then submitted to SNRs of −3 dB and 0 dB. Moderate listeners presenting a high intelligibility performance with the initial SNR (i.e., an SRS near 100%) experienced following SNRs of −9 dB and −12 dB. This procedure was chosen because it avoids encountering undesired floor or ceiling effects (i.e., 0% or 100% intelligibility scores). In the HI-PFD group, only six profound listeners managed to pass the experiment, while the four others could not understand any word, even when no noise was added.
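The adaptive choice of the two additional SNRs can be summarized by a small helper function; the score thresholds below are illustrative assumptions, not values given in the paper.

```python
def choose_test_snrs(start_snr_db, first_block_srs, step_db=3.0):
    """Return (SNR_HIGH, SNR_MID, SNR_LOW) around the starting SNR.

    first_block_srs: speech recognition score (0..1) obtained with
    start_snr_db on the first 13 sentences. Thresholds are illustrative.
    """
    if first_block_srs >= 0.9:
        # Near ceiling: probe two harder (lower-SNR) conditions.
        return start_snr_db, start_snr_db - step_db, start_snr_db - 2 * step_db
    if first_block_srs <= 0.1:
        # Near floor: probe two easier (higher-SNR) conditions.
        return start_snr_db + 2 * step_db, start_snr_db + step_db, start_snr_db
    # Intermediate score: bracket the starting SNR by one step each way.
    return start_snr_db + step_db, start_snr_db, start_snr_db - step_db

# A moderate HI listener scoring ~50% at -6 dB is then tested at -3 and -9 dB.
print(choose_test_snrs(-6.0, 0.5))  # (-3.0, -6.0, -9.0)
```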
Each configuration started with a training period of six test sentences (one diotic + one in each sector), so that the listeners could get used to the procedure and hear the various spatial conditions once. The subjects were not aware of the actual start of the experiment after this training time.
Localization experiment. The objective of the localization experiment was to evaluate the effect of the binaural spatialization, as introduced by the SHRF, on the localization performance of NH and HI listeners. This was done in four configurations:

- Configuration 1 (Unaided): The subjects were tested without their HAs. The acoustic level was adjusted for each HI listener to be sufficiently loud.
- Configuration 2 (Aided): The subjects were tested with their HAs and usual fittings.
- Configuration 3 (FM-only): The subjects were tested when the spatialization was played back via the DAI only.
- Configuration 4 (FM+M): The subjects were tested when the spatialization was rendered via the DAI and simultaneously through the corresponding loudspeaker.
The listeners could not see the loudspeakers, which were hidden by a black curtain, as shown in Figure 5. Nine numbers from −4 (left) to 4 (right) were displayed on the curtain, corresponding to azimuths of 0°, ±15°, ±30°, ±45°, and ±65°. This procedure was identical in the four configurations.
In this experiment, three sentences were played in each direction, resulting in a total of 15 sentences per configuration. The processing of every sentence (i.e., the spatialized locations) was randomized within and across subjects. After each sentence, the listeners were asked to indicate the perceived location of the sound source by reporting the number corresponding to the incidence direction (see Figure 5). Then, the localization error was computed as the difference between the number reported by the listener and the number of the actual loudspeaker emitting the sound. Note that the localization error was not computed in degrees, due to the coarse resolution of the SHRF and the fact that the sectors do not span the same angular range. The listeners were made aware that all the available azimuths might not be played.
They could also answer that they perceived the sound from none of those directions (e.g., above, behind, etc.).
All configurations started with a training period of five test sentences, one in each spatialized direction, so that listeners could get used to the procedure and hear the various spatial conditions once. The subjects were not aware of the actual start of the experiment after this training time. A roving level of ±3 dB among stimuli was implemented throughout the experiment. This was to minimize the risk that the listeners rely on potential differences of loudness among locations to infer the position of the sound source; see, for example, Noble et al. (1997), Keidser et al. (2006), and Majdak, Walder, and Laback (2013). All HI subjects managed to perform this test.

Table 2 reports the average SNRs experienced by the listeners in the four groups in the HIGH, MID, and LOW intelligibility conditions. The performance of the NH listeners was tested at fixed SNRs, while individualized SNRs were used for the HI subjects. While the average step between the three conditions is around 3 dB for the HI-MOD and HI-SVR groups, similarly to the NH group, this step is reduced to 2 dB in the HI-PFD group. This is because the profound HI listeners presented a narrow SNR interval between a satisfying and a null speech understanding, limiting the SNR range that could be tested.

Figure 6 shows the SRS in the four groups for the three tested SNRs (HIGH in blue, MID in orange, and LOW in red) in the FM-only (a) and in the FM+M (b) configurations, averaged over all rendering types (diotic and spatialized). The absolute scores must not be compared between groups because the SNRs were specific to each HI listener. The outcomes of a two-way repeated measures analysis of variance (ANOVA) investigating the influence of the SNR and configuration are reported in Table 3. The statistical analysis revealed that there was a significant effect of the SNR on speech understanding. Specifically, speech intelligibility decreased when the SNR diminished for all groups, despite the strong intragroup variations that could be observed in the HI groups, most probably due to the procedure of individualized SNRs. In the NH, HI-MOD, and HI-SVR groups, three different distributions of SRS can be distinguished, with limited overlap. The test failed to show any statistical influence of the configuration. There was no interaction effect between both factors in any group.
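For readers who want to run the same kind of analysis, a two-way repeated measures ANOVA with SNR and configuration as within-subject factors can be set up with statsmodels as sketched below; the data frame layout, column names, and scores are hypothetical.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one row per subject x SNR x configuration,
# with 'srs' holding the speech recognition score averaged over renderings.
df = pd.DataFrame({
    "subject": [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2],
    "snr":     ["HIGH", "MID", "LOW"] * 4,
    "config":  ["FM-only"] * 3 + ["FM+M"] * 3
               + ["FM-only"] * 3 + ["FM+M"] * 3,
    "srs":     [95, 72, 31, 93, 70, 28, 97, 68, 35, 94, 71, 30],
})

res = AnovaRM(df, depvar="srs", subject="subject",
              within=["snr", "config"]).fit()
print(res)  # F and p values for snr, config, and their interaction
```

AnovaRM requires a balanced design, i.e., exactly one observation per subject and cell, which matches the averaging over rendering types described above.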
Intelligibility and Rendering
Figure 7 displays the SRS distributions obtained with the usual diotic rendering (yellow) and the spatial processing from the SHRF (green), in the FM-only (a) and FM+M (b) configurations. The SRS is presented for each group and averaged over all SNRs. In the FM-only configuration, the graph suggests an improvement of the intelligibility with the SHRF in the NH, HI-MOD, and HI-SVR groups. In the FM+M configuration, one can suspect an enhancement of the speech understanding for the moderate HI subjects. On the contrary, a diminution of the intelligibility might occur in the HI-PFD group when the SHRF is applied.
Tables 4 and 5 report the results from the two-way repeated measures ANOVA investigating the influence of the rendering and the interaction effect between the rendering and the SNR on the speech intelligibility in the FM-only and FM+M configurations, respectively. The goal was to test the alternative hypothesis that the spatialization feature does improve speech intelligibility, against the null hypothesis assuming that it has no influence. If the alternative hypothesis was accepted, it was also desired to know whether the benefit coming from
the SHRF appeared in certain SNRs specifically. In the FM-only configuration, the analysis confirmed a significant enhancement of the speech understanding performance for the moderate and severe HI listeners, while this was not observed in the NH and HI-PFD groups. In the FMþM configuration, a significant effect of the SHRF on the intelligibility was present in the HI-MOD group only. No interaction effect between the rendering and the masker level was found in any case.
Intelligibility and DOA
The influence of the DOA of the speaker's voice on the SRS was investigated. A sequence of one-way repeated measures ANOVAs was performed to look for significant effects of the spatial sectors in the results of the four groups. In the FM-only configuration, a statistical influence was found for the HI-SVR group, F(5, 45) = 2.998, p = .020. In the FM+M configuration, there was a statistical effect for the moderate HI listeners, F(5, 45) = 2.579, p = .039. However, no effect remained after applying a Bonferroni correction for multiple comparisons.
Localization and Configuration
The localization error in the four configurations is presented in Figure 8. Considering the NH group, the figure suggests that there exist some significant differences between the configurations. This was confirmed by a one-way repeated measures ANOVA, F(3, 27) = 3.837, p < .05. A one-tailed Bonferroni post hoc test indicated that there was a degradation of the localization performance between the unaided and FM-only configurations (p = .036). In the three HI groups, no significant difference was found between the configurations; a Greenhouse-Geisser correction was applied for the HI-SVR group due to a violation of the sphericity assumption.
Localization and DOA
Reports of perceiving the source from above or behind represented 2.8% of the total answers and were mostly encountered in the HI-PFD group.
Localization and Group
Contrary to the intelligibility test, it is possible to compare the localization performance between groups, as shown in Figure 8. To this end, several one-way between-subjects ANOVAs were conducted. First, the HI-PFD group was not considered in the statistical analysis, as it yielded a violation of the assumption of variance homogeneity. The null hypothesis stated that there was no significant difference in localization performance between the three other groups, while the alternative hypothesis claimed that there were significant variations with the degree of hearing loss. Statistical effects were found in the unaided, F(2, 27) = 7.453, p = .002, aided, F(2, 27) = 2.729, p = .037, and FM+M, F(2, 27) = 4.664, p = .009, configurations. Tukey's one-tailed post hoc tests showed a significant increase of the localization error between the NH and HI-MOD groups (p = .006) and between the NH and HI-SVR groups (p = .002) in the unaided configuration. Then, only statistical differences between the NH and HI-SVR groups were observed in the aided (p = .031) and FM+M (p = .013) configurations. Comparing the two HI groups, a significant degradation of the performance arose between the moderate and severe HI subjects in the FM+M configuration (p = .024). No significant differences were found between the three groups in the FM-only configuration.
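A between-groups comparison of this kind can be reproduced with SciPy and statsmodels as sketched below, using synthetic localization errors; note that pairwise_tukeyhsd performs two-sided tests, whereas the paper reports one-tailed post hoc tests, so this is only an approximation of the procedure.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic localization errors (sector numbers) for 10 subjects per group.
rng = np.random.default_rng(1)
nh, mod, svr = rng.normal(loc=[0.3, 0.7, 0.9], scale=0.2, size=(10, 3)).T

print(f_oneway(nh, mod, svr))             # one-way between-subjects ANOVA

errors = np.concatenate([nh, mod, svr])
groups = ["NH"] * 10 + ["HI-MOD"] * 10 + ["HI-SVR"] * 10
print(pairwise_tukeyhsd(errors, groups))  # pairwise post hoc comparisons
```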
The analysis of the results in the HI-PFD group required resorting to another procedure because the data did not fulfill the assumption of variance homogeneity. A one-way between-subjects ANOVA including a Brown-Forsythe correction for unequal variances showed a significant degradation of the localization performance between the severe and profound HI subjects in all configurations.
Age and PTA
Although there was a high variability of age between subjects in each group, as described in Table 1, no significant correlation between age and performance was found in the intelligibility experiment for either configuration (HI-MOD: r = −.098, N = 10, p = .787; HI-SVR: r = −.160, N = 10, p = .658; HI-PFD: r = .128, N = 6, p = .810) or in the localization experiment for any configuration (HI-MOD: r = .030, N = 10, p = .935; HI-SVR: r = .209, N = 10, p = .562; HI-PFD: r = .227, N = 10, p = .528). Nevertheless, a significant correlation was found between the PTAs at the better ear and the localization performance in the four configurations (r = .683, N = 40, p < .001), as could be expected.
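The reported correlations are plain Pearson correlations, which can be computed with SciPy; the per-subject arrays below are synthetic stand-ins for the actual PTA and localization data.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-subject data (N = 40): better-ear PTA in dB HL and
# mean localization error in sector numbers, pooled over configurations.
rng = np.random.default_rng(0)
pta = rng.uniform(0, 110, size=40)
loc_error = 0.02 * pta + rng.normal(0, 0.4, size=40)

r, p = pearsonr(pta, loc_error)
print(f"r = {r:.3f}, p = {p:.3g}")
```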
Intelligibility and SNR
The procedure adapted from Lewis et al. (2004) has yielded a powerful way of conducting intelligibility experiments, by finding the adequate SNRs for every HI subject, while avoiding some undesirable floor and ceiling effects. Lower SNRs produced worse speech understanding performance in each group, as expected. In the FM-only configuration, the general distribution of the SRS with the various SNRs follows the same trend in the NH, HI-MOD, and HI-SVR groups, with median scores around 95%, 70%, and 30% for the HIGH, MID, and LOW SNRs, respectively. This is consistent with the fact that these three groups experienced the same steps of 3 dB between each SNR.
The variations of the performance within the HI-MOD and HI-SVR groups appear to be much higher than in the NH group. The procedure of customized SNRs across the HI subjects is hypothesized to be the main reason for this (see Table 2). In fact, the SNRs experienced by the subjects in the HI-MOD group varied from 0 to −9 dB for the HIGH SNR, from −3 to −13 dB for the MID SNR, and from −6 to −15 dB for the LOW SNR, depending on their individual performance. In the HI-SVR group, they varied from 3 to −3 dB (HIGH), from 0 to −6 dB (MID), and from −3 to −9 dB (LOW). Additionally, it is well known that performance between HI subjects that have similar PTAs can dramatically differ, because the PTA is not always well correlated with speech perception (Smoorenburg, 1992).
The six subjects presenting a profound hearing impairment were more sensitive to changes of the masker level than the other HI subjects. Indeed, a variation of the SNR by 3 dB could make their speech intelligibility performance fall from 100% to 0%, somewhat reducing the efficiency of the individualized-SNR procedure. This is illustrated in Figure 6, where the distributions of the SRSs in the HI-PFD group strongly overlap. It has been evidenced by Duquesnoy and Plomp (1983) that the detrimental effect of interfering noise on speech understanding becomes stronger and faster as the PTA of HI subjects rises. The results obtained in this intelligibility experiment are in agreement with their conclusions.
Intelligibility and Rendering
One of the main results of this study is that the SHRF significantly improved the speech intelligibility of the subjects presenting a moderate hearing impairment in both the FM-only and FM+M configurations, by an average amount of 9%. In more detail, 9 out of 10 subjects experienced an increase of the intelligibility between 2% and 17%, while only one subject presented a marginal loss of 0.9%. In the FM-only configuration, all the severe HI subjects improved their speech understanding performance with the SHRF, from 0.5% to 18%. Although there was no overall significant effect of the SHRF in the FM+M configuration, the individual results showed that only half of the subjects still benefited from the spatial processing (range +1% to +31%), while the five others experienced a degradation of their speech intelligibility (−2% to −13%). No general conclusion could be drawn from the NH and HI-PFD groups. In the latter, three of six subjects (FM-only mode) and two of six subjects (FM+M mode) experienced a gain in speech understanding with the SHRF.
The reported results show that the restoration of spatial hearing as achieved by the SHRF enhances the speech understanding performance of a strong majority of the subjects presenting a moderate or severe hearing loss. This cannot be attributed to the introduction of a spatial separation between the targeted speaker and the masking noise, since the masker was spatially distributed all over the FHP. Furthermore, the procedure of loudness compensation that was performed between the diotic and spatialized rendering ensured that this outcome was not the consequence of higher SNRs artificially introduced by the SHRF. It rather means that the full binaural summation, which is achieved by conventional FM systems, can be slightly modified to incorporate the binaural cues corresponding to the position of the speaker, without lowering the speech intelligibility. Here, the transition from a diotic to a binaural rendering was handled cautiously, with the use of amplitude-limited HRTFs, so that the gain difference applied between the two HAs never exceeded 20 dB (see Courtois et al., 2016, for more detail).
The second conclusion that can be drawn from this experiment is that the improvements offered by the SHRF may be reduced when the processed speech is mixed with the acoustic signals captured by the HA microphone. It is well established that the FM+M configuration tends to reduce the speech perception performance that can be obtained when the FM-transmitted voice is played alone (FM-only; Thibodeau, 2010, 2014), because the "M path" adds part of the acoustic noise present in the environment. However, the SNR was kept similar in both configurations of this intelligibility experiment, and the masker was played through the DAI only. One can hypothesize that this observation might rather be due to the interaction between the artificial spatialization coming from the SHRF and the natural spatial hearing provided by the HA microphones. Depending on the resemblance between the HRTFs of the subjects and the ones used to design the spatial filters in the SHRF, some conflicting binaural information could lead to deteriorated speech understanding. This may also explain the strong disparity of performance that was observed between subjects within each group.
Localization and Configuration
The experiment showed that the localization performance of the NH group was degraded in the FM-only configuration (spatialization based on generic HRTFs with the SHRF). In more detail, all the NH listeners experienced an increase of their localization error by an average factor of 3.5, compared with the unaided configuration. This does not match the results reported by Wenzel et al. (1993) and Begault et al. (2001), who evidenced that NH listeners do not need to hear with their own HRTFs to preserve their localization abilities in the FHP. Drullman and Bronkhorst (2000) observed no difference in the localization performance of NH listeners in the FHP, despite a bandwidth reduced to 4 kHz. Similar outcomes were mentioned by Majdak et al. (2013) for the entire 3D space, with band-limited HRTFs (8.5 kHz). However, the spatial filters available in the SHRF are only approximations of the original generic HRTFs, due to the 10th-order filter design and the gain limitation (Figure 4). It is likely that such alterations in the quality of the spatial rendering were sufficient to lower the performance of the NH listeners. The use of HAs (instead of, e.g., headphones) to present the artificial spatialization may be responsible for this deterioration as well, since headphones provide a better sound quality and higher fidelity. Finally, it must be noted that the spatial resolution in the present investigation was considerably coarser than in the aforementioned studies. It is likely that long-term training with these spatial filters, so that subjects learn to combine visual and auditory stimuli, would result in better localization performance, as shown by, for example, Mendonca et al. (2012) and Majdak et al. (2013). Thus, this result does not exclude the use of the SHRF by NH subjects using FM systems.
The statistical analysis did not allow general conclusions to be drawn on the effect of the SHRF on the localization performance in the HI groups. Looking at the individual results, it appeared that 15 of the 30 HI subjects experienced an improvement of their localization abilities with the processing achieved by the SHRF, compared with the aided condition (i.e., no spatialization). Five were in the HI-MOD group, three in the HI-SVR group, and seven in the HI-PFD group. Interestingly, the reintroduction of the HA microphone signals in the FM+M configuration increased the localization error again in 10 of them. Additionally, the performance of five subjects was similar between the natural and artificial spatial rendering. Two hypotheses can be established from these results. First, it seems that a significant part of the HI subjects did not need their own HRTFs to localize sounds in the FHP. Second, the SHRF might provide more precise localization cues than the usual HA sound reproduction, especially for those with a high degree of hearing loss. However, in 10 of the 30 tested HI subjects, the localization error rose when the SHRF was activated. As in the intelligibility experiment, this might be associated with subjects whose HRTFs differ markedly from the generic ones used to design the spatial filters. The continuous training that would naturally occur if the SHRF were used with the available visual cue should provide enhancements of the localization performance. Brungart et al. (2017) were among the first to evaluate the effect of artificial spatialization on HI listeners. They demonstrated that both NH and HI subjects performed better with the natural rather than the artificial spatial hearing in the entire horizontal plane. However, it is difficult to compare their results with the ones reported here, since the protocol and the way of assessing the performance were substantially different. Indeed, Brungart et al. (2017) tested their subjects without hearing aids, by comparing their unaided localization performance with the one obtained by virtual playback through headphones, and their head movements were tracked to update the spatialization processing in real time. When looking at the detailed results, it appears that a large part of the localization error was due to front/back and back/front reversals, which were not investigated in the present study. It is therefore inappropriate to state that the observations reported in the current experiment contradict the ones from Brungart et al. (2017). The use of artificial spatialization in HI listeners is still at an early stage, and much more research is required to draw general conclusions.
Localization and DOA
The analysis of the effect of the DOA on the results revealed some differences in performance between the spatial sectors. The localization error was significantly worse in the intermediate sectors than in the central or extreme ones in several conditions. Many listeners reported that it was difficult to make a choice between the directions 1, 2, and 3 (resp. −1, −2, and −3 on the left side). It was expected that the localization performance would decrease as the source moved from the frontal to the lateral azimuths, as a consequence of the spatial resolution of the human auditory system (Blauert, 1997). Yet, the results showed that the accuracy was better in the extreme sectors than in the intermediate ones. This was most probably due to a bias in the protocol, because the listeners could not report a perceived position beyond −4 and +4. With the headrest, the subjects were barely able to see the number ±4. Hence, no additional answer positions (e.g., ±5) could have been added without enabling head motions. The effect of this bias must be tempered by the fact that the listeners were allowed to report a perceived DOA matching none of the available positions, and this type of answer remained extremely rare (2.8% of the total number of reported locations); 66% of those responses occurred in the HI-PFD group, where no significant difference was found between the intermediate and extreme sectors.
The localization experiment showed that the NH subjects performed significantly better than the HI subjects in the unaided configuration, despite the SPL compensation. More precisely, the average localization error of the NH group was multiplied by 3.4 in the HI-MOD group, by 3.7 in the HI-SVR group, and by 15 in the HI-PFD group. When subjects were equipped with the HAs, this discrepancy diminished, and no significant difference in the localization performance remained between the NH and the HI-MOD groups. However, it cannot be concluded from these results that the use of HAs restored the localization abilities of the moderate HI subjects to some "normal" performance, because a majority of NH subjects experienced more difficulty localizing the speaker when wearing the HAs. This can be attributed to the fact that the NH subjects had no time to acclimatize to the unusual listening condition they encountered in this configuration, that is, with conchae closed, pinna filtering bypassed by the microphone location, and sound played back through HAs. When looking at the individual results, it appeared that half of the 30 HI subjects performed better without their fitted HAs than with them.
Comparing the HI groups, it was shown that the moderate and severe HI subjects showed similar performance, except in the FM+M configuration. In the HI-PFD group, the localization error rose considerably. Wiggins and Seeber (2012) showed that the auditory system is still capable of adapting and preserving correct localization performance to a certain extent, especially for broadband stimuli. This might explain why the subjects in the HI-MOD and HI-SVR groups localized sounds with similar accuracy. However, this adaptation seems to be insufficient to maintain satisfying localization performance when the hearing loss reaches high degrees.
Study Limitations
One objective of this study was to arrive at conclusions that would be somewhat generalizable to various kinds of HI subjects. The main subject-dependent factor retained in the protocol was the degree of the hearing loss, but the dispersion of the performance inside each category, as well as the limited number of effects with statistical significance that were found, suggest that other factors should be considered in further investigations. These may include the age of the subjects, the origin of their hearing loss (congenital vs. presbycusis), the degree of symmetry between both ears, and so forth. It has been shown that the processing of speech cues, such as the temporal fine structure analysis (Hopkins & Moore, 2011), the neural representation (Tremblay, Piskosz, & Souza, 2003), or the binaural interactions (Neher, 2017), gets poorer with age, even though it is difficult to clearly distinguish between the effects of aging and age-related hearing loss on speech understanding (Divenyi, Stark, & Haupt, 2005; Plomp & Mimpen, 1979). Due to a disordered processing of the interaural time difference, interaural level difference, and precedence effect, the localization of sound events is also slightly, but significantly, worse in the elderly (Abel, Giguère, Consoli, & Papsin, 2000; Cranford et al., 1993; Dobreva, O'Neill, & Paige, 2011). Nevertheless, no effect of age was found in the localization experiment of the current study, and the worsening of the performance was primarily attributed to hearing loss.
Another limitation of this study was the choice to keep the dynamic and frequency compressions as they were fitted for each HI listener. This prevented clarifying their respective effects on the observed intelligibility and localization performance. Although these processes are known to distort the reproduction of the binaural and spectral cues (see e.g., Keidser et al., 2006; Van den Bogaert et al., 2009; Wiggins & Seeber, 2012), they also bring a proven advantage for speech understanding in aided subjects (Bohnert, Nyffeler, & Keilmann, 2010; McCreery et al., 2014; Moore, Peters, & Stone, 1999). Two motivations supported their preservation. First, the tested HI subjects were accustomed to hearing with their own fittings and may have been disturbed if that processing had been switched off. Second, the SHRF would always be followed by frequency and dynamic compressions, as performed by the HA. Therefore, the reported protocol came closer to real-life listening conditions.
The lack of realism of certain listening scenarios can be seen as a drawback of this study. Although absent in nature, the SSN was chosen since it is frequently used in the literature (see e.g., Culling & Mansell, 2013; Drennan, Gatehouse, Howell, Van Tasell, & Lund, 2005; Duquesnoy & Plomp, 1980; George, Festen, & Houtgast, 2006) and known as the most difficult masker to cope with (Lewis et al., 2004). A more realistic interfering noise could have been an isotropic babble noise. The presentation of the masker via the DAI rather than through loudspeakers also appears unrealistic and was motivated by two reasons. First, it avoided the occurrence of any feedback, even at high noise levels. Second, the precision of the desired SNR was considerably better through the DAI, thanks to the prior measurements of the IN/OUT characteristics of the HAs of each patient. Finally, the use of static spatialization presentations contributed to an unnatural sound reproduction as well. The use of a head tracker, combined with dynamic spatialization processing, would have provided more realistic listening scenarios. A significant number of severe and profound HI subjects emphasized that lip reading constituted a prominent help for understanding speech in their daily life, while they could not resort to this cue in the intelligibility experiment. Audiovisual stimuli including a movie showing the face and lips of the speaker should be considered as a complement to the voice in further research, as done by Macleod and Summerfield (1990) and Beskow et al. (1997). An evaluation of the time required by subjects to identify and gain access to lip reading should be performed as well. In a complex auditory scene including background noise or several potential talkers, it is expected that this duration could be significantly reduced with the SHRF, leading to higher speech intelligibility.
Conclusion
This article addressed the topic of artificial spatialization perception by aided HI subjects. Specifically, a novel feature designed to restore binaural hearing within FM systems was evaluated in terms of speech intelligibility and speaker localization. Several conclusions could be drawn from this study:
- The SHRF did improve the speech understanding of the tested HI subjects presenting a moderate or severe hearing loss, when the FM-transmitted voice was played back alone (FM-only configuration). This means that the conventional full binaural summation operated by current FM systems can be transformed to incorporate the binaural cues required to localize the speaker, without any loss of intelligibility. The advantage obtained with the SHRF was lost with certain subjects when the spatialized speech signal was mixed with the acoustic signals picked up by the HA microphones (FM+M configuration);
- The spatial hearing provided by the SHRF decreased the localization performance of all tested NH listeners, but it is uncertain whether this was due to the use of hearing aids by inexperienced subjects or to the spatial processing itself. No general conclusion could be drawn for the HI subjects, but a majority of them improved or preserved their localization abilities with the spatialization processing. This suggests that HI subjects are less sensitive than NH listeners to hearing with approximated generic HRTFs;
- The human auditory system is able to adapt to hearing impairment and maintain satisfying localization performance up to a certain degree of hearing loss, above which the localization abilities fall dramatically;
- Intelligibility experiments involving severe-to-profound HI listeners should include audiovisual stimuli to allow for lip reading. In this study, such an approach could have highlighted whether the SHRF provides an advantage in the time required for speaker identification, and thus in the access to lip reading.
Ethics
The protocol of this study has been validated by the Cantonal Office of Public Health of the Canton de Vaud (CER-VD), Switzerland, and registered under the identifier NCT02693704 on https://clinicaltrials.gov, where all results can be found.
Post-harvest Soil Available Nutrient Status and Microbial Load as Influenced by Graded Levels of Nitrogen and Biofertilizers
Introduction
Maize (Zea mays L.) is a miracle and industrial crop. It is also called the "queen of cereals" for its high productive potential relative to other cereal crops. It is a C4 plant that utilizes inputs effectively and responds well to growth resources. It is an exhaustive and nitropositive crop that needs a large quantity of nitrogen to reach its maximum yield potential. Nitrogen plays a dominant role in the growth and development as well as the yield of maize. The escalating cost of chemical fertilizers has led to considerably lower net returns, and the continuous application of fertilizers alone in an agricultural system deteriorates soil health and negatively impacts crop productivity (Kannan et al., 2013). Biofertilizers can either fix atmospheric nitrogen for the plant or mobilize unavailable phosphorus, potassium and zinc into the available pool. Low-cost and eco-friendly biofertilizers have tremendous potential for supplying nutrients. Azospirillum is known to fix atmospheric nitrogen and to increase grain yield in maize by 10-15 per cent (Patil et al., 2001). In view of the above, a field experiment was conducted to identify the optimum nitrogen level along with a suitable biofertilizer for kharif maize in a sandy clay loam soil.
Materials and Methods
A field experiment was conducted on maize during kharif, 2019 at the wetland farm of S.V. Agricultural College, Tirupati, in a randomized block design with factorial concept, replicated thrice. The soil of the experimental field was a sandy clay loam with available nitrogen of 251 kg ha⁻¹, available phosphorus of 180 kg ha⁻¹, available potassium of 234 kg ha⁻¹ and available zinc of 3.21 ppm. The initial soil microbial load comprised bacteria (21 × 10⁶ CFU g⁻¹ soil), fungi (3 × 10³ CFU g⁻¹ soil) and actinomycetes (7 × 10⁵ CFU g⁻¹ soil). The treatments consisted of three levels of nitrogen, viz., 75, 100 and 125% of the recommended dose of nitrogen (RDN), and five biofertilizer treatments, viz., Azospirillum, phosphorus solubilizing bacteria (PSB), potassium solubilizing bacteria (KSB), zinc solubilizing bacteria (ZnS), and the combined application of Azospirillum + PSB + KSB + ZnS, each at 5 kg ha⁻¹. The recommended dose of nitrogen was fixed based on the soil test value. All biofertilizers were applied to the soil at 5 kg ha⁻¹. The rest of the package of practices was adopted as per the recommendations of Acharya N.G. Ranga Agricultural University. Post-harvest soil available nitrogen (Subbiah and Asija, 1956), available phosphorus (Olsen et al., 1956), available potassium (Jackson, 1973) and zinc (Tandon, 1993) were estimated. The soil microbial load, viz., bacteria, fungi and actinomycetes, was estimated by the serial dilution plate count technique (Pramer and Schmidt, 1965).
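The microbial loads quoted above come from the serial dilution plate count: the colony count on a plate is divided by the product of the dilution factor and the volume plated. As a minimal sketch (the colony count and dilution below are hypothetical examples, chosen only to match the order of magnitude of the reported bacterial load):

```python
# Illustrative CFU calculation for the serial dilution plate count technique.
# The colony count and dilution factor are hypothetical examples, not
# measurements from this experiment.

def cfu_per_gram(colonies, dilution, volume_plated_ml=1.0):
    """CFU g^-1 soil = colonies / (dilution x volume plated)."""
    return colonies / (dilution * volume_plated_ml)

# e.g., 21 colonies from 1 mL of the 10^-6 dilution of a 1 g soil sample:
print(cfu_per_gram(colonies=21, dilution=1e-6))  # 2.1e7 = 21 x 10^6 CFU g^-1
```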
Results and Discussion
Grain yield of maize was significantly influenced by the application of different nitrogen levels and biofertilizers, as well as by their interaction (Table 1). Application of 125% RDN resulted in the highest grain yield, which was at par with 100% RDN. This might be due to the better growth and yield attributes obtained with the higher dose of nitrogen. The increase in grain yield due to the application of 125% RDN was 16.44 per cent compared to 75% RDN. Similar results were also reported by Athokpam et al. (2017) and Mohammadi et al. (2017). The lowest grain yield was obtained with the application of 75% RDN, due to the sub-optimal dose of nitrogen. The highest grain yield was obtained with the combined application of Azospirillum + PSB + KSB + ZnS, each at 5 kg ha⁻¹, which was at par with the application of Azospirillum or PSB alone at 5 kg ha⁻¹. These results are in line with the findings of Lakum et al. (2018). Application of ZnS at 5 kg ha⁻¹ resulted in the lowest grain yield, possibly due to the non-response of the zinc solubilizing bacteria. Application of 100% RDN along with Azospirillum at 5 kg ha⁻¹ produced a significantly higher grain yield, which was at par with 125% RDN or 100% RDN combined with Azospirillum + PSB + KSB + ZnS, each at 5 kg ha⁻¹. This clearly indicates that Azospirillum at 5 kg ha⁻¹ was particularly effective in promoting the growth and development of maize, owing to enhanced mineralization and biological nitrogen fixation.
Post-harvest available nutrient status and soil microbial load were significantly influenced by nitrogen levels and biofertilizers, but their interaction was non-significant (Table 2). The highest values of post-harvest available nutrient status and microbial population, viz., bacteria, fungi and actinomycetes, were noticed with the application of 125% RDN, which might be due to the sufficient substrate available for the growth and multiplication of microorganisms, which in turn increased the mineralization and availability of nutrients in the soil.
These results corroborate the findings of Abdullahi et al. (2014) and Navsare (2017). The combined application of Azospirillum + PSB + KSB + ZnS, each at 5 kg ha⁻¹, resulted in the highest soil available nutrient status, due to enhanced mineralization and solubilization of insoluble fixed nutrients. Azospirillum at 5 kg ha⁻¹ was found to be more responsive than the other biofertilizers, while the zinc solubilizing bacteria performed poorly. The response of microorganisms is highly location specific. These results are in conformity with the earlier findings of Garcia et al. (2017) and Khambalkar et al. (2017).
Emergence of multicluster chimera states
A remarkable phenomenon in spatiotemporal dynamical systems is chimera state, where the structurally and dynamically identical oscillators in a coupled networked system spontaneously break into two groups, one exhibiting coherent motion and another incoherent. This phenomenon was typically studied in the setting of non-local coupling configurations. We ask what can happen to chimera states under systematic changes to the network structure when links are removed from the network in an orderly fashion but the local coupling topology remains invariant with respect to an index shift. We find the emergence of multicluster chimera states. Remarkably, as a parameter characterizing the amount of link removal is increased, chimera states of distinct numbers of clusters emerge and persist in different parameter regions. We develop a phenomenological theory, based on enhanced or reduced interactions among oscillators in different spatial groups, to explain why chimera states of certain numbers of clusters occur in certain parameter regions. The theoretical prediction agrees well with numerics.
The collective behaviors of systems of coupled oscillators have been a topic of continuous interest [1][2][3]. One class of oscillator systems comprises those with non-local interactions, which arise in realistic systems such as Josephson-junction arrays 4 and chemical oscillators 5,6. A phenomenon of recent interest is chimera states, in which different subsets of completely identical oscillators exhibit entirely distinct dynamical behaviors, e.g., synchronization or incoherent oscillations. In the past decade, chimera states were observed in, e.g., regular networks of phase-coupled oscillators with ring topology [7][8][9], regular networks hosting a few populations 10,15, and two-dimensional lattices 6,16 or tori 34,21. Issues that were addressed include the transient behavior of chimera states [17][18][19], control 26, the effects of time delay 14,11,40, phase lags 22, and coupling functions [27][28][29]. Theoretically, two approaches were developed to analyze and understand the dynamical origin of chimera states: the self-consistency equation [7][8][9] and the partial differential equation (PDE) 42,43. Quite recently, the effects of random perturbations and complex coupling topologies on chimera states were investigated 23,30,37. Experimentally, chimera states were observed in a system of chemical oscillators 24,31, in an optical system 25, in coupled mechanical oscillators 32, and in electrochemical systems 33,36. Other natural phenomena such as unihemispheric sleep 44,45, neural spikes 46,47, and ventricular fibrillations 48 are among those associated with chimera states. We note that, while the term chimera states first appeared about a decade ago 7,8, their signatures were actually observed earlier 49 in the spatiotemporal evolution of a system of coupled nonlinear oscillators, where the phenomenon was named a "domain-like spatial structure".
While a chimera state is commonly referred to as the situation where two dynamically distinct states coexist in different regions of the physical space, in certain particular settings more than two coexisting states can occur, e.g., in systems with time delay 11,14 , phase lags 22 , or special coupling functions [27][28][29] . Such a situation is typically characterized by the emergence of multiple clusters in the physical space, each being associated with a specific region. For convenience, we use the name "multicluster chimera states".
Because of the special system setting required for such states to occur, their generality or "typicality" in realistic physical systems becomes an interesting issue.
In this paper, we demonstrate that multicluster chimera states can occur commonly in the "traditional" setting of Kuramoto networks of phase-coupled oscillators, without the need to impose special dynamical features such as time delay, phase lag, or a special coupling function differing from that associated with the classical Kuramoto model. In fact, perturbing the coupling configuration 23,30 can lead to the emergence of various multicluster chimera states with rich spatiotemporal dynamical patterns. In particular, starting from the classical, non-locally coupled Kuramoto oscillator network, we systematically remove a small number of links. As the fraction of the removed links is increased from zero, chimera states with different numbers (denoted by m) of clusters emerge, i.e., become stable, and then disappear (become unstable). An interesting phenomenon is that certain m-cluster chimera states can undergo a period-doubling-like bifurcation to states with 2m clusters. We propose a phenomenological theory, based on the intuitive idea of mutual enhancement among oscillator subsets exhibiting similar dynamical behaviors in space, to explain the "bifurcation" behavior of chimera states with distinct spatiotemporal patterns. The theory correctly predicts key features such as the emergent order of m-cluster chimera states, the corresponding region of the topology parameter, and the possible m values for the occurrence of cluster doubling. Our results imply that multicluster chimera states can occur in non-locally coupled oscillator networks more commonly than previously thought.
Model.
We consider a one-dimensional network of N non-locally coupled, identical phase oscillators with periodic boundary condition (the ring configuration). The system is mathematically described as

∂φ(x_i, t)/∂t = ω − (2π/N) Σ_{j=1}^{N} C_{ij} G(x_i − x_j) sin[φ(x_i, t) − φ(x_j, t) + α],  (1)

where φ(x_i, t) is the phase of the ith oscillator at position x_i. For convenience, we choose the range of the spatial variable to be [−π, π]. Since the oscillators are identical, the natural velocity and phase lag parameter, ω and α, respectively, are chosen to be constants that do not depend on the spatial location of the oscillator. Without loss of generality, we set ω = 0 and choose α < π/2. The kernel G(x_i − x_j) is a non-negative even function that characterizes the non-local coupling among all the oscillators. The quantity C_ij is the ijth element of the N × N coupling matrix C, where C_ij = 1 if there is coupling from the jth oscillator to the ith oscillator, and C_ij = 0 indicates the absence of such coupling. We systematically remove a certain fraction of links from every node, while ensuring that all nodes remain identical and structurally indistinguishable. To do this we introduce a tunable topological parameter η = 2L/N (L = 1, ..., N/2), the fraction of neighbors removed for any given oscillator, where L denotes the number of removed links on each side of the node. We have C_ij = 0 for j = i − L, ..., i + L.
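A minimal numerical sketch of Eq. (1) follows. The cosine kernel G(x) = (1 + A cos x)/(2π) is an assumption suggested by the parameter A quoted in the simulations below; N, L and the integration settings are illustrative, not the authors' values.

```python
import numpy as np

# Sketch of Eq. (1): a ring of N identical phase oscillators with non-local
# coupling and the 2L nearest links removed from every node.
N, A, alpha, omega = 256, 0.995, 1.39, 0.0
L = 50                                        # links removed on each side
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
G = (1.0 + A * np.cos(x[:, None] - x[None, :])) / (2.0 * np.pi)  # assumed kernel

idx = np.arange(N)
ring_dist = np.abs((idx[:, None] - idx[None, :] + N // 2) % N - N // 2)
C = (ring_dist > L).astype(float)             # C_ij = 0 for j = i-L, ..., i+L

rng = np.random.default_rng(1)
phi = 6.0 * (rng.random(N) - 0.5) * np.exp(-0.76 * x**2)  # initial condition

dt = 0.025
for _ in range(5000):                         # simple Euler integration
    diff = phi[:, None] - phi[None, :] + alpha            # phi_i - phi_j + alpha
    phi += dt * (omega - (2 * np.pi / N) * np.sum(C * G * np.sin(diff), axis=1))
```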
The connection pattern of a node after link removal is shown in Fig. 1, where the node was originally connected to all other nodes in the network, and link removal is carried out in the order of increasing distance from this node. The network dynamics can be characterized by the following complex order parameter Z, defined 7 for oscillator i as

Z_i(t) = R_i e^{iΘ_i} = (2π/N) Σ_{j=1}^{N} C_{ij} G(x_i − x_j) e^{iθ_j(t)},  (2)

where the phase of the oscillator is written as θ = φ − Ωt, and Ω denotes the velocity of the oscillators in the coherent subset when a chimera state emerges. Theoretical insights into the chimera states can be obtained by resorting to the continuum limit N → ∞ to reduce the system to one described by a PDE 42,43, where the state of the system is characterized by a probability density function f(x, φ, t) that satisfies the continuity equation

∂f/∂t + ∂(fv)/∂φ = 0,  (3)

where v is the phase velocity 13. Expanding f(x, φ, t) in a Fourier series in φ, and taking into account the deletion of the nearby η fraction of couplings, yields an evolution equation for the order parameter [Eq. (5)]. The modulus of the order parameter, R(x, t) = |Z(x, t)|, can be obtained from the numerical solution of Eq. (5).
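Continuing the sketch above (and reusing its arrays N, C, G and phi), the local order parameter corresponding to the reconstructed Eq. (2) can be evaluated directly; this is our illustration, not code from the paper:

```python
# Local complex order parameter: a kernel-weighted average of e^{i*phi_j}
# over the links that survive removal. R is large inside coherent clusters
# and small in incoherent ones.
Z = (2 * np.pi / N) * (C * G) @ np.exp(1j * phi)
R = np.abs(Z)
```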
Numerical findings and interpretation. Numerically, we observe a variety of rich phenomena when links are systematically removed. In particular, using the real order parameter R(x, t), we can identify the emergence of multicluster chimera states, where each cluster corresponds to a coherent group of oscillators. Figure 2 shows the spatiotemporal patterns of the emergent m-cluster chimera states for different intervals of η, which indicates that the emergence of the chimera-state patterns is robust with respect to reasonable variations of these parameters. In the simulations, the system parameters are A = 0.995 and α = 1.39, and the initial condition is generated 8,9 using the function φ(x) = 6r exp(−0.76x²), where r is a random variable uniformly distributed in [−1/2, 1/2]. In fact, the results obtained from direct simulations of Eq. (1) for finite-size networks and from the PDE approach [Eq. (5)] in the continuum limit N → ∞ agree with each other, with similar spatiotemporal patterns. As shown in the inset of Fig. 2(b), the degree of synchrony as characterized by R for different η values differs by orders of magnitude. For clarity, we use different color bars to distinguish the magnitudes of the spatiotemporal patterns in different panels. As η is increased, the number m of clusters undergoes changes from 4 to 3 (or 3&6), to 5 (or 5&10), to 7, and to 9, etc. Here the 3&6 state (or the 5&10 state) is a state that switches between 3-cluster and 6-cluster (or between 5-cluster and 10-cluster) chimera behaviors. To better understand the impact of multicluster chimera states on the global coherence of the system, we calculate the time- and space-averaged order parameter ⟨R⟩. For a hypothetical system of the same structure but exhibiting global synchronization, the reference value R_syn follows from Eq. (2) by setting all phases equal [Eq. (6)], and serves to characterize the system's coherence. We can then examine the difference ΔR = ⟨R⟩ − R_syn [Fig. 2(b)] to quantify the degree of coherence as compared with the synchronized reference state. In general, the coherence of the m-cluster chimera state is weaker than that of the global synchronization state, so the maximum value of ΔR is zero. In the small neighborhood of zero η value, the observed states are conventional chimera states consisting of a coherent and an incoherent cluster. For η ~ 0.4, 4-cluster chimera states emerge. In the 4-cluster region [m = 4 region in Fig. 2(b)], the value of ΔR increases with η, which can be attributed to the increasing fraction of coherent groups, as demonstrated by the red color in the spatiotemporal patterns [first panel in Fig. 2(a)]. The behaviors in subsequent parameter regions are richer and more complicated. In particular, the 3-cluster chimera states for small values of η are stable and regular, as are the 4-cluster chimera states. As η is increased further, the 3-cluster configuration becomes unstable and eventually evolves to global synchronization. In the 3-cluster region, various other states can emerge, which include (in successive order) stable regular 3-cluster states, transient 3-cluster states toward global synchronization, 6π-twisted states and the 3&6-cluster double-state switching process, cluster drift states, and so on. One remarkable phenomenon is spatial period doubling (or spatial cluster doubling) in the 3-cluster region, in which each cluster bifurcates into two clusters and a 6-cluster chimera state consequently emerges, as shown in Fig. 2(c) (2nd and 3rd panels).
The 6-cluster chimera states are unstable and can evolve into 6π-twisted states, as shown in the 2nd panel of Fig. 2(c), which will be further discussed in Fig. 3. Analogous to chemical oscillating reactions 50, self-organized double-state switching processes are observed, in which the 3-cluster and 6-cluster chimera states appear and disappear alternately, leading to spatiotemporal patterns of switching between the two states. The switching process also takes place in the 5-cluster region, where the system alternates between 5-cluster and 10-cluster chimera states, as shown in the 3rd panel of Fig. 2(a). Overall, as η is increased in the 3-cluster region, the resulting state is a cluster drifting state with strong intrinsic correlation in the spatiotemporal dynamics, characterized by harmonic temporal breathing and spatial drifting of the coherent and incoherent groups [4th panel of Fig. 2(c)].
Chimera states with m = 5, 7, and 9 clusters emerge as η is increased further. In the 5-cluster region, the breathing and spatial period doubling phenomena are also present, as shown in Fig. 2(a) (2nd and 3rd panels). At the boundary between the two neighboring m regions shown in Fig. 2(b), the system with different initial phase configurations can evolve into either of the two m states, leading to fuzziness of the boundary.
As shown in Fig. 2, the final state of the system can be a stable chimera state, or a transient one that eventually reaches a globally coherent state such as synchronization or a coherent twisted state. Deviations from the structures that sustain the m-cluster patterns will make the m-cluster chimera state transient (a detailed analysis is given in Methods). Figure 3 shows the spatiotemporal patterns of the order parameter R, the instant phase φ and the average velocity v associated with a stable chimera state, a transient 3-cluster chimera state evolving into global synchronization, and two examples of transient 3&6-cluster chimera states evolving into coherent 6π-twisted states 51, i.e., phase-locked states in which the phase difference between neighboring oscillators on the ring is 2mπ/N. From the patterns of the order parameter R(x, t) in Fig. 3(b-d), we see that the oscillators in the globally coherent state have an identical constant value of R. The R values associated with the twisted states are smaller than the value associated with global synchronization. The order parameter of an ideal twisted state can also be obtained from Eq. (2), and its difference ΔR from that of a synchronous state is plotted in Fig. 2(b) (gray solid curve) in the m = 3 region.
The heuristic reason that a transient chimera state can evolve into either a globally synchronous state or a coherent phase-twisted state can be seen as follows. The coherent groups (separated by the incoherent groups) in the m-cluster chimera state (with m = 4, 3, 5, 7 and so on) are found to be synchronized with each other. However, for the 2m-cluster chimera state "bifurcated" from the m-cluster chimera state, each pair of nearby coherent groups has opposite phase φ but the same velocity v. Intuitively, for the first case of m synchronized clusters, the global coherence of the system tends to increase when the coherent groups are enlarged, and the incoherent oscillators will consequently join the synchronized groups. As a result, global synchronization finally sets in, replacing the m-cluster chimera state. For the case of the coherent phase-twisted state, the 2m-cluster chimera state is composed of opposite-phase coherent groups with large phase differences, as exemplified in Fig. 3(c,d) at time t_1. The interaction between the coherent and incoherent groups can cause the phases of the oscillators to take a uniform and ordered arrangement in each of the m clusters, so that the 2mπ-twisted state finally replaces the 2m-cluster chimera state.
Discussion
The discovery of the counterintuitive phenomenon of chimera states in coupled dynamical networks was remarkable [6][7][8]49. In a spatially extended system of coupled, completely identical oscillators, depending on the coupling parameter the oscillators form two distinct groups in space, where one group exhibits a highly coherent behavior while oscillators belonging to the complementary group are incoherent. The coherent and incoherent behaviors emerge as a single state of the underlying dynamical system, which is quite different from the phenomenon of multistability in nonlinear dynamical systems 52,53. Often, a nonlinear dynamical system can exhibit multiple coexisting attractors, each with its own basin of attraction. Starting from a random initial condition, the system approaches one particular attractor that can be a stable fixed point, or a periodic, quasiperiodic, or chaotic attractor. The key difference from the chimera states is that, from a single initial condition, the asymptotic state of the system cannot simultaneously exhibit more than one of these traits. Most existing works on chimera states focused on the setting of fully connected, non-local coupling configurations, in which the oscillators of the system are typically self-organized into a coherent and an incoherent group. The question that we address in this paper is what can happen to the chimera states when structural deviations from the fully connected coupling configuration occur in a systematic fashion.
Our main finding is that, as links are removed from the network in an orderly fashion, multicluster chimera states can emerge. In particular, for any node in the network, we systematically remove a given fraction of links, starting from the nearest neighbors. The network is still regular under such structural changes, because the number of links remains identical for every node. A surprising result is that, as the fraction of the orderly removed links is increased, chimera states consisting of different numbers of spatial clusters are observed in different intervals of the link-removal parameter. While the order of emergence of such distinct chimera states appears to be somewhat irregular, we find that it can be explained by the mechanism of enhanced or reduced interactions among different groups of oscillators through a phenomenological theory (see Methods). Especially, by hypothesizing a simple, binary type of interaction between any pair of oscillators, we can determine the number of clusters embedded in the chimera state for any given value of the link-removal parameter, with remarkable agreement with the numerical results.
We note that, when links are randomly removed from the network so that it becomes somewhat random, in a statistical sense chimera states can persist if the fraction of the removed links is relatively small 23,30. [Figure caption: the gray and white backgrounds are as in Fig. 2, i.e., they denote the regions of m-cluster chimera states, obtained from both simulation of Eq. (1) and solution of Eq. (5); the subregions for 3&6-cluster and 5&10-cluster chimera states are specified with the thin black vertical lines and the corresponding notations.] In such a case the observed chimera states typically consist of two clusters. In this regard, a recent work by Omelchenko et al. 54 studied the robustness of chimera states for coupled FitzHugh-Nagumo oscillator networks. The main finding is that gaps in the coupling matrix can result in a change in the multiplicity of the incoherent regions associated with the chimera state. However, to our knowledge, the orderly emergence of multicluster chimera states under systematic link removal, as uncovered in this paper, has not been reported before. It would be interesting if such exotic chimera states could be observed in experiments.
Methods
We develop a phenomenological theory to understand the emergence of multicluster chimera states and their stability. As we find numerically, the clusters emerge according to the order m = 4, 3, 5, and a few subsequent odd numbers as the parameter η is increased. Through extensive simulations with different initial phase configurations, we observe that mutual enhancement between coherent (or incoherent) groups of oscillators in the network is key to the emergence of multicluster chimera states.
To gain insight into the mechanism of mutual enhancement, we analyze the stability of the coherent (or incoherent) groups in an idealized m-cluster chimera state. For a given coherent group, the contribution to the coupling from oscillators in other coherent groups tends to stabilize the state (a positive effect), while that from oscillators in the incoherent groups plays the opposite role (a negative effect). For an incoherent group, the effects of other coexisting coherent and incoherent groups are negative and positive, respectively. That is, oscillators in like groups (coherent versus coherent or incoherent versus incoherent) tend to enhance each other's stability, while those in unlike groups (coherent versus incoherent or vice versa) tend to destabilize each other. To be concrete and quantitative, we define an enhancement factor I(η) that depends on the system parameter η and assume that, for an oscillator in a coherent group, the contribution from each coherent-group oscillator is +1, while that from an incoherent-group oscillator is −1. Consider the oscillator at the center of a coherent group, e.g., the bottom oscillator in Fig. 4(a). The total contribution from other oscillators to the enhancement factor for this oscillator is

I(η) = ∫ C(x) G(x_ref − x) dx,  (7)

where the integral runs over the oscillators that remain coupled to the reference one, the contribution of the oscillator located at x is C(x) = ±1, depending on whether the oscillator at x is in a like or an unlike group with respect to the group of the reference oscillator located at x_ref, and the coupling kernel G(x_ref − x) is effectively the weight of the contribution. Figure 4(b) shows the enhancement factor I associated with the reference oscillator as a function of η, calculated from the patterns of different m-cluster chimera states. In addition, the enhancement factor I of the oscillator at the center of an incoherent group exhibits the same behavior. The mutual enhancement factor is increased (or decreased) as more (or fewer) groups of the same kind are involved for different values of η. The dependence on η of the maximum enhancement factor, I_max, among those for different m values is marked by the bold curves in Fig. 4(b). We see that the variation of I_max follows the same sequence as that for the emergence of m-cluster chimera states, i.e., m = 4, 3, 5, 7, and so on. For a given coupling structure as determined by η, the pattern with the maximum enhancement factor will "stand out" in the competition among patterns of different m values. The η region for each m-cluster chimera state can thus be estimated as the region in which the corresponding m has the maximum enhancement factor. Figure 4(c) shows the estimated regions for each m-cluster chimera state (thick straight lines) and the corresponding regions obtained through direct simulations and the PDE (gray and white backgrounds as in Fig. 2). We see that our estimation of the parameter region in which m-cluster chimera states occur, based on the maximum enhancement factor, agrees well with the simulation results, including the order of m that emerges with increasing η and the region of η for each m. The agreement indicates that our phenomenological theory based on mutual enhancement to explain the occurrence of m-cluster chimera states captures the essential dynamics of the emergence of these exotic states.
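The computation of I(η) can be sketched numerically. The block below makes two modelling assumptions of ours: the idealized m-cluster pattern consists of equally wide, alternating coherent and incoherent groups (a square-wave sign pattern), and removing the 2L nearest links excises the kernel on |x| ≤ πη; the cosine kernel is the one assumed earlier.

```python
import numpy as np

def enhancement_factor(eta, m, A=0.995, n=20000):
    """Numerical sketch of Eq. (7) for an idealized m-cluster pattern.

    The reference oscillator sits at x_ref = 0, the centre of a coherent
    group; like groups contribute C(x) = +1 and unlike groups -1, modelled
    here as the square wave sign(cos(m*x)), weighted by the kernel G and
    summed over the links that survive removal (|x| > pi*eta)."""
    x = np.linspace(-np.pi, np.pi, n, endpoint=False)
    G = (1.0 + A * np.cos(x)) / (2.0 * np.pi)      # assumed cosine kernel
    like = np.sign(np.cos(m * x))                   # +1 like / -1 unlike groups
    kept = np.abs(x) > np.pi * eta                  # eta = 2L/N links removed
    return np.sum(like[kept] * G[kept]) * (2.0 * np.pi / n)

# The m whose pattern maximizes I at a given eta is predicted to emerge:
for eta in (0.4, 0.55, 0.7):
    best_m = max(range(2, 11), key=lambda m: enhancement_factor(eta, m))
    print(f"eta = {eta}: predicted m = {best_m}")
```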
Our theory is also effective at predicting the emergence of 2m-cluster chimera states from the m-cluster background, through the behavior of the second-largest enhancement factor: the corresponding state can emerge when it possesses the same spatial symmetry as that of the maximum-I state. For example, for the m = 3 maximum-I region, the second-largest-I cluster is associated with the m = 4, 7, 10, 6, and 5 states as η is increased. However, only the m = 6 state has the same spatial symmetry as that of the m = 3 background state. For the m = 5 maximum-I region, the second-largest-I states are m = 3, 10, and 7, but the state that has the same spatial symmetry as that of the m = 5 state is the m = 10 state. As shown in Fig. 2, the emergence of 6-cluster (or 10-cluster) chimera states from the 3-cluster (or 5-cluster) background chimera states is indeed observed. However, for the m = 4 region, the second-largest values of I correspond to the m = 5, 9, and 3 states, which have a different spatial symmetry from that of the m = 8 state. As a result, no 2m-cluster state can arise from the m = 4 state.
The fuzzy boundary between different m regions in Fig. 2 can be understood based on our mutual enhancement theory: at the boundary the m-cluster chimera state no longer possesses the maximum I value. For example, around the boundary of m = 4 and 3, the two chimera states have similar values of I, and therefore both are likely to emerge.
Since the coupling structure of the oscillator system, as controlled by the parameter η, is regular, theoretical analysis of pattern formation can be carried out by using the continuity equation Eq. (3) and the concept of invariant manifold 42,43 . In this approach, various multi-cluster patterns correspond to rotating wave solutions of the underlying infinite-dimensional dynamical equation. A systematic analysis 39 led to a number of results with respect to the general coupling function G(x) in Eq. (1). For example, every non-zero harmonic term in the Fourier series of G(x) gives rise to a number of solutions. A more recent work 41 discussed the simple case where G(x) is a purely harmonic function, e.g., cos(kx), or a superposition of two harmonics in the form of cos(kx) + cos[(k + 1)x], with k being an arbitrary positive integer. The piecewise smooth coupling function G(x) employed in our present work, however, consists of an infinite number of harmonics. In this case, it is not clear whether a mathematical theory can be developed to analyze the pattern formation process. Because of this difficulty, we resort to developing a phenomenological theory, in which the value of I(η) effectively determines, in a self-consistent manner, the emergence of m-cluster patterns in the continuum limit, as exemplified in Fig. 4. This approach yields results that agree well with those from direct numerical simulations, despite the fact that our analysis based on I(η) takes into account only the coarse-grained configuration of multi-cluster patterns formed due to the mutual interactions between oscillators in distinct spatial regions.
To demonstrate the general applicability of our mutual-enhancement theory with respect to the choice of different coupling kernels, we have studied the case of a normalized exponential coupling kernel [Eq. (8)], which decays exponentially with the separation |x_i − x_j|, where x_i and x_j run from 0 to 1 with periodic boundary condition 7. Figure 5(a) presents the curves of I(η) from Eq. (8) with integral interval [η, 1], while Fig. 5(b) compares the prediction (the colored thick horizontal lines) with results from simulations of Eq. (1) (the gray and white regions). We observe a good agreement. The corresponding spatiotemporal patterns for a representative set of η values (0.51, 0.67, 0.72, and 0.79) are shown in Fig. 6. Furthermore, we have studied the case of a rectangular coupling kernel [Eq. (9)], which is constant for |x_i − x_j| ≤ γ and zero otherwise, where x_i and x_j run from −1 to 1 with periodic boundary condition, and γ ∈ [0, 1] is a parameter characterizing the width of the coupling range for the oscillators. The behaviors of I(η) and a number of typical spatiotemporal patterns are shown in Fig. 7(a,b), for γ = 0.6 and 0.8, respectively. The results are essentially the same as those for the case of the sinusoidal coupling function, demonstrating the general applicability of our mutual-enhancement theory.
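For reference, plausible forms of the two alternative kernels are sketched below; the exact normalizations are not recoverable from the text, so both expressions are assumptions.

```python
import numpy as np

def G_exp(x, kappa=4.0):
    """Exponentially decaying kernel (normalization assumed)."""
    return 0.5 * kappa * np.exp(-kappa * np.abs(x))

def G_rect(x, gamma=0.6):
    """Rectangular kernel: constant within half-width gamma, zero outside."""
    return np.where(np.abs(x) <= gamma, 1.0 / (2.0 * gamma), 0.0)
```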
Simultaneous cancellation of narrow band interference and impulsive noise in PLC systems
The two major sources of disturbances for efficient and reliable data transmission through power lines, known as power line communication (PLC), are impulsive noise (IN) and narrow band interference (NBI). In this paper, we propose an algorithm to cancel the IN and NBI simultaneously for an OFDM based PLC system. The proposed method exploits the duality of the problem, where the IN is sparse in the time domain and the NBI is sparse in the frequency domain. By virtue of this duality, we use the multiple signal classification (MUSIC) algorithm to estimate both the IN support (in the time domain) and the frequencies of the NBI (in the frequency domain). Furthermore, the minimum mean square error (MMSE) estimator is used to estimate the amplitude and phase of the IN samples at the determined locations, and the least square (LS) estimator is used to estimate the amplitude and phase of the NBI. Finally, the estimated IN and NBI are canceled from the received signal, providing noise-mitigated samples for demodulation. The performance of the proposed scheme is verified via numerical simulations.
I. INTRODUCTION
Narrow band power line communication (NB-PLC) has been proposed as an enabler of smart grid applications in electrical grid networks [1]. The NB-PLC system, which operates in the low frequency range of 10-490 kHz, suffers from background noise, impulsive noise, narrow band interference and the frequency-selective multipath behavior of the channel [2]. To overcome the frequency selectivity of the channel, orthogonal frequency division multiplexing (OFDM) has been proposed for NB-PLC by the existing standards, namely IEEE 1901.2, PRIME and G3-PLC [1]. In order to mitigate the effects of impulsive noise (IN) and narrow band interference (NBI), additional efficient signal processing techniques are required [3].
On the one hand, the IN, which is sparse in the time domain, presents a very high amplitude compared to the background noise. If not mitigated, these IN samples corrupt the symbols on all the sub-carriers [4], [5]. On the other hand, the NBI is sparse in the frequency domain, and therefore its presence typically affects only the sub-carrier that is hit by it [6]-[8]. A significant amount of research dealing with the problem of mitigating the impulsive noise (IN) and the narrow band interference (NBI) can be found in the literature. The studies done so far do not consider the joint effect of the NBI and IN in an OFDM based PLC system. However, for reliable data transmission through the power lines, joint mitigation of both impairments is required. Some algorithms based on non-linear techniques, such as nulling, clipping and a combination of both [5], [9], [10], as well as some recent proposals based on the compressive sensing (CS) technique [7], have been proposed in order to mitigate the IN in an OFDM system. Similarly, non-linear techniques based on frequency excision/nulling and clipping in the frequency domain, along with CS based algorithms, can also be found in the literature, addressing the problem of mitigating the NBI [6], [11].
Here, we propose an algorithm to mitigate both the NBI and IN consecutively. The support of the IN samples and the frequencies of the NBI are determined by the multiple signal classification (MUSIC) method, exploiting the duality of the problem in the time and frequency domains. The amplitude and phase of each IN sample at the determined location is estimated by the minimum mean square error (MMSE) estimator. Furthermore, the amplitude and phase of the NBI are estimated by the least square (LS) estimator. To present the algorithm in a concise manner, the remainder of the paper is divided into four sections. Section II describes the system model. Section III elaborates the proposed algorithm. Section IV discusses the simulation results and Section V provides some conclusions. Notation: A variable in bold lower case, a, denotes a vector in the time domain with elements {a_1, a_2, ..., a_n}. A variable in bold upper case, A, denotes a matrix in the time domain. A variable in bold lower case with a bar, ā, denotes a vector in the frequency domain with elements {ā_1, ā_2, ..., ā_n}. Finally, a variable in bold upper case with a bar, Ā, denotes a matrix in the frequency domain.
II. SYSTEM MODEL
The discrete-time complex baseband equivalent model for the OFDM system under consideration can be written as

r_n = (Hx)_n + b_n g_n + e_n + w_n, n = 1, 2, ..., M,  (1)

where r_n is the received sample, H is the channel matrix, x contains the transmitted time domain samples, n defines the sample index, b_n is a Bernoulli distributed random variable having probability of success P, and g_n denotes an independent and identically distributed (i.i.d.) Gaussian random variable with variance σ_i² and zero mean. The variance σ_i² denotes the IN power. The vector e ∈ C^{M×1} contains the time domain samples of the frequency interferer. The frequency interferer is the sum of all the NBIs occurring during a frame transmission and is modeled by a collection of superimposed tones. The adopted interference model is widely used to characterize the effect of multiple NBIs in an OFDM system [6]-[8]. According to this model, each NBI is characterized by a tone in the frequency spectrum. The occurrence of multiple NBIs is therefore modeled as the sum of complex sinusoids (equal in number to the NBIs) in the time domain. The discrete time representation of the frequency interferer samples, e = {e_1, e_2, ..., e_M}, is written as

e_n = Σ_{k=1}^{c} A_k e^{j(ω_k n T + φ_k)},  (2)

where A_k is the amplitude, ω_k is the normalized frequency and φ_k is the phase of the kth NBI. The variable c denotes the number of NBIs and T is the sampling period. Finally, the vector w ∈ C^{M×1} contains the time domain samples of the background noise occurring during the transmission of the ith frame. The background noise is modeled as AWGN, defined as a sequence of i.i.d. complex Gaussian random variables with zero mean and variance σ_w². The interference to signal ratio (ISR) of an individual NBI is defined as A_k²/σ_s², and the signal to noise ratio (SNR) is defined as σ_s²/σ_w², where σ_s² is the transmitted signal power. Likewise, the impulsive noise to background noise ratio (INR) is defined as σ_i²/σ_w². For the following derivations, we define the unitary discrete Fourier transform matrix F with elements

[F]_{a,b} = (1/√M) e^{−j2π(a−1)(b−1)/M},  (3)

where a and b denote the row and column indexes in the matrix.
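A minimal numerical sketch of this signal model follows; the channel H is taken as the identity and all parameter values are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M, T = 256, 1.0                                       # frame length, sampling period
x = (2 * rng.integers(0, 2, M) - 1).astype(complex)   # placeholder transmit samples

# Bernoulli-Gaussian impulsive noise: i_n = b_n * g_n.
P, sigma_i2, sigma_w2 = 0.05, 10.0, 0.01
b = rng.random(M) < P
g = np.sqrt(sigma_i2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
i_n = b * g

# NBI: sum of c = 2 complex tones, as in Eq. (2).
n = np.arange(M)
A_k, w_k, p_k = [0.5, 0.3], [0.20 * np.pi, 0.45 * np.pi], [0.1, 1.2]
e = sum(amp * np.exp(1j * (om * n * T + ph)) for amp, om, ph in zip(A_k, w_k, p_k))

# Background AWGN and received signal (channel taken as identity here).
w = np.sqrt(sigma_w2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
r = x + i_n + e + w
```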
III. PROPOSED NBI AND IN CANCELLATION SCHEME
The frequency interferer is assumed to be deterministic for the duration of a frame transmission. Therefore, the estimation of the parameters of the frequency interferer is done using the received samples corresponding to the transmitted preamble symbol. Conversely, we exploit the spectrum of the unused sub-carriers in the system to estimate the IN. Hence, the IN is estimated on a symbol-by-symbol basis, where the IN in the preamble symbol and in the transmitted symbols is estimated separately. The schematic diagram of the proposed scheme is shown in Fig. 2. Under perfect timing and frequency synchronization, the preamble observation vector is formed by selecting the first N_p samples out of the M received samples in (1) as

r_p = Hx_p + i_p + e_p + w_p = S_x h + i_p + e_p + w_p,  (4)

where x_p contains the preamble samples and the matrix S_x in (4) is constructed from the known preamble so that S_x h equals the channel-filtered preamble Hx_p, with h denoting the channel coefficients. To get rid of the IN component, we adopt an iterative implementation of the true support estimation (TSE) algorithm proposed in [4]. As the first iteration, we identify the IN-contaminated samples in r_p using the TSE scheme, and null them. The resulting observation vector, denoted by r*_p, is now used to perform a preliminary estimation of the frequency interferer, denoted by ê*_pre. This preliminary estimate of the frequency interferer is subtracted from r_p, resulting in

r̃_p = r_p − ê*_pre.  (5)

The resulting observation vector, r̃_p, in (5) is then used to estimate the IN occurring during the preamble symbol transmission. We again take advantage of the TSE algorithm to perform the IN estimation at this stage [4]. The estimated IN, î_p, is then subtracted from the observation vector r̃_p, resulting in

r⁺_p = r̃_p − î_p.  (6)

The resulting vector in (6) is now used for the estimation of the NBI. The component Hx_p in r⁺_p is an undesired signal affecting the precise frequency interferer estimation. Therefore, we again proceed with the iterative estimation of the frequency interferer. Next, a second preliminary estimation of the interferer is done. This noisy preliminary estimate of the interferer is subtracted from r⁺_p, followed by the channel estimation using the known preamble symbol at the receiver. Furthermore, with the known preamble and the approximated channel coefficients, the contribution of both the channel and the preamble to the observation vector is minimized. This minimization provides a refined observation vector that contains samples only from the NBIs and the background noise. The final frequency interferer estimation is done using the refined observation vector.
Note: The estimation of $\hat{\mathbf{e}}^*_{\mathrm{pre}}$ is also done following the procedure described in the following subsection, Frequency Interferer Parameter Estimation, using $\mathbf{r}_p^*$ as the observation vector.
A. Frequency Interferer Parameter Estimation
Taking $\mathbf{r}_p^+$ as the observation vector, we proceed to compute the second preliminary estimate of the frequency interferer after the removal of the IN. The estimation of the parameters $A_k$, $\omega_k$, and $\phi_k$ of the interferer from (6) is carried out in three steps.
1) Estimation of the Number of NBIs: From $\mathbf{r}_p^+$ we form $L = N_p - l + 1$ overlapping sample vectors of length $l$,

$$\mathbf{y}_i = [r^+_{p,i}, r^+_{p,i+1}, \ldots, r^+_{p,i+l-1}]^T, \qquad i = 1, \ldots, L, \tag{7}$$

where the variables $r^+_{p,\cdot}$ are the elements of the $\mathbf{r}_p^+$ vector, whose locations are defined by the subscripts. The window size $l$ used to form the sample vectors should be chosen such that the condition $l - c > c$ is satisfied. It is also required that $l$ not be larger than $N_p/2$, where $N_p$ is the length of the observation vector. A sample window size larger than $N_p/2$, or one not satisfying $l - c > c$, will in fact degrade the performance of the MUSIC estimator [12]. After generating the $L$ sample vectors, a sample covariance matrix $\mathbf{C}$ of size $l \times l$ is formed as

$$\mathbf{C} = \frac{1}{L} \sum_{i=1}^{L} \mathbf{y}_i \mathbf{y}_i^H. \tag{8}$$

The number of NBIs is estimated by evaluating the eigenvalues of the sample covariance matrix, obtained through the eigenvalue decomposition (ED) of $\mathbf{C}$. The ED of $\mathbf{C}$ yields $l$ eigenvectors and $l$ eigenvalues. The eigenvalues, arranged in decreasing order $\lambda_1 \geq \lambda_2 \geq \lambda_3 \geq \ldots \geq \lambda_l$, are then evaluated according to the MDL criterion, from which the number of NBIs $c$ is estimated [13].
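Under the reconstruction above, this step (overlapping length-$l$ windows, sample covariance (8), and model-order selection) can be sketched as follows; the Wax-Kailath form of the MDL criterion is assumed, since the paper only cites [13] for it.

```python
# Sketch: estimate the number of NBIs from the IN-cleaned preamble vector.
# Assumes the Wax-Kailath MDL form; l must satisfy l - c > c and l <= Np/2.
import numpy as np

def estimate_num_tones(r, l):
    L = len(r) - l + 1                                   # number of sample vectors
    Y = np.stack([r[i:i + l] for i in range(L)], axis=1) # l x L data matrix, (7)
    C = (Y @ Y.conj().T) / L                             # sample covariance, (8)
    lam = np.sort(np.linalg.eigvalsh(C))[::-1].clip(min=1e-12)
    mdl = []
    for p in range(l):                                   # candidate model orders
        tail = lam[p:]
        geo = np.exp(np.mean(np.log(tail)))              # geometric mean
        arith = np.mean(tail)                            # arithmetic mean
        mdl.append(-L * (l - p) * np.log(geo / arith)
                   + 0.5 * p * (2 * l - p) * np.log(L))
    return int(np.argmin(mdl)), C                        # estimated c, and C
```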
2) Frequency Estimation of Each NBI:
After estimating the number of NBIs, in this step we estimate their frequencies using the high-resolution MUSIC frequency estimator.
Based on the estimated number of interferers, the eigenvectors of $\mathbf{C}$ are classified into two subsets. The first subset, denoted by $\hat{S} = \{\hat{\mathbf{a}}_1, \ldots, \hat{\mathbf{a}}_c\}$, contains the $c$ eigenvectors associated with the $c$ largest eigenvalues of $\mathbf{C}$, referred to as the signal subspace in MUSIC terminology; the second subset, $\hat{G} = \{\hat{\mathbf{b}}_1, \ldots, \hat{\mathbf{b}}_{l-c}\}$, contains the remaining $l - c$ eigenvectors of $\mathbf{C}$, namely the noise subspace. Furthermore, we define the vector $\boldsymbol{\alpha}$ as a function of $\omega$ as

$$\boldsymbol{\alpha}(\omega) = [1, e^{j\omega T}, \ldots, e^{j\omega (l-1) T}]^T. \tag{9}$$

Based on the estimated value of $c$, the values of $\omega$ corresponding to the $c$ largest peaks of the pseudo-periodogram function

$$f(\omega) = \frac{1}{\sum_{k=1}^{l-c} |\boldsymbol{\alpha}^H(\omega) \hat{\mathbf{b}}_k|^2} \tag{10}$$

are the estimated frequencies of the interferers [12], $\{\hat{\omega}_1, \ldots, \hat{\omega}_c\}$.
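The search in (9)-(10) can be sketched as below; the frequency grid and the crude top-$c$ peak picking are simplifications of what a practical MUSIC implementation (with local-maximum detection) would do.

```python
# Sketch: MUSIC frequency estimation from the sample covariance C.
import numpy as np

def music_frequencies(C, c, T=1.0, grid=4096):
    l = C.shape[0]
    _, V = np.linalg.eigh(C)               # eigenvalues ascending
    G = V[:, :l - c]                       # noise subspace: l - c eigenvectors
    k = np.arange(l)
    omegas = np.linspace(0.0, np.pi, grid)
    f = np.empty(grid)
    for idx, w in enumerate(omegas):
        a = np.exp(1j * w * k * T)         # steering vector alpha(omega), (9)
        f[idx] = 1.0 / np.sum(np.abs(G.conj().T @ a) ** 2)   # pseudo-spectrum (10)
    # crude peak picking: the c grid points with the largest f(omega);
    # a real implementation would isolate c distinct local maxima
    return np.sort(omegas[np.argsort(f)[-c:]])
```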
3) Amplitude and Phase Estimation of Each NBI: After estimating the number and the corresponding frequencies of the NBIs, in this step we perform amplitude and phase estimation of each NBI using the LS estimator.

Without loss of generality, let

$$z_k = A_k e^{j\phi_k} \tag{11}$$

be the variable containing the amplitude and phase of the $k$th NBI. Consider the column vector

$$\mathbf{z} = [z_1, z_2, \ldots, z_c]^T, \tag{12}$$

where the variables $z_k$ are defined in (11). Let

$$[\mathbf{E}]_{\tilde{n},k} = e^{j \hat{\omega}_k \tilde{n} T}, \qquad \tilde{n} = 1, \ldots, N_p, \quad k = 1, \ldots, c, \tag{13}$$

be the matrix of size $N_p \times c$ constructed using the estimated frequencies of the NBIs, where $\{\hat{\omega}_1, \ldots, \hat{\omega}_c\}$ are the estimated frequencies.
Using (12) and (13), (6) can now be expressed as

$$\mathbf{r}_p^+ = \mathbf{E}\mathbf{z} + \text{noise}, \tag{14}$$

where the term "noise" corresponds to the received samples of the preamble and the background noise in (6). From (14), the vector $\mathbf{z}$ is estimated using the LS estimator. The LS estimate $\hat{\mathbf{z}}$ of $\mathbf{z}$ is thus given by

$$\hat{\mathbf{z}} = (\mathbf{E}^H \mathbf{E})^{-1} \mathbf{E}^H \mathbf{r}_p^+, \tag{15}$$

from which the second preliminary estimate of the frequency interferer is reconstructed as

$$\hat{e}_{pre,\tilde{n}} = \sum_{k=1}^{c} \hat{z}_k e^{j \hat{\omega}_k \tilde{n} T}, \tag{16}$$

where $\tilde{n} = 1, 2, \ldots, N_p$ and $\hat{z}_k$ is the estimated value of $z_k$.
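A sketch of the LS step (14)-(16) as reconstructed; the function and variable names are ours.

```python
# Sketch: LS amplitude/phase estimation and preliminary interferer rebuild.
import numpy as np

def ls_tone_estimate(r_p_plus, omega_hat, T=1.0):
    Np = len(r_p_plus)
    n = np.arange(1, Np + 1)
    E = np.exp(1j * np.outer(n * T, omega_hat))           # Np x c matrix, (13)
    z_hat, *_ = np.linalg.lstsq(E, r_p_plus, rcond=None)  # LS solution, (15)
    A_hat, phi_hat = np.abs(z_hat), np.angle(z_hat)       # amplitudes, phases
    e_pre = E @ z_hat                                     # preliminary NBI, (16)
    return A_hat, phi_hat, e_pre
```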
B. Observation Vector Refinement
The preliminary estimate of the frequency interferer, $\hat{\mathbf{e}}_{pre}$, is subtracted from the observation vector in (6),

$$\tilde{\mathbf{r}}_p = \mathbf{r}_p^+ - \hat{\mathbf{e}}_{pre}, \qquad (N_p \times 1). \tag{17}$$

The resulting samples in (17) are now used for the channel estimation. The preamble symbol known at the receiver is used to compute the least-squares estimate of the channel coefficients. The main motivation behind this step is to estimate the channel coefficients so that the contribution of the preamble symbol and the channel can be subtracted from the observation vector.
The least-squares (LS) estimation of the channel is performed after transforming the samples in (17) to the frequency domain. The transformation is carried out by multiplying (17) by the DFT matrix $\mathbf{F}$,

$$\tilde{\mathbf{r}}_p^f = \mathbf{F}\tilde{\mathbf{r}}_p = \tilde{\mathbf{H}} \tilde{\mathbf{x}}_p + \text{NBI residue} + \tilde{\mathbf{w}}_p, \tag{18}$$

where $\tilde{\mathbf{r}}_p^f$ is the vector containing the received symbols corresponding to the transmitted preamble symbol, $\tilde{\mathbf{H}}$ is the diagonal matrix containing the coefficients of the channel frequency response, $\tilde{\mathbf{x}}_p$ is the vector of frequency-domain symbols of the transmitted preamble, the NBI residue is the residual interference, and $\tilde{\mathbf{w}}_p$ contains the frequency-domain samples of the complex Gaussian background noise.
The preamble symbol $\tilde{\mathbf{x}}_{preamble}$ (transmitted in the used sub-carriers) known at the receiver is used to determine the least-squares (LS) estimate of the channel coefficients from (18) as

$$\hat{\mathbf{h}}_{LS} = \tilde{\mathbf{r}}_{p,used} \; ./ \; \tilde{\mathbf{x}}_{preamble}, \tag{19}$$

where $./$ denotes element-wise division, the vector $\tilde{\mathbf{r}}_{p,used}$ contains the received symbols in the used sub-carriers, and the resulting vector $\hat{\mathbf{h}}_{LS}$ in (19) contains the estimated channel coefficients.
With the estimated channel coefficients and the known preamble, the effect of the corresponding quantities in (6) is eliminated, providing a refined observation vector for the final frequency-interferer estimation. To obtain the refined observation vector, the samples in (6) are first transformed to the frequency domain and the corresponding quantities are then subtracted as

$$\tilde{\mathbf{r}}_{ref} = \mathbf{F}\mathbf{r}_p^+ - \tilde{\mathbf{H}}_{LS} \tilde{\mathbf{x}}_p, \tag{20}$$

where $\tilde{\mathbf{H}}_{LS}$ is the diagonal matrix with the elements of $\hat{\mathbf{h}}_{LS}$ at the locations corresponding to the used sub-carriers and zeros at the unused sub-carrier locations. Hence $\tilde{\mathbf{r}}_{ref}$ is the vector essentially containing the frequency-domain samples of the NBI ($\bar{\mathbf{e}}_p$) and of the background complex Gaussian noise ($\bar{\mathbf{w}}_p$). The IDFT of (20) yields the refined time-domain observation vector for the final frequency-interferer estimation. The IDFT of (20) is computed by multiplying it by $\mathbf{F}^H$,

$$\mathbf{r}_{ref} = \mathbf{F}^H \tilde{\mathbf{r}}_{ref}. \tag{21}$$

The resulting vector $\mathbf{r}_{ref}$ in (21) contains the time-domain samples of the frequency interferer and the background noise, with some residue from the channel estimation performed in the presence of the NBI residue in (18).
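Under the reconstruction above, the refinement chain (17)-(21) reduces to a few lines; the boolean mask of used sub-carriers and the frequency-domain preamble vector (nonzero only on used bins) are assumed inputs.

```python
# Sketch of the refinement steps (17)-(21): LS channel estimation by
# element-wise division on used bins, removal of channel*preamble, IDFT back.
import numpy as np

def unitary_dft(x):
    return np.fft.fft(x) / np.sqrt(len(x))

def unitary_idft(X):
    return np.fft.ifft(X) * np.sqrt(len(X))

def refine_observation(r_p_plus, e_pre, x_pre_f, used):
    R = unitary_dft(r_p_plus - e_pre)                # (17)-(18): frequency domain
    h_ls = np.zeros_like(R)
    h_ls[used] = R[used] / x_pre_f[used]             # (19): element-wise LS channel
    R_ref = unitary_dft(r_p_plus) - h_ls * x_pre_f   # (20): strip channel*preamble
    return h_ls, unitary_idft(R_ref)                 # (21): refined time-domain vector
```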
C. Final Frequency Interferer Estimation and Cancellation
The final estimation of the frequency interferer is carried out by following exactly the steps described in the subsection Frequency Interferer Parameter Estimation, using the new observation vector $\mathbf{r}_{ref}$ from (21).
The purpose of this iterative estimation is to enhance the precision of the NBI parameter estimation. The first ($\hat{\mathbf{e}}^*_{\mathrm{pre}}$) and second ($\hat{\mathbf{e}}_{pre}$) preliminary frequency-interferer estimates are very noisy, and hence a refined frequency-interferer estimate must be obtained. The significance of the iterative estimation becomes more evident at high SNR values: at high SNR, the high-power preamble symbol acts as strong noise for the interferer estimation. Hence, removing the preamble along with the channel effect considerably improves the precision of the interference estimate.
Since the NBI is assumed to be deterministic for the duration of the frame transmission, each element of the final estimate vector, $\hat{\mathbf{e}} = \{\hat{e}_1, \hat{e}_2, \ldots, \hat{e}_M\}$, is now reconstructed as

$$\hat{e}_n = \sum_{k=1}^{c} \hat{A}_k e^{j(\hat{\omega}_k n T + \hat{\phi}_k)}, \qquad n = 1, \ldots, M, \tag{22}$$

where $\hat{A}$, $\hat{\omega}$, and $\hat{\phi}$ are the estimated values of $A$, $\omega$, and $\phi$. This final estimate of the frequency interferer is canceled from the received signal in (1),

$$\bar{\mathbf{r}} = \mathbf{r} - \hat{\mathbf{e}}. \tag{23}$$
D. IN Estimation and Cancellation for the Transmitted Symbol
First, the time-domain received samples corresponding to the transmitted OFDM information symbol are extracted. These samples are selected by multiplying (23) by a selection matrix $\bar{\mathbf{S}}_x$, i.e.,

$$\mathbf{r}_s = \bar{\mathbf{S}}_x \bar{\mathbf{r}}, \qquad (N \times 1). \tag{24}$$
The matrix $\bar{\mathbf{S}}_x$ in (24) is an $N \times M$ selection matrix with a single element equal to 1 per row at the positions of the samples of the OFDM information symbol. Furthermore, the vector $\mathbf{r}_s$ is transformed to the frequency domain and the spectrum of the unused sub-carriers is extracted. To do so, the vector $\mathbf{r}_s$ is multiplied by the DFT matrix $\mathbf{F}$,

$$\tilde{\mathbf{r}}_s = \mathbf{F}\mathbf{r}_s = \tilde{\mathbf{H}}\tilde{\mathbf{x}}_s + \tilde{\mathbf{i}}_s + \tilde{\mathbf{w}}_s, \qquad (N \times 1), \tag{25}$$

followed by extraction of the spectrum corresponding to the unused sub-carriers through multiplication by a selection matrix $\mathbf{S}_u$ of dimension $(N - N_u) \times N$ having a single element equal to 1 per row at the positions that identify the locations of the unused sub-carriers in the system. The resulting vector is the observation vector for the TSE algorithm used to estimate the IN samples occurring during the transmission of the OFDM symbol in the frame [4]. The resulting IN estimate, $\hat{\mathbf{i}}_s$, is subtracted from the received samples $\mathbf{r}_s$. The final received samples corresponding to the transmitted information, after the cancellation of NBI and IN, can now be expressed as

$$\bar{\mathbf{r}}_s = \mathbf{r}_s - \hat{\mathbf{i}}_s, \tag{26}$$

where $\bar{\mathbf{r}}_s$ is affected only by the background-noise samples and the multipath effect of the channel. The equalization of the multipath channel is done using the channel coefficients estimated in (19).
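The key operation for the data symbol is isolating the unused-sub-carrier spectrum that feeds the TSE algorithm; a masking sketch, with the symbol-sample indices and the unused-bin mask assumed known from the system configuration:

```python
# Sketch of (24)-(25): extract the OFDM-symbol samples, move to the frequency
# domain, and keep only the unused-sub-carrier bins, which see just IN + noise.
import numpy as np

def unused_spectrum(r_bar, sym_idx, unused):
    r_s = r_bar[sym_idx]                         # (24): select symbol samples
    R_s = np.fft.fft(r_s) / np.sqrt(len(r_s))    # (25): unitary DFT
    return R_s[unused]                           # observation for the TSE step
```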
IV. SIMULATIONS AND RESULTS

In this section, we evaluate the BER performance of the proposed algorithm. The system parameters, preamble symbol, and channel model are derived from the NB-PLC standard IEEE 1901.2. The system parameters for the CENELEC-A band OFDM system under consideration are listed in the table below. The channel follows statistical multipath fading,

$$H(f) = \sum_{t=1}^{N_{path}} g_t \, e^{-(a_0 + a_1 f) d_t} \, e^{-j 2\pi f d_t / v_o},$$

where $N_{path}$ is the total number of propagation paths between the transmitter and the receiver, $g_t$ is the path gain summarizing the reflection and transmission along the $t$th propagation path, $a_0$ and $a_1$ are attenuation parameters that depend on the transmission-line impedance characteristics, $f$ is the frequency in Hertz, $d_t$ is the length of the $t$th propagation path, and $v_o$ is the wave propagation speed. This channel model is implemented using realistic parameter values, as given in the standard IEEE 1901.2, where $a_0 = 1 \times 10^{-3}$, $a_1 = 2.5 \times 10^{-3}$, $d$ is a Gaussian random variable with mean 1000 and standard deviation 400, $g$ is a zero-mean Gaussian random variable with variance 1 scaled by 1000, $N_{path} = 5$, and $v_o = (3/4) \times 10^8$ [1].
To demonstrate the performance of the proposed algorithm in different scenarios, we define two scenarios characterized as follows:
• Scenario I: one NBI, whose frequency is uniformly distributed in $[0, F_s]$, occurs during a frame transmission, and IN samples occur with probability of success $P = 1.2 \times 10^{-2}$.
• Scenario II: two NBIs, whose frequencies are uniformly distributed in $[0, F_s]$, occur during a frame transmission, and IN samples occur with probability of success $P = 2 \times 10^{-2}$.
The BER performance of the algorithm in Scenario I and Scenario II is shown in Fig. 3 and Fig. 4, respectively. The AWGN bound in the simulation results corresponds to the case in which neither NBI nor IN occurs. As shown, the proposed scheme performs close to the AWGN bound and is superior to the nulling and clipping-with-frequency-excision schemes. The performance of the non-iterative implementation of the algorithm is also shown. As anticipated, the non-iterative approach saturates beyond an SNR of 10 dB in both cases, whereas the iterative approach keeps converging towards the AWGN bound in both scenarios.
V. CONCLUSION
A novel noise-mitigation technique for PLC systems has been proposed in this paper. The proposed scheme is robust against both narrowband interference and impulsive noise. Owing to its high precision in parameter estimation, both types of noise can be accurately estimated and hence canceled. The performance of the presented scheme is close to the AWGN bound and is consistent across scenarios with distinct levels of NBI and IN power, as verified by the simulation results. The use of the same algorithm for estimating both the IN sample support and the NBI frequencies also facilitates an efficient PLC receiver architecture.
Added value of whole‐exome and RNA sequencing in advanced and refractory cancer patients with no molecular‐based treatment recommendation based on a 90‐gene panel
Abstract Introduction The objective was to determine the added value of a comprehensive molecular profile by whole‐exome and RNA sequencing (WES/RNA‐Seq) in advanced and refractory cancer patients who had no molecular‐based treatment recommendation (MBTR) based on a more limited targeted gene panel (TGP) plus array‐based comparative genomic hybridization (aCGH). Materials and Methods In this retrospective analysis, we selected 50 patients previously included in the PROFILER trial (NCT01774409) for which no MBT could be recommended based on a targeted 90‐gene panel and aCGH. For each patient, the frozen tumor sample mirroring the FFPE sample used for TGP/aCGH analysis was processed for WES and RNA‐Seq. Data from TGP/aCGH were reanalyzed, and together with WES/RNA‐Seq, findings were simultaneously discussed at a new molecular tumor board (MTB). Results After exclusion of variants of unknown significance, a total of 167 somatic molecular alterations were identified in 50 patients (median: 3 [1–10]). Out of these 167 relevant molecular alterations, 51 (31%) were common to both TGP/aCGH and WES/RNA‐Seq, 19 (11%) were identified by the TGP/aCGH only, and 97 (58%) were identified by WES/RNA‐Seq only, including two fusion transcripts in two patients. An MBTR was provided in 4/50 (8%) patients using the information from TGP/aCGH versus 9/50 (18%) patients using WES/RNA‐Seq findings. Three patients had similar recommendations based on TGP/aCGH and WES/RNA‐Seq. Conclusions In advanced and refractory cancer patients in whom no MBTR was recommended from TGP/aCGH, WES/RNA‐Seq allowed the identification of more alterations which may in turn, in a limited fraction of patients, lead to new MBTR.
INTRODUCTION
In contrast, several meta-analysis studies reported a significant benefit of a genomic-driven personalized approach for directing patients into Phase 1 and 2 trials. Extending the molecular analysis to the entire exome may increase the proportion of actionable molecular alterations, of molecular-based treatment recommendations (MBTR), and eventually, of treated patients.
To determine to which extent a whole-exome and RNA sequencing (WES/RNA-Seq) analysis increases the proportion of patients with MBTR, a retrospective analysis was conducted in a subset of 50 patients included in the PROFILER study (molecular screening by TGP/aCGH to select molecular-based recommended therapies for metastatic cancer patients). For each case, germline DNA and a fresh-frozen tumor sample mirroring the FFPE sample used for tumor gene panel and array-based comparative genomic hybridization (TGP/aCGH) analysis were available, and no MBTR had been issued based on the TGP/aCGH.7
Patients, sample qualification, and molecular analysis
The study, conducted at Centre Léon Bérard, was approved on 2/2/2018 by the institutional review board (Ethics Committee of Lyon Sud-Est IV) and was conducted in compliance with the Declaration of Helsinki and Good Clinical Practice guidelines.
We retrospectively selected 50 patients among the 2,579 patients included in the previously reported PROFILER molecular screening program (NCT01774409) who had no MBTR based on the TGP/aCGH during the course of the trial and for whom a fresh-frozen tumor sample mirroring the FFPE sample, together with germline DNA, was available in the Centre Léon Bérard certified Biobank (BB-0033-00050).7 The gene list of the TGP is provided in Table S1. Fresh-frozen surgically resected tumor specimens mirroring the FFPE sample were evaluated by an experienced pathologist; a tumor cell content ≥ 30% was required. The first 50 cases meeting these criteria were included in the study. Each patient included in PROFILER provided written informed consent to participate in the study and for the use of his or her tumor sample. Figure 1 presents the consort diagram of the selection of the studied population from the whole cohort of the PROFILER01 clinical trial.
The molecular analysis conducted in the PROFILER trial was reported elsewhere.7 Details on WES/RNA-Seq and bioinformatics analysis are provided in Methods S1.
Variant interpretation and treatment recommendation
Analysis pipelines are regularly updated over time; the TGP raw data for the 50 selected patients were thus reanalyzed and a new report was issued. Both the TGP and WES/RNA-Seq reports were presented at the molecular tumor board (MTB). The interpretation of somatic single nucleotide variants (SNV) focused on their clinical impact, categorized into five tiers according to the ESMO Scale for Clinical Actionability of molecular Targets (ESCAT) classification11 (Figure S1). The MTB presentation was done at the same time for both tests to ensure similar treatment options.
Statistical analysis
Statistical analyses were conducted with the SPSS 23.0 package (IBM, Paris, France). The proportion of variants in each tier of the ESCAT classification identified with TGP/aCGH versus WES/RNA-Seq was compared using a Fisher's exact test. A p value of 0.05 was considered significant.
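As an illustration of the reported comparison (4/50 vs. 9/50 patients with an MBTR), a scipy equivalent of the test is sketched below; the paper used SPSS, and this 2x2 framing is our assumption (the published Fisher's test compared ESCAT tier proportions).

```python
# A hedged illustration of a Fisher's exact test on the reported MBTR counts.
from scipy.stats import fisher_exact

table = [[4, 46],   # TGP/aCGH:    MBTR yes / no
         [9, 41]]   # WES/RNA-Seq: MBTR yes / no
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```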
KEYWORDS: cancer biology, cancer management, gene panel, molecular tumor board, precision oncology, RNA-sequencing, targeted therapy, whole exome sequencing
RESULTS
The cohort of 50 patients included 14 different histological subtypes of cancer (Table 1).They were comparable to the overall population of the PROFILER study.
After exclusion of 4 (TGP) and 9619 (WES) variants of unknown significance, TGP and WES identified 52 and 121 SNVs and indels, respectively. In total, 70 and 148 molecular alterations, including SNVs (n = 135, 80%), CNVs (n = 29, 17%), one indel (n = 1, < 1%), one tumor mutational burden (TMB) > 10 mutations per megabase (median TMB: 1, range: 0-24.5), and fusion transcripts (n = 2, 1%-2%), were reported by the biologist with TGP/aCGH (median per patient: 1, range 0-6) and WES/RNA-Seq (median per patient: 2, range 0-8), respectively. Out of 167 molecular alterations (Table S2), 51 (30%) were common to both TGP/aCGH and WES/RNA-Seq, 19 (11%) were identified by TGP/aCGH only, and 97 (58%) were identified by WES/RNA-Seq only. Among the latter, two patients were found with a fusion gene by RNA-Seq (COL1A1::PDGFB or PAX5::FOXP1) that was already known from the initial diagnostic workup. More ESCAT Tier IV and X molecular alterations were identified by WES/RNA-Seq (Table 2). Whether MBTR differed when based on TGP/aCGH versus WES/RNA-Seq was discussed at the MTB (Figure 2). An MBTR was recommended in 4/50 (8%) patients using the information from TGP/aCGH versus 9/50 (18%) patients using WES/RNA-Seq findings. Three patients had similar recommendations (a PI3K/Akt/mTOR inhibitor in two cases and a KRAS G12C inhibitor in one case) based on either TGP/aCGH or WES/RNA-Seq (Figure 2). These overlapping cases reflect the fact that all cases were discussed at the molecular tumor board to compare both panels at the same time (a second discussion for old cases). The six MBTR provided exclusively by WES/RNA-Seq were (1) a PKC inhibitor for a choroidal melanoma with a GNAQ SNV (gene not included in the TGP panel), (2) a KIT inhibitor for a gastrointestinal stromal tumor with a KIT D820E mutation (region not covered by TGP), (3) an immune therapy based on a high TMB on WES (not available with TGP) for a malignancy of unknown origin, (4) a PARP inhibitor based on a BRCA loss not identified by TGP for a serous ovarian cancer, and (5) and (6) PI3K/Akt/mTOR inhibitors based on a PI3K p.N345K mutation not identified by TGP for an invasive ductal carcinoma and on a PTEN p.M1L mutation for a pyloric adenocarcinoma.
DISCUSSION

Molecular analysis by WES/RNA-Seq is now available in routine practice for diagnostic and theranostic purposes to increase the rate of MBTR for patients with advanced cancer. To our knowledge, this is the first report comparing the percentage of candidate patients for an MBTR using both TGP/aCGH and WES/RNA-Seq available in all patients. As expected, WES/RNA-Seq led to the identification of more molecular alterations, but most were not used for MBTR in the absence of documented clinical significance. However, a numerically higher rate of MBTR was recommended compared to TGP/aCGH. The translation of these recommendations into clinical benefit for the patients remains to be determined. Only 51 (30%) molecular alterations were common to both TGP/aCGH and WES/RNA-Seq. These discrepancies may be explained by tumor heterogeneity: although the same tumor was analyzed, nucleic acids were extracted from an FFPE sample (TGP/aCGH) and from a frozen sample (WES/RNA-Seq). As expected, some molecular alterations were missed by TGP because the genes were not included in the panel, or because fusions cannot be studied with TGP. Also, gene expression data and expression profiles were not included in the discussion for MBTR recommendation but represent promising biomarkers. Discrepancies between TGP and WES/RNA-Seq may also be related to lower sequencing depth (false negatives) and to the subtraction of constitutional variants (true negatives).
Results of the prospective randomized PROFILER02 trial were presented at ASCO 2022: compared to the TGP/aCGH panel, a larger NGS panel increased MBTR from 5% to 19.8%, very similar to our results (from 8% to 18%).12 Other groups reported impressive rates of 31.8% and 46% of patients treated with molecular-based therapy after extensive genomic analysis.13,14 However, the definition of "actionability" of a given molecular alteration remains unclear.15 In this study, we selected patients who had been given no recommendation in the course of the trial based on TGP/aCGH, possibly explaining the low rate of patients with MBTR based on WES/RNA-Seq (9/50, 18%). In this work, WES/RNA-Seq analysis resulted in a significantly superior but modest improvement in the number of MBTR compared to TGP/aCGH. Discrepancies were observed between the two tests, owing possibly to sample quality bias and subclonal analysis. As more knowledge is gained on the significance of individual and combined mutations based on WES/RNA-Seq, a careful clinical evaluation of the utility of WES/RNA-Seq for the management of cancer patients with advanced and refractory disease must be undertaken to further compare the utility of narrow panels versus broader but more expensive approaches.
FIGURE 1 Consort diagram of the selection of the studied population from the PROFILER01 clinical trial.

TABLE 1 Characteristics of the studied cohort. Abbreviation: MBRT, molecular-based recommended therapies.

TABLE 2 Frequency of molecular alterations identified with the 90-gene TGP/aCGH or WES/RNA-Seq, classified according to ESCAT. The proportion of variants in each tier of the ESCAT classification identified with TGP/aCGH versus WES/RNA-Seq was compared using a Fisher's exact test (p = 0.0154).

FIGURE 2 Venn diagram of biologically relevant molecular alterations identified with TGP/aCGH versus WES/RNA-Seq in advanced and refractory patients with no molecular-based recommended therapy. *TGP raw data for the 50 selected patients were reanalyzed, and a new report was issued. Both the TGP/aCGH and WES/RNA-Seq reports were presented at the MTB.
Seasonal and daily patterns in known dissolved metabolites in the northwestern Sargasso Sea
Organic carbon in seawater plays a significant role in the global carbon cycle. The concentration and composition of dissolved organic carbon reflect the activity of the biological community and chemical reactions that occur in seawater. From 2016 to 2019, we repeatedly sampled the oligotrophic northwest Sargasso Sea in the vicinity of the Bermuda Atlantic Time‐series Study site (BATS) to quantitatively follow known compounds within the pool of dissolved organic matter in the upper 1000 m of the water column. Most metabolites showed surface enrichment, and 83% of the metabolites had significantly lower concentrations with increasing depth. Dissolved metabolite concentrations most notably revealed temporal variability. Fourteen metabolites displayed seasonality that was repeated in each of the 4 yr sampled. Concentrations of vitamins, including pantothenic acid (vitamin B5) and riboflavin (vitamin B2), increased annually during winter periods when mixed layer depths were deepest. During diel sampling, light‐sensitive riboflavin decreased significantly during daylight hours. The temporal variability in metabolites at BATS was less than the spatial variability in metabolites from a previous sample set collected over a broad latitudinal range in the western Atlantic Ocean. The metabolites examined in this study are all components of central carbon metabolism. By examining these metabolites at finer resolution and in a time‐series, we begin to provide insights into the chemical compounds that may be exchanged by microorganisms in marine systems, data which are fundamental to understanding the chemical response of marine systems to future changes in climate.
The amount of carbon present in dissolved form is 200 times the amount found in particulate form (Hansell et al. 2009). Over the last 30 yr, scientists have made advances in quantifying where DOC is produced, consumed, and stored in the global ocean (Carlson and Hansell 2015). Microorganisms are the main consumers of DOC, and multiple researchers have examined the connections between the diversity of microorganisms in a marine system and their ability to consume dissolved organic matter (DOM) (Bercovici et al. 2021; Liu et al. 2020a; Stephens et al. 2020). These and other interactions between biology and chemistry play a key role in defining the factors that control organic carbon distributions in marine systems, both now and in the future ocean.
Time-series research in the northwest Sargasso Sea extends back to 1954 at Hydrostation S and began in 1988 at the Bermuda Atlantic Time-series Study site (BATS).Repeated sampling at these sites has revealed long-term (multiyear) changes in properties such as temperature, dissolved inorganic carbon, and dissolved oxygen (Bates and Johnson 2020), and changes in the water masses in the vicinity of BATS (Stevens et al. 2020).These studies have further identified seasonally varying physical and biogeochemical processes (Michaels et al. 1994;Bates et al. 1996;Steinberg et al. 2001) that drive changes in DOC concentrations (Carlson et al. 1994;Hansell and Carlson 2001), nutrient cycling and carbon export (Lomas et al. 2013), carbon isotopes (Gruber et al. 1998), oxygen levels (Fawcett et al. 2018;Billheimer et al. 2021), cellular carbon quotas (Casey et al. 2013), and carbon export by phytoplankton (De Martini et al. 2018;Lomas et al. 2022).Seasonal variability at BATS is also evident in the biological community, including viruses (Parsons et al. 2012), autotrophic microorganisms (DuRand et al. 2001;Lomas et al. 2010), heterotrophic microorganisms (Morris et al. 2005;Treusch et al. 2009), and mesozooplankton (Blanco-Bercial 2020).Logically, the temporal variability observed in these chemical and biological parameters should also be expressed in the distribution, quantity, and composition of organic compounds found in the northwest Sargasso Sea.However, quantitative measurements of the individual compounds that comprise DOM and how they change over time at BATS are lacking.
Investigations into individual organic compounds provide insight into biogeochemical processes in marine systems. Yet, the analytical methods needed to track individual organic compounds in marine systems are relatively recent advances. Preseparation techniques, such as liquid chromatography, have enabled both targeted investigations of biologically relevant known organic compounds (Durham et al. 2015; Widner et al. 2021) and untargeted investigations that characterize previously unknown dissolved organic compounds in marine systems (Longnecker and Kujawinski 2017; Petras et al. 2021). Our application of these tools to characterize the small, known organic compounds in the marine environment stems partially from our prior research with cultured microorganisms. For example, we have learned that Synechococcus releases select metabolites as waste products (Fiore et al. 2015), while the heterotrophic Ruegeria pomeroyi changes its metabolite release as a function of growth substrate (Johnson et al. 2016). In the ocean, studies have shown that particulate metabolites can vary on diel timescales (Boysen et al. 2021) and that sinking particles contain compositionally distinct organic compounds compared to suspended particles (Johnson et al. 2020). However, previous work in the Atlantic Ocean found that the composition of metabolites in suspended particulate organic material is distinct from that of the dissolved metabolites (Johnson et al. 2023). Therefore, patterns in particulate organic matter do not directly correlate with DOM, and we need to explicitly probe the temporal variability of dissolved metabolites.
As a component of BIOS-SCOPE, a multiyear, transdisciplinary program to study microbial processes, structure, and function in the Sargasso Sea, we used targeted metabolomics to track and quantify a set of compounds central to cellular carbon metabolism.Our first aim was to determine the presence and concentration of dissolved metabolites over multiple time scales (diel to seasonal) at BATS.These compounds were measured in the dissolved organic fraction collected within the upper 1000 m at regular temporal intervals (monthly to seasonal) between 2016 and 2019.Each July, multiday BIOS-SCOPE process cruises enabled higher frequency (6-h) sampling to investigate diel patterns in the summer.We used these data to connect observed patterns in dissolved metabolites with hydrographic trends and biological activity.
Hydrographic data
From July 2016 through July 2019, the upper ocean was sampled at three locations in the Sargasso Sea: BATS (31°40′N, 64°10′W), Hydrostation S (32°10′N, 64°30′W), and east of BATS (AE1916, 32°10′N, 64°13′W). On a subset of BATS cruises, water samples were collected from a minimum of six vertical levels spanning the surface down to 1000 m. During select times (July each year, September 2016, April 2017, and May 2019), the station was occupied for multiple days (process cruises), allowing for higher sampling frequency (Supporting Information Table S1). Because local water mass structure responds to winter mixing, thermal stratification, mesoscale eddies, and varying light penetration, we assigned sample depths to a vertical zonation of the water column (Curry et al. unpublished). We have samples from four seasons (mixed, spring, stratified, and fall), with season boundaries set by the relative positions of the chlorophyll maximum (CM) layer and the mixed layer depth (MLD), which was defined by a density threshold criterion as the depth where sigma-theta (σθ) exceeds the surface density by 0.125 kg m−3, a computation sketched below. We have samples from nine of the vertical zones (VZs) defined by Curry et al., and we grouped our samples into the photic zone (VZ0, VZ1, and VZ2), the subphotic region (VZ3), and the deep ocean (VZ4 through VZ10). The details of the bounds used to define the VZs are provided in Supporting Information Table S2.
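A minimal sketch of this MLD criterion, assuming profile arrays ordered from the surface downward (not the authors' code):

```python
# Sketch: mixed layer depth as the shallowest depth where sigma-theta
# exceeds the surface value by 0.125 kg m^-3.
import numpy as np

def mixed_layer_depth(depth, sigma_theta, threshold=0.125):
    excess = np.asarray(sigma_theta) - sigma_theta[0]
    below = np.nonzero(excess > threshold)[0]
    return depth[below[0]] if below.size else depth[-1]  # whole profile mixed
```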
Water samples were collected using 12-L Ocean Test Equipment bottles mounted on a rosette equipped with conductivity-temperature-depth (CTD), fluorometer, and oxygen sensors. Water samples were processed to obtain concentrations of particulate organic carbon, dissolved/total organic carbon (TOC), nutrients (nitrate + nitrite, phosphate), bacterial abundance using epifluorescence microscopy, and bacterial production via 3H-leucine incorporation (process cruises only), using established methods described in Knap et al. (1996), Halewood et al. (2022), and Liu et al. (2022). Both DOC and TOC samples were collected during this project, and the two are statistically indistinguishable in oligotrophic waters at the resolution of the high-temperature combustion method (Halewood et al. 2022); in this article, we refer to DOC or TOC concentrations as "bulk organic carbon" concentrations.
Metabolite extractions
Water (4-10 L) was collected directly from the sampling bottles into polytetrafluoroethylene (PTFE) or polycarbonate containers. Water was then filtered using a peristaltic pump through a 0.2 μm filter (Omnipore, EMD Millipore) held in a Teflon filter holder. The filtrate was acidified with 12 M HCl to ~pH 2-3. While at sea, the dissolved organic molecules were extracted from the filtrate using solid phase extraction (SPE) with Agilent Bond Elut PPL cartridges (1 g, 6 mL; Dittmar et al. 2008; Longnecker 2015). Briefly, the cartridges were preconditioned with methanol and the filtrate was pulled through the cartridges via PTFE tubing using a vacuum pump. The cartridges were rinsed with ~24 mL of 0.01 M HCl and then allowed to dry by pulling air over the cartridge for 5 min. The samples were eluted with methanol into a glass test tube. The extracts were stored for up to 1 week at −20°C before they were evaporated to near dryness using a Vacufuge (Eppendorf). Immediately prior to analysis, the extracts were reconstituted in 250 μL of 95:5 (v/v) water/acetonitrile or 100% water with isotopically labeled compounds that serve as injection standards (Kido Soule et al. 2015). Our prior work has shown that filtration-induced leakage of metabolites from cells will have minimal impact on the measurement of dissolved metabolites in oligotrophic environments (Johnson et al. 2023).
Selection of metabolites
The project focuses on metabolites that play a central role in cellular carbon metabolism.To characterize the number of metabolic reactions known to involve each compound, we used the framework linking metabolites and genomic information established by KEGG (Kanehisa et al. 2023).We used Biopython (Cock et al. 2009) to access KEGG through a REST-style application programming interface.With two exceptions (n-acetyltaurine and chitotriose), all the compounds examined in this project are present in KEGG with corresponding compound numbers (Supporting Information Table S3).We used the compound numbers to query KEGG to identify the chemical reactions associated with each compound.In KEGG, these reactions are assembled into pathways and the query results also provide the number of pathways for each compound.For compounds that we cannot analytically separate (the isomer pairs sarcosine/alanine and threonine/homoserine), we queried both KEGG numbers and separately present the results for each query.The code used to organize the data for these queries and export the results is available at GitHub (https://github.com/KujawinskiLaboratory/metaboliteMath).
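A hedged sketch of this query workflow using Biopython's KEGG REST interface; C00864 (pantothenate) serves as the example compound, and the parsing is simplified relative to the authors' published code.

```python
# Sketch: count the KEGG reactions and pathways associated with a compound.
from Bio.KEGG import REST

compound = "cpd:C00864"  # pantothenic acid (example)

reactions = REST.kegg_link("reaction", compound).read()
pathways = REST.kegg_link("pathway", compound).read()

n_reactions = len([ln for ln in reactions.strip().splitlines() if ln])
n_pathways = len([ln for ln in pathways.strip().splitlines() if ln])
print(f"{compound}: {n_reactions} reactions, {n_pathways} pathways")
```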
Targeted mass spectrometry
Samples were analyzed using ultra-high-performance liquid chromatography (Accela Open Autosampler and Accela 1250 Pump, Thermo Scientific) coupled to a heated electrospray ionization source (H-ESI) and a triple quadrupole mass spectrometer (TSQ Vantage, Thermo Scientific) operated under selected reaction monitoring mode (Kido Soule et al. 2015). Separation was performed at 40°C on a reversed-phase column (Waters Acquity HSS T3, 2.1 × 100 mm, 1.8 μm) equipped with a VanGuard (Waters) precolumn. Mobile phase A was 0.1% formic acid in water, and mobile phase B was 0.1% formic acid in acetonitrile. The flow rate was maintained at 0.5 mL min−1. The gradient began at 1% B for 1 min, increased to 15% from 1 to 3 min, then increased to 50% B from 3 to 6 min, and then increased to 95% B from 6 to 9 min. The mobile phase was maintained at 95% B until 10 min, then decreased to 1% B from 10 to 10.2 min and held at 1% B for the remainder (12 min total run time). Samples were run in both positive and negative ion modes using a 5 μL injection for each. We used two precursor-to-product ion transitions to identify each metabolite, one for quantification and a second to confirm metabolite identity; Kido Soule et al. (2015) provide details on the sourcing for each metabolite. The complete list of metabolites analyzed in this project is provided in Supporting Information Table S3.
Raw data files were converted to mzML format using msConvert (Chambers et al. 2012), and El-MAVEN (Agrawal et al. 2019) was used to identify and integrate peaks for all samples, standards, and pooled samples. Metabolite concentrations were calculated using a standard curve of at least five points. The standard curve was made by adding the known compounds to a pooled sample generated by collecting aliquots from each sample in a batch; each batch contained up to 100 samples. For metabolites with an extraction efficiency > 1%, we corrected the measured concentrations using the extraction efficiency data available from Johnson et al. (2017) and from Swarr et al. (unpublished data) for metabolites added since 2017. Concentrations of metabolites with extraction efficiencies < 1% (see Supporting Information Table S3) should be viewed with caution. Quality control checks for peak shape and instrument response were implemented as described in Kido Soule et al. (2015).
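The quantification and correction logic can be sketched as follows; all numbers are illustrative, and the simple linear inversion glosses over the standard-addition details of a curve built in a pooled matrix.

```python
# Sketch: invert a >= 5-point standard curve for a sample peak area, then
# divide by the compound's extraction efficiency (illustrative values only).
import numpy as np

spiked = np.array([0, 5, 10, 25, 50, 100], dtype=float)   # pM added to pool
areas = np.array([120, 410, 705, 1580, 3050, 6010], dtype=float)

slope, intercept = np.polyfit(spiked, areas, 1)            # linear fit

sample_area = 950.0
extraction_eff = 0.30                                      # e.g. 30%
measured_pm = (sample_area - intercept) / slope
corrected_pm = measured_pm / extraction_eff
print(f"{measured_pm:.1f} pM measured, {corrected_pm:.1f} pM corrected")
```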
Pre-extraction derivatization to enhance capture of polar dissolved metabolites
In May 2019, we had an opportunity to use a new method developed by Widner et al. (2021) that derivatizes metabolites in filtered seawater prior to SPE to improve their extraction efficiency and quantification. We processed a subset of samples by this method: replicate samples from 8 depths (cast 6) and 12 depths (cast 8). Briefly, sodium hydroxide (8 M) was added to filtered seawater, followed by the addition of 5% v/v benzoyl chloride in acetone. After shaking, samples were neutralized with concentrated phosphoric acid and stored frozen until processing on land through SPE to remove salts. The samples were then analyzed by liquid chromatography coupled to an ultrahigh-resolution mass spectrometer (Fusion Lumos tribrid mass spectrometer, Thermo Scientific) to quantify known (derivatized) metabolites. The data files were processed using Skyline (Henderson et al. 2018; Pino et al. 2020). Here, we focus on the results for malic acid, a compound that is not well captured with PPL-based SPE resins.
Statistics and data availability
Statistical analyses and plotting were done using MATLAB version 2019b. We used a non-parametric Kruskal-Wallis test to examine seasonal differences in metabolites and differences in metabolites during daylight (sunrise to sunset) compared to nighttime samples. Gridding of data prior to plotting was done with a MATLAB implementation of data-interpolating variational analysis (DIVA for MATLAB, Troupin et al. 2012). Targeted metabolomics data for this project are available at MetaboLights (http://www.ebi.ac.uk/metabolights/) as study accession number MTBLS2356. Environmental data are available from the Biological and Chemical Oceanography Data Management Office (BCO-DMO) (http://lod.bco-dmo.org/id/dataset/861266 and http://lod.bco-dmo.org/id/dataset/3782) and from the BATS FTP data site (http://bats.bios.edu/bats-data).
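The paper's tests were run in MATLAB; a scipy analogue of the seasonal comparison, with synthetic concentration vectors, is sketched below.

```python
# Sketch: Kruskal-Wallis test of mixed- vs. stratified-season concentrations.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)
mixed = rng.lognormal(2.5, 0.4, 30)        # winter (mixed) values, pM (synthetic)
stratified = rng.lognormal(2.0, 0.4, 90)   # summer (stratified) values, pM

stat, p = kruskal(mixed, stratified)
print(f"H = {stat:.2f}, p = {p:.4f}")      # p < 0.05 -> seasonal difference
```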
Hydrography and environmental data
Over each annual cycle at the study site, the upper ocean exhibited large changes in stratification and depth of the surface mixed layer, with enhanced thermal stratification and shallow MLDs (< 20 m) characterizing the warm, summer months, a progressive deepening in the fall, and maximum MLDs (100-300 m) in late winter/early spring.The onset of warming occurred in April in each year of the project with the MLD shoaling to < 20 m by June (Fig. 1).During the July periods, when repeated sampling was possible, the depth of the CM ranged from 80 to 100 m (Supporting Information Fig. S1).
We partitioned the upper 1000 m into vertical layers to facilitate statistical comparisons of metabolite concentrations across seasonal and interannual boundaries.From a total of 372 samples, 75% corresponded to the summer (stratified) period when we occupied the station for multiple days during dedicated process cruises.Approximately 15% of the samples were collected during the winter (mixed) period, and the remaining 10% were associated with the relatively brief spring and fall transition periods (Supporting Information Table S2).Sample collection was evenly distributed throughout the upper 1000 m of the water column, with 25% of samples collected in the surface mixed layer (VZ 0 ), which ranged between 20 and 270 m over the 4-yr period.
Querying the KEGG database revealed that the metabolites examined in this project were found in at least one pathway within KEGG, with the highest number of pathways observed for glutamic acid that appears in 52 different pathways (Supporting Information Table S3).The range of chemical reactions is even broader, with some compounds found in one reaction extending to nicotinamide adenine dinucleotide that appears in over 1000 reactions at KEGG.Thus, aside from the two compounds not listed within the KEGG, the compounds examined in the Sargasso Sea are broadly represented within reactions and biochemical pathways at KEGG.Out of the 95 metabolites available in the analytical methods for this project, 41 compounds passed the quality checks and were present in all 4 yr of the dataset.The majority of dissolved metabolites not measured have a low extraction efficiency in seawater (Supporting Information Table S3) and therefore their absence was not unexpected.Of the metabolites we detected, 34 (83%) exhibited higher concentrations near the surface and decreased in concentrations with increasing depth in the water column (Supporting Information Fig. S2).Thus, correlations between the concentration of each metabolite and other environmental variables generally revealed significant negative correlations with depth and nutrients, and positive correlations with temperature, bacterial production, and the concentration of bulk organic carbon (measured as TOC or DOC) (Supporting Information Fig. S3).By contrast, some metabolites including syringic acid and cyanocobalamin were present in all 4 yr of the project, were highly variable, and had no significant correlations with measured environmental parameters.In the sections that follow, we focus our discussion on metabolites with repeatable seasonal and diurnal patterns.We also compare the temporal variability of metabolite concentrations at BATS to the geographical variability of metabolite concentrations sampled along a latitudinal transect between 38 S and 55 N in the western Atlantic Ocean (Johnson et al. 2023).
Seasonality of metabolites: Mixed vs. stratified season
We had sufficient samples from two seasons, mixed and stratified, to allow for rigorous statistical comparisons of dissolved metabolite concentrations in the photic zone, the subphotic zone, and the deep ocean (Supporting Information Table S2).Differences in metabolite concentrations were most pronounced when comparing the photic zone between the winter (mixed) and summer (stratified) seasons (p-values < 0.05, Kruskal-Wallis test) (Fig. 2).Within the photic zone a total of 14 metabolites showed statistically significant differences in concentration across seasons.For example, the nucleosides adenosine and xanthosine were elevated in the winter, as were S-(5 0 -adenosyl)-L-homocysteine, the vitamins pantothenic acid (B 5 ) and riboflavin (B 2 ), and desthiobiotin, the vitamin precursor to biotin (vitamin B 7 ).In contrast to desthiobiotin, two other vitamin precursors were lower in the winter: 4-methyl-5-thiazoleethanol (HET), the metabolic precursor to thiamine (vitamin B 1 ), and 4-aminobenzoic acid, the metabolic precursor to folic acid (vitamin B 9 ).Yet while thiamine and folic acid were both measured at BATS during this project, neither showed a seasonal difference.A heterogeneous set of compounds, including amino acids and the nucleoside inosine were also higher in the summer.
Seasonality of specific metabolites
Pantothenic acid, taurocholic acid, and tryptophan presented the strongest examples of a repeatable, seasonal pattern over the 4 yr when samples were collected.Each dissolved metabolite showed higher concentrations in the upper 300 m of the water column.
Pantothenic acid
Concentrations ranged from below detection (0.7 pM) to 34.5 pM, with the lowest concentrations at the onset of the stratified period (Fig. 3a).In the winter (mixed) period, pantothenic acid accumulated over the upper 300 m, although its concentrations in the winter of 2017/2018 were notably low compared to the two other winter periods.Pantothenic acid concentrations integrated within the mixed layer showed a recurring annual pattern of decreased stocks during stratified periods when the MLD had shoaled to < 20 m (May, July, and September), whereas stocks increased when the MLD extended deeper than 150 m (January, February, March, and April) (Fig. 3b).
Taurocholic acid
Mean concentrations of taurocholic acid reached a relative maximum of 2.2 ± 0.8 pM in VZ0 during thermal stratification in July (Supporting Information Fig. S4). Below 100 m, concentrations of taurocholic acid were lower than 1 pM; in the deep ocean, a weak seasonal pattern with increased concentrations during the summer (stratified) periods was observed (Fig. 2).
Tryptophan
Regular seasonal variability was also observed for tryptophan, with the greatest concentrations observed in July; however, unlike taurocholic acid, the temporal variability was most pronounced deeper than the MLD and between 40 and 120 m, where the mean concentration was 2.6 ± 3.3 pM (Fig. 4).
Temporal and vertical variability of metabolites during summer stratification
Metabolite samples were collected every 6 h over 3 days during the July process cruises in each year of the study.
Riboflavin
Concentrations of riboflavin demonstrated consistent diel variability in each year of the study, with the lowest concentrations at the shallowest depths at midday, when sampling coincided with the highest photosynthetically active radiation (PAR) values (Fig. 5). The difference between riboflavin concentrations in daytime (based on sunrise and sunset times at BATS) and nighttime samples was statistically significant (p-value < 0.01). The exception was July 2019, when riboflavin reached maximum concentrations deeper in the water column (between 80 and 100 m), with values mostly below detection in the surface samples (Supporting Information Fig. S5). Thus, riboflavin presented significant differences on both a diel cycle and a seasonal cycle, as noted in the previous section.
Malic acid
Generally higher concentrations of malic acid (~500 pM) were observed in the surface ocean, decreasing with depth (Fig. 6). However, the attenuation patterns over depth were not consistent from year to year. For example, in 2016 the measured concentration of malic acid was greatest at 200 m. One caveat for this metabolite is that the extraction efficiency of malic acid with PPL cartridges is low (0.7%, Johnson et al. 2017); therefore, we did not correct the measured concentrations, given the potential error associated with the correction at low extraction efficiencies. The actual concentrations are likely much higher than those shown in Fig. 6a.

(Figure 2 legend: blue marks metabolites with higher concentrations in the summer season, orange marks metabolites with higher concentrations in the winter season, and white marks cases with no significant difference between the two seasons. The water column was grouped into three VZ regions for this comparison: photic zone (VZ0, VZ1, and VZ2), subphotic (VZ3), and deep (VZ4 through VZ10). Comparisons marked with ** have p-values < 0.001; all other comparisons marked in orange or blue have 0.001 ≤ p-value < 0.05.)

During the July 2019 cruise,
we used a pre-extraction chemical derivatization method described in Widner et al. (2021) to validate our observations of malic acid.This method derivatizes the -OH group on malic acid using benzoyl chloride, thereby enhancing its extraction with the PPL resin.Using this method, we quantified higher concentrations of malic acid approaching 2000 pM at the surface and decreasing with depth to a minimum at 600 m.The increased sampling resolution made possible with the benzoyl chloride derivatization method also revealed a deep secondary mesopelagic maximum ranging between 600 and 1700 pM at a depth range of 200-600 m (Fig. 6b).
Additional examples of metabolites that showed generally higher concentrations at the surface and decreased throughout the euphotic zone include 4-hydroxybenzoic acid and 5′-methylthioadenosine (MTA). However, there was significant interannual variability in the shape and magnitude of the depth profiles of these metabolites (Supporting Information Fig. S5). For example, 4-hydroxybenzoic acid was elevated throughout the upper water column, except in 2018, when it was below detection in all samples collected at the surface. For MTA, both the magnitude and the location of the highest concentrations varied by year. In 2017, samples from depths < 10 m had the highest concentrations of MTA, while the samples collected in 2018 and 2019 near the deep CM presented elevated concentrations (Supporting Information Fig. S5).
Comparing temporal vs. spatial variability in metabolites
Data from the current study reveal temporal variability in metabolite concentrations over a 4-yr period in one geographic location. To compare this temporal variability to geographic variability, we calculated the percent relative standard deviation (RSD, standard deviation divided by the mean, multiplied by 100) for 11 metabolites found both in the vicinity of BATS and in a study conducted in the western Atlantic Ocean between 38°S and 55°N latitude, in which samples were collected in the austral fall and boreal late summer to early fall of 2013 (Johnson et al. 2023). For the latitudinal study, we restricted the analysis to samples collected from the upper 1000 m of the water column to allow an explicit comparison between the two datasets. The metabolites in this comparison are all observed at picomolar concentrations (Supporting Information Fig. S2), and by calculating the RSD we can compare variability in space and time without considering differences in mean values.
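The comparison reduces to one statistic per metabolite per dataset; a sketch with made-up concentration vectors:

```python
# Sketch: percent relative standard deviation (RSD) for temporal vs. spatial data.
import numpy as np

def rsd(x):
    x = np.asarray(x, dtype=float)
    return 100.0 * np.nanstd(x) / np.nanmean(x)

bats = np.array([3.1, 4.0, 2.7, 3.5, 2.9])       # pM over time at BATS (made up)
transect = np.array([1.2, 4.8, 6.1, 2.0, 3.3])   # pM along the transect (made up)
print(rsd(bats), rsd(transect))  # a point below the 1:1 line: space > time
```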
When RSD values from the BATS site are plotted against those from the latitudinal transect, most of the metabolite abundances showed patterns of higher geographic variability compared to temporal variability (Fig. 7; symbols below the 1 : 1 line).Only the amino acid phenylalanine showed higher temporal variability compared to the latitudinal transects, and three metabolites (4-aminobenzoic acid, caffeine, and tryptophan) had the same degree of variability in time and space.As a comparison, the RSD of bulk organic carbon at the BATS site was 15.6% compared to 22.3% in the samples collected along the latitudinal transect in the western Atlantic Ocean.
Discussion
The inventory of marine DOC represents the largest reservoir of exchangeable reduced carbon in the ocean.The biological and physical dynamics that control the sources and sinks of marine DOC over space and time play a critical role in the global carbon cycle.Our understanding of the changes in bulk DOC has evolved over the years (Carlson and Hansell 2015), and about half of marine primary production cycles through the labile pool of DOC (Moran et al. 2022a).Recent advances in analytical methods (Steen et al. 2020) now allow us to track individual molecules traded between microorganisms within the marine carbon cycle (Moran et al. 2022b).This is a corollary to the advances made through GEOTRACES' investigations into inorganic trace elements and isotopes that have been used as markers of a range of oceanographic processes (Anderson 2020).With this project, we focus on compounds that are involved in central carbon metabolism, a small subset of the 19,000 compounds present in KEGG.Of the 41 compounds that met our quality control metrics, there was not a single factor to describe the dynamics of all compounds.Therefore, we group our discussion of the dissolved metabolites into those with annually repeating patterns, those that vary on seasonal and diel cycles, and end with a comparison of temporal and spatial variability of metabolites.
An annually repeatable pattern: Pantothenic acid
The water-soluble vitamin pantothenic acid (vitamin B 5 ) is the clearest example of a compound with an annual pattern that repeats regularly in each of the 4 yr at BATS.Pantothenic acid was present in three-quarters of the samples collected in the upper 300 m, reflecting the origin of its name from the Greek "pantos" meaning from everywhere.Its concentration, however, was not constant throughout the year.We measured the lowest concentrations in the late spring/early summer periods when thermal stratification was greatest and the highest concentrations during periods of vigorous mixing in winter and early spring as the MLD increased.We also observed slightly lower values in the 2018 mixing period which was characterized by shallower maximal MLDs compared to other years.Pantothenic acid was discovered in the 1930s as a growth factor for yeast (Williams et al. 1933), and subsequent work has shown that pantothenic acid is incorporated intracellularly into CoA, a cofactor used in common metabolic pathways including lipid synthesis, processing of fatty acids, and the tricarboxylic acid cycle (Novelli 1953;Leonardi et al. 2007).Microbial production of pantothenic acid can exceed demand for the compound (Jackowski and Rock 1981), which may lead to its release from cells.Yet, details on which microorganisms produce dissolved pantothenic acid and why it is released extracellularly in marine systems are sparse.In laboratory cultures, three strains of the abundant marine phytoplankton Prochlorococcus have been shown to release pantothenic acid to the surrounding media (Kujawinski et al. 2023).At BATS, the distribution of Prochlorococcus is seasonally variable (Olson et al. 1990) with maximum concentrations usually between 60 and 80 m (DuRand et al. 2001), overlapping with the depth range where the maximal concentration of pantothenic acid occurred, suggesting that Prochlorococcus could be a source of pantothenic acid.Furthermore, in heterotrophic marine organisms, genetic information reveals that the abundant SAR86 group lacks a putative metabolic pathway to produce pantothenic acid (Dupont et al. 2012), which indicates this group of bacteria likely require external sources of pantothenic acid.Liu et al. (2022) proposed that some groups of bacterioplankton may shift their metabolisms as the MLD shoals following deep convective mixing, resulting in enhanced scavenging of pantothenic acid in the surface 200 m at BATS.Furthermore, because the short-term scavenging of pantothenic acid observed in April (Liu et al. 2022) is also an annually repeating pattern, this metabolite may be a good model for tracking long-term changes in the sources and sinks of metabolites in the northwest Sargasso Sea.
Seasonal shifts in the balance of dissolved metabolites
The seasonal pattern of bulk DOC dynamics and the role of deep convective mixing in the biological carbon pump have been well established at BATS (Carlson et al., 1994;Hansell and Carlson 2001).The DOC concentrations increase in the euphotic zone as the water column stratifies in late spring or early summer and remains elevated until the mixed layer extends deeper than the euphotic zone in the winter or early spring.During winter convective mixing to depths between 200 and 300 m, DOC is homogeneously redistributed throughout the mixed layer and as a result a portion of the seasonally accumulated DOC is exported from the euphotic zone into the mesopelagic zone (Carlson et al. 1994;Hansell and Carlson 2001).DOC accumulation within the euphotic zone results from a relative imbalance between DOC production processes and heterotrophic bacterial consumption (Carlson et al. 1996).Factors that can affect bacterial consumption of DOC and control the accumulation of DOC include potential inorganic limitation of heterotrophic bacterioplankton production (Cotner et al. 1997;Thingstad et al. 1997), the production of recalcitrant organic compounds (Aluwihare and Repeta 1999) that resist rapid microbial remineralization, and the composition and varying metabolic potential of the resident microbial communities within the euphotic zone (Carlson et al. 2004).At BATS, this imbalance results in the accumulation of dissolved combined neutral sugars (Goldberg et al. 2009) and total combined amino acids (Liu et al. 2022) during the summer stratified period.In the winter, seasonal mixing redistributes DOC that accumulated in the surface to deeper in the water column; organic carbon at deeper depths is then converted to inorganic carbon via microbial respiration as the water column re-stratifies (Hansell and Carlson 2001).
Based on the metabolites examined in this project, compounds with higher concentrations in the mixed layer in the winter were predominantly vitamins, in contrast to amino acids, select nucleic acid and vitamin precursors, and other metabolites that were most pronounced in the summer. Our in situ dissolved metabolite concentration data reveal changes in the standing stock of a metabolite, that is, the balance between changes in metabolite production and consumption. An increase in the amount of a metabolite measured in the water column can indicate an increase in its production, a decrease in its consumption, or a change in the physical transport of metabolites in and out of the ecosystem. With these caveats in mind, we next consider the metabolites that had relatively higher concentrations in the winter (mixed) period compared to the summer (stratified) period.
Excess S-(5′-adenosyl)-L-homocysteine (SAH) in the water column observed during the winter period could be an indication of greater release by cells seeking to avoid the negative effects of higher intracellular levels of SAH. SAH is produced when S-adenosyl-L-methionine (SAM) donates methyl groups to molecules such as DNA, RNA, or proteins (Parveen and Cornell 2011); however, the cellular accumulation of SAH inhibits this methylation. Halophilic cyanobacteria can generate SAH during the synthesis of compatible solutes used to endure increased salt concentrations (Sibley and Yopp 1987), and SAH can be converted to adenosine and homocysteine by prokaryotic cells (Shimizu et al. 1984). Thus, the higher levels of dissolved SAH and adenosine observed during the winter at BATS could have been released from cells. Another possible explanation arises with SAR11, an abundant cosmopolitan marine heterotrophic Alphaproteobacteria, in which the transcription of genes producing SAH is enhanced under sulfur-limited growth (Smith et al. 2016). This is of particular relevance given SAR11's inability to use oxidized forms of sulfur such as sulfate and its growth requirement for exogenous reduced sulfur (Tripp et al. 2008). At BATS, the lowest levels of the organic sulfur compounds dimethylsulfide (DMS) and dimethylsulfoniopropionate (DMSP) occur during the winter (Levine et al. 2016). Hence, if SAR11 has an inadequate supply of reduced sulfur in the winter at BATS, it could increase its transcription of genes producing SAH and thereby release increased amounts of SAH into the water column, which would match our observations.
The biochemical reactions described above, whereby SAH is produced during the methylation of molecules, also produce MTA (Parveen and Cornell 2011). We have measured MTA in the dissolved phase in both the north Atlantic (Johnson et al. 2023) and in laboratory cultures with R. pomeroyi (Johnson et al. 2016) and Prochlorococcus MIT9313 (Kujawinski et al. 2023). Intracellularly, MTA is significantly less abundant in Thalassiosira pseudonana when cobalamin concentrations are low (Heal et al. 2014), which indicates a link between internal MTA levels and conditions experienced by phytoplankton. At the BATS site, dissolved MTA concentrations reached maxima at different depths in the water column in different years, a pattern that was not correlated with any single environmental factor. Furthermore, dissolved MTA levels did not show significant seasonal differences, indicating that further investigation is required to determine the environmental conditions that control the distribution of MTA in the water column.
Two vitamins (pantothenic acid and riboflavin) and one vitamin precursor (desthiobiotin) showed higher concentrations in the mixed layer during the winter mixed period. The production and consumption of vitamins occur in both phytoplankton and heterotrophic bacteria (Warren et al. 2002; Rodionov et al. 2003; Koch et al. 2012; Sañudo-Wilhelmy et al. 2014), and vitamins are exchanged within microbial communities (Joglar et al. 2021; Wienhausen et al. 2022a; Zoccarato et al. 2022). Vitamin precursors may enable cellular growth in the absence of the vitamin itself, as has been observed in some, but not all, bacterial cells tested for the ability to use desthiobiotin in the absence of biotin (Wienhausen et al. 2022b). In addition, many species of eukaryotic phytoplankton require external sources of vitamins (Croft et al. 2005, 2006), and eukaryotic picophytoplankton are prevalent in the winter period at BATS (Treusch et al. 2012) and presumably consume vitamins. However, the winter mixed season is when we observed relatively higher vitamin concentrations, which contradicts the pattern that would be expected if consumption of vitamins by eukaryotic phytoplankton were the dominant controlling factor. Furthermore, while light and increased temperature can cause degradation of vitamins (Gold et al. 1966; Carlucci et al. 1969), only riboflavin showed significantly lower concentrations in the day compared to night, suggesting that the accumulation of the other vitamins is not directly controlled by lower light levels in the winter.
The compounds with elevated levels during the summer are select amino acids such as leucine and isoleucine, nucleic acid precursors, and several miscellaneous compounds. Extracellular enzyme activity, especially peptidase activity that cleaves proteins into free amino acids, has been shown to increase with warmer summer temperatures, leading to excess amino acid production relative to consumption (Piontek et al. 2014); this excess production would explain our observations of higher leucine and isoleucine concentrations in the summer. Theoretical calculations show that leucine and isoleucine are among the amino acids that are energetically costly to produce (McClelland and Montoya 2002; Yamaguchi et al. 2017), which may indicate that cells can only produce excess leucine and isoleucine during the summer. Furthermore, inosine and 4-aminobenzoic acid are among the known compounds excreted by copepods (Maas et al. 2020). Zooplankton are known to vary seasonally at BATS (Blanco-Bercial 2020), which could result in a temporally variable source of these compounds to the mixed layer at BATS. D-ribose 5-phosphate, malic acid, and taurocholic acid are also relatively higher in the summer compared to the winter, as are the vitamin precursors HET and 4-aminobenzoic acid, although the reasons remain unclear.
Diel shifts in riboflavin (vitamin B2) during the summer
The subtropical North Atlantic Ocean experiences higher incoming solar radiation than the temperate and polar latitudes. Riboflavin, which showed significant decreases in concentration during the daylight hours in the surface water masses, was the only measured organic compound that responded to daily changes in light. Riboflavin is a water-soluble vitamin that is a precursor for flavin mononucleotide or flavin adenine dinucleotide, cofactors involved in electron transfer, and has been widely studied since it was first isolated in the 1880s (Eggersdorfer et al. 2012). Riboflavin is subject to direct photodegradation due to its highly conjugated, aromatic structure, and the history of research on the sensitivity of riboflavin to light dates to the 1930s, when the first oxidation products of riboflavin were isolated (Warburg and Christian 1932). The loss of riboflavin occurs at wavelengths of light between 350 and 520 nm, with the highest levels of damage in the narrower window between 415 and 455 nm (Ahmad et al. 2006). The interactions between light and riboflavin also vary as a function of pH (Ahmad et al. 2004), ionic strength (Ahmad et al. 2016), and temperature (Sattar et al. 1977). In addition to its role in metabolism, riboflavin can also act as a quorum-sensing molecule (Rajamani et al. 2008). Our dissolved riboflavin concentrations, from below detection to 0.6 pM, are at the low end of the dynamic range of existing data, although coastal seawater from the North Sea had even lower concentrations at < 0.04 pM (Bruns et al. 2022). Along a north-south transect in the western Atlantic Ocean, Johnson et al. (2023) measured values averaging 0.3 pM (range 0-12 pM), with the highest concentrations found between 50 and 150 m, with the exception of the northernmost station where values approached 12 pM at 1 m. The values from the Atlantic Ocean are comparable to dissolved riboflavin concentrations off the coast of California (Sañudo-Wilhelmy et al. 2012). In contrast, in a coastal inlet on the northwest coast of the United States, Heal et al. (2014) found that riboflavin had the highest concentrations of the B vitamins, with values ranging from 40 to 120 pM. In laboratory cultures, riboflavin is released by Synechococcus (Fiore et al. 2015) and Thalassiosira (Kujawinski et al. 2017; Longnecker and Kujawinski 2017), but is not released by Prochlorococcus (Kujawinski et al. 2023). Coral reefs (Weber et al. 2022) and sponges (Fiore et al. 2017) are also sources of riboflavin to the marine environment.
The daytime decrease in riboflavin concentrations in the surface water samples could be due to the response of riboflavin to light, consumption by the in situ microbial community, or a combination of both processes. In the Pacific Ocean, intracellular concentrations of riboflavin reached maximal levels at the end of the day (Boysen et al. 2021). Furthermore, while riboflavin is known to react to light, there are certainly other, unknown, compounds in the euphotic zone that exhibit this behavior. To our knowledge, there are no organisms known to be auxotrophic for riboflavin, nor has a daily cycle in riboflavin release been examined. The daily decrease in riboflavin in the surface ocean has implications for the microorganisms that rely on production of vitamins by other organisms, because excess dissolved riboflavin is only present at night.
Additional compounds of interest
Dissolved malic acid concentrations were higher in the summer in the upper euphotic zone, in VZ0. Malic acid is an intermediate in the citric acid cycle and is produced in the first step of the Hatch-Slack, or C4 carbon fixation, pathway. The C4 carbon fixation pathway is less common and, on land, is dominated by grasses (Sage 2016). In marine ecosystems there is equivocal evidence for a C4 photosynthetic pathway in diatoms (Mackey et al. 2015), and diatoms with the C4 photosynthetic pathway may use it to dissipate excess light energy rather than to fix carbon dioxide (Haimovich-Dayan et al. 2013). The Tara Oceans dataset revealed low transcript levels for the enzymes of the C4 pathway (Pierella Karlusich et al. 2021), supporting the possibility that the higher levels of malic acid in the surface ocean are due to exudation by eukaryotic phytoplankton because they lack an active C4 carbon fixation pathway. Malic acid is also one of multiple organic compounds measured within marine aerosols collected in the western Arctic Ocean (Kawamura et al. 2012) and western Pacific Ocean (Kawamura and Sakaguchi 1999). Thus, there may be atmospheric sources of malic acid to the surface ocean.
The pattern of 4-hydroxybenzoic acid in the water column is an example of a metabolite with irregular vertical and temporal variability during our multiyear project. 4-Hydroxybenzoic acid was completely absent from the shallowest samples in 2018, yet was present deeper in the water column at similar concentrations in all years. Furthermore, the return of 4-hydroxybenzoic acid to the surface waters in 2019 indicates a transient decoupling between the production and removal processes in the upper water column. In laboratory cultures, cyanobacteria release 4-hydroxybenzoic acid (Fiore et al. 2015; Kujawinski et al. 2023) or use it as a carbon source (Mou et al. 2007), which complicates defining sources and sinks within the water column. In shallow reef habitats, marine sponges remove 4-hydroxybenzoic acid from the water column, likely due to the actions of the microbial community residing within the sponges (Fiore et al. 2017). The presence of 4-hydroxybenzoic acid in the water column will have unknown effects on the microbial community, as it can both stimulate and inhibit prokaryotic activity in a manner that varies for different microbial species (Czerpak et al. 2001; Kamaya et al. 2006). Our previous work (Liu et al. 2020b) demonstrates that we can analytically separate the isomers 3-hydroxybenzoic acid (m-hydroxybenzoic acid) and 2-hydroxybenzoic acid (o-hydroxybenzoic acid) from 4-hydroxybenzoic acid (p-hydroxybenzoic acid); however, these isomers were not analyzed during this project. This is relevant because the effects of 2- and 3-hydroxybenzoic acid on the microbial community differ from those of 4-hydroxybenzoic acid: 2-hydroxybenzoic acid strongly stimulates growth, 3-hydroxybenzoic acid inhibits growth, and 4-hydroxybenzoic acid weakly stimulates growth (Czerpak et al. 2001). Thus, the biological community responds differently to each isomer, underscoring the care that must be taken with respect to measuring structural isomers in marine metabolomics.
Scale of variability in metabolites
By comparing data from this multiyear sampling with our previous data from the western Atlantic Ocean (Johnson et al. 2023), we find that the variability in dissolved metabolite abundance across space is generally greater than the variability over time. The greater spatial variability in metabolites is consistent with the expected variability in biological communities over large geographic distances. Furthermore, the variability in the concentrations of individual dissolved metabolites is considerably higher than the variability in bulk organic carbon concentrations, emphasizing that changes in bulk DOM can mask changes in the components that make up the DOM pool. The relatively higher variability in metabolites along a latitudinal transect indicates that a single time-series station cannot serve as a model for the global ocean. However, the repeated sampling at the BATS site enables us to investigate long-term patterns in metabolites as the ocean encounters future changes in climate. Since the 1980s, the BATS site has become warmer and more acidic (Bates and Johnson 2020). While we cannot yet observe long-term changes in our dataset, in future work we can track metabolites over time to see how specific organic compounds change within the context of a changing climate and the subsequent impact on the marine microbial food web.
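Spelled out, the variability metric underlying this comparison (the percent relative standard deviation plotted in Fig. 7, as defined in its caption) is

$$\mathrm{RSD}\,(\%) = \frac{\sigma}{\mu} \times 100,$$

where $\sigma$ and $\mu$ are the standard deviation and mean of a given metabolite's concentrations across samples in time (2016-2019 at the Sargasso Sea site) or across stations in space (the western Atlantic transect).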
Conclusions
The dissolved metabolites measured in the northwestern Sargasso Sea during this multiyear study are central carbon metabolites, and examining their variability in time provides insight into the processes that underlie chemical variability in a marine ecosystem. These time-series data reveal recurring temporal patterns of known dissolved organic compounds on diel, seasonal, and annual timescales. Understanding how the temporal variability of dissolved metabolites is linked to the sources and sinks of other biological and biogeochemical variables is the next challenge in determining how marine metabolites will respond to future changes in the marine environment.
Fig. 1. Water temperatures in the upper 1000 m of the water column from July 2016 through July 2019, the depth and time range of samples that were collected during this project. The gray dots show the locations where discrete samples were collected. The colored boxes at the top indicate the season in which the samples were collected. The thin contour lines depict chlorophyll fluorescence measured in relative fluorescence units (RFU). The thick black line is the MLD.
Fig. 2. Summary of differences between the winter (mixed) season and the summer (stratified) season in metabolites. Boxes in blue are dissolved metabolites …
Fig. 3. Concentrations of dissolved pantothenic acid in the water column from July 2016 through July 2019. (a) Shows pantothenic acid (in pM) in the upper 300 m of the water column. The gray dots represent discrete samples and the black line is the MLD over the sampling period. (b) Presents the amount of pantothenic acid integrated to the MLD (in units of μM m−2). Values were grouped by month and year before averaging in order to present data from each year over a 12-month annual cycle. The gray line is the averaged MLD.
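The mixed layer inventory plotted in panel (b) of Fig. 3 corresponds to a depth integral of concentration; a minimal sketch, assuming the discrete bottle depths are combined by trapezoidal summation (the original methods define the exact procedure), is

$$I = \int_{0}^{z_{\mathrm{MLD}}} C(z)\,dz \;\approx\; \sum_{i=1}^{n-1} \frac{C(z_i) + C(z_{i+1})}{2}\,\bigl(z_{i+1} - z_i\bigr),$$

where $C(z)$ is the pantothenic acid concentration at depth $z$ and $z_{\mathrm{MLD}}$ is the mixed layer depth.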
Fig. 4. Dissolved tryptophan from July 2016 through July 2019 in the upper 300 m of the water column. The black line is the MLD. Colors represent the concentration of tryptophan in pM.
Fig. 5. Concentrations of dissolved riboflavin (pM) in the surface water samples from VZ0, sampled over 24 h periods during the July cruises in all years of the project. The PAR data show the average light levels by hour of the day from the 2017 cruise. The x-axis shows local time.
Fig. 6. Two methods for sample preparation were used to measure dissolved malic acid during this project. (a) Shows data from SPE using a PPL resin that was used for all 4 yr of samples collected, and (b) shows malic acid captured with benzoyl chloride derivatization used prior to PPL extraction in the July 2019 cruise. The data in (a) are the discrete samples (filled dots) and interpolated profiles generated using DIVA gridding (open circles). The concentration of malic acid with the SPE method in (a) has not been corrected for the extraction efficiency of malic acid.
Fig. 7. The percent RSD (standard deviation divided by the mean, × 100%) in dissolved metabolite concentrations from 2016 to 2019 in the Sargasso Sea (y-axis) compared with data from a latitudinal transect in the western Atlantic Ocean (x-axis; Johnson et al. 2023). The black line is the one-to-one line, where the variability in time (Sargasso Sea) matches the variability in space (western Atlantic Ocean transect). The arrows are used to connect metabolite names with data points when crowding would prevent a label from appearing adjacent to its data point. The gray diamond is the RSD of bulk concentrations of TOC/DOC in each dataset.
Lateral inhibition and granule cell synchrony in the rat hippocampal dentate gyrus
Studies of patients with temporal lobe epilepsy and of experimental models of this disorder suggest that the hippocampal dentate gyrus may be a common site of seizure onset and propagation. However, the nature of the dentate "network defect" that could give rise to spontaneous, intermittent, and synchronous population discharges is poorly understood. We have hypothesized that large expanses of the dentate granule cell layer have an underlying tendency to discharge synchronously in response to afferent excitation, but do not do so normally because vulnerable dentate hilar neurons establish lateral inhibition in the granule cell layer and thereby prevent focal discharges from spreading to surrounding segments. To address this hypothesis, we (1) identified functionally independent segments of the granule cell layer; (2) determined whether discharges in one segment evoke lateral inhibition in surrounding segments; and (3) determined if disinhibition induces normally independent segments of the granule cell layer to discharge synchronously. Simultaneous extracellular recordings were made from two locations along the longitudinal or transverse axes of the granule cell layer using saline- and bicuculline-filled electrodes that were glued together. Leakage of 10 mM bicuculline from the electrode tip produced no detectable spontaneous activity. However, single perforant path stimuli evoked multiple population spikes at the bicuculline electrode and simultaneous normal responses at the nearby saline electrode. The multiple spikes evoked at the bicuculline electrode did not propagate to, and were not detected by, the adjacent saline electrode, indicating functional separation between neighboring subgroups of granule cells. Paired-pulse stimulation revealed that multiple discharges were not only restricted to one segment of the granule cell layer, but strongly inhibited surrounding segments. This lateral inhibition in surrounding segments often lasted longer than 150 msec. Finally, we evaluated granule cell activity at two normally independent sites within the granule cell layer both before and after disinhibition was induced by high frequency stimulus trains or bicuculline injection. Following a 10 sec, 20 Hz perforant path stimulus train, 2 Hz stimulation evoked virtually identical synchronized epileptiform discharges from normally separated sites. Similarly, intrahippocampal or intravenous bicuculline injection produced spontaneous synchronous epileptiform discharges throughout the granule cell layer. These results indicate that lateral or "surround" inhibition is an operant physiological mechanism in the normal dentate gyrus and suggest that afferent stimuli to a disinhibited dentate network evoke highly synchronized discharges from large expanses of the granule cell layer that are normally kept functionally separated by GABA-mediated inhibition.
[Key words: hippocampus, dentate gyrus, lateral inhibition, GABA, bicuculline, epilepsy]

Recent studies suggest that disinhibition of the granule cells of the hippocampal dentate gyrus may be a crucial step in the epileptogenic process that culminates in complex partial seizures of temporal lobe origin (Lothman et al., 1991; Sloviter, 1994). An understanding of how the dentate network malfunctions in temporal lobe epilepsy requires an understanding of the normal functional organization of the dentate gyrus. In 1971, Andersen and colleagues suggested that the dentate gyrus and adjoining hippocampus are organized as a series of functionally independent, parallel, transverse "slices." They proposed that activity in a given segment of the entorhinal cortex is relayed to a corresponding segment of the dentate granule cell layer, and that this activity is then conveyed sequentially through a thin slice of the CA3 and CA1 pyramidal cell layer. This "lamellar" hypothesis of hippocampal function (Andersen et al., 1971) has been questioned recently on the basis of anatomical studies that show that extensive longitudinal associational pathways exist in both the hippocampus and dentate gyrus (Amaral and Witter, 1989). Because Amaral and Witter (1989) inferred that these longitudinal associational projections constitute recurrent excitatory connections among widely separated segments of the granule cell layer, they suggested that the existence of longitudinal projections is antithetical to the concept of independent lamellar function. However, as a result of recent studies of granule cell pathophysiology associated with the loss of the dentate hilar neurons that form the longitudinal projection (Sloviter, 1987, 1991b), we view the possible functional significance of the dentate associational pathway differently. We have suggested that by primarily exciting inhibitory neurons in distant segments of the granule cell layer, this pathway evokes lateral inhibition, rather than excitation, and establishes functional separations in the granule cell layer as a result (Sloviter, 1994).
The selective loss of dentate hilar cells after experimentally induced status epilepticus or head trauma (Sloviter, 1987, 1991b; Lowenstein et al., 1992) is strikingly similar to the pattern of pathology (endfolium sclerosis) found in the dentate gyrus of patients with temporal lobe epilepsy (Margerison and Corsellis, 1966; Bruton, 1988; deLanerolle et al., 1989). Because many epilepsy patients with hippocampal sclerosis have a history of prolonged febrile seizures or head trauma (Bruton, 1988; Meldrum and Bruton, 1992; Cendes et al., 1993; French et al., 1993), we have hypothesized that the loss of vulnerable hilar neurons may result in lateral disinhibition. Without lateral inhibition to establish functional separation in the granule cell layer, an enlarged expanse of the granule cell layer can be recruited to form an "epileptic aggregate" capable of responding to normal neocortical stimuli with epileptiform discharges. Thus, we have likened the dentate gyrus to the keyboard of a piano and proposed that individual lamellae are normally like individual piano keys (Sloviter, 1994). When "struck" by a focal afferent input from the entorhinal cortex, individual granule cell lamellae respond and convey that activity to a corresponding slice of the dentate hilus and CA3 pyramidal cell layer via the highly lamellar mossy fiber pathway (Blackstad et al., 1970; Gaarskjaer, 1986; Amaral and Witter, 1989). When hilar neurons that form the longitudinal projection are lost following injury, lateral disinhibition results, functional separation between neighboring lamellae is lost, and lamellae coalesce functionally; that is, the "keys" become glued together. Now, the disinhibited and functionally unified segments respond to a specific afferent input with a nonspecific, epileptiform discharge from an enlarged excitable aggregate of granule cells (Sloviter, 1994).
This article addresses the three cornerstones of this hypothesis. The first is that adjacent segments of the normal granule cell layer are functionally independent, and excitation evoked in one segment does not propagate to surrounding segments. This view is supported by the results of a study by Steward and colleagues (1990), which showed that multiple granule cell population spikes evoked in the immediate region of a bicuculline-filled recording electrode did not affect the potential evoked simultaneously at an adjacent saline-filled recording electrode that was estimated to lie only 0.5-1.0 mm away in the longitudinal direction. The second cornerstone is that excitation in one segment of the granule cell layer is not only restricted to that segment, but actively inhibits granule cell excitability in adjacent segments; that is, it evokes lateral inhibition. The third is that the granule cells in each functionally independent segment of the granule cell layer have an inherent tendency to join together with surrounding lamellae and discharge synchronously when excited, but that they are normally prevented from doing so by lateral inhibition.
Experiments were designed to answer the three questions central to this hypothesis. First, what are the spatial and axial characteristics of functional separation in the granule cell layer? Second, does lateral inhibition occur in the granule cell layer? Third, when inhibition is reduced, do normally independent segments of the granule cell layer coalesce functionally and begin to discharge synchronously and repetitively?
Materials and Methods

Animal treatment
Male Sprague-Dawley descendant rats weighing 250-350 gm were treated in accordance with the guidelines set by the New York State Department of Health and the National Institutes of Health for the humane treatment of animals. Rats were anesthetized with urethane (1.25 gm/kg, i.p.; 250 mg/ml saline). A femoral vein cannula was implanted in some cases to permit intravenous injection of bicuculline base (Sigma Chemical, St. Louis, MO; 1 mg/ml 1% citric acid in saline). Rats were placed in a Kopf stereotaxic device and the top of the skull was made level. A bipolar stainless steel stimulating electrode (Rhodes Medical) was placed in the angular bundle of the perforant path (4.5 mm lateral from the midline suture and immediately rostral to the lambdoid suture). Each pair of recording electrodes was lowered into the brain (approximately 2 mm lateral from the midline suture, 3 mm rostral to the lambda, and 3.5 mm below the brain surface) and placed in the dorsal (suprapyramidal) blade of the granule cell layer by optimizing the characteristic shape of the evoked potential. Responses were recorded in both the longitudinal and transverse axes by placing the tips approximately parallel or perpendicular to the longitudinal axis of the hippocampus (approximately 30° from the sagittal plane). Biphasic current pulses (0.1 msec duration) were generated by a Grass S88 stimulator with a Grass stimulus isolation unit. Potentials were amplified by two Grass preamplifiers, displayed simultaneously on a Nicolet series 420 digital oscilloscope, and stored on diskette.
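As a worked example of the anesthetic dosing arithmetic above (the 300 gm body weight is illustrative only, chosen from the stated 250-350 gm range):

$$1.25~\tfrac{\text{gm}}{\text{kg}} \times 0.30~\text{kg} = 0.375~\text{gm urethane}, \qquad \frac{375~\text{mg}}{250~\text{mg/ml}} = 1.5~\text{ml injected}.$$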
Stimulation and recording
Given the anatomical features of the perforant path, it is possible to stimulate the angular bundle of perforant path fibers and activate the entire dorsal dentate gyrus simultaneously (Andersen et al., 1966). By doing so, a single stimulus produces similar events simultaneously throughout the longitudinal extent of the dorsal dentate gyrus. Using the bicuculline electrode method devised by Steward et al. (1990), we were able to increase the postsynaptic excitability of a small population of granule cells located around the tip of one electrode, and then determine the effect of that enhanced discharge on responses to the same afferent stimuli recorded simultaneously at a nearby site within the granule cell layer.
Paired electrodes type 1. To identify functionally distinct segments of the granule cell layer, we modified the method of Steward et al. (1990) by gluing two glass microelectrodes together in order to verify microscopically the distance between the electrode tips. Glass electrodes (1.5 mm outer diameter; World Precision Instruments, Sarasota, FL) were pulled on a Kopf electrode puller and glued together with cyanoacrylate glue to produce electrodes of matched length and verified tip separations (50-1500 μm). Tip separations were measured under a microscope on a micrometer slide. Accurate measurements could be made microscopically to within approximately 15 μm. Tip separation was measured as the distance between the inside edges of the two electrode tips. One electrode of each pair was filled with 0.9% NaCl and the other with 10 mM bicuculline methiodide (BMI; Sigma) dissolved in 0.9% NaCl. Tip resistance was typically < 1 MΩ.
Paired electrodes type 2. To determine if increased excitation in one segment of the granule cell layer produced lateral inhibition in adjacent segments, we needed to evaluate granule cell population responses to perforant path stimulation both before and after injection of bicuculline from one electrode. This was not possible with the type 1 electrode, from which bicuculline apparently diffuses continuously. To evaluate inhibition both before and after BMI injection, we used a 1 μl Hamilton syringe (#7001) as both a recording electrode and an injector. A gold male pin connector was soldered to the shaft of the syringe needle, which was then insulated with a thin coating of Epoxylite. The tip of the needle was left uncoated to permit it to act as a recording electrode. A saline-filled glass microelectrode was then glued to the shaft of the Hamilton syringe and the tip separation (800 μm) was measured. BMI (10 mM) was pulled into the syringe, followed by a small volume of boiling agar (2 gm agar/60 ml water) to form a plug to prevent leakage of BMI. After simultaneous recording of control granule cell responses to perforant path stimulation from both electrodes, BMI was injected through the syringe/electrode using a Kopf model 5000 microinjector.
Paired electrodes type 3. Two saline-filled electrodes with a tip separation of 1.0 or 1.5 mm were used to determine the degree of granule cell synchrony at two normally separated sites after a train of perforant path stimuli that overcame inhibition, or after intravenous or intrahippocampal injection of bicuculline.
Results
Functional separation in the normal dentate gyrus

When first lowered into the granule cell layer, both recording electrodes recorded virtually identical, normal granule cell field potentials. The aforementioned sequence of events that occurred after placement of paired electrodes in the granule cell layer was highly reproducible within the same animal when the electrodes were raised above the hippocampus and then reinserted after a period of time (5-15 min) sufficient for bicuculline to be cleared from the tissue. Each experiment was routinely repeated at least twice in each animal. The effects of bicuculline leakage from one electrode were also highly reproducible qualitatively between animals (n > 25). Normal and abnormal responses (as shown in Fig. 1B) were recorded simultaneously from paired electrodes with verified tip separations as small as 200 μm. With tip separations of 50 or 100 μm, both electrodes recorded similar bicuculline-enhanced potentials simultaneously, presumably because bicuculline diffused rapidly to the saline electrode tip and/or both electrodes recorded from overlapping granule cell subpopulations. Thus, although this method did not permit an accurate determination of the minimum distance between functionally independent segments of the granule cell layer, functional separation was clearly evident in granule cell subpopulations as close as 200 μm. Assuming that bicuculline diffuses some distance toward the saline electrode, functional separation presumably exists at distances considerably less than 200 μm. These results with paired electrodes demonstrate that the field potentials evoked by afferent stimulation are generated by very small spheres of granule cells around the electrode tips, and that even large amplitude, multiple discharges neither propagate to, nor are detected by, a second electrode located only a few hundred microns away.

Figure 1. Functional separation and lateral inhibition in the granule cell layer of the normal rat dentate gyrus. Simultaneous dentate granule cell responses to 0.3 Hz perforant path stimuli were recorded by paired glass electrodes, one containing 10 mM bicuculline methiodide, the other containing saline. Evoked potentials were recorded along the longitudinal axis by electrodes with a verified tip separation of 670 μm. A, Within seconds after electrode insertion, the bicuculline electrode (top trace) records a slightly disinhibited response, and the saline electrode (bottom trace) records a normal response. Note that at this low stimulus frequency, paired-pulse stimulation at a 40 msec interpulse interval produces no suppression of the second spike (arrow). B, Less than 1 min later, the bicuculline electrode records multiple granule cell responses. As granule cell discharges develop at the bicuculline site, paired-pulse inhibition increases dramatically at the adjacent saline electrode (arrow). Thus, discharges evoked in one segment of the granule cell layer are not only restricted to that segment, but produce lateral inhibition in surrounding segments. Calibration bar: 5 mV and 10 msec.

Figure 2. Effect of bicuculline on responses in independent segments of the granule cell layer to increasing afferent stimulus voltage. The top and bottom traces of each pair of traces were recorded from the bicuculline and saline electrodes, respectively, after bicuculline leakage had stabilized as shown in Figure 1B. A and B, Similar dendritic field "EPSPs" in response to low stimulus voltage (A, 3 V; B, 3.5 V). C and D, As voltage increases, the afferent stimulus evokes multiple population spikes superimposed on the positive dendritic field depolarization at the bicuculline electrode, but primarily dendritic field depolarizations at the saline electrode (C, 5 V; D, 6 V). E, With supramaximal stimulus voltage (30 V), one stimulus pair simultaneously evokes multiple spikes at the bicuculline electrode and single population spikes with second spike suppression at the saline electrode (arrow). These responses show that, despite bicuculline diffusion from the tip of one electrode, a single pair of afferent stimuli produces similar dendritic excitation in functionally distinct segments of the granule cell layer. Bicuculline specifically decreases the somal spike threshold such that similar dendritic excitation produces multiple population spikes at the bicuculline electrode that do not propagate longitudinally. All stimuli are paired pulses 40 msec apart delivered at 0.1 Hz. Calibration: 5 mV and 10 msec.
Lateral inhibition in the granule cell layer

Comparison of responses to paired-pulse stimulation both before and after multiple discharges developed at the bicuculline electrode revealed that multiple discharges evoked in one segment of the granule cell layer were not only restricted to that segment, but were associated with a dramatic increase in paired-pulse inhibition in surrounding segments (Fig. 1). Significantly, the traces shown in Figure 1 indicate that the lateral inhibition produced by bicuculline was highly selective in that only the second population spike of each pair was inhibited (Fig. 1B, bottom trace). If bicuculline were affecting granule cell excitability in other segments nonspecifically, the first of two spikes would also be affected.

Figure 3. Lateral inhibition in the dentate gyrus. Granule cell responses to perforant path stimuli were recorded simultaneously by longitudinally oriented bicuculline- and saline-filled electrodes that had tip separations of 380 μm (A) and 1000 μm (B). A, Upon initial insertion, both the BMI (top trace of each pair) and saline (bottom trace) electrodes record "normal" evoked responses to 0.3 Hz paired-pulse stimuli delivered at a 40 msec interpulse interval (A1). Seconds later, bicuculline diffusion from the tip produces low-amplitude multiple discharges on the second waveform (A2; asterisk) but no inhibition of the second waveform's population spike recorded simultaneously at the saline electrode (arrow). When the next stimulus pair evokes similar multiple discharges on the first waveform at the BMI electrode (A3; asterisk), spike suppression in the second waveform at the saline electrode is evident (arrow). Note that the discharge in A3 evokes lateral inhibition (spike suppression at the saline electrode; arrow) and recurrent inhibition of the second spike at the BMI electrode. B, Three consecutive traces of responses in a different rat to 0.1 Hz paired stimuli delivered 80 msec apart. B1, Note that recordings made soon after electrode insertion show early BMI effects (multiple spikes) primarily on the second waveform. B2, As low-amplitude discharges are produced on the first waveform (asterisk), spike suppression occurs at the saline electrode (arrow). B3, With the next stimulus, discharges at the first waveform at the BMI electrode fully suppress the spike at the saline electrode (arrow), as well as increase inhibition at the BMI electrode (second spike in top trace in B3). Note the efficacy and selectivity of the effect of BMI, that is, complete suppression of the second spike without any apparent effect on the first potential. These recordings show that lateral inhibition does not require full disinhibition in the segment in which discharges originate. Calibration: 5 mV and 10 msec.
The cellular mechanism by which the GABA-A receptor antagonist bicuculline induced granule cell discharges is illustrated by the simultaneous responses of two distinct segments to increasing stimulus voltage. Figure 2 shows that at low stimulus voltage, the evoked dendritic field "EPSPs" (Andersen et al., 1966) were of similar amplitude (Fig. 2A,B). Thus, each perforant path stimulus produced similar amplitude dendritic field depolarizations in independent segments of the granule cell layer despite the presence of bicuculline at one site. With increasing voltage, the same degree of dendritic excitation produced very different somal responses recorded simultaneously in independent segments. The responses recorded at the saline electrode exhibited "normal" responsiveness, that is, a relatively high population spike threshold for a given dendritic depolarization.
At the bicuculline electrode, however, the same dendritic excitation produced multiple spikes (Fig. 2C,D). With a further increase in stimulus voltage, additional multiple population spikes were evoked at the bicuculline site, whereas strongly inhibited potentials with single population spikes were recorded at the nearby saline electrode (Fig. 2E). Thus, bicuculline produced a decreased spike threshold for a given degree of dendritic depolarization. Granule cell discharges evoked at the bicuculline electrode were reliably associated with lateral inhibition at the saline electrode, as shown in Figure 2, that is, > 90% suppression of second spike amplitude, in all cases tested (n = 23 rats) and with all tip separations used to evaluate lateral inhibition (380, 480, 600, 650, 670, 700, 710, 750, 780, 800, 840, 900, 1000, 1100, and 1500 μm). Lateral inhibition was recorded in both the longitudinal and transverse axes. All figure illustrations are of recordings made along the longitudinal axis.
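For reference, paired-pulse suppression of this kind is conventionally quantified from the amplitudes of the two population spikes; a standard formulation (the text does not spell out its exact calculation) is

$$\text{suppression}\,(\%) = \left(1 - \frac{A_{2}}{A_{1}}\right) \times 100,$$

where $A_{1}$ and $A_{2}$ are the population spike amplitudes evoked by the conditioning and test pulses, respectively. By this measure, > 90% suppression corresponds to $A_{2} < 0.1\,A_{1}$.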
The nature of lateral inhibition

Although Figures 1 and 2 show lateral inhibition increasing in one segment of the granule cell layer as large amplitude, multiple discharges develop in another segment, lateral inhibition does not require large amplitude population spikes in the instigating segment. Immediately after paired electrodes were lowered into the granule cell layer, normal responses were recorded at both sites (Fig. 3A1). In the first seconds of recording, and presumably before sufficient bicuculline diffused from the tip to produce the large amplitude granule cell discharges shown in Figures 1 and 2, small amplitude discharges ("ripples") were often observed at the bicuculline electrode (Fig. 3A2). The relationship between the first of two potentials recorded by the bicuculline electrode and the second of two potentials recorded 40-80 msec later by the saline electrode was apparent from unanticipated variations in the responses to consecutive, identical stimulus pairs. When a pair of stimuli evoked ripples in only the second of two waveforms at the bicuculline electrode, no spike suppression was evident on the second waveform recorded at the saline electrode (Fig. 3A2). Conversely, when an identical pair of afferent stimuli evoked a ripple discharge in only the first waveform recorded at the bicuculline electrode, strong spike suppression was evident in the second waveform recorded at the saline electrode (Fig. 3A3). In addition to the observed increase in paired-pulse suppression at the saline electrode, that is, inhibition across segments, ripple discharges also increased paired-pulse suppression at the bicuculline electrode, that is, inhibition within the segment (Fig. 3A3,B3).
These results suggest that as it begins to diffuse, bicuculline disinhibits a very small sphere of granule cells around the electrode tip. Multiple discharges of these cells (ripples), as shown in Figure 3, A and B, produce lateral inhibition at the adjacent saline electrode, but also increase inhibition in the larger sphere of granule cells from which the bicuculline electrode records but which has not yet been exposed to a significant concentration of bicuculline. As bicuculline continues to diffuse, the entire sphere of cells from which the bicuculline electrode records begins to discharge synchronously, producing the large amplitude discharges shown in Figure 1B.

Figure 4. The top trace of each pair is the recording made with the BMI syringe/electrode. The bottom trace is the response at the saline-filled glass electrode glued to the insulated barrel of the Hamilton syringe/electrode. Note that before BMI injection (A), the paired-pulse suppression at 0.2 Hz stimulation is partial at 20 msec and nil at longer interpulse intervals. B, Seconds after injection of 0.05 μl of 10 mM BMI from the BMI syringe/electrode, multiple spikes are recorded at the BMI site and strong, long-lasting lateral inhibition is recorded at the saline electrode located 800 μm away longitudinally. Note the long-lasting (> 160 msec) lateral inhibition and the decrease in the amplitude of the dendritic field "EPSP," which appears to be maximal at an interpulse interval of 80 msec. No paroxysmal discharges or voltage changes of any kind were detected during the interval between the first discharge at the BMI electrode and the test pulses at both electrodes. Thus, the long-lasting spike suppression at the saline electrode is apparently due to the initial discharge at the BMI electrode. Calibration: 5 mV and 10 msec.
Although the use of the paired glass electrodes clearly demonstrated lateral inhibition, there was insufficient time after lowering the electrodes, and before bicuculline diffused from the tip, to optimize the stimulating electrode placement and determine spike amplitudes at different interpulse intervals. Therefore, the Hamilton syringe/electrode (type 2, see Materials and Methods) was designed and constructed to permit evaluation of granule cell inhibition and excitability both before and after bicuculline was injected. Before bicuculline injection, both electrodes recorded similar and stable normal potentials in response to perforant path stimulation (Fig. 4A). The degree of paired-pulse inhibition evoked by 0.1-0.3 Hz stimulation was evaluated at different interpulse intervals. At these low stimulus frequencies, paired-pulse suppression is partial at a 20 msec interpulse interval and absent at longer intervals (Fig. 4A), as described in detail previously (Sloviter, 1991a).

Within seconds after bicuculline injection (0.05 μl of 10 mM BMI), multiple population spikes developed at the bicuculline syringe/electrode (Fig. 4B, upper traces). As this occurred, the second of two potentials evoked at the saline electrode by the same perforant path stimuli was completely suppressed, and the amplitude of the dendritic field "EPSP" was decreased (Fig. 4B, lower traces). Inhibition of the second population spike occurred despite a relatively unaffected first population spike. The magnitude of lateral inhibition produced by bicuculline was extraordinary in that complete suppression of large amplitude population spikes was consistently produced (n = 14) even at the low stimulus frequencies used, and often lasted more than 150 msec (Fig. 4B). It should be noted that injection of this small volume of BMI was imprecise. The agar plug constituted part of the injection volume and we suspect that little BMI was released from the syringe. Rather, disturbance of the plug probably allowed BMI to diffuse out passively in minute concentration, much like the glass electrode. The GABA-mediated nature of bicuculline-induced lateral inhibition was apparent from experiments in which larger volumes of bicuculline (0.5 μl of 10 mM BMI) were injected. Shortly after injection, multiple granule cell population spikes were produced at the bicuculline electrode and complete spike suppression was produced at the saline electrode (Fig. 5A-C). However, as the bicuculline presumably spread, lateral inhibition decreased at the saline electrode (Fig. 5D). As bicuculline then cleared from the tissue, lateral inhibition was restored at the saline electrode (Fig. 5E,F).

Figure 5. Bicuculline blockade of bicuculline-induced lateral inhibition. A-F, Sequence of responses from BMI and saline electrodes after injection of bicuculline (0.5 μl of 10 mM BMI) from the syringe/electrode. A-C, Immediately after injection, multiple spikes increased at the BMI syringe/electrode and produced complete spike suppression (arrows) at the saline electrode located 800 μm away. D, As bicuculline spread to the saline electrode, lateral inhibition was blocked and multiple spikes were recorded from both electrodes. E and F, As bicuculline cleared from the tissue, lateral inhibition was restored at the saline electrode (arrows). This sequence indicates that lateral inhibition is GABA mediated. Electrodes were oriented along the longitudinal axis. Calibration: 5 mV and 10 msec.
Lateral inhibition was produced in the septal portion of the dorsal dentate gyrus by bicuculline release in more temporal segments of the dorsal granule cell layer, and in the temporal portion of the dorsal dentate gyrus by more septal injections. Therefore, these results demonstrate that focal discharges induced in, and restricted to, any single segment of the granule cell layer produce strong and long-lasting inhibition, that is, transverse or longitudinal lateral inhibition, in surrounding segments of the granule cell layer, both close to and distant from the site of the focal discharges.
Disinhibition-induced granule cell synchrony in the dentate gyrus

Finally, experiments were designed to determine if normally independent segments of the granule cell layer have a tendency to discharge together synchronously when inhibition is overcome reversibly by a stimulus train (Andersen and Lomo, 1967; Ben-Ari et al., 1979; Sloviter, 1983; Thompson and Gahwiler, 1989) or by systemic or intrahippocampal bicuculline (Sloviter, 1991a).
Two saline-filled electrodes with a tip separation of 1 mm recorded independent responses to perforant path stimulus trains delivered at 20 Hz for 10 sec. In the first seconds of the train, 20 Hz stimulation evoked large amplitude population spikes with recurrent inhibition, that is, alternating population spikes and field "EPSPs" (Fig. 6B). Toward the end of the 10 sec train, stimulation overcame inhibition; that is, each 20 Hz stimulus evoked a large amplitude population spike (Fig. 6C). Immediately after cessation of the stimulus train, and before inhibition recovered, 2 Hz stimulation evoked nearly identical synchronized epileptiform discharges at both saline electrodes (Fig. 6E). That each electrode was recording the synchronous discharges of normally independent and nonoverlapping granule cell subpopulations was evident from recordings in which large amplitude population spikes were occasionally present at one recording site but absent at the other (Fig. 6E, arrows). The same indication is evident from recordings showing that, immediately before the 2 Hz posttrain stimuli evoked synchronous repetitive discharges at both sites, one electrode recorded population spikes and the other electrode recorded field depolarizations that would soon become population spikes (Fig. 6D). Identical results were produced with saline electrodes with a tip separation of 1.5 mm (n = 6 at 1 mm and n = 3 at 1.5 mm).
Although the posttrain epileptiform discharges recorded at both saline electrodes were highly synchronous, it was possible that the synchrony within independent segments was due not to an inherent tendency of granule cells to discharge synchronously, but rather to the fact that both discharges were triggered initially by the same perforant path stimuli. To address this possibility, we recorded spontaneous granule cell activity in two functionally independent segments after intrahippocampal or intravenous injection of bicuculline (0.5 μl of 25 mM BMI through the Hamilton syringe/electrode, or 0.75 mg bicuculline base/kg i.v.). Within seconds after bicuculline injection by either route, highly synchronous yet clearly distinct spontaneous granule cell epileptiform discharges were recorded simultaneously from both recording electrodes (Fig. 7). Spontaneous granule cell discharges continued for approximately 2 or 20 min after intravenous (n = 6) or intrahippocampal (n = 4) injection, respectively.
These results demonstrate that when granule cell inhibition is overcome reversibly by afferent stimulation or by bicuculline, normally independent segments of the granule cell layer join together to discharge spontaneously in a highly synchronous manner.
Discussion
These results demonstrate for the first time that (1) focal discharges induced in one segment of the dentate granule cell layer are restricted to that segment by GABA-mediated inhibition in both the longitudinal and transverse directions; (2) discharges in one segment of the granule cell layer produce highly effective lateral or "surround" inhibition in neighboring segments; and, (3) when inhibition is reduced, normally independent segments of the granule cell layer coalesce functionally to generate spontaneous and synchronous repetitive discharges.
Relevance to the normal functional organization of the dentate gyrus

The results of these experiments clarify several aspects of the functional organization of the normal dentate gyrus. The dentate gyrus is a laminar and relatively simple structure in that it consists of a tightly packed population of principal cells (granule cells) and a variety of nonprincipal cells that are presumed to regulate the excitability of the granule cells (Andersen et al., 1966). The perforant path, which is the main excitatory afferent projection from the neocortex to the dentate gyrus (Ramon y Cajal, 1893, 1968; Lorente de No, 1934), innervates and excites dendrites of granule cells and nongranule cells alike (Scharfman, 1991). Once driven to discharge by afferent stimuli, the granule cells excite a thin transverse slice of the CA3 pyramidal cell layer via the highly lamellar mossy fiber pathway (Blackstad et al., 1970; Gaarskjaer, 1986; Amaral and Witter, 1989).
Although lateral inhibition has not been previously described in the normal hippocampal formation, its existence should not be surprising because lateral inhibition is an established mechanism in other brain regions. For example, in the cerebellum, the parallel fibers of the granule cells excite a limited number of Purkinje cells as well as inhibitory neurons that inhibit the Purkinje cells adjacent to the targeted Purkinje cells (Eccles, 1973). In the olfactory bulb, afferent stimuli to mitral cells also activate inhibitory granule cells that produce lateral inhibition in nontargeted mitral cells (DeVries and Baylor, 1993). Thus, feedforward and feedback lateral inhibitory mechanisms create an inhibitory field penetrable only by strong excitatory stimuli, thereby directing excitation to specifically targeted neurons.

The bicuculline-induced lateral inhibition we have demonstrated in the normal dentate gyrus is reminiscent of the "inhibitory surround" described by Prince and Wilder (1967) around a neocortical penicillin focus, or the "ring" of inhibition reported by Dichter and Spencer (1969) on the surface of the hippocampus. Both groups envisaged this inhibition as preventing the spread of epileptiform discharges from the site of seizure origination. However, the present studies are distinct from these earlier studies in that no "focus" of spontaneous epileptiform discharges was produced. Leakage of bicuculline from a microelectrode tip produced no detectable spontaneous activity; the only apparent change was a highly localized disinhibition that produced multiple discharges only in response to afferent stimulation. Thus, the use of the bicuculline electrode method revealed lateral or surround inhibition resulting from brief, highly localized, evoked discharges, and not in response to spontaneous epileptiform discharges.

Figure 6. Granule cell synchrony after stimulus train-induced disinhibition. Responses to 2 Hz paired-pulse perforant path stimulation were recorded from two saline electrodes placed 1.0 mm apart along the longitudinal axis of the granule cell layer before and after a 10 sec, 20 Hz stimulus train that overcame inhibition. A, Simultaneous responses before the stimulus train. Note single first spikes and suppression of the second spike (arrows) in response to 2 Hz stimulation. B, During the first seconds of the 10 sec, 20 Hz train, perforant path stimuli evoke large amplitude population spikes and recurrent inhibition remains intact (alternating spike suppression; arrows). C, Toward the end of the stimulus train, inhibition is overcome and all stimuli evoke large amplitude population spikes. D, Immediately after the end of the stimulus train, 2 Hz stimuli evoke multiple discharges at one site and small field changes at the other site. Note that the spikes recorded at one site are not detected by the other electrode. E, Within seconds after the recording in D, 2 Hz stimuli evoke distinct, highly synchronous, large-amplitude granule cell population spikes from normally independent segments. Note that each recording electrode records distinct events. This is evident from the traces, which show that some spikes (arrows) are recorded at one site but are barely detected by the other electrode, located only 1.0 mm away. Calibration: 5 mV and 10 msec in A-C; 5 mV and 20 msec in D and E.

Figure 7. Spontaneous granule cell population discharges after intravenous or intrahippocampal bicuculline. Spontaneous activity within the granule cell layer was recorded simultaneously at two sites 1.0 mm apart along the longitudinal axis. After either (A) intravenous injection (0.75 mg bicuculline base/kg) or (B) intrahippocampal injection through the Hamilton syringe/electrode (0.5 μl of 25 mM bicuculline methiodide), normally independent segments of the granule cell layer exhibited highly synchronous spontaneous granule cell discharges. Note that although the discharges in both segments were highly synchronous, they were not identical in that some spikes (arrows) were recorded by one electrode but not the other. Calibration: 5 mV and 20 msec.
Given the existence of powerful and remarkably long-lasting lateral inhibition in the granule cell layer, and the shorter latency and greater sensitivity of dentate nongranule cells than granule cells to excitation by perforant path fibers (Buzsaki and Eidelberg, 1982; Scharfman, 1991), we propose that normal excitatory neocortical stimuli intended for a specifically targeted lamella may first excite nonprincipal cells, enabling them to increase feedforward inhibition in preparation for the imminent excitation of the granule cells. In this way, intralamellar and translamellar inhibition could, in advance of granule cell excitation, create an inhibitory field penetrable only by strong excitatory inputs. By activating inhibition in surrounding segments of the granule cell layer just before the targeted granule cells are excited, incoming stimuli from entorhinal projections could focus excitation on the specifically targeted principal cells. Thus, functional separation and lateral inhibition may establish a large number of independent functional subunits, which, when activated in unique combinations, permit a large number of possible functional responses, each corresponding to unique cortical events. Lamellar function may, therefore, permit complex information processing in what appears superficially to be a relatively simple longitudinal structure, as originally envisaged by Andersen and colleagues (1971).
Although the functional implications of longitudinally directed lateral inhibition are of obvious significance to the lamellar hypothesis of hippocampal function (Andersen et al., 1971; Amaral and Witter, 1989; Lothman et al., 1991), the possible functional significance of lateral inhibition in the transverse or radial axes is less clear. Anatomical studies show that entorhinal cortical neurons innervate the entire transverse extent of the granule cell layer (Tamamaki and Nojyo, 1993). Therefore, although bicuculline restricted to one locus of the granule cell layer produced lateral inhibition in all directions, normal neocortical excitation may overcome transverse inhibition and excite an entire transverse lamella in order to convey the appropriate message to the corresponding "slice" of CA3 pyramidal cells. Thus, the structural network organization of the dentate gyrus, that is, rough topographic specificity of entorhinal projections to corresponding septotemporal segments of the granule cell layer, elliptical granule cell dendritic trees, which are 2-4 times broader in the transverse than the longitudinal plane (Desmond and Levy, 1982; Claiborne et al., 1990), innervation of the entire transverse dentate "slice" by perforant path fibers (Tamamaki and Nojyo, 1993), and longitudinal projections of hilar neurons that may evoke lateral inhibition (Amaral and Witter, 1989), may place a longitudinal constraint on "surround" inhibition. That is, longitudinal lateral inhibition and lamellar function may be the primary functional design in the dentate gyrus.

However, the possibility that surround inhibition may define functional separation in several directions must be considered. It is possible that different afferents to the dentate hilar region from the septum (Frotscher and Leranth, 1986; Freund and Antal, 1988), locus coeruleus (Koda et al., 1978), and raphe nuclei (Moore and Halaris, 1975), for example, differentially regulate specific inhibitory neuron and mossy cell subpopulations, and influence different groups of granule cells. Thus, given the heterogeneity of dentate inhibitory neurons and their axonal projections (Halasy and Somogyi, 1993), it is theoretically possible that granule cells are functionally separated in both the transverse and longitudinal directions.

Because the dentate hilar "mossy" cells (Amaral, 1978) are glutamate immunoreactive (Soriano and Frotscher, 1994), and preferentially innervate segments of the granule cell layer that are distant from their somata (Amaral and Witter, 1989), they may be the cell type primarily responsible for establishing longitudinal lateral inhibition in the granule cell layer (Sloviter, 1994). However, it remains to be determined if, and how effectively, mossy cells excite inhibitory neurons and granule cells. Lateral or surround inhibition may also be the result of transverse and longitudinal projections of inhibitory cells (Struble et al., 1978; Buckmaster and Schwartzkroin, 1993). Clearly, although we have demonstrated lateral inhibition in the dentate gyrus, our data do not identify the cells that mediate it.

… hippocampal damage also being present. Within the hippocampus, extensive loss of dentate hilar neurons was not only a consistent finding but was, in some cases, the only pathology in the temporal lobe (Margerison and Corsellis, 1966). Because many patients with hippocampal sclerosis experience status epilepticus, prolonged febrile seizures, or head trauma before epilepsy develops (Bruton, 1988; Cendes et al., 1993; French et al., 1993), and because prolonged seizures or head injury produce similar selective dentate hilar cell loss and granule cell hyperexcitability in experimental animals (Sloviter, 1991b; Lowenstein et al., 1992), hilar neuron loss has been suggested to be the common pathological denominator and primary network defect underlying development of hippocampal-onset seizures in some cases (Sloviter, 1994). If vulnerable hilar neurons mediate lateral inhibition in the normal granule cell layer, their loss would be predicted to decrease translamellar granule cell inhibition. Therefore, our hypothesis predicts that once lateral inhibition is decreased as a result of the loss of the cells that mediate it, normally independent segments of the granule cell layer would coalesce functionally to form an "epileptic aggregate" capable of responding to afferent stimuli with multilamellar discharges (Sloviter, 1994).

The present results are consistent with this hypothesis insofar as they demonstrate lateral inhibition as an operant mechanism in the normal dentate gyrus and show that granule cells in normally independent segments of the granule cell layer have an inherent tendency to discharge in synchrony when disinhibited. This latter observation is surprising because, unlike CA3 pyramidal cells (Miles and Wong, 1986), granule cells lack recurrent excitatory connections. It is, therefore, necessary to explain how synaptically unconnected granule cells could have an underlying propensity for population discharges. Recent studies by Dudek and colleagues suggest that granule cell epileptiform behavior may result from ephaptic interactions, that is, nonsynaptic field effects that depolarize disinhibited granule cells (Snow and Dudek, 1986; Schweitzer et al., 1992). These authors concluded that increased extracellular potassium and decreased calcium are sufficient to induce synchronous granule cell population discharges within a hippocampal slice. The fact that the development of these synchronous granule cell discharges was found to be independent of synaptic activity reinforced the possibly ephaptic nature of the behavior (Schweitzer et al., 1992). If correct, this explanation, taken together with the present results, implies that dentate granule cells, like other cell populations (McBain et al., 1990), are sensitive to excitation-induced changes in the extracellular ionic environment and may be capable of behaving as a "functional syncytium" when GABA-mediated inhibition is decreased.
|
2017-05-01T22:20:57.742Z
|
1995-01-01T00:00:00.000
|
{
"year": 1995,
"sha1": "c363ab0fb9211297c863d04e2bd237563d96c59b",
"oa_license": "CCBY",
"oa_url": "http://www.jneurosci.org/content/jneuro/15/1/811.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "c363ab0fb9211297c863d04e2bd237563d96c59b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
231869623
|
pes2o/s2orc
|
v3-fos-license
|
Effects of cooking conditions on the physicochemical and sensory characteristics of dry- and wet-aged beef
Objective This study aimed to elucidate the effects of cooking conditions on the physicochemical and sensory characteristics of dry- and wet-aged beef strip loins. Methods Dry- and wet-aged beef aged for 28 days were cooked using different cooking methods (grilling or oven roasting) × cooking temperatures (150°C or 230°C), and their pH, 2-thiobarbituric acid reactive substances (TBARS), volatile compounds, and color were measured. Results Cooking conditions did not affect pH; however, grilling resulted in lower TBARS but higher cooking doneness at the dry-aged beef surface compared to oven roasting (p < 0.05). In the descriptive sensory analysis, the roasted flavor of dry-aged beef was significantly stronger when grill-cooked compared to oven-roasted. Dry-aged beef grill-cooked at 150°C presented a higher intensity of cheesy flavor, and that grilled at 230°C showed a greater intensity of roasted flavor, compared to wet-aged beef under the same conditions. Conclusion Grilling may be effective for enhancing the unique flavor of dry-aged beef.
INTRODUCTION
Dry aging is a method that exposes meat to controlled temperature, humidity, and air flow in the absence of packaging, in contrast to wet aging, which stores the meat vacuum-packaged [1]. In recent days, the consumer demand for dry-aged beef has been increasing, mainly due to its unique flavor, which is characterized as intensely roasted, beefy, and nutty [2]. It has been reported that dry-aged beef has higher quantities of flavor precursors, such as free amino acids and nucleotide-related compounds [3,4], and aromatic volatile compounds, such as aldehydes, compared to wet-aged beef [5].
On the other hand, meat flavor can be developed through the cooking process [6], and cooked meat sensory characteristics can vary depending on the cooking method (e.g., grilling, roasting, boiling, etc.) and conditions, including heating temperature and rate [7,8]. The influence of cooking methods on meat flavor has been observed in several studies to be mainly due to differences in the type of heat transfer (categorized as conduction, convection, and radiation) and the efficiency of heat treatment on meat [9-11]. Cooking can also be classified by cooking temperature: low-temperature cooking below 100°C, high-temperature cooking above 100°C, and very-high-temperature cooking above 200°C [7]. Cooking at high temperatures increases the heating rate and the degree of Maillard reaction, which enhances the roasted aroma of meat. On the other hand, extensive cooking can cause high oxidative reactions and the generation of undesirable polycyclic aromatic hydrocarbons [8,12]. However, little is known about the effect of cooking method and temperature on dry-aged beef. King et al [13] applied oven roasting and microwave cooking to dry-aged beef and observed higher hydrocarbons and lower terpenoids in oven-roasted beef than in microwave-cooked beef. Nonetheless, as the authors stated, the lack of sensory evaluation restricts the estimation of the effect of cooking method on dry-aged beef. For this reason, the effective cooking method and temperature for dry-aged beef remain unclear.
Among the cooking methods for steaks in households and restaurants, grilling at temperatures above 200°C and oven roasting in the range of 150°C to 250°C are widely used conduction and convection cooking methods, respectively [8,12,14]. Therefore, the objective of this study was to investigate the effects of different cooking methods (grilling and oven roasting) and temperatures (150°C and 230°C) on the physicochemical and sensory characteristics of dry-aged beef.
Sample preparation
Raw material and aging process: Beef strip loins (longissimus lumborum) from Holstein steers (21 months old, quality grade 3) were obtained at 48 h postmortem. The visible fat and connective tissues were removed from the surface of the beef strip loins, and each muscle was cut into an average weight of 500 g. Then, beef samples were randomly divided into two groups. One group was placed in a dry aging chamber (4°C, 75% relative humidity, and 2.5 m/s air flow velocity) and dry-aged for 28 days. The other group was vacuum-packaged (HFV600L, Hankook Fujee Machinery Co., Ltd., Hwaseong, Korea) in low-density polyethylene/nylon bags (oxygen permeability of 2 mL/m²/d at 0°C, 0.09 mm thickness; Sunkyung Co., Ltd., Seoul, Korea) for wet aging at 4°C, with the same aging duration. After the aging process, the dark and thickened crust of the dry-aged beef was trimmed off. Both dry- and wet-aged beef were stored in vacuum-packaged bags and frozen at -70°C for further analyses.
Cooking process: The beef strip loins were thawed at 4°C for 18 h and sliced into 3.5 cm thick samples (average weight of 100 g). Next, four different cooking conditions (cooking method × cooking temperature) were applied to the dry-aged (n = 3 for each cooking treatment) and wet-aged beef steaks (n = 3 for each cooking treatment): grilling at 150°C or 230°C, and oven roasting at 150°C or 230°C. Each cooking condition was replicated three times. During cooking, the surface temperature of the electric grill (EGGW1700, Kitchenart, Incheon, Korea) was measured using an infrared thermometer (ST101, Sincon, Bucheon, Korea), and the internal temperature of the sample was monitored using a digital thermometer (TM747DU, Tenmars Electronics Co., Ltd., Taipei, Taiwan). Steaks cooked by grilling were turned every two min, and those cooked in an electric oven (MA324DBN, LG Electronics, Seoul, Korea) were turned at an internal temperature of 40°C. All cooking processes continued until the core temperature of the steak reached 72°C. In every cooking condition, dry-aged beef was compared with wet-aged beef given the same cooking treatment, to determine whether the changes in meat sensory and physicochemical properties derived from the aging methods or the cooking conditions.
pH
One gram of meat sample with 9 mL of distilled water was homogenized at 9,600 rpm for 30 s using a homogenizer (T25 basic, IKA Works, Staufen, Germany). Then, the homogenates were centrifuged at 2,265×g for 10 min (Continent 512R, Hanil Co. Ltd., Daejeon, Korea), followed by filtration with filter paper (No. 4, Whatman International Ltd., Kent, UK). The pH values of dry- and wet-aged beef before and after cooking were measured using a pH meter (Seven2Go, Mettler-Toledo International Inc., Schwerzenbach, Switzerland). The pH meter was precalibrated with pH 4.01, pH 7.00, and pH 9.21 standardized buffer solutions at room temperature.
Lipid oxidation
Lipid oxidation was determined by measuring 2-thiobarbituric acid reactive substance (TBARS) values. Each sample (5 g) was homogenized at 9,600 rpm for 30 s using a homogenizer (T25 basic, IKA Works, Germany), with the addition of 15 mL of distilled water and 50 μL of 7.2% butylated hydroxytoluene solution. After centrifugation at 2,265×g for 15 min (Continent 512R, Hanil Co. Ltd., Korea), the supernatants were filtered using filter paper (No. 4, Whatman International Ltd., UK). Then, 2 mL of the filtrate was transferred to a test tube and mixed with 4 mL of 20 mM 2-thiobarbituric acid in 15% trichloroacetic acid. The mixture was heated in a water bath at 90°C for 30 min, cooled, and centrifuged at 2,265×g for 15 min (Continent 512R, Hanil Co. Ltd., Korea). The supernatant absorbances were measured at 532 nm using a spectrophotometer (Xma 3100, Human Co. Ltd., Seoul, Korea). TBARS values were expressed as mg malondialdehyde per kg of meat sample.
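As an illustration of the final calculation step in this assay, the sketch below (Python; not part of the original study) converts an A532 reading to mg malondialdehyde (MDA) per kg of meat via an MDA standard curve. The standard concentrations and absorbances are hypothetical placeholders, as is the assumption that standards were carried through the same assay volume; an actual analysis would use the laboratory's own calibration.

import numpy as np

# Hypothetical MDA standard curve: absorbance at 532 nm for known standards.
std_conc = np.array([0.0, 2.5, 5.0, 10.0, 20.0])    # μM MDA (assumed standards)
std_abs = np.array([0.00, 0.04, 0.08, 0.16, 0.32])  # assumed readings

slope, intercept = np.polyfit(std_abs, std_conc, 1)  # μM per absorbance unit

def tbars_mg_per_kg(a532, sample_g=5.0, extract_ml=15.0, mda_mw=72.06):
    """Convert a sample A532 reading to mg MDA per kg meat.

    Assumes the 5 g sample / 15 mL water workup described above and that
    the standards were carried through the same assay volume.
    """
    mda_umol_per_l = slope * a532 + intercept
    umol_in_extract = mda_umol_per_l * extract_ml / 1000.0
    mg_mda = umol_in_extract * mda_mw / 1000.0
    return mg_mda / (sample_g / 1000.0)

print(f"{tbars_mg_per_kg(0.12):.3f} mg MDA/kg")  # illustrative reading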
Volatile compound analysis
The analysis of volatile compounds in cooked beef was performed by solid-phase microextraction and gas chromatography-mass spectrometry (SPME-GC-MS). Five grams of cooked meat sample were placed into a 20-mL headspace vial and sealed with a PTFE-faced silicone septum. The samples were incubated at 40°C for 5 min, and then a 65-μm-thick polydimethylsiloxane/divinylbenzene fiber (Supelco Inc., Bellefonte, PA, USA) was exposed to the headspace of the vial for 60 min. The volatile compounds were desorbed in the injection port of the GC (Trace 1310, Thermo Fisher Scientific, Waltham, MA, USA) at 270°C in splitless mode. Helium was used as the carrier gas at a flow rate of 2 mL/min, and volatile compounds were separated using a fused silica capillary column (DB-Wax, 60 m × 0.25 mm i.d., 0.50 μm film thickness; Agilent Technologies Inc., Santa Clara, CA, USA). The GC oven was programmed as follows: initial temperature of 40°C, subsequently increased to 180°C at a rate of 5°C/min, then increased to 200°C at a rate of 2°C/min and held for 5 min, and then increased to a final temperature of 240°C at a rate of 10°C/min and held for 10 min. The column was directly coupled to a triple quadrupole mass spectrometer (TSQ 8000, Thermo Fisher Scientific, USA) operating in electron ionization mode at 70 eV and 250°C. Mass spectra were obtained with a scan ranging from 35 to 550 m/z at intervals of 0.2 s. The identification of volatile compounds was performed by comparing their mass spectra with those of the National Institute of Standards and Technology (NIST) mass spectral library.
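As a quick arithmetic check, the oven program described above can be summed segment by segment; the minimal sketch below (Python) gives a total programmed run time of 57 min, assuming the ramps run exactly at their nominal rates.

# Segments: 40°C to 180°C at 5°C/min; to 200°C at 2°C/min, hold 5 min;
# to 240°C at 10°C/min, hold 10 min.
segments = [
    (40, 180, 5.0, 0.0),   # (start °C, end °C, rate °C/min, hold min)
    (180, 200, 2.0, 5.0),
    (200, 240, 10.0, 10.0),
]
total = sum((end - start) / rate + hold for start, end, rate, hold in segments)
print(f"Total oven program time: {total:.0f} min")  # 28 + 15 + 14 = 57 min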
Meat color
The cooked meat was cut horizontally to measure its surface and internal color. Meat color was measured using a colorimeter (CM-5, Konica Minolta Sensing Inc., Osaka, Japan), which was calibrated using a standard plate before measurements. The CIE L* (lightness), a* (redness), and b* (yellowness) values were determined under illuminant D65 with a 10° standard observer and a 30 mm aperture plate. The reflectance ratio at 630:580 nm was calculated to estimate the degree of doneness after the different cooking treatments [15].
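For clarity, the doneness index described above is a simple ratio of two reflectance readings; a minimal sketch follows, with hypothetical illustrative values (the interpretation of the ratio follows the cited method [15]).

def doneness_ratio(r630: float, r580: float) -> float:
    """Reflectance ratio at 630:580 nm, used above as an index of
    cooked-color doneness [15]."""
    return r630 / r580

print(doneness_ratio(28.4, 14.2))  # -> 2.0 (illustrative readings)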
Descriptive sensory analysis
The design of the descriptive sensory analysis for dry- and wet-aged beef was reviewed and approved by the Institutional Review Board (IRB) of Seoul National University (SNU) (IRB No. 1810/003001). Immediately after cooking, the samples were cut to 1 cm in thickness, wrapped in aluminum foil and plastic wrap to preserve the aroma and prevent moisture evaporation, and kept in a drying oven (BF80N, BioFree, Seoul, Korea) at 60°C. The holding time of the cooked samples in the drying oven was less than 20 min. Ten trained panelists (6 males and 4 females aged 26 to 33 years) were recruited from the SNU researcher and faculty populations, and the panelists participated in the descriptive sensory analysis. Before the analysis, panelists were trained over several sessions in the descriptive sensory analysis of dry- and wet-aged beef and practiced rating the score of each sensory attribute. All training sessions and descriptive sensory analyses were conducted at SNU. The sensory properties were evaluated using a nine-point hedonic scale, in which the flavor scores ranged from one to nine (extremely weak to extremely strong), the surface color score ranged from one to nine (extremely bright to extremely dark), the internal color score ranged from one to nine (extremely white to extremely red), the tenderness score ranged from one to nine (extremely tender to extremely tough) after 15 bites, and the juiciness score ranged from one to nine (extremely dry to extremely juicy) after 15 bites. Drinking water was provided to the panelists to cleanse their palates between sample evaluations.
Statistical analysis
All experiments were conducted in triplicate, except the descriptive sensory analysis, for which a randomized block design was applied using the trial and panelist as blocks (n = 10 per trial, 20 over 2 trials). Statistical analysis was performed using the general linear model (SAS 9.4, SAS Institute Inc., Cary, NC, USA), which included the aging method, cooking condition, and their interactions as fixed effects and carcass and carcass side as random effects. For the evaluation of the descriptive sensory analysis data, the trial and panelist were also included as random factors. The results were reported as mean values with the standard error of the mean, and significant differences among the mean values were determined by Tukey's multiple comparison test at a significance level of 0.05. In order to identify differences in the composition of volatile compounds between treatments and classify them, principal component analysis (PCA), partial least squares-discriminant analysis (PLS-DA), and variable importance in projection (VIP) scores for the PLS-DA model were computed from the contents of volatile compounds using MetaboAnalyst 4.0 (www.metaboanalyst.ca) according to Kim et al [1], and the samples were log-transformed and autoscaled before conducting the multivariate analyses. Pearson correlation coefficients and a linear mixed model between sensory properties and the overall acceptability of dry- and wet-aged beef strip loins were analyzed using SAS 9.4 (SAS Institute Inc., USA). In the mixed model, random terms included the trial and panelist. The model is as follows: Overall acceptability = surface color + internal color + roasted flavor + dry-aged flavor + cheesy flavor + fatty flavor + savory flavor + tenderness + juiciness + trial + panelist.
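The linear mixed model stated above can be sketched outside SAS as well; the hedged example below uses Python's statsmodels with simulated data, hypothetical column names, panelist as the grouping factor, and trial as an extra variance component. It illustrates the model structure only and is not the authors' SAS code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format sensory data: 2 trials x 10 panelists x 8 samples.
rng = np.random.default_rng(0)
n = 2 * 10 * 8
df = pd.DataFrame({
    "trial": np.repeat([1, 2], n // 2),
    "panelist": np.tile(np.repeat(np.arange(10), 8), 2),
    **{c: rng.integers(1, 10, n) for c in
       ["surface_color", "internal_color", "roasted", "dry_aged_flavor",
        "cheesy", "fatty", "savory", "tenderness", "juiciness"]},
})
df["overall"] = 3 + 0.3 * df["roasted"] + 0.2 * df["juiciness"] + rng.normal(0, 1, n)

model = smf.mixedlm(
    "overall ~ surface_color + internal_color + roasted + dry_aged_flavor "
    "+ cheesy + fatty + savory + tenderness + juiciness",
    data=df,
    groups=df["panelist"],                 # panelist as a random intercept
    vc_formula={"trial": "0 + C(trial)"},  # trial as a variance component
)
print(model.fit().summary())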
RESULTS

pH
The pH values of dry- and wet-aged beef differed before cooking (5.57 and 5.27, respectively; p<0.05). Similarly, it was reported that dry-aged beef showed significantly higher pH compared to wet-aged beef after 21 or 40 days of aging [3,16]. The pH difference between dry- and wet-aged beef could be due to their different microbiological compositions [2]. Various environmental factors such as temperature, relative humidity, air flow velocity, and the presence of oxygen affect the growth of microorganisms [4]. Higher total aerobic bacteria and mold and yeast counts were found in dry-aged beef, while lactic acid bacteria were more dominant in wet-aged beef after 14 and 21 days of aging [17]. Notably, the proteolytic and lipolytic effect of mold and yeast on dry-aged beef was suggested in previous studies [4,18]. As a result, the formation of ammonia, amines, and basic amino acids by proteolysis might lead to an increase in the pH of dry-aged beef. On the other hand, the accumulation of lactic acid due to the increase of lactic acid bacteria in wet-aged beef could decrease its pH [3]. Dry-aged beef also had a higher pH than wet-aged beef after cooking (p<0.05; Table 1). According to Kerth and Miller [6], an increase in pH can lead to an increase in the water holding capacity, which affects heat transfer efficiency and ultimately the flavor of meat. They stated that the increase in beef surface temperature with low water holding capacity during cooking is disturbed by the evaporation of free water on the surface, which leads to the formation of fewer Maillard reaction products that are associated with meaty and roasted aroma. Furthermore, it was reported that the formation of specific Maillard reaction products such as pyrazines was favored as the pH of meat increased between 4.5 and 6.5 [19]. Madruga and Mottram [20] measured volatile compounds in cooked meat at pH values ranging from 4.0 to 5.6, and observed a decrease in 2-methyl-3-furanyl group compounds and sulfur compounds and an increase of pyrazines as the pH of the meat increased. In the present study, we also detected a significant increase in pyrazine compounds in dry-aged beef compared to wet-aged beef; the results are discussed below. However, the effects of cooking method and cooking temperature on pH were not observed in this study.
Lipid oxidation
During the cooking process, lipid oxidation plays a major role in the generation of desirable and characteristic flavor compounds in meat, although excessive lipid oxidation leads to the deterioration of meat quality, such as undesirable off-flavor and texture changes [10,21].
The TBARS value of dry-aged beef was significantly lower than that of wet-aged beef, regardless of cooking conditions (Table 2). This finding was inconsistent with the results from the study conducted by Ribeiro et al [22], who reported that the TBARS value of beef loin was significantly higher after dry aging for 42 days compared to that after wet aging for the same duration. The increase of lipid oxidation in dry-aged beef may be related to air exposure during the aging process, while lipid oxidation is prevented in vacuum-packed wet-aged beef [22]. In the present study, the lower TBARS value of cooked dry-aged beef might be related to an increase in antioxidant compounds in dry-aged meat, as reported in previous studies [1,18,23]. Kim et al [1] found that after 28 days of aging, the concentrations of anserine, carnosine, and aromatic amino acids, compounds with strong antioxidant activities, were significantly higher in dry-aged beef than in wet-aged beef. Lee et al [4] also reported higher amounts of amino acids, including phenylalanine, tryptophan, and tyrosine, in dry-aged beef compared to wet-aged beef, which could result from concentration effects due to moisture evaporation and microbial proteolysis during the dry aging process. Moreover, Park et al [23] observed that dry-aged beef patties made with 5% crust (the surface of dry-aged beef, which is usually trimmed off) showed lower TBARS values compared to those made without crust, and suggested potential antioxidant activity in the crust. The antioxidant activities of dry- and wet-aged beef and crust from dry-aged beef were compared by Choe et al [18] through radical scavenging activity, ferric ion reducing capacity, and metal chelating activity tests. In that study, the investigators found that dry-aged beef possessed higher antioxidant activity than wet-aged beef, and crust showed the highest antioxidant activity. This antioxidant activity in dry-aged beef, especially in the crust, might be attributed to the increase of small peptides (<3 kDa) through the action of microbial enzymes in the crust [18].
Meanwhile, the TBARS value of oven-roasted beef strip loin was significantly higher than that of grilled beef at both cooking temperatures (p<0.05). Moreover, steaks cooked at lower temperatures showed higher TBARS values than those cooked at higher temperatures, even though there was no significant difference between grilled dry-aged beef cooked at different temperatures. It has been reported that the TBARS value can be affected by the cooking temperature and cooking time [9,21]. Broncano et al [21] found that the TBARS value of Iberian pork roasted at 150°C for 20 min was significantly higher than that of pork grilled at 190°C for 4 min. Domínguez et al [9] also observed that roasting of foal meat at 200°C for 12 min produced more oxidation compared to grilling at 130°C to 150°C for 5 min on each surface. In this study, we observed that grilling required less cooking time compared to oven roasting (13 min 14 s and 21 min 7 s, respectively; p<0.05). The correlation between the TBARS value and cooking time was found to be significant (r² = 0.64; p<0.01). Accordingly, the lower TBARS values in dry-aged beef compared to those in wet-aged beef in the present study might result from the increase of bioactive peptides and antioxidants by the action of mold and yeast, as described above. Furthermore, a prolonged cooking process would accelerate lipid oxidation, which might negatively affect the sensory quality of beef.
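To make the reported association concrete, a minimal sketch of the correlation computation follows (Python/scipy); the cooking-time and TBARS values below are illustrative stand-ins, not the study's data.

import numpy as np
from scipy import stats

# Hypothetical per-steak measurements (minutes, mg MDA/kg).
cook_time = np.array([12.8, 13.5, 13.2, 20.9, 21.4, 21.1])
tbars = np.array([0.21, 0.24, 0.22, 0.35, 0.38, 0.36])

r, p = stats.pearsonr(cook_time, tbars)
print(f"r^2 = {r**2:.2f}, p = {p:.3f}")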
Volatile compound analysis
A total of 60 volatile compounds, including 15 alcohols, 10 aldehydes, 15 aliphatic hydrocarbons, 12 aromatic hydrocarbons, 6 ketones, and 2 unclassified compounds, were identified in the headspace of cooked dry- and wet-aged beef. The PCA showed that the volatile profiles of dry- and wet-aged beef differed, except for those of oven-roasted samples at 150°C (Figure 1a). Similarly, the PLS-DA plot distinguished dry- and wet-aged beef, and we found that dry-aged beef had higher 2-heptanol, isoamyl alcohol, 3-octanone, 2-heptanone, and benzaldehyde concentrations, whereas wet-aged beef had more abundant benzyl alcohol, 1,2-dimethylbenzene, 2,2,6-trimethyloctane, 2,5-octanedione, and 2,3-butanediol (Figure 1b). The aforementioned compounds were regarded as the most characteristic variables for the separation of the two groups, and the alcohol, aldehyde, aliphatic hydrocarbon, and ketone variables represent lipid oxidation-derived products [9]. In particular, aldehydes and ketones contribute highly to cooked meat flavor because they have low odor threshold values [5,10]. The difference in the concentration of lipid oxidation-derived products might result from the different susceptibilities of dry- and wet-aged beef to lipid oxidation (Table 2). Kim et al [3] reported differences in the composition of free fatty acids and free amino acids between dry- and wet-aged beef, and Lee et al [4] observed that dry aging for 28 days was more effective in increasing free amino acids and reducing sugars compared to wet aging for the same duration. It seems that different flavor precursors might influence the volatile formation of cooked dry- and wet-aged beef.
The effect of cooking method on the formation of volatile compounds was detected using the PLS-DA model (Figure 1c). Most volatile compounds with VIP scores higher than one were more abundant in grilled beef than in oven-roasted samples. This observation could be related to the surface temperature of the samples and the efficacy of heat transfer depending on the cooking method. Peñaranda et al [11] reported that the intensity of meat odor was higher in grill-cooked pork compared to oven-roasted pork, possibly because of the higher surface temperature. Silva et al [24] found significantly higher amounts of Maillard reaction products in grilled and fried jerky chicken than in oven-roasted and sous vide-cooked chicken, and stated that conduction cooking was more effective in heat transfer than convection cooking.
Finally, noticeable changes were found in five pyrazines (2-ethyl-3,5-dimethylpyrazine; 2,3-, 2,5-, and 2,6-dimethylpyrazine; and methylpyrazine) as cooking temperature increased (Figure 1d). This observation was in accordance with the study conducted by Wall et al [25], where the production of pyrazines in beef steak increased with increasing grill surface temperature from 177°C to 232°C. Pyrazines are mainly derived from the Maillard reaction, which requires high temperatures above 110°C in meat [26], and the formation of these compounds increases at elevated surface temperatures [24]. Yoo et al [27] reported that searing beef steaks at 250°C increased meaty and roasted aromas compared to oven cooking at 180°C, due to the increased occurrence of the Maillard reaction. Furthermore, 2,3- and 2,5-dimethylpyrazine and methylpyrazine were present in significantly higher quantities in dry-aged beef when cooked by grilling compared to wet-aged beef (data not shown). Ha et al [5] also observed higher abundances of pyrazine, methylpyrazine, 2,5-dimethylpyrazine, 2-ethylpyrazine, and 2,5-dimethyl-3-ethylpyrazine in dry-aged beef compared to wet-aged beef. Pyrazines have meaty, nutty, and roasted aroma notes. The higher concentration of pyrazine compounds in dry-aged beef may contribute to the development of the characteristic dry-aged flavor [10].
Meat color
The color of cooked meat is attributed to the heat-induced denaturation of myoglobin, which results in a brown appearance [28]. Meat color can be influenced by various factors, such as pH, cooking conditions, the chemical state of myoglobin, and other variables [15]. In order to analyze the effect of cooking conditions on cooked meat color in depth, both the surface and internal meat colors were measured independently. As a result, the beef surface color was generally affected by cooking conditions rather than by the aging method (Table 3). Noticeably, we found that L*, a*, and b* values were significantly lower at the surface of grill-cooked steaks at 230°C compared to other treatment combinations, regardless of the aging method. In both dry- and wet-aged beef, oven roasting led to a brighter surface color compared to grill cooking at the same cooking temperature (p<0.05). Moreover, the b* value was significantly higher at the surface of oven-roasted beef compared to that of grilled beef when cooked at 230°C. In the case of cooking temperature, lower-temperature cooking generally led to higher L*, a*, and b* values of the beef surface compared to higher-temperature cooking. Lower L* values due to grilling or higher-temperature cooking might be related to moisture loss and surface drying caused by a higher meat surface temperature [22]. The decrease in redness observed as cooking temperature increased from 150°C to 230°C indicates that myoglobin denaturation occurred to a higher degree at higher cooking temperatures [28]. Regarding yellowness, Mitacek et al [29] stated that an increase in metmyoglobin might be related to a decrease in yellowness. Consequently, grill-cooked dry- and wet-aged beef showed a higher degree of doneness than oven-cooked dry-aged beef at 230°C and oven-cooked wet-aged beef at 150°C, respectively.

[Figure 1 caption, recovered in part: ... partial least squares-discriminant analysis (PLS-DA) and its variable importance in projection (VIP) scores from dry- and wet-aged beef (b), grilled and oven-roasted beef (c), and beef cooked at 150°C and 230°C (d). The colored box next to the PLS-DA VIP scores represents the relative concentration of each volatile compound (red, high; blue, low). Dry, dry-aged beef; Wet, wet-aged beef; Grill, grill-cooked; Oven, oven-cooked; 150, cooked at 150°C; 230, cooked at 230°C.]

On the other hand, all internal beef steak color parameters showed significant differences depending on the aging method. In general, L*, a*, and b* values were higher and the degree of doneness was lower in wet-aged beef than in dry-aged beef. It has been reported that the surface of dry-aged beef had lower L* and a* values than that of wet-aged beef [16,22]; however, the internal color of cooked dry- and wet-aged beef has rarely been compared. The differences in beef color stability might be attributed to lipid oxidation, reducing ability, oxygen consumption rate, or the composition of the three forms of myoglobin [22,28]. In the case of cooking conditions, oven roasting at 230°C showed significantly higher internal meat color L* and b* values compared to grilling at the same cooking temperature. While the internal color of grill-cooked dry-aged beef at 230°C showed overall low a* and b* values and a high degree of doneness, grilled dry-aged beef at 150°C had the highest a* and b* values and the lowest degree of doneness among the four cooking conditions. Yancey et al [14] observed the effect of cooking method on the internal cooked color of meat and reported that the conduction cooking method denatured myoglobin to a greater extent, resulting in a less red appearance compared to oven cooking. In accordance with the surface color results, the high abundance of pyrazine compounds in grilled dry-aged beef at 230°C could be explained by the high degree of doneness estimated by the surface and internal color measurements.
Meat color can provide information about the eating quality of meat to consumers [8]. For example, a browned surface color can be utilized as an indicator of the Maillard reaction and caramelization, and the internal cooked color can indicate the doneness of meat [27,28]. The degree of doneness was further evaluated through the descriptive sensory analysis, as discussed below.
Descriptive sensory analysis
Grill-cooked steak had a darker surface color compared to oven-roasted steak, except for wet-aged beef cooked at 150°C (Table 4). However, there was no difference in internal redness between grilled and oven-roasted beef steaks cooked at the same temperature. Considering the results from the instrumental color measurements (Table 3) and the descriptive sensory analysis, the aged beef color was more likely to be affected by cooking conditions than by the aging method, and grilling at higher temperatures was more desirable for the cooked meat color.
Grilling resulted in a significantly higher score for roasted flavor in dry- and wet-aged beef at both cooking temperatures compared to oven roasting. This result supports the results of the volatile compound analysis and meat color measurements, where grilling was more effective for producing desirable flavor compounds and increasing cooking doneness (efficiency of heat transfer).
In the case of tenderness, no effect of cooking condition was found within dry-aged beef, while in wet-aged beef grilling at 150°C led to significantly lower shear force. Wall et al [25] observed no difference in the shear force of grill-cooked beef by grill surface temperature (177°C, 205°C, and 232°C), whereas Yancey et al [14] found that conduction cooking resulted in a higher shear force than convection cooking. The juiciness score was higher in grill-cooked dry-aged beef at the cooking temperature of 230°C compared to oven-roasted dry-aged beef. In general, juiciness is influenced by the amount of moisture in meat after cooking [8,11]. In the results of the present study, the surface of grilled dry-aged beef showed a higher degree of doneness compared to that of oven-roasted beef cooked at 230°C. During grilling or roasting, a crust can form at the dried surface of meat [27], which might reduce moisture permeability [30]. Considering the higher heat transfer efficiency of grilling compared to oven roasting in this study, the crust of beef may potentially help increase juiciness.

Based on the Pearson correlation analysis, the meat color, roasted and savory flavor, and juiciness were strongly related to the overall acceptability of both aged beefs (Table 5). The relationships of internal meat color and roasted flavor with overall acceptability were also significant when using the linear mixed model. As the brown surface color indicates the degree of caramelization and the point of consumption [27,28], the positive correlation between the degree of doneness and overall acceptability is natural. The sensory scores of meat color and roasted flavor suggest that grilling instead of oven roasting would be effective for dry-aged beef (Table 4). Moreover, dry-aged beef that was grill-cooked at 150°C had significantly higher scores for surface color and dry-aged and cheesy flavor compared to wet-aged beef. We found that cheesy and savory flavors were positively correlated (r² = 0.53; p<0.05). This indicates that the unique flavor of dry-aged beef can be perceived strongly even at a lower cooking temperature and might be more attractive for consumers who prefer dry-aged and cheesy flavors than that of wet-aged beef after grilling. The flavor difference between dry- and wet-aged beef was not observed (except for the dry-aged flavor) only when the dry-aged beef was grill-cooked at 150°C while the wet-aged beef was grilled at a much higher cooking temperature (p>0.05). This range of cooking temperature for dry-aged beef (150°C) has benefits for reducing overcooking and the increase of heterocyclic aromatic amines and polycyclic aromatic hydrocarbons compared to the higher cooking temperature for wet-aged beef (230°C) [12]. Meanwhile, an obvious contrast between the roasted flavor of dry- and wet-aged beef was observed when both samples were grill-cooked at 230°C. As discussed above, higher pyrazine compound concentrations in dry-aged beef compared to wet-aged beef could intensify the roasted flavor of dry-aged beef. Although no difference in cheesy flavor was found between them (p>0.05), the characteristic flavor of dry-aged beef compared to wet-aged beef could be derived from the significantly higher intensities of roasted and dry-aged flavor. As roasted flavor was positively correlated with the overall acceptability of beef (Table 5), grilling at a higher cooking temperature could also be effective for the purpose of dry aging to develop a desirable beef flavor. From these results, grilling of dry-aged beef at both the lower temperature (150°C) and the higher temperature (230°C) had its own advantages. The former led to higher intensities of surface color and cheesy flavor of dry-aged beef compared to those of wet-aged beef cooked at the same temperature. The latter presented a higher roasted flavor than wet-aged beef without any perceivable sensory defects. Consequently, grill cooking at both 150°C and 230°C might be a promising cooking condition for dry-aged beef to obtain the characteristic flavors and acceptance by a wide variety of consumers.
CONCLUSION
In conclusion, the advantages of dry aging can be enhanced by grill cooking instead of oven roasting, as grilling improves desirable flavor and color. In addition, grill-cooked dry-aged beef might be appealing to consumers due to its intense roasted flavor compared to grill-cooked wet-aged beef under the same cooking condition, and this effect is greater when the cooking temperature is higher. Within the treatments in this study, grill cooking of dry-aged beef at the higher temperature (230°C) would be recommended.
CONFLICT OF INTEREST
We certify that there is no conflict of interest with any financial organization regarding the material discussed in the manuscript.
|
2021-02-11T06:19:41.401Z
|
2021-01-20T00:00:00.000
|
{
"year": 2021,
"sha1": "a28e0583e6e943f31e8babedaafb5dba8a80d684",
"oa_license": "CCBY",
"oa_url": "https://www.animbiosci.org/upload/pdf/ab-20-0852.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "503d754cbcea57856570b1e104778b3621d981da",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
242004390
|
pes2o/s2orc
|
v3-fos-license
|
LAISSEZ-FAIRE LEADERSHIP POSITIVELY IMPACTS ORGANISATIONAL COMMITMENT IN HEALTHCARE CENTRES IN QATAR
Leadership engenders an essential element for organisations to develop business strategies and achieve their goals. This research aims to examine the impact of the laissez-faire leadership style on organisational commitment (OCOM) in healthcare centres in Qatar. The researcher adopted a quantitative approach, using a self-administered questionnaire to collect the primary data. The sample consisted of 218 leaders and supervisors from five healthcare centres in Qatar, selected employing non-random sampling. The study indicated a significant positive relationship existed between laissez-faire leadership and OCOM. Moreover, leadership behavior significantly impacted OCOM behaviors, but to different degrees: continuance commitment and normative commitment to a higher extent, and affective commitment less so. The results also showed that the laissez-faire leadership style was practiced to a high degree in the sample.
Methods:-
The researcher used a quantitative methodology to collect data in a cross-sectional survey among medical staff, including leaders and supervisors of five healthcare centres in Qatar. Purposive sampling was used: the researcher sent the questionnaire to employees with job positions relevant to the research. Prospective participants received a hard-copy questionnaire to participate in the study; they were informed of the investigation's objectives and assured that participation was voluntary and responses would be kept confidential. The population equaled approximately 2,590 medical employees. The sample size was calculated based on Krejcie and Morgan's table. According to Sekaran & Bougie (2016), Krejcie & Morgan (1970) simplified the sample size decision by providing a table relating population size to sample size. Therefore, the sample size entailed 335 individuals. Table 1 illustrates the sample distribution across medical centres, highlighting the data collection. Excluding disinterested respondents (n = 3), 335 respondents (99% response rate) remained. Arabic versions of the English scales were created using a translation-back-translation procedure (Schaffer & Riordan, 2003). Responses, based on a five-point Likert scale, ranged from 1 (strongly disagree) to 5 (strongly agree), unless otherwise specified.
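The tabled value can be reproduced with the formula behind Krejcie & Morgan's (1970) table (chi-square = 3.841 at df = 1, P = 0.5, d = 0.05); the minimal sketch below returns 335 for a population of 2,590, matching the sample size stated above.

import math

def krejcie_morgan(N, chi2=3.841, P=0.5, d=0.05):
    """Krejcie & Morgan (1970) sample size for a finite population N."""
    return math.ceil(chi2 * N * P * (1 - P) / (d**2 * (N - 1) + chi2 * P * (1 - P)))

print(krejcie_morgan(2590))  # -> 335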
In the final sample used for the analyses, most participants were male (55.22%), between 31 and 40 years of age (39.40%), worked as nurses (44.78%), and had more than five to ten years of work experience (29.85%). The health centres were distributed across three locations (north, centre, and west), with participant representation of 20%, 60%, and 20%, respectively (see Table 2).
Principal components analysis (PCA) helped to identify and compute composite scores for the items underlying the short version of the laissez-faire scale from the Multifactor Leadership Questionnaire 5X. The initial eigenvalue indicated one factor explained 61.05% of the variance. The second eigenvalue was just over one and explained 14.61% of the variance. The second factor of the solution was examined using varimax and oblimin rotations of the factor loading matrix. The one-factor solution, which explained 61.05% of the variance, was preferred because of previous theoretical support, the levelling off of eigenvalues on the scree plot after one factor, and the insufficient number of primary loadings and difficulty of interpreting the second and subsequent factors. Little difference existed between the varimax and oblimin solutions. Thus, both solutions were examined in subsequent analyses before deciding to use a varimax rotation for the final solution.
Three items were eliminated because they did not contribute to a simple factor structure and failed to meet a minimum criterion of having a primary factor loading of 0.4 or above and no cross-loading of 0.3 or above. For the final stage, a PCA of the remaining four items, using varimax and oblimin rotations, was conducted, with one factor explaining 61.05% of the variance. An oblimin rotation provided the best-defined factor structure. All items in this analysis had primary loadings over 0.5. Table 4 presents the factor loading matrix for this final solution. The factor label Hinkin & Schriesheim (2008a) proposed suited the extracted factor and was retained. Internal consistency for the scale was examined using Cronbach's alpha. The alpha was high: 0.85 for LFL (4 items). No substantial increase in alpha could have been achieved by eliminating more items.
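The item-retention rule stated above (primary loading of 0.4 or above and no cross-loading of 0.3 or above) can be expressed compactly; the sketch below applies it to a hypothetical loadings matrix (the numbers are illustrative, not the study's Table 4).

import numpy as np

def retained_items(loadings, primary_min=0.4, cross_max=0.3):
    """Boolean mask of items meeting the retention rule used above:
    primary loading >= 0.4 and no cross-loading >= 0.3."""
    L = np.abs(np.asarray(loadings, dtype=float))
    primary = L.max(axis=1)
    cross = np.sort(L, axis=1)[:, -2] if L.shape[1] > 1 else np.zeros(L.shape[0])
    return (primary >= primary_min) & (cross < cross_max)

# Hypothetical 7-item x 2-factor loading matrix for the LFL scale;
# the last three items fail the rule, leaving four retained items.
demo = [[0.81, 0.10], [0.77, 0.22], [0.74, 0.05], [0.69, 0.18],
        [0.35, 0.45], [0.42, 0.38], [0.33, 0.38]]
print(retained_items(demo))  # [ True  True  True  True False False False]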
A composite score was created for the factor, based on the mean of the items having primary loadings on the factor. Higher scores indicated greater use of laissez-faire leadership, with a negatively skewed distribution depicting the style the participants reported using the most. Table 3 displays the descriptive statistics. The skewness and kurtosis were well within a tolerable range for assuming a normal distribution.
Overall, these analyses revealed that one factor, highly internally consistent, underlay the responses to the short version of the LFL items. Three of the seven items were eliminated. However, the original factor structure Hinkin & Schriesheim (2008a) proposed was retained. An approximately normal distribution was evident for the composite score data in the current study; therefore, the data were well suited for parametric statistical analyses. The results (Table 4) further confirmed that each item shared some common variance with other items. Given these overall indicators, factor analysis was deemed suitable with 14 items.
PCA was employed to identify and compute composite scores for the factors underlying the OCOM. Initial eigenvalues revealed the first three factors explained 25.05%, 21.27%, and 15.75% of the variance, respectively. The fourth, fifth, and sixth factors had eigenvalues just over one and together explained 16.12% of the variance. Solutions for four, five, and six factors were each examined using varimax rotations of the factor loading matrix. The three-factor solution, which explained 62.08% of the variance, was preferred because of its previous theoretical support, the levelling off of eigenvalues on the scree plot after three factors, and the insufficient number of primary loadings and difficulty of interpreting the fourth and subsequent factors. Little difference emerged between the rotated solutions. Thus, both were examined in subsequent analyses before deciding to use a varimax rotation for the final solution.
Four items were eliminated because they did not contribute to a simple factor structure and failed to meet a minimum criterion of having a primary factor loading of 0.4 or above and no cross-loading of 0.3 or above. For the final stage, a PCA of the remaining 14 items, using varimax rotation, was conducted, with three factors explaining 62.08% of the variance. A varimax rotation provided the best-defined factor structure. All items in this analysis had primary loadings over 0.5.
Internal consistency for each of the scales was examined using Cronbach's alpha. The alphas were acceptable: 0.779 for AF (6 items), 0.826 for NC (4 items), and 0.734 for CC (4 items). Eliminating more items did not yield any substantial increases in alpha for any of the scales. Overall, these analyses indicated three distinct, internally consistent factors (AF, NC, CC) underlay manager responses to the OCOM.
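For reference, Cronbach's alpha as used above is computed from the item variances and the variance of the total score; a minimal sketch follows, applied to simulated 5-point Likert responses (illustrative data only, not the study's responses).

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a respondents x items array."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

# Illustrative correlated 5-point responses for a 4-item subscale.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(218, 1))
X = np.clip(base + rng.integers(-1, 2, size=(218, 4)), 1, 5)
print(f"alpha = {cronbach_alpha(X):.2f}")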
Composite scores were created for each of the three factors, based on the mean of the items with primary loadings on each factor. Higher scores indicated greater organizational commitment. AF was the most reported organizational commitment dimension, with a negatively skewed distribution, while NC and CC were reported considerably less and had positively skewed distributions. Descriptive statistics are presented in Table 6. The skewness and kurtosis were well within a tolerable range for assuming a normal distribution. Although a varimax rotation was used, only small correlations between the composite scores existed: 0.167 between affective and normative commitment; 0.132 between AF and CC; and 0.141 between NC and CC. According to Avolio & Bass (2004), laissez-faire leadership represents the absence of leadership, and employees working under this leadership seek assistance and supervision from alternative sources since they are left to their own devices to execute their jobs (Dubinsky et al., 1995).
Laissez-faire leadership presents a quandary that must be approached with extreme caution. On the one hand, it permits employees freedom concerning the completion of their work, encourages personal growth and innovation, and allows for faster decision making. On the other hand, this leadership style is not appropriate for situations where employees lack knowledge or are not good at managing projects, setting deadlines, and solving problems. The laissez-faire leader deserts responsibility; sometimes they answer questions but avoid feedback and make little effort to satisfy employee needs (Yukl & Gardner, 2020). Well-known business leaders have adopted laissez-faire leadership. Steve Jobs, for example, gave instructions to his groups about what he would like to see, then left them to figure out how to fulfill his wishes (Cherry, 2020).
According to Nijhof et al. (1998), company success does not depend only on the human factor but also on how the institution motivates its commitment to the enterprise. Employees with OCOM are more willing to accept change and less likely to engage in withdrawal (Iverson & Buttigieg, 1999). According to Kouzes and Posner (1997), since no unified leadership approach exists, leaders must select the way of directing based on various situations and circumstances. They need to motivate employees to participate in making decisions and solving problems to increase team and entire institutional efficiency (Lorber et al., 2018).
Yousef (2017) defined organisational commitment as the individual's psychological attachment to an organisation. It portrays a state of being and remaining a member of a company. Moreover, it involves feeling like a member of a family (Ibrahim, 2015). Although no universally accepted definition of OCOM exists, a common theme has emerged as a binding of the individual to an enterprise (Samad, 2005). Organisational commitment entails three forms: affective commitment, the employee's emotional attachment to and involvement in the organisation; continuance commitment (CC), such that the employee remains employed by a business after weighing the costs of leaving the organization; and normative commitment (NC), where employees feel obligated to stay with the firm (Allen & Meyer, 1990; Meyer et al., 1993).
Mavens have purported that laissez-faire leadership harms employee performance and organisational commitment (Amgheib, 2016). As defined in this approach, leaders normally do not interfere in the decision-making process. A supervisor allowing employees to make work choices makes them feel free to direct their work, and they feel responsible for their choices. Hence, the researcher formulated these hypotheses:
H1. Laissez-faire leadership positively affects affective commitment.
H2. Laissez-faire leadership positively affects normative commitment.
H3. Laissez-faire leadership positively affects continuance commitment.
Findings and analysis:-
In this section, correlation and simple linear regression analyses were performed to answer the research questions and test the hypotheses.
Simple linear regression:
After conducting the factor analysis and determining the components, simple regression analysis was performed to uncover the effect of laissez-faire leadership on these elements and examine the first, second and third hypotheses.
Simple regression assumptions:
To examine the hypotheses, the researcher applied simple regression. However, prior conditions and requirements must be met to ensure the integrity and correctness of the test:
Assumption 1:
A linear relationship exists between the independent variables and the dependent variables. Scatterplots showed this assumption was met. See Appendix 1.
Assumption 2:
The values of the residuals remained independent. The Durbin-Watson statistic tested whether the residuals from the linear (or multiple) regression were independent. Test statistic values in the range of 1.5 to 2.5 are relatively normal and indicate independent residuals; values outside this range could be a cause for concern. The Durbin-Watson results are included in the simple regression tables.
Assumption 3:
The variance of the residuals remained constant. The plot of standardized residuals versus standardized predicted values indicated no apparent signs of funneling, suggesting homoscedasticity (see Appendix 1).
Assumption 4:
The values of the residuals were normally distributed. The histogram and P-P plot for the model supported that the assumption was met (see Appendix 1). After checking the assumptions of linear regression, all assumptions were satisfied.
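For readers who want to reproduce these diagnostics outside SPSS, the short sketch below runs the same simple regression and Durbin-Watson check in Python on synthetic data; the variable names and generated values are placeholders, not the study's data.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Synthetic placeholder data standing in for the composite scores.
rng = np.random.default_rng(0)
laissez_faire = rng.normal(3.0, 0.8, 200)                    # predictor composite
affective = 0.4 * laissez_faire + rng.normal(0.0, 0.5, 200)  # outcome composite

model = sm.OLS(affective, sm.add_constant(laissez_faire)).fit()

# Assumption 2: Durbin-Watson statistic, tolerable roughly between 1.5 and 2.5.
print(f"Durbin-Watson: {durbin_watson(model.resid):.2f}")

# Assumptions 1, 3 and 4 are judged from plots: a scatterplot of the two
# variables, standardized residuals vs. fitted values, and a residual histogram.
print(model.summary())  # slope, t-test and R^2 used to judge hypotheses like H1
```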
Laissez-faire leadership and organisational commitment:
The researcher conducted simple linear regression using SPSS V23 to test the hypotheses relating laissez-faire leadership to organisational commitment. Since organisational commitment was measured on three dimensions, the relationship between organisational commitment and laissez-faire leadership was divided into three sections.
Conclusion:
Laissez-faire leadership augmented organizational commitment as measured in terms of affective, normative, and continuance commitment.
|
2020-10-28T19:19:46.585Z
|
2020-09-30T00:00:00.000
|
{
"year": 2020,
"sha1": "35e96de740520d6b9a9139232657fff63e5916f2",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21474/ijar01/11750",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "90ee9d70bd8bc0e161abefaa0252e4a1a84be9a9",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
}
|
245387782
|
pes2o/s2orc
|
v3-fos-license
|
Approaches to Decrease Hyperglycemia by Targeting Impaired Hepatic Glucose Homeostasis Using Medicinal Plants
Liver plays a pivotal role in maintaining blood glucose levels through complex processes which involve the disposal, storage, and endogenous production of this carbohydrate. Insulin is the hormone responsible for regulating hepatic glucose production and glucose storage as glycogen; thus, abnormalities in its function lead to hyperglycemia in obese or diabetic patients because of higher production rates and a lower capacity to store glucose. In this context, two different but complementary therapeutic approaches can be highlighted to avoid the hyperglycemia generated by hepatic insulin resistance: 1) enhancing insulin function by inhibiting the protein tyrosine phosphatase 1B, one of the main enzymes that disrupt the insulin signal, and 2) direct regulation of key enzymes involved in hepatic glucose production and glycogen synthesis/breakdown. It is recognized that medicinal plants are a valuable source of molecules with special properties and a wide range of scaffolds that can improve hepatic glucose metabolism. Some molecules, especially phenolic compounds and terpenoids, exhibit a powerful inhibitory capacity on protein tyrosine phosphatase 1B and decrease the expression or activity of the key enzymes involved in the gluconeogenic pathway, such as phosphoenolpyruvate carboxykinase or glucose 6-phosphatase. This review sheds light on the progress made in the past 7 years on medicinal plants capable of improving hepatic glucose homeostasis through the two proposed approaches. We suggest that Coreopsis tinctoria, Lithocarpus polystachyus, and Panax ginseng can be good candidates for developing herbal medicines or phytomedicines that target inhibition of hepatic glucose output as they can modulate the activity of PTP-1B, the expression of gluconeogenic enzymes, and the glycogen content.
INTRODUCTION
Diabetes mellitus (DM) is a chronic metabolic disease characterized by high blood sugar levels (hyperglycemia), caused by insulin malfunctioning, deficient insulin secretion, or both. Type 2 diabetes (T2D) is the most important type of DM due to its high worldwide prevalence (American Diabetes Association, 2021). It is characterized by insulin resistance, which is defined as a poor response of insulin-sensitive tissues to normal insulin concentrations (Mlinar et al., 2007). The main cause of insulin resistance has been associated with an obesogenic environment in which large amounts of free fatty acids and adipokines are responsible for impairing insulin signaling by increasing serine phosphorylation, which inhibits tyrosine phosphorylation of the insulin receptor (IR) and insulin receptor substrates (IRSs) (DeFronzo et al., 2015). However, it has also been reported that protein tyrosine phosphatases (PTPs) could have a more important role, since they are upregulated in insulin-resistant states. Insulin action is negatively regulated by PTPs, particularly PTP-1B, because they promote the dephosphorylation of tyrosine residues of IR and IRSs (Saltiel and Kahn, 2001). When insulin signaling is impaired in the liver by either insulin resistance or low insulin levels, glucose storage and production are dysregulated, increasing hepatic glucose output rates and yielding hyperglycemia in diabetic patients.
Liver represents a crucial therapeutic target for treating hyperglycemia in T2D because hepatic glucose output is the pathophysiological abnormality that contributes the most to the hyperglycemic state in the fasting and postprandial states as a consequence of hepatic insulin resistance (Sharabi et al., 2015). During the overnight fast (postabsorptive state), the liver of a normal person produces glucose at a rate of approximately 1.8-2 mg/kg·min. However, this rate increases by around 0.5 mg/kg·min in a patient with T2D, promoting a significant rise in the basal state of glucose production (Cersosimo et al., 2018). After food ingestion and the subsequent increase in insulin levels, the suppression of glucose production is slower in a diabetic patient, promoting an evident postprandial hyperglycemia due to the excess of glucose produced in addition to that from the exogenous source (Rizza, 2010).
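To put these rates in perspective, the short calculation below converts them into daily amounts for an assumed 70 kg adult; the body weight is an illustrative assumption, not a value from the cited studies.

```python
# Daily basal hepatic glucose output implied by the rates quoted above,
# for an assumed 70 kg adult (illustrative arithmetic only).
weight_kg = 70
minutes_per_day = 60 * 24

for label, rate_mg_kg_min in [("healthy", 2.0), ("T2D", 2.5)]:
    grams_per_day = rate_mg_kg_min * weight_kg * minutes_per_day / 1000
    print(f"{label}: ~{grams_per_day:.0f} g of glucose per day")
```

On this rough estimate, the 0.5 mg/kg·min basal rise corresponds to roughly 50 extra grams of glucose released per day (about 200 vs. 250 g/day).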
Medicinal plants and natural products have shown numerous benefits on processes involved in glucose and lipid metabolism, helping to correct the homeostatic imbalances that promote metabolic diseases such as T2D (Li J. et al., 2018;Xu L. et al., 2018;Saadeldeen et al., 2020). Unlike the classic "on-target" paradigm in pharmacology, namely a drug with a specific target, the polypharmacology approach, or the binding of a drug to more than one target, could be more effective against a disease as complex as T2D due to its multiple pathophysiological abnormalities (Reddy and Zhang, 2013). In this context, plant extracts and phytochemicals isolated from medicinal plants exhibit multiple mechanisms of action on assorted metabolic targets involved in glucose homeostasis. Therefore, efforts have been made in recent years to describe all the beneficial effects of these extracts and molecules on metabolism.
The current review summarizes the medicinal plants reported from 2015 onward that can potentially decrease hyperglycemia resulting from an imbalance in hepatic glucose metabolism by two different approaches: improving hepatic insulin resistance by inhibiting PTP-1B, and decreasing hepatic glucose output by inhibiting rate-limiting enzymes involved in the storage and production of glucose.
METHODOLOGY
Two separate searches were performed based on the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) (Page et al., 2021) in the following databases: Scopus, Clarivate, and PubMed (Figure 1). The first involved studies related to extracts or phytochemicals tested against the activity or expression of the PTP-1B enzyme, while in the second, studies with extracts or phytochemicals with an effect on the glucose-producing pathways were sought. Only records related to the study of medicinal plants and their isolated compounds were considered.
THERAPEUTIC APPROACHES TO REDUCE HYPERGLYCEMIA RESULTING FROM IMPAIRED HEPATIC GLUCOSE HOMEOSTASIS
Each insulin-sensitive tissue presents abnormal characteristics that contribute to hyperglycemia in an insulin-resistant state. The underlying mechanisms that give rise to insulin resistance converge on deficient insulin signalling that limits the activation of factors involved in energy metabolism. In obesity and T2D, insulin resistance has been linked mainly to defects in the signalling pathway of phosphatidylinositol 3-kinase and protein kinase B (PI3K/Akt), particularly to the Akt2 isoform (Cusi et al., 2000;Krook et al., 2000).
In normal conditions, the insulin secreted by pancreatic β cells binds to its receptor in the target cell, activating the tyrosine kinase activity, which promotes receptor autophosphorylation and the subsequent phosphorylation of IRSs, mainly IRS-1 and IRS-2, at tyrosine residues. Afterwards, the enzyme PI3K is recruited and activated by IRS to convert phosphatidylinositol 4,5-bisphosphate (PIP2) from the plasma membrane to phosphatidylinositol 3,4,5-triphosphate (PIP3), which facilitates the phosphorylation and activation of Akt at two important sites: by phosphoinositide-dependent kinase 1 (PDK1) at residue Thr308 of the catalytic domain, and by mammalian target of rapamycin complex 2 (mTORC2) at residue Ser473 of the regulatory domain (Schultze et al., 2012). Specifically in liver, the activated Akt enzyme is responsible for phosphorylating different factors that are involved in the regulation of processes such as glycogen synthesis, gluconeogenesis, and glycogenolysis, which are activated or inhibited under different nutritional circumstances (Dimitriadis et al., 2021).
Due to hepatic insulin resistance, this hormone loses its ability to regulate glucose metabolism in the liver, resulting in enhanced glucose output that contributes greatly to fasting and postprandial hyperglycemia; namely, glycogen synthesis is reduced and glucose production is increased (Figure 2). Therefore, we propose two approaches by which medicinal plants could ameliorate hyperglycemia by enhancing hepatic glucose metabolism: improving the function of insulin in the liver by inhibiting the enzyme PTP-1B, and modulating the hepatic production/storage of glucose by regulating the enzymes involved in gluconeogenesis, glycogenolysis, and glycogenesis.
Inhibition of Protein Tyrosine Phosphatase 1B
The modification of proteins through phosphorylation and dephosphorylation of tyrosine residues represents one of the main mechanisms of cell signaling regulation (Alonso et al., 2016), which is carried out by two superfamilies of enzymes: protein tyrosine kinases (PTKs) and PTPs. In this regard, the classical PTP subfamily possesses a domain of 240-250 amino acids characterized by a conserved site that exhibits a cysteine-based catalytic mechanism (Denu and Dixon, 1998). Specifically, the enzyme PTP-1B is a classic intracellular PTP widely distributed in mammalian tissues that is anchored on the cytoplasmic side of the endoplasmic reticulum membrane. Despite its localization, the PTP-1B enzyme can access its substrates located on the surface of the plasma membrane during endocytosis and biosynthesis, and through the movement of the endoplasmic reticulum towards the plasma membrane in specific regions (Bakke and Haj, 2015).
Since its first isolation from the human placenta in 1988 by Tonks et al. (1988), PTP-1B has become an attractive research object due to its direct link with the etiopathogenesis of insulin resistance. In addition to the processes promoted by the obesogenic inflammatory environment, such as the serine/threonine phosphorylation of IR and IRS and their proteasomal degradation (Mlinar et al., 2007;Ahmed et al., 2021), the dephosphorylation of these components by PTP-1B has also been implicated in the termination of the insulin signal (Kenner et al., 1996;Chen et al., 1997).
Experimental data obtained from various studies have shown that the PTP-1B enzyme is one of the main negative regulators of the insulin signaling pathway. For instance, studies performed in PTP-1B knock-out mice have shown that the absence of this enzyme produces healthy organisms that exhibit enhanced insulin sensitivity, protection against the weight gain generated by a high-fat diet, and increased hepatic phosphorylation of IR and IRS after an intraperitoneal insulin injection (Elchebly et al., 1999;Klaman et al., 2000). On the other hand, increased PTP-1B activity has been reported in hepatic cytosolic fractions isolated from streptozotocin (STZ)-hyperglycemic rats (Meyerovitch et al., 1989), while augmented hepatic microsomal enzyme activity, protein content, and mRNA levels have only been observed after 2 weeks of insulin treatment in these insulinopenic organisms, suggesting that elevated insulin levels are necessary to modify PTP-1B content and activity; namely, hyperinsulinemia caused by insulin resistance may lead to altered PTP-1B expression and activity. Additionally, it has also been shown that insulin raises hepatic microsomal PTP-1B activity in rat hepatoma cells (Hashimoto and Goldstein, 1992). Likewise, abnormal expression and activity of PTP-1B have been reported in the skeletal muscle of insulin-resistant obese people (Ahmad et al., 1997), as well as in non-obese Goto-Kakizaki rats with spontaneously generated insulin resistance (Dadke et al., 2000), and in STZ-hyperglycemic rats fed a high-fat diet (Wu et al., 2005). Based on the aforementioned, PTP-1B inhibition represents a good therapeutic target for the treatment of insulin resistance-related diseases, such as T2D (Zhang et al., 2006). Hence, an arsenal of molecules with inhibitory capacity on PTP-1B activity has been generated in recent years. The methodological approaches that have been applied are the rational design of synthetic phospho-(tyrosine)-mimetic molecules to be used as competitive inhibitors, considering the structural characteristics of the protein, and the search for molecules from natural sources (Sun et al., 2018). The latter is based on the statement that nature offers a great variety of structures that present diverse pharmacological effects (Atanasov et al., 2021), so natural products can be used as a starting point for the creation of powerful inhibitors. Table 1 summarizes all medicinal plants and their identified compounds that have proved to inhibit the activity or expression of PTP-1B since 2015. A total of 125 medicinal plants used in various traditional medicine systems around the world was obtained, mainly represented in eastern folk medicine, such as Chinese and Vietnamese. Morus alba L. (Moraceae), a plant used in the traditional Chinese system, has been the most evaluated for this purpose. In addition to direct PTP-1B activity inhibition and molecular docking studies, some extracts and compounds were assessed for improving glucose and lipid metabolism in vivo, such as lowering blood glucose levels, improving insulin resistance and glucose intolerance, and improving the lipid profile. Furthermore, their effect on glucose uptake and on the phosphorylation of some components of insulin signaling, such as IR, IRS, and Akt, was evaluated in cell cultures under insulin-resistant conditions.
Inhibition of Hepatic Glucose Output by Modulating Glucose Metabolism in Liver
The liver is a key organ that plays a crucial role in the regulation of blood glucose because it manages both storage and synthesis of glucose. The latter involves two metabolic pathways: glycogenolysis and gluconeogenesis, which constitute total hepatic glucose production (HGP).

FIGURE 2 | Impaired hepatic glucose homeostasis by insulin resistance. When insulin does not work properly, either due to overexpression of PTP-1B or other factors, glucose production in the liver is upregulated, generating a hyperglycemic state. Both gluconeogenesis and glycogenolysis are enhanced due to poor insulin signaling; namely, the genetic expression of gluconeogenic enzymes is not repressed and enzymes related to glycogen metabolism are not adequately regulated. Akt functions: green indicates positive regulation, red indicates negative regulation, and blue represents direct or indirect regulation by phosphorylation or allosterism. IR: insulin receptor; IRS: insulin receptor substrate; PI3K: phosphoinositide 3-kinase; PIP2: phosphatidylinositol 4,5-bisphosphate; PIP3: phosphatidylinositol 3,4,5-triphosphate; PDK: phosphoinositide-dependent kinase; Akt: protein kinase B; PTP-1B: protein tyrosine phosphatase 1B; PC: pyruvate carboxylase; OAA: oxaloacetate; PEPCK: phosphoenolpyruvate carboxykinase; PK: pyruvate kinase; FBPase: fructose 1,6-bisphosphatase; PFK: phosphofructokinase; GS: glycogen synthase; GP: glycogen phosphorylase; PP1: protein phosphatase 1; GSK3: glycogen synthase kinase-3; GK: glucokinase; G6Pase: glucose 6-phosphatase; GLUT2: glucose transporter 2.
Glycogenolysis consists of glycogen breakdown into glucose, accounting for about half of basal HGP in fasting, with the glycogen concentration decreasing at an almost linear rate during the first 22 h (Rothman et al., 1991;Cersosimo et al., 2018). In fasting, it is controlled by glucagon and epinephrine, which activate glycogen phosphorylase (GP), the major enzyme responsible for digesting glycogen by releasing glucose 1-phosphate. In the fed state, insulin inhibits glycogen breakdown and promotes glycogen synthesis through the activation of Akt and protein phosphatase 1 (PP1), leading to the deactivation of both GP and glycogen synthase kinase-3 (GSK3), which in its active (dephosphorylated) form inactivates glycogen synthase (GS) (Han et al., 2016). Gluconeogenesis, on the other hand, is defined as the production of glucose from a molecule that is not a carbohydrate. Its main substrates are pyruvate, glycerol, and amino acids such as alanine (Hanson and Owen, 2013). Another way to denote gluconeogenesis is as "reverse glycolysis", since both share not only substrates and final products but also many enzymes. However, the reactions catalyzed in gluconeogenesis go in the opposite direction, so the steps that are not shared with glycolysis can be regarded as regulatory steps. These reactions are catalyzed by four rate-limiting enzymes: pyruvate carboxylase (PC), which is responsible for converting pyruvate into oxaloacetate; phosphoenolpyruvate carboxykinase (PEPCK), which converts oxaloacetate to phosphoenolpyruvate; fructose 1,6-bisphosphatase (FBPase), which dephosphorylates fructose 1,6-bisphosphate to obtain fructose 6-phosphate; and glucose 6-phosphatase (G6Pase), which is responsible for removing the phosphate group from glucose 6-phosphate, yielding de novo synthesized glucose (Postic et al., 2004).
In the diabetic state, increased rates of HGP are observed as a result of an imbalance of various factors, such as the augmented availability of gluconeogenic substrates, the resistance of the liver to the action of insulin, and elevated levels of glucagon that activate HGP (Sharabi et al., 2015). Due to all these factors, the inhibition of HGP turns out to be an important therapeutic target for the reduction of the hyperglycemia observed in T2D patients. In this regard, Table 2 summarizes the works published between 2015 and 2021 on extracts or natural products from 47 medicinal plants that were shown to modulate hepatic glucose metabolism by inhibiting glucose production or promoting glycogen synthesis. As can be observed, decreasing the expression of PEPCK and G6Pase is the principal mechanism related to gluconeogenesis inhibition, while phosphorylation of GSK3, promotion of GS activity, and inhibition of GP are the main mechanisms involved in glycogen breakdown and synthesis. Furthermore, although the PI3K/Akt pathway stands out as a good pharmacological target to reduce insulin resistance, medicinal plants and their phytochemicals can also decrease HGP through AMP-activated protein kinase (AMPK).
DISCUSSION
Insulin resistance in liver leads to the release of large amounts of glucose into the bloodstream that affects long-term homeostasis. The regulation of hepatic glucose output represents a good pharmacological target for the control of metabolic diseases such as T2D, which are characterized by the presence of this pathophysiological phenomenon. The search for new molecules capable of regulating hepatic glucose metabolism from medicinal plants has focused on screening for phytochemicals that can directly inhibit key enzymes in glucose-producing pathways. However, considering compounds with the ability to also decrease the activity of the enzymes involved in terminating the insulin signal could result in more effective glycemic control.
According to the bibliographic search, plants used in different systems of traditional medicine have shown the ability to inhibit the activity or expression of PTP-1B.
In these studies, the plant material is typically prepared as an extract (aqueous, ethanolic, methanolic, etc.) and then tested on the biological activity to be evaluated following several paths: 1) direct enzymatic inhibition assays, which can be complemented with structure-activity relationship (SAR) studies and molecular docking analysis to find the possible structures responsible for the bioactivity, relating them to the binding of amino acid residues present at the catalytic or regulatory sites (regarding isolated compounds); 2) the use of cell cultures to evaluate the effect of the extract or compound on the expression and protein levels of key enzymes; and 3) in vivo studies, using diabetic (hyperglycemic) animals induced with STZ or alloxan, or insulin-resistant animals generated by the consumption of a high-fat diet.
Regarding PTP-1B, most of the studies published between 2015 and 2021 focused on conducting enzyme activity assays, and few of them had a multidisciplinary approach that encompassed enzyme assays and in vitro or in vivo studies. The main problem with the first type of study is that, although the inhibition potency and selectivity of the molecule for the enzyme are directly evaluated, the pharmacokinetic properties of the compound are omitted. This particularity stands out since it has been reported that, despite having excellent inhibitory activity, many compounds lack adequate cellular permeability; namely, they present poor absorption and low bioavailability. Another aspect to highlight is that PTP-1B is almost identical to TC-PTP, another member of the PTP family, with 74% identity at the catalytic site, so it is important that the identified inhibitors have high selectivity towards PTP-1B to avoid unwanted effects (Dewang et al., 2005). Considering these facts, it would be necessary in the future to carry out more studies involving as many approaches as possible to obtain a more integrative panorama and to be able to evaluate potential inhibitors considering their pharmacokinetic properties and selectivity. It is also encouraged to directly evaluate the effect of medicinal plants and their compounds with reported PTP-1B inhibitory capacity on hepatic glucose metabolism.
This work focused on summarizing the medicinal plants with the potential capacity to reduce hyperglycemia resulting from an imbalance in the hepatic metabolism of glucose, encompassing two different approaches: the inhibition of PTP-1B (improvement of hepatic insulin resistance), and the modulation of enzymes involved in gluconeogenesis and glycogenolysis/glycogenesis (decreased hepatic glucose output). In recent years, PTP-1B research has focused on the characterization of different phytochemicals from medicinal plants, such as phenolic compounds, terpenes, and alkaloids. The main methodology used was to carry out direct enzyme inhibition tests to evaluate the potency of these molecules, omitting important aspects such as selectivity or pharmacokinetics. Therefore, the use of multidisciplinary approaches is proposed, involving in vitro studies, such as the use of cell lines or primary cultures to evaluate the effect of the extracts and compounds on expression and protein levels, and in vivo studies, in which the concentration of the compound in systemic circulation and its duration are determined, as well as the transformation processes involved. In this regard, not only the inhibitory activity of the compounds is evaluated, but also the impact on other pharmacological aspects that can only be observed using animal models.
On the other hand, research on medicinal plants that modulate hepatic glucose metabolism has primarily focused on testing full extracts rather than compounds. However, it is worth mentioning that mixtures could have synergistic effects capable of regulating multiple targets (Caesar and Cech, 2019), and therefore compound fractions may exhibit more bioactivity than isolated molecules. Further studies are needed to identify potential multitarget phytochemicals in the plants listed in Table 2. Finally, it is expected that this review will provide greater knowledge of medicinal plants and compounds for the development of drugs that improve hepatic glucose metabolism as a therapeutic target for the treatment of T2D.
We suggest that Coreopsis tinctoria, Lithocarpus polystachyus, and Panax ginseng can be good candidates for developing herbal medicines or phytomedicines that target inhibition of hepatic glucose output as they can modulate the activity of PTP-1B, the expression of gluconeogenic enzymes, and the glycogen content. However, only their full extracts have been tested so far. Therefore, the compounds responsible for the effects mentioned above have not been identified, and pharmacological and toxicological tests in animal models are required to assess their efficacy and safety, with the aim of moving forward to clinical studies.
AUTHOR CONTRIBUTIONS
GM-T and FE-H performed the bibliographical research summarized in tables and wrote the first version of the manuscript. AA-C reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript.
FUNDING
This project was partially sponsored by DGAPA PAPIIT IN226719 and IN213222.
|
2021-12-23T14:27:14.422Z
|
2021-12-23T00:00:00.000
|
{
"year": 2021,
"sha1": "e24d74e5340828c920eced6a4d0e2b4bce2c1699",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "e24d74e5340828c920eced6a4d0e2b4bce2c1699",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
157686394
|
pes2o/s2orc
|
v3-fos-license
|
The Colombian Peace Negotiation and Foreign Investment Law
The stunning vote against the Colombian Peace Agreement opens an opportunity to include in the negotiations issues that were not included in the first deal—despite the fact that their omission had the potential to undermine the goal of a sustainable peace. One such issue is foreign investment law. Since the beginning of the talks, the Colombian government was keen on emphasizing that the country’s “economic model” was not subject to negotiation. The shadow of Venezuela loomed large in that position. Whatever came out of the talks was to be integrated in a framework of a free market economy, where private property and, above all, foreign investments would be respected.
and require domestic adjudication of disputes related to foreign investment. 2 Ultimately, though, the Peace Agreement made no reference to foreign investment protection, and the government made no promise to denounce or renegotiate existing trade treaties. This will most likely continue to be the case in the negotiation, as trade deals were not an issue of contention for the rejection of the deal.
A Contradictory Land Policy
The choice of keeping peace and investment protection on separate tracks means that a peace deal will be implemented in a regulatory space where foreign investment is strongly protected, a state of affairs that is particularly relevant with regard to land tenure. Historically, land has been at the center of the Colombian armed conflict. Violence and internal displacement have been connected to land tenure, with 79 percent of all displaced people reporting leaving behind some kind of land title. 3 In total, almost 5.5 million hectares were abandoned or forcibly taken as a consequence of the armed conflict, an area twice the size of Massachusetts. 4 Colombia is still characterized by highly concentrated land ownership: almost 78 percent of rural land in Colombia is the property of 13.7 percent of owners, and the Gini coefficient for land is 0.88, one of the worst in the world. 5 With figures such as these, no peace is viable in Colombia without some kind of deal on land reform and restitution. The Peace Agreement thus strived to create an "integral land reform" to reverse land concentration and favor small and midsize agriculture. Central to this effort was the creation of a three-million-hectare "land fund." The fund would be composed of recovered vacant lands that belong to the state; lands whose property titles are administratively extinguished because they "failed to comply with the social and ecological function of property" (an old formula in Colombian property law, common to several Latin-American constitutions, according to which property should not remain idle); and, finally, of lands formally expropriated for public purposes, with "the corresponding compensation," among other sources. Moreover, the Peace Agreement locked in ongoing efforts of land restitution to victims of the armed conflict, an ambitious program that will continue, regardless of the status of the peace deal. 6 The negotiations could lead to some changes in the details of these programs, but the overall landscape will most likely remain the same. If a peace deal strives to deal with inequality in land tenure as a root cause of the conflict, it will do so mostly by redistributing land, just as the rejected agreement tried to do. But this transitional mechanism will enter a policy space that is already thickly regulated. Just as the Colombian government felt no real pressure to renegotiate its international investment obligations, it will also be negotiating land reform while carrying on with a rural development policy focused on incentivizing extractive industries and capital-intensive agroindustry. For the last decades, Colombia has given generous tax breaks to mining companies and has facilitated the acquisition of extractive licenses. It has likewise promoted projects dedicated to capital-intensive agroindustry, most recently through the creation of the Zonas de Interés de Desarrollo Rural, Económico y Social. 7 This parallel framework of rural development is in conflict with the kind of land reform that is still on the negotiating table in Colombia. Where the peace negotiators speak of democratizing access to land, supporting small peasants and restituting land to victims, these agroindustry development programs require land concentration and capital to be successful.
Investment Protection Versus the Peace Agreements
This situation could put provisions of a peace accord on a collision course with foreign investors, particularly if land reform measures trigger an arbitration. Indeed, there is already evidence of such disputes. For example, since 2008, Anglo Gold Ashanti, the third largest gold producer in the world, and other smaller companies were given concessions for gold mining in the reservation of the Emberá, an indigenous community in the Colombian Pacific. The area had been the center of intense combat between the FARC and the Colombian Army, and the latter bombarded part of the reservation, forcing thousands of Emberá to flee. A couple of years later, the community sought to have their land restituted under the transitional justice mechanism created to that effect, but Anglo Gold opposed the restitution. It argued that the mining concession given by the government complied with Colombian law. The judge decided against Anglo Gold and restituted the land to the Emberá, on the basis that the community had not been consulted when the concessions were granted. 8 The Emberá case provides a glimpse of the kind of investment cases that a Colombian peace deal may trigger. Anglo Gold, or any investor in its situation, could try to seek compensation under a relevant investment treaty, arguing that the Colombian judge's restitution order is a measure tantamount to expropriation, or that it violates the fair and equitable treatment standard. And more ambitious domestic orders of restitution could lead to more ambitious international claims.
Similarly, investment cases could also emerge from the establishment of a "land fund," or any equivalent policy, particularly from the recovery of vacant plots that are possessed by foreign investors. For example, according to Oxfam, Cargill (the largest agricultural commodity trader in the world) may have evaded Colombia's restriction on acquiring previously state-owned land destined for family farming, buying up fifty thousand hectares of previously vacant plots. 9 If this accusation proves to be true, and the Colombian government decides to reverse the acquisitions and include these plots in the "land fund," Cargill could seek compensation under the Colombia-U.S. FTA. Would it prevail? It is of course impossible to know. Cargill has always said that it followed Colombian law scrupulously. 10 However, the risk of compensation will surely weigh on the Colombian decision to include certain vacant plots in the land fund.
To be sure, foreign investment is not, in itself, antithetical to a peace agreement. On the contrary, foreign funds will be needed to implement it. Nonetheless, while there is in theory enough land in Colombia to both foment agroindustry and implement peace initiatives, the reality of a lack of transport infrastructure is so severe 11 that, as a matter of resource allocation, this becomes a zero-sum game: a hectare that is used in mining, or for capital-intensive agroindustry, is a hectare that will not go to transitional policies of land reform.
Here, the investment dispute settlement regime becomes important. Its effectiveness in enforcing investment obligations tilts the scale towards keeping land in mining and agroindustry, not because the regime is itself biased in favor of these industries, but because foreign investors in Colombia do have a preference for investing in these industries, and the regime puts its weight behind them, backed by the risk of expensive awards. As a result, it becomes harder to move land from these industries to transitional initiatives, or at the very least, the investment regime may provide a political justification for not doing so.
Deciding Colombia's Peace-Related Investment Disputes
Until now, Colombian negotiators have not seriously considered investment protection as part of the limits that international law may impose on the implementation of an eventual deal (unlike, say, human rights or international criminal law). The rejection of the Peace Accord gives them the opportunity to do so. For instance, language could be included in the new deal to the effect that the Agreement and its land reform policies are necessary to protect Colombia's essential security, a move that could set the foundation for an eventual Colombian defense in a future arbitration. 12 Such a clause may be construed as not self-judging, thus opening a wide margin of arbitral interpretation. 13 How should investment arbitrators approach disputes that emerge from the Colombian Peace deal? The few precedents available give little guidance with regards to transitional measures in the context of investment disputes. In Piero Foresti, a group of Italian claimants argued that their shares in a mining operation company had been expropriated, as South Africa's postapartheid mining law required 26 percent ownership by historically disadvantaged South Africans. The dispute, though, was settled and the case discontinued. 14 More substantive was Funnekotter. 15 This case involved a group of Dutch farmers that were deprived of their land by Mugabe's controversial reforms, which sought to redistribute land from white owners to the black population. Zimbabwe argued, first, that its land reform was "in the public interest and under due process of law" and hence required no compensation, and second, that it was adopted as a matter of necessity. The tribunal rejected both arguments and decided in favor of the claimants, awarding most of the compensation sought. Interestingly, when debating the amount of compensation, Zimbabwe argued that a discount from the market value of the assets must be made in cases of large-scale nationalizations. 16 The tribunal, however, rejected the argument: the value of the asset should be calculated independently "of the number and aim of the expropriations done." In the Colombian case, one central question is whether, and to what extent, an investment tribunal should consider the transitional context that frames land reform in that country. While the precedents are not encouraging, I believe that this context should have weight. The land component of an eventual deal would be the cornerstone for the reparations of human rights abuses that occurred during the armed conflict, and is crucial for preventing more violence. As a general mindset, arbitrators should acknowledge the humanitarian dimension of their responsibility as adjudicators, instead of focusing on investment standards in isolation from their context. This approach does not imply denying investors the protection promised in treaties. Colombia adopted obligations that must be kept. However, investment norms and arbitral procedures open a space for arbitrators to consider all the implications of their decision.
In the Colombian case, such an approach would have certain procedural and doctrinal consequences. Procedurally, it would imply allowing civil society organizations, and particularly Colombian victims' organizations, to participate in the investment arbitration process, to have access to the claims, and to be heard by the tribunal. Moreover, when adjudicating on land reform initiatives, an arbitrator mindful of the humanitarian implications of investment litigation in the Colombian context might also adopt a more deferential standard of review. When the right case comes along, investment arbitration tribunals will be in a position to, in effect, review the domestic legal architecture and implementation of a peace agreement. They should adopt a deferential standard, giving in principle much weight to the decisions of Colombian courts, and only exceptionally deciding land reform questions anew. 17 Doctrinally, this approach might imply a stricter standard for diligence on behalf of the investor. If someone decided to invest in land in Colombia, attracted by the generous incentives in mining or agroindustry, that investor should have also considered that land tenure in Colombia has always been a central element of the conflict, making it reasonable to expect that land tenure would also be a central element of a peace agreement. Prior awards have suggested this caveat emptor possibility. In Hassan Awdi, the tribunal discussed the restitution of a historical building that had been confiscated by the Romanian communists in the 1950s, and then privatized and sold to foreign investors in the 1990s. In its reasoning, the tribunal took seriously Romania's property restitution program, and found that no expropriation had occurred, as the claimants knew that restitution was indeed a possibility when they made the investment. 18 Land reform and restitution is also a clear possibility in Colombia, and arbitral tribunals should expect investors to know so.
This approach could also have an impact on the calculation of investor compensation. Demanding significant amounts of money from a state that is implementing an ambitious transition program might have disastrous humanitarian effects, and might not be justified under basic principles of equity. 19 In his Separate Opinion in CME, Ian Brownlie considered relevant the disastrous effects that the award would have on the Czech Republic. 20 In both Sempra and CMS, the tribunals said that the Argentinean crisis had to be considered when calculating reparations: "the crisis cannot be ignored and it has specific consequences on the question of reparation." 21 Such particular circumstances should also be considered in the Colombian case. Finally, investment arbitrators approaching a Colombian case should be aware of that state's obligations under the American Convention on Human Rights. The Inter-American regime of human rights and investment protection already butted heads once, when Paraguay tried to use its obligations under the Germany-Paraguay BIT to evade restituting the traditional lands of the Sawhoyamaxa indigenous community. 22 The Inter-American Court rejected the argument. However, the case made clear that the communicating channels between the regimes are crucial for a situation such as Colombia's: a state that has strict obligations under the regional human rights system, which can involve, as we have seen, land restitution. Foreign investment law should not be an obstacle for Colombia to fulfill its other international obligations, particularly when, according to the Inter-American Court, the right to due process has achieved ius cogens status. 23
|
2019-05-19T13:07:08.856Z
|
2016-01-01T00:00:00.000
|
{
"year": 2016,
"sha1": "86cff0fbf65a9a8567c3848c1bcc4f2368ae6764",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/B61169B40B1FC77791C4355873950AB1/S239877230000307Xa.pdf/div-class-title-the-colombian-peace-negotiation-and-foreign-investment-law-div.pdf",
"oa_status": "GOLD",
"pdf_src": "Cambridge",
"pdf_hash": "2372b4712cfeb10f414c1a56b797175b52d4ef3c",
"s2fieldsofstudy": [
"Law",
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
}
|
202772436
|
pes2o/s2orc
|
v3-fos-license
|
Deceptive jamming against multi-channel SAR-GMTI
To address the problem that deceptive jamming against synthetic aperture radar ground moving target indication (SAR-GMTI) generated by a single jammer cannot work effectively in multi-channel SAR, a novel deceptive jamming method against multi-channel SAR-GMTI based on two jammers is studied. First, the signal model of the novel deceptive jamming is built. Then, based on three-channel clutter suppression and interferometry processing, the amplitude and interferometric phase of the deceptive moving target generated by the novel deceptive jamming are derived. Finally, a simulation experiment validates that the novel deceptive jamming can generate false moving targets with controllable radial velocity and located azimuth position in multi-channel SAR.
Introduction
Synthetic aperture radar ground moving target indication (SAR-GMTI), which can realise the detection, location and imaging of moving targets in hotspots [1][2][3], has greatly improved the information sensing capability of SAR and brought an austere challenge for SAR jamming, thereby being one of the indispensable reconnaissance tools in military applications. Jamming techniques against SAR-GMTI have therefore become a hot research issue in the radar electronic countermeasure (ECM) area.
Jamming techniques against SAR-GMTI can be divided into barrage jamming and deceptive jamming. Barrage jamming can reduce the detection rate of moving targets by concealing the real targets in barrage plaques or strips [4][5][6][7]. On the other hand, deceptive jamming makes it difficult to distinguish fact from fable by generating deceptive moving targets which have similar scattering and motion characteristics to real ones. Due to its high fidelity and low recognition rate, deceptive jamming is always a hot topic in the SAR ECM community. Nowadays, most deceptive jamming methods against SAR-GMTI focus on generating deceptive moving targets with a given motion state exploiting a single jammer, and the jamming effect is always evaluated by whether the deceptive targets can be detected as moving targets after clutter cancellation [8][9][10][11]. However, most multi-channel SAR-GMTI systems have the capability of velocity measurement and location [12][13][14][15], so it is necessary to study the rationality of the radial velocity and located position of the deceptive moving target. In fact, the signals of deceptive moving targets received by the radar come from the same radiation source, while the echoes of real moving targets are scattered by real targets at different positions; that is, the Doppler phase of a deceptive moving target is decided by both the radial velocity and the azimuth position, while the Doppler phase of a real moving target is decided only by the radial velocity. According to this difference between deceptive and real moving targets, Zhang et al. [16] point out that the radial velocity and located azimuth position of a deceptive moving target are not controllable after multi-channel SAR-GMTI image processing, and the deceptive moving target cannot simulate the spatial and motion characteristics of a real moving target.
To address the aforementioned issue, profiting from the three-dimensional scene generation based on two jammers [17], we propose a novel deceptive jamming method against multi-channel SAR-GMTI based on two jammers in this paper. The paper is organised as follows: Section 2 introduces the signal model of the novel deceptive jamming in multi-channel SAR-GMTI. In Section 3, we derive the amplitude and interferometric phase of the deceptive moving target generated by the novel deceptive jamming. In Section 4, simulation results are presented to verify the validity of the novel jamming method. Finally, conclusions are presented in Section 5.
Signal model of the novel deceptive jamming
This section will introduce the signal model of the novel deceptive jamming.
Without loss of generality, a three-channel SAR-GMTI system as shown in Fig. 1 is considered, which has three antennas $l$, $m$, and $r$ with equal interval $d$ along the motion direction; the antenna $m$ transmits the signal, whereas all antennas simultaneously receive echoes. The system moves at a speed of $V_a$ along the X-axis, which represents the azimuth direction. There are two jammers J1 and J2 in the imaging scene. The origin $O$ of the Cartesian coordinate system $OXY$ is the position of J1. The closest distances between the antenna $m$ and J1 and J2 are $R_{J1}$ and $R_{J2}$, respectively. The azimuth coordinate of J2 is $\Delta x$ ($\Delta x \neq 0$). Assuming that both J1 and J2 generate an arbitrary deceptive moving target located at $(x_T, y_T)$, the radial velocity, tangential velocity, and scattering coefficient are $v_r$, $v_x$ and $\sigma(x_T, y_T)$, respectively. Let $R_T$ represent the closest distance between antenna $m$ and the deceptive target, where $R_T = R_{J1} + y_T$.
According to Fig. 1, the instantaneous distances between the jammers and the antennas depend on the slow time $\eta$, on the sub-antenna channel $i \in \{l, m, r\}$, and on the azimuth distance $x_i$ between each sub-antenna and the middle antenna, where $(x_l, x_m, x_r) = (-d, 0, d)$. If a real target is located at $(x_T, y_T)$ with radial velocity $v_r$ and tangential velocity $v_x$, the instantaneous distance between the target and antenna $m$ can be expressed accordingly. According to the principle of SAR deceptive jamming, a jammer can generate a deceptive target located at $(x_T, y_T)$ by delaying the intercepted SAR signal by a specific time interval and modulating an appropriate amplitude, which fixes the double-trip distances of the SAR jamming signal received by each antenna. Supposing that the SAR signal is $s(\tau)$, the jamming signal at the antennas can be modelled as in (6); from (6), it can be seen that the jamming signal received by the SAR-GMTI system is the summation of the jamming signals generated by J1 and J2.
Amplitude and interferometric phase of deceptive moving target
In this section, the amplitude and interferometric phase of the deceptive moving target will be analysed via three-channel clutter suppression and interferometry (CSI) [3,[13][14][15]. Three-channel CSI is a mature method that combines the advantages of displaced phase centre antenna (DPCA) processing and along-track interferometry (ATI), and it has been widely used in many SAR-GMTI systems such as AN/APY-7, AN/APG-76, RadarSAT-2 and so on. Based on the SAR imaging principle, the images of the antennas can be derived, with $S_{J1\_m}(\tau, \eta)$ and $S_{J2\_m}(\tau, \eta)$ in (8) and (9) denoting the jamming images in antenna $m$ generated by J1 and J2, respectively. From (8) and (9), we can obtain $S_{J2\_m}(\tau, \eta)/S_{J1\_m}(\tau, \eta) \simeq \sigma_2(x_T, y_T)/\sigma_1(x_T, y_T)$. Let $\sigma_2(x_T, y_T)/\sigma_1(x_T, y_T) = \rho\exp(j\phi)$, which represents the complex scattering coefficient ratio of the jamming templates modulated in the two jammers.
The conjugate product of the residual images in (14) and (15) yields the interferometric phase of the deceptive moving target:

$$\Delta\varphi_J = \arg\big(S_{J\_lm} S_{J\_mr}^{*}\big) = \operatorname{atan}\!\left(\frac{a\sin\varphi_1 + b\sin\varphi_2 + c\sin\big((\varphi_1+\varphi_2)/2\big)}{a\cos\varphi_1 + b\cos\varphi_2 + c\cos\big((\varphi_1+\varphi_2)/2\big)}\right)$$

If $\Delta\varphi_J = -2\pi d v_r/(\lambda V_a)$, the estimated radial velocity of the deceptive moving target is $v_r$ and the located azimuth position is $x_T$, which coincide with the set values. From (19), it can be seen that $\Delta\varphi_J$ is a function of $\rho$ and $\phi$, which means the key point in realising the deceptive jamming against multi-channel SAR-GMTI is to find the appropriate complex scattering coefficient ratio $\rho\exp(j\phi)$ that satisfies $\Delta\varphi_J = -2\pi d v_r/(\lambda V_a)$. It is worth noting that $\Delta\varphi_J$ is identical to $\varphi_1$ if $\Delta\varphi = -2\pi d \Delta x/(\lambda R_T) = 2k\pi$ ($k \in \mathbb{Z}$). Hence, we should ensure that $\Delta\varphi = -2\pi d \Delta x/(\lambda R_T) \neq 2k\pi$ to generate deceptive jamming targets with the desired interferometric phases.
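As a numerical illustration of this search, the sketch below grid-searches the ratio $\rho\exp(j\phi)$ so that a user-supplied implementation of the mapping in (19) matches the phase a real mover would show; the system parameters and the delta_phi_J callback are assumptions, since the coefficients $a$, $b$ and $c$ depend on geometry details not reproduced here.

```python
import numpy as np

# Hypothetical system parameters (the paper's Table 1 values are not used here).
lam, V_a, d = 0.03, 150.0, 1.0  # wavelength [m], platform speed [m/s], baseline [m]

def desired_phase(v_r):
    # Interferometric phase a real mover with radial velocity v_r would show.
    return -2.0 * np.pi * d * v_r / (lam * V_a)

def find_ratio(delta_phi_J, v_r, rhos, phis):
    """Grid-search rho*exp(j*phi) so that the deceptive target's phase matches
    a real mover's; delta_phi_J(rho, phi) must implement the mapping in (19)."""
    target = desired_phase(v_r)
    best, best_err = None, np.inf
    for rho in rhos:
        for phi in phis:
            # Wrap the phase difference to (-pi, pi] before comparing.
            err = abs(np.angle(np.exp(1j * (delta_phi_J(rho, phi) - target))))
            if err < best_err:
                best, best_err = (rho, phi), err
    return best, best_err
```

In practice a closed-form choice of the ratio would be preferable; the grid search only demonstrates that the phase constraint pins down $\rho$ and $\phi$ numerically.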
Simulation results
In this section, numerical simulations are presented to validate the effectiveness of the proposed novel deceptive jamming in generating deceptive moving targets. The main parameters of the SAR-GMTI system are listed in Table 1. There are two jammers in the image region, which has an area of 400 m × 300 m. Jammer J1 is located at the centre of the image region, while the azimuth position of jammer J2 is 30 m, with the same range position as J1. The position and radial velocity of each deceptive moving target are set as shown in Table 2, and the spatial distribution of the deceptive moving targets is shown in Fig. 2. In the simulation, we first calculate $\rho$ and $\phi$ according to (19), and then ensure that the complex scattering coefficient ratio of the jamming templates modulated in the two jammers is $\rho\exp(j\phi)$ when the jammers generate the jamming signal. After three-channel CSI processing, the imaging results are shown in Fig. 3, in which the red circle represents the position of the deceptive moving target in the SAR image, the white number is the serial number of the deceptive moving target, the green triangle represents the located azimuth position, with the direction of the triangle indicating the direction of the estimated velocity of the deceptive moving target, and the green number is the serial number of the deceptive moving target in the GMTI result image. In Figs. 2 and 3, the range is the differential range between the real range and the range of the region centre.
From Figs. 3a and b, it can be seen that all the generated deceptive moving targets deviate from the set azimuth positions, which is caused by the existence of the radial velocity, and that they can be detected as real moving targets. From Fig. 3d, after three-channel CSI processing, the located azimuth positions of the deceptive moving targets appear consistent with the set azimuth positions. To further analyse the effectiveness of the proposed jamming method, the estimated interferometric phase, radial velocity, and located position of each deceptive moving target are presented in Table 3. Contrasting Table 2 with Table 3, the estimated interferometric phase, radial velocity, and located position of deceptive moving targets 3, 5, and 6 coincide with the set values, while the estimates for targets 1, 2, 7, and 8 show small errors, which reduce the fidelity of the deceptive moving targets to some extent. In fact, because the azimuth intervals between the deceptive moving target and the two jammers are different, especially when the deceptive moving target is not located in the middle of the jammers, it is difficult to ensure that the complex scattering coefficient ratio of the deceptive moving target equals the set value, which causes the interferometric phase to deviate from the desired interferometric phase. However, a deceptive moving target generated by the novel deceptive jamming against multi-channel SAR-GMTI based on two jammers is still more controllable than one generated by a single jammer.
Conclusion
Aimed at the problem that the radial velocity and located azimuth position of a deceptive moving target generated by a single jammer are not controllable in multi-channel SAR, this paper studies a novel deceptive jamming method against multi-channel SAR-GMTI based on two jammers. In this method, the deceptive moving target at an arbitrary location is composed of the jamming signals of both jammers, and the interferometric phase of the deceptive moving target is a function of the complex scattering coefficient ratio of the jamming templates modulated in the two jammers. Simulation results of controllable deceptive moving targets are provided to validate the proposed algorithm.
In this paper, we only point out that the interferometric phase of the deceptive moving target is decided by the complex scattering coefficient ratio of the jamming templates modulated in the two jammers. However, how to choose the complex scattering coefficient ratio according to the desired interferometric phase of the deceptive moving target is still unknown. In the next stage, we will focus on the determination method of the complex scattering coefficient ratio and develop the complete flow of deceptive jamming against multi-channel SAR-GMTI based on two jammers.
|
2019-09-17T02:47:07.880Z
|
2019-07-11T00:00:00.000
|
{
"year": 2019,
"sha1": "6eba7266f0844fd551c2c06fdf73c3c5bad5d870",
"oa_license": "CCBYNC",
"oa_url": "https://ietresearch.onlinelibrary.wiley.com/doi/pdfdirect/10.1049/joe.2019.0504",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "14d2b477fd3a31c222c41a45e24cb90b9a6c1e3f",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
33902382
|
pes2o/s2orc
|
v3-fos-license
|
An Analytical Performance Evaluation Tool for Wireless Access Points with Opportunistic Scheduling
This paper introduces a performance analysis tool for a wireless access point serving multiple mobile nodes. The channel conditions vary over time and the opportunistic scheduler in the access point can account for the channel conditions as well as for the number of backlogged packets for the different mobile nodes. The tool allows for fast and accurate calculations of the main performance measures of the system. Calculations are based on a Maclaurin series expansion of the solution of the Markov process that describes the channel conditions and the queue contents for the different mobile nodes. The Maclaurin series expansion only requires the inversion of sparse block triangular matrices, which is considerably faster than directly calculating the solution. The tool provides a user interface for defining the network and channel parameters and can be used for assessing the efficiency of opportunistic schedulers, or for optimizing system parameters and scheduling policies of such schedulers. CCS Concepts: • Mathematics of computing → Queueing theory; Markov processes; • Networks → Network performance analysis.
INTRODUCTION
The analytical performance analysis of a wireless access point (AP) with an opportunistic scheduler under varying channel conditions is a challenging task. A Markov model for such an access point needs to track the number of backlogged packets for every mobile user it serves, which quickly results in state-space explosion, rendering a direct solution of the Markov model computationally infeasible. Hence, it is not surprising that most of the existing studies of access points rely on simulation. Only a few authors assess the performance of opportunistic schedulers by analytic means [2,7,8].
The tool presented in this paper relies on the analytical method presented in [3] and briefly reviewed below. The tool provides a convenient user interface for manual input of the network and channel parameters, and hides the details of the method. As such, this tool allows one to compare the performance of different scheduling policies, system configurations, or channel properties in a practical setting. The analytical performance analysis is expected to be helpful for fast and precise performance assessments when designing a new scheduling policy or when choosing the system parameters under given performance requirements.
In the remainder of the paper, we briefly explain the method for assessing the performance in section 2. Then, the implementation of the software tool is discussed and illustrated by an example in section 3, before drawing conclusions in section 4.
METHODOLOGY
In this section we first introduce the Markov model of the system, accounting for the buffer behaviour at the wireless AP and the transmission channel variability. Afterwards, we outline the main expressions for its numerical solution.
System model
We consider an AP serving K users, as depicted in Figure 1.

Figure 1: Queueing model for an access point

Each mobile node has its dedicated buffer, with capacity C_k for node k. Let C_k = {0, . . . , C_k} and let C = C_1 × · · · × C_K. We make the following assumptions:

• There is an exogenous continuous-time Markov process M(t) that modulates the state of the wireless transmission channel. Let M = {1, 2, . . . , M} be the state space of this Markovian background process and let α_ij denote the transition rate from state i to state j, i ≠ j, i, j ∈ M.
• Packets for the kth mobile node arrive in accordance with a Poisson process with arrival rate λ_k.
• Let n_k be the number of packets in the kth buffer and let n be the vector with elements n_k, k = 1, . . . , K. For background state j ∈ M and queue content n ∈ C, packets leave the kth buffer with rate μ_k(n, j).
Analytical method
The state space of the Markovian queueing model introduced above is C × M. Let π(n, j) be the stationary probability that there are n_k packets waiting for the kth mobile node and that the channel is in state j. The balance equations are easily obtained: for n ∈ C and j ∈ M,

π(n, j) [ Σ_{k=1}^K λ_k 1{n_k < C_k} + Σ_{k=1}^K μ_k(n, j) 1{n_k > 0} + Σ_{i≠j} α_{ji} ] = Σ_{k=1}^K λ_k π(n − e_k, j) 1{n_k > 0} + Σ_{k=1}^K μ_k(n + e_k, j) π(n + e_k, j) 1{n_k < C_k} + Σ_{i≠j} α_{ij} π(n, i). (1)

We here defined e_k to be the row vector of length K with its kth element set to 1 and all other elements zero, and 1{X} denotes the indicator function of the event X. For ease of notation, let π(n) = [π(n, 1), . . . , π(n, M)]; then we get the equivalent set

π(n) [ Σ_{k=1}^K λ_k 1{n_k < C_k} I_M + Σ_{k=1}^K 1{n_k > 0} M_k(n) − A ] = Σ_{k=1}^K λ_k π(n − e_k) 1{n_k > 0} + Σ_{k=1}^K π(n + e_k) M_k(n + e_k) 1{n_k < C_k}, (2)

with M_k(n) the M × M diagonal matrix with diagonal elements μ_k(n, j), with I_M the M × M identity matrix, and with A the generator matrix of M(t).
Series expansion
The size of the state space of the Markov chain at hand is S = M ∏_{k=1}^K (C_k + 1). For a moderate number of buffers, directly solving the balance equations for the stationary vector π = [π(n, j)]_{n∈C, j∈M} is computationally demanding. We here show that, by considering the Maclaurin series expansion of the stationary solution in some system parameter, we can get an accurate estimate of the performance. Expansions of interest for the access point are discussed in the following subsections.
For a system parameter ε, consider a Markov chain with generator matrix Q_ε = Q^(0) + ε Q^(1) and corresponding stationary distribution vector π^(ε), such that

π^(ε) Q_ε = 0, π^(ε) e = 1. (3)

Here e is a column vector of ones. We further assume that the Markov chain is stationary ergodic for all ε ∈ [0, ε_m). By Cramer's rule, one easily finds that π^(ε) is an analytic function of ε in an open interval around ε = 0. Therefore, let π_n be the nth term in the series expansion of π^(ε),

π^(ε) = Σ_{n≥0} ε^n π_n. (4)

Plugging the series expansion (4) into (3) and identifying equal powers of ε, we get

π_0 Q^(0) = 0, π_n Q^(0) = −π_{n−1} Q^(1) for n > 0.
Complementing the former set of equations with the normalisation conditions π_0 e = 1 and π_n e = 0 for n > 0 allows for recursively calculating the terms π_n of the series expansion. For a generic matrix Q^(0), there is no gain in computational complexity, as one still needs to invert this matrix while solving for the next term in the series expansion. However, for the Markovian queueing model at hand, Q^(0) has additional structure, both for the expansion in the arrival rate λ and for the expansion in the service rate μ. In both cases, Q^(0) is a block triangular matrix, which reduces the complexity of its inversion.
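As an illustration of this recursion, here is a minimal NumPy sketch that computes the expansion terms for generic dense matrices, assuming Q^(0) is an irreducible generator and Q^(1) has zero row sums (so that Q_ε is a generator for every ε); exploiting the block triangular structure discussed above is omitted for brevity.

```python
import numpy as np

def maclaurin_terms(Q0, Q1, N):
    """First N+1 terms pi_0, ..., pi_N of the stationary vector of
    Q_eps = Q0 + eps*Q1, from the recursion
      pi_0 Q0 = 0,  pi_0 e = 1;   pi_n Q0 = -pi_{n-1} Q1,  pi_n e = 0  (n > 0).
    Q0 is singular (it is itself a generator), so one redundant balance
    equation is replaced by the normalisation condition."""
    S = Q0.shape[0]
    A = Q0.copy()
    A[:, 0] = 1.0                        # replace first column by e
    rhs = np.zeros(S); rhs[0] = 1.0      # pi_0 e = 1
    terms = [np.linalg.solve(A.T, rhs)]  # solve pi A = rhs
    for _ in range(N):
        rhs = -terms[-1] @ Q1
        rhs[0] = 0.0                     # pi_n e = 0 for n > 0
        terms.append(np.linalg.solve(A.T, rhs))
    return terms

# Tiny 2-state example; pi(eps) is then approximated by sum_n eps**n * pi_n.
Q0 = np.array([[-1.0, 1.0], [2.0, -2.0]])
Q1 = np.array([[-0.5, 0.5], [0.0, 0.0]])
print(maclaurin_terms(Q0, Q1, 3))
```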
Overload-traffic analysis
Assume that the service rate for every queue scales with a rate μ: μ_k(n, j) = μ μ̂_k(n, j). We can now consider the balance equations for μ → 0. In particular, we consider the following series expansion of the steady-state probabilities:

π(n) = Σ_{i≥0} μ^i π_i(n).

For ease of notation, let M̂_k(n) = μ^{−1} M_k(n). Note that M̂_k(n) does not depend on μ. Plugging the former expression into equation (2) and comparing terms in μ^i yields a set of recursive equations. Plugging n = 0 = [0, 0, . . . , 0] and i = 0 into the former equation and post-multiplying with e leads to π_0(0) e = 0, which implies π_0(0) = 0, as the elements of π_0(0) are non-negative. Using the same arguments, one then shows by iteration that for all n ∈ C \ {c}, we have π_0(n) = 0 and π_0(c) A = 0. Here c = [C_1, . . . , C_K] is the vector representing full queues for all mobile nodes. Together with the normalisation condition Σ_{n∈C} π_0(n) e = 1, this shows that π_0(c) = a, the steady-state solution of the Markov process M(t). For the higher-order terms (i > 0), the recursion is solved as follows. For n ≠ c, the matrix on the left-hand side is invertible; hence, we can calculate the probabilities π_i(n) in lexicographical order. For n = c, the matrix on the left-hand side is not invertible, and a solution of this equation is determined up to a term κ_i a, for any κ_i. Here, A^# = (A + ea)^{−1} − ea is the group inverse of A. Finally, the remaining unknown κ_i follows from the normalisation condition Σ_{n∈C} π_i(n) e = 0. In view of the calculations above, one easily verifies that the numerical complexity of the algorithm is O(N M² S), as there are S/M blocks, N terms in the recursion, and the operations with blocks have complexity O(M³).
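The group inverse in the singular step can be formed directly from the identity stated above. Below is a minimal NumPy sketch; the 3-state generator is an illustrative placeholder, and the assertion checks the defining property A A# A = A.

```python
import numpy as np

def stationary_vector(A_gen):
    """Stationary row vector a of an irreducible generator: a A = 0, a e = 1."""
    M = A_gen.shape[0]
    B = A_gen.copy()
    B[:, 0] = 1.0                      # replace one redundant balance equation
    rhs = np.zeros(M); rhs[0] = 1.0    # by the normalisation a e = 1
    return np.linalg.solve(B.T, rhs)

def group_inverse(A_gen):
    """Group inverse A# = (A + e a)^{-1} - e a of a generator A."""
    M = A_gen.shape[0]
    a = stationary_vector(A_gen)
    ea = np.ones((M, 1)) @ a[None, :]  # rank-one matrix e a
    return np.linalg.inv(A_gen + ea) - ea

# Illustrative 3-state generator; check A A# A = A.
A_gen = np.array([[-2.0, 2.0, 0.0],
                  [1.0, -3.0, 2.0],
                  [0.0, 1.0, -1.0]])
Ash = group_inverse(A_gen)
assert np.allclose(A_gen @ Ash @ A_gen, A_gen)
```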
Light-traffic analysis
Similar arguments can be developed for the case of light-traffic conditions; that is, we set λ_k = λ λ̂_k and consider an expansion of the form

π(n) = Σ_{i≥0} λ^i π_i(n).

In view of the balance equations, the terms of this series expansion adhere to an analogous recursion. For i = 0, we can show that π_0(n) = 0 for n ≠ 0 and π_0(0) = a. For i > 0 and n ≠ 0, we can recursively calculate all π_i(n) in reverse lexicographical order, as the matrix on the left-hand side is invertible. For n = 0, the solution again involves the group inverse A^# and is determined up to a term κ_i a, where κ_i can be determined from the normalisation condition (13). We refer to [3] for the detailed analysis.
Modelling of wireless environment
At the wireless AP, knowledge of the transmission environment conditions is provided by user nodes reporting their channel state information (CSI). Depending on the system organisation, CSI feedback may take different forms, including the current signal-to-noise ratio (SNR) level at the receiver, the available service rates, the probability of correct transmission, or any other representative value related to the channel quality [5].
Limiting the number of parameters of the channel, we assume that the characteristics of all transmission channels are identical and that the channel qualities at the receivers can vary either independently or in a correlated manner. We first focus on the Markov process which characterises a single channel. Let H be the number of states of the Markov process H_k(t) for the kth channel, and let g = [g_1, . . . , g_H] and Ã be the vector of channel qualities in the different states and the generator matrix of the Markov process H_k(t), respectively. Specifically, in this work the element g_h of the vector g corresponds to the ratio between the available transmission rate in the hth channel state and the maximally achievable transmission rate. Thus, the value g_1 = 0 corresponds to poor channel quality with no transmission, and the value g_H = 1 corresponds to excellent channel conditions associated with the maximum achievable transmission rate. The other elements of g belong to the interval (0, 1) and stand for the attenuation levels of the channel quality depending on the SNR. In this way we represent the channel qualities by means of adjustable coefficients, without being bound to certain transmission rates or a particular type of CSI. The channel transition rate matrix Ã of the finite-state Markov chain (FSMC) for a single user under the assumption of a Rayleigh channel can be defined according to [10]. The additional parameters defining the channel model are: the maximum transmission rate r_t of the system, the average SNR level SNR_0 of the Rayleigh channel, and the Doppler spread F_m, which is caused by the users' motion and increases the symbol error probability. However, [10] assumes that channel changes occur at discrete time instants. To obtain a corresponding continuous-time Markov model, we introduce an additional rate ε and assume that the channel changes in accordance with [10] at the events of a Poisson process with rate ε. Finally, to obtain the generator matrix A of the FSMC corresponding to the overall channel process M(t) for a multiuser system with K identical independent channels, we apply the Kronecker sum ⊕:

A = Ã ⊕ Ã ⊕ · · · ⊕ Ã (K terms).

In order to introduce correlations between the qualities of the channels, we have to alter the elements of A according to the provided correlation matrix R.
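As a small illustration of this construction, the sketch below forms the overall generator as a repeated Kronecker sum; the 2-state single-channel generator is an illustrative placeholder.

```python
import numpy as np

def kron_sum(A, B):
    """Kronecker sum A ⊕ B = A ⊗ I + I ⊗ B of two square matrices."""
    return np.kron(A, np.eye(B.shape[0])) + np.kron(np.eye(A.shape[0]), B)

def overall_channel_generator(A_single, K):
    """Generator of K independent, identical channels: A ⊕ A ⊕ ... ⊕ A."""
    A = A_single
    for _ in range(K - 1):
        A = kron_sum(A, A_single)
    return A

# Example: a 2-state single-channel generator and K = 3 users.
A_single = np.array([[-0.5, 0.5],
                     [0.2, -0.2]])
A = overall_channel_generator(A_single, 3)   # 8 x 8 generator
assert np.allclose(A.sum(axis=1), 0.0)       # rows of a generator sum to zero
```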
Examples of schedulers
For demonstration purposes, we have implemented several schedulers, including purely opportunistic schedulers, weighted schedulers, and non-opportunistic schedulers.
A first example of a greedy opportunistic scheduler is the MaxRate scheduler. The scheduler serves node k*(t) at time t, with

k*(t) = arg max_k g_k(t),

where g_k(t) stands for the channel quality for user k at time t, which depends on the state of the channel. MaxRate defines a function k(j) such that μ_k(n, j) = 0 for all k ≠ k(j). The scheduler was first considered in [6] for single-cell multiuser communication.
The MaxRate scheduler can be extended such that the AP chooses two users for simultaneous service. That is, such a scheduler always divides the transmission rate between the two users with the best channel conditions. This is possible in time division multiple access (TDMA) systems with code division multiple access (CDMA) within each time slot. The scheduler serves nodes k*_1(t) and k*_2(t) at time t, corresponding to the largest and second-largest channel qualities g_k(t). This extension of MaxRate defines functions k_1(j) and k_2(j) such that μ_k(n, j) = 0 for all k ∉ {k_1(j), k_2(j)}. In contrast to MaxRate, MaxWeight selects the user with the maximum weight, which is calculated as the product of queue length and channel quality, see [1]. MaxWeight selects user k*(t) at time t, with

k*(t) = arg max_k n_k(t) g_k(t).

As MaxWeight also accounts for the queue length, there is now a function k(n, j) such that μ_k(n, j) = 0 for all k ≠ k(n, j).
Schedulers may also not account for the channel state at all. An example of a non-opportunistic scheduler is one that chooses the longest queue. Such a scheduler is shown to be stable for dynamic server allocation to parallel queues with randomly varying connectivity in [9], where it is called the Longest Connected Queue (LCQ) scheduler. The scheduler serves node k*(t) at time t, with

k*(t) = arg max_{k: g_k(t) > 0} n_k(t).

As LCQ accounts for the queue length, there is now a function k(n) such that μ_k(n, j) = 0 for all k ≠ k(n). Notice that also in this case the actual service rate μ_{k(n)}(n, j) for node k(n) does depend on the channel condition. Finally, we mention two schedulers which are inspired by discriminatory (DPS) and generalised processor sharing (GPS) [4], but with weights set to reflect the channel conditions. The service rate of each node is then weighted by the channel qualities, where g_{jk} is the channel quality for user k when the overall channel state is j, and the parameter β_j = (1/K) Σ_{k=1}^K g_{jk} describes the overall channel condition.
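To make the selection rules above concrete, the sketch below implements MaxRate, MaxWeight, and LCQ as plain functions of the queue contents n and per-user channel qualities g; ties are broken by the lowest index, which is an assumption rather than something fixed by the paper.

```python
import numpy as np

def max_rate(n, g):
    """MaxRate: serve the user with the best channel quality."""
    return int(np.argmax(g))

def max_weight(n, g):
    """MaxWeight: serve the user maximising queue length times channel quality."""
    return int(np.argmax(np.asarray(n) * np.asarray(g)))

def lcq(n, g):
    """LCQ: serve the longest queue among connected users (g_k > 0)."""
    n = np.asarray(n, dtype=float)
    masked = np.where(np.asarray(g) > 0, n, -np.inf)
    return int(np.argmax(masked))

n = [3, 0, 5]          # queue contents per user
g = [0.8, 1.0, 0.3]    # channel qualities per user
print(max_rate(n, g), max_weight(n, g), lcq(n, g))   # -> 1 0 2
```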
THE TOOL IMPLEMENTATION
Figure 2 depicts the main components of the performance analysis tool. The graphical user interface (GUI) front end of the tool is implemented in C#, and calls Python scripts and C++ functions in order to set up the system and channel models and to calculate the performance measures, respectively. Another Python script, which calls LaTeX, then formats the results and creates a report in portable document format.
Via the GUI, the various parameters of the model and the analysis method are set. The input parameters can be divided into (i) buffer parameters, (ii) channel parameters, and (iii) parameters of the series expansion method. The buffer parameters include (i) the number of mobile nodes K, (ii) the buffer capacity C of the nodes, and (iii) the scheduling discipline used. Note that the current version of the GUI considers the same capacity C for all nodes. Predefined scheduling disciplines are the MaxRate, MaxRate with simultaneous service of two queues, MaxWeight, LCQ, DPS, and GPS schedulers mentioned in section 2.7. The channel parameters include the number of states H of a single channel, and several parameters needed to construct the channel generator matrix A. The main model parameters that one can manually change via the GUI are listed in Table 1.
The model parameters inserted via the GUI are then used in the various predefinition blocks. Here, based on the given channel parameters, the channel is represented by its transition rate matrix A, and, based on the chosen scheduling policy, the buffer parameters, and the channel qualities, the service rate matrix μ is constructed. The generator matrix A is formed in the corresponding block according to [10], as explained in section 2.6. In case of correlated channels, the correlations are added based on the given correlation matrix R. Next, the chosen scheduler can be analysed. The performance analysis block calculates the series expansion terms of the steady-state probability vectors for both the overload and light-traffic cases according to (10) and (14). The tool also includes functions for simulation in order to estimate the region of convergence of the analytical results. Via the GUI, the user can define several additional parameters for the analysis block, such as the number of terms of the series expansions and the number of samples for the simulation. The user can also change the mode of the performance analysis & simulation block, and manually turn simulation or analytical analysis on or off for the light and/or overload regimes. The interface window is shown in Figure 3.
The output report generated by the analysis tool consists of plots of the performance measures, namely the mean buffer content E[Q] and the blocking probability P_b. The output format can be adjusted for a better visual representation by changing the expansion terms to plot and the ranges of the axes in the application window.
Additional function buttons allow saving the results and input parameters in a separate folder, opening the folder with previously saved results, and uploading input parameters from any given configuration file and the correlation matrix from any text file, as chosen via the dialogue window. Alternatively, the desired correlation matrix can be inserted manually or randomly generated. In both cases the matrix is checked to be positive semidefinite.
For the sake of demonstration, Figure 4 shows the output report for the configuration shown in Figure 3. The output file duplicates the input system information and provides plots with analytical estimations and simulation results of the performance measures over the given ranges.
CONCLUSIONS
We have introduced a practical tool for the performance analysis of a multiuser wireless AP operating under varying channel conditions. The tool is implemented as a Windows application and presumes manual input of the AP buffer and channel model parameters through a GUI. The underlying analytical method focuses on the buffer analysis in light- and overload-traffic regimes. The channel dynamics are modelled by a Markovian exogenous process. To solve the entire continuous-time Markov chain and cope with the prohibitively large size of the state space that is inherent to queueing systems with multiple finite-capacity buffers, series expansion techniques are used. The computational complexity of the suggested method is O(N M² S), with N the order of the expansions, M the number of channel states, and S the size of the state space of the overall Markov chain. As output, the designed tool gives the performance measures of the system under the given conditions, collected in a portable document format. The tool can be used for a theoretical study of the efficiency of a scheduling policy as well as for the performance evaluation of practical systems.
Figure 2: Structure of the performance evaluation tool
Table 1: Model parameters
Large-scale Bayesian optimal experimental design with derivative-informed projected neural network
We address the solution of large-scale Bayesian optimal experimental design (OED) problems governed by partial differential equations (PDEs) with infinite-dimensional parameter fields. The OED problem seeks to find sensor locations that maximize the expected information gain (EIG) in the solution of the underlying Bayesian inverse problem. Computation of the EIG is usually prohibitive for PDE-based OED problems. To make the evaluation of the EIG tractable, we approximate the (PDE-based) parameter-to-observable map with a derivative-informed projected neural network (DIPNet) surrogate, which exploits the geometry, smoothness, and intrinsic low-dimensionality of the map using a small and dimension-independent number of PDE solves. The surrogate is then deployed within a greedy algorithm-based solution of the OED problem such that no further PDE solves are required. We analyze the EIG approximation error in terms of the generalization error of the DIPNet and show they are of the same order. Finally, the efficiency and accuracy of the method are demonstrated via numerical experiments on OED problems governed by inverse scattering and inverse reactive transport with up to 16,641 uncertain parameters and 100 experimental design variables, where we observe up to three orders of magnitude speedup relative to a reference double loop Monte Carlo method.
Introduction
In modeling natural or engineered systems, uncertainties are often present due to the lack of knowledge or intrinsic variability of the system. Uncertainties may arise from sources as varied as initial and boundary conditions, material properties and other coefficients, external source terms, interaction and coupling terms, and geometries; for simplicity of exposition, we refer to all of these as parameters. Uncertainties in prior knowledge of these parameters can be reduced by incorporating indirect observational or experimental data on the system state or related quantities of interest into the forward model via solution of a Bayesian inverse problem. Prior knowledge can be incorporated through a prior distribution on the uncertain parameters. The data are typically noisy because of limited measurement precision, which induces a likelihood of the data conditioned on the given parameters. Uncertainties of the parameters are then reduced by the data and quantified by a posterior distribution, which combines the prior and the likelihood via Bayes' rule.
Large amounts of informative data can reduce uncertainties in the parameters, and thus posterior predictions, significantly. However, the data are often sparse or limited due to the cost of their acquisition. In such cases it is critical to design the acquisition process or experiment in an optimal way so that as much information as possible can be gained from the acquired data, or the uncertainty in the parameters or posterior predictions can be reduced as much as possible. Experimental design variables can include what, when, and where to measure, which sources to use to excite the system, and under which conditions should the experiments be conducted. This is known as the optimal experimental design (OED) problem [1], or Bayesian OED in the context of Bayesian inference. OED problems arise across numerous fields including geophysical exploration, medical imaging, nondestructive evaluation, drug testing, materials characterization, and earth system data assimilation, to name just a few. For example, two notable uses of OED include optimal observing system design in oceanography [2] and optimal sensor placement for tsunami early warning [3].
The challenges to solving OED problems in these and other fields are manifold. The models underlying the systems of interest typically take the form of partial differential equations (PDEs) and can be large-scale, complex, nonlinear, dynamic, multiscale, and coupled. The uncertain parameters may depend on both space and time, and are often characterized by infinite-dimensional random fields and/or stochastic processes. The PDE models can be extremely expensive to solve for each realization of the infinite-dimensional uncertain parameters. The computation of the OED objective involves high-dimensional (after discretization) integration with respect to (w.r.t.) the uncertain parameters, and thus requires a large number of PDE solves. Finally, the OED objective will need to be evaluated numerous times, especially when the experimental design variables are high-dimensional or when they represent discrete decisions.
Here, we consider the Bayesian OED problem for optimal sensor placement governed by large-scale and possibly nonlinear PDEs with infinite-dimensional uncertain parameters. We use the expected information gain (EIG), also known as mutual information, as the optimality criterion for the OED problem. The optimization problem is combinatorial: we seek the combination of sensors, selected from a set of candidate locations, that maximizes the EIG. The EIG is an average of the Kullback-Leibler (KL) divergence between the posterior and the prior distributions over all realizations of the data. This involves a double integral: one integral of the likelihood function w.r.t. the prior distribution to compute the normalization constant or model evidence for each data realization, and one integral w.r.t. the data distribution. To evaluate the two integrals we adopt a double-loop Monte Carlo (DLMC) method that requires the computation of the parameter-to-observable map at each of the parameter and data samples. Since the likelihood can be rather complex and highly locally supported in the parameter space, the number of parameter samples from the prior distribution needed to capture the likelihood well with relatively accurate sample average approximation of the normalization constant can be extremely large. The requirement to evaluate the PDE-constrained parameter-to-observable map at each of the large number of samples leads to numerous PDE solves, which is prohibitive when the PDEs are expensive to solve. To tackle this challenge, we construct a derivative-informed projected neural network (DIPNet) [24][25][26] surrogate of the parameter-to-observable map that exploits the intrinsic low dimensionality of both the parameter and the data spaces. This intrinsic low dimensionality is due to the correlation of the high-dimensional parameters, the smoothing property of the underlying PDE solution, and redundant information contained in the data from all of the candidate sensors. In particular, the low-dimensional subspace of the parameter space can be detected via low rank approximations of derivatives of the parameter-to-observable map, such as the Jacobian, Gauss-Newton Hessian, or higher-order derivatives. This property has been observed and exploited across a wide spectrum of Bayesian inverse problems [27][28][29][30][31][32][33][34][35][36][37][38][39][40][41][42] and Bayesian optimal experimental design [7,17,18]. See [43] for analysis of model elliptic, parabolic, and hyperbolic problems, and a lengthy list of complex inverse problems that have been found numerically to exhibit this property.
This intrinsic low-dimensionality of parameter and data spaces, along with smoothness of the parameter-to-observable map, allow us to construct an accurate (over parameter space) DIPNet surrogate with a limited and dimension-independent number of training data pairs, each requiring a PDE solve. Once trained, the DIPNet surrogate is deployed in the OED problem, which is solved without further PDE solution, resulting in very large reductions in computing time. Under suitable assumptions, we provide an analysis of the error propagated from the DIPNet approximation to the approximation of the normalization constant and the EIG. To solve the combinatorial optimization problem of sensor selection, we use a greedy algorithm developed in our previous work [17,18]. We demonstrate the efficiency and accuracy of our computational method by conducting two numerical experiments with infinite-dimensional parameter fields: OED for inverse scattering (with an acoustic Helmholtz forward problem) and inverse reactive transport (with a nonlinear advection-diffusion-reaction forward problem).
The rest of the paper is organized as follows. The setup of the problems including Bayesian inversion, EIG, sensor design matrix, and Bayesian OED are presented in Section 2. Section 3 is devoted to presentation of the computational methods including DLMC, DIPNet and its induced error analysis, and the greedy optimization algorithm. Results for the two OED numerical experiments are provided in Section 4, followed by conclusions in Section 5.
Problem setup

Bayesian inverse problems
Let D ⊂ R^{n_x} denote a physical domain of dimension n_x = 1, 2, 3. We consider the problem of inferring an uncertain parameter field m defined in the physical domain D from noisy data y and a complex model represented by PDEs. Let y ∈ R^{n_y} denote the noisy data vector of dimension n_y ∈ N, given by

y = F(m) + ε,

which is contaminated by the additive Gaussian noise ε ∼ N(0, Γ_n) with zero mean and covariance Γ_n ∈ R^{n_y × n_y}. Specifically, y is obtained from observation of the solution of the PDE model at n_y sensor locations. F is the parameter-to-observable map, which depends on the solution of the PDE and an observation operator that extracts the solution values at the n_y locations. We consider the above inverse problem in a Bayesian framework. First, we assume that m lies in an infinite-dimensional real separable Hilbert space M, e.g., M = L²(D) of square-integrable functions defined in D. Moreover, we assume that m follows a Gaussian prior measure μ_pr = N(m_pr, C_pr) with mean m_pr ∈ M and covariance operator C_pr, a strictly positive, self-adjoint, and trace-class operator. As one example, we consider C_pr = A^{−2}, where A = −γΔ + δI is a Laplacian-like operator with prescribed homogeneous Neumann boundary condition, with Laplacian Δ, identity I, and positive constants γ, δ > 0; see [29,44,45] for more details. Given the Gaussian observation noise, the likelihood of the data y for the parameter m ∈ M satisfies

π_like(y|m) ∝ exp(−Φ(m, y)), (2)

where Φ(m, y) = ½ ||F(m) − y||²_{Γ_n^{−1}} is known as a potential function. By Bayes' rule, the posterior measure, denoted as μ_post(m|y), is given by the Radon–Nikodym derivative

dμ_post/dμ_pr (m|y) = π_like(y|m) / π(y), (4)

where π(y) is the so-called normalization constant or model evidence, given by

π(y) = ∫_M π_like(y|m) dμ_pr(m).

This expression is often computationally intractable because of the infinite-dimensional integral, which involves a (possibly large-scale) PDE solve for each realization m.
Expected information gain
To measure the information gained from the data y in the inference of the parameter m, we consider the Kullback–Leibler (KL) divergence between the posterior and the prior, defined as

D_KL(μ_post || μ_pr) = ∫_M log( dμ_post/dμ_pr (m|y) ) dμ_post(m|y),

which is random since the data y is random. We consider a widely used optimality criterion, the expected information gain (EIG), which is the KL divergence averaged over all realizations of the data, defined as

Ψ = E_y[ D_KL(μ_post || μ_pr) ] = ∫ ∫_M log( π_like(y|m) / π(y) ) π_like(y|m) dμ_pr(m) dy,

where the last equality follows from Bayes' rule (4) and the Fubini theorem under the assumption of proper integrability.
Optimal experimental design
We consider the OED problem of optimally acquiring data to maximize the expected information gain in the parameter inference. The experimental design seeks to choose r sensor locations out of d candidates {x_1, . . . , x_d}, represented by a design matrix W ∈ W ⊂ R^{r×d}; namely, if the i-th sensor is placed at x_j, then W_ij = 1, otherwise W_ij = 0. Let F_d : M → R^d denote the parameter-to-observable map and ε_d ∈ R^d the additive noise, both using all d candidate sensors; then we have

y = W (F_d(m) + ε_d) = W F_d(m) + W ε_d.

Then the likelihood (2) for a specific design W is given by

π_like(y|m, W) ∝ exp( −½ ||W F_d(m) − y||²_{Γ_n^{−1}} ),

and the normalization constant also depends on W as

π(y, W) = ∫_M π_like(y|m, W) dμ_pr(m).

From Section 2.2, we can see that the EIG Ψ depends on the design matrix W through the likelihood function π_like(y|m, W). To this end, we formulate the OED problem as finding an optimal design matrix W* such that

W* = arg max_{W ∈ W} Ψ(W),

with the W-dependent EIG given by

Ψ(W) = ∫ ∫_M log( π_like(y|m, W) / π(y, W) ) π_like(y|m, W) dμ_pr(m) dy.
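As a concrete illustration of the design matrix, the sketch below builds W from a list of selected sensor indices and applies it to the full observable vector; the index choice is arbitrary and for illustration only.

```python
import numpy as np

def design_matrix(selected, d):
    """Design matrix W in {0,1}^{r x d}: row i has a single 1 in column
    selected[i], so y = W @ y_d keeps only the chosen sensors."""
    W = np.zeros((len(selected), d))
    for i, j in enumerate(selected):
        W[i, j] = 1.0
    return W

d = 6                              # candidate sensor locations
y_d = np.arange(d, dtype=float)    # stand-in for F_d(m) + eps_d at all candidates
W = design_matrix([1, 3, 4], d)    # place r = 3 sensors at candidates 1, 3, 4
print(W @ y_d)                     # -> [1. 3. 4.]
```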
Finite-dimensional approximation
To facilitate the presentation of our computational methods, we make a finite-dimensional approximation of the parameter field by using a finite element discretization. Let M_n ⊂ M denote a subspace of M spanned by n piecewise continuous Lagrange polynomial basis functions {ψ_j}_{j=1}^n over a mesh with elements of size h. Then the discrete parameter m_h ∈ M_n is given by

m_h = Σ_{j=1}^n m_j ψ_j.

The Bayesian inverse problem is then stated for the finite-dimensional coefficient vector m = (m_1, . . . , m_n)^T of m_h, with n possibly very large. The prior distribution of the discrete parameter m is Gaussian, N(m_pr, Γ_pr), with m_pr representing the coefficient vector of the discretized prior mean and Γ_pr representing the covariance matrix corresponding to C_pr = A^{−2}, given by

Γ_pr = A^{−1} M A^{−1},

where A is the finite element matrix of the Laplacian-like operator A, and M is the mass matrix. Moreover, letting F_d : R^n → R^d denote the discretized parameter-to-observable map corresponding to F_d, we have F = W F_d as in (9). The likelihood function corresponding to (10) for the discrete parameter m is then

π_like(y|m, W) ∝ exp( −½ ||W F_d(m) − y||²_{Γ_n^{−1}} ).

Computational methods
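Since Γ_pr = A^{−1} M A^{−1} is never formed densely in practice, its action on a vector can be applied with two sparse solves. The sketch below is a minimal illustration with stand-in 1D finite-difference matrices, not the paper's actual FEniCS/hIPPYlib discretization.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 50
# Stand-in 1D discretization: M ~ mass matrix, A ~ delta*I - gamma*Laplacian.
h = 1.0 / (n - 1)
M = sp.identity(n, format="csc") * h
lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc") / h**2
gamma, delta = 0.1, 1.0
A = (delta * sp.identity(n) - gamma * lap).tocsc()

def prior_covariance_apply(v):
    """Apply Gamma_pr = A^{-1} M A^{-1} to a vector via two sparse solves."""
    return spsolve(A, M @ spsolve(A, v))

v = np.random.default_rng(0).standard_normal(n)
print(prior_covariance_apply(v)[:3])
```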
Double-loop Monte Carlo estimator
To solve the OED problem (12), we need to evaluate the EIG repeatedly for each given design W. The double integrals in the EIG expression can be computed by a double-loop Monte Carlo (DLMC) estimator Ψ_dl, defined as

Ψ_dl = (1/n_out) Σ_{i=1}^{n_out} [ log π_like(y_i | m_i, W) − log π̂(y_i, W) ], (17)

where m_i, i = 1, . . . , n_out, are i.i.d. samples from the prior N(m_pr, Γ_pr) in the outer loop, y_i is data generated at m_i, and π̂(y_i, W) is a Monte Carlo estimator of the normalization constant π(y_i, W) with n_in samples in the inner loop, given by

π̂(y_i, W) = (1/n_in) Σ_{j=1}^{n_in} π_like(y_i | m_{i,j}, W), (18)

where m_{i,j}, j = 1, . . . , n_in, are i.i.d. samples from the prior N(m_pr, Γ_pr). For complex posterior distributions, e.g., high-dimensional, locally supported, multi-modal, or non-Gaussian ones, evaluation of the normalization constant is often intractable, i.e., a prohibitively large number of samples n_in is needed. As one particular example, when the posterior of m for data y_i generated at sample m_i concentrates in a very small region far away from the mean of the prior, the likelihood π_like(y_i | m_{i,j}, W) is extremely small for most samples m_{i,j}, which leads to a requirement of a large number of samples to evaluate π̂(y_i, W) with relatively small estimation error. This is usually prohibitive, since one evaluation of the parameter-to-observable map, and thus one solution of the large-scale PDE model, is required for each of the n_out × n_in samples. These n_out × n_in PDE solves are required for each design matrix W at each optimization iteration.
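The DLMC estimator is straightforward to express in code. Below is a minimal NumPy sketch for a generic forward map; the toy linear map, the standard-normal prior, and the sample sizes are illustrative placeholders, and the data y_i are synthesized at the outer-loop samples exactly as in the estimator above.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
n_param, n_obs = 4, 3
G = rng.standard_normal((n_obs, n_param))   # toy linear forward map (placeholder)
sigma = 0.5                                 # noise std, Gamma_n = sigma^2 I

def log_like(y, m):
    """Log-likelihood up to an additive constant; the Gaussian normalization
    constant cancels between the two terms of the estimator."""
    r = G @ m - y
    return -0.5 * np.dot(r, r) / sigma**2

def eig_dlmc(n_out=200, n_in=500):
    """Double-loop Monte Carlo estimate of the expected information gain."""
    total = 0.0
    for _ in range(n_out):
        m_i = rng.standard_normal(n_param)                   # outer prior sample
        y_i = G @ m_i + sigma * rng.standard_normal(n_obs)   # synthetic data
        lls = np.array([log_like(y_i, rng.standard_normal(n_param))
                        for _ in range(n_in)])               # inner prior samples
        log_evidence = logsumexp(lls) - np.log(n_in)         # log pi_hat(y_i)
        total += log_like(y_i, m_i) - log_evidence
    return total / n_out

print(eig_dlmc())
```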
Derivative-informed projected neural networks
Recent research has motivated the deployment of neural networks as surrogates for parametric PDE mappings [24,[46][47][48][49][50][51][52][53]. These surrogates can be used to accelerate the computation of the EIG within OED problems. Specifically, to reduce the prohibitive computational cost, we build a surrogate for the parameter-to-observable map F_d : R^n → R^d at all candidate sensor locations with a derivative-informed projected neural network (DIPNet) [24][25][26]. Often, PDE-constrained high-dimensional parametric maps, such as the parameter-to-observable map F_d, admit low-dimensional structure due to the correlation of the high-dimensional parameters, the regularizing property of the underlying PDE solution, and/or redundant information in the data from all candidate sensors. When this is the case, the DIPNet can exploit this low-dimensional structure and parametrize a parsimonious map between the most informed subspaces of the input parameters and the output observables. The dimensions of the input and output subspaces are referred to as the "information dimension" of the map, which is often significantly smaller than the parameter and data dimensions. The architectural strategy that we employ exploits the compressibility of the map by first reducing the input and output dimensionality via projection onto informed reduced bases of the inputs and outputs. A neural network is then used to construct a low-dimensional nonlinear mapping between the two reduced bases. Error bounds for the effects of basis truncation and parametrization by neural networks are studied in [24,26,46].
For the input parameter dimension reduction, we use a vector generalization of an active subspace (AS) [54], which is spanned by the generalized eigenvectors (input reduced basis) corresponding to the r_M largest eigenvalues of the generalized eigenvalue problem

E_{m ∼ N(m_pr, Γ_pr)} [ ∇F_d(m)^T ∇F_d(m) ] v_i = λ_i^{AS} Γ_pr^{−1} v_i, (19)

where the eigenvectors v_i are ordered by the decreasing generalized eigenvalues λ_i^{AS}, i = 1, . . . , r_M. For the output data dimension reduction, we use a proper orthogonal decomposition (POD) [55,56], which uses the eigenvectors (output reduced basis) corresponding to the first r_F eigenvalues of the expected observable outer product matrix,

E_{m ∼ N(m_pr, Γ_pr)} [ F_d(m) F_d(m)^T ] φ_i = λ_i^{POD} φ_i, (20)

where the eigenvectors φ_i are ordered by the decreasing eigenvalues λ_i^{POD}, i = 1, . . . , r_F. When the eigenvalues of AS and POD both decay quickly, the mapping m → F_d(m) can be well approximated when m and F_d are projected onto the corresponding subspaces with small r_M and r_F; in this case, approximation error bounds for the reduced basis representation of the mapping are given by the trailing eigenvalues of the systems (19) and (20). This allows one to detect an appropriate "breadth" for the neural network via direct computation of the associated eigenvalue problems, removing the need for an ad-hoc neural network hyperparameter search for appropriate breadth. The neural network surrogate F̃_d of the map F_d then has the form

F̃_d(m) = Φ_{r_F} f_r( V_{r_M}^T m ),

where Φ_{r_F} ∈ R^{d×r_F} represents the POD reduced basis for the output, V_{r_M} ∈ R^{n×r_M} represents the AS reduced basis for the input, and f_r : R^{r_M} → R^{r_F} is the neural network mapping between the two bases, parametrized by weights w and biases b. Since the reduced basis dimensions r_F and r_M are chosen based on the spectral decay of the AS and POD operators, we can choose them to be the same; for convenience, we denote the reduced basis dimension by r. The remaining difficulty is how to properly parametrize and train the neural network mapping. While the use of the reduced basis representation for the approximating map allows one to detect appropriate breadth for the neural network, avoiding complex neural network hyperparameter searches and the associated nonconvex neural network trainings, how to choose an appropriate depth for the network is still an open question. While neural network approximation theory suggests deeper networks have richer representative capacities, in practice, for many architectures, adding depth eventually diminishes performance in what is known as the "peaking phenomenon" [57]. In general, finding an appropriate depth for, e.g., fully-connected feedforward neural networks requires re-training from scratch different networks with differing depths. In order to avoid this issue, we employ an adaptively constructed residual network (ResNet) parametrization of the mapping between the two reduced bases. This adaptive construction procedure is motivated by recent approximation theory that conceives of ResNets as discretizations of sequentially minimizing control flows [58], where such maps are proven to be universal approximators of L^p functions on compact sets. A schematic of our neural network architecture is shown in Fig. 1. This strategy adaptively constructs and trains a sequence of low-rank ResNet layers, where for convenience we take r = r_M = r_F or otherwise employ a restriction or prolongation layer to enforce dimensional compatibility.
The ResNet hidden states at layer i + 1 have the form

z_{i+1} = z_i + W2_i σ( W1_i z_i + b_i ), (22)

with W1_i ∈ R^{k×r}, W2_i ∈ R^{r×k}, and b_i ∈ R^k, where the parameter k < r is referred to as the layer rank; it is chosen to be smaller than r in order to impose a compressed representation of the ResNet latent space update (22). This choice is guided by the "well function" property in [58]. The ResNet weights w = {W1_i, W2_i, b_i}_i consist of all of the coefficient arrays in each layer. Given appropriate reduced bases of dimension r, the ResNet mapping between the reduced bases is trained adaptively, one layer at a time, until over-fitting is detected in training validation metrics. When this is the case, a final global end-to-end training is performed using a stochastic Newton optimizer [59]. This architectural strategy is able to achieve high generalizability with few and (input-output) dimension-independent data; for more information on this strategy, please see [26].

Fig. 1: A schematic representation of the derivative-informed projected neural network, using a ResNet as the nonlinear mapping between the reduced subspaces: active subspaces for the input parameters and POD for the output observables.
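Below is a minimal NumPy sketch of the projected surrogate with low-rank residual layers, as described above; the weights and reduced bases are random placeholders rather than trained or computed values, and tanh is an assumed activation.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, r, k, L = 100, 30, 10, 4, 5   # param dim, data dim, breadth, layer rank, depth

V = rng.standard_normal((n, r))     # stand-in input (AS) reduced basis
Phi = rng.standard_normal((d, r))   # stand-in output (POD) reduced basis
layers = [(rng.standard_normal((k, r)) * 0.1,   # W1_i: r -> k
           rng.standard_normal((r, k)) * 0.1,   # W2_i: k -> r
           np.zeros(k))                          # b_i
          for _ in range(L)]

def surrogate(m):
    """DIPNet-style forward pass: project, apply a low-rank ResNet, lift."""
    z = V.T @ m                                  # reduce input to r dims
    for W1, W2, b in layers:
        z = z + W2 @ np.tanh(W1 @ z + b)         # z_{i+1} = z_i + W2 s(W1 z_i + b)
    return Phi @ z                               # lift to observable space

m = rng.standard_normal(n)
print(surrogate(m).shape)   # (30,)
```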
By removing the dependence of the input-to-output map on its high-dimensional and uninformed subspaces (the complements of the low-dimensional and informed subspaces), we can construct a neural network of small input and output size that requires few training data. Since these architectures are able to achieve high generalization accuracy with limited training data for parametric PDE maps, they are especially well suited to scalable EIG approximation: they can be efficiently queried many times at no cost in PDE solves, and require few high-fidelity PDE solutions for their construction.
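The reduced bases above are estimated from samples. As a small illustration, the sketch below computes the POD output basis from Monte Carlo samples of the observable; the random low-rank samples stand in for actual PDE solves.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_samples, r_F = 30, 200, 5

# Stand-in observable samples F_d(m_i); in practice each column costs a PDE solve.
low_rank = rng.standard_normal((d, r_F)) @ rng.standard_normal((r_F, n_samples))
F_samples = low_rank + 0.01 * rng.standard_normal((d, n_samples))

# Monte Carlo estimate of E[F F^T] and its leading eigenvectors (POD basis).
C = (F_samples @ F_samples.T) / n_samples
eigvals, eigvecs = np.linalg.eigh(C)          # ascending order
order = np.argsort(eigvals)[::-1]
Phi = eigvecs[:, order[:r_F]]                 # output reduced basis, d x r_F
print(eigvals[order][:8])                     # fast decay after the first r_F
```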
DLMC with DIPNet surrogate
We propose to train a DIPNet surrogate F̃_d for the parameter-to-observable map F_d, so that π̂(y_i, W) can be approximated as

π̃(y_i, W) = (1/n_in) Σ_{j=1}^{n_in} exp( −½ ||W F̃_d(m_{i,j}) − y_i||²_{Γ_n^{−1}} ), (23)

where we omit the Gaussian constant 1/((2π)^{n_y/2} det(Γ_n)^{1/2}), since it appears in both the numerator and the denominator of the argument of the log in the expression for the EIG. To this end, we formulate the approximate EIG with the DIPNet surrogate as

Ψ_nn = (1/n_out) Σ_{i=1}^{n_out} [ log π_like(y_i | m_i, W) − log π̃(y_i, W) ]. (24)
Error analysis
Theorem 1. Assume that the parameter-to-observable map F_d and its surrogate F̃_d are bounded, i.e., there is a constant 0 < B < ∞ such that

||F_d(m)|| ≤ B and ||F̃_d(m)|| ≤ B for all m. (27)

Moreover, assume that the generalization error of the DIPNet surrogate is bounded by ε, i.e.,

( E_{m ∼ N(m_pr, Γ_pr)} [ ||F_d(m) − F̃_d(m)||² ] )^{1/2} ≤ ε. (28)

Then the error in the approximation of the normalization constant by the DIPNet surrogate can be bounded as

|π̂(y_i, W) − π̃(y_i, W)| ≤ C_i ε, (29)

for sufficiently large n_in and some constants 0 < C_i < ∞, i = 1, . . . , n_out. Moreover, the approximation error of the EIG can be bounded as

|Ψ_dl − Ψ_nn| ≤ C ε, (30)

for some constant 0 < C < ∞.
Proof. For notational simplicity, we omit the dependence of π̂ and π̃ on W and write F = W F_d and F̃ = W F̃_d. Note that the bounds (27) and (28) also hold for F and F̃, since F and F̃ are selections of some entries of F_d and F̃_d. By the definitions of π̂ in (18) and π̃ in (23), and the fact that |e^{−x} − e^{−x'}| ≤ |x − x'| for any x, x' > 0, the difference |π̂(y_i) − π̃(y_i)| is bounded by the sample average of the differences of the two potentials, which in turn is bounded by C_i times the sample root mean square error of F̃, where we used the Cauchy–Schwarz inequality in the last inequality, with C_i given by (31). For sufficiently large n_in, we have C_i < ∞ by ||y_i|| < ∞ and the assumption (27). Moreover, the error bound (29) follows from the assumption (28).
By the definitions of the EIGs (17) and (24), we have

|Ψ_dl − Ψ_nn| = | (1/n_out) Σ_{i=1}^{n_out} [ log π̃(y_i, W) − log π̂(y_i, W) ] |.

For sufficiently large n_in, the normalization constants π̂(y_i, W) and π̃(y_i, W) are bounded away from zero, i.e., π̂(y_i, W), π̃(y_i, W) ≥ c_i for some constants c_i > 0. Then, since |log a − log b| ≤ |a − b| / min(a, b), we can use the bound (29) to obtain the bound (30) with constant C = max_i C_i / c_i. □
Greedy algorithm
With the DIPNet surrogate, the evaluation of the DLMC EIG Ψ_nn defined in (24) does not involve any PDE solves. Thus, to solve the optimization problem we can directly use a greedy algorithm that requires only evaluations of the EIG, not of its derivative w.r.t. W. Let S_d denote the set of all d candidate sensors; we wish to choose r sensors from S_d that maximize the approximate EIG (approximated with the DIPNet surrogate). At the first step, t = 1, we select the sensor v* ∈ S_d corresponding to the maximum approximate EIG and set S* = {v*}. Then at step t = 2, . . . , r, with t − 1 sensors selected in S*, we choose the t-th sensor v* ∈ S_d \ S* corresponding to the maximum approximate EIG evaluated with the t sensors S* ∪ {v*}; see Algorithm 1 for the greedy optimization process, which can achieve (quasi-optimal) experimental designs with an approximation guarantee under suitable assumptions on the incremental information gain of an additional sensor; see [60] and references therein. Note that at each step the approximate EIG can be evaluated in parallel for each sensor choice S* ∪ {v} with v ∈ S_d \ S*.
Algorithm 1: Greedy algorithm to solve (36).
Require: data {y_i}_{i=1}^{N_s} generated from the prior samples {m_i}_{i=1}^{N_s}, candidate sensor set S_d, sensor budget r; initialize S* = ∅.
Ensure: optimal sensor set S*.
1: for t = 1, . . . , r do
2:   v* ← arg max_{v ∈ S_d \ S*} Ψ_nn(S* ∪ {v})
3:   S* ← S* ∪ {v*}
4: end for
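The greedy loop is easy to express in code. Below is a minimal sketch, assuming a generic black-box eig_estimate(S) that returns the (surrogate-based) EIG of a sensor subset; here it is a toy stand-in with diminishing returns, not the paper's DIPNet-based estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
d, r = 10, 4
gains = rng.uniform(0.5, 2.0, size=d)   # toy per-sensor information values

def eig_estimate(S):
    """Toy stand-in for the surrogate-based EIG of sensor subset S."""
    return np.log1p(sum(gains[v] for v in S))   # diminishing returns

def greedy_design(d, r):
    S_star = []
    for _ in range(r):
        candidates = [v for v in range(d) if v not in S_star]
        v_star = max(candidates, key=lambda v: eig_estimate(S_star + [v]))
        S_star.append(v_star)
    return S_star

print(greedy_design(d, r))
```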
Numerical results
In this section, we present numerical results for OED problems involving a Helmholtz acoustic inverse scattering problem and an advection-reaction-diffusion inverse transport problem, to illustrate the efficiency and accuracy of our method. We compare the approximated normalization constant and EIG of our method with 1) the DLMC truth computed with a large number of Monte Carlo samples, and 2) the DLMC sampled at the same computational cost (number of PDE solves) as our method, including DIPNet training.
Both PDE problems are discretized using the finite element library FEniCS [61]. The construction of training data and reduced bases (active subspace and proper orthogonal decomposition) is implemented in hIPPYflow [62], a library for dimension reduced PDE surrogate construction, building on top of PDE adjoints implemented in hIPPYlib [63]. The DIPNet neural network surrogates are constructed in TensorFlow [64], and are adaptively trained using a combination of Adam [65], and a Newton method, LRSFN, which improves local convergence and generalization [26,59,66].
Helmholtz problem
For the first numerical experiment, we consider an inverse wave scattering problem modeled by the Helmholtz equation with an uncertain medium in the two-dimensional physical domain Ω = (0, 3)²:

−Δu − k² e^{2m} u = f in Ω,

where u is the total wave field, k ≈ 4.55 is the wavenumber, and e^{2m} models the uncertainty of the medium, with the parameter m a log-prefactor of the squared wavenumber. The right-hand side f is a point source located at x = (0.775, 2.5). A perfectly matched layer (PML) boundary condition approximates a semi-infinite domain. The candidate sensor locations x_i are linearly spaced on the line segment between the edge points (0.1, 2.9) and (2.9, 2.9), with coordinates {(0.1 + 2i/35, 2.9)}_{i=0}^{49}, as shown in Fig. 2. The prior distribution for the uncertain parameter m is Gaussian, μ_pr = N(m_pr, C_pr), with zero mean m_pr = 0 and covariance C_pr = (5.0 I − Δ)^{−2}. The mesh used for this problem is uniform of size 128 × 128. We use quadratic elements for the discretization of the wave field u and linear elements for the parameter m, leading to a discrete parameter m of dimension 16,641. The dimension of the wave field u is 66,049; the wave is more than sufficiently resolved with regard to the Nyquist sampling criterion for wave problems. A sample of the parameter field m and the PDE solution u is shown in Fig. 2, with all 50 candidate sensor locations marked by circles.
The network has 10 low-rank residual layers, each with a layer rank of 10. For this numerical example, we demonstrate the effects of using different breadths in the neural network representation; in each case the ResNet learns a mapping from the first r_M basis vectors of the active subspace to the first r_M basis vectors of POD. In the case that r_M > 50, we use a linear restriction layer to reduce the ResNet latent representation to the 50-dimensional output. For the majority of the numerical results, we employ an r_M = 50 dimensional network. The neural network is trained adaptively using 4915 training samples and 1228 validation samples. Using 512 independent testing samples, the DIPNet surrogate was 81.56% accurate, measured as a percentage by

accuracy = 100 × ( 1 − (1/N_test) Σ_{i=1}^{N_test} ||F_d(m_i) − F̃_d(m_i)|| / ||F_d(m_i)|| ). (38)

For more details on this neural network architecture and training, see [26]. The computational cost of the 50-dimensional active subspace projector using 128 samples is equivalent to the cost of 842 additional training data; since the problem is linear, the additional linear adjoint computations are comparable to the costs of the training data generation. As we will see in the next example, when the PDE is nonlinear the additional active subspace computations are marginal. Thus, for a fair comparison at the same computational cost in PDE solves, we use 4915 + 842 = 5757 samples for simple MC. To test the efficiency and accuracy of our DIPNet surrogate, we first compute the log normalization constant log π(y) with our DIPNet surrogate for given observation data y generated by y = W F_d(m) + ε, where m is a random sample from the prior. We use in total 60,000 random samples for Monte Carlo (MC) to compute the normalization constant as the ground truth. Fig. 3 shows the log π(y) comparison for three random designs W that choose 15 sensors out of 50 candidates. We can see that the DIPNet surrogate converges to a value close to the ground-truth MC reference, while for the (simple) MC with 5757 samples, the approximated value indicated by the green star is much less accurate than the DIPNet surrogate using 60,000 samples at a similar cost in PDE solves. Note that the DIPNet training and evaluation cost for this small neural network is negligible compared to the PDE solves.
The left and middle figures in Fig. 4 illustrate the sample distributions of the relative approximation errors for the log normalization constant log π(y) and the EIG Ψ (n_out = 200) with (DIPNet MC with 60,000 samples) and without (simple MC with 5757 samples) the DIPNet surrogate, compared to the true MC with 60,000 samples. These results show that using the DIPNet surrogate we can approximate log π(y) and Ψ much more accurately and with less bias compared to simple MC. The sample distributions of the EIG at 200 random designs, compared to the designs chosen by the greedy optimization using the DIPNet surrogate for different numbers of sensors, are shown in the right figure, from which we can see that our method always chooses better designs, with larger EIG values than all the random designs. To show the effect of the truncated rank (breadth) of the DIPNet surrogate, we evaluate the log normalization constant log π(y) and the EIG Ψ with breadth = 10, 25, 50, 100 and compare with true MC and simple MC in Fig. 6. We can see that with increasing breadth the relative error decreases, but it gets worse when the breadth reaches 100. With breadth = 100, the difficulties of neural network training start to dominate and diminish the accuracy. We can also see this in the right part of the figure. The relative error of the EIG approximation reduces (close to) linearly with respect to the generalization error of the DIPNet approximation of the observables for network breadths 10, 25, and 50, which confirms the error analysis in Theorem 1. However, when the breadth increases to 100, the neural network becomes less accurate (without using more training data), leading to a less accurate EIG approximation.
Advection-diffusion-reaction problem
For the second numerical experiment, we consider an OED problem for an advection-diffusion-reaction equation with a cubic nonlinear reaction term. The uncertain parameter m appears as a log-coefficient of the cubic nonlinear reaction term. The PDE is defined in a domain Ω = (0, 1)² as

−∇ · (k∇u) + v · ∇u + e^m u³ = f in Ω, (39a)

where k = 0.01 is the diffusion coefficient. The volumetric forcing function f is a smoothed Gaussian bump located at x = (0.7, 0.7). The velocity field v is the solution of a steady-state Navier–Stokes equation with shearing boundary conditions driving the flow (see the Appendix in [24] for more information on the flow). The candidate sensor locations lie on a linearly spaced mesh-grid of points in (0.1, 0.9) × (0.1, 0.9), with coordinates {(0.1i, 0.1j), i, j = 1, 2, . . . , 9}. The prior distribution for the uncertain parameter m is a mean-zero Gaussian with covariance C_pr = (I − 0.1Δ)^{−2}. The mesh used for this problem is uniform of size 128 × 128. We use linear elements for both u and m, leading to a discrete parameter of dimension 16,641. Fig. 7 gives a prior sample of the parameter field m and the solution u, with all 100 candidate sensor locations shown as white circles. The neural network surrogate is trained adaptively using 409 training samples and 102 validation samples. Using 512 independent testing samples, the DIPNet network was 97.13% accurate (see equation (38)). The network has 20 low-rank residual layers, each with a layer rank of 10; the breadth of the network is r_M = r_F = 25. The computational cost of the 25-dimensional active subspace projector using 256 samples is equivalent to the cost of 34 additional training data. As noted before, when the PDE is nonlinear, the linear adjoint-based derivative computations become much less of a computational burden. Thus we use 409 + 34 = 443 samples for simple MC for a fair comparison.
We first examine the log normalization constant log π(y) computed with our DIPNet surrogate (built from 409 PDE solves) against the truth computed with 60,000 MC samples and the simple MC computed with 443 samples. Fig. 8 shows the log π(y) comparison for three random designs that select 15 sensors out of 100 candidates. We can see that DIPNet MC converges to a value close to the true MC curves, while the simple MC estimate (green star), computed with the same number of PDE solves (443) as the DIPNet construction, has much worse accuracy. The left figure of Fig. 9 shows the relative errors for log π(y) computed with the DIPNet surrogate and with simple MC using 443 samples, relative to the true MC with 60,000 samples, for 200 random designs. We see again that DIPNet gives better accuracy with less bias than simple MC. Fig. 10 shows the EIG approximations of three random designs with an increasing number of outer-loop samples n_out, using DIPNet MC with 60,000 (inner-loop) samples, simple MC with 443 samples, and true MC with 60,000 samples. We can see that the values of DIPNet MC are quite close to the true MC, while simple MC is far off. Relative errors of the EIG Ψ (n_out = 100) computed with the DIPNet surrogate and with simple MC for 200 random designs are given in the middle figure of Fig. 9. With the DIPNet DLMC Ψ_nn, we can use the greedy algorithm to find optimal designs. The DIPNet greedy designs are presented as the pink crosses in the right figure of Fig. 9. We can see that the designs chosen by the greedy algorithm have much larger EIG values than all 200 random designs.
Conclusions
We have developed a computational method based on DIPNet surrogates for solving large-scale PDE-constrained Bayesian OED problems to determine optimal sensor locations (using the EIG criterion) to best infer infinite-dimensional parameters. We exploited the intrinsic low dimensionality of the parameter and data spaces and constructed a DIPNet surrogate for the parameter-to-observable map. The surrogate was used repeatedly in the evaluation of the normalization constant and the EIG. We presented error analysis for the approximation of the normalization constant and the EIG, showing that the errors are of the same order as the DIPNet RMS approximation error. Moreover, we used a greedy algorithm to solve the combinatorial optimization problem for sensor selection. The computational efficiency and accuracy of our approach are demonstrated by two numerical experiments. Future work will focus on gradient-based optimization also using the derivative information of the DIPNet w.r.t. both the parameter and the design variables, on the use of different optimality criteria such as A-optimality or D-optimality, and on exploring new network architectures for intrinsically high-dimensional Bayesian OED problems.
Boosting app-based mobile financial services engagement in B2B subsistence marketplaces: The roles of marketing strategy and app design
With the rising availability of the internet and smartphones in subsistence markets over the last decade, formal institutions are increasingly delivering technology-enabled service innovations to improve subsistence B2B value chain efficiency (Chaudhuri, Gathinji, Tayar, & Williams, 2022; Prasad, Jaffe, Bhattacharyya, Tata, & Marshall, 2017). For example, app-based mobile financial services like M-Pesa in Africa and Bikash in Bangladesh, alongside Unilever's mobile app in India, are transforming subsistence markets. They do so by fostering financial inclusion, enhancing value chain efficiencies, empowering retailers to improve purchasing decisions, streamlining operations, and boosting profitability (Prasad et al., 2017). Relatedly, a report by the International Labor Organization (ILO) (2021) calls for subsistence micro-entrepreneurs to regularly engage with technology-based services (e.g., app-based mobile financial services), beyond mere adoption. The report suggests that sustained engagement with such services should optimize long-term performance and efficiencies in retail supply value chains. Similarly, scholarly research also highlights the potential of regular (continued) engagement with technology-based services in subsistence markets. Such engagement can enable bottom-up market learning through enhanced cognitive, emotional, and behavioral interactions (engagement), in turn boosting subsistence retail supply value chain performance (e.g., Akareem, Ferdous, & Todd, 2021; Gupta & Ramachandran, 2021; Mason & Chakrabarti, 2017).
Notwithstanding, these scholarly advances have only focused on resource-rich companies and national/global economies from business-to-consumer (B2C) contexts, and primarily examine the key drivers of customers' internal motivational states (vs. firm-initiated stimuli) (Hollebeek, 2019; Roy, Singh, Sadeque, Harrigan, & Coussement, 2023). This pertinent literature-based gap thereby limits both theoretical and managerial insight into subsistence business-to-business (B2B) customers' engagement, whether from the supplier's or the retailer's perspective, in the context of low-income (e.g., micro) businesses (Adhikary, Diatha, Borah, & Sharma, 2021; Berger & Nakata, 2013). Importantly, the findings attained in resource-rich B2C environments are expected to differ in B2B subsistence marketplaces, given their unique characteristics such as socially rich relationships between micro-suppliers and -retailers, which these micro-enterprises draw upon to address financial or skill-based constraints (Mason & Chakrabarti, 2017; Viswanathan et al., 2012; Viswanathan, Rosa, & Ruth, 2010).
Furthermore, power asymmetries and resource scarcity further distinguish subsistence marketplaces (Raghubanshi, Venugopal, & Saini, 2021). For example, a single player might wield considerable social influence within subsistence-based retail supply value chains (Prasad et al., 2017). Despite acknowledgement of the importance of these factors, the nature and dynamics characterizing B2B customer engagement with technology-driven services in subsistence retail supply value chains remain tenuous, thus exposing an important knowledge gap. To further highlight the motivation and significance of our study, we present Table 1, which succinctly outlines the research gaps pertaining to the key variables (constructs) of our study, thereby emphasizing areas where understanding is currently limited. Indeed, these complex supply chains, which provide last-mile delivery in the most challenging of circumstances, can lead to unique insights for research and practice. Therefore, addressing these gaps, we explore the following two research questions: a) what factors drive micro-enterprises' sustained engagement with technology-based services in B2B subsistence marketplaces, and b) how does this engagement impact the power and relationship dynamics in subsistence retail supply value chains?
To explore these research questions, we conduct two studies and focus on app-based mobile financial services as a primary manifestation of technology-based services among subsistence micro-suppliers and -retailers, including those with no formal bank accounts (Adhikary et al., 2021; Chaudhuri et al., 2022; see Web Appendix A). These services enable money transfers, purchases, withdrawals, deposits, and the opening of new accounts (Klapper, Margaret, & Jake, 2019). In Study 1, we adopt a theories-in-use approach to show that both transactional and relational strategies by app-based mobile financial service providers influence B2B customer engagement. Informed by these insights and drawing on Stimulus-Organism-Response (S-O-R) theory and Service-Dominant (S-D) logic, we develop a conceptual model that posits that marketing strategies directly impact micro-suppliers' app engagement, while also indirectly affecting micro-retailers' perceived power dynamics and satisfaction with micro-suppliers. In Study 2, we empirically test our model with 253 micro-supplier and -retailer pairs, demonstrating that B2B relationship marketing (i.e., customer support) boosts micro-suppliers' engagement. App functionality (vs. aesthetics) was also identified as a key engagement determinant. Conversely, transactional promotion practices, while widely used to boost engagement, were found to impede it.
By integrating S-D logic with the S-O-R framework in the B2B subsistence marketplace context, our study underlines the collaborative, interactive dynamics among technology providers, micro-suppliers, and -retailers. We thereby provide a theoretical understanding of B2B-focused technology-based service interventions in subsistence retail supply value chains. Doing so yields insight into the mechanisms through which the relationship marketing approach and app design (functionality) serve as stimuli impacting the organism's (supplier's) internal state. Furthermore, insights are gained regarding the other entity's (retailer's) response and how such interventions can transform power structures and bolster relational dynamics in subsistence B2B settings. These insights provide specific impetus for practitioners on how to better formulate their marketing strategies and design specific app features that boost B2B engagement with technology-driven services among subsistence micro-entrepreneurs. Our study also has implications for policymakers in promoting market harmonization and inclusion, and in contributing to the overall resilience and sustainability of subsistence retail supply value chains through engagement-enabled technology-driven services. We next discuss the key streams of literature that form the theoretical foundation of this study, followed by a discussion of the research method, the findings, and their implications.
Subsistence vs. mainstream market ecosystems
Market ecosystems consist of intricate networks where stakeholders like buyers, sellers, agents, and government bodies interact and co-create value, forming value chains for distributing offerings (Hollebeek, Kumar, & Srivastava, 2022; Vargo & Lusch, 2011). These chains are prevalent in both mainstream and subsistence markets, often operating independently yet interconnected, as depicted in Fig. 1 (Granados, Rosli, & Gotsi, 2022; Trienekens & Van Dijk, 2012). Mainstream markets typically involve large and small-to-medium enterprises serving a diverse range of customers, while subsistence markets are characterized by subsistence micro-suppliers and micro-retailers catering primarily to low-income customers, including those in isolated or rural areas (e.g., International Labor Organization (ILO), 2021; Maksimov, Wang, & Luo, 2017; Sridharan et al., 2014).
Regarding power dynamics in subsistence marketplaces, micro-suppliers are able to exercise coercive power by controlling essential resources, influencing micro-retailers' decisions and actions, and thereby significantly affecting the value chain (Boyle, Cornes, & Gilbert, 2016; Prasad et al., 2017). By contrast, non-coercive power stems from intrinsic qualities and relationships, fostering mutually beneficial partnerships (Prasad et al., 2017). Interestingly, studies highlight that third-party entities, including governments, multinationals, and service providers, can significantly influence power dynamics in subsistence marketplaces (Ireland & Webb, 2007). These entities may centralize control, thus exacerbating power imbalances, or contribute positively by empowering micro-retailers, thus enhancing non-coercive power and leading to a more equitable supply chain (Prasad et al., 2017). Our study aims to unpack these complex interactions and the role of third parties, in our case app-based mobile financial service providers, in modifying power balances and influencing subsistence value chains and their performance (Ireland & Webb, 2007). Furthermore, despite resource limitations, subsistence micro-enterprises often have relationally rich networks, characterized by empathy, loyalty, and enduring interdependencies, which are crucial for their performance within retail supply value chains (Hani et al., 2022; Mukherjee et al., 2020; Viswanathan, Rosa, & Ruth, 2010). Understanding the dynamics of power and relationship quality is crucial for improving B2B relationship satisfaction and overall efficiency in subsistence market supply chains, highlighting a significant research gap in the literature on B2B subsistence marketplaces.
App-based mobile financial services offer a useful innovation for subsistence markets. However, approximately 1.7 billion individuals globally do not use them and are unbanked (i.e., do not have a formal bank account; Pelletier, Khavul, & Estrin, 2020). This includes many subsistence micro-enterprises that rely on unrecorded cash transactions outside the formal banking system (BFP-B, 2018). The adoption of app-based mobile financial services is thus predicted to improve digital inclusion and business efficiency (Adhikary et al., 2021; BFP-B, 2018). However, challenges in ensuring regular technological post-adoption interactions persist, emphasizing the need to customize engagement drivers, or stimuli, in subsistence B2B retail supply value chains (Adhikary et al., 2021; Gupta & Ramachandran, 2021). We therefore delve into formal service providers, particularly those offering subsistence-based technology-enabled services designed to advance micro-suppliers' and -retailers' engagement with app-based mobile financial services. Our goal is to clarify the firm-based benefits of such engagement by conducting two studies. In Study 1, we apply a theories-in-use approach to gather initial insight. Then, using these findings along with the existing literature, we develop and test a model grounded in the stimulus-organism-response (S-O-R) framework (Jacoby, 2002) in Study 2, which offers novel insight to B2B marketing practitioners regarding the implementation of customized marketing strategies to boost subsistence-based B2B customers' engagement with app-based mobile financial services and to understand its outcomes.
Study 1: theories-in-use approach
In subsistence markets, adopting an inside-out micro-foundational lens is crucial to examine key dynamics at a more granular level, extending beyond the limitations of broader meso-/macro-level vantage points (Viswanathan, 2017). As theories and practices based on developed markets may lack applicability in the subsistence context, reassessment of their applicability is essential (Venugopal & Viswanathan, 2017). For example, while some knowledge exists about stimuli that may foster engagement with technology-driven services (e.g., Eisingerich et al., 2019; Roy et al., 2023), limited insight exists into the practices and strategies used by app-based mobile financial service providers to stimulate engagement among their subsistence-based micro-enterprise customers. While prior research has explored outcomes of technology-driven service adoption (e.g., revenue/efficiency growth; Adhikary et al., 2021), its influence on stakeholder engagement in subsistence retail supply value chains remains under-explored. Addressing this issue, the proposed theories-in-use approach has elevated relevance (Zeithaml et al., 2020). By delving into the bottom-up insights of the concerned stakeholders, we contribute to B2B theorization on subsistence marketplaces.
We adopt the theories-in-use approach to garner insight from app-based mobile financial service (MFS) providers, as well as from a convenience sample of subsistence entrepreneurs (micro-retailers and -suppliers). With the assistance of the outreach office of a leading Bangladeshi university, at which one of the researchers is employed, we obtained permission to interview key informants from different financial service providers in an emerging country, Bangladesh. To be eligible to participate, informants were required to have extensive experience (10+ years) in the marketing/customer service department of their organization and to be members of the senior management team. At the time of the study, 11 MFS providers operated in Bangladesh. After receiving approval to undertake the study, we approached all of these providers, ensuring participant confidentiality. Six of the MFS providers agreed, and we subsequently conducted six exploratory interviews with senior practitioners employed at app-based mobile financial service providers (see Web Appendix B for detailed participant demographics). The participants included two vice-presidents of operations, two customer experience directors, and two expert marketing managers. We first asked the informants to provide their insights on the initiatives (stimuli) commonly deployed by technology-driven service providers when delivering their offerings to B2B subsistence entrepreneurs. Next, we asked them about initiatives taken specifically by mobile financial service providers to foster B2B customers' engagement in subsistence markets. Furthermore, with the assistance of a local nonprofit organization, we were able to contact and conduct eight exploratory interviews with a convenience sample of subsistence entrepreneurs, comprising five micro-retailers and three micro-suppliers (see Web Appendix B). We asked these participants about their experiences with app-based mobile financial services and their impact on their businesses. All interviews, with service providers and subsistence entrepreneurs alike, were carried out in the local Bengali language and translated into English by two members of our research team, who possess bilingual proficiency.
The exploratory interviews revealed that app-based mobile financial service providers commonly utilize four stimuli to attract and engage B2B subsistence customers (see Table 2): customer development support, sales promotion offers, and app design (i.e., aesthetics and functionality). The interviews also uncovered insight into the participants' regular engagement with app-based mobile financial services and its positive impact on subsistence businesses. Participants reported experiencing benefits including reduced information asymmetry, less reliance on external enforcement, increased rewards, and stronger supplier/retailer relationships. As outlined in Table 2, these responses unpacked the importance of non-coercive power and B2B relationship satisfaction in subsistence retail supply value chains. Moreover, customers of app-based mobile financial services were required to use the same provider/brand for transactions and business activities, representing a general industry norm in most subsistence markets. Next, we draw on the interview findings and prior literature to develop the proposed conceptual model.
Table 2
Summary of findings using the theories-in-use approach.

While the S-O-R framework is traditionally rooted in consumer psychology, the multifaceted and dynamic nature of B2B inter-organizational relationships has been emphasized (Möller, 2013), suggesting the relevance of frameworks like S-O-R for analyzing complex B2B interactions and behaviors (Yoo & Kim, 2019). Cowan, Paswan, and Van Steenburg (2015) reiterate this notion by highlighting the interconnected nature of B2B engagement, suggesting that a firm's interactions within one organization can elicit responses from its partner organization. Relatedly, S-D logic suggests that, in the B2B context, value can be co-created through actors' interactions, emphasizing the organism's role in shaping its responses in collaboration with the stimulus provider (Hollebeek, 2019; Vargo et al., 2023; Vargo & Lusch, 2011). In the B2B subsistence marketplace context, this perspective shifts the focus towards understanding how micro-suppliers engage with technology (i.e., the stimulus) not just as users but also as value co-creators, influencing both their own operational efficiencies and micro-retailers' satisfaction (i.e., the response).
Building on B2B-based S-D logic and drawing on our Study 1 findings, we adapt the S-O-R framework to the B2B subsistence marketplace context (see Fig. 1). We argue for the framework's relevance in exploring subsistence micro-supplier and -retailer dynamics, where the micro-supplier's technological engagement serves as a stimulus that influences operational efficiencies and, consequently, retailer satisfaction. This cross-entity dynamic, in which the micro-supplier's actions in turn influence the micro-retailer's outcomes, aligns with Cowan et al.'s (2015) observation of the interconnected nature of B2B engagement and the potential of organizational actions to impact partner outcomes through power-benefit dynamics.
The conceptual model (Fig. 2) depicts the impact of specific stimuli (S) delivered by mobile financial service providers (e.g., customer development support, sales promotion offers, or app design aspects). These stimuli influence subsistence micro-suppliers' engagement with app-based mobile financial services, while also engaging them in value co-creation processes. This engagement and the subsequent value co-creation, in turn, affect subsistence micro-retailers' behavioral responses (R), notably their perception of the supplier's non-coercive power and their relationship satisfaction with the micro-supplier. Integrating an S-D logic-informed S-O-R framework in the B2B subsistence marketplace context enhances our theoretical and managerial understanding of technology's role in transforming power structures and improving relational dynamics (Gupta & Ramachandran, 2021). We next outline the hypotheses, as depicted in the proposed conceptual model (Fig. 2).
Relational customer development support
Customer development support has been defined as the degree to which a B2B customer perceives a provider to assist him/her in the development of brand-related knowledge and/or competence (e.g., customer support, training, or education; Akareem et al., 2021; Karpen, Bove, Lukas, & Zyphur, 2015). Our exploratory practitioner interviews (see Table 2) emphasized the importance of customer development support as an industry-wide practice to stimulate B2B subsistence customers' engagement with technology-driven services. This practitioner-based theory-in-use is congruent with prior research indicating the importance of relational customer development support strategies for fostering customer adoption and engagement in the context of new technology-enabled service innovation performance (Kim, Younghoon, Myeong-Cheol, & Jongtae, 2015). Relatedly, Akareem et al. (2021) uncover that subsistence consumers' behavioral engagement with a technology-enabled service innovation was higher when customer development support was provided (vs. not provided).
We argue that a service provider's investment in customer development support initiatives will act as a key resource for subsistence entrepreneurs to better understand the app-based service, in turn raising their engagement with the app (e.g., usage; Hollebeek et al., 2014). For example, as highlighted in Fig. 2, customer development support should endow subsistence micro-suppliers with the necessary skills and knowledge to regularly engage with the app, reducing perceived usage barriers (e.g., lacking app-related trust, education, or technical skills). We hypothesize: H1. Perceived customer development support boosts micro-suppliers' app-based mobile financial services engagement.
Transactional-based sales promotion offers
Sales promotion offers are monetary (e.g., cash discounts), or non-monetary (e.g., gifts), short-term (transactional) incentives offered to stimulate sales (Chandon, Wansink, & Laurent, 2000). One of our informants summarized the importance of using transactional practices to stimulate mobile financial service adoption and engagement: "Our MFS systems and apps play a crucial role in supporting our small-sized B2B clients with their daily business operations. We understand that for this segment, transaction costs can sometimes be a concern. To address this, we have a short-term marketing strategy that focuses on offering cash discounts, bonuses, rebates, and prizes. It's a win-win situation as it not only helps our clients but also acts as an incentive to keep them engaged and actively using our services." (Customer Experience Director).
Though sales promotion offers can lower brand equity (e.g., by stimulating switching behavior; Valette-Florence, Guizani, & Merunka, 2011), industry reports suggest that they nevertheless represent a significant portion of many firms' marketing budgets targeting low- and middle-income customers (Mukherjee, Malviya, & Thakkar, 2022). For example, sales promotion expenditure secures a large portion of such budgets to induce resellers, salespersons, and customers amid increased competition in emerging markets (Mukherjee et al., 2022). In 2019, app-based mobile financial services providers in India spent almost US$1 billion on sales promotion offers (e.g., discounts, cashbacks) to retain their customers and/or attract new ones (Palepu & Sharma, 2019). In Bangladesh, mobile financial services providers' sales promotion offers tend to primarily attract unbanked customers and/or boost market share in highly competitive markets (Ehsan, Musleh, Gomes, Ahmed, & Ferdous, 2019; Yan, Siddik, Akter, & Dong, 2021). As most subsistence micro-entrepreneurs are resource constrained, we posit that sales promotion offers should generate additional resources for these entrepreneurs and act as key customer engagement stimuli, in turn inducing subsistence micro-entrepreneurs to regularly interact with app-based mobile financial services. We propose: H2. Sales promotion offers boost micro-suppliers' app-based mobile financial services engagement.
App aesthetics
Drawing on Homburg, Schwemmle, and Kuehnl (2015), we define app aesthetics as a micro-supplier's perception of a mobile financial service app's appearance and attractiveness. Our exploratory practitioner interviews highlighted that app aesthetics are primarily designed to stimulate customer engagement, while remaining consistent with the service provider's brand image. For example, as noted in Table 2, one of the interviewees emphasized: "I agree that app appearance is standardized for a variety of reasons, such as building a consistent brand image, but most of the MFS providers still invest in upgrading and improving the app appearance, which can be important to drive engagement" (Customer Experience Director). Industry-based reports likewise suggest that practitioners invest in app aesthetics to increase B2B subsistence customers' engagement (Fang, Zhao, Wen, & Wang, 2017), consistent with prior research showing that well-designed apps tend to create a good first impression, positively affecting users' (e.g., ease-of-use) perceptions and boosting their usage frequency (Tarute, Nikou, & Gatautis, 2017).
Leveraging the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT), app aesthetics align closely with Perceived Ease of Use (PEOU) (Davis, 1989) and Effort Expectancy (Venkatesh et al., 2012). This alignment suggests that an app's aesthetic design contributes to its perceived ease of use, which is instrumental in fostering continued engagement with the technology. Furthermore, the functional aspects of app design can be linked to TAM's Perceived Usefulness (PU) and UTAUT's Performance Expectancy, highlighting the role of app functionality in enhancing user-perceived app utility and supporting continued engagement and use (Venkatesh et al., 2012). Therefore, while we focus on app aesthetics, the foundational notions of ease of use and perceived usefulness, central to both the TAM and UTAUT perspectives, are echoed in our exploration of users' technology adoption and continued engagement. These considerations underscore the relevance of integrating traditional technology acceptance models with specific features like app aesthetics and functionality to comprehensively understand user engagement in the B2B subsistence marketplace context. We thus posit that app aesthetics will stimulate B2B subsistence entrepreneurs' engagement with app-based mobile financial services, as follows: H3. App aesthetics boost micro-suppliers' app-based mobile financial services engagement.
App functionality
The functionality aspect of a product's design reflects the utilitarian benefits that customers receive (Homburg et al., 2015). In our context, app functionality reflects the degree to which subsistence micro-entrepreneurs perceive an app-based mobile financial service to fulfil its purpose. The app-based mobile financial service's utilitarian performance attributes include its ability to send or receive money, recharge, take cash out, make payments, pay bills, self-register, scan QR codes, and provide security updates (Adhikary et al., 2021), as one of our interviewees reiterates: "For this segment, app functionality is absolutely crucial! It's what shapes how these micro-businesses interact with our mobile financial services. A top-notch, user-friendly functional app can really grab their attention and keep them coming back for more, ensuring their financial transactions are smooth and hassle-free, and this is what this segment requires" (Marketing Manager). Prior research suggests that an app's functionality conveys not only its quality and reliability, but also how it meets, or exceeds, users' utilitarian expectations (Lee, 2018). Given the resource constraints faced by subsistence micro-entrepreneurs, enhanced app functionality can improve their understanding of the app's various services, leading to increased engagement. We propose: H4. App functionality positively stimulates micro-suppliers' app-based mobile financial services engagement.
Micro-supplier's engagement and micro-retailer's responses
Power, a stakeholder's ability to influence another's behavior, can take coercive or non-coercive forms (Geyskens & Steenkamp, 2000). Coercive power relies on force or threats to ensure compliance, or discourages non-compliance by posing negative consequences (Han et al., 2022). Non-coercive power, on the other hand, involves implicit influence through suggestions, assistance, or adherence to norms (Ramaseshan, Yip, & Pae, 2006). Our interviews (see Table 2) reveal that subsistence micro-suppliers exert significant influence (power) over micro-retailers in subsistence value chains, congruent with the subsistence retail supply value chain literature (Granados et al., 2022; Prasad et al., 2017). For instance, one of our micro-retailers described how their response towards micro-suppliers' non-coercive power is shaped by micro-suppliers' engagement with app-based mobile financial services: "As a poor shopkeeper, I face numerous hurdles. We were always worried that our supplier, who often holds the 'khomota' (power), would suddenly ask us for immediate repayments for goods. It required us to travel distances, which made me keep my business closed, and my poor education made me less confident to keep a record of my transactions with my suppliers. After I saw my suppliers started using mobile money apps to do business, I think my relationship with them has deepened. I think I am not in a position where they (the supplier) will impose anything on me, and I am also able to track my transactions, or at least ask my son, who studies in class 8, to read the payment records for me" (Micro-retailer: tin-shed garments shop owner).
Drawing on the S-O-R framework, we posit that the micro-supplier's engagement with app-based mobile financial services (O) acts as a transformative force in the subsistence value chain by influencing the retailer's response (R) to the supplier's power and relational dynamics. According to industry reports, when a focal subsistence B2B actor engages with technology, their operational efficiency, and their commitment to invest in resources that enhance the ease, reliability, and effectiveness of their interactions with their B2B partners, tend to improve (BFP-B, 2018). These operational enhancements are critical in shifting micro-retailers' perception to recognize micro-suppliers' non-coercive power, a power predicated on mutual respect, reliability, and a willingness to support without imposing undue pressure (Cowan et al., 2015). For example, in our research context, micro-suppliers' engagement should not only facilitate more streamlined operations but should also yield enhanced micro-retailer perceptions of suppliers' reliability and responsiveness. We contend that this perceptual shift directly boosts micro-retailers' positive responses to micro-suppliers' non-coercive power, marked by an ability to influence without coercion (Cowan et al., 2015; Ramaseshan et al., 2006). Furthermore, integrating this insight with S-D logic (Hollebeek, 2019; Vargo et al., 2023; Vargo & Lusch, 2011), we acknowledge that the co-creation of value between micro-suppliers and micro-retailers is facilitated by technological engagement. This process underlines the organism's (micro-supplier's) active role in shaping the (micro-retailer's) response through reciprocal, value-creating interactions, highlighting the importance of non-coercive power as a collaborative, value-driven construct in B2B subsistence markets. We propose: H5. Micro-suppliers' engagement with app-based mobile financial services boosts micro-retailers' response to suppliers' non-coercive power.
We build on the theoretical foundation laid out in H5 and next focus on the consequences of micro-retailers' positive perceptions of micro-suppliers' non-coercive power for their overall satisfaction with the supplier-retailer relationship. The positive relational dynamic enabled by micro-suppliers' engagement not only uplifts micro-retailers' perception of power dynamics but also cements the foundation for a satisfying, mutually beneficial relationship between micro-suppliers and -retailers.
Prior research suggests that non-coercive power is characterized by accommodative and responsive behavior, in turn promoting trust, cooperation, and the willingness to work towards solutions (Ramaseshan et al., 2006; Prasad et al., 2017). In our study context, micro-retailers' positive response, influenced by micro-suppliers' engagement with mobile financial services (O), represents a critical element of the response (R) in our model. This response is not merely about operational transactions but encompasses a broader spectrum of relational satisfaction derived from a sense of mutual respect, empathy, and collaborative problem-solving. Evidence from our interviews, such as that of the micro-retailer who felt empowered and valued by using the supplier's mobile payment app, vividly illustrates this relational satisfaction dynamic: "You know, ever since our suppliers started using those fancy mobile payment apps, things have gotten so much better for us (micro-retailers). We were confused when we heard about it, but after using it, I see they can handle transactions faster and smoother, which makes our business exchanges with our suppliers way more satisfying. I feel I have the same status as my supplier. Actually, it now feels that they really care about us, and it feels great! I think this has given me the confidence to better serve my customers" (Micro-retailer: mobile shoe cobbler). Such narratives echo the broader implications of non-coercive power in enhancing B2B relationship satisfaction, particularly in subsistence contexts characterized by regular, one-on-one interactions (Mukherjee et al., 2020; Viswanathan, Sridharan, & Ritchie, 2010).
Past studies highlight that rising levels of perceived non-coercive power between channel partners can boost B2B relationship satisfaction by creating a positive social atmosphere characterized by respect, empathy, and the exchange of ideas (Cowan et al., 2015; Geyskens, Steenkamp, & Kumar, 1999). This positive social environment can, in turn, influence stakeholders' perception of their relationships with their other partners (Cowan et al., 2015). Reflecting S-D logic's emphasis on interactive value co-creation, this suggests that a positive response to non-coercive power is a manifestation of co-created value. This reinforces the notion that satisfaction in B2B relationships, especially in subsistence markets, is intricately linked to the collaborative efforts and shared success of micro-suppliers and -retailers (Vargo et al., 2023; Viswanathan, Sridharan, & Ritchie, 2010). We propose: H6. Micro-retailers' response to suppliers' non-coercive power boosts supplier-retailer relationship satisfaction.
Study 2: field study
To test the model, a dyadic sample comprising subsistence micro-suppliers and their primary B2B customers (i.e., micro-retailers) in Bangladesh was deployed in a field study. This context was chosen for the following reasons. First, the informal (subsistence) sector accounts for over 40% of GDP in Bangladesh, and almost 50% of total employment (Mukherjee et al., 2020). Second, the majority of the approximately 7 million subsistence (informal sector) enterprises in Bangladesh are small micro-enterprises operating under resource constraints (Mukherjee et al., 2020). Yet, these enterprises show relatively high technology adoption rates for mobile financial services (BFP-B, 2018; The World Bank Group, 2019). Third, a Bangladeshi Government initiative, Digital Bangladesh, was launched to encourage technology adoption, including mobile financial services (Zaman, 2019), thus fitting our selected app-based context. Finally, Bangladesh is recognized as one of the largest mobile money markets globally, with over 110 million users making transactions worth US$100 billion in 2022 (Liaquat, 2022).
Sample and data collection
Collecting data in subsistence marketplaces can be challenging, primarily due to potential biases (e.g., common method bias), false responses, or the non-equivalence of target constructs (Ingenbleek, Tessema, & Van Trijp, 2013). To overcome these issues, we adhered to the guidelines of Christensen, Siemsen, Branzei, and Viswanathan (2017) and Ingenbleek et al. (2013). For example, the questionnaire was written in English and then translated into Bengali to ensure high interpretability of the survey items, before being back-translated into English by a team of business academics fluent in English and Bengali. The Bengali version was piloted among 8 subsistence micro-suppliers and 10 subsistence micro-retailers, resulting in revisions to some of the wording to improve comprehensibility, prior to it being back-translated again.
No official registry of informal/subsistence micro-enterprises was available. Thus, to gain access to participants, a local NGO specializing in subsistence micro-enterprises was consulted to help source potential subsistence micro-suppliers. The research team, assisted by trained research assistants, screened, selected, and administered the participant surveys (Web Appendix A provides visual images of the data collection process undertaken by our research team in the field). Eligible subsistence micro-suppliers were screened to select only those who, in the past six months, had conducted transactions with micro-retailers using the same provider's app-based mobile financial services (i.e., both micro-suppliers and -retailers using the same company's app). Micro-suppliers, typically located in specific suburbs of major Bangladeshi cities, were identified with the NGO's assistance. To mitigate potential self-selection bias, we recruited respondents as randomly as possible within the constraints of the subsistence context (Christensen et al., 2017; Ingenbleek et al., 2013). Specifically, we randomly approached micro-suppliers and -retailers in different locations in the major cities of Dhaka and Chittagong, aiming for a diverse cross-section of participants. This strategy was intended to reduce the likelihood of sampling bias by not limiting our respondent pool to those who might be more readily accessible or more visibly interested in app-based mobile financial services (Christensen et al., 2017). Overall, 591 subsistence micro-suppliers were approached to participate in the study, with 253 agreeing. In addition, as most subsistence B2B value chains function on a relational basis (Borchardt, Ndubisi, Jabbour, Grebinevych, & Pereira, 2020), both micro-supplier and micro-retailer perspectives were considered. Participating subsistence micro-suppliers were requested to recommend to the research team two customers (i.e., subsistence micro-retailers) with whom they had transacted via the same mobile financial services app (provider) in the last six months. If the first micro-retailer did not agree to participate, the second was contacted.
Several questionnaire items were included to characterize the participants: (1) average monthly income; (2) length of business tenure; (3) number of employees (<10); and (4) micro-supplier offerings targeting micro-retailers (serving subsistence consumers). The majority of the micro-suppliers (47.83%) reported an average monthly income of BDT 20,001-30,000 (approximately USD 185-277), while most of the micro-retailers (43.08%) reported revenue of BDT 10,000 (approximately USD 92) or below. On average, eligible micro-suppliers employed approximately six individuals, including the owner (SD = 2.85), and micro-retailers approximately two (SD = 0.76). In terms of business tenure, micro-suppliers had been operating for around ten years (SD = 8.08) and micro-retailers for around five years (SD = 2.02). Monthly income, business tenure (experience), number of employees, and type of business for micro-suppliers and micro-retailers were included in the model as control variables.
Measurement
The measurement scales were adapted from prior literature (see Table 3). As noted in Study 1, we propose two sets of drivers that stimulate micro-suppliers' engagement with app-based mobile financial services: app design and marketing strategies. App design includes app functionality and app aesthetics, and marketing strategies include sales promotion offers (i.e., transactional) and customer development support (i.e., relational) practices. App functionality refers to the degree to which B2B actors (here, subsistence micro-suppliers) perceive a product's (i.e., the app-based mobile financial service's) ability to effectively fulfil its purpose, measured using three items adopted from Homburg et al. (2015). App aesthetics reflects a micro-supplier's perception of an app's appearance, measured by three items adopted from Homburg et al. (2015). Sales promotion offers refer to the micro-supplier's perceived intensity/frequency of such offers (i.e., by the mobile financial service provider), measured through three items from Yoo, Donthu, and Lee (2000). Customer development support refers to micro-suppliers' perception of the mobile financial service provider's assistance in developing knowledge and competence, measured through four items adapted from Akareem et al. (2021) and Karpen et al. (2015).
Engagement with app-based mobile financial services was treated as a higher-order multidimensional variable comprising cognitive, affective, and behavioral dimensions (Hollebeek et al., 2014), as perceived by micro-suppliers. We adopted four items from Geyskens and Steenkamp (2000) to measure micro-retailers' perceived non-coercive micro-supplier power. To measure retailer-perceived relationship satisfaction, three items were adapted from Grace and Weaven (2011). Web Appendix C shows the correlations between the constructs, including the control variables, reliability, and average variance extracted (AVE) values.
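For readers interested in the mechanics of such a measurement setup, the following is a minimal sketch of how a second-order engagement construct of this kind can be specified in the lavaan package for R. The item names (ce1-ce3, ae1-ae3, be1-be3) and the data frame supplier_df are hypothetical placeholders rather than the study's actual variable labels, so this is a template under stated assumptions, not the authors' estimation script.

```r
# Minimal sketch of a second-order CFA for the engagement construct in lavaan.
# Item names (ce1-ce3, ae1-ae3, be1-be3) and `supplier_df` are hypothetical
# placeholders, not the study's actual variable labels.
library(lavaan)

cfa_model <- '
  cognitive  =~ ce1 + ce2 + ce3   # first-order cognitive engagement
  affective  =~ ae1 + ae2 + ae3   # first-order affective engagement
  behavioral =~ be1 + be2 + be3   # first-order behavioral engagement
  engagement =~ cognitive + affective + behavioral  # second-order factor
'

fit <- cfa(cfa_model, data = supplier_df, std.lv = TRUE)
summary(fit, fit.measures = TRUE, standardized = TRUE)
```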
Table 3
Measurement items.

Customer development support: The degree to which B2B subsistence entrepreneurs (micro-suppliers) perceive the mobile financial service provider as assisting them to develop knowledge and competence when dealing with the provider's offerings (Akareem et al., 2021; Karpen et al., 2015) [CR = 0.96, AVE = 0.89]
The mobile financial service provider:
Shares useful information with me. 0.90
Helps me become more knowledgeable. 0.88
Provides me with the advice I need to use their offerings successfully. 0.86
Offers me expertise that I can learn from. 0.78

Sales promotion offers: The intensity/frequency of the sales promotion offers provided by the service provider (Yoo et al., 2000) [CR = 0.84, AVE = 0.66]
The mobile financial service provider:
Frequently offers sales promotion offers. 0.91
Often presents sales promotional offers to me. 0.77
Emphasizes sales promotion offers. 0.76

App aesthetics: The degree to which a B2B actor perceives the appearance of a product (the app-based mobile financial service) (Homburg et al., 2015) [CR = 0.92, AVE = 0.74]
The app-based mobile financial service:
Is visually striking. 0.96
Is good looking. 0.91
Is appealing. 0.86

App functionality: The degree to which the app-based mobile financial service effectively fulfils its purpose (Homburg et al., 2015) [CR = 0.90, AVE = 0.77]
The app-based mobile financial service:
Is likely to perform well. 0.93
Is capable of doing its job. 0.83

Micro-supplier's non-contingent use of non-coercive power: The use of non-coercive power involves rewards and assistance, i.e., the bestowal of consequences that are evaluated as desirable, without any punishment involved, from a micro-supplier (Geyskens & Steenkamp, 2000) [CR = 0.96, AVE = 0.88]
This supplier freely offers its expertise to make our firm stronger and a better partner. 0.96
This supplier provides information and/or assistance without requiring specific behavior in return from our firm. 0.92
This supplier unconditionally shares important information with our firm. 0.91
From our association with this supplier, we receive various rewards and benefits with no strings attached. 0.87

Relationship satisfaction: The degree to which the B2B subsistence entrepreneur (micro-retailer) perceives the relationship with its subsistence micro-supplier to be satisfying, productive, and worthwhile (Grace & Weaven, 2011) [CR = 0.92, AVE = 0.81]
Over the past six months, when doing business with my supplier using the mobile financial service app:
Our relationship has been productive. 0.94
Our relationship has been satisfactory. 0.91
The time and effort we spent in our relationship have been worthwhile. 0.86

Note - *: Item dropped due to low factor loading.
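As a point of reference, composite reliability and AVE values of the kind reported above are conventionally computed from standardized loadings as CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of (1 - loading^2)) and AVE = mean squared loading. The sketch below illustrates this with the sales promotion loadings from Table 3 as a worked example; values reported in a paper may differ slightly when derived from the full measurement model rather than from rounded loadings.

```r
# Minimal sketch of the conventional composite reliability (CR) and average
# variance extracted (AVE) formulas, computed from standardized loadings.
# Worked example uses the sales promotion offers loadings from Table 3.
cr_ave <- function(lambda) {
  cr  <- sum(lambda)^2 / (sum(lambda)^2 + sum(1 - lambda^2))  # composite reliability
  ave <- mean(lambda^2)                                       # average variance extracted
  c(CR = round(cr, 2), AVE = round(ave, 2))
}

cr_ave(c(0.91, 0.77, 0.76))  # approx. CR = 0.86, AVE = 0.67
```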
Common method bias testing
As the constructs in our study were measured using self-report surveys, a theoretically unrelated construct (i.e., attitude towards social media advertisements) was included as a marker variable. Post-hoc CMB testing showed that all the variable correlations remained statistically significant after inclusion of the marker variable, indicating that CMB is not an issue (Malhotra, Kim, & Patil, 2006). As noted, the dyadic sample used in our study also mitigated CMB-related issues (Podsakoff, MacKenzie, Lee, & Podsakoff, 2003).
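For illustration, the marker-variable adjustment underlying this check (Lindell & Whitney, 2001, as applied by Malhotra et al., 2006) can be sketched as follows. The data frame and column names here are hypothetical placeholders, not the study's actual dataset, and the function is a simplified rendering of the technique rather than the authors' exact procedure.

```r
# Minimal sketch of the marker-variable CMB check (Lindell & Whitney, 2001;
# Malhotra et al., 2006). Assumes a data frame `d` of composite construct
# scores plus a theoretically unrelated marker column; names are hypothetical.
cmb_check <- function(d, marker = "sm_ad_attitude") {
  r     <- cor(d)
  foc   <- setdiff(colnames(r), marker)
  r_m   <- min(abs(r[marker, foc]))           # smallest marker correlation proxies method variance
  r_adj <- (r[foc, foc] - r_m) / (1 - r_m)    # CMV-adjusted correlations
  diag(r_adj) <- 1
  n     <- nrow(d)
  t_adj <- r_adj * sqrt((n - 3) / (1 - r_adj^2))  # significance test of adjusted r
  diag(t_adj) <- NA
  list(adjusted_r = r_adj, t = t_adj)
}
# Correlations that remain significant after adjustment suggest CMB is not a concern.
```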
Endogeneity testing
CMB testing, and the inclusion of control variables for both micro-suppliers and micro-retailers, reduced endogeneity bias concerns. However, the possibility remained that the app design and marketing strategy factors (stimuli) may be correlated with the error term of micro-suppliers' engagement with app-based mobile financial services (Semadeni, Withers, & Trevis Certo, 2014). Similarly, micro-suppliers' engagement and non-coercive power might be correlated with the error term of relationship satisfaction. As Zaefarian, Robson, Najafi-Tavani, and Spyropoulou (2023) recommend, testing was conducted to check our conceptual model's robustness to endogeneity. In the absence of a suitable instrumental variable, the Gaussian copula estimation approach of Park and Gupta (2012) can be applied. Therefore, using the REndo package in R, Gaussian copulas were computed for each of the explanatory variables. Non-significant copula coefficients indicated that the explanatory variables were not subject to endogeneity bias (Zaefarian et al., 2023; see Web Appendix D).
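A minimal sketch of such a copula-based check using REndo's copulaCorrection() is shown below. The variable names and data frame are illustrative placeholders, and details such as the number of bootstrap replications are assumptions rather than the study's reported settings.

```r
# Minimal sketch of the Gaussian copula endogeneity check (Park & Gupta, 2012)
# via REndo's copulaCorrection(). Variable names (`engagement`, `custdev`,
# `promo`, `aesthetics`, `functionality`) and `supplier_df` are illustrative
# placeholders; the number of bootstraps is an assumed, not reported, setting.
library(REndo)

fit_copula <- copulaCorrection(
  engagement ~ custdev + promo + aesthetics + functionality |
    continuous(custdev, promo, aesthetics, functionality),  # regressors treated as potentially endogenous
  data      = supplier_df,
  num.boots = 1000
)

summary(fit_copula)  # non-significant copula terms indicate no endogeneity bias
```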
Results

Table 4 shows that customer development support acts as a significant positive driver (stimulus) of engagement (β = 0.33, SE = 0.05, p < .01), supporting H1. In H2, we predicted a positive relationship between sales promotion offers and micro-suppliers' engagement with app-based mobile financial services. Surprisingly, our findings show a negative, significant association between them (β = −0.25, SE = 0.06, p < .01), leading us to reject H2. This unexpected finding suggests that, contrary to traditional marketing expectations, short-term promotional offers may not effectively drive sustained engagement among micro-suppliers in subsistence markets. The negative impact likely arises from a mismatch between the immediate benefits offered by promotions and micro-suppliers' need for reliable, long-term solutions. This suggests the need to rethink promotional strategies to better meet the unique demands of B2B subsistence marketplaces.
App aesthetics was found to have a small positive effect on engagement, but this effect was not statistically significant (β = 0.06, SE = 0.05, p = .27). Therefore, H3 is not supported. App functionality positively affects micro-suppliers' app engagement (β = 0.17, SE = 0.14, p = .00), supporting H4. Our findings related to H3 and H4 suggest that micro-suppliers who serve subsistence consumers through micro-retailers prioritize the functional aspects of app design rather than its visual appeal. This suggests that B2B micro-entrepreneurs' primary concern is whether the app delivers on the utilitarian promises made by the mobile financial service provider.
Additional analyses
Our model (Fig. 2) outlines how different stimuli lead to varied responses through sequential mediated paths. For example, enhancing customer development support boosts micro-suppliers' engagement with app-based financial services, affecting micro-retailers' reactions to suppliers' non-coercive power and their overall relationship satisfaction. Besides direct relationships, our analysis also suggests indirect effects, such as the mediation of the relationship between micro-suppliers' engagement and B2B satisfaction by micro-retailers' response to non-coercive power. Using specific mediated path analysis in R, we found that increased customer development support and app functionality positively influence B2B relationship satisfaction, whereas increased sales promotion offers have a negative impact. Notably, no significant mediated effects were observed between app aesthetics and relationship satisfaction. Additionally, micro-retailers' responses to non-coercive power were found to mediate the link between micro-suppliers' app engagement and relationship satisfaction (Table 4).
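The sequentially mediated paths described above can be sketched in lavaan as follows. The construct names are illustrative composites, the data frame dyad_df is a placeholder, and the study's control variables are omitted for brevity, so this is a simplified template rather than the exact estimated model.

```r
# Minimal sketch of the sequential mediation model (stimuli -> supplier
# engagement -> retailer-perceived non-coercive power -> relationship
# satisfaction) in lavaan. Construct names and `dyad_df` are illustrative;
# control variables are omitted for brevity.
library(lavaan)

med_model <- '
  engagement   ~ a1*custdev + a2*promo + a3*aesthetics + a4*functionality
  ncpower      ~ b1*engagement
  satisfaction ~ c1*ncpower

  # sequential indirect effect of each stimulus on satisfaction
  ind_custdev       := a1*b1*c1
  ind_promo         := a2*b1*c1
  ind_aesthetics    := a3*b1*c1
  ind_functionality := a4*b1*c1
'

fit_med <- sem(med_model, data = dyad_df, se = "bootstrap", bootstrap = 2000)
summary(fit_med, ci = TRUE, standardized = TRUE)
```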
Discussion and implications
Our study explored two critical questions: a) what factors drive micro-enterprises' sustained engagement with technology-based services in B2B subsistence marketplaces, and b) how does this engagement impact the power and relationship dynamics in subsistence retail supply value chains? Our findings reveal that strategies focusing on customer development support, and on prioritizing app functionality over aesthetics, significantly enhance engagement and relationship satisfaction, whereas transactional sales promotion offers impede engagement. Relationship marketing strategies, as opposed to transaction-focused ones, are shown to be more effective in fostering non-coercive power dynamics and improving relational outcomes. This detailed examination offers substantive insights into the operational strategies that can transform power structures and enhance the efficiency and inclusivity of subsistence B2B markets, highlighting the pivotal role of technology in these contexts.
Theoretical implications
Our findings contribute to theory in several ways. First, our study makes a significant theoretical contribution by holistically integrating the S-O-R framework and S-D logic in the B2B context. Our findings challenge the consumer-psychology-rooted S-O-R framework, which assumes that a response (behavior) is generated through an individual's reaction to stimuli via their internal (organism) state, with stimulus, organism, and response residing within a single entity (Jacoby, 2002). Our findings show that app-based mobile financial service providers' specific marketing strategies and app design features serve as stimuli that impact not only the organism's (supplier's) internal state but also the other entity's (micro-retailer's) response. This finding extends the applicability of the S-O-R framework in the B2B context by demonstrating that a firm's interactions within one organization can elicit responses from its partner organization (Cowan et al., 2015). Furthermore, it corroborates S-D logic, which emphasizes the organism's role in shaping its responses in collaboration with the stimulus provider as a value co-creator (Hollebeek, 2019; Vargo et al., 2023; Vargo & Lusch, 2011).
Second, drawing on the stimulus-organism component of our S-O-R-based conceptual model, we explored and demonstrated the significant positive role of relational customer development support (H1), and the unexpected negative effect of transactional sales promotion offers (H2), on technology-driven service engagement. Our empirical findings related to H1 corroborate past research underscoring the importance of relational customer development support strategies for fostering customer engagement, particularly for new technology-enabled service innovations (Kim et al., 2015). Interestingly, the significant negative association found between sales promotion offers and customer engagement suggests that such promotional strategies may be less effective, or even counterproductive, in B2B subsistence marketplaces. In these settings, the demand for long-term value and trust in service reliability may outweigh the appeal of short-term financial incentives. This finding challenges the universal efficacy of sales promotions (Mukherjee et al., 2022; Palepu & Sharma, 2019) by demonstrating a significant negative association in B2B subsistence marketplaces. This advancement refines the S-O-R framework, emphasizing that the effectiveness of stimuli, such as marketing strategies, is highly dependent on the specific context and the unique dynamics of the target market. It highlights the need for a tailored approach within the S-O-R model, underscoring that responses to stimuli vary significantly with environmental and organism characteristics, and advocating for marketing strategies that align with distinct market needs.
Third, the comparative examination of app aesthetics (H3) and functionality (H4) holds significant implications for understanding technology engagement in subsistence markets. Although H3 was rejected, indicating that aesthetics alone may not significantly boost engagement, the acceptance of H4 underscores the practical utility and value emphasized by S-D logic in the co-creation process. This highlights a critical extension of both the S-O-R framework and TAM, suggesting that, beyond aesthetic appeal, functional attributes play a more decisive role in influencing sustained user engagement. These findings challenge and refine the S-O-R framework by underscoring the organism's (users') deeper cognitive processing of functional stimuli over superficial attributes. Thus, technology engagement strategies focused primarily on aesthetics need to be revisited. Additionally, this outcome enriches the TAM framework by demonstrating that while aesthetics contributes to perceived ease of use, it is functionality that predominantly impacts perceived usefulness, thus driving sustained technological engagement. Furthermore, by aligning these findings with S-D logic (Möller, 2013), we extend the theoretical discourse by emphasizing the necessity of functional value in technology's role within B2B subsistence marketplaces. Thus, the model's applicability is enhanced in contexts where practical benefits outweigh aesthetic considerations.
Fourth, our analyses addressed the intersection of technology adoption and B2B relationships, an interdisciplinary area that has received scant investigation (Viswanathan & Sreekumar, 2019). To the best of our knowledge, our findings are the first to empirically support the interconnected nature of technology adoption and B2B relational dynamics in subsistence retail supply value chains. This provides a novel extension to the S-O-R framework by integrating it with S-D logic, moving beyond the traditional compartmentalization of these domains and contributing significantly to theory development. This integration offers a deeper understanding of how S-D logic can elucidate non-coercive power dynamics, and it enhances understanding of the roles that technology plays in B2B relationships and power structures, particularly in subsistence markets (Vargo et al., 2023). These theoretical extensions are important for developing frameworks that more accurately reflect the complexities of technology's impact on business interactions and power dynamics in B2B contexts.
Managerial implications
This study provides actionable insights for practitioners and policymakers in subsistence markets. First, based on our integration of S-D logic and the S-O-R framework, firms are advised to focus on fostering value co-creation between micro-suppliers and -retailers. Managers are encouraged to develop strategies that not only facilitate technological adoption but also promote a culture of collaboration and mutual innovation (e.g., through platforms for dialogue and feedback between micro-suppliers and -retailers), ensuring the co-development of technological solutions that meet the unique needs of subsistence markets. This can be achieved through targeted support initiatives, including developing comprehensive training programs to enhance app proficiency, creating dedicated customer service channels, and maintaining regular communication about mobile app updates and improvements. Such efforts will strengthen the relationship between technology service providers and their B2B clients in subsistence markets.
Second, given the inverse relationship between transactional sales promotion efforts and engagement, our findings suggest strategic reprioritization. Firms should reconsider the effectiveness of traditional promotional strategies and instead explore value-based engagement strategies, including community-centric initiatives or services tailored to the specific constraints and needs of subsistence markets, emphasizing long-term engagement over short-term transactions. Third, users' preference for functionality (vs. aesthetics) in technology solutions emphasizes the need for a practical focus. Service providers should prioritize the development of user-friendly, reliable, and efficient solutions that address subsistence market participants' operational challenges. For example, simplifying user interfaces and ensuring robust offline functionality can enhance the usability and utility of technology in these contexts. Finally, the potential of technology to foster non-coercive power dynamics and enhance B2B relationship satisfaction highlights the importance of equitable and respectful interactions. Providers can foster positive B2B relationships by ensuring transparent communication, shared decision-making, and strategies that promote mutual benefits through value co-creation.
The implications for policymaking center on fostering digital and financial inclusion and sustainable growth. Advocacy of initiatives that enhance the technological competencies of micro-suppliers and -retailers, including investments in digital literacy, infrastructure development, and supportive regulations, is crucial. These policies can help protect vulnerable entrepreneurs from exploitation and contribute to the overall resilience and sustainability of subsistence marketplaces. Furthermore, for policymakers, our study highlights the importance of relationship-building over transaction-focused strategies in subsistence marketplaces. This shift provides valuable insights when developing regulations that support ethical marketing practices and sustainable business models. Additionally, our findings on non-coercive power dynamics highlight the need for policies that encourage equitable power distribution and relationship satisfaction in B2B contexts. Policymakers should support initiatives that help micro-suppliers and -retailers negotiate fair terms, understand their rights, and engage in mutually beneficial partnerships.
Limitations and further research
Our study presents significant insights but also reveals limitations that suggest areas for future research. First, our research is limited to one cultural and geographical context. Expanding this study to various socio-economic environments, such as cultures with varying levels of individualism or collectivism, or contrasting low-income settings like Bangladesh and Brazil, could provide valuable comparative insights (Hollebeek, Muniz-Martinez, et al., 2022). Second, our initial investigation into non-coercive power dynamics and technology engagement in B2B relationships indicates the need for more in-depth study. This relatively unexplored area could greatly enhance our understanding of both mainstream and subsistence markets. Future studies could extend our research through more detailed survey-based or experimental methodologies, exploring the intricate interactions between power dynamics and technology across different market settings. Over the last decade, with the exponential penetration of, and access to, smartphones, a substantial proportion of subsistence entrepreneurs have become capable of using smartphone apps, including app-based mobile financial services (Chaudhuri et al., 2022); however, not all of them may have the same level of digital literacy. Future studies may therefore investigate factors such as subsistence entrepreneurs' absorptive capacity and digital literacy levels when assessing their engagement with technology-enabled services. Studies may also investigate other factors that could moderate the relationships between S-O-R components in subsistence B2B settings. Finally, our findings suggest that traditional sales promotion strategies may not be as effective in subsistence markets. This highlights the need for further research into alternative promotional methods better aligned with the unique characteristics of these ever-evolving subsistence markets.
Fig. 1. Mainstream and subsistence market ecosystems. *This study's focus is on subsistence micro-suppliers' (SMS) and subsistence micro-retailers' (SMR) engagement with app-based mobile financial services.
Fig. 2. Conceptual model. Control variables: business experience, size of business, type of business, and income. Both micro-suppliers and micro-retailers were observed to utilize the same provider's app-based mobile financial services, as this was the sole option available for conducting transactions or accessing other services between them.
Table 3 (continued)

App-based mobile financial services engagement: Subsistence B2B actor's motivationally driven, volitional investment of focal operant resources (including cognitive, emotional, behavioral) into app-based mobile financial services (operand resources) (Hollebeek, 2019; Hollebeek et al., 2014; Hollebeek et al., 2019)
Cognitive engagement - 1st-order engagement dimension [CR = 0.94, AVE = 0.85]
Using my app-based mobile financial service gets me to think about the ease of doing business with my retailers. 0.92
I think about my app-based mobile financial services a lot when I'm using it to do business with my retailers. 0.90
Using my app-based mobile financial service stimulates my interest to learn more about this service. 0.88
Affective engagement (AE) - 1st-order engagement dimension [CR = 0.90, AVE = 0.69]
I feel very positive when I use my app-based mobile financial services. 0.90
Using my app-based mobile financial services makes me happy to do business with my retailers. 0.81
I feel good when I use my app-based mobile financial services to do business with my retailers. 0.79
I'm proud to use my app-based mobile financial services to do business with my clients.* -
Behavioral engagement (BE) - 1st-order engagement dimension [CR = 0.91, AVE = 0.79]
I spend a lot of time using my app-based mobile financial services to do business with my retailers. 0.91
Whenever I need to do business with my retailers, I often use my app-based mobile financial services. 0.89
My app-based mobile financial service is one of the financial tools I usually use when I do business with my retailers.
Table 1. Study variables and gaps.

Table 2 (continued). Sample quote: "… retailers think I can be held accountable in case something goes wrong as they have some form of record or evidence" (micro-supplier: small-scale vegetable supplier).

Table 4. Path effects.
Reality shock in radiography: fact or fiction? Findings from a phenomenological study in Durban, South Africa
Background Globally, the phenomenon of reality shock is a major contributor to the attrition of healthcare professionals. Reality shock negatively impacts on initial workplace transition, productivity, and ultimately, employee retention, hence it is important to ascertain its causative factors so that measures can be taken to mitigate its effects. Relative to other health professions, the field of radiography has been slow in detailing the occurrence of reality shock, and attrition is a major problem affecting the profession. In South Africa, a dearth of data exists pertaining to the potential presence of reality shock amongst newly-graduated radiographers as they transition to the workplace. Methods A phenomenological approach was used. Seven newly-graduated radiographers provided their perceptions of their initial workplace experiences. In-depth, one-on-one, face to face interviews were conducted, audio recorded, and transcribed verbatim before interpretive phenomenological analysis was conducted on the obtained data. Findings Three main themes emerged relating to increased responsibility, being undermined, and feeling overwhelmed. Respondents felt pressurized by their increased responsibilities when they commenced employment. They also felt undermined by their more experienced colleagues, and they were overwhelmed by the new work routine, which resulted in reality shock. Conclusions Curricula at institutions of higher education need to include courses which educate student radiographers on what to expect within the workplace as autonomous practitioners. Heads of imaging departments must create structured induction programs for new employees for adequate orientation and mentoring to reduce reality shock. Electronic supplementary material The online version of this article (10.1186/s40359-019-0317-9) contains supplementary material, which is available to authorized users.
Background
The radiography profession has been experiencing a gradual increase in the number of vacancies globally over the years, a trend which has been replicated in South Africa [1]. Owing to manpower shortages, existing radiographers are overworked as they strive to handle workloads that ideally should be distributed amongst a much larger workforce [1]. Attrition from the radiography profession is a major cause of this manpower shortage, prompting researchers to investigate factors influencing radiographer retention [1]. However, existing literature in this context predominantly focuses on experienced radiographers and the factors influencing their retention. This study assumes a different perspective, recognizing that radiographer retention may be heavily influenced by events occurring during radiographers' introduction to the professional work environment as autonomous practitioners. During this phase, they may experience reality shock, a phenomenon which has been shown to increase attrition rates amongst healthcare practitioners [2]. Reality shock is defined as the reactions of newly-qualified workers when they find themselves in work situations for which they thought they were prepared, but suddenly find they are not [3]. Reality shock negatively affects performance, undermines effectiveness, and may result in isolation, overdependence, denial, fear, job dissatisfaction, lack of motivation, and a plethora of other negative emotions and events [4]. It is associated with a greater desire to resign and, in extreme cases, attrition from the profession [2].
The phrase 'reality shock' was first reported by Kramer in 1974, after studying the phenomenon in newly-qualified nurses [3]. About three decades later, in 2009, Duchscher concluded a series of qualitative studies, performed over a ten-year period, on nurses transitioning into professional practice. Building on Kramer's work, Duchscher developed the Transition Shock theory, which described the stages of reality shock and the behaviours exhibited by novice nurses at different points in time during their first year of transition to professional practice [5].
Reality shock has been primarily investigated within the nursing field, hence this profession has well-developed guidelines to prevent the phenomenon, utilising recommendations from studies performed on newly-graduated nurses [6]. Most studies discussing reality shock utilised a variety of qualitative approaches, exploring the experiences of newly-graduated nurses as they transitioned into professional practice in order to discover the challenges preventing a smooth transition [7][8][9]. They found that nurses experienced reality shock for the following reasons: academic unpreparedness; the increased professional accountability of their new roles; pressure to be competent; personal attitudes; and negative interpersonal relationships within the workplace [9]. Intervention strategies to reduce or prevent reality shock, directed at universities and hiring hospitals, have been suggested [10,11].
There is a scarcity of information detailing the phenomenon of reality shock in newly-qualified radiographers, and this study's authors were able to identify only a single peer-reviewed article addressing the subject. The article explored the expectations and experiences of newly-qualified diagnostic radiographers. The study subjects were all employed at the same imaging department where they had undertaken a year of clinical rotation as students [12]. That study did not find evidence of reality shock, most likely because of the subjects' prior exposure to their work environment. The three themes that emerged were a lack of experience of working outside normal hours; struggling to fit into established groups; and a lack of professional identity and confidence. The study's authors recommended that further research be undertaken investigating the experiences of new radiography graduates employed at imaging departments where they had no prior work experience as students [12]. In South Africa, no published texts were identified which looked at reality shock in radiographers, a gap this paper aims to fill. Factors contributing to reality shock for local radiographers can only be addressed once they are known and documented, hence the importance of this research, which sought to uncover whether newly-graduated radiographers experience reality shock and, if so, what the contributing factors are. This study's objectives therefore focused on exploring and describing the experiences and expectations of newly-qualified radiographers as they transitioned to professional practice. This was done so that strategies to curb reality shock and improve workplace transition amongst newly-graduated radiographers could be formulated using this information.
Methods
This qualitative study sought to interview participants in order to understand the meaning they ascribed to their experiences as they transitioned to the workplace. Hermeneutic phenomenology was utilised, as the intent was to capture these experiences from the first-person perspective, with the underlying assumption that the context of these experiences was central to understanding the phenomenon.
Ethical clearances were obtained prior to data collection. Participants were newly-graduated radiographers within their first year of employment in Durban, South Africa. Exclusion criteria were prior work experience, and employment for less than three months. Criterion sampling was used to determine five hospitals within the Durban (eThekwini) municipal jurisdiction from which participants would be chosen, based on the South African Department of Health classification system of hospitals (district, regional, tertiary, central, and specialised) [13]. The variation in the locations of the study participants was necessary for environmental triangulation. A total sampling technique was then used to select all the radiographers at the identified hospitals who satisfied the inclusion criteria. Eight radiographers were identified as eligible for study inclusion. A pilot study was conducted by interviewing a respondent who was working at a hospital which had not been selected for inclusion in the study; no amendments to the interview guide were deemed necessary, as all questions were clearly understood and elicited the required responses. The results of the pilot study were not included in the main findings.
A total of seven respondents located at five hospitals consented to study inclusion and were interviewed, and one radiographer declined to participate. Each interview was conducted at a time and place selected by the respondent, in an environment free from disturbance, to encourage an uninterrupted flow of conversation. One-on-one, face-to-face, semi-structured, in-depth interviews were conducted by the researcher, and the interviews were audio recorded and transcribed verbatim. The interview schedule had seven open-ended questions, and probing questions were posed depending on the responses given by the respondents.
Interpretive phenomenological analysis was conducted on the transcribed data. This was done manually by studying each transcription repeatedly, making notes regarding significant statements (horizontalization), and transforming them into emergent themes which were clustered and labelled. The analysis was systematic and organised, so that information from the data set could easily be located, and traced back to the context of the data. Each step of the analysis was audited and archived for later checking to ensure that the emerging results were as objective as possible. The study's authors independently analysed the interview transcripts, so that there was investigator triangulation. Data saturation was achieved after analysis of the fifth interview transcript, but the remaining two transcripts were analysed to confirm saturation.
Results
The demographic data of the interview participants are shown in Table 1 below.
Four main themes emerged relating to the reality shock experienced by participants. These are explained below, with quotes from the interviews included to highlight each theme. The names of participants and any other identifying information have been altered to maintain the privacy of the respondents and uphold ethical obligations.
(a) Increased responsibility. Participants highlighted how as autonomous professionals, they felt stressed due to an increased sense of responsibility relative to that which they had experienced as students. They perceived the responsibility of being the sole overseer of the patient's needs as stressful, because as students they always had assistance, and therefore there was a shared sense of responsibility. It made them nervous to be responsible for certifying that the quality of their own radiographic images was acceptable without consulting someone more experienced, as was the norm during their student phase. They were worried about facing potential liability should they make any mistake. Despite an awareness of these duties prior to commencing autonomous practice, the psychological experience of their new responsibilities brought about reality shock. Participants had the following to say: "… it can be a bit stressful, just because as students you don't really have that much responsibility, but now [as an autonomous practitioner] it's your patient." Kelly.
"I was a bit nervous, because now … there's no qualified radiographer to ask to pass your image, it's … it's just that responsibility where now you're seen as a qualified." Michelle.
"As students we didn't take the blame for ourselves… someone had to account for you … now you are here, should you do anything wrong it's you. You are the one to go under the bus. You cannot point at your tutor … or your fellow student" Siya.
(b) Being undermined. Each of the newly-qualified radiographers in this study was working at an institution with an existing team of qualified radiographers. As inexperienced members of staff, participants were automatically the most junior staff in the imaging department. Participants experienced reality shock because of the difference between how they expected to be regarded by their qualified peers now that they were no longer students, and the reality they experienced. Most participants felt taken advantage of, and perceived that their professional role was merely to substitute for staff members who were absent due to sickness or other causes. Others felt they were not valued, and were subtly undermined by their more experienced peers in the workplace. These sentiments are expressed in the following excerpts: "… I think because the older staff … they take advantage of the younger ones because we just came in … And if ever like someone calls in sick … automatically, the younger staff will do [their duties]" Zonke.
"… sometimes … you feel that…that I'm being undermined a little bit. Because they know you are new" Michelle.
(c) Feeling overwhelmed. Participants had expectations of what it would be like to work as autonomous professionals. Their expectations were based on the prior clinical exposure they had obtained as students. However, as students, they were only required to work for a limited time, and they worked under the supervision of a qualified radiographer. As autonomous radiographers who now had to work full-time without supervision, participants were faced with discrepancies between their expectations and the reality they were immersed in, which led to feelings of being overwhelmed. Participants cited different aspects that led to feelings of being overwhelmed, namely the increased workload due to being short-staffed; adjusting to institutional differences between the private and public sector; the routine of coming to work daily; and dealing with the shortages which are found in the public healthcare sector. The following statements highlight these sentiments: "I had to make like a huge adjustment from the private sector to the government sector." Siya.
"… it was a bit hard … just getting used to the routine of coming to work …" Kelly.
"Particularly in public hospitals like this one, where we're not prepared for the lack of equipment, the lack of money to fix anything, the lack of staffing … there's just a lack of everything actually." Lira.
(d) Career prospects. When participants were asked about their future career prospects, most of them indicated that they were considering leaving radiography. Others said they would like to study towards another qualification to enable them to leave the profession. They had the following to say regarding their career plans: "… I am currently maybe thinking of moving out of the field …" Michelle.
"… I need to study something else" Lira.
A sample interview transcript has been supplied (Additional file 1).
Discussion of themes
When participants were asked to relate their initial workplace experiences, their responses were suggestive of reality shock. This is a multi-faceted phenomenon, and participants expressed differing aspects of their new roles for which they did not feel prepared. They highlighted how the increased responsibilities of being an autonomous professional made them anxious; how frustrated they were due to being undermined; and how the workload overwhelmed them. Such negative sentiments made some participants feel that attrition from the profession was the best way forward.
Participants were aware that once they started working as qualified radiographers, they would assume more responsibility. However, the experience of being immersed in this responsibility brought about reality shock. They expressed the idea of increased responsibilities in a negative way, focusing more on the repercussions of what could happen if they made an error that affected a patient, as opposed to embracing their professional independence.
Study respondents highlighted accepting their own radiographic images as the most significant indicator of their increased level of responsibility. Harvey-Lloyd, Stew and Morris [14] state that the level of responsibility given to practitioners at the outset of their careers is a concern that is acknowledged across the different healthcare professions. Phenomenological research from as early as 1950 describes the anxiety radiographers felt due to the sudden responsibility of accepting their own radiographs [14]. This anxiety may indeed be justified, as radiographer error may have very serious implications for the patient. In one instance in Grimsby in the United Kingdom, a radiographer committed suicide after a barium enema examination he was performing proved fatal for the patient. He had incorrectly inserted a catheter, which perforated the patient's bowel. Barium from the procedure leaked into the blood stream, and the patient died shortly after the procedure from pulmonary barium micro-embolisation [15].
Naylor [16] documented how repeated studies unanimously found that newly-qualified nurses were unprepared for their increased responsibilities. However, in one study, although surveyed nurses were anxious about their newly acquired responsibility, they felt it gave them ownership of their practice, which is a positive psychological coping mechanism that may be useful to healthcare practitioners in any discipline [16].
Participants in this study felt taken advantage of, because they were viewed as having less personal responsibility due to their young age, as well as being unmarried. As autonomous practitioners, participants expected to be regarded as equal members of staff by their colleagues, but the reality they encountered was that they were undermined in various aspects, which resulted in reality shock. Their colleagues were older, and many were married, or had family responsibilities. These responsibilities would sometimes require them to be absent from work, and the newly-qualified radiographers would be required to take over the duties of the absent staff members. This made participants feel undervalued, and they perceived that they were regarded as substitute staff.
In addition to this, the allocation of duties and rotations were unfavourable to the participants. They were assigned to work during hours that no one else wanted, and to perform duties that the older, qualified staff members preferred not to do. Participants perceived that they were at the very bottom tier in the departmental hierarchy, and they felt they could not protest such treatment as it was never expressly communicated, but rather, subtly implied. Within the nursing profession, such behaviour by qualified staff is described as oppressive, and it is known as horizontal, or lateral violence [16]. This is defined as destructive behaviours of co-workers against one another [17]. Embree and White [18] explain horizontal violence as peer-to-peer aggression, which includes non-verbal innuendo, and undermining activities. Such behaviours are found in what are now termed toxic workplaces. Horizontal violence discourages staff retention, and so the affected newly-graduated radiographers are likely to seek employment elsewhere due to such experiences [17].
Respondents detailed feeling overwhelmed by high workloads upon exposure to their work environments. During clinical rotations as students, there was always supervision, and assistance in dealing with the workload. Now, as unassisted practitioners, they were responsible for ensuring that all patients within the Imaging Department were attended to. They now had to work regularly, for longer hours, and attend to more patients. Additionally, they were short-staffed, further increasing the work pressure. In a separate study, newly-qualified occupational therapists reported being overwhelmed by their schedules, with limited time to complete their professional duties [19]. Naylor echoes this, noting that high patient volumes, heavy workloads, and staff shortages have been cited as the most prevalent sources of work pressure amongst diagnostic radiographers [16]. Continued feelings of being overwhelmed in the workplace can lead to depersonalisation, and this is associated with feelings of detachment, and dehumanisation [20]. Taking time to relax in relaxation rooms and shorter working hours may help radiographers alleviate the overwhelming feelings associated with increased workloads [21].
In this study, most respondents performed their student clinical rotations in the private healthcare sector, but commenced professional practice in public hospitals. In South Africa, there exists a large discrepancy between these two types of institution. Public hospitals generally have longer patient waiting times; shortages of consumables; and compromised patient care due to a high demand for services and limited healthcare staff. Conversely, private hospitals generally offer better quality patient care, shorter waiting times, and better quality equipment [22]. Participants exposed to the private healthcare system as students reported experiencing reality shock when they were exposed to public hospitals as qualified radiographers. Some of them had almost no exposure to the analogue equipment used in most public hospitals, being familiar only with modern digital equipment. They were unprepared for the shortages of staff and consumables, and for the bureaucratic, unsupportive management style. Public hospitals within South Africa are reported to be generally in a dysfunctional state due to bureaucratic bottlenecks [23]. Being immersed in this environment resulted in reality shock for participants.
Respondents in this study experienced reality shock from varying factors, and some of them expressed a desire to leave the radiography profession. Attrition from the radiography profession remains a serious concern in South Africa. The number of radiographers registered by the Health Professions Council of South Africa has been steadily declining over the past few years, underscoring that the attrition rate within radiography in South Africa is alarmingly high [24]. When new healthcare practitioners are exposed to pressure, and adverse events in the working environment, this negatively influences their attitude, satisfaction, and ultimately, increases the likelihood of attrition from the workplace and professional workforce [25]. Research suggests that about 30% of new nurses either change jobs, or leave the profession within their first year of employment due to reality shock [10]. In this study, although some participants were still deliberating on leaving radiography, one participant had already taken action to this end, and was awaiting admission into university to study a programme in a different field.
However, the findings of this research are not without pitfalls. The results of this study may have limited transferability, as is true for all qualitative studies. Another limitation is that the interviews were conducted by a radiographer, and thus interpretations may be biased towards the interviewer's own experiences within the radiography field, as opposed to reflecting the participants' true sentiments.
Recommendations
Managing reality shock involves multiple stakeholders, and this process ideally should begin with institutions of higher learning. The recommendations outlined in this study are an amalgamation of solutions to challenges faced by respondents, as well as insights provided by literature and the authors' knowledge of the subject.
Universities should structure clinical rotations for students such that every individual is exposed to a low-resource public hospital as part of the standard requirements. This will enable students to familiarise themselves with such clinical settings. The curriculum should also include a module for final-year students which explains how the professional experience will differ from their student experience, utilising data from studies such as this one.
In addition, students should be encouraged to apply for their first job at hospitals where they have worked during their clinical rotation as students. When this is not possible, the student must familiarise themselves with their prospective places of work by informal visits, asking peers for information, talking to existing staff members at the hospital, and so on. The familiarity with the surroundings and staff will assist in reducing reality shock.
Management and senior staff members at hospitals recruiting newly-qualified graduates should be formally trained on mentoring, orientation, and encouraging retention of new members of staff. It is important that the rest of the existing staff is also taught how to relate to new staff members, so that a welcoming environment is created.
New radiographers must be given a reduced workload, which is gradually increased as their competency levels rise. Initially, they should work fewer hours and attend to fewer patients, so that they work at a decreased pace. Over time, this should be reviewed depending on individual competency, and the patient load and working hours increased accordingly. Such 'easing in' of new employees should help reduce reality shock, and thereby reduce the likelihood of attrition from the workplace and workforce by newly-qualified radiographers.
Conclusion and further research
Early career experiences tend to remain entrenched in an individual's mind for several years, and may subsequently influence important decisions, such as choosing to leave a profession. This paper has identified and suggested relatively simple measures which may help to prepare newly-qualified radiographers for the workplace, which should in turn decrease the impact of reality shock. However, recruiting a larger sample of respondents from different locations may yield additional valuable insights which this study was unable to harness. It is therefore recommended that a similar study be conducted with more respondents, drawn from different provinces in the country. In addition, data collection should ideally be longitudinal, so that the changing needs of respondents at different points in time can be observed, documented, and possibly catered to.
Institutions of higher learning and health care facilities should consider implementing these recommendations as a step in combating the problem of radiographer attrition due to reality shock.
Knowledge Base for an Intelligent System in order to Identify Security Requirements for Government Agencies Software Projects
It has been shown that one of the most common causes of software security failures is the lack of identification and specification of information security requirements, an activity that receives insufficient attention during software development and acquisition. We propose the knowledge base of CIBERREQ. CIBERREQ is an intelligent knowledge-based system used for the identification and specification of security requirements in the software development cycle or in software acquisition. CIBERREQ receives functional software requirements written in natural language and produces non-functional security requirements through a semi-automatic process of risk management. The knowledge base is formed by an ontology developed collaboratively by experts in information security. In this process, six types of assets have been identified: electronic data, physical data, hardware, software, person, and service; as well as six types of risk: competitive disadvantage, loss of credibility, economic risks, strategic risks, operational risks, and legal sanctions. In addition, 95 vulnerabilities, 24 threats, 230 controls, and 515 associations between concepts are defined. Additionally, automatic expansion with Wikipedia was used for the asset types Software and Hardware, obtaining 7125 software subtypes and 5894 hardware subtypes respectively, thereby achieving a 10% improvement in the identification of candidate information assets, one of the most important phases of the proposed system.
Introduction
It has been shown that the most common causes of application security vulnerabilities are the incomplete identification and poor specification of requirements. In Colombia, government entities have been the subject of several information security incidents, whose root cause has been identified as poor requirements engineering practice. A simple study of the Requests For Proposals (RFPs) used by government entities to contract software development and software acquisition shows that security requirements are underspecified. Most of the documents ask for a "secure implementation" or a "secure configuration", but they do not describe in detail the concrete aspects of such a request.
In this article we describe the knowledge base of an intelligent system for the identification and specification of security requirements in software applications. The knowledge base was designed using elements of the semantic web, natural language processing, knowledge management, and cross sourcing. The purpose of the intelligent system is to allow government entities to identify and define, together with the software provider, the security requirements that applications and systems must meet.
The article is organized as follows: Section II presents the state of the art regarding knowledge bases developed for cybersecurity; Section III discusses the methodology used to build the knowledge base; and Section IV describes the ontology used to build the knowledge base for the proposed system and shows how the system interacts with the knowledge base.
State of the Art: Knowledge Bases for Cybersecurity
In [1] an ontology of information security is proposed. The ontology includes the most relevant concepts of the domain, such as Asset, Vulnerability, Threat, and Control. It also discriminates between tangible and intangible assets, and allows modeling of the physical infrastructure of the organization with information such as the place where an asset is located. The ontology has 500 concepts and 600 formal restrictions, and was derived from best-practice guidelines and standards for information security, including the Internet Security Glossary (RFC 2828); the German IT Grundschutz Manual; the United Nations Standard Products and Services Code; National Institute of Standards and Technology Special Publication 800-12; and ISO/IEC 27000; among others. In [2] the authors present a framework composed of several ontologies. These ontologies are used to represent, store, and reuse security requirements. The first ontology presents knowledge for risk analysis following the ISO 27002 standard (see Fig. 2). As a result, the ontology identifies five main elements: assets, threats associated with the asset, protection measures to address threats, valuation dimensions (attributes that make an asset valuable), and valuation criteria (measures of the importance of an asset to the organization).
The second ontology classifies requirements according to IEEE standards. In the combination of these two ontologies, each security requirement has an associated asset together with threats and protection measures. Additionally, each requirement carries the valuation dimensions and valuation criteria for each asset associated with it. In [3] an ontology of security incidents is proposed. The ontology describes a conceptual framework with the following elements: Agent, Attack, Security Incident, Tool, Vulnerability, and Access. These elements are related as follows: an Agent performs an Attack that can cause a Security Incident. In order to perform an Attack, the Agent uses a Tool, which exploits a Vulnerability, in order to gain Access. The Incident has a Consequence on an Asset, and happens at a specific Time. In [4] an initial work towards a unified security ontology is presented. First, the study identifies the basic requirements that the ontology should have.
Figure 4. Overlapping between security domains for an integrated security ontology [4].
To identify such requirements the study uses OntoMetric to create a comparative analysis of existing proposals. These requirements are: static knowledge, dynamic knowledge, and reusability. Second, a process of ontology integration was applied. Following this process, overlapping areas between ontologies were identified (see Fig. 4), and the related concepts and the consistency of the result were verified.
Design Methodology for the Ontology
For the design of the ontology, the methodology described in [5] was used, carrying out the following steps.
A. What domain will the ontology cover?
Definitions of functional requirements to acquire or develop an application.

B. For what will we use the ontology?
To identify information security requirements.

C. What types of questions should the information in the ontology answer?
- What are the vulnerabilities of an asset?
- What are the threats that can exploit a vulnerability?
- What are the controls that can minimize a vulnerability?
- What are the types of risk that may materialize?
a. Enumerate the important concepts in the domain
Information asset, environment, threat, vulnerability, inherent risk, impact, control, residual risk, accepted risk, encryption algorithm, encryption software, cryptographic hash algorithm, cryptographic-summary software, audit log, traceability log, application, application code, database, table, communications link, issuer, receiver, owner of an information asset, switch, router, firewall, antimalware, malware.

b. Define the classes and the class hierarchy
Information asset: information; hardware; software; person; service.
Vulnerability: unencrypted data, unsigned data, absence of auditing, lack of traceability, lack of access control, data without a cryptographic summary, absence of input-field validation, absence of output-data validation, among others.
Risk: damage or loss of assets, excessive costs, loss of income, loss of business, loss of image, legal sanctions, wrong decisions, among others. Impact: the level of effect on the company, which can be measured in money or in percentage levels, among others.
A. CyberSecurity Ontology
This section describes the ontology that was designed following the methodology described above (see Fig. 5).
We now define in detail the classes (concepts) and properties (relations) of the ontology.
1) Classes
Vulnerability: Unrestricted access, Absence of antivirus, Absence of backups, Absence of logging and auditing, Weak passwords, Disgruntled employee, Typos, Lack of a secure deletion policy, Installation of unauthorized software, Unprotected network point, Vulnerabilities in the operating system and/or PC applications, among others.
Threat: Intrusive access to the PC, Alteration or removal, Accidental damage, Natural disasters, Terrorism or public disorder, Information leakage, Infection or malware, among others.
Control: Enable audit logs, Apply Active Directory policies, User training, Definition of user roles, Record file deletions, Implement encryption in storage, Implement a UPS or backup generator, Implement a firewall, Backup policy, Secure wiring policy, Procedure for defining strong passwords, Record failed attempts to access network resources, among others.
Risk type: Competitive disadvantage, Loss of image or credibility, Economic risks, Strategic risks, Operational risks, Legal sanctions.
2) Properties
Can have. The relation can have is defined as follows: an Asset Type can have a Vulnerability.
Exploits. The relation exploits has Threat as domain and a conjunction between Asset Type and Vulnerability as range.
Minimize. In the relation minimize the domain is Control and Vulnerability is the range. A control minimizes a vulnerability.
Can be materialized. The relation can be materialized has Risk Type as domain and Asset Type as range.

A concrete example of the type of information that can be extracted from the ontology: the asset type 'Electronic data' can have the vulnerability 'Absence of backup', which is exploited by the threat 'Alteration or deletion' and is minimized by the control 'Backup policy'. The risk that can materialize in this case is 'Operational risk'.
The ontology has 361 concepts, which are broken down into six types of assets, six types of risks, 95 vulnerabilities, 24 threats and 230 controls. In addition, there are 515 relationships between concepts.
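The paper does not state how the ontology is serialized, so the following is only a rough in-memory sketch of the relational structure described above (all identifiers are hypothetical, not the authors' implementation). It encodes the four relations as typed edges and answers the first two competency questions:

```python
from collections import defaultdict

# Illustrative in-memory encoding of the four ontology relations.
# Edge directions follow the domain/range definitions given above.
can_have = defaultdict(set)      # Asset Type -> Vulnerability
exploits = defaultdict(set)      # Threat     -> (Asset Type, Vulnerability)
minimizes = defaultdict(set)     # Control    -> Vulnerability
materializes = defaultdict(set)  # Risk Type  -> Asset Type

# Populate with the paper's running example.
can_have["Electronic data"].add("Absence of backup")
exploits["Alteration or deletion"].add(("Electronic data", "Absence of backup"))
minimizes["Backup policy"].add("Absence of backup")
materializes["Operational risk"].add("Electronic data")

def vulnerabilities_of(asset_type):
    """Competency question 1: what are the vulnerabilities of an asset?"""
    return can_have[asset_type]

def threats_exploiting(vulnerability):
    """Competency question 2: which threats can exploit a vulnerability?"""
    return {t for t, pairs in exploits.items()
            if any(v == vulnerability for _, v in pairs)}

print(vulnerabilities_of("Electronic data"))    # {'Absence of backup'}
print(threats_exploiting("Absence of backup"))  # {'Alteration or deletion'}
```

An RDF/OWL encoding queried via SPARQL would support the same competency questions; the dictionary version above is only meant to make the relational structure concrete.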
B. Knowledge-based Intelligent System
The intelligent system based on the ontology is called "CIBERREQ". CIBERREQ is a tool for the identification and specification of security requirements for software development or software acquisition projects in government entities.
The system receives as input functional requirements written in natural language and, using the knowledge represented in the knowledge base, supports domain experts or users in the definition and specification of security requirements.
CIBERREQ uses the knowledge base to (Fig. 6):
- identify the candidate information assets found in the functional requirements;
- identify vulnerabilities related to specific assets;
- identify threats that exploit vulnerabilities;
- identify the controls that minimize vulnerabilities; and
- identify the types of risk that may materialize.

Figure 6. Process for the identification of security requirements in CIBERREQ.
The following is a description of an example application of the CIBERREQ tool in a real project. Given the functional requirement "a functionality is required that allows the initial loading of the parametric tables that make up the databases BD1 and BD2", the system identifies semi-automatically, using natural language processing techniques and validation by security experts, the following information assets: BD1 (unique client base), BD2 (membership database), and the parametric tables. These three correspond to the asset type 'Electronic Data'.
For these assets the tool identified the vulnerability "inadequate password administration", the threat "unauthorized access", and the control "implement strong passwords in the access control established for the equipment".

From this information the following risks were identified: "damage or loss of assets due to unauthorized access to BD2 caused by inadequate password administration", and "legal sanctions for unauthorized access to BD2 caused by inadequate password administration".
Considering the previous information, the security expert, using the tool, defined the following non-functional security requirements: "confidential information that is processed and transmitted must be encrypted with strong cryptographic algorithms"; "authorized users must enter the system using authentication based on their specific role"; and "a profile, history, and tracing register must be generated". Figure 7 shows the process developed with the CIBERREQ tool for the case described above.
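The paper does not spell out the NLP matching step, so purely as a rough illustration (the lexicon, keywords, and function names below are hypothetical), candidate asset identification can be approximated by matching phrases in a requirement against an asset-type lexicon such as the one the Wikipedia expansion in the next subsection enlarges:

```python
import re

# Hypothetical lexicon: asset-type keywords, which the Wikipedia-derived
# subtypes (see Section C below) would extend for Software and Hardware.
ASSET_LEXICON = {
    "Electronic data": {"database", "databases", "table", "tables", "parametric tables"},
    "Software": {"application", "module", "web service"},
    "Hardware": {"server", "router", "switch"},
}

def candidate_assets(requirement: str):
    """Return (matched phrase, asset type) pairs found in a requirement."""
    text = requirement.lower()
    hits = []
    for asset_type, keywords in ASSET_LEXICON.items():
        # Try longer phrases first so multi-word matches are reported.
        for kw in sorted(keywords, key=len, reverse=True):
            if re.search(r"\b" + re.escape(kw) + r"\b", text):
                hits.append((kw, asset_type))
    return hits

req = ("A functionality is required that allows the initial loading of the "
       "parametric tables that make up the databases BD1 and BD2")
print(candidate_assets(req))
# [('parametric tables', 'Electronic data'), ('databases', 'Electronic data'),
#  ('tables', 'Electronic data')]
```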
C. Ontology expansion with Wikipedia
Additionally, with the objective of enriching the ontology, some terms were automatically expanded using Wikipedia's API.
In Wikipedia, every page has one or more associated categories, and each category can have subcategories or supercategories [7]. For this expansion, category trees of up to 7 depth levels were extracted for the asset types Software and Hardware, obtaining 7125 subtypes of Software and 5894 subtypes of Hardware. This yielded a 10% improvement in the identification of candidate information assets, one of the most important phases of the system.
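As a sketch of how such an extraction can be done (the paper does not give its exact implementation; the endpoint and parameters below follow the public MediaWiki API, and the depth limit mirrors the 7 levels mentioned above):

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def subcategories(category: str) -> list[str]:
    """Return the direct subcategories of a Wikipedia category."""
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": f"Category:{category}",
        "cmtype": "subcat",
        "cmlimit": "max",   # pagination via the 'continue' token omitted for brevity
        "format": "json",
    }
    data = requests.get(API, params=params, timeout=30).json()
    # Titles come back as "Category:Foo"; strip the namespace prefix.
    return [m["title"].split(":", 1)[1]
            for m in data["query"]["categorymembers"]]

def category_tree(root: str, max_depth: int = 7) -> set[str]:
    """Breadth-first extraction of a category tree up to max_depth levels."""
    seen, frontier = set(), {root}
    for _ in range(max_depth):
        nxt = set()
        for cat in frontier - seen:
            nxt.update(subcategories(cat))
        seen |= frontier
        frontier = nxt
    return seen | frontier

software_subtypes = category_tree("Software", max_depth=7)
```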
Discussion
The use of ontologies in building the knowledge base facilitates maintenance, expansion, and extension of the system to other, more specific information security contexts, improving accuracy throughout the process of identifying security requirements. This knowledge base can therefore also be used for projects outside government.
Conclusion
We have presented the design of an ontology for information security, and a tool that uses the ontology to aid in the identification and specification of security requirements. The ontology was developed collaboratively by domain experts and users, and was later expanded automatically with Wikipedia, producing a 10% improvement in the precision of the information asset identification phase of the CIBERREQ tool. The resulting ontology is a general model that is not specific to any platform or technology, so it can be used to develop other intelligent knowledge-based systems.
The knowledge base was validated with different experts and officials from state agencies, and the system was used in a real project of a state entity. As stated above, however, the knowledge base can be specialized and adjusted for other, not necessarily governmental, contexts, and the fact that part of the ontology is based on and expanded with terms from Wikipedia allows validation by the expert community on the subject.
Evaluation of chitosan‒hydroxy propyl methyl cellulose as a single unit hydrodynamically balanced sustained release matrices for stomach specific delivery of piroxicam
Introduction
The de-novo design of a controlled, programmed drug delivery system should aim at delivering the drug for an optimal period of time and achieving more predictable and increased bioavailability. 1 However, several hurdles arise, including physiological difficulties that restrain localized, site-specific drug delivery; these can be overcome by achieving gastro-retention. 2 The idea of achieving gastro-retention of therapeutic systems is not new. It is now an active field of research, and several technologies have been developed, encompassing bioadhesive systems, raft-forming systems, expanding systems, and single/multiple-unit floating drug delivery systems. [3][4][5] Single Unit Hydrodynamically Balanced Systems (SUHBS; HBS) maintain the system density below 1.064 g/cm3 (the density of gastric fluid) and remain buoyant for a period of time, leading to the increased gastric residence time necessary to remain afloat in the upper part of the stomach and to deliver the loaded drug throughout the designed period of time in a programmed, controlled manner. 6 This is important for treatment with drugs such as non-steroidal anti-inflammatory drugs (NSAIDs), which show proximal side effects, including serious upper gastro-intestinal adverse events such as bleeding and dyspepsia. 7 The hydrophilic nature of polymeric matrices may result in increased gastric fluid uptake and swelling, leading to an increase in bulk volume. The air entrapped in the swollen matrices keeps the density below unity, which ultimately confers buoyancy on the dosage form. Because of the formation of a buoyant glassy polymeric hydrogel structure, these matrices are expected to prevent direct contact of PRX with the mucosal lining of the stomach and also to sustain the PRX release. 8 The characteristics of a formulation depend upon the characteristics of the polymer, such as its molecular weight, concentration, viscosity, combination with different polymers, and degree of deacetylation. It is therefore of interest to investigate the effect of these parameters on drug retardation from polymeric matrices. 9 In addition, physical and chemical interactions were studied using FTIR, and thermal characterization by DTA/TGA/DTG was used to assess whether these parameters affect drug interaction and drug stability.
Chitosan (CH) has gained considerable attention from researchers during the past few years due to its unique properties, such as biocompatibility, biodegradability, and non-toxicity, which is why some researchers call it an "intelligent polymer". 10 When chitosan comes into contact with water it forms a gel-like structure; because of this gel formation with water, such systems are named "hydrogels". These hydrogels have a high affinity for water, but they are prevented from dissolving by the physical or chemical bonds of the hydrogel's glassy structure. 11 In gastrointestinal drug delivery, chitosan is considered one of the most promising carriers owing to its absorption-enhancement role and its enhancement of drug transport via the opening of tight junctions between epithelial cells. 12 The high-density positive charge on the amino groups of chitosan binds to the negative charge of the stomach mucosal lining, resulting in a hydrogel complex that retards release of the drug matrixed in the reservoir of chitosan molecules. 13 Combining chitosan of different molecular weights changes the viscosity, which in turn affects drug retardation from the matrices.
Hydroxypropyl methylcellulose (HPMC) is a hydrophilic cellulose ether derivative that has been widely used in the pharmaceutical industry for the last few decades. It is non-ionic in nature. 14 When HPMC comes into contact with aqueous media, it forms a gelatinous glassy structure at the outer surface. This outer surface then acts as a barrier to further penetration of the medium into the glassy structure and becomes the rate-limiting step for drug release. The gel strength is also rate-limiting for sustained release; it can be increased by raising the viscosity, which is achieved by changing the polymer grade, combining it with another polymer, or changing its concentration. 15

In the present study, the potential of CH-HPMC matrices as single-unit, stomach-specific sustained release carriers was evaluated for patient-compliant oral delivery of piroxicam (PRX), a potent NSAID used as a model drug. PRX is a non-steroidal anti-inflammatory drug (NSAID) of the oxicam group. It exhibits potent analgesic and anti-inflammatory activity, effective in the treatment of rheumatoid arthritis and other joint diseases. It inhibits cyclooxygenase (COX) at the peripheral end, an important enzyme for the biosynthesis of prostaglandins (PG) at the site of inflammation. PRX is well absorbed via the oral route and through the stomach mucosal cells; according to the Biopharmaceutical Classification System (BCS) it belongs to class II, characterized by low water solubility and a low dissolution rate. The combination of PRX with a cationic polymer (CH) and a non-ionic polymer (HPMC) is expected to form hydrogels on contact with the dissolution medium (0.1 mol L-1 HCl) and to retard the release of PRX from the hydrogel matrices of CH and HPMC.

The HBS capsules (Table 1) were prepared by an ordered mixing technique, placing the drug between layers of polymers in a 10 ml Borosil glass vial and shaking vigorously by hand for 5 min, followed by encapsulation in colorless hard gelatin capsule shells. This procedure has the advantage that it causes size reduction of neither drug nor polymer during mixing, which could otherwise affect the release profiles of the formulations.

Functional group characterization of drug, excipients, and drug-excipient compositions by Fourier Transform Infrared (FTIR) spectroscopy: FTIR was performed on a BX2 instrument (Perkin Elmer, Norwalk, USA). The analysis was carried out for qualitative compound identification, to ascertain whether any drug-excipient interaction occurs, and to confirm the presence of the expected functional groups by comparison with the official compendium. The method involved direct compression using potassium bromide (KBr). A KBr pellet of the drug, approximately 1 mm thick, was prepared by grinding 3-5 mg of sample with 100-150 mg of KBr in a pressure compression machine. The sample pellet was mounted in the FTIR compartment and scanned over the wavenumber range 4000-400 cm-1.
Development of a validated HPLC method for the estimation of PRX: For method validation, a Waters Breeze 2 HPLC system was used with a Waters Spherisorb analytical column (5 µm, 4.6 x 250 mm). All the chemicals and reagents used were of analytical grade.
Stability studies of PRX in 0.1 mol L-1 HCl: Stability studies of PRX in 0.1 mol L-1 HCl (pH 1.2) were performed to ascertain whether the drug remains stable throughout the period of drug release. Solutions of PRX at 2 mg/ml, 3 mg/ml, and 4 mg/ml were prepared in 0.1 mol L-1 HCl (pH 1.2). The temperature of the system was maintained at 37±0.5°C. One ml of sample was withdrawn periodically and replaced with freshly prepared 0.1 mol L-1 HCl. The samples were suitably diluted with the same medium, sonicated for 5 minutes, and measured at 333 nm (Shimadzu UV-1800).
Determination of drug content in formulations:
The drug concentration in each formulation was determined in triplicate by emptying each formulation into 0.1 mol L-1 HCl at 37±0.5°C. The mixture was stirred for 2 hours at 200 rpm. The solution was filtered through a 0.45 µm membrane filter, diluted suitably, and analyzed at 333 nm.
In vitro buoyancy studies: The capsules were placed in 900 ml of simulated gastric fluid, pH 1.2, in a USP type II apparatus at 50 rpm, maintained at 37±0.5°C. The time during which the formulations remained buoyant was observed and taken as the floating time.
In vitro release studies and drug release kinetics: The prepared ordered-mixed HBS capsules were immersed in 900 ml of 0.1 mol L-1 HCl, and in vitro drug release in SGF (pH 1.2) was determined using a USP XXIV type II (paddle) apparatus (Electrolab TDT-08L, Mumbai, India) at 50 rpm. Aliquots of 1 ml were withdrawn for analysis and replaced with an equal amount of fresh 0.1 mol L-1 HCl in the dissolution vessel. The samples were analyzed for absorbance at 333 nm, and the concentration was determined from the standard curve of PRX.
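Since each 1 ml aliquot removes drug that must be accounted for, cumulative release is conventionally corrected for sampling. A minimal sketch of this standard correction (the concentration values in the example are made up, not the study's data):

```python
# Correct cumulative release for the drug removed with each 1 ml aliquot.
V_VESSEL = 900.0   # dissolution volume (ml)
V_SAMPLE = 1.0     # aliquot withdrawn and replaced (ml)

def cumulative_amounts(concentrations_ug_per_ml):
    """Measured concentrations at successive time points -> cumulative ug released."""
    cumulative, removed = [], 0.0
    for c in concentrations_ug_per_ml:
        cumulative.append(c * V_VESSEL + removed)  # drug in vessel + drug withdrawn earlier
        removed += c * V_SAMPLE                    # drug lost with this aliquot
    return cumulative

# Example: hypothetical concentrations (ug/ml) at successive sampling times.
print(cumulative_amounts([2.0, 3.5, 4.8]))
# [1800.0, 3152.0, 4325.5]
```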
In order to describe the drug release from the formulations, several models were used: zero order, first order, the Higuchi model, and the Korsmeyer-Peppas equation; their standard forms are summarized below. [16][17][18]
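For reference, the standard forms of these models, as commonly stated in the dissolution literature (the symbols follow the usual conventions and are not taken from this paper's notation), are:

```latex
\begin{align}
Q_t &= k_0\,t                &&\text{(zero order)}\\
\ln Q_t &= \ln Q_0 - k_1\,t  &&\text{(first order; } Q_t\text{: amount remaining)}\\
Q_t &= k_H\,\sqrt{t}         &&\text{(Higuchi)}\\
M_t/M_\infty &= k\,t^{\,n}   &&\text{(Korsmeyer--Peppas)}
\end{align}
```

In the Korsmeyer-Peppas model, conventionally fitted to the first 60% of release, the exponent n diagnoses the mechanism for cylindrical matrices: n ≤ 0.45 indicates Fickian diffusion, 0.45 < n < 0.89 anomalous (non-Fickian) transport, and n ≥ 0.89 case-II, relaxation-controlled transport.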
Stability studies
To ascertain the stability of the selected formulations (F5, F6, and F12), stability studies were conducted according to ICH guidelines. The selected formulations were analysed under long-term stability conditions (25±2°C/60±5% RH) for a period of 12 months for buoyancy, lag time, viscosity, percent drug content, and percentage drug release.
Statistical analysis
All data were analyzed by Student's t test to determine statistical differences between results. A probability value of p<0.05 was considered statistically significant. Statistical analysis was performed using GraphPad InStat software.
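As an illustration only (the authors used GraphPad InStat; this is an equivalent open-source computation with made-up numbers, not the study's data):

```python
from scipy import stats

# Hypothetical triplicate drug-content measurements (%) for two formulations.
f5 = [98.2, 97.6, 98.9]
f6 = [95.1, 94.8, 95.9]

t_stat, p_value = stats.ttest_ind(f5, f6)  # unpaired Student's t test
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("significant" if p_value < 0.05 else "not significant")
```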
Results and discussion
Validation of HPLC method: The UV spectrum of PRX was measured in 0.1 mol L-1 HCl and in a mobile phase of acetonitrile (ACN) and potassium dihydrogen phosphate buffer (3:1). In both solvents the absorption maximum was found at 333 nm and was reproducible. PRX showed a linear range of 1-50 µg/ml, and the correlation coefficient (r2) was found to be 0.9967 (y = 62370x + 114200). The retention time (RT) was 3.253 min. Recovery studies were performed at the 50%, 100%, and 150% levels; the % RSD (relative standard deviation) should not exceed 2% as specified by the ICH guidelines, and it was found to be 0.831, within the specified range (Figure 1). The % RSD for ruggedness and intermediate precision was 0.539, also within the ICH limits. The % RSD for method precision was 0.593, showing that the method is precise. The results obtained for the validation parameters meet the requirements and indicate that the method follows the Beer-Lambert law (Table 2).
Stability studies of PRX in 0.1 mol L-1 HCl
PRX showed some degradation in 0.1 mol L-1 HCl at concentrations of 2 mg/ml, 3 mg/ml, and 4 mg/ml, but the degradation was not significant (p>0.05) and was not concentration dependent. Degradation of the drug during drug release was not observed; it is believed that the glassy polymeric structure formed in the dissolution medium by the hydrophilic colloid directly protects the drug from degradation. This assures that the drug remains stable throughout the period of drug release.
Drug excipient interaction studies by Fourier Transform Infra-Red Spectroscopy (FTIR)
The FTIR spectrum of pure PRX shows an absorption band at 3338.07 cm-1, which indicates that the drug is in the cubic polymorphic form. The absorption bands at 772.65 cm-1, 1148.91 cm-1, 1350.55 cm-1, 1435 cm-1, and 1629.84 cm-1 correspond to stretching of the ortho-disubstituted phenyl, the -SO2-N- group, the symmetric methyl group, the asymmetric methyl group, and the amide carbonyl, respectively. On replacing MCH by HCH in spectrum (c), there is no difference in the peaks apart from a very small deflection of the N-H (secondary amine) peak at 3418.89 cm-1. In spectrum (d) there is no shifting of the peaks of PRX, MCH, and K4, which shows that there is no interaction between PRX, MCH, and K4. A similar conclusion can be drawn for spectrum (e): there are no major changes in the characteristic peaks of PRX and the polymers. All the sets of spectra reveal that there is no change or shifting of the major peaks when the components are used in combination, from which we conclude that no drug-polymer interaction takes place in the formulations.
Thermal Studies: Figure 4 shows the thermal behavior of PRX under the experimental conditions. The DTG/TGA curves show that the drug is stable up to 255°C and that its degradation is a one-stage process. Maximum degradation and maximum mass loss occur between 250 and 300°C (approximately 60%). The DTA curve of PRX shows one sharp endothermic peak at 201°C, corresponding to the melting point of the pure drug, and a broad exothermic peak at 259°C attributed to the slow degradation of the drug.

Figure 5 shows the thermal behavior of chitosan. The DTG thermogram shows that chitosan is stable up to 296°C; at the start of the curve there is a very small exothermic peak at 73°C, which represents the vaporization of water from the chitosan molecule, and the TGA thermogram clearly indicates that the degradation of chitosan is a one-stage process. Maximum degradation and mass loss occur between 296 and 310°C (approximately 41%). The DTA thermogram of chitosan shows one broad exothermic peak at 304°C, corresponding to its slow degradation.

Figure 6 shows the thermal behavior of HPMC. The DTG thermogram shows one broad exothermic peak at 345°C and a small exothermic peak at 499°C. The DTA exothermic peak at 355°C represents the slow melting of the polymer, and another exothermic peak at 505°C represents its degradation. The maximum mass loss, 78.9%, took place at 379°C, and up to 92% of the mass was lost by 500°C, as clearly seen in the TGA thermogram. The thermogram also shows that the degradation of this polymer is a two-stage process, one stage at a lower temperature and another at a higher temperature, due to the presence of two functional groups.

Figure 7 shows the thermograms of PRX with LMWCH and MMWCH. In the DTG curve, the first exothermic peak shows that the drug is stable up to 249°C, and another broad exothermic peak shows that both LMWCH and MMWCH are stable up to 296°C. This is confirmed by the TGA thermogram, which shows that degradation of both the drug and the polymer takes place between 258 and 320°C (approximately 60%). The DTA thermogram shows one endothermic peak at 201°C, corresponding to the melting of PRX; this peak is also present in the DTA thermogram of pure PRX (Figure 4), confirming that no drug-excipient interaction takes place. The broad exothermic peak at 302°C represents the slow degradation of PRX and polymer.

Figure 8 shows the thermal behavior of PRX with LMWCH and HMWCH. The DTG thermogram shows a sharp exothermic peak at 255°C, indicating that the drug is stable up to that temperature, and a second broad exothermic peak at 303°C, indicating that the polymer is stable up to that temperature. The maximum mass loss takes place between 264 and 316°C. The DTA thermogram shows one endothermic peak at 202°C, also present in the curve of pure PRX (Figure 4), and one exothermic peak at 308°C representing the slow degradation of both drug and polymer, which suggests that there is no interaction between drug and polymer.

Figure 9 shows the thermal behavior of PRX with HPMC K4 and K15. The DTG and TGA thermograms show that the formulation is stable up to 304°C and that its degradation is a one-stage process. Maximum degradation and mass loss occur between 200 and 300°C (approximately 52%).
The DTA thermogram shows one endothermic peak at 202°C, the melting peak of pure Piroxicam, whereas the broad exothermic peak at 300°C corresponds to the slow degradation of drug and polymer. Figure 10 shows the thermal behavior of PRX with MMWCH, HPMC K4 and K15. The DTG and TGA thermograms show that the formulation is stable up to 332°C. Here the mass loss is a biphasic process: a maximum loss of 16.6% took place at 250°C, and in a second stage 61.5% of the mass was lost at 332°C, as clearly seen in the TGA thermogram. The DTG thermogram indicates that the formulation is stable up to 300°C. The DTA thermogram reveals an endothermic peak at 198°C, the melting peak of the pure drug, whereas the broad exothermic peak at 311°C corresponds to the slow degradation of the drug and the polymers used. A similar argument can be made for PRX with MMWCH and HPMC K4 (Figure 11). The DTG and TGA thermograms show that the formulation is stable up to 300°C. The mass loss is biphasic: between 200-250°C, 16.4% of the mass is lost, whereas between 300-330°C, 62.6% is lost. The DTA thermogram shows the drug melting endothermic peak, which suggests that there are no drug-excipient interactions, and a broad exothermic peak at 300°C, which signifies the slow degradation of the drug and the excipients used. All the DTA/TGA/DTG curves reveal no possible drug-excipient interaction.
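Mass-loss percentages such as those quoted above can be read directly off a TGA trace. The following is a minimal sketch of that calculation; the temperature/mass arrays, the temperature window and the function name are illustrative assumptions, not data from this study.

```python
import numpy as np

def mass_loss_percent(temp_C, mass_mg, t_start, t_end):
    """Percent of the initial sample mass lost between two temperatures
    on a TGA trace (temperatures assumed monotonically increasing)."""
    m_start = np.interp(t_start, temp_C, mass_mg)
    m_end = np.interp(t_end, temp_C, mass_mg)
    return 100.0 * (m_start - m_end) / mass_mg[0]

# Synthetic TGA trace: a stable plateau followed by a single-stage degradation step
temp_C = np.array([50, 100, 150, 200, 250, 275, 300, 350], dtype=float)
mass_mg = np.array([10.0, 10.0, 9.9, 9.8, 9.5, 6.5, 3.8, 3.5])

print(f"{mass_loss_percent(temp_C, mass_mg, 250, 300):.1f}% lost between 250-300 C")
```

On this synthetic trace the call reports roughly 57% loss between 250-300°C, the same kind of single-stage figure quoted for the drug above.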
In vitro buoyancy studies
For efficient buoyancy, polymer swelling is a very important parameter. In addition, there must be a proper balance between swelling, water uptake and hydration of the polymer. Here, swelling of the polymer results in an increase in bulk volume, and the entrapped air lowers the density, which ultimately confers buoyancy on the formulation without any lag time (Table 3).
All these findings reveal that all the formulations (F1-F12) remained buoyant during the drug release studies. These parameters are rate-limiting steps for achieving gastroretention. The study showed that disruption of the capsule shell began as soon as it came into contact with the dissolution medium, but complete disruption of the shell took place within about 15-20 min for all the formulations. During this period the release rate was negligible (<0.5%), but as the dissolution medium penetrated through the disrupted shell, the hydrophilic polymers formed a gelatinous, glassy matrix mass through which the drug diffuses slowly.

In formulations F1-F4 we used HPMC alone (K4M, K15M) and in combination, with the drug in different ratios. Evidence is building that the kinetics of initial hydration of the cellulose ethers present in HPMC is quite fast and relatively independent of substitution. The amount of water bound to HPMC is related to both the substitution and the polymer molecular weight. Within the gel layer there obviously exists a moisture gradient from the outside surface in contact with the liquid to the inner dry core. Water appears to exist in at least three distinct states within a hydrated gel of pure polymer; the addition of drugs, and presumably other excipients, to the polymer matrix alters the relative amounts of water in each of these states. Upon complete polymer hydration at the outer surface, chain disentanglement begins to occur, i.e., erosion of the matrix. In formulation F1 the release was retarded with the passage of time, but eventually burst and improper release took place, as is clearly seen in the graphs; this improved, though it was not fully corrected, on increasing the viscosity of HPMC by taking K15M. The performance of K4M and K15M can possibly be explained by the self-diffusion coefficient (SDC) of water measured in pure gels of the polymers. In the combination of the two (K4M and K15M; 3:1) the release improved somewhat, and at a ratio of 1:3 it improved further, because in this formulation the high-viscosity K15 in combination with the low-viscosity K4M is believed to form a highly gelled matrix through which the drug diffuses slowly (12.24% for F4 versus 25.12% for F3 after the second hour).
In formulations F5-F8 we used Chitosan; important criteria for selecting this polymer as the carrier are its metabolic fate in the body (biodegradation) and its low toxicity. Formulation F5 releases the drug for almost 8 hours (89.55%), but after 4 hours the gel burst and mixed with the medium, which is attributed to the low viscosity of LMWCH. This condition improves on combining it with HMWCH in formulation F6, which releases the drug for almost 9 hours (97.02%); with HMWCH the formulation does not burst in the medium and floats almost until the end of the experiment. In formulation F7, combining LMWCH with MMWCH extends the release profile to 10 hours with 93.78% drug release, and in F8 the release profile again extends to 89.28% over 10 hours, although this does not truly control the release profile.
The conclusion drawn from this is that, on a trial basis, we tried to explore the potential of Chitosan for drug delivery, alone or in combination with Chitosans of different viscosities (LMWCH, MMWCH and HMWCH). From this study we found that, in combination with Chitosans of different viscosities, the release profile of Piroxicam increases, and the drug is released steadily throughout the run time.
In formulations F8-F12 we combined the different viscosity grades of HPMC (K4 and K15) and Chitosan (LMWCH, MMWCH and HMWCH); with all of these combined, F12 releases the drug for almost 11 hours with 93.13% drug release. All the formulations (F1-F12) remained buoyant until the end of drug release, with no lag time. For efficient buoyancy, polymer swelling is crucial. In our case, swelling of the hydrophilic polymers results in an increase in bulk volume, and the air entrapped in the swollen polymers keeps the density low, which confers buoyancy on the dosage form in 0.1 M HCl at 37°C. The density comparison underlying this criterion is sketched below.
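The buoyancy criterion described here reduces to a simple density comparison: the swollen, air-entrapping matrix floats as long as its apparent density stays below that of the dissolution medium. A minimal sketch follows, with the mass, swollen volume and medium density all chosen purely for illustration:

```python
def floats(mass_g, swollen_volume_cm3, medium_density_g_cm3=1.0):
    """True if the apparent density of the swollen matrix is below the medium's."""
    apparent_density = mass_g / swollen_volume_cm3
    return apparent_density < medium_density_g_cm3

# Hypothetical matrix: 0.50 g that swells (and entraps air) to 0.65 cm^3
print(floats(0.50, 0.65))  # True -> apparent density ~0.77 g/cm^3, so it floats
```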
Drug release kinetics and AIC value
The in vitro drug release patterns of the various formulations were analyzed by fitting the dissolution data to various kinetic models (Table 5). For formulations F1-F5 the r² values were highest when fitted to the zero-order equation, which means that the release pattern follows zero-order kinetics. Formulations F6-F12 follow the Higuchi model, with low n values determined using the Korsmeyer-Peppas equation. In most cases the drug release from the polymeric reservoir is explained by Fickian diffusion, i.e., it is diffusion controlled. When water penetrates the polymer strands, the strands relax, which influences the drug release mechanism. The amount of water penetrating the polymer strands depends on the concentration and nature (hydrophilic/lipophilic) of the polymer used. For polymeric slabs, cylinders and spherical systems, values of n = 0.5 (slab), n = 0.45 (cylinder) and n = 0.43 (sphere) indicate Fickian diffusion, whereas n = 1 (slab), n = 0.89 (cylinder) and n = 0.85 (sphere) suggest super case II transport. In addition, our results suggest that values of n > 0.5 (slab), > 0.45 (cylinder) or > 0.43 (sphere) correspond to anomalous (non-Fickian) transport, in which diffusion is coupled with polymer relaxation on contact with the dissolution fluid [19-21]. Besides r² values, the Akaike Information Criterion (AIC) was also used to test the applicability of the release kinetic models (Table 6). AIC is a measure of goodness of fit based on maximum likelihood; when several models are compared for a given data set, the one with the smallest AIC value is regarded as the best fit. AIC values for the data sets were computed with the KinetDS software (Table 6).
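As a concrete illustration of this model comparison, the sketch below fits a dissolution profile to the zero-order, Higuchi and Korsmeyer-Peppas equations and ranks them by r² and a least-squares form of AIC. The time points and release values are invented for illustration, and the AIC convention shown, n·ln(SSR/n) + 2k, is one common least-squares choice; it is not necessarily the exact formula used by KinetDS.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative cumulative %-release data (hours vs %) -- not from this study
t = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
Q = np.array([11.8, 22.1, 31.5, 40.2, 48.6, 56.4, 63.9, 70.8])

models = {
    "zero order":       (lambda t, k0: k0 * t,          [8.0]),         # Q = k0*t
    "Higuchi":          (lambda t, kH: kH * np.sqrt(t),  [25.0]),        # Q = kH*t^0.5
    "Korsmeyer-Peppas": (lambda t, k, n: k * t**n,       [10.0, 0.8]),   # Q = k*t^n
}

for name, (f, p0) in models.items():
    popt, _ = curve_fit(f, t, Q, p0=p0)
    ssr = float(np.sum((Q - f(t, *popt)) ** 2))             # sum of squared residuals
    r2 = 1.0 - ssr / float(np.sum((Q - Q.mean()) ** 2))
    aic = len(t) * np.log(ssr / len(t)) + 2 * len(popt)     # smaller = better fit
    print(f"{name:17s} params={np.round(popt, 3)} r2={r2:.4f} AIC={aic:.2f}")
```

In practice the Korsmeyer-Peppas fit is conventionally restricted to the first ~60% of release, and the fitted n is then read against the geometry-dependent thresholds given above.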
Stability studies
After the long-term stability studies it was found that there was no significant difference between the formulations (F5, F6 and F12) before and after the stability studies, so we can state that our formulations are stable. When the similarity factor (f2) was applied to compare the dissolution profiles of the normal formulations with those kept for the stability studies (normal F5 vs stored F5, normal F6 vs stored F6, normal F12 vs stored F12), the similarity values were found to be 85.45, 88.98 and 86.78 for F5, F6 and F12, respectively. A higher f2 value indicates greater similarity between two dissolution profiles (Table 7) (Figure 15).
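The similarity factor used here has the standard FDA/EMA definition, f2 = 50·log10(100 / sqrt(1 + (1/n)·Σ(Rt - Tt)²)), where Rt and Tt are the reference and test cumulative releases at each time point. A minimal sketch follows; the before/after-storage profiles are hypothetical numbers, not the study's data.

```python
import numpy as np

def similarity_factor_f2(reference, test):
    """Similarity factor f2 between two dissolution profiles;
    f2 >= 50 is conventionally taken to indicate similar profiles."""
    R, T = np.asarray(reference, float), np.asarray(test, float)
    msd = np.mean((R - T) ** 2)  # mean squared difference per time point
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

# Hypothetical cumulative %-release profiles before vs. after storage
before = [15, 28, 42, 55, 67, 78, 88]
after  = [14, 27, 41, 53, 66, 77, 87]
print(round(similarity_factor_f2(before, after), 2))  # ~90 -> highly similar
```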
Conclusion
In this study PRX was used as a model drug with various grades of CH and HPMC as polymeric matrices. All the formulations show excellent buoyancy for the full duration of drug release. All the polymer combinations retard and sustain the drug release (which follows zero-order kinetics) on contact with the dissolution medium (0.1 M HCl), forming a glassy, hydrogel-like structure. Considering the data studied, it can be concluded that CH with HPMC may provide polymeric matrices capable of sustaining PRX release for the development of stomach-targeted delivery.
[Figure: cumulative % drug release versus time (hours)]
Germinal Matrix-Intraventricular Hemorrhage of the Preterm Newborn and Preclinical Models: Inflammatory Considerations
Germinal matrix-intraventricular hemorrhage (GM-IVH) is one of the most important complications of the preterm newborn. Since these children are born at a critical time in brain development, they can develop short- and long-term neurological, sensory, cognitive and motor disabilities, depending on the severity of the GM-IVH. In addition, the hemorrhage triggers a microglia-mediated inflammatory response that damages the tissue adjacent to the injury. Nevertheless, a neuroprotective and neuroreparative role of the microglia has also been described, suggesting that neonatal microglia may have unique functions. While the implication of the inflammatory process in GM-IVH is well established, the difficulty of accessing such a delicate population has led to the development of animal models that resemble the pathological features of GM-IVH. Genetically modified models, and lesions induced by local administration of glycerol, collagenase or blood, have been used to study the associated inflammatory mechanisms as well as therapeutic targets. In the present study we review GM-IVH complications, with special attention to the inflammatory response and the role of microglia, both in patients and in animal models, and we analyze specific proteins and cytokines that are currently under study as feasible predictors of GM-IVH evolution and prognosis.
The preterm newborn
A preterm newborn (PTNB) is one born before 37 weeks of gestation, considering that a normal gestation lasts 280 ± 15 days [1]. Depending on the gestational age, PTNB can be classified as extremely preterm (<28 gestational weeks), very preterm (28 to <32 gestational weeks), moderately preterm (32-33 gestational weeks) and late preterm (34-36 gestational weeks) [2,3]. In addition, according to weight at birth, newborns can be classified into low weight (<2500 g), very low weight (<1500 g) and extremely low weight (<1000 g) [4]. At present, nearly all neonatal deaths occur in PTNB [1,5] and mostly depend on the gestational age and the birth weight [1,5]. Accordingly, shorter gestational age and lower body weight are associated with an increased risk of developmental delay [3,6]. Although body weight is often used as an indicator of gestational age, the two concepts should not be freely interchanged. Whereas gestational age is the preferred criterion [6], the difficulty of correctly establishing the gestational age in many cases often makes body weight the most widely used approach [7].
There are 15 million premature births worldwide every year, accounting for ≈11.1% of all births [2,8] and responsible for ≈3.1 million neonatal deaths per year. Preterm birth is therefore the leading cause of death in children, accounting for 18% of all deaths among children under 5 years old and as much as 35% of all deaths among newborns (aged <28 days) [9]. Moreover, the lack of information from developing countries, where antenatal and perinatal care is limited [4], is probably hampering a more accurate estimation [2]. Globally, the incidence of premature births has increased by 1% in the last 10 years [1,2]. However, the incidence of preterm birth varies significantly with geographic region, being higher in lower-income countries (11.8%) [2], followed by low-middle income countries (11.3%) and middle and high income countries (9.3% and 9.4%, respectively) [2]. In most developed countries PTNB represent 5-7% of births, except in the USA, where premature births account for 10-12% of total births [4,10], whereas sub-Saharan Africa and South Asia account for the largest share of premature births (over 60% of all premature births worldwide) [2].
Etiology and Consequences of Prematurity
Premature deliveries may be spontaneous (unexplained preterm labor or spontaneous rupture of the amniotic membranes) or induced for medical reasons [2,11]. Nevertheless, it has been reported that up to 20% of induced preterm births are based on clinical experience without a specifically justified medical indication, and whereas some studies have reported that purely elective preterm births might be under 10%, others have reported that over 50% of non-spontaneous late preterm births were not evidence based [12-14]. Moreover, one in five cases could have reached a higher gestational age with fewer future complications [12]. In France and the USA, nearly 42% of all cesarean sections are performed when the fetus is growing poorly, increasing the birth rate and survival of the PTNB. In contrast, in developing countries lacking the means and tools, pregnancies follow their natural course, with much lower percentages of induced deliveries [2]. The majority of spontaneous premature births are due to intrauterine infections, followed by maternal smoking, unfavorable economic situations and multiple gestation [4]. The increased use of assisted reproduction techniques in recent years also results in multiple pregnancies, 50% of which will be premature [1,11]. Premature birth is associated with complications that can extend from childhood to adulthood, resulting in high economic and societal burdens [11,15,16]. In addition, PTNB have a greater risk of developing associated pathologies, mainly respiratory (respiratory distress syndrome and bronchopulmonary dysplasia) [16,17], cardiological (patent ductus arteriosus) [18] and neurodevelopmental [3] disorders. The most relevant morbidities associated with a higher risk of mortality in extremely PTNB include severe germinal matrix-intraventricular hemorrhage (GM-IVH), respiratory distress syndrome and necrotizing enterocolitis [19]. Also, a recent study on PTNB with a gestational age of 28.8 ± 2.9 weeks reports that, despite the advances in modern neonatology, the incidence of severe IVH, necrotizing enterocolitis and periventricular leukomalacia remained stable between 2001 and 2016 [20]. Interestingly, perinatal and, most relevantly, environmental factors, including maternal education and occupation, whether the children are cared for by their parents, stress exposure during the neonatal intensive care unit stay and malnutrition, among others, might have the greatest influence on future neurodevelopmental delay [3,16].
Neurological Complications of the PTNB
Even though the mortality of the PTNB has decreased, the improved survival of extremely PTNB also means that the incidence of neurological complications has increased, including neurodevelopmental and functional disorders [6,21]. Specifically, extremely premature and extremely low birth weight infants are born at a critical time in brain maturation, and improving neurodevelopmental outcomes remains a challenge. In normally growing fetuses, between 25 and 37 weeks of gestation, total brain volume increases by 230%, brain stem volume by 134% and cerebellar volume by 384% [15]. Other authors describe that the volume of the cerebellum increases fivefold between weeks 24-40 [22]. In addition, from 24 weeks of gestation onwards, cortical gray matter matures, radial glia disappear, the complexity of the connections increases, and cortical folding and gyrification become progressively more complex. In the white matter there is a major development of axons, glial cells and oligodendrocytes [15]. Altogether, these data stress the relevance of impaired brain development, size, structure, connectivity and function in the PTNB at this stage [15]. Central nervous system (CNS) immaturity rests, among other factors, on the fragile vascular structure of the germinal matrix (GM), low neuronal migration, poor myelinization of the white matter and exponential growth of the grey matter [1]. These limitations result in different types of brain lesions in the PTNB, including white matter injury (usually associated with neuronal and axonal disturbances in the cerebral cortex and other gray matter areas), intracranial hemorrhages (including GM, intraventricular and intraparenchymal), cerebellar injury [5,15,23,24], periventricular leukomalacia, periventricular hemorrhagic infarction with subsequent posthemorrhagic hydrocephalus [25] and posthemorrhagic ventriculomegaly [21].
Germinal Matrix-Intraventricular Hemorrhage
GM-IVH represents the most important neurological complication of the PTNB [26,27]. It is the most common intracranial hemorrhage, whereas subdural and subarachnoid hemorrhages are less frequent [7]. Technological advances in neonatal intensive care and perinatal medicine have significantly increased the survival rates of PTNB suffering GM-IVH [28,29], especially extremely PTNB [30]. Nevertheless, increased survival rates are accompanied by an increase in GM-IVH morbidity [31], since GM-IVH is responsible for severe complications in the majority of PTNB [32]. GM-IVH is caused by the rupture of GM vessels. The GM is a highly vascularized structure located in the periventricular subependymal region and a source of neuronal and glial cells in the immature brain, which will migrate during fetal brain development [5,32]. These glial precursors will develop into oligodendrocytes, white matter astrocytes and GABAergic neurons of the thalamus and cerebral cortex [5]. Initially the GM surrounds the whole fetal ventricular system; it begins to regress at 28 weeks until it disappears at full term [29,32]. It has been described that the GM starts to involute after 32 gestational weeks and, consequently, the risk of hemorrhage decreases from that time on [7]. However, in the premature brain (<32 weeks of gestation) the cerebral white matter is occupied by premature oligodendrocytes and oligodendrocyte precursor cells, which are much more sensitive to excitotoxicity and oxidative stress than mature oligodendrocytes [33]. In very low birth weight babies, glial precursors are migrating into the cerebral cortex at approximately the time of birth [23], and alterations at this moment may result in a deficit of oligodendroglial and astrocytic precursor cells that can affect myelinization and cortical development [15,23,34].
GM-IVH can also exceptionally occur in full-term newborns and may be due to maternal risk factors [23,35,36] or severe asphyxia during childbirth [37], accounting for 3-5% of cases (Matijevic et al., 2019). However, in full-term newborns, most GM-IVH originates in the choroid plexus and less frequently in the GM, unlike in the PTNB [38]. Therefore, GM-IVH occurs almost entirely in PTNB, especially those born with <1500 g and/or at <32 weeks of gestation, who are very vulnerable to ischemia and bleeding [7] owing to their immature CNS, hemodynamic instability [38], difficulty in autoregulating cerebral blood flow and compensating for its fluctuations [1], and extreme sensitivity to hypoxia and to changes in osmolality and tension [1]. As a consequence, when there are changes in blood pressure, arterial carbon dioxide partial pressure and cerebral blood flow, PTNB cannot compensate for these variations and the fragile GM vessels can easily break [39], causing bleeding from the GM [34]. The fragility of the GM in the PTNB is due, among others, to: (i) a capillary network with large vessels and weak endothelial walls [5]; (ii) vessels with an endothelial layer containing few pericytes [24], because of reduced transforming growth factor β1 (TGF-β1) signaling; (iii) an unstable basal lamina, as a consequence of decreased fibronectin expression [27] and collagen deficiency; (iv) a blood-brain barrier (BBB) with discontinuous astrocyte prolongations [5]; (v) a weak cytoskeletal structure supporting the blood vessels, caused by limited glial fibrillary acidic protein expression in the astrocyte end-feet, which affects the mechanical resistance of the blood vessels [5,24]; and (vi) a vasculature lacking self-regulation mechanisms to modulate the blood vessel lumen under fluctuating hemodynamic conditions [5]. Furthermore, the GM has a rich terminal vascularization with an intense metabolism [34], which predisposes to vessel rupture in the subependymal area [24,34].
Normally, GM-IVH originates in the first days of life [15,40] and rarely occurs during birth [41]. In more than 90% of cases GM-IVH appears in the first week after birth [40], and cases in which it occurs after the third day of life are rare [21]. The GM-IVH may spread over the following days, block the venous drainage of the terminal veins, affecting the adjacent parenchyma, and lead to ventriculomegaly by obstruction of cerebrospinal fluid (CSF) circulation [32]. Typically, GM-IVH is categorized into 4 grades according to its severity [42,43]. Grade I and II lesions are known as mild GM-IVH, and grade III and IV lesions are considered severe GM-IVH [44]. Grade IV corresponds to parenchymal hemorrhage, i.e., periventricular venous infarctions with hemorrhagic evolution [5].
Currently, grade IV is no longer considered a propagation of the original hemorrhage, but a consequence of the obstruction of the venous drainage, with a consequent venous infarction and hemorrhage of the adjacent tissue (periventricular hemorrhagic infarction) [25] (Figure 1). This, among other reasons, has led to other classifications being proposed for grading the severity of GM-IVH [45]. In many cases GM-IVH and periventricular hemorrhagic infarction are clinically silent and detected during routine cranial ultrasound. Some infants present subtle changes in the level of consciousness, limb movement, tone and eye movement in the hours to days after the GM-IVH. With extensive hemorrhage, a catastrophic deterioration occurs, with stupor, "decerebrate" posturing, generalized tonic seizures and hypotonia [41].
GM-IVH Neurodevelopmental Disabilities
As previously stated, improvements in medicine and neonatal care have increased the survival rates of the PTNB [46], which also translates into a higher burden of associated disabilities [1,5,23]. PTNB exchange the protective environment of the womb for the stress of the neonatal intensive care unit, with a multitude of physical and sensory stimulations for which they are unprepared. In addition, when newborns are separated from their mothers, they do not receive the necessary biological and maternal emotional care, which triggers adverse short- and long-term developmental consequences, including structural and functional alterations of brain development and dysregulation of the hypothalamic-pituitary-adrenal axis stress response system [47], among others. Moreover, brain abnormalities are directly related to an increased risk of sensory [7,23,35], cognitive and motor [33,41] impairments. PTNB with severe GM-IVH commonly suffer developmental delay [23,35,48], associated with cerebellar abnormalities [22], as well as consequent cerebral palsy [7,23]. Previous studies have reported that approximately 10% of PTNB with severe GM-IVH will develop cerebral palsy [49], and these children will frequently suffer spastic diplegia, in which both legs are affected [5]. Nevertheless, other studies have shown that very PTNB with periventricular hemorrhage develop cerebral palsy in up to 42% of cases [50]; different studies thus report inconsistent outcomes, underscoring the difficulty of these assessments. Other alterations include visual impairment and hearing loss, which may affect up to 3% of toddlers. Recent studies have reported that 15.6% of children had visual deficiency and 7.8% presented hearing impairment [34]. Other studies have reported visual problems associated with the severity of the IVH (ranging from 26.1% in grade I IVH up to 45.5% in grade III IVH) without much effect on acoustic impairment [51], and a recent review and meta-analysis has reported no visual or hearing impairment after periventricular hemorrhage [10]. In addition, extremely premature and very underweight children have deficits in intellectual quotient, expressive and receptive language skills and spatial reasoning [15,16]. Thus, it has been described that GM-IVH doubles the need for special education in very preterm and very low body weight infants with mild GM-IVH, compared to PTNB without GM-IVH [23]. Also, Mukerji et al. have described deficits in academic performance for PTNB with mild and severe GM-IVH [29]. Likewise, a study by Holwerda et al. indicated that school-aged PTNB with grade III GM-IVH had alterations in intelligence, visual perception, attention and emotional functioning when compared to PTNB without GM-IVH. Nevertheless, these authors did not detect any disabilities when visual-motor integration, verbal memory and executive functioning were evaluated [48]. In addition, several studies describe that PTNB (including late PTNB) are at higher risk than full-term newborns of suffering neuropsychiatric problems such as autism spectrum disorders, attention deficit hyperactivity disorders, anxiety, depression and antisocial behavior [3,52]. Interestingly, PTNB with periventricular venous hemorrhagic infarction showed worse motor alterations than cognitive problems [48].
Although severe GM-IVH is undoubtedly associated with impaired neurodevelopment [23,29], the outcome of low-grade GM-IVH has not yet been agreed upon [10,33], and it has been suggested that neurodevelopmental alterations in these patients might be exclusively associated with prematurity [29]. Nevertheless, Radic et al. describe that disabilities associated with GM-IVH are highly dependent on the severity of the lesions [26], and in line with these observations it has also been reported that even lower degrees of GM-IVH in the PTNB predispose to neurological complications, such as cerebral palsy and developmental delay [26,53]. Previous studies have shown that there are no significant differences in the short term when neurological development and cerebral palsy are compared between low-grade GM-IVH and non-GM-IVH controls [36]. Nevertheless, follow-up must continue until school age, as significant differences have been detected up to age 16 [23,36], and other authors describe long-term consequences for the neurodevelopmental outcome of PTNB with low-grade GM-IVH [30].
GM-IVH Associated Brain Damage
After the rupture of the GM vessels, blood is deposited in the intraventricular space and red blood cells are lysed, releasing hemoglobin into the intraventricular CSF [54] and the periventricular white matter [30]. This hemoglobin is highly reactive and spontaneously self-oxidizes from oxy-hemoglobin to met-hemoglobin and superoxide ion [54]. After several reactions, heme is converted into hemosiderin, which can escape from the CSF and deposit on the brain stem and the surface of the cerebellum, damaging these structures [33] and altering the normal development of the cerebellar cortex [30]. In addition, degradation of the heme group produces bilirubin, carbon monoxide and free iron. Free iron can generate reactive oxygen species that damage lipids, proteins and DNA. It can also insert into cell membranes with cytolytic effects, leading to periventricular cell death [33,54]. Altogether, GM-IVH causes the loss of cell progenitors and greater white matter injury due to oxidative stress and pressure, contributing to the pathogenesis of periventricular leukomalacia [5]. Furthermore, even small hemorrhages of the GM can have negative effects on the migration of neuronal and glial cells in the brain of a premature infant [23], and higher ventricular volumes have been related to lower cognitive efficiency in PTNB with GM-IVH [25]. GM-IVH may also be complicated by hydrocephalus [5]. When GM bleeding invades the ventricular system, CSF circulation can be obstructed and an ependymal inflammatory response may also occur, causing a decrease in CSF reabsorption and resulting in posthemorrhagic hydrocephalus [34].
Neuroinflammation and Microglia in the GM-IVH
During fetal development, neural and glial precursor cells are found at the head of the caudate nucleus, below the lateral ventricles [55]. Among the glial cells, microglia are the macrophages of the CNS [56,57] and the first to respond to ischemia [55]. They play essential roles such as communication between cells and phagocytosis of microbes, cellular debris, apoptotic and cancerous cells, and other foreign substances. Moreover, unlike monocytes, microglia have definite effects on embryonic vasculogenesis and vascular anastomosis [58]. Interestingly, there are differences between adult and newborn microglia [55,59]. In the developing brain microglia have an amoeboid morphology, while in the mature brain they have a ramified form [57]. Besides, in the adult brain they respond rapidly to injury by producing inflammatory cytokines that aggravate brain injury [58], whereas in the postnatal brain they prune synapses and form synaptic circuits [59]. Likewise, during brain development, activated microglia are involved in the elimination of transcallosal projections, vascularization and angiogenesis, myelinization, programmed cell death and axonal guidance of white matter, among others [57]. PTNB have a higher number of microglia in the periventricular white matter relative to other parts of the brain, and even more so in areas close to the GM. Therefore, any degree of hemorrhage can activate the microglia, triggering cellular apoptosis, although the degree of activation depends on the severity of the GM-IVH [57]. Accordingly, in neonatal brains, depletion of the microglia after the lesion removes endogenous protective mechanisms that would otherwise improve the outcome of the injury [59]. In line with these studies, the dual role of pro-inflammatory activated microglia after ischemic brain injury and the neuroprotective-neuroreparative role of microglia have been further taken into consideration [56,58]. While controversial, the microglial activity state may shift between the classic activation state (M1 phenotype) and the alternative activation state (M2 phenotype). The M1/M2 polarization of microglia is characterized by the expression of the M1 (CD68, CD86 and inducible nitric oxide synthase) and M2 (CD206, Ym1 and Arginase-1) genes [60]. The M1 phenotype is induced by lipopolysaccharides, interferon-γ, TNF-α and stimulation of Nod-like receptors or Toll-like receptors (TLR), and secretes the pro-inflammatory cytokines TNF-α, IL-1β, IL-6, IL-12 and IL-23 [56], which prevent CNS repair and spread tissue damage [60,61]. In contrast, the M2 phenotype is induced by IL-4, IL-10, TGF-β and IL-13, and secretes the anti-inflammatory cytokines IL-10 and TGF-β [55,56], promoting an anti-inflammatory response [56] that resolves local inflammation and eliminates cellular debris, eventually allowing the brain to recover [60,61]. Therefore, a decrease in microglia and, consequently, a deficit of anti-inflammatory cytokines, together with an increase in the astrocytic reaction, are associated with a more severe injury [55]. For this reason, it would be interesting to investigate attenuating the M1 phenotype to favor activation of the M2 phenotype, as suggested in preclinical studies [56,59]. Despite this, the M1/M2 phenotype hypothesis has limitations, as more and more subsets of microglia phenotypes are being found. This may mean that M1 and M2 are the extremes of a wide range of macrophage/microglia subsets, each of which plays a critical immunomodulatory role in GM-IVH [56].
After GM-IVH, the hematoma applies mechanical pressure to the glia and neurons [59], causing cytotoxic edema and necrosis, known as the primary lesion [56]. The secondary lesion results from the entry of blood components into the brain tissue and from the resident and peripheral immune cells that trigger the secretion of pro-inflammatory mediators, extracellular proteases and reactive oxygen species, together with an alteration of the BBB [56]. Previous studies have shown that blood-derived products, such as thrombin and plasminogen, can contribute to the activation of microglia, with the consequent release of inflammatory cytokines that damage the adjacent white matter [57]. Extracellular hemoglobin metabolites also have pro-inflammatory effects on endothelial cells and macrophages, and may even activate innate immunity, acting as ligands of the TLR system [54]. As a consequence, inflammatory fibrosis or arachnoiditis with gliosis may be triggered, promoting an imbalance in CSF production, absorption or transit [62]. In addition, the microglia are responsible for antigen presentation [55] as well as phagocytosis at the site of the hematoma and in the adjacent damaged or dead tissue [59,61]. Microglia also promote astrogliosis [63], which contributes to cytotoxicity and necrosis [59], while inflammatory mediators such as the interleukins IL-1β and IL-6, TNF and matrix metalloproteinases [55] further damage the tissue [56].
Whereas previous studies have not been able to confirm a link between inflammation and hemorrhage [64], other studies have found pro-inflammatory cytokines in the fetus that can predict the risk of GM-IVH [64-66]. Pro-inflammatory fetal cytokines (TNF, IL-6) have been detected, and a direct relationship has been found between umbilical cord IL-6 concentrations and GM-IVH [66]. Likewise, IL-1 and IL-8 are increased in children with cerebral palsy, along with vascular endothelial growth factor (VEGF) [65]. However, CCL-18 chemokine levels are low in the umbilical cord of newborns, while CCL-18 levels increase from 32 weeks of gestation, as the risk of hemorrhage decreases [64]. CCL-18 receptors are found in the choroid plexus, periventricular capillary endothelium, ependymal cells and the GM. Therefore, it has been suggested that low CCL-18 levels can predict the risk of a grade II-IV GM-IVH [64]. Also, XCL-1 has been reported to be reduced in the CSF of hydrocephalic PTNB, while CCL-19 is significantly increased [62].
Animal Models of GM-IVH
The severity and complications associated with GM-IVH [67] have opened the door to the development of animal models to further understand the neuropathological features as well as to explore new therapeutic options [68]. Rabbits [63], dogs [69], lambs [70], sheep [71], rats [72], mice [32] and pigs [73] have previously been used to study GM-IVH of the PTNB. However, rodents are the most commonly used models. Mice and rats are regularly used because their brain development is well known [74] and their neuroanatomy [75,76], proliferation and differentiation processes [77-80] and synaptogenesis [81-83] have been studied in depth, making it possible to establish brain development comparisons with the human brain. Moreover, alterations of the GM [79,80,84], as well as motor and behavioural consequences [85-87], have also been studied in rodents, supporting their suitability as models to reproduce the GM-IVH of the PTNB [88,89]. Nevertheless, relevant differences need to be taken into consideration when using rodents to reproduce GM-IVH pathology. On the one hand, rodent brains at birth are more immature than the brain of the human PTNB: brain development in rats at 6 days of age (P6) is equivalent to 35 weeks of gestational age in humans [90], while mouse brain development at birth (P0) is equivalent to 22-24 gestational weeks in humans [84]. On the other hand, GM-IVH does not occur spontaneously in rodents, so transgenic mouse models have been created to induce spontaneous bleeding. In addition, administration of glycerol to rabbits, as well as intraventricular administration of autologous blood or collagenase to rodents, is regularly used to induce GM-IVH in animals.
Genetically Modified Models
Although the genetically modified animal models that reproduce a GM-IVH are limited, they provide a relevant tool to study spontaneous bleeding in the brain. To our knowledge, only a few transgenic models have been developed for the study of GM-IVH, including a transgenic mouse with mutations in integrin genes [91], a transgenic mouse with mutations in the procollagen IV gene [92] and mice overexpressing VEGF [93]. Interestingly, transgenic models developed to reproduce other diseases, such as cleft palate, can also present GM-IVH [94].
Integrins are heterodimeric receptors, composed of non-covalently associated α and β subunits, strongly expressed in the CNS [95]. Integrins link the extracellular matrix to the cytoskeleton, some soluble factors and cell surface proteins [96]. They are necessary for the postnatal migration of glial cells [97] and act as adhesion molecules that mediate multicellular interactions in the BBB. Complete ablation of the gene for the αv integrin subunit causes 100% lethality of the offspring. Among these, 70% die between E9.5 and E10.5, and those that survive to E12.5 develop brain hemorrhages within the ganglionic eminence of the telencephalon. Also, αv-null neonates are severely hydrocephalic and die within a few hours of birth [91,98]. Further assessment of the defects that contribute to brain hemorrhage in αv-null embryos revealed normal assembly of pericytes and endothelial cells in the brain, whereas a compromise was observed in the interaction between brain microvessels and parenchyma [91]. Other studies have shown that global deletion of the gene for β1 integrin specifically also results in early embryonic death at E5 [99]. Conditional deletion of β1 integrin leads to abnormal vascular patterning and embryonic death at E9.5-E10, and whereas heterozygous endothelial β1 integrin deletion does not affect postnatal survival, it reduces β1 expression by 40% and interferes with normal vascular remodeling [99]. Deletion of the αvβ3, αvβ5 or αvβ6 genes results in viable and fertile mice that do not show brain hemorrhages [91]. On the other hand, ablation of β8 integrin results in lethality rates that reach 65% by midterm, with the remaining 35% dying shortly after birth. These embryos present vascular abnormalities, leaky capillaries, irregular capillary patterning and endothelial cell hyperplasia [100]. Supporting these observations, other studies have shown that induced loss of β8 expression in GM neural progenitors also leads to defective vessel development, region-specific vascular defects and hemorrhages similar to those observed in human GM hemorrhage [101]. Altogether, these studies show that integrins establish and maintain vascular integrity through the interaction of the microvessels with the cells of the brain parenchyma.
In 2005, Gould et al. developed a mouse model to study porencephaly that harbours a semidominant mutation in the procollagen type IV gene, inhibiting the secretion of type IV collagen [92]. Interestingly, this mouse also presents spontaneous bleeding, making it appropriate as a GM-IVH model. Procollagen IV is part of the basal membrane of different tissues, including the epithelium of blood vessels, and mutations in the procollagen IV gene are associated with vascular problems in adults and fetuses with cerebral hemorrhages. About 50% of the heterozygous transgenic procollagen IV mice die on the day of birth, and approximately 18% of the survivors have porencephaly. These mutant mice are also small, have reduced viability and may present multiple pleiotropic phenotypes, such as ocular anomalies, mild renal anomalies or reduced fertility [102].
Yang et al. have also developed a tetracycline-regulated transgenic (VEGF-Tet) system to examine the effects of inducing VEGF in the GM. VEGF is largely implicated in angiogenesis [103] and vascular maturation. Previous clinical studies have shown a significant increase of serum VEGF levels in GM-IVH patients compared with control babies. Interestingly, the occurrence of GM-IVH increases as serum VEGF levels rise, and the authors also reveal that CSF levels of VEGF can predict the need for permanent shunt placement [104]. In line with these observations, VEGF-Tet-Off mice initially develop a dense network of loosely adjoined endothelial cells and pericytes near the lateral ventricles that evolves into a low-vascularity periventricular area [93], reproducing the weak and immature vascular system of the PTNB [93,105] and GM-IVH-like anomalies. As described in patients, the severity of the lesions in VEGF-Tet-Off mice ranges from bleeding with ventriculomegaly to bleeding with destruction of white matter [93]. Yang et al. also suggest that VEGF activates the induction of neurovascular proteases, including matrix metalloproteinase 9, cathepsins and caspase-3, as feasible contributors to the lesion [93]. Whereas this is an extremely useful model for assessing the causes of intracranial hemorrhage, the fact that over 80% of VEGF-Tet-Off embryos die before birth limits the utility of this animal for assessing the evolution of the lesions or for studying therapeutic approaches.
While not specifically developed to reproduce the GM-IVH of the PTNB, crossing heterozygous male mice carrying the Tgfb3-Cre and Alk5 knockout alleles with female mice homozygous for the floxed Alk5 allele (Alk5/Tgfb3-Cre mice) [94] results in brain complications that resemble the GM-IVH of the PTNB. Postmortem studies revealed that these mice can suffer hydrocephalus with pronounced dilation of the ventricles, as well as compression of the cerebellum due to an increased volume of CSF. The results of this research show that TGF-β signaling is implicated in maintaining vascular integrity within the GM [94].
Altogether, transgenic mouse models are currently providing new avenues to study the etiology of GM-IVH, as well as preventive possibilities. Nevertheless, they are only partially characterized, and lethality rates and/or short lifespans limit the long-term studies and neurobehavioral assessments needed to fully understand the complex pathology of GM-IVH.
Lesion-Induced Models
Given that spontaneous bleeding hardly occurs in animal models, GM-IVH is usually induced by different approaches. Although the pathophysiology of the lesions is largely different from that observed in PTNB, these models have the advantage of allowing the identification of the exact time and location of the bleeding. Most models use glycerol [106-108], blood [72,89,109-111] or collagenase [60,72,112] to induce GM-IVH. Whereas not strictly a drug-induced lesion, intracortical injection of phosphate-buffered saline into P3 or P5 plasminogen activator inhibitor 1 KO mice results in lesions that resemble GM-IVH, including white matter and cortical lesions as well as ventricular enlargement, accompanied by alterations in motor activity. The deleterious effects are not observed when lesions are induced at P10, suggesting that microvascular maturity directly determines the outcomes in this animal model [113].
Glycerol-Induced GM-IVH in Rabbits
Rabbits have been widely used to reproduce GM-IVH. The model relies on the similarities between the rabbit and human vascular structure of the basal ganglia [106] as well as on their comparable brain development. Among others, the maximal growth of the brain occurs prenatally in both rabbits and humans, the structure and function of the GM are similar, and the GM involutes at birth in both species. Also, preterm rabbits have high survival rates, since lung maturation is completed just before term [107]. Studies in rabbits have shown that GM vessels in premature E28 animals present the structural characteristics of a BBB. However, many ultrastructural abnormalities are observed in the vasculature: the basal lamina is thin, discontinuous and poorly defined, smooth muscle cells and collagen are absent, and astrocytes are immature, supporting the incapacity of the ganglionic eminence vasculature to endure the transmural pressures and other factors that contribute to BBB instability and ultimately lead to the development of GM-IVH [114] (see Table 1). GM-IVH can develop spontaneously in rabbits, allowing the study of the triggering factors; however, this only occurs in about 20% of preterm rabbit pups [115,116], limiting the utility of the spontaneous model.

Table 1. Complications associated with GM-IVH induced by glycerol, blood and blood derivatives, and collagenase lesions.
In order to increase the number of rabbits with GM-IVH, intraperitoneal administration of glycerol has been regularly used in preterm pups [107,122,150]. Glycerol can cause intravascular dehydration, an increase in serum osmolality and a consequent decrease in intracranial pressure, leading to the rupture of fragile vessels [68,107]. With this approach, premature rabbits are usually born by cesarean section at E28-29, glycerol is administered intraperitoneally a few hours later [63,107,108,117,119] and GM-IVH is detected in up to 95% of the rabbits [107]. Postmortem studies have shown reduced white matter myelinization, and magnetic resonance imaging of the brains reveals a reduction in fractional anisotropy and white matter volume, accompanied by ventriculomegaly as well as stretching and thinning of the cortex [117]. These observations have since been confirmed, and reductions in myelin basic protein have been detected in the corpus callosum and corona radiata of rabbits with GM-IVH [118]. Cerebellar alterations are also observed after systemic glycerol administration, affecting cerebellar volume, foliation and proliferation in a dose-dependent manner. Other studies have reported increased neuronal apoptosis after glycerol administration to preterm rabbits [120], accompanied by cellular infiltration and neuronal degeneration in the periventricular area [119]. In line with these observations, apoptosis and axonal damage, revealed by beta-amyloid precursor protein and neurofilament immunolabeling, are also observed around the ventricles after glycerol administration [107]. On the other hand, previous studies have reported a large amount of extracellular hemoglobin deposited in the periventricular white matter in this GM-IVH model, supporting the contribution of hemoglobin to brain damage, as observed in patients [151] (see Sections 2.2 and 2.3).
Brain pathological features are closely associated with behavioral alterations after glycerol-induced GM-IVH, and previous studies have reported that 25% of pups present motor impairment with hypertonia [117] (Table 1). Pups are reported to look normal, and motor and sensory alterations are limited at 24 and 72 h. However, poor activity is observed in GM-IVH rabbits, and about 13% of the pups that develop severe hemorrhages have seizures [107]. Later behavioral assessment reveals that at P14, glycerol-treated rabbits present weakness of the extremities, abnormal gait and limited walking speed [117]. Hind limb function, the righting reflex and locomotion on a 30° incline are impaired [122], whereas visual and sensory activities seem to be preserved [117] (Table 1).
Some authors have questioned the model because of the direct toxicity of glycerol in different organs, as well as the limited ability of glycerol to reproduce GM-IVH outcomes [150]. Nevertheless, the major shortcoming of this model is the diffuse character of the bleeding, which can affect different brain compartments and sometimes provoke subarachnoid, subdural, deep white matter or cortical-basal ganglia hemorrhages [152].
Blood and Blood Derivates-Induced GM-IVH in Rodents
Intraventricular administration of blood or hemoglobin has been commonly used to induce GM-IVH in rats and mice [89,109-111,123,129]. Lesions are induced at early stages of life, approximately P1-P7 [124,127,153], when rodent brain development resembles that of the PTNB. Whereas most studies use maternal blood [109,128,130] or blood from other animals [88,127], some experiments have used autologous blood to induce GM-IVH in both rats [153] and mice [126,154]. Autologous blood administration has some advantages, including the lack of confounding factors such as exogenous proteins, the possibility of studying natural coagulation and inflammation pathways after spontaneous hemorrhages [155,156], and the avoidance of non-physiological substances with potentially misleading consequences [126]. However, the technical approach is especially complicated in newborn pups. In addition, whereas this approach rapidly induces a hematoma, it does not seem to induce the actual rupture of the brain blood vessels [156]. Nevertheless, the outcomes of the different experimental approaches are quite similar, independently of the method and the animal model.
As commonly observed in patients, blood-induced lesions regularly result in GM-ventricle damage that includes posthemorrhagic ventricular dilatation [88,124-127], which may affect over 65% of the pups [89]. Other studies have reported posthemorrhagic ventricular dilatation in up to 90% of the animals, depending on the severity of the lesions [88], disruption of the parenchyma around the GM [126], loss of periventricular white matter [89] and ependymal alterations [89], including ependymal nodules with iron-laden macrophages and subependymal rosettes [88]. In line with these observations, ventricular enlargement is also observed after hemoglobin injection in neonate rats [123]. Interestingly, some studies have reported hemosiderin accumulation in the periventricular areas of P21 rats after intraventricular blood lesions [89], as well as hemosiderin staining in the ventricular system [88], supporting the implication of blood derivatives in the brain damage observed after GM-IVH. Other approaches have shown that only lysed red blood cells or iron lesions result in ventricular enlargement, whereas packed red blood cells do not [157].

Demyelination is also commonly observed, and previous studies have shown that bilateral lesions induced with maternal blood in P4 rats may result in corpus callosum thinning and cell death, accompanied by reduced myelinization in the long term (P32) [124]. Reduced myelin basic protein has also been corroborated in a similar model with similar experimental timing, accompanied by cell death in the periventricular area [109,111]. Similar outcomes have been observed in P5 mice injected with autologous blood, showing poor myelinization and white matter compromise at P23 [125], and in neonate rats treated with hemoglobin [123].

Apart from direct ventricular damage and demyelination, blood lesions may affect cell proliferation in the lesion area, the subventricular zone (SVZ). Controversial results have been observed at this level: whereas cell proliferation in the GM was bilaterally suppressed from 8 h to 7 days after autologous infusion of blood in P1 mice [141], Dawes et al. have more recently described that autologous blood administration to P0 mice induces a transient increase of proliferative cells in the ventricle wall at P4. Nevertheless, this burst of glial progenitor cells in the SVZ neurogenic niche does not seem to integrate within the cortex [126]. Neuronal loss in the hippocampus after severe GM-IVH damage is also accompanied by reduced cell proliferation and neurogenesis (assessed by BrdU and nestin immunostaining, respectively) in the dentate gyrus of the hippocampus [128]. In line with these observations, hemoglobin-induced lesions also lead to hippocampal neuronal loss and reduced hippocampal volume [129]. Other studies have reported a reduction in corpus callosum thickness after blood-induced GM-IVH in rats [88,121]. Further assessment also reveals that lesion-induced damage is observed all over the brain and that cortical development is affected [126]. Blood breakdown products, such as hemosiderin, can also be detected in the cortex, far from the injection site, as an indication of cortical infarcts [88]. BBB leakage [124] and flattening of the choroid plexus [88] are also observed after blood-induced lesions.
Histopathological alterations observed after blood-induced GM-IVH also result in behavioral deficits. Previous studies have reported anxiety-like behaviors in the open field [125] as well as learning and memory alterations in the passive avoidance test and the Y-maze [128]. However, most studies have been directed towards the analysis of motor activity, since over two-thirds of infants with progressive posthemorrhagic ventricular dilatation are known to develop motor deficits [89]. The negative geotaxis test is commonly used to assess motor disabilities in these models and, whereas some discrepancies are observed, an overall impairment is detected at different time points [88,109,111,127]. Other frequent motor abnormalities include difficulty performing in the rotarod test [121,124] or the grip traction test [89] (Table 1).
Collagenase-Induced GM-IVH in Rodents
Intraventricular administration of collagenase has also been widely used to reproduce GM-IVH in neonate rats and mice (≈P7) [32,112,149]. Bacterial collagenase is a protease that lyses the extracellular matrix, causing the rupture of cerebral blood vessels [156]. Collagenase administration is accompanied by an inflammatory response at the site of injection [158], similar to that observed in the blood infusion model [154], and it may also induce ischemic cerebral injury [156]. On the other hand, collagenase models can be used in different species and are easy to reproduce. Also, the induced damage is dose-dependent, and the specific lesion site and timing can be controlled in the experiments.
In 2010, Alles et al. [132] compared unilateral and bilateral collagenase lesions in P6 rats. Brain volume was significantly reduced after bilateral lesions in the short term (P7), and this effect was observed after both unilateral and bilateral lesions in the long term (P30). Ventricular enlargement and brain atrophy have been widely reproduced in both rats [133-135] and mice [32] after collagenase injection into the ventricles or the proximal ganglionic eminence, reproducing the pathology of the GM-IVH of the PTNB. Hydrocephalus is also commonly observed in this model [135,136,139,140], and BBB permeability, as well as the mechanisms implicated, has been addressed in detail in this model (Table 1). The Evans blue extravasation assay shows significant BBB leakage after collagenase injection in P7 rats [147,159]. Rolland et al. have observed alterations in markers of BBB integrity, such as decreased pAkt/Akt and GTP-Rac1/Total-Rac1 ratios as well as reduced expression of ZO1, occludin and claudin-3 [140], and other studies have shown similar outcomes [147]. A recent study suggests that the characteristic hydrocephalus and ventriculomegaly derive from a decrease of CSF transport through the glymphatic system, mediated by aquaporin 4 [135]. Other studies have also reported increased thrombin activity after collagenase lesions, supporting its role in hydrocephalus formation [160]. Short- and long-term neuronal death is observed [32], and accumulation and upregulation of iron-handling proteins are also observed around the lesions [133]. However, the effects of the collagenase lesions are also detected in distant regions, and cortical thinning [32,131,136-138] as well as cortical tau hyperphosphorylation have been observed [32]. The general affection of the brain is supported by the widespread presence of microhemorrhages in the cortex, hippocampus and striatum of P7-lesioned mice in the long term (P70) [32], and white matter lesions and basal ganglia loss are also detected after collagenase lesions [131]. In addition, neurogenesis is severely compromised in the SVZ. Likewise, reduced doublecortin immunostaining is observed in the cortex and the hippocampus [32]. The extracellular matrix is affected by collagenase-induced GM-IVH, and increased extracellular matrix protein proliferation is detected in the long term [131]. Interestingly, collagenase-induced lesions also result in alterations of feasible peripheral markers of GM-IVH, including ubiquitin carboxy-terminal hydrolase L1 and gelsolin, as observed in patients, supporting the clinical relevance of this model [32].
Extensive collagenase-induced brain damage also translates into significant behavioural impairments in different tasks. General developmental delay as well as cognitive and motor disabilities are regularly observed [134]. Eye opening latency is delayed in rats with GM-IVH, and the frequency and duration of grooming and rearing are reduced [146]. Motor activity is severely compromised when animals are assessed in the righting reflex, negative geotaxis or rotarod tests. These limitations have been observed both in the short and the long term [112,135,147,148], and similar limitations have been observed when sensory motor function is assessed in rats with the composite neuroscore or foot fault tests [72,140]. As might be expected, when unilateral and bilateral collagenase lesions are compared, ambulation, surface righting and negative geotaxis outcomes are more severely affected in bilaterally infused rats [132]. Alterations in the open field test have also been detected in rats after collagenase lesions [72,146]. In line with these observations, long-term memory deficits have been widely documented in rats analysed in the Morris water maze [72,135,140,149]. Similar outcomes have been reported in mice when cognition is assessed in the Morris water maze or the novel object discrimination test [32], showing an overall behavioural compromise, as observed in patients (Table 1).
Neuroinflammation and Microglia in Animal Models of GM-IVH
GM-IVH triggers an important neuroinflammatory response, which results in a secondary brain injury that may ultimately underlie the long-term neurological deficits [146]. Therefore, inflammation and anti-inflammatory therapies have been widely assessed in many previous basic science studies. Since transgenic models have limited life expectancy, studies on the related neuroinflammation are scarce. Nevertheless, the two most up-regulated transcription factor genes in mice overexpressing VEGF in the GM are ETS1 and hypoxia-inducible factor 2α, both implicated in vascular inflammation. Moreover, ToppGene analysis shows that inflammatory responses are up-regulated in bitransgenic embryos [93].
The inflammatory process in lesion-induced animals has been studied in depth. Whereas some differences might be observed when etiologies and experimental approaches are compared, as described in adult models [154], the neuroinflammatory response seems to be reproducible across GM-IVH models. Interestingly, while an overall exacerbated inflammatory response is observed after local lesions [32,130,142] or systemic glycerol administration [63,108], it is noteworthy that pharmacological depletion of microglia before neonatal stroke in P7 rats significantly increases lesion size and the incidence of intraparenchymal hemorrhage. Activated microglia become an important source of TGFβ1 in the injured neonatal brain, which contributes to neurovascular protection, serves as a survival factor for cerebral capillaries and contributes to the stability of the BBB [58]. These observations support a dual role for microglia and suggest that neonatal microglia may have unique functions.
Previous studies have shown that both blood- [130] and collagenase-induced [32] lesions provoke reactive gliosis both in the short and the long term [141]. This resembles the alterations observed in patients, since white matter is severely affected by GM-IVH and is also extremely sensitive to inflammation and oxidative stress [123]. Increased neutrophil infiltration [107], increased microglia and astrocyte burdens [109,111,112,125,130,142], gliosis and glial scarring are commonly observed in the periventricular area [89,125,143]. However, increased microglia burden is also observed in brain regions distant from the lesion site, such as the cortex [32,107] or the hippocampus [144], supporting an overall inflammatory response that affects the whole brain [123]. It has also been reported that animals that develop hydrocephalus present much more severe gliosis than those without hydrocephalus [89]. Besides, it has been suggested that microglia may play an important role in initiating the immune response, while astrocytes may be involved in the later propagation of the inflammatory process [130]. Whereas microglia phenotype classification remains controversial, other preclinical studies show that GM-IVH interferes with the M1/M2 balance and the expression of pro-inflammatory cytokines [145]. In line with this idea, lesion-induced GM-IVH results in increased levels of pro-inflammatory cytokines, including IL-1β, IL-6 or TNF-α [109,145]. Other studies have also reported the implication of IL-17A and IL-17AR in GM-IVH pathology due to their role in inflammation and BBB breakdown after stroke. IL-17A and IL-17AR levels are increased, contributing to an endogenous reduction of silent information regulator 1 expression and increased cell proliferation markers after collagenase-induced GM-IVH in rats [138]. Similar outcomes have been observed after systemic glycerol administration to neonate rabbits, which show increased reactive microglia and increased levels of pro-inflammatory and chemotactic effector molecules, including IL-1β, IL-6, TNF-α, IL-8 or MCP [30,63,108]. Likewise, an upregulation of mRNA receptor genes for TLR-4, IL1R1, FAS or the transcription factor NF-κB is detected [120]. Whereas some studies have reported that inflammatory cytokines might be upregulated mostly in the first few days after the lesions [121], other studies have shown long-term effects of the inflammatory process in different models [32,144].
GM-IVH models have been used to further study feasible pathways responsible for the inflammatory alterations. It has been shown that GM-IVH inflammation might be mediated through AMPA receptors [63]. Also, the cannabinoid 2 receptor might mediate inflammation, since its expression is upregulated 24 h after collagenase-induced GM-IVH [161]. Neuroinflammation might be mediated by CAMKK2/AMPK/Nrf2 signaling [145], and GM-IVH-mediated endothelial nitric oxide synthase inhibition might also contribute to GM-IVH inflammation [146]. The inflammatory process has also been shown to be mediated by the IFNAR/JAK1-STAT1/TRAF3/NF-κB signaling pathway in neonate rats after collagenase- or blood-induced lesions [112,130]. A role of the IL-17RA/(C/EBPβ)/SIRT1 pathway has been described as a feasible mediator of the inflammatory response observed after GM-IVH injury [118], and other studies have reported the implication of the OX-2 membrane glycoprotein via the tyrosine kinase 1 pathway [147]. On the other hand, hyaluronic acid, as part of the extracellular matrix, accumulates after GM-IVH and regulates inflammation through CD44 and TLR2/4 receptors [118]. Blood and blood derivates directly trigger an inflammatory response, and hemoglobin and methemoglobin strongly correlate with TNF-α levels in neonate rabbits with GM-IVH [54]. Inflammation is followed by oxidative stress [123], and oligodendrocyte damage and myelination alterations are considered direct consequences of GM-IVH-induced inflammation [30,63]. Besides, reactive astrogliosis impairs neurogenesis and may mediate the observed neuron reduction in the hippocampus after collagenase lesions [128] (Table 1).
Inflammation is one of the most relevant pathological features associated with GM-IVH, and it has been suggested that anti-inflammatory therapy for GM-IVH should start early to protect the very sensitive developing white matter [123]. Therefore, animal models have been widely used to assess therapeutic approaches for a population in great need of new alternatives. In this sense, studies focusing on the effect of anti-inflammatory drugs have also contributed to the discovery/characterization of the mechanisms that underlie the above-described molecular pathways [136,144,145].
Conflicts of Interest:
The authors declare no conflict of interest.
The Hydroalcoholic Extract of Uncaria tomentosa (Cat's Claw) Inhibits the Infection of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) In Vitro
The coronavirus disease 2019 (COVID-19) has become a serious problem for public health since it was identified in the province of Wuhan (China) and spread around the world producing high mortality rates and economic losses. Nowadays, the WHO recognizes traditional, complementary, and alternative medicine for treating COVID-19 symptoms. Therefore, we investigated the antiviral potential of the hydroalcoholic extract of Uncaria tomentosa stem bark from Peru against SARS-CoV-2 in vitro. The antiviral activity of U. tomentosa against SARS-CoV-2 in vitro was assessed in Vero E6 cells using cytopathic effect (CPE) and plaque reduction assay. After 48 h of treatment, U. tomentosa showed an inhibition of 92.7% of SARS-CoV-2 at 25.0 μg/mL (p < 0.0001) by plaque reduction assay on Vero E6 cells. In addition, U. tomentosa induced a reduction of 98.6% (p=0.02) and 92.7% (p=0.03) in the CPE caused by SARS-CoV-2 on Vero E6 cells at 25 μg/mL and 12.5 μg/mL, respectively. The EC50 calculated for the U. tomentosa extract by plaque reduction assay was 6.6 μg/mL (4.89–8.85 μg/mL) for a selectivity index of 4.1. The EC50 calculated for the U. tomentosa extract by TCID50 assay was 2.57 μg/mL (1.05–3.75 μg/mL) for a selectivity index of 10.54. These results showed that U. tomentosa, known as cat's claw, has an antiviral effect against SARS-CoV-2, which was observed as a reduction in the viral titer and CPE after 48 h of treatment on Vero E6 cells. Therefore, we hypothesized that U. tomentosa stem bark could be promising in the development of new therapeutic strategies against SARS-CoV-2.
Introduction
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has caused serious public health problems since it was identified in Wuhan (China) in late 2019 [1]. The World Health Organization (WHO) declared COVID-19 a pandemic, and Venezuela and Uruguay were the last nations in South America to confirm their patient zero, with the region considered the pandemic epicenter after Europe [4]. Even though some vaccines have already been approved only with phase 3 results, currently, there is no preventive treatment or antiviral drug available against SARS-CoV-2 [5].
Nowadays, the World Health Organization (WHO) recognizes that traditional, complementary, and alternative medicine has many benefits [6]. Several candidates with possible antiviral effects have been explored from medicinal plants in the preclinical phase. Uncaria tomentosa (Willd.) DC. (U. tomentosa) belongs to the Rubiaceae family, is also known as cat's claw, and contains more than 50 phytochemicals [7]. Oxindole alkaloids (pentacyclic oxindole alkaloids (POA) and tetracyclic oxindole alkaloids (TOA)) have been recognized as a fingerprint of this species in some pharmacopeias, and several pharmacological activities are linked to this kind of alkaloids [8,9]. It has been demonstrated that U. tomentosa exerts an antiviral effect on human monocytes infected with dengue virus 2 (DENV-2) [10] and herpes simplex virus type 1 (HSV-1) [11]. In our previous in silico studies, U. tomentosa's components inhibited the SARS-CoV-2 enzyme 3CLpro and disrupted the interface of the receptor-binding domain of angiotensin-converting enzyme 2 (RBD-ACE-2) as well as the SARS-CoV-2 spike glycoprotein [12,13]. Additionally, bioactivities such as anti-inflammatory [14], antiplatelet [15], and immunomodulatory [16] effects have been reported in the literature. Furthermore, other components isolated from the stem bark such as quinovic acids, polyphenols (flavonoids, proanthocyanidins, and tannins), triterpenes, glycosides, and saponins were identified by instrumental methods [9,17-20]. The evaluation of natural compounds to inhibit SARS-CoV-2 in preclinical studies might lead to discovering new antiviral drugs and to a better understanding of the viral life cycle [21]. Several cell lines such as human airway epithelial cells, Vero E6 cells, Caco-2 cells, Calu-3 cells, HEK293T cells, and Huh7 cells are considered the best in vitro models to determine antiviral activity against SARS-CoV-2 [22]. Vero E6 cells highly express the ACE-2 receptor; they produce a high titer of viral particles and do not produce interferon [22]. Therefore, in vitro testing in this cell line constitutes the first step at the beginning of antiviral studies.
Although the pathophysiology of COVID-19 is not completely understood, a severe inflammatory process has been associated with the severity and progression of the disease [23]. Therefore, the immune activation so far described during the course of the infection, as well as the pulmonary injury, could be ameliorated by U. tomentosa, given its traditional use as an anti-inflammatory in South American folk medicine for years [24].
Based on its antiviral activity on other RNA viruses and our in silico findings against SARS-CoV-2, we assayed the hydroalcoholic extract of U. tomentosa stem bark from Peru as a potential antiviral agent in vitro against this severe acute respiratory syndrome coronavirus 2.
Obtaining Extract from Plant Material.
One hundred grams of the raw plant material (stem bark) of U. tomentosa was powdered and extracted with 700 mL of 70% ethanol at room temperature for 7 days. Then, the extract was evaporated by rotary evaporation to obtain a desiccated extract, which was stored at 4°C until further use.
Identification of the U. tomentosa Stem Bark Constituents by LC/MS (UHPLC-ESI + -HRMS-Orbitrap).
The identification of the main phytochemicals present within the hydroalcoholic extract of U. tomentosa was carried out on an LC Dionex UltiMate 3000 (Thermo Scientific, Germering, Germany) equipped with a degassing unit, a gradient binary pump, an autosampler with 120-vial well-plate trays, and a thermostatically controlled column compartment. The autosampler was held at 10°C, and the column compartment was maintained at 40°C. Chromatographic separation was performed on a Hypersil GOLD aQ column (Thermo Scientific, Sunnyvale, CA, USA; 100 mm × 2.1 mm id, 1.9 μm particle size) with an LC guard-column Accucore aQ Defender cartridge (Thermo Scientific, San Diego, CA, USA; 10 × 2.1 mm id, 2.6 μm particle size). The flow rate of the mobile phase containing ammonium formate (FA)/water (A) and FA/acetonitrile (B) was 300 μL/min. The initial gradient condition was 100% A, changed linearly to 100% B in 8 min, maintained for 4 min, returned to 100% A in 1 min, and maintained for 3 min. The injection volume was 1 μL. The LC was connected to an Exactive Plus Orbitrap mass spectrometer (Thermo Scientific, Bremen, Germany) with a heated electrospray ionization (HESI-II) source operated in the positive ion mode. The Vspray was evaluated at 1.5, 2.5, 3.5, and 4.5 kV. The nebulizer temperature was set at 350°C; the capillary temperature was 320°C; sheath gas and auxiliary gas (N₂) were adjusted to 40 and 10 arbitrary units, respectively. Nitrogen (>99%) was obtained from a generator (NM32LA, Peak Scientific, Scotland, UK). During the full scan MS, the Orbitrap-MS mass resolution was set at 70,000 (full-width-at-half-maximum at m/z 200, RFWHM) with an automatic gain control (AGC) target of 3 × 10⁶, a C-trap maximum injection time of 200 ms, and a scan range of m/z 100-1000. The ions injected into the HCD cell via the C-trap were fragmented with stepped normalized collision energies of 20, 30, 40, and 50 eV. The mass spectra were recorded in the AIF (all-ion fragmentation) mode for each collision energy at an RFWHM of 35,000 and an AGC target of 3 × 10⁶.
Cell Viability Assays.
The viability of Vero E6 cells in the presence of the U. tomentosa extract was evaluated using an MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay. Briefly, Vero E6 cells were seeded at a density of 1.0 × 10⁴ cells/well in 96-well plates and incubated for 24 h at 37°C in a humidified 5% CO₂ atmosphere. Then, 100 μL of serial dilutions (1:2) of the U. tomentosa extract ranging from 3.1 to 50 μg/mL was added to each well and incubated for 48 h at 37°C with 5% CO₂. After incubation, the supernatants were removed, cells were washed twice with phosphate-buffered saline (PBS) (Lonza, Rockland, ME, USA), and 30 μL of the MTT reagent (Sigma-Aldrich) (2 mg/mL) was added. The plates were incubated for 2 hours at 37°C with 5% CO₂, protected from light. Then, formazan crystals were dissolved by adding 100 μL of pure DMSO to each well. Plates were read using a Multiskan GO spectrophotometer (Thermo) at 570 nm. The average absorbance of cells without treatment was considered as 100% viability. Based on this control, the cell viability of each treated well was calculated. The treatment concentration causing 50% cytotoxicity (the 50% cytotoxic concentration, CC50) was obtained by nonlinear regression followed by the construction of a concentration-response curve (GraphPad Prism). For the MTT assay, 2 independent experiments with four replicates each were performed (n = 8).
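The viability calculation described above is simple arithmetic: each treated well's absorbance is normalized to the mean absorbance of the untreated control wells. A minimal sketch in Python with hypothetical A570 readings (all values below are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical A570 absorbance readings, one array per condition
control_abs = np.array([0.82, 0.79, 0.85, 0.81])   # untreated wells
treated_abs = {                                     # extract, ug/mL
    3.1: np.array([0.80, 0.78, 0.83, 0.79]),
    25.0: np.array([0.76, 0.74, 0.77, 0.75]),
    50.0: np.array([0.15, 0.13, 0.14, 0.16]),
}

baseline = control_abs.mean()   # defines 100% viability

for conc, wells in treated_abs.items():
    viability = 100.0 * wells.mean() / baseline
    print(f"{conc:5.1f} ug/mL -> {viability:5.1f}% viable")
```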
Antiviral Assay.
The antiviral activity of the U. tomentosa extract against SARS-CoV-2 was evaluated with a pre-post strategy where the treatment was added before and after the infection. Briefly, Vero E6 cells were seeded at a density of 1.0 × 10⁴ cells/well in 96-well plates and incubated for 24 h at 37°C with 5% CO₂. After incubation, 50 μL of double dilutions of cat's claw (3.1-25 μg/mL) was added to the cell monolayers for 1 h at 37°C with 5% CO₂. Then, the treatment was removed, and cells were infected with the SARS-CoV-2 stock at a multiplicity of infection (MOI) of 0.01 in 50 μL of DMEM supplemented with 2% FBS. The inoculum was removed 1 hour postinfection (h.p.i.), replaced by 170 μL of cat's claw dilutions, and incubated for 48 h at 37°C with 5% CO₂. Then, cell culture supernatants were harvested and stored at −80°C for virus titration by plaque assay and TCID50 assay. The supernatant of infected cells without treatment was used as infection control. Chloroquine (CQ) at 50 μM was used as a positive control for antiviral activity; 2 independent experiments with 3 replicates each were performed (n = 6).
Plaque Assay for SARS-CoV-2 Quantification.
The capacity of the U. tomentosa extract to decrease the PFU/mL of SARS-CoV-2 was evaluated by plaque assay on Vero E6 cells. Briefly, 1.0 × 10⁵ Vero E6 cells per well were seeded in 24-well plates for 24 h at 37°C with 5% CO₂. Tenfold serial dilutions of the supernatants obtained from the antiviral assay (200 μL per well) were added in duplicate onto cell monolayers. After incubation for 1 h at 37°C with 5% CO₂, the viral inoculum was removed and 1 mL of semisolid medium (1.5% carboxymethyl cellulose in DMEM 1X with 2% FBS and 1% penicillin-streptomycin) was added to each well. Cells were incubated for 5 days at 37°C with 5% CO₂. Then, cells were washed twice with PBS, fixed and stained with 500 μL of 4% formaldehyde/1% crystal violet solution for 30 minutes, and washed with PBS. Plaques obtained from each condition were counted. The reduction in the viral titer after treatment with each concentration of the U. tomentosa extract compared to the infection control is expressed as an inhibition percentage. Two independent experiments with two replicates each were performed (n = 4).
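For concreteness, the titer and inhibition arithmetic behind this assay can be sketched as follows; the plaque counts, dilutions, and inoculum volume here are hypothetical placeholders, not values from the study:

```python
def pfu_per_ml(plaque_count, dilution, inoculum_ml=0.2):
    """Titer from a countable well: plaques / (volume * dilution)."""
    return plaque_count / (inoculum_ml * dilution)

control_titer = pfu_per_ml(42, 1e-5)   # infection control well
treated_titer = pfu_per_ml(30, 1e-4)   # extract-treated well

inhibition = 100.0 * (1.0 - treated_titer / control_titer)
print(f"{inhibition:.1f}% inhibition")  # ~92.9% with these numbers
```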
TCID50 Assay for SARS-CoV-2 Quantification.
The capacity of the U. tomentosa extract to diminish the CPE caused by SARS-CoV-2 on Vero E6 cells was evaluated by TCID50 assay. Briefly, 1.2 × 10⁴ Vero E6 cells per well were seeded in 96-well plates for 24 h at 37°C with 5% CO₂. Tenfold serial dilutions of the supernatants obtained from the antiviral assay (50 μL per well) were added in quadruplicate onto cell monolayers. After 1 h of incubation at 37°C with 5% CO₂, the viral inoculum was removed and replaced by 170 μL of DMEM supplemented with 2% FBS. Cells were incubated for 5 days at 37°C with 5% CO₂. Then, cells were washed twice with PBS and then fixed and stained with 100 μL/well of 4% formaldehyde/1% crystal violet solution for 30 minutes. Cell monolayers were washed with PBS. The number of wells positive for CPE was determined for each dilution (CPE is considered positive when more than 30% of the cell monolayer is compromised). The viral titer in TCID50/mL was calculated based on the Spearman-Kärber method. The reduction of viral titer after treatment with each concentration of the U. tomentosa extract compared to the infection control is expressed as an inhibition percentage. A control of cells without infection and treatment was included. Two independent experiments with two replicates each were performed (n = 4).
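The Spearman-Kärber estimator reduces the per-dilution CPE scores to a 50% endpoint. A minimal sketch, assuming a tenfold series that spans from all-positive to all-negative wells (the dilution scheme and inoculum volume below are hypothetical):

```python
import numpy as np

def tcid50_per_ml(prop_positive, first_log10_dilution=-1.0,
                  log10_step=1.0, inoculum_ml=0.05):
    """Spearman-Karber 50% endpoint from the fraction of CPE-positive
    wells at each dilution, listed from most to least concentrated;
    assumes the series starts at 100% and ends at 0% positive."""
    p = np.asarray(prop_positive, dtype=float)
    # log10 of the dilution that infects 50% of wells
    m = first_log10_dilution + log10_step / 2.0 - log10_step * p.sum()
    return 10.0 ** (-m) / inoculum_ml

# Hypothetical readout: 4/4, 4/4, 2/4, 0/4 positive wells
print(f"{tcid50_per_ml([1.0, 1.0, 0.5, 0.0]):.2e} TCID50/mL")  # 2.00e+04
```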
Statistical Analysis.
The median inhibitory concentration (IC50) values represent the concentration of the U. tomentosa extract that reduces virus particle production by 50%. The CC50 values represent the cat's claw solution concentration that causes 50% cytotoxicity. The corresponding dose-response curves were fitted by nonlinear regression analysis using a sigmoidal model. The calculated selectivity index (SI) represents the ratio of CC50 to IC50. All data were analyzed with GraphPad Prism (La Jolla, CA, USA), and data are presented as mean ± SEM. Statistical differences were evaluated via Student's t-test or Mann-Whitney U test; a value of p ≤ 0.05 was considered significant, with *p ≤ 0.05, **p ≤ 0.01, and ***p ≤ 0.001.
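The sigmoidal fit and selectivity index can be reproduced outside GraphPad, e.g., with SciPy. This sketch fits a rising Hill curve to hypothetical percent-inhibition data (all readouts are illustrative; only the CC50 of 27.1 μg/mL is taken from the reported results):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, bottom, top, ec50, slope):
    """Rising sigmoidal (Hill) dose-response curve."""
    return bottom + (top - bottom) * c**slope / (ec50**slope + c**slope)

conc = np.array([3.1, 6.25, 12.5, 25.0])          # ug/mL, tested doses
inhibition = np.array([10.0, 48.0, 85.0, 93.0])   # hypothetical readout

params, _ = curve_fit(hill, conc, inhibition,
                      p0=[0.0, 100.0, 6.0, 1.0], maxfev=10000)
ec50 = params[2]
cc50 = 27.1                                       # from the viability assay
print(f"EC50 = {ec50:.2f} ug/mL, SI = {cc50 / ec50:.1f}")
```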
Identification of Components in the Hydroalcoholic Extract of U. tomentosa by LC/MS (UHPLC-ESI+-HRMS-Orbitrap).
Different constituents of the U. tomentosa stem bark such as spirooxindole alkaloids, indole glycoside alkaloids, quinovic acid glycosides, and proanthocyanidins were identified by LC-MS analysis (Table 1 and Supplementary Materials S1-S6).
The LC-MS data provided information on spirooxindole alkaloids as a broad peak that appeared at a retention time (tR) of 4.82 min and showed an [M + H]+ ion at m/z 369.18018, characteristic of speciophylline, isopteropodine, isomitraphylline, uncarine F, mitraphylline, and pteropodine. Furthermore, two peaks at 4.99 and 5.18 min, respectively, showed the [M + H]+ ion at m/z 385.21127 and were identified as the isomeric spirooxindole-related alkaloids rhynchophylline and isorhynchophylline. On the other hand, a molecular ion peak [M + H]+ at m/z 547.22992, which eluted at 4.03 min, provided the identity of the indole glycoside alkaloid 3-dihydrocadambine. As expected, LC/MS phytochemical analysis showed that the hydroalcoholic extract of U. tomentosa comprised predominantly five proanthocyanidins (PAs), including proanthocyanidin C1, epiafzelechin-4β-8, proanthocyanidin B2, epicatechin, and chlorogenic acid, which eluted at 3.76-4.25 min. Finally, LC-MS data along with ESI mass spectra gave characteristic protonated quasimolecular ions of isomeric quinovic acid glycosides ([M + H]+ ion at m/z 957.50458). In sum, LC/MS allowed the identification of known components in the hydroalcoholic extract of U. tomentosa, such as alkaloids, quinovic acid glycosides, and proanthocyanidins (PAs), which play important roles in the biological activities of this medicinal herb and are considered a fingerprint for quality control that ensures fitness for therapeutic uses.
The Cell Viability Assay on Vero E6 Cells in the Presence of the U. tomentosa Extract.
The viability of Vero E6 cells in the presence of U. tomentosa was higher than 90.0% at concentrations of 25.0 μg/mL or lower after 48 h of incubation (Figure 1). Cell viability at 50.0 μg/mL was 17.3%; for this reason, this concentration was not included in the antiviral assay. The CC50 calculated for U. tomentosa was 27.1 μg/mL. Chloroquine at 50 μM (positive control of inhibition) did not affect the viability of Vero E6 cells (Figure 1).
Discussion
In South America, the second wave of the novel coronavirus might be more aggressive, increasing the mortality rate and the number of new cases [26]. Clinical trials are underway to determine the efficacy of several vaccines against SARS-CoV-2 [27]. Meanwhile, herbal medicines could become a promising option to tackle the ongoing pandemic caused by COVID-19 [28]. Some plant extracts and phytochemicals were modeled over numerous targets of SARS-CoV-2 using in silico studies, which is the first step in the discovery of new drugs [29]. In China, the use of herbal formulas has been included in the protocol of primary attention in COVID-19, clinical trials have been carried out, and promising results in ameliorating symptoms have been reported [30].
Our previous in silico study of U. tomentosa (cat's claw) against this novel coronavirus showed that two possible mechanisms could be involved in the in vitro antiviral activity against SARS-CoV-2. These findings revealed that 3CLpro, an essential enzyme for viral replication [31], showed key molecular interactions with speciophylline, cadambine, and proanthocyanidin B2, with high binding affinities ranging from −8.1 to −9.2 kcal/mol [12]. On the other hand, phytochemicals of U. tomentosa such as proanthocyanidin C1, QAG-2, uncarine F, and 3-dihydrocadambine disrupted the RBD-ACE-2 interface [13]. Since Vero E6 cells are commonly used to replicate SARS-CoV-2 due to their high expression of the ACE-2 receptor and their inability to produce interferon [32], this cell line is an appropriate substrate to explore the antiviral activity of phytochemicals targeting receptor binding as well as the SARS-CoV-2 main protease, which is a high-profile antiviral drug target for which several compounds have been discovered as inhibitors [33,34]. Mechanisms of the antiviral activity of the hydroalcoholic extract of U. tomentosa on other viruses like dengue (DENV-2) have been elucidated; alkaloids (pentacyclic alkaloids) from U. tomentosa induced apoptosis of infected cells and reduced inflammatory mediators such as TNF-α and IFN-α, with effects similar to dexamethasone [10]. The quinovic acids (33.1-60 μg/mL) inhibited the vesicular stomatitis virus (VSV) [35], and the total extract at concentrations below 15.75 μg/mL inhibited herpes simplex virus (HSV-1) replication when added to Vero cells at the same time as the virus [11].
Here, we demonstrated that U. tomentosa also has in vitro antiviral activity against SARS-CoV-2, inhibiting the release of infectious particles and reducing the cytopathic effect on Vero E6 cells. The EC50 was calculated at 6.6 μg/mL (95% CI: 4.89-8.85 μg/mL) by plaque assay and at 2.57 μg/mL (95% CI: 1.05-3.75 μg/mL) by TCID50 assay, whilst the CC50 was 27.1 μg/mL. Other medicinal plants assayed against SARS-CoV-2 have shown similar antiviral activity; in particular, Echinaforce® (an Echinacea purpurea preparation) exhibited antiviral activity at 50 μg/mL [36]. Liu Shen capsule, a traditional Chinese medicine, inhibited SARS-CoV-2 replication with an EC50 value of 0.6024 μg/mL and a CC50 of 4.930 μg/mL [37]. Likewise, phillyrin (KD-1), a representative constituent of Forsythia suspensa (Thunb.), presented an EC50 of 63.90 μg/mL and a CC50 of 1959 μg/mL [38]. Sulfated polysaccharides named RPI-27 and heparin inhibited SARS-CoV-2 in vitro with EC50 values of 8.3 ± 4.6 μg/mL and 36 ± 14 μg/mL, respectively [39]. In our study, selectivity indices of 4.1 and 10.5 were obtained by plaque assay and TCID50, respectively. According to a previous report [40], these results are classified as low selectivity (SI ≥ 2.0 and < 5) and high selectivity (SI ≥ 10), respectively. Although the SI obtained by plaque assay is low, a higher SI would theoretically indicate a more effective and safer treatment in vivo for a given viral infection. However, there is no evidence of severe toxicity of U. tomentosa, and its traditional popular use in the form of maceration or decoction is safe [41].
The lowest concentration of the U. tomentosa extract tested (3.1 μg/mL) caused a significant increase in the number of infectious viral particles compared to the infection control (Figure 2). This result could be due to compounds present in the extract that, at this concentration, promote an increase in cell proliferation or the regulation of metabolic pathways controlling the expression of viral receptors or the synthesis of proteins necessary for the viral replicative cycle [42,43]. These findings demonstrate the importance of evaluating and identifying the compounds present in U. tomentosa with antiviral effect against SARS-CoV-2 and of selecting the proper concentration for use.
There is enough evidence that U. tomentosa could ameliorate a wide array of symptoms associated with COVID-19, like the severe inflammation characterized by a cytokine storm [24] causing endothelial dysfunction. Considering the antiviral activity of U. tomentosa against SARS-CoV-2, several biochemical mechanisms could be involved in each phase of the viral life cycle. As previously reported in our in silico studies, U. tomentosa could interfere with viral entrance into host cells [12], affecting viral replication [13]. Furthermore, ACE-2 receptors, which are expressed in Vero E6 cells, could also be blocked by the phytochemicals of U. tomentosa during the entrance of SARS-CoV-2 into host cells, and the aforementioned studies back up our hypothesis [13].
Besides, it might control the hyperinflammation via inhibition of IL-1α, IL-1β, IL-17, and TNF-α [44], reduce oxidative stress [45], and protect the endothelial barrier via inhibition of IL-8, which is linked to the induction of permeability [46]. It also has antithrombotic potential via an antiplatelet mechanism and thrombin inhibition [15]. Furthermore, U. tomentosa modulates the immune system by extending lymphocyte survival via an antiapoptotic mechanism [47]. It is known that the 3a protein of the severe acute respiratory syndrome-associated coronavirus induces apoptosis in Vero E6 cells [48]; therefore, the phytochemicals found in the hydroalcoholic extract could inhibit this process and protect against the inflammatory cascade. Interestingly, U. tomentosa bark extract reduced the lung inflammation produced by ozone in mice [49]. Based on our results, U. tomentosa is a promising medicinal herb to combat COVID-19, but it is necessary to continue with animal models followed by clinical trials to validate our results in the context of COVID-19 patients.
This study is the first approach to evaluate the potential use of U. tomentosa against SARS-CoV-2; the specific mechanisms of inhibition remain to be explored, as do the main molecules responsible for the antiviral activity. As shown in our phytochemical analysis, the presence of chemical groups determined by LC/MS (UHPLC-ESI+-HRMS-Orbitrap), such as spirooxindole alkaloids, indole glycoside alkaloids, quinovic acid glycosides, and proanthocyanidins, suggests that they could be responsible for the described activity. Here, the mechanisms discussed for the hydroalcoholic extract of U. tomentosa are only inferred from the mechanisms evaluated in other RNA viruses reported in the literature and from our previous in silico studies on SARS-CoV-2.
Regarding the antiviral activity of U. tomentosa, the EC50 calculated at 6.6 μg/mL is an indicator of promising activity as an extract, but it cannot be taken as a reference value for the plasma concentration to be reached, because the U. tomentosa extract contains several phytochemicals that were not quantified and individually tested. Although cat's claw has been used clinically for other diseases, no clinical pharmacokinetic studies have been carried out and reported. However, in mice, oral administration of 5 mg/kg of six Uncaria alkaloids showed a bioavailability ranging between 27.3% and 68.9%, with a maximum plasma concentration (Cmax) between 305.3 ± 68.8 ng/mL and 524.5 ± 124.5 ng/mL [50]. Additionally, the recommended dose of U. tomentosa is one gram given two to three times daily [51]. A standardized extract consisting of less than 0.5% oxindole alkaloids and 8% to 10% carboxy alkyl esters has been used at doses of 250 to 300 mg in clinical studies [52]. In humans, no toxic symptoms were reported with a usual administration of 350 mg/day for 6 weeks [53,54] or 300 mg of dry extract daily for 12 weeks [55]. Traditional preparations such as tinctures, decoctions, capsules, extracts, and teas are used and, in a decoction, up to 20 g of raw bark per liter of water has been employed; although this information is based on traditional practices, this equates to 4 mg of oxindole alkaloids [56]. Thus, we hypothesize that the antiviral activity against SARS-CoV-2 is attributable to the whole extract, with all its phytochemicals acting synergistically through the different mechanisms discussed above.
Conclusion
U. tomentosa has been widely used as an anti-inflammatory and immunomodulatory agent. Previous studies have shown that U. tomentosa has a broad spectrum of effects on several RNA viruses. In this study, we demonstrated that the hydroalcoholic extract of U. tomentosa stem bark inhibited the release of SARS-CoV-2 infectious particles and reduced the cytopathic effect caused by the virus in the Vero E6 cell line, underlining the importance of continuing this investigation with specific in vitro assays, followed by studies in animal models, and finally validating its use in clinical trials. Our investigation shows for the first time the antiviral effect of U. tomentosa on this novel coronavirus (SARS-CoV-2).
Data Availability
All data used to support the findings of this study can be made available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no known conflicts of interest or personal relationships that could have appeared to influence the work reported in this paper.
Humans disagree with the IoU for measuring object detector localization error
The localization quality of automatic object detectors is typically evaluated by the Intersection over Union (IoU) score. In this work, we show that humans have a different view on localization quality. To evaluate this, we conduct a survey with more than 70 participants. Results show that for localization errors with the exact same IoU score, humans might not consider that these errors are equal, and express a preference. Our work is the first to evaluate IoU with humans and makes it clear that relying on IoU scores alone to evaluate localization errors might not be sufficient.
The main difference between image classification and object detection is that an object detector also has to predict the object's location, typically indicated by a bounding box around the object. Object location can be used as a first step for a downstream task, e.g., instance segmentation [1], or human pose estimation [2]. Alternatively, in this paper, we focus on the setting where an object detection is presented to humans as an end result, where examples include visual inspection [3], or focusing attention in medical images [4]. We do not evaluate the object detector itself [5]. Instead, we evaluate if the predicted object location by object detectors aligns with what humans consider a detected object.
Evaluating object detectors. Object detectors are commonly evaluated [5,6,7,8,9] with mean average precision (mAP): the mean of the per-class average precision scores. Average precision is the area under the precision-recall curve, created by ranking all detections by confidence and then checking if they are correct according to the ground truth. The detection is correct if (1) the assigned class label is correct and (2) the detection location has sufficient overlap with the ground truth. The Intersection over Union (IoU) score is used to determine the overlap. The location of a detection is correct if the IoU score is higher than a threshold, typically 0.5 or higher [10,6]. In this paper, to the best of our knowledge, we are the first to investigate how well the IoU measure aligns with human localization quality judgments.
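To make the evaluation pipeline above concrete, the sketch below computes a simplified average precision from ranked detections; it omits COCO-style interpolation and the greedy matching of detections to ground-truth boxes, and all inputs are hypothetical:

```python
import numpy as np

def average_precision(scores, is_correct, n_ground_truth):
    """AP as the area under the precision-recall curve for one class,
    given per-detection confidence scores and correctness flags
    (class match and IoU above the threshold)."""
    order = np.argsort(scores)[::-1]                 # rank by confidence
    tp = np.asarray(is_correct, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(tp)) + 1)
    recall = cum_tp / n_ground_truth
    return np.trapz(precision, recall)               # no interpolation

print(average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 1], 4))
```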
Human annotation for object detection. Extensive crowdsourcing studies are performed to draw bounding boxes around objects in images [11,12] or the precise shape of the object [13,14]. Experiments in which crowd workers validate object detections showed that annotators tend to be lenient when validating bounding boxes, i.e., bounding boxes with IoU < 0.5 are still accepted [15]. Furthermore, analyses performed in [16] suggest that to efficiently and accurately localize all objects in an image, several crowdsourcing tasks are needed, such as verifying box correctness, verifying object presence, or naming the object. In this paper, we extend the work in [17,18,16] with four user studies investigating which bounding boxes humans accept and prefer.
Contributions. We make the following contributions: (1) We design four user studies to explore what kind of detections humans prefer and accept as good detections. (2) We investigate the relationship between a too small bounding box and a too large bounding box, where both have the same IoU score. (3) We analyze the impact of object symmetry and bounding box position on human preference and acceptance of detectors' output. (4) We experiment with various object sizes (small, medium, large) and recommend future studies.
Our results show that humans disagree with IoU for measuring localization errors.
EXPERIMENTAL APPROACH
We perform four controlled experiments to evaluate the relation between IoU and human localization quality judgments and study which object detections are accepted or preferred by humans. We do not train or test any object detection models since they are highly influenced by many design choices, e.g., model parameters and dataset. Thus, our boxes are generated according to the ground truth. We relate our findings to machine-evaluated detections. For machine-evaluated detections, we use the common IoU, measuring the localization performance of the predicted box B_p against the ground truth box B_gt, as IoU = |B_p ∩ B_gt| / |B_p ∪ B_gt|. We address two important features of object localization: (i) Box Size and (ii) Box Position, which are affected by the IoU score, in four online user studies (two studies per feature). We also experiment with various object sizes (small - S, medium - M, large - L) and IoU values (0.3, 0.5, 0.7, 0.9) to study differences and similarities between humans and detection algorithms.
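A minimal implementation of this IoU computation for axis-aligned boxes; the two example boxes are chosen to show that a too-small and a too-large prediction can receive the exact same score:

```python
def iou(box_p, box_gt):
    """Intersection over Union for axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_p[0], box_gt[0]), max(box_p[1], box_gt[1])
    ix2, iy2 = min(box_p[2], box_gt[2]), min(box_p[3], box_gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_gt = (box_gt[2] - box_gt[0]) * (box_gt[3] - box_gt[1])
    return inter / (area_p + area_gt - inter)

gt = (0, 0, 100, 100)
print(iou((10, 10, 90, 90), gt))                # 0.64, too-small box
print(iou((-12.5, -12.5, 112.5, 112.5), gt))    # 0.64, too-large box
```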
Procedure and participants. All studies follow the same procedure. Participants are given an example to introduce the task. The task consists of a masked image to indicate which object is investigated, the question that directly specifies the object name, and the possible answers. The images are chosen from the MS COCO dataset [19]. We ran the studies using Qualtrics. The user studies were distributed among research group members and the authors' peers.
Box Size. As illustrated in Fig. 2, we use two different box sizes, small and large, with the same IoU score. The box aspect ratio and position are taken from the ground truth box. In the Size Preference study, we investigate the box size and ask participants which box size they prefer for a detection. They can choose one option among: large box, small box, or "the size of the box does not matter". In the Size Acceptance study, we show either a small or a large box and ask participants if they accept or reject it as an object detection. For both studies we evaluate IoU values (0.3, 0.5, 0.7, 0.9) and include all object sizes (S, M, L). In the Size Preference study, we annotate 72 images, with six images for each combination of object size and IoU value. In the Size Acceptance study, we annotate 96 images (eight per combination).
Box Position. As illustrated in Figure 3, we applied two positional shifts to the ground truth box, for symmetrical and asymmetrical objects, using a fixed IoU value of 0.5. Unlike the size experiment, the predicted box size is fixed and only the position of the box changes to evaluate the effect of the position. Depending on the orientation of the object, the predicted box is shifted horizontally (back, front) or vertically (top, bottom). Since symmetrical objects do not have front and back sides, we consider front as the right side and back as the left side of the object. Similarly to the size surveys, in the Position Preference study, we ask participants if they prefer a particular part or side of the object for detection. The Position Acceptance study investigates if users would accept the bounding box as a correct detection. In both position surveys, we use 20 images, which are equally distributed across object types (symmetrical, asymmetrical) and box positions (front/top, back/bottom), with 5 images per category.
RESULTS
Analytical method. To study the human preference and acceptance of bounding box sizes and positions, we apply several statistical tests. We apply the Chi-square test [20] to find out if there are any associations between variables, such as object size and preferred box size, or IoU value and preferred box size. To understand whether differences in preference proportions (e.g., small boxes, large boxes, no preference) or acceptance proportions (e.g., front box, back box) are statistically significant, we apply the Z-test [21] and Cochran's Q test [22]. While the Z-test can only be applied to compare two proportions, Cochran's Q test can be applied to any number of proportions. In case of statistically significant differences, we apply a posthoc Dunn test with Bonferroni correction [23] to see which proportions are different. Since for each study we perform multiple comparisons and statistical tests, we use a significance threshold lower than 0.05 (by applying a Bonferroni correction), i.e., α = 0.05 / #tests.
Size Preference. Figure 4(a) shows, per IoU and object size, the percentage of preferred bounding box sizes. For the 0.9 IoU value, people have no size preference: for each object size, the option no preference is either the most chosen, or chosen similarly often as large boxes. For an IoU value of 0.9, posthoc Dunn tests with Bonferroni correction show that no preference is statistically preferred for small and medium objects, but not for large objects. The prevalence of no preference is sensible: for IoU > 0.9, the difference in appearance between small and large boxes is subtle to the human eye.
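As an illustration of this analysis pipeline, the sketch below runs a Chi-square test of association on a hypothetical preference-by-IoU contingency table and applies the Bonferroni-corrected threshold (the counts and the number of tests are made up for the example):

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: preferred box size (rows) by IoU value (columns)
table = [[120, 95, 60, 30],   # large box
         [ 20, 35, 55, 40],   # small box
         [ 10, 20, 35, 80]]   # no preference

chi2, p, dof, _ = chi2_contingency(table)

n_tests = 8                   # however many tests the analysis runs
alpha = 0.05 / n_tests        # Bonferroni-corrected threshold
print(f"chi2 = {chi2:.2f}, p = {p:.3g}, significant = {p < alpha}")
```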
For all other evaluated IoU values, 0.7, 0.5, and 0.3, and for all three evaluated object sizes, Cochran's Q test shows that there are statistically significant differences in the preference of boxes. Posthoc Dunn tests with Bonferroni correction indicate that large boxes are statistically significantly more preferred by humans. Small bounding boxes are always the least preferred while large bounding boxes are always the most preferred, irrespective of object size. We observe a gradual increase in the preference for small bounding boxes as the IoU value increases, and a comparatively higher increase in having no preference (see Figure 5(a)). Using a Chi-square test, we found an association between the IoU value and the preferred bounding box size (χ2(2)=1227.84, p < 0.006). We also notice a gradual decrease in the preference for small bounding boxes with the decrease of the object size. These results are shown in Figure 5(b). Using a Chi-square test, we found a statistically significant association between the object size and the size of the preferred bounding box (χ2(2)=62.05, p < 0.006).
Size Acceptance. In Figure 4(b), we show the percentage of accepted small and large boxes for each IoU value and object size. For each IoU value, the acceptance of small bounding boxes decreases with the decrease of object size: the smaller the object, the less accepted the small bounding boxes. Large bounding boxes are always more accepted than small bounding boxes, regardless of IoU value and object size. The exception is medium objects with 0.9 IoU, where small boxes are statistically significantly more accepted (z=-2.82, p < 0.008). For the rest of the cases, large bounding boxes are statistically significantly more accepted than small bounding boxes for IoU values of 0.3, 0.5 and 0.7 and all object sizes (p < 0.008), but not for small or large objects with 0.9 IoU. We also found, cf. Z-test, that (1) large bounding boxes are always statistically significantly accepted (p < 0.008) and (2) small bounding boxes are only statistically significantly accepted for 0.9 and 0.7 IoU (all object sizes) and for large objects with 0.5 IoU.
Position Preference. Figure 6(a) presents the results of the Position Preference user study. For symmetrical objects, participants have no preference regarding the position (front/top or back/bottom) of the bounding box, no preference being chosen the most. According to the Cochran's Q test, we also find that there are statistically significant differences in proportions among the three options chosen by study participants (χ 2 (2)=268.76, p << 0.017). A pairwise posthoc Dunn test with Bonferroni correction indicates that there are statistically significant differences between the proportions in which no preference and front bounding boxes are preferred (p << 0.017), as well as between the proportions of no preference and back bounding boxes (p << 0.017).
For asymmetrical objects, however, the most preferred bounding box is positioned at the front of the object. The Cochran's Q test shows that the difference in proportions among the three options is statistically significant (χ 2 (2) = 576.74, p << 0.017). Posthoc analysis using the Dunn test with Bonferroni correction shows that these differences are statistically significant between each two possible answers (front and no preference, front and back).
Position Acceptance. Figure 6(b) presents the results of the Accepted Box Position study. For both symmetrical and asymmetrical objects, the front bounding box is accepted in higher proportions than the back bounding box. For symmetrical objects, we found sufficient evidence, cf. Z-test, that the proportions of back (z = -7.16, p < 0.008) and front (z = -12.62, p < 0.008) bounding boxes being accepted are higher than the proportions of them being rejected. For asymmetrical objects, however, only front bounding boxes are statistically significantly accepted (z = -20.18, p < 0.008). Similarly, for each object type, we analyze whether one type of bounding box is more accepted than the other. For both symmetrical and asymmetrical objects, front bounding boxes are statistically significantly more accepted than back bounding boxes.
DISCUSSION
In this paper, we performed four user studies to understand which object detections are preferred and accepted by humans. We addressed two main features of object localization, namely the scale (large, small) and the position (front/top, back/bottom) of the bounding boxes, and we experimented with objects of various sizes (small, medium, large) and symmetries (symmetrical and asymmetrical).
Our studies show a statistically significant relationship between the IoU value and the preferred bounding box size, as well as between the object size and the preferred bounding box size. Large bounding boxes are both the most preferred and the most accepted, while object detectors accept and prefer large and small boxes equally if the boxes have the same IoU scores. We also found that for asymmetrical objects, the position of the bounding box matters to study participants, since they tend to choose bounding boxes that define or help them identify the object. This observation contrasts with current state-of-the-art object localization models [24-30], where all bounding box positions are considered correct, regardless of their orientation, when the IoU is higher than the threshold.
Object detection models, when intended for humans, should be developed in a user-centric manner, i.e., they should incorporate end users' preferences and comply with end users' needs. Thus, future studies should focus more on understanding which aspects of objects should be captured by bounding boxes. The current study can also be extended by considering multiple datasets, occluded or truncated objects, or images with multiple objects, as well as bounding boxes that are not centered or that are shifted to random positions. Nevertheless, future studies should consider improving object detectors based on human preferences.
Longitudinal Risk Prediction of Chronic Kidney Disease in Diabetic Patients Using a Temporal-Enhanced Gradient Boosting Machine: Retrospective Cohort Study
Background: Artificial intelligence–enabled electronic health record (EHR) analysis can revolutionize medical practice from the diagnosis and prediction of complex diseases to making recommendations in patient care, especially for chronic conditions such as chronic kidney disease (CKD), which is one of the most frequent complications in patients with diabetes and is associated with substantial morbidity and mortality.
Objective: The longitudinal prediction of health outcomes requires effective representation of temporal data in the EHR. In this study, we proposed a novel temporal-enhanced gradient boosting machine (GBM) model that dynamically updates and ensembles learners based on new events in patient timelines to improve the prediction accuracy of CKD among patients with diabetes.
Methods: Using a broad spectrum of deidentified EHR data on a retrospective cohort of 14,039 adult patients with type 2 diabetes and GBM as the base learner, we validated our proposed Landmark-Boosting model against three state-of-the-art temporal models for rolling predictions of 1-year CKD risk.
Results: The proposed model uniformly outperformed other models, achieving an area under the receiver operating characteristic curve of 0.83 (95% CI 0.76-0.85), 0.78 (95% CI 0.75-0.82), and 0.82 (95% CI 0.78-0.86) in predicting CKD risk with automatic accumulation of new data in later years (years 2, 3, and 4 since diabetes mellitus onset, respectively). The Landmark-Boosting model also maintained the best calibration across moderate- and high-risk groups and over time. The experimental results demonstrated that the proposed temporal model can not only accurately predict 1-year CKD risk but also improve performance over time with additionally accumulated data, which is essential for clinical use to improve renal management of patients with diabetes.
Conclusions: Incorporation of temporal information in EHR data can significantly improve predictive model performance and will particularly benefit patients who follow up with their physicians as recommended.
(JMIR Med Inform 2020;8(1):e15510) doi: 10.2196/15510
Background
With the rapid development in digitization of health care data, the modern electronic health records (EHRs) hold considerable promise for driving scientific advances in various aspects of biomedicine through the utilization of machine learning techniques. EHRs contain not only diverse clinical data elements that can better describe a patient's overall health status but also rich longitudinal data of patients that serve as a critical source for understanding the evolution of disease and management of chronic conditions. Developing accurate risk prediction models to drive timely initiation of appropriate therapies and monitoring is of paramount importance for conditions that have a substantial public health impact and can benefit greatly from early intervention.
Chronic kidney disease (CKD), especially CKD attributed to diabetes, that is, diabetic kidney disease (DKD), certainly falls within this category [1]. DKD is one of the most frequent and dangerous microvascular complications of diabetes mellitus (DM), affecting about 20% to 40% of patients with type 1 or type 2 DM [2]. It is the leading cause of end-stage renal disease (ESRD), accounting for approximately 50% of cases in the developed world, with major public health and economic implications [3]. Therefore, annual screening is recommended for patients with type 1 and type 2 diabetes [4,5], which in turn has two implications: (1) there is a better chance to observe more regular and meaningful temporal patterns among these patients, and (2) an effective model for predicting the risk of DKD in the following year can be more beneficial for patients who are compliant with the annual check protocol, because this allows implementation of early preventive measures.
Related Work
The effective use of temporal EHR data for predictive modeling remains a challenge owing to its highly variable sampling rates across different groups of patients (eg, patients may not follow the annual check protocol and only visit the hospital for critical health events) and distinct data types (eg, vital signs are noted hourly during inpatient encounters, whereas laboratory tests and medications are recorded when clinicians order them, and demographic data are more stable). Attempts have been made to handle temporal information in a variety of clinical applications. One approach involves representing the time series of clinical features with a single heuristic value (eg, taking the latest value or the trend [6], or shrinking to a weighted sum of values with the weights determined by the timestamps [7,8]); a sketch of the latter follows this paragraph. Another approach is to preserve the underlying sequential order by mapping the time series into temporal patterns (eg, knowledge-based temporal abstraction or hidden Markov chains [9,10]) or symbolic representations (eg, the Symbolic Aggregate approXimation based on Gaussian quantiles and the temporal discretization for classification [11,12]). Moreover, deep learning techniques such as recurrent neural networks, in particular long short-term memory (LSTM) and gated recurrent units, have been used to model temporal events [13-15]. However, it has also been reported in the corresponding work that many such approaches can suffer from high data sparsity or informative missingness and insufficient training data.
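As an illustration of the timestamp-weighted abstraction mentioned above, here is a minimal sketch that collapses a lab time series into one feature using exponential time decay (the half-life and the example values are arbitrary assumptions, not parameters from the cited work):

```python
import numpy as np

def time_decay_summary(values, days_before_landmark, half_life_days=180.0):
    """Collapse a lab time series into one value, weighting recent
    measurements more heavily via exponential decay."""
    t = np.asarray(days_before_landmark, dtype=float)
    weights = 0.5 ** (t / half_life_days)
    return np.average(values, weights=weights)

# HbA1c readings taken 400, 200, and 30 days before the landmark time:
print(time_decay_summary([7.1, 7.8, 8.4], [400, 200, 30]))
```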
In the prediction of kidney-related events, single-value abstraction is the most popular approach for its simplicity, but at the expense of reduced temporal granularity. For example, the ADVANCE prospective study for diabetic nephropathy used only baseline values of selected labs and vitals in a Cox proportional survival model [16]. A multivariate Cox proportional survival model was developed for predicting ESRD based on mean- and variation-based abstractions of repeated glycated hemoglobin (HbA1c), creatinine, and blood pressure measurements [17]. More sophisticated uses of temporal EHRs have also been studied, many of which targeted severe or acute kidney-related events. A Bayesian multiresolution hazard model for predicting CKD progression from stage III to stage IV attempted to capture temporal patterns by associating variables with piece-wise hazard increments at different time windows [18], whereas an independent Markov process modeled the underlying sequential latent states for predicting the transition from CKD stage III to stage IV [19]. A multitask linear model enabled knowledge transfer from one time window to another in the prediction of short-term renal function loss [20], and a tree-based discrete-survival-like gradient boosting machine (GBM) predicting acute kidney injury in inpatients allowed the features and their association with the outcome to be time variant and showed excellent performance [21]. However, all of the aforementioned approaches require moderate to high manual effort for feature preselection and curation, which not only limits the scalability of the predictive models but also discards a considerable amount of information in each patient's records [15]. In addition, the complexity of EHR data often violates the linearity and independence assumptions of survival and linear models, resulting in worse predictions and impaired generalizability.
Objectives
In this study, we propose a new approach for incorporating the temporal information in the medical history of patients with diabetes to further improve predictive models for evaluating their risk of renal complications in the next year. Because of its robustness, efficiency, and established efficacy in the prediction of kidney events [21], we chose GBM as the base learner and augmented it with schemes to continuously update its learning results based on new patient inputs over a full breadth of EHR data on a yearly basis, a method we named Landmark-Boosting. Here, the landmark time refers to an unbiased reference point (eg, t years since the onset of DM) at which we want to construct stagewise prediction models and make dynamic risk predictions using information collected up to that time [22,23]. The final prediction model is then an ensemble of individual boosting models trained a priori at each landmark time.
Definition of Diabetes
We adopted the Surveillance, Prevention, and Management of Diabetes Mellitus definition of diabetes in this study. Diabetes was defined based on the following: (1) the use of glucose-lowering medications (insulin or oral hypoglycemic medications); or (2) a level of HbA1c of 6.5% or greater, random glucose of 200 mg/dL or greater, or fasting glucose of 126 mg/dL or greater on at least two different dates within 2 years; or (3) any two type 1 or type 2 DM diagnoses given on 2 different days within 2 years; or (4) any two distinct types of events among (1), (2), or (3); while (5) excluding any gestational diabetes (temporary glucose rise during pregnancy) [24]. DM onset time was defined as the first occurrence of any qualifying event from (1) through (4).
Definition of Diabetic Kidney Disease
DKD was defined as diabetes with the presence of microalbuminuria or proteinuria, impaired glomerular filtration rate (GFR), or both [25,26]. Microalbuminuria was defined as an albumin-to-creatinine ratio (ACR) of 30 mg/g or greater, and, similarly, proteinuria was defined as a urine protein-to-creatinine ratio of 30 mg/g or greater [25,26]. Impaired GFR was defined as an estimated GFR (eGFR) of less than 60 mL/min/1.73 m2, where eGFR was computed from the serum creatinine concentration adjusted for age, gender, and race using the Modification of Diet in Renal Disease (MDRD) equation [27].
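For concreteness, the following is a minimal sketch of this computation, assuming the common 4-variable MDRD form; the leading constant (175 for IDMS-traceable creatinine vs 186 otherwise) is an assumption, as the paper does not specify the assay calibration.

```python
# Minimal sketch of the eGFR computation, assuming the 4-variable MDRD
# study equation (the constant 175 assumes IDMS-traceable creatinine).
def egfr_mdrd(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """Estimated GFR in mL/min/1.73 m^2 from serum creatinine (mg/dL)."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Impaired GFR per the study definition:
impaired = egfr_mdrd(1.4, 67, female=True, black=False) < 60.0
```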
Study Cohort
The study constructed a retrospective cohort using deidentified EHR and billing data from November 2007 to December 2017 in the University of Kansas Medical Center's integrated clinical data repository, the Healthcare Enterprise Repository for Ontological Narration (HERON) [28]. The study did not require approval from the institutional review board because the data used met the deidentification criteria specified in the Health Insurance Portability and Accountability Act Privacy Rule. The HERON Data Request Oversight Committee approved the data request. As shown in Figure 1, a total of 35,779 adult patients with nongestational DM (age ≥18 years) who had at least one valid eGFR or ACR record at an outpatient encounter were eligible for this study, so that their DKD status could be ascertained. We excluded patients presenting with any type 1 DM or cystic fibrosis-related diabetes diagnoses over their observation period and those who had kidney disease manifestations (eg, CKD diagnosis, low eGFR, or microalbuminuria) before the onset of DM. The case group included all patients with DKD, with their DKD onset time, or end point, defined as the first time of their abnormal eGFR or ACR. The control group was defined as patients with DM whose eGFR values were always 60 mL/min/1.73 m2 or above and who never had microalbuminuria, with their end point defined as the last time of their normal eGFR or ACR. Finally, 14,039 patients were included in the final cohort, of whom 4785 (34.08%) had DKD.
Clinical Variable Extraction
According to our data, the time between 2 adjacent outpatient eGFR or ACR labs is, on average, 1 year per patient. Thus, for a patient i, a sequence of time-stamped examples (ie, DKD statuses, 1 for DKD and 0 for non-DKD) was identified based on their last outpatient eGFR or ACR collected annually, denoted as {y_it}, t = 1, ..., T. Note that a patient may be missing eGFR/ACR during certain years, and we kept the corresponding DKD status as NA without any imputation. For example, the outcome sequence for a patient can be (0, NA, 1), which can be interpreted as "the patient did not have DKD the same year as DM onset, had an undeterminable DKD status in the second year, and had DKD onset in the third year."

Each patient was then represented by collecting 15 common types of clinical observations from HERON [28] (Table 1). Each category is a mixture of categorical and numerical data elements. Numeric values were used for laboratory tests and vital signs, whereas binary indicator variables were used for categorical features. In addition, we abstracted the Medication variables at the Semantic Clinical Drug Form or Semantic Clinical Brand Form level and the Diagnoses variables at the International Classification of Diseases (ICD)-9 or -10 code level [29]. We further decomposed clinical features into more meaningful pieces according to (1) different sources of a diagnosis (ie, billing diagnoses or EHR problem list diagnoses), (2) different aspects of a medication fact (ie, drug refill or drug amount), (3) different types of encounters where a procedure was ordered or performed (ie, inpatient or outpatient), and (4) different states of an alert (ie, fired or overridden). These data elements were extracted from our institutional EHR and had been explicitly incorporated in our data warehouse as an additional i2b2-specific attribute called a modifier [30]. Among the initial 22,331 distinct features available for our study cohort, 15,707 (70%) were recorded for <1% of the patients, and we excluded them to reduce data sparsity (a minimal sketch of this filter appears after Figure 2).

In Figure 2, we illustrate the feature densities over time across different data types. Each row corresponds to the average number of distinct clinical facts per patient for a data type over 5 years before and after DM onset. An evident heterogeneity of clinical activities before and after DM onset can be observed. For example, lab frequencies are much higher in the first 2 years after DM onset, and visits become more frequent after DM onset.
Figure 2.
Clinical feature densities across data types. Each row corresponds to the average number of distinct clinical facts per patient for a certain type of clinical data over 5 years before and after DM onset. The darker the region is, the more distinct facts have been recorded for patients on average within the corresponding time window. DM: diabetes mellitus; UHC: University HealthSystem Consortium.
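As a minimal sketch of the sparsity filter mentioned above (drop features recorded for fewer than 1% of patients), assuming a hypothetical long-format table with one row per (patient, feature) observation:

```python
import pandas as pd

def drop_sparse_features(facts: pd.DataFrame, n_patients: int,
                         min_frac: float = 0.01) -> pd.DataFrame:
    # Fraction of the cohort with at least one record of each feature.
    coverage = facts.groupby("feature")["patient_id"].nunique() / n_patients
    keep = coverage[coverage >= min_frac].index
    return facts[facts["feature"].isin(keep)]
```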
In Table 2, we characterized the temporal variations by estimating the between-observation time, or observation intensity, for each data type, and observed that the between-patient irregularity of sampling rates differed significantly from the within-patient irregularity (P<.001) based on analysis of variance tests, except for demographics, suggesting varying degrees of health care exposure across patients and over time.
Experimental Design
For the clinical task of predicting DKD risk over the next year, we first randomly divided the 14,039 patients into a training set (80%) for model development and a validation set (20%) for performance evaluation. To simulate a more realistic clinical scenario and account for the bias caused by varying degrees of health care exposure over time, we stepped forward through patients' time courses and built prediction models at each landmark time, that is, every full year since DM onset, for rolling predictions of 1-year DKD risk. As such, individuals may contribute to, or be tested by, one or more prediction models, depending on their eligibility at the landmark time.
Gradient Boosting Machine
We chose GBMs as the baseline training model, which were then combined with four different approaches to incorporate temporal data. GBM is a family of powerful machine-learning techniques that has shown considerable success in a wide range of practical applications [31-36]. We chose GBM as the base learner for its robustness against high dimensionality and collinearity, and also because it embeds a feature selection scheme within the process of model development [37]. To better control overfitting, we tuned the hyperparameters within the training set using 10-fold cross-validation (depth of trees: 2-10; learning rate: 0.01-0.1; minimal child weight: 1-10; the number of trees was determined by early stopping, ie, if the holdout area under the receiver operating characteristic curve [AUROC] had not improved for 100 rounds, we stopped adding trees).
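The following is a minimal sketch of this search, assuming an xgboost backend (the paper says only "GBM") and a single holdout split in place of the 10-fold cross-validation; the parameter grid follows the ranges quoted above.

```python
import itertools
import numpy as np
import xgboost as xgb

def tune_gbm(X_train, y_train, X_valid, y_valid):
    dtrain = xgb.DMatrix(X_train, label=y_train)
    dvalid = xgb.DMatrix(X_valid, label=y_valid)
    best_auc, best_model = -np.inf, None
    for depth, eta, mcw in itertools.product([2, 4, 6, 8, 10],
                                             [0.01, 0.05, 0.1],
                                             [1, 5, 10]):
        params = {"objective": "binary:logistic", "eval_metric": "auc",
                  "max_depth": depth, "eta": eta, "min_child_weight": mcw}
        # Number of trees is set by early stopping on holdout AUROC
        # (stop after 100 rounds without improvement, as in the text).
        model = xgb.train(params, dtrain, num_boost_round=5000,
                          evals=[(dvalid, "valid")],
                          early_stopping_rounds=100, verbose_eval=False)
        if model.best_score > best_auc:
            best_auc, best_model = model.best_score, model
    return best_model, best_auc
```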
Missing Values
Missing values were handled in the following fashion: for categorical data, a value of 0 was set for missing, whereas for numerical data, a missing-value split was always accounted for, and the best imputation value could be adaptively learned based on the improvement in training AUROC at each tree node within the ensemble [38]. For example, if a variable X takes the values (0, 1, 2, 3, NA, and NA), where NA stands for missing, the following two decisions will be made automatically at each split for each tree: (1) should we split based on missing or not? and (2) if we split based on values, for example, >1 or ≤1, should we merge the missing cases with the bin of >1 or with the bin of ≤1?
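This behavior matches the default-direction handling of missing values in tree boosters such as xgboost (an assumption; the paper does not name its implementation). A minimal illustration:

```python
import numpy as np
import xgboost as xgb

# NaN marks "missing"; at each split the booster learns whether missing
# rows go left or right, i.e., the adaptive imputation described above.
X = np.array([[0.0], [1.0], [2.0], [3.0], [np.nan], [np.nan]])
y = np.array([0, 0, 1, 1, 1, 0])
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"objective": "binary:logistic", "max_depth": 2},
                    dtrain, num_boost_round=10)
```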
Evaluation Metrics
We used AUROC and the area under the precision-recall curve (AUPRC) to compare overall prediction performance, with the latter known to be more robust to imbalanced datasets. In addition, we characterized calibration by the observed-to-expected outcome ratio (O:E), which measures agreement between the predicted and observed risk on average across observations. By treating testing examples with a predicted probability of the outcome in the top 40th percentile as positive cases, we made fair performance comparisons among the different methods and further examined each model's ability to detect positive vs negative cases by reporting the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV).
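A minimal sketch of these metrics, assuming scikit-learn and numpy arrays as inputs; average_precision_score is used as the usual stand-in for AUPRC, and the O:E ratio is implemented directly from its definition.

```python
import numpy as np
from sklearn.metrics import (average_precision_score, confusion_matrix,
                             roc_auc_score)

def evaluate(y_true: np.ndarray, y_prob: np.ndarray) -> dict:
    threshold = np.percentile(y_prob, 60)  # top 40th percentile = positive
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {"AUROC": roc_auc_score(y_true, y_prob),
            "AUPRC": average_precision_score(y_true, y_prob),
            "O:E": y_true.mean() / y_prob.mean(),
            "sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "PPV": tp / (tp + fp), "NPV": tn / (tn + fn)}
```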
Approaches for Handling Temporal Data

Figure 3 depicts the four different approaches explored in this study for handling temporal EHR data: Latest-Value provides the most straightforward way to aggregate repeatedly measured variables; Stack-Temporal attempts to differentiate the effects of the same variable associated with different timestamps; Discrete-Survival allows a survival analysis model to be created using a binary classifier, which effectively enhances the chronological relationship between the predictors and the outcome; and Landmark-Boosting is our proposed model, motivated by the boosting method and designed to ensemble classification trees by learning over time. Each of the approaches is discussed in detail in the following sections.

Latest-Value Approach
In this approach, we simply collect the last observed value before each landmark time for each predictor across all time windows (Figure 3) [16]. The Latest-Value approach is time agnostic, which implies that it only retains information about the most recent state of each predictor at the patient level, regardless of when it was recorded. For example, the latest creatinine recorded for patient A can be from 1 month ago but from 1 year ago for patient B, and the two will be treated equally by this approach.
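A minimal sketch of this representation, assuming a hypothetical long-format table with columns (patient_id, variable, time, value):

```python
import pandas as pd

def latest_value(df: pd.DataFrame, landmark_time: float) -> pd.DataFrame:
    before = df[df["time"] <= landmark_time]
    # Sort by time so that .last() picks the most recent observation.
    return (before.sort_values("time")
                  .groupby(["patient_id", "variable"])["value"]
                  .last()
                  .unstack("variable"))  # one row per patient
```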
Stack-Temporal Approach
Given the variables for all time windows T, the Stack-Temporal approach concatenates the variables from all windows to represent patient x_i as a p-dimensional vector, where p = (number of variables) × T (Figure 3) [20]. One disadvantage of this approach is that the feature dimensionality increases proportionally with T, which may lead to worse prediction performance because of overfitting.
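A minimal sketch, again on a hypothetical long-format table, with a window column indexing the yearly time windows:

```python
import pandas as pd

def stack_temporal(df: pd.DataFrame, n_windows: int) -> pd.DataFrame:
    df = df[df["window"] < n_windows]
    # Duplicate observations within a window are averaged here
    # (a choice the paper does not specify).
    wide = df.pivot_table(index="patient_id",
                          columns=["window", "variable"], values="value")
    # Flatten (window, variable) pairs into names like "creatinine_t2".
    wide.columns = [f"{var}_t{w}" for w, var in wide.columns]
    return wide
```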
Discrete-Survival Approach
The Discrete-Survival approach simulates a discrete-time survival framework by separating the full course of a patient's medical history into T nonoverlapping yearly windows, t = 1, 2, ..., T, with variables from window t-1 used to predict DKD risk in window t (Figure 3) [21]. This approach assumes that examples from different time windows are independent of each other even if they come from the same patient, which does not explicitly allow knowledge to be transferred from one time window to the next.
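A minimal sketch of this expansion, assuming per-window feature tables and label series indexed by patient (all names hypothetical):

```python
import pandas as pd

def discrete_survival_examples(features: dict, labels: dict) -> pd.DataFrame:
    # features[t]: DataFrame of window-t features, indexed by patient_id
    # labels[t]:   Series of DKD status in window t (may contain NA)
    rows = []
    for t in sorted(labels):
        if t - 1 not in features:
            continue
        X = features[t - 1].copy()
        X["label"] = labels[t]       # predict window t from window t-1
        X["window"] = t
        rows.append(X.dropna(subset=["label"]))
    return pd.concat(rows)           # pooled, treated as independent
```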
Landmark-Boosting Approach
To build the continuous learning mechanism, we developed a new method by extending the classical GBM to ensemble learners over time, that is, from one landmark time to the next (Figure 3). Specifically, we collected data D_t = {(x_it, y_i)}, i = 1, 2, …, N_t, at each time window t and solved, sequentially for all 1 ≤ t ≤ T, an optimization problem of the form

F_t = argmin_F E_{t|t-1}[L(y, F_{t-1}(x_{t-1}) + F(x_t))],

where F represents the prediction function (ie, the ensemble of trees), L represents the loss function (ie, logloss), and E_{t|t-1} stands for the conditional expectation at time t given observed values at time t-1. In other words, we used the predicted probability from time t-1 as the baseline risk and ensembled new learners based on predictors updated at time t. Figure 4 presents the algorithm describing the detailed implementation steps.
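A minimal sketch of this scheme, assuming an xgboost backend and, for simplicity, the same patients in every window (in the study, eligibility changes at each landmark time): the previous model's predicted log-odds are carried over as the base margin, so the trees at landmark t fit only the residual risk given the updated predictors.

```python
import xgboost as xgb

def landmark_boosting(data_per_window, params, num_rounds=200):
    # data_per_window: list of (X_t, y_t) pairs, one per landmark window
    models, margin = [], None
    for X_t, y_t in data_per_window:
        dtrain = xgb.DMatrix(X_t, label=y_t)
        if margin is not None:
            dtrain.set_base_margin(margin)  # last window's log-odds
        model = xgb.train(params, dtrain, num_boost_round=num_rounds)
        models.append(model)
        # Margins (log-odds) include the base margin, so risk accumulates.
        margin = model.predict(dtrain, output_margin=True)
    return models
```

At prediction time, the window-t probability is obtained by applying the window-t model to a DMatrix carrying the same carried-over base margin.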
Cohort Characteristics
At each landmark time, the eligibility of a patient was determined by checking whether a valid eGFR or ACR reading was present in the current time window and the patient had neither developed DKD nor been censored in a previous time window. As shown in Table 3, the number of eligible patients dropped over time with an increasing DKD rate, a combined result of cases dropping out or being censored from one window to the next.
There was a mild decreasing trend in age and in the proportion of white patients over the landmark times. In addition, we compared these case-mix shifts between the training and testing sets and found no significant differences (Table 4).
Prediction Performance
Overall, the prediction results in Figure 5 showed that the proposed Landmark-Boosting model outperformed the other temporal data representation methods with respect to all evaluation metrics. The Stack-Temporal approach always showed the worst performance, whereas the Latest-Value and Discrete-Survival approaches demonstrated competitive results. Only the Landmark-Boosting model had an increasing trend in AUROC over the years after DM onset, which peaked at t=2 with a value of 0.83 (95% CI 0.76-0.85). AUPRC showed steadily increasing performance for all approaches over time, whereas the Landmark-Boosting model dominated at each landmark time and reached 0.75 (95% CI 0.65-0.80) at t=4. Sensitivity declined slightly over time and achieved an optimal point at t=2, with the Landmark-Boosting model persistently outperforming the others with a sensitivity of 83% (95% CI 79%-88%). In terms of specificity, Landmark-Boosting also outperformed the others at each landmark time and achieved 78% (95% CI 74%-83%) at landmark time 4. Moreover, PPV improved over landmark time, with the Landmark-Boosting approach showing the best performance, reaching 67% (95% CI 57%-75%) at landmark time 4 (whereas the second-best model, Discrete-Survival, achieved 51% [95% CI 44%-57%]), translating to correct identification of 503 patients with DKD (whereas the second-best model only identified 383 patients with DKD).

Figure 6 presents regional calibration on the original predicted probability scale grouped into 20 bins. Overprediction or underprediction was defined as an O:E ratio within a prediction bin that is significantly below or above 1, respectively (P<.05), whereas the remaining cases were considered calibrated. Clearly, the Landmark-Boosting approach also dominated all other temporal methods on calibration, with a dip of overestimation for the group with moderate risk at t=2. Both the Latest-Value and Stack-Temporal models underestimated the risk, especially at t>2. The Discrete-Survival model appeared to overestimate the risk in early years for the low-risk group but tended to underestimate the risk in later years.

Figure 6. Calibration comparisons among temporal approaches over landmark time. Regions of calibration across the range of predicted probabilities, scaled by the proportion of observations in each region and shaded by the magnitude of the within-region observed-to-expected ratio (O:E), with green suggesting underprediction (ie, O:E significantly greater than 1) and red suggesting overprediction (ie, O:E significantly less than 1). Pearson correlation coefficients between predicted and actual values over landmark times for each temporal model are included in the table below (the closer the coefficient is to 1, the better the predicted and actual values are linearly related). DM: diabetes mellitus.
Case Study
To closely examine the prediction change over time, we extracted a subset of 111 testing cases eligible at all five landmark times (ie, who had an outcome sequence like [0,0,0,0,0] or [0,0,0,0,1]) and plotted their predicted probability percentiles over the years (Figure 7). We observed significant differences in the risk trajectories between patients with and without DKD as depicted by the Landmark-Boosting method, with a much sharper increase of relative risk for most patients with DKD after year 1 and a more obvious separation of risks over time. On the other hand, the three other methods all suggested stable or even decreasing relative risk for patients with DKD over time, without much deviation from patients without DKD, with only a few exceptions.

Figure 7. A visualization of predicted diabetic kidney disease (DKD) risk over landmark time. Risk percentiles (ie, normalized risk scores) are plotted against landmark time for a sample of patients. Each red line represents a patient who finally progressed to DKD, whereas each green line represents a patient who did not. DM: diabetes mellitus.
Principal Findings
The study results suggested that exploiting historical temporal EHR data in predictive models can significantly improve prediction performance, especially with our proposed Landmark-Boosting model. As demonstrated in Figure 5, the four different temporal models started with similar predictive power during the year of DM onset but began to deviate along the landmark times. We observed a declining AUROC over time, with our proposed model being the only exception. One potential explanation is that the sensitivity of the other three models may have been affected by the upward case-mix shift (Table 3), that is, the models' ability to detect positive cases was impaired. For example, the sensitivity of the Stack-Temporal model seemed to peak at the beginning but suffered a severe drop over time without any significant improvement in specificity, which may be a result of overfitting caused by increasing dimensionality. Within the first 2 years, the Latest-Value model yielded a competitive sensitivity against the Landmark-Boosting model, while the latter excelled afterward, indicating that the effect of the continuous self-correction mechanism began to manifest after the second year since DM onset. A local peak of specificity at year 2 for all four models implied a shift in their focus toward the non-DKD cases; however, only the Landmark-Boosting model kept the balance by preserving good sensitivity. In contrast to AUROC, which has been criticized as susceptible to class imbalance [39], AUPRC demonstrated a steady increase over the landmark times for all temporal models, mainly attributable to PPV improvement, indicating that the signals from DKD samples may have become stronger over time, likely as a result of increasing DKD prevalence over the landmark years. Nonetheless, the proposed Landmark-Boosting model dominated the others and even showed increasing margins along the landmark times. For instance, the Landmark-Boosting model identified 46, 36, and 120 more true cases than the second-best model (91, 72, and 135 more than the nontemporal Latest-Value model) at years 2, 3, and 4, respectively. Moreover, the Landmark-Boosting model was clearly better calibrated than the other models and never underestimated the risks (Figure 6), whereas the Stack-Temporal model also seemed to be well calibrated within the first 2 years of DM onset.
Clinical Implications
Our proposed temporal model will benefit patients with longitudinal data: the longer we follow up, the better the model can predict next-year DKD risk by self-adjusting with respect to both the individual's medical history and the population shift over time. The study has three important implications. First, our investigation confirmed that temporal EHR and billing data carry critical information depicting the progression of a patient's condition, and it is important to choose an appropriate method for incorporating longitudinal data to strengthen predictive modeling in modern medicine. Second, by allowing the model to evolve along patients' landmark times, we not only reduced the biases related to a patient's exposure within the EHR but also simulated a scenario that mirrors the clinical practice of annual screening. Third, rather than prior predictive analyses that were mostly population based [40] or personalized longitudinal models requiring complete patient history [10], our model sought a middle ground, aiming to weave together information at both the population and individual levels; for example, the GBM built at each landmark time is an attempt to fit the concurrent population, whereas carrying over the last individual predictions preserves personal information.
Our model can continually calculate kidney disease risk for patients with diabetes with the automatic collection of new EHR data and improve prediction over time. The ability to precisely stratify patients with diabetes by their renal complication risk in the coming year would merit a variety of potential intervention designs: (1) nutritional interventions that differentiate dietary consultation according to relative DKD risk, for example, presenting dietary flyers to all patients with type 2 DM but arranging in-person consultation sessions with dietitians knowledgeable in the CKD diet for those in the high-risk bin; (2) lifestyle interventions that encourage personalized health-promoting behaviors, such as smoking cessation and physical activity at different intensity levels, based on DKD risk; and (3) medication management that designs targeted strategies according to risk to encourage patient medication compliance, especially with blood pressure and glucose control medications, and warns patients and physicians against the use of nephrotoxic medications (eg, nonsteroidal anti-inflammatory drugs) unless absolutely necessary for high-risk patients, because patients with diabetes are already at a higher risk of developing transient decreases in renal function consistent with acute kidney injury, and nephrotoxic drug exposure can amplify that risk. Moreover, with the DKD risk factor discovery framework developed in our previous work [41], we can further empower the predictive models by outputting explainable risk factors and quantifying their effects on DKD specific to subgroups within different risk bins, to better support physicians in designing tailored therapy and management strategies. More importantly, the Landmark-Boosting model almost never underestimated the risk compared with the other models, especially among the high-risk group, which is clinically ideal because timely medication management can be effective in protecting high-risk patients from unnecessary harm to the kidney due to the use of nephrotoxic medications.
Limitations and Future Work
There are several limitations to our work. The disease diagnosis sequence is not necessarily the same as the disease manifestation sequence, which may lead to underestimation of the false-negative rate for DKD in this study. For example, our exclusion criteria may have excluded patients with DKD who visited our hospital for their kidney disease but had not yet had their diabetes-related information recorded in our EHR. In addition, the current design of our model is not robust against population drift caused by changes in practice over time or differences in clinical vocabularies and workflows implemented across institutions. To further investigate the generalizability of our model, it is necessary to perform external validation and adequate recalibration based on patients from different sites as well as over calendar years to capture general population shift and practice change.
Although not the focus of this paper, we further examined the factors that potentially contributed to the superiority of the Landmark-Boosting model. In Multimedia Appendix 1, we present the top 50 important features selected by the Landmark-Boosting model and their varying rankings among the other temporal models. Only a few important variables were common across all models (eg, age at DM onset and creatinine). Most top-ranked factors in the Landmark-Boosting model were less important in the other three temporal models (eg, previous visit to a cardiovascular clinic, triglycerides, glucose, and exposure to codeine derivatives). Furthermore, we examined the features that may have contributed to the improving performance of the Landmark-Boosting model over time. As shown in Multimedia Appendix 1, we collected the top 30 important features at year 4 and backtracked their rankings in previous years. For each feature, we calculated the Pearson correlation coefficient between ranking and landmark time to determine whether the feature ranking increased or decreased significantly over time. Factors showing improved predictive power over time included cumulative clinical fact counts, previous visit to a cardiovascular clinic, systolic blood pressure, triglycerides, and alanine aminotransferase. Building on these preliminary findings, we plan to further characterize and evaluate the changing feature representations over time in future work.
Conclusions
This study addressed the problem of the underutilization of temporal information in EHR-based predictive models. We proposed a new approach to leveraging the temporal dynamics in EHRs to improve DKD prediction and validated it against three state-of-the-art models, using the idea of landmark time to simulate real clinical utility. Experimental results demonstrated that the proposed Landmark-Boosting model can effectively capture temporal dynamics in EHRs without overfitting, and that it improves further for patients with a longer follow-up time.
‘Can you please hold my hand too, not only my breast?’ The experiences of Muslim women from Turkish and Moroccan descent giving birth in maternity wards in Belgium
Objectives: To reach a nuanced understanding of the perinatal experiences of ethnic minority women from Turkish and Moroccan descent giving birth in maternity wards in Belgium, thereby gaining insight into the underlying challenges of providing intercultural care for ethnic minority persons in a hospital setting.

Methods: A qualitative study design was used, conducting in-depth interviews with 24 women from Turkish and Moroccan descent who gave birth during the past three years in maternity wards in Flanders, Belgium. The interviews were analysed using a Grounded Theory approach.

Results: This study shows that the women's care experiences were shaped by the care interactions with their caregivers, more specifically by the attention given by the caregivers to two essential dimensions of the care relationship, viz. Ereignis (attention to what happens) and Erlebnis (attention to how it happens). These two dimensions were interrelated in four different ways, which defined the women's care experiences as being either 'uncaring', 'protocolized', 'embraced' or 'ambiguous'. Moreover, these experiences were fundamentally embedded within the women's cultural context, which has to be understood as a relational process in which an emotional and moral meaning was given to the women's care expectations, interactions and interpretations of care.

Conclusions: The findings reveal that the quality of intercultural care depends on the nature and quality of care interactions between ethnic minority patients and caregivers much more than on the way in which cultural questions and tensions are handled or dealt with in a practical way. As such, establishing a meaningful care relationship should be the priority when providing intercultural care. In this, a shift in perspective on 'culture' from an 'individual culture-in-isolation' towards an understanding of culture as inter-relational and emerging from within these care relationships is necessary.
Introduction
Despite the increasing multi-ethnicity within societies worldwide, ethnic minority patients still face a significantly higher risk of being confronted with lower quality of healthcare, poorer health outcomes, inequalities, disparities and barriers in access to care. [1-3] Healthcare services are challenged to provide care for ethnic minority patients because of the increasing heterogeneity in their health determinants, needs and vulnerabilities. [3] Notwithstanding these challenges, the World Health Organization (WHO) recommends that healthcare services should ensure culturally appropriate care for every (ethnic minority) patient. [3] Providing culturally appropriate care is particularly challenging in the hospital setting because of its acute, necessary and inevitable character. [4] Previous research has shown that lower health literacy and socioeconomic status in ethnic minority groups, budgetary restrictions in the hospital, difficulties in communication, differences in cultural interpretations of illness, health and treatment, as well as negative perceptions among patients and caregivers, are examples of risk factors that put pressure on the provision of culturally appropriate care in the hospital. [1,4-7] Although caregivers experience intercultural challenges on an almost daily basis [6,8-10], and even though the concepts of cultural competence and transcultural nursing have gained a large amount of attention within the international literature [11-16], ethical guidelines on intercultural care practices remain underdeveloped or lacking, leaving many care practices open to misunderstandings due to intercultural difficulties. [4,17,18] As such, the question of how to provide appropriate care in the context of intercultural diversity remains open to a large extent. A better understanding of the intercultural bedside care experiences of both ethnic minority patients and caregivers is crucial in finding the right directions for providing intercultural care in the hospital setting.
With this paper, we aim to fill this gap by presenting the first part (patients' perspective) of the results of a large-scale qualitative research study about the intercultural care experiences of Muslim women from Turkish and Moroccan descent, as well as of their caregivers in maternity wards in Flanders, Belgium. This focus was chosen because three fields of tension come together in this particular intercultural context.
First, a field of tension exists between the predominant, mono-cultural biomedical approach in the Belgian healthcare system and the multicultural and multi-religious character of present-day society. [19,20] Although the Belgian healthcare system is officially regarded as one of the most equitable healthcare systems worldwide, it also faces an increasing diversity in its population and is continuously challenged by the above-mentioned risk factors concerning care for ethnic minority patients. [21] Belgium has been integrating European Directives to improve the health of ethnic minority patients, and some recommendations and guidelines are present in the Belgian healthcare context. [17,19,22,23] Official policy obligations regarding intercultural diversity in healthcare organisations, however, do not exist. [21,22] Hence, critical reflection on the predominant biomedical approach and positive action concerning the provision of appropriate intercultural care remain dependent on the free initiative of individuals and organisations.
The second field of tension concerns the culturally different understandings of the very meaning of illness, health, treatment and care between ethnic minority patients and caregivers. As we learn from a large-scale systematic review, ethnic minority patients inevitably take their own cultural views on care with them when they are hospitalized. [4] As such, patients' expectations, preferences, attitudes and behaviours within the care process are influenced by culturally determined values and beliefs from their own cultural context of care. [4,24-27] These cultural values concerning good care inevitably meet the cultural values and beliefs of caregivers in the hospital setting. [4] The way in which these different views on health, illness and care are handled by patients and caregivers is thus an essential part of the intercultural care encounter in the hospital setting and a possible cause of misunderstandings between ethnic minority patients and caregivers.
Finally, and even more challenging, is the provision of high-quality care for ethnic minority women in the maternity care setting, since here, caregivers have to deal with the higher physical, psychological and social vulnerability of women being pregnant and giving birth within a post-migration context. [28,29] Worldwide, immigrant women still face lower quality of care, higher perinatal and infant mortality, a higher risk of maternal or child health problems and barriers in access to obstetric and midwifery-led care, despite the right of all women and their new-borns to quality care throughout pregnancy, childbirth and the postnatal period. [3,30-33] Previous reviews on the maternity care experiences of ethnic minority women discussed difficulties in the care relationship, difficulties in communication, the presence of racism and discrimination, the importance of family involvement and the influence of expectations and cultural practices on the women's care experiences. [27,28,30,34] Due to the quick, dynamic and short-term character of maternity care, there is a lot of pressure on the quality of the intercultural care relationship between patients and their caregivers during the hospital stay.
Taken together, these three fields of tension present the complexity of providing care in the context of intercultural diversity in a challenging way. How should we understand the care relationship between ethnic minority patients and their caregivers in such a setting? How do Muslim women from Moroccan and Turkish descent experience the care during their stay in the maternity wards? Within the specific Belgian context, the focus of our research lies on the experiences of Muslim women from Turkish and Moroccan descent giving birth in maternity wards in Flanders.
Study design
The Grounded Theory approach was used to gain a nuanced understanding of the perinatal experiences of Muslim women from Turkish and Moroccan descent. This qualitative research design is best suited to exploring experiences and underlying meanings from the women's point of view. The inductive Grounded Theory approach is especially useful for understanding complex phenomena and for developing a theoretical framework of these phenomena and their underlying dynamics and processes. [35] The COREQ guidelines (Consolidated Criteria for Reporting Qualitative Research) were followed to ensure the rigour of the study. [36]

Participants and sampling

Participants were included when they considered themselves Muslim women from Turkish and Moroccan descent and had been hospitalized in maternity wards (considered a normal setting for women to utilize during childbirth) in the province of Limburg during the last 3 years. As for the type of birth, selection was open to varying types (natural births, first and following births, caesarean, instrumental, and more complex types with specific health needs). The focus on Turkish and Moroccan descent was based on the fact that they belong to the two largest non-European minority groups in Belgium. [37,38] Both groups have different ethnic roots but came to Belgium as labour migrants (1960-1974) or through reunification with their families, and live in Belgium as Muslim minorities. Furthermore, both groups are often the subject of public debate on multiculturalism and deal with stigmatization in society (especially the Moroccan community). [37] Limburg was chosen because of the existence of seven coalmines and a high number of labour migrants from different nationalities working and living together since the 1920s. Since the early sixties, the mines have mainly attracted labour migrants from Turkey and Morocco. Muslim women from other origins and women who gave birth at home were excluded, as well as undocumented immigrants, since their status can be linked to very specific healthcare problems. [9,39] At first, we also excluded women who were not able to speak Dutch fluently, due to extra sensitivities and the risk of bias when interviewing with an interpreter.
Initially, we applied purposive sampling to recruit a rather homogenous group of first participants (6 to 8 interviews). After the analysis of these interviews, theoretical sampling was carried out based on the insights gathered from simultaneous analysis and the need for further clarification, variation and heterogeneity. [35] For instance, after 6 interviews it became clear that communication was a major theme. Although we initially excluded women not able to speak Dutch fluently, the analysis directed us to recruit women with less language ability to clarify a new question: since communication is so important for people who are fluent in Dutch, what, then, are the experiences of people not able to speak Dutch fluently? The final size of the sample was determined by the principle of saturation, when all dimensions were identified and there was sufficient variation. [35]

Since the women were not recruited via hospital wards, several strategies were necessary to deal with the expected difficulties of recruiting women from a minority group, as described in Table 1 (eg, snowball sampling, mediation by key persons, and advertisements on the website of a key organization and in social media groups). [40] The most successful strategies were those with a personal connection, for instance when key persons mediated or when the interviewer regularly attended 'get-together' moments. The least successful strategy (advertising in unfamiliar social media groups) was the one that lacked direct personal contact.

These strategies resulted in the participation of twenty-four Muslim women from Turkish (n = 11) and Moroccan (n = 13) descent in a semi-structured, in-depth interview. The included women were hospitalized in maternity care units of six different general hospitals in the province of Limburg during the past 3 years (Table 2). In these hospitals, obstetricians perform almost 95% of all deliveries and women have direct access to specialist care; the women consulted an obstetrician before and during pregnancy, labour and postpartum. [41,42] As for the type of migration, the women's backgrounds varied from recently migrated women (even still during pregnancy) to women whose (grand)parents migrated to Belgium. It is important to note that the categories of first (n = 10), second (n = 13) or third generation (n = 1) are not to be understood as clear-cut or well-delineated categories, since their migration backgrounds reflect the tendency of transnational migration in societies worldwide, according to which people migrate to several societies (to work, to reunite with family) during their lives. For instance, some women migrated first to another country in the European Union (e.g. neighbouring countries) and later to Belgium. Furthermore, we will use the term "(Muslim) women" instead of the more correct term "Muslim women from Turkish and Moroccan descent" for readability reasons only.
Data collection
Between May 2016 and December 2017, interviews were carried out by the first author (LD), a female anthropologist with experience in conducting in-depth interviews with Muslim women in Belgium. The interviewer is a mother herself and was raised in the same region as the women in the study, but is not of Turkish or Moroccan origin herself. The interviewer was aware that taking time and following the pace of the women was the most important prerequisite for carrying out meaningful interviews. As such, the interviewer not only spent time before and after each interview engaging with the daily life of the woman and her family, but also allowed her to set the pace during the interview itself, and did not easily interrupt the women's elaborations. Depending on the place where the women felt most comfortable, interviews took place either in their own home (n = 20), their parents' home (n = 1) or in a private room within a familiar organisation (n = 3). Interviews lasted on average 115 minutes (range 50'-225') and were conducted in the women's preferred language, most of them in Dutch (n = 20). One interview took place mostly in Dutch and was supported by English clarifications, although we have to specify that this mother could not express herself in Dutch during her hospitalization (n = 1). In one case, understanding one another turned out to be more difficult than expected because the woman's language ability was sufficient for everyday situations but not for expressing experiences or profound feelings. In this case, the interviewer returned for an additional interview with an interpreter to deepen the data. In total, three interviews were conducted by the interviewer accompanied by an Arabic interpreter (formal (n = 1), informal (n = 2)). Bearing the sensitivity of the subject in mind, it was important that the women themselves could choose an interpreter with whom they felt most comfortable. Afterwards, an independent interpreter (also a qualitative researcher herself) checked the transcriptions of these interviews for accuracy of translation.
The semi-structured interviews followed an interview guide based on a previously published systematic literature review. [4] The interview guide was adapted after a pilot interview and refined after discussions within the research team and a meeting with other experts in the field. The open and non-suggestive questions allowed the women to describe their experiences freely while keeping the focus on those experiences. [35] Each interview started with a personal introduction of the interviewer, with a focus on the shared experience of being a mother. After this, the women were asked: 'Can you tell me all about your delivery, from one mother to another?' The interview guide was used in a flexible manner and proved very helpful for checking whether all the important themes had been covered throughout the interview (S1 Appendix). At the end of the interviews, the women were asked what they experienced as the most important themes discussed and whether there was something else important to discuss that had not been elaborated on yet. The themes in need of more attention were noted in the margins of the interview guide after each interview.
Ethical considerations
In March 2016, ethical approval (G 2016 03 531) was obtained from the Social and Societal Ethics Committee (SMEC) of KU Leuven. All the participating women received a written information brochure about the study's purpose and nature, information about the research group, the procedure, their rights as study participants, and contact details. Due to the sensitivity of the subject and/or the presence of an interpreter, additional ethical considerations were crucial throughout the interview process (S2 Appendix).
All the women agreed to digital recording. During the first interview, the women were asked for permission to contact them afterwards in case of additional questions. In one case, the interviewer returned for an additional interview to complete the data, and in two cases, the interviewer asked additional questions by phone. Interviews were never interrupted or stopped once they had started, although some interviews (n = 5) were rescheduled at the last minute due to illness, doctor appointments, etc. One woman withdrew due to work-related difficulties.
Data analysis
The interviewer (LD) made field notes and a narrative report directly after each interview. Interviews were meticulously transcribed verbatim by LD (non-verbal signs included) as soon as possible after the interview. We used the Qualitative Analysis Guide of Leuven (QUAGOL) [44] to analyse the data in accordance with the Grounded Theory approach. It consisted of two parts: a preparatory part (pen and paper stage) and the actual coding of the data (using the software NVivo 11, QSR International, 2016). [44] During the preparatory stage, two researchers (LD and YD) carefully (re)read the transcriptions and descriptive reports and marked significant events, facts or meaningful elements. LD and YD independently made a conceptual scheme for each interview to capture the essence and to cluster concrete experiences into patterns or themes on a more abstract conceptual level. Thus, meaningful themes grounded in the data were discovered rather than breaking down the data through a line-by-line coding process. YD and LD compared and discussed these conceptual schemes for similarities or discrepancies. The other supervisors of the multidisciplinary research team (CG and BDdC) simultaneously read seven of the richest interviews. During regular meetings, we discussed the identified patterns, potential discrepancies and themes that needed more clarification. After ten conceptual schemes, we developed an overarching scheme in which key themes were compared for similarities and differences. After this, the overarching scheme was continuously tested for appropriateness against following and previous interviews. From this stage on, a constant forward-backward movement occurred between analysis within one interview and across interviews, and between analysis on a basic level and on a higher level of conceptualization. After this, a peer debriefing took place (March 2018) in which the interpretation of the transcripts and results was thoroughly assessed and discussed by an interdisciplinary panel of external experts.
Based on the themes from the preparatory stage, LD made a common list of analytically meaningful codes in preparation for the coding process in NVivo 11. During the actual coding process in NVivo 11, LD linked all the relevant text fragments to appropriate codes using the list of codes identified in the preparatory stage. This was followed by a close examination of the meaning, dimensions and characteristics of the concepts by means of their associated quotations. This helped us to extract the essential structure and to develop a theoretical framework as an answer to the research question. We verified the accuracy of this framework by means of constant comparison with all the individual interviews and conceptual schemes. Finally, we translated the findings into a narrative storyline and illustrated this with relevant translated quotations. The multidisciplinary research team regularly checked and discussed the actual coding process, the development of the concepts and the framework.
Results
The narratives of Muslim women from Turkish and Moroccan descent present the care experiences in a Flemish maternity ward as a dynamic and long-term care process, which starts with the prenatal consultations and lasts until the postnatal check-up. During this perinatal care process, the women were engaged in various kinds of relationships with multiple caregivers, each influencing the women's care experiences in their own way. From the narratives we learn that the way in which the women ultimately experienced the full care process was determined by the dynamic interplay and interrelatedness of two essential dimensions, viz. Ereignis and Erlebnis. A German translation of these concepts was necessary due to the lack of a well-grounded translation from Dutch to English for the terms ['Gebeuren', i.e. 'Ereignis'] and ['Beleving', i.e. 'Erlebnis']. The correct translation was frequently discussed, not only within the multidisciplinary research group but also with several experts in the field.
The first dimension, Ereignis, refers to the women's experiences of 'what' actually happened during the care process. It entails the women's experiences of the accuracy or attentiveness of caregivers when performing a variety of care interventions (medical, technical acts). It represents the women's experiences of what caregivers did to take care of (or even to cure) them when giving birth. In essence, it refers to the question: "What attention did various caregivers have for the acts that happened or needed to happen?" as seen through the eyes and experiences of the women.
The second dimension, Erlebnis, entails the women's experiences of 'how' or 'the way in which' the care process took place. Here, the women referred to the caregivers' attentiveness to their emotions, feelings and wishes, as well as to their various 'beings': as a person, a patient, a new mother, a daughter, a wife and a Muslim woman. Essentially, this dimension refers to the question, as experienced by the women: "What attention did various caregivers have for 'the way in which' these things happened to me?" These two dimensions are interrelated, and the dynamic interplay between them determined how the Muslim women experienced the care process. Whether or not they were in balance depended on the two-dimensional attention given to the women by the various caregivers across the whole care process.
On a more fundamental level, the narratives revealed that both dimensions of the care process as well as all the care interactions between the women and the caregivers are embedded in the women's own cultural background, by which they give meaning to their care experiences. This means that the women's cultural context cannot be understood as a static set of values, preferences or traditions but has to be understood as a dynamic process in which a symbolic, emotional or moral meaning was given to the women's experiences throughout and within the process of caregiving and care-receiving. In this relational process, the women balance between their own cultural system of values, beliefs and traditions and the actual reality of values, beliefs and traditions of the multiple caregivers in the hospital. As such, different (cultural) views on the childbearing body, on appropriate treatment and good care come together during and across many different care interactions between the women and their caregivers. Fig 1 provides an illustrative overview of these dimensions and the determinants of the women's care experiences. In the following sections, we will first describe the interrelatedness of the two essential dimensions of Ereignis and Erlebnis. The second part presents the influence of culture as a meaning system affecting the women's experiences in a fundamental way.
Four ways of interrelatedness
The women's narratives showed that the dynamic interplay between the two essential dimensions (Ereignis and Erlebnis) and the way in which both dimensions were interrelated determined the outcome of the perinatal care process. Four ways of interrelatedness between the two essential dimensions define the outcome of the long-term care process as experienced by the women. The four outcomes are summarized in Fig 2.

Protocolized care. The first outcome can be presented as a care experience in which there was a fair amount of attention from the various caregivers to the dimension of Ereignis and little or no attention to the dimension of Erlebnis. In the women's experiences, technical care was performed based on strict protocols and procedures. As a result, the women experienced that everything went well on the level of the medical outcome. In general, however, they perceived little attention to their experiences, feelings or emotions. Although everything went well in a medical and technical sense, the care process did not happen as the women wished. On the contrary, they felt forced by various caregivers to follow rigorous rules at a quick pace, while they felt as if there was no regard for their own voice and choices in the care process. In some cases, the women were even told to 'cut the comedy and just follow the procedures'.
Altogether, the women experienced that not many reciprocal care relationships existed here, since they felt that most caregivers did not communicate well and were often 'too busy to care'. Nevertheless, caregivers provided medically competent, technical 'cure' (e.g. when responding to complications) despite the restricted room for the individual, religious or cultural wishes of the women. Due to this, the women only felt welcome as a patient but not as a person.

"[...] came to measure and weigh her and then they continued like this, there was no pace in it and I could not rest after the birth [...] no, those were just standard procedures that must be carried out in the hospital." (6)

Uncaring care. The second way of interrelatedness was revealed by the women who felt overcome by their experiences, since the various caregivers paid little or no attention to both dimensions, Ereignis and Erlebnis. On top of the fact that the women's own voice and feelings were not acknowledged (cfr. protocolized care), technical and medical inadequacy occurred. In these narratives, various caregivers did not recognize the seriousness of the physical and emotional difficulties expressed by the women themselves:

"They did not believe me and said: 'Madam, you should not make it bigger nor turn it into a drama, those are just a few drops [of blood] you know. The baby is coming and within 48 hours you will have given birth'. What 48 hours?! Within 45 minutes, I had my baby, delivered by myself [without caregivers around] because they did not believe me." (9)

Many women mentioned that 'something was wrong' but caregivers failed to recognize the seriousness of their signals. As a result, complications occurred and care became urgent and acute. A care process started within which many caregivers were immediately and overwhelmingly present in a medical and technical sense, but nobody noticed the patient as a person. As one woman described the experience of her acute caesarean section, in which 'inhumane' caregivers neither explained the things they were doing nor talked to her to reassure her:
"I keep saying, with my first [caesarean section] it happened just like a sort of . . . yes, a sort of slaughter." (21) In such cases, little or no reciprocal care relationships existed and the women felt lonely due to caregivers being indifferent, unhelpful or uncaring. Some felt that caregivers treated them in a brutal, annoyed, harsh or irritated manner. Overall, they mentioned a lack of attentiveness not only towards their physical needs (e.g. no check-up) but also towards their emotional and psychological needs. Caregivers only 'did' things quickly (even very intrusively), but did not interact with the women nor explained what was going on or what was going to happen.
"Suddenly many caregivers came in and out and I was thinking ' (14) These women expressed feelings of powerlessness, of being at the mercy of their caregivers and thus, a lack of self-agency existed. Many of them expressed that they felt treated as a number instead of as a human person since their concerns, questions and worries remained largely unseen.
Embraced care. The third way of interrelatedness of the two dimensions was revealed in the women's experiences where there was a lot of attention to both dimensions, Ereignis and Erlebnis. Various caregivers frequently and spontaneously asked them about their concerns, questions or worries. There was personal contact and meaningfulness within the various care interactions, and caregivers were helpful, attentive and kind to them.
"They were always very friendly, helpful and asked by their selves 'Can we do this or that for you?' So they suggested things themselves, which made me very reassured and made me think 'Look, they are really helping me, not just in a curtly way'." (2) According to these women, caregivers emphasized that taking care of their needs was not an effort to them and that they loved even the less pleasant parts of the job. Here, complications also happened but were handled differently than in the uncaring care narratives. Competent care was important to the women but instead of a sole focus to the medical and technical aspects of care, they were surrounded by extra care, respectful attitudes of caregivers and were talked and guided through difficult moments. The women felt that they were taken seriously. They did not feel lonely or ignored in their own knowledge, fears or worries. On the contrary, they felt embraced as a human person and did not feel like a number (versus uncaring and protocolized care). The women described the various ways in which care was adapted to their own specific needs. They felt in control of the care process.
"I honestly expected them to be much stricter and that I would have to listen to them but I was actually in charge. [ (22) The women felt recognized as a person with room for their own personal, religious and cultural wishes.
Ambiguous care. The fourth type evolved out of the ambiguity in some care experiences. One reason for this ambiguity was the presence of (some) medical inadequacy (Ereignis dimension) that contrasted with the presence of attentiveness towards the Erlebnis dimension, either by the same or by other caregivers. The women forgave and resigned themselves to medical inadequacy when (some of) the caregivers paid extra attention to the way in which the women felt during the care process, even though it was hard or (some) things went wrong. Some narratives displayed ambiguous feelings mainly due to the overall good intentions of the various caregivers, despite negative feelings caused by medical complications and inadequacies. In these narratives, complications occurred but caregivers were at least trying to give good care by listening to the women about how they felt and by responding to their wishes. In other narratives, ambiguity existed when some of the caregivers were inattentive to either of the two dimensions while other caregivers (e.g. in other shifts or wards) showed a (very) high attention to at least the Erlebnis dimension (15). Here, the presence of caregivers who were especially attentive to the Erlebnis level contrasted with the presence of caregivers without attention to Erlebnis during the same hospitalization. When the women met attentive caregivers after previous negative events (e.g. maternity ward versus obstetric rooms), they regained self-agency and recognition and resigned themselves to previously experienced negative events. For example, when the baby had to be admitted to the Neonatal Intensive Care Unit (NICU), the sharpness of the events that happened during delivery (even very intrusive moments) was toned down by most of the women. Altogether, these women felt largely positive about the care process despite previous negative experiences, which was ascribed to the extremely caring attitudes and the cautious medical way in which the NICU caregivers cared for them and their new-borns.
The women's care experiences
Based on the women's narratives, we detected that each of them, in her own unique way, showed one pivotal way of interrelatedness across the various care relationships. From an overarching view, their care experiences were either predominantly protocolized, uncaring, embraced or ambiguous. As such, the women described various similar moments of interaction throughout the care process in general. Of course, some particular care relationships differed from their overarching experiences, but in such cases these events had a smaller influence on the women's care experience in general. For instance, one woman described an overarching 'uncaring' process but encountered one particular, very caring nursing student. This specific encounter with a caring student, however, had less influence on the overall picture of that woman's experience.
In general, the women's experiences revealed that the caregivers' attention to 'how' they felt during their care and to 'the way in which' care was carried out (Erlebnis) was at least as important as the caregivers' attention to 'what' happened and to the sort of (in)competent care caregivers performed (Ereignis).
"Introduce yourself and be friendly. [ (14) Notably, the narratives also showed that 'the way in which' caregivers took care for the women and their new-borns determined the women's fundamental care experience to a high degree.
"In the hospital, they say things very quickly and then they move on. Whereas if they would sit down with you and hold your hand just a little while. Because they are touching your breasts anyway, so why can't they also hold my hand?" (4) Culture as a meaning system The women's care experiences, i.e. the four ways of interrelatedness and the respective care interactions between caregivers and women, were fundamentally embedded within the women's own cultural meaning system. This cultural meaning system was revealed as a dynamic relational process by which the women's care experiences and their care interactions with caregivers took on an emotional, symbolic and moral meaning. As mentioned previously, the women's cultural context of values, beliefs, practices and traditions gave meaning to their expectations, preferences, attitudes and behaviours regarding the care process. Remarkably, most women did not separately mention culture or particular cultural practices or traditions as being explicitly important for their overall care experience. Nevertheless, the narratives revealed that the women's culture was present as a fundamental meaning system that affected every aspect of their care experience in the hospital. The way in which this actually happened, or the intensity of its importance, differed profoundly throughout the narratives. Every woman had a unique way of describing the actual influence of culture on their care experiences.
Notwithstanding these individual varieties, all the women shared the influence of culture in at least three substantive aspects of their overall care experience: their cultural system affected their expectations about care and care relationships, it affected what happened during the care interactions in the hospital, and it influenced the way in which the women interpreted and coped with the emotional event of giving birth.
Culture affecting the women's expectations. Most women indicated that they did not have explicit expectations about the care experiences in the hospital during the process of giving birth. As such, they expressed thoughts like: "Come what may", "I do not know what to expect" or "It is in God's hands". On closer inspection, though, we detected that all the narratives revealed implicit expectations and underlying preconceptions that were mostly self-evident within the women's own cultural meaning system, but which turned out to be less obvious within the actual reality of the hospital culture. For instance, one of the women pointed to the difference between the cultural meaning of giving birth (as something that is highly special) and the meaning of giving birth within the hospital culture (as something that is quite normal).
"In our culture, we really say that when you are giving birth, you are literally standing with one foot in your grave. Really, that's how hard they see it. It's quite something." (22) Even more, the women's care experiences were often determined by the manner in which their unique expectations about the delivery and care process (viz. 'how care should be') came true within the hospital reality. Some women struggled when their expectations did not come true due to differences in views with caregivers on the special status of a childbearing woman, the desired way of giving birth or the sort of caregivers' support. For instance, many women expected to get extra support by the caregivers as a result of their cultural preconceptions about the vulnerable and emotional position of a childbearing woman and their emphasis on staying strong during a natural birth (since pain was, in their view, a test from God).
"I want to be cared for with attentiveness [. . .] and not like 'Just wait your turn and then give birth.' They really have to listen to you as a person and to ask about your feelings especially with pregnant woman. [. . .] that moment, that is something special and you certainly need being supported and motivated by saying that you are going to do it well and that they are here for you if something goes wrong. So that they are really present by listening to you." (15) The expectation of being surrounded by attentive and supportive caregivers was challenged when the women only faced a rushed presence of caregivers (e.g. uncaring or protocolized care) or when being pregnant or giving birth were considered by caregivers as 'normal' and not as something very special. For instance, when caregivers solely focused on the baby's care without sufficiently acknowledging the mothers' needs.
"Some [caregivers] only care for the baby but the mother needs a little bit of care too, we also have symptoms and pain. This time, they actually looked at us, not just at the child. That was nice. If we feel very bad, how can we take care of the child? [. . .] So I think they should give more attention to the mother." (13) Nevertheless, when complications occurred by which the delivery did not turn out the way they expected it to be (e.g. epidural, cesarean section), most women did not assess it as being insurmountable when the care process happened together with caregivers. Many women expected a warm and attentive care process with appreciation for the special character of giving birth, during which a reciprocal care relationship with caregivers, characterized by mutual kindness and equality, could be realized. The women expressed their expectation of 'being treated in the same way as you treat others' (e.g. mutual kindness) as something that is important within their culture (e.g. 'be kind to others and then you will receive kindness in return'). The care relationship became seriously under pressure when caregivers were harsh or brutal while the women themselves were trying to be as polite as could be. The expectation of reciprocal care was also expressed in the women's expectations of realizing care together, in a relation of equality. As such, they also expected the caregivers to help them with taking up their own role: "I do not know, I expected when I was entering [the hospital] that they would say first: 'How do you want to give birth?' I believed that this was going to be the first thing they would say, and from there on, that a midwife was constantly going to be with me." (6) Furthermore, most of the women's present expectations were influenced by previous care interactions and the way in which they coped with these previous events according to their cultural meaning system. Some women felt more confident due to these previous experiences (e.g. with supportive caregivers) while others tried to cope with previously insensitive caregivers (e.g. towards themselves as a patient or as a Muslim woman) and previous negative care experiences (e.g. during traumatic deliveries, miscarriages or prenatal hospitalizations). Current difficulties were aggravated as soon as caregivers reacted in a similar way as before (e.g. by not taking them seriously in their needs and worries, again. . .).
"After the blood loss we went back to the emergency, we ring the bell and that woman said: "Ah, could you not persevere at home?" So what did she do? Once again, she did not check below." (15) Previous deliveries in foreign countries also influenced the women's expectations since they expected the same care as they were used to have (e.g. Netherlands, Turkey or Spain) or even expected better care (than for instance in Morocco). However, this expectation did not come true when women were confronted with a caregivers' lack of attention especially to their Erlebnis (uncaring, protocolized care).
Other influences on the women's expectations were the shared narratives of family members and friends, intensified by the use of social media and the internet (through which the women also stayed connected with their family and friends worldwide). Through these, the women explored every manner of 'how care can or should be', prepared themselves for the moment of delivery, or braced themselves against horror stories coming to them from almost everywhere (e.g. failed epidural anaesthesia, intrusive caesarean sections or racist caregivers).
In general, the lack of congruence between the women's expectations and the reality in the hospital sometimes led to far-reaching consequences.
"I did not expect that care would be so serious here, that was really a shock and because of this I'm really thinking of: 'Well, I'm not taking another child yet, not for now.'" (14)
This incongruence sometimes caused women to question their choice of giving birth in Belgium or even their own migration: "When that first nurse treated me so [. . .]" (8)

From the women's narratives, we also learned that prenatal check-ups had a great impact on the women's care experiences, since they created the opportunity to talk through the women's expectations, previous difficulties or worries, and to prevent misunderstandings due to different views about the care process. This was especially important when the women described a significant difference in the meaning of being pregnant and giving birth between their own culture and the (biomedical) culture of the hospital.
Culture affecting the care interactions. The women's care experiences were also shaped by the way in which their own cultural meaning system intermingled with the caregivers' cultural context during the care process. The narratives showed that most values, practices, beliefs or preferences embedded in the women's culture did not pose insurmountable challenges to the care interactions, as long as the caregivers were generally open and attentive to the women's needs, worries, wishes and feelings; that is, when they had a well-balanced attention for both the Ereignis and Erlebnis dimensions of the care process (e.g. embraced care). In these situations, the women felt welcome and recognized as a person with her own specific cultural context, which could be part of the care process and of the care interactions with their caregivers.
For example, when caregivers were generally attentive, the women easily found practical ways to integrate some of their cultural and religious traditions within the care process, such as listening to the Quran during delivery, whispering the 'Adhan' (first call to prayer) or 'Shahâda' (profession of faith) in the child's ear, rubbing the baby's palate with a piece of date, swaddling the baby, and eating halal food. Most practices happened implicitly, almost silently (like the 'Adhan' or the date), or when the caregivers were absent. Nevertheless, most women really felt respected when the caregivers reacted in a positive or neutral way to the presence and integration of cultural practices in the care process. For example, the women felt respected as a human being and as a new parent when caregivers paused the care acts for a few minutes especially to give the parents the chance to carry out the 'Shahâda' in silence.
"They really respected that, they stopped the things they were doing because we had to whisper this as soon as the baby comes out [ (5) When there was no room for the implicit presence of culture within the care process, differences between the two cultural meaning systems (i.e. between patients and caregivers/hospital) could cause misunderstandings in the care interactions and put the care relationship under pressure. In some cases, these differences have led to underlying feelings of cultural prejudice, discrimination or racism throughout the care experiences of the women, although only a few explicitly identified discrimination or racism. As one women, for instance, pronounced underlying cultural prejudices of her caregivers. (14) Other women stipulated that according to them, there was no racism or discrimination in the hospital since caregivers 'are obliged to treat all patients in the same way and are not allowed to discriminate'. Remarkable, however, is the fact that the absence of discrimination was regularly expressed explicitly, while the presence of racism or discrimination was mostly expressed in a somewhat covered way.
Particularly important for the care interactions was the way in which caregivers reacted to the role of family members as informal caregivers (their practical and emotional support was described as being important within the women's culture). Recognition of their role was crucial for the care interaction between women and caregivers. Some caregivers failed to recognize the family as a companion in the women's care, or even treated them as a burden when they applied the visitation regulations very rigidly or did not communicate well with the family about their various roles as informal caregivers surrounding the woman (9). From the women's narratives, we learned that difficulties in the care interactions could be aggravated by the manner in which caregivers sent visitors away. This became especially difficult when the mothers already felt lonely due to the caregivers' absence, as happened in the case of protocolized or uncaring care (cf. supra). Some women expressed a fear that caregivers would no longer care for them due to discussions with their family members. On the other hand, the presence and role of the family was sometimes also experienced as overwhelming by the women themselves, because of the family's overwhelming presence (also via social media), multiple interferences, conflicting advice, or crowded visits. Caregivers who sensed and recognized the women's discomfort because of an overwhelming family presence, and who mediated in a sensitive way, made the women feel understood, respected and cared for.
"Nobody could hear it, but she whispered very quietly: 'Do you want me to send these people outside?' [. . .] I could not say it to them myself, that is impossible. Because these people came for me with good intentions. So I was so relieved [..]. She understood that I was having a hard time." (15) The absence of close family due to the women's migration was also a difficult experience for some women, which became ameliorated when caregivers in some way took over the role of absent family. (20) Overall, the women appreciated caregivers even more when they showed a genuine interest in them, or started a conversation about their social environment, origin or culture in an open way and recognized their cultural context as an important dimension of the women as a person. In such cases, the care relationship integrated both the Ereignis and Erlebnis dimension in a way that the cultural meaning system of the women involved could be an essential part of the whole experience.
"I cried sometimes because my mother died and my father is
Culture affecting the interpretation of and coping with the experiences. Culture did not only affect the women's expectations and their care interactions, but also the women's interpretation of why things happened the way they did. When the delivery and care process went well, the women ascribed the good outcome to their own share of being polite, self-reliant, communicative and non-judgmental. Just as we detected in the part on the women's expectations (cf. supra), mutual kindness and equality ('being treated in the same way as you treat others') is something that the women expressed as an important religious and cultural belief (e.g. 'be kind to others and then you will receive kindness in return'). As such, a good care process was interpreted as something they 'deserved' because they were empathic and kind to the caregivers, and did not bother them unnecessarily.
By contrast, when the care process went roughly, women tried to understand why things turned out this way and tried to give meaning to the question 'Why did the caregivers act in the way they did?' Here too, the women's cultural context was part of their interpretation. Metaphorically speaking, the 'ghost of discrimination' was present in many narratives as a more covert form of discrimination or racism. This 'ghost of discrimination' became visible when women asked themselves whether or not a certain act or omission had something to do with them being a Muslim person, their Turkish or Moroccan origin, wearing the veil or a (perceived) lack of language proficiency: 'Did the caregiver react this way because of this or that, or because I am a Muslim woman?' For example, when caregivers reacted in a non-communicative or unfriendly way, women asked themselves: 'Is the caregiver reacting strangely due to their character, age or working experience, or is it because of my veil?' They asked themselves whether the difficulties they were dealing with in certain cases were the same for other people or, on the contrary, a matter of discrimination.
"This midwife was rude to me, I do not know if it was because of my veil, but it was the way in which she said: 'Take off these rags'. She could have said just as well: 'Take of your veil' in a more polite way but when she [also] loosed her temper with a nursing student, I thought something like . . . in the beginning I thought to myself: 'Yes, she is a racist'. You immediately have the tendency to judge like: 'She is a racist' but when I saw her being infuriated at the student, I thought: 'Maybe, brutality is just part of her nature' and she was an older women, at least 50 years old, and when they grow older they are less pleasant to deal with since they are less patient." (5) For example, some women asked themselves why caregivers were spending more time with other patients.
"When I went for check-ups, I noticed that everyone goes in and really was staying there for half an hour, but when I went in, I only had to stay for only five minutes, so what is the difference now? Is there something about me? Or about her?" (23)
In other narratives, women described this 'ghost of discrimination' as something that happens non-verbally, as something that 'you just feel', when 'you feel less [worthy]' or when 'they looked at me differently'.
"I had the feeling that he was somehow contemptuous, like: 'Okay, that is a Turkish woman, who just came to give birth to another of her, I do not know, umpteenth child.' I don't know what he was thinking about me, but I didn't like it. Look, I was born here too. I also have studied here and I deserve the same amount of attention as anyone else." (5) A particular component of this experience is related to some of the women's religious and cultural practice of wearing a veil. Some caregivers did not react to it and thus implicitly reassured the women that within the hospital, there was no discrimination between people because of differences in cultural background (a form of discrimination that they regularly experienced in the broader society). Other caregivers reacted in a strange or less positive way by which some women experienced that caregivers did seem to estimate them negatively because of their veil.
"It is reassuring [when they do not react to the veil] because it is not nice. You always have to prove yourself twice as a Muslim woman. I always find it annoying." (5) Nevertheless, it was remarkable that most women expressed their appreciation for the care they received, even when (very) negative events happened. Most women showed a positive way of coping with negative events (e.g. a predominantly resignation towards negative events). The reason why the women coped with negative events in this way, was not always entirely clear and based on a combination of several reasons (although culture influenced this positive way of coping, it was not only caused by cultural values, beliefs or cultural interpretation of the events). One of the reasons showed itself in the fact that all ended well since all the mothers and newborns survived. Thus, even when the women dealt with uncaring care, most of them eventually came to terms with it since the medical outcome was more or less good. Another reason for the women's positive way of coping was their faith in Allah and their gratefulness for every outcome that Allah gave. These women expressed that 'everything happens for a reason' and that 'Allah will not give you something you cannot handle'. Some complications were interpreted as a life lesson and an opportunity for personal growth, assigned to them by Allah. Against this background, women sometimes minimized the severity of complications (e.g. when caregivers discovered a congenital defect when it was too late for an abortion). The women did not feel strongly about this since these children were a gift from God and they would never terminate the pregnancy for it.
"Of course, you are just frightened because it is about your children. You are a bit shocked if something is wrong. I had to cry but at a certain point I thought . . . yes, you know we believe in God, some people don't, but we do believe and I thought: 'God doesn't give you something that you cannot handle, so be happy, there are still people with things that are worse.' You always have to be a little bit positive." (20) Still another reason for this predominant positive way of coping despite negative events was, as we mentioned before, that some of the difficulties were countered by the very caring attitudes and the attentiveness of caregivers towards their Erlebnis (viz. ambiguous care).
The opposite happened when difficulties, inadequacy in care, a lack of attention to either of the two dimensions (viz. uncaring, protocolized, ambiguous care) and caregivers' negative reactions towards the women's cultural context caused the women to feel anxious, distressed, mournful, disappointed, lonely and angry. One woman even wished to be dead because of uncomprehending caregivers (uncaring care). Here, women felt abandoned and unwelcome, to which they reacted by giving up or withdrawing from the care relationship or even from care in general (e.g. by leaving the hospital early or by expressing the intention of not returning to the hospital). Some women felt it was useless to address their complaints about mistakes to the designated department. They no longer trusted the possibility of a good outcome. In such situations, the overall result was a full disconnection from meaningful care or care relationships. Most narratives, however, emphasized that either way, the women would never forget the care performed during the emotional event of giving birth.
Women, care and culture
This study shows that the women's principal focus in describing their perinatal care experiences was on the care interactions with their caregivers and on the two-dimensional attention of caregivers towards 'Ereignis' and 'Erlebnis', rather than on the caregivers' attention to cultural questions during the care process. As such, our study is consistent with existing studies in which the quality of intercultural care depends in the first place on the nature and quality of the care relationships between ethnic minority patients and their caregivers, rather than on the way in which patients and caregivers handle cultural differences during the care process. [4,28,45] Despite the gratitude of most women towards the safe care outcome, they did not feel satisfied with their perinatal care process when caregivers did not pay enough attention to the things that needed to happen (Ereignis), or when they did not pay enough attention to the way in which the women felt during the care process (Erlebnis). Tensions in the care relationships with caregivers (e.g. due to difficulties in communication, caregivers' negative attitudes or their lack of support) were intensified by the lack of room for the women's cultural context.
Given the women's emphasis on the care interactions with caregivers, we may ask whether we need to adjust maternity care in this intercultural context, or whether it would be sufficient to focus on the quality of the care relationships between these women and their caregivers. Is it sufficient to start from a person-centred approach to enhance the quality of intercultural care, or do we have to take the women's cultural context more explicitly into account?
The importance of the relational care process. As defined by the WHO, person-centred, safe, effective, timely, efficient and equitable care is important to improve the desired health outcomes and the quality of healthcare for all patients. [32,46] More specifically, the WHO developed a framework to achieve the desired health outcomes by defining the quality of maternity care as depending on two inter-linked dimensions, namely the provision and the experience of care. [32,46,47] Indeed, the women's narratives in this intercultural context confirmed the importance of the two-dimensional attention towards the provision of care, along with the necessity of effective communication, respectful attitudes and the social and emotional support of caregivers. [32,46-48] As such, one could argue that, also within this intercultural setting, the WHO framework and a person-centred approach would be a good starting point to improve quality in maternity care. Within the person-centred approach, caregivers are encouraged to collaborate with patients to co-design and deliver personalized care, which includes the caregiver's responsiveness towards the patient's preferences, needs and (cultural) values. [49] According to the women's narratives, this 'collaborative' relational care process with caregivers was indeed crucial to their care experiences.
However, our results confirm that providing person-centred care is even more challenging in an intercultural setting, since different cultural values and preferences of both patients and caregivers come together within the relational process. [50] Starting only from the patient's cultural context might hold the risk of an overemphasis on finding practical solutions for a set of cultural practices, beliefs, values or traditions. In accordance with previous research, we argue that practical solutions to visible religious and cultural tensions can never merely be the answer to the question of how to provide appropriate intercultural care, [4,13,17,51] because by doing so the concept and practice of intercultural care would be predominantly understood as a 'technical art' instead of a 'moral practice'. [4,51,52] On the contrary, we have to understand care as the relational process of care-giving and care-receiving in which all involved seek to find a dignified answer to a situation of human vulnerability. [51,52] The nature of this relational process in our findings is consistent with Joan Tronto's ethical model, in which she defined four essential dimensions of the care process: caring about (attentiveness towards the patient's needs), taking care of (taking responsibility to answer the needs), care-giving (the actual work being done) and care-receiving (the evaluation of how well the needs were met). [53,54] In addition to these four dimensions of Tronto, we could argue that the two-dimensional attention of caregivers towards 'Ereignis' and 'Erlebnis' is needed in each of the four dimensions of care.
Building on Tronto, Martinsen suggests that in a medical context it might be useful to distinguish between 'taking care of' and 'caring for'. [55] As such, she pointed out that it might be difficult for caregivers to go beyond 'taking care of' the patients' needs in a technical way. She stressed the need to also 'care for' the patients through empathic engagement [55]: "Why is it hard for the physician to simply care for the patient without hiding behind some kind of procedure? Why is it difficult to just sit down and hold the patient's hand, which may be the most appropriate thing to do in this situation?" (p.114) Our findings reveal that being aware of this tension might be even more crucial in the maternity care setting, in which caregivers have to deal with the extra vulnerability of childbearing women in a post-migration context. [28,29] Since only few complications occurred, most of the women placed less emphasis on 'taking care of' in a more technical sense than on 'caring for', since being pregnant and giving birth were in their view a rather natural process that placed them in a special position in which they needed support in an empathic and attentive way.
This also refers to a field of tension in Belgian maternity care, since care in this context is predominantly obstetric-led, with a high emphasis on the technical aspects and on the pathological potential of being pregnant and giving birth. [56] The women appreciated the high technical competence of caregivers and were grateful for 'the good outcome' of this safe, high-quality care. This gratefulness was in accordance with a previous study on the experiences of Flemish women in Belgian maternity care. [45] This starting point in obstetric-led maternity care, however, caused difficulties when the support of caregivers was limited to an uncaring or protocolized way of 'taking care of' the women. In these cases, a gap was noticed between the women, who valued a personal relationship with caregivers, and the caregivers, who kept a 'professional distance'. Although this professional distance is in accordance with the tradition in medicine of 'staying neutral', in the women's narratives it was not interpreted as a 'neutral' value but rather as a caregivers' unwillingness to 'care for' them in a personal way. [55] In talking about caregivers who were not emotionally available for them, some women even wondered whether this distance was caused by their Turkish or Moroccan descent, their veil or (perceived) language problems. In such cases, the professional distance of caregivers was interpreted against the background of this 'ghost of discrimination'.
Our results show that providing intercultural care is challenged not only by culturally different views on illness and care or by caregivers who cause harm (e.g. through discriminating attitudes). On the contrary, most harm took place in uncaring or protocolized care, when the women were dealing with a lack of care (especially towards 'Erlebnis'). As suggested by Martinsen, our results confirm the importance of interpreting harm as relational harm caused by a lack of care. [55] Here too, starting from a person-centred approach can be of merit to avoid 'relational' harm, especially in an intercultural setting, since it focuses on a 'collaborative' relationship and on caring 'with' patients rather than on 'caring for' or 'taking care of' patients. [49] It remains to be asked how we should understand the role of the women's culture within this relational process of care.
Culture within the care relationship: A shifting 'cultural' perspective? The women's narratives reveal that culture could not be reduced to a well-defined list of common cultural aspects, nor seen as separate from religious, social and psychological dimensions. The women's experiences were also not embedded in only one delineated culture (e.g. either a 'Muslim' or a 'Flemish' culture), since the women dynamically 'moved between' at least two interwoven cultural contexts depending on their own unique migration process. Moreover, a distinction between the experiences of women of Turkish and of Moroccan origin did not come to the fore in our results. In this regard, we also did not find significant differences between the first, second or third generation of Muslim women, which resonates with previous research in which no differences were found between the health beliefs of first and second generation Moroccan Muslim women in Belgium. [57] As such, the cultural issues that came to the fore cannot be interpreted as belonging solely to 'a Muslim culture'. In this, our study confirms the suggestion of Kleinman & Benson that it is neither feasible nor desirable to manage cultural issues in healthcare by making a list of cultural values, beliefs or practices that has to be taken into account when caring for a specific group of patients. [13,58] For instance, regarding the special position of being a childbearing woman, one could easily argue that this could count for all women, since being pregnant and giving birth is an important life-changing event for all.
Nevertheless, our results did show that the cultural context of the women is an essential meaning system that is part of the relational care process, adding another layer of emotional, moral or symbolic meaning to the interactions between the women and their caregivers, as well as to the women's expectations and interpretations of care. Similar findings regarding this role of culture as an additional layer to the interpersonal negotiation of care between caregivers and patients have been discussed by Broom et al. [59] As such, our results show the importance of understanding 'culture' as inter-relational and dynamic rather than as an isolated, static, individualized entity. Culture here can be understood in accordance with the definition of Kleinman & Benson [13]: 'Culture is not a single variable, but rather comprises multiple variables, affecting all aspects of experience. Culture is inseparable from economic, political, religious, psychological and biological conditions. Culture is a process through which ordinary activities and conditions take on an emotional tone and a moral meaning for participants. [. . .] Cultural processes frequently differ within the same ethnic or social group because of differences in age cohort, gender, political association, class, religion, ethnicity, and even personality.' As such, our results confirm the findings of recent international research that discusses the necessity of shifting away from an individualized conceptualization of culture in healthcare. [59-61] In the cultural competence models as well as in healthcare services, the culture of ethnic minority patients is mostly interpreted as an individual deficit that needs to be taken care of by caregivers. [50,63,64] Our results, on the contrary, showed that a shift is needed towards understanding culture as part of the relational care process, and thus as emerging from within the interaction between caregivers and patients, instead of seeing it as an entity that stands outside these interactions. Here, the findings add more insight into the importance of a shift in perspective on 'culture', from an 'individual culture-in-isolation' towards a notion of culture as inter-relational and emerging from within the various care interactions with caregivers. [59-61] The major practical implication of this notion is that we cannot start from cultural knowledge about a specific isolated culture to handle cultural issues or difficulties in healthcare practice. On the contrary, providing culturally appropriate care has to start from the establishment of a qualitative and meaningful care relationship, along with the recognition of the extra emotional and moral meaning that culture adds to the care expectations, the interactions and the way in which care receivers cope with their experiences.
Strengths of the study
This is the first qualitative study exploring the experiences of Muslim women of Turkish and Moroccan descent who gave birth in a maternity ward in Flanders, Belgium. [45] Empirical evidence on the experiences of ethnic minority women in maternity care worldwide confirms our findings on the importance of the quality of the care relationship, the influence of communication, the influence of the caregivers' attitudes, the importance of the women's involvement in decision-making, as well as the caregivers' responsiveness towards the women's expectations, (cultural) wishes and needs. [27,28,30,31,34] Adding to existing evidence, our study provides new insight into the underlying dynamics of these care relationships by discussing the complex interplay of two essential dimensions of the care process (Ereignis and Erlebnis) and the two-dimensional attention of caregivers towards these dimensions. Moreover, our results explain the way in which this complex interplay is embedded in the women's cultural context. As such, the study adds nuanced insight into the way in which the women's cultural context gives an emotional, moral and symbolic meaning to their care expectations, wishes and needs. Together, the interrelatedness of the two dimensions and the women's cultural meaning system shaped the manner in which the women experienced their care in maternity wards in Belgium. As such, our results provide a framework for critical reflection on the ethical question of providing dignified intercultural care in maternity wards.
Limitations of the study
As for the limitations of the study, the generalizability of the results is limited due to the nature of the study and because our results are grounded in the narratives of the participating Muslim women of Turkish and Moroccan descent who live in Belgian society as a minority group. Nevertheless, important issues were raised and discussed within our theoretical framework, which can provide a basis for further reflection on maternity care in Belgium and on intercultural care in general. Moreover, the results of this study were confirmed by earlier international research on the perinatal experiences of ethnic minority persons worldwide. As such, we can prudently assume that the concepts we discussed are also valuable in other settings. Future research is needed in this regard, to evaluate our framework in other settings. As for maternity care in Belgium, only a few studies exist that have explored maternity care experiences. [45,56,62-64] Since our study is limited to the perspectives of ethnic minority women, it would be an interesting topic for future research to include the experiences of autochthonous women who gave birth in Flemish wards and the perspectives of caregivers with experience in maternity wards, in order to compare these angles with the results of our study.
Possible bias
Furthermore, we were aware of possible bias, especially possible researcher and selection bias. First, there was the possibility of researcher bias, which needed attention in this study since it concerns a culturally sensitive theme, surrounded by many strong opinions both in contemporary societies and in scientific research. In this regard, we were particularly attentive to the fact that the interviewer was not a woman of Turkish or Moroccan descent herself, which included the risk of missing out on important data from an insider point of view, as well as of receiving only socially desirable or normative answers and opinions (i.e. on how it should be), without insight into the real experiences of the women (i.e. how it actually happened). To avoid this, the research group decided to start each interview with a personal introduction of the interviewer, so that the interview could begin on the common ground of being a woman, a wife, a mother, raised in Limburg. This helped avoid social desirability and answers from a normative stance, since the women could talk freely 'from one mother to another'. Analogously, we applied built-in guarantees, such as reflexivity and bracketing, to ensure the trustworthiness of the data. An ongoing reflective journal was kept by the interviewer (LD), using thick descriptions of the interviews and of all the choices made during recruitment, data collection and analysis. [35] The journal also included meanings given to the data and notes on how the interviewer was thinking about the subject before, during and after the study (bracketing). To reduce the risk of missing out on data because of interpreter bias, we cross-checked the translations with an independent interpreter and marked the data in which possible opinions or influence of the interpreter crept in.
Secondly, we were aware of possible selection bias, since all the women had a delivery with a positive outcome (e.g. no stillborn babies). As such, we have no results on the overall care experiences of Muslim women with a tragic outcome. Nevertheless, we did perform theoretical sampling, which resulted in a relevant variety of women's and hospital characteristics. As such, we reached a rich dataset of in-depth experiences, which allowed us to describe the intercultural care experiences of Muslim women in the context of maternity wards in Belgium in a highly nuanced manner. We reached data saturation on the concepts described in this paper.
Other strategies to ensure the trustworthiness of the findings were: (1) researcher triangulation (analysis performed by two researchers, LD and YD); (2) data triangulation, by space triangulation (women giving birth in six different hospitals) and person triangulation (the women differ in origin, age at migration, number of children, complications during their hospital stay, language ability, etc.); (3) peer review, through frequent meetings of the multidisciplinary research team (YD, LD, BDC and CG) to critically compare and modify the results; and (4) peer debriefings with external experts at several stages in the research project.
Conclusion
The findings reveal that the quality of intercultural care predominantly depends on the nature and quality of the care relationships between ethnic minority patients and caregivers, rather than on the way in which cultural questions and tensions are handled or dealt with. As such, establishing a meaningful care relationship should be a priority in providing intercultural care. Therefore, a shift in perspective on culture is needed, from an 'individual culture-in-isolation' towards the concept of culture as being 'inter-relational' and dynamically emerging from within these care relationships.
Development of process technology for preparation of Bael fruit powder
Bael occupies an important place not only nutritionally but therapeutically as well. It has a woody skin and is difficult to open by hand, so it is not very popular as a fresh fruit for table purposes. In this study, value-added dried products were made to enhance its shelf life and to utilize this important medicinal fruit around the year. Two types of sample were prepared: 1) raw Bael and 2) pre-treated Bael. The drying of raw and pre-treated Bael was carried out separately in a solar cabinet dryer. Parameters such as temperature, relative humidity, moisture content and weight loss were determined during the experiment. The samples were examined for color, flavor, appearance, taste and overall acceptance by a panel of ten judges, and the average score was calculated. Statistical analysis of the data yielded second-degree equations (with R² values) for the two samples.
Introduction
The total production of fruit and vegetables in the world is around 370 MT. India ranks first in the production of fruit with an annual output of 32 MT, contributing 10% to world fruit production. The Bael tree, which is the only species in the genus Aegle, grows up to 18 meters tall and bears thorns and fragrant flowers. It has a smooth, woody-skinned fruit 5-15 cm in diameter. The skin of the fruit is so hard that it must be cracked open with a hammer; that is why Bael is not very popular as a fresh fruit for table purposes. It occupies an important place among the indigenous fruits, not only nutritionally but therapeutically as well. The roots, fruits, bark and leaves all have high medicinal value. Since the fruit takes around eleven months to ripen after fruit set on the tree, it is not available to people throughout the year. Ripe stone apple is prone to spoilage because of a higher rate of catabolic activities, so it has a short shelf life, but it has excellent processing attributes. It can be processed into various products such as fruit juices, jams and jellies. Further, there is a growing demand for health drinks based on indigenous fruits. The Bael fruit can be eaten fresh or dried. If fresh, the juice is strained and sweetened to make a drink, and it is also used in making sherbets, a refreshing drink in which the pulp is mixed with tamarind. If the fruit is to be dried, it is usually sliced first and left to dry in the heat of the sun. Manufacturing of value-added products may be useful in establishing the utilization of the fruit year-round. Most Indian farmers lack adequate knowledge regarding proper methods of harvesting, handling, packaging, transport, storage and processing, and the fruit and vegetable processing industry in India is highly decentralized. A post-harvest loss of 30-35% of Bael occurs within only 15-20 days. Thus, there is a need for preparing value-added products from ripe Bael fruit pulp to enhance its shelf life and to utilize this important medicinal fruit around the year. The improved solar cabinet dryer was designed and developed in the Department of Processing and Food Engineering, College of Agricultural Engineering, Pusa. It is a simple, direct type dryer. Its cost is low because it can be easily fabricated from locally available materials. The developed solar cabinet dryer can be used for drying vegetables, fruits and other crops in small batches of 20-25 kg. This improved solar cabinet dryer was used in the present study to develop a suitable process technology for preparing dried fruit powder from ripe Bael.
Objectives
1. To evaluate the effect of pre-treatment on drying as well as quality characteristics of Bael pulp.
2. To evaluate the quality and sensory characteristics of a reconstituted drink prepared with the powder.
3. To test the performance of the developed solar cabinet dryer for quality drying of Bael pulp.
Solar Dryer
Solar dryers are based on the hot-box principle. Solar dryers used for drying agricultural produce are mainly of three types, depending upon the mode of exposure of the produce to sunlight.
Direct Type
In these units, the material to be dried is placed in an enclosure with a transparent cover and side panels. Heat is generated by absorption of solar radiation on the product itself as well as on the internal blackened surface of the drying chamber. This heat helps separate moisture from the drying product and, in addition, serves to heat the air in the enclosure, resulting in the removal of this moisture by circulation of heated moist air. Direct type solar dryers are generally useful for crops spread in a single layer or in comparatively thin layers of two to three unit thicknesses.
Indirect Type
In this type of dryer, the solar radiation is not directly incident on the material to be dried. Air in the solar collector is heated and then ducted to the drying chamber to evaporate moisture from the produce. Indirect type dryers are mostly used for crops which are sensitive to light and for drying large volumes of crops. Since heated air from the collector is ducted to the drying chamber, forced circulation is essential.
Mixed Type
In these dryers, the materials are subjected to both pre-heated air from a solar collector and the action of incident radiation. The combined action furnishes the heat required to complete the drying operation. These dryers are most suitable for quicker drying and for higher capacity ranges with materials which are not sensitive to light. In direct type dryers, the air circulation is achieved by natural convection; in indirect dryers, forced circulation is essential. A suitable gap should be provided between the glazed surface and the black surface, and proper arrangement should be made to remove rain water and condensed water from the glazed surface, especially in the direct type solar dryer.
Materials and Methods
This chapter deals with the working principle of the solar dryer, its specifications, the preparation of Bael pulp, the drying procedure, and the pre-treatment carried out during the experimentation. Views of the traditional method of preparation of Bael sarbat and of the solar cabinet dryer under loaded condition can be seen in the accompanying figures.
Solar Cabinet Dryer
The solar cabinet dryer consists of a main frame (cabinet), a tray, a transparent cover, and insulation. The overall size of the dryer is 1219 mm × 762 mm × 1077 mm.
Main Frame
The main frame has been fabricated out of locally available 20-gauge MS sheet in the form of a cabinet having a base area of 1219 mm × 762 mm. The side panels have been fabricated inclined at an angle of 31° with respect to the horizontal, after a height of 304.8 mm at the front end and a height of 762 mm at the rear end. The inside faces of the base, tray and side walls are painted black. Four exit holes of 25.40 mm diameter are drilled in the rear wall for the outflow of moist air.
Tray
A tray has been provided for holding the material to be dried inside the drying chamber. The tray is placed such that it could be moved freely on the angle iron frame.
Transparent Cover
A polythene sheet of 0.2 mm thickness has been used as a transparent cover. This polythene sheet has been provided on the top of dryer.
Insulation
Four side walls and the base of cabinet made of MS sheet of 20-gauge thickness serve as natural insulation for the dryer.
Sample preparation
Fresh, good-quality Bael was procured from the local market of Pusa. It was washed thoroughly in running fresh water to remove dirt, dust and any insects, and weighed on an electronic weighing machine (sensitivity: 0.01/0.1 g). The shell and pulp were separated with a hammer and knife. Fiber and seed were separated from the pulp by adding a measured amount of water, and the weight of the fiber and seed was taken. The final weight of the strained sample was taken. The prepared sample was then spread uniformly in the drying tray and inserted into the drying chamber of the cabinet dryer. For preparation of the pre-treated Bael sample, the same process was followed, except that 0.2% potassium metabisulphite (K2S2O5) was additionally added. This prepared sample was likewise spread uniformly in the drying tray and inserted into the drying chamber of the cabinet dryer.
Measurement and Determination Techniques
The drying of raw and pre-treated Bael was carried out separately in the solar cabinet dryer. Parameters such as temperature, relative humidity, moisture content and weight loss were determined during the experiment. Hourly ambient air temperature as well as drying air temperature was measured with a mercury thermometer (range: −10 to +110 °C). Hourly relative humidity of ambient air as well as drying air was measured with a hygrometer (range: 0 to 100%). Readings were taken every hour from 10:00 AM to 5:00 PM, and the samples were kept in the dryer over this period. At the end of the day, the whole sample was taken out of the dryer, wrapped in a plastic sheet and kept in a dry place in the laboratory.
Determination of Moisture Content and Weight loss
Moisture content of fresh raw (untreated) and pre-treated samples was determined with the help of a standard hot air oven. The samples were dried in the hot air oven at 105 ± 5 °C for 30 hours (taking into consideration the time of power failure). An important specification of the oven is its temperature range: ambient to 250 ± 1 °C. The wet-basis moisture content was calculated as

M (%) = [(Wi − Wd) / Wi] × 100

where Wi is the initial weight of the sample and Wd is its bone-dry weight; the initial moisture content of the fresh pulp was found to be 77.2%. For further drying, this value serves as the initial moisture content, and the final moisture content after a certain period of drying can be calculated from the recorded weight loss using the dry-matter balance Wi(100 − Mi) = Wf(100 − Mf), where Mi and Mf are the initial and final moisture contents (% wet basis) and Wi and Wf the corresponding sample weights.
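As a quick illustration, here is a minimal sketch of these two calculations in Python; the sample weights used are hypothetical, and only the 77.2% initial moisture content comes from the text:

```python
def moisture_wet_basis(w_initial, w_bone_dry):
    """Wet-basis moisture content (%) from the oven-drying method."""
    return (w_initial - w_bone_dry) / w_initial * 100.0

def moisture_after_drying(m_initial, w_initial, w_current):
    """Moisture content (%) after partial drying, from the dry-matter
    balance: w_initial * (100 - m_initial) = w_current * (100 - m_current)."""
    return 100.0 - w_initial * (100.0 - m_initial) / w_current

# Hypothetical example: 100 g of fresh pulp leaves 22.8 g of bone-dry
# matter after oven drying, giving the 77.2% reported in the text.
m0 = moisture_wet_basis(100.0, 22.8)            # 77.2
# If a hypothetical 1000 g tray load dries down to 600 g in the cabinet dryer:
m1 = moisture_after_drying(m0, 1000.0, 600.0)   # 62.0
print(f"initial: {m0:.1f}%  after drying: {m1:.1f}%")
```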
Determination of Bulk Density
Bulk density was calculated by putting the ground Bael powder in a beaker of 50 cc volume. The powder was filled up to the top of the beaker, after which the weight of the powder was taken and the bulk density was calculated using the formula: Bulk density (kg/m³) = Mass / Volume
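The same calculation in code, with a hypothetical powder weight; only the 50 cc beaker volume comes from the text:

```python
def bulk_density(mass_kg, volume_m3):
    """Bulk density (kg/m^3) = mass / volume."""
    return mass_kg / volume_m3

BEAKER_VOLUME_M3 = 50e-6   # 50 cc = 50 * 10^-6 m^3
mass_kg = 0.030            # hypothetical: 30 g of powder fills the beaker
print(bulk_density(mass_kg, BEAKER_VOLUME_M3))  # 600.0 kg/m^3
```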
Quality of Dried Bael
Dried Bael samples were evaluated for quality (color, flavor, appearance, taste, and overall acceptance) by physical observation. After drying, the samples were stored in desiccators and re-examined after 5 months, when their bulk density and quality were determined again. Sensory evaluation of the samples was carried out in a standard format by a panel of 10 judges, and the average score was calculated.
Results and Discussion
The prepared dried flakes (Fig. A) of raw as well as pre-treated Bael were ground in a grinder, and both powders (Fig. B) passed through a sieve of size 0.125 mm.
Variation in Temperature and Relative Humidity under Unloaded Condition
The hourly temperatures and relative humidities of the drying air and ambient air were recorded from 10:00 to 17:00 and plotted in Fig. 5.1. The ambient air temperature increased from morning to noon, reaching a maximum of 38 °C at 13:00, and decreased in the afternoon to a minimum of 32 °C at 17:00. A similar trend was observed for the drying air temperature. The drying air inside the dryer reached a maximum of 64 °C at 13:00, against a corresponding ambient temperature of 38 °C, a difference of 26 °C above ambient. The relative humidities of the drying air and ambient air are plotted in Fig. 5.2. The ambient relative humidity was a maximum of 55% at 10:00 and a minimum of 48% at 13:00 and 15:00, while the drying air relative humidity was a maximum of 50% at 10:00 and a minimum of 21% at 13:00.
Variation in Temperature and Relative Humidity for Raw Bael
Fig. 5.3 shows that the ambient air temperature increased from morning to noon, with a similar trend for the drying air temperature. The maximum ambient temperature was 38 °C, with a corresponding drying air temperature of 61 °C, a difference of 23 °C above ambient; the minimum ambient temperature recorded was 29 °C. The drying air temperature was lowest in the morning at 10:00, with minima of 45 °C, 45 °C, and 56 °C on the first, second, and third days of drying, respectively, and maxima of 60 °C, 61 °C, and 61 °C on the respective days. Fig. 5.5 shows that the ambient relative humidity peaked at 62% at 11:00 on the second day of drying and fell to a minimum of 33% at 11:00 on the third day, while the drying air relative humidity peaked at 76% on the first day and fell to a minimum of 22% at 14:00 on the third day. The minimum drying air relative humidities on the first, second, and third days were 62%, 31%, and 22%, respectively, with maxima of 76%, 55%, and 32% on the respective days. The lowest relative humidity was observed between 12:00 and 15:00 in most batches of drying.
Variation in Temperature and Relative Humidity for Pretreated Bael
A trend of temperature and relative humidity variation similar to that observed for raw Bael was also observed for the pre-treated sample. The maximum ambient temperature was 38.5 °C, with a corresponding drying air temperature of 63 °C, a difference of 24.5 °C; the minimum ambient temperature recorded was 31 °C. The drying air temperature was lowest in the morning at 10:00, with minima of 44 °C and 45 °C on the first and second days of drying, respectively, and maxima of 62 °C and 63 °C on the respective days. The coefficient of determination (R²) indicates the agreement between the experimental data and the predicted values; equations with higher R² values show the least deviation of the experimental data from the predictions.
Quality of Dried Bael
Quality attributes of the dried Bael samples (color, moisture content, and bulk density) were determined and are listed in Table 1.
Sensory Evaluation of Prepared Drink
Tables 2 and 3 show that the scores of the pre-treated sample were slightly higher than those of the raw sample but slightly lower than those of the fresh drink. The overall acceptability of every sample was 8.5 or above, so it can be concluded that the overall acceptability of both the pre-treated and raw samples was very good.
Oncology nurses’ perceptions of obstacles and role at the end-of-life care: cross sectional survey
Background Major obstacles exist in the care of patients at the end of life: lack of time, poor or inadequate communication, and lack of knowledge in providing care. Three possible nursing roles in care decision-making were investigated: Information Broker, Supporter, and Advocate. The purpose of this study was to examine obstacles faced by oncology nurses in providing end-of-life (EOL) care and to examine roles of nurses in providing care. Methods A descriptive, cross-sectional, correlational design was applied. The study was conducted at two major University Hospitals of Oncology in Lithuania that have a combined total of 2365 beds. The study sample consisted of 239 oncology registered nurses. The data collection tool included a questionnaire about assessment of obstacles and supportive behaviors, nursing roles, and socio-demographic characteristics. Results The two items perceived by respondents as the most intense obstacles to providing EOL care were "The nurse's opinion on immediate patient care is not welcome, valued or discussed" and "Family has no access to psychological help after being informed about the patient's diagnosis". The majority of respondents self-assigned the role of Supporter. Conclusions Major obstacles in providing care included the nurse's opinion that immediate patient care was not valued, lack of nursing knowledge on how to treat the patient's grieving family, and physicians who avoided conversations with the patient and family members about diagnoses and prospects. In EOL care nurses most frequently acted as Supporters and less frequently as Advocates.
Background
According to the World Health Organization (WHO), worldwide 56.2 million people die every year. Of these, 7.6 million people die from cancer. In Europe, 3.2 million new cancer cases are diagnosed each year, and 1.7 million deaths are caused by cancer [1]. Patients spend a significant period of time in oncology hospitals where primarily nurses are responsible for end-of-life (EOL) care. Throughout history nurses have been responsible for ensuring the quality of life for patients, their families, and the community through all stages of life [2]. Nurses spend more time with patients than any other health care professionals [3,4]. Nurses provide regular care for patients at the EOL; they may identify behaviors that obstruct or improve EOL care for patients and families [5]. Furthermore, identifying the obstacles or supportive behaviors that have the most impact on patients and families, and then working to eliminate highly rated obstacles or increase support for positive behaviors, is critical to improving EOL care.
Research findings have indicated that the main obstacles in caring for patients at the EOL include the lack of time for professional care and staff shortages; challenges in communication with colleagues, patients, and/or patients' relatives; intensive treatment decisions made in spite of patients' wishes and needs, and a lack of knowledge about care for patients at the EOL [6]. Of 77 articles published in the last 10 years on obstacles to EOL care in intensive care units, palliative care units, and oncology hospitals only a few studies analyzed nursing roles and obstacles faced by nurses [2,7].
In the Medical Dictionary of Health Terms, the "end-of-life" concept refers to a final period (hours, days, weeks, or months) in a person's life in which it is medically obvious that death is imminent or a terminal moribund state cannot be prevented [8]. Similarly, Watson et al. defined "end-of-life care" as the delivery of care during the last few weeks of life and the time directly preceding death in emergency departments [9]. Care of patients at the EOL involves many aspects: pain and symptom management, dealing with culturally sensitive issues, support for patients and their families during the process of dying and experiencing loss, and ethical decision-making. A survey of relevant literature revealed there were obstacles preventing nurses from demonstrating their professional competencies in EOL care [10].
Lack of time remained one of the major obstacles. Nurses had too many tasks to take time to listen to patients' wishes concerning EOL decisions, to communicate with families; and to understand their values, expectations, and attitudes [11]. Even though nurses knew that their presence at a patient's bed would reassure and comfort the dying person, they had no time to do that because of responsibilities for other patients too [3]. Another obstacle identified in the literature was poor or inadequate communication. Anselm et al. found that physicians, residents, and nurses reported that the main obstacles in communication with patients at the EOL were the patients themselves, the health care system, health care providers, and the nature of this dialogue [12]. Findings from Heyland et al. demonstrated that for patients high-priority communication areas that needed improvement were related to feelings of peace, assessment and treatment of emotional problems, physician availability; and satisfaction that the physician took a personal interest in them, communicated clearly and consistently, and listened. Similar areas were identified by family members as high in priority [13].
Several studies identified the lack of knowledge about care for patients at the EOL as an obstacle [6,7,14]; the lack of skills as well as how to treat the grieving family was a major obstacle in providing quality care. Reinke et al. analyzed what nursing skills were important but under-utilized in EOL care. Nurses named as very important such skills as communication; symptom management competencies, especially those concerning anxiety and depression; and interactions with patientcentered care systems [10].
An analysis of obstacles in EOL care must take into account the role assumed by nurses because they are placed in a unique position; they may assist patients and family members by providing information, discussing and advocating for patients' wishes [15]. Adams et al. revealed that usually three possible nursing roles existed in EOL decision-making, namely: Information Broker, Supporter; and Advocate [16]. Still, there is little evidence on nurses' roles in EOL decision-making. As Information Brokers nurses played an important role in the process of ensuring smooth communication between family members and the team of health care professionals. Nurses provided information to physicians and family members and also acted as mediators. Liaschenko et al. further defined the nurse's role as the main point for exchange of information: obtaining information from many sources, synthesizing it, and forming a holistic assessment [17]. Another EOL nursing role was that of a Supporter. Nurses expanded their role in the EOL decision-making process by demonstrating empathy for patients, family members, and physicians; acting as supporters at the patient's EOL period and developing trusting relationships with family members [15,16]. Finally, the most researched nursing role in EOL care was that of an Advocate. This role was performed through speaking to the team of health care professionals on behalf of the patient or family as well as speaking to the family on behalf of the patient [5,7].
Thus, defined nursing roles and activity areas at the patient's EOL help ensure that the unique needs of patients at the EOL are met. The role of a nurse in providing EOL care is very complex, requiring personal psychological preparation, flexibility, and strength.
No such studies existed in either Lithuania or any other Eastern European country; therefore, this was the first study to investigate whether Eastern European countries faced the same challenges in EOL care as Western European countries and the United States (US), and what was the role of nurses.
Aim
The main purpose of this study was to examine obstacles faced by oncology nurses in providing EOL care. A secondary purpose was to examine roles of nurses in providing EOL care.
Research design
A descriptive, cross-sectional, correlational design was applied in this study.
Instruments
Pre-established obstacles and supportive behaviors on the questionnaire administered in this study were from an original validated survey, Questionnaire of Helps and Obstacles in Providing End-of-Life Care to Dying Patients and Their Families [14], which was modified after expert opinion and suggestions from oncology nurses in order to be most relevant to the oncology setting. A translated and validated Lithuanian version, produced following the technique of inverse translation, was used in this study. The questionnaire contained 67 items in three parts: assessment of obstacles and supportive behaviors, nursing roles, and socio-demographic characteristics. The first 40 items evaluated obstacles and supportive behaviors. Responses were given in a Likert scale format ranging from 1 = no help (or not an obstacle) to 5 = extremely intense help (or an extremely large obstacle).
Items 41 to 59 evaluated the nurse's opinion about their roles during EOL care. Nine items (41, 42, 43, 44, 45, 53, 56, 58, 59) described the role of a nurse as a Supporter; four items (48, 50, 51, 57) described the role of a nurse as an Information Broker; and six items (46, 47, 49, 52, 54, 55) evaluated the role of a nurse as an Advocate. According to a 5-point Likert scale, respondents had to evaluate different statements by marking 1 of 5 possible responses: "Strongly agree", "Agree", "Undecided", "Disagree", or "Strongly disagree". For purposes of statistical analysis, responses were collapsed into three categories: "Strongly agree" and "Agree", "Undecided", and "Strongly disagree" and "Disagree". Seven questions identified socio-demographic characteristics. After completing this study, the Cronbach's alpha established for the questionnaire used in this study was 0.86, meeting the requirement for acceptance. Similar questionnaires have been used in studies with Intensive Care Unit nurses in Spain and the US [6,18,19].
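A brief, hypothetical sketch of the response recoding described above, collapsing the 5-point Likert responses into the three categories used for analysis (the variable names are illustrative):

```python
import pandas as pd

# Map the 5-point Likert responses (items 41-59) onto the three
# analysis categories described in the text.
likert_to_category = {
    "Strongly agree": "Agree",
    "Agree": "Agree",
    "Undecided": "Undecided",
    "Disagree": "Disagree",
    "Strongly disagree": "Disagree",
}

responses = pd.Series(["Agree", "Undecided", "Strongly disagree", "Strongly agree"])
print(responses.map(likert_to_category).tolist())
# -> ['Agree', 'Undecided', 'Disagree', 'Agree']
```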
Sample and setting
Registered oncology nurses from two major Lithuanian hospitals of oncology participated in this study. The two hospitals have a combined total of 2365 beds. According to data of the Health Care Ministry, there are at present 22,300 registered nurses (RNs) in Lithuania, including approximately 350 oncology nurses. All 250 RNs working in oncology at the two largest University Hospitals of Oncology in Lithuania were invited to take part in this study. The response rate was 95.6%, with 239 participants, who indicated age, gender, employment, current work place, and length of current employment on the socio-demographic section of the questionnaire. The sample included 238 females and 1 male. The average age of nurses participating in the study was 44.09 ± 8.96 years, and the average length of service was 22.90 ± 9.66 years.
Data collection
One of the authors personally distributed the questionnaires to all 250 eligible nurses at the hospitals during work hours from 1 September to 1 November in 2015.
Data analysis
Survey data were processed and analyzed using the statistical software package SPSS for Windows 19.0 [20]. The level of significance selected for testing data points was established at p ≤ 0.05. Descriptive statistics were used to calculate the average values of the variables within a 95% confidence interval. The standard deviation was used to describe the spread of values. A statistical analysis of qualitative ordinal variables was carried out by means of the chi-square (χ 2 ) test, and degrees of freedom (df ) was calculated. The Spearman's correlation coefficient (r) was used to determine the degree of dependence between variables. A positive r value indicated a direct linear correlation, i.e., when the value of one variable increased, the value of the other variable also increased. A negative r value indicated a reverse correlation, i.e., when the value of one variable increased, the value of the other variable decreased. Because the number of male nurses was not representative, no analysis of results based on gender was done.
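As a hedged illustration of the tests named above (chi-square for categorical variables, Spearman correlation for ordinal ones), using synthetic data rather than the study's:

```python
import numpy as np
from scipy import stats

# Chi-square test of independence on a synthetic 2x3 contingency table
# of collapsed categorical responses.
table = np.array([[30, 45, 25],
                  [20, 50, 30]])
chi2, p, df, _ = stats.chi2_contingency(table)
print(f"chi2={chi2:.2f}, df={df}, p={p:.3f}")

# Spearman rank correlation between two synthetic ordinal variables.
x = [1, 2, 2, 3, 4, 5]
y = [2, 1, 3, 3, 5, 4]
r, p = stats.spearmanr(x, y)
print(f"Spearman r={r:.2f}, p={p:.3f}")  # positive r: direct correlation
```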
Ethical considerations
Research was carried out in accordance to ethical principles of scientific research, the Declaration of Helsinki, as well as the Code of Ethics of the Lithuanian Social Research Centre (LSRC). Hospital administrations were informed of the research goals, and their permission was obtained prior to starting the study. In addition, verbal informed consent was obtained from each participant of the study following an explanation of the research goals. Confidentiality of respondents was assured. Anonymity was maintained, as respondents were never asked for any identifiers such as their names, surnames, or addresses. Collected data were summarized and reported in the aggregate and used only for scientific purposes. The study was approved by the Bioethics Centre at the Lithuanian University of Health Sciences (BEC-KS (M)-566). Participants were informed about the purpose of the study, the data protection rights, and the right to refuse participation in the study or to terminate the participation without reasoning or penalty. Survey methodology was applied with minimal risk or harm to study participants.
Results
Obstacles to providing EOL care
The two items perceived by respondents as the most intense obstacles to providing EOL care were "The nurse's opinion on immediate patient care is not welcome, valued or discussed" and "Family has no access to psychological help after being informed about the patient's diagnosis".
The nurse's role in relation to obstacles in providing EOL care
In this study, a survey of 239 oncology nurses revealed that almost half (46%) of respondents self-assigned the role of a Supporter. Subscale values were distributed from 9 to 14 for Supporter, from 6 to 11 for Advocate, and from 4 to 8 for Information Broker. The subscale means, standard deviation, and confidence intervals are presented in Table 2.
An analysis of obstacles in accordance with nurses' roles did not reveal any statistically significant differences. However, three major obstacles were identified by more than 80% of all categorized respondents: Nurses have to deal with angry patient's family members; The patient's relatives having inadequate understanding of the situation interfere with the nurses' duties; and Usually there is no time for conversations with patients about their wishes concerning end-of-life issues/decisions. Perceptions were nearly evenly divided (approximately 45% each) among all categorized respondents on whether the item The nurses' opinion on immediate patient care is not welcome, valued or discussed was an obstacle or not (see Table 3).
Discussion
The main goal of nurses in solving problems at the EOL is to reduce the patient's suffering and manage pain and symptoms so that the patient's quality of life remains the same [2]. Beckstrand et al. confirmed that one of the major obstacles to autonomous decision-making was the nurse's opinion not being valued [11]. Data from this study confirm overall research findings that oncology nurses still lack professional autonomy, because they identified disregard and disrespect of their opinion on patient care as the most significant obstacle. The study data lead to the assumption that, in nurses' opinions, their function in EOL care was to assist physicians rather than to make autonomous decisions. This approach is very characteristic of the culture in Eastern European countries, where nursing science is still developing and paternalistic relations dominate the health care system [21]. Conflicting views and the feeling of being disrespected as nurses often cause problems. In this study, another dominant obstacle in the provision of EOL care was related to family members who had no access to psychological help after being informed about the patient's diagnosis. A descriptive study conducted in intensive care units in Spain also confirmed these findings and identified the lack of support for family members as one of the obstacles affecting nursing [18]. Another study by Beckstrand et al. also revealed that the inability of family members to obtain psychological help after being informed about the patient's diagnosis was considered an important obstacle in the provision of quality EOL care [14].
As demonstrated in the research literature, this is a universal problem in the US as well as Europe where there are no uniform systems to ensure quality health care services during critical moments of life, not only for patients but also for their family members [13,18,22]. Professional knowledge, skills, and coordination are necessary for problem management. Furthermore, the nursing literature contains limited information about patient care at the EOL, and nurses perceive this as an obstacle in the provision of care. A study on EOL care conducted by Hebert et al. demonstrated that 71% of nurses participating in the study lacked adequate knowledge on pain management, 62% of nurses lacked general knowledge on EOL problems, and 59% of nurses rated knowledge on management of other symptoms as inadequate [2]. Findings of our study correspond to findings of other researchers because the majority of oncology nurses participating in this study indicated a lack of nursing knowledge and training on how to treat the patient's grieving family. Other researchers had similar findings; the lack of nurses' knowledge was considered a major nursing obstacle [14,18].
Skill development in key aspects of care provision may improve the provision of EOL care for critical care patients and their families. Physicians who are evasive and avoid conversation with the patient and/or family members were identified as one of the most important obstacles in nursing. Sufficient information, communication, and relationships between the staff and the relatives may help to facilitate shared decision-making. Gjerberg et al. conducted a study on EOL care communications and shared decision-making in Norwegian nursing homes; most relatives expressed that they wanted a conversation about the patient's wants and preferences for EOL care, even when such conversations might be emotionally difficult [22]. This is also confirmed by a qualitative study carried out by the US researchers. Oncology nurses participating in the study identified subject areas they found difficult to discuss with the EOL patients. These include dialectic tensions, specific EOL related topics, the lack of skills in providing empathy care, characteristics of patients and their family members, and institutional obstacles [23].
Findings by Beckstrand et al. and Iglesias et al. correspond to this study's results and identify physicians avoiding conversations with the patient and family members as one of the most important obstacles in the provision of EOL care [14,18].
According to study results, an analysis of obstacles in accordance to nurses' roles identified the following three major obstacles: Nurses have to deal with angry patient's family members; The patient's relatives having inadequate understanding of the situation interfere with the nurses' duties; and Usually there is no time for conversations with patients about their wishes concerning the end of life decisions. These obstacles were described as very important in other studies conducted in both oncology and intensive care units [11,24,25]. Similar results were found in a study by Kirchhoff et al. One of the major obstacles in the provision of nursing care was related to issues with patients' families that made care at the EOL more difficult, such as the family not fully understanding the meaning of life support and angry family members. A study by Beckstrand et al. had similar findings. One dominant obstacle was patients' family members' not understanding what the term "lifesaving measures" really meant [14].
Most frequently, nurses acted as Advocates on behalf of patients and family members by informing physicians about patients' wishes and speaking with physicians for the family. Results of more recent studies have shown that nurses were more likely to employ direct methods, i.e., to speak with physicians and family members about the patient's prospects and involvement in decisionmaking [26]. Bach et al. assessed the nurses' role in EOL decision-making in a critical care unit. Research data revealed that nurses usually assumed the role of a Supporter -"supporting to journey" -being there, a voice to speak up, enabling coming to terms, and helping to let go [27]. Swedish researchers also analyzed which nursing role was the most frequent in an intensive care unit. According to findings of that study, nurses most frequently acted as Supporters: uncertainty about who was the close relative, getting near, keeping hope alive, and being honest, and in certain situations being Advocates [28].
Summarizing all research findings and comparing them to this study's results, it may be argued that in EOL care nurses most frequently act as Supporters, less frequently as Advocates and even less frequently as Information Brokers. Patients and family members need assistance and support in making difficult EOL decisions, and the nurse is the person who spends the most time with the patient and family members. For this reason, support is sought among nurses as they are empathic and represent the interest of both the patient and his family. Nursing professionals are in key positions to support EOL decisions and to advocate for patients and families across all health care settings. Support and advocacy have been identified as the common thread of quality EOL nursing care [5].
Relevance to clinical practice and education
Oncology nurses are professional and have sufficient skills and experience to play an important role in solving patients' problems in the EOL period. Recommendations to hospital administration include providing support to oncology nurses, including strategies that would help improve the authority of the nursing profession. They should also create a physical environment in which nurses are able to talk with families about EOL issues. In addition, the information regarding identified obstacles and nurses' roles in providing EOL care can be used to facilitate discussion and change within oncology interdisciplinary teams and to improve EOL care for patients with cancer and their families. Therefore, organizing interprofessional team conferences to discuss cancer patient cases and conducting patient satisfaction surveys to move toward patient-centered care would be useful.
Moreover, nursing education programs should devote more study time to the death and dying process, and the topic of death should not be considered taboo. Including more credits on interprofessional communication in the training programs would enable nurses to discuss EOL issues with patients, their family members, and colleagues.
Conclusion
Having analyzed the study results, it is possible to conclude that respondents identified the following as major obstacles in providing EOL care: the nurse's opinion on immediate patient care not being valued, the lack of nursing knowledge on how to treat the patient's grieving family, and physicians avoiding conversations with the patient and family members about diagnosis and prospects. In EOL care nurses most frequently act as Supporters and less frequently as Advocates. Furthermore, three major obstacles were identified across all nursing roles: dealing with angry patient's family members, the lack of time for conversations with patients about their wishes, and the patient's relatives' inadequate understanding of the situation interfering with the nurses' duties.
Abbreviations
EOL: End of life
Use of Multimodal Imaging and Clinical Biomarkers in Presymptomatic Carriers of C9orf72 Repeat Expansion
Key Points Question Can metabolic brain changes be detected in presymptomatic individuals who are carriers of a hexanucleotide repeat expansion in the C9orf72 gene (preSxC9) using time-of-flight fluorine 18–labeled fluorodeoxyglucose positron emission tomographic imaging and magnetic resonance imaging, and what is the association between the mutation and clinical and fluid biomarkers of amyotrophic lateral sclerosis and frontotemporal dementia? Findings In a case-control study including 17 preSxC9 participants and 25 healthy controls, fluorine 18–labeled fluorodeoxyglucose positron emission tomographic imaging noted significant clusters of relative hypometabolism in frontotemporal regions, the insular cortices, basal ganglia, and thalami in the preSxC9 participants. Use of this strategy allowed detection of changes at an individual level. Meaning Glucose metabolic changes appear to occur early in the sequence of events leading to manifest amyotrophic lateral sclerosis and frontotemporal dementia. Fluorine 18–labeled fluorodeoxyglucose positron emission tomographic imaging may provide a sensitive biomarker of a presymptomatic phase of disease.
Amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD) are related neurodegenerative disorders. Amyotrophic lateral sclerosis primarily affects the motor system with upper and lower motor neuron involvement, but extramotor manifestations may occur. [1][2][3] Frontotemporal dementia is the second most common form of presenile dementia, caused by degeneration of frontal and anterior temporal cortices. It affects brain regions implicated in executive control, language, behavior, and personality. 4 The disease course of both ALS and FTD is progressive and invariably fatal. The molecular link between ALS and FTD has been confirmed by the discovery of hexanucleotide repeat expansions in a noncoding region of the chromosome 9 open reading frame 72 gene (C9orf72, OMIM 614260), the most common known monogenetic cause of both ALS and FTD. [5][6][7] In this era of antisense oligonucleotides and other interventional gene therapies, research in the presymptomatic stage may contribute to the development of novel treatment strategies 8 and detection of individuals at risk of developing ALS and/or FTD, and ultimately lay the foundation for future clinical studies to slow or even prevent clinical disease manifestation. 9 Presymptomatic carriers of disease-causing mutations permit in vivo research of the brain at a unique time to gain a better understanding of the early mechanisms that precede the onset of symptoms.
Over the past 10 years, study findings have suggested that several neurodegenerative diseases are preceded by an intermediate presymptomatic phase. 10,11 Research in presymptomatic carriers of a hexanucleotide repeat expansion in the C9orf72 gene (preSxC9) reported the occurrence of cognitive and behavioral changes, neuropsychiatric symptoms, and degeneration of gray matter (GM) and white matter (WM). [12][13][14][15][16][17][18][19][20][21] Neurofilaments (Nfs), such as neurofilament light chain (NfL) and phosphorylated neurofilament heavy chain (pNfH), have been studied extensively in ALS and FTD. Elevated levels of NfL and pNfH, both markers of neuronal injury and neurodegeneration, demonstrated high diagnostic performance. 22 Previous research has shown that NfL is increased in symptomatic, but not presymptomatic, carriers of the C9orf72 expansion at the group level. 23 Recent studies suggested that a slow increase in Nf levels can be observed in presymptomatic individuals who carry the mutation as far as 3.5 years before diagnosable illness, 24-26 while another study described an association between higher NfL levels and GM atrophy. 27 It has often been suggested that assessing glucose metabolism using positron emission tomographic (PET) imaging with fluorine 18-labeled fluorodeoxyglucose ([18F]FDG) is a useful diagnostic marker in the earliest stage of ALS and FTD. [28][29][30][31] Moreover, [18F]FDG PET imaging serves as a relevant biomarker for disease staging, cognitive impairment, and survival prediction. 29,32 However, little is known about the glucose metabolic changes that may occur before clinical disease manifestation in preSxC9. The goal of our study was to evaluate changes in glucose metabolism that occur before diagnosable illness, ie, the presymptomatic disease stage, 33 in preSxC9. In addition, we wanted to explore the association between cerebral glucose metabolism and other known indicators of disease, such as Nf levels in cerebrospinal fluid (CSF), neuropsychological capacities, and clinical neurologic examination.
Participants
A total of 29 healthy individuals serving as controls were included in this study, of whom 25 were considered in the analysis. None of the volunteers had a first-degree relative with dementia or a history of neurologic illness, psychiatric illness, or substance use. Participants with brain lesions noted on structural magnetic resonance imaging (MRI) were excluded. Demographic characteristics are detailed in Table 1.
The study was conducted from November 30, 2015, to December 11, 2018, at the neuromuscular reference center of the University Hospitals Leuven, Leuven, Belgium. All participants provided written informed consent, and this study was approved by the ethics committee of the University Hospitals Leuven, Leuven, Belgium. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline for case-control studies.
We compared the Nf levels in the preSxC9 group with those of a control group (n = 10; mean [SD] age, 49 [14] years) previously reported. 34 A consecutive series of 17 preSxC9 participants was included in this study. A pathogenic expansion of C9orf72 was considered as having more than 30 repeats. All preSxC9 participants were native Flemish speakers, and their educational levels were between 3 ([upper] secondary education) and 6 (second stage of tertiary education) on the International Standard Classification of Education scale. 35 None of the preSxC9 participants met the clinical diagnostic criteria for ALS or FTD. 36,37 Exclusion criteria were the presence of clinically apparent ALS or FTD, severe and chronic illness, substance use, and traumatic brain injury.
All participants with preSxC9 were evaluated with the Dutch version of the Edinburgh Cognitive and Behavioral ALS Screen (ECAS) by an experienced neuropsychologist (J.D.V.). 35 The ECAS is a brief, multidomain screening battery that assesses cognitive functions typically affected in patients with ALS (language, verbal fluency, and executive functioning), as well as cognitive functions not typically affected in patients with ALS (memory, visuospatial functioning). 38 Dutch normative data were used, with the fifth percentile as a threshold for abnormality. 35 Results of the ECAS are presented in Table 2.
All preSxC9 participants underwent a standard clinical neurologic examination by a neurologist experienced in neuromuscular disorders (P.V.D.).
Sixteen of 17 preSxC9 participants agreed to undergo a lumbar puncture according to a standardized protocol at the University Hospitals Leuven to determine the Nf levels in the CSF within 48 hours following the [18F]FDG PET-MRI scan. Neurofilament levels in CSF were measured using commercially available kits for NfL (UD51001, with an intraassay variability of 1.6% and interassay variability of 8.7%; UmanDiagnostics AB) and pNfH (with an intraassay variability of 5.2% and interassay variability of 8.7%; Euroimmun AG). Assessment of Nf levels was done using predefined diagnostic cutoff values for NfL (1227 pg/mL) 34 and pNfH (750 pg/mL). 39 All participants underwent simultaneous [18F]FDG PET and MR imaging. The [18F]FDG PET images were acquired in list mode for 25 minutes (30 minutes postinjection). The PET images were reconstructed with ordered subset maximum likelihood expectation maximization with 4 iterations and 28 subsets, followed by postfiltering with 4.5-mm gaussian postsmoothing in the transaxial direction and standard smoothing along the Z direction. Images had an initial voxel size of 1.56 × 1.56 × 2.78 mm³. A vendor-provided, atlas-based method was used for attenuation correction. 40 Simultaneous to the PET acquisition, a 3-dimensional volumetric sagittal T1-weighted image (3D BRAVO, repetition time/echo time [TR/TE] = 8.5/3.2 milliseconds, 0.6 × 1 × 1 mm³ voxel size, dimensions: 312 × 256 × 256 voxels) and a T2-weighted fluid-attenuated inversion recovery image (3D CUBE, TR/TE = 8500/130 milliseconds, 0.7 × 1 × 1 mm³ voxel size, dimensions: 268 × 256 × 256 voxels) were acquired.
Statistical Analysis
Statistical analyses of clinical data were performed using SPSS software, version 25 (IBM Software) and GraphPad Prism, version 8.0 (GraphPad Software). Demographic characteristics and clinical test results were compared between groups using a χ² test for dichotomous and categorical variables or a Mann-Whitney test for numeric variables. All hypothesis tests were 1-sided, and statistical significance was set at P < .05.
Table 2 notes: Abbreviations: ALS, amyotrophic lateral sclerosis; ECAS, Edinburgh Cognitive and Behavioral ALS Screen; preSxC9, presymptomatic carrier of a hexanucleotide repeat expansion in the C9orf72 gene. The ECAS total score ranges from 0 to 136. Functions typically affected in ALS: total score 0 to 100 (language, 0 to 28; verbal fluency, 0 to 20; executive functions, 0 to 42). Cognitive functions not typically affected in ALS: total score 0 to 36 (memory, 0 to 24; visuospatial functions, 0 to 12). Lower scores indicate more severe cognitive dysfunction.
Image Analysis
ANTS, version 2.1.0, and SPM, version 12 (Wellcome Trust Centre for Neuroimaging) software, combined with in-house scripts implemented in Matlab (R2018b; The MathWorks Inc), were used to process the T1-weighted and fluid-attenuated inversion recovery images. After visual inspection of the raw T1 images, the T1 images were processed in native space using the antsCorticalThickness pipeline in ANTS, 39 which performs a brain extraction and segments the image of the individual's brain by means of 5 specific tissue priors: CSF, cortical GM, WM, subcortical GM, and the brainstem. After visual inspection of the segmentations, 3 control scans were excluded because of poor image quality and subsequent suboptimal brain data extraction and segmentation. Gray matter tissue probability maps were warped to the Nathan Kline Institute template (Rockland Sample, dimensions = 182 × 218 × 182 voxels), which was warped to Montreal Neurological Institute space (voxel size = 1 × 1 × 1 mm³, matrix = 182 × 218 × 182) and modulated with the jacobian warp parameters, all using nonlinear symmetric diffeomorphic registration. All [18F]FDG PET images were first quality checked for complete acquisition and motion, then dynamically reconstructed and corrected for potential head motion. The frames, which were reconstructed over a series of 5 minutes, were then averaged. After visual inspection, PET images were coregistered to their respective native MRI and spatially normalized to Montreal Neurological Institute space using ANTS, applying the normalization parameters described above. After visual inspection, 1 control scan was considered an outlier (>3 SD from the mean) and subsequently excluded. The [18F]FDG PET images were corrected for partial volume effects with the Müller-Gartner method (PMOD, version 3.9), which considers both GM spill-out and WM spill-in based on the MRI-based GM and WM tissue probability maps. Partial volume correction (PVC) was done using a point-spread function with a 5.5-mm isotropic full width at half maximum to mimic the PET image resolution, while a regression approach was applied for all voxels with a WM probability greater than 0.95 to determine the [18F]FDG uptake in WM. The GM probability threshold was set at 0.3 to correct uptake values of GM voxels for WM activity. Partial volume effect-corrected [18F]FDG PET images were spatially normalized to Montreal Neurological Institute space using ANTS, applying the normalization parameters described above.
For the group comparison, both MR and [18F]FDG PET images were smoothed with an isotropic gaussian smoothing kernel of 8-mm full width at half maximum to blur individual variations. Owing to a difference in ambient conditions (ie, visual input) following tracer injection, the occipital lobe was excluded from all neuroimaging analyses. All PET images were proportionally normalized to the average activity in a GM mask generated from the voxel-based morphometry comparative analysis, applying an absolute threshold of 0.1 (excluding the occipital lobe).
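A minimal sketch of the proportional scaling step described above, with hypothetical array names; the masking and thresholding details are assumptions:

```python
import numpy as np

def proportional_normalize(pet_img: np.ndarray, gm_prob: np.ndarray,
                           threshold: float = 0.1) -> np.ndarray:
    """Divide a PET image by its mean uptake within a gray matter mask.

    gm_prob: gray matter probability map; voxels above `threshold` form
    the mask (the occipital lobe is assumed to be excluded upstream).
    """
    mask = gm_prob > threshold
    return pet_img / pet_img[mask].mean()
```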
The spatially normalized and smoothed images were then entered into a generalized linear model. All hypothesis tests were 1-sided, with a height threshold of P < .001 and a cluster-level familywise error (FWE)-corrected threshold of P < .05, applying a minimum extent threshold of 150 voxels for the [18F]FDG PET analyses. Age was included as a nuisance covariate in the [18F]FDG PET analyses as well as in the voxel-based morphometry analyses, where total intracranial volume was also considered a nuisance variable. The Talairach atlas 41 was used to define Brodmann areas and the Harvard-Oxford Atlas [42][43][44][45] was used for the anatomic localization of significant clusters.
A volume-of-interest-based analysis after region-based voxelwise correction for GM atrophy was conducted using the Hammers N30R83 maximum probability atlas to confirm our findings at the voxel level in PMOD, version 3.9 (PMOD Inc) and SPSS, version 25 (IBM Software). We applied the Benjamini-Hochberg method to correct for multiple testing.
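A hedged sketch of the Benjamini-Hochberg step-up procedure used above to correct the volume-of-interest analysis for multiple testing (the function name and p values are illustrative):

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Reject all hypotheses up to the largest rank k with p_(k) <= (k/m) * q."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = (np.arange(1, m + 1) / m) * q
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()   # largest rank meeting the criterion
        reject[order[: k + 1]] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.04, 0.20, 0.70]))
# -> [ True  True False False False]
```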
W-Score Maps
W-score maps ([raw value for each patient − value expected in the control group for the patient's age] / SD of the residuals in the control group) were computed for preSxC9 using the control group as a reference to quantify the degree of [18F]FDG PET imaging abnormality at the voxel level. W-score maps are analogous to z-score maps, adjusted for covariates of interest. For our study, we considered age as a covariate of interest. 46 The threshold for abnormality was defined as an absolute W-score greater than or equal to 1.96, which corresponds to 95% of the area under the curve in a normal distribution. Hypometabolic maps, binarized at a W-score less than or equal to −1.96, and hypermetabolic maps, binarized at a W-score greater than or equal to 1.96, were summed across participants to generate W-score frequency maps to illustrate the fraction of preSxC9 participants surpassing the threshold for abnormality at the voxel level.
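A minimal per-voxel sketch of the W-score and frequency-map computations defined above; the linear age model and all variable names are illustrative assumptions:

```python
import numpy as np

def w_scores(raw, age, control_raw, control_age):
    """W = (observed - value expected for the participant's age) / SD of control residuals."""
    slope, intercept = np.polyfit(control_age, control_raw, 1)  # age model fit in controls
    residual_sd = np.std(control_raw - (slope * control_age + intercept), ddof=1)
    return (raw - (slope * age + intercept)) / residual_sd

# Frequency maps: binarize each participant's W map at the abnormality
# threshold (|W| >= 1.96) and sum across participants per voxel.
def hypometabolic_frequency(w_maps: np.ndarray) -> np.ndarray:
    # w_maps: shape (n_participants, n_voxels)
    return (w_maps <= -1.96).sum(axis=0)
```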
Results
A total of 46 participants (17 preSxC9 and 29 healthy controls) were included in this study. After data inspection, all preSxC9 participants (mean [SD] age, 51 [9] years) and 25 of the 29 healthy controls were retained for the analyses.
Neuroimaging
Relative glucose metabolism was compared between the preSxC9 and control cohorts. This analysis revealed significant clusters of relative hypometabolism in the preSxC9 group compared with the control group (range, 27%-36%) situated in the basal ganglia, thalamus, and frontotemporal and insular cortices. All analyses were thresholded at a height of P < .001 and an FWE-corrected level of P < .05 at the cluster level (Figure 1; eTable 1 in the Supplement). At the group level, we observed no significant clusters of relative hypermetabolism. The comparative voxel-based volumetric analysis (voxel-based morphometry) revealed significant clusters of reduced GM volume (range, 19%-25%) located in the frontotemporal regions, including the peri-Rolandic region, insular cortices, basal ganglia, and thalami. All analyses were thresholded at a height of P < .001 and an FWE-corrected level of P < .05 at the cluster level (Figure 1; eTable 2 in the Supplement). A voxel-based regression analysis of the association between age and GM volume failed to show a significant difference in the slopes of the preSxC9 and control participants. The [18F]FDG PET imaging data were also analyzed with partial volume effect correction to account for GM atrophy. Significant clusters of relative hypometabolism (range, 16%-22%) persisted in frontotemporal regions, including the insular cortices, as well as the basal ganglia and thalami. All analyses were thresholded at a height of P < .001 and an FWE-corrected level of P < .05 at the cluster level (Figure 1; eTable 3 in the Supplement; Video). A voxel-based regression analysis of the association between age and cerebral metabolism failed to show a significant difference in the slopes of the preSxC9 and healthy control participants.
These findings were supported in a volume-of-interest-based analysis applying region-based voxelwise correction for GM atrophy (eFigure 1 and eTable 5 in the Supplement). Significant clusters of relative hypermetabolism (range, 6%-7%) emerged in the peri-Rolandic region, the superior frontal gyrus, and the precuneus cortex following PVC. All analyses were thresholded at a height of P < .001 and an FWE-corrected level of P < .05 at the cluster level (Figure 1; eTable 4 in the Supplement; Video). To confirm the presence of relative hypermetabolic clusters in preSxC9 participants, the analysis was repeated using standardized uptake value ratio images; cortical uptake was scaled to the average uptake in cerebellar structures not reported as being affected by a mutation in the C9orf72 gene, supporting our findings (eResults, eFigure 4, eTable 6 in the Supplement). 47
Figure 1. Reduced glucose metabolism and gray matter volume are depicted in red-yellow, and increased glucose metabolism in blue-white. Data were analyzed at a height threshold of P < .001 and were cluster-level corrected for familywise error at P < .05. A, Projections of areas with relative hypometabolism in preSxC9 participants vs healthy controls. B, Volume decline in preSxC9 participants vs healthy controls. C, Relative hypometabolism in preSxC9 participants and healthy controls following voxel-based PVC. D, Relative hypermetabolism in preSxC9 and healthy controls following voxel-based PVC. [18F]FDG indicates fluorine 18-labeled fluorodeoxyglucose; PET, positron emission tomography; PVC, partial volume correction; and t, t value. Section numbers refer to Montreal Neurological Institute coordinates.
Clinical Parameters
Neurologic examination revealed mild signs of upper motor neuron (UMN) abnormalities in 12 of the 17 preSxC9 participants (71%). As the presence of a Hoffman sign or ankle clonus is not necessarily abnormal in young people, we only considered the presence of a jaw jerk, a Babinski sign, hyperreflexia, and increased muscle tone for further analyses. This was apparent overall in 10 preSxC9 participants (59%). In 5 participants (29%) increased muscle tone was observed in the lower extremities; 1 (6%) also presented with increased muscle tone in upper extremities. Five participants (29%) of the preSxC9 cohort presented with abnormal neuropsychological performance. Executive functioning was affected in 3 preSxC9 participants (18%), 1 participant presented with isolated abnormal performance on verbal fluency, and another individual showed isolated impairment on the memory subdomain ( Table 2).
Examination of the CSF showed median NfL levels of 652 pg/mL (range, 276-1510) and pNfH levels of 195 pg/mL (range, 123-490). The NfL and pNfH levels did not differ significantly between the preSxC9 participants and healthy controls at the group level. However, elevated NfL levels, ie, surpassing the diagnostic cutoff, were observed at the individual level in 3 of the 16 preSxC9 participants (19%) who underwent lumbar puncture (Figure 2). All 3 of these individuals displayed signs of UMN involvement on clinical neurologic examination, and 1 of the 3 displayed an abnormal score on the memory domain of the ECAS. The pNfH levels in the CSF remained within the reference range in all preSxC9 participants (Figure 2).
We were unable to identify a significant association between relative tracer uptake and age, UMN involvement, ECAS performance, or Nf levels in CSF at the group level in preSxC9 participants using regression analyses at a height-corrected threshold of P < .001 and with a cluster-level FWE-corrected threshold of P < .05. Similarly, no significant association was identified between GM volume and age, UMN involvement, ECAS performance, or Nf levels in CSF, applying the same threshold for significance.
We generated voxel-level W-score maps to evaluate how many preSxC9 participants presented with suprathreshold voxels in key regions. A frequency image of the W-score maps, generated from the [18F]FDG PET images without correcting for partial volume effect, showed that 14 preSxC9 participants (82%) had significantly reduced tracer uptake in the insular cortices, central opercular cortex, and thalami (eFigure 2A in the Supplement). In addition, a frequency image of the W-score maps, generated from the [18F]FDG PET images corrected for partial volume effect, showed that up to 71% of preSxC9 participants had significantly increased tracer uptake, surpassing the predefined threshold of an absolute W-score of 1.96, which corresponds to the 2.5th percentile on both sides, in the peri-Rolandic region (eFigure 2B in the Supplement). A mean image of the W-score maps in the preSxC9 cohort reflected the consistency of the changes observed at the group level (eFigure 3 in the Supplement). Individual W-score maps of relative hypometabolism supported the pattern observed at the group level in up to 82% of preSxC9 participants (Figure 3). A W-score frequency map of GM volume reduction revealed suprathreshold voxels in the thalami and central opercular cortex in 11 preSxC9 participants (65%) (eFigure 2C in the Supplement). In addition, using the W-score maps, we were unable to identify a clear association between the extent of abnormality and UMN involvement, ECAS performance, and Nf levels in CSF.
Discussion
A voxelwise comparison of glucose metabolic patterns revealed clusters of relative glucose hypometabolism situated in frontotemporal and insular cortices, the basal ganglia, and thalami. Moreover, GM volume reductions revealed a widespread neuroanatomic signature in the frontotemporal and insular cortices, basal ganglia, and thalami. The observed volumetric differences are consistent with structural changes reported in previous studies of preSxC9. 13 Even though regional hypometabolism in subcortical and extramotor regions may be explained in part by neuronal loss, the functional disruption identified by [18F]FDG PET imaging was supported, as clusters of reduced glucose metabolism in the aforementioned regions withstood PVC, and thus correction for GM atrophy.
Significant clusters of relative hypermetabolism were observed in the precentral and superior frontal gyrus and the precuneus cortex following PVC. This finding may be interpreted as compensatory neuronal activity or a possible abnormal function of cortico-striatal-thalamic-cortical circuits resulting in UMN abnormalities. In addition, we can speculate that the observed clusters of relative hypermetabolism reflect neuroinflammation associated with activated astrocytes or microglia. 29 The observed structural and metabolic changes in the preSxC9 participants suggest that brain regions corresponding to cognitive and motor processes are impaired in the presymptomatic stage of ALS and FTD. These findings are in line with previous [18F]FDG PET imaging studies in symptomatic carriers of a C9orf72 hexanucleotide repeat expansion, demonstrating relative hypometabolism in frontotemporal and subcortical regions. 48,49 Moreover, our findings support the role of the thalamus in C9-related disease. 49,50 The role of the cerebellum in C9-related disease remains unclear. A recent voxel-based morphometry study from the multicenter Genetic Frontotemporal Dementia Initiative consortium described GM volume reductions in the superior-posterior cerebellum. 14 We, however, did not observe significant GM volume reductions in the cerebellum, supporting the findings of another study. 15 To our knowledge, there are no consistent findings on volumetric changes in the cerebellum of preSxC9 individuals.
For this study, W-score maps were generated to observe individual effects, as individual differences may have been washed out in a group-level voxel-based analysis. W-score frequency maps reflected the consistency of the pattern observed at the group level in individual W-score maps of a number of preSxC9 participants. These maps demonstrated that the highest frequencies (up to 82%) of reduced glucose metabolic uptake, below the threshold for abnormality, were found in the insular cortices, central opercular cortex, and thalami of preSxC9 participants. The highest frequencies (up to 71%) of increased glucose metabolism, above the threshold for abnormality, following PVC were found in the peri-Rolandic region and superior frontal gyrus of the preSxC9 participants. Given that only part of the preSxC9 cohort had cognitive, pyramidal, or Nf changes, we suggest that the metabolic changes may occur early in the sequence of events leading to manifest ALS and FTD.
Figure 3. Thresholds for W-score maps were set at a W score less than or equal to −1.96 and sorted by age, displayed on axial (z = 10), coronal (y = −14), and sagittal (x = −6) sections. Section numbers refer to Montreal Neurological Institute coordinates. Images from participants with elevated neurofilament levels are within red boxes. w indicates W score.
Because the age at disease onset is variable in C9orf72 repeat expansion carriers, the preSxC9 cohort in the present study most likely consists of a mixture of individuals who are relatively close to or far from disease onset. In addition, a hexanucleotide repeat expansion is known to be associated with a clinically heterogeneous disease spectrum. 2 The conceivably high degree of clinical variability within the preSxC9 group could potentially blur correlations with clinical parameters. As we did not observe an association between deviation from the norm and increasing age, longitudinal studies are needed to establish how the patterns of hypometabolism evolve and their predictive value for clinical disease onset.
We did not identify significant differences in CSF Nf levels at the group level between healthy controls and preSxC9 participants. Other studies were also unable to identify significant differences in this marker of axonal injury between healthy controls and individuals who are preSxC9. 24,51 At the individual level, NfL appears to be more sensitive than pNfH in the phase preceding diagnosable illness: no preSxC9 participant displayed a pathologic increase in pNfH, but 3 preSxC9 participants displayed abnormally high NfL levels.
We did not identify a significant association between Nf levels, ECAS performance, clinical neurologic screening, and findings on neuroimaging. This finding may, at least in part, be explained by sample size, as few preSxC9 participants presented with cognitive changes, elevated Nf levels, and UMN signs. Changes in cerebral metabolism may also precede clinical signs, which is in line with a recent study describing functional reorganization and network resilience in individuals who are preSxC9. 18 Longitudinal, multimodal PET and MRI studies are needed to gain a better understanding of the sequence of events that precede diagnosable illness. In addition, no significant association was observed between GM volume and increasing age in the preSxC9 cohort. We could therefore speculate that the observed volumetric differences between the preSxC9 and healthy control participants may represent not only GM atrophy but may, at least in part, indicate neurodevelopmental differences. Adolescent neuroimaging studies could assist in gaining more insight into the natural history of brain development in preSxC9.
Limitations
This study has limitations. First, the sample size was relatively small, which may have limited the power of the group comparisons for signs of upper motor neuron involvement and of the association between neuroimaging data and clinical indicators of disease. Second, the absence of converters in our cohort prevented us from exploring the predictive value of these markers for diagnosis. Third, we did not perform neurologic examinations in the control cohort. We also did not use cognitive screening with the ECAS in the healthy controls; however, we administered a Mini-Mental State Examination in all participants, which did not reveal any cognitive abnormalities. Fourth, the difference in ambient conditions (visual input) between the preSxC9 and control cohorts necessitated masking the occipital lobe in our comparative analyses between preSxC9 and healthy controls, preventing us from performing a whole-brain, voxel-based comparative analysis. However, to ensure the robustness of the patterns of relative hypometabolism, we performed a second whole-brain analysis in the preSxC9 group. This second analysis revealed the same clusters that we observed previously, as well as a cluster of relative hypermetabolism in the occipital lobe, supporting our findings.
Conclusions
This study showed regional glucose metabolic alterations in presymptomatic carriers of a C9orf72 hexanucleotide repeat expansion before diagnosable illness that remained after correction for volume differences. Within the preSxC9 cohort, on W-score maps of [18F]FDG PET images, up to 82% (n = 14) presented with voxels surpassing the threshold of abnormality in key regions; Nf levels were elevated in only 19% (n = 3); deviation from the norm according to ECAS performance was observed in 29% (n = 5); 59% (n = 10) presented with subtle UMN signs; and abnormalities were noted on W-score maps of MR images in 65% (n = 11). The individual W-score images suggest that [18F]FDG PET might be able to detect neuronal injury at an earlier stage than motor or cognitive changes or Nf levels.
To our knowledge, this is the first study that closely examines cerebral glucose metabolism in preSxC9 carriers and its association with GM volume and indicators of disease. Our findings suggest that [18F]FDG PET imaging could provide a sensitive biomarker of a presymptomatic phase of disease, which may be of relevance for future therapeutic strategies. Multimodal and longitudinal imaging studies with an augmented sample size are needed to gain more insight into the sequence of events in the presymptomatic stage of C9orf72-related disease.
Single Aggressive Interactions Increase Urinary Glucocorticoid Levels in Wild Male Chimpanzees
A basic premise in behavioural ecology is the cost-benefit arithmetic, which determines both behavioural decisions and evolutionary processes. Aggressive interactions can be costly on an energetic level, demanding increased energy or causing injuries, and on a psychological level, in the form of increased anxiety and damaged relationships between opponents. Here we used urinary glucocorticoid (uGC) levels to assess the costs of aggression in wild chimpanzees of Budongo Forest, Uganda. We collected 169 urine samples from nine adult male chimpanzees following 14 aggressive interactions (test condition) and 10 resting events (control condition). Subjects showed significantly higher uGC levels after single aggressive interactions compared to control conditions, likely for aggressors as well as victims. Higher ranking males had greater increases of uGC levels after aggression than lower ranking males. In contrast, uGC levels showed no significant change in relation to aggression length or intensity, indicating that psychological factors might have played a larger role than mere energetic expenditure. We concluded that aggressive behaviour is costly for both aggressors and victims and that costs seem poorly explained by energetic demands of the interaction. Our findings are relevant for studies of post-conflict interactions, since we provide evidence that both aggressors and victims experience a stress response to conflict.
Introduction
A central principle in behavioural ecology is that the function and evolution of behaviour are analysed in terms of economic logic, as costs and benefits, which determine an individual's fitness [1,2]. Cost-benefit analyses are regularly used to predict how animals should behave to maximize their net fitness gains, including during fights [3,4]. Despite its importance, it has been surprisingly difficult to quantify the true costs and benefits of a behaviour. One approach is to look at physiological variables, such as hormones secreted in response to stressful events. Although the stress response is generally adaptive, high glucocorticoid (GC) levels are thought to be costly for animals, so measuring their prevalence in wild populations is of importance [5,6]. GC release, whether it is part of the stress response to energetic stressors [7,8] or to psychological stressors [9], is costly due to the subsequent release of energy reserves otherwise needed to support reproduction and growth [10]. Additionally, a range of costly and negative health effects, including impaired immune and cognitive functions, have been linked to the chronic release of GCs [11,12]. Overall, this suggests that GC levels are a reliable proxy for costs, whether the release of GC is caused by energy demands [13][14] or by psychological effects, such as anxiety, uncertainty or relationship damage [15][16][17][18][19] (for review: [6]).
The relationship between high GC levels and aggressive behaviour has been established in a wide range of taxa (fish: Oncorhynchus mykiss [7]; reptiles: Urosaurus ornatus [20]; birds: Parus major [21]; mammals: Canis lupus [16], Cavia porcellus [22], Helogale parvula [16]; primates: Macaca assamensis [23], M. sylvanus [24], Pan troglodytes [14,17], Homo sapiens [8,25]). In most cases, these studies correlated aggression rates and GC levels over long sampling periods, making statements about causality difficult. Exceptions include several studies that have shown an increase of GC levels in plasma following prolonged aggressive interactions (fights over dominance in fish: [7]; Judo or wrestling matches in humans: [8,25]). One way to determine causality between aggressive behaviour and excretion of GCs is to study the hormonal effect of single events of aggression under natural conditions. In many non-human primates there is an inverse linear relationship between dominance rank and aggression received [26], resulting in more incidences of aggression towards lower ranking individuals. Consequently, studies have focused on the relation between GC levels and received aggression [23,24]. However, other studies in humans [8,25], rodents [22] and fish [7] have examined plasma cortisol increases in both winners and losers of aggressive interactions. In chimpanzees, however, aggressors do not always win [3], so GC levels might increase in both aggressors and victims (receivers) of aggression.
Giving or receiving aggression affects the subsequent behaviour of opponents. Prior aggression sometimes attracts further aggression [27], and former opponents seem to avoid each other if the aggression stays unreconciled [28]. These effects have been attributed to post-conflict anxiety and damaged relationships, both psychological stressors of aggressive interactions. Even though the stress response is likely caused by the resulting uncertainty, any release of GCs will still incur metabolic costs to the individual.
Here, we investigated whether single events of aggression in male chimpanzees (Pan troglodytes) caused increased urinary glucocorticoid (uGC) levels in victims and aggressors. We compared uGC levels related to aggression with uGC levels of samples related to pre-aggression periods. We also investigated the effect of the duration and intensity of aggression on uGC levels, since fighting duration is correlated with energy use, and contact aggression can more easily lead to injuries than non-contact aggression. Finally, we investigated the effects of dominance rank in relation to aggression. Given evidence of reproductive skew in male chimpanzees, with dominant males siring more offspring [29,30], high ranking males arguably have more to lose than lower ranking males.
Study site and data collection
We observed nine adult male chimpanzees (P. t. schweinfurthii, Table 1) of the Sonso community living in the Budongo Forest (1°35'-1°55' N, 31°08'-31°42' E), Uganda, between February 2008 and July 2010. The Sonso community has been observed continuously since 1990 [31] and comprised 15 males (≥15 years: 10; 10-14 years: 5), 35 females (≥14 years: 27; 10-13 years: 8), and 28 juveniles and infants during the study period. Only one adult male was excluded from the study, due to not being sufficiently habituated to tolerate 6 hr focal follows. Budongo Forest is a moist, semi-deciduous tropical rain forest with an average altitude of 1100 m and a mean yearly rainfall of 1600 mm [31]. With a team of up to six observers, we followed up to three parties of chimpanzees (independently moving subgroups) from approximately 7 a.m. to 5 p.m. through the forest, recording the party behaviour using all occurrence sampling [32] for aggression, grooming and affiliative social interactions. We waited for one of two target behaviours to occur: (1) aggressive interactions, in which one individual (aggressor) attacked another group member (victim) using either contact (hits, bites or tramples) or non-contact aggression (displays, charges or chases); (2) resting, in which one individual had no social interaction for a minimum of one hour and was sitting or lying for at least the first 30 minutes. Aggression events were measured in minutes. After observing a male engaging in one of the target behaviours, we switched to focal animal sampling [32] of that individual for the next 6 hours, collecting every possible urine sample and recording each change in behaviour. Urine was pipetted from plastic bags when subjects were sitting > 10 m high in a tree, or from leaf matter when urination occurred on the ground, after subjects had moved away. After collection, urine samples were stored in a thermos flask containing ice and frozen upon arrival in camp, which was within 10 hours after collection. Urine collection did not commence when subjects had engaged in aggression or grooming within the hour prior to the target behaviour, and was aborted when subjects engaged in additional aggression or grooming within two hours after the target behaviour. We collected a total of 169 urine samples (aggression context N = 94, resting context N = 75) from nine adult male chimpanzees following 14 aggression events (9 contact and 5 non-contact) and 10 resting events, with a mean of 7.04 urine samples collected per chimpanzee per target behaviour.
Hormone analysis
Urinary GC levels were measured at the Lab for Field Endocrinology at the Max Planck Institute for Evolutionary Anthropology using high-performance liquid chromatography-tandem mass spectrometry (LC-MS/MS), applying a method that measures 23 endogenous steroids in small quantities of primate urine [33]. Samples with a recovery of the internal standard deviating by less than ±50% from the expected value were included in the analysis. In the case of a large deviation, we re-measured the samples. If a large deviation persisted, we re-extracted and re-measured the samples. We excluded samples where the large deviation still persisted. Examination of LC-MS/MS data was carried out with MassLynx (version 4.1; QuanLynx software).
Only a fraction of plasma cortisol can be found in chimpanzee urine [34], while metabolites of cortisol are found in higher quantities [33]. To quantify urinary glucocorticoid excretion (uGC), we used the sum of urinary cortisol plus four of its metabolites (tetrahydrocortisol, tetrahydrocortisone, 5β-androstane and 11-oxoetiocholanolone). The sum of uGC comprised on average 9% cortisol, 37% tetrahydrocortisol, 35% tetrahydrocortisone, 5% 5β-androstane and 14% 11-oxoetiocholanolone. We corrected the uGC levels with the creatinine levels of each sample to control for differences in the water content of urine samples [35]. We excluded samples with a creatinine level of less than 0.05 mg creatinine/ml urine from the analysis.
Hormonal and behavioural data analysis
The adult males observed during our study had an average urination interval ± SD of 78 ± 32 min. We defined GC clearance in the urine of chimpanzees following the results of [34]. Using 3H-labeled cortisol, the peak recovery of 3H-labeled cortisol metabolites in chimpanzee urine was between ~2 and 4.8 hours after administration [34]. Although some cortisol was most likely secreted into urine earlier, the highest level of radioactivity recovered was found in the second urine sample, at 4.8 h after injection. Based on these findings, and due to the variation in the urination interval, we set the window for peak GC clearance in chimpanzee urine to 135-270 min after the onset of the target behaviour. Due to the diurnal decline in uGC levels [36], absolute uGC levels were not comparable. Thus, we calculated a relative uGC level, where we divided the uGC level of samples related to the target behaviour (peak-period samples) by the uGC level of samples collected in the pre-target-behaviour time window (pre-period samples). The pre-period urine samples were excreted between 0 min (t0) and 135 min (t1) after the start of the target behaviour, representing the peak uGC secretion of behaviour preceding the aggression or resting. The peak-period urine samples were excreted between 135 min after the start (t1) and 270 min after the end of the target behaviour (t2 = 270 + d min, with d = duration of the target behaviour; Fig. 1), representing the peak uGC secretion of the aggression or resting event. We calculated the relative uGC level for each event, representing how high the uGC level was during the peak period as a percentage of the uGC level during the pre-period. Therefore, we divided the mean hormone level of the peak-period samples by the mean hormone level of the pre-period samples: relative uGC level = (mean uGC peak period / mean uGC pre-period) x 100, for each subject for each series of urine samples (Fig. 1). However, since urine is stored in the bladder and excreted after varying latencies, some urine samples were expected to contain urine from the two adjacent time periods. To address this problem, we assigned these samples to the time period in which the greatest proportion of urine was likely to have been excreted (Fig. 1). Specifically, if a sample was collected within 30 min after the pre-period (t1) or the peak period (t2), it was considered to lie in a potential 'overlap zone' (Fig. 1). 'Overlap-zone' samples could be assigned to the periods on either side of the period changing point, depending on the latency between the sample excreted in the 'overlap zone' and the previous urination. If Δt between the sample in the 'overlap zone' and t1 (or t2, respectively) was smaller than between the last sample before t1 (or t2) and t1 (or t2), we classified the sample in the 'overlap zone' as a sample from the period before t1 (or t2, respectively). In all other cases, the sample was classified as belonging to the time period in which it was excreted (Fig. 1).
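Read together, the windowing and overlap-zone rules above amount to a small algorithm. The sketch below is our illustrative Python rendering, not the authors' analysis code; it assumes each sample is given as a (time, creatinine-corrected uGC) pair with time in minutes from the onset of the target behaviour.

```python
import numpy as np

def relative_ugc(samples, duration_min):
    """samples: list of (t_min, ugc) sorted by excretion time; ugc is the
    creatinine-corrected uGC level (samples with < 0.05 mg creatinine/ml
    urine excluded upstream). Returns the relative uGC level in percent."""
    t1, t2 = 135.0, 270.0 + duration_min        # period boundaries (min)
    times = [t for t, _ in samples]

    def period(t):
        p = 'pre' if t <= t1 else ('peak' if t <= t2 else 'post')
        # Overlap zone: a sample within 30 min after a boundary is assigned
        # to the earlier period if it lies closer to the boundary than the
        # previous urination did (most of its urine accrued before it).
        for boundary, earlier in ((t1, 'pre'), (t2, 'peak')):
            if boundary < t <= boundary + 30:
                before = [u for u in times if u <= boundary]
                if before and (t - boundary) < (boundary - before[-1]):
                    p = earlier
        return p

    pre = [u for t, u in samples if period(t) == 'pre']
    peak = [u for t, u in samples if period(t) == 'peak']
    return np.mean(peak) / np.mean(pre) * 100.0
```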
Statistical analysis
In order to examine the influence of various predictor variables on changes in urinary GC levels, we ran General Linear Mixed Models (GLMM: [37]) using R version 3.1.1 (R Core Team 2013) and the function lmer of the package lme4 [38], with Maximum Likelihood estimates and a Gaussian structure. The assumptions of normally distributed and homogeneous residuals were fulfilled, as shown by visual inspection of qq plots and of residuals plotted against fitted values. We checked for model stability by excluding subjects one at a time from the data, and we state in the results when models were unstable. Variables did not exhibit problems of collinearity [39,40] (Variance Inflation Factor < 2 in all cases, derived using the R package car [41], applied to a standard linear model excluding the random effect), suggesting that each predictor variable accounted for a portion of the variance. We ran several GLMMs. In the model including all the data, the relative uGC level was the response variable and subject identity was the random factor. We ran additional models which included only the samples relating to aggressive events, in order to examine behavioural variation related to aggression (which obviously did not occur for resting events). Due to the small sample size, we were unable to fit all predictor variables into one model. Therefore, we looked at the effects of each of the predictor variables on the relative uGC level separately. To establish the significance of the full model, we used a likelihood ratio test, comparing the full model with the respective null model comprising only the random effect and the intercept. When models were non-significant, we considered the predictor variables to have no clear effect on the response variable.
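The models were fit in R with lme4's lmer; purely as an illustrative analogue (our assumption, not the authors' script), the same full-versus-null likelihood-ratio comparison can be sketched in Python with statsmodels, again using maximum likelihood rather than REML.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical table: one row per event, with the relative uGC level (%),
# the target behaviour ('aggression' or 'resting'), and subject identity.
df = pd.read_csv("ugc_events.csv")

full = smf.mixedlm("rel_ugc ~ behaviour", df, groups=df["subject"]).fit(reml=False)
null = smf.mixedlm("rel_ugc ~ 1", df, groups=df["subject"]).fit(reml=False)

# Likelihood ratio test of the full model against the null model comprising
# only the intercept and the random effect.
lr = 2.0 * (full.llf - null.llf)
df_diff = len(full.fe_params) - len(null.fe_params)
print(f"chi2 = {lr:.2f}, df = {df_diff}, p = {stats.chi2.sf(lr, df_diff):.3f}")
```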
Ethical statement
The ethics committee of the School of Psychology, University of St. Andrews, UK, approved this non-invasive behavioural and hormonal study of the Sonso chimpanzees located around the Budongo Conservation Field Station (1°43' N, 31°32' E) in the Budongo Forest Reserve, Uganda. In accordance with ethical guidelines, we kept a 7 m distance from the chimpanzees, never interacted with chimpanzee subjects, and collected urine with plastic bags when subjects were sitting > 10 m high in the trees, or from leaves after subjects had voluntarily moved. Research was conducted under permits from the Uganda Wildlife Authority (TDO/33/02) and the Uganda National Council for Science and Technology (NS 181). Chimpanzees are an endangered species (IUCN Red List) and under the protection of the Uganda Wildlife Authority.
Results
Our data set comprised 14 complete sets of urine samples following aggression, where we successfully collected both pre- and peak-period samples, and 10 complete sets following resting (Table 1). Results of the GLMM which included all data showed that the target behaviour (aggression or resting) significantly influenced the relative uGC levels (Table 2A; likelihood ratio test: χ² = 5.35, df = 1, p = 0.021). Males' relative uGC levels were significantly higher after aggression (mean relative uGC ± SD = 112% ± 28) than after resting for > 30 min (mean relative uGC ± SD = 85% ± 27; Fig. 2). Combining the target behaviour with the subject's role during the aggression (aggression: aggressor, aggression: victim, and resting) showed that uGC levels tended to be higher (Table 2B; likelihood ratio test: χ² = 5.48, df = 2, p = 0.065) in both aggressors (mean relative uGC ± SD = 107% ± 12) and victims (mean relative uGC ± SD = 113% ± 33; Fig. 3) than after resting. Estimates show that, whilst differences in uGC levels relative to resting were large for both aggressors and victims, differences between the aggressor and victim conditions were small (Table 2B).
Further, we investigated in three separate GLMMs whether the subject's rank (ranks 1-9), aggression duration (min), or aggression intensity (contact or non-contact aggression) affected the relative uGC levels. Only the subject's dominance rank showed a significant effect (Table 3; likelihood ratio test: χ² = 5.25, df = 1, p = 0.022), with higher ranking males showing greater elevation of relative uGC levels after aggression (Fig. 4). In contrast, we did not find effects of aggression intensity (contact or non-contact) or aggression duration (min), although it should be noted that these null results could be due to small sample sizes (likelihood ratio tests: aggression duration: χ² = 0.45, df = 1, p = 0.50; aggression intensity: model unstable). [Table note: estimates from variables in italics were taken from a re-run of the model.]
Discussion
Male chimpanzee aggressors and victims had larger increases in uGC levels after single aggressive interactions than after resting. In addition, higher ranking male opponents had greater increases in uGC levels than lower ranking males. Sample sizes were small but our results are comparable to previous studies showing raised plasma cortisol in both winners and losers in humans [25], rodents [42] and fish [7], suggesting that for male chimpanzees aggressive interactions are costly, regardless of the individual's role in the conflict. The proximate reasons for the increased uGC levels in our study remain unclear. In fish, plasma cortisol increased after fights in both winners and losers, but winners showed only increased plasma cortisol levels 5 min after the end of the fights, while losers still had increased plasma cortisol levels 24h after the fight [7]. If injury to losers could be ruled out, this study suggests different underlying mechanisms, which may involve energetic costs in winners in addition to a social stress reaction in losers [10]. In another study, chimpanzees showed increased uGC levels prior to hunting events and boundary patrols [43]. The author concluded that elevated uGC levels were in response to the anticipation of the cooperative and/or competitive events [43]. Following this idea, chimpanzees might anticipate the energetic needs of specific behaviours required in the near future. In our study, therefore, it might be possible that aggressors secreted cortisol to release sufficient energy for the attack, while victims primarily released cortisol in response to the psychological stressor of being attacked [26]. However, we suggest this is unlikely to be the only possible explanation, for the following reasons. First, a study in humans has shown that competitive martial arts matches release significantly more plasma cortisol than comparable non-competitive exercise [8], suggesting that uGC rises can be linked to a psychological component induced by competitive contexts. Second, we were unable to detect a significant relationship between the duration or intensity of aggression and the uGC elevation, although longer and more intense fights should have consumed more energy. This suggests that in chimpanzee conflicts, psychological rather than energetic stressors seem to have a greater influence on GC levels. This conclusion, however, needs to be treated with caution due to the small sample size.
With the above in mind, it is very unlikely that the energetic stress hypothesis [13] alone can explain why higher ranking male opponents had a greater increase in uGC levels than lower ranking ones. This is especially true because our study shows greater increases in uGC levels in reaction to the aggression, rather than high absolute uGC levels. Another possible explanation comes from baboons, where dominant males showed higher fecal GC levels during times of social instability [44]. Social instability, the time of rank changes and male immigration, was also strongly correlated with drastically increased aggression. However, rates of aggression, given or received, did not correlate with fecal GC levels [44], again suggesting that GC level increases were not well explained by energetic costs. Rather, dominant male baboons had more to lose than subordinate ones, considering the strong reproductive skew of Chacma baboons. Reproductive skew is less pronounced in chimpanzees, but is nonetheless present [29,30]. Thus, high ranking male chimpanzees likely have more to lose from defeat than low ranking males. As a result, GC reactivity due to psychological stressors might be stronger in high ranking than in low ranking males.
Social stress is often caused by a lack of control and predictability, such as the exclusion from food or other resources, or the threat of recurring aggression [6,11]. This is particularly pertinent in chimpanzees, with their fission-fusion social structure that leads to constantly changing networks of social support, so that even dominant individuals can lose aggressive interactions [3,45,46]. Powerful aggressors cannot be certain of winning a conflict, and thus lack a degree of control and predictability. This may be a general pattern in animal groups with more egalitarian social hierarchies and may provide an explanation for increased uGC levels after aggression in aggressors as well as victims.
Our results further our understanding of conflict management in animals. Several studies on primates have shown an increase of self-directed behaviours in aggressors and victims [47][48][49], (but: [50]), suggesting that both suffered from anxiety after aggression [51,52]. Traditionally, however, it was assumed that victims, not aggressors, suffer from post-conflict stress. For example, in a study of reconciliation in human boys [53], victims showed higher salivary cortisol levels following unreconciled aggression compared to reconciled aggression or following control conditions [53]. In chimpanzees, 'consolation' of victims has been shown to reduce self-directed behaviour [54]. However, these studies have not looked at hormonal responses of the aggressors, as it has not been recognized that aggressors might also suffer a stress reaction. The current study suggests that it is time to reconsider this assumption. Indeed, both victims and aggressors are known to engage in stress-reducing post-conflict coping strategies, such as reconciliation, 'consolation' or third-party mediated reconciliation [39,48,49,55].
The Technical State of Engineering Systems as an Important Factor of Heat Supply Organizations Management in Modern Conditions
This article examines the features of the anti-crisis management of heat supply organizations (HSO), which became relevant due to the pandemic in the spring of 2020. It is noted that the spread of coronavirus and the related economic problems had a negative impact on the sustainable development indicators of both countries and organizations. HSO, which are rarely considered in modern publications, are positioned in the study as the most important part of the economy of any country, on which the future stabilization of the economic situation among heat consumers depends. This study made it possible to draw a conclusion about the strengthened role of the technical state of HSO engineering systems in the anti-crisis management of these organizations. The study summarizes and presents the main characteristics of the technical condition of heating networks and systematizes all types of diagnostics that are encountered in practice. The choices HSO management makes among technical state monitoring methods are characterized. The authors propose a generalized model of the choice of diagnostic methods, taking into account the sustainable development of a specific HSO. Perspective approaches to improving the state of HSO heat supply systems are determined, which will ensure the development and increasing reliability of urban infrastructure in the context of achieving sustainable development goals.
Introduction
The main global trends determining the development of all sectors of the economy in recent years have been focused on achieving sustainable development goals and digital transformation. In 2020-2021, a new factor appeared: the pandemic, which had a significant impact on the economy, affecting, to a greater or lesser extent, all industries. In this regard, the question arises of searching for new solutions which, by their purpose and characteristics, can be considered anti-crisis, but which, at the same time, make it possible to maintain a focus on sustainable development. In the context of this article, sustainable development of economic systems is understood as their long-term development in the changing conditions of the external and internal environment, balanced in terms of economic, social, and ecological indicators, which allows them to maintain their competitiveness.
In particular, the approach to the functioning of the heat supply industry, which in many countries is one of the life-supporting industries, requires a significant revision.
Obviously, the issues of ensuring sustainable development in the heat supply industry that require research are concentrated to the greatest extent where growth was observed even in the "pre-pandemic" period. Of course, the forecast of energy consumption is also important, as it determines the possibilities within which realistic tasks of planning the development of the energy supply system can be set [1].
Many authors also raise the issues of the influence of the energy complex on the ecological and socio-economic development of the territory as one of the links of sustainable development [2][3][4].
In Russia, the problematic issues of heat supply development are associated with the growing wear of heat-generating and heating network equipment under conditions of insufficient investment, which was further reduced during the pandemic. On the one hand, in the context of weak external demand, the decline in mining, including resources for generating heat energy, continued (−14.2% in June after −13.5% in May 2020 [5]). On the other hand, the transition of a large number of enterprises to remote work significantly affected the demand for heat energy. The Association "Council of Energy Producers" of the Russian Federation reported that the level of payment for heat energy was 3.2% less than for the same period in 2020, while a crisis decrease in the volume of payments was recorded among the population (by 13.3%) as well as among managing organizations (by 6.3%). The accumulated receivables accounted for 30% of all heat energy that was consumed in 2020. As a result of the growth of non-payments, the financial situation of energy supply organizations is significantly deteriorating [6].
Similar situations of crisis in energy supply, which directly or indirectly affected its organization, were observed not only in Russia but also in many other countries around the world. This determined the need to look for the "main link" in providing anti-crisis measures, which would make it possible to maintain the trend towards sustainable development in the face of a decreasing amount of resources that it would be irrational to "scatter". Therefore, all areas of activity of energy supplying organizations were analyzed and, according to the Pareto principle, the most effective factor in a crisis situation was determined: the technical condition of engineering systems and the reliability and efficiency associated with it.
It is important that, in world practice to date, there are various options for organizing heat supply, ranging from completely centralized systems (countries of the post-Soviet space: Uzbekistan, Tajikistan, Kazakhstan, etc.), through an intermediate option combining decentralized and centralized systems (for example, the USA and Canada), to completely local options for organizing heat supply in some European countries. However, in recent years, the focus on achieving sustainable development goals has shifted the priorities for the development of energy systems in many countries. In the context of the prevalence of decentralized schemes, the transition to centralized systems, which can significantly save resources, becomes relevant. The main arguments for this transformation in technical terms are:
- Increasing the energy efficiency of heat production through the use of combined heat and power generation. Thus, the program documents adopted in a number of countries [7,8] suggest a significant increase in the share of combined heat and power generation, which is planned to be achieved through financial, technical, and information support for the introduction of cogeneration technologies;
- Increasing the energy efficiency of heat consumption [9][10][11], which can be achieved on the basis of technical norms and standards in the design and construction of heat supply facilities, mandatory energy certification of buildings and structures, and other measures. In the context of digital transformation, it is important to introduce intelligent systems for collecting data on the actual technical condition of heat supply systems and on heat consumption.
In these circumstances, the "common denominator" for all variants of organizing the heat supply industry is the technical state of the heat supply systems and their elements, which we considered in this study using the example of Russia and other countries with similar characteristics of heat supply systems. In practice, this means the need to maintain the technical condition of engineering systems on the basis of an objective assessment of the technical resources, which, on the one hand, guarantees the reliability of heat supply to consumers, and, on the other hand, will allow the industry to get out of the crisis situation by reducing operating costs and improving energy efficiency, thereby contributing to the sustainable development of the organization. The main focus in achieving the necessary balance is the technical state of engineering systems, and the objectivity of its assessment plays a significant role in the process of developing new anti-crisis management tools for HSO. The issues of organizing technical diagnostics have repeatedly become the subject of discussion but have not come to the fore, so at present it is important to define a new methodological approach to its organization which makes it possible to establish the best configuration of technical diagnostic tools to ensure the stability of HSO operating under the changed nature of energy supply to consumers.
Materials and Methods
In this study, management is considered in conjunction with the issues of improving the quality and reliability of the heating networks in operation and reducing the HSO costs associated with carrying out repairs, through the development of an integrated methodological approach to organizing the technical diagnostics of heating networks. Until now, power supply organizations have relatively rarely raised the question of the importance of technical diagnostics and a detailed analysis of its impact on reliability and efficiency. The principles of organizing diagnostics have not been reviewed for many years, for various reasons.
Diagnostics of any technical object includes the following functions:
• An assessment of the technical condition of the object;
• The detection and localization of faults;
• Forecasting the residual resource of the object;
• Monitoring the technical condition of the object.
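These four functions can be read as the minimal interface any diagnostic subsystem has to expose. The sketch below is only our illustration of that decomposition in Python; the class and method names are assumptions, not part of any standard.

```python
from typing import Protocol

class Diagnostics(Protocol):
    """Minimal interface mirroring the four diagnostic functions above."""

    def assess_condition(self, obj_id: str) -> float:
        """Score the current technical condition of the object."""

    def locate_faults(self, obj_id: str) -> list[str]:
        """Detect and localize faults (e.g. pipeline section identifiers)."""

    def forecast_residual_resource(self, obj_id: str) -> float:
        """Predict the remaining service life, e.g. in years."""

    def monitor(self, obj_id: str) -> None:
        """Continuously track the technical condition of the object."""
```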
The activity of HSO is of a pronounced seasonal nature: during the heating season, it is required to provide reliable and quality heat supply to consumers at established prices, and to carry out the necessary volume of repair and restoration work on heating networks and generating capacities outside this period. From the 1970s, the minimization of costs for diagnosing the technical condition of district heating systems in Russia and the countries of the post-Soviet space was ensured by carrying out so-called pressure tests, i.e., determining the technical condition of pipelines on the basis of hydraulic testing under high pressure. The sections of the heating network whose technical condition did not allow them to withstand the load, leading to rupture of the structural material, were taken out for repair. The average efficiency of this method is 92%, but information about "non-ruptured" pipelines was, for obvious reasons, not available. It was impossible to predict upcoming repair costs and attempt to optimize them, which is especially important in the face of reduced financial resources.
The hydraulic test method is effective only in the short term and provides a minimal amount of information for making management decisions. From the perspective of long-term planning in the context of overcoming crisis situations and focusing on the sustainable development of HSO, the management of these organizations needs to implement a newer, more systematic approach to identifying the technical state of heating networks. Considering the strengthened role of the engineering systems technical state in modern management, many HSOs began to use other management tools in combination with hydraulic tests, namely conducting planned basic diagnostics of the engineering systems technical state using non-destructive and destructive testing methods. The information obtained by such methods makes it possible to produce long-term forecasts of the engineering systems technical state, to determine the prospects for the sustainable development of the HSO, and to balance it according to a set of indicators.
However, it is not always possible for HSO to carry out such diagnostics on their own: equipment and personnel of appropriate qualifications are required, which makes it necessary to bear the costs of outsourcing them. Therefore, the management of HSO needs to base the choice of a set of diagnostic methods and tools on an approach to its organization that allows for the minimization of the resources for their provision while ensuring the required quality of diagnostics.
The research methodology adopted by the authors, aimed at forming a new methodological approach to the organization of diagnostics, is based on the implementation of four successive stages (Figure 1). Here, we describe in more detail the materials and methods that were used by the authors within each of the stages of the research sequence presented above.
The first stage applies the method of analyzing the current regulatory and legislative framework of Russia and other countries with similar characteristics of heat supply systems. The regulatory and legislative framework in any country acts as the basis for the formation of a diagnostic system, being both a "starting point" and a set of mandatory conditions that diagnostics must meet. Therefore, a comparative analysis was carried out across a number of countries. A detailed analysis of the structure of the countries' legislative documents on energy supply and of their provisions revealed significant differences, but at the same time confirmed important common characteristics that determine the possibility of universal solutions for organizing diagnostics:
• Organizational and economic factors of the HSO functioning, which determine the engineering systems structure and composition, their configuration on the territory of deployment, the operation requirements, as well as the conditions for the heat energy supply in terms of its quality, timing, and reliability;
• Technical factors of the HSO functioning, including the requirements for the technical state of engineering systems that are operated in the process of heat energy generation, transportation, and distribution; methods for identifying this state; as well as a list of the generated information and ways of interpreting it for making management decisions.
At the second stage, a review of the materials of scientific and technical achievements and of HSO diagnostic practice in the field of destructive and non-destructive testing methods for the state of heating network pipelines was carried out. At this stage, the authors also encountered completely different experiences in the countries studied, establishing that the energy supply organizations of the post-Soviet space, where the heat supply systems are more accident-prone, have advanced to a greater extent; however, experiences vary greatly even within the same country. Therefore, an analytical approach was applied to the comparison of destructive and non-destructive testing methods, based on statistical studies of factual data on the intensity of defect development processes, as well as on the results of their diagnosis by various methods.
The development of an indicator system, which constitutes the content of the third stage of the research, was based on regulatory materials of Russia and other countries with similar heat supply system characteristics, using a grouping method that made it possible to follow the authors' provisions relating to the technical state of engineering systems and the sustainable development of HSO:
1. Ensuring the reliability of heat supply in accordance with the requirements of technical regulations. This means that the contractual obligations of the HSO to the consumers oblige the supplier to do this while observing the safety of the supply processes and ensuring the requirements for the quality of heat energy and heat carrier;
2. Ensuring the efficiency [12] of the generation and transportation of heat energy, which implies the elimination of unproductive resource losses due to the technical condition of engineering systems, and, in the future, investment in the transition to digitalization of operation and the usage of energy efficient structures and technologies;
3. Ensuring the environmental safety [13] of heat supply, which in the current situation remains one of the state priorities for managing the industry and requires the analysis of the technical state of engineering systems from the standpoint of environmental protection measures;
4. The priority development of district heating systems on the technological basis of cogeneration [14], taking into consideration the economic and territorial characteristics of the country;
5. The achievement of economically justified profitability of the current activities and investments of the HSO, which forces the management of the HSO to bring the issue of renewing worn-out engineering systems into the special planning zone;
6. The observance of non-discriminatory and transparent conditions for the implementation of consumer relations in accordance with the requirements of antimonopoly legislation [15], the basis for which is the technical capability of connecting all consumers to the HSO engineering systems.
It should be noted that the above provisions fully meet the objectives of the Russian Federation that are aimed at achieving the sustainable development goals and are universal in terms of regulating these issues in other countries.
To select the means of technical diagnostics, the use of which will ensure the listed requirements, the authors proposed a number of technical and economic indicators, which are presented in more detail in the "Results" section.
At the final, fourth stage, the systematized research materials on the compared diagnostic methods were used, and the method of mathematical modeling was applied to develop recommendations for HSO on choosing the optimal set of technical diagnostic methods, the use of which has not previously been presented in studies. Until now, the choice of diagnostic tools has been the responsibility of managers, whose subjective opinions prevailed in the selection. At present, when the crisis phenomena in the economy have fully manifested themselves, decision-making should be formalized and should allow the comparison of options based on quantitative assessments, which will contribute to the rational allocation of resources both for diagnostics and for the repairs of heat supply systems planned on its basis. This made it possible to develop a generalized managerial decision model, which forms the basis of a new methodological approach to organizing diagnostics.
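To make the idea of a formalized, quantitative comparison of options concrete, one possible framing (our illustration only; the method names, costs, and coverage sets below are hypothetical, not taken from the study) is a cost-minimizing coverage problem: select the cheapest subset of diagnostic methods that together detect every defect type that must be identifiable. A greedy sketch:

```python
# Each method: (relative cost, set of defect types it can detect).
methods = {
    "hydraulic_test":    (1.0, {"through_wall_rupture"}),
    "inline_inspection": (4.0, {"wall_thinning", "internal_corrosion",
                                "through_wall_rupture"}),
    "thermal_imaging":   (2.0, {"insulation_wetting", "leakage"}),
    "acoustic_probe":    (1.5, {"leakage", "external_corrosion"}),
}
required = {"through_wall_rupture", "internal_corrosion",
            "external_corrosion", "insulation_wetting", "leakage"}

chosen, covered = [], set()
while required - covered:
    # Candidates that still add coverage of required defect types.
    candidates = [(name, cost, (detects & required) - covered)
                  for name, (cost, detects) in methods.items()
                  if name not in chosen and (detects & required) - covered]
    if not candidates:
        raise ValueError("required defect types cannot all be covered")
    # Greedy rule: lowest cost per newly covered defect type.
    name, cost, new = min(candidates, key=lambda c: c[1] / len(c[2]))
    chosen.append(name)
    covered |= new

print("selected:", chosen, "| total cost:", sum(methods[n][0] for n in chosen))
```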
Results
The study of the regulatory framework conducted by the authors showed that, depending on the territory and range of coverage, the regulatory framework in any country is divided into two levels: the country and sectoral levels. The analysis established two opposing positions: strict control by the state, and its almost complete absence. A distinctive characteristic of the Russian Federation, as well as several other post-Soviet countries (Belarus, Uzbekistan, Kazakhstan), is the presence of uniform requirements for ensuring the industrial safety of heat networks and its mandatory state control. Separate documents of an industry nature, developed quite a long time ago, contain requirements for monitoring the technical condition of heat networks, which differ depending on the accessibility of the structural elements being monitored: accessible and not directly accessible parts of engineering systems. The analysis carried out by the authors of the practice of fulfilling these diagnostic requirements by HSO showed their low productivity, primarily due to the focus on outdated methods for monitoring the technical condition of heating networks and the lack of consistency.
Unlike in the Russian Federation, the regulatory and legislative framework of other countries defines a fundamentally different procedure for ensuring control of the state of the heat network, which should be systematically ensured at all stages of the life cycle of centralized heating systems, including design, the manufacture of basic materials and equipment, construction, and operation. The subject of control also differs from domestic practice, owing to the risk factors for disruptions in the operation of the heating network and the strength of their impact. At the same time, HSOs, without a specialized state technical control body, are fully responsible for the quality of functioning of heat networks based on their own approach to assessing the technical condition of existing heat networks.
In general, it should be noted that, according to data published in the open press (articles, reports), in modern conditions the assessment and forecasting of the technical condition of existing heating networks and "life extension" are carried out according to internal regulations developed by HSO personnel; a universal model for assessing the technical state of thermal networks does not currently exist. Thus, it can be stated that the regulatory and legislative frameworks of all countries in the field of methods for monitoring and evaluating heat networks leave sufficient space for their specification. At the same time, the analysis made it possible to identify a significant set of requirements, both mandatory and recommendatory, that can be used to improve diagnostics.
In practice, according to data from open sources, in the EU and the USA, for diagnostics of existing heating networks, built-in alarm systems for the dampening of the insulating layer are mainly used, and thermal imaging aerial photography is carried out in the infrared range. Also, continuous monitoring of the coolant leakage at the branches of the main networks and at consumers is carried out. Instrumental diagnostics of heat networks by methods of non-destructive and destructive testing in the EU and the USA are carried out mainly under the control of state-accredited bodies (experts) at all stages of the construction of heat networks, from checking components to commissioning. At the same time, the range of methods for monitoring the technical condition of pipelines of existing heating networks is limited; at the stage of operation, it is assumed a priori that the pipelines that are put into operation are highly reliable. Accordingly, approaches to organizing the operation of pipelines and monitoring their condition do not imply the presence of problematic situations that are associated with the poor quality of work that is performed at the stage of pipeline installation and the use of low quality materials and low quality network water, which quite often occurred in the Soviet and even post-Soviet period in domestic practice. At the same time, the argument that in these countries there is no need for modern diagnostic methods, for example, in-line diagnostics, would be untrue.
In the countries of the post-Soviet space, very diverse experience in using various diagnostic methods has been accumulated, but the amount of data is not sufficient for a full-fledged comparative analysis, since each HSO mastered those diagnostic methods that corresponded to its capabilities. Nevertheless, experts believe that the most promising in terms of practical application are the methods of in-line diagnostics. Despite the fact that the existing methods of in-line diagnostics are not yet able to give exact information about the actual state of the pipeline and its working life, their reliability is at the level of 75-80%, which is 1.5-2 times higher than the reliability of other non-destructive testing methods. Thanks to the improvement of the method of in-line diagnostics and non-destructive testing modules, as well as the development of new instrumental methods for monitoring pipelines based on the modern development of technical means, it will be possible to replace hydraulic tests for diagnosing heating network pipelines with non-destructive testing methods [16]. The study of the engineering systems technical state factor in the anti-crisis management of HSO made it possible to form a new management model in the field of the organization's production processes, which assumes the provision of the maximum possible efficiency in the application of technical diagnostic methods. We have identified two conditions that play a decisive role in decision-making:
• The requirements for the content of information that allows obtaining objective data on the heating networks technical state, which can be used for heating network repair planning based on the selection of sections that require restoration measures;
• The selection of technical diagnostic methods that allow obtaining information of the required quality while minimizing the resources for their use, considering the large number of technical diagnostic tools operating on various technical platforms.
Analysis of the operational features of heating networks, which determine their technical state, made it possible to determine a list of the necessary characteristics of heating network structural elements (Table 1). These must be fully identified on the basis of a set of diagnostic works in order to make the necessary management decisions.
Pipelines:
• Detection of metal discontinuities, as well as dents, corrugations, edge displacements, etc., on the inner and outer surfaces of pipes, including welds (Table 1, fragment).

To determine the most common types of defects detected by technical diagnostics, an analysis of damage statistics on the heating networks of the Russian Federation was carried out and generalized based on data published in open sources by the authors of the article and other researchers.
To characterize the information used and its processing in the study, it should be noted that, for several decades, the authors have been conducting a study of statistical data on heat network operation while adhering to several conditions that give the conclusions significance. These conditions apply to the entire sample:
- All types of heating network layouts used in practice are considered: both above-ground and underground layouts;
- Accounting is carried out for all types of underground layouts, from channel-less layouts to heat networks combined with other engineering systems in collectors;
- All the main pipe diameters of heating networks are considered, from the smallest diameter of 50 mm to the maximum diameter of 1400 mm;
- The observation period consists of multiple replicates of a one-year period, covering both the heating season and the off season, during which the current repairs and pressure testing described above are carried out;
- The data are recorded in relation to the age of the heating networks on an annual basis; the age of the heating networks was calculated in years from the year of commissioning.
The results of the analysis are presented in Figure 2. While it was not always possible during the study to consider the full dataset according to all the listed requirements, the patterns obtained in particular analytical slices of the data, such as that in Figure 2, give practically similar results, which are discussed in more detail below.
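To make the described sampling conditions concrete, the sketch below shows how such damage records could be aggregated by network age and by cause, in the spirit of the analysis behind Figure 2; the record format, field names, and numbers are hypothetical placeholders, not the authors' actual dataset.

```python
import pandas as pd

# Hypothetical damage records, one row per registered failure; the field
# names and values are illustrative, not the authors' data.
records = pd.DataFrame({
    "layout": ["underground", "ground", "underground", "underground"],
    "diameter_mm": [100, 500, 1400, 50],   # covers the 50-1400 mm range
    "age_years": [12, 30, 7, 25],          # years since commissioning
    "cause": ["external corrosion", "internal corrosion",
              "external corrosion", "other"],
})

# Failure counts per age year, the kind of slice plotted in Figure 2.
by_age = records.groupby("age_years").size()

# Share of each deterioration cause across the whole sample.
cause_share = records["cause"].value_counts(normalize=True)

print(by_age)
print(cause_share.round(2))
```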
It can be noted that the largest number of items relate to the technical condition of the heating network pipelines, which is confirmed by numerous studies that reveal the main reason for the deterioration of the technical condition of heating networks: external and internal corrosion of the metal [17]. According to research by the authors, carried out on the example of the largest HSOs of Russian megalopolises and correlated with the data of other researchers, the deterioration of the technical condition of heating networks due to external corrosion and electro-corrosion accounted for more than 50% of all cases, while internal corrosion accounted for 38%. The remaining recorded cases of deterioration in the technical condition of heating networks, about 10%, were due to all other types of causes. This guided the authors to search for those diagnostic configurations that make it possible to identify, as a priority, such reasons for the decline in the technical state of engineering systems.
A comparison of methods for monitoring the technical state of heating networks should be carried out within the framework of three main blocks of work:
- As part of the preliminary block, a preliminary overview of the control methods available for use and included in the review was carried out. At the same time, the collected data were checked, and a preliminary analysis of each method was carried out from the standpoint of reliability, relevance, and completeness for comparison;
- As part of the analytical block, the comparison of control methods and the retrospective analysis of data on the main defects of heat pipelines were carried out, from which the specifics of the controlled parameters and states were identified. A detailed analysis of the two main groups of control methods, non-destructive and destructive, was carried out, taking into account the significant differences in the consequences of their use for the integrity of the heating network;
- As part of the recommendation block, on the basis of a set of comparison indicators, the best methods for monitoring the technical condition of heating networks were determined and justified.
The research scheme is shown in Figure 3.
Let us concretize these blocks based on the obtained research results. To carry out the work of the preliminary block, all the types of diagnostics encountered in practice, which differ in the technical platforms for their implementation, were systematized and are presented in Figure 4.
They can be positioned as a universal classification of diagnostic methods, which the authors recommend for analyzing the conditions and possibilities of obtaining data for assessing the technical state of engineering systems, depending on the technical platform chosen for their implementation.
Within the framework of the work of the analytical block, a comparative analysis of the methods of destructive and non-destructive testing, including methods of external and in-pipe diagnostics of heating pipelines, was carried out using indicators that most reliably characterize each individual method. In the study, the following non-economic indicators and comparison parameters are suggested: the "method error" declared by the manufacturer of the equipment used for diagnostics by non-destructive and destructive methods, as well as the application scope of each of the analyzed methods.
However, this does not give an unambiguous answer to the question of the choice of diagnostic methods, but only supplements the previously obtained materials regarding their features.
For the conclusions to be correct, it is necessary, within the framework of the recommendation block, to reduce the various indicators to a single integral indicator. According to expert estimates, the most universal is the indicator of the economic effect from the use of each individual method.
The authors proposed a general model for choosing a complex of technical diagnostic methods, which makes it possible to take decisions on long-term development that are balanced across all the considered indicators.
To achieve the various goals of HSO management (ensuring maximum diagnostic coverage of heating networks, obtaining the most accurate result with respect to "pointwise" selected objects), it is necessary to use different, but at the same time interrelated, optimization criteria for each of the established goals. The research carried out by the authors on HSO management priorities, under various degrees of diagnostic culture and in various management situations, allows us to suggest the following positions for consideration:
1. Ensuring the maximum diagnostic coverage of heating networks, expressed by their total length (L), is a typical situation for an HSO that has a large number of heating networks; as a criterion, the maximum possible covered length should be used, i.e., L → max;
2. Obtaining the most accurate result with respect to the selected objects of diagnostics involves the use of methods distinguished by a high reliability of the results obtained; as a criterion, the maximum possible accuracy of the diagnostic results should be used, i.e., T → max;
3. In terms of restrictions, regardless of the management situation in the HSO, there will always be restrictions on:
• Resources: financial resources (money, FR) that the HSO can allocate for diagnostics, information resources that the management has in relation to the diagnosed facilities (IR), and HSO investment to improve the technical condition of the heating networks in the planned period (volume of investments, VI);
• The reliability of the results of the measures taken to diagnose the technical condition of heating networks: the required volume of diagnostics (V), the number of methods used (N), and the timing of the diagnostics (T);
• The risks associated with insufficient information for making management decisions on carrying out repair and restoration work (R).
Taking into account the above, in general terms, the model for making an optimal managerial decision on the sustainable development of an HSO has the following form, where Xi is the i-th managerial decision: one of the target functions, L(Xi) → max or T(Xi) → max, is optimized subject to the resource, reliability, and risk restrictions listed above. Let us comment on this model, noting the following features of its application (a minimal optimization sketch is given after this list):
1. The model can be used either as a single-criterion model, in which case the corresponding target function is used when making managerial decisions on diagnostics, or as a two-criteria model with the simultaneous use of both target functions.
2. The minimum and maximum values of the parameters included in the restrictions are set by the management of the HSO, with the involvement of experts in the relevant field, in relation to the conditions of functioning of a particular HSO over a specific time interval.
3. In existing economic practice, linear optimization models and the corresponding linear optimization methods, which are proposed for use at the initial stage, are the most common when making decisions. At the same time, it should be noted that such an approach is significantly simplified: in particular, in addition to the restrictions that take into account the minimum and maximum values of the parameters, there may also be restrictions that connect individual elements of this model and that are usually nonlinear in nature. In addition, the target functions considered within this optimization problem are, upon a more detailed study of the issue, also likely to be nonlinear. All of this complicates the proposed model and requires special nonlinear optimization methods. However, such a formulation of the problem requires a significant amount of information, which the majority of HSOs currently do not have.
4. It should be noted that the presented model is rather general. However, it is currently not possible to eliminate this shortcoming, for the following reasons. Firstly, the choice of the objective function and the set of restrictions to be taken into account are largely determined by the features of the functioning of a particular HSO, which does not allow the requirements for the model to be prescribed explicitly. Secondly, as noted above, a significant obstacle to modeling is the lack of sufficient reliable information to identify the nature of the relationships between the individual factors of the model. In this regard, at the first stages, decision-making by an HSO can be reduced to the enumeration of possible diagnostic options, taking into account the priority of the objects to be diagnosed.
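As an illustration of the linear single-criterion variant discussed in point 3, the sketch below maximizes diagnostic coverage L subject to a financial restriction FR; the sections, lengths, costs, and budget are hypothetical placeholders, not data from the study.

```python
from scipy.optimize import linprog

# Hypothetical network sections: length (km) contributed to coverage L
# and diagnostic cost (financial resources FR consumed) per section.
lengths = [12.0, 8.5, 20.0, 15.5]
costs = [3.0, 2.0, 6.0, 4.5]
FR_MAX = 9.0  # budget restriction set by HSO management (placeholder)

# Decision variables x_i in [0, 1]: share of section i covered by diagnostics.
# linprog minimizes, so we negate the lengths to maximize the covered length L.
res = linprog(
    c=[-l for l in lengths],
    A_ub=[costs],          # cost restriction: sum(costs[i] * x[i]) <= FR_MAX
    b_ub=[FR_MAX],
    bounds=[(0.0, 1.0)] * len(lengths),
)

print("Covered length L =", -res.fun)         # maximized objective
print("Coverage shares =", res.x.round(2))    # chosen x_i per section
```

A two-criteria or nonlinear variant, as point 3 notes, would require replacing this linear solver with a multi-objective or nonlinear optimization method.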
Discussion
The recommended model, and decision-making based on it, can be improved through the creation and implementation of a unified industry digital platform for transmitting technological data in real time, and through the possibility of using industry technological statistics for scientific purposes. Such preconditions are contained, for example, in the departmental project "Digital Energy" [18].
The second issue related to decision-making by HSO management in the context of anti-crisis management, and requiring discussion in the framework of this study, is the improvement of methods for assessing the technical state of heating network structural elements based on diagnostic data. Today, such an assessment is carried out in HSOs mainly on the basis of available practical experience, which does not allow objectively substantiated decisions to be made. At the same time, in other industries, approaches based on the use of mathematical and information models are applied quite successfully. The interpretation of the monitoring data that characterize the state and operation of technical objects (including heat supply systems) as big data largely determines the methods of their processing, including statistical methods, predictive analytics, machine learning, the use of artificial neural networks, etc. The application of these approaches allows the state of these objects to be classified with a high degree of reliability and a set of measures to be developed to prevent the development of defects and the occurrence of emergencies.
Considerable experience in developing models for assessing the state of technical objects, which can be recommended and considered in the continuation of this research, has been accumulated, for example, in the nuclear industry, where increased industrial safety requirements are imposed. In particular, among the promising approaches used, one should note methods for identifying trends and forecasting based on time series [19,20], clustering of equipment states in the parameter space using the principal component method [21,22], and the use of neural network modeling and machine learning [23]. Such models can be developed and successfully applied in the diagnostics of the state of structural elements of heating networks, which will significantly increase the reliability of heat supply and reduce the cost of repair work and the elimination of emergencies. However, the limiting factors in the development of HSOs in this direction are the lack of complete and reliable information for building such models, as already noted by the authors earlier, as well as the lack of specialists with the necessary competencies in HSOs. At the same time, it should be noted that the wider application of digitalization in the heat supply industry will eliminate these obstacles, which will lead to the sustainable development of HSOs.
In conclusion, it can be noted that the issues that were identified in the proposed discussion are closely interrelated and should be considered comprehensively as elements of a single organizational and economic mechanism for anti-crisis management in the heat supply industry.
Conclusions
New meanings in a changed economic context require transformation both in the economies of countries and at the level of industries and individual organizations within them. The pandemic of 2020-2021, having essentially caused a crisis, forced many business entities, including the heat supply business, to return to the issues of anti-crisis management and to search for key links in the system of anti-crisis measures. Some researchers clearly point out the need to search for "growth points" in the energy sector [24]. Before the crisis of 2020, scientists very often raised issues of energy saving and the development of various technologies for it [25].
It is obvious that the degree and forms of the pandemic's impact are quite specific for each organization and depend directly on the key characteristics of its products and the markets in which they are sold. For the heat supply industry, the key point in overcoming the crisis is to ensure the proper functioning of engineering systems, which leads to an increasing role of technical diagnostics in the organization of HSO activities. The research carried out by the authors made it possible to develop a model for managerial decision-making for HSO management, which forms the basis of a new methodological approach to the organization of technical state diagnostics that ensures the reliability of heat supply, overcomes the imbalances prevailing in the industry, and supports the further sustainable development of HSOs. As a further development of the subject considered in this study, it seems promising to form new approaches to the processing of diagnostic information using digital technologies.
|
2022-02-01T16:11:55.458Z
|
2022-01-29T00:00:00.000
|
{
"year": 2022,
"sha1": "675e569e35ad481faf11959922a303b9693ab2dc",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/15/3/1015/pdf?version=1643535370",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "07e62747e4eb3b69188b81cfe8136225813020db",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
}
|
157642113
|
pes2o/s2orc
|
v3-fos-license
|
Location of logistics hubs at national and subnational level with consideration of the structure of the location choice
Abstract: The location of logistic hubs is a strategic decision made after multicriteria analysis. This requires first the definition of quantitative or qualitative criteria that can be independent or partially conflicting. The decision of location can be made at different geographical levels (countries or regions). In this paper, we suggest a generic structuration of criteria by geographical level and by family for choosing hub locations, taking into account the involved structure of location choice, which is rarely done in the literature: sequential assessment (choice of a country, then of a region of this country) or simultaneous assessment (direct choice of a location among several regions belonging to different countries).
INTRODUCTION
In a globalisation context, firms are perpetually looking for new markets or new production resources. This implies defining efficient supply chains. For that purpose, the implementation of networks of logistic hubs usually allows transportation costs to be decreased in comparison with direct source/destination transportation (Alumur and Kara, 2008). Implementing a hub requires a huge investment. The choice of a location is therefore a problem that has drawn considerable attention from both practitioners and academics. On the basis of a literature survey, this communication suggests a hierarchical definition of families of criteria, then of criteria, that can be adapted to specific purposes. The main originality of the proposal is that it may allow the sequence of decisions resulting in the choice of a hub location to be taken into account, which is seldom done in the literature. Criteria are for that purpose defined either at the national or subnational level. The choice of a location can then be made by choosing first a country, then a region/city of the country, or by choosing directly a region/city among a set of areas located in different countries. Another originality is the reuse of indexes published by international entities (the World Bank or the World Economic Forum, for instance) for assessing some of the considered criteria.
Problem statement
Logistic hubs allow material flows coming from different origins to be consolidated and sent to their respective destinations using unimodal (i.e., with a single type of transportation resource) or multimodal (i.e., with several types of resources) transport (Farhani et al., 2013; Campbell and O'Kelly, 2012). Modern logistic hubs may play different roles according to the services they provide: standard functionalities (international/national transport, distribution, warehousing, inventory management...) or high added-value ones (order assembly, co-packing, and post-manufacturing). Global Logistic Hubs (GLH) are usually located near ports or international airports. They may manage important flows of various types of goods (raw materials, semi-finished products, finished products...) at an international level, but such hubs can also be used as transhipment resources only, linking national suppliers/producers to consuming areas. A Regional Distribution Centre (RDC) manages and gathers flows of goods, imported from international logistic centres or locally produced, in order to distribute them over a whole national territory using long-distance transportation means. An Urban Distribution Centre (UDC) is a logistic platform located in the vicinity of an urban area, ensuring the management and concentration of goods flows coming from senders or RDCs, for distribution in the city centre. This includes the well-known "last kilometre logistics" problem.
The location of logistic hubs is a specific case of the « facility location problem », intensively studied in the transportation and logistics literature (see for instance (Owen and Daskin, 1998)). This decision is strategic, and the comparison between several potential locations includes many aspects that can be either quantitatively or qualitatively assessed. In the latter case, qualitative assessment based on expertise should be possible. Assessment criteria may be partially conflicting, which further increases the complexity of the decision-making.
The choice of implantation of a hub may be made according to various sequences of decision influencing the definition of the assessment criteria: choice of a country or region, with a sequential choice (country, then region of the chosen country) or a simultaneous choice (choice among regions belonging to several countries). The sequence of decisions is chosen by the stakeholders (government, logistics operator, manufacturer...) according to their objectives. An assessment of possible locations at the national level requires assessing criteria denoting the global attraction of a country, which is often difficult in quantitative terms, especially for large and/or developing countries, which often have heterogeneous characteristics. The assessment at the regional level consists of comparing cities or regions of the same country. Most of the literature on hub location is either at the national or the subnational level. Sequential (or hierarchical) assessment, consisting of comparing first countries, then regions/cities of these countries, may nevertheless be found in (Daganzo, 1996; Mayer et al., 1999; Mataloni, 2011). A simultaneous assessment may also be relevant: this would mean comparing regions belonging to several countries, resulting in fewer biases than the sequential assessment. In that case, criteria allowing a country to be chosen should be added to the regional ones.
In that context, we shall analyse in the next section the location criteria often suggested in the literature. We shall also review some indexes published by economic entities that can be reused as location criteria. We shall finally suggest grouping location criteria into categories and will show how they can be used for sequential and simultaneous assessment, which is seldom done in the literature.
Survey of logistic hub location selection criteria in the academic literature
In this survey, we have considered articles suggesting criteria for hub location but also for foreign investment, using keywords like: hub location selection criteria, hub location decision, locational determinant, location criteria evaluation. We have excluded many articles dedicated to comparisons of the competitiveness of existing ports or hubs, since they consider performance criteria of existing entities and not criteria related to the attractiveness of a potential location. The selected papers involve either a national evaluation based on national criteria (N), a subnational assessment based on regional criteria (R), a simultaneous assessment (SM), or a sequential choice decision. Furthermore, in order to avoid giving too much consideration to very specific studies, we have only selected criteria cited by at least two different authors.
The criteria selected by the identified studies are summarized in Table 1 where the last column is related to this work.
Review of logistic hub location selection criteria based on world organization indexes
Several worldwide organizations, like the World Bank or the World Economic Forum, regularly publish indexes aiming at comparing the attractiveness of countries for foreign investment. [Table 1, fragments: criteria rows include burden of regulation, incentives availability, quality and availability of infrastructure, proximity to manufacturing market, proximity to port/airport, availability of regional incentives, pollution, and safety and security, with "X" marks indicating the studies citing each criterion.]
Enabling Trade Index (ETI): this index is developed by the World Economic Forum in order to compare the ability of countries to benefit from trade, using a 1 to 7 point scale. It offers a comparative tool to companies, guiding their investment decision strategies. It covers four main criteria: market access; border administration; infrastructure; operating environment.
Worldwide Governance Index (WGI): it is proposed by the World Bank in order to assess the governance of 200 countries. It includes six major criteria: voice and accountability; political stability and absence of violence/terrorism; government effectiveness; regulatory quality; rule of law; control of corruption.
Corruption Perception Index (CPI): it is established by Transparency International. It measures how corrupt the public sectors of countries are, on a scale of 0 (highly corrupt) to 100 (very clean).
Liner Shipping Connectivity Index (LSCI): it is evaluated by the United Nations Conference on Trade and Development (UNCTAD) and measures how countries are connected to global shipping networks, from 0 to 100 points. It includes five components of maritime transport: number of ships; container-carrying capacity; maximum vessel size; number of services; and number of companies that deploy container ships in a country's ports.
Better Life Index (BLI): it is established by the OECD in order to assess well-being and the quality of life in a country, on a scale from 0 to 10 points.
Limits of the literature
As already stated, we can say that 1) very few studies have considered the use of assessment criteria within a sequence of decisions at national and subnational levels; 2) few studies have proposed a sequential choice strategy to locate logistic hubs, while some notable ones used a simultaneous strategy; 3) few studies (Lee, 2007; Lipscomb, 2009; Kayikci, 2010; Shiau et al., 2011; Yang and Chen, 2016) have suggested a typology of criteria that would facilitate the adaptation of the criteria to a specific case, or would allow the impact of each category of criteria on the final choice to be better assessed. To the best of our knowledge, there is as yet no other study suggesting 1) criteria adapted to various sequences of decision, 2) a taxonomy of criteria, and 3) the reuse, when possible, of existing validated indexes.
LOGISTICS HUBS LOCATION SELECTION CRITERIA
We suggest a generic structuration of the reviewed criteria (Tables 1 and 2) by geographical level and by family, in order to facilitate sequential and simultaneous assessment. When considering the criteria listed in Table 2, it is rather clear that the following main categories are assessed:
- attractiveness of the local institutions,
- stability of the area,
- market accessibility,
- ease of access to local resources (land, workers, etc.).
In sections 3.1 and 3.2, these categories are instantiated at the national and subnational levels with additional details.
National level criteria
We have grouped national-level criteria into seven categories, each one denoting the global attraction of a country:
Quality and efficiency of public institutions: it assesses the ability and willingness of a country to establish good public policy to attract, facilitate, and protect investments. It reflects the effectiveness of the regulatory, institutional, legal and tax system. It includes sub-criteria such as: corruption control, property rights and intellectual property protection, transparency of government policies, efficiency and simplification of business regulations, and availability of governmental incentives to investors.
Stability of the country: it relates to how healthy and reliable the business environment is. It includes political stability, macro-economic stability, safety and security, and resilience to natural risks. Political stability is defined as the probability of occurrence of political risks such as political violence and terrorism, or a sudden and unpredictable change of democratic power. Macro-economic stability relates to the stability and strength of macroeconomic policies such as inflation control, creditworthiness, and reduction of public debt. Resilience to natural risks measures the ability of a country to overcome the main shocks and incidents related to natural disaster risk.
Market accessibility: it assesses the capacity of a country to facilitate access to domestic and foreign markets for industrial exporters/importers. This accessibility relies basically on the availability and quality of infrastructure (roads, highways, rail, ports, airports, telecommunications for transport), on the connectivity level, either maritime or by air (which reflects the existence of services based on the infrastructures), on the efficiency of border administration, and on the openness to trade (existence of free exchange, burden of customs barriers).
Market potential: it denotes the overall size of the target market of industrial firms or logistics providers. It includes the domestic market size of the host country and/or the foreign market accessible from this country. The domestic market size assesses the amount of flow of goods imported or produced locally that will be distributed internally, while the foreign market size relates to the amount of goods that will be exported from the host country.
Labour market attractiveness: it measures the overall potential of the labour market of the host country. It is based on the availability of a qualified workforce and on the flexibility of the labour market in terms of wage determination, hiring and firing practices, cooperation in labour-employer relations, etc.
Geographical location attractiveness: it assesses how strategic the geographical location of the host country is. It also includes the availability of land and the possibility of expansion.
Competitive cost advantages of the country: this criterion covers all cost factors that can influence the hub location choice. It can include customs barriers (financial and non-financial barriers), port/airport charges (costs for documents, administrative fees for customs clearance and technical control, terminal handling charges and inland transport), labour costs and energy costs.
Subnational level criteria
We have organized the criteria belonging to the subnational level into six categories, each reflecting the attraction of a city or region within a chosen country:
Availability and quality of infrastructure: this criterion assesses the availability and quality of transport infrastructure within a specific city/area. The importance of assessing this criterion at a subnational level is justified by the fragmentation of infrastructural coverage in some countries.
Land attractiveness: it evaluates the attractiveness of land in the city in terms of the availability of empty land at a convenient price and the possibility of land extension and development. It can be relevant to consider this criterion at the city level, since the cost and availability of land may differ considerably among cities in the same country.
Workforce attractiveness: it assesses the potential of the labour market within a specific region/city, in terms of the availability of qualified manpower, depending on the skills required by the logistics hub, and the cost of the workforce. These criteria differ from city to city and have to be taken into consideration, as they impact the city choice.
Proximity to markets: it evaluates the proximity of local markets, such as consumers or industrial zones, and the proximity to major ports/airports.
Quality of life: it assesses the quality of life within a specific region/city, which affects human resources welfare. It may rely on pollution level, safety and security, cost of living, existence of extra services (schools, hospitals), etc.
Regional incentives: as there may be great differences among cities in the same country, the local authorities may offer some incentives in order to boost the economic development of landlocked cities.
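To make this two-level structuration concrete, the sketch below encodes the proposed taxonomy as a simple data structure; the category names follow sections 3.1 and 3.2, while the weights are illustrative placeholders, since the paper does not prescribe numerical weights.

```python
# Hypothetical encoding of the two-level criteria taxonomy; weights are
# placeholders that a real study would elicit (e.g., via AHP).
CRITERIA = {
    "national": {
        "public_institutions": 0.15,
        "stability": 0.15,
        "market_accessibility": 0.20,
        "market_potential": 0.15,
        "labour_market": 0.10,
        "geographical_location": 0.10,
        "cost_advantages": 0.15,
    },
    "subnational": {
        "infrastructure": 0.25,
        "land": 0.15,
        "workforce": 0.20,
        "proximity_to_markets": 0.20,
        "quality_of_life": 0.10,
        "regional_incentives": 0.10,
    },
}

# For a weighted-sum aggregation, each level's weights should sum to 1.
for level, weights in CRITERIA.items():
    assert abs(sum(weights.values()) - 1.0) < 1e-9, level
```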
Simultaneous assessment criteria
This sequence of decisions consists of comparing cities/regions of different countries over national and subnational criteria simultaneously (Lipscomb et al., 2010; Lee, 2007; Lu and Yang, 2006; Kayikci, 2010; Long et al., 2012). It means that, for each region, we assess the attractiveness of the country to which this region belongs using national criteria (§3.1) and, simultaneously, the potential of this region based on subnational criteria (§3.2). The main difficulty and ambiguity of this method lies in the relevance of merging common criteria. Indeed, we may take some criteria into account at both levels when their measures are complementary (quality of infrastructure, for example), or we may consider them at only one level (workforce attractiveness, for example). Indeed, we consider the quality of infrastructure criteria at both levels, as we have to evaluate not only the quality of infrastructure within a specific region/city but also the availability and quality of the ports/airports, railway lines and highways that serve the entire territory. However, we may assess workforce attractiveness only at the subnational level, as it would be redundant to evaluate it at both levels. This strategy leads to a pertinent analysis, since regions from different countries compete against each other. However, it might be cumbersome to implement, especially with a high number of alternatives and criteria.
Sequential assessment criteria
A sequential choice is a hierarchical choice process in which location alternatives are eliminated in phases based on different attributes (Mataloni, 2011). In our context, it consists of comparing first countries based on national-level criteria (§3.1), then cities/regions belonging to the selected country according to subnational-level criteria (§3.2). As the final objective of both sequences of decisions is the selection of a set of regions/cities where logistics hubs will be set up, a key advantage of this sequence of decisions is that it reduces the number of cities/regions and criteria compared to the simultaneous approach. However, this strategy has a notable limit, as regions of different countries are not put in competition, as illustrated by the sketch below.
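The sketch below contrasts the two sequences of decisions under a simple weighted-sum aggregation; the scores and weights are hypothetical, and a real study would use an MCDM method such as those discussed in the conclusion.

```python
# Hypothetical normalized scores in [0, 1]; two countries, two regions each.
countries = {"A": 0.8, "B": 0.6}                 # national-level scores
regions = {                                      # subnational-level scores
    ("A", "A1"): 0.5, ("A", "A2"): 0.7,
    ("B", "B1"): 0.95, ("B", "B2"): 0.4,
}
W_NAT, W_SUB = 0.5, 0.5  # placeholder weights for the two levels

# Simultaneous assessment: every region competes directly, scored on
# national and subnational criteria at the same time.
simultaneous = max(
    regions, key=lambda r: W_NAT * countries[r[0]] + W_SUB * regions[r]
)

# Sequential assessment: select the best country first, then the best
# region within that country only.
best_country = max(countries, key=countries.get)
sequential = max(
    (r for r in regions if r[0] == best_country), key=regions.get
)

print("simultaneous choice:", simultaneous)  # ('B', 'B1'): 0.5*0.6 + 0.5*0.95
print("sequential choice:", sequential)      # country A first, then region A2
```

Note that the two strategies can select different locations: the sequential choice eliminates all regions of country B before they can compete, which is precisely the limit mentioned above.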
Criteria assessment
There are several ways to assess criteria, depending on the availability of either qualitative or quantitative data, qualitative data usually being assessed through expert judgment. Moreover, data might be precise (a specific value) or imprecise (an interval value). Imprecise data based on expert knowledge is often modelled using fuzzy logic (Chu, 2002).
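For example, the sketch below represents an imprecise expert rating as a triangular fuzzy number and reduces it to a crisp score by centroid defuzzification; this is one common convention, not necessarily the exact scheme of Chu (2002), and the numbers are illustrative.

```python
from dataclasses import dataclass

@dataclass
class TriangularFuzzyNumber:
    """Expert rating given as (pessimistic, most likely, optimistic)."""
    low: float
    mode: float
    high: float

    def defuzzify(self) -> float:
        # Centroid of a triangular membership function.
        return (self.low + self.mode + self.high) / 3.0

# An expert judges "quality of infrastructure" as roughly 5 to 7 on a
# 10-point scale, with 6 the most likely value (illustrative numbers).
rating = TriangularFuzzyNumber(5.0, 6.0, 7.0)
print(rating.defuzzify())  # 6.0 - a crisp score usable in a weighted sum
```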
CONCLUSIONS AND PERSPECTIVES
In this communication, we propose a generic structuration of the criteria used for the choice of hub location, by geographical level (national, subnational) and by category. This structuration can be adapted to specific applications and allows a complete evaluation of the location decision to be conducted either in a sequential or a simultaneous way, which is seldom done in the literature.
This study represents a first step toward a multicriteria decision analysis of hub location selection aiming to determine a subset of qualified countries and cities to host logistics hubs.
In future research, we will finalise the assessment of the criteria introduced in this communication and will compare Multiple Criteria Decision Methods (MCDM) such as AHP, TOPSIS, ELECTRE or PROMETHEE in order to choose one of them (or a combination of them) and proceed to the evaluation of logistics hub locations.
|
2019-05-19T13:04:51.690Z
|
2016-01-01T00:00:00.000
|
{
"year": 2016,
"sha1": "77180857b8906494861144f567f44a72f9f0d812",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.ifacol.2016.12.178",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "396583c0713bf0923006c9389095e28d517e6614",
"s2fieldsofstudy": [
"Business",
"Geography",
"Engineering",
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
}
|
244872238
|
pes2o/s2orc
|
v3-fos-license
|
Reuse of cardiac implantable electronic devices in developing countries perspectives: A literature review
Access to cardiac implantable electronic devices (CIEDs) is limited in developing countries. Postmortem CIED donation from developed countries to developing countries could be an important resource for those who cannot afford a new one. The objective of this paper was to identify and synthesize the perspectives on the donation of CIEDs for potential reuse in patients without resources living in developing countries.
services or their catastrophic costs are some of the reasons for the high mortality of CVDs in developing countries. 3,5 Among CVDs, bradyarrhythmias are a frequent clinical observation and include various cardiac rhythm disorders, such as sinus node dysfunction and atrioventricular conduction abnormalities. 6 Bradyarrhythmias have a huge impact on patients' quality of life, due to low tolerance to exercise, persistent fatigue and recurrent syncopes, symptoms that are all the more debilitating for those living in the demanding conditions of developing countries. 7 The only effective treatment for bradyarrhythmias in their persistent form is to stimulate the heart using a CIED, such as a pacemaker. CIEDs have been shown to prolong life and improve its quality in patients with bradyarrhythmias. 8,9 Even so, access to CIEDs is still limited worldwide due to the high cost of the devices, which often exceeds the annual per capita income of individuals in developing countries. 10 Thus, it is estimated that around one million people die annually in developing countries due to the lack of access to cardiac pacing therapy. 11,12 In recent years, the literature on and interest in reprocessing used CIEDs as an alternative to new ones have increased. [13][14][15] CIEDs are classified as single-use medical devices, and their reprocessing for reimplantation entails risks, for example, device infection or malfunction. 16 However, in the most recent meta-analysis, no significant differences were found between new and reused CIEDs in terms of infection (OR 0.98; 95% CI 0.60-1.60), malfunction (OR 1.58; 95% CI 0.56-4.48), premature battery depletion (OR 1.96; 95% CI 0.81-4.72) or device-related deaths. 17 Therefore, due to the high cost of new devices, the reuse of used CIEDs appears to be a feasible and safe option, especially when the alternative would be not having any device at all. [18][19][20] In the European Community there is no uniform policy regarding CIED reuse: while CIEDs are usually reused in Romania, the United Kingdom, France, Spain and Switzerland have published recommendations or prohibitions on reprocessing this type of product. [21][22][23] Reprocessing CIEDs for reimplantation is not allowed in the United States either, due to the risk of infection. 24 However, there are no prohibitions on collecting used CIEDs and donating them to foreign countries where reutilization is permitted. 7 For this reason, organizations such as "Project My Heart Your Heart" and "Project Pacer" in the United States (US) or "Stimubanque" in France collect used CIEDs donated by patients, hospitals and funeral homes and ship them to developing countries, so they can be reused in patients in need. 25,26 Many CIEDs still have adequate battery life and function when the carrier dies, so postmortem donation is an important source for developing countries where patients cannot afford a new device. 18,27 On the other hand, potential health risks and the ethical concern that patients with reprocessed CIEDs would receive a treatment that would not meet the quality standards of developed countries may raise objections. 28 Therefore, the present work aims to identify and synthesize the perspectives on CIED donation for reuse in patients without resources in developing countries, to contextualize the acceptability of these practices and to explore the possibility of advocating for a local postmortem CIED donation initiative, similar to those existing in other countries.
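As a reading aid for the odds ratios quoted above, the sketch below computes an OR and its 95% confidence interval from a hypothetical 2×2 table; the counts are illustrative, not taken from the cited meta-analysis. An interval that contains 1, as for all the ORs quoted above, indicates no statistically significant difference between groups.

```python
import math

# Hypothetical 2x2 table: infection events among reused vs. new CIEDs.
a, b = 12, 588   # reused devices: events, non-events
c, d = 13, 587   # new devices:    events, non-events

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

# A CI that straddles 1 (like OR 0.98, 95% CI 0.60-1.60 in the text)
# means no statistically significant difference between the groups.
print(f"OR {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```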
The search was limited to articles published in English or Spanish. The summary of the search strategy is shown in Figure 1.
RESULTS
Eight publications responded to the objective of this review and were analyzed. The most relevant findings were classified into two main themes:
• Perceptions, preferences, attitudes and opinions of developed countries towards the donation of CIEDs for reuse.
• Perceptions, preferences, attitudes and opinions of developing countries towards the reception of reusable CIEDs.
Table 1 shows a summary of the most relevant characteristics of the studies analyzed. [Table 1, fragments: in a study including five funeral homes, most participants would be willing to consider the decision to donate a heart device; 98% of the general population and funeral homes and 91% of health personnel were in favor of a mechanism to send CIEDs to patients without resources in developing countries; the most common concerns about CIED reuse were the risk of infection and the legality of the practice. In a study of nine respondents from a manufacturers' "Association", CIED manufacturers showed concerns related to health risks for patients and the responsibility of the remanufacturer, derived from the quality assurance of the reprocessing process and the traceability and control of the locations necessary for a reutilization model (a morgue, central services and reprocessing); they also expressed concerns about the ownership of explanted devices and the need for informed consent about the risk that a reprocessed device poses for recipients.]
[Figure 1: study selection process.]
...because they could guarantee quality control of the process, but not for devices that had already been in contact with a patient. Among the reasons for not reprocessing used devices, concerns related to quality control were described: the traceability and control of the devices during explantation and reprocessing, in addition to the health risks for the potential recipient, due to the lack of safety evidence at the time. 29 Nowadays, due to the increase in evidence on safety, the reutilization of explanted used devices is an alternative to consider. 30 With respect to the general population of developed countries, the majority are willing to consider the donation of an implantable heart device and are in favor of implementing initiatives to donate reusable devices to patients without resources in developing countries, since they consider that it adds meaning to one's life. 30,31 In the same way, members of the general population with family members or friends who are cardiac device carriers show more positive attitudes towards the donation of CIEDs. 30 As for healthcare personnel, the majority are in favor of reusing devices in people in need, believing that it adds value to the main mission of their respective organizations. 30,36 Another similar study also indicated that the majority of funeral directors in developed countries are willing to donate the devices they routinely explant to patients without financial resources in developing countries. 31
• Perceptions, preferences, attitudes and opinions of developing countries towards the reception of reusable CIEDs.
As for specialist electrophysiologists from developing countries, and in line with what was previously stated, they consider device reutilization a safe and ethical practice and a reasonable alternative when new devices cannot be accessed. 32 The same study shows that, if allowed by law, the majority would be willing to implant reconditioned devices in patients who cannot access a new one, in contrast with the previously mentioned concerns about infection and malfunction.
Finally, potential recipient patients and family members in developing countries, most of them unable to afford a new device, are in favor of receiving a reconditioned device, even if the risks of infection or malfunction of the reprocessed device are higher. 35 In addition, the majority indicated their willingness to donate their device, or the device of a relative after death, so it could be reconditioned and reused in another patient.
DISCUSSION
CIED reuse is a life-saving initiative. It is cost-effective and consistent with the principles of beneficence, non-maleficence and justice, with a commitment to the stewardship of resources and the common good. 37 However, CIED donation initiatives require the participation of device carrier patients, their families, the funeral industry, local authorities, specialists, and potential recipients. 35 This review synthesizes the studies carried out to date, underlining the social acceptability of donating postmortem explanted CIEDs from developed countries to developing countries, and of reprocessing and reimplanting them in patients who cannot access a new one.
In most developed countries, CIEDs must be explanted at funeral homes before cremation, due to the risk of the devices exploding in the crematorium. 18,27 The explanted devices have to be handled as biological risk waste, and reutilization is commonly not allowed locally, which means they are discarded. 38 Even though explanted CIEDs are discarded, a considerable number of them have been shown to be reusable and could comprise a vital resource for other patients. 10 On the one hand, due to property rights, carriers or family members of a deceased carrier could claim ownership of the implant once it is removed from the body. 39 On the other hand, even if reutilization of CIEDs is usually not allowed locally, nonprofit donation of used devices to developing countries is not prohibited. 7 Therefore, this may open the door to the implementation of CIED donation initiatives in many developed countries.
For the implementation of a national CIED donation initiative, local regulatory aspects and property rights must be addressed. 37 It seems feasible to provide and obtain an informed consent document from patients or family members in hospitals and funeral homes, given the data collected in favor of reutilization. 18,[30][31][32]34,36 This consent could imply the transfer of ownership of the explanted device to a nonprofit reprocessing organization and set the legal framework for the donation process. CIEDs could then be explanted, given a primary cleaning and shipped, following local medical waste regulations, to the reprocessing organization. 10 Collected devices could then be analyzed to a standard, reconditioned, cleaned and sterilized using a validated protocol, and transferred to specific hospitals in developing countries for reimplantation. 40 The most common concerns raised by healthcare personnel and electrophysiology specialists against CIED reutilization are the risk of infection and the malfunction of reprocessed devices. 30,32 Published systematic reviews and meta-analyses have shown that, under rigorous protocols, reutilization is safe in terms of infection, malfunction, battery depletion and mortality. 17,19,41 Although reused CIEDs have been studied in several case series and cohort studies, no randomized controlled trials have been published to date. 17 The randomized trial being carried out by the University of Michigan in Kenya and Sierra Leone may provide valuable data in this regard. 42 However, for the previously mentioned reasons, CIED reuse should currently only be considered in situations where the benefits outweigh the potential risks and these are adequately communicated to the recipient patient. 28,43 Likewise, it is important to guarantee quality reconditioning and traceability of reprocessed devices and a rigorous follow-up of patients who receive a reprocessed device. 14 Therefore, implanting hospitals in developing countries must ensure that reprocessed CIEDs are only offered to patients who cannot afford a new device, as well as informing them about the risks of reprocessed devices and collecting the respective informed consent before reimplantation. 43 Among the limitations of this review, it is worth noting the type of studies identified, since all of them are descriptive and do not allow a complete analysis of the subject under study. Another limitation is the location of the studies in developed countries, since most have been carried out in the United States. Therefore, in order to study the perspectives on CIED donation for reuse in greater depth, it would be advisable to continue research on this topic, for example with qualitative methodology. In addition, it is encouraged to describe the perspectives and opinions of patients, funeral professionals and health professionals in other developed countries where CIED donation initiatives could be implemented, as they comprise key parts of the donation process. 26-28
CONCLUSIONS
The reuse of reprocessed CIEDs could allow many patients with bradyarrhythmias in developing countries to receive a treatment that they currently lack. The results of this review highlight the positive perspectives of the general population, device carrier patients, healthcare professionals, electrophysiologists and the funeral industry on the donation of used devices to developing countries. Potential recipient patients also have favorable opinions of used and reconditioned devices.
In view of the feasibility of collecting postmortem explanted devices from developed countries, local models of CIED donation initiatives are encouraged.
ACKNOWLEDGMENT
Open Access funding provided by University of Basque Country.
|
2021-12-05T06:16:18.476Z
|
2021-12-04T00:00:00.000
|
{
"year": 2021,
"sha1": "b1022169f7ad1539c6f53c1571a3f9936b473545",
"oa_license": "CCBYNC",
"oa_url": "http://addi.ehu.es/bitstream/10810/56956/1/PACE_2021_Lorenzo-Ruiz.pdf",
"oa_status": "GREEN",
"pdf_src": "Wiley",
"pdf_hash": "8a9ab980d14e4acf238489e81e5608d1aee4ec0e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
1511955
|
pes2o/s2orc
|
v3-fos-license
|
Comparison of the effects of three different (-)-hydroxycitric acid preparations on food intake in rats: response
A response to Louter-van de Haar J, Wielinga PY, Scheurink AJ, Nieuwenhuizen AG: Comparison of the effects of three different (-)-hydroxycitric acid preparations on food intake in rats. Nutr Metabol 2005, 2:23
Text
Louter-van de Haar J, et al. compared the effects of supplementation of three commercially available (-)-hydroxycitric acid (HCA) preparations [Regulator, Citrin K and Super CitriMax] on food intake and body weight. Two doses (150 mg HCA/kg and 300 mg HCA/kg) were used for each preparation over a period of four days. There are significant flaws in this study, which we have outlined below.
Two series of experiments were conducted. The first series of experiments determined the effect of a single dose of each preparation on food and water intake for the following 46 hrs in a four-day placebo-controlled cross-over experiment. In the second series of experiments, HCA preparations were given twice a day for four days to study the effects of repeated doses of the component on food and water intake.
The authors concluded that Regulator and Citrin K were shown to be potent inhibitors of food intake in rats, whereas Super CitriMax showed only small and more inconsistent effects.
There are several major weaknesses in this study, not the least of which are the small number of animals used (n = 6 per group) and the short duration of the experiment. The sequencing of the supplements is also unclear. The paper states that the animals were administered either placebo or a treatment and then crossed over, but it does not disclose which treatment groups were administered before or after the placebo. For example, if Super CitriMax was administered before the placebo, its appetite-suppression effects may have carried over to the placebo portion of the experiment, minimizing any differences, while much greater differences may have been observed if the other treatments followed placebo administration. A 46-hour washout period was used; however, there are no data to support this as a sufficient amount of time to clear the system of the effects of HCA, and, in fact, Super CitriMax's ability to boost serotonin levels and reduce appetite may have carried over for several days, resulting in reduced food intake in the placebo group.
Furthermore, at the higher dosage levels, where Regulator and Citrin K were reported superior to Super CitriMax, the 24-hour food intake of animals taking Super CitriMax was actually less than that of animals taking either Regulator or Citrin K (20.0 versus 20.9 and 23.0, respectively). However, for some reason the animals taking the placebo in the Super CitriMax group consumed significantly less food than the animals consuming either Regulator or Citrin K (19.6 versus 24.8 and 25.6, respectively), so the difference between the Super CitriMax treatment and placebo groups was not as great. Was Super CitriMax the only group to be given the treatment first, followed by the placebo?
Also, while the authors call four days a "long-term study," this is simply an inadequate period of time to demonstrate consistent appetite suppression and weight loss using HCA. HCA is a supplement, not a drug. Previous studies using Super CitriMax, both in animals and in humans, have demonstrated significant appetite suppression and weight loss over a greater span of time (six to eight weeks) [1][2][3].
The authors state that the high calcium and low potassium content of CitriMax (a calcium salt of HCA; 70% soluble in water) may affect its solubility and bioavailability. However, the authors are confusing CitriMax with Super CitriMax, the latter being a novel calcium/potassium salt of HCA that is highly water soluble (100%) and, in contrast to Regulator and Citrin-K, has been proven bioavailable in humans [4]. In addition, calcium offers the added benefit of increasing lipid metabolism, resulting in lipolysis and preserving thermogenesis during caloric restriction, accelerating weight loss [5,6].
The authors further suggest that Super CitriMax may consist of HCA lactone, a less effective form of HCA. However, HCA lactone analysis of Super CitriMax will show that its lactone content is less than 1%.
It should also be noted that several independent studies from multiple universities have shown that Super CitriMax modulates important neurotransmitters (serotonin) and neuropeptides (neuropeptide Y) involved in appetite control [6][7][8], something that Regulator and Citrin K have not been shown to do. Clinical studies on Super CitriMax have demonstrated significant gradual reduction of appetite, as measured by remaining food, over a period of eight weeks [1,2], while no such human studies have been published on Citrin K and Regulator. In any event, the present study is an inadequate model for comparing HCA efficacy, and the speculation the authors offer on Super CitriMax is poorly informed.
Generalized Lagrangian Heterogeneous Multiscale Modeling of Complex Fluids
We introduce a fully Lagrangian heterogeneous multiscale method (LHMM) to model complex fluids whose microscopic features extend over large spatio-temporal scales, such as polymeric solutions and multiphasic systems. The proposed approach discretizes the fluctuating Navier-Stokes equations in a particle-based setting using Smoothed Dissipative Particle Dynamics (SDPD). This multiscale method uses microscopic information derived on-the-fly to provide the stress tensor of the momentum balance in a macroscale problem, therefore bypassing the need for approximate constitutive relations for the stress. We exploit the intrinsic multiscale features of SDPD to account for thermal fluctuations as the characteristic size of the discretizing particles decreases. We validate the LHMM using different flow configurations (reverse Poiseuille flow, flow past a cylinder array, and flow around a square cavity) and fluids (Newtonian and non-Newtonian). We showcase the framework's flexibility by modelling complex fluids at the microscale using multiphase and polymeric systems, and we show that stresses are adequately captured and passed from micro to macro scales, leading to a richer fluid response at the continuum level. In general, the proposed methodology provides a natural link between variations at the macroscale while accounting for memory effects of the microscales.
Introduction
The modelling of complex fluids, synthetic or biological, is in general a challenging task due to the multiscale nature of the flow, which leads to complex behaviours such as flow-induced phase separation, shear-thinning/thickening, and viscoelasticity. Usual approaches involve the solution of a macroscopic balance of momentum, along with constitutive equations that relate the stresses to the velocity fields through microscopically-originated features. However, these approaches are limited when the constitutive equations are not known a priori. Moreover, the existence of large relaxation times at the microscale originates a non-trivial interplay with macroscopic flow features, requiring a detailed description of the entire stress history. In this context, heterogeneous multiscale methods (HMM) (E et al., 2007), which combine numerical algorithms to resolve macro and micro scales separately, appear as powerful tools to model the behaviour of fluids across scales. In HMMs, microscales are localized and solved on parts of the domain to obtain microscopically-derived properties that are used to close the macroscale problem (Ren & Weinan, 2005). This methodology offers the advantage of capturing microscopic effects at the macroscopic length scales, at a lower cost than solving the full microscale problem in the whole domain. In HMMs, the derived microscale properties can enter the macroscale representation either through constitutive relationships, or through microscopic stress information without a priori assumption of a constitutive relationship. The latter is an important advantage of HMMs for the modelling of complex fluids. For an extended review on HMMs, the reader is referred to (E et al., 2007).
Depending on the type of discretization (Eulerian or Lagrangian) used for the macro and micro scales, HMMs are classified as Eulerian/Eulerian (EE), Lagrangian/Lagrangian (LL), Eulerian/Lagrangian (EL), and Lagrangian/Eulerian (LE); see figure 1. A large part of the existent HMMs relies on EE and EL schemes (E et al., 2007), where the macroscale dynamics are resolved on a fixed grid (using a variety of methods such as finite elements, finite volumes, and lattice Boltzmann, to name a few), and microscale simulations (e.g. molecular dynamics (Alexiadis et al., 2013; Borg et al., 2015; Tedeschi et al., 2021), coarse-graining methods, stochastic methods, etc.) are associated to grid points, where microscopic properties are derived. For viscoelastic fluid modelling, Laso and Öttinger introduced a pioneering approach known as CONNFFESSIT (Laso & Öttinger, 1993) (Calculation of Non-Newtonian Flow: Finite Element and Stochastic Simulation Technique), combining finite elements at the macroscale and stochastic particle simulations of polymer dynamics at the microscale.
EE and EL approaches are in general suitable for fluids with microstructural relaxation times ($\lambda'$) sufficiently small compared to the macroscopic ones ($\bar{\lambda}$) (Ren & Weinan, 2005; Yasuda & Yamamoto, 2008, 2014). As depicted in figure 1, for multiscale problems with a large time scale separation, $\lambda' \ll \bar{\lambda}$, an equilibrated response of the microscopic stresses can be obtained in relatively short intervals, regardless of the flow history, since for practical purposes the microscales are seen by the macro solvers as quasi-steady solutions, independently of their initial configuration. Such a strategy has been applied to atomistic-continuum simulations of simple fluids (Ren and E (2005)) using Molecular Dynamics with an Eulerian grid-based calculation of the flow field. This can be easily done in simple fluids where the local stress depends point-wise in time on the velocity gradient. Thus, the initial conditions for the microstructure (atom positions/velocities) can be chosen arbitrarily at every time step and the average stress is calculated via the Irving-Kirkwood approximation, provided that local stationarity is achieved within the same time step. The previous approach, however, cannot be applied to complex fluids with finite memory, where stresses (and microstructure) heavily depend on the flow history and relaxation times are likely to be comparable to, or largely exceed, the macroscopic time step. Using EL or EE schemes directly, it is fundamentally and technically restrictive to generate such an initial microstructural configuration in a fixed fluid cell for at least two reasons: 1) it is a priori not known where the fluid comes from and what its flow history was, and 2) even if this sequential macroscopic information were accessible in a given element of fluid, it would require additional constitutive and numerical features able to account for complex spatio/temporal variations. Alternatives to address issue 1) include spatio/temporal homogenization techniques, and backward-tracking Lagrangian particles combined with Eulerian grids to capture memory effects in the fluid (Phillips & Williams, 1999; Wapperom et al., 2000; Ingelsten et al., 2021). However, as already mentioned, fluid memory can be very long in polymer systems, suspensions, etc., precluding simple linear backward approximations. Regarding issue 2), one alternative is to incorporate continuum configuration fields (Öttinger et al., 1997) that can be discretized and advected from the macroscales. Nevertheless, in this case, it would be extremely difficult to know a priori those fields for general multiphysics problems (i.e. non-polymeric), as well as to numerically generate microscopic configurations consistent with the history of the fluid.

Figure 1: Scheme of different HMM approaches: Eulerian-Eulerian (EE), Eulerian-Lagrangian (EL), and Lagrangian-Lagrangian (LL). The evolution of the stress tensor depends on the effective relaxation time at the microscales, $\lambda'$. Systems with $\lambda' \ll \bar{\lambda}$ (green) are accurately computed at the microscopic scales, whereas for $\lambda' \lesssim \bar{\lambda}$ (blue) larger microscale simulations are required to capture memory effects as the macro scales evolve. LL approaches facilitate the carrying of the stress information during the time integration at macroscales.
For systems with larger microstructural relaxation times, these particular restrictions of EE and EL schemes can be circumvented using fully Lagrangian (LL) schemes (E et al., 2007) together with a proper sampling procedure for the microstructure. Indeed, LL schemes have been successfully used to model elastic effects and history-dependent flows (Murashima & Taniguchi, 2010; Seryo et al., 2020; Morii & Kawakatsu, 2021). As illustrated in figure 1, LL schemes directly track the material points at the macroscale, retaining their strain and strain-rate variation, and thus naturally handle history-dependent fluids. A variety of LL methodologies have emerged over the last decade, adopting mainly smoothed particle hydrodynamics (SPH) discretizations at the macro scales in combination with different microscopic models (Ellero et al., 2003; Murashima & Taniguchi, 2010; Xu & Yu, 2016; Feng et al., 2016; Sato & Taniguchi, 2017; Zhao et al., 2018; Sato et al., 2019; Seryo et al., 2020; Morii & Kawakatsu, 2021; Schieber & Hütter, 2020; Giessen et al., 2020). At the microscales, the stress evolution of polymeric solutions and entanglements has been accounted for using Brownian dynamics (Xu & Yu, 2016), active learning (Zhao et al., 2018; Seryo et al., 2020), and slip-link models (Feng et al., 2016; Sato & Taniguchi, 2017; Sato et al., 2019). In these LL schemes, it is considered that the microscales only account for the polymer contribution to the stress, whereas the solvent is modelled uniquely through the macroscopic discretization (Feng et al., 2016). The flow's effect (i.e. the velocity gradient tensor) enters the Langevin-type dynamics of the stochastic micro-realizations implicitly, as a single parameter, and not directly as a boundary condition for the full micro-system. In fact, one important issue limiting the applicability of HMM methods to more detailed descriptions of complex fluids is precisely the proper imposition of microscale constraints consistent with the macroscale kinematics, and the calculation of the microscopic information required by the macro state (E et al., 2007). When using particle-based micro-models with an explicit solvent description (e.g. MD, DPD, DEM, SDPD), the construction of this constrained microscale solver often represents the most cumbersome technical step. For LL schemes, due to the history-dependent evolution of the flow and the existence of non-trivial flow configurations, the microscales can be subjected to arbitrary series of deformations that are usually difficult to handle with traditional periodic boundary conditions (BCs). To avoid these limitations, existent LL schemes have been restricted to microscopic simulators that do not depend on the "physical" boundary conditions (Feng et al., 2016; Sato & Taniguchi, 2017; Sato et al., 2019; Morii & Kawakatsu, 2021). This includes, for example, the case of BD for statistically independent polymers, such as dilute polymer solutions or polymer melts in a mean-field approximation, or simulators that utilize geometries reproducing simple flow configurations (Seryo et al., 2020) (i.e. simple shear or uniaxial deformation). More general micro-macro couplings (e.g. full particle-based models of polymeric dispersions, colloid suspensions, emulsions, etc.) involving detailed micro-system models undergoing arbitrary flow deformations are beyond the capabilities of the current frameworks.
Moreover, for micro solvers that adopt mean-field approximations, one important assumption is that the microscopic states of all polymers are in equilibrium and that the coils do not have translational degrees of freedom, but only rotational and extensional ones (Morii & Kawakatsu, 2021). Microscopic BC approaches based on simple flow configurations are suitable to account for translational effects and often provide information sufficient to characterize simple fluids. However, since complex fluids can possess microscopic structures that are influenced by different flow configurations, geometries, time scales, and deformation rates, it has been evidenced that to correctly model non-Newtonian fluids (Tedeschi et al., 2021) it is necessary to determine the full stress contribution from the microscopic solver.
In this manuscript, we propose a generalized fully Lagrangian HMM (LHMM) using smoothed dissipative particle dynamics (Español & Revenga, 2003;Ellero & Español, 2018) (SDPD), suitable to model general complex fluids (e.g. colloids, polymer, microstructures in suspensions) while using the same fluid description across scales. Among the different computational methods successfully used to model Newtonian and non-Newtonian fluids at continuum and microscales, SDPD has emerged as a suitable tool to simulate complex fluids (Kulkarni et al., 2013;Müller et al., 2014;Ellero & Español, 2018). The main strengths of SDPD are i) it consistently discretizes the fluctuating Navier-Stokes equations allowing the direct specification of transport properties such as viscosity of the fluid; ii) SDPD is compliant with the General Equation for Nonequilibrium Reversible-Irreversible Coupling (GENERIC) (Öttinger, 2005), and therefore, it discretely satisfies the First and Second Laws of Thermodynamics, and Fluctuation-Dissipation Theorem (FDT); iii) at macroscopic scales SDPD converges to the well-known continuum method smoothed particle hydrodynamics (SPH) as the characteristic size of the discretized particle increases (Vázquez-Quesada et al., 2009;Ellero & Español, 2018). For an extended review of SDPD, the reader is referred to the publication of Ellero and Español (Ellero & Español, 2018).
Since SDPD offers a natural physical link between different scales, we construct an HMM that uses SDPD to solve both macro and microscales. This approach ensures the compatibility of the different representations by construction, as well as physical consistency across scales. At the microscales, we adopt the recently proposed BC methodology (Moreno & Ellero, 2021) that allows the acquisition of the full microscopic stress contributions for arbitrary flow configurations. This allows us to carry out micro-computations under general mixed flow conditions. Furthermore, compared to existing LL methodologies, our approach exploits the versatility of SDPD to model a variety of microscopic physical systems beyond polymeric ones. We can summarise the main features of the proposed LHMM framework as follows:
• Model history-dependent flows by construction.
• Significant spatio-temporal gains in simulations compared to fully resolved microscale simulations.
• Thermodynamically consistent discretization of the Navier-Stokes equations at both macro and micro scales (deterministic and stochastic, respectively), providing a direct link to physical parameters.
• GENERIC compliant at both macro and micro levels.
• Complex-flow configurations are allowed and can be handled at the microscales.
In the following sections, first, a general description of HMM is introduced along with the governing NS equations; then, the proposed fully Lagrangian approach and the particle-based discretization are presented. Finally, without loss of generality, we streamline the validation of the methodology by focusing on two-dimensional simulations of complex flows with memory. At the microscales, we adopt generic, yet complex, polymeric and multiphase flows to showcase the flexibility of the method.
Heterogeneous multiscale methods
In general for HMMs, we can define the macroscopic problem considering a domain $\Omega \subset \mathbb{R}^{d}$ (with dimension $d = 2, 3$) with a boundary $\partial\Omega = \Gamma_D \cup \Gamma_N$, where $\Gamma_D$ and $\Gamma_N$ correspond to boundary regions where Dirichlet and Neumann boundary conditions are applied, respectively. The mass and momentum balance of the system, in terms of the Navier-Stokes equations for an incompressible fluid with constant density $\rho$, can be expressed as

$$\nabla\cdot\mathbf{v} = 0, \qquad \rho\frac{D\mathbf{v}}{Dt} = \nabla\cdot\boldsymbol{\Pi} + \mathbf{f}, \qquad (1)$$

where the total stress tensor is given by $\boldsymbol{\Pi} = -p\mathbf{I} + \boldsymbol{\tau}$, with $p$ the pressure and $\boldsymbol{\tau}$ the viscous stress. For incompressible Newtonian fluids the viscous stress is a linear function of the strain rate ($\boldsymbol{\tau} = \eta(\nabla\mathbf{v} + \nabla\mathbf{v}^{T})$, with $\eta$ the viscosity) and the flow can be fully described using (1). For non-Newtonian fluids, such as colloidal and polymeric systems, this linear relationship does not hold and constitutive equations are required (Bird et al., 1987). Additionally, in the case of microfluidics, where complex flow patterns and thermal effects may arise, the use of Dirichlet boundary conditions, $\mathbf{v} = \mathbf{g}$ on $\Gamma_D \times (0, T)$, may not accurately model such microscopic effects, requiring more elaborate considerations for the boundary conditions.
Lagrangian heterogeneous multiscale method (LHMM)
We propose an LL-type methodology, as depicted in figure 1, discretizing both macro and micro scales with a particle-based representation of the system. We distinguish macroscale parameters and variables derived from microscale calculations using an upper bar (e.g. $\bar{\boldsymbol{\tau}}$), whereas microscale variables are denoted using a prime (e.g. $\boldsymbol{\tau}'$). We use the subindices $x$, $y$, and $z$ to indicate the coordinate axes. We express the macroscopic viscous stress determined from microscopic simulations in terms of hydrodynamic, non-hydrodynamic, and kinetic contributions as

$$\bar{\boldsymbol{\tau}} = \bar{\boldsymbol{\tau}}^{h} + \bar{\boldsymbol{\tau}}^{*} + \bar{\boldsymbol{\tau}}^{k}. \qquad (2)$$

The ensemble-average stress can then be represented by

$$\bar{\boldsymbol{\tau}} = \langle\boldsymbol{\tau}'^{h}\rangle + \langle\boldsymbol{\tau}'^{*}\rangle + \langle\boldsymbol{\tau}'^{k}\rangle, \qquad (3)$$

where $\bar{\boldsymbol{\tau}}^{h}$ accounts for the hydrodynamic contributions to the stress, and $\bar{\boldsymbol{\tau}}^{*}$ corresponds to the non-hydrodynamic effects (presence of colloids, polymers, walls, etc.). In general, the hydrodynamic contributions combine both ideal and non-ideal interactions, that is, $\bar{\boldsymbol{\tau}}^{h} = \bar{\boldsymbol{\tau}}^{N} + \bar{\boldsymbol{\tau}}^{h}|_{\text{non-ideal}}$.

Whereas the ideal effects are expected to occur in the fluid at all scales, the non-ideal interactions originate only at the microscales, from the disruption of the flow field due to the presence of polymers, colloids, walls, or microstructures. Considering that the Newtonian (ideal) stress in the absence of complex microscopic effects is given by $\bar{\boldsymbol{\tau}}^{N} = 2\eta\,\bar{\mathbf{d}}$ (with $\bar{\mathbf{d}}$ the rate-of-strain tensor computed from microscopic information), we can rewrite (3) in the form

$$\bar{\boldsymbol{\tau}} = \bar{\boldsymbol{\tau}}^{N} + \bar{\boldsymbol{\tau}}^{h}|_{\text{non-ideal}} + \bar{\boldsymbol{\tau}}^{*} + \bar{\boldsymbol{\tau}}^{k}. \qquad (4)$$

Now, if we consider that the ideal stress contributes homogeneously over the whole domain, a mean-field approximation holds and the ideal term of (4) can be written in terms of macroscopic variables, $\boldsymbol{\tau}^{N}(\mathbf{v}, \eta) = 2\eta\,\mathbf{d} = \bar{\boldsymbol{\tau}}^{N}(\mathbf{v}', \eta')$. (Notice that the overbar notation of $\mathbf{d}$ is omitted since it is not a multiscale contribution, in contrast to $\bar{\mathbf{d}}$, which is a macroscopic quantity determined from microscopic variables.) Thus, we can now introduce a hybrid macro-micro formulation of (4) given by

$$\boldsymbol{\tau} = \beta\,\boldsymbol{\tau}^{N}(\mathbf{v}, \eta) + (1-\beta)\,\bar{\boldsymbol{\tau}}^{N} + \bar{\boldsymbol{\tau}}^{h}|_{\text{non-ideal}} + \bar{\boldsymbol{\tau}}^{*} + \bar{\boldsymbol{\tau}}^{k}. \qquad (5)$$

This scheme is a generalized framework that allows us to incorporate the ideal hydrodynamic interactions of the fluid from both scales. The weighting parameter $\beta \in [0, 1]$ conveniently provides numerical stability to the method, whereas it naturally accounts for spatial inhomogeneities of the stresses. According to (5), if $\beta = 1$ the ideal hydrodynamic contributions are fully accounted for from the macroscale level, and the microscales only contribute non-ideal interactions. This approximation is suitable for dilute systems, for example; however, it is not adequate for more general situations where spatial inhomogeneities exist. In contrast, if $\beta = 0$ the viscous stresses used to solve the macroscale problem are totally computed by the micro-representation, which implicitly accounts for all stress contributions (ideal and non-ideal) across scales. An important feature of this macro-micro scheme is that it allows us to simulate microscopic stresses at arbitrary locations of the macro domain, whereas other regions are modelled using the standard Newtonian discretization. In the results section, we compare the stability and accuracy of (5) for different values of $\beta$ for simple and complex fluids. We must remark that previously reported LL schemes (Murashima & Taniguchi, 2010; Xu & Yu, 2016; Feng et al., 2016; Sato & Taniguchi, 2017; Zhao et al., 2018; Sato et al., 2019; Seryo et al., 2020; Morii & Kawakatsu, 2021; Schieber & Hütter, 2020) correspond to situations where $\beta = 1$, hence assuming that the ideal stress contributes homogeneously over the whole domain from the macroscales. Given (2) and (5), we can now express the momentum balance (1) as

$$\rho\frac{D\mathbf{v}}{Dt} = -\nabla p + \nabla\cdot\left[\beta\,\boldsymbol{\tau}^{N} + (1-\beta)\,\bar{\boldsymbol{\tau}}^{N} + \bar{\boldsymbol{\tau}}^{h}|_{\text{non-ideal}} + \bar{\boldsymbol{\tau}}^{*} + \bar{\boldsymbol{\tau}}^{k}\right]. \qquad (6)$$

We note that in the case where the non-ideal hydrodynamic contributions are negligible at the microscale, we have from (3) that $\bar{\boldsymbol{\tau}}^{h} = \bar{\boldsymbol{\tau}}^{N}$, leading to a simplification of (6) in the form

$$\rho\frac{D\mathbf{v}}{Dt} = -\nabla p + \nabla\cdot\left[\beta\,\boldsymbol{\tau}^{N} + (1-\beta)\,\bar{\boldsymbol{\tau}}^{h} + \bar{\boldsymbol{\tau}}^{*} + \bar{\boldsymbol{\tau}}^{k}\right]. \qquad (7)$$

The principal difference between (6) and (7) is that the former requires the microscopic computation of the hydrodynamic stresses at the flow conditions for both the ideal (Newtonian) fluid, $\bar{\boldsymbol{\tau}}^{N}(\mathbf{v}', \eta')$, and the complex fluid, $\bar{\boldsymbol{\tau}}^{h}(\mathbf{v}', \eta')$. The latter, in contrast, only involves the simulation of the hydrodynamic contributions of the investigated fluid. Here, we evaluate our LHMM scheme using (7). In section 2.4 we describe the methodology used to estimate the different components of these stresses.
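To make the role of the weighting parameter concrete, the following minimal sketch evaluates the blended stress of (7) for a single macroscopic particle; the function and argument names are ours and illustrative, not part of the reference implementation:

```python
import numpy as np

def hybrid_stress(beta, grad_v, eta, tau_h_micro, tau_star_micro, tau_kin_micro):
    """Blend of macroscopic Newtonian and microscopically informed stresses.

    beta = 1: the ideal (Newtonian) part is taken from macro fields only;
    beta = 0: all viscous stress comes from the microscale measurement."""
    d = 0.5 * (grad_v + grad_v.T)          # macroscopic rate-of-strain tensor
    tau_newton = 2.0 * eta * d             # ideal Newtonian stress, tau^N = 2 eta d
    return (beta * tau_newton
            + (1.0 - beta) * tau_h_micro   # hydrodynamic stress measured at the microscale
            + tau_star_micro               # non-ideal part: polymers, colloids, walls
            + tau_kin_micro)               # kinetic (thermal) contribution
```

Intermediate values of beta trade microscopic fidelity for numerical stability, which is the behaviour explored in the results section.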
Considering the representation of the fluid in a Lagrangian framework (Español & Revenga, 2003), and the previous decomposition (7), the divergence of the total stress in (1) takes the form

$$\nabla\cdot\boldsymbol{\Pi} = -\nabla p + \beta\left[\eta\nabla^{2}\mathbf{v} + \left(\zeta + \frac{\eta}{d}\right)\nabla(\nabla\cdot\mathbf{v})\right] + \nabla\cdot\left[(1-\beta)\,\bar{\boldsymbol{\tau}}^{h} + \bar{\boldsymbol{\tau}}^{*} + \bar{\boldsymbol{\tau}}^{k}\right], \qquad (8)$$

where $d$ is the dimension, and $\eta$ and $\zeta$ are the shear and bulk viscosities, respectively.
In general, since we aim to incorporate the hydrodynamic interactions of the fluid at both scales, a critical requirement for the microscale solver is the capability to model both simple and complex fluids. Here, we model both macro and micro scales using SDPD, discretizing the fluctuating NS equations as a set of interacting particles with positions $\mathbf{r}_i$ and velocities $\mathbf{v}_i$. The system is constituted by particles with a volume $\mathcal{V}_i$, such that $1/\mathcal{V}_i = n_i = \sum_j W(r_{ij}, h)$, where $n_i$ is the number density of particles, $r_{ij} = |\mathbf{r}_i - \mathbf{r}_j|$, and $W(r, h)$ is an interpolant kernel with finite support $h$, normalized to one. Additionally, to discretize the NS equations a positive function $F_{ij}$ is introduced such that $\nabla_i W(r_{ij}, h) = -F_{ij}\,\mathbf{r}_{ij}$. From now on, when describing each scale, we identify the discrete particles at the microscales with the subindices $i$ and $j$, and at the macroscale with $I$ and $J$.
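As an illustration of these definitions, the sketch below implements a kernel, its associated positive function $F$, and the number density summation. We use the Lucy kernel in 2D, a common choice in SDPD; the paper does not restate its kernel here, so this choice is an assumption:

```python
import numpy as np

def lucy_kernel_2d(r, h):
    """Lucy interpolant with finite support h, normalized to one in 2D."""
    q = r / h
    W = (5.0 / (np.pi * h**2)) * (1.0 + 3.0 * q) * (1.0 - q) ** 3
    return np.where(q < 1.0, W, 0.0)

def lucy_F_2d(r, h):
    """Positive function F with grad_i W(r_ij) = -F_ij * r_ij (2D Lucy kernel)."""
    q = r / h
    F = (60.0 / (np.pi * h**4)) * (1.0 - q) ** 2
    return np.where(q < 1.0, F, 0.0)

def number_density(positions, h):
    """n_i = sum_j W(r_ij, h), including the self-contribution j = i."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return lucy_kernel_2d(r, h).sum(axis=1)
```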
Macroscales
At the macroscales, when the volume $\mathcal{V}_I$ of the discretizing particles approaches continuum scales and thermal fluctuations are negligible, SDPD is equivalent to the smoothed particle hydrodynamics method (Vázquez-Quesada et al., 2009). For this scale, the geometry and type of flow prescribe the boundary conditions on $\partial\Omega$. The SDPD discretized equations for (1), describing the particles' positions, densities, and momenta for a fluid without external forces, are expressed as

$$\dot{\mathbf{r}}_I = \mathbf{v}_I, \qquad (9)$$

$$\rho_I = m_I \sum_J \bar{W}(r_{IJ}, \bar{h}), \qquad (10)$$

$$m_I \dot{\mathbf{v}}_I = \sum_J \left[\frac{p_I}{n_I^{2}} + \frac{p_J}{n_J^{2}}\right]\bar{F}_{IJ}\,\mathbf{r}_{IJ} - \beta\sum_J \frac{\bar{F}_{IJ}}{n_I n_J}\left[a\,\mathbf{v}_{IJ} + b\,(\mathbf{v}_{IJ}\cdot\mathbf{e}_{IJ})\,\mathbf{e}_{IJ}\right] + \sum_J \left[\frac{\bar{\mathcal{T}}_I}{n_I^{2}} + \frac{\bar{\mathcal{T}}_J}{n_J^{2}}\right]\cdot\bar{F}_{IJ}\,\mathbf{r}_{IJ}, \qquad (11)$$

where $\mathbf{r}_{IJ} = \mathbf{r}_I - \mathbf{r}_J$, $\mathbf{v}_{IJ} = \mathbf{v}_I - \mathbf{v}_J$, and $\mathbf{e}_{IJ} = \mathbf{r}_{IJ}/r_{IJ}$. In (11), $\bar{W}$ and $\bar{F}_{IJ}$ are expressed in terms of the macroscales, indicating their correspondence with an interpolation kernel of finite support $\bar{h}$. The term $p_I$ is the density-dependent pressure, and $a$ and $b$ are friction coefficients related to the shear and bulk viscosities of the fluid through $a = (d+2)\eta/d - \zeta$ and $b = (d+2)(\zeta + \eta/d)$ (for $d = 2, 3$). The microscopically-informed tensor $\bar{\mathcal{T}}_I$ is given by

$$\bar{\mathcal{T}}_I = (1-\beta)\,\bar{\boldsymbol{\tau}}^{h}_I + \bar{\boldsymbol{\tau}}^{*}_I + \bar{\boldsymbol{\tau}}^{k}_I. \qquad (12)$$

The terms $\bar{\boldsymbol{\tau}}^{h}$, $\bar{\boldsymbol{\tau}}^{*}$, and $\bar{\boldsymbol{\tau}}^{k}$ are obtained from the microscale; their representation is detailed in subsection 2.4.
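The pairwise structure of (9)-(11) can be made explicit with a short sketch. This is an illustrative pairwise loop with unit masses and our own naming; signs and prefactors follow common SDPD conventions and are not a definitive restatement of the paper's solver:

```python
import numpy as np

def macro_momentum_rhs(pos, vel, press, n, tau_bar, h, a, b, beta, F):
    """Sketch of the SDPD macroscale right-hand side after eqs. (9)-(11).

    tau_bar[I] is the microscopically informed tensor of macro particle I;
    F(r, h) is the positive kernel function (e.g. lucy_F_2d above)."""
    dvdt = np.zeros_like(vel)
    for I in range(len(pos)):
        for J in range(len(pos)):
            rIJ = pos[I] - pos[J]
            r = np.linalg.norm(rIJ)
            if J == I or r >= h:
                continue
            FIJ, eIJ, vIJ = F(r, h), rIJ / r, vel[I] - vel[J]
            # density-dependent pressure contribution
            dvdt[I] += (press[I] / n[I]**2 + press[J] / n[J]**2) * FIJ * rIJ
            # macroscopic (Newtonian) viscous forces, weighted by beta
            dvdt[I] -= beta * (FIJ / (n[I] * n[J])) * (a * vIJ + b * np.dot(vIJ, eIJ) * eIJ)
            # divergence of the microscopically informed stress tensor
            dvdt[I] += (tau_bar[I] / n[I]**2 + tau_bar[J] / n[J]**2) @ (FIJ * rIJ)
    return dvdt
```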
Microscales
At the microscales, the SDPD (Ellero & Español, 2018) equations contain both deterministic and stochastic contributions, the latter accounting consistently for thermal fluctuations. The balance equations are then given by

$$\dot{\mathbf{r}}_i = \mathbf{v}_i, \qquad \rho_i = m_i \sum_j W(r_{ij}, h'), \qquad (13)$$

$$m_i \dot{\mathbf{v}}_i = \sum_j \left[\frac{p_i}{n_i^{2}} + \frac{p_j}{n_j^{2}}\right]F_{ij}\,\mathbf{r}_{ij} - \sum_j \frac{F_{ij}}{n_i n_j}\left[a\,\mathbf{v}_{ij} + b\,(\mathbf{v}_{ij}\cdot\mathbf{e}_{ij})\,\mathbf{e}_{ij}\right] + m_i\frac{d\tilde{\mathbf{v}}_i}{dt}, \qquad (14)$$

$$m_i\, d\tilde{\mathbf{v}}_i = \sum_j \left(A_{ij}\, d\tilde{\mathcal{W}}_{ij} + B_{ij}\,\frac{\mathrm{tr}[d\mathcal{W}_{ij}]}{d}\,\mathbb{1}\right)\cdot\mathbf{e}_{ij}, \qquad (15)$$

where $\mathbf{v}_{ij} = \mathbf{v}_i - \mathbf{v}_j$, and $a$ and $b$ are friction coefficients related to the shear and bulk viscosities of the fluid through $a = (d+2)\eta'/d - \zeta'$ and $b = (d+2)(\zeta' + \eta'/d)$. Thermal fluctuations are consistently incorporated into the model through the stochastic contribution to the momentum equation given by (15), where $d\mathcal{W}_{ij}$ is a matrix of independent increments of a Wiener process for each pair $i, j$ of particles, and $d\tilde{\mathcal{W}}_{ij}$ is its traceless symmetric part, given by

$$d\tilde{\mathcal{W}}_{ij} = \frac{1}{2}\left(d\mathcal{W}_{ij} + d\mathcal{W}_{ij}^{T}\right) - \frac{\mathrm{tr}[d\mathcal{W}_{ij}]}{d}\,\mathbb{1}, \qquad (16)$$

where the independent increments of the Wiener processes satisfy the Itô rule

$$d\mathcal{W}^{\alpha\beta}_{ii'}\,d\mathcal{W}^{\alpha'\beta'}_{jj'} = \left[\delta_{ij}\delta_{i'j'} + \delta_{ij'}\delta_{i'j}\right]\delta^{\alpha\alpha'}\delta^{\beta\beta'}\,dt. \qquad (17)$$

To satisfy the fluctuation-dissipation balance, the amplitudes $A_{ij}$ and $B_{ij}$ of the thermal noises are related to the friction coefficients $a$ and $b$ through the standard SDPD relations (Español & Revenga, 2003), e.g.

$$A_{ij} = \left[4 k_B T\, a\, \frac{F_{ij}}{n_i n_j}\right]^{1/2}, \qquad (18)$$

with an analogous expression for $B_{ij}$. We remark that in (15) the prime notation for $A_{ij}$ and $B_{ij}$ is omitted since thermal fluctuations are only accounted for at the microscales.
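The construction of the pairwise noise in (15)-(16) is easy to sketch. Below, a minimal helper generates one Wiener-matrix increment and its traceless symmetric part; the names are ours, and the reuse of the same increment for the pair (j, i) is the standard device to conserve linear momentum in DPD-type thermostats:

```python
import numpy as np

def pairwise_noise(dt, d=2, rng=None):
    """One Wiener-matrix increment per particle pair and its traceless
    symmetric part, as used in the stochastic force of eq. (15)."""
    rng = np.random.default_rng() if rng is None else rng
    dW = rng.normal(0.0, np.sqrt(dt), size=(d, d))       # i.i.d. increments, variance dt
    dW_sym = 0.5 * (dW + dW.T)
    dW_tilde = dW_sym - (np.trace(dW) / d) * np.eye(d)   # traceless symmetric part, eq. (16)
    return dW, dW_tilde
```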
Coupling
In the proposed LHMM, the macro-to-micro transfer of information occurs through the macroscale velocity field $\mathbf{v}$, which defines the boundary conditions of the microscale subsystems, whereas the micro-to-macro transfer occurs via the stress tensor $\bar{\boldsymbol{\tau}}$. We denote by $N_{\mu}$ the number of microscopic subsystems generated to compute microscale-informed stresses. In general, $N_{\mu}$ can be chosen depending on the specific macroscopic regions where the stresses need to be computed. However, to facilitate the presentation and validation of the method, we define $N_{\mu} = \bar{N}$ (with $\bar{N}$ the number of macroscopic particles), such that one microscopic simulation is generated for each macroscopic particle. Of course, microscopic simulations contain a large number of microscopic degrees of freedom (e.g. polymers, colloids, droplets) over which the mean average is taken. In general, the total number of degrees of freedom (particles) required to describe a system using the LHMM decreases compared to a fully-resolved microscopic system as the length scale separation between scales increases (i.e. towards a continuum representation of the fluid), which offers significant advantages from a computational standpoint. In section 2.5 we further discuss these computational aspects. We present the general stages of the coupling in figure 2 and Algorithm 1 in Appendix 1. At the microscale, we use a recently proposed generalized boundary condition scheme (Moreno & Ellero, 2021) to model arbitrary flow configurations, allowing us to account for non-trivial velocity fields (i.e. mixed shear and extensional). We decompose the microscopic simulation domain into three regions: buffer, boundary-condition ($\Omega_{bc}$), and core ($\Omega_c$), as shown in figure 2. The properties of the fluid, such as the stress tensor, are evaluated in the core region. In the boundary-condition region, the velocity of the particles is prescribed from the macroscopic velocity field $\mathbf{v}$. The system is further stabilized, and periodic boundary conditions are adopted, owing to the buffer region. A detailed description of this domain decomposition approach can be found in (Moreno & Ellero, 2021). To reconstruct the velocity field in the boundary region $\Omega_{bc}$, we use the velocity gradient $\nabla\mathbf{v}_I$ at the position of the $I$th macroscale particle. The macroscopic $\nabla\mathbf{v}_I$ can be approximated using the SDPD interpolation kernel, such that

$$\nabla\mathbf{v}_I \approx \sum_J \frac{m_J}{\rho_J}\,(\mathbf{v}_J - \mathbf{v}_I)\otimes\nabla_I\bar{W}_{IJ}. \qquad (19)$$

This first-order approximation allows us to compute velocity gradients at minimal computational cost during the macroscopic force calculation stage; in the results section, we validate this approach. Other, higher-order alternatives to compute $\nabla\mathbf{v}_I$ are also possible, but they would require an additional spatial interpolation step (Zhang & Batra, 2004). Using (19), the velocity $\mathbf{v}'$ of the microscale particles located in the boundary-condition region is then determined by

$$\mathbf{v}'(\mathbf{r}') = \nabla\mathbf{v}_I\cdot\mathbf{r}', \qquad (20)$$

where the macroscopic velocity field is linearly interpolated taking the macroscopic particle centred at the origin of the box (see figure 2). The extent of the microscopic subsystems is given by the characteristic length $L_{\Omega'}$. In general, we consider all microscopic subsystems to have the same size $L_{\Omega'}$; however, different sizes can be used if the specific features of the flow require it.
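A minimal sketch of (19) and (20) follows; the callable grad_W is assumed to return the kernel gradient with respect to the position of particle I (for the Lucy kernel above, grad_W(rIJ, h) = -lucy_F_2d(|rIJ|, h) * rIJ):

```python
import numpy as np

def velocity_gradient(I, pos, vel, mass, rho, h, grad_W):
    """First-order SPH estimate of grad(v) at macro particle I, eq. (19) style."""
    dim = pos.shape[1]
    g = np.zeros((dim, dim))
    for J in range(len(pos)):
        rIJ = pos[I] - pos[J]
        if J == I or np.linalg.norm(rIJ) >= h:
            continue
        g += (mass[J] / rho[J]) * np.outer(vel[J] - vel[I], grad_W(rIJ, h))
    return g

def boundary_velocity(grad_v, r_prime):
    """Linear field imposed on BC-region micro particles, eq. (20) style.
    r_prime is measured from the macro particle at the origin of the micro box."""
    return grad_v @ r_prime
```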
Micro to macro:
Given a macroscopic particle $I$, we determine its stress tensor $\bar{\boldsymbol{\tau}}_I$ from the microscales. Here, we adopt the Irving-Kirkwood (IK) methodology (Yang et al., 2012), such that the stress is given by

$$\bar{\boldsymbol{\tau}}(\mathbf{x}) = \bar{\boldsymbol{\tau}}^{k}(\mathbf{x}) + \bar{\boldsymbol{\tau}}^{p}(\mathbf{x}), \qquad (21)$$

where $\bar{\boldsymbol{\tau}}^{k}$ and $\bar{\boldsymbol{\tau}}^{p}$ account for the kinetic and potential contributions to the stress tensor, respectively. The potential contribution contains both hydrodynamic and non-hydrodynamic terms. We use a weighting function $\omega(\mathbf{r}_i, \mathbf{x})$ for the spatial averaging, whereas time averaging is conducted over a range of microscopic time steps; the number of time steps $N_t$ used to perform the averaging typically spans the duration of the microscale simulation. The kinetic part is then given by (Tadmor & Miller, 2011)

$$\bar{\boldsymbol{\tau}}^{k}(\mathbf{x}) = -\frac{1}{N_t}\sum_{t}\sum_{i} m_i\,\mathbf{u}_i(t)\otimes\mathbf{u}_i(t)\,\omega(\mathbf{r}_i, \mathbf{x}), \qquad (22)$$

where $\mathbf{u}_i(t)$ is the relative (peculiar) velocity of particle $i$ at time step $t$. In the IK approach, if $f_{ij}$ is the magnitude of the force between particles $i$ and $j$, the force can be expressed in a central force decomposition as $\mathbf{f}_{ij} = f_{ij}\,\mathbf{e}_{ij}$. With this, the potential part of the stress tensor reads

$$\bar{\boldsymbol{\tau}}^{p}(\mathbf{x}) = \frac{1}{2 N_t}\sum_{t}\sum_{i\neq j} f_{ij}\,\frac{\mathbf{r}_{ij}\otimes\mathbf{r}_{ij}}{r_{ij}}\,B(\mathbf{x}; \mathbf{r}_i, \mathbf{r}_j).$$

The bond function $B$ is the integrated weight of the bond for a weighting function centred at $\mathbf{x}$. If the weighting function $\omega(\mathbf{y} - \mathbf{x})$ is taken as constant within a domain $\Omega_c$, and zero elsewhere, and if the bond function $B$ is calculated only with bonds fully contained in $\Omega_c$, we have $B(\mathbf{x}; \mathbf{r}_i, \mathbf{r}_j) = 1/\mathrm{Vol}(\Omega_c)$ for $\mathbf{r}_i, \mathbf{r}_j \in \Omega_c$. For more detailed descriptions and extended validation benchmarks, we refer the reader to Moreno & Ellero (2021).
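For a single snapshot, the constant-weight IK estimate over the core region reduces to a few lines; the sketch below assumes peculiar (mean-subtracted) velocities, central pair-force magnitudes, and bonds fully contained in the core, with sign conventions as illustrative choices:

```python
import numpy as np

def ik_stress(core_pos, core_vel_fluct, mass, pair_forces, core_volume):
    """Irving-Kirkwood stress over the core region, following eqs. (21)-(22).

    pair_forces is an iterable of (ri, rj, f) with f the magnitude of the
    central force; only bonds fully contained in the core should be passed,
    so the bond function reduces to B = 1 / core_volume."""
    dim = core_pos.shape[1]
    tau_k = np.zeros((dim, dim))
    for m, u in zip(mass, core_vel_fluct):           # peculiar velocities u_i
        tau_k -= m * np.outer(u, u)                  # kinetic contribution
    tau_p = np.zeros((dim, dim))
    for ri, rj, f in pair_forces:
        rij = ri - rj
        tau_p += 0.5 * f * np.outer(rij, rij) / np.linalg.norm(rij)
    return (tau_k + tau_p) / core_volume             # average over the core volume
```

Time averaging over $N_t$ snapshots then simply accumulates this estimate across the duration of the microscale run.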
Time-stepping
A critical aspect of heterogeneous multiscale methods is the time-stepping approach used to pass information between scales (E et al., 2009; Lockerby et al., 2013). At the macroscales, the time step is given by $\Delta\bar{t}$, whereas the overall time scale $T$ of the investigated system is related to the operative conditions, such as the shear rate $\dot{\gamma}$. Thus, the macroscopic scales define the extent of the overall simulations, requiring a minimum of $\bar{N}_t$ steps ($T = \bar{N}_t\,\Delta\bar{t}$). The time-stepping approach depends on the time-scale separation between the macro and micro systems. If we denote the characteristic relaxation time of each scale as $\lambda$, systems with large time-scale separation satisfy $\lambda' \ll \bar{\lambda}$, whereas for highly coupled scales $\lambda' \approx \bar{\lambda}$. At the microscales, the time step $\Delta t'$ sets the condition to accurately resolve the stress evolution of the system. The relaxation of the microscales requires a minimal number of time steps $N'_t$, such that $\lambda' = N'_t\,\Delta t'$. In practice, microscopic simulations use $N'_t$ large enough ($\lambda' < N'_t\,\Delta t'$) to ensure the proper stabilization of the system and to reduce the noise-to-signal ratio.
In multiscale methods, the relaxation times of the macro and micro systems determine the ratio $\Delta\bar{t}/\Delta t'$. As the limit condition for the highest temporal resolution we can consider the case $\Delta\bar{t}/\Delta t' = 1$. However, in practice this would not correspond to a temporal multiscale method, but to a fully microscopic description of the system; in those cases, the gain in performance of HMM comes only from the spatial upscaling of the stress. Existent LL schemes (Yasuda & Yamamoto, 2014; Sato & Taniguchi, 2017; Sato et al., 2019) that use time steps of the same order for macro and micro solvers are limited to problems with microscale temporal resolutions. Otherwise, in the case of stochastic microscale simulations (Morii & Kawakatsu, 2021), equilibration of the microscales is assumed through mean-field approximations. Due to these practical restrictions, different time-stepping approaches have recently been investigated (E et al., 2009; Lockerby et al., 2013) to increase the temporal gain of HMMs and reach macroscopic time scales. Depending on the order of magnitude of $\Delta\bar{t}/\Delta t'$, different time-stepping schemes can be used. In figure 3, we illustrate the basic time-stepping sequence: i) scattering $\nabla\mathbf{v}$ onto the individual microscopic solvers; ii) solving the microscales under arbitrary BCs; iii) gathering $\bar{\boldsymbol{\tau}}$ for the macroscales; and iv) solving the macroscales. The simplest time-stepping, typically referred to as continuous coupling between scales (see figure 3), considers that the micro solvers are evolved during $N'_t\,\Delta t'$, whereas the time integration at the macroscale occurs with $\Delta\bar{t} = N'_t\,\Delta t'$. An alternative to achieve both spatial and temporal gain within our LL scheme is the heterogeneous-coupling time stepping (Lockerby et al., 2013) (a.k.a. time burst), also presented in figure 3. In time-burst approaches, the macroscales are evolved using $\Delta\bar{t} = k\,\Delta t'$, where $k \gg N'_t$; therefore, the microscale behaviour is extrapolated over larger periods. Compared to continuous coupling, the overall gain of heterogeneous time stepping is given by the ratio $k/N'_t$. In general, for highly coupled scales ($\lambda' \approx \bar{\lambda}$) we would require $k \sim N'_t$ to recover the continuous coupling. EE and EL schemes with time-burst time stepping have been adopted for systems with large enough time-scale separation ($\lambda' \ll \bar{\lambda}$). However, due to the incompatibility of a simple Eulerian description with capturing memory effects, this approximation of constant microscopic stresses over a larger macroscopic time exhibits larger deviations as the microscale relaxation time increases. These limitations can be significantly relieved using LL schemes (E et al., 2009). Here, depending on the type of system and scale separation, we use both continuous and heterogeneous coupling in time.
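The four coupling stages above can be summarized in a driver loop. The sketch below uses hypothetical wrapper objects around the macro and micro SDPD solvers (the method names are ours, not the paper's or LAMMPS's API):

```python
def lhmm_step(macro, micro, dt_macro, n_micro_steps, beta):
    """One LHMM step: (i) scatter grad(v), (ii) run micro solvers,
    (iii) gather stresses, (iv) advance the macro state.

    Continuous coupling: dt_macro = n_micro_steps * dt_micro.
    Time-burst coupling: dt_macro >> n_micro_steps * dt_micro, i.e. the
    micro behaviour is extrapolated over the macroscopic step."""
    for solver, grad_v in zip(micro, macro.velocity_gradients()):   # (i)
        solver.set_boundary_condition(grad_v)                       # (ii)
        solver.run(n_micro_steps)    # micro state is kept, not re-equilibrated:
                                     # flow history (memory) persists by construction
    stresses = [solver.irving_kirkwood_stress() for solver in micro]  # (iii)
    macro.set_micro_stress(stresses, beta=beta)
    macro.advance(dt_macro)                                           # (iv)
```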
The Lagrangian nature of the proposed framework is a critical ingredient for performing the multiscale coupling with SDPD. The flow history is by default accessible to every element of fluid (SPH particle), which carries its microstructure (in a Lagrangian sense). As a consequence, the initial conditions (SDPD positions/velocities) at every macroscopic time step can be taken as those at the end of the previous time step, regardless of whether the microstructure has relaxed within it or not. This idea allows us to apply HMM directly to the flow of complex fluids by running SDPD simulations in parallel (one for each SPH particle) undergoing inhomogeneous and possibly unsteady velocity gradients obtained from the macroscopic SPH calculation. As discussed in Bertevas et al. (2009), accurate IK estimates in mesoscopic calculations typically require periodic representative elementary volumes (RVE) three to ten times larger in linear size than the suspended solid particles, and therefore we expect a significant computational gain when applying this procedure to SPH fluid volumes much larger than the RVE.
LHMM Implementation
Since each macroscopic particle is equipped with its own microscale solver, the overall cost of HMM simulations increases compared to constitutive-equation-based approaches. However, the expected cost is significantly reduced for fluids that must be resolved at the microscopic scale (the polymer-coil or colloid scale, for example). LL schemes offer parallelization advantages, allowing each macro particle to compute its stress independently. Here, we implement the LHMM using a c++ driver coupled with multiple parallel instances of LAMMPS (Plimpton, 1995) to solve both macro and micro scales. In figure 2 we illustrate the parallelization approach used. An important feature of the current implementation is that both macro and micro scales can be fully parallelized separately. This has significant advantages compared to fully microscopically resolved systems, whose computational cost does not scale linearly as the size of the macroscopic domain reaches continuum scales.
Since both scales are solved using SDPD, we can estimate the relative cost of solving a given system in terms of the total number of discretizing particles, or degrees of freedom (DOFs), used. Considering a macroscopic system of size $\bar{L}$ that is fully microscopically resolved with interparticle distance $dx'$, the total number of DOFs is given by $N_{\text{full}} = (\bar{L}/dx')^{d}$, where $d$ is the dimension of the system. The same system in an LHMM discretization requires $N_{\text{LHMM}} = \bar{N}\,N'$ total particles, where $\bar{N} = (\bar{L}/d\bar{x})^{d}$ and $N' = (L_{\Omega'}/dx')^{d}$, with $L_{\Omega'}$ the size of the sampled microscopic domain. Additionally, if we define the spatial and temporal gains of the LHMM method as $G_s = \bar{h}/L_{\Omega'}$ and $G_t = \Delta\bar{t}/\Delta t'$, respectively, the total number of DOFs for the LHMM can be expressed as

$$N_{\text{LHMM}} = \left(\frac{L_{\Omega'}}{d\bar{x}}\right)^{d} N_{\text{full}} = \left(\frac{\kappa}{G_s}\right)^{d} N_{\text{full}}, \qquad (23)$$

where the ratio $L_{\Omega'}/d\bar{x}$ is inversely proportional to the spatial gain achieved by the LHMM, since at the macroscale $\bar{h} = \kappa\,d\bar{x}$. The value of $\kappa$ is typically determined by the required number of neighbour points for the kernel interpolation and is related to the accuracy of the method (Ellero & Adams, 2011). Herein, we use $\kappa = 4$ (Bian et al., 2012) (for both macro and micro scales). From (23), we can readily identify that, compared to a fully resolved system, the LHMM entails a reduction in the required DOFs for systems with $G_s > 4$. In general, the goal of HMM is to model systems with spatial gains orders of magnitude larger, to tackle continuum-scale problems with microscopically detailed effects.

Figure 3: Time-stepping approaches and information passing between scales. In LL schemes it is in principle possible to pass information from the micro solvers before full equilibration is reached, since the history-dependent stress is naturally tracked in the Lagrangian framework.
Another computational gain associated with the LHMM is the flexibility to use larger time steps compared to a fully-resolved system. The Courant-Friedrichs-Lewy (CFL) condition determines the stability criterion for the minimum integration time step at the microscales, $\Delta t' = dx'/c'$, where $c'$ is the artificial speed of sound. As discussed in the previous section, for a target macroscopic time scale $T$, the total number of time steps required is then $N^{t}_{\text{full}} = T/\Delta t' = T c'/dx'$. Thus, for instance, modelling a system on the order of seconds with a nanoscopic resolution would typically require $N^{t}_{\text{full}} \propto 10^{12}$ time steps. In the LHMM, the CFL condition at the macroscale allows the use of $\Delta\bar{t} = d\bar{x}/\bar{c} \propto G_s\,\Delta t'$, which scales with the spatial gain, so it is in principle feasible to integrate the macroscale equations over significantly larger time steps. It is worth noting that a slightly smaller macroscopic time step may be preferred to comply with the characteristic microscopic relaxation time, as discussed in the previous section. In HMM, the temporal gain is in general limited by the capability of the method to accurately keep track of the history-dependent stress. This is an important feature of the proposed fully Lagrangian scheme, allowing the use of larger macroscopic time steps compared to Eulerian-Lagrangian settings.
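The arithmetic behind (23) and the CFL argument is simple enough to sketch; the kernel ratio $\kappa = 4$ is taken from the text, while the remaining numbers are illustrative assumptions:

```python
# Illustrative DOF and time-step counts from eq. (23) and the CFL argument.
kappa, d = 4, 2
Gs = 1.0e3                      # assumed spatial gain h_bar / L_micro
L_over_dx = 1.0e6               # assumed domain size in microscopic spacings

N_full = L_over_dx ** d                       # fully resolved DOF
N_lhmm = (kappa / Gs) ** d * N_full           # eq. (23)
print(f"DOF  full: {N_full:.2e}   LHMM: {N_lhmm:.2e}")

# CFL-limited step counts to reach a horizon T (in units of dx'/c'):
T = 1.0e12
steps_full = T                                # dt' = dx' / c'
steps_macro = T / Gs                          # dt_bar = dx_bar / c ~ Gs * dt'
print(f"steps  full: {steps_full:.2e}   LHMM macro: {steps_macro:.2e}")
```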
Macro and micro descriptions
We conduct a series of benchmark tests for a simple Newtonian fluid to validate the consistency and stability of the proposed multiscale method. We consider a domain size $L = 20\bar{h}$, whereas the size of the contraction is $4\bar{h}$. At the walls, we adopt the methodology of Bian et al. (2012), such that the velocity of the wall particles used to compute the viscous forces is extrapolated to enforce the no-slip boundary condition, $\mathbf{v} = 0$, at the fluid-wall interface.
To illustrate the flexibility of the proposed LHMM framework at the microscales, we model various physical problems, adopting different generic SDPD models for polymeric and multiphase systems. We must note that these complex fluids are used here only to showcase our multiscale methodology; thus, a systematic parametric analysis of the specific systems is out of the scope of this work and will be addressed in future publications.
Oligomer melts and solutions
We model non-Newtonian fluids by constructing melts and solutions of oligomers of $N = 8$ and $N = 16$ connected SDPD particles. We use a finitely extensible nonlinear elastic (FENE) potential of the form $U_{\text{fene}} = -\frac{1}{2} k r_0^{2}\ln\left[1 - (r/r_0)^{2}\right]$, where $k$ and $r_0$ are the bond energy constant and maximum bond extension, respectively. In our simulations we fix $k = 23\,k_B T/r_0^{2}$ and $r_0 = 1.5\,dx'$. We characterize the oligomers in the system through their end-to-end vector $\mathbf{R}_e$, to determine the mean end-to-end distance $R_e^{2} = \langle|\mathbf{R}_e|^{2}\rangle$. The measured equilibrium end-to-end radius under no-flow conditions is $R_e = 0.3 \pm 0.02$. Given the size of the microdomain and the oligomers, the microscales are sampled at size ratios of approximately $10 < L_{\Omega'}/R_e \lesssim 13$. For polymeric systems constructed in a similar fashion in SDPD, Simavilla & Ellero (2022) have shown that the polymer relaxation times agree with the Zimm model. Herein, we identify relaxation times on the order of $\lambda' \approx 6\,t_{\text{SDPD}}$ for $N = 8$, and on the order of $\lambda' \approx 9\,t_{\text{SDPD}}$ for $N = 16$. The Weissenberg numbers ($Wi = \lambda'\dot{\gamma}$) investigated in the different examples, fully microscopic and multiscale, ranged from 0.3 to 100.
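For reference, a minimal sketch of the FENE bond force implied by the potential above is given below; the default parameter values are placeholders in reduced units consistent with the text:

```python
import numpy as np

def fene_force(r_vec, k=23.0, r0=0.3):
    """Bond force from U = -(k r0^2 / 2) ln[1 - (r/r0)^2].

    The force diverges as r -> r0, enforcing the maximum bond extension."""
    q2 = np.dot(r_vec, r_vec) / r0**2
    if q2 >= 1.0:
        raise ValueError("FENE bond over-stretched")
    return -k * r_vec / (1.0 - q2)   # F = -dU/dr e_r = -k r / (1 - (r/r0)^2)

def end_to_end(chain_positions):
    """End-to-end vector R_e of one oligomer (first to last bead)."""
    return chain_positions[-1] - chain_positions[0]
```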
Two phase flow
We also construct microscale systems constituted by two immiscible phases, $A$ and $B$. The composition of each phase is denoted $c_\alpha$, for $\alpha = A, B$, such that the binary mixture satisfies $c_A + c_B = 1$. We adopt the SDPD scheme proposed by Lei et al. (2016) for multiphase flows. In this scheme, the momentum equation at the microscale (14) incorporates an additional pairwise term $\mathbf{F}^{\text{int}}_{ij}$ that accounts for the interfacial forces between the two phases, of the form

$$\mathbf{F}^{\text{int}}_{ij} = s_{ij}\,\phi(r_{ij})\,\mathbf{e}_{ij}, \qquad s_{ij} = \begin{cases} s_{AA}, & \mathbf{r}_i \in \Omega_A \text{ and } \mathbf{r}_j \in \Omega_A, \\ s_{BB}, & \mathbf{r}_i \in \Omega_B \text{ and } \mathbf{r}_j \in \Omega_B, \\ s_{AB}, & \mathbf{r}_i \in \Omega_A \text{ and } \mathbf{r}_j \in \Omega_B, \end{cases} \qquad (24)$$

where $\phi(r)$ is a short-ranged shape factor that sets the ranges of the repulsive and attractive parts of the interaction, chosen such that a relatively uniform particle distribution is obtained for a given interfacial tension; the like-phase interaction parameters satisfy $s_{AA} = s_{BB} = 10^{3}$, and the magnitude of $s_{AB}$ can be obtained from the surface tension and particle density of the system (Lei et al., 2016). Here, we model the multiphase systems considering a viscosity ratio between the phases $\eta_A/\eta_B = 1$ and an interfacial tension $\gamma_{AB} = 0.5$. The characteristic times $t_{ps}$ for total phase separation of a phase with concentrations of 0.2 and 0.5 (starting from a homogeneous mixture) were identified as $t_{ps} \sim 140\,t_{\text{SDPD}}$ and $t_{ps} \sim 40\,t_{\text{SDPD}}$, respectively. In general, the sizes of the investigated microscale systems ($4h' < L_{\Omega'} < 10h'$) and the shear rates used lead to capillary numbers $Ca = (\eta\dot{\gamma}L_{\Omega'})/(2\gamma_{AB}) > 10$, typical of highly deformable and breakable droplets of the suspended phase (Kapiamba, 2022). It has been shown experimentally that at low $Ca$ the steady-state morphology of multiphase systems can be described as a single-valued function of the flow. However, when the microstructural properties are determined by the balance between break-up and coalescence of the phases, the morphology can be controlled by the initial conditions of the system, leading to more than one steady-state morphology (Minale et al., 1997).
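The phase-dependent selection of the interaction coefficient in (24) is easy to sketch; the exact shape factor and parameter values should be taken from Lei et al. (2016), so the callable phi below is left as an assumption:

```python
import numpy as np

def interface_force(phase_i, phase_j, r_vec, s_same, s_cross, phi):
    """Phase-dependent pairwise force F_int = s_ij * phi(r) * e_ij, eq. (24) style.

    s_ij takes the like-pair value (AA, BB) or the unlike-pair value (AB);
    a sufficiently repulsive unlike interaction drives the demixing."""
    r = np.linalg.norm(r_vec)
    s_ij = s_same if phase_i == phase_j else s_cross
    return s_ij * phi(r) * (r_vec / r)
```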
Results and discussion
The proper estimation of the velocity gradient at the macroscales, as well as the correct measurement of the stress tensor from the microsystems, are key components of the proposed LHMM. Therefore, before validating a fully coupled LHMM system, we verify that the numerical errors associated with the particle resolution at each scale are negligible and that the arbitrary boundary conditions used for the microscales do not introduce spurious artifacts in the stress of complex systems.
Macroscopic particle resolution, velocity gradient and stress tensor interpolation
We determine the minimal macroscopic resolution required to capture the characteristic velocity profile in a reverse Poiseuille flow (RPF). We validate the convergence of the velocity field in a domain of size $0.25\bar{L} \times \bar{L}$ with $\bar{L} = 64$, for different particle resolutions $\bar{L}/d\bar{x} = [16, 20, 24, 32]$. The obtained velocity profiles are presented in figure 5. From these tests, we identify that even at the lowest resolution, $\bar{L}/d\bar{x} = 16$, the accuracy of the profile is acceptable for practical purposes. Hereinafter, we evaluate the proposed LHMM using macroscopic resolutions $\bar{L}/d\bar{x} = 16$ and 20, as a good compromise between minimal numerical error and low computational cost.
As discussed in the coupling section, we use (19) to compute the macroscopic velocity gradient. We verified this approximation to $\nabla\mathbf{v}$ in an RPF, for a macroscopic domain of size $10\,d\bar{x} \times 50\,d\bar{x}$. In figure 6, we present the measured velocity and velocity-gradient components (i.e. $\partial v_x/\partial y$ and $\partial v_y/\partial x$), along with the theoretical solutions. Overall, we identified that (19) provides a good approximation of the macroscopic velocity gradient required to define the boundary conditions of the microscale simulations. Even though more refined alternatives to compute $\nabla\mathbf{v}$ exist (Zhang & Batra, 2004), such refinements are out of the scope of the present work.
At the macroscales, the divergence of the stress tensor $\nabla\cdot\boldsymbol{\Pi}$ involves the SDPD interpolation of the microscopically-informed tensor $\bar{\mathcal{T}}$ and the stabilizing parameter $\beta$, according to (11). The accuracy of this interpolation, without the numerical errors associated with the actual microscale subsystems, is estimated using the analytical solution for a Newtonian fluid. This allows us to manufacture microscopic solutions of $\bar{\boldsymbol{\tau}}$ to run micro-macro simulations. The manufactured stress tensor is given by the analytical Newtonian expression $\bar{\boldsymbol{\tau}} = \eta(\nabla\mathbf{v} + \nabla\mathbf{v}^{T})$, evaluated from the velocity gradients determined at the macroscales, from which we can compute $\nabla\cdot\bar{\boldsymbol{\tau}}^{h}$. In figure 7, we present the velocity profiles for systems with various values of $\beta$ and different forcings, corresponding to maximum velocity gradients $\nabla v|_{\text{max}} = 1.2$ and 8. At the evaluated Reynolds numbers and velocity gradients, the flow can be adequately modelled using only the manufactured microscopic solutions ($\beta \approx 0$); that is, the macroscopic stress tensor can be recovered from the microscale systems with minimal interpolation errors at the macroscale. In general, we observe that at modest values of $\beta$ it is possible to fully recover the behaviour of the fluid.
Microscales under rigid rotations
As presented by Moreno & Ellero (2021), complex flow patterns can easily be implemented at the microscales to determine the stresses. In the LHMM, each microscale system may experience temporal variations of the applied velocity gradient even under steady flow conditions, as it travels within an inhomogeneous macroscopic domain along a Lagrangian trajectory. The velocity gradients imposed on the microscopic simulations are referred to a fixed reference frame in the macro domain (see figure 8.a). The use of this reference frame leads to microscopic systems that experience transitions from simple shear to mixed shear-extension as the macroscopic particle rigidly rotates. This transition should of course originate an affine rotation of the measured stress; however, it should not generate any change in the state of stress of the system. As a consequence, an important attribute to verify for the boundary condition scheme of Moreno & Ellero (2021) is that rigid rotations of the velocity field applied on the boundary-condition domain do not alter the microstructure and rheological properties of the fluid.
As a validation test, we construct microscale simulations of oligomeric systems and determine the response of the system as the applied field experiences a large rigid rotation of 45° ($\theta = \pi/4$). We consider a generalized velocity field over the boundary region of the form

$$\mathbf{v}(x, y) = \left(\dot{\epsilon}\,x + \dot{\gamma}\,y,\;\; \chi\,\dot{\gamma}\,x - \dot{\epsilon}\,y\right),$$

where $\chi$ is a free parameter, and $\dot{\epsilon}$ and $\dot{\gamma}$ are the strain and shear rates, respectively. The values of $\chi$, $\dot{\epsilon}$, and $\dot{\gamma}$ define the flow configuration (Bird et al., 1987). The velocity gradient rotated by an angle $\theta$ is given by $\nabla\mathbf{v}(\theta) = \mathbf{Q}(\theta)\cdot\nabla\mathbf{v}\cdot\mathbf{Q}^{T}(\theta)$, where $\mathbf{Q}$ is the rotation matrix. We conduct the simulation in three stages: i) we initially apply a simple shear boundary condition until the system stabilizes; ii) a sudden rotation ($\theta = \pi/4$) of the velocity gradient is applied, letting the system evolve over three times its relaxation time ($3\lambda'$); and iii) the velocity field is suspended to let the system reach the equilibrium no-flow condition.
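The rotated-gradient construction is a two-line operation; the following sketch (our own naming) shows how a pure shear gradient becomes a mixed shear-extension field in the fixed frame once rotated:

```python
import numpy as np

def rotated_gradient(grad_v, theta):
    """Rigid rotation of a 2D velocity gradient: Q(theta) grad_v Q(theta)^T."""
    c, s = np.cos(theta), np.sin(theta)
    Q = np.array([[c, -s], [s, c]])
    return Q @ grad_v @ Q.T

# A pure shear gradient (du_x/dy = 1) rotated by 45 degrees acquires both
# extensional and shear components, i.e. a mixed flow in the fixed frame:
shear = np.array([[0.0, 1.0], [0.0, 0.0]])
print(rotated_gradient(shear, np.pi / 4))   # [[-0.5, 0.5], [-0.5, 0.5]]
```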
In figure 8.b-c, we present the variation of the mean orientation angle $\Delta\varphi$ (the deviation of the mean coil angle from the affinely rotated flow orientation) and the mean end-to-end distance ($R_e$) of the oligomer coils, where $\varphi$ is the angle between the end-to-end vector and the $x$ axis (in the fixed reference frame), $\varphi_v$ is the angle formed by the end-to-end vector and $\mathbf{v}$ when $\theta = 0$, and $\varphi_0$ is the mean angle under no-flow conditions. Since at microscopic scales the orientation can be affected by the thermal fluctuations of the system, in figure 8 we compare the coil state at two different temperatures. In general, we identify that the rotation of the velocity field effectively induces an affine alignment of the mean orientation angle with the flow; thus, as $\theta$ increases, the coils rotate to preserve $\Delta\varphi$. Similarly, the measured size of the coils remains unchanged during the sudden rotation of the flow; therefore, the transition from pure shear to mixed flow does not induce additional stresses on the coils. The reduction of $R_e$ at the final stage of the simulation (under the no-flow condition) is evidence that the stretching of the coils is effectively induced by the imposed flow. Additionally, under the no-flow condition the mean angle of the coils converges to 45°, consistent with randomly distributed chains (angle averaged over the first quadrant). In figure 8.b, the coil reorientation induced by the large sudden change in $\theta$ occurs on time scales smaller than the microscopic relaxation time $\lambda'$. In practice, in an LHMM simulation, large changes in the flow orientation ($\theta$) are not likely to occur in a single macroscopic time step; therefore, we expect orientational relaxation to always occur on time scales smaller than the overall time of a microscopic simulation.
Complex fluid characterization
Before proceeding with the validation of the LHMM, we characterize the modelled fluids at the microscale (oligomer melt and multiphase flow) and corroborate that they effectively exhibit a complex rheological response. In figure 9, we present the response of both the oligomer melt and the multiphase fluid under simple shear. The oligomer melt exhibits the characteristic shear-thinning behaviour, induced by the alignment of the coils in the system as the shear rate increases (Simavilla & Ellero, 2022). The relaxation times for the two chain models used ($N = 8$ and $N = 16$) are $\lambda' \approx 6\,t_{\text{SDPD}}$ and $\lambda' \approx 9\,t_{\text{SDPD}}$. The flow constituted by two liquid phases ($A$ and $B$) also shows a reduction in viscosity as the capillary number of the system increases. At the lowest $Ca$ modelled, the low affinity between phases induces the formation of interfaces, raising the overall viscosity of the system. As the capillary number increases, the mixing or alignment of the phases is favoured, driving the system towards the viscosities of the individual phases. For multiphase flow, the characteristic phase-separation time $t_{ps}$ is a relevant time scale that can determine the stress level of the system. In general, the flow can affect the rate and trajectory of the phase separation, leading to metastable microstructures (Minale et al., 1997), or completely inhibiting the phase separation. For comparison, in figure 9.b we consider two different initial conditions: i) a fully phase-separated system, and ii) fully mixed phases. In scenario (i) the $A$ phase is modelled as a phase-separated droplet subjected to a shear flow, corresponding to the condition where $t_{ps}$ has been reached (complete phase separation has occurred). In contrast, in (ii) both phases are randomly distributed in the domain when the shear flow is imposed; thus, the stress evolution of the system occurs on time scales smaller than $t_{ps}$. Overall, we observe that at low shear rates the viscosity of the system is strongly related to the extent of the phase separation, whereas at high shear rates (large $Ca$) the effects of interface formation are significantly reduced and the system exhibits the characteristic single-phase viscosity. In the Appendix (figure 16), we include the temporal variation of the stress for four different capillary numbers to highlight the differences in the stress evolution for systems undergoing phase separation. In general, multiphase systems with longer relaxation times require a detailed tracking of their microstructure to ensure an adequate description of the stress and the macroscopic flow response. Examples of such complex systems include biological aggregates (cells and proteins), whose clusters can extend over several spatial and temporal scales.

Figure 8: Effect of a rigid rotation on an oligomeric microscale system using the arbitrary boundary condition scheme. a) Schematic of a macroscopic particle undergoing rigid rotation, and the corresponding applied velocity gradient as the particle moves; for comparison, we include the corresponding velocity field when the reference frame is aligned with the particle velocity. b) Variation of the mean orientation angle and c) mean end-to-end distance of the oligomer coils of size $N = 8$, in a simulation domain that is rotating from pure shear to $\theta = \pi/4$ and finally under no flow. Here $\lambda'$ denotes the relaxation time of the system.
Fully microscopically resolved simulations
To validate the accuracy of the proposed LHMM, we conduct RPF simulations of a fully resolved (microscopic) Newtonian fluid and of oligomeric melts, using simulation domains with length scales on the order of $\bar{L} \propto 10^{2}\,dx'$ to $10^{3}\,dx'$. For oligomeric melts, we can refer the domain size to the end-to-end distance of the coils; consequently, for oligomers with $N = 16$ and $R_e \approx 1.5\,dx'$, the fully resolved domains correspond to lengths on the order of 200 to 800 $R_e$. These fully resolved systems require between $10^{3}$ and $10^{6}$ microscopic particles, or degrees of freedom (DOF). We must remark that macroscopic domains using fully resolved microscopic scales can typically be on the order of $\bar{L} > 10^{8}\,dx'$, thus requiring $N > 10^{9}$ DOF. The computational cost of simulating such large systems quickly becomes prohibitive, even for efficiently parallelizable codes. The domain sizes used herein provide a baseline to evaluate the accuracy of the proposed LHMM framework and are already large enough to evidence the high computational demand of this type of system. In figure 10.a, we compare the velocity profiles of a fully-resolved Newtonian fluid and an oligomeric melt. Under the same forcing, the non-Newtonian behaviour of the oligomeric melt is evidenced by a reduced velocity (larger viscosity) and a flattened profile. Solid lines correspond to quadratic and fourth-order fits of the velocities for the Newtonian fluid and the melt, respectively. In figure 10.b, we present the upper-side RPF velocity profile obtained for three different domain sizes with fixed $\nabla v|_{\text{max}}$. As the domain size increases, the effective velocities of the system change; however, the non-Newtonian profile is consistently preserved.
In figure 11, we compare the corresponding velocity profiles obtained from fully resolved microscopic solutions and the proposed LHMM for two oligomeric systems (N = 8 and N = 16, with a box size of 64). The LHMM results correspond to simulations with a macroscopic particle spacing of 3.2 and a microscopic spacing of 0.2. Considering a kernel size h = 4d and a microscopic domain Ω = 20d, the spatial gain for these tests is 3.2. We evaluate the influence of the stabilizing parameter. In general, we observe that when hydrodynamic contributions are accounted for only from the macro simulations (stabilizing parameter close to 1), the effective viscosity of the system increases, leading to slightly smaller velocities for the LHMM. This effect is reduced as the parameter diminishes. When the parameter approaches 0, the obtained velocity exhibits instabilities that are likely related to the macroscopic particle resolution. Since the stresses are then accounted for only from the micro simulations, the viscous interactions between macroscopic particles can experience numerical fluctuations due to the stress calculation from microscopic transient simulations. However, we must note that even in this limit the order of magnitude of the viscous stresses is close to the fully microscopic results, and the LHMM reproduces to a good approximation the characteristic behaviour of the oligomeric system. Overall, we identify that stabilization parameters larger than 0.1 provide a reasonable stabilization of the stresses. We indicate the total number of degrees of freedom (particles) required in those simulations, along with the box-to-oligomer size ratio for each case; larger domains would readily require DOF > 1 × 10^6.

Figure 11: RPF velocity profiles for two different oligomeric melts with N = 8 and N = 16, using different values of the stabilizing parameter. Overall, the LHMM scheme captures to a good approximation the effect of the microscopic oligomer chains on the flow. For N = 16, relatively larger deviations are observed as the stabilizing parameter approaches zero. This likely originates from the noise-to-signal ratio in the computed stress for the larger chains. Further improvement can be achieved by increasing the sampling volume at the microscales.
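The role of the stabilizing parameter can be pictured as a convex blend of macroscopically and microscopically derived stresses. The sketch below only illustrates that reading; the symbol beta, the blending rule and the tensor values are our assumptions, not necessarily the paper's exact closure.

```python
import numpy as np

def blended_stress(sigma_macro: np.ndarray,
                   sigma_micro: np.ndarray,
                   beta: float) -> np.ndarray:
    """Convex combination of macro- and micro-derived stress tensors.

    beta -> 1: only the macroscopic (hydrodynamic) contribution is kept,
    which over-predicts the viscosity; beta -> 0: only the noisy
    micro-sampled stress is used, which can destabilize the coupling.
    """
    return beta * sigma_macro + (1.0 - beta) * sigma_micro

# Placeholder 2x2 stress tensors for a single macroscopic particle:
sigma_macro = np.array([[1.00, 0.30], [0.30, 1.00]])
sigma_micro = np.array([[0.92, 0.41], [0.41, 0.95]])  # noisy micro sample
print(blended_stress(sigma_macro, sigma_micro, beta=0.1))
```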
LHMM for complex fluids
Now, we continue evaluating the proposed LHMM on a macroscopic domain with a significantly larger spatio-temporal gain, solving the microscales using the SDPD equations (13) to (15). For the macroscopic simulations we consider a fluid with density 1000 kg/m^3, viscosity 1-3 Pa·s, characteristic velocity 0.1 m/s, and pressure 1 Pa. The macroscopic time and length scales are defined in terms of a time step of 0.0002 s, a particle spacing of 5 × 10^-4 m, and a kernel size of 0.002 m, respectively (see table 1). For the microscales, we adopt a resolution of d = 2.5 × 10^-10 m, such that the size of the microscopic kernel is h = 4d = 1 × 10^-9 m, and Ω = 20d. Therefore, these LHMM simulations correspond to spatial gains of approximately 4 × 10^6. From equation (23), we can observe that this implies a reduction in the required DOF of roughly 10^6 compared to the fully microscopically resolved system.
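As a quick consistency check of the quoted spatial gain, here is a minimal sketch using only the resolutions listed above (the variable names are ours):

```python
# Macroscopic and microscopic resolutions quoted in the text (SI units).
d_macro = 5e-4         # macroscopic particle spacing [m]
h_macro = 2e-3         # macroscopic kernel size [m]
d_micro = 2.5e-10      # microscopic particle spacing [m]
h_micro = 4 * d_micro  # microscopic kernel size [m]

# One common way to express the spatial gain is the ratio of length scales;
# the text quotes a gain of ~4e6, the same order as these ratios.
print(f"spacing ratio: {d_macro / d_micro:.1e}")  # 2.0e+06
print(f"kernel ratio:  {h_macro / h_micro:.1e}")  # 2.0e+06
```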
To streamline the construction of the different microscopic systems and the presentation of the results, we conduct the microscopic simulations using reduced units (see table 2). Henceforth, unless otherwise stated, the reduced fluid properties of the microscopic simulations are a density of 1.0 and a viscosity of 10. The particles are initially located on a square grid with an interparticle distance d = 0.2. Additionally, for the microscopic simulations we use parameter values of 40 and 50. The time step is chosen to satisfy the incompressibility of the system and ensure numerical stability: we take the smaller of the Courant-Friedrichs-Lewy time scale, Δt = h/(4(v_max + c)), and the viscous time scale, Δt = h^2/(8ν). Thus, we use Δt = 0.0001 to ensure low density fluctuations. At the microscales we account for thermal fluctuations, with the energy scale set to k_BT = 1.0. Following the results reported by Moreno & Ellero (2021), we construct microscale simulations suitable for arbitrary boundary conditions with a core size between Ω = 15d and Ω = 20d (i.e. the size of the sample used to determine the stresses), whereas the sizes of the boundary-condition and buffer regions are both 5d.
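A minimal sketch of the time-step selection rule just described, assuming c denotes the speed of sound and nu the kinematic viscosity (the numerical values are placeholders in the spirit of the reduced units above):

```python
def sdpd_time_step(h: float, v_max: float, c: float, nu: float) -> float:
    """Smallest of the CFL-like and viscous time-scale limits."""
    dt_cfl = h / (4.0 * (v_max + c))  # acoustic/advective limit
    dt_visc = h ** 2 / (8.0 * nu)     # momentum-diffusion limit
    return min(dt_cfl, dt_visc)

# Placeholder reduced-unit values: h = 4d with d = 0.2, as in the setup above.
# The text adopts an even more conservative dt = 1e-4 to limit density
# fluctuations.
print(sdpd_time_step(h=0.8, v_max=1.0, c=40.0, nu=10.0))
```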
In Appendix figure 17, we compile the steady-state velocity profiles obtained for a Newtonian flow, using the stress tensor computed directly from the microscopic subsystems (equations (21) and (22)), for various values of the stabilizing parameter. Consistently, figure 17 shows that the microscale simulations recover the macroscopic stress tensor, leading to a proper modelling of the flow. This is evidence of the robustness of SDPD in capturing the ideal-solvent contributions across scales. For comparison, we have included the velocity profile obtained for a system without microscale contributions.
Oligomeric melt
In figure 12, we show the steady-state results for an RPF configuration of oligomeric melts at different shear rates. We can observe the characteristic shear-thinning effect induced by the alignment of the chains in the flow. Overall, the magnitude of the stabilization parameter did not induce any effect on the rheology of the fluid, evidencing a proper description of the fluid from both the macro and micro scales separately. In addition to the steady-state solution, we were able to capture the characteristic deviations in the temporal evolution of the oligomer melt (see Appendix). For the Newtonian fluid, the velocity profile is consistently reproduced by a quadratic fitting, whereas the microscopic effects of the oligomer chains lead to a fourth-order velocity profile in the non-Newtonian fluid.
Besides the differences in the velocity profile of oligomeric melts, another relevant characteristic that can be analysed for this non-Newtonian fluid is the evolution of its stresses. In figure 12.b, we present the variation of the shear stress and the first normal stress difference for three macroscopic particles (highlighted in red, black and orange) at a shear rate of 30, for oligomer melts with N = 8. The particles are initially located at positions across the domain such that they experience different magnitudes of stress. As described in figure 12, a shear-thinning behaviour is evidenced in the magnitude of the shear stress when the shear rate increases. Additionally, the emergence of first normal stress differences is observed for the macroscopic particles due to the microscopic response of the chains.
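For reference, the sketch below shows how the shear stress and the first normal stress difference N1 would be read off a sampled 2D stress tensor in a simple-shear frame; the tensor entries are placeholders:

```python
import numpy as np

def viscometric_functions(sigma: np.ndarray) -> tuple[float, float]:
    """Shear stress and first normal stress difference N1 from a 2D
    stress tensor sampled in a simple-shear frame (x: flow, y: gradient)."""
    shear = sigma[0, 1]
    n1 = sigma[0, 0] - sigma[1, 1]
    return shear, n1

sigma = np.array([[1.20, 0.35],
                  [0.35, 0.95]])  # placeholder micro-sampled stress
shear, n1 = viscometric_functions(sigma)
print(f"shear stress = {shear:.2f}, N1 = {n1:.2f}")
```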
Multiphase flow
Using the same RPF setting at the macroscales, we can easily investigate other physical systems with different microscopic features. In figure 13, we compile the results obtained for multiphase flows using two phases with compositions of 0.1 and 0.5. In these simulations the two phases are initially mixed, and the phase separation takes place concurrently with the imposition of the flow. As a result, the macroscopic shear affects the morphology of the microstructure formed, leading to a different response of the mixture. The characteristic size of the microstructure depends on the phase composition. Low concentrations of the minority phase favour spherical-to-elongated droplet transitions, whereas at intermediate concentrations the increase in shear rate induces transitions of the microstructure from disordered spinodal to lamella-like structures.

Figure 13: Typical velocity profile for RPF coupled with multiphase flow at the microscale, using two different compositions of the minority phase, 0.2 and 0.5. In b) we compare the steady-state velocity profile obtained for a Newtonian fluid and two schemes of LHMM simulations. The phase separation at the microscales originates a shear-thinning behaviour of the macroscopic flow. For comparison, we include the steady profile for a system without historical tracking of the microscale. In that situation, the formation of microstructures with larger relaxation times is never reached, and the fluid behaves similarly to the Newtonian fluid.
In figure 13, we also compare the steady-state velocity profile obtained for a Newtonian fluid and the multiphase case with composition 0.5. For comparison, we have included the profile for a multiphase system where the microstructure at the beginning of each macroscopic time step is reinitialized as fully mixed. This assumption is consistent with the microstructural evolution reaching its equilibrium condition on time scales much smaller than the macroscopic time step; however, for this type of system it implies that the historical evolution of the microstructure is neglected. In general, we observe that the microphase separation originates a shear-thinning behaviour for the multiphase systems modelled. Remarkably, the thinning behaviour is rooted in the proper history tracking of the microstructure. In systems without memory, the formation of microstructures with larger relaxation times is never reached, and the fluid resembles the Newtonian behaviour of its individual phases.
Flow through complex geometry
Now, we evaluate the proposed LHMM framework on geometries that induce different local flow types (i.e. shear, extension, and mixed flow), for both oligomer melts and multiphase flows. For these large macroscopic domains, a direct validation against fully microscopically resolved systems is computationally taxing. Therefore, for complex geometries, we first validated a simple Newtonian fluid in the LHMM scheme against the corresponding Newtonian fluid modelled in a purely macroscopic simulation (using only SPH) (see Appendix figure 18). Overall, we identify that the LHMM consistently captures the behaviour of the ideal fluid over the range of parameters evaluated.
Oligomeric melt
In figure 14, we present the steady velocity and stresses for an oligomeric melt (N = 16) passing a cylindrical array. For the cylindrical contraction we use a domain of size 19d × 27d, with a cylinder radius of 6d. The size of the macroscopic kernel, 4d = 64, and the microscopic domain size, Ω = 8, are defined such that the overall spatial gain of these simulations is 8, with an aspect ratio between the cylinder and the coil size of nearly 300. The flow at the macroscale is induced by an external forcing of 0.58 acting on the fluid particles. Fully microscopically resolved simulations of these systems would require over 10^7 particles for a two-dimensional system, in contrast to the 10^6 particles used for the LHMM. Figure 14.a compares the velocity and stress contours between a Newtonian fluid and the oligomeric melt. In general, we identify alterations in the steady profiles arising from the enhanced viscosity of the oligomer melt. The characteristic shear-thinning response of the melt (discussed in previous sections) to the spatially varying velocity gradient induces a modest but evident break in symmetry for both velocity and stress. In figure 14.b, we plot the profiles along the vertical line at the entrance of the domain. The higher viscosity of the oligomeric melt is consistent with the typical flattened velocity profile observed and with the larger stress. The stress profiles along vertical and horizontal lines are also presented in figure 14.b, to illustrate the larger stress contribution due to the oligomeric chains and the change in the generated stress along the channel.
Multiphase flow
The capabilities of the method to track history-dependent effects are further shown using multiphase flows in a square cavity array. In figure 15, we include the velocity and stress profiles for a Newtonian and a biphasic fluid, averaged over the same macroscopic time span. For the biphasic fluid, the microscale simulations are initialized as homogeneously mixed phases, with composition 0.5, that undergo microphase separation as they flow through the channel. As a result, multiphase flows are characterized by the emergence of microstructures that can evolve with the simulation, thereby carrying historical information during their transport across the channel. The formation of such microstructures is additionally affected by the spatially variable velocity gradient experienced by each macroscopic particle. Consistently, as the particles move within the domain, the state of the microstructure determines their stress response, affecting the macroscopic flow. In figure 15.a, we can observe that the Newtonian fluid has reached a nearly symmetric steady condition for both velocity and stresses, whereas the multiphase fluid exhibits a significantly different flow behaviour and stress distribution. Further estimation of the root-mean-squared (RMS) velocity and stress fluctuations allows us to elucidate that the temporal stability of the velocity and stress is responsible for the observed flow patterns. Figure 15.b evidences the persistent fluctuations in multiphasic systems, due to the continuous evolution of the microstructure. Unlike simple Newtonian fluids, multiphase flows are likely to require longer simulation times in order to reach a steady-state condition (in the statistical sense). In Appendix figure 20, we present the evolution of the velocity and stress (and the RMS of their fluctuations) for the multiphase flow at different time steps, evidencing that the multiphasic system has not yet reached a steady condition. We must highlight that, when comparing the stress evolution between Newtonian and multiphase flows, the latter is characterized by larger relaxation times that typically exceed a single microscopic simulation. The LHMM used herein allows us to naturally account for such large relaxation times while keeping the modelling of the microscopic simulations computationally feasible.
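A minimal sketch of the RMS-fluctuation diagnostic mentioned above, applied to placeholder time series that mimic a relaxing signal versus one with persistent microstructural fluctuations:

```python
import numpy as np

def rms_fluctuation(series: np.ndarray) -> float:
    """Root-mean-squared fluctuation about the temporal mean."""
    return float(np.sqrt(np.mean((series - series.mean()) ** 2)))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)
# Placeholder signals: a relaxing Newtonian-like series vs. one with
# persistent fluctuations, mimicking an evolving microstructure.
newtonian = 1.0 - np.exp(-t) + 0.01 * rng.standard_normal(t.size)
multiphase = 1.0 - np.exp(-t) + 0.10 * rng.standard_normal(t.size)
# Compare RMS over the late-time window only:
print(rms_fluctuation(newtonian[250:]), rms_fluctuation(multiphase[250:]))
```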
It is important to note that, depending on the characteristic size of the microstructure, the microscopic domain must be large enough for the microstructure to be accommodated, i.e. Ω must exceed the microstructural size. Since certain physical systems can exhibit microstructures that constantly vary in size (e.g. continuously growing aggregates), the definition of Ω poses important challenges, requiring a systematic analysis of the specific physical phenomena investigated. However, these aspects related to varying microstructural size are beyond the scope of the present work and will be addressed in future publications. Here, we have focused on showcasing the capabilities and flexibility of the proposed approach.
Conclusions and future work
Herein, we proposed a fully Lagrangian Heterogeneous Multiscale Method (LHMM), suitable for modelling complex fluids across large spatial and temporal scales using fluctuating Navier-Stokes equations. This methodology offers the advantage of capturing microscopic effects at the macroscopic length scales at a lower cost than solving the full microscale problem in the whole domain. The LHMM discretizes both the macro and micro scales using the smoothed dissipative particle dynamics method, taking advantage of its thermodynamic consistency and GENERIC compliance. The LHMM uses the velocity field of the macroscales to define the boundary conditions of microscale subsystems that are localized at the positions of the macroscopic particles. Subsequently, those microscale subsystems provide a microscopically derived stress that is pushed to the macroscales to close the momentum equation and continue its temporal solution. This way, the stress information is explicitly carried by the macroscopic Lagrangian points, and memory effects related to the evolution of the microstructure are preserved. The microscale domains can be constructed on-the-fly, wherever they are required, based on the evolution of the macro simulation, or at prescribed intervals to obtain microscale-informed properties. We tested the LHMM using both Newtonian and non-Newtonian fluids, evidencing its capability to capture complex fluid behaviour such as polymer melts and multiphase flows in complex geometries. The LHMM was developed using the highly parallelizable LAMMPS libraries. An important feature is that both the macro and micro scales can be fully parallelized separately. This has significant advantages compared to fully microscopically resolved systems, which require intensive communication between subdomains of the system; in the LHMM, each microscopic simulation is executed separately, reducing communication bottlenecks. Further applications of the LHMM include various complex systems such as colloidal suspensions and biological flows.

However, the approach above entails a two-fold increase in the computational cost, requiring two microscale systems to be tracked per macro particle. An alternative is to obtain an estimate of the rate-of-strain tensor using the projection of the macroscopic velocity gradient at the microscale,

∇v_i = Σ_J w_iJ (∇v̄_J − ∇v̄_i) ⊗ r_iJ.    (30)
Thus, using the projection (30), the rate-of-strain tensor can be estimated, leading to the corresponding ideal stress contribution.
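For illustration only, here is a generic SPH-style sketch of estimating a velocity gradient and the symmetric rate-of-strain from particle data; the Gaussian kernel, the weights and the particle configuration are our assumptions and not necessarily the discretization behind (30):

```python
import numpy as np

def grad_W(r: np.ndarray, h: float) -> np.ndarray:
    """Gradient of a simple 2D Gaussian kernel (placeholder choice)."""
    q2 = np.dot(r, r) / h**2
    return (-2.0 / h**2) * np.exp(-q2) * r / (np.pi * h**2)

def velocity_gradient(i, pos, vel, vol, h):
    """SPH-style estimate: sum_j V_j (v_j - v_i) outer grad(W_ij)."""
    grad = np.zeros((2, 2))
    for j in range(len(pos)):
        if j == i:
            continue
        rij = pos[i] - pos[j]
        grad += vol[j] * np.outer(vel[j] - vel[i], grad_W(rij, h))
    return grad

# Four placeholder particles in a simple-shear-like configuration.
pos = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])
vel = np.array([[0.0, 0.0], [0.0, 0.0], [0.5, 0.0], [0.5, 0.0]])
vol = np.full(4, 0.25)
G = velocity_gradient(0, pos, vel, vol, h=0.6)
d = 0.5 * (G + G.T)  # symmetric rate-of-strain estimate
print(d)
```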
C Temporal evolution of stress in binary mixture
The evolution of the stress for binary systems with different initial conditions is presented here. Lower capillary numbers, where interfacial interactions play an important role, lead to different stresses as the system evolves; thus, memory effects of the fluid are relevant to properly account for the correct flow behaviour. In contrast, systems with larger Ca exhibit similar stress trajectories independently of their initial state.

Figure 16: Temporal evolution of the microscopic stress for a binary system with composition 0.5, for four different shear rates. Systems with an initial condition of fully mixed phases (blue) and completely phase-separated phases (orange) are compared. τ_ps is the characteristic time for full phase separation to occur. At lower shear rates the effect of microstructure formation can affect the effective stress measured. In contrast, at large shear rates both systems exhibit similar shear-thinning behaviour, independent of their initial condition.
D LHMM validation for Newtonian fluid
Here we show the effect of the stabilization parameter for a Newtonian fluid. The LHMM is able to recover the behaviour of the fluid to a good approximation over the whole range of values investigated. We highlight that the contribution of the microscales is fundamental to model the fluid properly. For comparison, in figure 17, we present the results for an RPF configuration without microscale contributions. In this case the macroscopic contributions alone fail to account for the stress of the fluid, leading to incorrect velocity profiles. Additionally, in figure 18, we compare the velocity profiles of a Newtonian fluid modelled with the LHMM and with a fully macroscopic representation. Consistently, the LHMM recovers the velocity profiles even for complex flow configurations.

Figure 17: Imposed velocity field at the macroscales for different values of the stabilizing parameter, using the proposed micro-macro coupling. As a comparison, a system without microscale stresses cannot recover the desired velocity profile.
E Velocity profile evolution for oligomer melts
In addition to the steady-state solution, we were able to capture the characteristic deviations in the temporal evolution of the oligomer melt. In figure 19, we compare the velocity profile stabilization in the RPF for both Newtonian and non-Newtonian fluids, under the same flow conditions. In figure 19, the solid lines correspond to the best fitting of the velocity at the same time step for both fluids. For the Newtonian fluid, the velocity profile is consistently reproduced by a quadratic fitting, whereas the microscopic effects of the oligomer chains lead to a fourth-order velocity profile in the non-Newtonian fluid.

Figure 19: Start-up flow in RPF configurations for Newtonian and non-Newtonian fluids. The oligomer melt corresponds to chains with N = 8. The best fitting of the velocity profile is illustrated by the continuous line at the same time step for both fluids. For the Newtonian fluid it is consistent with the expected quadratic profile, whereas for the oligomer melt the microscopic effects lead to a fourth-order velocity profile.
F Velocity and stress evolution for multiphase systems
The evolution of the velocity and stress profiles in the square contraction array for multiphase flows evidences that these complex systems have not reached a fully developed steady state. The dynamic formation and destruction of microstructures is responsible for the continual evolution of the stress.
Even-odd effects in magnetoresistance of ferromagnetic domain walls
Differences in the density of states for the spin majority and minority bands in a ferromagnet change the electrostatic potential along the domains, introducing discontinuities of the potential at domain boundaries. The value of the discontinuity oscillates with the number of domains and depends on the positions of the domain walls, on their motion, and on the collapse of domain walls in an applied magnetic field. Large values of magnetoresistance are explained in terms of spin accumulation. We suggest a new type of domain wall in nanowires of itinerant ferromagnets, in which the magnetization vector changes without rotation. The absence of transverse magnetization components allows considerable spin accumulation, assuming the spin relaxation length, L_S, is large enough.
As was first demonstrated in [1,2], spin accumulation effects in the presence of a current flowing between a ferromagnetic and a normal metal may result in a considerable contribution to a contact's resistivity. This phenomenon is the key element for GMR devices with the CPP (current perpendicular to the plane) geometry. The CPP-GMR effect has been studied theoretically for spin-valve systems, for a simple triple layer, and for multilayered structures in the pioneering work [3]. The subsequent experiments (see e.g. [4,5,6]) were in excellent agreement with the predictions made in [3] concerning the dependence of the resistivity on the width of the magnetic and non-magnetic components and the role played by spin-relaxation mechanisms. Most recently, it was discovered that nanocontacts [7] and domain walls in magnetic nanowires [8] possess significant magnetoresistance.
Realization of different experimental configurations allows the determination of the parameters entering the expressions of Ref. [3], such as the resistivity of each of the GMR components and, most importantly, the spin relaxation length, L_S, characterizing the width of the non-equilibrium distribution of spins near the contacts [5,6].
The formulas of Ref. [3], however, were derived under the assumption that, while the conductivities of the majority and minority spins are different, the corresponding densities of states (DOS), g_α, remain equal. This assumption is not realistic. In the presentation below we address this issue and demonstrate that, once the difference in the DOS is taken into account, the changes in the expressions for the distribution of the electrostatic potential lead to some new observable effects. In Refs. [7,8] it was speculated that the pronounced GMR effects are caused by the significant role of spin accumulation. We suggest, as we believe for the first time, that the large magnetoresistance observed in Ref. [8] is due to the non-rotational character of the domain walls [9], which are possible in itinerant ferromagnets [10].
Following [3], we re-write the expression for the current, j_α = eD_α n′_α − σ_α U′(x) (here U(x) is the electrostatic potential, and the index α = ± stands for the majority and minority spins, respectively), in the form

j_α = (σ_α/e) μ′_α,    (1)

where

μ_α = n_α/g_α − eU(x)    (2)

is the non-equilibrium (in the presence of a total current, J) electrochemical potential for each spin component; the relation D_α = σ_α/(e² g_α) is used in (1,2).
The equations for the current components,

j′_α = e n_α/τ_S,    (3)

together with (2) and the electro-neutrality condition,

Σ_α n_α = n_+ + n_− = 0,    (4)

present the complete system of equations for each side of the interface (in (3), τ_S is the spin relaxation time).
To simplify the analysis, we first assume the ballistic regime for the interface, i.e. that the width of the corresponding domain wall is small (contributions due to spin scattering inside the barrier are discussed at the end of the paper). Correspondingly, μ_α and j_α are taken to be continuous at the boundary.
In the convenient notations of Ref. [11], with the help of (4) we obtain relation (5) for μ_S = μ_+ − μ_−. Subtracting one of Eqs. (3) from the other and making use of (5), one obtains the equation for the distribution of μ_S in the "bulk" on each side of the contact,

μ″_S = μ_S / L_S²,    (6)

where the spin-diffusion length L_S is defined in (7). Equation (6) coincides in its form with Eq. (14) of Ref. [3]. For our purposes it is convenient to re-write (6) in the form (6'), which applies everywhere in the sample; in (6'), δ(x) stands for the Dirac delta-function at the boundary (x = 0) (equation (6') is valid when the boundary between the domains is abrupt). While μ_S is continuous, μ′_S has a jump at the interface, Eq. (8). The jump can be expressed in terms of the total current, J. To do so, we write down the expressions for J and the spin current j_S = j_+ − j_−, obtaining (9) with the help of (4,5). We specify that to the left of the domain wall, spins up belong to the majority band. The magnetization changes sign across the wall, and so does the band occupation, so that one has to interchange σ_± and g_±^(-1) in (9). Since the currents are continuous, for the jump of the electrochemical potential gradient, Δ(μ′_S), we obtain (10). For a multilayer system, Eq. (6') reads as (11). It is also assumed below that the widths of the leftmost and rightmost banks are larger than L_S. The solution of (11) is a superposition of the solutions (12) for a single domain wall. We consider first the behavior of the electrostatic potential, U(x), close to a single domain wall. With the help of Eq. (5) one can write (13). Making use of the continuity of the electrochemical potential at the boundary, one immediately sees from (13) that the potential U(x) is discontinuous, with a jump ΔU_1 given by (14) at the ith domain wall. The spatial distribution of U(x) can be found by integrating the first equation in (9) along each side of the domains, giving (15). The total potential drop across an isolated single domain wall in the chosen geometry is the sum of two terms, Eq. (16) (the second term in (16) comes about from the current distribution (12) at distances of the order of L_S on the two sides of the wall). In these notations one obtains (17), i.e. g_± drop out of the total magnetoresistance. Equations (16-17) coincide with the results obtained in [3]. The differences in the DOS for the minority and majority spins lead to the appearance of the discontinuities (14) and to changes in the dependence of the electrostatic potential U(x) (see (15)) along the domains.
To make a numerical estimate of the total potential drop we use the data from Ref. [8] obtained for Co nanowires: L_S ≃ 60 nm, ρ ≃ 1.3 · 10^-5 Ω·cm. The typical values of δ are of the order of 0.4-0.5 [5,6], while β is of the order of 0.5-0.7 [12]. After substituting these values in (17), the resistance drop ΔR per unit area is 5 · 10^-11 Ω·cm², or ΔR ≃ 112 Ω for the geometry used in [8]. (The distance between the domain walls is estimated using the formula d ≃ √(d_w l_Co), where d_w ≃ 10 nm is the width of the domain wall and l_Co ≃ 40 nm is the diameter of the wire [8].) The value of the potential drop at a domain boundary is approximately the same as the value of the total potential drop in (16). We would like to emphasize that the ratio in (17') may also be negative.
As an example, let us consider in more detail the drop of the potential, ΔU(x_i), across the leftmost domain for a system of N walls (we also take x_1 = 0). From (12) and (14) we obtain Eqs. (18) and (18'). For N walls, the value of ΔU_N shows an interesting "even-odd" effect. Below, in Fig. 1, we plot (δR)_N/ΔR = (δU)_N/(JΔR) as a function of the number of domain walls. The "even-odd" effect is well pronounced at Nd ∼ L_S, where d is the size of a domain.
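To visualize the even-odd effect qualitatively, the sketch below sums alternating, exponentially attenuated wall contributions of the form (−1)^i e^(−x_i/L_S) entering (18'); the unit prefactor and the equal wall spacing are simplifying assumptions of ours, not the paper's exact expression:

```python
import numpy as np

def even_odd_signal(n_walls: int, d: float, L_S: float) -> float:
    """Alternating sum of attenuated wall contributions, up to a prefactor."""
    x = d * np.arange(n_walls)                  # equally spaced walls, x_1 = 0
    signs = (-1.0) ** np.arange(1, n_walls + 1)
    return float(np.sum(signs * np.exp(-x / L_S)))

L_S, d = 60.0, 30.0  # nm; d chosen so that N*d ~ L_S for small N
for n in range(1, 9):
    print(n, f"{even_odd_signal(n, d, L_S):+.3f}")
# Consecutive N alternate in magnitude, echoing the even-odd effect.
```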
Another interesting feature is that changes in Eq. (18) can track the motion of a single domain wall, say at x = x_i, through its contribution (−1)^i e^(−x_i/L_S) to (18'), or even the collapse of a domain caused by an applied magnetic field (such a collapse has been seen experimentally in Ref. [8]). In order to demonstrate this effect explicitly, we first introduce the notation (19). The result of our calculations of ΔR_coll/ΔR as a function of the distance between two neighboring domain walls, L_c = x_(i+1) − x_i, is plotted in Fig. 2 (for simplicity, we assumed that there were four domain walls before the external magnetic field was applied). Experimentally, the motion of a domain wall can be detected by the STM technique.
Now we briefly discuss how our results change when the finite width of the domain wall is taken into account. In particular, we consider the importance of depolarizing effects in Bloch or Néel types of domain walls for the results above. Since the width of the wall is usually much smaller than the spin diffusion length, L_S, the arguments that led to the results (14-16) still hold. The only modification comes from the change in the boundary conditions for the spin current.
Electrons going through a Bloch or Néel domain wall lose part of their polarization because the transverse component of the magnetization inside the wall creates a torque that causes spin re-orientation. Obviously, this process reduces the spin accumulation. If d_w is the width of a domain wall, spins of electrons traveling through the wall are rotated by the transverse component of the exchange field, H_exch, in the wall by an angle ϑ_α (α = ↑↓), Eq. (20). From (20) one obtains (21), with the velocity v given by (22), and, after some algebra with the help of (5), (23). Taking (22) into account, the expression for the jump of μ′_S at the interface becomes (24). As a result, the expression for the potential drop ΔU across an isolated domain wall acquires the form (25). To estimate the value of α, we first re-write the second expression in (24) in terms of the mean free path l and the Fermi velocity v_F.
Using the data provided in [8], we have µ_B H_exch ∼ 0.1 eV, d_w ≃ 10 nm, L_S/l ≃ 10, v ≃ 4.5 · 10^7 cm/s, and v_F ≃ 10^8 cm/s. Our estimates thus give α ≃ 14, which, according to (23), significantly reduces the spin accumulation effect. This result can also be explained slightly differently: from (20) we estimate the depolarization angle ϑ ≃ 1, i.e. on crossing a Bloch or Néel type domain wall the electronic spins would adiabatically follow the magnetization. To reconcile the above estimates with the significant magnetoresistance experimentally observed in Refs. [5,8], we suggest that the domain walls in these experiments were neither of the Bloch nor of the Néel type. Instead, in itinerant ferromagnets another type of domain wall, the "linear wall", is realized [10]. In a linear wall the direction of the magnetization does not change, while its absolute value goes through zero inside the wall. Theoretically, "linear" domain walls were considered in [9] for local-spin ferromagnetic systems at temperatures T close to the critical temperature T_Curie. To summarize, we have shown that taking into account the difference in the density of states between the minority and majority spin bands drastically changes the distribution of the electrostatic potential along the domains.
The discontinuities of the potential across each domain wall are of particular interest. The jumps in the potential can be measured directly by the STM technique and possess such characteristic features as the "even-odd" effects, counting the total number of domain walls in a magnetic nanowire in the presence of an external magnetic field. Our results can be directly extended to GMR structures consisting of F/N layers, which were studied theoretically in [3] in the approximation of equal densities of states of the majority and minority bands. To ascribe the large values of magnetoresistance observed in [5,8] to spin accumulation effects, it was also necessary to suggest that in nanowires made of itinerant ferromagnets the domain walls are of a linear type, in which the magnetization changes without creating a perpendicular component that would revert the spins of polarized electrons.
Developing cloud-based Business Process Management (BPM): a survey
In today's highly competitive business environment, modern enterprises face difficulties in cutting unnecessary costs, eliminating waste and delivering substantial benefits for the organization. Companies are increasingly turning to a more flexible IT environment to help them realize this goal. For this reason, this article applies cloud-based Business Process Management (BPM), which enables a focus on modeling, monitoring and process management. Cloud-based BPM consists of business processes, business information and IT resources, which help build real-time intelligence systems based on business management and cloud technology. Cloud computing is a paradigm that involves procuring dynamically measurable resources over the internet as an IT resource service. A cloud-based BPM service makes it possible to address common problems faced by traditional BPM, especially in promoting flexible, event-driven business processes that exploit opportunities in the marketplace.
Introduction
More than 20 years ago, Michael Hammer (1) and Thomas Davenport (2) introduced the term business process management (BPM) to describe the entire stream of activity that every organisation carries out. BPM has substantially matured over the last two decades and provides an effective way to monitor and improve business efficiency (1). With increasing globalization, managing the business and its processes has become a major challenge for all enterprises. Modern enterprises face difficulties in cutting unnecessary costs, eliminating waste, and streamlining and automating their business processes to deliver large gains (3)(4). BPM is expected to help the enterprise be both competitively agile and cost efficient, and to promote flexible, event-driven business processes that exploit opportunities in the marketplace (5). To address these challenges, IT has been used to manage business processes and eventually evolved into what is known as BPM today (6). BPM acts as a "business process advocate", using methods, techniques and software to design, implement, control and analyze operational processes involving people, organizations, applications, documents and other sources of information (7).
BPM takes data from various enterprise applications in the form of data presentations, and then does two things (8): (1) it tracks how information is used to complete a business activity, so as to accurately locate and understand existing business processes; (2) it tracks the flow of information across operations, to ensure business processes run well. BPM software was introduced with the aim of providing a solution that integrates existing applications without additional technology investment. Even with limited investment, companies can further improve their work efficiency.
This article applies the BPM method in a cloud computing (CC) environment. Nowadays, CC has emerged as an attractive, high-performing multitenant environment that promises to aggregate systems, business process delivery, business services and business content in an environment that drives innovation. Therefore, combining BPM with the cloud produces a flexible and affordable solution (3). With BPM software and applications connected to the cloud, a company receives all the benefits of internet applications along with the power and flexibility of the BPM software ecosystem. BPM is positioned as a Software as a Service (SaaS) offering in the cloud, which changes the way businesses view the cost structure of application creation and maintenance (9). By connecting BPM applications with the cloud, companies can handle a large number of business processes at the same time, and software development efforts become easier. The application of BPM in a CC environment is expected to provide insight into BPM application in modern enterprises.
Business Process Management (BPM)
BPM refers to a discipline that combines knowledge from IT and from the management sciences, and applies it to operational business processes (7). BPM focuses on improving organization, integration, optimization, implementation, monitoring and process management. The increasing needs of customers and the pressure of competition have made business processes more complex; they rely heavily on information systems and may span multiple organizations (10). Business processes can in general be classified as human-centric or system-centric, with combinations of person-to-person (P2P), person-to-application (P2A), and application-to-application (A2A) interactions (11). This article examines A2A processes in software systems, covering transaction processing systems, enterprise application integration (EAI) platforms, and web-based integration servers. This software architecture enables all the different business processes to be accessed dynamically through a protocol, known as service-oriented architecture (SOA), and tailored to the business needs of the company. The use of web-based SOA in BPM includes (12): (1) developing tools for users to personally define models with basic components; (2) business performance management tools to manage all functions as different processes and to monitor IT systems and business process operations. The architecture of the web services technology that links BPM is illustrated below (a sketch of such a service invocation follows the figure).

Figure 1: Web services technology that enables linking business process modeling.
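As a toy illustration of invoking a web service as one step of a business process, here is a minimal Python sketch; the endpoint URLs, payload fields and the two-step chain are hypothetical and not taken from the article:

```python
import json
from urllib import request

def call_service(url: str, payload: dict) -> dict:
    """POST a JSON payload to a process step exposed as a web service."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Hypothetical two-step process: validate an order, then bill it.
# order = call_service("https://example.com/validate", {"order_id": 42})
# invoice = call_service("https://example.com/billing", order)
```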
Definition of Cloud Computing (CC)
According to the National Institute of Standards and Technology (NIST) (13), CC is a model that allows easy, on-demand network access to shared pools of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be provisioned and released quickly with minimal management effort or service provider interaction. CC has emerged to provide computing services that are demand-based, measurable, 'pay-per-use' and virtually centralized via the internet, improving a company's ability to cope with a flexible and highly competitive business environment. Cloud technology has evolved by combining the advantages of SOA, virtualization, grid computing, and management automation, with the following features (8): (1) virtualization, which reuses hardware equipment to provide an expandable system environment with extra flexibility, for example using VMware and Xen as demand-based virtualized IT equipment; users can configure their personal network and system environment through a virtualized network known as a VPN; (2) service flows and workflows, which provide a complete set of on-demand service environments; (3) web services and SOA, through the standards of WSDL, SOAP, and UDDI, by which cloud services can be delivered as web services; (4) Web 2.0, which strengthens information sharing and interactive cooperation among users; (5) large-scale distributed systems, which require large-scale distributed memory systems and computing capacity to realize the rental of computing resources and memory space by users; (6) a programming model, which allows users to write application programs for the cloud environment.
CC involves dynamically measurable resource procurement through internet services. Through elastic and virtualized infrastructures, CC makes it possible to lease and release the needed resources in an on-demand, utility-like fashion, with billing according to use, and to scale the computing infrastructure up and down rapidly (rapid elasticity) (14). Cloud computing also represents a significant technological trend, and it is clear that it is reshaping information technology processes and the IT market (15).
BPM in Cloud Computing
Cloud-based BPM refers to the use of BPM tools that are delivered as a software service (SaaS) over networks. According to Gartner, by 2016 more than 20% of all business processes worldwide would be supported by cloud-based BPM platforms. Cloud-based BPM provides cloud users with valuable opportunities to use cloud-based software in a pay-per-use manner. The application of cloud service models in an organization can be summarized as follows:

Figure 2: Service models of cloud-based BPM (12).
Figure 2 above shows the service models of cloud-based BPM, comprising: (1) Infrastructure as a Service (IaaS), a CC service that provides IT infrastructure such as CPU, RAM, storage, bandwidth and other configurations. These components are used to build virtual computers, on which an operating system and applications can be installed as needed. The advantage of the IaaS service is that it is not necessary to purchase physical computers, which allows cost-effective solutions; the configuration of a virtual computer can also be changed as needed. For example, when the storage is almost full, more storage can be added immediately. Common IaaS providers include Amazon EC2, TelkomCloud and BizNetCloud. (2) Platform as a Service (PaaS), a service that provides a computing platform, usually including an operating system, database, web server and application framework for running applications. The PaaS service enables developers to focus on the applications they create without worrying about the maintenance of the computing platform itself. (3) Software as a Service (SaaS), a cloud computing service in which the applications themselves are provided, while the service provider manages the infrastructure and platform that run them. Examples of email application services are Gmail, Yahoo and Outlook, while examples of social media applications are Twitter, Facebook and Google+. These services enable users to utilize applications without having to purchase a license; users only need a CC client device connected to the internet. There are also applications that require users to subscribe in order to access them, such as Office 365 and Adobe Creative Cloud. (4) Cloud clients, covering all the important stakeholders of the company: staff, suppliers, business partners, and customers.
Theoretical Model
This article applies a literature study method. The results are based on information reviewed from selected relevant articles. The first step was to search for relevant articles from the last 10 years using journal databases such as Google Scholar, EBSCOhost, ProQuest and IEEE. Using the keywords "Business Process Management and Cloud Computing", a number of relevant articles were obtained. The article examines the application of BPM in a CC environment (see figure 3) with the following components (16)(8): (1) the infrastructure service layer, which consists of virtual resource environments, network and file storage systems, and service buses. Above the hardware layer, it supports the dynamic configuration of virtual hardware facilities and enables the creation of a distributed file storage system. This storage system manages virtual resources and forms a separate file system across various physical machines distributed over a LAN, including load balancing, fault tolerance, dynamic node configuration, and more. (2) The platform service layer, which covers business process engines, pre-built business process libraries and other middleware. The service bus resides in this layer, with the function of uniformly managing and requesting all services through Web Service, WSDL, SOAP, and UDDI. The PaaS service function enables easy maintenance and improves transparency. (3) The BPM-as-platform layer, which has a BPM system built into it. The BPM system offers full life-cycle management and specialized process services, such as process modeling with BPM notation (BPMN) and business activity monitoring (BAM). This layer provides the added benefit of helping companies build and visualize their business needs on a cloud-based BPM. (4) The software and service layer, which sits on the top layer of CC and contains application services and software delivered through internet services.
These three layers of CC, plus the BPM layer, enable companies to visualize their BPM needs through a BPMS. The core system of BPM resides in PaaS, which enables companies to model their business processes and makes it easier to collect information and to develop, optimize and monitor it. The use of BPM in CC complements application pooling, allows easy system integration, and manages the sharing of information and services with convenient tools. This allows process architects to create and modify processes to solve business problems and adjust processes as the business changes. In addition, cloud-based BPM also creates opportunities for SMEs by simplifying complexity. To maximize the BPM output, the cloud-based BPM framework needs to be supported by the use of data mining and DSS technology. It comprises the following components (17): (1) information sources, consisting of current ERP and legacy systems, point of sale (POS), other OLTP/web access, and external data that are fed in as input; (2) data management, consisting of metadata, an enterprise data warehouse and replication; (3) information management, which processes the data marts of each department into routine business reporting, OLAP, dashboards, and intranet search for content; (4) operations management, which carries out data and text mining, optimizes and simulates the data, and automates decision systems for the board of directors. Data, information and analytical services are enabled through cloud services (see figure 4). Figure 4 shows that an internally service-oriented BPM helps develop a good BPM culture in an organization, in the following ways (18): (1) BPM enables key activities to be managed and continuously improved, to ensure a consistent ability to deliver high quality standards of products/services; (2) business processes are the critical, all-encompassing activities of design, manufacture, marketing, innovation, sales and others which deliver quality to the end customers; (3) process management constantly strives for excellence and stimulates innovation and creativity for process improvement and optimization; (4) BPM includes activities which address supplier quality management issues; (5) the management of processes is conducted through performance measurement, setting targets for improvement and also measuring product/service capability, process capability, supplier capability and efficiency/effectiveness aspects in terms of cycle time, quality standards, costs, etc.; (6) BPM, through continuous measurement and improvement, determines the effectiveness of process design for streamlining and simplification; it ensures the introduction of best practices through benchmarking information and is based on valuable inputs from customers; (7) process management challenges practices (i.e. the dynamic aspects of each process and its behaviour) as much as the performance of each process (its output/metrics); process management seeks to continuously strengthen all activities through the introduction of best practice, and to ensure that internal standards of performance are competitively acceptable; (8) BPM enables the creation of a systematic methodology, supported by a problem-solving methodology, to strengthen newly designed processes, to reinforce the linkages between various functions and to ensure that optimum performance can be achieved (17).
Discussion

Service-oriented cloud-based BPM makes it possible to address data processing problems inside organizations in the following areas (19): (1) low degree of automation in the implementation stage: the actual setup of business processes according to managerial needs is mainly done manually, often involving numerous consultants; the use of cloud BPM enables a full degree of automation across the entire business process. (2) Implementation delay: the dynamic composition of business processes is mostly impossible, increasing the time to market and reducing an organization's agility; combining cloud BPM with mining technology makes it possible to address specific problems and support collaborative problem solving. (3) Cognitively inadequate complexity: the lack of a clear separation between business goals and implementation details makes the management of business processes overly complex; cloud BPM simplifies the complexity of business processes and eases access for key stakeholders. (4) Process blindness: managers and other business experts cannot quickly determine whether a specific process can be composed out of existing atomic processes, nor can those stakeholders query the process space within their organization by logical expressions; thus, checks for process feasibility (e.g. prior to the launch of new products or services) or compliance (e.g. ISO, etc.) still have to be done manually by business analysts; cloud BPM enables fast process queries and feasibility/compliance checks.
Conclusion.
This article has discussed the basic concepts of CC and BPM, and how a cloud-based BPM framework can be proposed. Cloud BPM enables easy and affordable cost management according to need. Given that business processes are constantly changing and improving due to changing business conditions, combining BPM with the cloud is a promising approach to enable enterprises of all sizes to stay competitive and focus on their core business. Cloud-based BPM, supported by mining technology, makes it possible to create flexible business processes, deliver faster, and provide better service for key stakeholders. CC is an emerging paradigm that enables a better architecture for managing the high complexity of business processes in modern enterprises. It involves easy data/information retrieval and processing, guaranteed data availability, and fast and flexible information and analytical services.
Evaluation of Microplastics in the Surface Water, Sediment and Fish of Sürgü Dam Reservoir (Malatya) in Turkey
In this study, the concentration, type, size, and color of MPs in multiple environmental compartments were investigated in Sürgü Dam Reservoir. The MP concentrations in surface water were between 106.63 and 200 par.m-3. The MP concentrations in sediment were between 760 and 1,440 par.m-2. A total of 44 MPs, ranging from 0 to 3 per fish and averaging 0.41 MPs/individual, were extracted from the gastrointestinal tracts of fish. Fibers were the predominant type of MPs in surface water, sediment and fish. The most common MP sizes were 1-2 mm in surface water and 0.2-1 mm in sediment and fish. The dominant color of the detected MPs was black in surface water and transparent in sediment and fish. Polyethylene terephthalate and polypropylene were the major polymer types of the selected particles. Of the two stations, station 1 showed the higher MP concentration level. The results of this study show that the MP concentration in SDR is relatively moderate in sediment, although it is lower in fish and surface water samples. These data may help extend our knowledge of MP pollution in freshwater systems and provide a baseline for future monitoring and assessment of MPs in SDR.
Introduction
Plastics are synthetic materials composed of organic polymers, and are used extensively because of their low cost, transportability, and durability (Wang et al., 2020). Microplastics (MPs) are plastics that are smaller than five millimeters long. MPs have different sizes, shapes, chemical contents, and sources of origin (Free et al., 2014; Zhang et al., 2015). In recent years, MPs have become recognized as one of the pollutants threatening the environment. Numerous negative effects of MPs in aquatic systems have been reported in the literature (Zhang et al., 2015; Sruthy & Ramasami, 2017; Ding et al., 2018; Egessa et al., 2020). For instance, MPs may cause inflammation by accumulating in the body tissues or fluids of aquatic organisms, and also lead to adverse conditions such as intestinal obstruction, increased oxidative stress, and impaired nutrition, digestion and behavior (Jemec et al., 2016). MPs can transport pollutants, invasive species, and pathogens, as well as increase the persistence of these elements in the environment (Sighicelli et al., 2018). MPs may be perceived as food and accidentally ingested by numerous aquatic organisms; moreover, these MPs can be transferred between different trophic levels in the food chain (Sighicelli et al., 2018). Therefore, the transfer of MPs among organisms is another environmental risk for ecosystems (Zhu et al., 2019). Freshwater systems are very important as sources of drinking water and because they are used for irrigation, fishing and energy production (Şahin & Zeybek, 2019). Due to the unplanned use and rapid pollution of freshwater resources, water pollution has become an important issue in Turkey in recent years (Varol, 2020). The presence of MPs in freshwaters has been reported by numerous studies worldwide (Zhang et al., 2015; Fu & Wang, 2019; Meng et al., 2020). While many researchers have focused on MP pollution in marine ecosystems, freshwater ecosystems have received less attention (Wong et al., 2020). However, as in the former, studies of the latter have shown that MPs create a collective pollution load in aquatic environments (Egessa et al., 2020). In addition, the presence of MPs has been detected in lakes of various sizes, even in those that are relatively far from human activities (Sighicelli et al., 2018).
According to the literature, there have been only a limited number of studies on MP pollution in freshwater resources in Turkey. MP pollution of the surface waters of Cevdet Dündar Pond, Küçükçekmece Lagoon and Süreyyabey Dam Lake has been determined in Turkey (Erdoğan, 2020; Tavşanoğlu et al., 2021; Çullu et al., 2021). In addition to these studies, the determination of MP pollution in Sürgü Dam Reservoir (SDR), one of Turkey's freshwater resources, is thought to provide important data on freshwater resources in Turkey.
SDR is used for fishing, irrigation and recreation. The most important water source feeding Sürgü Dam is the Sürgü Stream. This stream originates in Reşadiye, passes through the town of Sürgü and discharges into the SDR. The wastewater of the villages in the town of Sürgü flows into the Sürgü Stream without first passing through a treatment plant. Intensive agricultural activities around the SDR and the Sürgü Stream are also thought to cause water pollution (Erkul & Sarıgül, 2008).
The aims of this study were: (1) to obtain data on the qualitative and quantitative composition of MPs in SDR; (2) to evaluate the MP pollution in SDR seasonally and regionally; (3) to determine the effect of the Sürgü Stream on SDR in terms of MP pollution; and (4) to determine whether the fish in the study area are contaminated by MPs. This study may help fill the data gaps regarding MP pollution in Turkey's freshwater ecosystems, and provides guidance for future monitoring studies and for establishing measures against MP pollution in SDR.
Study Areas and Field Sampling
Sürgü Dam (38°2′6″N, 37°52′46″E) is located in the Eastern Anatolia region of Turkey, 53 km from the Malatya city center. Sürgü Dam was built on the Sürgü Stream between 1963 and 1969. The dam is 55 m high and 690 m long. There are agricultural lands and some settlements on the shores of the reservoir (Dursun & Gül, 2018).
For the surface water sampling, 150 L of water was collected with a steel bucket at each station and filtered through steel sieves with pore sizes of 5,000, 1,000, 200 and 91 µm (Erdoğan, 2020; Meng et al., 2020). The particles retained on the 5,000 µm sieve were discarded, while the particles collected on the 1,000, 200 and 91 µm sieves were washed into bottles with ultra-pure water and preserved in 4% formaldehyde (Aytan et al., 2020).
Sediment samples were taken with an Ekman grab (total area = ~0.025 m²) at the sample locations. Surface sediments to 4 cm depth were collected with stainless-steel spoons and stored in 500 mL glass jars. The jars were then covered with aluminum foil and kept at -40 °C (Aytan et al., 2020).
Fish sampling was performed in March 2021 with commercial fishing boats and fishing gear. A total of 107 fish, 62 of which were Cyprinus carpio and 45 of which were Alburnus mossulensis, were caught in the lake. The fish were euthanized with MS222 (Tricaine-S; 0.25 g L-1) (McNeish et al., 2018). The numbers of the collected fish are shown in Table 1.
Laboratory Analysis
Wet peroxide oxidation was used to determine the presence of MPs in surface water samples (Masura et al., 2015). Water samples were transferred to a 200 mL conical flask. After filtering the solution through a 10 µm filter, 30 mL of hydrogen peroxide (H2O2) (30%) was added to the samples. Organic materials were digested with H2O2 at 50 °C for 72 h in an incubator. The mixture was then filtered through a 10 µm filter and transferred to petri dishes to dry (Aytan et al., 2020).
MPs were extracted from sediment samples using the density flotation method (Aytan et al., 2020). A saturated NaCl solution (d: 1.2 g cm-3) was filtered through a 10 μm filter. Sediment samples were transferred to glass beakers and saturated NaCl solution was added for density separation. The samples were stirred with a steel spoon for two minutes and left to stand for 1 hour. This procedure was repeated three times to ensure that all MPs were obtained. The supernatant was filtered through a 10 μm filter. The residues on the filter were washed into glass beakers, and organic particles were digested using 30% H2O2 at room temperature for 168 h, then filtered onto 10 μm filters and dried in an oven. MPs were measured with a dissecting microscope, using Euromex Image Focus 4.0 software. MPs, identified according to their morphological characteristics and physical response properties, were classified according to their type, color, and size (Desforges et al., 2014; Aytan et al., 2020). The concentrations of MPs in the sediment from the two stations were calculated by dividing the number of MPs by the cross-section area of the grab and were expressed as particles m-2 (Xiong et al., 2018).
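The unit conversions behind these concentration figures are simple; the short sketch below makes them explicit. The particle counts are illustrative values chosen only so that the results reproduce the maxima reported later in the Results (220 par.m-3 in water, 1440 par.m-2 in sediment); they are not raw data from this study.

```python
# Convert raw MP counts into the concentration units used in this study.
# Counts are hypothetical, picked to reproduce the reported maxima.

water_mp_count = 33        # particles found in one surface-water sample
water_volume_m3 = 0.150    # 150 L filtered, expressed in cubic meters

sediment_mp_count = 36     # particles found in one sediment grab
grab_area_m2 = 0.025       # Ekman grab cross-section area (~0.025 m^2)

water_conc = water_mp_count / water_volume_m3      # particles per m^3
sediment_conc = sediment_mp_count / grab_area_m2   # particles per m^2

print(f"Surface water: {water_conc:.1f} par.m-3")  # -> 220.0
print(f"Sediment: {sediment_conc:.0f} par.m-2")    # -> 1440
```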
MPs in fish were determined according to McNeish et al. (2018). The weight and total length of each individual were measured before dissection (Table 1). The gastrointestinal tract (GIT) from the esophagus to the anus was removed from all individuals, and its weight was measured (Table 1). The GIT was then transferred to a jar filled with H2O2 (30%), added to digest the biological material. Samples were kept at 45 °C in an incubator. After all organic matter had been removed, the samples were filtered through a 10 µm mesh, transferred to petri dishes, and then dried in an oven. MPs were counted for each individual fish.
As for the other sample types, MPs were measured with a dissecting microscope using Euromex Image Focus 4.0 software and classified according to their type, color, and size (Desforges et al., 2014; Aytan et al., 2020). The numbers of MPs in fish were expressed as particles per individual (Sun et al., 2019).
The chemical composition of 65 randomly selected particles was identified using a Perkin Elmer Spectrum Two FT-IR spectrophotometer in the range 400-4000 cm−1. The FT-IR spectra of the particles were compared with the spectra of standard plastic polymers.
The proportion of fish containing MPs was expressed as the frequency of occurrence of MPs (FO%).
Monthly average rainfall data for the Doğanşehir district of Malatya in 2020-2021 were obtained from the 13th Regional Directorate of Meteorology.
Contamination Control of Microplastics
Various measures were taken to avert contamination throughout the analysis and processing of samples. All materials used in analysis and sampling were rinsed with ultrapure water. Immediately after the samples were collected, they were transferred to storage containers and the containers were immediately capped. Hands and forearms were cleaned before starting the work, and a 100% cotton laboratory coat was worn during analysis. All surfaces and tools used during sampling and analysis were thoroughly cleaned with alcohol. A procedural blank, containing no sample, was run during the MP analysis and processing of fish. Three petri dishes were placed alongside the stereomicroscope throughout the analysis. At the end of the analysis, the petri dishes were checked for MP contamination.
Statistical Analysis
GraphPad Prism software (Version 5, USA) was used for statistical analysis of MP concentrations found in fish at the different stations. The data were first tested for normality with the Kolmogorov-Smirnov test. As the data were not normally distributed, the nonparametric Mann-Whitney U test was used for pairwise comparisons. Differences between groups were considered significant at P < 0.05.
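A minimal sketch of this analysis pipeline in Python with SciPy is given below, assuming per-fish MP counts as input; the arrays are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical MP counts per fish at the two stations.
st1 = np.array([0, 1, 0, 2, 1, 0, 0, 3, 1, 0])
st2 = np.array([0, 0, 1, 0, 2, 0, 1, 0, 0, 1])

# Step 1: normality check (Kolmogorov-Smirnov against a fitted normal).
for name, x in [("St.1", st1), ("St.2", st2)]:
    ks = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    print(name, "KS p =", round(ks.pvalue, 3))

# Step 2: data are not normal, so use the nonparametric Mann-Whitney U
# test for the pairwise comparison between stations.
u = stats.mannwhitneyu(st1, st2, alternative="two-sided")
print("Mann-Whitney U p =", round(u.pvalue, 3),
      "significant" if u.pvalue < 0.05 else "not significant")
```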
Microplastic Analysis in Surface Water
In this study, MPs were determined at two stations in SDR in three sampling periods. In total, 141 MP particles were identified (54 particles in June 2020, 37 particles in December 2020, and 50 particles in March 2021) in the surface water of SDR. The maximum MP concentration was determined at St.1 (220 par.m-3) in June 2020, and the minimum MP concentration at St.2 (106.7 par.m-3) in December 2020 (Figure 2. Concentration of MPs in surface water, sediment and fish at the sampling stations). The MPs collected from the surface water were classified into four types: films, fragments, fibers, and foams (Figure 3). The predominant type of MP identified was fiber. In June 2020, fibers were the type of MP with the highest concentration (62.96%), followed by films (18.52%), fragments (14.81%), and foams (3.70%). In December 2020, fibers were also the type of MP with the highest concentration (56.76%), followed by fragments (21.62%), films (18.92%), and foams (2.70%). Furthermore, in March 2021 fibers were the type of MP with the highest concentration (52%), followed by films (24%), fragments (22%), and foams (2%) (Figure 2).
The results revealed that MPs comprised 9 different colors in the surface water, black being predominant (29.63% in June 2020, 21.62% in December 2020, and 20% in March 2021) (Figure 4).
Microplastic Analysis in Sediment
In total, 140 MP particles were identified (41 particles in June 2020, 44 particles in December 2020, and 55 particles in March 2021) in the sediment.
In the sediment, the maximum MP concentration was observed at St.1 in March 2021 (1440 par.m-2), and the minimum at St.2 in June 2020 and in March 2021 (760 par.m-2) (Figure 2). MPs collected in sediment were classified into four types: films, fragments, fibers, and foams (Figure 6). The predominant type of MP identified was fiber.
In June 2020, fibers had the highest concentration in sediment (58.54%), followed by films (24.39%), fragments (14.63%), and foams (2.44%). In December 2020, fibers were also the type of MPs with the highest concentration (61.36%), followed by films (27.27%) and fragments (11.36%). In March 2021, fibers again had the highest concentration. MPs in sediment samples were classified by color into white, black, blue, transparent, red, yellow, gray, purple, and green; the percentages of each are shown in Figure 4. Transparent (24.4% and 29.1%) and black (19.5% and 18.2%) were the predominant colors in June 2020 and March 2021, respectively. In December 2020, black was the most common (20.5%), followed by transparent (18.2%).
The size composition of the MPs varied from 0.17 to 3.41 mm in June 2020, 0.11 to 3.33 mm in December 2020, and 0.13 to 4.12 mm in March 2021. The MPs collected in December 2020 and March 2021 were mostly in the ranges of 0.2-1 mm (36.0% and 36.4%) and 1-2 mm (25.0% and 34.5%, respectively). The MPs collected in June 2020 were mostly in the ranges of 1-2 mm (43.9%) and 0.2-1 mm (24.4%). MPs measuring less than 0.2 mm had the lowest concentrations in all the sampling periods (Figure 5).
Microplastic Analysis in Fish
At least 1 MP was discovered in 27 of the 62 C. carpio and 3 of the 45 A. mossulensis. In addition, the numbers of MPs detected in the C. carpio and A. mossulensis samples were 40 and 4, respectively. Calculations of MP data in fish samples and the presentation in Table 1 were made according to Sun et al. (2020). There were 0.65 MPs per fish for C. carpio, and the frequency of occurrence of MPs in C. carpio and A. mossulensis was calculated as 43.5% and 6.7%, respectively (these results were calculated from the data displayed in Table 1). When MP concentrations in fish samples were compared statistically, no difference could be determined between the stations.
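The FO% values above follow directly from the counts reported in this paragraph and Table 1; a short check:

```python
# Frequency of occurrence: fish with at least one MP / total examined.
species = {"C. carpio": (27, 62), "A. mossulensis": (3, 45)}
for name, (with_mp, total) in species.items():
    print(f"{name}: FO% = {100 * with_mp / total:.1f}%")
# -> C. carpio: 43.5%, A. mossulensis: 6.7%; the mean MP load for
#    C. carpio is 40 particles / 62 fish = 0.65 MPs per fish.
```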
The MPs obtained from the fish samples were classified into three types: films, fragments, and fibers (Figure 7). The primary type of MP was fiber; foams were not observed in the fish samples (Figure 2). MPs ranged from 0.11 to 3.83 mm in length. As shown in Figure 5, MPs measuring 0.2-1 mm (45.5%) predominated in the fish samples. Transparent (25%), blue (20%) and black (20%) were the predominant MP colors seen in fish (Figure 4).
Discussion
MPs can be found in all freshwater components, including surface water, sediments, and aquatic organisms. Therefore, it is important to observe them in both biotic and abiotic matrices (Meng et al., 2020).
However, various factors, such as intensive agricultural activities in the surrounding area, changes in the water flow of the Sürgü Stream, and fishing and recreational activities, may all cause pollution in SDR. In this study, the presence, types, sizes, and colors of MPs in SDR were investigated. MPs in surface water samples were acquired via the filtration of 150 L of surface water. Meng et al. (2020) have stated that one method by which to sample water is to filter large volumes from a lake. Using this method, we found a mean MP concentration of 156.7 par.m-3 in SDR, averaged over stations and sampling periods. Other research on surface water MP concentrations in freshwater systems in Turkey has reported concentration levels of 33,000 MPs/m3 in Küçükçekmece Lagoon, 233 MPs/m3 in Cevdet Dündar Pond, and 5.25 MPs/m3 in Süreyyabey Dam Reservoir. Comparing the surface water MP concentration levels in this study with the data above, the MP concentration in our study is smaller than those of Cevdet Dündar Pond and Küçükçekmece Lagoon yet greater than that of Süreyyabey Dam Reservoir (Erdoğan, 2020; Tavşanoğlu et al., 2021; Çullu et al., 2021). In relation to other research across the world, the MP concentration in SDR is relatively moderate in sediment, although it is lower in fish and surface water samples (Table 2). Fibers are an important type of MP in freshwater ecosystems (Meng et al., 2020). They mostly derive from clothing, textiles and fishing lines (Hu et al., 2020). Fibers in SDR showed higher concentrations than other MP types. Similarly, Erdoğan (2020) has reported that fiber is the predominant type of MP in Cevdet Dündar Pond. Moreover, Hu et al. (2020) have found that fibers predominate in Dongting Lake in China. The fiber pollution in SDR may be attributable to precipitation, atmospheric transport, aging fishing equipment, and the stream flowing into the lake, as reported in the literature. Films, the type of MP with the second-highest concentration in the lake, are mostly formed by the deterioration of plastic bags, which are misused and released into the environment uncontrollably.
This study has shown that St.1 is more polluted than St.2 in terms of surface water and sediment MP concentrations. The fact that St.1 was located at a higher point than St.2 with respect to the flow direction of the Sürgü Stream could be a reason behind the higher MP concentration levels at St.1. Hu et al. (2020) have also indicated that rivers are an important source of MPs in lakes. The low MP levels in the surface waters at both stations in December were likely due to scanty rainfall in the preceding months. The heavy rainfall in the months preceding June, as well as the severe snowfall and melting snow before March, indicates that the high MP concentrations at both stations could be land-based (Figure 8).
The distribution of MP colors in SDR was found to be similar to data reported for the Ofanto River in Italy (Campanale et al., 2020). Black and transparent MPs predominated in both studies. MPs derive their color from the plastics from which they originate, but colors can vary depending on photodegradation and the residence time of the plastics in the water (Campanale et al., 2020).
In this study, MPs of 0.2-1 mm and 1-2 mm in size showed high concentrations in the surface water and sediment of SDR. Similarly, Egessa et al. (2020) reported that the predominant group of MPs measured 0.3-0.9 mm and the smallest group 4.0-4.9 mm in Lake Victoria in Africa.
It was important to determine the composition of the particles, and the polymer identification of the MPs was carried out with FT-IR. PET was a widespread plastic type among the particles collected from SDR and a main component of the MPs. PET is often used to make plastic bottles and packaging, and is used as a fiber in the clothing industry. Similar to the results of this study, PET was the dominant polymer type in the surface water of the Manas River Basin, China. The main sources of PS and PET are thought to be the surrounding domestic sewage and land-based inputs.
This study has revealed the concentration, color and size of MPs in two freshwater fish species (C. carpio and A. mossulensis) in SDR. Higher MP concentrations were determined in C. carpio compared to A. mossulensis. Several studies have shown that MPs are ingested by fish (Lusher et al., 2016; Xiong et al., 2018; Hanachi et al., 2019). Merga et al. (2020) have similarly reported that benthopelagic fish such as C. carpio ingest higher concentrations of MPs compared to surface-feeding fish. Furthermore, Hanachi et al. (2019) have demonstrated that C. carpio can serve as an important indicator of the presence of MPs in freshwater and of their transport between different trophic levels of freshwater ecosystems. In the present study, MPs were detected in 50% of C. carpio samples collected from St.1 and 40% of C. carpio samples collected from St.2 (Table 1). In addition, MPs measuring 0.2-1 mm and 1-2 mm were detected at higher concentrations compared to other sizes in both fish species. According to this study, MP contamination may adversely affect the aquatic organisms of SDR. Therefore, in order to reduce MP concentrations and their effects in the studied aquatic system, the origin and entry routes of MPs in SDR should be determined.
Conclusions
In this study, the concentration, distribution, color and size of MPs in various components of the SDR ecosystem have been determined. MPs were detected at two sampling stations in three sampling periods. Generally, the concentrations of MPs at St.1 were higher than those at St.2. Fibers were the predominant type of MP in surface water, sediment and fish samples. Transparent and black were the predominant colors, and PET and PP were the common polymer types in SDR. The main sources of MPs in SDR are thought to be everyday plastic products, wastewater discharge, the Sürgü Stream, and the atmosphere. It is especially important to control MP contamination in the rivers that flow into the lake. Moreover, these freshwater sources should be protected through measures such as appropriate waste management, the establishment of a wastewater treatment facility, and the recycling of plastic materials.
Ethical Statement
All fish samples were collected according to the animal protocols certified by Inonu University Research Committee (2020/13-2).
Author Contribution
This article was written by a single author.
Conflict of Interest
The author declares that she has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
2021-10-25T20:56:17.772Z
|
2021-08-19T00:00:00.000
|
{
"year": 2021,
"sha1": "7b19949c62835789d82fd2859bfc4085c967c04d",
"oa_license": null,
"oa_url": "https://doi.org/10.4194/trjfas20157",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "47d60c6a07623b3fff1ed9b7915e0e1fd780eabe",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
118584238
|
pes2o/s2orc
|
v3-fos-license
|
Collective flow in p-Pb and d-Pb collisions at TeV energies
We apply the hydrodynamic model for the dynamics of matter created in p-Pb collisions at 4.4TeV and d-Pb collisions at 3.11TeV. The fluctuating initial conditions are calculated in the Glauber Monte-Carlo model for several centrality classes. The expansion is performed event by event in 3+1-dimensional viscous hydrodynamics. Noticeable elliptic and triangular flows appear in the distributions of produced particles.
I. INTRODUCTION
The large multiplicity of particles emitted from the small interaction region in relativistic heavy-ion collisions implies that a fireball of very dense matter is formed. Experiments at the BNL Relativistic Heavy Ion Collider (RHIC) and the CERN Large Hadron Collider (LHC) [1,2] have demonstrated the appearance of a collective flow in the expanding fireball. The physical picture is expected to be different in the interaction of small systems: proton-proton, proton-nucleus or deuteron-nucleus. At RHIC energies the density of matter created in d-Au interactions is small and does not cause jet quenching [1]. d-Au and p-p interactions are treated as a baseline reference to evidence new effects in nucleus-nucleus collisions, beyond a simple superposition of nucleon-nucleon (NN) collisions. With the advent of proton-proton collisions at several TeV center of mass (c.m.) energies at the LHC, it has been suggested that some degree of collective expansion appears in high multiplicity p-p events [3][4][5][6]. However, no direct experimental evidence exists for such a collective expansion in p-p interactions.
At the LHC, p-Pb collisions can be studied in the future; experiments with d-Pb or other asymmetric systems are also possible, but with additional technical difficulties [7]. Estimates of the hadron production in p-Pb interactions at TeV energies take into account nuclear effects on the parton distribution functions and saturation effects, but do not assume the formation of a hot medium [7,8]. Experiments with p-Pb beams should provide an input for models used in heavy-ion collisions for the calculation of dense medium effects on hard probes.
The expected multiplicity and size of the interaction region in central p-Pb and d-Pb collisions at TeV energies are similar to those in peripheral (60 − 80% centrality) Pb-Pb collisions at √s_NN = 2.76TeV [9]. This raises the possibility that hot and dense matter is formed in such collisions. For strongly interacting matter, the assumption of local equilibrium is a good approximation and relativistic hydrodynamics can be used to follow the evolution of the system [10]. Quantitative predictions for the elliptic flow have to account for finite deviations from local equilibrium in the rapidly expanding fluid [11][12][13].
In order to test the assumption of the formation of a dense fluid in p-Pb and d-Pb interactions and to estimate possible effects of its collective expansion, we apply the viscous hydrodynamic model to calculate the spectra of emitted particles. The goal of this study is to have a quantitative prediction of the elliptic and triangular flows and of the transverse momentum spectra for comparison with future experiments. The dynamically evolved density of the fireball from hydrodynamic simulations can be used in the calculations of the parton energy loss in such small systems.
The task requires the use of the most sophisticated version of the hydrodynamical model: event by event 3+1-dimensional (3+1-D) viscous hydrodynamics. While a good description of many collective phenomena in heavy-ion collisions can be obtained in perfect fluid hydrodynamics in 2+1-D [10,14] or 3+1-D [15], to calculate the azimuthally asymmetric flow in small systems such as p-Pb or d-Pb collisions one has to use viscous hydrodynamics. In collisions of symmetric nuclei 2+1-D boost-invariant viscous hydrodynamics is routinely applied for observables at central rapidities [11,12]. In p-Pb or d-Pb interactions the energy density and the final particle distributions depend strongly on rapidity. This forces the use of 3+1-D hydrodynamics to obtain realistic particle spectra at different rapidities. Only recently have 3+1-D viscous hydrodynamic simulations become available [13,16]. In proton or deuteron interactions with a nucleus the shape of the interaction region fluctuates widely from event to event. Unlike in interactions of heavy ions, using the average density is not a reliable approximation. Event by event 3+1-D perfect fluid hydrodynamics is used by several groups [17]. The inclusion of event by event fluctuations is important in the description of the initial eccentricity and triangularity of the fireball [13,[17][18][19][20][21]. Only one group is using an event by event 3+1-D viscous hydrodynamic code for heavy-ion collisions [13,22].
As the size and the life-time of the system decrease, the hydrodynamic model becomes less justified. A sizable elliptic flow is observed in peripheral Pb-Pb collisions at the LHC, which proves that substantial rescattering occurs in the evolution of the fireball. By itself this does not prove that the hydrodynamic regime is applicable in such collisions, as some elliptic flow can be generated through collisions in the dilute limit. A few hydrodynamic calculations have also been applied to peripheral Pb-Pb collisions at √s = 2.76TeV [3,23,24], with results compatible with experimental observations. Nevertheless, it must be noted that as the impact parameter increases, uncertainties of the hydrodynamic model become more important; fluctuations modify substantially the initial eccentricity, and the relative role of the hadronic corona in the evolution of the system increases. In the present calculation the last issue is partly taken into account through an increase of the shear viscosity to entropy ratio at lower temperatures. For d-Au collisions at √s_NN = 200GeV the hydrodynamic model is expected to break down, as indicated by the absence of jet quenching. However, there are no published experimental results concerning directly the elliptic flow in d-Au collisions, nor estimates from hydrodynamic models at RHIC energies.
Below we present results from event by event viscous hydrodynamic simulations for p-Pb and d-Pb collisions at √s_NN = 4.4 and 3.11TeV respectively. We use Glauber Monte-Carlo model initial conditions for the hydrodynamic evolution. We calculate particle spectra, charged particle pseudorapidity distributions, and elliptic and triangular flow coefficients as functions of pseudorapidity and transverse momentum.
II. SIZE AND SHAPE OF THE INITIAL FIREBALL
The number of particles produced in a p-Pb or d-Pb interaction can be estimated from N_part, the number of participant (wounded) nucleons in the collision. The Glauber Monte-Carlo model generates a distribution of events with different source sizes (number of participant nucleons) and different shapes (distribution of participant nucleons in the transverse plane). The binary collision contribution is expected to be numerically small. Moreover, the number of binary collisions is roughly N_part − 1 (− 2 for the deuteron). The presence of a term depending on the number of binary collisions cannot be separated from the functional dependence on N_part. The number of participant nucleons in the Glauber model depends on the NN cross section.
p-Pb interactions at the LHC are planned at a c.m. energy in the NN system starting at √s_NN = 4.4TeV. This corresponds to proton and Pb beam momenta of 3.5TeV and 208 × 1.38TeV, attainable with the present magnetic field configurations in the accelerator [7]. For deuteron beams this gives the energy √s_NN = 3.11TeV. The maximal available NN c.m. energy at the LHC is 8.8 and 6.22TeV for p-Pb and d-Pb interactions respectively. For collisions of beams with different energies per nucleon, the NN c.m. reference frame is shifted in rapidity with respect to the laboratory frame. The shift is y_sh = 0.46 and 0.12 for p-Pb and d-Pb interactions. All the calculations in the hydrodynamic model are made in the NN c.m. frame. For the final emitted particles a boost is made to the laboratory frame to obtain spectra around mid-rapidity or pseudorapidity distributions.
The NN cross section at different energies can be obtained from an interpolation of the values at 200GeV, 2.76TeV and 7TeV [25,26] (σ_NN = 42, 62, and 71mb respectively) using a smooth fit formula. The resulting NN cross sections in Table I are used in our Glauber model calculation. We take a Wood-Saxon profile for the Pb nuclear density with ρ_0 = 0.17fm−3, R_A = 6.55fm and a = 0.45fm, and an excluded distance for nucleons of 0.4fm; for the deuteron we use the Hulthen distribution [27]. Events at a given impact parameter are generated using the GLISSANDO code for the Glauber model [27]. The distribution of participant nucleons at different impact parameters is shown in Fig. 1 for p-Pb interactions at 4.4TeV. We notice that the number of participant nucleons fluctuates strongly at a fixed impact parameter. The number of participant nucleons can be significantly above the average value (solid line in Fig. 1). Defining the most central collisions as an interval in the impact parameter is incorrect. The few percent most central events in terms of the number of participant nucleons (N_part > 18) have a participant multiplicity larger than the average N_part at zero impact parameter. The picture is very similar for d-Pb collisions. In the experiment the centrality classes are defined by the track multiplicity, closely correlated with the number of participants in the model. In heavy-ion collisions the number of participants is correlated with the impact parameter. In p-Pb or d-Pb interactions it is preferable to define the centrality classes for events using directly cuts in N_part. Figs. 2 and 3 show the probability density for events of a given N_part for the two systems considered. For p-Pb events, we use three centrality classes defined as 18 ≤ N_part, 11 ≤ N_part ≤ 17, and 8 ≤ N_part ≤ 10, corresponding to centrality bins 0−4%, 4−32% and 32−49% out of all the inelastic events (N_part ≥ 2). The unusual numbers for the centrality percentiles are fixed by the discrete variable N_part. For the d-Pb interactions, we choose 27 ≤ N_part, 16 ≤ N_part ≤ 26, and 10 ≤ N_part ≤ 15, corresponding to centrality bins 0 − 5%, 5 − 30% and 30 − 50%. The charged particle density at central pseudorapidity can be estimated from the multiplicity observed at a similar energy and for a similar number of participant nucleons in peripheral Pb-Pb collisions at the LHC [9], interpolating the measured values of dN/dη / ⟨N_part/2⟩ at centralities 60 − 70% and 70 − 80% to the average number of participant nucleons N_part corresponding to the most central bins considered in p-Pb and d-Pb collisions. The energy dependence of dN/dη is s^0.11 for p-p and s^0.15 for nucleus-nucleus collisions [28]. We take s^0.13 to extrapolate from √s_NN = 2.76TeV. The estimated values of the charged particle density at midrapidity are quoted in Table I; the uncertainty comes from the uncertainty in the measurements [9] and in the value of the exponent in the energy dependence.
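The interpolation and extrapolation steps described above can be illustrated with a short numerical sketch. The exact fit formula used in the paper is not recoverable here; a quadratic in ln √s that passes through the three measured cross sections is one plausible stand-in, and the baseline multiplicity value is a hypothetical placeholder rather than the number from Ref. [9].

```python
import numpy as np

# NN inelastic cross sections measured at 200 GeV, 2.76 TeV and 7 TeV.
sqrt_s = np.array([200.0, 2760.0, 7000.0])   # GeV
sigma = np.array([42.0, 62.0, 71.0])         # mb

# A quadratic in ln(sqrt(s)) passes exactly through the three points;
# this stands in for the (unspecified) smooth fit formula of the text.
coeffs = np.polyfit(np.log(sqrt_s), sigma, deg=2)
for e in (3110.0, 4400.0):                   # d-Pb and p-Pb energies
    s_nn = np.polyval(coeffs, np.log(e))
    print(f"sigma_NN({e/1000:.2f} TeV) ~ {s_nn:.0f} mb")

# Extrapolate the midrapidity charged-particle density with s**0.13,
# starting from a placeholder baseline dn0 at sqrt(s_NN) = 2.76 TeV.
dn0 = 17.0                                   # hypothetical dN/deta value
scale = (4400.0 / 2760.0) ** (2 * 0.13)      # s**0.13 = sqrt(s)**0.26
print(f"dN/deta(4.4 TeV) ~ {dn0 * scale:.1f}")
```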
The azimuthally asymmetric collective flow is driven by the asymmetry of the initial fireball. The initial eccentricity

ǫ_2 = ⟨Σ_i r_i² cos[2(φ_i − ψ_2)]⟩ / ⟨Σ_i r_i²⟩

is calculated in each event with respect to the eccentricity angle ψ_2 maximizing ǫ_2; the sum runs over all participant nucleons at positions r_i, φ_i, and ⟨. . .⟩ denotes averaging over events. In a similar way the triangularity

ǫ_3 = ⟨Σ_i r_i² cos[3(φ_i − ψ_3)]⟩ / ⟨Σ_i r_i²⟩

is calculated with respect to the triangularity axis ψ_3 in each event [18,19]. In Figs. 4 and 5 we plot the eccentricity and the triangularity as functions of the number of participant nucleons in the fireball. In proton induced interactions, the eccentricity and the triangularity of the source are similar and decrease for central collisions. It is different for d-Pb collisions: the eccentricity is larger than the triangularity and increases for central events. The eccentricity in d-Pb interactions is caused by the asymmetric configuration of the two nucleons in the deuteron. Configurations with a large separation of the deuteron proton and neutron in the transverse plane have a large eccentricity and usually lead to a large number of participant nucleons in the Pb nucleus. This effect causes the increase of the eccentricity for the most central collisions in Fig. 5. We note that the eccentricity in Glauber models can be modified by correlation effects [27,29]. We assume that the initial entropy density in the fireball is proportional to the number of participant nucleons. The density in the transverse plane x, y is the sum of contributions from participant nucleons at positions x_i, y_i from the Pb nucleus, N_−(x, y), and from the proton, N_+(x, y) (or from the proton and/or the neutron in the deuteron),

s(x, y, η_∥) = s_0 [N_+(x, y) f_+(η_∥) + N_−(x, y) f_−(η_∥)] , (2.4)

where the contribution from each nucleon is a Gaussian of width σ_w = 0.4fm. The final results show some dependence on the chosen value of σ_w. Using a smaller width of 0.3fm we notice an increase by ≃ 10% of the integrated elliptic and triangular flow for p-Pb collisions. A similar effect has been observed in Ref. [22]. The parameter s_0 is fixed to reproduce the final multiplicity after the hydrodynamic evolution. The distribution in space-time rapidity η_∥ is asymmetric: the profiles f_±(η_∥) are of the form

f_±(η_∥) = (±η_∥ + y_beam) / (2 y_beam) f(η_∥) for |η_∥| ≤ y_beam, (2.5)

where f(η_∥) is a longitudinal profile with a central plateau and Gaussian tails,

f(η_∥) = exp(−(|η_∥| − η_0)² / (2σ_η²)) for |η_∥| > η_0, and 1 otherwise, (2.6)

and y_beam is the beam rapidity in the NN c.m. frame. The asymmetric emission in the forward (backward) rapidity hemisphere from forward (backward) going nucleons can be observed in the distribution of charged particles in d-Au collisions at RHIC [30]. The distribution of the form (2.5) has been used as the initial condition for the hydrodynamic evolution in modeling Au-Au collisions at RHIC, yielding a satisfactory description of the directed flow [31]. The parameters of the longitudinal profile, the plateau width 2η_0 and the width of the Gaussian tails σ_η, are adjusted as initial conditions for 3+1-D viscous hydrodynamic calculations to reproduce the charged particle pseudorapidity distributions in Au-Au collisions at 200GeV (η_0 = 1.5, σ_η = 1.4 [16]) and Pb-Pb collisions at 2.76TeV [28] (η_0 = 2.3, σ_η = 1.4). For the present calculation we take σ_η = 1.4 and η_0 = 2.35, 2.4 for interactions at 3.11 and 4.4TeV respectively. An example of the initial entropy density in a d-Pb interaction event is shown in Fig. 6. Typically we observe strongly deformed, lumpy initial states. The elongated shape of the source results from the configuration of the nucleons in the deuteron while hitting the larger nucleus. This configuration is more important for the resulting eccentricity and the total number of participant nucleons than the impact parameter (as long as the deuteron hits the core of the Pb nucleus).
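The participant-plane definitions and the Gaussian smearing above translate directly into code. The sketch below is a toy version only: the participant positions are drawn from a 2D Gaussian instead of the GLISSANDO Glauber sampler, and the normalization of the smeared density is schematic.

```python
import numpy as np

def epsilon_n(x, y, n):
    """Eccentricity eps_n of one event with respect to the angle psi_n
    that maximizes it, with r^2 weighting as in the text."""
    x, y = x - x.mean(), y - y.mean()          # recenter the participants
    r2, phi = x**2 + y**2, np.arctan2(y, x)
    qx = (r2 * np.cos(n * phi)).sum()
    qy = (r2 * np.sin(n * phi)).sum()
    return np.hypot(qx, qy) / r2.sum()

rng = np.random.default_rng(0)
xp, yp = rng.normal(0, 1.5, 20), rng.normal(0, 1.5, 20)  # toy participants
print("eps_2 =", round(epsilon_n(xp, yp, 2), 3),
      "eps_3 =", round(epsilon_n(xp, yp, 3), 3))

def participant_density(xg, yg, xs, ys, sigma_w=0.4):
    """Sum of Gaussians of width sigma_w (fm) centered on participants,
    i.e. the N_+/N_- factors entering the initial entropy density."""
    d2 = (xg[..., None] - xs)**2 + (yg[..., None] - ys)**2
    return np.exp(-d2 / (2 * sigma_w**2)).sum(axis=-1)

xg, yg = np.meshgrid(np.linspace(-5, 5, 101), np.linspace(-5, 5, 101))
density = participant_density(xg, yg, xp, yp)   # transverse profile
```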
III. VISCOUS HYDRODYNAMICS
We use second order relativistic viscous hydrodynamics to evolve the initial energy density in each event [32]. The initial entropy density is generated in the Glauber Monte-Carlo procedure described in the previous section. The viscous hydrodynamics incorporates deviations from local equilibrium in terms of the shear and bulk viscosities; at zero baryon density heat conductivity can be neglected. The corrections π^µν and Π to the energy-momentum tensor T^µν are evolved dynamically with second order relaxation equations, where ∇^µ = ∆^µν ∂_ν denotes the transverse gradient. The hydrodynamic equations are solved numerically in the proper time τ = √(t² − z²) on a grid in the transverse coordinates x, y and the space-time rapidity η_∥, starting from τ_0 = 0.6fm/c. We use s_0 = 0.72GeV³ in (2.4) for both p-Pb and d-Pb collisions, which gives the expected final multiplicities. We take for the relaxation time τ_π = 3η/(T s), and assume τ_Π = τ_π. The initial fluid velocity u^µ is taken as the Bjorken flow, the initial stress corrections from shear viscosity correspond to the Navier-Stokes formula, while the initial bulk viscosity corrections are zero, Π(τ_0) = 0. The details of the solution in 2+1-D and 3+1-D are given in [12,16].
The shear viscosity to entropy ratio in our calculation is not constant. It takes the value η/s = 0.08 in the plasma phase and increases in the hadronic phase [16]. The equation of state combines lattice QCD results at high temperatures [33] with a hadron gas model equation of state at lower temperatures. In constructing the equation of state we follow the procedure of [34]. The temperature dependence of the sound velocity has no soft point [16].
The hydrodynamic evolution stops at the freeze-out temperature of 135MeV. On the freeze-out hypersurface particle emission is performed following the Cooper-Frye formula in the event generator THERMINATOR [35], with viscous corrections to the equilibrium momentum distribution f_0, f = f_0 + δf_shear + δf_bulk. We use quadratic corrections in momentum for the shear viscosity and asymptotically linear corrections for the bulk viscosity (3.10), where the sum in the bulk correction runs over all the hadron species.
Fig. 7 shows the freeze-out hypersurface at η_∥ = 0 for a p-Pb event with N_part = 24. The dense source survives for 5fm/c, with a lifetime of the deconfined phase of 3.5fm/c (T > 160 MeV, solid line contour in Fig. 7).
IV. RESULTS
For each centrality class 50 hydrodynamic event are calculated. For each event several hundred THERMINA-TOR events are generated and analyzed together. This reduces non-flow effects, which in this case come from resonances decays. The numerical gird for the hydrodynamic evolution is set in the NN c.m. frame. The momenta of the emitted particles are boosted by y sh = 0.46 and 0.12 for p-Pb and d-Pb collisions to obtain spectra in the LHC laboratory frame.
The distribution of charged particles in pseudorapidity is shown in Fig. 8 for the three centrality classes defined in Sec. II. The density of charged particles at midrapidity for centrality bins N part ≥ 18 and 11 ≤ N part ≤ 17 in p-Pb is larger than observed in Pb-Pb collisions at 2.76TeV for centrality 70−80%. One can expect a similar degree of collective acceleration as in peripheral Pb-Pb events. The particle multiplicity in p-Pb interactions is of the same order as in p-p interactions with the highest multiplicity analyzed by the CMS collaboration [5]. While the nature of the high multiplicity p-p events is still unclear, the multiplicity in a p-Pb or d-Pb collision is simply related to the source size and density. The charged particle densities in pseudorapidity for p-Pb (Fig. 8) and d-Pb collisions (Fig. 9) are asymmetric, reflecting the predominant emission from the participant nucleons in the Pb nucleus [30]. For d-Pb collisions in the centrality bin N part ≥ 27 the particle multiplicity is similar as in 60−70% centrality Pb-Pb interactions. This makes the applicability of the hydrodynamic model even more justified in that case. The particle multiplicity in p-Pb events with N part ≥ 18 is similar as for d-Pb events with 16 ≤ N part ≤ 26. We find, that in all the cases the charged particle density at midrapidity is to within 5% proportional to the number of participant nucleons.
The transverse momentum spectra for π+ and K+ emitted in p-Pb collisions are hardened by the collective transverse flow generated in the hydrodynamic expansion (Figs. 10 and 11). For more central collisions the spectra are slightly flatter, as more transverse flow is generated. A similar picture appears for the transverse momentum spectra in d-Pb collisions. The spectra from the N_part ≥ 27 and 16 ≤ N_part ≤ 26 centrality bins have a similar effective slope and are harder than the ones for the 10 ≤ N_part ≤ 15 centrality class. It is interesting to observe that the transverse momentum spectra are harder in p-Pb than in d-Pb collisions. The transverse size of the fireball in p-Pb interactions is smaller but its density is higher, which leads to a faster transverse expansion than for d-Pb interactions.
Fluctuating initial densities (Fig. 6) have a nonzero eccentricity and triangularity (Figs. 4 and 5). The short hydrodynamical expansion stage in these systems is sufficient to generate noticeable elliptic and triangular flows. Fig. 14 shows the pseudorapidity dependence of the p_⊥ integrated elliptic v_2 and triangular v_3 flow coefficients in p-Pb interactions. In the calculations, 500 to 1500 THERMINATOR events are generated from each hypersurface obtained in a 3+1-D viscous hydrodynamic evolution. The event plane orientations Ψ_2 and Ψ_3 are found and the elliptic v_2 and triangular v_3 flow coefficients are calculated in each event. The average over the hydrodynamic events gives v_2{Ψ_2} = ⟨v_2⟩ and v_3{Ψ_3} = ⟨v_3⟩. The second cumulant flow coefficients include flow fluctuations, v_n{2} = √⟨v_n²⟩. A moderate dependence of the elliptic and triangular flows on centrality is seen. In p-Pb collisions both the eccentricity and the triangularity deformations of the initial shape are fluctuation dominated. We observe some reduction of the collective flow at forward and backward pseudorapidities. This reduction is due to an increase of dissipative effects and a shorter life-time of the source at nonzero space-time rapidities [36,37]. The form of the pseudorapidity dependence of the harmonic coefficients of the flow in Fig. 14 must be taken with caution, as 3+1-D viscous hydrodynamic calculations cannot reproduce it accurately [13,16]. The azimuthally asymmetric flow is different in d-Pb collisions (Fig. 15). The elliptic flow is significantly larger than in p-Pb interactions, reaching 0.097 for central collisions. We notice a strong centrality dependence: v_2 increases significantly for central collisions. The initial eccentricity of the source in d-Pb collisions is large (Fig. 5). The elliptic flow fluctuations (the difference between v_2{2} and v_2{Ψ_2} [38]) are relatively less important for the deuteron than for the proton induced interactions. The triangular flow in d-Pb is similar to that in p-Pb collisions, and does not vary strongly with the centrality.
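A minimal sketch of the event-plane extraction described here (Ψ_n from the particle azimuthal angles, then v_n with respect to it) is given below. The particle sample is a toy generated with a known v_2; the resolution and self-correlation corrections used in a real analysis, and the averaging over hydrodynamic events, are omitted.

```python
import numpy as np

def event_plane_vn(phi, n):
    """Event-plane angle Psi_n and flow coefficient v_n{Psi_n} from the
    azimuthal angles of one event (no resolution correction)."""
    qx, qy = np.cos(n * phi).sum(), np.sin(n * phi).sum()
    psi_n = np.arctan2(qy, qx) / n
    return np.cos(n * (phi - psi_n)).mean(), psi_n

# Toy event: sample dN/dphi ~ 1 + 2 v2 cos(2 phi) by rejection sampling.
rng = np.random.default_rng(1)
v2_true, phi = 0.05, []
while len(phi) < 5000:
    p = rng.uniform(0.0, 2.0 * np.pi)
    if rng.uniform(0.0, 1.0 + 2.0 * v2_true) < 1.0 + 2.0 * v2_true * np.cos(2.0 * p):
        phi.append(p)

v2, psi2 = event_plane_vn(np.array(phi), 2)
print(f"v2 = {v2:.3f} (input {v2_true}), Psi_2 = {psi2:.3f}")
# Averaging v_n over many hydrodynamic events gives v_n{Psi_n}; the
# second cumulant v_n{2} = sqrt(<v_n**2>) also picks up fluctuations.
```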
The hydrodynamic response translates the initial deformation of the fireball into the azimuthal asymmetry of the final flow on an event-by-event basis. The elliptic flow coefficient follows the initial eccentricity more closely than the triangular flow follows the initial triangularity, both in the magnitude and in the orientation of the event plane. This observation is in agreement with other studies [39]. The hydrodynamic response v_2/ǫ_2 is larger for central collisions, where dissipative effects that reduce the flow asymmetry are smaller [40].
The elliptic and triangular flow coefficients show a hydrodynamic behavior as functions of the transverse momentum (Figs. 16 and 18). As for the integrated flow, there is little change with centrality for the elliptic flow, while some decrease of v_3 in the most central p-Pb collisions is seen. The elliptic flow v_2(p_⊥) for the d-Pb system shows a stronger variation with the centrality.
V. CONCLUSIONS
The formation of a hot, collectively expanding fireball in p-Pb collisions at √s_NN = 4.4TeV and d-Pb collisions at 3.11TeV is studied. We perform 3+1-D event by event viscous hydrodynamic calculations. The initial size and shape of the fireball are taken from the Glauber Monte-Carlo model. The initial entropy density is adjusted to reproduce the expected particle multiplicity estimated as an extrapolation from observations in peripheral Pb-Pb collisions at √s_NN = 2.76TeV. A small hot and dense fireball is formed. It expands rapidly in the transverse direction. The p_⊥ spectra of emitted particles get harder, especially for p-Pb collisions. The deconfined phase survives for 3 − 4fm/c in events with high particle multiplicity; the presence of such a dense medium should be visible in the nuclear attenuation factor for high p_⊥ hadrons. The size and the life-time of the source can be further constrained in identical-pion interferometry measurements. The initial eccentricity and triangularity of a lumpy initial fireball lead to the formation of an azimuthally asymmetric flow. The elliptic flow is 3 − 4% in p-Pb collisions, with little centrality dependence (Fig. 20). For the d-Pb system, the elliptic flow is significantly larger, increasing for central collisions, and reaching almost 10%. A comparison to peripheral Pb-Pb collisions in Fig. 20 shows that similar conditions are realized in proton or deuteron induced interactions. An elliptic flow of that magnitude can be measured, with a different dependence on centrality in p-Pb and d-Pb collisions.
Let us close with a discussion of future prospects for proton and deuteron induced reactions at ultrarelativistic energies. p-Pb collisions at 4.4TeV are planned in the near future at the LHC [7]. The elliptic flow and the hardening of the p_⊥ spectra are noticeable and should be looked for in the experimental analysis. However, it must be stressed that the dynamics of such small systems is at the limit of the applicability of the viscous hydrodynamic model. The use of the hydrodynamic model in d-Pb interactions is better justified, and the elliptic flow is stronger, but such collisions are not planned in the near future at the LHC. The shift to the maximum LHC energies of √s_NN = 6.22TeV for d-Pb and 8.8TeV for p-Pb collisions results in an increase of the particle multiplicity by 30%. The hydrodynamic expansion would last longer, with smaller dissipative effects. The eccentricity and triangularity are similar to those at lower LHC energies and qualitatively we expect similar results.
In view of the results in this paper it seems very interesting to look for collective effects in d-Au collisions at √s_NN = 200GeV in RHIC experiments. The multiplicity in central d-Au interactions is similar to that in peripheral Au-Au collisions at the same energy. If some stage of collective expansion is present, the large initial eccentricity in a d-Au system should translate into a measurable elliptic flow. Unfortunately no published data exist for these experiments; hydrodynamical simulations are underway.
|
2012-01-28T16:33:10.000Z
|
2011-12-05T00:00:00.000
|
{
"year": 2011,
"sha1": "0598fdb81c9cf4e8018b073e6b3bd44fa327a2df",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1112.0915",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0598fdb81c9cf4e8018b073e6b3bd44fa327a2df",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
19144764
|
pes2o/s2orc
|
v3-fos-license
|
Robot-assisted adrenalectomy: current perspectives
Laparoscopy has established itself as the procedure of choice for performing adrenalectomy in benign adrenal disorders. Although laparoscopy scores heavily over the open approach in terms of less blood loss and pain, shorter hospital stay and better cosmesis, it is riddled with certain shortcomings such as the need for dexterity, two-dimensional vision, and dependence on an assistant for camera control. Robotic surgery promises to overcome these limitations. Multiple series have established that robotic adrenalectomy is as safe and effective a procedure as conventional laparoscopy. Recently, robotic surgery has been found to be precise and accurate in performing partial adrenalectomy in hereditary adrenal syndrome cases. Other advances like single-port surgery have expanded the horizon and indications of robotic surgery. This review aims at studying the current evidence available for the effectiveness of robot-assisted adrenalectomy and defining its current status in managing adrenal disorders.
Introduction
Laparoscopic adrenalectomy (LA) has been recognized as the procedure of choice for the management of, especially benign, adrenal disorders. Various studies have proved the feasibility, efficacy and safety of this approach. [1][2][3][4] As compared to open adrenalectomy (OA), LA involves less blood loss, less pain and a shorter hospital stay and offers better cosmesis. [5][6][7][8][9][10][11][12] Despite all these advantages, LA has been restricted to a handful of experienced surgeons at high-volume centers because of certain technical drawbacks like the two-dimensional view, stiff non-articulating instruments, dependence on an assistant for the camera, etc. Robot-assisted adrenalectomy (RA) is the most recent addition to the armamentarium of surgeons. RA overcomes the limitations of laparoscopic surgery by providing surgeons with a three-dimensional magnified view, better ergonomics, control of the camera and multi-articulated instruments using Endowrist technology. Recent studies, reviews and meta-analyses have tried to establish its superiority over LA and OA in terms of faster convalescence and lower complication rates. [13][14][15][16][17][18][19][20][21][22][23][24][25] Higher cost and longer operating times have, however, prevented RA from gaining widespread acceptance. A recent large international series has shown that conventional laparoscopy and laparoendoscopic single-site surgery seem to be the most commonly adopted techniques, whereas minilaparoscopy and RA seem to be gaining popularity at a slower rate. 26 In this review, the current status of the robotic approach in performing adrenalectomy is analyzed.
In 2000, Horgan and Vanuno first reported the use of a robot in performing laparoscopic bilateral adrenalectomy. 27 Since then, many authors published their experience of performing RA in small series of patients until Winter et al published the first series of 30 patients in 2006. Three different surgeons at a single center performed the surgeries. The median operative (OR) time was 185 minutes, the postoperative complication rate was 7%, the median hospital stay was 2 days and there was no conversion to open or laparoscopic surgery. 28 Since then, more than 50 studies have been published describing RA, but few studies have included more than 20 cases, the minimum number of cases required for achieving the learning curve of RA. 16
RA vs LA
Multiple series have compared laparoscopic and robotic transperitoneal adrenalectomy. Most of these studies have been retrospective observational comparative studies, 13,14,[16][17][18][19][20][21][22][23][24],26 and only one of them was a randomized controlled trial. 15 One of the first series comparing LA with robotic adrenalectomy was by Morino et al, who evaluated outcomes in 10 patients each undergoing laparoscopic and robotic adrenalectomy. 15 The authors observed that although the laparoscopic approach took less time (115 vs 169 minutes), it had a higher conversion rate (4 vs 0) and a longer length of stay (LOS; 5.7 vs 5.4 days) compared to the robotic approach. Brunaud et al performed a retrospective analysis of a large series of 50 robotic and 59 laparoscopic adrenalectomies. They found that though the OR time was longer in the robotic group (189 vs 159 minutes), it was associated with less blood loss (49 vs 71 mL) and a shorter LOS (6.3 vs 6.9 days), with a similar conversion rate (4 vs 4). 16 Table 2 describes various studies that have compared outcomes in the laparoscopic and robotic groups.
Brandao et al recently conducted a systematic review and meta-analysis comparing laparoscopic and robotic adrenalectomy. 20 This meta-analysis of nine studies, eight retrospective observational and one randomized controlled trial, included 600 patients (277 RA vs 323 LA). Although they did not observe any difference in terms of conversion rates, OR time, or postoperative complications, the robotic group was found to have less blood loss and a shorter hospital stay. As a part of the International Consultation on Urologic Diseases and European Association of Urology consultation on Minimally Invasive Surgery in Urology, an extensive methodological systematic review of the literature on laparoscopic and robotic adrenalectomy in the treatment of adrenal diseases was performed. The literature was searched systematically until January 2014 to identify studies comparing the safety and efficacy of different modalities of minimally invasive adrenal surgery techniques. The authors presented major findings in an evidence-based fashion and provided a set of recommendations. They concluded that RA might be considered an alternative to LA but requires further study (Grade B). 34 Tang et al performed a meta-analysis that included eight studies (232 cases and 297 controls) assessing RA vs LA, of which six were prospective and two were retrospective. Patients in the LA group had a significantly shorter OR time (weighted mean difference [WMD] = 17.52 minutes; 95% confidence interval [CI], 3.48-31.56; p = 0.01), but patients in the RA group had significantly lower estimated blood loss (EBL; WMD = −19.00 mL; 95% CI, −34.58 to −3.41; p = 0.02) and a shorter LOS (WMD = −0.35 day; 95% CI, −0.51 to −0.19; p < 0.001). Patients in both groups had similar conversion rates and overall complications. 35 The same findings have been corroborated in another meta-analysis and systematic review by Chai et al. 36 RA has also been compared with conventional LA in certain subsets of patients. Aksoy et al compared outcomes of these two approaches in obese patients (42 patients in the robotic group and 57 in the laparoscopic group). The authors observed similar OR times (186.1 vs 187.3 minutes), less EBL (50.3 vs 76.6 mL) and a shorter LOS (1.2 vs 1.7 days) in the robotic group. There were no conversions in the robotic group, whereas the conversion rate was 5.2% in the laparoscopic group. 18
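The pooled effect sizes quoted above (WMD with a 95% CI) come from standard inverse-variance meta-analysis. The sketch below shows a fixed-effect version of that computation; the per-study numbers are hypothetical placeholders, not the data of the cited meta-analyses.

```python
import numpy as np

# Each row: (mean_RA, sd_RA, n_RA, mean_LA, sd_LA, n_LA) for EBL in mL.
studies = [
    (49, 30, 50, 71, 40, 59),   # hypothetical study 1
    (50, 28, 42, 77, 45, 57),   # hypothetical study 2
    (60, 35, 30, 75, 38, 31),   # hypothetical study 3
]

md, w = [], []
for m1, s1, n1, m2, s2, n2 in studies:
    var = s1**2 / n1 + s2**2 / n2   # variance of the mean difference
    md.append(m1 - m2)              # RA minus LA
    w.append(1.0 / var)             # inverse-variance weight

md, w = np.array(md), np.array(w)
wmd = (w * md).sum() / w.sum()      # fixed-effect pooled WMD
se = np.sqrt(1.0 / w.sum())
print(f"WMD = {wmd:.1f} mL, 95% CI [{wmd - 1.96*se:.1f}, {wmd + 1.96*se:.1f}]")
```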
Posterior retroperitoneoscopic adrenalectomy (PRA)
Theoretically, the posterior retroperitoneal approach to adrenalectomy offers distinct advantages such as a decreased chance of postoperative ileus and other intestinal complications, as the peritoneum is not breached, along with decreased postoperative pain. Ludwig et al first reported on their experience of performing robotic PRA on six patients. The mean tumor size in the study population was 2.8 cm with a mean OR time of 121 minutes. The OR time was further reduced to a mean of 57 minutes for five patients in whom the entire dissection was performed robotically. The authors did not observe any morbidity or mortality. 38 Berber et al published their experience of performing RA on 23 patients, of whom eight underwent retroperitoneal adrenalectomy. The average tumor size in this study was 2.9 cm with an average OR time of over 3.5 hours. More importantly, the docking time decreased from 1 hour at the beginning of the study to 15 minutes in the last four cases. According to the authors, patients with smaller tumors, bilateral disease and multiple prior operations are best suited for the retroperitoneal approach. 39 The surgeons in both series had vast experience of performing retroperitoneoscopic surgery and robotic procedures.
Karabulut et al compared robotic adrenalectomy with the laparoscopic approach. Thirty-two patients were operated on through the transabdominal route and 18 through the posterior retroperitoneoscopic approach in both the laparoscopic and robotic groups. They did a step-by-step evaluation of OR time and found that both LA and robotic PRA had similar OR times for each step, except for a shorter time for hemostasis in the robotic group (23 ± 4 vs 42 ± 9 minutes, p = 0.03). Later on, the same group published their results of a head-to-head comparison of laparoscopic and robotic PRA in 31 patients each. Although the mean (standard error of the mean, SEM) OR time was similar in both groups, the mean (SEM) OR time of the last 21 cases of robotic-assisted PRA (139.1 [10.9] minutes) was significantly shorter than that of the laparoscopic group.
RA for malignant diseases
The use of minimally invasive approach for adrenocortical carcinoma is still a matter of debate. There has not been any prospective or randomized controlled trial comparing robotic or laparoscopy technique with open surgery. Some retrospective case reports and series have reported increased recurrence, peritoneal carcinomatosis, positive margins and local recurrence rates for laparoscopic cases compared to open surgery. [40][41][42][43][44] Robotic adrenalectomy has so far been mainly performed for benign diseases, but it has also been reported for adrenal cancer, adrenal metastasis and oncocytoma. [45][46][47]
Recent advances in robotic adrenalectomy
Single-port surgery
Park et al first described the use of a robot for performing single-port PRA. Five patients underwent the procedure. The mean tumor size, OR time and EBL were 1.48 ± 0.28 (range: 1.0-1.7) cm, 159.4 ± 57.6 (range: 103-245) minutes and 46.0 ± 56.8 (range: 5-120) mL, respectively. The average time to oral intake and postoperative hospital stay were 0.65 ± 0.11 (range: 0.54-0.79) days and 4.0 ± 2.23 (range: 3-8) days, respectively. There were no conversions to open surgery or postoperative complications. 48 Arghami et al described their experience of performing single-port RA in 16 patients and did a matched cohort analysis with 16 patients of LA. The OR time was 183 ± 33 minutes for single-port RA and 173 ± 40 minutes for LA (p = 0.58). There was one conversion to OA (6%) in each group, both because of bleeding on the right side during bilateral adrenalectomy. Two right-sided single-port RA patients required conversion to LA, one because of poor visualization. Both groups had similar pain scores (mean of 3.7 on a scale from 1 to 10) on postoperative day (POD) 1, and patients in the single-port RA group used less narcotic pain medication in the first 24 hours after surgery (43 vs 84 mg in the LA group, p = 0.001). The differences between the single-port RA group and LA group in LOS (2.3 ± 0.5 vs 3.1 ± 0.9 days, p = 0.23), percentage of patients discharged on POD 1 (56% vs 31%, p = 0.10) and hospital cost (16% lower in the single-port RA group, p = 0.17) did not reach statistical significance. 49 Lee et al described their experience of performing robotic single-site adrenalectomy in 33 patients. The mean OR time was 118 ± 25.8 minutes. Sixty-seven percent of patients had pain scores of less than 4 (on a scale of 1−10). Seventy-four percent of patients were discharged on POD 1, and 96% were discharged on POD 2. OR times were found to drop significantly from a mean of 124 to 103 minutes after 21 adrenalectomies (p = 0.05). 50
Robotic partial adrenalectomy
Although total adrenalectomy has been traditionally advocated for bilateral adrenal disorders especially in hereditary syndromes like multiple endocrine neoplasia type 2, Von Hippel-Lindau disease and neurofibromatosis type I with decreased chances of recurrence, its benefits must be weighed against the morbidity of medical adrenal replacement therapy. Lifelong adrenal replacement therapy after bilateral adrenalectomy may predispose patients to osteoporosis, Addisonian crisis and decreased quality of life. 51 Hence, adrenal-sparing surgery or partial adrenalectomy has been suggested for patients with hereditary adrenal-producing syndromes, bilateral or multifocal lesions or solitary adrenal glands.
Kumar et al first described the technique of robotic-assisted partial adrenalectomy in a patient with isolated adrenal metastasis. 47 Boris et al described their initial experience of 13 partial adrenalectomies in 10 patients. Median OR time was 200 minutes, median blood loss was 150 mL and median tumor size was 2.7 cm. No patient developed any intraoperative complication related to catecholamine surge such as hypertensive crisis, prolonged hypotension, myocardial infarction or cerebrovascular accident. There were no recurrences, and only one patient required steroid replacement. 52 There have been no comparative or prospective series of robotic and laparoscopic partial adrenalectomy. Hence, more research is required to fully define the role of robotic-assisted partial adrenalectomy.
Cost
Cost of robotic surgery is one of the prime and significant concerns for its widespread acceptance globally. The high cost associated with the robot is mainly related to its high procurement cost, the use of expensive surgical consumables and longer OR times, along with high maintenance costs. Hence, a significant advantage needs to be established over other approaches so as to overcome this cost barrier. In general, the cost of surgical consumables along with maintenance charges adds roughly 900−950$ per procedure to the usual cost of the surgery. 14 Another study estimated the additional cost to be around 1400−2900$ for performing unilateral RA as compared to LA. 53 A study by Bodner calculated that RA was approximately 1.5 times more costly than LA. 54 On the contrary, Winter et al reported no significant difference between RA and OA or LA in terms of total cost. 28 In another series, RA was found to be 2.3 times more expensive than LA. The authors suggested that this cost difference could be offset by multidisciplinary and high-volume use of robots by other surgical specialties and when depreciation of robotic equipment is distributed over 10 years instead of 5 years. 29 Arghami et al showed that single-port RA was 16% less expensive than LA (84% ± 14% vs 100% ± 16%), although the difference was not statistically significant. 49 Probst et al calculated that the additional cost of the robotic procedure is €2288 per procedure provided there are more than 150 robotic procedures in a year. The expenses could be further reduced with more cases at high-volume centers. They also showed that the overall cost for patients undergoing RA was lower than that for patients undergoing OA because of the difference in the length of hospital stay. 37
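The volume and depreciation arguments in this section reduce to simple arithmetic; the sketch below makes them explicit. All monetary figures are hypothetical placeholders, not values from the cited studies.

```python
# Back-of-the-envelope per-procedure cost of a surgical robot.
PURCHASE = 1_500_000      # acquisition cost (currency units, hypothetical)
MAINTENANCE = 100_000     # annual service contract (hypothetical)
CONSUMABLES = 950         # per-procedure consumables (hypothetical)

def cost_per_case(cases_per_year, depreciation_years):
    """Fixed costs spread over annual volume, plus consumables."""
    fixed = PURCHASE / depreciation_years + MAINTENANCE
    return fixed / cases_per_year + CONSUMABLES

for n in (50, 150, 300):
    print(f"{n:>3} cases/yr: 5y depr = {cost_per_case(n, 5):8,.0f}, "
          f"10y depr = {cost_per_case(n, 10):8,.0f}")
# Higher annual volume and longer depreciation both shrink the per-case
# premium, which is the argument made in the studies cited above.
```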
Conclusion
The robotic approach is safe, feasible and as effective as conventional laparoscopy for performing adrenalectomy, especially for benign adrenal disorders. Depending upon the body habitus of the patient and the experience of the surgeon, either the transperitoneal or the retroperitoneal approach can be safely used to perform RA. Robotic surgery has also been demonstrated to be superior to laparoscopy in cases of large tumors, partial adrenalectomy and pheochromocytomas and in obese patients, but the number of patients has been limited in these retrospective series. Higher cost and the small number of patients studied preclude the widespread adoption of this technique. High-volume centers and experienced surgeons can safely adopt this technology for better surgeon ergonomics and more precise dissection of adrenal tumors. Well-designed randomized controlled trials can clearly establish the exact status of robotics in performing adrenalectomy.
Disclosure
The author reports no conflicts of interest in this work.
Recovery of Bacillus licheniformis Alkaline Protease from Supernatant of Fermented Wastewater Sludge Using Ultrafiltration and Its Characterization
The recovery of alkaline protease from wastewater sludge fermented by B. licheniformis ATCC 21424 was investigated using centrifugation and ultrafiltration. Optimization of the ultrafiltration parameters (transmembrane pressure (TMP) and feed flux) was carried out with a 10 kDa membrane. A TMP of 90 kPa and a feed flux of 714 L/h/m2 gave the highest recovery (83%) of the enzyme from the centrifuged supernatant. The recovered enzyme showed maximum activity at 60°C and pH 10. It was stable between pH 8 and 10 and retained 97% activity at 60°C after 180 min of incubation. Enzyme activity was significantly augmented by metal ions such as Ca2+ and Mn2+. Protease inhibitors such as phenylmethyl sulphonyl fluoride (PMSF) and diisopropyl fluorophosphate (DFP) completely inhibited the enzyme activity. The partially purified protease showed excellent stability and compatibility with various commercial detergents. The detergent (Sunlight) supplemented with the enzyme as an additive effectively removed blood stains.
Introduction
Membrane separation processes are among the most widespread in the field of biotechnology, and they are more easily operated and scaled up than other bioseparation processes such as chromatography and electrophoresis. Among the various membrane separation processes, ultrafiltration is a pressure-driven process that is widely used for the separation and purification of products including enzymes and other proteins [1][2][3], or to recover microbial products (cells and spores) present in a culture medium [4][5][6]. Because of the low amount of enzyme present in the cell-free filtrate, water removal is a primary objective. Ultrafiltration is an effective technique that has been largely used for the recovery of enzymes [7,8] and, in general, is a preferred alternative to evaporation. This pressure-driven separation process is inexpensive and gives encouraging results with little loss of enzyme activity. It offers both concentration and purification [9].
However, membrane processes in general have some specific problems, such as fouling or membrane clogging due to precipitates formed by the final product and/or the deposition of solid particles on the membrane. If the solute flow towards the membrane is greater than the solute flow through the membrane, the solute accumulates on the membrane surface; this accumulation forms a concentration layer known as concentration polarization [10]. Tangential flow filtration is a powerful and advantageous alternative to normal flow filtration, as it significantly reduces fouling of the membrane. Clogging or fouling can usually be alleviated or overcome by treatment with detergents, proteases, acids or alkalis [11]. In fact, the ultrafiltration process has been used effectively for the recovery of organic compounds from several synthetic media [12][13][14].
Proteases are commercially important industrial enzymes accounting for 60% of total enzyme sales, with two thirds of the proteases produced coming from microorganisms [15][16][17]. Microbial enzymes are replacing chemical catalysts in the manufacturing of chemicals, food, leather goods, pharmaceuticals, and textiles. Among proteases, alkaline proteases are employed mainly as detergent additives because of their distinctive ability to digest proteinaceous stains such as blood, chocolate, and milk. Currently, alkaline protease-based detergents are preferred over conventional synthetic detergents, as they have better cleaning properties, higher efficiency at lower washing temperatures, and safer dirt removal conditions [18]. Ideally, proteases used in detergent formulations must have a high activity level and stability over a wide range of pH and temperature. A major drawback is that enzymes recovered from thermophiles are often unstable at alkaline pH, while enzymes from alkalophiles confer stability over a wide pH range but are generally thermolabile. So, there is always a need for proteases with all the desirable properties suited to application conditions, and it is necessary to check the stability of the enzyme at elevated temperatures and pH. Applications such as detergent formulation need concentrated, clean enzyme to blend with the detergent for good performance during storage and application. The enzyme is cleaner when the medium is simple and defined, whereas in the case of a sludge medium the fermented enzyme contains many sludge particles and other impurities, so the enzyme has to be clarified and concentrated to obtain higher activity.
Huge amounts of municipal wastewater sludge are generated in Canada. With increasing population and development, and strict regulations on sludge disposal, sludge management is becoming a crucial environmental concern. Bioconversion of wastewater sludge into value-added products is therefore an economical and eco-friendly approach. The use of wastewater sludge for the production of alkaline protease with Bacillus licheniformis has been successfully achieved in our laboratory [19,20].
The aim of the present study was to recover and concentrate alkaline protease activity from the culture filtrate of fermented wastewater sludge using ultrafiltration. The efficiency of the enzyme was examined in the presence of standard commercial detergents, and the enzyme was characterized with respect to the effect of various additives such as stabilizers and inhibitors on its stability at higher temperatures and alkaline pH.
Bacterial Strain. Bacillus licheniformis strain ATCC 21424 was used in this study. An active culture was maintained by inoculating on nutrient agar (composition: 0.3% beef extract, 0.5% peptone, and 2% agar) plates and incubating at 35°C for 48 h. The plates were stored at 4°C for later use.
Sludge Samples and Composition.
Secondary wastewater sludge samples collected from the municipal wastewater treatment plant of Communauté Urbaine de Quebec (CUQ, Quebec) were used. The sludge was centrifuged in order to obtain a higher suspended solids concentration (30 g/L), and the experiments were conducted at this concentration. The sludge characteristics were measured according to standard methods [21], and the sludge composition is presented in Table 1.
Inoculum and Cultural Conditions.
A loopful of bacterial growth from a nutrient agar plate was used to inoculate a 500 mL Erlenmeyer flask containing 100 mL of sterilized nutrient broth (composition: 0.3% beef extract and 0.5% peptone) (sterilized at 121°C for 15 min). The flask was incubated in a shaker incubator (New Brunswick) at 35°C and 220 rpm for 12 h. 500 mL Erlenmeyer flasks containing 100 mL of sterilized sludge were then inoculated with 2% (v/v) inoculum from the above flask. The flasks were incubated in the same way for 12 h, and these actively growing cells were used as the inoculum for fermentor experiments.
Fermentation.
A fermenter (Biogénie Inc., Quebec) of 15 L capacity, equipped with accessories and automatic control systems for dissolved oxygen, pH, antifoam, impeller speed, aeration rate and temperature, was used for the production of extracellular alkaline protease, with a working volume of 10 L of sludge supplemented with 1.5% (w/v) soybean meal and 1.5% (w/v) lactose (sterilized at 121°C for 30 min). The medium was inoculated with 4.5% (v/v) inoculum. Temperature and pH of the fermentation medium were controlled at 35°C and 7.5, respectively. Dissolved oxygen concentration was maintained above 20% (1.56 mg O2/L) saturation (the critical oxygen concentration) by agitating the medium initially at 200 rpm, later increased up to 500 rpm, with the air flow rate controlled automatically by a computer-controlled system. The fermented broth was collected aseptically in HDPE bottles (VWR Canlab, Canada) after 42 h of fermentation, sealed with paraffin and preserved at 4°C for further use.
Centrifugation. The fermented broths were centrifuged [22], and the supernatants were collected and stored at 4°C.
Ultrafiltration
Operating Principle and Washing of the Filter. The equipment used for ultrafiltration was of the tangential flow filtration type (PREP/SCALE-TFF cartridges, Millipore) with recirculation. The fluid was pumped tangentially along the surface of the membrane, and pressure was applied to force a portion of the fluid through the membrane to the permeate side. The supernatant from centrifugation was fed into the ultrafiltration equipment by a pump (Easy Load, Master Flex, Millipore). The supernatant was brought to room temperature (25°C) in order to conduct the ultrafiltration study. The process consisted of feeding aseptically a volume (1 L) of the supernatant from the centrifugation step, referred to as the "feed", through the membrane in order to concentrate the active components into a concentrated volume, referred to as the "retentate", which was 18% of the volume of the supernatant [28]. The flow of the supernatant was obtained by means of a pump whose flow varied between 45.5 L/h and 250 L/h, which gave a feed flux through the membrane ranging between 455 L/h/m2 and 2500 L/h/m2. The flow of permeate generally depended on the TMP and the resistance of the membrane. After ultrafiltration, the permeate and retentate were collected in flasks. Samples of the supernatant, retentate, and permeate were taken for measurement of physical and biological parameters (total solids, suspended solids, soluble protein, and protease activity). After each ultrafiltration operation, the liquid in the membrane was completely drained. Taking into account the type of medium used in this study (a biological medium), it was recommended to use an alkaline solution (0.1 N NaOH). The alkaline solution was passed through the membrane until the membrane was clean. Later, the membrane was removed and inverted to facilitate complete washing. The resistance of the membrane can be assessed by the normalized water permeability (NWP), as the performance of the membrane depends on the NWP. The NWP is calculated by equation (1):

NWP = (water filtrate flux / TMP) × TCF, (1)

where TMP is the transmembrane pressure (15 psi) and TCF is the temperature correction factor to 20°C. In fact, during the use of the membrane, the value of NWP decreases, and when the value falls to between 10% and 20% of its initial value (that of the new membrane, whose NWP is taken as 100%), the membrane should be changed [6] (a numeric sketch of this check follows at the end of this subsection). Prior to the ultrafiltration runs, we determined the NWP to check the resistance of the membrane.

Membrane Size. Membranes with molecular weight cut-offs (MWCO) of 10 kDa and 100 kDa were used in the present study (Millipore, prep/scale spiral wound TFF-1). The membrane was made of regenerated cellulose and was of the spiral wound TFF-1 module PLCC type with a surface area of 0.1 m2. The supernatant was passed first through the 100 kDa membrane to eliminate the other sludge impurities, and the final permeate was collected as the enzyme source for the optimization studies using the 10 kDa membrane. The concentrated enzyme was used for characterization. The characteristics of the membranes are presented in Table 2.
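As a rough illustration of the NWP check in equation (1), the sketch below computes NWP for a new and a used membrane and applies the 10-20% replacement rule of thumb. The TMP of 15 psi is taken from the text; the flux and TCF values are illustrative assumptions, not measurements from the study.

    def nwp(water_flux_l_h_m2, tmp_psi, tcf):
        """Normalized water permeability, equation (1): (water filtrate flux / TMP) x TCF."""
        return (water_flux_l_h_m2 / tmp_psi) * tcf

    nwp_new = nwp(600.0, 15.0, 1.00)   # new membrane, taken as the 100% baseline
    nwp_used = nwp(80.0, 15.0, 1.02)   # membrane after extended use (assumed flux)

    retention_pct = 100.0 * nwp_used / nwp_new
    print(f"NWP is {retention_pct:.0f}% of the new-membrane value")
    if retention_pct <= 20.0:  # replace when NWP falls to 10-20% of its initial value
        print("Membrane should be changed")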
Optimization of Parameters of Ultrafiltration. Transmembrane pressure and feed flux are important parameters to be controlled in ultrafiltration. For the optimization, the experiment was carried out for various values of TMP (70-110 kPa) and feed flux (455-2500 L/h/m2). In a typical ultrafiltration process, lower permeate flow results in higher solute concentration in the retentate. Samples were withdrawn to determine the suspended solids, total solids, soluble protein, and protease activity in the retentate and permeate.

Protease Assay. Protease activity was determined by incubating the enzyme with casein (Sigma-Aldrich Canada Inc) for 10 min at 37°C in a constant temperature water bath. The reaction was terminated by adding 5 mL of 10% (w/v) trichloroacetic acid. This mixture was incubated for 30 min in order to precipitate the nonhydrolyzed casein. Samples and blanks were filtered using Whatman filter paper (934-AH) after the incubation period. The absorbance of the filtrate was measured at 275 nm. The results were validated by treating a standard enzyme solution of known activity under identical experimental settings. One international protease activity unit (IU) was defined as the amount of enzyme preparation required to liberate 1 μmol (181 μg) of tyrosine from casein per minute at pH 8.2 and 37°C. All experiments were conducted in triplicate and the mean value is presented.
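For orientation, the sketch below converts a blank-corrected A275 reading into IU/mL under this unit definition. The standard-curve slope and the 1 mL enzyme volume are assumptions for illustration; the study instead validated readings against a standard enzyme solution of known activity.

    def protease_activity_iu_per_ml(a275_sample, a275_blank,
                                    umol_tyrosine_per_abs=0.9,  # assumed standard-curve slope
                                    reaction_min=10.0,          # incubation time from the assay
                                    enzyme_ml=1.0):             # assumed enzyme volume
        """One IU liberates 1 umol of tyrosine from casein per minute (pH 8.2, 37 C)."""
        umol_tyrosine = (a275_sample - a275_blank) * umol_tyrosine_per_abs
        return umol_tyrosine / (reaction_min * enzyme_ml)

    print(f"Activity: {protease_activity_iu_per_ml(0.85, 0.08):.2f} IU/mL")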
Effect of pH on Enzyme Activity and Stability of Protease.
The activity of the protease was measured at different pH values in the absence and presence of 10 mM CaCl2. The pH was adjusted using different buffers: acetate buffer (pH 5), phosphate buffer (pH 6-7), borate buffer (pH 8-9), bicarbonate buffer (pH 10), and Robinson and Stokes buffer (pH 11-12). Reaction mixtures were incubated at 37°C and the activity of the enzyme was measured. The stability of the enzyme was determined by incubating the reaction mixtures at various pH values using the relevant buffers (pH 5-12) for 2 h at 37°C. The residual activity after incubation was determined under standard assay conditions. Residual activities at the respective pH values are expressed taking the activity of the enzyme before incubation as 100%.
Effect of Temperature on Activity and Stability of Protease.
The optimum temperature was determined by assaying activity on casein at pH 10 from 30°C to 90°C in the absence and presence of 10 mM CaCl2, and relative protease activities were assayed under standard assay conditions using casein as the substrate.
The thermostability of the enzyme was measured by incubating the enzyme preparation at different temperatures ranging from 30°C to 90°C for 180 min in the absence and presence of 10 mM CaCl2. The residual activity after incubation was determined under standard assay conditions. Residual activities at the respective temperatures are expressed taking the activity of the enzyme before incubation as 100%.
Effect of Enzyme Inhibitor and Chelator on Protease Activity. The effect of various protease inhibitors (5 mM) and of the chelator EDTA on enzyme activity was determined by incubating the enzyme with each compound and measuring the residual activity under standard assay conditions.
Effect of Various Metal Ions on Enzyme Activity.
To study the effect of various metal ions (Ca2+, K+, Fe2+, Zn2+, Hg2+, Mg2+, Mn2+, Cu2+, Co2+, Na+) on enzyme activity, metal salt solutions were prepared at a concentration of 10 mM, and 1 mL of metal solution was mixed with 5 mL of enzyme and incubated for 2 h. Enzyme activities were measured under standard assay conditions. The activity is expressed as relative activity, taking the activity of the enzyme in the absence of metal salts just before the treatment as 100%.
Hydrolysis of Protein Substrates. The effect of various protein substrates such as casein, BSA, egg albumin, and gelatin was determined under assay conditions by mixing 1 mL of the enzyme and 5 mL of assay buffer containing the protein substrate (1.2% w/v). After incubation at 60°C for 10 min, the reaction was stopped by adding 10% (w/v) TCA. The undigested protein was removed by filtration (Whatman filter paper, 934-AH) and the absorbance of the filtrate was measured at 275 nm. The protease activity towards casein was taken as the control.
Effect of Detergents on Stability of Protease Activity.
Protease stability with commercial detergents was studied in the presence of 10 mM CaCl2. The detergents used were Merit Selection (Metrobrands, Montreal, Quebec), La Parisienne (Lavo Inc., Montreal, Quebec), Arctic Power (Phoenix Brands Canada), Bio-vert (Savons Prolav Inc., Canada) and Sunlight (The Sun Products of Canada Corporation, Ontario). The detergent solutions (0.7% w/v) were prepared in distilled water and incubated with the partially purified enzyme (2 mL recovered enzyme and 1 mL of 0.7% detergent) for up to 3 h at 60°C. At 30 min intervals, the protease activity was estimated under standard assay conditions. A control was maintained without any detergent and its enzyme activity was taken as 100%.
Application of Alkaline Protease in Removing Blood Stains.
The application of the protease (2 mL recovered enzyme) as a detergent additive in removing blood stains was studied on white square cotton cloth pieces measuring 4 × 4 cm, prestained with goat blood according to the method of Adinarayana [17]. The stained cloths were air dried for 1 h to fix the stain. Each stained cloth piece was placed in a separate flask and the following setups were prepared.
The above flasks were incubated at 60°C for 15 min. After incubation, the cloth pieces were taken out, rinsed with water, and dried. Visual examination of the pieces showed the effect of the enzyme in removing the stains. Untreated cloth pieces stained with blood were taken as the control.
Transmembrane Pressure (TMP). The TMP was given by the relation in [6].
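A minimal sketch of this relation is given below, assuming the conventional tangential flow filtration definition, TMP = (P_feed + P_retentate)/2 - P_permeate, with the permeate side open to atmosphere; the exact expression used in [6] may differ.

    def tmp_kpa(p_feed_kpa, p_retentate_kpa, p_permeate_kpa=0.0):
        """Conventional TFF transmembrane pressure (gauge pressures, kPa)."""
        return (p_feed_kpa + p_retentate_kpa) / 2.0 - p_permeate_kpa

    # e.g. gauges adjusted so the average pressure across the membrane is 90 kPa
    print(f"TMP = {tmp_kpa(110.0, 70.0):.0f} kPa")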
TMP is a function of the pressures of the feed and the retentate, which were adjusted by pressure gauges to obtain different TMP values. TMP values ranging from 70 kPa to 110 kPa were tested to find the optimum. Profiles of protease activity, soluble protein, total solids, and suspended solids in the retentate are presented in Figures 1 and 2, respectively. None of these components (protease activity, soluble protein, total solids, and suspended solids) was detectable in the permeate, as the size of alkaline proteases is in the range of 20-30 kDa [11], which is larger than the MWCO of the 10 kDa membrane. Adjalle et al. [6] similarly reported negligible values of viable spores, total cells and turbidity in the ultrafiltered permeates of different fermented broths during the recovery of biopesticide, as the sizes of spores and cells of B. thuringiensis (25 kDa and 30 kDa) are greater than the MWCO of the 5 kDa membrane. The highest protease activity (69 IU/mL), soluble protein (7.8 mg/mL), total solids (12 g/L) and suspended solids (2.7 g/L) were obtained at 90 kPa among all the TMP values tested. All tested components were concentrated 4.5-fold in the retentate relative to their initial values in the supernatant (Table 3). In fact, the protease activity was not detectable in any of the permeates.
Thus, 83% of the protease activity was recovered in the retentate at a TMP of 90 kPa, and it is apparent that some of the protease was lost as a deposit on the membrane and/or in the tubes. TMP values higher or lower than 90 kPa resulted in lower values of protease activity, soluble protein, total solids, and suspended solids. This may be because, when the TMP is lower than the optimum, the feed pressure may not be sufficient to pass the solution through the membrane effectively, and loss of components might have occurred in the tubes or on the membrane surface. When the TMP was high, the high pressure caused foaming in the ultrafiltration membrane; components retained in the tubes and on the membrane in the form of foam are lost, as they end up in neither the permeate nor the retentate. Flow of the solute across the membrane certainly leads to clogging of some of the pores, creating additional surfaces for adsorption and caking. As gentle conditions are required for the recovery of intact proteins, it would be difficult to remove/recover these proteins. Various mechanisms could generate these losses through the ultrafiltration membrane, as some of the sample components are close to the MWCO of the membranes. The primary cause of the loss of the enzyme protein through the membrane is the pore size distribution, while shear forces could also contribute by producing smaller fragments [25]. According to the ultrafiltration principle, minimal flow of permeate will result in minimal or no loss of solute (protease in the present context) in the permeate and will give a high concentration in the retentate. Minimal flow of the permeate can be obtained with an optimal value of the TMP [6]. Adjalle et al. [26] recorded TMPs of 90 kPa and 100 kPa as the optimum values for entomotoxicity recovery from starch industry wastewater and thermally hydrolyzed sludge medium, respectively. Other researchers reported that 20 kPa was best for the separation of serine alkaline protease from neutral protease and amylase, and 100 kPa for the separation of serine alkaline protease from organic and amino acids [3]. The optimum TMP differs from case to case, possibly owing to the rheological characteristics and other components present in the feed samples.

Feed Flux. At the optimum feed flux (714 L/h/m2), most of the protease was recovered in the retentate, while some of the protease might have been lost as a deposit on the membrane. Li et al. [27] reported that the purities of proteases increased more than ten times at a flow rate of 360 L/h during the separation of proteases from yellowfin tuna spleen by ultrafiltration, and Adjalle et al. [26] reported 550 L/h/m2 and 720 L/h/m2 as the optimal values for entomotoxic components (crystal protein, protease, etc.) from starch industry wastewater and thermally hydrolyzed sludge medium, respectively. Protease activity, protein, total solids, and suspended solids concentrations decreased at higher feed flux values (1250, 1667 and 2500 L/h/m2). This decrease at high feed flux is due to the fact that high flux can degrade product quality through the generation of turbulence [28]. Moreover, higher turbulence can cause severe foaming in the retentate stream, which will create a vacuum and further decrease the permeate flux below the optimum value, hence governing the overall performance of the ultrafiltration system [6]. Vibration in the filtration unit due to the higher feed flux can cause foaming due to the proteins (enzymes and other soluble proteins) present in the medium [29].
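The following is a small sketch of the mass-balance arithmetic behind the recovery and concentration figures above, assuming the 1 L feed was reduced to 18% of its volume as described earlier; the feed activity is an illustrative assumption chosen to be roughly consistent with the reported 69 IU/mL retentate.

    feed_volume_l = 1.0
    retentate_volume_l = 0.18 * feed_volume_l   # retentate was 18% of the supernatant volume
    feed_activity_iu_ml = 15.3                  # assumed feed (supernatant) activity
    retentate_activity_iu_ml = 69.0             # reported retentate activity at 90 kPa

    # Activity recovered in the retentate as a fraction of activity fed in.
    recovery = (retentate_activity_iu_ml * retentate_volume_l) / (feed_activity_iu_ml * feed_volume_l)
    concentration_factor = retentate_activity_iu_ml / feed_activity_iu_ml

    print(f"Recovery: {100 * recovery:.0f}%")                    # ~81%, close to the reported 83%
    print(f"Concentration factor: {concentration_factor:.1f}x")  # ~4.5-fold, as reported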
Effect of pH on Enzyme Activity and Stability of Protease.
pH is a determining factor in the expression of enzyme activity, as it alters the ionization state of the amino acids or of the substrate. The ionization state of an enzyme is undoubtedly one of the most crucial parameters controlling substrate binding, catalytic action, and the three-dimensional structure of the enzyme molecule. The effect of pH on enzyme activity (permeate of the 100 kDa membrane) and stability in the presence and absence of 10 mM CaCl2 is presented in Figure 5. The partially purified protease was found to be a typical alkaline protease, displaying maximum activity at pH 10; activity decreased sharply as the pH increased further. The active site of the enzyme is mainly composed of ionic groups that must be in the proper ionic form to maintain the conformation of the active site for substrate binding or reaction catalysis, as the conformation is sensitive to changes in the environment (such as pH) [30]. The optimum pH obtained for this enzyme was higher than in other reports showing a pH optimum of 8 for the protease from Haloferax lucentensis VKMM 007 [31] and a pH optimum of 9 for the protease from B. stearothermophilus [15]. However, it was in accordance with other findings reporting a pH of 10-10.5 as the optimum for proteases from B. subtilis PE-11, B. cereus and Vibrio metschnikovii [17,32,33]. The stability of the concentrated protease in the presence and absence of 10 mM CaCl2 is shown in Figure 5. When the protease was preincubated in various buffers over a broad range of pH values (pH 5-12) for 2 h and then measured for residual activity, it showed the highest stability over a broad range of pH 8 to 10. The stability declined to nearly 55% at pH values higher than 10. An important feature of the present enzyme is that it is an alkaline protease stable at alkaline pH up to 11, so it can be used as an additive in the detergent industry. Usually, commercially important proteases from microorganisms have their highest activity in the alkaline pH range of 8-12 [40,45].
Effect of Temperature on Activity and Stability of Protease.
Temperature profiles of enzyme activity in the presence and absence of 10 mM CaCl2 are shown in Figure 6.
In the present study, temperatures ranging from 30°C to 90°C were studied in the absence and presence of 10 mM CaCl2. The optimum temperature for this enzyme was found to be 60°C. Enzyme activity declined gradually at temperatures higher than 60°C. Similar results were reported by other researchers, where an optimum temperature of 60°C was recorded for proteases from Haloferax lucentensis VKMM 007 and B. mojavensis [31,34]. In contrast, an optimum temperature of 75°C was reported for the protease of B. laterosporus-AK1 [35]. The temperature profile of enzyme stability is presented in Figure 7. The thermal stability of the enzyme was studied at 60°C, 70°C, 75°C, and 80°C for different time periods (30 to 180 min) in the presence of 10 mM CaCl2. The enzyme retained 97% of its activity at 60°C even after 180 min of incubation, and 41% at 70°C after 180 min. However, the enzyme was completely inactivated at 75°C after 90 min of incubation and at 80°C after 60 min, due to thermal inactivation. The main mechanism involved in the thermal denaturation of the enzyme is the dissociation of ionic groups from the holoenzyme and the modification or degradation of the prosthetic group [30]. Beg and Gupta [34] reported 86% stability at 60°C for the protease from B. mojavensis, and Shanmughapriya et al. [36] reported enzyme stability only up to 40°C for the protease from marine Roseobacter sp. MMD040. The present enzyme from B. licheniformis ATCC 21424 is thermostable, as thermostable enzymes are stable and active at temperatures even higher than the optimum temperatures for the growth of the microorganisms [37]. This enzyme can therefore be applied as an additive in the detergent industry, as it can withstand harsh washing conditions because of its stability at high temperatures.
Effect of Enzyme Inhibitor and Chelator on Protease Activity. Inhibition studies primarily give an insight into the nature of an enzyme, its cofactor requirements and the nature of its active center [38]. The effect of different inhibitors and a chelator on enzyme activity was investigated, and the results are presented in Table 4. Among all the inhibitors tested (5 mM concentration), PMSF inhibited the protease activity completely, and DFP inhibited protease activity by up to 90%. PMSF sulphonates the essential serine residue in the active site of the protease and has been reported to inactivate enzyme activity completely [39]. Hence, this protease can be classified as a serine protease. Our results are in accordance with other studies in which the protease was completely inhibited by PMSF [17,31]. EDTA and dithiothreitol did not inhibit enzyme activity at all, while pCMB and β-ME inhibited the enzyme activity slightly.
Effect of Various Metal Ions on Enzyme Activity.
The effect of various metal ions at 5 mM concentration on enzyme activity was tested and the results are presented in Table 5. Some metal ions (Ca2+ and Mn2+) enhanced enzyme activity, while others (Mg2+, Zn2+, Cu2+, Co2+, and Na+) had no effect or a slight inhibitory effect. Metal ions such as Ca2+, Mn2+, and Mg2+ increased and stabilized protease activity, which may be due to activation by these ions. The results show that these metal ions protected the enzyme against thermal inactivation and played a vital role in maintaining the active conformation of the enzyme at higher temperatures [40]. The metal ions Hg2+ and K+ showed maximum inhibition of enzyme activity, while Fe2+ inhibited enzyme activity by up to 34%. The inhibitory effect of heavy metal ions is well documented in previous reports, and it is well known that mercury ions react with protein thiol groups as well as with histidine and tryptophan residues. Moreover, disulfide bonds have been found to be hydrolytically degraded by the action of mercury [40].
Hydrolysis of Protein Substrates.
The effect of some native proteins as substrates was studied and the results are presented in Table 6. Among all the substrates, the protease showed the highest hydrolytic activity (100%) against casein, and moderate hydrolysis of both BSA (54%) and egg albumin (35%). Previous researchers also reported that alkaline proteases show higher activity with casein [40,41]. However, the protease did not show any hydrolysis of gelatin. This may be because enzymatic cleavage of the peptide bonds of gelatin is difficult owing to its rigid structure and the restricted enzyme-substrate interaction on the surface of gelatin [42]. Different protein substrates have different amino acid contents, and the assay procedure used in this study detects tryptophan and tyrosine. So, the lower hydrolysis of the other substrates compared with casein may be due to a smaller quantity of detectable amino acids in the hydrolyzed product; for example, gelatin has no tryptophan residues.
Effect of Detergents on Stability of Protease Activity.
A protease used as a detergent additive is expected to be stable in the presence of various commercial detergents. The protease from B. licheniformis ATCC 21424 was tested for its stability in the presence of different commercial detergents. Excellent stability and compatibility were observed with a wide range of commercial detergents (Sunlight, Arctic Power, La Parisienne, Merit Selection and Bio-vert) at 60°C in the presence of CaCl2 as a stabilizer. The protease showed the best stability and compatibility in the presence of Sunlight, followed by Arctic Power (Table 7). Significant activity (nearly 50%) was retained by the enzyme with most of the detergents tested even after 3 h of incubation at 60°C. The protease from Streptomyces fungicidicus MML1614 showed compatibility with various commercial detergents, but significant activity was retained only up to 90 min of incubation [15], and the protease from Conidiobolus coronatus, tested with different commercial detergents at 50°C with 25 mM CaCl2, retained only 16% activity in Revel, 11.4% in Ariel, and 6.6% in Wheel after incubation [43]. The B. licheniformis RP1 protease was stable with detergents at 40°C-50°C only up to 60 min [44]. In comparison with these results, our protease from B. licheniformis ATCC 21424 was appreciably more stable in the presence of commercial detergents.
Washing Test with B. licheniformis Protease in Removing Blood Stains.
The protease produced by B. licheniformis ATCC 21424 was considerably stable over a wide range of pH and temperature and showed good compatibility with commercial detergents. It was therefore tested as an additive to detergent, to check the washing performance of the detergent with this protease. When the protease preparation was combined with a commercial detergent (Sunlight), the washing performance of the detergent in removing the blood stain from white cotton cloth was enhanced (Figure 8). Similarly, proteases of B. subtilis [17] and Streptomyces fungicidicus [15] efficiently removed blood stains from white cotton cloth pieces when combined with commercial detergents. Even though alkaline proteases from Bacillus spp. are stable at high temperature and alkaline pH, most of them are incompatible with detergent matrices [45,46]. The concentrated enzyme from the supernatant of sludge fermented by B. licheniformis ATCC 21424 showed higher tolerance of the detergent and efficiently removed the blood stain.
Conclusion
In this study, the enzyme was concentrated and characterized for application as a detergent additive, as the detergent industry needs concentrated enzyme for higher efficiency. Recovery of the alkaline protease using ultrafiltration, at an optimum transmembrane pressure of 90 kPa and a feed flux of 714 L/h/m2, yielded 83% of the protease activity. The protease from B. licheniformis ATCC 21424 is a thermostable, alkali-tolerant serine alkaline protease, being stable at alkaline pH and high temperature. The recovered thermostable alkaline protease can be exploited in the detergent industry as an additive, because it showed excellent stability over a wide range of temperatures and compatibility with commercial detergents. More importantly, supplementation of the enzyme preparation to a detergent remarkably removed blood stains from white cotton cloth.
The Doctrine of Natural Justice under Civil and Military Administrations in Nigeria
In all human affairs, there is an established need for a generally acceptable code of conduct and procedure in the administration of justice, civil or criminal, which must be seriously observed by all in relationships with fellow human beings. This is particularly applicable to those who are saddled with the sacred responsibility of steering the ship of the State. In this wise, the place of natural justice is pivotal, and it has deservedly been elevated to the realm of great importance by all civilized communities. This article traces the origin of natural justice and discusses the two basic ideas in which natural justice is embodied, i.e. audi alteram partem and nemo judex in causa sua. It also discusses the doctrine of natural justice under civil and military rule in Nigeria and in some other jurisdictions. The article concludes by identifying some problems associated with the enforcement of natural justice and making recommendations. Keywords: constitutional law, doctrine of natural justice
Introduction
Natural justice as a concept did not start with modern government; it is as old as the existence of mankind on earth, evidently born of a cultivated traditional disposition of fairness in man-to-man relationships, for which natural justice as a concept has attained prominence. In fact, what seems to have happened to the doctrine in this age of globalization is the codification of several scattered principles of natural justice in statute books, partly for ease of reference and partly to concretize the long-standing pride of place of the doctrine.
Origin of Natural Justice
Historically, it was Hugo Grotius, a Dutchman (born at Delft in Holland, 1583-1645), who built up what became known as the law of nature, or natural law. For his contribution to the growth of natural law, he is referred to as the father of the law of nature as well as the father of the law of nations (Note 1). Before Grotius, the opinion was generally prevalent that above the positive law, which is the law that had developed by custom or by the legislation of a State, there was in existence another law which has its root in human reason, and which could be ascertained without any knowledge of positive law. This law of reason was called the law of nature or natural law.
According to Dr. Ezejiofor (Note 2), the concept of natural law was first formulated systematically by the Stoics after the breakdown of the Greek city states; for the Stoics, natural law was universal, as it applied not only to citizens of certain states but to everybody everywhere in the cosmopolis. This law was superior to any positive law and embodied those elementary principles of justice apparent to the eye of reason. It is from this natural law that we derive fundamental rights or natural rights, which may be defined as moral rights which every human being, everywhere, at all times, ought to have, simply because, in contradistinction to other beings, we are rational and moral. No one may be deprived of these rights without grave affront to justice (Note 3).
The rules of natural justice are therefore a part of natural law and relate to the minimum standards of fair decision-making imposed by the common law on persons and bodies that are under a duty to act judicially (Note 4). The principles of natural justice are embodied in two basic ideas: audi alteram partem and nemo judex in causa sua.
Audi Alteram Partem
The principle of audi alteram partem (hear the other side), which is one of the twin pillars of natural justice, is primarily about giving an individual the opportunity of being heard before he can incur the loss of liberty, right or property for any wrong or offence committed by him. Most of the earliest reported decisions in which the rule was applied concerned summary proceedings before judges. In R. v. Dyer (Note 5), the court held that the service of a summons upon the party affected was a condition precedent to the validity of such proceedings, not only in criminal matters but also in applications for the issue of distress warrants and orders for levying taxes and other charges imposed by public authorities upon their subjects. This was also pointed out in Harper v. Carr (Note 6).
In the earliest times, the application of the rule was so strict that a judge who adjudicated summarily without having issued a summons to the affected parties was at one time punishable in the Court of King's Bench for misdemeanor, as was held in R. v. Venable (Note 7).
The operation of the rule of audi alteram partem in the beginning could also be viewed from the perspective of deprivation of offices and other official appointments. An example was in 1615, when James Bagg, a Chief Burgess of Plymouth, who had been disenfranchised for singularly unbecoming conduct, was reinstated by mandamus because he had been removed without notice of a hearing, in Bagg's Case (Note 8). Also, in Capel v. Child (Note 9), a Bishop who was empowered by statute to order a vicar to appoint a curate (to be paid by the vicar) when satisfied, either of his own knowledge or by affidavit, that the vicar had neglected his duties, was said to be duty bound to give the vicar notice and an opportunity to be heard before making the order.
The scope of the application of audi alteram partem broadened in the nineteenth century to embrace such areas as the conduct of arbitrators, as in Re Brook (Note 10); professional bodies and voluntary associations in the exercise of their disciplinary functions, as in Debbis v. Lloyd (Note 11); every tribunal or body of persons invested with authority to adjudicate upon matters involving civil consequences; and finally individuals, as in Wood v. Wood (Note 12).
However, the best known statement of the doctrine of audi alteram partem in the administration of English law was formulated by the House of Lords, exercising its appellate function over a government department, in Board of Education v. Rice (Note 13), where Lord Loreburn L.C. held as follows: Comparatively recently, statutes have extended, if they have not originated, the practice of imposing upon departments of officers of state the duty of deciding or determining questions of various kinds. In the present instance, as in many others, what comes for determination is sometimes a matter to be settled by discretion, involving no law. It will, I suppose, usually be of an administrative kind; but sometimes it will involve matter of law as well as matter of fact, or even depend upon matter of law alone. In such cases they must act in good faith and fairly listen to both sides, for that is a duty lying upon everyone who decides anything. But I do not think they are bound to treat such a question as though it were a trial … they can obtain information in any way they think best, always giving a fair opportunity to those who are parties in the controversy for correcting any relevant statement prejudicial to their views.
Basically, a breach of this rule will, as we have seen, render a decision of an inferior tribunal invalid if or when the matter is taken up to a high court (Note 14). Thus, in The Queen v. The Governor-in-Council, Western Nigeria, Ex parte Adebo (Note 15), the High Court of Western Nigeria held that certiorari would lie to quash the order of the Governor-in-Council deposing the applicant as Olofin of Ilishan, because the Chief was not given an adequate opportunity to prepare his defense against the charges against him. In his judgment, Charles J. stated (Note 16): It is important to recognize that there is a presumption that when the legislature confers a power on an authority… it intends that the power shall be exercised judicially in accordance with the rules of natural justice, and that the individual concerned must be given an adequate opportunity to be heard. In The Queen v. The Acting Provincial Secretary, Uyo, ex parte Imeh (Note 17), the High Court granted an order of certiorari to quash the conviction of the applicant, who was convicted by a Native Court on a charge in which he was not given an adequate opportunity to plead or answer to the charge preferred against him.
Where the rule is infringed, the court will intervene. Thus, in The Queen v. The Resident, Ijebu Province, ex parte Oshunlaja (Note 18), where the Governor-in-Council approved the appointment of a Chief whose appointment was made without "due enquiry", Ademola, C.J. (as he then was) said: The court is precluded by law from interfering in the matter of selection of a Chief; it will therefore not interfere, but it will be standing between the executive and the members of the public, so far as the legislature on the point permits, and if something is being done which is contrary to natural justice, or certain things required by law are not done, it will interfere.
Nevertheless, in The Queen v. Western Nigeria, Ex parte Odubote (Note 19), Fatayi Williams, J. (as he then was) said in his judgment: It is, in my view, settled law that where a statute seeks to take away the jurisdiction of a superior court, that privative provision will not be effective where the inferior court or body exercising judicial or quasi-judicial functions has acted in violation of the statute or has acted without jurisdiction.
Nemo Judex in Causa Sua
This doctrine, which literally means that no one should be a judge in his own cause, is otherwise known as the rule against bias, and is of course the second of the twin pillars of the principle of natural justice.
It is a rule principally concerned with impartiality, preventing an umpire from prejudging whoever is standing trial before any tribunal. Instances of bias may arise in a number of ways, one of which is where the umpire has a direct pecuniary interest in the subject matter before him, in which case he ought to transfer the matter to an independent arbitrator, as illustrated in Dimes v. Grand Junction Canal (Note 20), which set aside the decision of the Lord Chancellor, who was a shareholder in the company appearing before him.
Even where an umpire has no pecuniary interest in the matter being litigated before him, he may still be disqualified on the ground of bias. Thus, in R. v. Sussex Justices, Ex parte McCarthy (Note 21), a solicitor was acting as clerk to the justices in the hearing of a traffic offence following a collision, and his firm was due to act for the other party to the accident in civil proceedings. Although the solicitor-clerk was obviously passive in the course of the hearing, a writ of certiorari to quash the conviction was granted. Lord Hewart, C.J. held that: It is of fundamental importance that justice should not only be done, but should manifestly be seen to be done. Equally, in Metropolitan Properties Co. FGC Ltd. v. Lannon (Note 22), the Chairman of a Rent Tribunal was also a solicitor who had been involved in protracted disputes with the landlords. His determination of the rent was quashed by certiorari, since right-minded persons might think that there was a real likelihood of bias. It is worthy of note that the real essence of the rule against bias is to divest an individual of the undue opportunity of assuming adjudicatory power over, or partaking in, a subject matter with which he is in any manner whatsoever connected, with a view to engendering the confidence of the affected parties in the verdict reached, since justice is rooted in confidence.
In fact, it can be stated here without any fear of contradiction that in the concept of natural justice, the notions of law and justice are inextricably intertwined, as the observance of the one produces the other. This position was judicially supported by the Nigerian court in Adeboanu Manufacturing Industries (Nig.) Ltd. v. Akiyode (Note 23), where the court held that: Perhaps, the concept of natural justice is better explained in two Latin maxims, viz: audi alteram partem and nemo judex in causa sua. The first maxim simply translates into the golden rule that no one shall be condemned, punished or deprived of property in any judicial or quasi-judicial proceedings unless he has been heard or be seen to have been given an available opportunity to be heard. That has long been a received rule of the principles of natural justice. The second … directs that no one shall be a judge in his own cause. These are the twin pillars on which the concept of natural justice rests. When it is being questioned whether justice has been done in any particular case, a safe ground, for reason of difficulty of the terms, is to assert that justice has been done according to law, for the law itself must of necessity include the procedure laid down for its attainment. To leave the attainment of justice to a free-for-all pursuit and jettison the rule is to pave the way for judicial high-handedness and the omnipotence of individual judges.
From the foregoing, it may not be too difficult to observe the relevance and co-existence of these two concepts. Indeed, it is beyond dispute that the concept of natural justice is ageless and has traversed all known human generations with inexorable tenacity. It is a concept that has been much debated, analyzed and discussed in multitudinous detail by various schools of thought. Yet, it ever remains magnetizing and inexhaustibly open to the searchlight of legal researchers and the thrust of sagacious academic interest.
Nevertheless, any meaningful consideration of the concept of natural justice necessarily involves an appreciation of certain essential doctrines which are inextricably knitted together for a fuller understanding of the concept itself.
Be that as it may, these rules of natural justice are so fundamental and necessary in the administration of judicial and quasi-judicial functions that a breach of any of them may lead to the setting aside or quashing of the decision reached in such proceedings. At this juncture, it is pertinent to point out that a decision reached in breach of any of the rules of natural justice is voidable and not void (Note 24).
Natural Justice in Civil and Military Administrations
Civil administration is synonymous with democratic administration. Democracy, which is the indispensable twin of the rule of law, is based on two key elements, to wit: popular control over collective decision-making and decision-makers, and an equal right to share in such control, i.e. political equality (Note 25).
Thus, a civil administration will necessarily showcase a system of administration in which constitutionalism reigns supreme, of which the observance of the principles of natural justice should form the bedrock. It is to be noted, however, that this is in contrast with military administrations, which have attained unrivalled notoriety for a tradition of operating from the pole opposite to constitutionalism or the rule of law. This military attitude is tragically informed by the system of coup d'état that customarily produces every military administration.
The various coups d'état in the country and the long years of military rule have dishearteningly given way to the rule of force to a scandalous extent, which has resultantly led to the rule of law being relegated to the background. For instance, the military administrations under the leadership of Buhari/Idiagbon, Babangida and the late Sanni Abacha treated Nigerians to immeasurable brutalization: citizens were either wantonly killed or hurled into prison on one trumped-up offence or the other without trial. This, however, is not to say that civil administrations are entirely exonerated from the violation of the principles of natural justice in the course of governance, as instances are not wanting where a democratic administration turns tyrant of a magnitude almost equal to that of military administrations. They have, to a lesser extent, demonstrated flagrant disregard for the observance of the fundamental norms of a democratic administration which make the observance of the principles of natural justice possible.
Nevertheless, it is pertinent to state that different degrees and forms of violation of the principles of natural justice exist under the two systems of administration.
Natural Justice in Civil Administration in Nigeria
The constitution is today accepted as the grundnorm upon which the existence of any democratic government is based, being the collective will of the people. The preamble to the 1999 Constitution (as amended) (Note 26) provides that: We the people of the Federal Republic of Nigeria having firmly and solemnly resolved to live in unity and harmony…….do hereby make, enact and give to ourselves this constitution.
All the Nigerian constitutions since independence have stated very explicitly that the people are the makers of the constitution as a collective resolve. That being so, the constitution, which is a symbol of the will of the people, is supreme. However, the claim reflected in the preamble that the constitution is the collective will of the people may be a fallacy, as the constitutions of Nigeria since independence were either a colonial or a military parting gift to Nigerians.
In any event, constitutionalism of government, for which civil administration is noted, is certainly an indispensable factor for the meaningful observance of the principles of natural justice. Constitutionalism in itself may simply be described as adherence to due process or the rule of law, as opposed to arbitrariness, tyranny or dictatorship.
The present Nigerian Constitution, being a federal constitution, necessarily keeps the legislative, executive and judicial arms of government separate from one another. All three fundamental arms of government are duty bound to uphold the provisions of the Nigerian Constitution, which among other things contains provisions on the human rights of citizens, as revealed in its Chapter IV. Interestingly enough, it is within the purview of Chapter IV of the 1999 Constitution that natural justice finds firm expression as it affects fair hearing. Section 36(1) of the Constitution provides:
In determination of his civil rights and obligations, including any question or determination by or against any government or authority, a person shall be entitled to a fair hearing within a reasonable time by a court or other tribunal established by law and constituted in such manner as to secure its independence and impartiality.
In Ika Local Government Area v. Mba (Note 28), the meaning of fair hearing under section 36 of the Constitution was stated as follows: a trial or investigation conducted according to all legal rules formulated to ensure that justice is done to the parties to a cause or matter. The above definition of fair hearing was also referred to in Ezechukwu v. Onwuka (Note 29). However, in Fagbule v. Rodrigues (Note 30) and NEPA v. Arobieke (Note 31), it was stated that fair hearing involves a situation where, having regard to all the circumstances of a case, the hearing may be said to have been conducted in such a manner that an impartial observer would conclude that the tribunal was fair to all the parties.
In any case, where fair hearing has been raised as an issue, the court must determine it first, before all other issues in the matter. This was the position of the court in Babalola v. Oshogbo L. G. (Note 32).
Section 36(1)(a) of the 1999 Constitution makes provision for a person whose rights and obligations may be affected to be duly represented before a decision affecting him is reached. In Yakubu v. Chief of Naval Staff (Note 33), where the accused was denied bail and access to counsel of his choice, it was held that this constituted a complete violation of his right to fair hearing. Moreover, section 36(4) of the Constitution provides that when any person is charged with a criminal offence, he shall, unless the charge is withdrawn, be entitled to a fair hearing in public within a reasonable time by a court or tribunal.
Section 36(5) goes on to provide that every person charged with a criminal offence shall be presumed to be innocent until he is proved guilty. Section 36(6)(a) states that every person charged with a criminal offence shall be entitled to be informed promptly, in the language that he understands and in detail, of the nature of the offence. In Unity Bank Plc. v. Bouari (Note 34), it was held that the fair hearing provision is an aggressive one, not a cowardly one.
Be that as it may, a consideration of the principles of natural justice under a civil administration necessarily involves an x-ray of the three arms of government from the viewpoint of the principles of fair hearing.
Natural Justice and the Legislature
As earlier observed, an essential feature of a civil administration is the predominance of the rule of law. It is in furtherance of this principle that the doctrine of separation of powers has been regarded as complementary to the rule of law, as each arm of government is assigned its distinct functions, which should not be exceeded or compromised except as allowed by the constitution in specified circumstances. The Constitution establishes a bicameral National Assembly as the federal legislature.
Essentially, the various Nigerian constitutions have always made provision for the supremacy of the constitution. Thus, if any other law is inconsistent with the provisions of the Constitution, that other law shall, to the extent of its inconsistency, be void. The supremacy of the Nigerian Constitution was buttressed by the Supreme Court in INEC & Anor v. Balarabe Musa & Anor (Note 35), when it held, per Ayoola E. O., that: First, all powers, legislative, executive and judicial, must ultimately be traced to the Constitution. Secondly, the legislative powers of the legislature cannot be exercised inconsistently with the Constitution; where it is so exercised, it is invalid to the extent of its inconsistency… As a corollary to the foregoing, in a democratic government the judicial power is vested in the courts. The Supreme Court of Nigeria confirmed this in Global Trans & Anor v. Free Enterprises Nig (Note 36), where it held: There is no doubt however, that under our Constitution, the three arms of government in both the federation and the states are distinct and separate and each has its functions and powers clearly spelt out. The judicial powers of the federation and the states are vested in the courts established for the federation and the states.
Flowing from the foregoing, therefore, is the fact that the legislature cannot validly make a law that violates the Constitution, such as one ousting the constitutionally conferred jurisdiction of the courts (Note 37). The National Assembly possesses the necessary legislative competence to make laws for the country in respect of matters stipulated in the exclusive legislative list as well as the concurrent legislative list, as set out in Parts I and II of the Second Schedule to the 1999 Constitution (Note 38), whilst the State Houses of Assembly are vested with the state legislative power in respect of matters set out in the concurrent legislative list in Part II of the Second Schedule of the Constitution (Note 39). However, the legislative powers must be exercised in conformity with the constitutional provisions to be valid and operative. Thus, in A. G. 36 States v. A. G. Federation (Note 40), the Supreme Court held it incompetent for the National Assembly to make laws extending or altering the tenure of elected officers of local government councils, or to make laws with respect to the division of a local government area into wards for the purpose of elections. The court declared null and void the provisions of sections 110(1) and 112 of the Electoral Act 2001.
It is noteworthy that the 1999 Constitution has assigned local administration in Nigeria to the state governments, and it is the State Houses of Assembly that have legislative competence over such an issue. Similarly, in A. G. Ondo State v. A. G. Federation & Ors (Note 41), the Supreme Court declared certain provisions of the Anti-Corruption and other Related Offences Act (Note 42) unconstitutional. The court specifically frowned at the provisions of section 26(3) of the law, which provides that: "prosecution of an offender under the law shall be concluded and judgment delivered within ninety working days of its commencement, save that the jurisdiction of the court to continue to hear and determine the case shall not be affected where good ground exists for a delay".
The provision violates the principle of fair hearing and is therefore unconstitutional.
Generally, any legislative enactment which violates the rights of the citizen would be unconstitutional. For instance, all the rights enshrined in Chapter IV of the 1999 Constitution are constitutionally sacrosanct; any legislation which detracts from, impairs or takes any of them away, except as permitted by the Constitution itself, will be against the spirit of the Constitution.
In any event, all the legislative Houses in the Federation under a civil administration are inferior and subordinate to the Nigerian Constitution. Thus, the entrenched clauses, including the individual's fundamental human rights such as the right to fair hearing, cannot be altered by legislative enactment unless the very complicated process dictated by the Constitution is religiously followed. In this way, it is made extremely difficult for any civilian administration to light-heartedly abrogate the individual's rights except in a case of national necessity such as during war or public emergency.
Natural Justice and the Executive
The Executive arm of the government is arguably the most essential arm of any civilian administration. This is because it has always existed, whether the government is military or civil. It is the arm that is saddled with the responsibility of the execution and maintenance of the constitution and of laws validly made by the legislature.
The Executive arm of government is more prone to abuse and violation of the principles of natural justice, basically because it is the very arm of government that directly interacts with the populace in the course of the execution of the law. One would therefore expect this arm to be more cautious when it comes to the observance of the principles of natural justice in general.
Regrettably, however, the Nigerian experience has shown that both the principles of natural justice and the rule of law are breached more often than they are observed, as executive lawlessness has unfortunately become a recognized feature of Executive actions in Nigeria.
Instances of Executive lawlessness range from disobedience to court orders, arbitrariness and usurpation of the functions and powers of other arms of government to the encroachment on citizens' rights. The face-off between the administration of President Olusegun Obasanjo and the Nigerian Labour Congress (NLC) over arbitrary increases in the pump price of petroleum is a demonstration of Executive lawlessness.
Disobedience to court orders by the Executive has had an adverse effect on the observance of the principles of natural justice in the country. Thus, even where a citizen whose rights have been infringed upon by Executive action obtains judgment in court, his victory may still be frustrated by non-compliance by the Executive. The situation is all the more pathetic when it is realized that it is the Executive arm of government that has the constitutional responsibility of enforcing the law. While disobedience of court orders under a military administration, though deplorable, may be understandable having in mind its genesis, the same cannot be said of a civil administration, which is founded on the rule of law.
In Minister of Internal Affairs v. Shugaba (Note 43), Executive lawlessness was exhibited when, without a trial, let alone a fair hearing, Abdulrahman Shugaba was deported on the allegation that he was not a Nigerian but an illegal immigrant.
It remains to mention here, however, that Executive lawlessness hampers the observance of the principles of natural justice and thereby constitutes a big minus for civil administrations, which one would have expected to adhere to the rules of natural justice.
Natural Justice and the Judiciary
The judiciary is traditionally seen as the last hope of the common man; it is, of course, the forum where remedies are obtainable against any oppressive act of the other two arms of government as well as of individuals. Indeed, the judicial process is said to be integrative in nature, that is, it solidifies the multifarious strands that hold society together.
It should be noted, however, that instances of the judiciary acceding to unnecessary adjournments delay court trials, and this negates the principle of natural justice, as justice delayed is justice denied. In a bid to curtail adjournments, the Bill for an Act to repeal the Criminal Procedure Act (Note 44), the Criminal Procedure Code (Note 45) and the Administration of Justice Commission Act (Note 46), and to enact the Administration of Criminal Justice Act applicable in Federal Courts and Courts of the Federal Capital Territory, to make provision for the speedy and efficient administration of criminal justice and to provide for other matters related thereto, 2013, has passed a second reading at the Nigerian House of Representatives (Note 47). Basically, the judicial power of the Nigerian courts is contained in section 6 of the 1999 Constitution. An efficient and virile judiciary is a sine qua non for an independent judiciary. However, the judiciary in Nigeria cannot be regarded as fully independent due to a number of factors, such as the mode of appointment of judges, which is done by the Executive on the recommendation of the National Judicial Council, and insecurity of tenure, as the Executive may remove judges at will upon flimsy excuses or for no reason at all. For instance, in 1975, many judges, including Justice Elias, the then Chief Justice of Nigeria, were removed from office without the observance of the elementary rules of natural justice (Note 48). Equally, in 1985, many judges were dismissed or unceremoniously retired (Note 49). With such a hostile atmosphere of insecurity of tenure as briefly discussed above, it becomes far from certain that the independence of the judiciary can be ensured.
Corruption is another curse responsible for the lack of judicial independence in Nigeria. It is an open secret that corruption has tragically enveloped the whole country, and the judiciary, not being an exception, has been enmeshed in this vice.
Apparently, where the judiciary is corrupt, justice goes to the highest bidder and it becomes a question of cash and carry. Commenting on the issue of corruption in the judiciary, Oputa J.S.C. (as he then was) remarked (Note 50): "Money, they say, is the root of all evil. The Bench is definitely not a place to make money. A corrupt Judge is the greatest curse to afflict any nation. The passing away of a great advocate does not pose such public danger as the appearance of a corrupt and/or weak judge on the Bench, for in the latter instance, the public interest is bound to suffer and justice… is thus depreciated and mocked and debased. It is better to have an intellectually average but honest judge than a legal genius who is a rogue. Nothing is as hateful as venal justice, justice that is auctioned, justice that goes to the highest bidder."
Amongst the other notable factors that impede the independence of the judiciary is the lack of independent machinery for the enforcement of its decisions. Alexander Hamilton, commenting on the effect of the lack of self-enforcement machinery for the judiciary, observed (Note 51): "The judiciary is beyond comparison the weakest of the three departments of power… it has no influence over either the sword or the purse, no direction either of the strength or the wealth of the society; and can take no active resolution whatsoever. It may truly be said to have neither force nor will but merely judgment."
Notwithstanding the above obvious handicaps of the judiciary arising from its lack of independence, amongst other constraints, the Nigerian judiciary has had an impressive record in the defense of natural justice in general and fair hearing in particular. Perhaps the list of such instances cannot possibly be given comprehensively in a limited work of this nature; however, it is hoped that a few examples will suffice.
In Mogaji v. Board of Customs and Excise (Note 52), the court held that it is a violation of the constitutional prohibition of inhuman or degrading treatment to organize a raid with the use of horsewhips, guns and tear-gas to strike or otherwise injure custodians of such goods. Equally, in Alaboh v. Boyles & Anor (Note 53), the court held that the beating, pushing and submersion of the applicant's head in a pool of water by the first respondent were inhuman and degrading treatment. The court also declared unconstitutional the arrest and detention of innocent citizens for the offence of another person in A.C.B. v. Okonkwo (Note 54), where Niki Tobi J.C.A. (as he then was) observed:
I know of no law which authorizes the police to arrest a mother for an offence committed by the son. Criminal responsibility is personal and cannot be transferred. A police officer who arrested "A" for the offence committed by "B" should realize that he has acted against the law. Such a police officer should, in addition to liability in a civil action, be punished by the police authority.
In Onu Obekpa v. C.O.P. (Note 55), it was held that bail to a person accused of an offence other than a capital offence is a basic constitutional right, and undoubtedly the right to release before trial is much more basic where the trial is going to last more than two months for a non-capital offence. In that case, the state counsel had opposed an application for bail of the accused on the ground that he had not stayed in detention up to two months, as envisaged by section 32(1) of the 1979 Constitution, to entitle him to bail.
The position of the court above was in accord with a sense of justice and with the constitutional provision which presumes an accused innocent until proved guilty (Note 56). This is because if an accused person who is detained for two months on a mere allegation is eventually pronounced innocent by the court, he would have been made to suffer in vain.
In Aiyetan v. Nifor (Note 57), the Supreme Court held that: "The principle of natural justice as enshrined in the rules of common law and section 33(1) of the 1979 Constitution is not confined to courts or tribunals established under section 6(5) of the 1979 Constitution, but to every situation whenever a person or authority is concerned in the determination of the rights of another."
The Supreme Court, in interpreting the provision of section 33(1) of the 1979 Constitution in relation to the investigations of a Constitution Investigation Committee in Adenyi v. Governing Council of Yaba Tech (Note 58), held that: "Section 33(1) of the 1979 Constitution which guarantees and has entrenched fair hearing is in strict interpretation limited to the determination of civil rights and obligations. It follows therefore that where the determination of civil rights and obligations is not in issue, particularly in an investigation committee, the observance of fair hearing is not stricto sensu obligatory."
It is noteworthy to stress here that the position of the court above, that section 33 (now section 36 of the 1999 Constitution) does not apply to the contractual relationship of master and servant, is grossly misconceived. This is because the strict interpretation given to the provision by the court cannot have been the intention of the law-makers. When a person is employed in the public service, the person exercising the power to terminate must derive his power from law.
By any construction, a decision in respect of the determination of a contract of employment comes within the contemplation of section 33(2) (Note 59). It is, however, a different ball game if the contract of employment has explicitly stipulated that the employment could be terminated without notice or hearing; in that case, the employee would be deemed to have waived his right to fair hearing by such a clause in his contract of employment. But in the absence of such a provision, the position of the court is final, as termination of a contract of employment is definitely an action which has a decisive effect on a person's rights, and therefore fair hearing must be observed. This is more so where the party who wants to terminate an appointment or to effect a dismissal is a public officer; there, the requirement of fair hearing applies in the restricted sense of natural justice.
In Sariki v. Burma (Note 60), it was held that where a statute fails to provide for natural justice, the justice of the common law supplies the omission. Thus, even in the absence of a statutory provision that a person should be given a fair hearing, it is still incumbent at common law to accord an individual the right to be heard before a decision which negatively affects him is taken. Although many are of the opinion that the judiciary has failed in its sacred duty as the last hope of the common man, from the above consideration it will be seen that the judiciary is an indispensable factor in the defense of the observance of the principles of natural justice and of the rule of law, which is an integral element that makes the observance of natural justice possible in a civil administration.
In Babatunde Anisun & Ors v. Adeleke Osayomi & Ors (Note 61), the court set aside the judgment of a High Court sitting at Ijero-Ekiti for denying the appellants a hearing before giving judgment against them in a chieftaincy dispute instituted by the respondents. The judgment of the Lagos High Court in Chief M.K.O. Abiola & Amb. Babagana Kingibe v. The State (Note 62), presided over by Dolapo Akinsanya J., which declared the Interim National Government led by Chief Ernest Shonekan illegal, null and void, is a thumbs-up for the judiciary in defense of the law. Similarly, the decision of the Supreme Court in INEC v. Balarabe Musa & Others (Note 63), where the court declared null and void the provisions of the Electoral Act 2001 which prescribed requirements over and above the constitutional requirements for eligibility to contest an election, is quite commendable of the judiciary.
Military Administrations in Nigeria
Since Nigeria became an independent country on 1st October, 1960, there have been seven Military Administrations, commencing with the Military regime of General Aguiyi Ironsi, which took power on 16th January, 1966, and terminating with the Military regime of General Abdulsalami Abubakar on 29th May, 1999 (Note 64).
Upon assumption of office, the Military promulgated the Constitution (Suspension and Modification) Decree (Note 65), which abolished, amongst other institutions, the Parliament and the Regional legislatures. Section 3 of the Decree provides that (Note 66): "The Federal Military Government shall have power to make laws for the peace and good governance of Nigeria or any part thereof on any matter", and that the Regional Military Administration can only legislate on matters in the concurrent list with the prior permission of the Federal Military Government.
The Decree in addition vested the executive authority of the Federal Republic of Nigeria in the Head of the Federal Military Government (Note 67), who could delegate his powers to the Military Governors in the regions. Sadly enough, the fortunes of the Nigerian judiciary under the Military were adversely affected by the combined effect of the provisions of sections 1(2) and 6 of the Decree: after providing that the Constitution shall have the force of law throughout Nigeria, section 1(2) went on to say that nothing in the Constitution shall render any provision of a Decree void to any extent whatsoever, whilst section 6 of the Decree provided that no question as to the validity of this or any other Decree or any Edict shall be entertained by any court of law in Nigeria.
Obviously, the foregoing provisions handcuffed the courts in the exercise of their judicial powers. The response of the Nigerian courts to the above provisions of the Decree was a mixed one. For instance, in Ogunlesi and Ors v. A.G. Federation (Note 68), two Decrees of the Federal Military Government were challenged as ultra vires. The Lagos State High Court held that the unlimited legislative competence of the Federal Military Government overrides the constitution.
Similarly, in Adamolekun v. The Council of University of Ibadan (Note 69), the Supreme Court, while holding that it could not question as ultra vires the Federal Military Government in making a Decree, further held that the courts have jurisdiction to declare an Edict void if it is inconsistent with a Decree or the constitution. Judicial courage was more pointedly exhibited against Military Decrees when, in Lakanmi and Kikelomo v. A.G. Western State & Ors (Note 70), the court declared a Decree of the Federal Government invalid.
It needs to be stressed that the decision in Lakanmi's case spurred the Federal Military Government to promulgate the Federal Military Government (Supremacy and Enforcement of Powers) Decree (Note 71), which, amongst other things, asserted that the event of 15th January, 1966 was a revolution which by implication had changed the legal order of the country. This implicitly meant that the supremacy of the Constitution had been dethroned and replaced with that of the Decree.
Nevertheless, all the Military Administrations that have so far ruled the country have had common characteristics or features, made manifest in the promulgation of Decrees heavily decked with ouster clauses, the suspension and modification of the constitution, and the establishment of Military Tribunals to try classified offences, amongst others. These exclusionary decrees rendered the principles of natural justice mute under the military.
Natural Justice and Military Administrations in Nigeria
By any definition, Military regimes and the rule of law are antithetical: while the one rules by force, the other rules by law.
However, it is pertinent to examine Military Administrations, they having become part of the nation's history, though via irregular means. For all intents and purposes, they could be treated as any other (civil) Administration, especially for the purposes of examining the existence of the rule of law under them and, by necessary extension, the quantum of natural justice available to the citizenry in a military administration.
Military rule has had a much more profound, far-reaching and traumatic impact on the protection of human rights in Nigeria than civilian administration. Whereas a civilian administration, being a constitutional government, is expected to conform with the supremacy of the constitution, with all its actions made subject to judicial review, that is not necessarily true of a military government, which has the tradition of either suspending or modifying the constitution once it seizes power; what is saved or preserved in the existing constitution remains in force at the will of the federal military government and as a supplement to any other decree which is subsequently issued by that body.
The State Security (Detention of Persons) Decree No. 2 of 1984 as amended, which was promulgated by the Buhari/Idiagbon Military regime, constituted a monumental minus to the observance of the principles of natural justice under that military administration. Indeed, section 1(1) of the Decree provided that (Note 72): "If the Chief of Staff, Supreme Headquarters is satisfied that any person who is or recently has been connected with acts prejudicial to the nation or in preparation for investigation of such acts and by reason of which it is necessary to exercise control over him, he may by order in writing direct that the person be detained in a civil prison or police station." Section 2 of the Decree provides for a review by the Chief of Staff, Supreme Headquarters of every case of detention under the Decree every three months, with a view to determining whether it is necessary to continue such detention. Section 3(1) of the Decree validates all detentions of persons prior to the promulgation of the Decree, whilst section 4(1) contains an ouster clause to the effect that no suit or proceedings shall be brought against any person for anything done or purported to be done in pursuance of the Decree. Chapter IV of the constitution, which contains the fundamental human rights provisions, was suspended for the purposes of the Decree.
Pursuant to the provisions of this Decree, many people, especially the overthrown Second Republic leaders, were detained for months without trial. Professor Nwabueze, expressing his disapproval of the import of Decree No. 2, remarked that (Note 73): "Although the offence of corruption for which the Decree was applied against some of the Second Republic civilian leaders, and which led to their detention, was a notorious act, it does not follow that all the detained leaders were guilty, since they had not even been tried; it was therefore unsafe and unjustifiable to have lumped them together for indiscriminate and indefinite incarceration without trial."
He finally opined that it is better that nine guilty persons should go free than one innocent person be wrongly punished.
In the same vein, the Federal Military Government (Supremacy and Enforcement of Powers) Decree of 1984 (Note 74) provided that: "No civil proceedings shall lie or be instituted in any court for or on account of any matter or thing done or purported to be done under or pursuant to any Decree or Edict, and if any such proceedings are instituted before, on, or after the commencement of this Decree, the proceedings shall abate, be discharged and made void."
The above provision of the Decree is, however, quite understandable because, according to Prof. Nwabueze, the aim of such a provision is to ensure the comprehensiveness of the exclusionary provisions. The executive or administrative acts of the military government under this Decree were impenetrably shielded from judicial review, and all possible loopholes for the court's intervention were effectively plugged (Note 75).
Without prejudice to the above, the extent of constitutional protection available to a citizen under a military administration is far less than that available to him under a constitutional democracy. In any event, ouster clauses not only limit the operation of fundamental rights provisions, but also judicial review of administrative actions. Thus, even where the rights of citizens are being unjustifiably trampled upon via military legislative supremacy or executive or administrative action duly fortified by ouster clauses, the right to redress the oppressive action would be abated and held in abeyance, courtesy of the ouster clauses. Certainly, this scenario does not augur well for the observance of natural justice.
The exclusion of the right of appeal in the Decrees setting up military tribunals is another feature that infringes on citizens' rights. For instance, under the Buhari/Idiagbon regime, all Decrees establishing tribunals for the trial of certain offences contained provisions to the effect that no appeal shall lie from a decision of any tribunal under the Decree (Note 76).
However, there are a few instances where military tribunals established by military Decree provided for a right of appeal. For instance, the Recovery of Public Property (Special Military Tribunal) (Amendment) Decree (Note 77) established a Special Appeal Tribunal to hear and determine appeals from decisions of tribunals set up under the Recovery of Public Property (Special Military Tribunal) Decree 1984, the Exchange Control (Anti-Sabotage) Decree 1984 and the Counterfeit Currencies (Special Provisions) Decree 1984, as amended by the Counterfeit Currencies (Special Provisions) (Amendment) Decree 1986. The Appeal Tribunal may confirm, vary or set aside the judgment or order of the tribunal, or maintain and uphold the conviction and dismiss the appeal, or allow the appeal and set aside the conviction.
Moreover, the Decree (Note 78) creating the Robbery and Firearms Tribunals, which prescribed capital punishment for convicted offenders, does not provide a right of appeal from decisions of the tribunals. Although there must be an end to litigation, trial before Military tribunals which foreclose the constitutional right of appeal is evidently a breach of fair hearing.
Military Tribunals are usually composed only or mainly of soldiers with no knowledge of the law and no regard for human rights or the due process of law. This ultimately occasions a breach of fair hearing.
Retroactive legislation in criminal matters is yet another feature of military administrations which unleashes constraints on the observance of natural justice. Section 33(8) of the 1979 Constitution, now section 36(8) of the 1999 Constitution (Note 79), outrightly prohibits retroactive (Note 80) legislation in criminal matters, and this provision cannot be derogated from under any circumstances. For example, section 36(8) of the 1999 Constitution, which is in pari materia with the provision of the 1979 Constitution, provides that: "No person shall be held to be guilty of a criminal offence on account of any act or omission that did not, at the time it took place, constitute such an offence, and no penalty shall be imposed for any criminal offence heavier than the penalty in force at the time the offence was committed."
Sadly, however, most of the Decrees with penal implications enacted by the Buhari/Idiagbon regime had retroactive effect. Undoubtedly, retroactive laws infringe upon basic human rights, fundamental freedoms and the rule of law, amongst other unpalatable effects, just as the promulgation of ad hominem (Note 81) Decrees by Military Administrations infringes on the principles of natural justice. Ad hominem Decrees are usually promulgated in the course of a trial to secure the conviction of an accused who is standing trial.
A good case in point is the trial of Major General Zamani Lekwot and six others in connection with the Zango Kataf disturbances. In that case, the defense counsel, Chief G.O.K. Ajayi, had filed an ex parte motion at the Kaduna High Court asking that the tribunal be restrained from continuing the trial until the issue of fundamental human rights was addressed by the Supreme Court. Former President Babangida signed Decree 55, which ousted the jurisdiction of the court in respect of anything connected with the tribunal.
Consequent upon the promulgation of the Decree, the defense counsel withdrew from the case. In the end, Lekwot and others were convicted.
Similarly, while the action challenging the annulment of the June 12th, 1993 elections was going on in a Lagos High Court, three new Decrees were promulgated ousting the jurisdiction of the court over the matter (Note 82).
The regime of General Sani Abacha equally denied the late Chief M.K.O. Abiola the right to be heard when it tacitly refused to appoint Justices of the Supreme Court to make up the number of Justices required to hear the case Abiola had instituted to challenge the annulment of the 1993 presidential election. The Supreme Court could not be fully constituted to hear the case, as most of the available Justices were disqualified on account of possible bias, there being a pending libel action they had instituted against Chief Abiola's Concord Newspaper. Nevertheless, the Abacha regime deliberately refused to fill the vacancies in the Supreme Court as a device to frustrate Chief Abiola's action, which remained unheard until he died in custody in 1998.
Conclusion
With the above consideration of the observance of the principles of natural justice under civil and military administrations, coupled with the analysis of the historical antecedents of the doctrine, of several global human rights declarations, and of other incidental matters, it remains to add here that the observance of the principles of natural justice is not confined to any specific culture or geographical boundary. Evidently, the demand for its observance has not changed; neither have the effects of its violation. It has continued as it has been through the ages, as human nature too has remained the same.
Justice is thus rightly regarded as the "bond of society" and the "cornerstone of human togetherness". It is the condition in which the individual can feel able to identify with society, feel at home with it, and accept its rulings.
Recommendations
It is said that no problem ever exists without being accompanied by its own solution. Thus, it becomes imperative that the problems already identified be turned over to unveil their accompanying solutions.
In the area of enforcement of the law, it is essential that the principles of natural justice be given the utmost attention, as opposed to rigid and technical justice in the guise of enforcing the law simpliciter. Thus, even where an action is statute-barred, the court should not immediately hold itself divested of jurisdiction; it should at least consider the cause of the delay in instituting the action within time, with a view to determining the reasonableness or otherwise of such a delay.
As it affects the delegation of authority, the enabling law should state in a precise and discernible manner how such discretion should be exercised and how it may be varied or modified, including how and when the conferred powers become exercisable, and should not leave this to when the administrator thinks fit or, in his own opinion, expedient.
Adequate and comprehensive legal devices are also needful; these may include wide and compulsory publicity of delegated legislation, to enable the people for whom it is meant not only to know of its existence but also the extent of the powers conferred on the administrator.
The ousting of the jurisdiction of the courts on grounds of locus standi, and ouster clauses which shut citizens out from seeking judicial remedies, should be reviewed by way of abrogation wherever a human rights violation is involved.
The judiciary should be cleansed so as to make it live up to its accolade as the last hope of the common man. Corrupt judicial officers should be dealt with, and efforts should be made to guarantee the independence of the judiciary. Such measures may include the appointment of judges by the judicial arm of government, without giving such a right to the executive anymore, and the strict financial insulation of judicial funding.
Finally, human rights bodies and institutions should be strategically located in parts of the rural areas, as a way of making free legal services available to rural dwellers, instead of being unduly concentrated in towns and cities.
Notes
Tumor-associated macrophages remodeling EMT and predicting survival in colorectal carcinoma
ABSTRACT The immune contexture, a composition of the tumor microenvironment, plays multiple important roles in cancer stem cell (CSC) biology and epithelial–mesenchymal transition (EMT), and hence critically influences tumor initiation, progression and patient outcome. Tumor-associated macrophages (TAMs) are abundant in the immune contexture; however, their roles in CSC, EMT and the prognosis of colorectal cancer (CRC) have not been elucidated. In 419 colorectal carcinomas, immune cell types (CD68+ macrophages; CD3+, CD4+ or CD8+ T lymphocytes; CD20+ B lymphocytes), EMT markers (E-cadherin and Snail) and the stem cell marker CD44v6 were detected in the tumor center (TC) and the tumor invasive front (TF) respectively by immunohistochemistry. Tumor buds, which represent the EMT phenotype, were also counted. CD68+ macrophages were found to be the most abundant infiltrating immune cells in CRC. By correlation analysis, more CD68+TF macrophages were associated with more CD44v6 expression (p < 0.001), lower SnailTF expression (p = 0.08) and fewer tumor buds (p < 0.001). More CD68+TF macrophages were significantly related to higher CD3+TF T lymphocyte (p = 0.002), CD8+TF T lymphocyte (p < 0.001) and CD20+TF B lymphocyte counts (p = 0.004). Strong CD68+TF macrophage infiltration also predicted long-term overall survival. CRC patients with more tumor buds had worse survival; however, strong CD68+TF macrophage infiltration could reverse this unfavorable result, since patients with more tumor buds but increased CD68+TF macrophage infiltration had a favorable outcome, similar to the lower tumor bud groups. This study provides direct morphological evidence that tumor-associated macrophages at the invasive front play critical roles in counteracting the unfavorable effects of tumor buds, thus resulting in favorable outcomes for CRC patients.
Introduction
Tumor cells are heterogeneous, often showing varied phenotypes and residing in distinct microenvironment niches. 1 Tumor-associated immune cells (TAIs), through their types, densities and functional differences in the tumor microenvironment, are thought to play critical roles in tumor prevention or progression. 2 Tumor-associated macrophages (TAMs) are the main components in the tumor microenvironment. 3 Whether TAM infiltration in CRC benefits or harms patient outcome is still controversial. 4,5 Cancer stem cells (CSCs) are characterized as the subtype of cancer cells that have self-renewal ability, clonal tumor initiation capacity, and clonal long-term repopulation potential. 6 Currently, the immune milieu that helps CSCs escape cytotoxic insult and elimination by infiltrating inflammatory cells reveals a coherent relation with CSC. 7,8 It has been reported that CSCs interact with tumor macrophages to promote tumor cell invasion and escape from killing by natural killer cells. 9 However, exploring the complex interactions between TAMs and CSCs is still urgently needed.
Epithelial-mesenchymal transition (EMT) is a pro-metastasis process in which epithelial cells lose their cell polarity and cell-cell adhesion, observed as reduced epithelial markers like E-cadherin (E-cad), increased mesenchymal markers like vimentin, and activated transcription factors like Snail. 10 Tumor bud (TB), which is defined as a single cell or a cell cluster composed of at most 5 de-differentiated cells at the invasive front, 11 is a well-validated prognostic factor in CRC. [12][13][14][15] Prevailing opinion acknowledges that TB is at least a morphological characteristic of EMT. 16 EMT can be induced by many factors like growth factors such as TGF-β and cytokines such as IL-6, which can be secreted not only by tumor cells but also by activated TAMs. 17,18 However, it has also been hypothesized that the specific immune response acts against EMT, with strong lymphocyte infiltration targeting and then destroying TBs. 19 The role of TAMs in either blocking or enhancing tumor formation and/or progression is controversial, and their impact on EMT is still unclear.
In this study, we focused on CD68+ macrophages, as well as the other immune cell components CD3+, CD4+ or CD8+ T lymphocytes and CD20+ B lymphocytes, in the tumor center (TC) and the tumor invasive front (TF) respectively, and investigated whether their behaviors affected tumor cell stemness or EMT as well as patient outcome in CRC.
Distribution of Tumor-associated immune cells
Immune cells infiltrating the stroma of the tumor were counted in TC and TF separately (Fig. 1B-F and Supplementary Table S2). In both TC and TF, CD68+ macrophages were the most abundant infiltrating immune cells. CD68+ macrophage counts ranged from 0 to 94 per HP (mean: 25.75, median: 22.13) in TC and from 0 to 185.5 per HP (mean: 36.59, median: 30.00) in TF. In TF, the density of CD3+ T lymphocytes ranged from 0 to 230.50 per HP (mean: 26.40, median: 18.67), that of CD4+ T lymphocytes from 0 to 40.25 (mean: 1.95, median: 0.13), that of CD8+ T lymphocytes from 0 to 51.25 (mean: 13.25, median: 10.75), and that of CD20+ B lymphocytes from 0 to 230.75 (mean: 8.34, median: 3.00). The densities of these lymphocyte types in TC are also shown in Supplementary Table S2.
The association between Tumor-associated macrophages and other immune cells
We compared the consistency of CD68+ macrophages and other immune cells. In TF, more CD68+TF macrophages were significantly related to more CD3+TF T lymphocytes (r = 0.172, p = 0.002), CD8+TF T lymphocytes (r = 0.187, p < 0.001) and CD20+TF B lymphocytes (r = 0.152, p = 0.004) (Table 1). However, CD68+TC macrophages had no relation to any other immune cell type in TC (p > 0.05) (Supplementary Table S3). This reveals that CD68+ macrophages particularly located in TF could recruit lymphocytes, especially CD8+ T lymphocytes and CD20+ B lymphocytes, to the surrounding sites.
Tumor-associated immune cells and clinicopathological variables
The densities and locations of CD3+, CD4+, CD8+, CD20+ and CD68+ cells and their relationships with the clinicopathological features of the 419 CRC patients are shown in Supplementary Tables S4 and S5. Reduced CD68+TF macrophages were correlated with increased lymph node metastasis (p < 0.001, r = −0.195), higher TNM stage (p = 0.001, r = −0.178), increased vessel invasion (p = 0.001, r = −0.203) and a higher overall death rate (p = 0.001, r = −0.176). Similar to CD68+TF macrophages, reduced CD8+TF T lymphocytes and CD20+TF B lymphocytes were also related to higher TNM stage, increased vessel and perineural invasion and more distant metastasis (p < 0.05).
Correlation of tumor-associated macrophages with CSC marker
The CSC marker CD44v6 was distributed on the membrane (Fig. 1G). CD44v6 was detected in 139 of 367 cases (37.87%) in TC and in 97 of 289 (33.56%) in TF. The relations of TAMs and CD44v6 were then analyzed in TC and TF respectively. More CD68+TF macrophages were significantly related to stronger CD44v6TF expression (r = 0.211, p < 0.001) (Table 2), while CD68+TC macrophages had no relation to CD44v6TC (p = 0.13) (Supplementary Table S6).
Correlation of tumor-associated macrophages with EMT markers and Tumor bud
The epithelial marker E-cadherin and the representative EMT transcription factor Snail were also detected. Membranous E-cadherin and nuclear Snail were defined as positive staining (Supplementary Table S7). In TF, CD3+TF T lymphocytes were negatively related to SnailTF (r = −0.139, p = 0.021) (Supplementary Table S8). CD20+TF B lymphocytes had a tendency to be associated with SnailTF (r = −0.103, p = 0.069) (Supplementary Table S8). Though E-cadherinTF had no correlation with CD68+TF macrophages (p = 0.808), more CD68+TF macrophage infiltration was significantly related to lower TB densities (r = −0.048, p < 0.001) (Table 2).
CD68+TF macrophages predict good prognosis
We compared the prognostic values of CD68+ macrophages, CD3+, CD4+ or CD8+ T lymphocytes and CD20+ B lymphocytes in TC and TF respectively. By Kaplan-Meier survival analysis, lower CD20+TC (p = 0.028) or CD68+TF (p = 0.003) densities respectively indicated unfavorable overall survival (Fig. 2A/B). On the contrary, lower tumor bud counts indicated better overall survival (Fig. 2C, p = 0.005).
A univariate COX proportional hazard model showed that CD68+TF macrophage density was an indicator of outcome in CRC (HR = 1.89 (95% CI: 1.228-2.908), p = 0.004). CD68+TF macrophages and TB were negatively related (r = −0.167, p = 0.001). TB counts were merged into low (0-4 TBs) and high (≥5 TBs). We combined CD68+TF macrophages and TB into a variable named MATB (macrophages and TB), yielding four groups: CD68+TFhigh TBhigh, CD68+TFhigh TBlow, CD68+TFlow TBhigh, CD68+TFlow TBlow. Kaplan-Meier survival analysis showed that CD68+TFlow TBhigh patients had the worst survival (p < 0.001). In the high TB group, when CD68+TF was high, overall survival time was prolonged, similar to the low TB group (Fig. 2D). These results revealed that CD68+TF macrophages could increase CRC patients' overall survival even when they had higher tumor bud counts. For a better understanding of how MATB performed within each TNM stage, patients were divided by TNM stage; Kaplan-Meier analysis showed that MATB significantly influenced overall survival only in stage III (p = 0.009) (Supplementary Fig. S1).
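To make the MATB construction concrete, the following is a minimal sketch of how the four groups and their Kaplan-Meier curves could be assembled from a per-patient table. The column names (cd68_tf, tb_count, time_months, event) and the median-based CD68+TF cutoff are illustrative assumptions; the study itself used SPSS and does not state its CD68+TF cutpoint here, only the TB threshold of ≥5 buds.

```python
# Sketch of the MATB grouping; column names and the median CD68+TF
# cutoff are assumptions, not the paper's published procedure.
import pandas as pd
from lifelines import KaplanMeierFitter

def matb_groups(df: pd.DataFrame) -> pd.Series:
    cd68_high = df["cd68_tf"] >= df["cd68_tf"].median()  # assumed cutoff
    tb_high = df["tb_count"] >= 5                        # paper's TB threshold
    return (cd68_high.map({True: "CD68hi", False: "CD68lo"})
            + "_" + tb_high.map({True: "TBhi", False: "TBlo"}))

def plot_matb_survival(df: pd.DataFrame):
    """Fit and overlay a Kaplan-Meier curve for each of the four MATB groups."""
    df = df.assign(matb=matb_groups(df))
    kmf = KaplanMeierFitter()
    ax = None
    for name, grp in df.groupby("matb"):
        kmf.fit(grp["time_months"], event_observed=grp["event"], label=name)
        ax = kmf.plot_survival_function(ax=ax)
    return ax
```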
In order to evaluate the prognostic value of the combination of CD68+TF macrophages and TB, a multivariate COX proportional hazard model was fitted. Predictors including gender, age at diagnosis, chemotherapy, histological grade, histological type and TNM stage were entered into the analysis. Eventually, only TNM stage (p < 0.001) and MATB (p = 0.002) remained in the model; MATB was an independent prognostic predictor in addition to TNM stage (Table 3). After dividing patients by TNM stage, the multivariate COX proportional hazard model showed MATB was significant only in stage III (HR = 0.337 (95% CI: 0.160-0.337), p = 0.004).
Discussion
In this study, we evaluated the components of the tumor-associated immune cell types and counted their densities in TC and TF respectively in 419 CRCs. The stemness marker CD44v6 and the EMT markers E-cadherin and the transcription factor Snail were also detected. We found CD68+ macrophages to be the most abundant infiltrating immune cells. By correlation analysis, TAM infiltration in TF was not only accompanied by increased T and B lymphocytes, but was also inherently linked with cancer cell stemness and EMT. More CD68+ macrophage infiltration in TF was a favorable factor for patient outcome, and could even reverse the unfavorable results caused by high TB counts. The combination of CD68+TF macrophages and TB (MATB) was an independent prognostic predictor.
The tumor-associated immune microenvironment is significantly correlated with CSC and EMT. CSCs possess intrinsic characteristics that allow them to evade recognition by immune surveillance, thereby allowing transformed cells to proliferate, spread and implant. 7 The CSC phenotype also has a strong overlap with EMT. 20 The modulation of CSC by the tumor immune microenvironment is poorly understood. An inflammatory microenvironment contributes to EMT in gastric cancer, since TAIs like CD68+ macrophages can secrete EMT inducers like TNF-α, TGF-β, TGF-α, and IL-6. 21,22 However, whether immune cells eliminate abnormal cells with EMT traits, or support tumor cells in becoming more EMT-like, remains complex.
As CD68+ macrophages were the most abundant infiltrating immune cells in CRC, we focused on CD68+ macrophages. Macrophages represent the first line of defense against foreign pathogens in innate immunity, 23 inducing phagocytosis and subsequently degrading aberrant cells or possibly tumor cells. 24 Macrophages can also distinguish tumor cells from non-tumorigenic cells by specific cell membrane recognition 25 and subsequently clear away malignant cells by macrophage-mediated tumor cytotoxicity or ADCC (antibody-dependent cellular cytotoxicity). 26 TAMs shift dynamically in a balance between antitumor M1 and pro-tumor M2 phenotypes. CD44v6 belongs to the cell adhesion molecule CD44 family, whose expression on the membrane has been regarded as one of the CSC markers in CRC. 27 CSCs and TAMs frequently interact. 28 Raggi et al. found that stem-like tumor cells could recruit circulating monocytes to the lesion and induce macrophage differentiation in cholangiocarcinoma. 28 Liang Yi et al. proved that glioma-initiating cells, but not adhesive glioma cells, predominantly recruited TAMs. 29 CSCs inhibit cytotoxic T cell proliferation while stimulating the activation of T regulatory (Treg) cells, 30 promote the recruitment of immunosuppressive TAMs, 31 and enhance the transition of TAMs from the antitumor (M1) to the protumor (M2) phenotype. 9,32 In this study, we found CD68+TF macrophages had a positive relation to CD44v6TF, suggesting that stem-like tumor cells recruit CD68+ macrophages to the invasive front in CRC. The interaction behaviors of TAMs and EMT have been explored in recent years, but most reports to date have focused on the M2-like macrophage phenotype and its role in promoting tumor metastasis and EMT. 17,33 In this study, while CD68+TC macrophages increased E-cadherinTC expression, CD68+TF macrophages tended to decrease SnailTF expression, and strong CD68+TF infiltration also inhibited TB. This demonstrates that CD68+ macrophages have the potential to fight against malignant cells with EMT traits.
The mechanisms behind the anti-tumor effects of TAMs have not been fully elucidated, and seem potentially ascribable to the M1 phenotype, 3 partly controlled by the adaptive immune response of CD4+ and CD8+ T lymphocytes. 34,35 In the adaptive immune response, macrophages acting as antigen-presenting cells can recognize abnormal structures as dendritic cells do, and subsequently stimulate Th1 cells by presenting antigen via MHC-II. Classically activated macrophages also regulate T lymphocyte activation by producing IL-6, IL-12, TNF-α, or CXCL9/10/15. 23 Here, we also found that more CD68+ macrophage infiltration, particularly in TF, was accompanied by more T and B lymphocytes.
CD3+TF T lymphocytes were negatively related to SnailTF, and CD20+TF B lymphocytes had a tendency to be associated with SnailTF. Clinical parameters also showed that higher CD68+TF macrophages were correlated with decreased lymph node metastasis, decreased vessel invasion and lower TNM stage. Our work highlights that an activated immune response might impair EMT-like and CSC-like cells, especially TB. This study suggests a hypothesis that tumor cells with stem-like phenotypes recruit TAMs to the invasive front of the tumor lesion, where they surround tumor buds; subsequently, TAMs induce an immune response, especially of CD8+ T and CD20+ B lymphocytes, to eliminate these highly invasive tumor buds (Fig. 3). As a result, the activated immune response lowers the frequency of tumor metastasis.
Host immune responses have an impact on survival. 36 Zlobec et al 37 and our work propose that high CD68+ macrophage infiltration makes a positive difference in CRC patient survival, while others have reported TAMs mainly exhibiting an M2-like phenotype, promoting tumor progression and causing bad outcomes. 38,39 In this study, we also found that the high CD68+TF macrophage group lived longer than the low CD68+TF macrophage group, especially in TNM stages III and IV, by Kaplan-Meier analysis (p = 0.030, data not shown). These discrepancies between different studies may be due to macrophage phenotypes and tumor types as well as assessment methods. The anti-tumor mechanism of CD68+TF macrophages is supposed to be that the recruitment of macrophages contributes to the development of an adaptive immune response against the tumor, and to the balance between antigen availability and elimination through phagocytosis and the subsequent degradation of senescent or apoptotic cells, 36 which is associated with good outcome. We also assumed that high infiltration of CD68+TF macrophages arrests EMT traits, which might also contribute to the favorable outcome. Furthermore, we wanted to clarify whether CD68+TF macrophages could suppress EMT and lead to prolonged survival. Zlobec et al gave a concept of cell-to-cell contacts between tumor buds and TAMs, and high CD68+ counts predicted long-term overall survival. 37 We also found CD68+TF macrophages and TBs were negatively related. TB has been validated as a poor predictor in CRC. [12][13][14] After combining CD68+TF macrophages and TBs to assess overall survival in CRC patients, Kaplan-Meier survival analysis showed that higher CD68+TF macrophage infiltration reversed the unfavorable results caused by higher TB counts. The multivariate COX proportional hazard model also demonstrated that the combination of CD68+TF macrophages and TB (MATB) was an independent prognostic predictor, which has great potential for further application in CRC.

Figure 2. (A-C) High CD20+TC density, high CD68+TF density and low tumor buds predicted favorable overall survival, respectively (death/total); (D) CD68+TFlow TBhigh had the worst survival (CD68+TFlow TBhigh vs CD68+TFhigh TBhigh, p < 0.001; CD68+TFlow TBhigh vs CD68+TFlow TBlow, p < 0.001; CD68+TFlow TBhigh vs CD68+TFhigh TBlow, p = 0.008). In the high TB group, when CD68+TF was high, overall survival time was prolonged, similar to the low TB group (death/total). *** p < 0.001.
Conclusion
This study provided direct morphological evidence that tumor-associated immune cells play critical roles in EMT and cancer cell stemness. CD68+TF macrophages, which impeded EMT and CSC and induced an adaptive immune response, had a good influence on overall survival; they could even reverse the bad survival results of tumor buds. The combination of CD68+TF macrophages and TB (MATB) was an independent predictor in colorectal carcinoma.
Case materials
In all, the 419 patients with sporadic colorectal carcinoma who underwent radical surgery without preoperative chemotherapy or radiotherapy comprised 227 males and 192 females. Ages ranged from 24 to 91 years (median, 64 years). According to patients' records, 217 cases were from the colon and 202 from the rectum. Of all patients, 230 received regular chemotherapy based on 5-Fu after radical surgery, 186 never received chemotherapy, and 3 were unclear. Eighty-one patients were classified as TNM stage I, 127 as stage II, 170 as stage III, and 41 as stage IV.
Three hundred and forty-five specimens were adenocarcinoma, 46 were mucinous adenocarcinoma, and 28 were signet ring cell carcinoma or undifferentiated carcinoma. Histological differentiation was graded into low grade (gland formation ≥ 50%, 299 specimens) and high grade (gland formation < 50%, 120 specimens). At the end of follow-up, 327 patients were dead and 92 were alive. Follow-up periods ranged from 1 to 60 months (median: 33 months, mean: 35 months).
Construction of tissue array
Tissue microarrays of the 419 CRC specimens were constructed (Fig. 1A). Using a tissue cylinder 1 cm in diameter, three tissue punches per case were taken from formalin-fixed, paraffin-embedded blocks and transferred into a recipient paraffin block (6 × 7 punches) with a semi-automated tissue arrayer. Tissues were obtained from the TC, the TF and normal mucosa, verified against the corresponding H&E slides. The TC area was determined as at least a distance of 20× fields from the border of normal mucosa, and the TF area as a 20× field within the most distal tumor cells. Finally, recipient paraffin blocks were cut into 4-µm-thick slices and mounted on APES-coated slides.
Immunohistochemical staining
Five types of immune cells (CD3+, CD4+ and CD8+ T lymphocytes, CD20+ B lymphocytes, and CD68+ macrophages), the stem cell marker CD44v6, and two EMT markers (E-cadherin and Snail) were investigated in this study. The antibody information and staining patterns are summarized in Supplementary Table S1. Missing data were caused by tissue falling off.
Immunohistochemical staining by the two-step method (PV-9000 polymer detection system, GBI Labs, USA) was performed according to the conditions presented in Supplementary Table S1. For blank controls, the primary antibodies were replaced with PBS solution (100 mM, pH 7.4). Definite brown granular staining was defined as positive. The numbers of immune cells were counted in four hotspots (20×, 545 × 577 µm²) using a computer-automated method (Image-Pro Plus 6.0, Media Cybernetics Inc.), and the density of immune cells was defined as the average count per high-power field (HP, 20×).

Figure 3. The model illustrates the hypothesis that tumor cells with stem-like phenotypes recruit TAMs (brown, single sword) to the invasive front of the tumor lesion, where they surround tumor buds (double swords). Subsequently, TAMs induce an immune response, especially of CD8+ T and CD20+ B lymphocytes, to eliminate these highly invasive tumor buds.
As regards the stem cell markers and EMT markers, the percentages of positive cells were scored using the following scale: 0 D no staining or less than 5%; 1 D 5-25%; 2 D 26-50%; 3 D 51-75%; 4 D more than 75%, and staining intensity (0: negative, 1: weak, 2: moderate, and 3: strong) were scored. The final score was achieved when the score of positive cells percentage was multiplied by the score of staining intensity. Subsequently, CD44v6 was defined as low expression when the final score was less than 6, and high expression when the final score was more than or equal to 6, E-cadherin was defined as negative (final score 2) and positive (final score33), and Snail was defined as negative (final score D 0) and positive (final score 1) separately. Then the number of tumor buds was counted in TF and mean densities in an area of 500 mm £ 2500 mm were calculated. Three groups were determined as low with 0-4 tumor buds, mediate with 5-14 tumor buds and high with 15 tumor buds.
Statistical analysis
IBM SPSS Statistics 20.0 (New York, NY, USA) was used for statistical analysis. Correlations were analyzed by Spearman correlation. Univariate survival analyses were performed and survival curves were drawn using the Kaplan-Meier method; the differences between curves were tested by the log-rank test. Cumulative survival rates were calculated by the life-table method. Multivariate analysis was performed using the COX proportional hazard model, and a forward stepwise method was used to bring variables into the model. A difference was considered significant if the P-value was less than 0.05.
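The analyses were run in SPSS; for readers reproducing them elsewhere, here is a minimal sketch of the same tests (Spearman correlation, log-rank test, Cox model) using scipy and lifelines. The data frame layout and column names are assumptions, and the sketch omits SPSS's forward stepwise variable selection.

```python
import pandas as pd
from scipy.stats import spearmanr
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

def run_tests(df: pd.DataFrame) -> None:
    # Spearman correlation, e.g. CD68+TF counts vs. tumor bud counts
    rho, p = spearmanr(df["cd68_tf"], df["tb_count"])
    print(f"Spearman r = {rho:.3f}, p = {p:.4f}")

    # Log-rank test between two survival groups (grouping column is assumed)
    hi = df[df["group"] == "high"]
    lo = df[df["group"] == "low"]
    lr = logrank_test(hi["time_months"], lo["time_months"],
                      event_observed_A=hi["event"],
                      event_observed_B=lo["event"])
    print(f"log-rank p = {lr.p_value:.4f}")

    # Cox proportional hazards model on selected covariates
    cph = CoxPHFitter()
    cph.fit(df[["time_months", "event", "tnm_stage", "matb"]],
            duration_col="time_months", event_col="event")
    cph.print_summary()
```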
Study approval
This study was approved by the Ethics Committee of Biomedicine, Zhejiang University, China.
Financial support
The Oryza bacterial artificial chromosome library resource: Construction and analysis of 12 deep-coverage large-insert BAC libraries that represent the 10 genome types of the genus Oryza
Jetty S.S. Ammiraju, Meizhong Luo, José L. Goicoechea, Wenming Wang, Dave Kudrna, Christopher Mueller, Jayson Talag, HyeRan Kim, Nicholas B. Sisneros, Barbara Blackmon, Eric Fang, Jeffery B. Tomkins, Darshan Brar, David MacKill, Susan McCouch, Nori Kurata, Georgina Lambert, David W. Galbraith, K. Arumuganathan, Kiran Rao, Jason G. Walling, Navdeep Gill, Yeisoo Yu, Phillip SanMiguel, Carol Soderlund, Scott Jackson, and Rod A. Wing Arizona Genomics Institute, Department of Plant Sciences, BIO5 Institute, and Arizona Genomics Computational Laboratory, University of Arizona, Tucson, Arizona 85721 USA; Clemson University Genomics Institute, Clemson University, Clemson, South Carolina 29634, USA; Department of Plant Breeding and Genetics, International Rice Research Institute (IRRI), Los Baños, 4031 The Philippines; Department of Plant Breeding, Cornell University, Ithaca, New York 14853, USA; National Institute of Genetics (NIG), Shizuoka 411-8540, Japan; Flow Cytometry and Imaging Core Laboratory, Benaroya Research Institute at Virginia Mason, Seattle, Washington 98101, USA; Department of Agronomy, Genomics Core Facility, Purdue University, West Lafayette, Indiana 47907, USA
To better understand wild rice species and take advantage of the rice genome sequence (IRGSP 2005), we have embarked on a comparative genomics program entitled the "Oryza Map Alignment Project" (OMAP). The long-term objective of this program is to create a genome-level closed experimental system for the genus Oryza by developing comparative BAC-based physical maps of all 10 genome types of the genus to study evolution, genome organization, domestication, gene regulatory networks, and crop improvement (Wing et al. 2005).
As a first step toward achieving this goal, we report the construction and detailed characterization of 12 high-quality BAC libraries from one cultivated (O. glaberrima) and 11 well-characterized wild species representing the 10 genome types of Oryza. We selected these species in consultation with breeders and basic researchers, with emphasis on the presence of traits of potential agronomic importance (Supplemental Table 1) and, in some cases, the availability of mapping populations. Having convenient public access to the other nine genomes of Oryza in the form of BAC libraries will permit rapid advances in both basic and applied research for the most important food crop in the world.
Results
Nuclear DNA content of Oryza species as measured by flow cytometry
The genome sizes of nine of the 12 Oryza accessions used to construct BAC libraries were determined by flow cytometry. The 1C values for O. glaberrima [AA; 357 Mb] and O. minuta [BBCC; 1124 Mb] were adopted from previous flow cytometric data (Martinez et al. 1994). The 1C value for O. coarctata [HHKK] was not measured because of quarantine restrictions. We therefore used the value estimated for O. ridleyi [HHJJ; 1283 Mb], which is also an allotetraploid species and shares the HH genome type with O. coarctata.
Table 1 compares the results of the nuclear DNA content analysis with previously reported studies. Single peaks obtained from our analysis indicated that the nuclei preparations did not contain dividing cells. The genome sizes of the various rice species vary by as much as 3.6-fold, with O. brachyantha [FF] and O. glaberrima [AA] having the smallest (0.75 pg/2C and 0.74 pg/2C, respectively), while O. minuta [BBCC] and O. ridleyi [HHJJ], both tetraploids, have the largest (2.33 and 2.66 pg/2C). O. alta [CCDD] has a genome size of 1008 Mb, and this is the first report of a genome size for this species. Among the diploid species, O. australiensis [EE] (2.0 pg/2C) has the largest genome, followed by O. granulata [GG] (1.83 pg/2C). The other AA genome species, O. nivara and O. rufipogon, contain less nuclear DNA than the CC and EE genomes. Compared to the AA genome species O. nivara and O. rufipogon, their closest relative O. punctata [BB] has a 3%-5% smaller genome size (∼425 Mb).
BAC library construction and characterization
BAC library construction followed standard protocols (Luo and Wing 2003). Briefly, megabase-size DNA for each accession was prepared from nuclei embedded in agarose plugs. HindIII partially digested, size-selected DNA fragments were then ligated into pIndigoBAC536 SwaI and transformed into Escherichia coli. Often, more than one ligation, having different insert sizes and transformation efficiencies, was used to achieve the required number of clones for 10-fold redundancy for each library. The number of clones per library ranged between 36,864 and 147,456; clones were arrayed in 384-well microtiter plates (Table 2) and stored at −80°C.
To determine the average insert size and percentage of recombinant clones for each library, we analyzed 400-700 randomly picked clones, including clones from all the different ligations and at least one clone from every 384-well plate, depending on genome size. Insert sizes ranged from 10 kb to 300 kb, with a majority of fragments in the 120-150 kb size range (Supplemental Fig. 1). Insert size distributions for the O. nivara and O. australiensis libraries (Supplemental Fig. 1) did not follow the expected Poisson distribution, which may be explained by the multiple ligation mixes used to construct those libraries. The percentage of nonrecombinant clones was between 0% and 5%, indicating that more than 95% of the clones in these libraries contain inserts. The average insert sizes of these libraries ranged between 123 and 161 kb (Table 2).
To estimate the percentage of organellar DNA content, the libraries were screened with three chloroplast and four mitochondrial probes. Results showed that the libraries contained approximately 0.09%-3.9% chloroplast and 0%-0.7% mitochondrial DNA sequences (Supplemental Table 2), which is typical for similar DNA preparations (Luo et al. 2001).
By using the genome size, average insert size, and number of clones for each library, after subtraction of organellar and nonrecombinant contaminants, we estimate that the theoretical genome coverage of each Oryza library is between 10.8- and 19.3-fold (Table 2).
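The mathematical coverage estimate reduces to clones × average insert size / genome size, after discounting contaminant clones; a minimal sketch with illustrative numbers:

def genome_coverage(n_clones, avg_insert_kb, genome_mb,
                    organellar_frac=0.0, nonrecomb_frac=0.0):
    """Theoretical genome coverage after removing organellar and
    non-recombinant (empty) clones."""
    effective_clones = n_clones * (1 - organellar_frac) * (1 - nonrecomb_frac)
    return effective_clones * avg_insert_kb / (genome_mb * 1e3)

# Illustrative values in the ballpark of these libraries: 36,864 clones,
# 140-kb average inserts, a 450-Mb genome, 2% organellar and 3%
# non-recombinant clones -> roughly 10.9-fold coverage.
print(f"{genome_coverage(36864, 140, 450, 0.02, 0.03):.1f}-fold")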
Estimation of genome coverage by hybridization and contig analysis
To independently assess the genome coverage of each BAC library, a probe set representing a single locus from each of the 12 rice chromosomes (Supplemental Table 2) was hybridized to each library, and positive BAC clones were analyzed for their ability to assemble into FPC contigs. For 8 of the 12 libraries (including O. nivara and O. coarctata [HHKK]), preliminary FPC/BES physical maps were available, comprising a calculated minimum of 6.8× genome coverage per library. Using these FPC maps, contigs for all but 13 of the 120 possible contigs were identified (Supplemental Table 2). Upon manual inspection of each FPC contig, we immediately noticed that not all clones in each contig were identified by hybridization. We therefore performed an extended analysis to determine whether any BAC end sequences derived from the clones in the FPC contigs that were not identified by hybridization could be mapped to the predicted location on the sequenced rice genome. In 93 of 107 contigs analyzed, at least one BAC clone could be confirmed to be in the correct orthologous position although it was not detected by hybridization (Supplemental Table 2). The numbers of BAC clones identified by the extended analysis were then combined with the hybridization data and used to estimate the genome coverage of each of the eight BAC libraries; the results are shown in Table 3A. The hybridization/BES/FPC analysis revealed that all eight libraries covered their corresponding genomes by at least 10-fold (Table 3A).
For the four remaining BAC libraries, clones that hybridized to the 12-locus probe set were picked, end sequenced, fingerprinted, assembled into contigs individually, and analyzed as above. Results similar to those from the whole-genome FPC assemblies were obtained for the O. officinalis [CC], O. alta [CCDD], and O. ridleyi [HHJJ] libraries, with coverages ranging between 10- and 14-fold (Table 3B, Supplemental Table 2). However, analysis of the O. granulata [GG] library indicated only 6.3-fold genome coverage, 42% lower than mathematically predicted.
Repeat content estimates from pilot BAC end sequences
To obtain a preliminary view of the major repetitive element content of the 12 Oryza species under investigation, we generated nearly 6.7 Mb of sequence from 623 to 3658 BAC ends per library. These sequences represent totals of 60 to 862 kb and approximately 0.01% to 0.1% of each of the Oryza genomes (Table 4). The TIGR and University of Georgia (UGA) (Jiang and Wessler 2001) O. sativa (Nipponbare) repeat databases (http://www.tigr.org/tdb/e2k1/plant.repeats/) were combined and utilized for repeat detection using RepeatMasker (http://www.repeatmasker.org/). The UGA database was then used to estimate the fraction of interspersed repeats belonging to five broad repeat categories: LTR-retrotransposons, LINEs, SINEs, DNA elements, and unclassified (Table 4). Sixteen percent to 49% of the sequence generated from each species was detected as repetitive by RepeatMasker using the combined databases, with LTR-retrotransposons the predominant class for every species. If O. coarctata [HHKK] is excluded, because its genome size is unknown, then a roughly linear relationship between genome size and repeat content is observed, with O. brachyantha [FF] having the lowest LTR-retrotransposon content and O. australiensis [EE] the highest.
Discussion
New and confirmed genome size data for nine Oryza species
Accurate genome size data are a critical basis for the development of whole-genome analysis platforms. The Oryza BAC library resource project began using genome size data summarized in the RBG Kew Gardens Angiosperm DNA C-value database and the Martinez et al. (1994) and Uozu et al. (1997) publications. We observed inconsistencies between studies that used different accessions and methods. The most noticeable discrepancies involved species such as O. ridleyi [HHJJ], for which both Iyengar and Sen (1978) and flow cytometry data were available (Table 1). Our genome size measurements were found to be within a 7% range of the flow cytometry data previously reported for O. rufipogon, O. officinalis, O. australiensis, and O. brachyantha, compared either to Uozu et al. (1997) or to Martinez et al. (1994). However, for O. ridleyi [HHJJ], our genome size data were 64% higher than previously reported, even though the same accession was used (Martinez et al. 1994).
No flow cytometry data were available for O. nivara [AA], and its genome size was estimated by Iyengar and Sen (1978) to be 760 Mb, almost twice that of cultivated rice. We measured the O. nivara genome size to be 448 Mb, which is much closer to the other AA genome diploids O. sativa and O. rufipogon. One possible explanation for the large differences in genome size estimations between Iyengar and Sen (1978) and the other flow cytometric data reported here and elsewhere is that the 1C values reported by Iyengar and Sen (1978) may in fact correspond to 2C values (Table 1). If this were the case, then all of the genome size data reported by Iyengar and Sen (1978), except for O. ridleyi, would fall within 21% of the data measured by flow cytometry.
The discrepancy between genome size values measured by flow cytometry for O. ridleyi may be explained by the use of contaminated or heterozygous germplasm in the Martinez et al. (1994) study. The accessions used for the Oryza BAC library project were genetically homozygous and have been extensively used in breeding programs as donors of important agronomic traits.
BAC library coverage estimations
For a BAC library to be useful for positional cloning, physical mapping, and genome sequencing, it must have a minimum of 5-10× coverage across the entire genome. Genome coverage for the Oryza BAC library resource was determined mathematically and by hybridization/BES/FPC analysis, and in all but one case (O. granulata), both measurements indicated a minimum of 10-fold redundancy. For the majority of libraries, the extended analysis yielded lower genome coverages, primarily because not all of the clones in a given contig could be identified by hybridization or BES analysis. We suspect that some of the clones that were not identified by hybridization, including the clones identified by BES alone, were undetected due to technical issues associated with colony blot hybridization. These include the use of locus-specific probes from a single species [AA] to hybridize to distantly related species, uniform hybridization and washing conditions across all libraries, and decreasing filter quality due to multiple hybridizations. For clones that were identified by BESs alone, it is possible that they are false positives derived from paralogous sequence duplications in the genome. This is unlikely, however, as we only analyzed BESs from clones in contigs identified by hybridization. The issues raised above may be particularly relevant for the analysis of the O. granulata [GG] library, which is the most basal of the Oryza species and was the only library that showed less than 10-fold genome coverage by hybridization/contig analysis even though it was predicted to contain 10.8 genome equivalents.
We were unable to detect robust contigs for 19 of 216 predicted contigs, assuming that the syntenic relationships between these species and the reference japonica genome were maintained throughout evolution (Supplemental Table 2). The majority (13) of the "missing" contigs were from the four Oryza polyploid libraries. In the remaining six cases, BAC clones were identified by hybridization but could not assemble into contigs and were thus classified as "dispersed" (Supplemental Table 2). For O. minuta [BBCC], 9 of 12; O. alta [CCDD], 9 of 11 (one locus was dispersed); O. coarctata [HHKK], 7 of 12; and O. ridleyi [HHJJ], 10 of 12 probes identified clones that assembled into two contigs (Table 3A,B; Supplemental Table 2). Although further work is required to elucidate whether these duplicate contigs are derived from orthologous positions on each genome type, it is not unexpected that not all loci were represented twice per polyploid genome: several studies have demonstrated that rapid gene loss and genomic rearrangements are a consequence of polyploidization (Ozkan et al. 2001; Shaked et al. 2001). For the purposes of determining genome coverage, duplicate contigs were treated as independent loci. Regarding dispersed loci, five of the six were identified from the O. australiensis [EE] library. This observation may be indicative of large genome rearrangements in the EE genome and corresponds well with the EE genome being the largest of all the diploids (Table 1) and the most highly repetitive of all the Oryza species (Uozu et al. 1997; Table 4). Preliminary analysis of BAC end sequences of the clones identified at these dispersed loci shows that the majority share significant sequence similarity with a number of different classes of transposable elements (data not shown), suggesting that these loci may be located in repetitive regions of the EE genome.
Differentiation of colinear and homeologous BACs in the tetraploids: Opportunities to reconstitute the genomes of extinct diploid counterparts
Fingerprinting methods have recently been used to dissect the subgenomes of tetraploids (Cenci et al. 2003). However, such differentiation depends on the extent of sequence divergence of the two diploid counterparts in the tetraploid species (Cenci et al. 2003). Recently created polyploids like wheat exhibit very little intraspecific genetic variation due to genetic bottlenecks imposed during polyploidization. However, all the polyploids in the genus Oryza are either highly polymorphic or exhibit at least the same level of genetic variation as the diploids. For these reasons the polyploids are considered old or ancient (Jena and Kochert 1991; Wang et al. 1992; Ge et al. 1999).
A preliminary survey of repeat content from Oryza species and its correlation with genome size
Possible mechanisms for the genome size variation among the Oryza species include insertion and deletion of a variety of DNA sequences (SanMiguel and Bennetzen 1998; Devos et al. 2002; Feng et al. 2002; Han and Xue 2003; Edwards et al. 2004; Feltus et al. 2004; Ma and Bennetzen 2004). Although insertions have been largely attributed to amplification of retrotransposons (Devos et al. 2002; Ma and Bennetzen 2004; Ma et al. 2004), as well as of genome-specific unique sequences (Zhao et al. 1989; Uozu et al. 1997), deletions involve all classes of DNA sequences through homologous recombination and illegitimate recombination (Ma and Bennetzen 2004). Genome-wide BAC end sequences in combination with physical maps are important resources for gaining insights into genome sequence composition and organization (Mao et al. 2000; Messing et al. 2004). To explore the possible relationship between repeat elements and genome sizes among the Oryza species, we estimated the repeat content from BAC end sequences from the Oryza BAC libraries. Repeat databases derived from the O. sativa genome sequence successfully detected repeats in all 12 rice species considered here.
LTR-retrotransposons frequently dominate plant genomes. In this study, the largest genome, O. australiensis [EE], and the smallest, O. brachyantha [FF] (excluding O. coarctata [HHKK]), correlated with the abundance of LTR-retrotransposons. These results are in agreement with Uozu et al. (1997), who demonstrated a good correlation between the genome sizes of O. australiensis and O. brachyantha and overall chromosome size and morphology. Both metaphase and prometaphase chromosomes of O. australiensis were much larger than those of any other diploid Oryza species, with a high degree of heterochromatin condensation, whereas O. brachyantha chromosomes showed the opposite pattern. We are further exploring the causes of this dynamic variation in nuclear genome size by sequencing an orthologous region on chromosome 11 across all the genomes of Oryza. In combination with a well-defined phylogeny, studies with this new BAC library resource will add directionality to the analysis of genome size evolution in the genus Oryza and may answer questions regarding the mechanisms involved in such events.
Utilization of the Oryza BAC library resource
The Oryza BAC library resource is the first description of a comprehensive collection of libraries that represent all the genome types of an entire genus. To add further value to these libraries, we have already generated BAC end sequence and fingerprint databases for eight of the 12 libraries and expect to have similar data for the remaining four libraries in public databases by the end of 2005 (OMAP Consortia, unpubl.). This library resource is publicly available in the form of whole libraries, filters, and individual clones through our BAC/EST Resource Center (http://www.genome.arizona.edu/orders) and has already been extensively used worldwide for the analysis of genome evolution and organization, positional cloning, and gap closure in the japonica reference sequence.
For example, an emerging picture in rice evolution is that the genomes of Asian rice (O. sativa ssp. indica and japonica) have undergone rapid genome expansion in comparison to O. glaberrima, which diverged from a common ancestor around 0.64 MYA (Ma and Bennetzen 2004). However, no information is available regarding evolutionary trends relative to the immediate ancestors of Asian cultivated rice, O. nivara and O. rufipogon, or to the other nine genome types of the genus Oryza. To obtain a broader understanding of Oryza genome evolution and the consequences of domestication, we and others are using the Oryza BAC library resource to investigate key loci and whole chromosomes across all genomes by comparative physical mapping and genome sequencing. To illustrate, we utilized the O. nivara BAC library and end sequence and fingerprint databases to reconstruct O. nivara chromosome 3 with only 16 small gaps. Detailed comparative analysis showed that O. sativa ssp. japonica chromosome 3 is about 20% larger than its progenitor O. nivara chromosome 3, thereby supporting and extending the concept of rapid genome expansion in cultivated rice (Rice Chromosome 3 Sequencing Consortium 2005).
To further explore genome expansion relative to the other AA genomes and O. punctata [BB], we utilized the extended analysis data generated in this study for the Adh1 gene, a standard locus that has been used to study genome evolution across the plant kingdom. We measured the distances between paired BAC ends mapped onto the reference O. sativa genome and compared these distances with BAC clone insert sizes. The results indicated that the orthologous region in the reference O. sativa genome is larger by 50 kb (28%), 19.1 kb (11.3%), 35.1 kb (14.8%), and 28.2 kb (9.4%) relative to O. punctata, O. glaberrima, O. rufipogon, and O. nivara, respectively (Supplemental Table 3). Analysis of large and contiguous sequences generated from orthologous Adh1 regions of these species indicates that this dynamic variation is driven not only by the insertion of transposable elements but also by multiple other genetic mechanisms (J. Ammiraju, Y. Yu, R.T. Mueller, J. Currie, H.R. Kim, J.L. Goicoechea, and R.A. Wing, unpubl.).
In summary, this comparative structural analysis provides a previously unavailable glimpse through the window of rice evolution and confirms that the rice genome has undergone rapid changes after divergence from progenitors.
Plant material
Young leaf tissue was collected from clonally propagated single plants at IRRI from O. brachyantha (Acc. 101232)
Genome size determination by flow cytometry
Samples for flow cytometric analysis were prepared from seedling tissue as described by Arumuganathan and Earle (1991a,b) and Galbraith et al. (1983). Three to five measurements, on a minimum of 2000 nuclei per analysis, were made on two separate days, with fresh preparations made each day. Cell clumps and debris were excluded from analysis by using red fluorescence and forward-angle light scatter gates. Chicken red blood cells (3.0 pg/nucleus), Nicotiana tabacum var. Xanthi (11 pg/2C nucleus), A. thaliana ecotype Columbia (0.47 pg/2C nucleus), and Oryza sativa ssp. japonica cv. Nipponbare (0.91 pg/2C) were used as internal standards. Values for nuclear DNA content were estimated by comparison of the nuclear peaks from the Oryza species on the linear scale with the peak for chicken red blood cells (CRBC) included as an internal standard in each run. The conversion factor from picograms to base pairs is 1 pg = 0.965 × 10^9 bp (Bennett et al. 2000).
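The genome size calculation then follows from the peak ratio against the internal standard and the pg-to-bp conversion quoted above; a small sketch (sample values are illustrative only):

PG_TO_BP = 0.965e9  # 1 pg = 0.965 x 10^9 bp (Bennett et al. 2000)

def genome_size_mb(sample_peak, standard_peak, standard_pg):
    """Estimate the 1C genome size (Mb) from the ratio of the sample
    nuclear peak to the internal standard peak on a linear scale;
    standard_pg is the 2C DNA content of the standard in picograms."""
    sample_pg_2c = standard_pg * sample_peak / standard_peak
    return (sample_pg_2c / 2) * PG_TO_BP / 1e6

# Illustrative: a sample peak at 0.25x the CRBC peak, with CRBC at
# 3.0 pg/nucleus -> 0.75 pg/2C -> roughly 362 Mb (1C), close to the
# smallest Oryza genomes reported above.
print(f"{genome_size_mb(0.25, 1.0, 3.0):.0f} Mb")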
BAC library construction
All protocols used for megabase-size DNA preparation, library construction, picking, and arraying were as previously described (Luo and Wing 2003; Kudrna and Wing 2004), except the following: (1) to reduce organelle contamination in the nuclei preparations, nuclei isolation buffer containing 0.5% Triton X-100 was used during the nuclei washing steps (Georgi et al. 2002); (2) all libraries were constructed in the HindIII site of the vector pIndigoBAC536 SwaI. This vector is identical to pIndigoBAC536 (H. Shizuya et al., unpubl.) except for the addition of two SwaI sites near and internal to the two NotI sites that flank the LacZ gene (M. Luo, A. Jetty, and R.A. Wing, unpubl.); (3) all ligations were transformed into DH10B T1 phage-resistant E. coli cells (Invitrogen).
Insert size analysis
BAC plasmid DNA was isolated from randomly picked clones from each Oryza library, in a 96-well format, using a simplified high-throughput method (H.R. Kim and R.A. Wing, unpubl.) based on conventional alkaline lysis methods (Sambrook and Russell 2001). BAC DNA (∼500 ng) was digested with NotI and resolved on CHEF (Bio-Rad) gels as previously described (Luo and Wing 2003).
BAC library screening
High-density colony filters for each library were prepared using a Genetix Q-bot (Genetix). Each 22.5 × 22.5 cm filter (Hybond-N+; Amersham) contained 18,432 independent clones arrayed in a 4 × 4 double-spotted pattern. All hybridizations followed Chen et al. (2000), and the addresses of BAC clones that hybridized with specific probes were recorded and input as "markers" into FPC (Soderlund et al. 2000).
Organellar DNA content estimation
To estimate the percentage of chloroplast and mitochondrial DNA content in each library, one high-density filter from each library was screened with a pool of three barley chloroplast probes, ndhA, rbcL, and psbA (obtained from J. Mullet, Texas A&M University), and with a pool of four rice mitochondrial probes, atpA, cob, atp9, and coxE (obtained from T. Sasaki, MAFF, Japan) separately.
BAC end sequencing and repeat analysis of the Oryza species
BAC ends were sequenced using BigDye v3.1 (Applied Biosystems) with the T7 (5′-TAATACGACTCACTATAGGG-3′) and BES_HR (5′-CACTCATTAGGCACCCCA-3′) primers. Cycle sequencing was performed using the following conditions: 150 cycles of 10 sec at 95°C, 5 sec at 55°C, and 2.5 min at 60°C, followed by DNA purification using CleanSeq (Agencourt). Samples were eluted into 20 µL of water and separated on ABI 3730xl DNA sequencers. Sequence data were collected and extracted using ABI sequence analysis software. Phred software (Ewing and Green 1998; Ewing et al. 1998) was used for base calling, and vector and low-quality sequences were removed using the program Lucy (Chou and Holmes 2001). All sequences were submitted to the GSS section of GenBank.
Repeat analysis was undertaken using RepeatMasker version 3.0.5 (http://www.repeatmasker.org/). The program was run in "sensitive mode" using cross_match version 0.990329 as the search engine and a custom repeat library composed of both the TIGR Oryza Repeat Database (http://www.tigr.org/tdb/e2k1/plant.repeats/) and a database of transposable elements from Jiang and Wessler (2001).
FPC/BES contig assembly and analysis to estimate genome coverage of the Oryza BAC libraries
Genome coverage estimates utilized (1) hybridization data from the 12 chromosome-specific probes, (2) BAC end sequence data from the positively hybridizing clones, and (3) fingerprint/contig data, either from existing whole-genome FPC assemblies derived from the Oryza Map Alignment Project (http://www.omap.org) ("extended analysis") or from specific FPC assemblies of only the clones that hybridized with a given probe ("small projects").
Extended analysis
This strategy was used for the species with high-coverage FPC/BES phase I physical maps (including O. australiensis [EE; 63,368 clones] and O. coarctata [HHKK; 50,146 clones]). First, an incremental FPC build was constructed by implementing the CpM (clone plus marker) function on the phase I physical maps, as described above, at a 1e−50 cutoff. End merges of contigs were then performed at cutoffs of 1e−21 to 1e−18. BLAST analysis was carried out in parallel for all the BAC end sequences from the positive hybridization hits against the O. sativa pseudomolecules representing the 12 chromosomes of rice (GenBank accession numbers AP008207-AP008218). Alignments larger than 100 bp that mapped to an interval of 200 kb flanking the position of the marker in the reference genome, O. sativa ssp. japonica, were included in the analysis. A contig was considered positive when a majority of the clones in it were hit by both hybridization and BES analysis. BLAST analysis of BESs from the clones that mapped within a 50-CB (a metric of FPC) unit interval flanking the position of the marker in the "positive contig" was also carried out against the O. sativa pseudomolecules, to identify positive clones that were not identified by hybridization.
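The BES acceptance criteria (alignment longer than 100 bp, mapping within the 200-kb window flanking the marker) amount to a simple filter; a schematic sketch with hypothetical values:

def keep_bes_hit(aln_len_bp, hit_pos_bp, marker_pos_bp, window_bp=200_000):
    """Keep a BAC-end-sequence BLAST hit if the alignment exceeds 100 bp
    and maps within the window flanking the marker position on the
    O. sativa pseudomolecule."""
    return aln_len_bp > 100 and abs(hit_pos_bp - marker_pos_bp) <= window_bp

# Hypothetical hits: (alignment length, hit position); marker at 14.25 Mb.
hits = [(250, 14_300_000), (80, 14_260_000), (400, 15_100_000)]
passed = [h for h in hits if keep_bes_hit(*h, marker_pos_bp=14_250_000)]
print(passed)  # only the first hit satisfies both criteria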
Small projects
For those libraries without FPC/BES physical maps (O. officinalis [CC], O. granulata [GG], O. ridleyi [HHJJ], and O. alta [CCDD]), positive clones from hybridizations were fingerprinted and end sequenced. Fingerprints were generated using a modified SNaPshot fingerprinting method (Luo et al. 2003; H.R. Kim and R.A. Wing, unpubl.). Trace files were processed with GeneMapper v3.0 (ABI) to generate size files that were assembled with FPC (Soderlund et al. 2000) as separate projects for every marker tested per species. These projects were initially assembled very stringently. The cutoff values were then gradually reduced until clones began to form contigs. At that particular cutoff, singletons were incorporated into new contigs. End-to-end merges and reanalysis of the resulting contigs were then performed in cycles until all the clones were added. The initial and final cutoff values of these analyses were chosen based on the number of clones involved in the analysis and the nature of the species (Soderlund et al. 2000).
Table 2. Characteristics of the Oryza BAC library resource
a Genome coverage after subtraction of organellar and non-insert-containing clones. b Genome coverage estimated from the adopted genome size value of O. ridleyi.
Table 3A. Genome coverage estimations for eight Oryza species based on hybridization and extended analysis utilizing whole-genome FPC physical maps and BAC end sequences
a Genome coverage based on the total number of hybridization and BES hits identified by extended analysis, divided by the total number of loci per genome in the diploids or per subgenome in the tetraploids (dispersed clones and undetected homeologous contigs [see Supplemental Table 2] were not taken into account for estimating genome coverage). b Average HX coverage of both subgenomes for each tetraploid species (see Supplemental Table 2 for details). c Calculated coverage of the FPC physical maps (excluding singletons).
Table 3B. Genome coverage estimations for four Oryza species based on hybridization and contig analysis (see Methods for details)
a,b Hybridization and average HX coverage are as described for Table 3A, except that these values are obtained from specific FPC and BES assemblies as small projects (see Methods for details). c Calculated genome coverage estimations from Table 2.
Table 4. Analysis of repetitive sequences from pilot BAC end sequences of Oryza BAC libraries
a Interspersed repeats, including LTR elements, LINE elements, SINE elements, DNA transposons, UNC (unclassified), and simple sequence repeats. b,c Not determined.
Reversed Robin Hood syndrome visualized by CT perfusion
Reversed Robin Hood syndrome (RRHS) was first described in 2007 as a cause of worsening neurological deficit in the setting of an acute ischemic event. RRHS is the shunting of cerebral blood flow to nonstenotic vascular territories due to impaired vasodilation brought on by hypercapnia. A 77-year-old lady presented with acute onset left hemiparesis and an exacerbation of her underlying chronic obstructive pulmonary disease (COPD). CT angiography and perfusion visualized RRHS, and appropriate treatment was initiated. Treatment strategies for RRHS differ considerably from those for acute ischemic stroke. Choosing the correct treatment strategy is decisive for a good clinical outcome.
Introduction
Reversed Robin Hood syndrome (RRHS) has previously been described in cases of worsening neurological deficit after an index stroke event [1]. The steal phenomenon has traditionally been visualized using transcranial ultrasound. Previously published reports have not employed CT perfusion (CTP), which has now become the mainstay of ischemic penumbra diagnostics [2,3]. This brief report presents a case of RRHS as the index event in a patient with underlying chronic obstructive pulmonary disease. As described here, the patient presented with acute onset stroke symptoms. RRHS, however, warrants a different treatment strategy if massive cerebral infarction is to be avoided.
Case report
A 77-year-old lady presented to the emergency room with acute onset left hemiparesis.
The patient had a history of hypertension and chronic obstructive pulmonary disease (COPD), for which she was receiving treatment, but was otherwise healthy and functionally independent. The patient had experienced a slight weakness in her left arm and left leg on the morning of admission but had not contacted the emergency services before a gradual worsening of the symptoms later that same day.
The paramedics found the patient to be awake with a left hemiparesis, left facial palsy and a mild dysarthria. The paramedics suspected an acute onset stroke, and the patient was admitted to our hospital as a stroke code.
Upon physical examination in the emergency room, the patient had a National Institutes of Health Stroke Scale (NIHSS) score of 11. The temperature was 36.7°C, the pulse 90 beats per minute, and the blood pressure 169/71 mm Hg. The initial prehospital oxygen saturation was 65% on room air, subsequently increasing to 92% while breathing oxygen through a nasal cannula at a rate of 3 liters per minute. The respiratory rate was 22 breaths per minute.
An arterial blood gas obtained while the patient was receiving oxygen through a nasal cannula at a rate of 3 liters per minute showed a pH of 7.27, a partial pressure of carbon dioxide of 81 mm Hg (10.8 kPa), a partial pressure of oxygen of 60 mm Hg (8.0 kPa), a bicarbonate of 37.3 mEq per liter, and a base excess of +10.5 millimoles per liter.
A CT scan was performed, including precerebral and intracerebral angiography, followed by a perfusion scan. The CT examination was performed on a Siemens Somatom Definition Flash scanner using GE Healthcare Omnipaque intravenous contrast and standardized protocols for angiography and perfusion.
Blood work, including the white-cell count, differential, hemoglobin, electrolytes, high-sensitivity troponin, CRP, and kidney and liver function tests, was within normal limits except for a serum potassium level of 5.0 millimoles per liter.
The CT angiography showed no precerebral or intracerebral occlusions. However, a 90% stenosis (graded by North American Symptomatic Carotid Endarterectomy Trial [NASCET] criteria) was seen at the origin of the right internal carotid artery (Fig. 1). This stenosis is also shown in coronal, axial, and sagittal views (Fig. 2). The CTP scan (Fig. 3) showed markedly reduced cerebral blood flow (CBF) in both the anterior and middle cerebral artery territories of the right hemisphere, with normal cerebral blood volume. The mean transit time and time to drain were both markedly increased. In the contralateral hemisphere, the CBF was slightly elevated and both the mean transit time and time to drain noticeably reduced, reflecting an increase in cerebral perfusion.
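These perfusion parameters are tied together by the central volume principle, CBF = CBV / MTT (a standard relation, not stated in the original report), which is why a normal cerebral blood volume combined with a prolonged mean transit time implies reduced flow; a one-function illustration with made-up values:

def cbf(cbv_ml_per_100g, mtt_s):
    """Central volume principle: CBF (mL/100 g/min) = CBV / MTT."""
    return cbv_ml_per_100g / (mtt_s / 60.0)

# Illustrative: normal CBV (4 mL/100 g) with MTT prolonged from 4 s to 8 s.
print(cbf(4, 4), cbf(4, 8))  # 60.0 vs 30.0 mL/100 g/min: flow is halved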
The lack of a large vessel occlusion despite the significant perfusion defects in the right cerebral hemisphere, the subtle findings in the contralateral hemisphere, and the pronounced hypercapnia on arterial blood gas were indicative of RRHS: a cerebral steal phenomenon induced by a hypercapnic state, brought on by an exacerbation of the patient's underlying COPD.
Discussion
RRHS is the paradoxical decrease in CBF to an at-risk vascular territory owing to shunting of flow to nonstenotic supply areas [1]. Traditionally, the diagnosis of RRHS has been made using transcranial ultrasound to verify the intracranial shunting [4]. The underlying pathophysiology is an impaired vasodilatory response in stenotic vessels, leading to shunting toward areas supplied by healthier vessels along the path of least resistance.
In the acute setting, verification of the clinical diagnosis using the radiological tools at hand is vital. In the case described here, the lack of a large vessel occlusion could lead clinicians to a strategy of increasing cerebral perfusion by raising systemic blood pressure. Increasing systemic blood pressure with vasopressors should be avoided in RRHS, as this will further exacerbate the steal and may result in hemispheric infarction.
As demonstrated here, appropriately correcting the underlying hypercapnia should be the focus of treatment. The patient was treated for type 2 respiratory failure using noninvasive mechanical ventilation (continuous positive airway pressure, CPAP) and nebulized corticosteroids. This led to a gradual reversal of the steal phenomenon and normalization of the intracranial flow. The neurological deficits improved gradually over the subsequent 24 hours, and a follow-up MRI scan showed two small watershed infarcts in the anterior circulation (anterior cerebral artery-middle cerebral artery [ACA-MCA] watershed zone) of the right hemisphere (Fig. 4). The MRI showed no older infarctions in the right hemisphere. In cases where a radiologically significant carotid artery stenosis has led to embolic infarction in the ipsilateral hemisphere, endarterectomy may be considered. Vascular surgeons were consulted, and the decision was made to manage the stenosis with best medical treatment. The patient was discharged 3 days after admission with an NIHSS score of 1. The widespread use of CTP in the setting of acute stroke provides another method of visualizing CBF and intracranial circulation. CTP can visualize areas of decreased and increased flow relative to each other. This makes it possible to evaluate not only cerebral hypoperfusion and penumbra in acute ischemic stroke but also areas of hyperperfused tissue [5,6]. CTP findings should always be carefully evaluated when an occlusion is not found, and conditions such as epileptic hemiparesis, encephalitis, and intracranial shunting should be considered.
The diagnosis of RRHS has previously been put forward in the setting of worsening neurological deficit shortly after an index stroke [1] . In this case RRHS was the presenting diagnosis without a preceding index stroke event. The recognition of RRHS and its underlying pathophysiology is crucial in order to determine the correct course of treatment.
Access to transcranial ultrasound is limited at many acute treatment centers and other diagnostic modalities should therefore be considered. The case presented here highlights CTP as an underutilized tool in the evaluation of intracranial blood flow.
Patient consent
The patient consented to the publication of this brief report.
Constraints on the Evolution of the Galaxy Stellar Mass Function II: Quenching Timescale of Galaxies and its Implication for their Star Formation Rate
We study the connection between the observed star formation rate-stellar mass (SFR-$M_*$) relation and the evolution of the stellar mass function (SMF) by means of a subhalo abundance matching technique coupled to merger trees extracted from a N-body simulation. Our approach consists of forcing the model to match the observed SMF at redshift $z \sim 2.3$, and letting it evolve down to $z \sim 0.3$ according to a $\tau$ model, an exponentially declining functional form which describes the star formation rate decay of both satellite and central galaxies. In this study, we use three different sets of SMFs: ZFOURGE data from Tomczak et al.; UltraVISTA data from Ilbert et al.; and COSMOS data from Davidzon et al. We also build a mock survey combining UltraVISTA with ZFOURGE. Our modelling of quenching timescales is consistent with the evolution of the SMF down to $z \sim 0.3$, with different accuracy depending on the particular survey used for calibration. We tested our model against the observed SMFs at low redshift, and it predicts residuals (observation versus model) within $1\sigma$ observed scatter along most of the stellar mass range investigated, and with mean residuals below 0.1 dex in the range $\sim [10^{8.7}-10^{11.7}] M_{\odot}$. We then compare the SFR-$M_*$ relation predicted by the model with the observed one at different redshifts. The predicted SFR-$M_*$ relation underpredicts the median SFR at fixed stellar mass relative to observations at all redshifts. Nevertheless, the shapes are consistent with the observed relations up to intermediate-mass galaxies, followed by a rapid decline for massive galaxies.
INTRODUCTION
The stellar mass function (SMF) is a fundamental statistic in astrophysics that helps us to improve our knowledge of galaxy formation and evolution. The SMF directly describes the distribution of the galaxy stellar mass, and the study of its evolution as a function of time can provide important information on the star formation history of galaxies.
As pointed out recently by many authors (e.g., Leja et al. 2015; Tomczak et al. 2016; Contini et al. 2017), one aspect of the evolution of the SMF is not yet clear: its relation with the star formation rate-stellar mass (SFR-M*) relation (e.g., Tomczak et al. 2016; Contini et al. 2017, and references therein). In fact, it is in principle possible to connect the observed SMF at any redshift with the SFR-M* relation to obtain the evolution of the SMF, assuming that all processes (not only star formation) responsible for the growth of galaxies are considered.
A similar approach has been followed by several authors (e.g., Conroy & Wechsler 2009; Leja et al. 2015; Tomczak et al. 2016; Contini et al. 2017), and all these studies concluded that, under reasonable assumptions for galaxy mergers and the stripping of stars, which are the most important physical processes acting on a galaxy's stellar mass, there are still discrepancies between the observed and modeled SMFs. Nevertheless, it now appears clear that a mass-dependent slope for the SFR-M* relation is needed in order to reconcile the observed evolution of the SMF with that inferred by connecting the SFR-M* relation with the observed SMF at high redshift (Contini et al. 2017 and references therein). In accordance with these studies (e.g., Leja et al. 2015; Tomczak et al. 2016; Contini et al. 2017), neither mergers and stellar stripping altogether, nor hidden undetected low-mass quiescent galaxies, are likely to be responsible for the mismatch (which appears to be in the range 0.2-0.5 dex over the whole stellar mass range investigated).
The question arises naturally: what is responsible for the mismatch? There might be several reasons, ranging from physical processes that we have not understood correctly, to the correct shape of the SFR-M* relation and the scatter around it, or even to uncertainties in the observed SMF at high redshift. Answering this question is not an easy task, simply because, in following the above-mentioned approach, many uncertainties are mutually connected. Taking advantage of the ZFOURGE survey, Tomczak et al. (2016) generated star-formation histories (SFHs) of galaxies by means of the redshift-dependent SFR-M* relation, and integrated the set of SFHs with time to obtain mass-growth histories to compare with those inferred from the evolution of the SMF of Tomczak et al. (2014). Their result, despite the reasonable agreement between the observed and inferred SMFs, suggests that either the SFR measurements are overestimated, or the growth of the Tomczak et al. (2014) mass function is too slow, or both.
In Contini et al. (2017) (hereafter PapI) we followed an approach similar to that described above. In order to match the SMF at high redshift and predict its evolution with time, we used an analytic model that considers mergers and stripping, coupled with an abundance-matching technique to set the galaxy stellar mass. The main goal of that study was to reconcile the evolution of the SMF coupled to the observed SFR-M* relation, by including real merger trees from N-body simulations that provide the accretion history of galaxies, together with a prescription for stellar stripping. We concluded that the SFR-M* relation must be mass and redshift dependent, and that both mergers and stellar stripping are important processes for the shape of the massive end of the SMF. Moreover, by testing two different sets of SMFs coupled to the same SFR-M* relation and the same modeling for mergers and stellar stripping, we found different evolutions down to low redshift. As a consequence, either the observed SMF at high-z bears uncertainties too large to be conclusive (as we concluded in PapI), or the growths of the SMF described by different observations are not self-consistent.
In this study, we want to address this topic by following an approach slightly different from the one used in PapI. The core of the method remains unchanged, meaning that we use merger trees constructed from the same N-body simulation and the same prescription for stellar stripping, and we set up the stellar mass of galaxies by using an abundance-matching technique. In order to describe the star formation histories of galaxies, we match the observed SMF at high redshift and assign an SFR by means of the observed SFR-M* relation (as done in PapI), but we let the SFR decay exponentially with time according to a quenching timescale that is set for each galaxy (in PapI, by contrast, SFRs were assigned to galaxies according to their mass at each time step). The goals are to:
1) calibrate our model with the observed SMFs from different surveys and study their evolution, 2) reduce the mismatch between the observed and inferred SMFs at low redshift, and 3) compare the predicted SFR-M* relation with the observed one as a function of time.
The paper is structured as follows. In Section 2 we describe in detail our approach, the method followed, and modeling, and we give a brief introduction to the surveys considered. In Section 3 we show our results, which will be fully discussed in Section 4. Finally, in Section 5 we give our conclusions. Throughout this paper we use the standard cosmology summarized in Section 2. The stellar masses are given in units of M ⊙ , and we assume a Chabrier (2003) IMF.
METHODS
The simulation used in this paper is based on Kang et al. (2012) and fully described in PapI. We refer the readers to those papers for the details of the simulation, while here we give an overview of our approach.
Similarly to the approach used in PapI, we use the so-called subhalo abundance matching (ShAM) technique to populate dark matter haloes with galaxies (Vale & Ostriker 2004); this technique is now widely used in numerical simulations to connect galaxies with dark matter structures (Guo & White 2014; Yamamoto et al. 2015; Chaves-Montero et al. 2016, to quote just a few recent works). For more details concerning the ShAM technique, we refer readers to PapI or the studies quoted above.
In order to study the evolution of the SMF, we need to connect galaxies with haloes at different times. Although the outputs of the simulation are stored in 60 snapshots, from z ∼ 100 to the present time, we start by matching the observed SMF at z_match = 2.3, so that the SMF predicted by our model is exactly the observed one. The algorithm reads the merger trees constructed from our simulation and links galaxies and dark matter haloes in a one-to-one correlation by first sorting them in mass. As in PapI, since new haloes can form after z_match, we populate them with galaxies by using the stellar mass-halo mass relation at that redshift.
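In essence, the matching step rank-orders haloes by mass and galaxies by stellar mass (the latter drawn from the observed SMF), then pairs them one to one. The following is a minimal, scatter-free sketch of this step, not the authors' code:

import numpy as np

def abundance_match(halo_masses, stellar_masses):
    """Assign stellar masses to haloes by rank: the most massive halo
    receives the most massive galaxy, and so on (monotonic matching)."""
    assert len(halo_masses) == len(stellar_masses)
    order = np.argsort(halo_masses)[::-1]          # haloes, descending
    ranked_stars = np.sort(stellar_masses)[::-1]   # galaxies, descending
    assigned = np.empty_like(ranked_stars)
    assigned[order] = ranked_stars                 # pair rank to rank
    return assigned

# Toy example: 5 haloes matched to 5 stellar masses sampled from an SMF.
halos = np.array([1e12, 5e11, 2e13, 8e11, 3e12])
stars = np.array([2e9, 5e10, 8e9, 1e10, 3e8])
print(abundance_match(halos, stars))  # biggest halo gets 5e10 Msun galaxy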
After z_match we let galaxies evolve according to their merger histories (given by the merger trees) and to their star formation histories. At z_match (or at the redshift when they are born, if that happens after z_match) we assign SFRs to galaxies by means of the SFR-M* relation observed at that redshift, and we let the SFRs evolve down to z ∼ 0.3 according to a τ model that describes the star formation histories (SFHs) of both satellite and central galaxies (see Section 2.1).
Galaxies can lose a fraction of their stellar mass once they become satellites via stellar stripping, due to gravitational interactions with their hosts. Our model considers stellar stripping, and here we summarize the prescription used for this process. For further details, we refer the reader to PapI.
Following Contini et al. (2014), who analyzed the channels for the formation of the intracluster light, we model stellar stripping in a very similar fashion, that is, by assuming an exponential form for the stellar mass lost:

ΔM_*(t_i) = η(M_halo) · M_* · [1 − exp(−(t_infall − t_i)/τ_s1,s2)],  (1)

where t_infall is the lookback time when the galaxy last entered a cluster (i.e., became a satellite), t_i is the lookback time at z = z_i, and τ_s1 and τ_s2 are two normalizations arbitrarily set to 30 Gyr and 15 Gyr, respectively, for satellites (s1) associated with a distinct dark matter substructure and for orphan galaxies (s2, galaxies not associated with any dark matter substructure). We set τ_s1 > τ_s2 in order to account for the effect of dark matter in shielding the galaxy from tidal forces (see PapI for further details). The term η(M_halo) is the stripping efficiency, which sets the strength of stellar stripping as a function of the main halo mass M_200, the mass within the virial radius R_200.
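Taking the exponential form of equation (1) as reconstructed above (the placement of η as a multiplicative efficiency is our reading of the surrounding text), the stripping prescription amounts to a few lines:

import numpy as np

def stripped_mass(m_star, t_infall, t_i, halo_eta, is_orphan):
    """Stellar mass lost by a satellite between infall and lookback time
    t_i, following the exponential form of equation (1); halo_eta stands
    in for the halo-mass-dependent stripping efficiency eta(M_halo)."""
    tau = 15.0 if is_orphan else 30.0  # Gyr; orphans are stripped faster
    return halo_eta * m_star * (1.0 - np.exp(-(t_infall - t_i) / tau))

# Example: a 1e10 Msun satellite, infall 8 Gyr ago, evaluated at 2 Gyr
# lookback, with an assumed efficiency eta = 0.5 -> ~9.1e8 Msun lost.
print(f"{stripped_mass(1e10, 8.0, 2.0, 0.5, is_orphan=False):.2e} Msun")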
In Contini et al. (2014) we considered a second important process for the formation of the intracluster light, i.e. the so-called "merger channel" (see also Murante et al. 2007). We simply assume that when two galaxies merge, 30% of the satellite stellar mass becomes unbound and goes to the diffuse component. A similar prescription has been used also in other semi-analytic models (see, e.g. Monaco et al. 2007).
Model of star formation
In this section, we describe in detail how we model the galaxy stellar mass growth from high to low redshift due to star formation. As explained above where we described our approach, we force the algorithm to match the observed SMF at redshift z match = 2.3. In PapI we assigned SFRs to galaxies making use of the SF R − M * relations observed by Tomczak et al. (2016), which means that each galaxy was assigned an SFR according to its stellar mass and redshift. In this study, we model the SFHs of galaxies in a different way: 1) We assign an SFR to each galaxy according to the SF R − M * relation either at z = z match or, if a galaxy forms after z match , at z = z f orm .
2) After z match or z f orm , the SFR of each galaxy will evolve according to functional forms that take into account information such as type (central or satellite), stellar mass, and, more importantly, a quenching timescale.
This approach is clearly different from the one adopted in PapI and adds more direct information about the time galaxies spend before being quenched. We treat the SFHs of central and satellite galaxies separately. For centrals, we use a prescription very similar to the one adopted in Noeske et al. (2007), with some differences:

SFR(t) = SFR_match/form · exp(−Δt/τ_c),  (2)

where Δt is the time elapsed since z = z_match/form and τ_c is the quenching timescale. τ_c is derived from the following equation:

τ_c = (c_τ / M_*) · f(z),  (3)

where M_* is the stellar mass at z = z_match/form, c_τ is a normalization, f(z) is a redshift-dependent correction that shortens the timescale at higher redshift, and a random scatter in the range ±20% is assigned as a perturbation. Our prescription is different from the original one (Noeske et al. 2007) in the sense that we consider only the stellar mass (rather than the baryonic mass) and add a redshift-dependent correction. We will come back to the reason for this choice later.
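The central-galaxy SFH is then a plain exponential decline; the sketch below evolves a single central according to equation (2) (the timescale and initial SFR are illustrative values, not fits from the paper):

import numpy as np

def sfr_central(sfr_init, dt_gyr, tau_c):
    """Exponentially declining SFR of a central galaxy (equation 2):
    SFR(t) = SFR_init * exp(-dt / tau_c), with dt the time elapsed since
    z_match/form and tau_c the quenching timescale."""
    return sfr_init * np.exp(-dt_gyr / tau_c)

# Illustrative: SFR = 20 Msun/yr at z_match, tau_c = 4 Gyr, over 8 Gyr.
times = np.linspace(0.0, 8.0, 5)
print(np.round(sfr_central(20.0, times, 4.0), 2))
# -> [20.   12.13  7.36  4.46  2.71]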
To model the SFHs of satellites, we take advantage of a revised version of the so-called delayed-then-rapid quenching mode suggested by Wetzel et al. (2013), in which the SFRs of satellite galaxies evolve like those of centrals for 2-4 Gyr after infall and then quench rapidly. For satellites, we assume a delayed quenching, after which they quench according to a quenching timescale τ_s. The quenching timescale τ_s is set at z = z_match/form; it is given by equation 3 if a galaxy becomes a satellite before z_match, and it is set as a random fraction between 0.1 and 0.5 of τ_c for the other galaxies. These choices altogether guarantee that τ_s < τ_c for every galaxy. Hence, the SFR of satellites evolves as described by equation 2 if

t − t_infall ≤ t_delay,

where t_delay is randomly chosen in the range [2-4] Gyr, and as

SFR(t) = SFR(t_infall + t_delay) · exp[−(t − t_infall − t_delay)/τ_s]  (4)

thereafter. The parameter SFR_match/form in equations 2 and 4 is set at z = z_match/form and derived as done in PapI, by following Equation 2 in Tomczak et al. (2016):

log(SFR) = s_0 − log[1 + (M_*/M_0)^(−γ)],  (5)

where s_0 and M_0 are in units of log(M_⊙/yr) and M_⊙, respectively. In Equation 5 the model considers a random scatter in the range ±0.2 dex as a perturbation, as suggested by A. Tomczak (2017, private communication). Tomczak et al. (2016) find that such a parameterization works well even if quiescent galaxies are considered. Then, as in PapI, where we showed that this parameterization agrees fairly well with the evolution of the SMF with time, we assume Equation 5 to be valid for all galaxies. Hence, s_0 and M_0 are given by quadratic functions of redshift (Equation 3 in Tomczak et al. 2016),

s_0(z) = a_0 + a_1·z + a_2·z²,  log M_0(z) = b_0 + b_1·z + b_2·z²,  (6)

with best-fit coefficients taken from that study. Equation 5 and the set of equations 6 describe the evolution with time of the SFR-M* relation with a mass-dependent slope, which has been demonstrated to be necessary in order to reproduce the evolution of the SMF with time (see, e.g., Leja et al. 2015; Tomczak et al. 2016; Contini et al. 2017).
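Putting the satellite rules together, a delay of 2-4 Gyr during which the galaxy declines as a central, followed by a rapid decline on τ_s, gives the schematic sketch below (all numerical values are illustrative):

import numpy as np

def sfr_satellite(sfr_init, dt_gyr, dt_infall_gyr, tau_c, tau_s, t_delay):
    """Delayed-then-rapid quenching (equations 2 and 4): decline on the
    central timescale tau_c until t_delay after infall, then on the
    (shorter) satellite timescale tau_s. dt_gyr is the time since
    z_match/form; dt_infall_gyr is the time spent as a satellite."""
    if dt_infall_gyr <= t_delay:            # still in the delay phase
        return sfr_init * np.exp(-dt_gyr / tau_c)
    t_switch = dt_gyr - dt_infall_gyr + t_delay   # epoch of rapid phase
    sfr_at_switch = sfr_init * np.exp(-t_switch / tau_c)
    return sfr_at_switch * np.exp(-(dt_infall_gyr - t_delay) / tau_s)

# Illustrative: infall 5 Gyr after z_match, 3 Gyr delay, tau_s = 0.3*tau_c.
for dt in (2.0, 6.0, 9.0, 12.0):
    infall = max(0.0, dt - 5.0)  # time spent as a satellite
    print(dt, round(sfr_satellite(20.0, dt, infall, 4.0, 1.2, 3.0), 2))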
In this study, we have chosen to add a redshift term in the equation that provides the quenching timescales of both satellites and centrals. In fact, it has been shown that the quenching timescale, at least for satellite galaxies, depends on redshift (see, e.g., Tinker & Wetzel 2010; Wetzel et al. 2013), being shorter for galaxies accreted at higher redshift (Contini et al. 2016). Galaxies accreted at higher redshift tend to have a larger gas fraction M_cold/M_* than those accreted later, to be less massive, to reside in lower-mass haloes, and to eject mass more efficiently (Contini et al. 2016). In principle, there is no justification for using the same assumption for central galaxies. Nevertheless, it must be noted that in the original version of the model we applied this correction only to satellites. The plots we show in Section 3 are the result of the model described above, which remarkably improved over the original version (which had no redshift-dependent quenching timescale for centrals).
Our model for star formation quenching described in this section considers both environmental and mass quenching. The first one is explicitly included in equations 2 and 4, the quenching being much faster for satellite galaxies. A sort of mass quenching (which is not that described in Peng et al. 2010) is instead implicit in the calculation of the quenching timescales, in a way that, for both satellite and central galaxies, the quenching is faster with increasing stellar mass and redshift. We will come back to this topic in Section 4.
Surveys
In this study, we take advantage of observed high-redshift data from three different surveys: ZFOURGE (Tomczak et al. 2014; Straatman et al. 2016), UltraVISTA (Ilbert et al. 2013), and COSMOS (Davidzon et al. 2017). These surveys differ from one another in the total area covered, mass completeness limit, and number statistics. In this section, we summarize the main aspects of each survey, and we refer the reader to the relevant studies quoted above for further details.
The first set of data we considered, in order to calibrate our model at z = z_match and compare model predictions with observations at lower redshift, was constructed by Tomczak et al. (2014). These authors used observations from the FourStar Galaxy Evolution Survey (ZFOURGE), which is composed of three 11′ × 11′ pointings with coverage in the CDFS (Giacconi et al. 2002), COSMOS (Capak et al. 2007), and UKIDSS (Lawrence et al. 2007) fields. The ZFOURGE fields also use HST imaging taken as part of the CANDELS survey (Grogin et al. 2011; Koekemoer et al. 2011), and in Tomczak et al. (2014) the survey was supplemented with the NEWFIRM Medium-Band Survey (Whitaker et al. 2011).
The second set of data (from Ilbert et al. 2013) is a merged catalog of data from different studies. The main branch comes from a photometric catalog of near-IR data from the UltraVISTA project (McCracken et al. 2012) and optical data from the COSMOS project (Capak et al. 2007). COSMOS is a wide survey that covers a 2 deg² area with multiwavelength (more than 35 bands) data. The UltraVISTA DR1 data release covers 1.5 deg² in four near-IR filters (Y, J, H, and Ks) and provides deeper data than the previous COSMOS near-IR datasets from McCracken et al. (2010). The major strengths of the set of data used by Ilbert et al. (2013) rely upon the large number of spectra available and, even more importantly, the robustness of the derived photometric redshifts.
The third and last set of data considered comes from the COSMOS2015 catalog (Laigle et al. 2016), which includes ultradeep photometry from UltraVISTA (McCracken et al. 2012), SPLASH (Capak et al. 2012), and Subaru/Hyper Suprime-Cam (Miyazaki et al. 2012). The COSMOS2015 catalog contains precise photometric redshifts and stellar masses for more than half a million objects over the 2 deg² COSMOS field, and the deepest regions reach a 90% completeness limit of 10^10 M_⊙ at z = 4. It provides large-number statistics owing to the large volume probed and improved depth compared with the previous versions of the catalog.
Because the catalogs described above cross-match different surveys, for the sake of simplicity and to avoid confusion, we hereafter call the first catalog ZFOURGE, the second UltraVISTA, and the third COSMOS. In Figure 1 we show the SMFs obtained from these catalogs, at z ∼ 2.3 (our z_match) in the left panel and at z ∼ 0.3 in the right panel. At high redshift, the three SMFs look different, not only at the high-mass end, where uncertainties are typically large, but also in the stellar mass range [10^9.5 − 10^10.5] M_⊙, with residuals that reach ∼0.3-0.4 dex. Despite that, the SMFs agree very well at z ∼ 0.3 up to 10^11.2-11.3 M_⊙. The difference between the surveys at high redshift is mainly due to the mass completeness limit, which affects the fit of the data. The completeness limits at z ∼ 2.3 for the surveys we have chosen are (in log M_⊙) 9.0 (ZFOURGE), 10.0 (UltraVISTA), and 9.3 (COSMOS).
In order to overcome the problem of the mass completeness limit, we build another "mock survey" that takes advantage of the low-mass completeness limit of ZFOURGE (which, conversely, does not sample the high-mass end) and of UltraVISTA (which does). We derive the best-fit parameters of a single Schechter function (Schechter 1976),

φ(M) dM = φ* · (M/M*)^α* · exp(−M/M*) d(M/M*),  (7)

where α* is the slope at the low-mass end, φ* is the normalization, and M* is the characteristic mass. A mock realisation of the data was derived as follows. We used the ZFOURGE fit to construct mock data from low mass up to 10^11 M_⊙ and the UltraVISTA fit for the rest of the stellar mass range. By using equation 7, we then calculated the best-fit Schechter parameters for this set of mock data. The "mock survey" is named HYBRID, and its SMF is shown in Figure 1.
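The HYBRID construction can be sketched directly: evaluate each survey's Schechter fit over the mass range where it is reliable and refit the combined points with equation (7); the parameter values below are placeholders, not those of Table 1:

import numpy as np
from scipy.optimize import curve_fit

def log_schechter(log_m, log_phi_star, log_m_star, alpha):
    """Single Schechter function (equation 7), expressed per dex of
    stellar mass: phi(logM) = ln(10) * phi* * x^(1+alpha) * exp(-x)."""
    x = 10.0 ** (log_m - log_m_star)
    return log_phi_star + np.log10(np.log(10) * x ** (1 + alpha) * np.exp(-x))

# Placeholder fits standing in for ZFOURGE (low-mass end) and
# UltraVISTA (high-mass end) at z ~ 2.3.
lo = np.arange(9.0, 11.0, 0.25)   # ZFOURGE side, up to 1e11 Msun
hi = np.arange(11.0, 12.0, 0.25)  # UltraVISTA side
mock_logm = np.concatenate([lo, hi])
mock_logphi = np.concatenate([
    log_schechter(lo, -3.4, 10.9, -1.4),
    log_schechter(hi, -3.6, 11.0, -1.2),
])

# Refit the combined mock points to obtain the HYBRID Schechter parameters.
popt, _ = curve_fit(log_schechter, mock_logm, mock_logphi,
                    p0=(-3.5, 11.0, -1.3))
print(dict(zip(["log_phi*", "log_M*", "alpha*"], np.round(popt, 2))))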
To overcome the problem of the mass completeness limit, we build another "mock survey" that takes advantage of the low mass completeness limit of ZFOURGE (which, on the other hand, does not sample the high-mass end) and of UltraVISTA (which does). We derive the best-fit Schechter parameters by using a single Schechter function (Schechter 1976),

φ(M) dM = φ* (M/M*)^α* exp(−M/M*) d(M/M*),     (7)

where α* is the slope at the low-mass end, φ* is the normalization, and M* is the characteristic mass. A mock realization of the data is derived as follows: we use the ZFOURGE fit to construct mock data from low masses up to 10^11 M_⊙ and the UltraVISTA fit for the rest of the stellar mass range. By using equation 7, we then calculate the best-fit Schechter parameters for this set of mock data. The "mock survey" is named HYBRID, and its SMF is shown in Figure 1. In Table 1 we list the sets of parameters used to calibrate the model at z = z_match ≃ 2.3 with our four surveys.

RESULTS

We calibrate our model with each of the surveys described above and let galaxies evolve down to z ∼ 0.3, the lowest redshift for which we have observed data to compare with model predictions. In this section, we study the evolution of the SMF down to z ∼ 0.3 as predicted by each flavor of the model. The evolution of the SMF depends on our modeling described in Section 2 and is linked to the star formation quenching of galaxies. Figure 2 shows several examples of star formation rate histories (SFRHs) for central (upper panels) and satellite (bottom panels) galaxies, in different stellar mass ranges and for a large variety of quenching timescales τ_c and τ_s. The SFRH of centrals is very regular, as shown by the exponential decline, and its amplitude clearly depends on the SFR at z = z_match (or z_form) and on τ_c. The SFRH of satellites, on the other hand, is very diverse. Some satellites have SFRHs very similar to those of central galaxies, while others show SFRHs with one or even several rapid declines after infall. In the first case, they are satellites accreted shortly after either z = z_match or z = z_form, so that τ_c ∼ τ_s because the galaxy itself did not have much time to grow. In the other case, τ_s ≪ τ_c, so the satellite galaxy after infall keeps forming stars as a central for 2−4 Gyr and then its SFR declines quickly. In a few cases, a galaxy experiences several rapid declines because it can spend some time as a satellite and then become central again.
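As a rough illustration of the HYBRID construction, the sketch below fits single Schechter functions to two binned SMFs and stitches the mock points at 10^11 M_⊙. The mass grid, fit ranges, and initial guesses are placeholders, not the values actually used to produce Table 1.

```python
import numpy as np
from scipy.optimize import curve_fit

def schechter(logM, logMstar, alpha, logphi):
    """Single Schechter function (number density per dex) in log10 mass."""
    x = 10.0 ** (logM - logMstar)
    return np.log(10.0) * 10.0 ** logphi * x ** (1.0 + alpha) * np.exp(-x)

def fit_schechter(logM, phi, p0=(10.8, -1.4, -3.0)):
    """Best-fit (logM*, alpha*, log phi*) for a binned SMF."""
    popt, _ = curve_fit(schechter, logM, phi, p0=p0)
    return popt

def build_hybrid(logM_grid, popt_zfourge, popt_uvista, logM_cut=11.0):
    """Mock SMF: ZFOURGE fit below the cut, UltraVISTA fit above,
    then refit a single Schechter to the stitched points."""
    phi_mock = np.where(logM_grid <= logM_cut,
                        schechter(logM_grid, *popt_zfourge),
                        schechter(logM_grid, *popt_uvista))
    return fit_schechter(logM_grid, phi_mock)
```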
In Figure 3 we show the evolution of the SMF predicted by the model and compared with the observed one, from z ∼ 1.8 to z ∼ 0.3. The model has been calibrated at z_match = 2.3 using the observed SMF from ZFOURGE. Model and observed data are represented by stars and diamonds, respectively, while the dashed lines indicate the region between the ±3σ scatter observed by Ilbert et al. (2013) (UltraVISTA survey), which we add to the plot to give an idea of the shape of the SMF in the high-mass end (the data from Tomczak et al. 2014 do not go beyond [10^11.25 − 10^11.5] M_⊙). Overall, the model reproduces fairly well the evolution of the SMF down to z ∼ 0.3. At z ∼ 1.8 and in the stellar mass range ∼[10^9 − 10^10] M_⊙ the SMF is slightly overpredicted, while the trend changes at the knee and beyond. Considering the scatter, however, the model reproduces the first gigayear of evolution. At z ∼ 1.3 the prediction worsens in the stellar mass range ∼[10^9.5 − 10^10.3] M_⊙, but stays within the scatter, while the knee and the high-mass end are well reproduced. This means that there are no drastic deviations from the observed SMF after the second gigayear of evolution. At z ∼ 0.8, the predicted SMF agrees very well with the observed one over the whole stellar mass range, and the trend persists down to z ∼ 0.3, although the very low-mass end is overpredicted and the very high-mass end lies at the limit of the observed scatter. Hence, our model calibrated with ZFOURGE reproduces well about 8 Gyr of evolution of the SMF.
In Figure 4 we show the predictions of our model calibrated with UltraVISTA. Model and observed data are represented by stars and solid lines, respectively, while the dashed lines indicate the region between the ±3σ scatter observed by Ilbert et al. (2013). The evolution of the SMF appears different from that seen in Figure 3, where the model is calibrated with ZFOURGE. At z ∼ 1.8 and z ∼ 1.3, the knee of the SMF predicted by the model is offset low from the observed one, while the low- and high-mass ends are within the observed scatter. From z ∼ 1.3 to z ∼ 0.7 (∼3 Gyr) there is a rapid evolution of the low-mass end in the stellar mass range [10^8.5 − 10^9.2] M_⊙, which persists down to z ∼ 0.3. At this redshift, the model is able to reproduce the SMF in the stellar mass range [10^9.5 − 10^11.9] M_⊙, but fails below 10^9.5 M_⊙, where the number density is substantially overpredicted.
A similar picture can be drawn from Figure 5, which shows the predictions of our model calibrated with COSMOS (symbols and lines have the same meaning as in Figure 4) and a comparison with the Davidzon et al. (2017) data. Although the scatter in this case is distributed differently, the main features seen before still hold. At z ∼ 1.8 and z ∼ 1.3 the knee of the SMF predicted by the model is still offset low with respect to the observed one, although the mismatch is less evident than in Figure 4. From z ∼ 1.3 to z ∼ 0.7, there is still the rapid evolution of the low-mass end in the stellar mass range [10^8.5 − 10^9.2] M_⊙ seen in the previous case, which holds down to z ∼ 0.3. If we compare Figures 4 and 5 panel by panel, we see that the evolution of the SMF predicted by the model calibrated with UltraVISTA looks remarkably similar to that predicted by the model calibrated with COSMOS. This is interesting since, as we can see from Figure 1, at z = z_match COSMOS shows higher number densities in the low-mass end than UltraVISTA, and lower number densities beyond the knee. We will come back to this in Section 4.

Figure 6 shows the SMF evolution as predicted by the model calibrated with our mock survey HYBRID, from z ∼ 1.8 to z ∼ 0.3 (symbols and lines have the same meaning as in Figure 3). As explained in Section 2.2, HYBRID has been constructed by merging ZFOURGE and UltraVISTA data, in such a way as to be sensitive to ZFOURGE data from the low-mass end up to 10^11 M_⊙, and to UltraVISTA in the high-mass end. For this reason, the evolution of the SMF in the stellar mass range [10^8.5 − 10^11] M_⊙ is almost identical to that predicted by the model calibrated with ZFOURGE, and the only appreciable difference arises in the high-mass end. In fact, at all redshifts shown, the high-mass end predicted by the model calibrated with HYBRID is closer to the average number density rather than to the upper limit, as predicted by the model calibrated with ZFOURGE. Overall, the two calibrations yield very similar evolutions.
To better judge the goodness of each prediction, it is necessary to quantify the deviation of the model from the observed data. We address this point in Figure 7, where we plot the residuals, that is, the difference between the logarithm of the observed number density and the logarithm of the predicted number density, for the model calibrated with each of our surveys, at z ∼ 0.3. In each panel, stars (and diamonds in the top left and bottom right panels) represent the residuals, while the dashed and dash-dotted lines show the 1σ and 3σ observed scatter, respectively. As noted above, ZFOURGE is very close to HYBRID, and UltraVISTA to COSMOS. What is most striking is the level of accuracy of the predictions after about 8 Gyr of evolution of the SMF, so it is worth discussing them one by one. In the top left panel of Figure 7, we show the residuals between ZFOURGE and the model. In the stellar mass range [10^9.3 − 10^11.5] M_⊙, residuals lie within the 1σ observed scatter, and most of them are very close to zero. If we consider the 3σ observed scatter, only the first two data points lie outside it. We calculated the mean residual over the stellar mass range shown, and the same quantity without the two largest residuals (which are always the first and last data points in all panels). In the case of ZFOURGE, the mean residual is 0.15 dex, which reduces to 0.09 dex if we do not consider the first and last data points (see Table 2).
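For reference, the residual and mean-residual computations just described can be sketched as below, assuming (since the text does not spell it out) that the quoted means are averages of absolute residuals:

```python
import numpy as np

def smf_residuals(phi_obs, phi_model):
    """Residuals in dex: log10(observed) - log10(predicted) number density."""
    return np.log10(phi_obs) - np.log10(phi_model)

def mean_residual(res, trim_ends=False):
    """Mean absolute residual; optionally drop the first and last data
    points, as done for the trimmed values quoted in Table 2."""
    r = np.abs(res[1:-1]) if trim_ends else np.abs(res)
    return r.mean()
```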
Residuals increase markedly in the top right (UltraVISTA) and bottom left (COSMOS) panels. Here the model reproduces the SMF very well in the stellar mass range [10^9.5 − 10^11.5] M_⊙, but residuals are very large outside this range. Indeed, as reported in Table 2, the mean residuals are in both cases much larger than those discussed above for ZFOURGE. As one might expect, the residuals for HYBRID are very close to those shown in the top left panel (ZFOURGE), and somewhat lower for the first two and the last data points. In the case of HYBRID, the mean residual is 0.12 dex, which reduces to 0.09 dex if we do not consider the first and last data points (see Table 2).
These results are remarkably important considering that no study so far focused on this topic and following a similar approach (e.g., Leja et al. 2015; Tomczak et al. 2016; Contini et al. 2017) has found average residuals as small as those reported here. In the light of this, in the next section we fully discuss these results and their importance for the goals of this study.
DISCUSSION
As discussed in Section 1, many attempts have been made in recent years to reconcile the evolution of the SMF with the observed SFR−M_* relation. The large amount of effort spent on this topic by several authors led to the conclusion that the slope of the SFR−M_* relation must be mass and redshift dependent. Moreover, this relation alone cannot explain the evolution of the SMF: other physical processes responsible for galaxy growth, such as mergers and stellar stripping, must be taken into account. Nevertheless, once every such process is properly considered, there is still a non-negligible mismatch between the observed SMF and the predicted one. Understanding the reason behind this is the main goal of this study.

Figure 3. Evolution of the SMF from z ∼ 1.8 down to z ∼ 0.3 predicted by our model calibrated on the ZFOURGE survey at z_match = 2.3. Stars and diamonds represent model and observed data, respectively, while the dashed lines represent the Ilbert et al. (2013) fit of UltraVISTA data in the high-mass end (3σ range).
Recently, Steinhardt et al. (2017), by means of an analytic model, focused on the low-mass end of the SMF and confirmed that the growth of galaxies along the main sequence alone cannot reproduce the observed SMF. As a possible solution, they suggest that mergers are also necessary to describe the growth of galaxies in the stellar mass range they investigated. This is in good qualitative, but not quantitative, agreement with Tomczak et al. (2016). In fact, Tomczak et al. (2016) show that mergers can help to lower the discrepancy between the observed and inferred SMFs if low-mass galaxies merge with more massive galaxies, but the rate required would imply that between 25% and 65% of low-mass galaxies merge with a more massive galaxy per gigayear. This rate appears too high compared with current estimates of galaxy merger rates (e.g., Lotz et al. 2011; Williams et al. 2011; Leja et al. 2015). In PapI, besides mergers, we also considered stellar stripping. We showed that mergers and stellar stripping have opposite effects, and that a full modeling of both alleviates the discrepancies between the observed and theoretical SMFs, but not enough for a full match (within the observed scatter) over the whole stellar mass range investigated.
Several arguments have been invoked to explain the residuals between the two SMFs, which appear to be a function of redshift and stellar mass. The most credible ones rely on the accuracy of the measurements of both stellar mass and SFR (Leja et al. 2015; Tomczak et al. 2016; Contini et al. 2017). In fact, in PapI we showed that once we reduce z_match we obtain a better agreement at low redshift, which means that either measurements of stellar mass and SFR at lower redshift are less affected by uncertainties, or the scatter in these measurements has less time to propagate as time passes. Tomczak et al. (2016) let the SMF evolve for a limited amount of time and for different redshift bins, finding residuals similar to those we found in PapI. They conclude that either the SFR measurements are overestimated or the growth of the Tomczak et al. (2014) mass function is too slow, or both.

Figure 4. Evolution of the SMF from z ∼ 1.8 down to z ∼ 0.3 predicted by our model calibrated on the UltraVISTA survey at z_match = 2.3. Stars represent model data, while the dashed lines represent the Ilbert et al. (2013) fit of the UltraVISTA data (3σ range).
One of the goals of this study is to considerably reduce the residuals between the observed and inferred SMFs. To do so, we slightly changed the approach followed in PapI: rather than assigning SFRs according to the SFR−M_* relation at each time step, the model builds SFHs that depend on the galaxy type, stellar mass, and quenching timescale. As shown in Section 3, the same model calibrated with different surveys leads to different evolutions of the SMF. This implies that there might be an intrinsic inconsistency in the growth of the SMF described by the same survey. As pointed out in PapI, the inferred evolution of the SMF is sensitive to the shape of the observed SMF at z = z_match. The differences in the evolution of the SMF among our four surveys are mainly a consequence of the gap between them in the low-mass end at z = z_match (left panel of Figure 1). COSMOS and UltraVISTA show the largest residuals in the low-mass end at z ∼ 0.3 because they are also higher than ZFOURGE and HYBRID in the very low mass range at z = z_match. The calibration of the model with the observed SMF at z = z_match is not trivial: high number densities in the low-mass range end up producing too many galaxies at low redshift. The calibration is thus very sensitive to both the slope α* and the normalization φ* of the SMF. The observed SMFs agree at z ∼ 0.3, and those predicted by the model show minimal residuals over most of the stellar mass range at the same redshift. This means that the small (but appreciable) offset seen at z = z_match in the low-mass range affects the evolution of the modeled SMFs in the low-mass range only, while the offset in the rest of the stellar mass range can explain the different residuals among the modeled SMFs in the high-mass end. To test the importance of the calibration of the model, we introduced a mock survey called HYBRID, which is, as explained in Section 2.2, a combination of ZFOURGE (sensitive to the low-mass end and intermediate masses) and UltraVISTA (sensitive to the high-mass end). The evolution of the SMF predicted by the model calibrated with HYBRID is very consistent with that predicted by the model calibrated with ZFOURGE. This strengthens the above argument that even a minimal offset in the low-mass end at z = z_match can remarkably affect the evolution of the SMF down to low redshift in the same stellar mass range.
ZFOURGE and HYBRID show the smallest average residuals at z ∼ 0.3, one-half of those shown by COSMOS and UltraVISTA (see Table 2). This is entirely due to the residuals in the low-mass end (see Figure 7). In fact, if we only consider the stellar mass range [10^9.4 − 10^11.5] M_⊙, the mean residuals are very similar, between 0.04 and 0.06 dex for all the surveys. In a stellar mass range spanning two orders of magnitude, our model predicts an SMF at z ∼ 0.3 very close to the observed one and within the 1σ observed scatter. If we consider the whole stellar mass range investigated, ∼[10^8.5 − 10^11.9] M_⊙, our average residuals are still below any other found in past studies and, especially in the cases of ZFOURGE and HYBRID, are particularly small (≤ 0.15 dex). The improvement with respect to the results found in PapI is tangible (see Fig. 5 of PapI), where, in the case of ZFOURGE, residuals lay within the observed scatter only in the stellar mass range ∼[10^10 − 10^11.3] M_⊙.

Figure 6. Evolution of the SMF from z ∼ 1.8 down to z ∼ 0.3 predicted by our model calibrated on our HYBRID survey (combination of ZFOURGE and UltraVISTA, as explained in the text) at z_match = 2.3. Stars and diamonds represent model and observed data, respectively, while the dashed lines represent the Ilbert et al. (2013) fit of the UltraVISTA data in the high-mass end.
Our new model can predict the evolution of the SFR−M_* relation as a function of time, since the SFR is only initialized at z_match or z_form, and the SFH of each galaxy depends on the time spent as a central or satellite, and therefore on its quenching timescales τ_c and τ_s. We show the prediction of our model in Figure 8, from z ∼ 1.8 to z ∼ 0.3. The solid lines represent the median SFR at each redshift, the dashed lines show the 16th and 84th percentiles of the distribution, and the dash-dotted lines indicate the 1σ scatter of the observed relation by Tomczak et al. (2016) that we use for comparison. What appears most relevant is the slope of the SFR−M_* relation predicted by the model, which is consistent with the observed one at every redshift up to intermediate-mass galaxies. The SFR reaches a peak and then drops in the stellar mass range of massive galaxies, and the peak moves toward higher mass as the redshift decreases. This is because galaxies grow in mass (move to the right of the plot), and a large fraction of massive galaxies are centrals (slower quenching). Another interesting feature concerns the comparison with the observed relation: our predictions are biased low with respect to the observed ones at every redshift, which implies that the model on average requires a lower SFH for each galaxy to fairly match the SMF at low redshift.
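For reference, the median track and the 16th/84th percentile envelope plotted in Figure 8 can be computed per stellar mass bin as in this short sketch; the bin edges are placeholders, not the binning actually used.

```python
import numpy as np

def sfr_mstar_relation(log_mstar, sfr, edges):
    """Median SFR and 16th/84th percentiles in stellar mass bins,
    as drawn with solid and dashed lines in Figure 8. Bins with no
    galaxies are skipped."""
    idx = np.digitize(log_mstar, edges) - 1
    stats = []
    for i in range(len(edges) - 1):
        s = sfr[idx == i]
        if s.size:
            stats.append(np.percentile(s, [16, 50, 84]))
    return np.array(stats)
```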
The features shown in Figure 8 are independent of the calibration (no dependence on the survey), and this has an important consequence. Assuming that the observed SFR−M_* relation at z ∼ 0.3 is well constrained, the model would require a higher normalization of the relation at every redshift, that is, also at z_match. This means that, in order to match the SMF at low redshift, either the quenching timescales must be somewhat shorter so as to recover the SFR−M_* relation at every redshift, or the SFRs at z_match are underestimated, or a combination of the two. Looking at the evolution of the SMF (independently of the survey), SFRs down to z ∼ 1.3 do not seem to be underestimated, since the predicted SMF lies above the observed one from the low-mass end to stellar masses around 10^10.5 M_⊙. In these redshift and stellar mass ranges, the model would require either shorter quenching timescales or lower SFRs. Below z ∼ 1.3, the evolution of the SMF is consistent with our modeling of the quenching timescales, which in turn leads to lower SFRs than observed. In PapI we showed that the combination of the SMF and the SFR−M_* relation measured by ZFOURGE leads to overpredicting, on average, the number densities at low redshift. As found by Tomczak et al. (2016), who took advantage of the same survey, there is an inconsistency between stellar mass and SFR measurements, in agreement with our results in PapI.
In the light of the results found in this study, and considering the parameter space used by the model, it is not possible to state a priori which of the two relevant measurements, stellar mass or SFR, is inconsistent with the global evolution of the SMF. However, if the observed SMF at z_match is correct, the SFR−M_* relation must have a lower normalization at every redshift investigated in order to reproduce the observed SMF at low redshift. On the other hand, if we trust the relation between SFR and stellar mass as a function of redshift, the observed evolution of the SMF has to be faster. Considering that we used three different surveys (and a mock combination of two of them) and that the above picture holds for each of them, it is plausible to support the first scenario, that is, that the observed SFRs are overestimated (see, e.g., Wilkins et al. 2008; Kang et al. 2010).
For the sake of clarity, we must also note that our modeling does not explicitly consider any prescription for mass quenching as described in Peng et al. (2010). In that work, mass quenching is intended as the quenching of star-forming galaxies around and above M* that follows a rate statistically proportional to the star formation rate (see their equation 17). Their model aims to capture the quenching given by internal feedback, such as supernova and active galactic nucleus feedback. Although the goal is the same, in this work we consider a mass-dependent quenching, where the timescale for quenching is inversely proportional to the galaxy stellar mass (faster quenching for more massive galaxies). These two definitions are conceptually different, which is likely the reason why (at least in part) we overpredict the number density of very massive galaxies, but both aim to capture the internal feedback that quenches galaxies. Moreover, we must note that, on one hand, our analytic definitions of the quenching timescales are theoretically valid, but on the other hand, they are also quite arbitrary in nature.
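Schematically, and with unspecified proportionality constants (neither prescription's coefficients appear in this section), the contrast between the two definitions can be written as

\[
\lambda_{\rm mass} \propto \mathrm{SFR} \quad \text{(Peng et al. 2010)}, \qquad
\tau_{\rm quench} \propto M_*^{-1} \quad \text{(this work)},
\]

where λ_mass is a quenching rate and τ_quench a quenching timescale: a rate proportional to the SFR preferentially quenches the most strongly star-forming systems, while a timescale inversely proportional to M_* quenches the most massive galaxies fastest.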
CONCLUSIONS
We have studied the evolution of the SMF from z ∼ 2.3 to z ∼ 0.3 as predicted by an analytic model calibrated with observed data. We set up the model to perfectly match the observed SMF at z_match, assign an SFR to galaxies by means of the SFR−M_* relation at z_match or z_form, and let the SFRs evolve according to functional forms that mainly depend on two characteristic quenching timescales. We took advantage of observed SMFs from four surveys: ZFOURGE (Tomczak et al. 2014), UltraVISTA (Ilbert et al. 2013), COSMOS (Davidzon et al. 2017), and a mock combination of ZFOURGE and UltraVISTA that we have called HYBRID. From the predictions of our model, we can conclude the following:

• The inferred evolution of the SMF is in good agreement with the observed one, and the level of accuracy of our modeling mostly depends on the survey used for the calibration. We confirm the result found in PapI: the same model calibrated with different surveys leads to different evolutions of the SMF, which are very sensitive to the shape of the observed SMF at z = z_match, and in particular to its low-mass end. Although the four observed SMFs at z = z_match look different, they evolve to the same observed SMF at z ∼ 0.3, which implies an intrinsic inconsistency in the growth of the SMF described by the same survey.
• Our new model leads to a much better agreement between the observed and inferred SMFs at z ∼ 0.3 with respect to previous results. In fact, the residuals between the two SMFs lie within the 1σ observed scatter over most of the stellar mass range investigated, and within 3σ over almost the whole range. HYBRID and ZFOURGE provide the smallest mean residuals, around 0.12−0.15 dex, which reduce to 0.09 dex if we do not consider the very first and last data points.
• The SFR−M_* relation predicted by our model is offset low with respect to the observed one (Tomczak et al. 2016) at every redshift, independently of the calibration at z = z_match. Tomczak et al. (2016) find that either the evolution of the observed SMF is too slow or the SFR measurements are overestimated. As discussed in Section 4, the latter is more plausible, since the prediction of the SFR−M_* relation does not depend on the survey used to calibrate the model.
Considering that our model reproduces fairly well about 8 Gyr of evolution of the SMF (with average residuals between 0.04 and 0.06 dex in the stellar mass range [10^9.4 − 10^11.5] M_⊙), the improvement over PapI and previous studies is remarkable. However, in order to investigate whether the relation between the quenching timescales and stellar mass (and redshift) predicted by the model is quantitatively and qualitatively correct, in a forthcoming paper we aim to apply the model to reproduce the evolution of the SMFs of star-forming and quiescent galaxies separately. We will make use of a color separation to split our sample into the two subpopulations of galaxies, and we will test our model predictions against several observed quantities, such as the fraction of quiescent galaxies as a function of stellar mass and redshift.

APPENDIX

Figure A9. Residuals between the observed SMFs and our model calibrated on them at z ∼ 0.3, obtained by quenching low-mass galaxies below a given mass cut (different for different surveys, see text). In each panel, stars (and diamonds in the top left and bottom right panels) represent the residuals, while the dashed and dash-dotted lines show the 1σ and 3σ observed scatter, respectively.
day (see, e.g., Lapi et al. 2017 and references therein). A rough way to test whether a better agreement between the observed and predicted SMFs at low redshift in the low-mass end can be found is to suppress, or instantly quench, newborn galaxies residing in such subhaloes. To understand the limits of our modeling, we ran the four flavors of the model calibrated with our surveys, adding a stellar mass cut below which galaxies that form after z = z_match are not allowed to form stars anymore and quench instantly. Figure A9 shows the residuals between the observed and predicted SMFs for all models as a function of stellar mass, at z ∼ 0.3. The cut in stellar mass has been chosen (for each model) such that the data points below log(M_*/M_⊙) = 9 in Figure A9 stay within the 2σ observed scatter. Not surprisingly, we find that the model calibrated with different surveys requires different cuts. HYBRID requires the lowest cut, log(M_cut/M_⊙) < 7.75, while the other three require log(M_cut/M_⊙) < 8.15 (ZFOURGE) and log(M_cut/M_⊙) < 8.20 (both UltraVISTA and COSMOS). Moreover, these cuts bring the mean residual (not considering the very last data point in the high-mass end) below 0.1 dex for all models and yield a much better agreement with observations, in particular for the models calibrated with UltraVISTA and COSMOS, which are the surveys that show the highest number densities below log(M_*/M_⊙) = 8.5 at z = z_match. This is also why the cut in mass is higher in these two cases. To conclude, we note that such cuts have a negligible effect on the rest of the stellar mass range.
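A minimal sketch of this mass-cut experiment follows. The galaxy record layout and the quenching bookkeeping are illustrative assumptions; only the cut values themselves are taken from the text.

```python
import numpy as np

# Mass cuts (log Msun) below which newborn galaxies are instantly
# quenched, as quoted in the text for each calibration survey.
LOG_MCUT = {"HYBRID": 7.75, "ZFOURGE": 8.15,
            "UltraVISTA": 8.20, "COSMOS": 8.20}

def apply_mass_cut(log_mstar, z_form, sfr, survey, z_match=2.3):
    """Zero the SFR of galaxies formed after z_match (z_form < z_match)
    whose stellar mass lies below the survey-dependent cut
    (hypothetical array layout)."""
    quench = (z_form < z_match) & (log_mstar < LOG_MCUT[survey])
    return np.where(quench, 0.0, sfr)
```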