A case of transcatheter prosthetic aortic valve endocarditis

ABSTRACT
Transcatheter aortic valve implantation (TAVR) constitutes an established treatment for patients with severe aortic stenosis who are inoperable or at high perioperative risk. Prosthetic valve endocarditis after TAVR occurs with an incidence of 0.3-1% per patient-year. Infective endocarditis may stem from hematogenous dissemination or contact with infected adjacent tissue. Few cases of infective endocarditis after TAVR have been reported. We present an interesting case of a 79-year-old male with a history of severe aortic stenosis, status post TAVR more than one year earlier and pulmonary vein isolation for atrial fibrillation six weeks earlier, who was found to have infective endocarditis with a vegetation on the prosthetic valve leading to multiple embolic strokes as a result of Enterococcus faecalis bacteremia. The patient was not a surgical candidate, his Society of Thoracic Surgery (STS) risk score being 18%; he was therefore managed conservatively on intravenous antibiotics. Our case had endocarditis from enterococcal bacteremia even though the patient never underwent any gastrointestinal or genitourinary procedure.

Introduction
Transcatheter aortic valve implantation (TAVR) has become an alternative to surgical aortic valve replacement (SAVR) for severe aortic stenosis in high-risk patients or those with contraindications to surgery [1][2][3][4]. Following the success of the first TAVR in 2002, use of the procedure has been on the rise [1,2]. The procedure fundamentally involves a self-expandable or balloon-expandable stent with an attached pericardial valve inserted into the native aortic annulus using a transarterial or transapical approach, thereby compressing the cusps of the native aortic valve against the aortic root wall [4]. TAVR patients are typically elderly, with multiple comorbidities and high surgical risk [1,2,4]. Prosthetic valve endocarditis (PVE) following TAVR is a rare complication, accounting for 1% to 3% of cases [1,5,6]. Guidelines for PVE diagnosis and management are well established; however, no conventional parameters have been set for infective endocarditis following TAVR [1]. Until further evidence is available, diagnosis and treatment are prescribed on a case-by-case basis using clinical judgment [1]. PVE is concerning due to its low survival rates. Conservative management is often not adequate (with a one-year mortality rate of 64% to 66%), requiring surgical valve replacement even in surgically unfit candidates [3,5]. The poor prognosis in these patients makes prevention crucial to avoid detrimental events or avoidable mortality. We hereby report a case of enterococcal endocarditis that occurred one year after TAVR in a 79-year-old male patient.

Case presentation
A 79-year-old male presented to the hospital with complaints of altered mental status (AMS), fatigue, loss of appetite, night sweats, fever, and dizzy spells. His history included severe aortic stenosis treated with TAVR (26 mm bovine Edwards Sapien valve) more than one year earlier, pulmonary vein isolation with ablation for atrial fibrillation six weeks earlier, coronary artery disease, chronic systolic heart failure with an ejection fraction of 30%, and type 2 diabetes mellitus. The patient had been having symptoms of fatigue, night sweats, and loss of appetite 2-4 weeks after the pulmonary vein isolation procedure, and at that time his amiodarone had been discontinued.
The following week, his colchicine was discontinued as his symptoms persisted. He presented to the emergency department due to the progression of symptoms and altered mental status. On examination, the patient was afebrile and had AMS; on cardiac exam, S1, S2, and S3 were audible with a systolic murmur, but no rubs were appreciated. The patient had mild elevation of the jugular venous pulse, trace edema of the lower extremities, and crackles at the lung bases. Further evaluation revealed leukocytosis with a white blood cell count of 14.1 × 10³/mm³, and chest x-ray showed bilateral infiltrates; the patient was therefore admitted with a provisional diagnosis of severe sepsis with healthcare-associated pneumonia. The patient underwent magnetic resonance imaging to evaluate the AMS, which identified multiple embolic strokes, and blood cultures grew Enterococcus faecalis. A trans-esophageal echocardiogram (TEE) demonstrated a large, echogenic, mobile, irregular 14 mm x 7 mm vegetation on the noncoronary cusp (NCC) on the ventricular side of the prosthetic valve (Figures 1-3). TEE did not identify any aortic regurgitation; the aorta was of normal size, the ejection fraction was 30%, the mean aortic gradient was 22 mm Hg, the peak aortic gradient was 35 mm Hg, the mean velocity through the aortic valve was 2.2 m/s, and the left ventricular end-diastolic diameter was 6.6 cm. The patient was started on intravenous (IV) hydration and antibiotics. He was evaluated by a cardiothoracic surgeon and deemed not to be a surgical candidate, especially with a Society of Thoracic Surgery (STS) score of 18%. He was treated with IV vancomycin, ceftriaxone, and ampicillin for six weeks, followed by lifelong suppressive therapy with trimethoprim/sulfamethoxazole. The etiology of the patient's endocarditis is uncertain, as he never underwent any gastrointestinal or genitourinary procedure.

Discussion
TAVR is a procedure used for patients with significant comorbidities and severe symptomatic stenosis [1]. Owing to the novelty of the procedure, the sequelae of TAVR are still being learned [3,6]. Pabilona et al. highlight the significant events occurring as early sequelae of TAVR, including vascular complications, stroke, renal failure, paravalvular leak with aortic regurgitation, and atrioventricular block [3]. Little literature is available identifying the late sequelae [3]. A few emerging cases of infective endocarditis after TAVR have been identified as the procedure is used more frequently; these reported cases were treated with antibiotics according to the cause of the PVE [6]. PVE has been recorded to have a significant mortality rate of 20% to 40%, with no improvement in survival over the past three decades [6]. The mortality rate in TAVR patients with PVE is congruent with recent data at 34% [6]. The literature also points out that this complication occurs more commonly in males and in patients with high-risk profiles [6]. While early PVE is hypothesized to arise during the implantation procedure, contamination with non-classical pathogens like enterococci in TAVR patients is suggestive of a different infective source [6]. Moreover, about 15% of patients suffered from infectious complications postoperatively, serving as a possible infective source for PVE [6]. Additionally, during transcatheter valve preparation and loading, some leaflet damage can arise from compressive handling, further favoring PVE [6].
Due to the discordance between the bulky, calcified native aortic valve and the implanted prosthetic valve, some degree of paravalvular leak, and hence regurgitation, is frequently seen after TAVR [4]. These leaks serve as breeding nests for infection, which is further exacerbated by predisposing comorbidities and older age, all favoring infective endocarditis in patients after TAVR [4]. Additionally, early PVE typically occurs at the junction of the sewing ring and the annulus, giving rise to valve dehiscence and intensifying the paravalvular leak [4]. Studies report that PVE following TAVR has a wide-ranging clinical picture, spectrum of pathogens involved, and management required [3]. The interval between TAVR and hospital admission for PVE was noted to vary between around 2 weeks and 23 months [3]. The most common of the isolated organisms was Enterococcus faecalis, followed by S. viridans, coagulase-negative staphylococci, Corynebacterium, Pseudomonas, Moraxella, Candida, methicillin-resistant Staphylococcus aureus, and Escherichia coli in decreasing frequency [3,6]. According to a study by Puls et al., late diagnosis has led to severe morbidities, including cerebral embolization, acute renal failure, and long hospital admissions [4]. Echocardiography has been used in TAVR patients to observe and monitor complications like abscesses (47%), fistulae (9%), or involvement of other valves (22%), in comparison with patients with native or surgical prosthetic valves [3,6]. Autopsy series and surgical explantation of infected transcatheter valves have aided in identifying predisposing structural factors, such as significant inflammation and infection of the skirt and leaflets with spread to and perforation of adjacent structures [6]. Core to the diagnosis is an echocardiogram with positive findings (from the Duke criteria) of vegetation, abscess, new partial dehiscence of the prosthetic valve, or new valvular regurgitation [4]. Identifying small vegetations on transesophageal echocardiography can be difficult because of the shadowing and reflective properties of the prosthetic valve [4]. Enterococci species are highly resistant to antibiotics, and complete elimination may require an extended six-week course of a synergistic bactericidal combination [6]. In addition, these microbes can be tolerant to many drugs, such as aminoglycosides, beta-lactams, and vancomycin [6]. This trend of high antibiotic resistance leading to treatment failure is alarming, since conservative medical management is currently the most commonly used strategy for treating PVE following TAVR. Enterococcal PVE has also been shown to be complicated by periprosthetic dehiscence, annular abscesses, or fistulas [1]. In cases where treatment with antibiotics fails, immediate surgical intervention is required [1]. Surgery is the treatment of choice for patients with PVE unless they are unfit [7]. The patients who benefit most from surgery in terms of prognosis and overall survival are those with additional complications stemming from PVE, such as heart failure, valvular dysfunction, valvular regurgitation or obstruction, valve dehiscence, and annular abscess [7]. An absolute indication for early surgery in PVE is infection with S. aureus, even if uncomplicated, in order to prevent cerebral complications [7]. PVE caused by other microbes can be managed conservatively with antibiotics if the micro-organism is sensitive to antibiotics and there is no evidence of cardiac complications [7].
However, cardiac surgeons should be notified early in the case, and surgery should only be postponed if adequate treatment has been achieved [7]. Surgery is also recommended in hemodynamically unstable patients and those with recurrent infection, bacteremia, or emboli [7]. To summarize, the echocardiographic criteria used to diagnose infective endocarditis are not well suited for the diagnosis of PVE in post-TAVR patients [4]. Studies involving larger populations with regular follow-up are required to understand the prevalence, the pathogens involved, and the treatment regimens effective against this complication.

Conclusion
It is crucial to be watchful for PVE in patients after TAVR. Currently, prophylactic antibiotics before TAVR, and prior to any dental or invasive procedure after TAVR, are adopted on a case-by-case basis per hospital protocol, naturally creating variability among already high-risk, elderly patients with multiple comorbidities. We suggest early commencement of organism-sensitive antibiotics in symptomatic TAVR patients with positive blood cultures and the absence of an alternative source of infection, even with inconclusive findings on echocardiography.

Disclosure statement
The authors report no financial relationships or conflicts of interest regarding the content herein.

Funding
No funding was received for any aspect of this case.
Topological Regularization via Persistence-Sensitive Optimization

Optimization, a key tool in machine learning and statistics, relies on regularization to reduce overfitting. Traditional regularization methods control a norm of the solution to ensure its smoothness. Recently, topological methods have emerged as a way to provide a more precise and expressive control over the solution, relying on persistent homology to quantify and reduce its roughness. All such existing techniques back-propagate gradients through the persistence diagram, which is a summary of the topological features of a function. Their downside is that they provide information only at the critical points of the function. We propose a method that instead builds on persistence-sensitive simplification and translates the required changes to the persistence diagram into changes on large subsets of the domain, including both critical and regular points. This approach enables a faster and more precise topological regularization, the benefits of which we illustrate with experimental evidence.

Introduction
Regularization is key to many practical optimization techniques. It allows the user to add a prior about the expected solution (e.g., that it needs to be smooth or sparse) and optimize it together with the main objective function. Classical regularization techniques [1], such as ℓ1- and ℓ2-norm regularization, have been studied in statistics and signal processing since at least the 1970s. These techniques are especially important in machine learning, where problems are often ill-posed and regularization helps prevent overfitting. Accordingly, various regularization techniques are not only used in machine learning research [2,3], but are also incorporated into standard optimization software and routinely used in applications.

Recently, several authors have begun to explore the use of topological methods to regularize the objective function. All of them use persistent homology to measure either the shape of the data set or the topological complexity of the learned function. For instance, Chen et al. [4] use persistence to describe the complexity of the decision boundary in a classifier and add terms to the loss to keep this boundary topologically simple. Brüel-Gabrielsson et al. [5] use persistence as a descriptor of the topology of the data and introduce a family of losses to control the shape of the data once it passes through a neural network.

All the methods that incorporate persistence into the loss function [4,5,6] rely on the same observation. Persistent homology describes data via a diagram, a collection of points {(b_i, d_i)} in the plane, that encodes the topological features of the data: components of the decision boundary, "wrinkles" in the learned function, cycles in the point set once it passes through the neural network. Each point represents the birth b_i and death d_i of a topological feature. Each coordinate depends on the value of the function on a set of points. In the simplest case, (b_i, d_i) = (f(x), f(y)) for some x, y in the input, where f is the learned function. In more sophisticated cases, each point in the persistence diagram is generated by a handful of input points (e.g., four [5]). Accordingly, if a loss L prescribes moving a point in the persistence diagram via a gradient (∂L/∂b_i, ∂L/∂d_i), one can back-propagate it to update the model parameters.
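As a concrete illustration of this observation, the following minimal sketch back-propagates a diagram-based penalty to the function values. It assumes PyTorch and a precomputed list of (birth-vertex, death-vertex) index pairs, which a persistence computation would supply; it is our sketch, not the authors' implementation.

```python
import torch

def diagram_loss(f_vals, pairs, eps):
    """Penalize low-persistence features by summing (d_i - b_i)^2
    over diagram points with persistence below eps.

    f_vals: 1-D tensor of function values on the graph vertices
            (requires_grad=True, or produced by a network).
    pairs:  (birth_vertex, death_vertex) index pairs, one per point
            of the 0-dimensional diagram; because the pairing depends
            on f, it must be recomputed after every optimizer step.
    """
    loss = f_vals.new_zeros(())
    for b, d in pairs:
        pers = f_vals[d] - f_vals[b]
        if pers.item() < eps:          # only "noisy" points are penalized
            loss = loss + pers ** 2
    return loss

# Gradients reach only the critical vertices appearing in `pairs`;
# all other (regular) vertices receive zero gradient.
f = torch.randn(100, requires_grad=True)
pairs = [(3, 17), (42, 58)]            # hypothetical diagram pairs
diagram_loss(f, pairs, eps=0.5).backward()
```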
Although persistent homology describes a family of topological features of different dimensions (connected components, loops, voids), most practical examples have focused on 0-dimensional features (connected components generated by the extrema of the input function). In this case, a natural loss is one that penalizes and tries to remove low-persistence features, which are interpreted as noise, e.g., L = Σ (d_i − b_i)², with the sum taken over the points whose persistence d_i − b_i falls below a threshold ε.

Persistence-sensitive simplification [7,8,9] offers a direct solution to this problem. It prescribes how to modify a given input function f to find a function g that is ε-close to f, but without the noisy features. Given such a g, which by construction minimizes the diagram loss L above, one can use ‖f − g‖² as a term in the loss. In the context of learning, this approach offers a major advantage: instead of supplying gradients only on the critical points of f, we also get gradients on the regular points of f whose values must be changed to topologically simplify the function; see Figure 1.

Our contributions are:
• a method to control the topological complexity of a function, represented by a neural network, by incorporating persistence-sensitive simplification into the training;
• comparison of the training results after backpropagating gradients through the diagram vs. using persistence-sensitive optimization;
• experiments with data that illustrate the utility of controlling the topology of the learned function.

We note that topological methods have found a much broader use in machine learning than regularization. An important line of work involves developing techniques to incorporate topological features detected in data into machine learning algorithms [10,11,12]. Although there is some overlap in methods between the two research directions (notably propagating loss through the persistence diagram), our work is focused on regularization.

Background
We recall the relevant background in topological data analysis [13], focusing specifically on 0-dimensional persistent homology, which we introduce using an auxiliary computational construction, merge trees.

Merge trees. Let f : X → R be a function on a topological space X. A merge tree tracks the evolution of connected components in the sub-level sets f⁻¹(−∞, a] of the function as we vary the threshold a. Formally, we identify two points x, y of X if f(x) = f(y) = a and x and y belong to the same connected component of the sub-level set f⁻¹(−∞, a]. The quotient of X by this equivalence relation is called a merge tree of f.

Throughout the paper we use graphs to approximate continuous spaces, so we briefly dissect the above definition for functions on graphs. Let f : G → R be a function on a graph G = (V, E), defined on the vertices and linearly interpolated on the edges. For simplicity, we assume that all the values of f on the vertices are distinct and index the vertices so that f(v_1) < f(v_2) < · · · < f(v_n). The parent of a vertex v_i in the merge tree is the lowest vertex v_j such that v_i and v_j belong to the same connected component C of the sub-level set f⁻¹(−∞, f(v_j)] and there does not exist k such that i < k < j and v_k ∈ C. A merge tree T is not necessarily a tree (it is a forest, with a tree for every connected component of G), but the distinction is minor for this paper.

T is naturally decomposed into branches; see Figure 1(c). Each branch pairs the birth b of a component (the value of the minimum that creates it) with its death d (the value at which it merges into an older component), and these pairs form the points of the 0-dimensional persistence diagram. Points closer to the diagonal represent shorter branches, and we interpret them as noise.

Although we have defined everything in terms of the sub-level sets, the definition for super-level sets, f⁻¹[a, ∞), is symmetric, with maxima replacing the minima. We use both constructions throughout the paper.
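The standard way to compute these 0-dimensional pairs is a union-find sweep over the graph. The following is a minimal sketch under the assumptions above (distinct vertex values; an edge enters the filtration at its larger endpoint value), not the authors' code.

```python
def zero_dim_persistence(values, edges):
    """0-dimensional sub-level-set persistence of a function on a graph.

    values: function value per vertex (assumed distinct).
    edges:  list of (u, v) vertex-index pairs.
    Returns (birth, death) pairs for the finite diagram points; each
    connected component also has one essential class that never dies.
    """
    parent = list(range(len(values)))

    def find(v):                        # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    diagram = []
    # An edge enters the sub-level filtration at its larger endpoint value.
    for u, v in sorted(edges, key=lambda e: max(values[e[0]], values[e[1]])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        if values[ru] > values[rv]:     # keep the older (lower-birth) root
            ru, rv = rv, ru
        # elder rule: the younger component dies at the edge value
        diagram.append((values[rv], max(values[u], values[v])))
        parent[rv] = ru
    return diagram
```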
If graph G has n vertices and m edges, then a merge tree on G can be computed in O(n log n + mα(m)) time, where α is the inverse Ackermann function. It follows that a 0-dimensional persistence diagram can be computed in the same time.

To visualize the topological changes in the model during optimization, we stack persistence diagrams next to each other. The resulting vineyard of a family of functions f_i is a multiset of points (i, d − b), one for every point (b, d) in the diagram of f_i. In other words, over each i (for example, a training epoch) we plot all persistences of the corresponding diagram.

Simplification. An important property of persistence is stability: a small perturbation of function f causes a small perturbation of the persistence diagram Dgm(f). The formal statement is the celebrated Stability Theorem:

d_B(Dgm(f), Dgm(g)) ≤ ‖f − g‖_∞,

where f and g are two real-valued functions on the same domain and d_B denotes the bottleneck distance. This theorem is one of the justifications for treating points close to the diagonal as topological noise. This view suggests getting rid of the topological noise: an ε-simplification of f is a function g with ‖f − g‖_∞ ≤ ε whose diagram contains only the points of Dgm(f) with persistence exceeding ε. In other words, g is ε-close to f, but its persistence diagram has only those points whose persistence exceeds ε. In the case of 0-dimensional persistence, an ε-simplification always exists and can be computed in the same time as a merge tree [7,8,9].

Method
We start with the standard supervised learning problem. Given training data x_i with labels y_i, we want to learn a model f_θ, with parameters θ, that approximates y_i given x_i. Although this framework applies more generally, throughout the paper we focus on the case where f_θ is a neural network.

Suppose we are solving a regression problem. In this case, the input labels are scalars, y_i ∈ R, and our network maps from some (typically) Euclidean space into the reals, f_θ : R^d → R. The learning process is usually a form of gradient descent on the network parameters with respect to a user-chosen loss, for example, the mean-squared error (MSE), L(θ) = (1/n) Σ_i (f_θ(x_i) − y_i)².

Ideally, we would like to topologically simplify the model f_θ either on its entire domain, or at least on the "data manifold," the subset of the domain that contains all possible data. Unfortunately, there are no algorithms to solve this problem (topological methods require a combinatorial representation of the domain), so we resort to a standard approximation.

We take the domain of the network f_θ to be the k-nearest neighbors graph on the training set X: each training sample is a vertex, and two vertices are connected if and only if one of them is among the k-nearest neighbors of the other. The k-NN graph G approximates the data manifold. We can increase the quality of this approximation by sampling additional points in the neighborhood of our input. In the experiments in Section 6, we draw n additional points from a normal distribution centered on each training data point x ∈ X, which results in a graph with (n + 1) · |X| vertices. (Although we don't know the true label on the extra points, we don't need it for the topological simplification.) Both because computing a k-NN graph is expensive for high-dimensional data and because it helps to control noise, in some experiments we build the k-NN graph on a lower-dimensional projection of X using PCA.

We use merge trees to compute an ε-simplification g of our model f_θ. For every vertex v, we find its first ancestor u that lies on a branch with persistence at least ε and assign g(v) the value of f_θ at u. The effect of this operation on the merge tree is that all the branches with persistence less than ε are removed; see Figure 1.
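The ancestor walk just described can be sketched as follows, assuming the merge tree is given as a parent array together with the persistence of the branch containing each node (hypothetical representations; the papers cited above give the full algorithm).

```python
def epsilon_simplify(values, tree_parent, branch_persistence, eps):
    """Flatten every merge-tree branch with persistence below eps.

    values:             function value at each merge-tree node.
    tree_parent:        parent of each node (roots point to themselves).
    branch_persistence: persistence of the branch each node lies on.
    Returns the values of the simplified function g.
    """
    g = list(values)
    for v in range(len(values)):
        u = v
        # climb until we reach a branch that survives the simplification
        while branch_persistence[u] < eps and tree_parent[u] != u:
            u = tree_parent[u]
        g[v] = values[u]
    return g
```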
Applying simplification. Given an ε-simplification g of f_θ, we could add a term λ · ‖f_θ − g‖² to the loss and use a single optimizer. Instead, we opted for a different approach, alternating between the standard training and topological phases, with a separate optimizer for each phase. A key advantage of this separation is that it keeps two histories of the gradients, one for each phase, so that the topological loss does not influence the momentum in the standard training.

An important decision is when to switch to the topological phase. We use a heuristic that depends on the validation loss. In each epoch, we first iterate over all batches and perform standard training using the first optimizer. Then, if the validation loss increases, compared to the previous epoch, by more than some threshold (a hyperparameter), we compute the ε-simplification g and take 5 to 10 steps with the second optimizer to minimize ‖f_θ − g‖². We use the norms of the gradients of the ordinary training loss and of the topological loss to set a learning rate for the latter that ensures that we update the model parameters θ by comparable amounts in both phases.

Choice of ε. A key decision in implementing our method is how to choose ε, which decides which points to keep and which to remove in the persistence diagram. Earlier works [4,5] prescribe a fixed number of points to keep in a certain region of the persistence diagram. For instance, some of the losses in [5] penalize all but j of the most persistent points. We can optimize such a loss by setting ε = (p_j + p_{j+1})/2, where p_i is the persistence of each point, sorted in descending order.

Another alternative, used in topological data analysis to automatically distinguish between persistent and noisy points, is the largest-gap heuristic. To apply it, we find the index j for which the difference p_j − p_{j+1} is maximized and place ε in that gap.

Finally, the heuristic that we found most effective and use for all experiments in Section 6 is to use the validation loss as our ε. The validation loss tells us how far we are from a function that gives perfect answers on the validation set. Using it as ε, we find the topologically simplest function g that is within the same distance from our model f_θ.

Classification. For regression, the network itself serves as a real-valued function amenable to topological analysis. Classification requires a little more work. We assume that the data has m classes and the network has m output channels, f_θ : R^d → R^m, with the predicted class chosen as p = arg max_i f_θ(x)[i]. We define the confidence function, φ : R^d → R, to measure how much higher the value in the predicted channel is compared to the second-highest candidate:

φ(x) = f_θ(x)[p] − max_{i ≠ p} f_θ(x)[i].

When φ(x) is close to 0, the network is not confident whether to classify x as the top class p or the second-best guess. The zero set φ⁻¹(0) is the decision boundary, by definition. Outliers of one class scattered among the points of another introduce spurious extrema in the confidence function. By driving optimization towards the simplified version of φ, we can reduce overfitting.
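A small NumPy sketch of the quantities defined in this section: the confidence margin φ, the two ε heuristics described above, and the decision-boundary edge pruning applied to the k-NN graph in the step described in the paragraph that follows. The function names are ours.

```python
import numpy as np

def confidence(logits):
    """phi(x): margin between the top and second-highest output channel.
    logits: array of shape (n_points, n_classes) of network outputs."""
    top2 = np.sort(logits, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]

def eps_keep_top_j(persistences, j):
    """Midpoint between the j-th and (j+1)-th largest persistences."""
    p = np.sort(persistences)[::-1]
    return (p[j - 1] + p[j]) / 2

def eps_largest_gap(persistences):
    """Place eps inside the largest gap between sorted persistences."""
    p = np.sort(persistences)[::-1]
    j = int(np.argmax(p[:-1] - p[1:]))
    return (p[j] + p[j + 1]) / 2

def prune_decision_boundary(edges, predicted_class):
    """Drop k-NN edges whose endpoints get different predicted classes,
    so no remaining edge crosses the decision boundary (see below)."""
    return [(u, v) for u, v in edges
            if predicted_class[u] == predicted_class[v]]
```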
Because, generically, φ(x) is never zero on an input point x ∈ X, we need an extra step to capture the topology of the decision boundary. If two vertices u and v, connected by an edge in the k-NN graph, are assigned two different classes by the network, then the decision boundary passes somewhere between them. In this case, we remove the edge (u, v) from the graph. This pruning results in multiple connected components, at least one per class. We compute the merge tree (a forest in this case) of the confidence function on the pruned graph, with respect to the super-level sets, i.e., tracking the persistence of the maxima. Because the confidence function is never negative, we restrict the infinite branches in the merge tree to die at 0. This obviates special treatment of separate connected components in the graph: if one of them produces a low-persistence merge tree, we simplify it by setting the values of all of its vertices to 0.

Comparison with Diagram Simplification
Earlier work on applying topological regularization to neural networks [4,5] relied on backpropagation through persistence diagrams. For piecewise-linear functions on a graph, each point in the 0-dimensional persistence diagram corresponds to a pair of vertices, (b_i, d_i) = (f(x), f(y)). If one adds a regularization term of the form Σ (d_i − b_i)², where the sum is taken over all points (b_i, d_i) with persistence less than ε, then one can back-propagate the gradient to the function values and then to the model parameters, i.e., the weights of the network. We call this loss the diagram loss, and the loss proposed in the previous section, the PSO loss.

The first disadvantage of the diagram loss is that only critical points generate pairs in the persistence diagram. Accordingly, most input points are not used and receive no information during the backpropagation. To illustrate this, we take f : R² → R to be the sum of 4 Gaussians and evaluate f on the uniform grid over the unit square [0, 1] × [0, 1] with 10,000 vertices. Figure 2a illustrates the plot of f. We pick ε so that the two lower-persistence points in the diagram of f (corresponding to the two Gaussians with lower peaks) are simplified, and take 50 steps of gradient descent using the PSO loss and the diagram loss directly on the values of f at each vertex. The simplified functions appear in Figures 2c and 2e, respectively.

Figures 2b and 2d show the vineyards of the two optimization processes. In both vineyards, we show the original persistence values in black, the desired values in red, and the values at each step of the optimization in green. With the PSO loss, this is an unconstrained convex problem, so the optimizer quickly eliminates the low-persistence points.

Figure 3 shows the effect of the two losses on a neural network. We train a fully connected network with 5 layers for 100 epochs and then perform 30 steps of topological optimization. The key difference from the previous example is that we do not have direct control over the function values, but only over the weights of the network. The diagram loss provides information only for the critical points of the function, and the optimizer ends up minimizing this loss by pushing the whole function towards a constant: in the vineyard on the right-hand side, all points, not just the points below ε, are moving to 0. Since the PSO loss penalizes changes to the high-persistence parts of the function, its optimization does not suffer from the same problem, as the vineyard on the left-hand side shows.
It is not clear how to fix this overzealousness of the diagram loss. The main difficulty is that the critical vertices and their pairing change after each gradient descent step. A naive fix would be to add a term that pushes the high-persistence points towards infinity, rewarding an increase in their persistence. We have tried this approach, but it did not perform well. Depending on the weight λ, either the additional term had no influence at all, and the function was squashed to a constant; or it dominated, and the function exploded numerically.

A more principled solution would be to compute a matching between the persistence diagram after each step of the topological optimization and the target simplified diagram. The matching would translate into a loss that would simplify the diagram, while trying to preserve the high-persistence points. However, this approach has many drawbacks. The computation of the matching, even using fast algorithms [14], is prohibitively expensive and would make this procedure completely impractical. The method itself, by construction, would only preserve the structure of the persistence diagram, not its values at individual vertices. Finally, changing the diagram loss function at each step of the gradient descent may have unexpected effects on the momentum.

Illustrative Example
To illustrate how topological regularization using the PSO loss can reduce overfitting, we consider a simple three-class dataset, shown in Figure 4a. It consists of points sampled from three Gaussians, 1,000 points from each, that represent three distinct classes. We randomly shuffle 20% of the labels to introduce class noise. We train a fully-connected feedforward neural network with 5 hidden layers of 100 nodes each for 500 epochs.

Figure 4b illustrates the training and validation losses, and Figure 4c shows the persistence vineyard of the confidence function for epochs 350 to 500. At the beginning of this range, the network has already overfit the labels. The growing validation loss confirms the overfitting, which is also evident in the vineyard, where the second and third highest persistence points, which represent the true classes in the data, are becoming indistinguishable from the noisy points.

Starting with epoch 450, we apply ten steps of topological simplification after every training epoch. Because we expect each of the three classes to be a single cluster, we set ε to keep the three highest points in the persistence diagram. This defines a PSO loss that encourages removing maxima of the confidence function that do not correspond to the 3 predominant class clusters.

As Figure 4b illustrates, after turning on simplification at epoch 450, the validation loss decreases by over 20%. Figure 4c demonstrates the abundance of high-persistence features prior to epoch 450. Most of these correspond to mountains in the confidence function around noisy, mislabeled points. Turning on simplification at epoch 450 reduces the persistence of these peaks, which drives the network to match the class labels of the dominant class around the outliers.

This toy example demonstrates how PSO simplification identifies regions of overfitting due to class noise and reduces the confidence function near these noisily labeled points, lowering the validation loss and increasing the accuracy of the model after overfitting has occurred.
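The toy dataset described above can be generated in a few lines. The blob centers and unit covariance below are our assumptions, since the text does not specify them; only the sample counts and the 20% label noise come from the paper.

```python
import numpy as np

def three_gaussians(n_per_class=1000, noise_frac=0.2, seed=0):
    """Three Gaussian blobs with a fraction of labels reassigned,
    mirroring the illustrative example. Centers and unit covariance
    are assumptions; the paper does not state them.
    """
    rng = np.random.default_rng(seed)
    centers = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.5]])
    X = np.vstack([rng.normal(c, 1.0, size=(n_per_class, 2))
                   for c in centers])
    y = np.repeat(np.arange(3), n_per_class)
    # introduce class noise: reassign 20% of the labels at random
    idx = rng.choice(len(y), size=int(noise_frac * len(y)), replace=False)
    y[idx] = rng.integers(0, 3, size=len(idx))
    return X, y
```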
Experiments
We study the performance of persistence-sensitive optimization on six regression problems and seven classification problems from the UCI repository [15]. To represent a variety of problem settings, the selected datasets vary in the number of features, sample size, and number of classes. We standardize the features by subtracting the mean and dividing by the standard deviation. For both regression and classification, we use a dense neural network with five hidden layers and 100 hidden nodes per layer. We use the Adam optimizer and a learning rate of 0.001 across all experiments, including regular training and training with topological simplification.

We compare the performance of networks trained (1) without regularization, (2) with ℓ2 regularization, and (3) with topological regularization. For all experiments, training with and without regularization was run for the same number of total epochs. For the ℓ2 regularization, the square of the weights of the network is added to the loss, scaled by a factor of λ, which we choose by sweeping through a logarithmically spaced grid over [10⁻⁵, 10¹]. We report the best performance across all λs for each dataset. For each dataset, we run all the models at least five times with different preset random seeds and average over all the trials.

As described in Section 3, we set a number of hyperparameters during the topological simplification:
• topological simplification is applied when the validation loss increases by more than t;
• k determines the number of neighbors in the k-NN graph used to approximate the domain of the function;
• n is the number of additional points we sample, for each input point, before building the k-NN graph;
• the points are drawn from a Gaussian with variance σ, ranging from 0.001 to 0.

We evaluate the quality of the prediction using the root-mean-square deviation, RMSD = √(Σ_i (ŷ_i − y_i)² / n).

Table 1 presents the results of our regression experiments. Overall, topological simplification reduces RMSD across all the datasets by an average of 6.9%. Sampling each point multiple times with a small amount of perturbation improves performance. By applying simplification when the validation loss increases by more than the threshold t, we reduce overfitting and the resulting error. We also see that across the λ hyperparameters swept for ℓ2 regularization, the performance is always worse than with topological simplification. We note that our method is fast enough to be used on very large datasets (we give two examples with 40,000+ points, but that's by no means the limit); previous approaches to topological regularization (using a form of diagram loss) [4] were limited to much smaller datasets (hundreds to a thousand points).

Classification. We also evaluate our method on seven classification datasets. Each one has from two to 26 classes. Similar to the regression datasets, each has hundreds (Wisconsin cancer, Vertebral, SPECT) to thousands (Wine, Semeion, Wireless) to tens of thousands (Letter recognition) of data points. We use the same 56%-19%-25% training-validation-test split. When topological simplification is applied, we set ε to the cross-entropy loss and simplify the confidence function φ, described in Section 3. We evaluate the quality of our predictions by computing the cross-entropy (X-E) loss and accuracy.
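The domain approximation that the hyperparameters k, n, σ, and the optional PCA projection control can be sketched with scikit-learn as follows. This is our illustration of the construction from Section 3, not the authors' code; we use σ as the standard deviation of the perturbation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import kneighbors_graph

def domain_graph(X, k=5, n_extra=2, sigma=0.01, pca_dims=None, seed=0):
    """Approximate the data manifold by a k-NN graph (Section 3).

    Each training point is augmented with n_extra Gaussian samples
    (their labels are not needed for simplification), giving
    (n_extra + 1) * |X| vertices. If pca_dims is set, neighbors are
    found in a PCA projection, which is cheaper and reduces noise.
    """
    rng = np.random.default_rng(seed)
    extra = X[:, None, :] + sigma * rng.standard_normal(
        (len(X), n_extra, X.shape[1]))
    pts = np.concatenate([X, extra.reshape(-1, X.shape[1])], axis=0)
    proj = PCA(n_components=pca_dims).fit_transform(pts) if pca_dims else pts
    g = kneighbors_graph(proj, n_neighbors=k, mode="connectivity")
    # symmetrize: connect u, v if either is among the other's k nearest
    return g.maximum(g.T), pts
```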
Conclusion
We presented a topological regularization method that uses persistent homology, merge trees, and persistence-sensitive simplification to minimize the number of noisy extrema in a machine learning model. Unlike previous such methods, our approach is faster, requiring the topological descriptor to be computed only once per simplification phase, as well as more robust and predictable in its effects on the model. The key distinction of the method is its ability to prescribe gradients on the entire domain, approximated as a k-NN graph, rather than only on the critical points. We illustrated the benefits of its use in experiments with a number of well-known data sets.

Our work has a larger implication for the use of topological methods in machine learning. The realization that one can back-propagate gradients through a persistence diagram has generated considerable interest in the community, with a number of recent works [4,5,6,10,11] exploring this idea. Our results suggest that it may be better not to treat persistence as a black box. Rather, it is a rich language that allows one to precisely express topological constraints and priors to add to a problem. The actual enforcement of these constraints can be accomplished via different methods, back-propagation through the persistence diagram being but one of them.

Building on prior work in computational topology, we describe only how to simplify extrema, i.e., 0-dimensional persistence diagrams. A key research direction is how to adapt these ideas to higher-dimensional persistent homology. It is undoubtedly useful to incorporate higher-dimensional topological constraints, such as loops or voids in the data, into optimization. Doing so efficiently may require imposing constraints not only on the points in the persistence diagrams, but on the entire representative cycles implied by those points.

Figure 1: (a) Function on a graph, with gradients on critical points prescribed by the diagram loss. (b) Persistence diagram of this function. Points closer to the diagonal correspond to smaller fluctuations in the function, and we interpret them as topological noise. ε indicates the level of desired simplification that generates the gradients in (a) and (d). (c) Merge tree of the function, with branches highlighted in different colors. The branches translate into the points in the persistence diagram of the matching color. (d) Gradients prescribed by the persistence-sensitive optimization (PSO loss). The gradients are present both on critical and regular points.

Figure 2: Optimization of the values. (a) Original function. (b) Vineyard of simplification with PSO loss. (c) Function simplified with PSO loss. (d) Vineyard of simplification with diagram loss. (e) Function simplified with diagram loss.

Figure 4: (a) Input data: 1,000 points sampled from each of the three Gaussians, representing three distinct classes, with 20% of the labels randomly shuffled. (b) Training and validation loss during the training of a neural network, restricted to the later epochs, where the network overfits the data. Simplification is applied after every epoch, following epoch 450, marked with a dashed line. (c) Vineyard of the confidence function during training; the start of the simplification is marked with a dashed line. The three persistent points, representing the three classes in the data, become prominent after the simplification.
Figure 5: (a) Training and validation loss curves for an experiment on the wine regression dataset. Performance is best at epoch 44, and simplification is applied only once, after epoch 43. (b) Vineyard over all epochs.

Table 1: RMSD results on regression datasets comparing no regularization, ℓ2 regularization of the weights, and topological simplification, averaged over multiple trials. The best model for each dataset is in bold. As topological simplification always results in a performance improvement, the percentage of improvement (decrease in RMSD) from None to PSO is also shown (∆). The last four columns show the hyperparameters for the best model used during the experiments. We always set ε to the validation loss.

Table 2: Cross-entropy loss and accuracy results on classification datasets comparing no regularization, ℓ2 regularization of the weights, and topological simplification, averaged over multiple trials. The best model is in bold. As the improvement from topological simplification is always greater than or equal to training the model without regularization, the percentage of improvement (decrease in the case of X-E loss, increase in the case of accuracy) from None to PSO is also shown (∆). The last four columns show the hyperparameters for the best model, with the lowest X-E loss.
Implementation of Water Safety Plans (WSPs): A Case Study in the Coastal Area in Semarang City, Indonesia

An area of 508.28 hectares in North Semarang is flooded by tidal inundation, including Bandarharjo village, which could affect water quality in the area. People in Bandarharjo rely on deep groundwater as their source of safe water, without any disinfection process. More than 90% of water samples in Bandarharjo village had poor bacteriological quality. The aim of the research was to describe the implementation of the Water Safety Plans (WSPs) program in Bandarharjo village. This was a descriptive study whose implementation steps were adopted from the guidelines and tools of the World Health Organization. The steps consist of introducing the WSPs program, team building, training the team, examination of water safety before risk assessment, risk assessment, minor repair I, examination of water safety risk, and minor repair II (after monitoring). Data were analyzed using descriptive methods. The WSPs program was introduced, a WSPs team was formed, and training of the team was conducted. The team was able to conduct risk assessments, plan the activities, examine water quality, and carry out minor repairs and monitoring at the source, distribution, and household connections. The WSPs program could be implemented in the coastal area of Semarang; however, regular supervision and some adjustments are needed.

Introduction
In 2011, tidal inundation (the local term is rob) in the urban area of Semarang affected 1,538.8 hectares. North Semarang is the largest rob-affected area, with 508.28 hectares [1]. North Semarang consists of 9 villages [2]. Bandarharjo and Tanjung Mas are affected by tidal inundation (rob) that reaches 20-60 cm in height. Compared to other villages, Bandarharjo and Tanjung Mas experienced the deepest land subsidence, reaching 4 cm/year [3]. Rob influences the quality of clean water in these areas [4]. A study in 2007 showed that more than half (59%) of 39 respondents in Bandarharjo village used water that was unsafe in terms of bacteriological quality [5]. Another study revealed that 88.9% of 27 water samples from wells in Bandarharjo had a high MPN, a measure of total coliform [6]. By 2010, people in Bandarharjo had started to use deep groundwater (artesian wells). The wells belong to a number of owners, each with many customers. Forty-five percent of 20 deep-groundwater samples did not meet the requirement for bacteriological quality. Microbial contamination of a major urban water supply has the potential to cause outbreaks of waterborne disease [7]. The World Health Organization (WHO) has developed guidelines and tools to promote water safety plans (WSPs) to improve drinking water quality through risk identification, communication, and risk management.

Introducing the WSPs program and team building
The introduction of WSPs was facilitated by the Faculty of Public Health, Diponegoro University, and the WHO representatives for Indonesia, motivated by the health problems and quality of drinking water in Bandarharjo. The health problems are based on the incidence of waterborne diseases, i.e., diarrhea, dysentery, shigellosis, typhoid, and skin problems [12]. These diseases increased during 2011-2013, while clean water coverage declined. The problem was also supported by previous studies [5], [6] and a preliminary study. These data were intended to raise problem awareness among the stakeholders.
The meeting was held and attended by stakeholders in Bandarharjo: village officers, a women's group, a youth group, the community empowerment board at the village level, health officers, local government water supply officers, deep groundwater owners, the village security group, and the customers. A facilitator and an expert presented the background of the health problems related to the water supply in the area, followed by discussion. The most important outcomes of the meeting were that the stakeholders understood the health problem and committed to overcoming it. Each participant or group brainstormed and declared their willingness to implement the WSPs program. A WSPs team was developed once commitment among the stakeholders to implement the WSPs program had been reached. The team structure consisted of the head of the WSPs team (coordinator), a secretary, a technical unit, a monitoring and evaluation unit, and the other members. The WSPs team was formalized under a decree signed by the head of the Bandarharjo village office.

Risk assessment
A semi-quantitative risk matrix was used to assess the risks. The matrix consists of 5 columns and 5 rows. The columns contain the degree of severity (consequences), whereas the rows contain the appearance or frequency of the risk. The assessment points are rated as follows:
a. Frequency of occurrence: once a day (5), once a week (4), once a month (3), once a year (2), once in 5 years (1);
b. Degree of severity: no effect (1), a small impact on compliance (2), a medium impact on aesthetics (3), impact on regulation (4), impact (disaster) on public health (5);
c. Risk value and degree of risk: low risk (<6), medium risk (6-9), high risk (10-15), very high risk (>15) [8], [10], [13].
A code sketch of this scoring scheme follows the methods description below.
The instrument for observation and risk assessment consists of items on the risk of contamination of the drinking water supply system. The instrument takes the form of a checklist for observing and assessing the risk of the water supply system at the source, processing, distribution, and consumer levels. This instrument is used by the team for assessing the risks.

Minor repair
Minor repairs are made based on the results of risk assessment and monitoring. This activity was carried out by the WSPs team together with the owners of the artesian wells. Materials and tools for minor repairs were provided by sharing resources between the owners and the WSPs team. Minor repairs covered the source, distribution, and house connections, for example repair of reservoirs, distribution pipes, house connection pipes, and water meters.

Monitoring of risk
Monitoring was conducted by the WSPs team, consisting of health workers, artesian well owners, the security group, and the technical unit. Monitoring activities were carried out in accordance with the agreed schedule, using an instrument/checklist. The monitoring team surveyed the water supply system, from the water sources through the distribution system to the household connections that had received minor repairs, as well as unconfirmed improvements. Monitoring officers observed and recorded any damage, leaking pipes, loose pipes, broken pipes, or submerged water meters. The results of the follow-up monitoring activities were discussed with the WSPs team to plan further improvements [14], [15].
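As referenced above, the semi-quantitative scoring can be expressed compactly. This is our illustration of the frequency × severity bands, not an instrument from the WSPs toolkit.

```python
def risk_degree(frequency, severity):
    """Semi-quantitative risk rating: frequency (1-5) x severity (1-5).

    frequency: 1 = once in 5 years ... 5 = once a day
    severity:  1 = no effect ... 5 = public-health disaster
    Returns the risk value and its band, per the matrix above.
    """
    value = frequency * severity
    if value > 15:
        band = "very high"
    elif value >= 10:
        band = "high"
    elif value >= 6:
        band = "medium"
    else:
        band = "low"
    return value, band

# Example: a hazard occurring once a week (4) with regulatory impact (4)
# scores 16, a "very high" risk, and is prioritized for repair.
```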
Results
Introducing of WSPs
Thirty persons participated in this activity: owners of the deep groundwater wells, customers/users, Bandarharjo village officers, staff of the local government drinking water supply system, the youth community organization, the women's organization, the village water and sanitation technician, health officers from primary health care (PHC), and others.

The first session's discussion revealed: (a) WHO and the Faculty of Public Health (FPH), Diponegoro University, intended to apply Water Safety Plans (WSPs) in the rob area of Bandarharjo because the water was unsafe in terms of bacteriological quality; the purpose was to secure the water supply in order to ensure water quality; (b) the quality of water in Bandarharjo is deficient, as rob and floods contaminate the water; (c) most residents obtain water from deep ground wells; (d) there are 10 owners of deep ground wells in the area, each serving 50-100 customers; (e) customers of deep groundwater are poorly informed about the quality of the water they use and would welcome a study to determine it; (f) customers want to know how to deal with high pH and acid water, as well as the management of healthy drinking water; (g) before the implementation of WSPs in Bandarharjo, the biological quality of 40 water samples would be examined; (h) the WSPs needed full cooperation from the owners of the deep groundwater wells; (i) all participants agreed to monitor the quality of the deep groundwater.

The second session's discussion revealed: (a) the local government drinking water supply office suggested that all deep ground wells be examined, with the results used to formulate priorities in the WSPs implementation; (b) the purpose of the examination was to get an idea of the quality of the water from the deep groundwater wells, and the test results would be used as the basis for the implementation of WSPs; the activity was expected to secure clean water adversely affected by rob; (c) the activity aimed to secure the provision of drinking water through the establishment of a WSPs team working together with the security systems already in place; (d) water providers should commit to supporting the WSPs implementation; (e) a WSPs team would be formed to monitor the quality of drinking water in Bandarharjo; (f) the team should approach the deep ground well owners; (g) training for the WSPs team would be held after Idul Fitri.

Preliminary water survey
The preliminary water survey was conducted in order to: (a) describe and identify the water supply systems of the deep groundwater wells in Bandarharjo village; (b) assess the bacteriological parameter (total coliform) against the standard requirement; and (c) compare the bacteriological quality (total coliform) with the standard of Permenkes No. 416/1990. The main parameters of drinking water are total coliform, turbidity, salinity, and pH. The water supply system in Bandarharjo runs from the water provider to an upper reservoir, through distribution, and then to the customers. Most of the water supplies in Bandarharjo village are deep groundwater wells. Water is pumped up from 85 meters underground into a raised water reservoir (e.g., 6-20 m high). The reservoir provides sufficient pressure to distribute water to the customers. Water is distributed to customers using PVC pipes with diameters of 4-6″.
Observation found that the distribution pipeline was frequently submerged by rob. This system differs from the local government drinking water supply system.

WSPs team building and training
The WSPs team was formed on the first day of training. The team consisted of a coordinator, a secretary, field technicians, and the members. The PHC of Bandarharjo, the local government drinking water supply system, and the deep groundwater owners acted as consultants. On day 2, a management simulation (exercise) of the material given on day 1 was held. For effective training, the WSPs team was divided into three small groups for risk assessment. Group 1 focused on water sources and discussed the condition of the catchment, piping, water tanks (upper and bottom reservoirs), the presence of disinfection, the surrounding area, and the possibility of contamination (Table 1). Group 2 focused on water distribution and discussed the condition of the piping, pipe joints, branches of distribution, and the possibility of contamination (Table 2). Group 3 focused on customers and discussed the condition of the water, household connections, water meters, piping, valves, hoses, and water tanks (Table 3). For the exercise and discussion in the training, the team conducted a field visit to measure the risk assessment using the instrument [8], [13]. Examples of risk/hazard occurrences are pipe leakage, broken pipes, loose pipe connections, submerged water meters, broken water meters, no treatment (i.e., disinfection, filtration), lack of cleaning or draining of the upper reservoir, no tight cover on the reservoir, and sediment in the inner pipe.

On day 3, modules 8-11 (adapted from the WHO training modules) were delivered [16]. The activity consisted of formulating a plan of action and explaining the plan of action after the training. The plan of action proposed by the participants comprised dissemination/socialization of the water safety plan to the community and legalizing the WSPs team through a decree from the head of Bandarharjo village. The explanation of the plan of action included measuring parameters and conducting a sanitation survey in Bandarharjo village, and describing the WSPs implementation after the training. The plan of action was made by the WSPs team, led by the WSPs coordinator; the result is shown in Table 4, with Table 5 covering the emergency response. Examples of emergency-response actions (Table 5) include: opening the water meter and cleaning it; checking the water source (if it is fine, the reservoir or distribution pipe may be clogged); and cleaning through high-pressure flushing.

Examination of water and risk assessment of water safety by the WSPs team
In terms of pH and salinity, all water samples met the standard, yet two of them exceeded the turbidity standard (>25 mg/liter). Eighty percent of the water samples at the customer level did not meet the specified standard according to the Decree of the Ministry of Health of the Republic of Indonesia number 416/1990, which allows a maximum of 10 per 100 ml of water sample. The water quality standard does not require the BOD parameter; the BOD examination was performed to estimate the possibility of pipe leakage, loose connections, or water contamination by sewerage or tidal inundation, as well as possible sedimentation in the inner pipe. In addition, in a spot check made by cutting the distribution pipeline, we found a lot of impurity deposits sticking to the inner wall of the pipe. The field risk assessment, as in a previous study [17], covered the source, processing, distribution, and household connections (customers). A very high degree of risk was found at the source system (risk value = 25), caused by the presence of holes (a gap) between the lid and the reservoir.
In terms of turbidity, the reservoir has a high degree of risk (risk value = 15) due to the presence of dust, moss, and spore contaminants. At the processing system, we found a very high degree of risk (risk value = 25) due to the absence of physical or chemical treatment. At the distribution system, we obtained a high degree of risk (risk value = 15), because of sediment in the inner pipe and loose connections. At the customer/household system, we observed the connection pipes, water meters, and faucets in the houses, and obtained a very high risk (risk value = 25). The WSPs team prioritized improvements to the drinking water supply system based on the categories of very high and high degrees of risk. The priorities included: at the source, the loose reservoir tub cover and the dirty water reservoir; in processing, the lack of primary disinfection treatment (based on the initial measurements, in which total coliform exceeded the standards); and in the customer system, the water meters submerged by rob, sewer water, or soil.

Corrective action plan and minor repair I: after risk assessment
Hazard events at the source system: (1) a small gap between the tub and its cover, allowing contamination by dirt or dust; (2) a dirty tub. The corrective actions were the construction of a concrete tub cover by the WSPs team and the owner, and draining plus manufacture of the concrete tub cover by the owner.

Hazard events at the processing system: (1) bacterial contamination (total coliform); (2) disinfection requiring prior socialization. Disinfection uses chlorine, applied by a technician from the local government drinking water supply office of Semarang. Before disinfection, socialization on the advantages and disadvantages of disinfection was carried out by health officers.

Hazard event in the distribution pipe system: sediment in the inner pipe. There are hygienic reasons for maintaining the internal cleanliness of pipework; the method used to clean out the dirt is flushing, the simplest of the pipe-cleaning techniques.

Hazard event at the customer system: water meters submerged by trench/gutter water and tidal inundation. Submerged pipes were elevated by adding pipe, so that they would not be re-submerged by tidal inundation; this was done by the WSPs team and the customers.

A budget plan was prepared, along with a design for casting the cistern cover, produced by the WSPs team and an expert designer. The concrete is approximately 10 cm thick and covers approximately 12 m² in total (two cisterns). The concrete was framed with steel, using concrete slabs 2.5-3 meters long and 50 cm wide arranged in rows, over which concrete was then cast. Two 60x60 cm manholes were built to simplify monitoring of the water, cleaning, and drainage of the cisterns. In addition, to ease climbing up to the cistern, iron stairs 7 meters high were made. The casting was carried out on November 26, 2013, instead of the first week of November 2013 as planned; the delay was due to bad weather (rain and rob), which did not allow casting. The casting involved the WSPs team along with personnel competent to perform it. Repairs were also carried out on the drainage pipes along with their faucets, so that they can be opened and closed when draining is done. The cistern tank had not been drained since it was built (almost ten years earlier). Observations during draining clearly showed dirty brown sludge on the floor, tub/cistern, and walls. Dewatering was done by the owner of the wells and executed a week after the casting.
The inner wall of the tub/cistern appeared very dirty and dark brown in colour, as did the water in the tub. Dewatering was conducted on November 30, 2013. To improve the water processing system, socialization was carried out before disinfection with the 25 heads of households (consumers) who subscribe to the deep ground well belonging to the owner (Mrs. Warsi). The socialization was intended to inform them that the deep ground well water was possibly contaminated and that disinfection treatment was therefore necessary, so that people would understand that water contaminated with bacteria would be harmful to health. The socialization was carried out by sanitarians from the Primary Health Care centre and the WSPs team, and covered the benefits and disadvantages of water treated by disinfection (chlorination). Depletion/dewatering of the distribution pipes was done by cutting the tips of the ends of the three distribution pipes. After the cutting, it was very noticeable that the water coming out of the distribution pipes was very dirty. To ensure that all the impurities in the water in the pipes were carried out, the ends of the pipes were tapped so that all the dirt came off the pipe walls. Before this, we first opened the distribution faucet above the tub/cistern so that the flow coming out of the distribution pipeline was quite strong and able to flush the dirt that was in the pipelines. Cleaning/flushing of the distribution pipelines took approximately 30 minutes; after 30 minutes of flushing, the water coming out of the distribution pipes was very clear. In addition to the flushing, loose distribution pipe connections were also repaired. The activity was conducted on December 7, 2013. Improvements at the customer level, namely elevation and replacement of water meters and pipes submerged by sewer water and soil, had not yet been done, because on double-checking they were still in good condition (the earlier data/information was less valid). Therefore, a re-assessment and observation (December 2, 2013) of all household connections (approximately 73 customers) was carried out to look more closely at the likely risks and to recalculate the tools, materials, and time needed for the repairs. A difficulty was that the owner had no definitive customer list. The observation and reassessment were done by the WSPs team of Bandarharjo village. From the observation and re-identification of 70 customers on Monday, December 2, 2013, there were 9 water meters submerged by soil and water from sewerage. Improvements were also made to the household connections. Repairs of household plumbing connections and elevation of water meters were performed on Saturday, December 7, 2013. Distribution pipe flushing was done after all the water meter levels and household connections had been repaired.

Monitoring and minor repair II. On December 27, 2013, the WSPs team observed and monitored 20 customers. There were 2 leaky connection pipes, 1 submerged pipe connected to a water meter, 1 submerged water meter, 4 leaky faucets, 1 leaky indoor pipe, and 1 submerged indoor pipe. As for the indoor reservoirs, which are generally in the form of a bucket, 4 reservoirs were in turbid condition. The next monitoring was held on December 28, 2013; the WSPs team found 3 leaky/seeping faucets, 1 broken faucet, 3 rusted water meters, 1 leaky pipe, and 2 turbid reservoirs.
Further water safety monitoring was held on December 30, 2013. Almost all equipment was in good condition; the WSPs team found only 1 broken faucet. Monitoring on January 2-4, 2014 found one turbid reservoir; the rest (water meters, pipes, and faucets) were all in good condition. Besides the customer level, monitoring was also performed at the water source and distribution levels, both by observation. The WSPs team found that the concrete cover of the reservoir was in good condition, clean, and without cracks. The pipes and faucets for draining were also in good condition and functioned properly, as did the pipe from the reservoir to the distribution system. The distribution pipes, which had been cleaned or flushed with water, remained in good condition and tight. In addition, the owner had cleaned the floor at the water point, which had previously been mossy and slippery. Observation showed that the water in the reservoir was very clear (the bottom of the reservoir could be seen). The physical quality of the water obtained from the reservoir was also clear, tasteless, and odorless. Before conducting the minor repairs, coordination had been carried out between the WSPs team and the owner of the deep ground wells. This activity was done by identifying distribution pipelines that were leaking or ruptured. Leaky pipes and faucets were repaired by replacing faucets and re-sealing joints whose cellophane tape or glue had come apart. Water meter levels were raised by replacing the pipe and cellophane tape, and broken water meters were replaced with new ones. Similarly, loose distribution and connection pipes were replaced and re-glued, and pipes submerged in litter/soil were replaced with new pipes. This work was done by the WSPs team and the owner of the deep ground wells. Observation showed that during the rainy season and floods, the uncemented well casing was submerged by floodwater. This creates the potential for seepage along the side of the casing pipe, which can infiltrate the water already in the pipe; therefore, to prevent contamination, the casing was sealed with cement, sand, and bricks. Monitoring and observation also showed that many distribution pipes were submerged in soil, sewer water, and trash. Therefore, efforts are needed to prevent damage and contamination, as well as monitoring of the distribution pipes, by replacing and elevating distribution pipes submerged in soil, sewers, and garbage. The replaced distribution pipes comprised 60 rods of 3ʺ pipe and 4 rods of ¾ʺ pipe, including 'T' pipe connections, joined using cellophane tape and glue. Searching for leaking distribution pipes took quite a long time (2 weeks), because most of the distribution pipes were submerged in soil, sewers, and garbage. The search for leaking pipes was done 3 times, involving the owners, the WSPs team, and staff from the Faculty of Public Health, Diponegoro University. During observation, the cover of the water reservoir (manhole) was found to be rusty and corroded; based on that, painting is needed to protect it from rust.

Introducing the WSPs program. It is not easy for a community to adopt new programs or innovations (i.e., WSPs), owing to varied educational backgrounds, cultures, experiences, and socio-economic conditions [18], [19]. People will be willing to accept new programs or innovations if they feel they need them or will benefit from them [20]. However, this requires good socialization, which is one of the diffusion mechanisms [21].
Before introducing the WSPs program to the community, the community needs to be shown or informed about the health problems it faces. Several considerations determine which health problem is given priority, based on magnitude, severity or seriousness, and benefit [22]. Introducing the health problems to the community is very important because some members of the community may not be aware of any health problems in the region; it was revealed in the discussion that some inhabitants did not realize that the quality of drinking water is important. To convince the community of the importance of water quality requirements, we presented the results of research related to water quality in the region to community representatives, in order to stimulate a collective sense of ownership of drinking water among community members as they confront the significant problem of contaminated water and its negative impacts on customers [23]. Based on Ministry of Health of the Republic of Indonesia regulation No. 416/MEN-KES/SK/IX/1990, the maximum number of total coliform allowed is 10/100 ml of sample for piped water and 50/100 ml of sample for non-piped water. Seven out of 20 wells in Bandarharjo were in poor hygienic condition (>50/100 ml) in terms of bacteriological quality. Introducing this scientific perspective highlights the importance of drinking water quality and triggers the community to protect its drinking water [24]. The introduction of the WSPs program was also intended to communicate the value of safe drinking water (i.e., reducing future waterborne disease) and so increase people's awareness of the importance of this work [25]; therefore, the program is based on the quality of drinking water. The first step in pre-triggering a community is selecting community members who are concerned with the problem. The community representatives should be respected people with skills who provide services, such as village midwives, community leaders, religious leaders, teachers, youth, and women [23]. Coordination should be done before the training, with the aim of maintaining commitment. A highly committed team is a key to the success of WSPs; the manager's performance on the job and the team members' technical background and commitment are most critical for project success [26].

WSPs team building and training. The WSPs team included representatives of community groups, the government sector, and the private sector, with many levels of knowledge and experience within the small community. At the initial stage, people with experience in community activities were identified and selected by a village officer who knew the people in the area well. Members of the WSPs team have varying work, education, and social backgrounds, including a sanitarian, a youth group, owners of deep ground or artesian wells, women's groups, urban community empowerment agencies, village officials, a community protection officer, health cadres, the local government drinking water supply company, and community leaders. The characteristics of the WSPs team are education levels ranging from elementary school to higher education, gender (16 males and 8 females), and age (20-60 years old), distributed across each of the RWs (neighbourhood units) in Bandarharjo village. It is expected that these various backgrounds and levels will strengthen the work of the team. High-performance partnerships employ practices designed to ensure they get the right people and resources (i.e., resources, skills, and expertise) on board, which will reduce the financing burden [27], [28].
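The bacteriological limits of regulation No. 416/MEN-KES/SK/IX/1990 quoted above (10 total coliform/100 ml for piped water and 50/100 ml for non-piped water) lend themselves to a simple screening check. The sketch below applies those two limits to a set of well results; only the limits come from the text, and the sample counts are hypothetical.

```python
# Screening of well samples against the total coliform limits of Ministry of
# Health regulation No. 416/MEN-KES/SK/IX/1990: 10/100 ml for piped water,
# 50/100 ml for non-piped water. Sample data below are hypothetical.

LIMITS = {"piped": 10, "non_piped": 50}  # total coliform per 100 ml

samples = [
    {"well": "W01", "type": "non_piped", "coliform_per_100ml": 30},
    {"well": "W02", "type": "non_piped", "coliform_per_100ml": 120},
    {"well": "W03", "type": "piped",     "coliform_per_100ml": 8},
    {"well": "W04", "type": "piped",     "coliform_per_100ml": 40},
]

failed = []
for s in samples:
    limit = LIMITS[s["type"]]
    ok = s["coliform_per_100ml"] <= limit
    status = "meets standard" if ok else f"EXCEEDS limit of {limit}/100 ml"
    print(f"{s['well']} ({s['type']}): {s['coliform_per_100ml']}/100 ml, {status}")
    if not ok:
        failed.append(s["well"])

print(f"{len(failed)} of {len(samples)} samples exceeded the standard: {failed}")
```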
The WSPs implementation also takes flexibility into account and is adapted to local needs [16]. An initial investment of time and resources can be characterized as a mutual selection process [29]. Assessing people's capacities for collaborative work is important because of their technical and scientific abilities [30]. Although the members may already have some knowledge and experience of water supply systems, delivering the WSPs training modules gave them the same knowledge and perspective. The WSPs team was legalized by a decree of the village office to increase commitment and to clarify their roles, involvement, and responsibilities, such as coordination of all activities, water quality monitoring, pipe checking (i.e., at the source, distribution, and household connections), and monitoring of the drinking water supply system. The team can be large or small, but its principles are shared goals, clear roles, mutual trust, effective communication, and measurable processes and outcomes [31]. Team building is an effort to increase the likelihood that effective WSPs programs will be sustained, and it may be developed with individuals, groups, teams, organizations, inter-organizational coalitions, or communities [32]. Moreover, the WSPs team should have strong leadership and community participation (i.e., the deep groundwater owner) to ensure program sustainability. Collaboration between the community, the team members, and the WSPs team leader will reinforce the teamwork. According to Shortell et al. [33], collaborations tend to develop a committed core leadership that helps to create consistency of objectives, tasks, projects, and programs. Community members are enabled to take control of, maintain, and sustain the water facilities, eliminating barriers and enabling effective participation in the water supply system [34]. The members of the team were fully involved in the planning, implementation, operation, and maintenance of water supply facilities in the area [35]. Modules 1-6 (adapted from the WHO training module) were delivered on day 1 of the training [36]. In the training, the team members were divided into a large group and small groups to ease delivery of the materials: the large group was generally used to explain the module material and for discussion, while the small groups were used for exercises, field visits, and teamwork. This was expected to make delivery of the training modules more effective and to strengthen relationships. According to Steinert, there are 12 tips by which small-group teaching can become more effective and more enjoyable; in small groups, teaching promotes understanding, critical thinking, and problem solving, enhances communication skills, and fosters self-directed learning [37]. The results of the working group discussions, exercises, field visits, and simulation can be seen in Tables 1, 2, 3, 4, and 5. This mixed method keeps the participants engaged, entertained, and productive [38], because after a short explanation from the facilitator and/or an expert, the participants work with the tools already provided. In addition, this approach makes it easier for participants to understand the WSPs materials and to conduct simulations, field trips, or working groups. The participants were able to identify the problems and hazards and to prioritise the problems at the source, distribution, and customer levels of the water supply system. The working group discussions were guided by facilitators from the Faculty of Public Health, Diponegoro University, and by WHO representatives as experts, to ensure the effectiveness of the training.
Participants were also able to simulate small-scale planning and budgeting to tackle problems under both routine and emergency conditions. The role of the leader of a small group is important for directing and clarifying the discussion [39], [40].

Water quality. The local government drinking water supply system consists of raw water intake, physical and chemical treatment, distribution, and household connections (customers). Complete water treatment includes screening, coagulation, sedimentation, filtration, disinfection, and distribution [41]. However, the minimum requirement for artesian wells (deep groundwater) is disinfection. One way to disinfect water is with chlorine; disinfection using chlorine is very effective and kills bacteria, viruses, and protozoa such as Giardia and Cryptosporidium [42].

Risk assessment and corrective action plans. At the customer (household) level, we observed the connection pipes, water meters, and faucets in the houses, and the system obtained a very high risk (risk value = 25) [17]. A concrete tub cover will protect the water in the reservoir from dirt or feces from wild animals; sources of fecal coliform include the feces of wild and domestic animals, which can be spread by the wind [43]. Problems that often occur in distribution systems include microbial growth and biofilm, cross connections, backflow, rust and ageing, contamination during service, and nitrification [44]. Deposited material inside the distribution system was removed by flushing, because this is the cheapest and simplest method for removing loose deposits [45]. However, this method also has disadvantages: the quite large amount of water used, and its unsuitability for large-diameter pipes, in which the desired flushing velocity cannot be achieved. Flushing involves the discharge of water from pipes, generally through hydrants and washouts, to generate velocities in the pipe capable of removing accumulated material and biofilms inside the pipe and attached to its walls. Although there are no reports of health effects directly attributed to deposits in pipes, they do provide conditions for the proliferation of microorganisms and animals [46]. Flushing can be performed periodically to minimize deposits, including microbial deposits, in the distribution system [47]. The water meter assembly must be fully supported, with a minimum ground clearance of 150 mm and no more than 300 mm from the finished ground level to the base of the water meter assembly; on a case-by-case basis, consideration will be given to varying the height of the water meter up to a maximum of 1.5 m. Water meters must not be located within garages, roof cavities, ceiling spaces, or inside pits [48], [49]. Monitoring serves to identify parts of the drinking water supply system that are not working properly so that they can be fixed and contamination can be prevented; monitoring of treated drinking water quality can give warning of potential contamination of the drinking water [50].

Conclusion. The WSPs program could be implemented by the WSPs team in the coastal area of Semarang city. The team was able to identify risks in the drinking water supply system, and actions were taken to improve the water supply system. The minor repairs were done by sharing resources. However, there is still a need to provide assistance to the WSPs team and to educate the community to ensure the sustainability of the WSPs program.
2019-05-30T23:43:37.403Z
2018-02-01T00:00:00.000
{ "year": 2018, "sha1": "18d658bbfa8c96e42df7afba6f603fbd95e97bd4", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/116/1/012029", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "acfae1ac49729b15387d9385f9c4562e165d7952", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Business" ] }
52033833
pes2o/s2orc
v3-fos-license
Recapitulation of hepatitis B virus–host interactions in liver organoids from human induced pluripotent stem cells Therapies against hepatitis B virus (HBV) have improved in recent decades; however, the development of individualized treatments has been limited by the lack of individualized infection models. In this study, we used human induced pluripotent stem cell (hiPSC) to generate a functional liver organoid (LO) that inherited the genetic background of the donor, and evaluated its application in modeling HBV infection and exploring virus–host interactions. To establish a functional hiPSC-LO, we cultured hiPSC-derived endodermal, mesenchymal, and endothelial cells with a chemically defined medium in a three-dimensional microwell culture system. Based on cell-cell interactions, these cells could organize themselves and gradually differentiate into a functional organoid, which exhibited stronger hepatic functions than hiPSC derived hepatic like cell (HLC). Moreover, the functional LO demonstrated more susceptibility to HBV infection than hiPSC-HLC, and could maintain HBV propagation and produce infectious virus for a prolonged duration. Furthermore, we found that virus infection could cause hepatic dysfunction of hiPSC-LOs, with down-regulation of hepatic gene expression, induced release of early acute liver failure markers, and altered hepatic ultrastructure. Therefore, our study demonstrated that HBV infection in hiPSC-LOs could recapitulate virus life cycle and virus induced hepatic dysfunction, suggesting that hiPSC-LOs may provide a promising individualized infection model for the development of individualized treatment for hepatitis. Introduction Although vaccines and therapies against hepatitis B virus (HBV) have improved in recent decades, an estimated 257 million people are still living with hepatitis B virus infection with markedly heterogeneous outcomes [1,2]. Some individuals have self-limiting, symptom-free infection, whereas others develop liver cirrhosis and/or hepatocellular carcinoma [2]. This drastic heterogeneity in outcome cannot be justified completely by the contributing factors, such as viral genotype diversity, environmental and demographic variables, and differences in age of patient with infection [3]. Evidence has indicated that the genetic background of the host might play an important role in virus-induced outcomes [4,5]. Although various models have been developed for HBV infection, these models do not represent individualized genetic backgrounds, which limits their application in understanding the potential impact of the host's genetic makeup during virus infection. Since long, primary human hepatocytes (PHH) have been a valuable model for HBV infection studies [6]; however, donor shortage and variable quality limit their applications. More recently, the Na +taurocholate cotransporting polypeptide (NTCP) was identified as a HBV entry receptor, and the exogenous expression of NTCP could successfully induce susceptibility to HBV infection in non-susceptible hepatocarcinoma cells [7]. However, the genetic background of these hepatocarcinoma cells could not broadly represent the human population. Hepatocytes from chimeric mice also face similar problems [8]. Therefore, the generation of new hepatocytes, with patient-specific genetic background and susceptibility to HBV infection, is paramount for individualized hepatitis studies. 
In the past decade, the ability of human induced pluripotent stem cells (hiPSCs) to differentiate into various terminal cell types while inheriting the donor's genetic background has been successfully tested and proved [9]. This has created a unique opportunity to accurately model diseases and develop new treatments by using hiPSCs. Studies have reported that hiPSC-derived hepatic-like cells (hiPSC-HLCs) can exhibit and inherit hepatic characteristics similar to those of the donor [10,11]. HBV infection has also been reported in hiPSC-HLCs [12][13][14]. However, these HLCs displayed low hepatic function that was easily lost, greatly limiting their application in the investigation of virus-host interactions [12][13][14][15]. The liver is known to have a complex and highly organized architecture consisting of numerous cell types, with hepatic differentiation being accurately controlled by interactions with non-parenchymal cells during liver organogenesis [16]. Therefore, reconstruction of these interactions might be a plausible approach to promote hepatic differentiation, maintain hepatic functions long term, and generate an effective infection model to understand virus-host interactions. The liver develops from specific hepatic endoderm in a microenvironment surrounded by mesenchymal and endothelial progenitor cells located in the septum transversum. The mesenchymal and endothelial cell microenvironment has been considered a key element in initiating hepatic differentiation and promoting liver development [17,18]. By recapitulating this microenvironment, we have previously generated a self-organized hiPSC liver organoid (LO) that could grow into a vascularized and functional tissue post-transplantation [19,20]. However, the self-organized organoid had a few drawbacks, such as limited hepatic function before transplantation and a huge size (approximate diameter of 3000-5000 μm) that inhibited its ability to differentiate well in vitro [19,20]. In the current study, we aimed to generate a functional hiPSC liver organoid that could act as a reliable and feasible ex vivo infection model for hepatitis studies.

Cell culture. The TkDA3 human iPSC clone used in this study was kindly provided by K. Eto and H. Nakauchi. Undifferentiated iPSCs were maintained on a growth factor-reduced Matrigel (BD Biosciences, San Diego, CA)-coated dish with mTeSR1 medium (Stem Cell Technologies, Vancouver, BC, Canada). HUVECs and human bone marrow (BM)-MSCs were maintained in endothelial cell growth medium (Lonza, Walkersville, MD) and mesenchymal cell growth medium (Lonza), respectively.

Cell differentiation and organoid generation. To differentiate HLCs and LOs from hiPSCs, we first differentiated endoderm from hiPSCs according to a previously reported protocol [23]. The hiPSC-endoderm was then cultured and differentiated into HLCs as described previously [23].

HBV preparation, infection and inhibition assays. HBV stocks were derived from supernatants of HepG2.2.15.7 cells, which were stably transfected with a complete HBV genome (genotype D) as described previously [21]. hiPSC-LOs, hiPSC-HLCs, HepG2-TET-NTCP organoids, and PHHs were infected with HBV [500 genome equivalents (GEq)/cell or 5000 GEq/cell] in the presence of 4% polyethylene glycol 8000 in 24-well plates. At 10 or 20 days post infection, the cultured cells were harvested. pg RNA was quantified by SYBR Green (Takara Bio, Otsu, Japan) with primers listed in Table S1.
The expression of pg RNA was normalized against the expression of β-ACTB (Thermo Fisher Scientific, Waltham, MA). In the inhibition assay, hiPSC-LOs were infected with HBV at 5000 GEq/cell, and 100 nM Myrcludex was added to the medium 2 h before infection; 1.8 mM Entecavir was added to the medium during infection; 1000 IU/mL IFN-α (Sigma-Aldrich) and 1000 IU/mL IFN-γ (Sigma-Aldrich) were added to the medium during infection.

Intracellular vDNA and cccDNA isolation and quantification. Infected cells were collected after infection. Total DNA in the cells was purified using the DNeasy Blood & Tissue Kit (QIAGEN, Hilden, Germany). The DNA concentration was determined with a NanoDrop spectrophotometer (Thermo Fisher Scientific, Waltham, MA) and adjusted to 20 ng/μL for further experiments. 2 μL of the adjusted DNA sample was used to quantify vDNA by SYBR Green, with a standard curve prepared using plasmid pUC-HBV over a range of 10^7-10^1 copies. To quantify cccDNA copies, 25 μL of the adjusted 20 ng/μL HBV DNA samples was treated with plasmid-safe DNase at 37°C for 1 h according to the manufacturer's instructions. After digestion, the plasmid-safe DNase was inactivated at 70°C for 30 min. 2 μL of sample was used to quantify cccDNA copies by SYBR Green, with a standard curve prepared using plasmid pUC-HBV over a range of 10^7-10^1 copies. The primers used for HBV DNA and cccDNA quantification are listed in Table S1.

Supernatant vDNA isolation and quantification. Supernatant from infected cells was collected. 200 μL of supernatant was mixed with 200 μL of AL buffer and 20 μL of proteinase K. Total DNA in the supernatant was then purified using the DNeasy Blood & Tissue Kit (QIAGEN), and DNA was eluted with 30 μL of elution buffer. 2 μL of the DNA sample was used to quantify supernatant DNA by SYBR Green, with a standard curve prepared using plasmid pUC-HBV over a range of 10^7-10^1 copies. The primers used for HBV DNA quantification are listed in Table S1.

RNA isolation and quantitative real-time polymerase chain reaction. Total RNA was isolated using a PureLink viral RNA/DNA mini kit (Thermo Fisher Scientific, Carlsbad, CA). RNA (1 μg) was used as a template for single-strand cDNA synthesis with a high-capacity cDNA reverse transcription kit (Thermo Fisher Scientific) according to the manufacturer's instructions. Q-PCR was performed with cDNA using specific primers and Universal Probe Library (UPL) probes. The primers and UPL probes used in this study are listed in Table S1. All data were calculated using the ΔΔCT method with β-ACTB (Thermo Fisher Scientific) as a normalization control.

2.7. ALB assay, urea production, CYP3A, LDH release, and ALT assay. Human ALB was measured using a human ALB enzyme-linked immunosorbent assay kit (Bethyl Laboratories, Montgomery, TX). Urea production was assayed using a QuantiChrom urea assay kit (BioAssay System, Hayward, CA), CYP3A activity was detected using a P450-Glo CYP3A4 assay kit (Promega, Madison, WI, USA), LDH release was detected using the LDH Cytotoxicity Detection Kit (Roche, Boehringer Mannheim, Germany), and ALT was detected using FUJI DRI-CHEM SLIDE GPT/ALT-PIII (Tokyo, Japan), all according to the manufacturers' instructions. The numbers of hiPSC-HLCs and PHHs were analyzed with an IN Cell Analyzer 2000 (GE Healthcare, Cardiff, United Kingdom) with Hoechst 33342 staining. To calculate the cell number in LOs, total DNA of the cell-number-counted hiPSC-HLCs and of the hiPSC-LOs was extracted using the DNeasy Blood & Tissue Kit, and the DNA was eluted with 50 μL of elution buffer.
The concentration of this DNA was then determined with a NanoDrop spectrophotometer. The total cell number in hiPSC-LOs was calculated by the following formula: hiPSC-LO cell number = (HLC cell number) × (LO DNA amount) / (HLC DNA amount).

Indocyanine green uptake and release. ICG dry powder (Akorn, Buffalo Grove, IL) (10 mg) was dissolved in 10 mL of hepatocyte culture medium (HCM; Lonza) to obtain a 1 mg/mL stock. hiPSC-LOs were incubated with ICG in suspension or plated for 4 h at 37°C in a humidified incubator with 5% CO2. Then, hiPSC-LOs were washed three times with phosphate-buffered saline (PBS) and incubated in fresh HCM for another 5 h to determine the ICG release.

Tissue processing and immunofluorescence. hiPSC-LOs were embedded in optimal cutting temperature compound (Sakura Finetek Japan Co., Ltd., Tokyo, Japan), and 7-μm sections were prepared and mounted on MAS-GP type A-coated slides (Matsunami, Osaka, Japan). Sections were fixed in a 4% paraformaldehyde solution in PBS for 10 min, washed three times with PBS, and blocked for 60 min with 10% ECL prime blocking agent in PBS containing 0.3% Triton X-100, followed by three washes with PBS. Then, the sections were incubated with HBc (Dako, Japan), HBs (Bio-Rad, CA, USA), ALB (Bethyl Laboratories), and NTCP antibodies in the blocking buffer at 4°C overnight. The sections were washed three times with PBS and incubated with a fluorescence-conjugated secondary antibody for another 60 min at room temperature. Finally, the sections were washed three times in PBS and covered with a mounting solution containing 4′,6-diamidino-2-phenylindole. Fluorescence was detected with a Zeiss Axio Imager M1 microscope. The NTCP antibody was produced by Kei Miyakawa and Akihide Ryo and tested using HepG2-TET-NTCP cells and organoids with or without doxycycline treatment (Fig. S2A and S2B). The HBc antibody was tested in infected PHHs (Fig. S2D). To quantify HBc-positive cells, we first counted the ALB-positive cells according to ALB and nuclear staining, and then counted the HBc-positive cells among these ALB-positive cells. The percentage of ALB+HBc+ cells in infected hiPSC-LOs was calculated as the number of HBc-positive cells among the ALB-positive cells divided by the total number of ALB-positive cells, multiplied by 100.

Transmission electron microscopy. Samples in culture medium were fixed with an equal amount of 4% paraformaldehyde and 4% glutaraldehyde in 0.1 M phosphate buffer at 4°C for 1 h, followed by incubation with 2% glutaraldehyde in 0.1 M phosphate buffer at 4°C overnight. The fixed samples were postfixed with 2% osmium tetroxide, dehydrated through a graded series of ethyl alcohol, and embedded in a fresh 100% resin. Ultrathin sections (70 nm) were cut with an ultramicrotome (Ultracut UCT, Leica, Vienna, Austria) and stained with 2% uranyl acetate. Then, the sections were washed with distilled water and stained with a lead stain solution. Grids were observed under a transmission electron microscope (JEM-1400Plus, JEOL, Ltd., Tokyo, Japan) at an acceleration voltage of 80 kV. Digital images were taken with a CCD camera (VELETA, Olympus Soft Imaging Solutions GmbH, Munster, Germany).

Statistics. Values are expressed as the mean ± standard error of the mean (SEM). The statistical significance of differences was evaluated by the Mann-Whitney U test when two groups were compared, or by one-way ANOVA and Bonferroni's multiple comparison tests when multiple groups were compared. A p < .05 was considered statistically significant. Statistical analysis was performed using GraphPad Prism.
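The quantification and statistics described above (absolute HBV copy numbers read off a pUC-HBV standard curve spanning 10^7-10^1 copies, relative expression by the ΔΔCT method normalised to β-ACTB, and two-group comparison by the Mann-Whitney U test) can be summarised in a short sketch. Every numeric value below (Ct values, group measurements) is hypothetical; the sketch illustrates the general calculations, not the authors' GraphPad Prism analysis.

```python
# Illustration of the calculations described in the Methods: standard-curve
# quantification, delta-delta Ct relative expression, and a two-group
# Mann-Whitney comparison. All numbers below are hypothetical.
import numpy as np
from scipy import stats

# --- Absolute quantification from the pUC-HBV standard curve -----------------
std_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3, 1e2, 1e1])          # copies/reaction
std_ct     = np.array([14.2, 17.6, 21.0, 24.5, 27.9, 31.3, 34.8])   # hypothetical Ct

slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1          # ~1.0 corresponds to 100% PCR efficiency

def copies_from_ct(ct: float) -> float:
    """Interpolate HBV DNA copies per reaction from the fitted standard curve."""
    return 10 ** ((ct - intercept) / slope)

print(f"standard curve slope {slope:.2f}, efficiency {efficiency:.1%}")
print(f"sample at Ct 23.1: {copies_from_ct(23.1):.2e} copies/reaction")

# --- Relative expression by the delta-delta Ct method ------------------------
def ddct_fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change = 2^-ddCt, normalised to a reference gene such as beta-ACTB."""
    return 2 ** -((ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl))

print(f"pgRNA fold change vs control: {ddct_fold_change(22.0, 18.0, 27.0, 18.5):.1f}x")

# --- Two-group comparison with the Mann-Whitney U test -----------------------
hlc = [1.0, 1.2, 0.9, 1.1, 1.3]      # hypothetical ALB secretion, group 1
lo  = [4.2, 4.8, 5.1, 4.5, 4.9]      # hypothetical ALB secretion, group 2
u, p = stats.mannwhitneyu(hlc, lo, alternative="two-sided")
print(f"Mann-Whitney U={u:.1f}, p={p:.4f} ({'significant' if p < 0.05 else 'n.s.'})")
```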
Additional experimental procedures are listed in Supplemental information Materials & Methods. In vitro generation of functional liver organoids from hiPSC To generate functional liver organoids, we first differentiated hiPSCs into endoderm that expressed lineage specific hepatic makers ( Fig. S1A and S1B), and then co-cultured the hiPSC-endoderm with human umbilical vein endothelial cells (HUVECs) and mesenchymal stem cells (MSCs) in a 3D microwell culture system (Fig. 1A). After 24 h, all three co-cultured cells were found to self-organize into organoids having a uniform size (205.1 ± 37.5 μm, n = 532) (Fig. S1C). Quantitative polymerase chain reaction (Q-PCR) analysis was performed on hiPSC-LO after 15 days of differentiation (Fig. 1B). The results revealed that hiPSC-LO had differentiated into hepatic lineages with expression of specific hepatic functional genes ( Fig. 1C and Fig. S1D). We also found that the expression of these genes in differentiated LOs was significantly higher than that in hiPSC-HLCs ( Fig. 1C and Fig. S1D). In addition, the omission of HUVECs and MSCs in hiPSC-LOs (hiPSC-LO w/o H/M) could delay hepatic lineage differentiation with lower expression of hepatic genes (Fig. 1C), indicating that the endothelial/mesenchymal environment was important to promote hepatic differentiation in LOs. Hepatic function analysis indicated that differentiated hiPSC-LOs show about 4-8-fold higher ALB (albumin) secretion ( Fig. 1D and Fig. S1E), and 12-fold higher urea production than hiPSC-HLCs (Fig. 1E). Moreover, hiPSC-LOs displayed approximately 500-fold higher CYP (cytochrome P450) 3A4 activity than hiPSC-HLCs, and the CYP3A4 activity of hiPSC-LOs could be further induced by rifampicin (Fig. 1F). We further explored the ultrastructure of the differentiated hiPSC-LO with transmission electron microscopy (TEM). TEM data revealed that hepatic lineages in organoids had typical hepatic features, such as tight junctions, microvilli on the cell membrane, lipid droplets in the cytoplasm, and bile capillaries between hepatic cells (Fig. 1G). These differentiated organoids also had the ability to uptake and release indocyanine green (ICG) (Fig. S1F), a general characteristic of the liver. Collectively, these results demonstrate the successful generation of a well-differentiated hiPSC-LO with enhanced hepatic function and liver-specific features by recapitulating multiple cellular interactions in an ex vivo culture system. Susceptibility to HBV infection in hiPSC liver organoids To evaluate whether the differentiated hiPSC-LO could be used as a potential hepatic source for HBV infection, we detected the expression of NTCP in hiPSC-LO using the q-PCR assay. Analysis showed that the expression of NTCP could be detected and was comparatively higher in hiPSC-LOs, than in the PHHs (Fig. 2A). Immunofluorescence analysis also confirmed NTCP positive hepatic lineages in the hiPSC-LO (Fig. 2B). We then tested the infection susceptibility of hiPSC-LO with HBV produced from HepG2.2.15.7 cells, and compared its susceptibility with hiPSC-HLCs and PHHs through the expression of pregenomic (pg) RNA, intercellular viral DNA (vDNA), covalently closed circular DNA (cccDNA), and supernatant vDNA at 10 days post infection (dpi) (Fig. 2C). In agreement with previous reports [12][13][14], hiPSC-HLCs could be infected by HBV with the detection of viral RNA and DNA (Fig. 2D-G). 
Further, infected hiPSC-LOs showed much higher expression of pg RNA, and increased copy numbers of intercellular vDNA, cccDNA, and supernatant vDNA than infected hiPSC-HLCs (Fig. 2D-G and Fig. S3A). Moreover, the expression levels of pg RNA and the copy numbers of intercellular vDNA, cccDNA, and supernatant vDNA of infected hiPSC-LOs were intermediate between that of two different infected PHHs (Fig. 2D-G), indicating that the virus susceptibility of hiPSC-LO was comparable to PHHs. We monitored the dynamic virus release in infected hiPSC-LOs during the infection. From 7 dpi to 10 dpi, a significant release of viruses from the infected hiPSC-LOs was observed, but not from infected hiPSC-HLCs (Fig. 2H). Additionally, the expression of known infection promoting factors, GPC5 (glypican 5), PPARA (peroxisome proliferator-activated receptor alpha), and CEBPA (CCAAT/ enhancer-binding protein alpha) [25][26][27], was found to be higher in hiPSC-LO than in hiPSC-HLC (Fig. 2I). These results indicated that differentiated hiPSC-LO could be a robust infection model. Long term maintenance of hepatic function and virus propagation in hiPSC liver organoids Next, we evaluated the ability of differentiated hiPSC-LO to maintain its hepatic function and its potential to be a long-term infection model. Dynamic monitoring showed that ALB secretion steadily increased in differentiated hiPSC-LO, and could be sustained for 20 days (Fig. 3A). In contrast, a sharp decline of ALB secretion was observed in hiPSC-HLCs after only 5 days of extended culture (Fig. 3A). The expression of CYP3A4 in hiPSC-LO also constantly increased during this period (Fig. 3B). These results demonstrate that the hiPSC-LOs displayed prolonged maintenance of hepatic functions, and have the potential to be a long-term infection model. We further investigated viral replication and propagation in hiPSC-LOs by increasing the infection time from 10 days to 20 days. Compared with 10 dpi infected LOs, pgRNA expression and supernatant vDNA copy numbers markedly increased in 20 dpi infected LOs (Fig. 3C). Such an increase was not observed in infected hiPSC-HLCs and PHHs (Fig. 3C). Immunofluorescence analysis also showed that the proportion of HBV core antigen (HBc) positive hepatic lineages (HBc + ALB + ) increased in hiPSC-LOs prolonging the infection time (Fig. 3D). To further investigate whether infectious progeny viruses could be produced from infected hiPSC-LOs, we collected the supernatant from infected hiPSC-LOs at 20 dpi (20 dpi-Sup, from day 17 to day 20) and examined its infectivity in PXB-HHs. Notably, the progeny virus displayed infectivity, confirmed by pgRNA quantification in PXB-HHs (Fig. 3E). Furthermore, the progeny virus could complete their life cycle based on detectable vDNA in the supernatant (Fig. 3E) and intracellular HBc staining (Fig. 3F). These results indicate that this organoid system could serve as a long-term ex vivo infection model. Induction of hepatic dysfunction in hiPSC liver organoids with HBV infection The prolonged maintenance of hepatic function in differentiated hiPSC-LOs could be helpful in understanding the host consequences caused by virus infection without any concerns about inherent reduction in hepatic function. To explore the host consequences of viral infection, we infected the differentiated hiPSC-LOs with different doses of the virus: 0 GEq (genome equivalent)/cell (non-HBV), 500 GEq/cell (low dose), and 5000 GEq/cell (high dose) [28,29], and analyzed the changes in infected hiPSC-LOs. 
The increased amount of virus could markedly promote the infection in hiPSC-LOs (Fig. 4A, B and S4A). Further, expression analysis established that virus infection could dose-dependently impair the expression of CYP3A4, CYP3A7, and CYP2C9 (Fig. 4C). Additionally, a high-dose infection could down-regulate the expression of ALB, G6PC (glucose-6-phosphatase catalytic-subunit), HNF4A (hepatocyte nuclear factor 4 alpha), and RBP4 (retinol binding protein 4) (Fig. 4C). To confirm that the impaired expression of hepatic genes was a result of virus infection, we used myrcludex (a HBV entry inhibitor) and entecavir (an anti-HBV nucleos(t)ide) to inhibit virus infection. Upon treatment with these drugs, the expression of pg RNA, and copy number of intracellular vDNA and supernatant vDNA were significantly reduced (Fig. S4B), and the expression of hepatic functional genes was enhanced in infected LOs (Fig. S4C). The infection impaired hepatic gene expression was also detected in infected PHHs (Fig. 4D), but not in infected HepG2-TET-NTCP organoids (Fig. S4E). Moreover, a decreased ALB secretion was observed in high-dose infected LOs (Fig. 4E). To further confirm that viruses caused hepatic dysfunction, we measured the level of aminotransferase (ALT) and lactate dehydrogenase (LDH) that act as markers for early acute liver failure [30], in the supernatant of infected LOs, and found an increased level of ALT and LDH (Fig. 4F and G). TEM analysis indicated that, compared with noninfected LOs, the infected LOs had increased number of vacuoles in the hepatocyte cytoplasm (Fig. 4H), which occupied a significant proportion of the cytoplasm and pushed the nucleus into the cell periphery (Fig. 4H). Additionally, infected LOs showed reduced membrane microvilli ( Fig. 4H and I), a characteristic of fibrosing liver disease [31]. HBV infection has been considered as a driving force for epithelialmesenchymal transition (EMT) in liver cancer development [32]. In infected LOs, we found the expression of EMT markers SNAI2 (Snail Family Transcriptional Repressor 2) and TWIST1 (Twist Family BHLH Transcription Factor 1) was significantly up-regulated (Fig. S4F). These findings indicate that this organoid infection system could recapitulate virus induced hepatic dysfunction. During virus infection, the host innate immune cells recruited into the infection site produce interferons (IFNs) that help inhibit and eliminate the virus [33]. To mimic this immune defense, we treated infected hiPSC-LOs with IFNα, and found that IFNα could induce transcription of antiviral genes, including VIPERIN (virus inhibitory protein, endoplasmic reticulum-associated, interferon-inducible), ISG15 (interferon stimulated gene 15), ISG20 (interferon stimulated gene 20), and MX2 (Myxovirus Resistance Protein 2) (Fig. S5A). Upon IFNα and IFNγ treatment, virus replication was significantly suppressed with downregulated expression of pgRNA (Fig. S5B), and a marked inhibition in expression of hepatic genes in the infected hiPSC-LOs (Fig. S5E). These observations suggest that the innate immune activation could effectively inhibit virus replication but would induce additional hepatic injury in hiPSC-LOs. Discussion In the current study, we developed a novel method to generate functional liver organoids with a 3D microwell system (Table S2). 
In this new culture system, liver organoids with small diameters (approximately 200 μm) were generated, with improved nutrient permeation and absorption, resulting in effective hepatic differentiation and susceptibility to HBV infection. The susceptibility of hiPSC-LOs to infection was comparable to cryopreserved PHHs. Moreover, this organoid system can mimic the virus induced hepatic dysfunction, indicating that infection in hiPSC-LOs might be able to recapitulate in vivo virus -host interactions. With exogenous expression of human NTCP, different donor derived human hepatocarcinoma cells could gain susceptibility to HBV infection, but their susceptibility varied among cells [34]. Meanwhile, mouse hepatocytes overexpressing human NTCP were not susceptible to HBV infection [35]. These studies have confirmed that NTCP is a necessary factor but not sufficient for HBV infection; other human liver specific factors would also be necessary in the regulation of HBV life cycle, and these factors may be related to the host's genetic background. To test this idea, we compared the HBV infection in LOs generated from different genetic background and observed different levels of susceptibility among them (Fig. S3B). Furthermore, we found that hiPSC-LOs and hiPSC-HLCs generated from same genetic background had comparable expression of NTCP, but had significant difference in virus susceptibility, suggesting that hepatic lineages in distinct differentiation stages might express different levels of liver-specific factors important for virus infection. Thus, further detailed analysis of the differences between single donor-derived LO and HLC would help in the identification of potential risk and resistance factors for infection. Our novel functional hiPSC-LO provides a new avenue to investigate the role of the host genetic background in HBV infection and prognosis of individual infection and has the potential to be used in personalized hepatitis treatment. HBV was not considered as a cytopathic virus because it was believed that the virus itself might not cause the hepatocellular damage in acute and chronic HBV infection [36]; some studies have also stated that HBV infection could not induce hepatic apoptosis [37,38]. However, these models likely display cancer cell characteristics and unstable hepatic function, due to which they probably could not reproduce normal viral cytopathic effects. Besides, subsequent studies have observed that HBV infection could induce hepatic damage in patients and mouse models with severe immunosuppression [39,40], suggesting that HBV may indeed cause cytopathic effects on hepatocytes in vivo, which might be taken care of by rapid host immune responses. In this study, we accidentally found that HBV could induce hepatic dysfunction of hiPSC-LOs with significantly reduced hepatic function, induced release of hepatic injury markers, and altered hepatic ultrastructure. Functional LOs generated without immune cells, when infected, displayed an impaired hepatic function, further favoring the idea that HBV might be a cytopathic virus. Although the mechanisms underlying the cytopathic effects of the virus have not yet been defined, the accumulation of HBV proteins has been reported to result in the appearance of cellular vacuolization in hepatocytes [39], and to induce stress and increase reactive oxygen species (ROS) in the endoplasmic reticulum [41]. The increased ROS might further induce autophagy and disrupt membrane lipids to alter hepatic ultrastructure [42,43]. 
In this study, we found that HBV infection did not induce cell apoptosis or lysis of hiPSC-LOs (Fig. S4D), supporting the view that HBV might be a cytopathic virus, but not a cytolytic one. The highly efficient and fast hepatic innate immune response to pathogen is initiated through IFNs [44]. Inhibition of IFN activation could induce robust replication of HBV in PHHs [13], but this HBVactivated IFN response in hepatocytes was not detected in hiPSC-LOs (Fig. S5F) or in other infected hepatic cells [45]. This loss of selfdefense might advance the stealthy replication of HBV in hiPSC-LOs. Although studies have suggested that IFNs inhibit HBV replication in a non-cytolytic manner [46,47], our results suggest that IFNs could efficiently inhibit viral replication, but could also inhibit hepatic function in the infected hiPSCs-LOs. This aggravated hepatic injury might be a side effect of IFNs. This IFN-mediated liver injury has also been observed in mice models and clinical trials [48,49]. We were unable to observe IFN induced cccDNA degradation in infected LOs, possibly because of the low APOBEC3A expression in hiPSC-LOs ( Fig. S5C and S5D) [47]. These observations suggest that the organoids may still have some characteristics different from adult hepatocytes. Thus, further efforts are required to generate adult liver-like organoids from hiPSCs. In summary, we have generated functional hiPSC-LOs that can efficiently recapitulate host-virus interactions by mimicking the virus life cycle and display HBV-induced hepatic dysfunction, indicating that this LO may be a reliable and feasible personalized infection model for individualized hepatitis study and treatments. Conflicts of interest The authors declare no competing financial interests. Research in context Evidence suggests that the genetic background of patients with Hepatitis B influences the markedly heterogeneous outcomes seen across these patients. However, very few infection models exist that can represent a patient's genetic background. Our new technique makes this possible by generation of functional liver organoids using human induced pluripotent stem cells (hiPSCs). The hiPSC-derived functional liver organoids can be a robust and long-term HBV infection model, which recapitulates viral lifecycle and virus-induced hepatic dysfunction. It provides a promising approach to understand the precise roles of genetic background in virus-induced host outcomes and develop personalized medicine for hepatitis B patients.
2018-08-19T21:18:32.365Z
2018-08-16T00:00:00.000
{ "year": 2018, "sha1": "b550bd47f893f9bb25ba848a57e26c56dd98c6a7", "oa_license": "CCBYNCND", "oa_url": "http://www.thelancet.com/article/S2352396418303001/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b550bd47f893f9bb25ba848a57e26c56dd98c6a7", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
239618924
pes2o/s2orc
v3-fos-license
Dietary Management for Faecal Microbiota Transplant: An International Survey of Clinical and Research Practice, Knowledge and Attitudes Faecal microbiota transplantation (FMT) involves homogenisation and infusion of stool from a healthy, highly screened individual into the bowel of an unwell recipient. Dietary intake is an important modulator of the gut microbiota. Currently there are no clinical practice recommendations available to provide patients or stool donors with dietary advice for FMT. This study aimed to conduct an international survey to examine health professionals and researchers' attitudes, knowledge and current practice recommendations for diet in patients undergoing FMT. An online, cross-sectional, international survey comprising of health professionals and researchers managing patients undergoing treatment with FMT was conducted between July-October 2020. Purposeful and snowball sampling techniques were employed to identify eligible participants who were sent an email invitation and two email reminders with a link to participate in the electronic survey. The survey comprised 21 questions covering demographics, current practice, beliefs and future directions regarding FMT and diet. Closed responses were calculated as proportions of total responses. Open-ended responses were systematically categorised. Common themes were identified from recurring categories. Fifty-eight (M 60%) participants from 14 countries completed the survey. Participants were gastroenterologists (55%), with 1-5 years' experience working in FMT (48%) and treating up to ten patients with FMT per month (74%). Participants agreed that diet was an important consideration for FMT recipients and stool donors (both 71%), and that it would affect the outcomes of FMT. However, they did not feel confident in providing dietary advice to patients, nor that there was sufficient evidence to provide dietary advice and this was reflected in their practice. Future research must collect information on the dietary intake of patients and donors to better understand the relationship between diet and FMT outcomes. In clinical practice, promotion of healthy eating guidelines aligns with current practice and literature.
Keywords: faecal microbiota transplant (FMT), microbiome, dietary management, fibre, gastroenterology

INTRODUCTION Faecal microbiota transplantation (FMT) is the process in which stool collected from a healthy, highly screened individual is homogenised, filtered and subsequently infused into the bowel of an unwell recipient. FMT aims to complement and re-establish more diversified microbiota within the gut of the recipient. Current FMT delivery methods vary, ranging from infusion into the caecum via colonoscope, to rectal enema infusions, and oral capsules (1). FMT treatment has shown marked success in the eradication of Clostridium difficile infection (CDI), with a cure rate of 96% reported in one randomised controlled trial (2). FMT has also shown promise in the treatment of chronic gastrointestinal conditions such as Ulcerative Colitis (UC), Crohn's Disease (CD) and Irritable Bowel Syndrome (IBS). A 2017 meta-analysis reported a pooled efficacy for clinical symptom remission of 36% in UC and 52% in CD (3). Similarly, in IBS the overall clinical response rate was 49% (4). FMT is also emerging as a potential treatment option for a variety of non-gastrointestinal conditions; for example, case reports have shown improvements in Parkinson's Disease (5), Multiple Sclerosis (6), Alopecia (7), Depression, and Anxiety (8). Dietary intake, in particular dietary fibre, is an important modulator of gut microbiota composition. Fibres with fermentable characteristics are substrates for microbial populations in the colon, leading to the production of various metabolites. Dietary fibre interventions in healthy participants can influence bacterial abundance (9) and gut bacterial ecology (10). Subsequently, it has been hypothesised that the dietary patterns of donors and recipients could be a key predictor of the short- and long-term response to FMT treatment. However, dietary assessment of donors and recipients and the role of dietary interventions in supporting FMT have only just begun to be described (11,12). Pilot studies examining the relationship between recipient fibre intake and FMT outcomes have demonstrated a positive trend (13,14). A study by Wei et al. compared the outcomes of UC patients receiving FMT plus a known prebiotic fibre (pectin) supplement with those receiving FMT alone; their results suggested that supplementation with pectin delayed the loss of diversity of the transplanted gut microbiota and enhanced the effects of the FMT (15). Similarly, a study comparing the outcomes of patients with constipation receiving a soluble fibre supplement plus FMT with FMT alone showed significantly improved stool frequency and consistency in the supplemented group in both the short and long term (16,17).
Current consensus guidelines for donor screening do not include an assessment of, or requirement for, a specific diet for FMT donors (1,18). One abstract study found that FMT stool donors consumed higher fibre diets than the general American population (26 vs. 18 g/day) (19). More recently, a focus on donors following specific dietary patterns such as veganism and Mediterranean diets (20) has emerged. For example, a trial of FMT treatment for metabolic syndrome utilising stool from a lean vegan donor demonstrated changes in recipient gut microbiota composition but not in metabolic markers (21). Overall, despite a clear relationship between diet and the gut microbiome, and between FMT and the gut microbiome, there is a distinct knowledge gap regarding the relationship between diet and FMT. Currently there are no clinical practice recommendations available to clinicians for providing patients or stool donors with dietary advice for FMT. However, anecdotally, dietary advice is one of the most common things a patient undergoing FMT treatment will ask for. In addition, as described above, dietary patterns are hypothesised to be a key modulator of response to FMT treatment. Knowledge, attitude and practice surveys are a common tool used to collect information on what is known, believed and done in relation to a particular topic. These surveys are useful for establishing baseline practices and identifying needs, problems and barriers (22). Therefore, this study aimed to conduct an international survey to examine health professionals' and researchers' attitudes, knowledge and current practice recommendations for diet in patients undergoing FMT.

MATERIALS AND METHODS This study was an online, cross-sectional, international survey comprising health professionals and researchers managing patients undergoing treatment with FMT. Purposeful and snowball sampling techniques were employed to recruit a specific, yet professionally and geographically diverse, sample of clinicians and researchers. Eligible participants were identified from professional FMT working group mailing lists, professional FMT network contacts, lead researchers of clinical trials in FMT identified on clinicaltrials.gov, clinicaltrialsregister.eu and the Australian and New Zealand Clinical Trials Register, clinics/hospitals/universities offering FMT treatment, and lead authors of recently published FMT academic articles. All eligible participants with available email addresses were sent an email invitation in July 2020 with a link to participate in the electronic survey via the online survey tool SurveyMonkey (23). The survey had 21 questions and took approximately 10 min to complete. The survey was designed by expert dietetic and FMT researchers and included questions on demographics, current practice, beliefs and future directions regarding FMT and diet (Table 1). Participation was anonymous and consent was implied on completion of the questionnaire. Participants were invited to share the survey invitation and link with professional contacts. A reminder email was sent at 1 and 2 months after the initial invitation, and the survey was closed in October 2020 after being open for 3 months. Ethics approval was obtained from the institutional Human Research Ethics Committee (CDD20/C03). Data were exported directly from the electronic survey into an Excel database. Closed responses were calculated as proportions of total responses and presented as percentages. Open-ended responses were systematically examined and categorised.
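As a minimal illustration of how closed responses can be tabulated as proportions of total responses, the following sketch uses Python's standard library; the response values are invented and do not come from the survey data.

```python
# Tabulating closed survey responses as percentages of total responses.
# The responses below are invented for illustration only.
from collections import Counter

responses = ["Agree", "Agree", "Neutral", "Disagree", "Agree", "Agree",
             "Neutral", "Agree", "Disagree", "Agree"]

counts = Counter(responses)
total = len(responses)
for option, n in counts.most_common():
    print(f"{option}: {n} ({n / total:.0%})")
```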
Common themes were identified from recurring categories. Where possible, common themes were also calculated as proportions of total responses to provide a broader understanding of the closed responses.

RESULTS

A total of 380 potential participants with available email addresses were identified and contacted. Fifty-eight responses were received, giving an estimated participation rate of 15%. Participants were more likely to be male, aged 31-40 years, and to hold a medical degree. Of the participants from 14 countries, approximately half of respondents were practicing as gastroenterologists and resided in Australia or the United States of America (USA). Most participants reported between 1 and 5 years' experience working in FMT and treating up to ten patients with FMT per month. CDI was the most common indication participants treated with FMT, followed by UC (Table 2). Participants were asked to provide their level of agreement or disagreement with statements relating to diet and FMT (Table 3). Overall, two-thirds (66%) of participants agreed that diet would impact an individual's response to FMT. They also agreed that the diet of both patients undergoing FMT and stool donors should be considered (both 71%). Interestingly, less than half (42%) of participants indicated that they felt confident providing dietary recommendations to patients undergoing FMT. Participants were also asked about their current clinical/research practice for patients undergoing FMT. Over half of the participants (54%) reported that they did not collect dietary information from patients undergoing FMT. Of those that did report collecting dietary information (46%), this was often within a clinical trial setting and varied from "type of diet" to food frequency questionnaires or food diaries. Participants did not routinely recommend patients see a dietitian/nutritionist before or after FMT (69%). Reasons included dietitians being unfamiliar with FMT, nurses or other team members providing the nutrition information, or referral only if indicated by other circumstances such as a restricted diet or malnutrition. More specifically, current clinical/research practice for patients prior to and post FMT was examined. Close to three-quarters (72%) of participants reported that they did not provide any dietary advice or recommend a specific diet to recipients prior to FMT. Of those that did provide dietary advice before FMT (28%), most recommended a high fibre/high prebiotic/high fruit and vegetable diet (10%). Conversely, a low fibre/low FODMAP diet was also recommended (9%), as was maintenance of the usual diet (3%). The majority (91%) of participants did not suggest dietary supplements prior to FMT. Of those that did recommend supplements (9%), fibre supplements were the most commonly recommended (5%). Current clinical/research practice reported by participants for patients after FMT was similar to that before FMT. Just over half (57%) of participants did not recommend patients follow a specific diet after FMT. Of the 43% who did provide dietary recommendations, the majority (33%) focused on healthy eating guidelines and/or a high fibre diet. Fermented foods and dietary changes for symptom management were also recommended (both 5%). Dietary supplements after FMT were not routinely recommended (84%). Ten percent of participants recommended patients take a fibre supplement after FMT.
Two participants suggested probiotic supplements and one promoted butyrate supplements. Current dietary recommendations in clinical/research practice for stool donors were also examined. The majority (60%) of participants did not provide dietary recommendations or require a specific diet for stool donors. Dietary recommendations that were provided to stool donors included healthy eating guidelines and/or a high fibre diet (22%), avoidance of processed foods or avoidance of raw or risky foods, especially meats and seafood (both 9%), and avoidance of common food allergens or foods recipients are allergic to (7%). Dietary supplements were not recommended for stool donors (96%). Facilitators and barriers to dietary management in FMT were explored through open-ended questions. Facilitators identified included collaboration with a clinical nutrition professional, and effective communication and dietary education of patients. Two common themes surrounding barriers to effective dietary management of patients undergoing FMT were identified. Firstly, a lack of available evidence and research to support the role of diet in FMT; this also included recognition of the provider's own lack of knowledge. Secondly, the limited role of, or lack of access to, dietitians/nutritionists and a lack of dietary education resources. Participants also reported time constraints, poor patient compliance due to apathy or difficulty coping with dietary change physically (symptoms) or emotionally, and the specificity of patients' dietary requirements (e.g., food intolerances) as barriers. Two main areas for further research were identified by participants. Firstly, the importance of a patient's diet in the success of FMT. It was noted that this would need to be indication-specific, with one participant suggesting a limited role of diet in CDI. There was also the question of whether dietary preparation, e.g., fasting prior to FMT, was important. Secondly, the importance of, and the best diet for, donors for the success of FMT.

DISCUSSION

To the best of our knowledge, this is the first study to examine health professionals' and researchers' attitudes, knowledge and current practice recommendations for diet in patients undergoing FMT. Overall, health professionals and researchers in this group agreed that diet was an important consideration for FMT recipients and stool donors, and that it would affect the outcomes of FMT. However, they did not feel confident in providing dietary advice to patients, nor that there was sufficient evidence to provide dietary advice, and this was reflected in their practice. Overall, dietary advice was not routinely provided to patients before or after FMT. Current research regarding the role of a patient's diet and dietary supplements in FMT is limited. One animal study demonstrated that initially, post FMT, the microbial profile of mouse stool was aligned with that of the donor, irrespective of diet; however, by week 22, the faecal microbiome profile no longer reflected the donor but could be differentiated by the diet (24). In a pilot study in humans, a diet higher in fruit and fibre was associated with successful treatment outcomes in CDI (13). Similarly, remission was achieved in a case report of severe UC treated with FMT and a low sulphur, high fibre diet (25). In addition, dietary supplementation with fibre has been shown to enhance clinical outcomes of participants receiving FMT for UC and constipation compared to those receiving FMT alone (15,16).
Overall, dietary data are not currently routinely collected or reported in FMT studies. Generating information in this area should start with establishing the baseline diets of patients undergoing FMT. Subsequently, associations identified between diet and participant outcomes for specific indications could be used to design animal studies and human intervention trials. The limited available data in this area were reflected in our cohort's dietary advice, with the majority of participants not providing dietary advice or recommending dietary supplements. Conversely, results from this survey indicated that when dietary advice was provided to patients, it was consistent with the current available evidence (13,15,16) and healthy eating guidelines (26). Globally, healthy eating guidelines promote fruit and vegetable consumption and limit alcohol and highly processed foods and beverages (26). These guidelines are backed by high-quality evidence, align with prevention or remission guidelines for most chronic health conditions, are simple, and pose no risk to patients (27)(28)(29). Evidence also suggests that the general population (30)(31)(32) and patients with gastrointestinal conditions rarely meet these guidelines (33)(34)(35). Therefore, a high fibre diet promoting fruit and vegetable consumption, in line with healthy eating guidelines, could be advised for patients undergoing FMT. Other reasons for the lack of dietary advice provided to patients undergoing FMT were a lack of access to suitably trained dietitians, time constraints, poor patient compliance and specific dietary requirements. Therefore, upskilling physicians and dietitians in this area is important for ensuring quality of care. Dietitians are health professionals with specific knowledge and training in the application of the science of food and nutrition to improve the health outcomes of individuals (36). Many participants indicated that including a clinical nutrition professional in the care of patients undergoing FMT was an important facilitator of effective dietary management of these patients. Certainly, including a dietitian in the care of patients undergoing FMT may overcome many of the barriers identified by participants. For example, a dietitian can assist with patient compliance and ensure tailored advice for specific dietary restrictions (37)(38)(39). Conversely, some participants indicated that dietitians were unfamiliar with FMT, indicating a need for further training in this area. However, if the current dietary advice that should be provided to patients is in line with national dietary guidelines, then dietitians are well equipped to support patients (36,40). Overall, including a dietitian in the care of patients undergoing FMT will likely provide benefit. Increasing accessibility to dietitians will likely be country-specific and should also be considered. Training and upskilling of dietitians and health care providers in FMT and the development of specific FMT dietary education resources are also considerations for the future. Results from this study indicated that dietary advice was not routinely provided to FMT stool donors. This is in line with a recent systematic literature review and meta-analysis of donor screening procedures (41) and current consensus guidelines for FMT production, which focus on potential pathogen transmission (1,18). In our study, if donors were provided with dietary advice, a high fibre diet in line with healthy eating guidelines was recommended.
Interestingly, one participant indicated that a number of their donors were vegetarian, whilst another suggested that donors already had healthy dietary habits. A study from the United States examined the diets of donors and identified that their diets were similar to those of the general American population, although higher in fibre (26 g vs. 18 g/day) (19). However, this is still below recommended intakes (42). The importance of diet and the best diet for donors remain to be elucidated. Evidence from animal models suggests that FMT from a donor on a high fibre diet promoted greater improvements in emphysema compared to diet or FMT alone (43). Increasing evidence regarding the impact of stool donor microbial composition on patient outcomes also suggests that diet will have an important role. For example, donor stool rich in Bifidobacterium, known to be enhanced by a fibre-rich diet (44), was associated with increased therapeutic efficacy of FMT in patients with IBS (45). Furthermore, a recent study examining the effects of diet-modulated autologous FMT in preventing weight regain has shown the potential for specific dietary patterns to attenuate weight regain (20). However, diet as a donor selection criterion will need to be carefully balanced against donor recruitment procedures, which are already highly restrictive (46). Researchers should incorporate dietary data collection from stool donors into their studies in order to generate greater knowledge in this area. Overall, this study met the aim of conducting an international survey and examining health professionals' and researchers' attitudes, knowledge and current practice regarding diet in patients undergoing FMT. The results establish a baseline of dietary recommendation practice for patients undergoing FMT and for donors. The survey design was beneficial in enabling the views of participants globally to be captured, and the participation rate was in line with physician surveys (47). However, it is recognised that there is a potential for response bias and that viewpoints may have been missed. A number of participants also reported limited experience in FMT treatment, which may also have biased the results. Additionally, the survey questions did not discriminate dietary advice by treatment indication. However, given that three-quarters of respondents did not provide dietary advice, it is unlikely that this would influence the results. Overall, it is clear that further research into the role of donors' and recipients' diets is of interest and importance. Clinicians and researchers should start by establishing baseline diets of patients undergoing FMT for all indications. Combined FMT and dietary intervention studies for UC, constipation and obesity should also be conducted. Incorporation of dietary intake data collection from donors into research studies is also recommended. This study examined health professionals' and researchers' attitudes, knowledge and current practice recommendations for diet in patients undergoing FMT. Health professionals and researchers in this group agreed that diet was an important consideration for patients and stool donors, and that it would influence the outcomes of FMT. However, in practice, dietary advice was not provided to patients, as respondents did not feel confident in providing it, nor that there was sufficient evidence to do so. Future research must collect information on the dietary intake of patients and donors to better understand the relationship between diet and FMT outcomes.
In clinical practice, promotion of healthy eating guidelines, especially fruit and vegetable consumption, aligns with current practice and the literature and can be encouraged for patients undergoing FMT and stool donors.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Centre for Digestive Diseases Human Research Ethics Committee. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

AC was responsible for the study design, data collection, data analysis, and drafting of the manuscript. AG was responsible for support of study design, data analysis, and manuscript drafting. TB was responsible for support of study design, data analysis, and manuscript drafting. All authors contributed to the article and approved the submitted version.
Progress towards achieving child survival goals in Kenya after devolution: Geospatial analysis with scenario-based projections, 2015–2025

Subnational projections of under-5 mortality (U5M) have increasingly become an essential planning tool to support the Sustainable Development Goals (SDGs) agenda and strategies for improving child survival. To support child health policy, planning, and the tracking of child development goals in Kenya, we projected U5M at the units of health decision-making. County-specific annual U5M rates were estimated using a multivariable Bayesian space-time hierarchical model based on intervention coverage from four alternate intervention scale-up scenarios, assuming 1) the highest subnational intervention coverage in 2014, 2) projected coverage based on the fastest county-specific rate of change observed in the period 2003–2014 for each intervention, 3) the projected national coverage based on 2003–2014 trends, and 4) country-specific targets of intervention coverage, each relative to a business-as-usual (BAU) scenario. We compared the percentage change in U5M for the four scale-up scenarios relative to BAU and examined the likelihood of reaching the SDG 3.2 target of at least as low as 25 deaths/1,000 livebirths by 2022 and 2025. Projections based on 10 factors assuming BAU showed marginal reductions in U5M across counties, with all counties except Mandera county not achieving the SDG 3.2 target by 2025. Further, substantial reductions in U5M would be achieved under the various intervention scale-up scenarios, with 63.8% (30), 74.5% (35), 46.8% (22) and 61.7% (29) of counties achieving the SDG target for scenarios 1, 2, 3 and 4, respectively, by 2025. Scenario 2 yielded the highest reductions in U5M, with individual scale-up of access to improved water, recommended treatment of fever and accelerated HIV prevalence reduction showing considerable impact on U5M reduction (≥ 20%) relative to BAU. Our results indicate that sustaining an ambitious intervention scale-up strategy matching the fastest rate observed between 2003–2014 would substantially reduce U5M in Kenya. However, despite this ambitious scale-up scenario, 25% (12 of 47) of Kenya's counties would still not achieve the SDG 3.2 target by 2025.

Background

Strategic decision-making processes are a critical element in health care and public health to inform the planning of future financial resources. Reliable health forecasts are essential for health service delivery, as they can enhance the delivery of health services and promote investments that make better use of limited resources. Therefore, tools to support stakeholders and policy makers in evidence-based decision-making have become increasingly important [1]. Various approaches are used in strategic decision-making in public health [2][3][4][5]. Scenario-based approaches [6] aim to connect a future scenario with the present while illustrating key decision points [1] and outline possible hypothetical, non-arbitrary futures [7]. When combined with other sources of information, scenario-based modelling can improve and facilitate strategic and precise decision-making. In the last decade, the integration of scenario modelling in forecasting tools such as the Lives Saved Tool (LiST), decomposition analysis, and the global burden of disease (GBD) studies has been applied to identify knowledge gaps and propose new actions for reducing child mortality [8][9][10].
To improve child health outcomes and track progress towards targets during the Sustainable Development Goals (SDGs) era, such forecasting tools provide global, regional, and national trends that highlight how ongoing and future sustainable development efforts can be optimised. Additionally, reliable health forecasts that quantify inequities in child mortality within countries are prerequisites for planning equitable health interventions for the accelerated reduction of under-five mortality (U5M). However, such data are often not available or are aggregated at the national level, which masks local-level disparities [11]. There are few examples of scenario-based projections that evaluate progress in U5M reductions at the national level in Kenya. Keats and colleagues [12] identified high-impact interventions for more robust intervention programs that accelerate child mortality reductions. They predicted that scaling up maternal, newborn and child health (MNCH) intervention packages to a coverage of 90% (an ideal scenario) would avert over 70% of under-five deaths by 2030. In a similar study, Hategeka et al. estimated that approximately 10,300 under-five deaths would be averted by scaling up ten community-level interventions to 99% by 2030 [13]. Although both studies indicate substantial declines in U5M by 2030, neither specifically examined disparities in the potential impact of scaling up interventions and reducing disease prevalence at the units of health decision-making (counties) in Kenya. A subnational analysis is crucial to inform policies aimed at eliminating health gaps in Kenya [14,15]. Additionally, the ideal intervention scale-up scenarios evaluated are extremely ambitious and fail to account for local priorities, or for competing national priorities and resource constraints that determine intervention scale-up decision processes and implementation [16][17][18]. These knowledge gaps present the need for a more pragmatic description of the impact of scaling up interventions on U5M in Kenya, informed by disaggregated local data and priorities grounded in actual policy-making processes outlined in the national or subnational strategic health plans rather than idealised targets. This study uses county-specific information on U5M and intervention coverage to forecast subnational U5M rates based on the most recent available nationally representative health survey, collected in 2014 in Kenya. Two pragmatic time-points of intervention coverage, 2022 (mid-term) and 2025 (end-term), were used to project U5M rates based on four intervention scale-up scenarios, assuming that all counties achieve i) the highest subnational intervention coverage in 2014, ii) projected coverage based on the fastest county-specific rate of change observed in the period between 2003-2014 for each intervention, iii) the projected national coverage based on 2003-2014 trends, or iv) the country-specific targets of intervention coverage. We then compare projected U5M with the rates expected under current trajectories of intervention coverage (business as usual). Finally, we track the achievement of SDG 3.2 across counties. This information will support subnational governments to better anticipate the future impact of current actions on specific outcomes and can influence long-term planning and investments.

Country context

Since the early 2000s, child mortality has declined remarkably in Kenya.
Corresponding to the progress in child survival were continuous efforts to implement public health initiatives aligned with the Millennium Development Goals (MDGs), which notably improved the coverage of maternal, newborn and child health (MNCH) intervention packages [12,[19][20][21][22][23]. However, significant challenges remain in attaining the SDG 3.2 target and the Kenya Vision 2030 long-term health objectives. This is compounded by underinvestment in health, underutilisation of health services and inequities in health service access within the country [21,22,[24][25][26]. To effectively tackle challenges in attaining the SDG target, analyses focusing on drivers of inequities in child health are essential for understanding the contributions of competing risk factors and interventions, to inform targeting and planning by decision-makers so that the gains made can be sustained and further accelerated. We have previously sought to address these gaps by assessing the contributions of 43 factors associated with U5M for the period 1993 to 2014 sub-nationally in Kenya [26]. That analysis included a set of all potential predictors of U5M, including those not directly amenable to intervention, over a retrospective period. From the study, policy makers were able to understand the role played by different determinants in the changes observed in child mortality during the MDG era [27]. Here, however, we focus on identifying the most influential factors of U5M amenable to intervention, to inform targeted disease control, better resource allocation, a focus on equity and maximised impact during the SDG era. This is particularly important for informing better planning by policy makers targeting the most influential factors within defined projection scenarios that focus on accelerating declines in child mortality to meet both national (2022 and 2025) and global SDG targets (2025) on child survival. A summary of the differences in aims and methods between the current study and our previous study assessing the contributions of 43 factors associated with U5M is provided (Table A in S1 File). In 2013, Kenya devolved health service delivery to 47 subnational units (counties) with the central objective of addressing inequities due to systematic disparities in access and healthcare service utilisation [28]. Accordingly, health service delivery is structured around the Kenya Essential Package for Health (KEPH) in a four-level system: community, primary health, county referral, and national referral services, which underpins the SDG principle of universal health coverage in Kenya [29]. County governments are responsible for providing services in three of the four levels, while the national government provides national referral services [30]. Recommendations from performance reviews of previous health outcomes and stakeholder evaluations, used to guide health agendas across the country, are operationalised at the national level according to the Kenya Health Sector Strategic Plan (KHSSP) [31] or at the county level using the County Integrated Development Plans (CIDPs) [32]. Currently, both plans define an overall framework for mid-term financing and intervention coverage priorities for the period 2018-2023. To further accelerate progress towards the reduction of under-five mortality, these strategic plans recommend integrated approaches to address the increased risk of child deaths among high-risk groups such as the immunocompromised.
Therefore, we considered intervention coverage and disease-specific prevalence reduction among high-risk groups as priority programmes amenable to intervention. Forecasting is used to quantify the key results desired at the end of each period, to demonstrate the outcomes and impacts expected from implementing the priority programmes outlined in the strategic plans, based on previous trends and within budgetary constraints.

Analysis overview

A subnational ecological analysis was carried out to project U5M rates under various scenarios of intervention scale-up after devolution in Kenya. First, a set of parsimonious factors significantly associated with U5M and amenable to interventions relevant to Kenya's health priority programmes [31] was selected. Second, average county-level annual rates of change (ARC) of U5M and the significant factors for the period between 2003 and 2014 were computed and used to estimate the continued trends from 2015 to 2025 (the business-as-usual scenario, BAU). Third, a multivariable Bayesian space-time hierarchical model was used to predict county-specific annual counterfactual U5M assuming coverage from four alternate intervention scale-up scenarios, relative to BAU. The percentage change in U5M between the BAU coverage and the counterfactual scenarios for each county, and U5M estimates in 2022 and 2025, were reported. Finally, projected U5M rates for the years 2022 and 2025 were compared to the SDG 3.2 target of achieving <25 deaths per 1,000 live births for children below five years of age by 2030 to assess the potential impact on U5M reduction at the county level in Kenya [33].

Data

U5M rates at the county level spanning the years 2003 to 2014 were available from previous work detailed elsewhere [34]. Briefly, the county-specific U5M estimates were generated from birth histories for each household survey and census undertaken between 1989-2014 using five demographic methods. A bespoke Bayesian spatial-temporal Gaussian process regression model accounting for heterogeneity in demographic methods, sample size and household sample surveys was used to smooth the raw mortality estimates from the demographic methods to predict county- and national-level U5M rates [34]. A broad range of factors known to influence U5M [12,20,[35][36][37], including maternal interventions, pregnancy-related interventions, child health care seeking and preventative interventions, child health status, water, sanitation and hygiene (WASH) indicators and community disease prevalence (Fig A in S1 File), available at the county level for the period 1993-2014, were considered for the analysis. Estimates of 43 factors were generated from household sample surveys and censuses conducted between 1993 and 2014 and smoothed using conditional autoregressive models as described elsewhere [22]. Of the 43 factors available, 29 factors (Fig A in S1 File) amenable to intervention were considered. Here, 'amenable to intervention' refers to factors wholly or substantially adaptable to increased coverage, or to priority programmes aimed at reducing disease prevalence among high-risk groups (the immunocompromised), grounded in Kenya's health policy-making processes for reducing U5M. Therefore, the prevalence of HIV and malnutrition indicators such as stunting were retained as surrogate markers (risk factors) to track the impact of multiple concurrent interventions, which often take integrated approaches to reducing the prevalence of HIV and stunting, and their effect on U5M [38,39].
Further, HIV and malnutrition, in addition to being health outcomes in their own right, increase vulnerability to mortality through immunocompromise, which increases susceptibility to opportunistic diseases. We restricted our analysis to the baseline period between 2003 and 2014, coinciding with a period of intervention scale-up, since most of the interventions, such as bed nets, vitamin A and iron supplements, were rolled out from the early 2000s [20,22].

Selection of interventions. Large-scale implementation of evidence-based interventions is an effective tool for accelerating progress towards the reduction of U5M and could prevent up to 70% of child deaths in low- and middle-income countries (LMICs) [15,40]. Drawing from a previous study [22], we identified 27 factors amenable to intervention and disease prevalence reduction targeting high-risk groups (HIV and stunting) shown to have an impact on U5M in Kenya, and employed a multi-stage approach to select the most influential factors using previous trends. The focus of the study is to assess the impact on U5M after varied scale-up of interventions and reduction of the prevalence of HIV and stunting. The modelling approach assumes that changes in U5M are the result of changes in intervention coverage or reduction of disease prevalence, and that the impact of confounders such as distal factors (for example, socioeconomic status) is mediated by changes in intervention coverage. Bivariate associations between U5M and factors were assessed using a simple log-linear regression (Table B in S1 File). Among the covariates with significant effects (p<0.2), factors whose contribution was captured by other variables under consideration were excluded to reduce circularity and confounding [41]. For example, the effects of DTP3, Polio3, measles and BCG vaccines were captured by fully immunised status and were thus excluded in favour of the fully immunised indicator. Variables were further reduced through an elastic net regression (ENR) by selecting the most predictive factors via the glmnet package in R [42]; an illustrative sketch of this selection step is shown below. ENR shrinks the coefficients of redundant variables exactly to zero, with the aim of selecting a subset of the original variables to build a model, while selecting the most significant variables among highly correlated variables [43][44][45]. Two ENR models were used to account for antenatal care visits (ANC) and interventions provided during ANC, given their collinearity (Table C in S1 File). In addition to all the significant factors from the bivariate analysis, model 1 included ANC and each intervention offered during the visit. Model 2 was similar to model 1 except that it assessed the combined effect of interventions delivered during ANC as one index (iron supplements, tetanus injection and vitamin A supplements) derived from principal component analysis [46] (Table C in S1 File). Factors with non-zero coefficients from the ENR model with the least mean squared error were used in the subsequent models to assess the effect of interventions on U5M [44,45].

Scenarios for projections. Four alternate intervention scale-up scenarios were used to project U5M estimates to 2022 and 2025. In the first time-point, baseline intervention coverage (2014) was scaled up to match the 2022 target values for the various scenarios, while in the second time-point, the intervention coverage attained in 2022 was scaled up to the respective targets in 2025.
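As flagged above, the study performed elastic-net selection with the glmnet package in R. Purely as an illustrative analogue (not the authors' code), the following Python sketch uses scikit-learn's cross-validated elastic net on simulated county-year data to keep only the factors with non-zero coefficients; all data, dimensions and coefficients here are invented.

```python
# Elastic-net variable selection: shrink redundant coefficients to zero.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_obs, n_factors = 47 * 12, 27              # county-year rows, candidate factors
X = rng.normal(size=(n_obs, n_factors))     # simulated coverage values
beta = np.zeros(n_factors)
beta[:5] = [0.8, -0.6, 0.5, -0.4, 0.3]      # only 5 truly influential factors
log_u5m = X @ beta + rng.normal(scale=0.5, size=n_obs)

# Cross-validate the L1/L2 mixing ratio and penalty strength.
model = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=5, random_state=0)
model.fit(StandardScaler().fit_transform(X), log_u5m)

selected = np.flatnonzero(model.coef_)      # indices of retained factors
print(f"kept {selected.size} of {n_factors} candidate factors:", selected)
```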
These two time points reflect a pragmatic approach to intervention scale-up, constituting mid-term and end-term goals aimed at achievable, sustainable improvement of health outcomes, proposed on the basis of time-bound fiscal allocations in Kenya [28,31,32]. The potential impact of these four scenarios was measured against BAU estimates. To compute BAU, county-specific annual rates of change (ARC) for the period 2003-2014 were calculated and used to project U5M estimates to 2025 (S2 File). We therefore assumed that the trend for each intervention would remain constant over the study period and applied the ARC to the baseline estimates (2014) to obtain BAU trends for the periods 2015-2022 and 2015-2025. The ARC is a constant rate of change between two time periods (2003 and 2014) expressed as a percentage (Eq 1):

ARC_i = [(y_{t2,i} / y_{t1,i})^{1/n} - 1] × 100 (Eq 1)

where y_{t,i} is the U5M for a given county i in year t and n is the number of years between the two rates (e.g., 12 years when computing the ARC from 2003 and 2014). The equation used to compute the ARC for each intervention's coverage from the 2022 or 2025 targets and the baseline coverage (2014 estimates) is provided in S1 File. The four projection scenarios are described in Table 1 and the scenario target values used are presented in Table 2. Each scenario reflected a unique ambition for expanding the coverage of interventions based on the respective rationale, yielding varied scale-up targets. This allowed evaluation of the impact of different combinations of intervention coverage on U5M, since each scenario represented different intervention coverage targets, with some scenarios having higher coverage of one intervention but lower coverage of another relative to the other scenarios. County-specific rates varied based on the corresponding baseline coverage (2014). For example, in scenario 1, if the coverage of health facility delivery for the best-performing county in 2014 was 61.5%, all counties were matched to this coverage by 2022 using their respective coverage in 2014 as baseline values.

Counterfactual analysis. A multivariable log-linear mixed-effect spatio-temporal Bayesian ecological model was formulated to estimate the adjusted associations of the final set of selected interventions with U5M using the WinBUGS package (version 1.4.3) [49]. The model included an intercept, fixed effects, spatial and temporal random effects, and a space-time interaction effect. The random effects were assumed to follow prior distributions that capture the spatial-temporal structure of U5M by borrowing information across space and time, with time-varying effects that influence all counties (unstructured) and those that are county-specific (structured). Assumptions about intervention coverage and how their associations with U5M were modelled are provided in S1 File, and the full specification of the model can be found in [26]. Coefficients of the full multivariable model were fitted using a two-chain Markov chain Monte Carlo (MCMC) simulation to improve the precision of the parameter estimates, with the posterior distributions of the parameters summarised by the mean and the 95% credible intervals (CI). A total of 25,000 iterations were run per chain, including a burn-in period of 5,000 per chain, to generate acceptable Monte Carlo errors (<5% of the posterior standard deviation).
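Convergence of such chains is assessed below with trace plots and the Gelman-Rubin statistic. As a generic, self-contained illustration of that diagnostic (not the study's WinBUGS output), the potential scale reduction factor for two simulated chains of posterior draws can be computed as follows; values near 1 indicate convergence.

```python
# Gelman-Rubin potential scale reduction factor for one parameter.
import numpy as np

def gelman_rubin(chains: np.ndarray) -> float:
    """R-hat for an array of shape (n_chains, n_draws)."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    v_hat = (n - 1) / n * W + B / n           # pooled variance estimate
    return float(np.sqrt(v_hat / W))

rng = np.random.default_rng(1)
draws = rng.normal(size=(2, 20_000))          # 2 chains after burn-in
print(f"R-hat = {gelman_rubin(draws):.4f}")   # ~1.0 here, as both chains agree
```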
Trace plots and the Gelman-Rubin statistic were used to assess model convergence towards the target distribution of the parameters, while the Monte Carlo error, the standard deviation and their ratio were used to evaluate model accuracy (the level of uncertainty of the estimated parameters) [50,51]. The additional value of using both trace plots and Gelman's statistic to assess convergence is that Gelman's statistic compares the spreads of the individual chains with the spread of all the draws pooled, indicating how closely the chains converge on the parameters' target distribution. We adopted a Gelman statistic threshold of less than 5%, which has previously been shown to indicate model stability [50]. The adjusted coefficients were used to predict county-specific annual counterfactual U5M estimates assuming intervention coverage from the four scale-up scenarios relative to coverage under BAU. The percentage change in U5M between BAU and each of the four scale-up scenarios was computed for each county to quantify the impact of all selected interventions jointly, as well as of each intervention, for the period 2015-2025. Finally, U5M estimates for the years 2022 and 2025 were compared against the SDG 3.2 target.

Results

Eleven parsimonious factors amenable to intervention were retained from the model-building process and included in the Bayesian ecological space-time mixed-effects regression; the details are presented in Fig A in S1 File. The 11 factors were four antenatal care visits, antimalarial use, breastfeeding within the first hour of birth, access to better sanitation, access to better water for drinking, health facility delivery, seeking treatment for fever, HIV prevalence, child ITN use, stunting and fully immunised status. Stunting was ultimately not statistically significant and hence was excluded from the final model used for the counterfactual analysis. Table 3 shows the results from the Bayesian ecological space-time mixed-effects regression. Increasing intervention coverage and reduced disease prevalence were associated with varied impacts on U5M rates (Table 3). The coefficients represent the effect of a one-unit (percentage point) change in a factor on under-five mortality, adjusting for all the other predictors included in the model. HIV prevalence was associated with the largest increase in U5M (6.5930; 95% CI: 4.962 to 8.8188), while seeking treatment after fever (-1.2550; 95% CI: -1.451 to -1.056) was associated with the greatest decline in U5M relative to the other factors (Table 3). All the model parameters in each successive iteration of the MCMC sampling chains converged, as illustrated by the trace plots (S3 File) as well as the Gelman statistic of <5% shown in Table 3. Estimates of the coefficients were within the acceptable range of uncertainty, with MC errors <1.5% for all predictors. The random effects parameters indicate relatively higher spatial variation (sigma.w) compared to temporal variation (sigma.t) in our analysis (Table 3).

Counterfactual U5M

Wide disparities in U5M reduction exist across counties under the different scale-up scenarios relative to the BAU coverage (Fig 1). Scenario 1 scale-up targets had the greatest impact on U5M across all the counties in 2022, while scenario 2 had the greatest impact in 2025. For scenario 1, the mean reduction was -28.4% (95% CI: -31.8 to -24.9) in 2022, ranging from -69.1% in Migori county to -110% in Mandera county relative to BAU. By 2025, the mean reduction in U5M improved to -31.3% (95% CI: -34.7 to -27.8) relative to BAU.
Thirty-eight percent (18 of 47) of the counties would achieve reductions of at least 30% in 2022, with an additional 13% (6 counties) when the timeline is extended to 2025 (Fig 1A). Scale-up using scenario 2 would achieve relatively similar reductions to scenario 1, of -26.5% (95% CI: -29.8 to -23.2) in 2022, with small improvements to -33.1% (95% CI: -36.9 to -29.4) in 2025 relative to BAU (Fig 1B). Notably, counties characterised by low baseline intervention coverage (marginalised), such as Wajir, Mandera, Garissa and Marsabit, attain greater reductions in U5M using scenario 1 scale-up, while scenario 2 would produce greater reductions for counties with relatively higher intervention coverage, such as Nairobi, Kirinyaga and Nyandarua (Fig 1A and 1B). Scale-up of interventions under scenario 3, using projected national coverage to 2022, would attain a smaller impact on U5M (relative to scenarios 1 and 2), with a mean reduction of -7.6% (95% CI: -11.6 to -3.7) in 2022 and a substantially greater impact of -22.8% (95% CI: -26.1 to -19.5) by 2025. Based on this scenario, in 2022, 30% (14 of 47) of the counties would have higher U5M rates relative to BAU. However, all these counties would achieve lower U5M rates in 2025 compared to BAU (Fig 1C). This reversal in trends in U5M rates from 2022 to 2025 was mainly associated with the scale-up of five interventions, corresponding to mean changes in U5M rates as follows: four antenatal care visits (-22%), health facility delivery (-19.4%), breastfeeding within the first hour after birth (-19.3%), HIV prevalence (-16%) and fully immunised status (-7.1%) (S4 File). Scenario 4 had an average change in U5M rates of -10.8% (95% CI: -14.6 to -7.0) and -29.9% (95% CI: -33.2 to -26.6) relative to BAU coverage by 2022 and 2025, respectively. Fifteen percent (7 of 47 counties) had worse U5M rates relative to BAU coverage, ranging from 13.5% in Mombasa to 0.4% in Taita Taveta in 2022. Individual intervention scale-up to the 2025 target values reflects further improvements in U5M rates in 2025 for the majority of interventions, except health facility delivery, child ITN use and access to better sanitation in the seven counties relative to BAU (S4 File). There were key differences across counties and scale-up scenarios with respect to the impact of individual interventions on U5M rates in 2025 (S4 File). Consequently, for optimal reduction in U5M, different combinations of target values across the scale-up scenarios can be derived. For example, scale-up of health facility deliveries, early breastfeeding and antimalarial use to scenario 1 targets produced the highest reductions in U5M rates compared to all the other scenarios. Further, intervening on improved access to safe water for drinking, antenatal care visits, full immunisation coverage and reduction in HIV prevalence based on scenario 2 targets, combined with improvement in seeking treatment after fever based on scenario 3 targets, would yield the greatest reductions in U5M rates. By 2025, projected U5M rates for BAU based on access to improved sanitation and child ITN use were similar to projected U5M rates from all the scale-up scenarios. In summary, reducing HIV prevalence (scenario 2), scale-up of access to improved water (scenario 2) and recommended treatment of fever (based on scenario 3) yield the highest reductions in U5M by 2025. The corresponding reductions in U5M would be -28.2% (95% CI: -55.2 to -1.1), -22.7% (95% CI: -44.5 to -1.0) and -19.2% (95% CI: -37.6 to -0.9), respectively.
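As a worked illustration of the projection and comparison arithmetic reported above, the sketch below applies a constant annual rate of change (Eq 1, assuming geometric compounding) to a hypothetical 2014 baseline, contrasts a scenario projection with BAU, and checks the result against the SDG 3.2 threshold; every number here is invented for illustration only.

```python
# Project U5M to 2025 under BAU and one hypothetical scale-up scenario.
SDG_TARGET = 25.0  # deaths per 1,000 livebirths

def project(u5m_2014: float, arc_pct: float, years: int) -> float:
    """Apply a constant annual rate of change (%) for `years` years."""
    return u5m_2014 * (1 + arc_pct / 100) ** years

u5m_2014, bau_arc = 60.0, -2.0               # hypothetical county baseline and ARC
bau_2025 = project(u5m_2014, bau_arc, 11)    # business-as-usual trajectory
scenario_2025 = project(u5m_2014, -7.5, 11)  # faster, scenario-style decline

pct_change = 100 * (scenario_2025 - bau_2025) / bau_2025
print(f"BAU 2025: {bau_2025:.1f}; scenario 2025: {scenario_2025:.1f} "
      f"({pct_change:+.1f}% vs BAU); SDG 3.2 met: {scenario_2025 <= SDG_TARGET}")
```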
Discussion

Our study extends previous efforts that provide evidence on progress towards improving child survival and attainment of the child-related sustainable development goals in Kenya [12,13] by evaluating the potential impact of scaling up multiple child health interventions on U5M reduction, for which sufficient evidence at the county level is lacking. We projected U5M rates at the county level based on historical trends for the period 2003 to 2014 (BAU) and illustrated the potential benefits of four alternative scale-up coverage scenarios (Table 1) for the period 2015-2025 to assess relative progress towards the SDG target of reducing U5M to at least as low as 25 deaths per 1,000 live births. Our findings indicate that without any acceleration in the pace of coverage of the selected interventions (BAU), the average U5M rate would be 51.0 (95% CI: 46.1-56.0) deaths per 1,000 livebirths, and all counties except Mandera county would not achieve the SDG target by 2025. The alternative projection scenarios show substantial advances towards achieving the SDG target, with scenario 2 (scale-up corresponding to the fastest rate of change in intervention coverage achieved between 2003-2014) resulting in the highest reductions in U5M rates relative to BAU (Fig 1). Overall, 63.8% (30), 74.5% (35), 46.8% (22) and 61.7% (29) of counties achieve the SDG target under scenario 1 (best-performing county coverage), scenario 2 (fastest rate of change), scenario 3 (projected national coverage) and scenario 4 (Kenya national health strategic plan targets), respectively, by 2025 (Fig 2). Projections based on BAU highlight the enormity of the challenge of achieving the SDG aspiration based on pre-existing trends in intervention coverage, as counties are making slow progress in reducing U5M (Fig 2A). Notably, fifty-one percent of the counties (24 of 47), located in the central and coastal regions of the country, were projected to have U5M rates >50 deaths per 1,000 livebirths in 2025 based on BAU. These results correspond to the findings of a recent study by Wakefield et al., who estimated a relatively slow decline in U5M across counties in 2020 and concluded that continuing past trends in annual rates of U5M reduction would not be sufficient to reach the SDG target for most counties. In addition, counties within the central, western and coastal regions were projected to have the highest burden of U5M [53]. Further, the UN Inter-agency Group for Child Mortality Estimation estimated the average U5M rate in Kenya to be 42.3 (95% CI: 33-55) deaths per 1,000 livebirths in 2019, indicating a potentially missed SDG target by 2030 [54][55][56][57][58], similar to our current findings. These findings highlight the need for evidence-based strategies to inform subnational efforts aimed at accelerating the pace of U5M decline to increase the likelihood of achieving the SDGs. However, the formulation of such strategies is challenged by insufficient evidence on the implications of intervention scale-up due to scarcity of data. Sample household surveys conducted every 3 to 5 years have been the main data sources and are often not powered to generate U5M estimates at the lower-level units used for decision-making. Long-term improved monitoring of U5M requires strengthening of data collection from routine health system information, such as the District Health Information System (DHIS), and health and demographic surveillance systems (HDSS) to gather timely, disaggregated data that can inform evidence-based decisions in Kenya [55][56][57][58].
Therefore, due to the lack of recent data from sample surveys or a strengthened HMIS, projections based on BAU may underestimate the magnitude of U5M reduction, especially for counties that have recently intensified interventions and investments to improve child health. In our analysis, scenario 2 yielded the highest reductions in U5M among all the intervention coverage scale-up scenarios, with an overall projection of 18.3 (95% CI: 16.0-20.6) deaths per 1,000 livebirths, translating to an absolute decline of 64.7% relative to BAU by 2025. This scenario illustrates the potential benefits of a rigorous scale-up approach that maintains a long-term focus on the optimal achievable coverage, while the other scenarios depict an opportunity for further improvement beyond the targets set. These projections should be interpreted with caution, since we assume that the fastest rate of change (2003-2014) would be maintained to 2025; in reality, the socio-economic and demographic environment into which children are born and raised does not evolve smoothly and can create unpredictable mortality shocks [22,59,60]. Additionally, the model does not capture the effects of health delivery systems, which influence the quality and utilisation of the interventions delivered and are crucial in reducing U5M [56]. Future analysis to evaluate the validity of the impact of the intervention scale-up scenarios is necessary. For example, the 2019 national census data, when available, can be used to validate the scenarios and create a baseline for future analyses. Strategising for health at subnational levels, as well as continued context-specific prioritisation of key drivers of U5M reduction, is crucial in reducing U5M inequities within the country. The impacts of individual factors, or combinations of the 10 factors considered in the analysis, on county-level U5M rates varied across the four scale-up scenarios (Fig 1 and S4 File). Overall, an accelerated pace of HIV prevalence reduction, scale-up of access to improved water and recommended treatment of fever would have considerable impact on U5M reduction (≥ 20%) using scenario 2 coverage (S4 File). Forecasted U5M rates for counties in the northeastern, eastern and northern Rift Valley regions, with worsening or consistently low coverage based on pre-existing trends relative to other areas of the country, showed steady improvements from 2015 to 2025, indicating substantial U5M reductions under all intervention coverage scale-up scenarios (Fig 1). Similarly, findings from other studies have reported that counties with relatively low coverage of interventions would have substantial gains in U5M reduction after deliberate prioritisation of vulnerable subpopulations often characterised by low intervention coverage and utilisation [13,20,61]. This is in line with the SDGs' overarching goal of reaching the most marginalised first. Conversely, counties with high intervention coverage in 2014, such as Kirinyaga, Kiambu, Nyeri and Nairobi, require an ambitious scale-up approach such as scenario 2, with consistent improvements that aim towards universal coverage, for substantial progress towards U5M reduction. These findings illustrate the potential benefits of a mixed approach to guide the targeting of intervention scale-up by characterising different counties based on pre-existing trends in intervention coverage, to ensure sustainable and equitable progress towards U5M reduction in Kenya.
For example, county-specific targets could aim at steady improvements in the coverage of high-impact interventions for counties deemed to have high coverage in 2014, as illustrated in scenario 2, while helping poorly performing counties to catch up with the best-performing counties' 2014 coverage, as illustrated in scenario 1. Aspects of our analysis reflect a comprehensive approach comparable to intervention scale-up strategies adopted within the Kenyan context based on time-bound fiscal budgets [28,31,32]. The intervention scale-up scenarios incorporated mid-term and long-term targets to assess the impact of time-bound implementation plans on U5M reduction. In our analysis, we note substantial shifts in U5M reduction between the 2022 (short-term) and 2025 (long-term) targets (Fig 1). For example, scenario 4, which is informed by local health strategies, shows accelerated U5M reduction post-2022 (Fig 1D), implying that the short-term intervention targets would yield marginal reductions in U5M rates for the majority of the counties relative to BAU. Gaps between the 2022 and 2025 scale-up scenarios quantify the impact of short-term priorities on future health trajectories, indicating looming stagnation, reversal or an accelerated pace of progress. Therefore, provisions that allow evaluation and re-adjustment of targets relative to evidence-based trends are an integral part of the development of robust policy. Overall, findings from this study demonstrate that Kenya has opportunities to further reduce U5M and that some scale-up scenarios are better positioned for improvement than others. Currently, the CIDPs provide targets to aid local-level planning and budgeting to ensure that basic human development goals, including child survival, are responsive to long-term national goals such as the Kenya Vision 2030 and the SDGs. The CIDPs focus on cross-cutting issues affecting development in the county for resource mobilisation [32], which requires evaluations to determine the collective effects achieved by scaling up different interventions on specific health outcomes such as U5M. In the past decade, there have been substantial investments in the development of standardised methods such as LiST and Spectrum that illustrate scenario projections aimed at assessing the impact of intervention scale-up on various health outcomes such as child mortality. These methods, however, focus on biomedical interventions used to evaluate disease-specific causes of death and often do not consider the effect of broader social determinants of health [62-64]. By contrast, in this study, we propose hypothetical scale-up scenarios based on historical trends as well as intervention coverage scale-up proposed by health stakeholders in the country for cost-effective interventions, taking into account broad social determinants such as improved water and sanitation, to compute future trends. Integration of comprehensive evidence-based projections, such as those presented here, is a starting point to aid better decision-making at the county level when planning policy for upcoming years and when deciding between investment possibilities to optimise resource utilisation.

Limitations

There were several study limitations. First, there were no available county-specific U5M rates or intervention coverage data for the period 2015-2020. Hence, the data used for the baseline year (2015) were model-based and subject to uncertainty [34].
Our modelling exercise assumes that historical trends in the associations between interventions and U5M from the observed data (2003-2014) would still apply over the projection period (2015-2025). Given the lack of current nationally representative data on U5M and its determinants, we retained the coefficients generated using 2003-2014 data, as it was impossible to rigorously assess the current associations post-2014 in Kenya. It is possible that the true decline in U5M rates for the period 2015-2025 is greater than we estimate for counties that recently intensified interventions and investments to reduce their U5M rates post-2014. However, we developed scenario-based projections reflecting a range of reasonable values from the recently available data sources to capture empirical, data-driven, county-specific trends (Table 2). In addition, disruptions in the provision and utilisation of routine health services and the broader socioeconomic effects of the COVID-19 pandemic threaten countries' ability to achieve SDG targets [65]. Recent estimates suggest a potential increase in child mortality of up to an additional 1.2 million child deaths in low-income and middle-income countries as a result of drops in essential health service utilisation associated with COVID-19 [36]. In our analysis, the subset of interventions identified as the most impactful in reducing U5M is based on pre-COVID-19 estimates of U5M trends and baseline intervention coverage. Large and disproportionate changes to the provision and utilisation of the identified interventions, or to the prevention and management of HIV/AIDS and malaria, may imply changes in the prioritisation and scale-up of interventions. Further work is required to investigate the acute impacts of COVID-19 on the prioritisation and scale-up of interventions with regard to U5M reductions. Second, potentially strong assumptions were made about intervention coverage and how its associations with U5M were modelled (S1 File). We employed linear interpolation to estimate the impact of the various interventions. This modelling process inherently assumes that the impact of increasing intervention coverage or reducing disease prevalence would be equivalent regardless of the initial values. For example, the impact of increasing coverage of antenatal care visits from 80% to 90% would be equivalent to that of increasing coverage from 10% to 20%. It is likely that scale-up would not follow the linear increments assumed in the model and that the effort required to scale up interventions from very low coverage would differ from that required when existing coverage is already high (e.g., >80%). We also assume that intervention scale-up affects U5M independently and additively. However, we cannot rule out that, depending on the nature of the interaction between interventions, scale-up of a particular intervention might influence the coverage levels of another. For example, increased coverage of access to clean water might lead to improved access to sanitation. These limitations suggest that future research should focus on improving both data availability and the integration of such data into scenario-based models. Further, the interventions reviewed were chosen for programmatic reasons.
Although the interventions studied were strongly associated with U5M and of high impact on its reduction, the inclusion of long-term-impact interventions such as maternal education and health system indicators such as quality of care would improve child mortality estimation for monitoring progress in U5M reduction. We interpret these results with caution as evidence of partial causal effects of the selected child survival interventions, assuming that no effective interventions were omitted from this analysis. We were unable to include data from all the available sources, such as routine data, when computing the parameter estimates, as only average values from household surveys at county units were available. This limitation is likely to have a non-systematic impact on the results and projections, since earlier studies have established the validity and reliability of survey estimates [66]. Finally, we considered HIV and stunting as risk factors that increase vulnerability to U5M through immunocompromise, which increases susceptibility to opportunistic diseases. However, an alternative model formulation could consider integrated packages that directly or indirectly reduce the burden of HIV and stunting on U5M when such data are available, for example, HIV counselling and testing, prevention of mother-to-child transmission (PMTCT) and prompt treatment of HIV opportunistic infections.

Conclusion

This analysis gives an account of the potential impact on U5M reduction resulting from acceleration programmes, illustrated using four hypothetical intervention scale-up scenarios at subnational units in Kenya. Of the ten factors significantly associated with U5M, emphasis on an accelerated pace of HIV prevalence reduction, scale-up of access to improved water and recommended treatment of fever would have considerable impact on U5M reduction. Without intensified efforts to scale up interventions (BAU), counties would achieve only marginal reductions in U5M. By contrast, substantial reductions in U5M would be attained under the scale-up scenarios evaluated, with the number of counties achieving the SDG 3.2 target by 2025 ranging from 22 (46.8%) to 35 (74.5%). Concerted efforts and resource allocation by local government as well as other key health stakeholders to accelerate the pace of progress and fulfil children's rights to health and development are critical. Such efforts need to be supported by well-designed and robust scale-up strategies to effectively reach sub-groups of the population, such as vulnerable populations characterised by poor intervention coverage and utilisation, as well as to promote universal coverage, for substantial progress towards achieving the child health-related SDG target in Kenya.

Supporting information

S1 File. Factor selection processes. Table A: a summary of the differences in aims and methods of the current study and our previous study assessing the contribution of factors associated with U5M [26]; Fig A: schematic strategy showing the factor selection process; Table B: determinants of child mortality amenable to interventions and crude association with under-five mortality for the period 2003-2014; and Table C
Lycium Barbarum (Wolfberry) Reduces Secondary Degeneration and Oxidative Stress, and Inhibits JNK Pathway in Retina after Partial Optic Nerve Transection

Our group has shown that the polysaccharides extracted from Lycium barbarum (LBP) are neuroprotective for retinal ganglion cells (RGCs) in different animal models. Protecting RGCs from secondary degeneration is a promising direction for therapy in glaucoma management. The complete optic nerve transection (CONT) model can be used to study primary degeneration of RGCs, while the partial optic nerve transection (PONT) model can be used to study secondary degeneration of RGCs, because primary and secondary degeneration of RGCs can be separated in location in the same retina in this model; in other situations, these types of degeneration can be difficult to distinguish. In order to examine which kind of degeneration LBP could delay, both CONT and PONT models were used in this study. Rats were fed with LBP or vehicle daily from 7 days before surgery until sacrifice at different time-points, and the surviving numbers of RGCs were evaluated. The expression of several proteins related to inflammation, oxidative stress, and the c-jun N-terminal kinase (JNK) pathways was detected with Western-blot analysis. LBP did not delay primary degeneration of RGCs after either CONT or PONT, but it did delay secondary degeneration of RGCs after PONT. We found that LBP appeared to exert these protective effects by inhibiting oxidative stress and the JNK/c-jun pathway and by transiently increasing production of insulin-like growth factor-1 (IGF-1). This study suggests that LBP can delay secondary degeneration of RGCs and that this effect may be linked to inhibition of oxidative stress and the JNK/c-jun pathway in the retina.

Introduction

Glaucoma has been considered to be a neurodegenerative disease characterized by optic nerve (ON) atrophy and irreversible loss of retinal ganglion cells (RGCs) [1]. The loss of RGC bodies may be primary (caused by direct damage to axons or cell bodies, such as crush or transection of axons) or secondary (caused by toxic effectors released from neighboring dying cells because of primary damage, or by a cell death signal from the deafferented target) [2][3][4][5]. The delay of secondary degeneration of RGCs in glaucoma is believed to provide a promising avenue for treatment. Several animal models have been used in the study of glaucoma, including complete optic nerve transection (CONT), acute and chronic ocular hypertension models and the ON crush model. However, it is difficult to distinguish primary degeneration from secondary degeneration in these commonly used models because each involves insult to all RGCs [3]. For example, in the CONT model, all the axons of RGCs are cut and therefore all RGCs will die from primary degeneration. However, in the partial optic nerve transection (PONT) model, which was established about ten years ago, only axons in the dorsal part of the ON are transected. The degeneration of the cell bodies of RGCs whose axons are transected during surgery is primary, and the degeneration of the cell bodies of RGCs whose axons are intact during surgery is secondary. According to the literature, primary degeneration mainly occurs in the superior retina and secondary degeneration in the inferior retina, so the two can be separated in location [2]. Oxidative stress has been thought to be involved in secondary degeneration after PONT, even though stringent measures are taken to ensure adequate retinal circulation [6][7][8].
Inflammation has also been shown to be involved in secondary degeneration after brain trauma and spinal cord injury. However, its involvement in secondary degeneration of RGCs after PONT has not been studied. Lycium barbarum has been used as an "upper class herb" for hundreds of years in the Oriental world. It was used for the treatment of diseases related to vision, the "kidney" and the "liver" [9]. We have shown that the polysaccharides extracted from Lycium barbarum (LBP) reduce the death of cultured cortical neurons challenged by beta-amyloid, glutamate and homocysteine [10][11][12][13]. LBP also delay the degeneration of RGCs in a rat chronic ocular hypertension model [14] and a mouse acute ocular hypertension model [15], and reduce neuronal damage in a mouse transient middle cerebral artery occlusion model [16]. However, it is difficult to know whether LBP delayed primary or secondary degeneration in these models, and the mechanism or mechanisms underlying the neuroprotective effects of LBP for neuronal tissues remained unclear. The aims of this experiment were to confirm whether ON section caused retinal oxidative stress, to investigate the presence of retinal inflammation after ON section, and to determine which kind of degeneration LBP could delay and which mechanism(s) might be involved in any neuroprotective effects of LBP; we were largely successful in these aims.

Ethics Statement

The use of animals followed the requirements of the Cap. 340 Animals (Control of Experiments) Ordinance and Regulations in Hong Kong. All the experimental and animal handling procedures were approved by the Faculty Committee on the Use of Live Animals in Teaching and Research in The University of Hong Kong (CULATR #1850-09 and #1996-09).

Animals and Procedure

Adult female Sprague Dawley rats (10-12 weeks of age, weighing 250-280 g) were used in this study. The rats were housed in a temperature-controlled room subjected to a 12-hour light/12-hour dark cycle and supplied with food and water ad libitum. The preparation of LBP was as previously described [8]. The final powder was stored in a dry-box and freshly dissolved in phosphate-buffered saline (PBS; 0.01 M; pH 7.4) before use. The treatment (LBP or PBS) began 1 week before surgery (CONT or PONT) and continued until sacrifice at the scheduled time-points (see Fig. 1). The treatment was administered with a feeding needle by gavage once daily. To investigate whether the degeneration speeds were similar between superior and inferior retinas after CONT, rats without PBS or LBP treatment were sacrificed either 1 week or 2 weeks after CONT (n = 5 at either time-point). To evaluate the effects of LBP on the survival of RGCs after ON injury, the procedure was as described in Fig. 1. There were 4 to 16 animals in each group. CONT: n = 10, 8, 16 and 12 in the PBS, 0.1 mg/kg LBP, 1 mg/kg LBP and 10 mg/kg LBP groups sacrificed 1 week after CONT; n = 7 and 6 in the PBS and 1 mg/kg LBP groups sacrificed 2 weeks after CONT. PONT: n = 7 and 4 in the PBS and LBP groups sacrificed 1 week after PONT; n = 9 and 10 in the PBS and LBP groups sacrificed 4 weeks after PONT. Retrograde labelling of RGCs was achieved using Fluoro-Gold (FG) applied to the stump of the ON after CONT [17] or to the superior colliculi (SC) 1 week before PONT [18]. Seven rats without treatment or ON injury were sacrificed 7 days after SC labeling as controls for both CONT and PONT experiments. Eight animals were used for 1,1′-dioctadecyl-3,3,3′,3′-tetramethylindocarbocyanine perchlorate (DiI) tracing in vivo.
The death of cells in the ganglion cell layer (GCL) was studied using the terminal deoxynucleotidyl transferase-mediated dUTP-biotin nick end labeling assay (TUNEL assay), and protein expression was examined with Western-blot analysis at 12 hours, 1 day, 4 days and 1 week after PONT; there was no drug treatment (n = 3 to 5 animals in each group). Protein expression after LBP or PBS treatment was also studied with Western-blot analysis both 1 day and 1 week after PONT (n = 3 to 5 animals in each group). The rats for RGC counting (both after FG and DiI labeling) and Western-blot analysis were sacrificed using inhalation of CO2. For the TUNEL assay, the rats were sacrificed by injecting an overdose of phenobarbital followed by perfusion with 0.9% NaCl and 4% paraformaldehyde (PFA).

Surgical Procedure

Anesthesia and the CONT procedure were conducted as previously described [17]. The PONT surgery was similar to that described by Fitzgerald et al. [8]. The partial incision in the ON was made 1.0 mm from the optic disc and was achieved using a pair of spring Vannas scissors (15000-08, F.S.T., Heidelberg, Germany) marked 200 μm from the tips of both blades, or using a diamond knife (G-31480, Geuder AG, Hertzstrasse, Heidelberg, Germany) with the blade fixed to a length of 200 μm.

Retrograde DiI Tracing in vivo after PONT

The method published by Fitzgerald et al. was adopted [8]. Briefly, the ON was partially cut and several crystals of DiI (Molecular Probes, Eugene, OR) were placed precisely into the cut sites to label the RGCs whose axons were transected (Fig. 2A). The rats were sacrificed 4 days after DiI labeling. The retinas were processed for RGC counting as below. Optic nerves were collected, post-fixed in 4% PFA for 60 minutes and then placed into 30% sucrose in 0.1 M phosphate buffer solution overnight until they sank. They were then embedded in optimal cutting temperature embedding compound and sectioned longitudinally. The sections were mounted.

Quantification of RGCs

After sacrifice, retinas were collected and post-fixed in 4% PFA for 60 minutes. Retinas were divided into the superior and inferior halves, and each half was separated into three roughly equal sectors before being flat-mounted as the temporal, middle and nasal sectors (Fig. 3). Eight photographs (200 × 200 μm²) in each sector were captured along the median line, starting from the optic disc to the edges at 500-μm intervals, under a fluorescence microscope at 400× magnification [14,19]. The limitation of using photographs for cell counting rather than focusing through whole-mounted retinas is that under-estimation may occur. However, the counting method is unlikely to alter the results of this experiment, and it has the merit that the photographs can be kept longer than sections and be recounted. Using rats without PBS or LBP treatment, we showed similar surviving RGC densities between superior and inferior retinas either 1 week or 2 weeks after surgery (see Results). Therefore, for rats treated with PBS or LBP, only inferior retinas were used after CONT. After PONT, surviving RGCs were counted separately in superior and inferior retinas, because the degeneration speeds differ between superior and inferior retinas after PONT [2], and were also grouped together for the whole retinas. The counting was conducted in a double-blind manner by two persons and the data were averaged (mean ± SEM, number per mm²).
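As a concrete illustration of the density calculation described above, the following sketch converts per-field counts into a density in cells/mm² with mean ± SEM. The counts are made-up placeholder values, not data from this study; only the field size (200 μm × 200 μm) and the sampling scheme (eight fields per sector, three sectors per half-retina) follow the description above.

import numpy as np

# Hypothetical FG-labelled RGC counts per sampled field for one half-retina
# (rows: temporal, middle, nasal sectors; columns: the eight fields captured
# at 500-um intervals from the optic disc towards the edge).
counts = np.array([
    [85, 82, 80, 78, 75, 74, 70, 68],
    [88, 84, 81, 79, 77, 73, 71, 69],
    [86, 83, 82, 77, 76, 72, 70, 67],
])

FIELD_AREA_MM2 = 0.2 * 0.2            # each field is 200 um x 200 um = 0.04 mm^2

densities = counts / FIELD_AREA_MM2   # RGC density per field, in cells/mm^2
mean_density = densities.mean()
sem_density = densities.std(ddof=1) / np.sqrt(densities.size)

print(f"RGC density: {mean_density:.1f} +/- {sem_density:.1f} cells/mm^2")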
TUNEL Assay

TUNEL staining was previously believed to detect apoptosis only, but more recently it has been shown to detect necrosis and other types of cell death as well [20]. To determine when cell death begins in the GCL, TUNEL staining was used to examine retinas at different time-points after PONT. After sacrifice, the eyeballs were post-fixed in 4% PFA overnight at 4 °C, dehydrated with a graded series of ethanol and xylene, and then embedded in paraffin. Cross-sections (4 μm) were cut using a microtome (Micro HM 315R, Heidelberg, Germany). The manufacturer's instructions for the TUNEL assay were followed (Roche Diagnostics GmbH, Mannheim, Germany). Sections were counterstained with 4′,6-diamidino-2-phenylindole (DAPI) after the TUNEL reaction to confirm that the TUNEL staining was located in the nuclei. For consistency of analysis, only sections with the ON head were selected for observation. Three sections were selected from each animal. The positive-staining cells in the GCL in the inferior retinas were counted under a microscope at 400× magnification. The data were expressed as mean ± SEM, number per inferior retina.

Western-blot Analysis

After sacrifice, the inferior retinas were collected in PBS on ice. The procedure, including the use of lysis buffer, secondary antibody and the developing reagents, was as previously described [17,19]. After transfer onto polyvinylidene difluoride membrane, the membranes were blocked with 5% non-fat dry milk or 3% bovine serum albumin in Tris-buffered saline with 0.05% Tween.

Statistical Analysis

Student's t-test was used for comparisons of two groups. For more than two groups, one-way ANOVA was used for multiple comparisons, followed by Dunn's or the Student-Newman-Keuls method as post hoc tests. Data were analyzed statistically with the Sigmastat software (Sigmastat 3.5; Systat Software Inc., Chicago, IL, USA). The P = 0.05 level was considered statistically significant.

RGCs Degenerated Significantly after CONT and PONT

The average densities of FG-labeled RGCs in the normal retinas were as follows: whole retinas, 2088.1 ± 64.4 RGCs/mm²; normal superior retinas, 2046.5 ± 92.4 RGCs/mm²; and normal inferior retinas, 2144.4 ± 89.8 RGCs/mm². There was no difference between the superior and inferior retinas (Student's t-test, P > 0.05). The surviving RGC densities decreased significantly in the expected areas after both CONT and PONT in animals treated with PBS or LBP (Student's t-test, P < 0.001, Fig. 4 & Fig. 5). The surviving densities of RGCs after CONT in animals without PBS or LBP treatment were as follows: 1510.7 ± 65.6 in the superior retinas and 1402.6 ± 74.7 in the inferior retinas 1 week after CONT; 234.2 ± 19.8 in the superior retinas and 214.8 ± 8.4 in the inferior retinas 2 weeks after CONT. There were no significant differences between the superior and inferior retinas at either time-point after CONT (Student's t-test, P > 0.05).

LBP did not Prevent the Primary Degeneration of RGCs after CONT

One PBS group and three LBP groups with different dosages (0.1 mg/kg, 1 mg/kg and 10 mg/kg) were examined 1 week after CONT. No significant difference between the PBS group and any LBP group was detected; in addition, no significant difference among the three dosages of LBP was seen (one-way ANOVA for multiple comparisons and Dunn's method as post hoc tests; Fig. 4A, 4C & 4D).
We have previously found that 1 mg/kg LBP can significantly reduce the death of RGCs 2 weeks and 4 weeks after ocular hypertension produced by laser photocoagulation [14], and therefore 1 mg/kg LBP was adopted in the later experiments (CONT 2 weeks and PONT). Two weeks after CONT, no significant difference between the PBS and LBP groups was detected (Student's t-test, P > 0.05, Fig. 4A, 4E & 4F).

LBP Delayed Secondary Degeneration of RGCs in the Inferior Retina 4 Weeks after PONT

DiI labeled the cell bodies of RGCs whose axons were transected after PONT and which would be expected to die from primary degeneration. There were 460.9 ± 52.8 RGCs/mm² and 191.2 ± 48.7 RGCs/mm² labeled in the superior and inferior retinas, respectively. The difference was significant (P = 0.001, Fig. 2B, 2D & 2E) and the ratio between superior and inferior retinas was about 2.4:1. These findings indicate that both superior and inferior retinas are vulnerable to primary and secondary degeneration after PONT. However, in the inferior retinas, significantly more RGCs would be affected by secondary injury, since the inferior retina has significantly fewer RGCs with axons transected by the PONT surgery. LBP had no effect on the survival of RGCs in whole retinas either 1 week or 4 weeks after PONT; comparison of the PBS and LBP groups showed no difference between groups at either time-point (Fig. 5A). When dividing the retinas into superior and inferior halves, there was no difference in the superior retinas between the PBS and LBP groups either 1 week or 4 weeks after PONT (Fig. 5B, 5D & 5E). LBP protected about 18% of RGCs in the inferior retinas 4 weeks after PONT but not 1 week after PONT (one-way ANOVA, P < 0.05, Fig. 5B, 5G & 5H). Combining the results from DiI labeling and the survival of RGCs, our data show that LBP appears to delay secondary degeneration of RGCs rather than to affect primary degeneration.

DiI Labeled Axons Located in the Dorsal ON

The sections from the optic nerves with retrograde labeling of the RGCs by DiI showed that the travel path of DiI from the cut site to the retinas was limited to the dorsal part of the nerve (Fig. 2C).

Oxidative Stress and JNK Pathway(s) Involved in Degeneration of RGCs in the Inferior Retina after PONT

In the inferior retinas, TUNEL staining showed that the number of positive-staining cells increased significantly 1 week after PONT (one-way ANOVA, P < 0.01). However, there were no changes at 12 hours, 1 day and 4 days. The positive staining was located in the nuclei, which was confirmed by counter-staining with DAPI (Fig. 6). The protein level of TNF-α did not increase after PONT in the inferior retinas (Fig. 7A). The expression of MnSOD increased significantly 1 day after PONT and returned to the normal level 4 days after PONT (Fig. 7B). The p-JNK/p-c-jun pathway was also involved in the degeneration of RGCs in the inferior retinas. Although the expression of p-JNK1 did not change, the level of p-JNK2/3 increased 1 day after PONT and was maintained until 1 week (Fig. 7C). P-c-jun increased with the same tendency as p-JNK2/3 (Fig. 7D).

Figure 4. Effects of LBP on survival of RGCs 1 week and 2 weeks after CONT. RGCs were labeled by FG. The arrows indicate microglia, which were easily distinguished from RGCs and not counted. The blue arrowheads indicate RGCs. (A, C, D) Oral feeding of 0.1 mg/kg, 1 mg/kg and 10 mg/kg LBP showed no significant effects on the survival of RGCs 1 week after CONT (compared with the PBS group), and no significant difference among the three different dosages of LBP was detected. (A, E, F) 1 mg/kg LBP showed no significant effect on the survival of RGCs 2 weeks after CONT (compared with the PBS group). (n = 10, 8, 16 and 12 in the PBS, 0.1 mg/kg LBP, 1 mg/kg LBP and 10 mg/kg LBP groups sacrificed 1 week after CONT, and n = 7 and 6 in the PBS and 1 mg/kg LBP groups sacrificed 2 weeks after CONT.) doi:10.1371/journal.pone.0068881.g004

Figure 5. Effects of LBP on RGC survival 1 week and 4 weeks after PONT. The RGCs were labeled with FG. (A) LBP did not increase the survival of RGCs either 1 week or 4 weeks after PONT when the densities of surviving RGCs were derived from the whole retinas (NS: not significant). (B) When the retinas were divided into the superior and inferior halves, LBP did not delay the degeneration of RGCs 1 week after PONT. However, it reduced the degeneration of RGCs in the inferior retina (*P = 0.027), but not in the superior retina, 4 weeks after PONT. (F-H) The photographs of RGCs labeled by FG in both the superior and inferior retinas are about 1.5 mm away from the optic disc. In the superior retinas, the densities of RGCs were similar between the PBS and LBP groups. In the inferior retinas, the density of RGCs in the LBP group was higher than that in the PBS group. Microglia (white arrows) were easily distinguished from RGCs and not counted. (n = 7 and 4 in the PBS and LBP groups 1 week after PONT; n = 9 and 10 in the PBS and LBP groups 4 weeks after PONT.) doi:10.1371/journal.pone.0068881.g005

LBP Inhibited Oxidative Stress and Activation of the JNK Pathway as well as Transiently Increasing the Expression of IGF-1 in the Inferior Retina

After LBP treatment, the expression of MnSOD increased significantly 1 day after PONT (Fig. 8A & Fig. 9A). On the other hand, LBP treatment significantly decreased the expression of p-JNK2/3 and p-c-jun both 1 day and 1 week after PONT (Fig. 8B, 8C & Fig. 9B, 9C). The effects of LBP on the expression of BDNF and IGF-1 were as follows: after PONT, LBP did not change the expression of BDNF either 1 day or 1 week after PONT (Fig. 8D & Fig. 9D). However, LBP increased the expression of IGF-1 1 day after PONT, but the effect was not maintained at 1 week (Fig. 8E & Fig. 9E).

Discussion

After CONT, most RGCs died rapidly from primary degeneration. After PONT, more RGCs die from secondary degeneration at a later time-window, in addition to primary degeneration [2,6]. Our results showed that LBP did not delay primary degeneration of RGCs after CONT. However, LBP did delay secondary degeneration of RGCs 4 weeks after PONT. Levkovitch-Verbin et al. showed that although the genetic profile was similar for primary and secondary degeneration of RGCs, minocycline was only effective for secondary degeneration, indicating a potential difference between the two types of degeneration [21]. Our result was consistent with this in that LBP only delayed secondary degeneration but not primary degeneration. In the PONT model, the increasing expression of MnSOD or SOD2, demonstrated by immunohistochemistry (IHC), was used as an indicator of oxidative stress [7,22,23]. MnSOD is an anti-oxidant enzyme and can detoxify cells and tissues by converting toxic superoxide into hydrogen peroxide and diatomic oxygen. Administration of adeno-associated virus containing the SOD2 gene into eyes significantly reduces oxidative stress and nitrative stress in a rat acute ocular hypertension model [24].
The protective effect of LBP for RGCs was related to an anti-oxidative mechanism [19]. In order to determine whether the anti-oxidant ability of LBP for RGCs was related to MnSOD, we investigated the expression levels of MnSOD in rats treated with LBP or vehicle, and our results confirmed the anti-oxidant effect of LBP in retinas after injury. JNKs are kinases involved in both apoptotic and non-apoptotic cell death [25,26]. C-jun is a transcription factor activated by phosphorylation of JNKs and is involved in the transcription of various proteins, including some pro-apoptotic proteins [26]. Previous studies using the PONT model and IHC staining have shown that JNKs are activated at the primary injury sites and c-jun is activated both at the primary and the secondary injury sites in the retina [7,27]. There are three isoforms of JNKs: JNK1, JNK2 and JNK3; IHC cannot differentiate among these isoforms. We used Western-blot analysis to differentiate JNK1 from JNK2/3 according to their molecular weights. Our results confirmed the inhibition of the JNK/c-jun pathway by LBP; this effect has been shown previously using different models [28,29]. However, this is the first time that these effects of LBP have been demonstrated in the retina. In addition, our results showed that p-JNK2/3, rather than p-JNK1, was activated in the inferior retina after PONT. A similar result has been shown in cultured RGC-5 cells: advanced glycation end products-albumin from bovine serum increased the production of p-JNK2/3, but not p-JNK1, in vitro [30]. BDNF belongs to the neurotrophin family and is expressed both in the SC [31,32] and the retina [33]. The level of BDNF increases in the retina following ON transection [34] and after periocular injection of in situ hydrogels containing Leu-Ile, an inducer of neurotrophic factors, which increases the expression of BDNF in the retina and promotes RGC survival after ON injury [35]. IGF-1 is also a neurotrophic factor and is a key molecule determining the survival of RGCs during the early stage of ON injury [36]. However, the effects of LBP on the expression of BDNF and IGF-1 have not been previously studied. Our results show that LBP can produce a transient increase in the expression of IGF-1 in the inferior retina, but the source of this IGF-1 is not clear. Future study using IHC with this model may help to address this issue. It is known that DiI can be transported by either active processes or by diffusion [6,37-40]. In this experiment, DiI was used to label RGCs whose axons were transected after PONT. Although it has been reported that DiI can label cells in close proximity to labeled cells in fixed tissues [37], this phenomenon has not been reported in vivo [40]. Perhaps the time available for DiI labeling in fixed tissues was much longer than that in vivo; diffusion to neighboring tissue was obvious in fixed tissues but not for tissues in vivo. Therefore, we did an in vivo study where diffusion of DiI was limited. Our results also showed that the axonal transport of DiI was limited to the dorsal region in the ON. Our results confirmed the neuroprotective effects of LBP for RGCs and showed a possible mechanism. The future target of our study is to provide the basis for the use of LBP in clinical conditions. The electroretinogram is used widely by ophthalmologists and optometrists for the diagnosis of retinal diseases and can evaluate retinal function by measuring the electrical responses of various cell types [41][42][43][44][45].
Therefore, we have also used the electroretinogram to evaluate the effect of LBP after PONT, and this experiment is currently in progress.
A method for checking high-redshift identification of radio AGNs

In large-scale optical spectroscopic surveys, many objects are found to have multiple redshift measurements due to the weakness of their emission lines and the different automatic identification algorithms used. These include some suspicious high-redshift (z >= 5) active galactic nuclei (AGNs). Here we present a method for inspecting the high-redshift identification of such sources, provided that they are radio-loud and have very long baseline interferometry (VLBI) imaging observations of their milli-arcsec (mas) scale jet structure available at multiple epochs. The method is based on the determination of jet component proper motions, and on the fact that the combination of jet physics (the observed maximal values of the bulk Lorentz factor) and cosmology (the time dilation of observed phenomena in the early Universe) constrains the possible values of apparent proper motions. As an example, we present the case of the quasar J2346+0705 that was reported with two different redshifts, $z_{1} = 5.063$ and $z_{2} = 0.171$, in the literature. We measured the apparent proper motions ($\mu$) of three components identified in its radio jet by utilizing VLBI data taken from 2014 to 2018. We obtained $\mu_{J1} = 0.334 \pm 0.099$ mas yr$^{-1}$, $\mu_{J2} = 0.116 \pm 0.029$ mas yr$^{-1}$, and $\mu_{J3} = 0.060 \pm 0.005$ mas yr$^{-1}$. The maximal proper motion is converted to an apparent transverse speed of $\beta_{\rm app} = 41.2 \pm 12.2\,c$ if the source is at redshift 5.063. This value exceeds the blazar jet speeds known to date. This and other arguments suggest that J2346+0705 is hosted by a low-redshift galaxy. Our method may be applicable to other high-redshift AGN candidates lacking unambiguous spectroscopic redshift determination or having photometric redshift estimates only, but showing prominent radio jets allowing for VLBI measurements of fast jet proper motions.

INTRODUCTION

The discovery of supermassive black holes (SMBHs) powering active galactic nuclei (AGNs) at redshifts higher than about 5, close to the end of the reionisation epoch, poses challenges for explaining the rapid growth of massive black holes in the early Universe (Volonteri et al. 2011). Optical and near-infrared spectroscopic observations resulted in the discovery of more than 250 high-redshift (z > 5.6) galaxies and quasars (e.g. Fan et al. 2001; Bañados et al. 2016; Jiang et al. 2016; Shen et al. 2019, and references therein). As the observed colours of quasars depend on redshift, most high-z sources were first selected as candidates using the i-dropout technique (e.g. Fan et al. 2001), and then confirmed with optical spectroscopy. In the various releases of the Sloan Digital Sky Survey (SDSS) catalogue, many objects have no spectroscopic coverage, and only their photometric redshifts are given. These include a number of (candidate) high-redshift objects. Some others have suspicious or ambiguous spectroscopic redshift measurements due to the weakness of their emission lines, which makes their redshift identification challenging for the automatic algorithms (e.g. Bolton et al. 2012; Yuan, Strauss & Zakamska 2016). It is possible that different Data Releases (DR) of the SDSS catalogue contain two markedly different redshift values derived from the same spectrum but by different versions of the automatic pipelines.
For example, J2346+0705 (SDSS J234639.94+070506.8) is identified as a galaxy and a flat-spectrum radio source in the NASA/IPAC Extragalactic Database (NED). Two redshifts are reported for this object in public data bases: z1 = 5.063 in SDSS DR13 (Albareti et al. 2017), adopted by NED, and z2 = 0.171 in SDSS DR16 (Ahumada et al. 2020). This source is possibly associated with a γ-ray source detected by the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope, named as 1FHL J2347.3+0710 (Abdo et al. 2010), 2FHL J2347.2+0707 (Ackermann et al. 2013), 3FGL J2346.7+0705 (Acero et al. 2015), and also classified as a TeV candidate (Ajello et al. 2017). Until now, the most distant known γ-ray-emitting blazar is J1510+5702 at z = 4.31 (Ackermann et al. 2017). As most high-z blazars host SMBHs as massive as ∼10^9 M⊙, it is important to ascertain the extremely high redshift of J2346+0705, since a value above 5 would break the high-z γ-ray blazar record, possibly placing important new constraints on the growth of the first-generation SMBHs. In radio bands, the total flux density of J2346+0705 is ∼200 mJy at 8.4 GHz measured with the U.S. National Radio Astronomy Observatory (NRAO) Very Large Array (VLA) (Healey et al. 2007), and around 230 mJy at 5 GHz measured with the NRAO Green Bank 91-m telescope (Becker, White & Edwards 1991). At 1.4 GHz, the NRAO VLA Sky Survey (NVSS) image shows extended emission to the southeast on arcsec scale (Condon et al. 1998). When observed with very long baseline interferometry (VLBI) on milli-arcsec (mas) scale, the radio jet (Beasley et al. 2002; Pushkarev & Kovalev 2015) is characterised by a compact core and a number of knots. The arcsec- and mas-scale jets point in opposite directions. VLBI imaging observations at multiple epochs allow us to detect positional changes and measure apparent proper motions of jet features. Based on the observed apparent proper motion-redshift (µ−z) relation for a large sample of AGN jets, we introduce a method that can be applied to investigate whether certain high-redshift candidate objects are indeed at large cosmological distances, by using their radio jet proper motion measurements. In Sect. 2, we describe the details of the procedure. We present the case of the quasar J2346+0705 as an example in Sect. 3 and discuss the possible applications and limitations of the method in Sect. 4. A brief summary is given in Sect. 5. In this paper, we adopt a standard flat Λ Cold Dark Matter (ΛCDM) cosmological model with Ω_m = 0.27, Ω_Λ = 0.73, and H_0 = 70 km s⁻¹ Mpc⁻¹.

THE METHOD

In compact radio-emitting AGNs, components typically propagate away from the vicinity of the central SMBH along a well-defined jet. Apparent transverse speeds that reflect the bulk relativistic motion of the plasma are usually well below β_app = 25 (measured in units of the speed of light c) but can occasionally reach extreme values up to β_app ≈ 40 (e.g. Kellermann et al. 2004; Lister et al. 2016, 2019). The apparent speeds depend on the bulk Lorentz factor of the jet (Γ) and its inclination angle with respect to the line of sight (φ). For a jet with a given Γ, the apparent speed β_app = β sin φ / (1 − β cos φ) reaches its maximum, β_app,max = (Γ² − 1)^(1/2) ≈ Γ, when cos φ = β (e.g. Urry & Padovani 1995). Furthermore, the observed apparent angular proper motions measured in mas yr⁻¹ depend on the redshift of the source, because of the cosmological time dilation caused by the expansion of the Universe.
It slows down phenomena in the observer's frame by a factor (1 + z) compared to the rest frame of the source. The µ−z relation for compact radio sources was first proposed as a test of cosmological world models by Cohen et al. (1988). They found a clear anticorrelation between proper motion and redshift, and a rough upper limit to µ as a function of z, indicating that the redshift is indeed a measure of distance. Subsequent studies of larger samples (e.g. Vermeulen & Cohen 1994; Kellermann et al. 2004; Britzen et al. 2008) well established that there is indeed an upper bound in the µ−z relation that is consistent with the ΛCDM cosmology and a distribution of jet Lorentz factors with a maximum of Γ ≈ 25 (see the model curves in Fig. 1). Statistical studies of AGN samples found that most of the apparent jet speeds are in fact lower than 5 c, with an extreme upper limit of 40 c (Lister et al. 2016). Consequently, an apparent transverse jet speed exceeding 40 c in a distant object would naturally call its high-redshift identification in question. Alternatively, since at β_app = 40 the corresponding bulk Lorentz factor is Γ ≳ 40, unprecedentedly extreme physical conditions would be required in the jet. Unless there is convincing supporting evidence for the latter, it is plausible to assume that a too-fast apparent speed implies a low redshift.

APPLICATION TO J2346+0705

Calibrated VLBI imaging data made publicly available in the Astrogeo database are used for this analysis. Data from four observing epochs are found for J2346+0705 at two frequency bands, 2.3 and 8.3/8.7 GHz. These were taken with the ten 25-m antennas of the NRAO Very Long Baseline Array (VLBA), from 1995 July to 2018 April. We chose the 8.7-GHz data that provide higher angular resolution, typically ∼1-3 mas (depending on the position angle of the synthesised beam of the VLBI array) for this nearly equatorial object. The earliest data from 1995 were of relatively low quality, and thus the extended jet features cannot be reliably imaged. Therefore we restrict our analysis to the other three epochs (2014 August 6, 2017 March 23, and 2018 April 8) to estimate the apparent proper motions of the jet components identified in all images. Since the interferometric visibility data had already been calibrated, we only performed imaging and model fitting, using the Difmap software package (Shepherd 1997). Figure 2 shows one of the total intensity images obtained from the observations. In addition to the core (C) at the image centre, there are two compact, easily recognised jet knots (J2 and J3) to the west and northwest of the core within 7 mas. An additional innermost jet component (J1) is also found in the residual map after removing the model components of the core, J2 and J3. Table 1 presents the parameters of the elliptical (for C) and circular Gaussian brightness distribution models fitted to the jet components in Difmap. The model component positions and sizes are also marked in the image in Fig. 2. Based on the three-epoch data in Table 1, we calculated the apparent proper motions of the jet components: µ_J1 = 0.334 ± 0.099 mas yr⁻¹ for J1, µ_J2 = 0.116 ± 0.029 mas yr⁻¹ for J2, and µ_J3 = 0.060 ± 0.005 mas yr⁻¹ for J3. The uncertainties are calculated by also taking into account the model-fitting errors at the individual epochs. The apparent proper motion shows a decelerating trend, while the component sizes increase downstream along the jet. This is consistent with the general picture that jet features slow down and expand when moving outwards (e.g., Homan et al. 2015).
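For readers who want to reproduce the conversion used in the next paragraph, a minimal sketch is given below. It uses β_app = µ D_M / c, where µ is the angular proper motion in radians per unit of observed time and D_M = (1 + z) D_A is the proper-motion (comoving transverse) distance in the flat ΛCDM model adopted above. The proper motion and redshift values are those quoted in this paper; the crude numerical integration is our own simplification and is not the code used by the authors.

import numpy as np

# Flat LambdaCDM parameters adopted in the paper
H0, OM, OL = 70.0, 0.27, 0.73           # km/s/Mpc, Omega_m, Omega_Lambda
C_KM_S = 299792.458                     # speed of light [km/s]
KM_PER_MPC = 3.0857e19
SEC_PER_YR = 3.1557e7                   # Julian year [s]
MAS_TO_RAD = np.pi / (180.0 * 3600e3)   # milliarcseconds to radians

def comoving_distance_mpc(z, n=100_000):
    """Line-of-sight comoving distance D_C [Mpc] for the flat LambdaCDM model
    above; for a flat universe this equals the proper-motion distance D_M."""
    zz = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(OM * (1.0 + zz) ** 3 + OL)
    integral = np.sum(0.5 * (inv_e[1:] + inv_e[:-1]) * np.diff(zz))  # trapezoid
    return (C_KM_S / H0) * integral

def beta_app(mu_mas_yr, z):
    """Apparent transverse speed in units of c for an angular proper motion
    mu [mas/yr] observed in a source at redshift z: beta_app = mu * D_M / c."""
    d_m_km = comoving_distance_mpc(z) * KM_PER_MPC
    mu_rad_per_s = mu_mas_yr * MAS_TO_RAD / SEC_PER_YR
    return mu_rad_per_s * d_m_km / C_KM_S

# Fastest component J1 (mu = 0.334 mas/yr) at the two reported redshifts:
print(beta_app(0.334, 5.063))   # ~42 c with this crude integration (the paper quotes 41.2 +/- 12.2 c)
print(beta_app(0.334, 0.171))   # ~3.7 c, an ordinary value for a radio quasar jet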
If the redshift of J2346+0705 is z1 = 5.063, then the apparent jet component speeds are 41.2 ± 12.2 c, 14.3 ± 3.6 c, and 7.4 ± 0.6 c for J1, J2 and J3, respectively. To date, there are only four high-redshift (z > 4.5) radio-loud quasars having jet proper motion measurements based on repeated VLBI imaging (Frey et al. 2015; Perger et al. 2018; Zhang, An & Frey 2020; An et al. 2020), and the values are ≲10 c. The apparent jet speed in the case of the innermost component J1 in J2346+0705 would be much higher than those, and in fact would approach the maximum value of ∼50 c measured for any AGN at 15 GHz (Lister et al. 2016). Moreover, apparent jet proper motions are known to be smaller at lower frequencies (e.g. Britzen et al. 2008). Note that for other z > 4.5 radio quasars studied so far, the jet components tend to accelerate as their distance from the core increases, possibly suggesting a young and growing jetted AGN in the early Universe. In the light of the above observational results, we are inclined to consider J2346+0705 as a low-redshift object with z2 = 0.171, as found in SDSS DR16 (Ahumada et al. 2020). This would solve the conflict between the anomalously fast apparent jet component motion and the high redshift in a straightforward way. Further supporting pieces of evidence against the high-redshift scenario are the γ-ray emitting nature of J2346+0705, its optical brightness, which is at least 3 magnitudes brighter than that of typical z ∼ 5 quasars (see the catalogue of z > 4 AGNs compiled by Perger et al. 2019, http://astro.elte.hu/~perger/catalog.html), and the g − r = 0.71 optical colour from SDSS DR16 data, which is incompatible with the high redshift (cf. Alexandroff et al. 2013).

Jet parameters of J2346+0705

Accepting the redshift z2 = 0.171, we calculate the core brightness temperature, T_b = 4.7 × 10^10 K, from the fitted Gaussian component sizes and flux densities (Table 1). Assuming equipartition in the core between the particle and magnetic field energy densities (Readhead 1994), the Doppler boosting factor can be inferred as close to unity, δ = 0.9 ± 0.3. Adopting the apparent jet speed (β_app = 3.7 ± 1.1, calculated at z = 0.171) and the Doppler factor, we estimate the bulk Lorentz factor, Γ = 8.3, and the inclination angle of the jet with respect to the line of sight, φ = 28.5° (see e.g. Urry & Padovani 1995). Therefore the jet beaming parameters assuming z2 = 0.171 are consistent with what is usually known for radio quasars, unlike the case if the source is at z1 = 5.063. The spectral index between 2 and 8 GHz is α = −0.17 ± 0.02 (α is defined as S_ν ∝ ν^α), indicating a flat radio spectrum at GHz frequencies. The beaming properties and radio spectral index of J2346+0705 classify it as a typical flat-spectrum radio-loud quasar.

Applicability of the method to other objects

To apply our method to other objects, it is required that a candidate high-redshift AGN is radio-loud and that its prominent mas-scale radio jet structure is imaged with VLBI at multiple epochs (at least twice) at the same frequency. While these requirements obviously limit the widespread use of checking high-redshift identifications using jet proper motion data, there are in fact other suitable candidates found in the SDSS catalogues with VLBI data available. For example, the source J1110+4817 (SDSS J111036.32+481752.3) is listed with z = 6.168 in SDSS DR13.
However, Hook et al. (1996) and Yuan, Strauss & Zakamska (2016) independently gave z = 0.74. The currently available VLBI imaging data allow us to model the jet components at 8.7 GHz and estimate their apparent proper motions (Krezinger et al. 2020). However, this source does not show significant proper motion (Krezinger et al. 2020). Moreover, its radio structure resembles that of the compact symmetric objects (CSOs), which often show slow jet motions (An & Baan 2012; An et al. 2012). Another quasar, J2253+1942 (SDSS J225307.36+194234.6), has a very high redshift of z = 5.936 in SDSS DR14; however, earlier literature data (Engels et al. 1998) as well as SDSS DR16 indicate a much lower value, z = 0.284. Plenty of archival VLBI imaging observations are available for this object. However, its compact, nearly featureless mas-scale radio structure is not well suited for identifying jet components and measuring their proper motions. The studies of J1110+4817 and J2253+1942 illustrate the limitations of our proposed method for sources with unbeamed relativistic jets, or for beamed sources without prominent jet components. Our proposed jet proper motion-based method for checking the high-redshift identification of certain AGNs could be applied to more cases in the future, when photometric redshift determinations for massive survey data are expected to reach out to much higher redshifts than today (e.g. Reza & Haque 2020).

SUMMARY

We described a method based on multiple-epoch VLBI imaging of jetted radio AGNs to check the validity of extremely high spectroscopic redshift measurements. The method rests on VLBI studies of large samples that indicate a well-defined upper bound of the apparent proper motion-redshift relation for pc-scale AGN jet components. This is a combined effect of jet physics, with a maximum bulk Lorentz factor of the plasma, and the cosmological time dilation in the expanding Universe. If an apparent jet component speed exceeding about 40 c is inferred for a source at its reported redshift, the object is very likely located at a lower redshift. As an example, we analysed the jet properties of a suspicious high-redshift radio-loud AGN, J2346+0705, which has ambiguous redshift values reported in the literature. The inferred fast jet component proper motion in J2346+0705 excludes that it is a high-redshift (z > 5) object. Its radio spectrum and relativistic beaming parameters make it consistent with a flat-spectrum radio-loud quasar at z = 0.171. Albeit with limitations, the proposed method could be applied to check other AGNs with ambiguous very high redshift identifications. As spectroscopic surveys advance and reach fainter magnitudes in the future, the automatic emission-line identification and redshift determination algorithms will undoubtedly lead to more and more cases of uncertain or ambiguous redshift determination. Although limited in its scope because of the need for VLBI-monitored jetted radio AGNs with fast component motions, the method presented here can dismiss certain cases of false high-redshift measurements. This paper has been typeset from a TeX/LaTeX file prepared by the author.
A COMPARATIVE ANALYSIS OF INTER-SITE GENE EXPRESSION HETEROGENEITY OF NORMAL HUMAN BUCCAL MUCOSA WITH NORMAL GINGIVAL MUCOSA

BACKGROUND: Descriptions of the heterogeneity of gene expression across various human intraoral sites are not adequate. The aim of this study was to explore the difference in gene expression profiles of whole tissue obtained from apparently normal human gingiva and buccal mucosa (HGM, HBM).

MATERIALS AND METHODS: Gene sets fulfilling the inclusion and exclusion criteria for HGM and HBM in the Gene Expression Omnibus (GEO) database were identified, segregated, filtered and analysed using the ExAtlas online web tool with pre-determined cut-offs. The differentially expressed genes were studied for epithelial keratinization related genes, housekeeping genes (HKG), extracellular matrix related genes (ECMRGs) and epithelial-mesenchymal transition related genes (EMTRGs).

RESULTS: In all, 40 HBM and 64 HGM gene-sets formed the study group. There were 18012 significantly expressed genes. Of these, 1814 genes were over-expressed and 1862 under-expressed in HBM as compared to HGM. One in five of all studied genes significantly differed between HBM and HGM. For the keratinization genes, 1 in 6 differed. One of every 5 HKG-proteomics genes differed between HBM and HGM, while this ratio was 1 in 4 for all ECMRGs and EMTRGs.

DISCUSSION: This difference in gene expression between HBM and HGM could possibly influence a multitude of biological pathways. This result could partly explain the difference in clinicopathological features of oral lesions occurring in HBM and HGM. The innate genotypic difference between the two intra-oral niches could serve as a confounding factor in genotypic studies. Hence, studies that compare HBM and HGM should factor in these findings while evaluating their results.

Fibroblasts of anatomically distinct sites have distinct transcriptional patterns. On appropriate stimulation, the relatively quiescent fibroblasts can acquire an active synthetic, contractile phenotype and express several smooth muscle cell markers, which are not exclusive to fibroblasts of that particular niche. [4,5] Failure to account for the phenotypic as well as expression-profile differences between these niches can lead to potentially erroneous interpretation of data collected for specific experimental purposes. [6] All oral fibroblasts are derived from the ecto-mesenchymal cells of the cranially migrated neural crest cells. Despite this shared lineage, there is anatomical and histological heterogeneity among the various intra-oral sites. [7] Human buccal fibroblasts (HBF) are partly primed, committed cells of neural crest lineage, characterised by plasticity and longevity controlled by the WNT and p53 gene network. [8] The existence of heterogeneity in human gingival fibroblasts (HGF) is documented, particularly between those of the gingival papillary and reticular areas. [9,10] A subset population of HGF is known to heal without fibrosis. [11] When cultured HGF and HBF were studied, the cell density migration index was higher in HBF than in HGF. On the other hand, HGF had a more adult phenotype, in contrast to the foetal phenotype of HBF. [9] Oral epithelial cells are exposed to the external environment and they also interact extensively with underlying cells such as fibroblasts and immune cells. Alterations in fibroblasts influence proliferation, repair and inflammatory cytokine secretion, and factors released by fibroblasts influence the overlying epithelial cells.
[12] Oral keratinocytes interact with themselves and with the underlying connective tissue to provide a tight barrier and to heal when damaged. [13][14][15][16] The aim of this manuscript was to study the difference in expression profiles of the epithelial and extracellular matrix (ECM) components of whole tissue obtained from two common oral niches, the gingiva and the buccal mucosa, using publicly available Gene Expression Omnibus (GEO) datasets.

Source of microarray datasets

A systematic search of the National Centre for Biotechnology Information (NCBI-GEO, http://www.ncbi.nlm.nih.gov/geo) repository using the keywords "buccal mucosa" and "gingiva" was carried out. The microarray datasets and their description contents were further limited to: (a) human (Homo sapiens); (b) type "expression profiling by array"; (c) normal healthy patient, tissue biopsy (epithelium and connective tissue) sample with no obvious disease; (d) ideal, acceptable extraction protocol; and (e) mRNA available. Datasets that were from culture explants, or had only fibroblasts, only keratinocytes or non-tissues, or that had unclear extraction protocols, sources or details, were excluded. No emphasis was placed on the type of dataset platform. The gene series were collated.

Pair-Wise Comparison

The individual patients' gene expression datasets obtained from methodically reviewing and screening the microarray datasets were collated for pair-wise comparison using the ExAtlas online web tool at https://lgsun.irp.nia.nih.gov/exatlas. [17] The genes from the GEO datasets were log2-transformed, normalized with the quantile normalization method and later combined. Then the samples, i.e. the Human Buccal Mucosa (HBM) and Human Gingival Mucosa (HGM) gene-sets, were combined. Principal component analysis was performed to check the tissue/gene distribution with FDR ≤ 0.05, a correlation threshold of 0.7 and a fold change threshold of 2. In the tool, PCA is calculated using the Singular Value Decomposition (SVD) method, which generates eigenvectors for rows as well as columns of the log-transformed data matrix. [17] For plotting of tissues and genes (biplot), column projections were used, as biplots are helpful for visually exploring associations between genes and tissues. A pairwise comparison with ANOVA was employed to compare HBM and HGM. The quality assessment measurement of the samples within the pooled datasets was the standard deviation (SD) value, and the criterion was SD ≤ 0.3. The correlation of gene expression for housekeeping genes was established at >0.5. The false discovery rate (FDR), based on the Benjamini-Hochberg method, was ≤ 0.05, and the fold change threshold was fixed at 2. A scatter plot was made to identify the over-expressed and under-expressed genes in HBM as compared to HGM. Only genes that had a gene symbol were considered for further study.

Gene-set enrichment analysis of up/down-regulated genes

The listed genes were then compared with the public human Gene Ontology gene set with functional roles of 9606 genes. [18,19] An FDR ≤ 0.05, a fold-enrichment threshold of 2 and a minimum overlap of 5 genes in a biological process were kept as standard norms. For this purpose, the ExAtlas tool employs Parametric Analysis of Gene Enrichment (PAGE), as it is simple and reliable. [20,21]

Differential expression of epithelial keratinization related genes

The genes associated with epithelial keratinization were collected from gene ontology (GO: 0031424; http://amigo.geneontology.org/amigo/term/GO:0031424).
The differentially expressed genes from section 2.2 were assessed against this list.

Differential expression of housekeeping genes

From a database of human housekeeping genes (HKG-proteomics) (https://www.proteinatlas.org/humanproteome/tissue/housekeeping), the results of section 2.2 were compared and the housekeeping genes that were differentially expressed between HBM and HGM were tabulated. [22]

2.6 Differential expression of extracellular matrix related genes [ECMRGs]

The list of over- and under-expressed genes of HBM as compared to HGM was searched for extracellular matrix (ECM) genes and annotated according to matrisome divisions (core matrisome or matrisome-associated) and categories (ECM glycoproteins, collagens and proteoglycans for core matrisome genes; ECM-affiliated, ECM regulators and secreted factors for matrisome-associated genes), along the lines proposed by Naba et al., using the MatrisomeAnnotator tool at http://matrisomeproject.mit.edu/analytical-tools/matrisomeannotator/ [23]

Differential expression of epithelial-mesenchymal transition related genes [EMTRGs]

From the database of human epithelial-mesenchymal transition (EMT) related genes, the results from section 2.2 were compared and the differentially expressed EMT genes were listed. [24]

Differential expression of fibroblast markers

The list of fibroblast markers of heterogeneity, as revealed by recent single-cell analysis through cell identification and discrimination, was collected from previously published literature. [25] A comparison of the same was performed based on the ANOVA results and the overlap tabulated along with rank.

Microarray datasets

From the publicly available database, in early March 2021, a total of 4 datasets (GSE7307, n=4; GSE3526, n=4; GSE17913, n=40) with non-smoker HBM and 5 datasets (GSE106090, n=6; GSE10334, n=64; GSE4250, n=2; GSE3374, n=8; GSE23586, n=3) with healthy HGM were collected and processed in the ExAtlas software as outlined. In all, there were 48 HBM and 83 HGM gene-sets. All these gene-sets were subjected to the quality assessment procedure as previously outlined. Those sets whose housekeeping genes did not fulfil the required minimum quality parameters of correlation or SD were removed from further analysis. After removal of non-qualifying datasets, a total of 40 HBM and 64 HGM gene-sets formed the final study group. The final study group gene-sets had used the GPL570 platform to analyse the array.

Gene-set enrichment analysis of up/down-regulated genes

The over/under-expressed gene enrichment analysis using Gene Ontology is shown in supplementary file-1 (Tables 1, 2). There were 217 biological processes in the over-expressed genes. The commonly involved GO processes (>15% of cluster) and their cluster frequency

Differential expression of epithelial keratinization related genes

In all there were 227 genes associated with epithelial keratinization. Of these, 13 genes were over-expressed and 25 under-expressed in HBM as compared to HGM. Combined, these 38 genes accounted for 16.74% of all keratinization-associated genes.

Differential expression of ECMRGs

There are 847 ECMRGs identified and reported. [26] The identified over/under-expressed genes were subjected to matrisome analysis. This accounts for 23.73% of the ECM genes. The EMT-related genes in the present study accounted for 24.49% of all known EMT genes.

Differential expression of fibroblast markers

Of

Overall difference between gene expression of HBM and HGM

Among all the significant genes in this study, 1-in-5 significantly differed between HBM and HGM.
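Counts such as these follow from the thresholds described in the Methods: FDR ≤ 0.05 by the Benjamini-Hochberg procedure and an absolute fold change of at least 2 on log2-scaled data. The sketch below shows one way such a filter could be implemented. The expression matrix is randomly generated placeholder data, and the two-sample t-test stands in for the ANOVA used by ExAtlas (the two are equivalent for two groups); nothing here reproduces the actual GEO analysis.

import numpy as np
import pandas as pd
from scipy import stats

def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]   # enforce monotonicity
    adjusted = np.empty(n)
    adjusted[order] = np.clip(ranked, 0.0, 1.0)
    return adjusted

# Placeholder log2 expression matrices: 1000 genes, 40 HBM and 64 HGM arrays.
rng = np.random.default_rng(0)
genes = [f"GENE{i:04d}" for i in range(1000)]
hbm = pd.DataFrame(rng.normal(8.0, 1.0, (1000, 40)), index=genes)
hgm = pd.DataFrame(rng.normal(8.0, 1.0, (1000, 64)), index=genes)
hbm.iloc[:50] += 2.0   # spike in some genuinely over-expressed genes

t_stat, p_val = stats.ttest_ind(hbm.values, hgm.values, axis=1)  # per-gene two-group test
log2_fc = hbm.mean(axis=1) - hgm.mean(axis=1)                    # log2 fold change, HBM vs HGM

result = pd.DataFrame({"log2_fc": log2_fc, "fdr": bh_fdr(p_val)}, index=genes)
de = result[(result["fdr"] <= 0.05) & (result["log2_fc"].abs() >= 1.0)]  # |FC| >= 2
print(f"{len(de)} differentially expressed genes "
      f"({(de['log2_fc'] > 0).sum()} over-expressed, {(de['log2_fc'] < 0).sum()} under-expressed in HBM)")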
For the keratinization genes, 1 in 6 differed. One of every 5 HKG-proteomics genes differed between HBM and HGM, while this ratio was 1 in 4 for all ECMRGs and EMTRGs. (Figure-XX)

DISCUSSION

Several studies have reported spatio-temporal gene expression differences among various human tissues, organs and even different parts of the same organ, including skin, adipose tissue, brain, cornea and epididymis. [27][28][29][30][31][32] The role of heterogeneity of gene expression of fibroblasts of various sites and its implication for human diseases have been reported. [33] Studies exploring the heterogeneity of the various intra-oral sites in terms of pH, histo-cytoarchitecture, immune-topology and gene expression are limited. [7-16, 34,35] Existing studies have reflected the inherent differences between the various intra-oral sites and even within the same site, such as the gingiva and periodontium. [9,10,11] The spatio-temporal heterogeneity of the gingiva at the single-cell level has recently been described, widening our understanding of the biology of these intra-oral sites. [36] Intra-oral site-specific gene expression differences in pathologic processes have also been previously reported. [36,37] However, the innate difference between the gene expression of whole HBM and HGM in a normal, non-diseased population has not been reported. Hence this study was attempted.

The present study reveals the existence of a substantial difference between HBM and HGM in terms of gene expression. The figure-xx shows the extent of the difference. The difference ranges from basic housekeeping genes and the keratinization process to complex ECMRGs and EMTRGs. This inherent difference in gene expression may have ramifications and may partially account for the site predilection of pathologies. [37,38] In addition to differential expression, the net difference in key biological reactions between HBM and HGM has been evidenced by changes in the KEGG pathways. The alterations could be reflected in the molecular profile of the cells of HBM and HGM.

For typical biomedical research involving molecular techniques, the need for positive and negative controls is mandatory. There are guidelines for the use of such procedure-related control tissues and a mandatory reporting format. [39] However, to the best of our knowledge, there are only a few suggestions and deliberations for "normal" tissues that are involved as an experimental control arm, especially in oncology. [40] Ethically, ideal normal tissues are difficult to acquire. [41][42][43] Most of the bioethical guidelines in force advise against incising/excising or enlarging into non-lesional areas for the exclusive purpose of conducting research. [42] Generally, a normal tissue that is trimmed/excised for approximation after unrelated surgical treatment, after voluntary consent, is used as normal tissue. [43] This "uninvolved matching" tissue may not be a true representation of native, normal tissue, because molecular changes may have occurred under the influence of adjacent infection, inflammation or an adjacent neoplastic process ("condemned mucosa"). When "normal" tissue is required as a control arm, it becomes pertinent to specify what tissues constitute "normal" and to justify that such a tissue (if from a diseased entity) would not have altered molecular signals that could adversely influence the outcome of the comparison. [41] The influence of age, gender and habits also needs to be accounted for.
Apparently clinically normal tissues could harbour mutations and molecular changes that could potentially influence the outcome of comparisons. [40] Hence apparently healthy tissues should be used with caution. Some investigators request normal tissues from trauma cases or the edges of chronic infection, and even from preserved anatomic specimens. Wound approximation edges from trauma cases may be contaminated and, if late, affected by the inflammatory process. Similarly, chronic inflammation could alter the molecular nature of the native tissues. It may not be ethical to obtain tissues from autopsies or stored specimens because these may be taken without consent. [41] There have been attempts to elucidate normal human differential protein expression in various tissues. Even in these studies, different intra-oral sites have not been evaluated, except for the tongue. [44,45] Cancer processes per se are known to down-regulate tissue-specific genes and are directly associated with prognosis. While studying differences between the same type of cancer (for example oral squamous cell carcinoma) at different sites (of the oral cavity), the inherent difference between the normal tissues would be highlighted more than differences that are independent of normal tissue physiology. [46] If not accounted for, this phenomenon may lead to erroneous conclusions. In studies involving human oral diseases, the normal, non-diseased control tissues are often obtained from gingival tissues or from the retromolar tissue, an area where HBM and HGM meet. In such instances, with chronic exposure to pro-inflammatory and pro-fibrotic cytokines, as in inflammation, the connective tissue and epithelium may be irreversibly damaged. As a part of the reparative mechanism, there could be a cascade of triggered epigenetic modifications and activation of related genes, leading to their engagement in further differentiation and fibrosis development. Depending on the type of cells, the reaction could also vary widely. There are gaps in knowledge regarding this complex mechanism at the cellular and molecular levels. [47] The biological niches in the HBM and HGM are different. In addition, the normal tissue-resident microbial flora could influence gene expression; there are recent reports to this effect from cancer and normal tissues. [48,49] Against such a background, the inherent geno-typological difference between HBM and HGM, as evidenced in this study, assumes an important role. Limitations of the study include the use of samples from different studies and populations, though this difference has been accounted for in the analysis; the samples came from several studies but had used one platform; stringent quality control parameters were enforced; and age, gender, and deleterious habits were not considered. Future studies need to account for the same.

CONCLUSION
An attempt was made to characterize the gene expression of HGM and HBM using pre-existing datasets and employing a stringent statistical approach. The study identified that 1-in-5 genes significantly differed between HBM and HGM. Of the genes responsible for the keratinization process, 1-in-6 differed. Similarly, housekeeping genes and genes associated with the extracellular matrix and epithelial-mesenchymal transition were also significantly altered.
Knowledge, awareness and attitude towards dengue fever outbreaks in the summer

Dengue fever is a fatal viral infection that results in up to 24,000 deaths every year. It is estimated that about 50-100 million cases of dengue fever and around 0.5 million cases of hemorrhagic dengue fever occur annually. Dengue fever is endemic in 112 countries, especially in Asia, and these countries are inhabited by more than one half of the world population. This means that at least half of the world population is at risk of developing dengue fever or hemorrhagic dengue fever, which carry significant morbidity and mortality. Dengue fever is a mosquito-borne viral infection caused by the dengue virus. It presents clinically with headache, high fever, myalgia, joint pain, vomiting, and a characteristic skin rash. These symptoms occur after an incubation period of a few days or weeks following the mosquito bite. In most cases (85%), the condition is mild or even asymptomatic and recovers completely within a week. However, a small proportion of cases (about 5%) progress to a dangerous life-threatening phase where plasma leakage occurs across the blood vessel wall, leading to hypotension and shock, or severe fatal bleeding takes place. Each year, 10,000 to 20,000 patients die with dengue shock syndrome and dengue hemorrhagic fever. [5][6][7] Despite the magnitude of the problem, dengue fever is a preventable disease. Prevention can be carried out by elimination of resident mosquitoes, vaccination of vulnerable individuals, and regular health education, particularly during outbreaks. [8][9][10] This article will review the knowledge, awareness, and attitudes among different countries towards dengue fever outbreaks in the summer.

KNOWLEDGE, AWARENESS AND ATTITUDE TOWARDS DENGUE FEVER OUTBREAKS IN THE SUMMER
Because dengue fever is a preventable disease despite being fatal, it has become a public health problem that has attracted the attention of many health organizations worldwide. Three main lines of prevention are often applied: health education, mosquito elimination, and vaccination. The focus of this article will be the impact of the health education line on dengue fever awareness among populations. Many authors have carried out studies to explore the knowledge, understanding, awareness, and attitudes of the general population towards dengue fever, and the impact of media and healthcare education programs on their knowledge. The general population in different countries seemed to have heard about dengue fever. However, sufficient correct knowledge about the disease nature, mode of transmission, symptoms and prevention was very poor. Many factors were reported to be associated with good knowledge, such as the level of education, computer literacy, gender, and socioeconomic class. Itrat et al conducted a cross-sectional study on 447 individuals attending tertiary healthcare hospitals in Pakistan to assess their awareness, knowledge, and attitudes towards dengue fever. 11 They reported that despite the high number of individuals (~90%) who had heard of the disease, only about one third (38.5%) had good knowledge of it. Level of education was the main determinant of knowledge about dengue fever, with literate individuals better informed about the disease than illiterates (P<0.001). Most of the individuals stated that television was their source of education. Overall, knowledge about dengue fever in Pakistan was poor.
Similarly, Chellaiyan et al studied the knowledge about dengue fever among rural inhabitants in India via a cross-sectional study. 12 Among the 224 interviewed participants, 94% reported they had heard about the disease. However, when asked about the details of the disease, only 50% could correctly identify the disease symptoms, 40% knew about the breeding and biting habits of the mosquito, and 89% knew that the Aedes mosquito is the transmitting mosquito. The importance of the internet in raising public awareness of dengue fever has also been emphasized. Furthermore, Soodsada et al, in their cross-sectional study over 9 villages in Laos in 2006, found that over 70% knew about the mosquito transmitting dengue fever, and almost half of the participants had their information from relatives and friends. 13 Computer literacy was another factor reported to affect the knowledge and awareness of the public about dengue fever. Nyaar et al, in a cross-sectional study conducted in India using interviews with 374 students at bachelor and master levels to explore their knowledge about dengue fever, reported that knowledge about dengue fever was significantly higher among the IT students in comparison with other departments (p<0.001). 14 Gender was also reported to affect knowledge and awareness about dengue fever. In Sindh, Pakistan, females had a better knowledge about the Aedes mosquito than males, with values of 62.5% and 37.5%, respectively (p<0.001). 15 Similar results were reported in a study conducted in Azad Kashmir, Pakistan in 2016, where females were 2.2 times more aware of the disease than males. 16 Socioeconomic status was a fourth factor reported to influence population awareness about dengue fever. Syed et al reported a statistically significant difference between high socioeconomic and low socioeconomic classes as regards knowledge and awareness about the disease in Pakistan. Knowledge and awareness were also reported to be poor among populations living in rural areas. 12 In contrast, other authors found no significant difference across socioeconomic status. 14,17 The sources of knowledge about dengue fever varied among studies. Television and the internet were reported to be the most common sources of information in most of the studies, whereas neighbors, relatives, friends, and sometimes healthcare professionals were less commonly reported. 12,14,17,18 As regards the attitudes and the preventive measures adopted by individuals to protect themselves against dengue fever, different researchers reported variable data. Itrat et al reported that the vast majority of their participants from Pakistan believed that the use of anti-mosquito spray for prevention of mosquito bites is the main preventive technique. 11 Only 17.3% knew that eradication of mosquitoes is the main preventive method. Chellaiyan et al stated that two thirds (63.4%) of their participants used mosquito coils as preventive measures against dengue fever, about 15% used mosquito nets, and up to one fourth (24.1%) did not use any protective techniques against mosquito bites. 12 Malhotra et al reported similar results; they stated that the most common preventive methods used in India were liquid vaporizers, mosquito coils, and health education. 19 Positive attitudes towards dengue fever were reported in many published studies.
For instance, Soodsada et al stated that almost 95% of participants had a positive attitude that the disease can be treated and that patients should seek medical advice when they experience the symptoms. 13 However, proper water storage methods were poorly adopted. Soodsada et al found that over 85% of participants stored water at home for domestic use and did not change it frequently. 13 Many researchers attribute the negative attitudes and the non-adoption of effective preventive strategies to poor knowledge about these strategies; for example, only 25% of community participants studied in South India were aware that the Aedes mosquitoes breed in clean water. 20 In contrast, other authors noted that despite good knowledge and awareness about dengue fever, no positive attitudes were undertaken to prevent or treat the disease early. 17 Amrit et al reported that despite good knowledge about the mosquito, the disease symptoms, and the severity of the disease, almost 90% of participants did not get rid of stagnant water surrounding their homes to eliminate mosquitoes. 18

CONCLUSION
In spite of the severity and fatality of dengue fever and its heavy endemicity across many countries, knowledge about the disease seems to be poor. More than half of the world population are vulnerable candidates for dengue virus infection, and therefore major efforts have been exerted to improve knowledge and understanding about the disease worldwide. In this article, various published studies that evaluated the impact of healthcare education on population awareness, knowledge and attitudes towards dengue fever were reviewed. Though the vast majority of the studied participants confirmed that they had heard about the disease, sufficient knowledge was poor. Knowledge and awareness varied among different studies, and some factors were reported to influence this knowledge, such as age, gender, socioeconomic status, level of education, and computer literacy. The attitudes also differed among the studies and did not seem to be correlated with the population's knowledge about the disease. A large proportion of the population either adopted insufficient measures for prevention of mosquito bites or did not try to actively prevent the disease. Mosquito vaporizers, coils, and nets were the most common preventive methods used among the studies. Few participants understood that mosquito elimination is more effective, and fewer reported that they change or get rid of stagnant water, which constitutes a rich environment for vector breeding. Television, the internet, friends, neighbors, and relatives were the most common sources of information about dengue fever reported in most of the reviewed studies. It is evident that public healthcare organizations urgently need to improve public awareness about dengue fever through television, the internet, radio, medical programs, and school and college health education sessions.
Physico-chemical and sensory evaluation of virgin olive oils from several Algerian olive-growing regions

Olive cultivar diversity is rich in Algeria, but most cultivars remain unexplored in terms of quality traits. This work aimed to evaluate the physicochemical and organoleptic quality of twenty olive oil samples belonging to four Algerian cultivars (Chemlal, Sigoise, Ronde de Miliana and Rougette de Mitidja) collected throughout the national territory. Physico-chemical and sensory results showed that 60% of the oils belong to the extra virgin category, while 40% were classified as "virgin olive oil". The results of the principal component analysis (PCA) revealed a great variability in fatty acid composition between the samples depending on the cultivar and origin. Oleic acid was the most abundant fatty acid and varied between 64.84 and 80.14%. Extra virgin olive oils with quality attributes are eligible for a label. Rougette de Mitidja, Ronde de Miliana and Sigoise from Oran showed great potential.

Introduction
The olive tree (Olea europaea L.) is one of the most prominent fruit trees in Mediterranean countries, accounting for 97.9% of the total area of olive trees in the world (Rallo et al., 2018). More than 11 million ha of olive trees are grown in more than 47 countries, mainly in Spain (25%), Tunisia (13%), Italy (11%), Morocco (10%) and Greece (9%) (FAOSTAT, 2018). Algeria is one of the main producers of olives; the olive has been ranked as the country's first fruit tree (Algerian Ministry of Agriculture). The area covered by olive trees has increased from 310 644 ha in 2010 (M.A.D.R, 2010) to 500 000 ha in 2018 (M.A.D.R.P, 2018). The olive oil sector is increasingly considered a driver of economic and social development in diverse regions, including Kabylia and Oranía. During the 2017/2018 campaign, Algeria produced 80 000 tons, ranking ninth worldwide. Virgin olive oil (VOO), extracted from fresh and undamaged fruits (Olea europaea L.) and appropriately processed, is a staple of the Mediterranean diet (Bedbabis et al., 2010). Olive oil is widely appreciated by consumers (Caporale et al., 2006) for its sensory properties (Angerosa et al., 2004; Bendini et al., 2007) and its nutritional and health benefits (Visioli and Galli, 1998). Several studies have highlighted the effectiveness of the bioactive compounds from olive oil in the prevention and treatment of various chronic diseases, such as atherosclerosis, diabetes, cancer, neurodegenerative diseases, and coronary heart disease (Lou-Bonafonte et al., 2012; Visioli et al., 2018). Olive oil quality is assessed against international standards defined by several organizations (European Commission Regulation, 2016; Codex Alimentarius, 2017; IOC, 2018c; Conte et al., 2020). Olive oil quality assessment is performed using physicochemical parameters, including free acidity, peroxide value, specific UV spectrophotometric absorptions (K232 and K270), and sensory evaluation (Gharbi et al., 2015). Olive oil quality and nutritional and sensory characteristics are associated with the chemical composition, which is the result of a complex interaction between several environmental, agronomic, and technological factors (El Riachy et al., 2018).
The cultivar, growing area, environmental conditions, soil, tree age, irrigation, fruit ripening, harvest time, fruit storage and processing system are the main factors influencing the composition of olive oil (Gharbi et al., 2015; Ben Rached et al., 2017; Mele et al., 2018; Faci et al., 2021). Algeria has an important potential to develop olive oil production, which could place it among the main producing and exporting countries. The quality of olive oil has not consistently been a priority, with only 15% of the quantity produced belonging to the category of virgin and extra virgin oils. Currently, there is a renewed interest in the production of quality olive oil. The enhancement and improvement of olive oil require the study of its quality and chemical composition, which must comply with international standards. This work aimed to assess the quality of Algerian olive oil according to the cultivar and area of cultivation, in order to valorize the best ones.

Plant material and sampling
The present work was carried out on four cultivars of the local population, namely Chemlal, Sigoise, Ronde de Miliana and Rougette de Mitidja, from different olive-growing regions of the country. Chemlal is the most widely grown cultivar in Algeria; it represents 40% of the Algerian olive grove. The fruits are small (about 1.7 g) and used for oil production; the oil yield varies between 14 and 22%. Sigoise occupies 25% of the Algerian olive grove and is located in the western plains. This cultivar is mainly used for the production of table olives and is characterized by medium-sized fruits (3.5 g). Ronde de Miliana occupies a limited olive-growing area. It is a dual-use cultivar (table olive and olive oil); the oil yield is estimated between 16 and 20%. Rougette de Mitidja is a cultivar intended for the production of oil, with a yield varying from 18 to 20%. The twenty olive oil samples produced during the 2018-2019 olive-growing season were collected from various producers during the months of November-December 2018. The origin of the plant material and some agronomic conditions of each geographical area are shown in Table 1 and Figure 1. The olive trees had not undergone any chemical treatment. Extraction of olive oil was performed immediately after harvest. The quantity of olives used for extraction was equal to or greater than 300 kg. A modern (automatic) three-phase system with a hammer crusher, malaxer and centrifugation of the olive paste was used for the extraction of the oil. All samples were packed in amber glass bottles without headspace and stored at 4°C until analysis. The different analyses were performed during the first ten days of January 2019 for all the samples.

Sensory analysis
A sensory analysis (median of defects, median of fruitiness, and panel classification test) of the samples was performed by 10 selected and trained panelists (IOC-certified panel leader). The profile sheet was used according to the IOC Mario Solinas competition for extra virgin olive oils (T.30/Doc. No.21, March 2018). The strengths of the positive (fruity, bitter, and pungent) and negative (fusty, winy, musty, muddy, rancid, metallic, and others) attributes were evaluated for each oil sample on a 10-point scale. The sensory evaluation results were expressed as the median, considered valid when the coefficient of variation was less than 20% (IOC/T.20/Doc. No.15/Rev.10, 2018).
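As a rough illustration of this validity criterion, the panel median for an attribute can be computed alongside a dispersion check. A minimal sketch follows; the scores are invented, and a plain coefficient of variation is used here rather than the robust statistics prescribed by the IOC method.

```python
import numpy as np

def attribute_summary(scores, cv_threshold=20.0):
    """Median panel score and a simple coefficient-of-variation check.

    Note: the IOC method uses robust statistics; a plain CV is shown here only
    to illustrate the idea of a dispersion-based validity criterion.
    """
    scores = np.asarray(scores, dtype=float)
    median = np.median(scores)
    cv = 100.0 * scores.std(ddof=1) / scores.mean()
    return median, cv, cv < cv_threshold

# Hypothetical fruitiness scores from a 10-member panel (0-10 scale).
panel_scores = [6.5, 7.0, 6.0, 6.5, 7.5, 6.0, 7.0, 6.5, 6.0, 7.0]
median, cv, valid = attribute_summary(panel_scores)
print(f"median = {median:.1f}, CV = {cv:.1f}%, valid = {valid}")
```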
Colorimetric determination of total phenols
The total phenol content of the olive oil samples was determined by the Folin-Ciocalteu spectrophotometric method at 725 nm, using a gallic acid calibration curve (Gutfinger, 1981).

Fatty acid composition
The fatty acid composition of the oils was determined by gas chromatography (GC) as fatty acid methyl esters (FAMEs). FAMEs were prepared by cold transesterification with a methanolic solution of potassium hydroxide (2N) according to the method of European Commission implementing Regulation 2015/1833 amending Regulation (EEC) No. 2568/91. The solution containing the fatty acid methyl esters was injected into the gas chromatograph; a capillary column with a 50% cyanopropyl stationary phase (USA) was used. The column temperature was isothermal at 200°C and the injector and detector temperatures were 250°C. Nitrogen was used as carrier gas at a flow rate of 1 mL/min. Fatty acids were identified by comparison of their retention times with those of standard reference compounds. Analyses were performed in duplicate.

Statistical analysis
All analyses were performed in triplicate, except for the fatty acid composition, which was performed in duplicate; the results are presented as mean values. A simple analysis of variance was conducted according to the LSD test at the 5% threshold with GenStat Discovery Edition software. P ≥ 0.05 indicates that the effect is considered non-significant. The correlation between the different analytical parameters was treated by principal component analysis (PCA) to highlight associations between individuals or links between variables. The data were statistically processed with the Statistica software, version 12.5.

Oil content
The oil yield of the olives from the different samples is shown in Table 1. There are differences between the oil contents of the different cultivars. Chemlal from Djelfa (C5) has the lowest percentage (14%) and Sigoise from Oran (S3) has the highest percentage (22%). Al-Ruqaie et al. (2016) and Giuffrè (2017) showed that the cultivar significantly influences the oil content when samples come from the same harvest area. The geographic origin of the samples also affects the oil content. For the same Chemlal cultivar, the yield varies from 14% for the sample from Djelfa (C5) to 22% for the sample from Bouira (C6). The same observation is noted for the cultivar Sigoise, whose yield varies from 15% (S1 from Blida) to 22% (S3 from Oran). The results are in agreement with those of Navas-López et al. (2020), who noted that genotype, environment and their interaction significantly affect the amount of oil.

Quality parameters
The parameters of the studied olive oils (Tab. 2) were within the limits of the European Commission implementing Regulation 2016/1227 amending Regulation (EEC) No. 2568/91 and IOC (2018b), and therefore the oils could be classified into the category of extra virgin olive oil, except the samples of Chemlal from Tizi Ouzou (C2a and C2b) and Sigoise from Ain-Temouchent (S2) and Djelfa (S4). The region and cultivar had a highly significant effect (P < 0.001) on the free acidity, the peroxide index, the K270 and the water content, and a significant effect (P < 0.01) on the K232. Large variations in quality indices according to the cultivar have been reported (Piscopo et al., 2016). Significant differences were observed in most of the quality parameters between the extra virgin olive oils of the "Sarıulak" olive cultivar produced in three different locations (Antalya, Karaman, Mersin) in the southern region of Turkey (Arslan et al., 2013).
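The total-phenol quantification described in the methods above rests on a gallic acid calibration line; a minimal sketch of that step is given below. The standard concentrations and absorbances are placeholders, not values from this study, and the final conversion to mg/kg of oil depends on the extraction volumes actually used.

```python
import numpy as np

# Gallic acid standards: concentration (mg/L) versus absorbance at 725 nm.
# Values are illustrative placeholders, not data from the study.
conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
absorbance = np.array([0.02, 0.11, 0.21, 0.40, 0.79])

slope, intercept = np.polyfit(conc, absorbance, 1)  # linear model A = slope*c + intercept

def gallic_acid_equivalents(sample_absorbance):
    """Interpolate a sample absorbance on the calibration line (mg/L GAE)."""
    return (sample_absorbance - intercept) / slope

# Example: an extract reading A = 0.35 at 725 nm.
c_extract = gallic_acid_equivalents(0.35)
print(f"{c_extract:.1f} mg/L GAE in the extract")
# The result is then scaled by extract volume, dilution and oil mass to give mg/kg oil.
```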
The geographic area and environmental factors influence the characteristics of Arbequina oil (Borges et al., 2017).

Physical indices
The physical characteristics of all samples are presented in Table 2. The refractive index increases with the degree of unsaturation of the fatty acids; indeed, samples with higher unsaturation showed a higher refractive index. Our results are within the limits [1.468-1.470] set by the Codex Alimentarius (2017) standards for extra virgin oils. Density (MV) depends on the chemical composition of the oil and on temperature. Our results are in the range [0.910-0.916] of the Codex Alimentarius (2017). A highly significant difference (P < 0.001) was observed between the samples for both refractive index and density.

Sensory analysis
The flavor and taste of olive oil are strongly correlated with its quality level and nutritional value. Sensory analysis of olive oils, based on the median values of the positive and negative attributes assessed by a panel of trained tasters, is one of the most important evaluations used for virgin olive oil quality classification (Fernandes et al., 2018). First, the sensory features of the olive oils were analyzed and the oils with defects were discarded. Twelve samples belonging to the extra virgin olive oil category were selected for sensory analyses (Tab. 3). The results of the organoleptic analysis of the extra virgin oils, presented in Table 4, showed great diversity in their sensory profiles. Bitterness intensity ranged from 1 to 2.5 and pungency from 1 to 2.5; these attributes are scored on a scale of 1 to 3. The type of green fruitiness was the same for the different oils, but it was nuanced by the intensity, which varies from 6 to 9. The results (Tabs. 3 and 4) indicated that the organoleptic note varies from one cultivar to another; the oils of the Sigoise cultivar from Blida (S1) and Aïn-Temouchent (S2) were characterized by a fruity taste and a higher spicy median than the other varieties. The oil of the Rougette de Mitidja cultivar from Blida (R1), with a very local diffusion, obtained a high score of 83.0, the best of all the oils. It also showed remarkable homogeneity as well as balanced attributes of intense fruitiness (9.0), spiciness (2.0) and bitterness (2.0). The olive cultivar has been reported as one of the most crucial factors responsible for olive oil flavor variability (Tura et al., 2008; Mele et al., 2018). It is well established that the organoleptic properties of olive oil, which are strongly correlated with its geographical and varietal origins, are behind its wide commercialization and elevated market value (Rodney et al., 2010; Borges et al., 2017). Samples of the Chemlal cultivar, the most abundant olive cultivar in Algeria (40%), from different regions showed a remarkable difference in their sensory characteristics, varying from ripe fruity with spicy and slightly bitter traits to medium green fruity with more pronounced fruit intensity and stronger spicy and bitter attributes. The same observation was noted for samples of the Sigoise cultivar, whose oils have different sensory characteristics. The cultivar factor has a very significant effect on the sensory attributes. The aromatic notes depend on the volatile and phenolic compounds. The volatile composition of olive oil depends on several factors, such as the levels and activity of the enzymes involved in different pathways (Angerosa, 2002). Results are expressed as mean ± SD of three sample replicates.
Significance level: ***P < 0.001; **P < 0.01; *P < 0.05; NS = not significant. Free acidity: P < 0.001; peroxide value: P < 0.001; K232: P < 0.05; K270: P < 0.001; H%: P < 0.001; density: P < 0.001; nD20: P < 0.001; TP: P < 0.001. Fruitiness of the selected oils (Table 3): S1, medium green fruity; R1, intense green fruity; C1, ripe fruity; C3, ripe fruity; C5, medium green fruity; Rm1, medium green fruity; S2, intense green fruity; SC1, light green fruity; SC2, ripe fruity; C7, light green fruity; C8, medium green fruity; C9, medium green fruity; defective oils were described as fusty and musty.

Total phenols
The quality of olive oil is significantly related to its polyphenol content (Benlemlih and Ghanam, 2016). The total phenol content of the olive oils ranged from 46.29 to 351.45 mg/kg (Tab. 2). A highly significant difference (P < 0.001) between the samples from different regions and varieties was noted. These results are lower than those recorded for extra virgin olive oils of Algerian varieties (Douzane et al., 2013). These low concentrations could be attributed to the olive oil extraction process: most mills use the conventional three-phase centrifuge, in which a quantity of water is added during extraction, leading to losses of phenolic compounds, vitamins, and aromatic components. It was reported that the two-phase decanter provides an oil richer in total polyphenols and ortho-diphenols, with lower acidity and better organoleptic quality, compared to the three-phase press (Del Caro et al., 2006). Besides the cultivar, the degree of ripening of the olives and the environmental conditions influence the concentration of phenolic compounds (Škevin et al., 2003). A negative correlation (r = -0.88) between drupe ripening degree and phenolic content of VOOs was found (Rotondi et al., 2004). According to Borges et al. (2017), significant variations (P < 0.05) in the concentration of minor bioactive constituents were observed among oils from different growing areas, not only between Spanish and Brazilian samples but even within areas of the same country for the Arbequina cultivar.

Fatty acid composition
The fatty acid composition is shown in Table 5. The most representative fatty acids are oleic, linoleic, palmitic, and stearic acids, which together represent 98% of the total fatty acids. Oleic acid was the most abundant and varied between 64.84 and 80.00%. The large-fruited varieties showed the highest amounts of oleic acid (Sigoise S1: 75%; Ronde de Miliana Rm: 73.88%; Rougette de Mitidja R2: 80.14%) and lower levels were found in the small-fruited variety (Chemlal: 67%), a result similar to that reported by Faci et al. (2021), who studied Chemlal of Tizi-Ouzou. These results corroborate those obtained previously by Douzane (2012) for the same varieties: Sigoise (71.40%), Ronde de Miliana (80.18%) and Rougette de Mitidja (74.80%), collected from the olive production station (Sidi Aich, southern Bejaia). Lower percentages were noted for the Chemlal cultivar produced in several regions (Douzane, 2012). The Algerian Chemlal cultivar presents a composition very close to the Tunisian cultivar "Chemlali", which contains low oleic acid and high concentrations of linoleic and palmitic acids (Manai et al., 2007). Linoleic acid, the dominant polyunsaturated fatty acid, ranged from 6.80% to 11.70%, while linolenic acid (C18:3) was the minor one and ranged from 0.56% to 0.77%. The levels of linoleic acid are inversely related to oleic acid. According to Sánchez Casas et al. (1999), for better preservation of oil quality, the linoleic acid content should not exceed 10%.
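The fatty acid class sums (SFA, MUFA, PUFA) and the ratios discussed in the following paragraphs can be derived directly from a GC area-percent profile. A minimal sketch follows, using an invented profile rather than any sample from Table 5.

```python
# Sketch: derive SFA/MUFA/PUFA sums and the oleic/linoleic ratio from a
# fatty-acid profile (area %). The profile below is illustrative only.
profile = {          # fatty acid: area %
    "C16:0": 12.0,   # palmitic (saturated)
    "C18:0": 2.5,    # stearic (saturated)
    "C18:1": 72.0,   # oleic (monounsaturated)
    "C18:2": 9.5,    # linoleic (polyunsaturated)
    "C18:3": 0.6,    # linolenic (polyunsaturated)
}

SFA = {"C16:0", "C18:0"}
MUFA = {"C18:1"}
PUFA = {"C18:2", "C18:3"}

sfa = sum(v for k, v in profile.items() if k in SFA)
mufa = sum(v for k, v in profile.items() if k in MUFA)
pufa = sum(v for k, v in profile.items() if k in PUFA)

print(f"SFA {sfa:.1f}%, MUFA {mufa:.1f}%, PUFA {pufa:.1f}%")
print(f"MUFA/PUFA = {mufa / pufa:.2f}, oleic/linoleic = {profile['C18:1'] / profile['C18:2']:.2f}")
print(f"PUFA/SFA = {pufa / sfa:.2f}")
```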
Lower levels of oxidative stability have been noted in oils containing high proportions of linoleic acid (Pardo et al., 2021). Also, Montaño et al. (2016) noted that oxidative stability was clearly correlated positively with oleic acid (R = 0.688) and negatively with linoleic acid (R = -0.710). Palmitic acid, the major saturated fatty acid, showed an important variation between samples: it ranged from 8.62% in the Rougette de Mitidja cultivar (R2) to 17.57% in the Chemlal cultivar (C5). All samples of Chemlal had high values (above 14%). The other varieties presented intermediate values (13.24% for the Ronde de Miliana cultivar (Rm1), and between 10.62 and 13.21% for Sigoise from different origins). Statistical analysis revealed a highly significant difference (P < 0.001) between the samples of the different cultivars. The fatty acid values comply with the limits required by the European Commission implementing Regulation (2016/1227) and the International Olive Oil Council method (IOC, 2018a) on the characteristics of olive oils. The percentages of saturated fatty acids (SFA), monounsaturated fatty acids (MUFA), and polyunsaturated fatty acids (PUFA) and the ratios of PUFA/SFA and oleic/linoleic acids in all samples were also evaluated. It was observed that the oleic acid content, the MUFA/PUFA ratio, and the oleic/linoleic ratio of the varieties with large to medium fruits (Sigoise, Ronde de Miliana, and Rougette de Mitidja) are the highest and are clearly distinguished from those of the small-fruited "population cultivar" (Chemlal) (Tab. 5). Some results on the Chemlal cultivar are comparable to those observed by Louadj and Giuffrè (2010); however, our samples were characterized by a broad variation in the parameters mentioned above, probably due to the wide range of geographical origins. The results obtained in this work showed that the distribution of fatty acids is greatly influenced by the olive cultivar. They agree with those of other authors (Cicatelli et al., 2013; Fuentes et al., 2015; Cherfaoui et al., 2018). However, many researchers have shown that geographical location also has a significant effect. Palmitic acid, oleic acid, linoleic acid, and wax content were found to be significantly affected by the growing region for some cultivars (Rodney et al., 2010). Noorali et al. (2014) noted that fatty acid profiles could be used to separate olive samples according to their cultivar and growing area; the fatty acid profile was significantly affected by the growing area for almost all cultivars.

Multidimensional analysis
Comparing the correlation circle of the variables with the projection of the individuals (Fig. 2) allowed the samples to be characterized according to their physicochemical traits in the factorial plane. Axis 2 showed that the Chemlal samples formed two subgroups, which may be explained by the existence of subtypes within the Chemlal cultivar, known as Chemlal of Tizi-Ouzou, precocious Chemlal of Tazmalt, Chemlal of Oued-Aissa, and Ali-Sharif's white Chemlal. The most abundant olive cultivar in Algeria, Chemlal, dominates Algerian olive orchards (40%), probably owing to its unique adaptation to various pedoclimatic conditions (Haddadi and Yakoub-Bougdal, 2010). The Chemlal cultivar could be dispersed as a clone.
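A principal component analysis of this kind, with the oil samples as individuals and the standardized physicochemical variables as columns, can be sketched as follows. The data matrix is a random placeholder with the same number of samples as the study (20), not the measured values.

```python
# Sketch of a PCA on standardized physicochemical variables, used above to
# project the oil samples and inspect the correlation circle.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))          # 20 oils x 8 variables (e.g. acidity, K232, C18:1, ...)

X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(X_std)

scores = pca.transform(X_std)          # projection of the individuals (Fig. 2-style plot)
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)  # correlation-circle coordinates

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
print("first sample scores:", np.round(scores[0], 2))
```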
The medium-fruited cultivar Ronde de Miliana (Rm1) formed a single group, suggesting the absence of a parental link with the other varieties; the same result was noted previously by Douzane (2012). The large-fruited cultivars (Sigoise and Rougette de Mitidja) formed distinct groups. The heterogeneity of Sigoise could be connected to the existence of a mixture within the producers' orchards. The projection of the individuals allows the different groups to be distinguished (Fig. 2). This structural characterization was made according to the physicochemical analyses. The concept of quality in olives is complex, and numerous traits might be considered. Different definitions may apply according to the point of view and final goal of producers, traders, consumers, and/or nutritionists. The olive oil composition of a cultivar results from a very complex multivariate interaction between the genotypic potential and the environmental, agronomic, and technological factors that characterize fruit growth and ripening as well as oil extraction and storage (Lavee and Wodner, 1991). Some olive varieties, like Chemlal, are diffused as a clone. No genetic variation therefore exists between the individuals, and all variations in the final product are consequently attributable to environmental and physical factors. The mechanisms underlying environmentally caused variations in oil composition are for the most part unknown. The results of previous studies (Ben Temime et al., 2006; Borges et al., 2017) have highlighted the effect of the cultivar-environment interaction on the expression of olive oil quality. The changes in oil composition of the Tunisian Chetoui cultivar according to the plantation origin were studied (Ben Temime et al., 2006); the results showed considerable variability in oil composition. The characteristics of the EVOO Arbequina from Brazil in comparison with Spanish Arbequina from different regions were recently studied (Borges et al., 2017); the findings reveal that geographic and climatic aspects of the producing areas may significantly influence the quality and composition of Arbequina olive oil.

Conclusion
The results showed that the cultivar plays an important role in the qualitative characteristics and sensory attributes of olive oils. The oil of the Rougette de Mitidja cultivar from Mitidja (R1) showed the best sensory and physicochemical results. It also exhibited the best levels of oleic acid, qualifying it as the best extra virgin olive oil in Algeria for the 2018/2019 olive season. Ronde de Miliana (Rm1) also showed great potential. Many other olive cultivars remain unexplored in terms of quality traits; more effort should be made to assess them. On the other hand, olive growers must be encouraged to promote the local olive-growing heritage by cultivating varieties approved by the National Center for Seed and Plant Control and Certification (CNCC).
A New Method to Reconstruct in 3D the Emission Position of the Prompt Gamma Rays following Proton Beam Irradiation A new technique for range verification in proton beam therapy has been developed. It is based on the detection of the prompt γ rays that are emitted naturally during the delivery of the treatment. A spectrometer comprising 16 LaBr3(Ce) detectors in a symmetrical configuration is employed to record the prompt γ rays emitted along the proton path. An algorithm has been developed that takes as inputs the LaBr3(Ce) detector signals and reconstructs the maximum γ-ray intensity peak position, in full 3 dimensions. For a spectrometer radius of 8 cm, which could accommodate a paediatric head and neck case, the prompt γ-ray origin can be determined from the width of the detected peak with a σ of 4.17 mm for a 180 MeV proton beam impinging a water phantom. For spectrometer radii of 15 and 25 cm to accommodate larger volumes this value increases to 5.65 and 6.36 mm. For a 8 cm radius, with a 5 and 10 mm undershoot, the σ is 4.31 and 5.47 mm. These uncertainties are comparable to the range uncertainties incorporated in treatment planning. This work represents the first step towards a new accurate, real-time, 3D range verification device for spot-scanning proton beam therapy. When compared to conventional x-ray therapy, proton beam therapy (PBT) offers substantial dosimetrical improvements. The depth-dose distribution of proton beams is characterised by a sharp distal fall-off, with the highest amount of energy deposited at the end of the track, in the Bragg peak. This feature is advantageous for cancer treatment: if the beam stops where the target is located, the tumour receives the maximum dose whilst the surrounding healthy tissues are spared 1 . At the moment of writing, many new PBT facilities are in the planning 2 or construction 3 stage. One problem that hinders the full exploitation of PBT is the uncertainty in the beam range. Range uncertainty is the uncertainty in the exact position of the distal fall-off of proton beams in biological tissues. Range uncertainty can cause a substantial underdosage of the target, failing the curative intent of the therapy, as well as an overdosage of the adjacent organs-at-risk, leading to unwanted toxicities 4 . In PBT, for non-moving targets, there are several sources of range uncertainty 5 . The most important are: computed tomography (CT) parameters [6][7][8] , mean ionisation and excitation values 9 and patient set-up 10 . Most of these uncertainties are initially taken into account in the treatment planning stage, by adding specific margins to the clinical tumour volume (CTV) or through incorporating uncertainty in the treatment planning optimisation, robust optimisation 11 . During fractionated treatments, anatomical changes could also impact the desired dose distribution [12][13][14] . The most typical anatomical changes are: body weight loss/gain or daily variations in the filling of internal cavities. These changes will be found by imaging during the course of the treatment and may require a plan adaptation. If the dose distribution is not modified in light of severe anatomical changes, the total treatment outcome can be compromised 15 . For this reason, the introduction of range verification during PBT delivery has potential to improve clinical outcomes. 
Anatomical changes could be detected through daily cone beam CT (CBCT) imaging; however, the use of CBCT in the adaptive process for protons is difficult, mainly because of the high uncertainty in dose calculation 16 . In contrast, a number of techniques unique to PBT have been proposed in the last decade for real-time range verification. They are based on the detection of the secondary radiation produced naturally during PBT through proton-nuclear inelastic reactions. These techniques provide in-situ range verification without any additional burden to the patient 1 .

Methods
3D position reconstruction method. 16O is one of the most abundant PG-ray emitters in human tissues. The technique developed in this work utilises the 2.741 MeV γ emission from the Iπ = 2− state to the Iπ = 3− state in 16O, followed by the emission of a 6.128 MeV γ-ray to the ground state (g.s.). A complete de-excitation decay scheme of 16O can be found in Tilley et al. 49 . The time difference between the two decays is ~25 ps 49 , which is short compared to the nominal time resolution of scintillator-type γ-ray spectroscopy detectors (~400-500 ps 50 ). Within the limitations of current spectroscopy detectors and electronic systems, these two γ de-excitations are effectively emitted simultaneously in time and position. The cross section for the 16O(p, p'γ) reactions of interest has a maximum, ~158 mb, at a proton energy of ~13 MeV. The average energy, 13.5 MeV, corresponds to a residual proton range of ~2.2 mm in water. Due to the coincidence requirement of the algorithm, the population of the 2− state, or above, is essential. For a proton energy of 14 MeV the cross sections of the two 16O(p, p'γ) transitions of interest have been compared, with the first being ~29% of the second. During the proton bombardment of human tissues, several 2.741 & 6.128 MeV γ-ray couples are produced by 16O de-excitation following inelastic nuclear reactions. Their simultaneous detection, within the timing resolution of the detection system, coupled with a reconstruction algorithm, allows the identification of the common emission point. The identification uncertainty is proportional to the uncertainties in the position and timing resolutions of the system. The PG-ray distribution has a maximum intensity located a few millimetres proximal to the Bragg peak. For a beam passing through homogeneous tissues with constant oxygen concentration 51 , the beam range can then be determined from the emission points of the detected 16O-induced γ-ray couples.

Prompt-gamma spectrometer. To maximise the PG-ray signal, a spectrometer without any mechanical collimation has been designed. As depicted in Fig. 1a, the spectrometer is composed of 16 LaBr3(Ce) cylindrical detectors with dimensions of 2″ length and 1.5″ diameter. The detectors are arranged as follows: a ring of eight symmetrically spaced detectors in the vertical plane, plus one ring of four detectors at backward angles (45°) and one ring of four detectors at forward angles (45°) with respect to the beam axis. For an isotropic source in the centre of the spectrometer, when the distance between the source and the front face of all detectors is 8 cm, this geometry covers 30% of the total solid angle 52,53 . The energy resolution of LaBr3(Ce) (~40 keV FWHM at 1.33 MeV) makes it a suitable detector for high-energy PG-ray spectroscopy. In addition, the LaBr3(Ce) intrinsic timing resolution is sub-nanosecond from ~keV up to more than 4 MeV, allowing excellent time-of-flight (TOF) discrimination 54 .
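The residual proton range quoted above (~2.2 mm in water for ~13.5 MeV protons) can be cross-checked with a simple range-energy power law. The sketch below uses commonly quoted Bragg-Kleeman coefficients for water as an assumption; they are not taken from this paper.

```python
# Rough cross-check of the residual proton range in water using the
# Bragg-Kleeman rule R = alpha * E**p. The coefficients are commonly used
# approximations for water (alpha ~ 0.0022 cm/MeV^p, p ~ 1.77), assumed here.
def residual_range_cm(energy_mev, alpha=0.0022, p=1.77):
    """Approximate range in water (cm) for a proton of the given energy (MeV)."""
    return alpha * energy_mev ** p

print(f"{10 * residual_range_cm(13.5):.1f} mm")   # ~2 mm, consistent with the text
```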
Discussions are being held with clinical scientist colleagues about a small design adaptation to enable clinical implementation. The spectrometer has been modelled using the Geant4 Monte-Carlo Toolkit (version 10.04) 55 . When a γ-ray enters the sensitive area of a detector, as shown in Fig. 1b, it interacts a number of times, termed hits, before being totally absorbed. For every γi detected, several pieces of information are saved; the quantities used by the algorithm are the detector identifier Deti, the hit coordinates (xi, yi, zi), the deposited energy Ei and the arrival time ti.

Function 1: γ-ray couple selection. Candidate couples are selected according to two criteria: 1. The two events, γi and γi+1, were recorded in coincidence in two different detectors, i.e. Deti ≠ Deti+1. 2. The energies, Ei and Ei+1, of the two events are 2.741 and 6.128 MeV, irrespective of order. The energy resolution of a 2″ × 2″ × 8″ LaBr3(Ce) crystal has been measured by Dhibar et al. 56 at several photon energies up to 4.433 MeV; above ~2 MeV the energy resolution is around 3% FWHM (full width at half maximum). The algorithm therefore requires that the energy of one of the two events is in the range 2.659-2.823 MeV while the energy of the other is in the range 5.946-6.321 MeV. These ranges are centred on the two decay energies, namely 2.741 and 6.128 MeV, with the extent reflecting a 3% detector energy resolution. At the end of this function only those events which belong to a γ-ray couple are saved; events which do not fulfil the criteria above are rejected.

Function 2: γ-ray couple analysis. For each couple two spheres are constructed, one for each event in the couple. An example of this is illustrated in Fig. 3, for a (p, 16O) nuclear reaction at (0, 0, 0), the centre of the spectrometer. As shown in Fig. 3a, the centre of each sphere corresponds to the hit coordinates of the associated event, while the radius of each sphere is the arrival time of that event multiplied by the speed of light (c). The events γi and γi+1, detected at (xi, yi, zi) and (xi+1, yi+1, zi+1) at times ti and ti+1 respectively, are represented by two spheres centred at (xi, yi, zi) and (xi+1, yi+1, zi+1) with radii ri = ti · c and ri+1 = ti+1 · c. As shown in Fig. 3b, the intersection between the two spheres, i.e. an intersection circle, is calculated. A torus is constructed around the circle and is stored by the algorithm (Fig. 3c). This geometrical calculation is repeated, resulting in one stored torus per γ-ray couple (Fig. 3d). The rationale behind the construction of a torus around each intersection circle is as follows. For every couple, the original emission position should lie somewhere on its intersection circle. Several small uncertainties, such as scattering in the detector, affect the sphere parameters. These uncertainties are reflected in the parameters of the circles, which, consequently, may not cross each other. In light of this, a torus is calculated around every circle. For each torus the major radius, i.e. the distance between the centre of the tube and the centre of the torus, corresponds to the radius of the intersection circle. For the minor radius, i.e. the radius of the tube, a value of 3 mm was determined from a Monte-Carlo simulation of the γ-ray interaction points within the detector medium.

Function 3: γ-ray couple emission-position reconstruction. For clarity, only 11 tori are shown in Fig. 4. As highlighted by the inset on the right side of this figure, the tori converge to the emission position. Each pair of tori is retrieved and, if a non-null volumetric intersection between them exists, the intersection volume is calculated (see Fig. 5).
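The Function 2 geometry, namely the circle in which the two time-derived spheres intersect, can be sketched as below. This is an illustrative reimplementation rather than the authors' code; the hit positions, times and units are placeholders.

```python
# Sketch of the Function-2 geometry: the circle in which two "time spheres"
# intersect. Centres are the hit positions, radii are arrival time x c.
import numpy as np

C_MM_PER_NS = 299.792458  # speed of light in mm/ns

def intersection_circle(p1, t1, p2, t2):
    """Return (centre, radius, unit normal) of the sphere-sphere intersection
    circle, or None if the spheres do not intersect."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    r1, r2 = t1 * C_MM_PER_NS, t2 * C_MM_PER_NS
    d_vec = p2 - p1
    d = np.linalg.norm(d_vec)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return None                       # spheres do not intersect in a circle
    a = (d**2 + r1**2 - r2**2) / (2 * d)  # distance from p1 to the circle plane
    radius = np.sqrt(max(r1**2 - a**2, 0.0))
    normal = d_vec / d
    centre = p1 + a * normal
    return centre, radius, normal

# Hypothetical hits: positions in mm, arrival times in ns.
circle = intersection_circle((80.0, 0.0, 10.0), 0.30, (-80.0, 5.0, -20.0), 0.29)
print(circle)
```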
The section of each torus which does not belong to the spectrometer central volume is eliminated before the intersection calculation. This procedure speeds up the computation but allows only the reconstruction of emission coordinates located inside the spectrometer. The intersection volume is determined by triangulating the surfaces of the two tori and applying the triangle/triangle intersection test routine by Tomas Möller 57 . The central point of each intersection volume is calculated and stored as a virtual emission position. The intersection of each torus with all the others (torus n°1 and torus n°2, …, torus n°1 and torus n°n, torus n°2 and torus n°3, …, torus n°2 and torus n°n, …, torus n°n−1 and torus n°n) is calculated. If n is the number of tori and N_NaN is the number of null intersections between tori, the total number of virtual emission positions is N_emission-positions = n(n − 1)/2 − N_NaN. For the spectrometer, to subtend a high solid angle with respect to the PG rays emitted in proximity of the Bragg peak and to obtain meaningful results within a reasonable computational time, the internal radius was set to 8 cm. As shown in Fig. 6, both the beam and the phantom have been modelled in the central area of the spectrometer. The beam direction coincides with the phantom central axis (Z axis). The water phantom has been modelled so that the Bragg peak depth for the 180 MeV beam corresponds to the centre of the spectrometer. This is to ensure that the PG rays emitted close to the Bragg peak are detected by the spectrometer with the maximum solid angle. The proton energy distribution was set as Gaussian with a sigma of 1 MeV. The sigma value for the lateral spread was set to 4 mm. These parameters were chosen as they represent typical values determined on our system. The number of initial protons simulated was 10^8. The PG rays emitted in the (p, 16O) reactions are detected by the spectrometer and used by the algorithm to reconstruct, in full 3 dimensions, the beam end-of-range value in the phantom. In addition, a scoring mesh (20 × 20 × 150 bins), with the same size and position as the phantom, was created. The quantities scored by this mesh were: 1) the energy deposition per voxel and 2) the 2.741 & 6.128 MeV 16O-induced PG-ray distribution. These quantities are used as a benchmark for the reconstruction algorithm results. To model the interactions in the Geant4 simulation, both electromagnetic (EmStandardPhysics-Option4, EmExtraPhysics) and hadronic (HadronElasticPhysics, HadronPhysicsQGS-BIC, IonBinaryCascadePhysics, NeutronTrackingCut and StoppingPhysics) physics lists have been combined. The IonBinaryCascadePhysics was selected as it has been shown 60,61 to be the most suitable physics list to model the PG-ray emission. For all particles, the cut has been set to 0.5 mm.

Results
In Fig. 7a the mesh-scored energy deposition, the mesh-scored 2.741 & 6.128 MeV 16O-induced PG-ray distribution and the algorithm-reconstructed emission positions are compared along the beam axis. For a clinical implementation of the system, a spectrometer internal radius of 8 cm appears to be suitable only for a very small treatment volume. The most likely clinical scenario for this radius could be a paediatric head and neck case. Additional simulations have been performed to investigate the performance of the spectrometer in different clinical scenarios. In all simulations, the beam and the water phantom have been modelled in the central area of the spectrometer as described in Section 2.4. The number of initial protons simulated has been kept fixed at 10^8.
The spectrometer internal radius has been set to 15 and 25 cm to represent, respectively, an adult head and neck treatment (Fig. S1) and a thoracic treatment (Fig. S2). With respect to the configuration previously described, the solid angle subtended by the spectrometer with respect to the origin (0, 0, 0) decreases from 30% for an 8 cm radius to 9% for a 15 cm radius and to 3% for a 25 cm radius. The total number of 2.741 & 6.128 MeV 16O-induced PG-ray couples selected by the algorithm in Function 1 is 387 and 191 when the radius is 15 and 25 cm, respectively. The variation of the spectrometer performance with the internal radius is shown in Table 1. Both the lateral spread (standard deviation), σ, and the centroid, μ, of the algorithm-reconstructed maximum-intensity 16O PG-ray distribution are reported for each chosen radius. These values have been obtained by applying a Gaussian fit to the reconstruction data. Simulations have been performed to test the spectrometer's ability to estimate range deviations from a peak position expected at (0, 0, 0). With respect to the previous analysis, the beam energy has been decreased to 177.5 and 175 MeV, which corresponds to a peak depth in water of 21.06 and 20.54 cm and to a range shift of ~5 and ~10 mm relative to the origin (0, 0, 0). In both simulations the number of initial protons was 10^8 and the spectrometer internal radius was 8 cm. The total number of couples is 806 and 766 when the shift is 5 and 10 mm, respectively. Fig. 7b depicts, along the Z axis, the maximum-intensity emission origin of the 2.741 & 6.128 MeV 16O-induced PG rays, detected by the spectrometer and reconstructed by the algorithm, when the beam energy is 180 (blue curve), 177.5 (green dashed curve), and 175 MeV (purple dot-dashed curve). For the three energies, a Gaussian fit is applied to the algorithm-reconstructed data. The lateral spread, σ, and the centroid, μ, obtained through the fit, are reported in Table 1.

Figure 3 caption: In the second function, γ-ray couple analysis, two spheres are constructed for each couple of γ rays γi and γi+1. The hit coordinates (xi, yi, zi) and (xi+1, yi+1, zi+1) represent the centres, while the hit times ti and ti+1 are employed to estimate the radii (ri = ti · c and ri+1 = ti+1 · c). The circle which represents the intersection of the two spheres is calculated. (c) A torus is constructed around the intersection circle. (d) At the end of the second function each previously constructed torus is stored. All the drawings refer to a (p, 16O) reaction at the centre of the spectrometer.

Discussion
An excellent correlation is observed in Fig. 7a between the two mesh-scored distributions: the 2.741 & 6.128 MeV 16O-induced PG rays (red dashed curve) and the energy deposition due to the electronic stopping of the proton beam (black dot-dashed curve). This is consistent with the results of previous in silico studies 23,62 and with the outcomes of measurements by Verburg et al. 18 . The 2 mm shift between the depth of the Bragg peak and the depth at which the PG rays are emitted with maximum intensity, highlighted in the inset of the same figure, is due to the cross section for 16O PG-ray emission. As shown in Section 2.1, the total PG cross section for 16O maximises for incident protons of ~14 MeV.
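The Gaussian-fit extraction of μ and σ used for Table 1, and referred to throughout this discussion, can be sketched as follows; the reconstructed positions below are synthetic placeholders, not the simulation output.

```python
# Sketch: extract the centroid (mu) and lateral spread (sigma) of the
# reconstructed emission positions along the beam axis with a Gaussian fit,
# analogous to the values reported in Table 1.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(z, amplitude, mu, sigma):
    return amplitude * np.exp(-0.5 * ((z - mu) / sigma) ** 2)

rng = np.random.default_rng(1)
z_reconstructed = rng.normal(loc=-2.0, scale=4.2, size=800)   # mm, placeholder data

counts, edges = np.histogram(z_reconstructed, bins=40)
centres = 0.5 * (edges[:-1] + edges[1:])

p0 = [counts.max(), centres[np.argmax(counts)], 5.0]          # initial guess
(amplitude, mu, sigma), _ = curve_fit(gaussian, centres, counts, p0=p0)

print(f"mu = {mu:.2f} mm, sigma = {abs(sigma):.2f} mm")
```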
The distribution (blue curve) of γ-ray emission positions, reconstructed by the algorithm along the Z axis (beam direction), is in agreement with the maximum intensity of the 16O PG-ray distribution. Table 1 shows the results of an investigation into the spectrometer and algorithm performance for increasing treatment volume. To use the device for adult head and neck or adult thoracic tumours, the spectrometer internal radius would have to be set to a value greater than 8 cm. The reconstruction algorithm takes as one of its inputs the γ-ray detection time; therefore the relative accuracy of the time-of-flight determination increases with flight path, i.e. source-to-detector distance, up to a limit fixed by the timing resolution of the system. Conversely, for a fixed number of protons/γ-rays, the geometrical efficiency of the spectrometer decreases with increasing radius. For a spectrometer radius of 8, 15, and 25 cm, assuming a proton beam current of 2 nA 63 , the estimated count rate per detector is 21, 7.8, and 3 Mcps, respectively. At the rate for a realistic treatment radius of 25 cm, using 250 MHz digital electronics, pulse pile-up should not be a significant problem. For smaller radii and increased count rates, the use of digital electronics would allow logic pile-up rejection or pile-up recovery through pulse shape analysis. The results of an investigation into the spectrometer and algorithm performance for a range undershoot of 5 and 10 mm are presented in Table 1 and graphically depicted in Fig. 7b. Due to the symmetry of the spectrometer, these results reflect shifts caused by a range overshoot of the same magnitude. This work uses a computationally reasonable number of initial protons (10^8), which is comparable to the number of protons delivered in a pencil beam spot. At the 68% confidence level, the reconstruction uncertainty is below 7 and 6 mm for the 25 cm radius case and the 8 cm radius case with a range undershoot of 10 mm, respectively. These uncertainties are comparable to the ones typically fed into robust planning or the usual margins imposed in PBT planning. Following the recipe of Massachusetts General Hospital (MGH), 3.5% of the range plus 1 mm 5 , for a 180 MeV clinical beam the usual margin is 5.7 mm at the 68% confidence level. Currently, the reconstruction obtained in the present work is comparable with the performance of the prototypes based on the Compton camera technique 64 . In a second test on patients, Xie et al. 31 , using the IBA knife-edge prototype, estimated the shift of the Bragg peak position relative to the plan with a ±2 mm precision. Hueso-Gonzalez et al. 42 claim that, with the PG spectroscopy system developed at MGH perfectly aligned on the couch, the absolute range can be reconstructed, in ideal experimental phantoms, with a mean precision of 1.1 mm at the 95% confidence level. For a 180 MeV proton beam, when the internal spectrometer radius is 8 cm, the total number of γ-rays detected is 5,591,199. Amongst these events, 1.34% have energies in the two ranges discussed in Section 2.3.1. The number of events accepted by the algorithm in Function 1 is 1,652 (826 couples). The authors are additionally investigating the possibility of including, as acceptance criteria, those events whose energy belongs to the single/double escape peaks. With this variation, for the 8 cm radius case, the number of couples rises to 3,884, a ~5-fold increase.
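The Function 1 acceptance described above (two different detectors, one event in each energy window) can be sketched as follows. This is an illustrative reimplementation with placeholder event data; a real implementation would also enforce a coincidence time window.

```python
# Illustrative reimplementation of the Function-1 couple selection: keep pairs
# of events recorded in different detectors whose energies fall in the two
# windows around 2.741 and 6.128 MeV.
LOW_WINDOW = (2.659, 2.823)    # MeV, around the 2.741 MeV line
HIGH_WINDOW = (5.946, 6.321)   # MeV, around the 6.128 MeV line

def in_window(energy, window):
    return window[0] <= energy <= window[1]

def select_couples(events):
    """events: list of dicts with keys 'det', 'energy' (MeV), 'time' (ns)."""
    couples = []
    for i, a in enumerate(events):
        for b in events[i + 1:]:
            if a["det"] == b["det"]:
                continue                       # criterion 1: different detectors
            pair_ok = (in_window(a["energy"], LOW_WINDOW) and in_window(b["energy"], HIGH_WINDOW)) or \
                      (in_window(b["energy"], LOW_WINDOW) and in_window(a["energy"], HIGH_WINDOW))
            if pair_ok:                        # criterion 2: one event per energy window
                couples.append((a, b))
    return couples

events = [
    {"det": 3, "energy": 2.75, "time": 0.31},
    {"det": 9, "energy": 6.10, "time": 0.29},
    {"det": 5, "energy": 1.20, "time": 0.40},
]
print(len(select_couples(events)))   # -> 1
```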
The spectrometer has been modelled with realistic energy and temporal resolution. The detectors of choice for this work are large-crystal LaBr3(Ce) scintillators. These crystals possess internal activity, predominantly due to the decay of 138La. The energy of the 138La γ-rays does not overlap with the 16O PG rays of interest. In addition, the coincidence requirement of the algorithm rejects the activity of these γ-rays. The rate of the LaBr3(Ce) internal activity was measured to be 0.85 cts/s/cm^3 in the energy interval 70-5000 keV 54, slow compared to the (p, 16O) reaction rate 49. For all these reasons the LaBr3(Ce) internal activity has not been modelled. Additionally, due to the coincidence requirement in the algorithm, the neutron-induced γ rays are rejected from the reconstruction process. The accuracy of this technique is influenced by two main factors: the γ-ray interaction position (x_i, y_i, z_i) in the detector medium and its flight time (t_i). Monte-Carlo simulations can produce detector data with exact final γ-ray interaction positions; however, in reality this hit position is not known to the same precision. Running the simulations many times generates a mean distribution of hits for each detector. A probability density function is then derived from this distribution and sampled to generate the interaction coordinates needed by the algorithm for non-position-sensitive detectors. The employment of segmented detector modules, with improved position resolution, is under evaluation. Similarly, the exact time difference between γ-ray emission and detection is, in reality, also not available. A common start time provided by a suitable timing device could be employed. If this is achieved, the hit times t_i and t_i+1 of the two events i and i + 1 can be individually inferred. In this case the developed algorithm would not need any modification to determine the beam range. An alternative algorithm is under development; it can reconstruct the γ-ray origin without the need for a start time and only needs the γ-ray detector arrival times as input. All the reconstruction results presented in this study were obtained within 30 minutes (Windows 10 64-bit with an Intel Core i7-6700 @ 3.41 GHz CPU and 16 GB RAM). The reconstruction algorithm currently runs in a MATLAB environment, and a significant decrease in this computational time could be achieved by porting it to a pre-compiled binary via a high-level language such as C or C++. Further improvements could be made by porting the algorithm to hardware, and both of these options are currently being explored.
Conclusions
A new technique for range verification in proton beam therapy has been developed. It is based on the detection of the prompt γ rays that are emitted naturally during delivery. A spectrometer comprising 16 LaBr3(Ce) detectors in a symmetrical configuration is employed to record the prompt γ rays emitted along the proton path. An algorithm has also been developed that takes as inputs the LaBr3(Ce) detector signals and reconstructs the maximum intensity peak position in full 3 dimensions. The ability to determine the proton range in 3D is well suited for spot-scanning systems and for detecting non-uniform anatomical changes such as tumour shrinkage. The spectrometer-algorithm performance has first been investigated for a mono-energetic 180 MeV clinical beam with varying spectrometer radii.
The results show that, accommodating an adult patient (25 cm spectrometer radius), the proton range could be determined with an uncertainty below 7 mm at 68% confidence level. Additional simulations have been performed with a shift between the beam range and the system origin. In the case of a 10 mm range undershoot, the PG-ray emission position is reconstructed with an uncertainty below 6 mm at 68% confidence level. Further developments are ongoing to reach the ultimate goal of a clinically compliant system for on-line, real-time range verification.
Data availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Table 1. Evaluation of the system performance, in reconstructing the end-of-range position of a proton pencil beam, by varying the spectrometer internal radius and the beam energy. The lateral spread (standard deviation), σ, and the centroid, μ, of the algorithm-reconstructed 16O PG-ray distribution, as obtained through the application of a Gaussian fit, are reported. As a reference, the results of a simulation with an 8 cm spectrometer internal radius and 180 MeV beam energy are shown for both variations. When different spectrometer internal radii (8, 15, and 25 cm), representing different hypothetical treatment sites, are modelled, the beam energy has always been kept at 180 MeV. Conversely, when the Bragg peak is shifted, with respect to the centre of the spectrometer, by 5 and 10 mm, the spectrometer internal radius has always been set to 8 cm.
2019-12-11T16:15:51.520Z
2019-12-01T00:00:00.000
{ "year": 2019, "sha1": "eb42b203e8fd91b6fa3da2c0a4048a6b061303a2", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-55349-7.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "eb42b203e8fd91b6fa3da2c0a4048a6b061303a2", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
258119011
pes2o/s2orc
v3-fos-license
House prices, airport location proximity, air traffic volume and the COVID-19 effect ABSTRACT Although house prices and airports are influenced by distinct factors that shape their evolutions, they are also intrinsically connected through the natural and built environment. Standard theory suggests that air-traffic noise and proximity to key economic hubs such as airports are of prime importance to house prices and the housing market. This study contributes to understanding the link between the housing market, airport location proximity and air traffic. The research investigates this association across four key urban areas within New Zealand proximal to an international airport: Auckland, Wellington, Christchurch and Queenstown. Applying a generalized least squares (GLS) regression approach, the analysis reveals that house prices, air-traffic activity and proximity to airports within New Zealand demonstrate a statistically significant effect, and that air traffic volume has a positive effect on house prices. Moreover, the findings reveal a ‘U’-shape relationship between distance to the airport and house prices, suggesting that airport noise and pollution adversely affect house prices, with this effect diminishing with distance, indicating that economic influences and employment may also serve as a positive externality. INTRODUCTION In the housing literature, there is evidence that the location of the house is one key characteristic that can impact its price (Chau & Chin, 2003;Heyman, Law, & Berghauser Pont, 2019;Lieske, van den Nouwelant, Han, & Pettit, 2021) as a consequence of proximity to both positive and negative externalities such as schools or public parks (Bartholomew & Ewing, 2011;Diewert, Haan, & Hendriks, 2014) or exposed to pollution and antisocial behaviour (Han, Han, & Zhu, 2018;Mohamad et al., 2016;Wadley, Elliott, & Han, 2017).The impact of air quality, noise and proximity to various types of urban infrastructure can arguably have an important influence on the quality of life and has become a key issue for consumers when choosing a residential neighbourhood (McCord et al., 2018). Notably, the effect of noise ostensibly depends not only on the noise intensity to which dwellings are exposed, but also on the nature of the noise source and the consequences for social aversion.Schreurs, Verheijen, and Jabben (2011), undertaking research for the National Institute for Public Health and the Environment in the Netherlands into valuing airport noise, estimated that the depreciation of real estate value due to airport noise to be in the region €1 billion.Equally, they estimated the depreciation of land value to be approximately €600 million.Therefore, the effects of major airports, and their expansions, can be quite controversial and present a topic of debate in terms of an externality effect on house prices and housing markets. Seminal studies, such as that of Tomkins et al. 
(1998), do, however, point out that there is a noise versus access debate on the impact of airports on urban property markets, with the effects unlikely to exhibit a uniform, or linear, spatial distribution.Regarding the relationship between airports (noise and other pollutants) and house prices, various international studies (Cohen & Coughlin, 2008;Dekkers & van der Straaten, 2009;Lijesen, Straaten, Dekkers, Elk, & Blokdijk, 2010;McMillen, 2004;van Praag & Baarsma, 2005;Zheng et al., 2020) have tended to show that properties located in closer proximity to an airport tend to have their prices lower than their counterparts, primarily due to increased exposure to noise and air pollution.In contrast, studies such as that of Lipscomb (2003) have revealed price premiums in relation to proximity to airports.Accordingly, there are associated costs and benefits, or disamenity and amenity effects with regards to the presence of, and distance to, airports, with the benefits generally perceived to be access, employment and upgraded infrastructure; and the costs mainly environmental degradation and problems of noise and traffic congestion for adjacent populations. Despite the voluminous literature examining the adjacency of airports as an externality, there remains limited evidence and empirical insights as to whether this effect is observable in the New Zealand context.More importantly, given the recent COVID-19 pandemic where the global aviation sector was halted with much fewer flying activities, one question arises: Does proximity to airports influence house prices?This study, therefore, examines housing prices and their determinants in New Zealand suburbs surrounding the four international airport cities of Auckland, Christchurch (Harewood), Wellington (Rongotai) and Queenstown (Frankton).The reasons to choose New Zealand as a case study is that the country's housing prices kept increasing even during the COVID-19 recession (Yiu, 2021), and that only New Zealanders were allowed to travel back to the country via the four international airports due to the country's border restrictions (Gössling et al., 2021;Ngo, 2020).Applying cross-sectional data at the suburb level, we develop several generalized least square (GLS) regression models and measure the distance of the suburb to the airport, airline traffic (passenger) volume change between 2017 and 2021, and finally we account for the effect of COVID-19 to examine whether this has had an impact on the New Zealand housing market.In this sense, our contribution to the literature is twofold.First, this study is the first attempt to account for the non-linear relationship between airport proximity and house prices, despite the fact that it has been observed previously (Kaur et al., 2021;Rahmatian & Cockerill, 2004).Second, we provide a more detailed picture of the above relationship in the New Zealand housing market using a large dataset (four airports, five regionsof which Auckland Airport covers two regions, 334 suburbs and five years, yielding about 1500 observations), especially during the COVID-19 pandemic where a record number of New Zealanders returning to the country was observed, which significantly affected the market (Statistics New Zealand, 2020;Stuff, 2020;The New York Times, 2021). 
RELEVANT STUDIES In the context of housing literature, an emerging and important area of research in terms of sustainability goals and the more general health and well-being of the global population relates to noise pollution and the environmental, economic, and social (dis)amenity effects.These environmental externalities, neighbourhood amenities and their proximity are key considerations for property prices (McConnell & Walls, 2005).Indeed, in relation to environmental economics there has been a long and established history within hedonic pricing studies which have examined the nature and role of proximate locational externalities (Kauko, 2003), neighbourhood and school quality (Gibbons & Machin, 2008;Forrest et al., 1996;Mohamad et al., 2016;Rehm & Filippova, 2008), environmental degradation (Bao, 2021), air pollution and quality (Amini et al., 2022), traffic noise and proximity (Andersson, Jonsson, & Ögren, 2010;Blanco & Flindell, 2011;Day, Bateman, & Lake, 2007), accessibility to amenities (Duarte & Tamez, 2009) and key infrastructure amenities (Efthymiou & Antoniou, 2013). Airport proximity and noise The institutional setting and impact of airport proximity on neighbouring communities is a strongly debated topic as to whether they are perceived as amenities or disamenties (Affuso et al., 2019;Salvi, 2008).According to Salvi (2008), the positive aspects of proximity are generally related to the provision of communication links and to the direct economic importance of airports, with the negative impacts largely stemming from the adverse environmental effects primarily associated with aircraft noise.Numerous studies have therefore examined the proximal effects to airports and aircraft noise on housing markets using a variety of econometric specifications and approaches.Indeed, studies investigating the effects of airport noise on residential housing markets extend back to the mid-1970s where seminal research (Nelson, 1979;O'Byrne, Nelson, & Seneca, 1985;Rosen, 1974) explored the connection between cumulative measures of airport noise and property prices for a number of North American and European airports.Since then, research within this area has progressed alongside the increased sophistication of software and the availability of data for measuring the proximity and noise effects. The early study by Uyeno, Hamilton, and Biggs (1993) estimated the density and impact of airport noise on property values for detached houses, multiple-unit residential condominiums and also for vacant land.Applying a variety of regression models to test continuous and banded distance, they found that a unitary increase in noise level results in a decrease of approximately 0.65% in property price for detached houses.Notably, the authors showed that a one unit increase in noise level results in a decrease of approximately 0.90% for condominiums.When considering the impact on vacant land values, they found that vacant land values are approximately 16% lower for properties exposed to a 10 dB incremental levels.Taking a slightly different approach, and building on the findings of Uyeno et al. 
(1993), Collins and Evans (1994) applied an artificial neural network (ANN) approach to identify whether there was an effect between aircraft noise and residential property values.Considering the application of ANNs to be useful due to their powerful abilities for pattern recognition and to distinguish between complex neighbourhood classifications, the authors revealed a range of value effects which differ markedly between property types.Notably, they identified that detached house prices are more sensitive to aircraft noise than semi-detached or terraced properties.Espey and Lopez (2000) applying hedonic analysis also estimated the relationship between residential property values, airport noise and proximity to the airport in the Reno-Sparks area of Nevada, USA.Their results showed a statistically significant negative relationship between airport noise and residential property values, with the average home in areas where noise levels are 65 dB or higher displaying a discount of US$2400 than equivalent homes located in quieter areas.Similarly, Cohen and Coughlin (2008) when comparing various spatial econometric models to examine the impact of airport noise for Atlanta in Georgia, USA, showed that houses located in areas considered outside of 'normal' day-night general decibel levels sold for 21% less than houses located in areas where the normally accepted decibel ranges were not breached.In addition, they revealed there to be a negative price elasticity effect, indicating that houses further from the airport, on average, have lower sales prices, inferring that proximity to the airport demonstrates somewhat of an amenity effect.This finding is in accord with the earlier study of Tomkins et al. (1998) who found proximity to Manchester Airport in the UK to comprise an amenity effect citing employment and economic benefits. The work of Affuso et al. 
(2019) also questioning whether airport proximity is an amenity or disamenity, applied spatial autoregressive models with directional effects to assess the impact of airport noise on housing values in Memphis, Tennessee, USA.Their study established that proximity to Memphis International Airport is perceived as a disamenity, with an average external cost of US$4795/dB of noise per household.This was also notable in the research of Dekkers and van der Straaten (2009) who developed a spatially explicit hedonic pricing model to quantify the social cost of aircraft noise disturbance around Amsterdam Airport in the urban fringe of the Amsterdam region.They found that higher noise levels resulted in lower house prices.More specifically, the authors discovered air traffic noise comprising the largest price effect, showing a marginal benefit of 1 dB noise reduction of €1459 per house, leading to a total benefit of 1 dB noise reduction of €574 million.Salvi (2008) also applied spatial econometric techniques to measure the impact of airport noise on the price of single-family homes in the Zurich Airport area.Using approximately 4000 sales transactions and georeferenced noise measurements aligned to changing patterns of runways configurations derived from a proprietary aircraft noise model, 1 and controlling for neighbourhood fixed-effects, the author established a noise discount index of 0.97%, with typical discounts ranging between 2% and 8%.In a comparable paper, Rahmatian and Cockerill (2004) observed that the average home prices in southern California were lower by approximately US$16,600-17,400 for houses located within 150-300 m from an airport, and more than US$21,000 for houses located between 600 and 750 m.Similarly, recent research undertaken by Kaur et al. (2021) also found that houses locate within the 400-500 m radius of Essendon Airport in Melbourne, Victoria, Australia, have an average price of AUS $515,015, noting that when the radius increases to 1.3-1.4km (but still within the noise contour of the airport) and further to 1.6-1.8km, the average prices respectively increased to AUS $809,748 and then dropped to AUS $688,659. A strand of enquiry has evolved with studies investigating the relocation and expansion of airports and flight paths on housing market activity, behaviour and pricing levels, with opponents of airport expansions arguing that increased noise will reduce property values and lower tax bases.In a study estimating the effect of the expansion of O'Hare International Airport in Chicago, Illinois, USA, and the accompanying noise on property values in the area, McMillen (2004) found home values to be 10% lower in areas that are subject to severe noise.Despite this initial finding, they did acknowledge and estimate that with aircraft becoming quieter, house prices may rise by as much as US$284.6 million in the densely populated area around O'Hare after a new runway is added to the airport.A comparable study by Jud and Winkler (2006) also explored the influence of the announcement effect of a new airport hub on housing prices.Controlling for extraneous influences, they found that property prices within a 2.5-mile radius declined approximately 9% in the post-announcement period within a 65 dB noise contour band, with properties located within the subsequent 1.5-mile proximity band, decreasing by 6% in the post-announcement period. 
More laterally, Mense and Kholodilin (2014) scrutinized the effects of an airport expansion and accompanying noise on house and apartment prices located under (new) planned flight paths.The findings revealed that property listing reduced sizeably in areas located under the published flight paths, with an average decrease in value of 9.2% up to a distance of 3 km.Their analysis also indicated that where the intended flight altitude was below 1 km, the discount ranged between 11.8% and 12.8%, whereas higher flight altitudes observed an average discount of approximately 8.3%. A similar research study was conducted by McCord et al. (2018) using advanced spatial modelling techniques to assess proximity to airport noise and if the flight path affected property prices for Belfast, UK.The authors found that high and moderate noise dB levels exhibited a negative influence on house prices.They also noted that there are spatially distinctive patterns in proximity to the airport, observing some market areas to be unaffected by the distance and noise externality, whereas other markets areas displayed negative associations across the coefficient quartiles which they suggested was a consequence of the existing flight path which is located directly above the south of the city.Directly measuring whether the houses fell within the boundary of being directly under the regional airport flight path, and whether the dB levels impact market pricing, they found that flight path noise has a negative impact on house prices of approximately 5.5%. Some existing research has further explored the effect of airport proximity and noise on differing sized airports.Lipscomb (2003), in an interesting paper focusing on the impacts an airport on a small urban city, found that being 1 mile further from Hartsfield International Airport in Atlanta, Georgia, decreases the sales price of a house, on average, by approximately US $36,332, and also demonstrating a US$9083 decrease in the sales price for each quarter-mile proximity relative to an average valued house of US$101,708.Appositely, the author revealed there to be no statistical significance of noise as a predictor of sales price, which he suggests is most likely due to a change in the noise level not causing a significant change in sales price, and that proximity may infer that the benefits outweigh the liabilities in smaller cities reliant to the economic benefits of the airport. A comparable study undertaken by Lu and Morrell (2006) also investigated the existence of environmental and social costs at different sized airports and the role of aircraft noise and engine emissions.Employing data for three major British airports and two Dutch airports, the authors found that the impacts of engine emissions were greater than that of noise.Further assessing the environmental cost with the traffic volume of an airport, the results for the five selected airports revealed a curvilinear relationship between annual environmental costs and aircraft movements, suggesting that marginal environmental costs are increasing as aircraft movements increase, more notably for larger airports as a consequence of social costs. 
Equally, Tsao and Lu (2022), for the case of Taiwan Taoyuan International Airport, evaluated the impact of aviation noise on house prices employing three different hedonic price models (HPMs).Their empirical results revealed that aviation noise has a significant negative impact on house prices in the region of US$2356/dB within the noise contour areas of 60-64 dB, and US$3623/dB for contour areas greater than 65 dB. In terms of air-traffic volume, the research conducted by Tsui et al. (2017) applying a twoleast squares methodology to establish whether house prices across three key regions and airports within New Zealand are related to the level of air traffic, empirically found that airport traffic volume positively and significantly influenced house prices. Other studies have integrated the role, proximity and noise related to airports in relation to socio-demographic characteristics of surrounding neighbourhoods and land-use ordinance and regulation.Ogneva-Himmelberger and Cooperman (2010) in a spatio-temporal analysis of noise pollution near Boston Logan airport, Massachusetts, USA, identified 'hot' and 'cold' socio-demographic clusters representing spatial concentrations of social groupings to corresponding levels of vulnerability to environmental impacts and noise levels.Their findings revealed that social groupings 'paying' for the cost of noise from Logan International Airport in Boston is highly vulnerable as there are more minority and lower income populations, and lower house prices in the noise-affected areas. In terms of planning ordinance, Batóg, Foryś, Gaca, Głuszak, and Konowalczuk (2019), investigating airport noise for a number of regional airports in Poland with a specific emphasis on the role of land-use regulation, applied spatial hedonic regression and a difference-in-differences approaches to examine the introduction of new land-use restrictions on property prices.They found that the introduction of land-use restrictions impacted upon property prices adjacent to airports. In a follow up study, Batóg et al. (2019) examined single-family house prices in adjacent areas proximal to the Gdansk Lech Walesa Airport and the Warsaw Chopin Airport in Poland.Using a variety of spatial econometric approaches, the findings revealed differential effects of proximity to airports on the prices of single-family houses located in limited-use areas in Gdansk and Warsaw. Airport proximity and the willingness to pay Whilst revealed preferences using hedonic-based price studies have dominated the literature, there have also been a number of stated preference studies, generally contingent valuation and 'willingness-to-pay' (WTP) methods, examining the impact of airport proximity and noise emanating back to the early to mid-1990s such as Feitelson et al. (1996) and Navrud (2002).Feitelson et al. 
(1996) in a WTP and contingent valuation approach for investigating the impact of airport noise in light of an airport expansion revealed that compensation programmes do not fully compensate homeowners or renters for the loss associated with higher noise exposure.Research undertaken by Wardman and Bristow (2008), examining Manchester, Lyon and Bucharest, looked at embedding aircraft noise nuisance within a broader quality-of-life context, namely changes in aircraft movements by aircraft type within specific time periods, and changes in generic aircraft movements by time of day.They established that respondents across the three selected airport regions value a change of one aircraft per hour in the daytime between €0.87 and €1.10, with evening changes in hourly aircraft volume ranging between €0.31 and €1.26.Carlsson, Lampi, and Martinsson (2004), using data on Stockholm, Sweden, also scrutinized how the marginal WTP for changes in noise levels is attributable to changes in the volume of flight movements by applying a choice experiment method.Similar to Wardman and Bristow (2008), they found the WTP to vary with the temporal dimensions, insofar that mornings and evenings have higher marginal values.The study undertaken by van Praag and Baarsma ( 2005), employing a survey in relation to individuals' 'life satisfaction or 'happiness' relative to income and exposure to aircraft noise, showed that the net income compensation required to offset aircraft noise equates to approximately 3% of net annual household income or 9% of housing costs.Similarly, Pope (2008) focusing on the role of asymmetric information, and more specifically seller disclosure on the implicit price for airport noise for Raleigh-Durham International Airport, North Carolina, USA, indicated that the airport noise disclosure reduced the value of houses by almost 2.9%, which they estimate to translate into approximately a 37 percentage point increase in the implicit price of airport noise. The existing studies, in line with an earlier meta-analysis undertaken by Nelson (2004), reveal noise discounts attributable to airport proximity and air traffic which are more (or less) sizeable relating to market context and other economic circumstances, differences and market structures.The literature generally exhibits there to be consensus in terms of the noise discounts; however, the proximity of airports is more ambiguous with studies showing proximity to be either a disamenity or an amenity for some adjacent urban housing markets.This study therefore augments this debate by examining four key regional airports across New Zealand in order to identify whether, and to what level, there is an externality effect negative relationship between airport noise exposure, air-traffic volume and residential property values. 
Methodology
This research applies a critical realist approach, which relates to the underpinning economic and consumer preference and choice reality of people making housing decisions based on house prices and amenity value such as proximity to airports (Affuso et al., 2019). In this regard, the hedonic price model (HPM) is the most common statistical approach used in housing price analysis (Affuso et al., 2019; Dekkers & van der Straaten, 2009; Salvi, 2008; Swoboda, Nega, & Timm, 2015; Theebe, 2004). In general, the HPM contends that buyers value each characteristic or attribute that makes up the product, for example, a house, differently, such that the price of that product should and can be disaggregated according to those characteristics (Lieske et al., 2021). Among those characteristics, the HPM literature, as discussed in the previous section, has identified a non-linear impact of airport proximity on house prices (Kaur et al., 2021; Rahmatian & Cockerill, 2004); however, this impact has not been empirically examined. In this sense, the incorporation of critical realism and the HPM can provide better insights on this issue.
Specifically, Lieske et al. (2021) and Zhang and Tao (2020) argue that one could estimate the relationship between price and characteristics of a house as:
PRICE_it = Σ_j b_j X_jit + ε_it   (1)
where PRICE_it measures the median price of houses in suburb i sold in year t; X_jit is a vector of housing characteristics (including location, house size, land size, etc.) of the median house of suburb i sold in year t; b is a vector of parameters to be estimated; and ε_it is the measurement error.
It is well established that standard ordinary least squares (OLS) hedonic models are prone to functional form misspecification, biased coefficients and instability due to heteroscedasticity (Fletcher, Gallimore, & Mangan, 2000). To account for any possible correlation between the regression residuals, a panel generalized least squares (GLS) estimation method is employed in this study to examine the effects of different attributes on property price. In the case that the residuals are correlated and/or the variances of the observed values are not equal (i.e., heteroscedasticity is present), OLS and weighted least squares (WLS) will fail to be efficient estimators, resulting in misleading spurious statistical inferences. In contrast, the GLS approach produces unbiased, efficient, consistent and asymptotically normal results. 2 Therefore, within the confines of this research, the panel GLS allows for the estimation of robust results from an unbalanced panel dataset of the 312 suburbs in the neighbourhood of four international airports in New Zealand for the five-year period of 2017-21, whilst also controlling for the fixed effect variables (Haddad et al., 2011; Quigley, 1995; Wooldridge, 2016).
For the main techniques employed, two central themes are developed, each containing four regression models, in which both static and dynamic GLS regression approaches are adopted. The first theme explores a static temporal view (year on year), and the second theme adopts a dynamic temporal view where the time lag of price is also included. For each of these static and dynamic themes, we construct (1) a (base) model with no fixed effect, (2) a model with only the regional fixed effect, (3) a model with only the time/year fixed effect, and (4) a model with both the regional and yearly fixed effects.
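To make the model set-up concrete, the sketch below fits the static specification with regional and yearly fixed effects (theme 1, model 4). It is only an approximation of the paper's panel GLS: for brevity it uses OLS with dummy variables and heteroscedasticity-robust standard errors via statsmodels, and every column name (log_price, log_pax, log_dist, region, year, suburb) is hypothetical. The COVID dummy is left out of this particular formula because, as noted later, it is collinear with the year dummies.

import pandas as pd
import statsmodels.formula.api as smf

def fit_static_model_4(df: pd.DataFrame):
    # df: one row per suburb-year with logged price, passenger volume and
    # distance to the nearest airport, plus region/year identifiers.
    formula = ("log_price ~ log_pax + log_dist + I(log_dist ** 2)"
               " + C(region) + C(year)")
    return smf.ols(formula, data=df).fit(cov_type="HC1")  # robust SEs as a stand-in for GLS

def add_price_lag(df: pd.DataFrame) -> pd.DataFrame:
    # For the dynamic theme, the previous year's price enters as an extra regressor.
    df = df.sort_values(["suburb", "year"]).copy()
    df["log_price_lag1"] = df.groupby("suburb")["log_price"].shift(1)
    return df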
From a technical perspective, a static linear regression takes the form of y t = ax t + e t , whereas a dynamic linear regression can be expressed as y t = y t−k + ax t + e t .In other words, in a static model, the dependent variable y is a function of a set of independent variables x from the same period, whereas in a dynamic model, y is also affected by its own value(s) from the k previous period.The temporal variability of the coefficient in the dynamic model therefore allows us to observe and model the change in the impact of an attribute on house price in a more flexible manner, which more accurately reflects the property price determination process in reality. For the first theme, it is assumed that house prices (of a certain suburb in a certain year) are impacted by the characteristics of the houses in the same year and, thus, we define it as a static model of house prices.This first static approach incorporates a baseline model that explores global results for all five geographies with an international airport in proximity.The second model looks at regional fixed effects of the five geographies under analysis, with the third model statically looks at global results over time year by year (e.g., 2014, 2015, etc.).Model 4 further accounts for the regional fixed effects, but takes account of the temporal dimension, year by year. As identified, several studies have found a positive linkage between tourism and air traffic to regional house prices (Alola et al., 2020;Biagi et al., 2015Biagi et al., , 2016;;Tsui et al., 2017).Since airport activities can influence air traffic and tourism, we therefore include the number of air passenger arrivals, PAX , as a further control variable within the modelling.As discussed previously, the location characteristic represented by the distance between the house and the closest airport (DISTANCE) can also contribute to house prices.Subsequently, we further control for the non-linear effect of this factor via DISTANCE 2 (i.e., the square of DISTANCE); as previous research indicated that this non-linear relationship is apparent but remains limited within existing studies investigations.Note that the use of the quadratic term in non-linear econometric analysis is common (Dai et al., 2020;Pérez-Molina, 2022;Xiao et al., 2019). 
Furthermore, to account for the impact of the recent COVID-19 pandemic, and to also consider the differences in house prices between regions and across different years, we modify equation (1) as follows: where PAX it represents the number of air passenger's throughput at airport i in year t; DISTANCE i measures the geographical distance between the centroid of a suburb i and the closest airport; DISTANCE2 i is the squares of DISTANCE i ; COVID is a dummy variable which has the value of 1 for 2020 and 2021, and 0 otherwise; REGION is a vector of dummy variables represents the five regions of Auckland-Manukau city, Auckland-Papakura city, Wellington, Christchurch and Queenstown; and YEAR is also a vector of dummy variables represents different years from 2017 to 2021.It is noted that the five regions correspond to four international airports in New Zealand, which are Auckland Airport (covering two regions), Wellington Airport, Christchurch Airport and Queenstown Airport.These airports have the largest activities in the country, even during the COVID-19 pandemic (Ho, Nguyen, Ngo, & Le, 2021;Official Airline Guide, 2022).Therefore, we expect that the impact of PAX on PRICE as in equation ( 2) would be more pronounced at the suburbs surrounding those airports.We also note for the dataset timeframe that the city of Dunedin in the South Island also has an airport of international status, but since the COVID-19 pandemic the one regular international flight has not been reinstated, and thus it is omitted from the study. For the second theme, we argue that house prices do not only reflect the same year's characteristics but also are influenced by the house prices of the previous year, that is, the 'momentum effect' (Head, Lloyd-Ellis, & Sun, 2014;Oikarinen & Engblom, 2016;Tsui et al., 2019;Titman et al., 2014).Therefore, we include the time lag of price as an independent variable with the initial baseline global modelnote that this technique is also popular in the generalized method of moments (GMM) approach (Le et al., 2021(Le et al., , 2022;;Ngo et al., 2022;Yuen et al., 2022).The second model also incorporates a fixed spatial regional effect of all five geographies, with the third model a dynamic time lagging on price developed, and a final regional fixed effects of all five geographies further incorporated as follows: where PRICE i,t−1 is the house price from the previous year.In this sense, the empirical themes and models can be summarized as in Table 1. Data The data applied within this study are derived from two main sources.First, the house price data for the four international airport cities of Auckland, Christchurch, Wellington and Queenstown encompassing sales prices (PRICE) and property characteristics such as house size, land size and number of bedrooms (X ) was collated from CoreLogic who provide information on property analytics, valuations and sales transactions, 3 providing 422,884 transactions in total dating back to 1980.CoreLogic is a global leader in property information in the United States, Australia and New Zealand; the property data of CoreLogic have been widely used by commercial banks, mortgage advisors, insurers and even the government (CoreLogic, 2022). 
Comparative units of administrative geographies were applied based on the size of population and geographical expanse of each city. Notably, the Auckland 'Supercity' warranted the inclusion of both Manukau city and Papakura city given their geographical proximity to the airport. Overall, the data comprised 312 suburbs within the five city regions, on which cross-sectional analysis was conducted in order to calculate the average price and distance (km) from each suburb (centroid) to the airport to obtain a robust 'price-distance' fixed effect from 1981 onwards. In this sense, this paper differs from previous studies by using suburb-level data, that is, each suburb is an observation, instead of regional or house-level data (Tsao & Lu, 2022; Tsui et al., 2019). However, it comes with a trade-off: we could not include other popular house-level variables such as the distance from a suburb to its central business district (CBD) or motorways as additional explanatory factors in our models.
Second, regarding air passenger traffic volume (PAX), the data were directly extracted from the four international airports' (Wellington, Christchurch, Auckland and Queenstown) monthly traffic reports 4 between 2017 and 2021. Table 2 presents the variables, their descriptions and the descriptive statistics of the data applied within the study. Given the nature of real estate data, the logarithm of sales price is used, as the log-linear model helps normalize the dataset and eases interpretation of the estimation effects (Knoke, Burke, & Burke, 1980; Wooldridge, 2016).
[Table 1 notes: Empirical themes and models used within the analysis. The dependent variable for all models is PRICE_it (in logarithms). The dummy variable COVID is omitted when the YEAR dummy variables are introduced in the model (e.g., model 4 or 7) due to the multicollinearity issue.]
Static analysis findings
We first report the results of the static models, that is, ones that examine a median house in a certain suburb, with house price considered as a function of the house's hedonic characteristics and the other controls (Table 3). The large values of the Wald χ2 suggest that all the static models (models 1-4) provide insightful explanations of the influence of house size, land size, tourism, distance to the nearest airport and COVID-19 on house prices. Notably, the analysis shows that the signs and significance levels of the estimated coefficients are consistent across the models.
Model 1 provides the baseline model estimates with an R^2 of 33%, 5 and shows proximity to the nearest airport (DISTANCE) to exhibit a negative coefficient (−0.134), which is statistically significant at the 1% level, with the number of passengers (PAX) exhibiting a positive significant effect on house prices. As previously noted, we included the squared term of distance (DISTANCE^2) to account for non-linearity. The findings from model 1 show this to be positive and statistically significant (0.014, p < 0.001), suggesting that the negative effect of proximity to airports seems to diminish over distance. Model 2, accounting for regional fixed effects, reveals an increase in R^2 to 76%, and further confirms that proximity to airports displays a negative effect of 13.4%. In contrast to model 1, the results observe PAX to be negative, although not statistically significant.
Model 4, which includes all variables (but the COVID dummy variable was omitted due to multicollinearity with the YEAR dummies), shows R^2 increasing to 78%, and also exhibits distance to airports showing a negative effect (13.3%) on house prices, which also appears to diminish over space (DISTANCE^2 = 0.012, p < 0.001).
Dynamic analysis findings
Table 4 reports the results of the dynamic models (models 5-8) in which the median house price in a certain year is also influenced by its price from the previous year. The models exhibit increased R^2 and χ2 values compared with the static models, suggesting that the inclusion of house price from the previous year improves the explanatory power of the models. Further, the coefficients on PRICE_t−1 are positive and statistically significant, implying a certain degree of temporal autocorrelation within the transaction price time series. Similar to the static models, there is a relatively high consistency across models 5-8 in terms of their coefficient estimations. The findings of model 8, comprising both regional and time fixed effects, show air passenger volume to be positive (PAX = 0.012, p > 0.05), although statistically insignificant. In terms of proximity to international airports, the findings exhibit both distance coefficients to be significant at the 1% level, revealing a negative effect of 9.8% (DISTANCE) which appears to diminish over space (DISTANCE^2 = 0.009, p < 0.001).
Further analysis into the role of proximity to airports exhibits a non-linear 'U'-shape relationship to be evident between airport proximity and house prices (Figure 1). To estimate the nature of the effect, we combine the positive coefficient of DISTANCE^2 with the negative coefficient of DISTANCE to estimate the degree of distance and house prices. As observed in Figure 1, using the distance coefficients from model 8 in our analysis, we estimate the level of X at the minimum value of the quadratic function Y = aX^2 + bX + c, where a = 0.009 (i.e., the coefficient of DISTANCE^2), b = −0.098 (i.e., the coefficient of DISTANCE), and c is a constant. Accordingly, the value of X which makes the quadratic function reach its minimum is computed as X = −b/(2a) = −(−0.098)/(2 × 0.009) ≈ 5.59 and, thus, the corresponding value of airport proximity is DISTANCE = exp(5.59) = 266.65 (note 6). In other words, houses (suburbs) located approximately 300 m from an airport in New Zealand will have the lowest prices, holding other factors constant, and as distance increases those houses (suburbs) further away show price premiums. This indicates that distance to airports can be viewed as both a negative and a positive externality, as it suggests that houses in the immediate adjacency to airports will have lower prices, but only to a certain level of distance, after which prices will start to increase. In short, one can argue that airport noise and pollution adversely affect the housing market in New Zealand, and that houses farther away from the airport sell at a premium.
KEY FINDINGS AND DISCUSSION
This section is mainly based on the extended models of our analysis, where the improved R^2 of those models (compared with model 1) suggests that they can better explain the situation of the New Zealand housing market. In this sense, we have successfully applied the HPM approach to examine whether and how the various attributes (e.g., house size, number of bedrooms, distance to the airport) affect property prices in New Zealand across regions and over time.
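A quick numerical check of the turning-point calculation above, using the rounded model 8 coefficients as reported (the paper's figure of 5.59 reflects unrounded estimates):

import numpy as np

b_distance = -0.098    # model 8 coefficient on DISTANCE (rounded)
a_distance_sq = 0.009  # model 8 coefficient on DISTANCE^2 (rounded)
x_min = -b_distance / (2 * a_distance_sq)  # ≈ 5.44 with these rounded inputs
print(x_min, np.exp(x_min))                # ≈ 5.44 and ≈ 231; the paper's unrounded
                                           # coefficients give 5.59 and exp(5.59) ≈ 266.65

With the unrounded coefficients this evaluates to exp(5.59) ≈ 266.65, which the authors interpret as a price minimum roughly 300 m from the airport.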
Our first notable finding is that across all eight models, the analysis exhibits the hedonic model tends to hold true for the New Zealand housing market.Specifically, house prices in New Zealand are influenced by the housing attributes, in which larger houses (both in terms of land size and house size) tend to be more expensive than smaller ones.This finding is in line with the hedonic price literature (e.g., Affuso et al., 2019;Lieske et al., 2019;Swoboda et al., 2015;Zhang & Tao, 2020).The impact of the number of bedrooms is, however, less clear, which may be explained by the fact that a three-bedroom house is the most common in New Zealand, so the variation in the number of bedrooms is small and contributes less to house prices.For instance, the number of three-bedroom houses dominates our dataset representing 64.85%, whilst the second and third popular sizes are four and two bedrooms, which account for only 12.92% and 11.01% of the sample, respectively. Second, we also observed that house prices vary across New Zealand regions, which is revealed through the regional dummy variables (i.e., Auckland, Christchurch, Manukau, Papakura, Queenstown and Wellington).Particularly, the (negative) values of those regional dummy variables suggest that houses in Auckland were the most expensive, as all other regions have negative and significant coefficients (compared with the base region of Auckland).The second and third expensive regions are Manukau and Papakura (practically can also be considered as part of the Auckland Supercity), respectively, followed by Wellington and Queenstown.Houses in Christchurch were the cheapest (with the largest coefficients across models 2, 4, 6 and 8), which may be due to the major earthquakes in 2010 and 2011 (Kusumastuti & Nicholson, 2018) 7 as well as stigma from the Christchurch attack in 2019 (BBC News, 2019). Our third finding indicates that passenger volume (PAX) seemingly has an effect on house prices.It is evident in the literature that this type of external demand exerts pressure on local housing demand which both directly and indirectly impacts on housing values (Alola et al., 2020;Biagi et al., 2016;Tsui et al., 2017).As such, the service economy, tourism-related activities, and auxiliary effects such as job creation and the development of the hospitality sector to the local economies can result in higher incomes for the local communities and, thus, also has an indirect impact on house prices (Fu et al., 2021;Mikulić et al., 2021;Paramati & Roca, 2019;Tsui et al., 2019).Accordingly, this finding not only supports the tourism-led policy of New Zealand (MBIE, 2017;Tourism Industry Aotearoa, 2019), but also indicates that domestic travel and tourism can play an important role for house prices. Fourth, while passenger volume may be observed as a positive externality, the effects of airport noise, distance and congestion have been viewed in the literature as negative externalities on house values (Affuso et al., 2019;Dekkers & van der Straaten, 2009;McCord et al., 2018;Mense & Kholodilin, 2014).Our findings confirm this effect, as demonstrated by the negative coefficients across the static and dynamic models which suggested that houses in closer proximity to an airport are cheaper than their counterparts; however, that this effect also diminishes and displays an inverse parabolic 'U'-shape effect.This finding is in keeping with studies such as Rahmatian and Cockerill (2004) and Kaur et al. 
(2021) which both observed house prices within immediate proximity to airports to be lower which diminished with distance and increased outside the airport noise contour. Finally, we also examined the impacts of the recent COVID-19 pandemic on the New Zealand housing market, which reflects via the dummy variable COVID, and via the yearly dummies (i.e., for 2020 and 2021 compared with the 2017-19 period).The first impression is that housing prices were on an increasing trajectory, with the coefficients of the yearly dummies all positive and significant across the models containing time-fixed effects (models 3, 4, 7 and 8).In fact, the 'housing bubble' was observed in New Zealand even before the pandemic (Johnson et al., 2018;Tookey, 2017).In combination, the coefficient of COVID is also positive and significant, suggesting that the recent pandemic also contributed to the prices increase of New Zealand houses driven by acute demand tastes and increased monetary supply.The onset of the pandemic in 2020 led to a record increase in the number of New Zealanders returning to the country (Statistics New Zealand, 2020;Stuff, 2020;The New York Times, 2021).This, in turn, has put more pressure on the country's housing market and, thus, it is understandable that house prices kept increasing during the 2020-21 period. CONCLUSIONS There exists a volume of research that has isolated the effects of externalities on property prices, and specifically investigating the role of airports as an amenity.This paper has added to this literature base and provides one of the first, if not the first, studies to conduct analysis into the proximal effect of four international airports located in New Zealand on their surrounding suburbs using several static and dynamic fixed-effects GLS hedonic models. The research findings, across the differing GLS models, showed when accounting for spatial and time fixed-effects that the proximity to airports in New Zealand revealed a consistent negative effect on property prices.When further investigating the proximity effect by measuring the quadratic function of the distance coefficients, the findings clearly exhibited an inverse 'U'-shape relationship to be evident indicating that housing in suburbs closer to airports exhibit a discount, which diminishes with distance and houses located further away to display a premium.An interesting finding also showed that air traffic volume seemingly has a positive impact on property prices which may be due to economic benefits this brings to the local economy, although further research is required to isolate and examine this potential effect in more detail. 
The research is therefore important in terms of providing an evidence base for policy, particularly for planning interventions in urban environments, such as pollution controls, air traffic limits, congestion charging proposals and urban infrastructure proposals. This study also contributes to the real estate valuation literature, valuation profession and policy, in that it provides a market transaction price-based empirical assessment of how property values can be impacted by the presence of key infrastructure such as airports. For example, the findings could serve as a reference for determining the amount of compensation for noise/air pollution impacts on affected communities due to new private or public (re)development projects such as airport expansion under the 'polluter pays' principle. The findings provide clear evidence that local air zone management strategies and noise abatement and management strategies need further examination in suburbs proximal to airports in New Zealand, a fundamental issue for policy development and management targeting.
In consolidating these findings, future work will seek to use data which are spatially referenced in order to measure the nature of the effect more proficiently, and specifically the spatial heterogeneity of house prices and proximity to airports. Accordingly, the results of this study are subject to some limitations that should be addressed in future research. For example, we do not consider other sources of pollution within the vicinity of the airport (such as soil pollution), which may have price-dampening effects. We also do not investigate whether other key infrastructure or transportation comprises an effect and if the proximity to airports varies by property typology. Future research is suggested to incorporate other housing attributes and local amenities to further measure this relationship, and also within a more spatially based framework.
DISCLOSURE STATEMENT
No potential conflict of interest was reported by the authors.
NOTES
1 Provided by the Swiss Federal Laboratories for Materials Testing and Research (EMPA).
2 For instance, see Case and Quigley (1991) and Olmo (1995) for a discussion on the application of GLS in real estate research.
3 See http://www.propertyvalue.co.nz/.
4 For instance, for Auckland and Christchurch airport passenger traffic updates, see https://corporate.aucklandairport.co.nz/news/publications/monthly-traffic-updates; and https://www.christchurchairport.co.nz/about-us/who-we-are/facts-and-figures/monthly-passenger-arrivalsand-departures/.
5 It is not uncommon to find such low R^2 values in previous HPM studies (e.g., Lake et al., 1998; Yun Joe Wong et al., 2003; Elsinga & Hoekstra, 2005; Bourassa et al., 2011; Leishman & Watkins, 2017; Bełej et al., 2020), not to mention that it is only our base model and that the R^2 was improved in the extended models.
6 A quadratic function is minimized when its first derivative equals zero, that is, Y′ = 2aX + b = 0. Therefore, X = −b/(2a). The numbers are subject to rounding.
7 House prices in Christchurch increased faster than in Auckland and Wellington during the 2005-09 period (Shi et al., 2014), but the market slowed after 2010.
Figure 1. 'U'-shape relationship between house price and airport proximity.
Table 2. Variables and descriptive statistics. The statistics of the original values of the variables are shown. In our estimations, their logarithmic values, except for dummy variables such as COVID or YEAR, are used instead.
Table 3. Estimated results of the static models.
Table 4. Estimated results of the dynamic models.
2023-04-14T15:48:40.754Z
2023-04-12T00:00:00.000
{ "year": 2023, "sha1": "8896b8e1a8e37d8a7e07a91418ec772fcc9fa6ca", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1080/21681376.2023.2186805", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "7fadd7b0e8badc812f8e504822b8e9227c1fd029", "s2fieldsofstudy": [ "Economics", "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Medicine" ] }
269330107
pes2o/s2orc
v3-fos-license
Efficient EndoNeRF reconstruction and its application for data-driven surgical simulation Purpose The healthcare industry has a growing need for realistic modeling and efficient simulation of surgical scenes. With effective models of deformable surgical scenes, clinicians are able to conduct surgical planning and surgery training on scenarios close to real-world cases. However, a significant challenge in achieving such a goal is the scarcity of high-quality soft tissue models with accurate shapes and textures. To address this gap, we present a data-driven framework that leverages emerging neural radiance field technology to enable high-quality surgical reconstruction and explore its application for surgical simulations. Method We first focus on developing a fast NeRF-based surgical scene 3D reconstruction approach that achieves state-of-the-art performance. This method can significantly outperform traditional 3D reconstruction methods, which have failed to capture large deformations and produce fine-grained shapes and textures. We then propose an automated creation pipeline of interactive surgical simulation environments through a closed mesh extraction algorithm. Results Our experiments have validated the superior performance and efficiency of our proposed approach in surgical scene 3D reconstruction. We further utilize our reconstructed soft tissues to conduct FEM and MPM simulations, showcasing the practical application of our method in data-driven surgical simulations. Conclusion We have proposed a novel NeRF-based reconstruction framework with an emphasis on simulation purposes. Our reconstruction framework facilitates the efficient creation of high-quality surgical soft tissue 3D models. With multiple soft tissue simulations demonstrated, we show that our work has the potential to benefit downstream clinical tasks, such as surgical education. Introduction The development of realistic robotic surgery scenes is important for VR-based surgical training.The conventional method for creating these surgery scenes involves manual creation of soft tissue models with in-vivo textures by skilled artists.However, this approach is highly time-consuming and restricts the level of detail and variety achievable in surgical simulation.To overcome these limitations, we propose an automated approach to reconstruct interactive surgical environments using captured real data. 
Surgical reconstruction [1][2][3][4][5][6][7], as an emerging task, aims to recover the 3D shapes and appearance of soft tissues from in-vivo surgery videos. As pointed out by previous literature [6,7], surgical reconstruction is cursed with three typical challenges over natural scene reconstruction: 1) soft tissues undergo large and drastic deformations, and many surgical operations, e.g., cutting and tearing, can even damage the topologies of soft tissues; 2) surgical tools usually appear in the surgery videos and partially occlude the underlying soft tissues from observation; 3) endoscopic surgery videos are captured in confined in-vivo spaces, resulting in limited multi-view geometric clues of the 3D shapes. Our recent work EndoNeRF [7] exploits the strong capacity of NeRF [8] for scene representations and incorporates tailored modules for handling tool occlusion and single-viewpoint input, achieving significant improvements in surgical reconstruction, particularly for scenes with large deformations. However, EndoNeRF encounters new practical challenges when constructing surgical simulation environments. First, the process of reconstructing a surgical scene from endoscopic videos using EndoNeRF is inefficient, requiring over 10 hours of per-scene optimization. Second, the optimized geometry of EndoNeRF is represented in a purely implicit field, i.e., the whole scene is encoded by network parameters. However, many physically-based methods in soft-body simulation [9][10][11][12] require an explicit geometry model, e.g., meshes, particles, or tetrahedrons, rather than implicit fields. It is also worth noting that realistic interaction with soft tissues relies on the underlying content beneath the tissue surface, whereas the geometry in EndoNeRF only represents the surfaces of soft tissues. Hence, apart from surface reconstruction, another significant challenge lies in recovering topologically closed counterparts of soft tissues for simulation purposes.

To fill this gap, this work is the first attempt to create surgical simulation environments with soft tissue surfaces automatically reconstructed from endoscopic surgery videos. Technically, we propose a novel framework for dynamic surgical reconstruction, which can yield realistic and simulator-friendly counterparts of the soft tissues in the input robotic surgery videos. We summarize our main contributions as follows:
• We adopt a novel voxel grids-based scene representation for faster dynamic surgical scene reconstruction.
• We build a pipeline for converting radiance fields into a closed mesh, which enables physically-based simulation of the reconstructed surgical scenes.
• We exhibit multiple robotic surgery simulations with our reconstructed soft tissues on multiple simulation engines, including Taichi MPM [13,14] and NVIDIA Isaac Sim [15].
This work builds upon a preliminary version presented at MICCAI 2022 [7]. In this paper, we have made significant revisions and extensions to the original conference version. The major improvements include:
- We designed a new deformable scene representation with grid-based radiance fields and 4D tensor-decomposed motion fields for faster training convergence.
- We proposed a novel pipeline for extracting closed meshes from radiance fields, in order to generate simulatable soft tissues.
- We conduct multiple surgical scene simulations with our reconstructed soft tissues.
Our code is available at https://github.com/med-air/EndoNeRF.
Method

We first aim to propose a dynamic scene representation to model the soft tissue's 3D shapes and textures from a stereo video clip of a dynamic surgical scene. Then we devise a particular de-occlusion rendering and stereo depth-supervised loss for optimizing the scene representation. Finally, we fill the reconstructed mesh surfaces into closed meshes and perform soft-body simulations on the filled meshes. The detailed descriptions are as follows.

Efficient EndoNeRF Scene Representations

In order to enable high-fidelity reconstruction of the surgical simulation environments, we resort to neural radiance fields. The fundamental neural radiance fields [8] for 3D scene representations are modeled in a coordinate-based MLP. Optimizing such a scene representation to convergence is slow. Alternatively, we adopt an implicit-explicit voxel grids-based scene representation, which is shown to achieve much faster optimization [16][17][18][19]. Specifically, we model the shape and appearance of the scene in density volume grids $V^{\sigma} \in \mathbb{R}^{N_x \times N_y \times N_z}$ and feature volume grids $V^{f} \in \mathbb{R}^{N_x \times N_y \times N_z \times C}$, where $N_x$, $N_y$ and $N_z$ are the resolutions for the x, y and z dimensions and $C$ is the channel number of the appearance features. For the density volume grids $V^{\sigma}$, each grid vertex maintains its occupancy probability. For the feature volume grids $V^{f}$, each grid vertex holds an appearance code. To map the appearance code into RGB color, we introduce a shallow MLP $\mathrm{MLP}_{\Theta}: \mathbb{R}^{C} \to \mathbb{R}^{3}$ as a learnable implicit shading module. The geometry and appearance of any point $x$ in the 3D space can be retrieved via tri-linear interpolation (denoted as interp(·)) of the 8 surrounding vertices' densities and features, i.e., the density $\sigma(x) = \mathrm{interp}(x, V^{\sigma})$ and the color $c(x) = \mathrm{MLP}_{\Theta}(\mathrm{interp}(x, V^{f}))$.

Next, we consider surgical scene deformations. A dynamic surgical scene can be decomposed into a canonical radiance field and a time-dependent deformation field [20,21]. Thereby the dynamic scene at time $t$ can be viewed as the canonical field warped by the deformation field at $t$. In our proposed method, the canonical radiance field is represented by $V^{\sigma}$ and $V^{f}$. To support large and topology-varying deformations, we adopt decomposed 4D motion fields and a 3-layer MLP to model the deformation field, which maps a spatial-temporal coordinate $(x, t)$ into its corresponding displacement $\Delta x$. Specifically, we define a motion feature field as an $N_x \times N_y \times N_z \times N_t \times C_t$ tensor $T$ [22], where $N_t$ is the resolution of the time dimension and $C_t$ is the temporal feature channel number. Direct dense 5D modeling of the motion feature field is costly in storage and over-high-dimensional for optimization on sparsely captured frames. Thus, we need to seek another compact representation. Since deformations can be locally continuous and low-rank, as observed in [19,23], we can decompose this tensor via outer products (Eq. 1):

$$T = \sum_{k=1}^{4} \sum_{i=1}^{R_k} V_i^{\tilde{k}} \circ v_i^{k} \circ b_i^{k}, \qquad (1)$$

where $R_1$, $R_2$, $R_3$ and $R_4$ are the expected ranks for each dimension, $v_i^{k}$ is a 1-D vector along the $k$-th dimension, $b_i^{k}$ is a feature basis associated with the $k$-th dimension, and $V_i^{\tilde{k}}$ is a 3-D volume encompassing the three spatio-temporal dimensions other than $k$. For each continuously queried point $(x, t)$, we trilinearly interpolate the component tensors to obtain a motion feature vector. Then we feed the motion feature vector into a 3-layer MLP to compute the output displacement vector. In this way, the corresponding coordinate in the canonical field can be obtained by $x' = x + \Delta x(x, t)$ with $\Delta x(x, t) = \mathrm{MLP}(\mathrm{interp}(x, t, T))$.
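To make the grid query concrete, the following is a minimal NumPy sketch of tri-linear interpolation of a batch of 3D points against a feature (or density) volume, which is the operation interp(·) performs above. The grid resolutions, random grid contents and variable names are illustrative assumptions rather than the authors' implementation, and the shallow shading MLP and decomposed motion field are only indicated in comments:

    import numpy as np

    def trilinear_interp(volume, pts):
        # volume: (Nx, Ny, Nz, C) grid; pts: (M, 3) continuous voxel coordinates.
        dims = np.array(volume.shape[:3])
        pts = np.clip(pts, 0.0, dims - 1)
        lo = np.floor(pts).astype(int)
        hi = np.minimum(lo + 1, dims - 1)
        w = pts - lo                                   # fractional offsets in [0, 1)
        out = np.zeros((pts.shape[0], volume.shape[3]))
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    ix = hi[:, 0] if dx else lo[:, 0]
                    iy = hi[:, 1] if dy else lo[:, 1]
                    iz = hi[:, 2] if dz else lo[:, 2]
                    wx = w[:, 0] if dx else 1.0 - w[:, 0]
                    wy = w[:, 1] if dy else 1.0 - w[:, 1]
                    wz = w[:, 2] if dz else 1.0 - w[:, 2]
                    out += (wx * wy * wz)[:, None] * volume[ix, iy, iz]
        return out

    # Toy grids standing in for the density and feature volumes (real grids are learned).
    Nx, Ny, Nz, C = 64, 64, 64, 12
    V_sigma = np.random.rand(Nx, Ny, Nz, 1)
    V_feat = np.random.rand(Nx, Ny, Nz, C)
    pts = np.random.rand(1024, 3) * (np.array([Nx, Ny, Nz]) - 1)

    sigma = trilinear_interp(V_sigma, pts)[:, 0]       # per-point density
    feat = trilinear_interp(V_feat, pts)               # per-point appearance code
    # A real pipeline would first displace pts by the decomposed motion field and
    # then map feat to RGB through the shallow shading MLP.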
Rendering and Optimization

Volume rendering. With this scene representation, we can reconstruct the deformable surgical scene by optimizing the loss between the rendered color $\hat{C}$ and the ground truth color $C$. Specifically, the rendered color of the ray $r(s) = o + s\,d$ at time $t$ can be evaluated by volume rendering as shown in Eq. 2:

$$\hat{C}(r, t) = \sum_{j=1}^{M} T_j \left(1 - \exp(-\sigma_j \delta_j)\right) c_j, \qquad T_j = \exp\Big(-\sum_{k=1}^{j-1} \sigma_k \delta_k\Big), \qquad (2)$$

where $M$ is the number of sampled points along $r(s)$, $\delta_j$ is the sampling step length, and $\sigma_j$ and $c_j$ are the density and color of the j-th sample, evaluated by $\sigma(x_j + \Delta x(x_j, t))$ and $c(x_j + \Delta x(x_j, t))$, respectively. The attenuation term $T_j$ can be regarded as the probability that the ray is transmitted to the j-th sample.

De-occlusion of surgical tools. According to the literature [6,7], soft tissues in surgical videos can often be occluded by surgical tools in the foreground. To address this issue and accurately reconstruct the soft tissues, our approach excludes the rays corresponding to tool pixels from training. Following the methodology proposed in EndoNeRF [7], we generate binary tool masks for the left view of each frame. Instead of the mask-guided ray sampling proposed in EndoNeRF [7], which screens out rays in every training iteration, we pre-compute all possible camera rays and check for intersections between these rays and the tool masks prior to training. This saves computational costs during the scene optimization procedure, resulting in faster training. Any rays that pass through the tool masks are excluded from the training process. During training, the training batch R is randomly sampled from the pre-computed rays that have been screened in this manner. By doing so, we ensure that the optimization of the scene representation bypasses the tool pixels. Leveraging the auto-interpolation property of radiance fields, we can patch the occluded soft tissue areas using information from adjacent frames throughout the training procedure.

Distillation of stereo correspondence. To exploit stereo geometry in the confined in-vivo input, we propose to leverage stereo geometry to enrich 3D clues over the optimization of the scene representation. The very recent work unimatch [24] learns dense correspondence on general vision datasets in a unified formulation for optical flow, stereo matching, and depth estimation tasks. Due to its superior performance over the previous method [25], we propose to distill stereo correspondence learned on general data into the surgical data along with the optimization of the surgical scene. To measure the learned stereo correspondence of the scene representation, we render depth from the radiance fields via $\hat{D}(r, t) = \sum_{j=1}^{M} T_j \left(1 - \exp(-\sigma_j \delta_j)\right) z_j$, where $z_j$ is the distance of the j-th sample along the ray $r(s)$. The rendered depth is expected to converge to the estimated stereo depth once well-matched stereo correspondence is attained by optimizing the scene representation. Thus, we estimate the stereo depth $D(r, t)$ by stereo-matching the feature correspondence of the robotic surgery videos with unimatch [24]. Lastly, we add a depth-supervised loss to the objective function, resulting in the final loss function:

$$\mathcal{L} = \sum_{r \in R} \left\| \hat{C}(r, t) - C(r, t) \right\|_2^2 + \mathrm{Huber}\big(\hat{D}(r, t), D(r, t)\big),$$

where $C(r, t)$ and $D(r, t)$ are the corresponding ground truth pixel color and unimatch [24] stereo depth of camera ray $r$ at time $t$. Here we adopt the Huber loss [26], which is more stable to outliers. Compared with the stereo depth maps predicted by STTR [25], supervising with the better depth maps from unimatch can further decrease the training time since the depth refinement module proposed in EndoNeRF [7], which requires depth rendering of all training images, is no longer needed.
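As a minimal illustration of Eq. 2 and the depth-supervised term, the sketch below composites per-sample densities and colors along one ray and compares the rendered depth with a stereo estimate via the Huber loss. The sample values, step lengths and Huber threshold are placeholders; in the actual method the densities and colors come from the warped canonical grids and the reference depth from unimatch:

    import numpy as np

    def composite_ray(sigmas, rgbs, deltas):
        # sigmas: (M,) densities, rgbs: (M, 3) colors, deltas: (M,) step lengths.
        alpha = 1.0 - np.exp(-sigmas * deltas)                     # per-sample opacity
        acc = np.concatenate([[0.0], np.cumsum(sigmas * deltas)[:-1]])
        trans = np.exp(-acc)                                       # T_j: transmittance to sample j
        weights = trans * alpha
        color = (weights[:, None] * rgbs).sum(axis=0)              # rendered pixel color (Eq. 2)
        z = np.cumsum(deltas)                                      # approximate sample distances
        depth = (weights * z).sum()                                # rendered (expected) depth
        return color, depth

    def huber(a, b, delta=1.0):
        r = np.abs(a - b)
        return np.where(r < delta, 0.5 * r ** 2, delta * (r - 0.5 * delta)).mean()

    sigmas = np.random.rand(64) * 5.0
    rgbs = np.random.rand(64, 3)
    deltas = np.full(64, 0.02)
    color, depth = composite_ray(sigmas, rgbs, deltas)
    stereo_depth = depth + 0.05                                    # stand-in for the unimatch estimate
    loss_depth = huber(np.array([depth]), np.array([stereo_depth]))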
Extraction of Closed Meshes for Soft-Body Simulations

After we obtain an optimized dynamic radiance field, we aim to perform physically-based simulations on the reconstruction. Numerically solving physically-based simulation systems requires dividing the object material domain into a number of geometry primitives. Since our reconstructed scene representation only encodes the seen soft tissue surface in an implicit geometry, we need to first obtain its explicit form and convert it into a simulatable object. To do this, we propose the following procedure. We first render the reconstructed canonical radiance fields to color and depth maps. Then, we back-project the RGB-D maps into point clouds. Namely, each 3D point $(x, y, z)$ can be computed from a corresponding pixel $(u, v)$ with depth value D as $(x, y, z) = \big(D(u - c_x)/f,\; D(v - c_y)/f,\; D\big)$, where $(c_x, c_y)$ is the principal point and $f$ is the focal length. Bilateral filtering is also applied to smooth the point clouds. After conversion to point clouds, we perform Poisson surface reconstruction to extract the mesh surface from the simplified point clouds. Subsequently, we need to construct supporting structures underneath the surface for deformable object simulations. The Material Point Method (MPM) and the Finite Element Method (FEM) both require a closed mesh surface as the input for discretization. For MPM solvers, dense particles are sampled to fill the soft tissue surface [27]. As for the FEM solver, robust tetrahedral meshing algorithms [28][29][30] have been proposed to convert surface objects into tetrahedral meshes. Thus, we tailor an efficient mesh-open2closed algorithm that can universally enclose the reconstructed mesh surfaces. The pseudocode of the algorithm is given in Algorithm 1, where the input mesh vertices V and triangles F are structured in 2D arrays, and $v_x$, $v_y$ denote the x and y coordinates of vertex $v$. The algorithm begins by constructing the boundary edges of the reconstructed surface and organizing them in a list. Those boundary edges can be identified by a non-manifold test, i.e., manifold edges should be simultaneously included in 2 triangles. After finding the non-manifold edges, we link them into a single connected boundary loop, build a base plane beneath the soft tissue surface, and connect the base with the boundary vertices along the loop to enclose the mesh. It is noteworthy that our algorithm is designed for an input mesh with a single "hole", i.e., there is only one connected boundary. This assumption usually holds since the incisions on soft tissues are relatively shallow in in-vivo surgical scenes. If there are two disjoint surfaces represented in the reconstructed field, a solution is to run the algorithm separately for each surface.
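The RGB-D back-projection step can be sketched directly from the pinhole formula above. The intrinsics, the synthetic depth/color maps and the zero-depth mask below are illustrative assumptions; bilateral filtering and the subsequent Poisson surface reconstruction would in practice be delegated to a geometry library (Open3D, for example, provides a Poisson reconstruction routine) and are not shown:

    import numpy as np

    def backproject(depth, color, f, cx, cy):
        # depth: (H, W) depth map, color: (H, W, 3) image, f: focal length,
        # (cx, cy): principal point. Returns colored 3D points, one per valid pixel.
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        x = depth * (u - cx) / f
        y = depth * (v - cy) / f
        pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        cols = color.reshape(-1, 3)
        valid = depth.reshape(-1) > 0                 # drop pixels without rendered depth
        return pts[valid], cols[valid]

    depth = np.random.rand(480, 640) * 0.1 + 0.05    # synthetic depth map (meters)
    color = np.random.rand(480, 640, 3)
    points, colors = backproject(depth, color, f=540.0, cx=320.0, cy=240.0)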
Evaluation of Efficient EndoNeRF

We conducted an evaluation of our proposed method on a set of typical clips of robotic surgery videos, captured from 10 cases of our in-house DaVinci robotic prostatectomy dataset. In addition to the cases used in EndoNeRF [7], the new cases contain suturing, bleeding, and cutting on soft tissues. Each clip lasted for 4 to 8 seconds and was sampled into 45 ∼ 180 frames. These clips were captured from stereo cameras, and they encompassed challenging scenes with non-rigid deformation and tool occlusion. To establish the effectiveness of our new method, we compared it with two strong baselines: the recent NeRF-based method EndoNeRF [7] and the traditional DynamicFusion-based approach E-DSSR [6]. For qualitative evaluation, we exhibit the reconstruction objects produced by our method, including reconstructed point clouds, surface meshes, and closed meshes. Due to clinical regulations, it is infeasible to collect ground truth depth for numerical evaluation of 3D structures. To perform quantitative comparisons, we instead used photometric errors, such as PSNR, SSIM and LPIPS, together with training time, as evaluation metrics. This evaluation methodology is consistent with that used in previous work on surgical scene reconstruction, such as [6,7], and is widely used in the field of neural rendering.

Figure 2 showcases our reconstruction outcomes, including extracted point clouds, soft tissue mesh surfaces, and closed meshes. Our FastEndoNeRF algorithm excels at reconstructing watertight surfaces of soft tissues from videos, faithfully capturing the intricate in-vivo textures. Despite the presence of large deformations, our method tracks the dynamics of the soft tissues using our proposed 4D decomposed motion field. For tool occlusion in the input videos, our method manages to patch tool-occluded areas by leveraging information from adjacent frames, ensuring a comprehensive and watertight representation of the dynamic soft tissue. In order to ensure that the reconstructed surface is suitable for simulation purposes in contemporary simulation engines, we have employed a mesh extraction scheme capable of constructing high-resolution meshes with intricate textures and shapes from the reconstructed point clouds. Furthermore, our proposed mesh-open2closed algorithm facilitates the creation of a closed structure by appending a base to the mesh surface. This closed structure is essential for enabling accurate simulations in the chosen environment.

In Figure 3, we run our method and the original EndoNeRF [7] on the same NVIDIA RTX 3090 GPU for 3 minutes and compare their training efficiency. Due to the limited training time, the reconstruction results obtained with EndoNeRF remain noisy and blurry. Conversely, our method demonstrates impressive performance even at an early training stage (i.e., 10s to 60s), with the ability to approximate the scene's appearance and shape accurately. This validates the superior training convergence speed of our proposed scene representation. It is noteworthy that our model employs ∼160M parameters, consuming 4GB of GPU memory for training each case. Without factorizing the 4D deformation field, the deformation field alone would necessitate an allocation of over 12GB of memory during the training procedure, which shows the effectiveness of our compact dynamic scene representations.
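For reference, PSNR, one of the photometric metrics used in this comparison, can be computed directly from the mean squared error between a rendered frame and its ground-truth counterpart, as in the small sketch below; SSIM and LPIPS are usually taken from existing packages (for example scikit-image and the lpips library) rather than re-implemented, and the images here are placeholders:

    import numpy as np

    def psnr(rendered, reference, max_val=1.0):
        # Both images are float arrays scaled to [0, max_val].
        mse = np.mean((rendered.astype(np.float64) - reference.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

    ref = np.random.rand(256, 256, 3)
    render = np.clip(ref + np.random.normal(scale=0.02, size=ref.shape), 0.0, 1.0)
    print(f"PSNR: {psnr(render, ref):.2f} dB")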
Table 1 displays a quantitative comparison of the metrics PSNR, SSIM, LPIPS, and training time. Both methods exhibit impressive photometric results when compared to the traditional method of E-DSSR [6]. Despite a slight decrease in performance, FastEndoNeRF achieves a remarkable training time improvement of approximately 20 times faster than EndoNeRF. By training FastEndoNeRF for 27 minutes, we can achieve comparable quality to EndoNeRF trained for over 10 hours. This highlights the efficiency and effectiveness of the FastEndoNeRF approach.

Fig. 4: Soft tissue simulation results. Panels: NVIDIA Isaac Sim FEM Solver; Taichi MPM Solver + Particle Renderer. The first row exhibits real-time interaction between surgical tools and reconstructed soft tissues in NVIDIA Isaac Sim [15]. The second presents a simulation example of soft tissue incision with the MLS-MPM algorithm [14] implemented in Taichi [13].

Initial Application for Surgical Scene Simulation

Virtual surgical training platforms have become increasingly significant in surgery education and training [31,32]. However, building a surgical education and training platform is associated with several challenges, including limited exposure to real-life surgical cases and limited access to high-fidelity simulation. Our proposed framework can overcome these challenges by providing a reconstructed realistic environment for surgical trainees to practice and master their skills.

Real-time FEM simulation. Here we first build a real-time virtual surgery simulation in NVIDIA Isaac Sim [15], where FEM is the solver for simulating reconstructed continuum objects. In the first row of Figure 4, we import a reconstructed closed mesh into NVIDIA Isaac Sim and tune its physical properties to make it behave like soft tissues. Owing to advanced GPU acceleration, NVIDIA Isaac Sim enables real-time FEM simulation and rendering, producing high-fidelity deformations under the dissection interaction. The automatic reconstruction of the simulation environment from real surgical videos ensures that the in-vivo textures are accurately preserved, thereby enhancing the visual realism of surgical simulations. The proposed algorithm for closed mesh extraction facilitates material domain discretization for the FEM solver within Isaac Sim. If the imported meshes are not closed in Isaac Sim, the mesh tetrahedralization procedure will fail, resulting in unreasonable simulation effects. Moreover, the creation procedure for this simulation environment is highly scalable, thanks to the efficiency of the surgical reconstruction pipeline.

MPM simulation. While the FEM solver in NVIDIA Isaac Sim achieves basic soft-body simulation, it lacks the ability to perform damage operations on continuum objects, which is considered a crucial aspect of simulating soft tissues. In order to address this limitation, we employ the Material Point Method (MPM) [33], a hybrid grid-particle method that combines the strengths of both Eulerian and Lagrangian approaches. This method enables us to handle large deformations and complex material behavior, as demonstrated in recent papers [34,35]. To specifically support damage deformations on soft bodies and achieve two-way coupling between rigid and non-rigid objects, we implement the state-of-the-art MLS-MPM [14]. In Figure 4, the second row illustrates an example of soft tissue damage resulting from dissection. It is evident that MLS-MPM is capable of accurately capturing the incision behavior on the soft tissues. While MPM offers soft tissue damaging simulation, it is characterized by
high computational costs and falls short of achieving real-time simulations. In the simulation stage, ∼5M particles are generated for simulation, resulting in a memory consumption of around 5GB.

Conclusion

We present an innovative and data-driven framework for constructing surgical simulation environments using endoscopic videos. Our approach introduces a new fast dynamic scene representation based on NeRF, which significantly accelerates the 3D reconstruction process of surgical scenes. Additionally, we propose a closed mesh extraction algorithm that converts reconstructed soft tissue surfaces into simulation objects. To demonstrate the versatility and applicability of our framework, we showcase multiple simulations of reconstructed surgical environments for diverse clinical applications. Our proposed methodology aims to inspire a significant advancement in the field of surgical simulation and is poised to open up new possibilities for next-generation surgical training and surgical robot learning.

Limitations and future work. There are still some under-explored problems with our current methods. First, the de-occlusion of surgical tools relies on the interpolation of radiance fields, which will cause artifacts in the textures of occluded soft tissues. This could be solved by incorporating generative models to inpaint the textures. Second, as an initial trial, our simulation is based on the naive versions of FEM and MPM. In the future, we aim to test more simulation algorithms on our reconstructed soft tissues, e.g., XFEM and XMPM, to achieve more realistic simulation effects.

Fig. 1: Pipeline of our proposed FastEndoNeRF framework, consisting of a 4D-decomposed motion field and dense 3D voxel grids.
Algorithm 1: The mesh-open2closed procedure, which iteratively builds a base plane of the soft tissue surface and connects the base with the boundary vertices along C.
Fig. 2: Reconstruction results. The first column gives the reference input image, the second column exhibits reconstructed point clouds, the third column shows the meshes obtained by Poisson surface reconstruction on the point clouds, and the last column displays the closed meshes appended with a base.
Fig. 3: Comparisons of reconstruction quality between EndoNeRF and our method within the first 3 minutes of training.
Table 1: Quantitative evaluation and comparison of our method and baselines. We evaluate photometric errors and training time of the dynamic reconstruction.
2024-04-25T06:44:50.373Z
2024-04-10T00:00:00.000
{ "year": 2024, "sha1": "8f9b7e88aabf4b5c19fcf0ba0b1ea15f39cb24fd", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11548-024-03114-1.pdf", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "8f9b7e88aabf4b5c19fcf0ba0b1ea15f39cb24fd", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Engineering", "Medicine" ] }
65557002
pes2o/s2orc
v3-fos-license
Driver Behaviour State Recognition based on Speech

Research has linked the causes of traffic accidents to driver behavior, and some studies have provided practical preventive measures based on different input sources. Because it is simple to collect, speech can be used as one of these inputs. The emotion information gathered from speech can be used to measure driver behavior state, based on the hypothesis that emotion influences driver behavior. However, the massive amount of driving speech data may hinder optimal performance in processing and analyzing the data due to computational complexity and time constraints. This paper presents a silence removal approach using Short Term Energy (STE) and Zero Crossing Rate (ZCR) in the pre-processing phase to reduce unnecessary processing. The Mel Frequency Cepstral Coefficient (MFCC) feature extraction method coupled with a Multi-Layer Perceptron (MLP) classifier is employed to obtain the driver behavior state recognition performance. Experimental results demonstrated that the proposed approach can obtain comparable performance, with accuracy ranging between 58.7% and 76.6%, in differentiating four driver behavior states, namely: talking through a mobile phone, laughing, sleepy and normal driving. It is envisaged that such an approach can be extended to a more comprehensive driver behavior identification system that may act as an embedded warning system for sleepy drivers.

Traffic accidents are typically attributed to three main factors, namely: driver, vehicle and external environment [4]. Among vehicle factors, lack of maintenance (e.g., bald tires, bad brakes), mechanical failure (e.g., vehicle age, spare parts past their expiry date) and design flaws (e.g., manufacturing malfunctions) are the most common reasons why accidents occur. MIROS reported that vehicle defects contribute 2% of the total causes of accidents [5]. Another 13% of the total causes of accidents is attributed to the road environment. The road environment covers situations such as hazardous road conditions (e.g., potholes, windy roads with no lines, steep shoulders, unsafe work zones for road repair, confusing road signs, defective traffic lights), road obstructions (e.g., animal crossings, objects on the road) as well as weather and ambience (e.g., fog, excessive rain, slick roads, high wind, extreme differences in temperature, lighting as in sunrise and sunset), which are the common environmental aberrations that may cause accidents. Subsequently, the human factor is the most substantial contributor, being the principal cause of 85% of road traffic crashes. According to Redhwan and Karim [6], fatigue, aggressive driving, sudden braking, following too closely and exceeding the speed limit are the common driver behavior factors in Malaysia that lead to accidents. Tawari [7] stated that there are different types of human factors, such as distraction, drowsiness and emotion while driving. The potential distractions are drinking or eating, passenger disturbance, objects in the vehicle, using the phone and other distractions [8]. According to Young and Regan [9], driver distraction happens when a driver is hindered from receiving the information needed while performing the driving task. This is because some event, activity, object or person within or outside the vehicle switches the driver's attention away from concentrating on the driving task.
Kang [10] indicated that driver drowsiness and distraction have been significant factors in many accidents because the driver's awareness level and decision-making capability are reduced, which has a negative effect on the driver. In addition, the human factor can also refer to the influence of human emotion, which could be dangerous while driving. Psychologists have observed that emotion is influenced directly by experience of how one feels about the objects in the surroundings [11]. Moreover, distraction, concentration, careless driving and loss of self-control while driving will also affect the emotion of the driver [12][13]. Therefore, it is necessary to monitor driver behavior and alert the driver when they are in a distracted state in order to reduce accidents. Unsafe driving behaviors can be predicted in advance, and this could lead to safe driving. According to Bayly [14], the number of accidents could be reduced by 10% to 20% by monitoring and predicting the driver's driving behavior states. For simplification, the factors of road traffic accidents are illustrated in Figure 3. Human factors can be simplified into four sub-sections to facilitate understanding, namely: physiological, psychological, behaviour and cognitive. The physiological factor refers to aspects related to the characteristics of normal human functioning and well-being. Defects in this factor include fatigue, eyesight impairment or disorder, sleep deprivation, nutritional deficit, alcohol/drug/medicine intoxication, or medical conditions such as seizures, strokes, heart attack and false sensations of the sensory organs. The inability to judge based on previous experience, a short attention span and low memory capacity also fall into the physiological factor. On the other hand, the psychological factors concern the workings of the mind or psyche. They can be further segregated into motivation, perceptions, learning, beliefs and attitudes. For instance, acute stress, excessive emotion, lack of competence and skill, attitude (e.g., negligence, arrogance, boldness, overconfidence), personality (e.g., compromising, hardliner) and individual characteristics are common reasons why accidents may occur. In this paper, focus is given to the psychological factor with emphasis on the underlying emotion extracted from speech.

Speech can be used to measure driver behavior states (DBS) because speech carries underlying information that can differentiate one DBS from another [12,13]. However, speech data typically comprises silence, voiced and unvoiced regions. Silence is observed when no speech is produced. Unvoiced speech differs in that it is produced without the vocal cords vibrating, resulting in an aperiodic and random speech waveform. On the contrary, voiced speech produces a quasi-periodic speech waveform due to the air flowing from the lungs through the tensed vocal cords. It has relatively high energy with a smaller number of zero crossings present in the speech waveform [15]. For most practical cases, the voiced region contains more information than the other two regions. Therefore, the silence and unvoiced regions are grouped together as the silence region and need to be minimized. To complicate matters, background noise such as the sound from the vehicle's engine, air conditioner, wind and others makes the speech-to-noise ratio (SNR) relatively low, resulting in difficulty segregating the data to be analyzed from artifacts. Hence, an automated tool to remove silence is developed using Energy and Zero Crossing Rate (ZCR).
Such a tool is useful in pre-processing the large amount of data collected [16]. Although manual pre-processing of the data is best, an automated tool can facilitate the analysis in terms of time, effort and cost. In this paper, four different DBS are identified, namely: talking through a mobile phone, laughing, normal and sleepy driving, using Mel Frequency Cepstral Coefficient (MFCC) features coupled with a Multi-Layer Perceptron (MLP) classifier. To explore the effect of silence removal, Short Time Energy (STE) and Zero Crossing Rate (ZCR) are used to truncate silence and unvoiced regions in order to make the data more compact. The aim of the paper is to enhance driver behavior state (DBS) recognition through the removal of unvoiced and silence regions from the speech signal, thus improving the computational time and complexity. This paper is organized in the following manner. Section 2 briefly describes the Real-time Speech Driving (RtSD) dataset [12,13] used for this work as well as the feature extraction method and classifier employed. Section 3 demonstrates the use of Short Term Energy (STE) and Zero Crossing Rate (ZCR) to remove silence and unvoiced regions. The experimental setup, results and discussion are provided in Section 4. In conclusion, Section 5 presents the summary and future work.

Real-time Speech Driving Dataset, Feature Extraction Method and Classifier

2.1. Real-time Speech Driving Dataset (RtSD Dataset)

The Real-time Speech Driving Dataset (RtSD) [12][13] was collected by the Center for Computational Intelligence (C2iLAB) at the Nanyang Technological University, Singapore. Eleven Singaporean and Malaysian drivers participated, with ages ranging from 20 to 54 years old and a minimum of five years of driving experience. Each participant was required to drive a 25.61 km route under differing traffic conditions and environments for approximately 60 minutes. Three microphones were placed around the vehicle to record the ambient noise, while one microphone was placed very close to the driver's mouth. Signals from all four microphones were recorded and processed to obtain cleaned speech. Each participant had to go through three different driving conditions: a) normal driving condition listening to the car radio with no interruption, b) heavy traffic with traffic lights and interrupted with an interview being carried out in the vehicle, and c) no traffic lights but heavy vehicles on the road, with the driver required to make phone calls. In this paper we investigate four different driving behaviour states (DBS), namely: talking through a mobile phone, laughing, sleepy and normal driving condition. The talking through mobile phone distractor represents a medium-stress DBS. Each driver was asked simple questions and needed to provide fast and accurate answers. The laughing DBS was captured when the driver was laughing while reading aloud the road directional signboards, with the experimenter on board reading jokes. This is supposed to have a complementary effect to the stress induced by the talking through mobile phone distractor. Normal condition driving is used as the baseline since most of the time the driver will be driving under this condition. In addition, the sleepy DBS was captured during the final phase of the driving exercise, when the driver was exhausted. The data was only analysed for selected drivers who complained of sleepiness during the driving exercise, especially towards the end of the driving task.
The data acquisition system comprises the recording of a series of simultaneous data, such as brake/gas pedal pressure signals, the driver's facial expression and speech, as well as road conditions, as shown by the block diagram in Figure 4(a). Figure 4(b) shows the microphone mounted on the dashboard, the mounting of the video camera to record the road conditions and the digital audio recorder recording noise in the vehicle. The aim of such a comprehensive setting was to ensure that a complete collection of real-time driving data on an actual driving vehicle with various test subjects could be carried out. For the purpose of this study, only the speech data is used for the analysis. The driving route consists of six segments, and the set of instructions for the driver to follow is divided into four phases. The route was planned to induce the driver to experience stress, distraction and frustration. All drivers were unfamiliar with the route, thus periods of familiarization and rest were included in all analyses. For the study we selected a route with an average amount of traffic at off-peak hours such that a typical daily commute and traffic could be simulated. This also helps the experimenters to control the situation better to ensure the safety of the driver. The detailed description of the dataset can be found in [16].

Mel Frequency Cepstral Coefficient (MFCC)

MFCC exploits the human auditory frequency response, as in the cochlea, using a certain number of filter bank coefficients and a specific filter shape. These features capture the perceptually most important parts of the spectral envelope of audio signals, mirroring how sound energy is translated into nerve impulses for the brain. The Slaney MFCC implementation [17], extracting 40 features from the speech signals, was selected based on the claim by Ganchev et al. that Slaney's approach gives slightly better performance than others [18].

Multi-layer Perceptron (MLP) Classifier

The Multi-Layer Perceptron is a feed-forward neural network trained with the standard backpropagation algorithm. The MLP has the ability to find nonlinear boundaries separating the states. The complexity of an MLP network can be changed by varying the number of hidden layers and the number of neuron units in each layer. Given enough hidden units and training data, an MLP can approximate virtually any function to the desired targets [19]. During training, error information is propagated back through the network to adjust the weight of each neuron and map the output with the minimal mean square error. In this work, 1- and 2-layer MLPs with 10, 20, 30 and 40 neurons are used to observe the DBS identification performance.

Silence Removal

Real-world data is often distorted due to the presence of noise and artifacts that may cause linear or non-linear transformations of the original data. Analysing corrupted or distorted data may give wrong results, thus yielding wrong conclusions. Hence, it is imperative that data to be used for any analysis be pre-processed to ensure it is free from noise and artifacts. Clean data is needed for the analysis to ensure the observations are derived from the correct data. Thus the raw data to be used by the feature extraction and classification stages must first be pre-processed by removing all the noise and artifacts. In this work, we focus only on silence and unvoiced region removal, since the noise produced by the vehicle is very obvious and the signal-to-noise ratio (S/N) is the lowest there.
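As a rough illustration of the MFCC front end described in the section above, the snippet below extracts 40 MFCCs per frame with librosa, whose default mel filterbank follows the Slaney formulation; the exact implementation in [17] may differ in details such as framing, windowing and liftering, and the file path and frame/hop lengths are assumptions for illustration:

    import librosa

    def extract_mfcc(wav_path, n_mfcc=40, frame_ms=25, hop_ms=10):
        # Load the clip at its native sampling rate and compute an (n_frames, 40)
        # MFCC matrix, one 40-dimensional feature vector per analysis frame.
        y, sr = librosa.load(wav_path, sr=None)
        n_fft = int(sr * frame_ms / 1000)
        hop = int(sr * hop_ms / 1000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, n_fft=n_fft, hop_length=hop)
        return mfcc.T

    features = extract_mfcc("driver_clip.wav")   # hypothetical RtSD speech clip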
In the vehicular environment the engine noise and other ambient noise will be at their maximum during the non-voice regions, so these regions are not useful for our driving behavior analysis. For simplicity, in this paper we use the terms silence and unvoiced regions interchangeably to refer to the silence region. In speech production, silence regions exist between voiced and unvoiced speech segments. The silence region is characterized by the absence of any speech signal characteristics. Silence is essential for humans to comprehend speech, but for our analysis it becomes redundant and needs to be removed. Figure 5 depicts the silence region in the speech signal. During a silence region there is no excitation input to, or output from, the vocal tract, as shown in the block diagram of speech with silent regions in Figure 5(a). It has the lowest energy compared to unvoiced and voiced speech segments, as shown in Figure 5.

Two of the most common approaches to detect the silence region employ the Zero Crossing Rate (ZCR) and Short Time Energy (STE). The ZCR can be defined as the rate at which the signal changes sign, from positive to negative or back, while the signal is being transmitted [20]. It is a measure of the number of times, in a given time interval/frame, that the amplitude of the speech signal passes through a value of zero. ZCR is very popular for Voice Activity Detection (VAD) to distinguish between voiced, unvoiced and silence regions. This ability is due to the fact that the ZCR for unvoiced sounds and noise is usually higher than for voiced sounds, thus making it possible to detect the start and end points of unvoiced sounds. The Short Term Energy (STE) can be defined as the energy within a short speech segment [21]. It is a simple and effective classification parameter, especially for differentiating between voiced and unvoiced sounds or silence, because the voiced signal typically produces higher energy than silence. A silence region determined by STE will produce less energy than a certain threshold, and will then be truncated. The combination of ZCR and STE (ZCR+STE) is used because in voiced speech the STE values are much higher than in unvoiced speech, while unvoiced speech has a higher zero crossing rate. The input signal is processed with ZCR and STE using different types of windowing, and the output is used for frame-by-frame silence removal. The calculation is based on the number of frames, and each frame is checked to determine whether a silence region exists or not. The condition states that if the maximum amplitude of the original input is less than the maximum of the output, the signal will be truncated. In addition, if the minimum energy is less than the threshold, the frame is considered silence and will be omitted. The ZCR+STE is calculated using a window length of 200. The window length is the value which influences the detection of voiced, unvoiced and silence signals. Figure 6 shows the flow of the silence removal using ZCR+STE. In this work, a Hamming window is used. Z is the output of the ZCR_STE function while S is the original speech. Framing is used to separate the Z and S signals into frames of 0.01 seconds. The number of frames is calculated by dividing the length of S by the frame length. Each 0.01-second frame of S and Z is then compared. If max(S) >= max(Z), the frame is appended and the process moves to the next iteration. Otherwise, the frame is removed. Finally, the clean signal (with unvoiced and silence regions removed) is returned.
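A simplified frame-wise version of the ZCR+STE silence removal could look like the sketch below: frames with low short-term energy or a high zero crossing rate are treated as silence/unvoiced and dropped. The fixed thresholds and the keep/drop rule are illustrative simplifications of the Hamming-windowed ZCR_STE computation and the max(S) >= max(Z) comparison described above:

    import numpy as np

    def short_time_energy(frame):
        return float(np.sum(frame.astype(np.float64) ** 2))

    def zero_crossing_rate(frame):
        # Fraction of consecutive samples whose sign differs.
        signs = np.sign(frame)
        return float(np.mean(np.abs(np.diff(signs)) > 0))

    def remove_silence(signal, sr, frame_sec=0.01, energy_thresh=1e-4, zcr_thresh=0.35):
        flen = max(1, int(sr * frame_sec))
        kept = []
        for start in range(0, len(signal) - flen + 1, flen):
            frame = signal[start:start + flen]
            voiced_like = (short_time_energy(frame) >= energy_thresh and
                           zero_crossing_rate(frame) <= zcr_thresh)
            if voiced_like:
                kept.append(frame)
        return np.concatenate(kept) if kept else np.array([], dtype=signal.dtype)

    # Example: clean a synthetic 1-second signal sampled at 16 kHz.
    sr = 16000
    signal = np.random.randn(sr) * 0.01
    clean = remove_silence(signal, sr)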
Experimental Set-up, Results and Discussion

Once the data had been cleaned, the MFCC feature extraction method and MLP classifier were employed for DBS identification. In this work, two types of data arrangements were used, with different numbers of instances in the targeted DBS classes, namely: a) talking-biased and b) even-distribution data arrangements. The talking-biased data arrangement comprised 2000 instances of the talking through mobile DBS, 666 instances each for the sleepy and laughing DBS, and the remaining 668 instances for the normal driving DBS. This arrangement reflects a real-life situation where more drivers will be talking through their handphones while driving, simulating the medium-level stress distractor. In addition, this data arrangement also allows us to analyse aggravated drivers talking on their handphones and provides a free and unconstrained way for the driver to respond. On the contrary, the even-distribution data arrangement consists of 1000 instances for each of the four studied DBS. It is hypothesized that talking through mobile will be the most recognized DBS in the talking-biased data arrangement experiment, whereas a more consistent performance among the four DBS should be observed in the even-distribution data arrangement experiment.

A 5-fold validation technique is employed using the 80-20 rule. The data is segregated randomly into 5 folds, where 80% of the data is used for training and the remaining 20% is used for testing. The training-testing pairs are iteratively changed until the data has been used completely. This ensures that the classifier generalizes over the data instead of memorizing it (using similar data for training and testing). Eight MLP network architectures are implemented using 1 and 2 hidden layers with 10, 20, 30 and 40 neurons respectively.

Figure 7 presents the identification performance using the talking-biased data arrangement across multiple MLP network architectures. The results in Figure 7 illustrate that the laughing DBS is consistently the least identified compared to the other DBS, with the lowest performance recorded using a one-hidden-layer MLP with 10 neurons (11.41%) and the best performance using 2 hidden layers with 30 neurons (32.88%). The talking through mobile DBS performed as expected, with mean performance of 84.4% and 83.6% for one- and two-layer MLPs respectively. This is almost two times better than the accuracy for the sleepy and normal DBS, which yielded performance ranging between 35% and 49%. Hence, it shows that the number of instances in a class may affect performance, and different MLP network architectures may give different performance.

Figure 7. Silence region removal using ZCR+STE.

Further analysis was conducted to determine the optimal MLP network architecture for the even-distribution data arrangement DBS identification experiment. Figure 8 shows the overall mean performance of the DBS identification results using the talking-biased data arrangement. It is observed that the highest accuracy, 51.73%, is obtained when an MLP with 2 hidden layers and 30 neurons is used. Hence, this MLP network architecture is employed for the even-distribution data arrangement DBS identification experiment. The main reason for conducting this experiment is to note the changes in accuracy when there is no bias in the number of instances in the target classes.
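The 5-fold, 80/20 evaluation protocol with an MLP can be sketched with scikit-learn as below. The MLPClassifier stands in for the backpropagation-trained MLP used here (its default optimiser differs from plain gradient-descent backpropagation), and the feature matrix, labels and the hidden-layer choice are assumed inputs, the latter mirroring the 2-hidden-layer, 30-neuron network discussed above:

    import numpy as np
    from sklearn.model_selection import StratifiedKFold
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    def evaluate_dbs(features, labels, hidden=(30, 30)):
        # features: (n_samples, n_features) array of MFCC-derived vectors,
        # labels: (n_samples,) array of DBS class labels.
        skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        scores = []
        for train_idx, test_idx in skf.split(features, labels):
            clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500, random_state=0)
            clf.fit(features[train_idx], labels[train_idx])
            scores.append(accuracy_score(labels[test_idx], clf.predict(features[test_idx])))
        return float(np.mean(scores))

    # Hypothetical data: 4000 instances, 40 MFCC features, 4 DBS classes.
    X = np.random.randn(4000, 40)
    y = np.random.randint(0, 4, size=4000)
    print(f"Mean 5-fold accuracy: {evaluate_dbs(X, y):.3f}")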
Figure 8. Overall mean performance of the DBS identification results using the talking-biased data arrangement.

Table 1 shows the identification results using the even-distribution data arrangement, with the highest identified DBS being normal driving (76.6%), followed by the sleepy DBS (71.6%), talking through mobile DBS (59.2%) and lastly the laughing DBS (58.7%). The overall mean performance recorded is 66.53%, which is about 15% better than the best performance recorded using the talking-biased data arrangement. The results are more evenly distributed, with a variance of 13.4%, indicating the potential of such an approach for recognizing different DBS.

Summary and Conclusion

Recognising the driving behavior state (DBS) can help reduce the road traffic accident rate. The results in Figures 7 and 8 and Table 1 show that it is possible to identify different DBS through speech and by removing the silence and non-voice regions. Table 1 shows the potential of using the speech DBS system to recognize a sleepy driver, which can be very useful in identifying potentially abnormal driving behavior that can cause accidents. It is also shown that the silence and unvoiced removal reduced the speech data by more than 50% of its original size, thus improving the computational time required. Even talking on a mobile phone can be recognized and differentiated from normal driving, with 60% and 77% accuracy respectively. This paper presents preliminary work on DBS, which can be enhanced further with more speech data and additional DBS. Further work should extend the feature extraction [22][23], optimize the classifier architecture and use more driving speech data. It is hoped that such work can help reduce traffic accidents, making our roads safer for both drivers and pedestrians.
2019-02-17T14:18:51.774Z
2018-04-01T00:00:00.000
{ "year": 2018, "sha1": "ef73fc85f9b8fae9c2860a9a08215405fce8298b", "oa_license": "CCBYSA", "oa_url": "http://journal.uad.ac.id/index.php/TELKOMNIKA/article/download/8416/4433", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "7b4c878b6afb87379295aa4289a19329a91b3d94", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
245076635
pes2o/s2orc
v3-fos-license
Tricuspid Valve Intervention at the Time of Pulmonary Valve Replacement in Adults With Congenital Heart Disease: A Systematic Review and Meta‐Analysis

Background: Tricuspid regurgitation (TR) is a common finding in adults with congenital heart disease referred for pulmonary valve replacement (PVR). However, indications for combined valve surgery remain controversial. This study aimed to evaluate early results of concomitant tricuspid valve intervention (TVI) at the time of PVR.

Methods and Results: Observational studies comparing TVI+PVR and isolated PVR were identified by a systematic search of published research. Random‐effects meta‐analysis was performed, comparing outcomes between the 2 groups. Six studies involving 749 patients (TVI+PVR, 278 patients; PVR, 471 patients) met the eligibility criteria. In the pooled analysis, both TVI+PVR and PVR reduced TR grade, pulmonary regurgitation grade, right ventricular end‐diastolic volume, and right ventricular end‐systolic volume. TVI+PVR, but not PVR, was associated with a decrease in tricuspid valve annulus size (mean difference, −6.43 mm; 95% CI, −10.59 to −2.27; P=0.010). Furthermore, TVI+PVR was associated with a larger reduction in TR grade compared with PVR (mean difference, −0.40; 95% CI, −0.75 to −0.05; P=0.031). No evidence could be established for an effect of either treatment on right ventricular ejection fraction or echocardiographic assessment of right ventricular dilatation and dysfunction. There was no evidence for a difference in hospital mortality or reoperation for TR.

Conclusions: While both strategies are effective in reducing TR and right ventricular volumes, routine TVI+PVR can reduce TR grade to a larger extent than isolated PVR. Further studies are needed to identify the subgroups of patients who might benefit most from combined valve surgery.

Tricuspid regurgitation (TR) is a common finding in adults with congenital heart disease (ACHD) referred for pulmonary valve replacement (PVR), including those with tetralogy of Fallot (TOF), pulmonary stenosis, and pulmonary atresia. 1 Notably, as many as three-quarters of these patients have at least mild TR, and one-third present with at least moderate TR. Despite clearly demonstrated benefits of PVR on right ventricular (RV) volumes and function and the observation that isolated PVR also reduces TR, indications for combined valve surgery remain controversial. 2,3 Current guidelines do not suggest when concomitant tricuspid valve intervention (TVI) should be recommended. 4,5 Nonetheless, severe TR is strongly associated with an increased risk of adverse outcomes in ACHD. 6 Therefore, we aimed to evaluate early results of concomitant TVI at the time of PVR.

Studies were considered eligible if they met the following criteria:
1. The population comprised ACHD (including TOF, pulmonary stenosis, and pulmonary atresia) who developed at least moderate pulmonary valve insufficiency;
2. The intervention group included patients who underwent combined TVI and PVR;
3. The control group included patients who underwent isolated PVR;
4. Outcomes of the studies included any of the following: tricuspid regurgitation (TR) grade, pulmonary regurgitation (PR) grade, tricuspid valve (TV) annulus size, RV dilatation, RV dysfunction, RV end-diastolic volume (RVEDV), RV end-systolic volume (RVESV), RV ejection fraction (RVEF), RV end-diastolic area, RV end-systolic area, New York Heart Association (NYHA) class, reoperation for TR, or 30-day mortality; and
5. Studies were prospective or retrospective observational studies or randomized controlled trials.
Databases were searched for articles meeting our inclusion criteria and published by December 29, 2020: PubMed/MEDLINE, Embase, Scopus, and the reference lists of relevant articles. The detailed search terms used for this search are given in Data S1. The following steps were taken: (1) identification of titles of records through database searching, (2) removal of duplicates, (3) screening and selection of abstracts, (4) assessment for eligibility through full-text articles, and (5) final inclusion in the study. Studies were selected by 2 independent reviewers (C.C. and M.L.R.). When concordance was absent, a third reviewer (J.V.D.E.) made the decision to include or exclude the study.

End Points, Risk of Bias, and Statistical Analysis

The primary end point of the study was TR grade. The secondary end points were PR grade, TV annulus size (mm), RV dilatation, RV dysfunction, RVEDV (mL), RVESV (mL), RVEF (%), RV end-diastolic area (cm²), RV end-systolic area (cm²), NYHA class, reoperation for TR, or 30-day mortality. The grades of TR, PR, RV dilatation, and RV dysfunction were quantitatively assessed on echocardiography and scored on a scale from 0 to 3 (0, none; 1, mild; 2, moderate; 3, severe). Postoperative measurements were defined as the first observation within 12 months after surgery. For studies reporting interquartile ranges, the mean was estimated according to a validated formula. 9 Two independent reviewers (N.H. and A.G.) extracted the data. When concordance was absent, a third reviewer (J.V.D.E.) made the final decision.

CLINICAL PERSPECTIVE
What Is New?
• In this systematic review and meta-analysis of 749 adults with congenital heart disease, we demonstrated that concomitant tricuspid valve intervention (TVI) at the time of pulmonary valve replacement (PVR) helped reduce tricuspid regurgitation (TR) grade to a larger extent than isolated PVR, while both strategies were otherwise equally effective.
What Are the Clinical Implications?
• Patients with severe preoperative TR would probably derive the greatest benefit from concomitant TVI in terms of improvement in NYHA class and TR grade; however, concomitant TVI does not seem to be effective in reducing the risk of adverse events such as death, arrhythmias, and heart failure.
• Current data therefore do not support the universal application of this approach for severe TR.
• Further well-designed studies focusing on specific underlying mechanisms of TR and evaluating the effect on adverse events on long-term follow-up may elucidate which patients stand to benefit the most from this approach.

Risk of bias in the included observational studies was assessed with a dedicated tool, 10 and the articles and their characteristics were classified into A (low risk of bias), B (moderate risk of bias), C (serious risk of bias), D (critical risk of bias), or E (no information/unclear). Using the RoB 2 tool, 11 the included randomized controlled trials were assessed for biases. Two independent reviewers (C.C. and M.L.R.) assessed the risk of bias. When concordance was absent, a third reviewer (J.V.D.E.) checked the data and made the final decision.

Mean differences (MD) with 95% CI and P values were calculated for continuous variables. For binary variables, odds ratios (ORs) with 95% CI and P values were considered. Forest plots were created to represent the clinical outcomes. The chi-square test and I² test were performed for assessment of statistical heterogeneity. 12 The MD and OR were combined across the studies using a random-effects method (DerSimonian and Laird inverse variance). 13
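To illustrate the pooling just described, the following is a minimal sketch of inverse-variance fixed-effect pooling, the DerSimonian-Laird random-effects estimator, and the I² heterogeneity statistic; the fixed-effect output also corresponds to the sensitivity reanalysis mentioned further on. The study-level effect sizes and variances are hypothetical, and the actual analyses were performed in R:

    import numpy as np

    def pool_effects(effects, variances):
        # Fixed-effect and DerSimonian-Laird random-effects pooling of study-level
        # effect sizes (e.g., mean differences in TR grade), with Cochran's Q and I^2.
        y = np.asarray(effects, dtype=float)
        v = np.asarray(variances, dtype=float)
        w = 1.0 / v                                   # fixed-effect (inverse-variance) weights
        theta_fe = np.sum(w * y) / np.sum(w)
        se_fe = np.sqrt(1.0 / np.sum(w))
        q = np.sum(w * (y - theta_fe) ** 2)           # Cochran's Q
        df = len(y) - 1
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - df) / c)                 # between-study variance (DL estimator)
        w_re = 1.0 / (v + tau2)
        theta_re = np.sum(w_re * y) / np.sum(w_re)
        se_re = np.sqrt(1.0 / np.sum(w_re))
        i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
        ci = lambda m, s: (m - 1.96 * s, m + 1.96 * s)
        return {"fixed": (theta_fe, ci(theta_fe, se_fe)),
                "random": (theta_re, ci(theta_re, se_re)),
                "tau2": tau2, "I2": i2}

    # Hypothetical example: per-study mean differences in postoperative TR grade.
    print(pool_effects(effects=[-0.5, -0.3, -0.6], variances=[0.04, 0.09, 0.05]))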
The choice of random-effects models was made on the basis of the assumption that the effect sizes in the individual studies represented samples from a mixing distribution. In addition, the results were reanalyzed using fixed-effects models to explore whether this yielded differences regarding the summary inferences. The risk of publication bias could not be assessed because none of the comparisons included >10 studies. 14,15 All analyses were completed with R Statistical Software (version 4.0.2, R Foundation for Statistical Computing, Vienna, Austria).

Institutional Review Board Approval

Institutional review board approval is not applicable for systematic reviews and meta-analyses.

Study Selection and Characteristics

A total of 2031 citations were identified, of which 46 studies were potentially relevant and retrieved as full text. Six publications 16-21 fulfilled our eligibility criteria (Figure 1). Characteristics of each study and their patients are shown in the accompanying tables and Figure S1.

Echocardiographic Parameters

Results from the meta-analyses of echocardiographic and magnetic resonance imaging (MRI) parameters are presented in Table 4; forest plots are given in the supplementary figures. With regard to TV annulus size, a clear decrease from preoperative to postoperative was observed in TVI+PVR (MD, −6.43 mm; 95% CI, −10.59 to −2.27; P=0.032), whereas it was not evident whether a similar effect was present in the PVR group (MD, −4.20; 95% CI, −10.42 to 2.02; P=0.074; I²=0%) (Table 4). Although no evidence was found for an effect of either TVI+PVR or PVR on the qualitative score for RV dilatation, TVI+PVR tended to be associated with a greater increase in the qualitative score for RV dilatation compared with PVR (MD, 0.14; 95% CI, 0.08 to 0.19; P=0.020; I²=0%); however, this result should be interpreted cautiously given that Lueck et al 18 reported a tendency toward an increase in RV dilatation, whereas Kogon et al 21 reported a decrease in RV dilatation with both procedures. No evidence of effects of either treatment, or of differences between the effects, could be observed with regard to RV dysfunction as qualitatively assessed by echocardiography (Table 4).

Short-Term Outcomes

The overall OR for 30-day mortality showed no evidence of a difference between TVI+PVR and PVR (OR, 1.86; 95% CI, 0.24 to 14.61; P=0.324) (Figure S10). Reoperation for TR was only reported by Roubertie et al, 19 and they could establish no evidence for a difference between both groups. In this study, 2 of 9 (22%) patients with severe TR who had undergone isolated PVR required reoperations, compared with 0 of 8 (0%) in the TVI+PVR arm (P=0.47).

Sensitivity Analysis

The treatment effect estimates from fixed-effects models were largely comparable to those from random-effects models (Figures S2-S10). In contrast to the random-effects models, the fixed-effects models suggested some evidence for a greater decrease in TV annulus size (MD, −2.47; 95% CI, −2.91 to −2.03; P<0.001), a greater increase in RV dysfunction as qualitatively assessed by echocardiography (MD, 0.29; 95% CI, 0.12 to 0.46; P<0.001), and a smaller increase in RVEF (MD, −6.41; 95% CI, −7.80 to −5.02; P<0.001) with TVI+PVR compared with PVR; however, all of these results should be interpreted with caution given the important statistical heterogeneity in these analyses (I² of 93%, 25%, and 99%, respectively).
Furthermore, the greater increase in qualitative score for RV dilatation with TVI+PVR compared with PVR was no longer evident in fixed-effects analyses (MD, 0.14; 95% CI, −0.01 to 0.29; P=0.077); no evidence for heterogeneity was evident in this analysis (I²=0%).

Summary of Evidence

This meta-analysis investigated the effect of concomitant TVI at the time of PVR in ACHD. The key findings are summarized in Figure 2. Our results demonstrated that both TVI+PVR and PVR reduced TR grade, PR grade, RVEDV, and RVESV. TVI+PVR, but not PVR alone, was associated with a decrease in TV annulus size.

Comments

Dilatation of the RV is a common complication following repair of TOF, pulmonary stenosis, and pulmonary atresia, primarily attributable to chronic PR. 1 This, in turn, leads to dilatation of the TV annulus, resulting in varying degrees of TR and further RV dilatation. Although the transannular patch repair approach causes PR, many additional factors can contribute to TR in these patients. 22 These include damage to the TV leaflets or chordae tendineae during initial surgery, as well as the presence of additional valve abnormalities. Regardless of the causative mechanism, moderate to severe preoperative TR is a well-described risk factor for adverse outcomes in ACHD, leading to heart failure, arrhythmia, and death. 6 Although concomitant TVI has been shown to reduce TR in these patients, there has been considerable debate regarding this approach. Several studies have recommended PVR alone to address both PR and TR following TOF repair, arguing that the reduction in RV volume overload resulting from PVR is enough to ameliorate the observed TR. In a comparison between patients undergoing PVR alone versus those with TVI+PVR, Kogon et al 21 found that patients in the latter group experienced a greater increase in TR at mid-term follow-up (7.0±2.8 years). These results led them to recommend PVR alone in patients with moderate or greater TR. Similarly, Kurkluoglu et al 23 found that dilatation of the TV annulus improved after PVR alone, suggesting that additional parameters should be taken into account when evaluating patients for TVI+PVR. Results from a single-center study by Lueck et al 18 showed longer intensive care unit stays for the TVI+PVR group, as well as greater rates of arrhythmia, renal insufficiency, sternal wound infection, and delirium. Notably, all of these findings were drawn from single-center studies composed of relatively small populations. Conversely, results from a multicenter study performed by Deshaies et al 16 found that TVI+PVR results in a greater reduction in TR. With the exception of a slightly higher incidence of major infections, there was no evidence for differences in adverse outcomes between TVI+PVR and PVR alone.

Another area of debate that our study could not address is the optimal tricuspid valve intervention strategy, namely repair versus replacement. With the exception of the series by Lueck et al, 18 where the TV was replaced in all 10 of their patients with TVI+PVR, TV repair was the most common TVI in the studies we analyzed. This is similar to other studies of ACHD patients undergoing TVI. A recent single-center study from Australia analyzing TVI in adults with Ebstein anomaly and other ACHD found that TV repair was performed in 61% (22/36) of their cohort, while the remaining 39% (14/36) underwent TV replacement. 24 In this cohort, 4 patients required reintervention (with 1 death 9 days after reintervention), of which 2 had initial TV replacement and 2 underwent TV repair.
Of the 30 patients with available echocardiographic data, all 5 with moderate or greater TR underwent TV repair. 24 In an analysis of 109 TV repairs and 19 replacements in 128 patients with ACHD other than Ebstein anomaly, Lo Rito et al 25 found that those who underwent suture annuloplasty had a higher rate of moderate or greater TR at latest follow-up (4.95 years; interquartile range, 7.7 years) compared with those with ring annuloplasty. The only patient who required TV reintervention had an initial biological valve replacement. Importantly, both studies describe a high incidence of atrial arrhythmias following TVI, regardless of surgical approach. 25,26 Currently, there are not enough data to identify which patients may benefit the most from concomitant TVI. Our study, however, highlights several salient features that warrant further exploration. In the only included study to report NYHA class, Roubertie et al 19 found that patients with severe preoperative TR experienced an improvement in NYHA class and TR grade following TVI+PVR. This study similarly found no patients with residual moderate or greater TR in the TVI+PVR group, compared with 78% (7/9) of those with PVR alone, when analyzing patients with severe TR before surgery. In accordance with this, Deshaies et al 16 found that severe preoperative TR was associated with a higher risk of residual postoperative TR (OR, 9.43; 95% CI, 4.20-21.33; P<0.001), while TVI+PVR reduced this risk (OR, 0.44; 95% CI, 0.25-0.77; P=0.004). Importantly, only 5.6% (4/72) of patients with severe preoperative TR underwent isolated PVR in this study. In the Cramer et al 20 series, 75% (12/16) of patients with severe TR had TVI+PVR, with both approaches resulting in mild residual TR at 6-month follow-up. Although TR grade and measurements of cardiac volumes and function are valuable indices of the efficacy of TVI, the actual goal of such intervention in ACHD should be the prevention of adverse events such as arrhythmias and heart failure. In this regard, the results of a study by Bokma et al 6 are instructive: patients with significant preoperative TR remained at increased risk of adverse events irrespective of their postoperative TR grade. The authors suggested that both long-standing volume overload attributable to PR and long-standing right atrial volume and pressure overload attributable to TR might contribute to this risk, leading to RV dysfunction and arrhythmias, respectively. While our findings suggest that patients with severe preoperative TR benefit most from TVI+PVR in terms of improvement of TR grade, a benefit in terms of "hard" outcomes can thus not be directly inferred. These data therefore do not support the universal application of this approach for severe TR. Further well-designed studies focusing on specific underlying mechanisms of TR and evaluating the effect on adverse events on long-term follow-up may elucidate which patients stand to benefit the most from this approach. Sources of Heterogeneity Given the nonrandomized nature of the existing studies comparing TVI+PVR against PVR, underlying center- and surgeon-specific bias with regard to treatment allocation was likely, as exemplified by the studies of Kogon et al, 17 Cramer et al, 20 and Kogon et al. 21 In every study reviewed for this meta-analysis, the addition of TVI was performed on the basis of surgeon and cardiologist preference, which further adds patient-specific heterogeneity regardless of the degree of preoperative TR. The use of echocardiography and/or MRI also varied among studies. While the use of cardiac MRI has evolved in recent years, only Roubertie et al 19 and a subset of the other included studies reported cardiac MRI data.
Figure 2. Summary of key findings: Both TVI+PVR and PVR reduced TR grade, PR grade, RVEDV, and RVESV. TVI+PVR, but not PVR, was associated with a decrease in TV annulus size. Furthermore, TVI+PVR was associated with a larger decrease in TR grade compared with PVR. No evidence could be established for an effect of either treatment on RVEF, or on RV dilatation and RV dysfunction as qualitatively assessed by echocardiography. There was no evidence for a difference in hospital mortality or reoperation for TR. PR indicates pulmonary regurgitation; PVR, pulmonary valve replacement; RV, right ventricular; RVEDV, right ventricular end-diastolic volume; RVEF, right ventricular ejection fraction; RVESV, right ventricular end-systolic volume; TR, tricuspid regurgitation; TV, tricuspid valve; and TVI, tricuspid valve intervention.
Limitations While the use of meta-analysis enabled us to pool studies and increase our sample size, we were ultimately limited to 6 studies that met the inclusion criteria of comparing PVR with and without concomitant TVI. Accordingly, some of the analyses were based on a low number of subjects. As described earlier, our results may have been susceptible to selection bias. Another limitation is the lack of data regarding patient anatomy and underlying causes of TR, which can be critical in determining when TVI+PVR offers the greatest benefit. Since all included studies focused on adults with childhood TOF repair, the operative technique and age at repair reflect treatment strategies from earlier decades, which have since evolved. 27,28 Furthermore, long-term follow-up studies of patients with TVI+PVR remain scarce, which precludes definitive conclusions on the durability of the results. CONCLUSIONS While both TVI+PVR and PVR alone are effective in the reduction of TR and RV volumes, routine TVI at the time of PVR can reduce TR grade to a larger extent than isolated PVR. Further studies are needed to identify the subgroups of patients who might benefit most from combined valve surgery, as current data do not support the universal application of this approach.
Pain management of nalbuphine and sufentanil in patients admitted intensive care unit of different ages Background Pain relief for patients in the intensive care unit (ICU) can improve treatment outcomes and reduce the burden on doctors and nurses. This study aims to report the clinical analgesic and sedative effects of nalbuphine and sufentanil on ICU patients. Methods This study retrospectively analyzed the medical records of 87 critically ill patients who received nalbuphine or sufentanil infusion in the ICU, including demographic data, diagnosis, Acute Physiology and Chronic Health Evaluation (APACHE) II, Critical Care Pain Observation Tool (CPOT), Richmond Agitation-Sedation Scale (RASS), systolic and diastolic blood pressure, heart rate and blood oxygen saturation (SpO2). The primary outcomes of this study were CPOT and RASS scores. The secondary outcomes were hemodynamic changes, including systolic blood pressure, diastolic blood pressure, heart rate, and SpO2. The adverse events recorded during pain management, such as hypoxemia, respiratory depression and bradycardia, were also collected and analyzed. Results None of the patients in either group experienced episodes of hypoxemia, respiratory depression or bradycardia. However, age-stratified analyses showed that nalbuphine had a better analgesic effect than sufentanil for patients aged ≤ 60 (P < 0.05). In contrast, sufentanil showed a better analgesic effect than nalbuphine for patients aged > 60 (P < 0.05). Furthermore, nalbuphine had a significantly better sedative effect than sufentanil for patients aged ≤ 60 (P < 0.05). Conclusion ICU patients of different age groups may be suited to different analgesics. For patients under the age of 60, nalbuphine provides better analgesia and sedation than sufentanil, and does not cause respiratory depression or drastic hemodynamic changes. impact on ICU clinical practice [7]. Clinical practice guidelines recommend the use of intravenous opioids as the first-line drug class for perioperative pain management and pain relief in ICU patients [8]. However, on-demand (top-up) opioid dosing is not always adopted, mainly because of concern about side effects such as respiratory depression. Despite advances in health care, sedation and analgesia remain important aspects of patient care in the intensive care unit. Intravenous opioids are most commonly used to manage the distress caused by pain in ICU patients. Although there are international differences in the prescription of sedative and analgesic drugs, the opioids most commonly used for analgesia are morphine, fentanyl, sufentanil, and nalbuphine [9,10]. Table 1 lists the opioid analgesics commonly used in ICU patients. Although morphine is the most commonly used opioid, it can induce a variety of adverse events, such as vomiting, itching, nausea, drowsiness, urinary retention, constipation and respiratory depression. In contrast, sufentanil is another μ-opioid agonist, which has been commonly used in pediatric and adult patients as an adjunct in anesthesia for decades to treat moderate to severe pain [11]. Owing to its high lipophilicity, sufentanil has rapid onset and offset times after intravenous injection, with a half-life of about 15 min [12]. Several clinical trials have shown that sufentanil provides excellent pain control in patient-controlled management of acute postoperative pain, and the most typical adverse events of sufentanil are nausea, vomiting, dizziness, and respiratory depression [13,14].
Nalbuphine is a powerful synthetic opioid agonist-antagonist analgesic. Studies have shown that nalbuphine can bind to the μ-opioid and κ-opioid receptors in the medulla and cerebral cortex, thereby providing effective analgesia [15][16][17][18]. The analgesic effect of nalbuphine is equivalent to that of morphine, and its onset time is similar to that of fentanyl [19,20]. The short onset time means that continuous infusion of nalbuphine can sustain the desired analgesic effect. Although nalbuphine produces a degree of respiratory depression similar to that of morphine at comparable doses, nalbuphine has a ceiling effect on respiratory depression; that is, when the dose of nalbuphine is greater than 30 mg/70 kg, the respiratory depressant effect no longer increases with increasing dose [21]. These properties make nalbuphine a potentially safer analgesic, and it is widely used in pediatric and gynecological surgery [22,23]. It is well known that pain management in the ICU has a great impact on short- and long-term outcomes. Although nalbuphine and sufentanil have been used for analgesia in many operations, the difference between them in sedation and analgesia for ICU patients is still unclear. Both under-sedation and over-sedation may put critically ill patients at high risk of prolonged mechanical ventilation, longer ICU and hospital stay, organ system failure, and higher reintubation rates. Thus, the aim of this study was to compare the analgesic and sedative effectiveness of nalbuphine and sufentanil, and their impact on analgesia/sedation-related adverse events, in patients admitted to the ICU. Study population From 2018 to 2019, critically ill patients who received nalbuphine or sufentanil in the ICU were included. All patients were intubated in the ICU. Patients who were allergic to nalbuphine or sufentanil, pregnant women, and lactating women were excluded. In addition, patients who were readmitted to the ICU were excluded; that is, only the first admission to the ICU was considered for this study. A total of 78 patients were included in this study. A power analysis was also conducted to estimate the required sample size. The estimated sample size was based on a power analysis for an analysis of variance with 5 repeated measures, using an estimated medium effect size (f = 0.25), an alpha level of 0.05, and a power of 0.8. This analysis indicated that at least 22 patients were needed in each group; the number of patients included in this study was therefore sufficient to achieve statistical significance. The clinical data of each enrolled patient were obtained from medical record review of the ICU audit database, including demographic data, reasons for admission to the ICU, type of anesthesia, Acute Physiology and Chronic Health Evaluation (APACHE) II score, Critical Care Pain Observation Tool (CPOT) and Richmond Agitation-Sedation Scale (RASS). In addition, hemodynamic parameters during analgesia were collected and analyzed, including systolic and diastolic blood pressure, heart rate and blood oxygen saturation (SpO2). The primary outcomes were the analgesic and sedative effects of sufentanil and nalbuphine, which were evaluated by the CPOT and RASS, respectively. The four items of the CPOT are facial expression, body movements, limb muscle tension, and compliance with the ventilator (intubated patients) or vocalization (non-intubated patients). The CPOT score for each item is 0-2 [24]. The higher the score, the higher the level of pain. A CPOT score > 2 is usually considered to indicate the presence of pain.
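As a concrete illustration of the scoring rule just described, the short sketch below totals the four CPOT items and applies the >2 pain threshold. The class and function names are our own and are only a toy encoding of the published instrument, not code used in the study.

```python
from dataclasses import dataclass

@dataclass
class CpotItems:
    """The four CPOT domains described above, each scored 0-2."""
    facial_expression: int
    body_movements: int
    muscle_tension: int
    ventilator_compliance_or_vocalization: int

def cpot_total(items: CpotItems) -> int:
    scores = (items.facial_expression, items.body_movements,
              items.muscle_tension, items.ventilator_compliance_or_vocalization)
    if any(s not in (0, 1, 2) for s in scores):
        raise ValueError("each CPOT item must be scored 0, 1 or 2")
    return sum(scores)

def pain_present(items: CpotItems, threshold: int = 2) -> bool:
    # A total CPOT score > 2 is usually taken to indicate the presence of pain.
    return cpot_total(items) > threshold

example = CpotItems(1, 1, 0, 1)
print(cpot_total(example), pain_present(example))   # 3 True
```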
The RASS score, ranging from −5 to +4 points, reflects the patient's sedation level, from deep sedation to severe restlessness [25]. A RASS score of −1 to 2 is considered an appropriate level of sedation. The secondary outcomes were hemodynamic changes (systolic blood pressure, diastolic blood pressure, heart rate, and SpO2) and adverse events in the ICU (hypoxemia, respiratory depression and bradycardia). Hypoxemia was considered significant when the patient's SpO2 remained < 90% for ≥ 5 s. Respiratory depression was considered present when the patient had end-tidal CO2 > 50 mmHg, a respiratory rate < 6 breaths/min, or airway obstruction with cessation of gas exchange at any time. Bradycardia was defined as a reduction in heart rate to below 60 beats/min during infusion of sufentanil or nalbuphine, while arterial hypotension was defined as a decrease in systolic blood pressure to < 90 mmHg. Patient and public involvement Patients and the public were not involved in the study design, the collection, analysis, and interpretation of data, or the writing of this manuscript. Statistical analysis Statistical analyses were performed using SPSS software version 22.0 (IBM, Armonk, NY, USA). Categorical variables were summarized as frequencies and percentages. Continuous variables were presented as the mean ± standard deviation (SD) or median with interquartile range (IQR). Pearson's chi-squared test was used to analyze categorical variables. The generalized estimating equation (GEE) approach was used to compare the reduction in pain at different time points between the sufentanil group and the nalbuphine group. In addition, the Student t test was used to analyze the differences between groups (age, weight, systolic blood pressure, diastolic blood pressure, heart rate and SpO2). The Mann-Whitney U test was performed to compare the two groups with respect to the hemodynamic parameters, APACHE II, CPOT, and RASS. Analgesic and sedative effects of sufentanil and nalbuphine on ICU patients Complete sets of CPOT and RASS scores during analgesia were collected and analyzed. Table 3 shows the analgesic and hemodynamic parameters of patients receiving sufentanil or nalbuphine at different time points. After the start of analgesia, the CPOT (Fig. 1A) and RASS (Fig. 1B) scores in both the sufentanil and nalbuphine groups gradually decreased (GEE, P < 0.001). After 3 h of infusion, nalbuphine effectively reduced pain intensity (CPOT < 2), and sufentanil did so after 5 h. There was no significant difference in analgesic effects between the nalbuphine and sufentanil groups (GEE, P > 0.05). On the other hand, the sedative effect of nalbuphine was significantly better than that of sufentanil (GEE, P = 0.037). These results show that both sufentanil and nalbuphine are effective for analgesia and sedation in ICU patients. Hemodynamic changes during analgesia During analgesia, the patients' heart rate (Fig. 1C) and SpO2 (Fig. 1D) did not change drastically, and there was no significant difference between groups at any assessment time point (GEE, P > 0.05). None of the patients in either group experienced episodes of hypoxemia, respiratory depression or bradycardia. The systolic blood pressure during analgesia did not change drastically (Fig. 1E), and there was no significant difference between groups in systolic blood pressure (Table 3, GEE, P > 0.05). No patients experienced arterial hypotension.
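The repeated-measures comparisons reported above were made with GEE models in SPSS. As a rough illustration of the same type of analysis, the sketch below fits an analogous GEE with an exchangeable working correlation in Python's statsmodels; the data frame is synthetic and the variable names (patient_id, group, hour, cpot) are illustrative only, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for pid in range(40):
    group = "nalbuphine" if pid < 20 else "sufentanil"
    baseline = rng.normal(4.0, 0.8)                   # synthetic baseline CPOT
    for hour in (0, 1, 3, 5, 24):
        drop = 0.09 * hour if group == "nalbuphine" else 0.07 * hour
        rows.append({"patient_id": pid, "group": group, "hour": hour,
                     "cpot": max(0.0, baseline - drop + rng.normal(0, 0.5))})
df = pd.DataFrame(rows)

# The exchangeable working correlation accounts for repeated CPOT measurements
# within each patient; the group-by-time interaction tests whether the
# reduction in pain over time differs between the two drugs.
model = smf.gee("cpot ~ group * hour", groups="patient_id", data=df,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Gaussian())
result = model.fit()
print(result.summary())
```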
Although GEE analysis found a slight decreasing trend in diastolic blood pressure after nalbuphine infusion (GEE, P = 0.010), the diastolic blood pressure during this period remained within the normal range (Fig. 1F). In summary, neither sufentanil nor nalbuphine caused respiratory depression or drastic hemodynamic changes during analgesia in ICU patients. Analgesic and sedative effects of sufentanil and nalbuphine on different age groups As shown in Fig. 2A, nalbuphine showed a better analgesic effect than sufentanil for ICU patients under 60 years of age (GEE, P = 0.004). In contrast, sufentanil showed a better analgesic effect than nalbuphine for ICU patients over 60 years of age (Fig. 2B, GEE, P = 0.005). The CPOT score of the sufentanil group was significantly lower than that of the nalbuphine group at 5 h after infusion (0.87 ± 0.26 vs. 1.85 ± 0.25, P = 0.011). Similarly, nalbuphine had a better sedative effect than sufentanil for ICU patients under 60 years of age (Fig. 2C, GEE, P = 0.003). The RASS score with nalbuphine was significantly lower than that with sufentanil at 5 h (−0.30 ± 0.17 vs. 0.3 ± 0.19, P = 0.022) and 24 h (−0.58 ± 0.19 vs. −0.04 ± 0.10, P = 0.022) after infusion. For ICU patients over 60 years old, there was no significant difference in the sedative effects between nalbuphine and sufentanil (Fig. 2D, GEE, P > 0.05). Discussion In this study, the analgesic and sedative effects of sufentanil and nalbuphine were investigated for the first time in ICU patients. Our findings showed that both nalbuphine and sufentanil provided adequate analgesia. No patients in the ICU experienced hypoxemia, respiratory depression, arterial hypotension or bradycardia during analgesia and sedation with nalbuphine or sufentanil. Stratified analysis further showed that nalbuphine provided better analgesic and sedative effects than sufentanil in ICU patients under 60 years of age. Moreover, sufentanil had a better analgesic effect than nalbuphine in ICU patients over 60 years of age. Thus, the results of this study suggest that nalbuphine can be regarded as a reasonable alternative to sufentanil for providing analgesia and sedation to ICU patients, especially those under 60 years of age. Up to 70% of patients in the ICU suffer from moderate to severe pain, and these experiences may leave long-term imprints, such as chronic pain and posttraumatic stress disorder [26,27]. In a study of 599 survivors 6 months after discharge, 17% of patients remembered severe pain in the ICU, and 18% developed posttraumatic stress disorder [28]. Many patients believe that the pain they experienced in the ICU is the cause of sleep disturbance after discharge from the hospital [29]. In addition, inappropriate pain management in the ICU may result in hypoxemia, thromboembolic and pulmonary complications, increased ICU stay, pain-associated immunosuppression, and readmission [1,30]. However, many patients in the ICU, especially those with aphasia, dementia or delirium, and intubated, mechanically ventilated patients, cannot self-report their pain verbally. This is why we used the CPOT, rather than self-report, to objectively measure pain scores, and the RASS to assess agitation or sedation levels, in ICU patients. Respiratory depression has been the main factor restricting the use of opioids. Therefore, clinical guidelines recommend the use of non-opioid analgesics to reduce or replace the use of opioids [8].
Previous studies demonstrated that the dose-effect curve of nalbuphine for respiratory depression is flatter than that of morphine, and that a nalbuphine dose greater than 0.15 mg/kg can cause respiratory depression [21,31].
Table 3. Patients admitted to intensive care units receiving sufentanil or nalbuphine at each time point. Abbreviations: CPOT, Critical Care Pain Observation Tool; RASS, Richmond Agitation-Sedation Scale; SBP, systolic blood pressure; DBP, diastolic blood pressure; HR, heart rate; SpO2, oxygen saturation. Data were presented as mean ± standard error (SE) or median with interquartile range (IQR). Bold indicates a statistically significant difference with a p-value less than 0.05. a The P-value was calculated using the generalized estimating equation method. b The Student t test was used to analyze the differences between groups at the indicated time points.
In this study, the average infusion dose of nalbuphine was 0.165 ± 0.057 mg/kg, which is just at the margin of the dose that causes respiratory depression (Table 4). In addition, the infusion dose of nalbuphine at the different time points remained stable (GEE, P > 0.05). Therefore, for the ICU patients in this study, the finding that nalbuphine did not cause respiratory depression, hypoxemia or bradycardia may be attributed to the use of low-dose nalbuphine. In addition, supplemental oxygen in the ICU ward may also improve oxygenation in patients with reduced SpO2. These results also strengthen the case for the safety of nalbuphine for analgesia in ICU patients. It is known that aging is related to a gradual decrease in the functional reserve of all organ systems, including the nervous system. With age, the concentrations of neurotransmitters, norepinephrine and dopamine receptors, and nervous tissue mass and density gradually decrease, which ultimately affects the elderly's pain perception and response to anesthetics [32,33]. Therefore, advanced age is generally considered to be an independent factor affecting anesthesia/analgesia/sedation [32,34].
Fig. 1. Patient features in the ICU during nalbuphine or sufentanil infusion. (A) Pain intensity in ICU patients receiving nalbuphine or sufentanil at different time points (mean ± SD). Pain intensity was evaluated by CPOT. There was no significant difference between groups (GEE, P > 0.05). (B) Sedation/restlessness intensity in ICU patients receiving nalbuphine or sufentanil at different time points. Sedation/restlessness intensity was evaluated by RASS. Nalbuphine showed a better sedative effect than sufentanil (GEE, P = 0.037). (C) Heart rate of ICU patients at different time points during nalbuphine or sufentanil infusion. There was no significant difference between groups (GEE, P > 0.05). (D) SpO2 of ICU patients at different time points during nalbuphine or sufentanil infusion. No significant difference was observed between groups (GEE, P > 0.05). (E) SBP and (F) DBP of ICU patients receiving nalbuphine or sufentanil at different time points. Data were expressed as mean ± SD. The P-value was calculated using the generalized estimating equation method. Abbreviations: CPOT, Critical Care Pain Observation Tool; RASS, Richmond Agitation-Sedation Scale; SpO2, oxygen saturation; SBP, systolic blood pressure; DBP, diastolic blood pressure.
The age-stratified analysis in this study showed that nalbuphine had better analgesic and sedative effects than sufentanil in ICU patients under 60 years of age (Fig. 2). This result may be due to the different pharmacology of sufentanil and nalbuphine.
Sufentanil is a high affinity μ-opioid receptor agonist and a selective κ-opioid receptor agonist [35]. In contrast, nalbuphine has mixed agonist-antagonist properties: it acts mainly on κ-opioid receptors (analgesia) and exerts an opioid antagonist effect (morphine reversal) at the μ-opioid receptor [36]. Although the receptor levels and the efficiency of signal transduction after receptor binding in the elderly are controversial, studies have shown that the number and binding levels of κ-opioid and μ-opioid receptors in elderly rats are greatly reduced [37,38]. In contrast, for patients over 60 years old, sufentanil had better analgesic effects than nalbuphine (Fig. 2). In this study, the difference between nalbuphine and sufentanil did not reach statistical significance at certain time points, which may be due to the small number of patients in the study. Therefore, future studies should have prospective designs and should recruit more ICU patients to explore the role of age as a confounder of the analgesic effect of nalbuphine. Limitation This study has several limitations. Since this study focused on patients admitted to the ICU, the small sample size and the relative complexity of the hospitalized patients' diseases are limitations. Thus, the current results can only support the conclusion that nalbuphine has a sustained and stable analgesic and sedative effect in ICU patients, and cannot be generalized to other groups of patients who need analgesia. In addition, owing to its retrospective design, this study lacks follow-up data. Therefore, we could not assess patients' satisfaction with nalbuphine and sufentanil for pain management while in the ICU, or evaluate patients' stress-related disorders after discharge from the hospital. Future prospective studies with a larger sample size are required to reduce the limitations associated with this study. Conclusion In comparison with sufentanil, nalbuphine showed a sustained and stable analgesic and sedative effect in ICU patients with mild to moderate analgesia needs. During analgesia, nalbuphine did not cause respiratory depression or drastic hemodynamic changes. For ICU patients under 60 years old, nalbuphine has better analgesic and sedative effects than sufentanil. Therefore, we suggest that nalbuphine can be a useful alternative to sufentanil for patients admitted to the ICU who need analgesia, especially those under 60 years of age. Future studies should use a prospective design and focus on well-defined ICU populations to further confirm the age effect observed between nalbuphine and sufentanil.
The Effect of Ethnic Variation on the Success of Induced Labour in Nulliparous Women with Postdates Pregnancies Objective. To identify the potential effect of ethnic variation on the success of induction of labour in nulliparous women with postdates pregnancies. Study Design. This was an observational cohort study of women being induced for postdates pregnancies (≥41 weeks) between 2007 and 2013. Women induced for stillbirths and women with multiple pregnancies were excluded. The primary objective was to identify the effect of ethnicity on the caesarean section (CS) delivery rate in this cohort of women. Results. 1,636 nulliparous women were identified, with a mean age of 27.2 years. 95.8% of the women were of White ethnic origin, 2.6% were Asian, and 1.6% were of Black ethnic origin. The CS delivery rate was 24.4% in the total sample. Women of Black ethnic origin had a 3.26 times greater likelihood of CS in comparison to White women, after adjusting for maternal age, BMI, smoking, presence of meconium, use of epidural analgesia, fetal gender, birth weight, and head circumference (adjusted OR = 3.26; 95% CI: 1.31–8.08, p = 0.011). Conclusion. We have found that nulliparous women of Black ethnicity demonstrate an almost threefold increased risk of caesarean section delivery when induced for postdates pregnancy. Introduction Postterm pregnancy has been defined as a pregnancy extending to or beyond 42 weeks of gestation [1] and has a reported incidence of 7% of all pregnancies [2]. Despite the improved understanding of the complex interplay between hormonal, mechanical, and inflammatory processes that initiate labour and allow its progression, the pathogenesis of postterm pregnancy still remains unclear [3]. The National Institute for Health and Care Excellence (NICE) in the United Kingdom has recommended that women over 41 weeks of gestation should have labour induced for the prevention of stillbirth due to postmaturity [4]. A Cochrane review has also reported that inducing labour at 41 weeks of gestation does not increase the caesarean section (CS) rate and leads to improved perinatal outcomes [5]. However, the NICE recommendation of 2008 does not take into account the significant influence of ethnic variation on obstetric and perinatal outcomes [6,7]. There is evidence that normal gestational length is shorter by a week in Black and Asian women when compared with White European women and that fetal maturation occurs earlier [8]. Moreover, the incidence of stillbirth after 41 weeks of gestation is reported to be higher for African and Indian women as compared to Caucasian women [9,10]. Since the national guidelines have not yet been adjusted to include ethnic origin, we sought to determine with our study the potential effect of ethnic variation on the success rates of inducing labour in a cohort of nulliparous women with postdates pregnancies. Materials and Methods This was an observational cohort study of women induced for postdates pregnancies (gestational age ≥41 weeks) at the Maternity Unit of the Shrewsbury and Telford Hospital (SaTH) National Health Service (NHS) Trust, between January 2007 and December 2013. Nulliparous women with singleton cephalic presentation deliveries were considered eligible for the study. Women induced for stillbirths or fetal congenital abnormalities and women with multiple pregnancies were excluded. Data were collected from the Medway® obstetric electronic database, and maternal data, labour/delivery data, and neonatal data were all recorded.
Maternal data recorded were age, body mass index (BMI) at booking, smoking status, and ethnicity. Labour and delivery data included route of birth (vaginal delivery, caesarean section delivery), epidural analgesia use, and liquor appearance (normal, meconium stained). Neonatal data recorded were fetal gender (male, female), birth weight, head circumference, Apgar scores (at 1 and 5 minutes), umbilical cord gases taken at delivery (arterial, venous), and admission to the neonatal unit (NNU). Ethnicity data were self-reported during the women's first antenatal visit, and the cohort of White European women was considered the reference group. Black African and Black Caribbean women were combined into the group of women with Black ethnic origin. Women from India or Pakistan or those reported from Asia were combined into the group of women with Asian ethnic origin. Women who had not stated their ethnicity (n = 93, or 5% of the sample) at booking were excluded from the study. Quantitative variables were expressed as mean values (SD: standard deviation) and qualitative variables were expressed as absolute and relative frequencies. For the comparison of proportions, chi-square and Fisher's exact tests were used. Analysis of variance (ANOVA) and Student's t-test were performed for the comparison of mean values between the ethnic groups. Bonferroni correction was used in order to control for type I error in the case of multiple comparisons. Logistic regression analysis was used in order to determine independent factors that were associated with the likelihood of caesarean section delivery. Adjusted odds ratios (OR) with 95% confidence intervals (95% CI) were computed from the results of the logistic regression analyses. All reported p values were two-tailed. Statistical significance was set at p < 0.05 and analyses were conducted using SPSS statistical software (version 20.0). Ethical approval for collection and analysis of data in our study was obtained from the Research and Development Department of the Shrewsbury and Telford Hospital NHS Trust. Results The total sample consisted of 1,636 eligible women with a mean maternal age at delivery of 27.2 years (SD = 6.2 years). 95.8% of the women were of White ethnic origin, 2.6% were Asian, and 1.6% were of Black ethnic origin. The mean BMI was 26.7 kg/m² (SD = 6.2 kg/m²) and more than half of the participants (57.0%) had never smoked. During labour, 33.7% of the women used epidural analgesia for pain relief and the overall caesarean section delivery rate was 24.4%. The mean birth weight was 3709.2 g (SD = 430.8 g), with 52.3% of the fetuses being male. Meconium-stained liquor was identified in 21.2% of the participants and 4.2% of the newborns were admitted to the neonatal unit (Table 1). The participants' characteristics for each ethnic category are presented in Table 2. The proportion of caesarean sections was 23.9% in White women, 28.6% in Asian women, and 46.2% in Black women, and it was significantly greater in Black women as compared to White women (p = 0.009). We also found that Black women, when compared to White women, showed a trend towards a slightly shorter gestational length (291.1 ± 2.2 days versus 291.9 ± 2.2 days, p = 0.07). Multiple logistic regression analysis, with CS delivery as the dependent variable, showed that ethnicity was independently associated with the odds of CS delivery (Table 3).
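The adjusted odds ratios reported here were obtained from multivariable logistic regression in SPSS. As a rough illustration of how such adjusted ORs and their confidence intervals are derived, the sketch below fits an analogous model in Python's statsmodels on synthetic data; all variable names and coefficients are illustrative assumptions, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "age": rng.normal(27, 6, n),
    "bmi": rng.normal(27, 6, n),
    "black": rng.binomial(1, 0.05, n),        # indicator for Black ethnicity
    "meconium": rng.binomial(1, 0.21, n),
    "epidural": rng.binomial(1, 0.34, n),
    "birthweight": rng.normal(3700, 430, n),
})
# Synthetic outcome loosely shaped like the reported associations
logit_p = (-8 + 0.03 * df.age + 0.05 * df.bmi + 1.1 * df.black
           + 0.6 * df.meconium + 0.5 * df.epidural + 0.001 * df.birthweight)
df["cs"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("cs ~ age + bmi + black + meconium + epidural + birthweight",
                  df).fit(disp=0)
odds_ratios = np.exp(model.params)            # adjusted ORs
ci = np.exp(model.conf_int())                 # 95% CIs on the OR scale
print(pd.concat([odds_ratios.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```

Exponentiating the fitted coefficients and their confidence limits is what turns the regression output into the adjusted ORs quoted in the results.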
Specifically, Black women had a 3.26 times greater likelihood for caesarean section in comparison to White women after adjusting for mother's age, BMI, smoking status, presence of meconium, use of an epidural, fetal gender, birth weight, and head circumference. Other factors that were also found to be independently associated with a greater likelihood for caesarean delivery were maternal age, maternal BMI, the presence of meconium, the use of epidural analgesia, and the increased birth weight and head circumference. Discussion Our study has shown that the overall CS delivery rate in our cohort of nulliparous women who were induced for postdates pregnancy was 24.4%. This is near the national average in the United Kingdom where the mean emergency CS rate for 2011-2012 was 30.2% in primiparous and 13.2% in multiparous women whose labours were induced [11,12]. Subgroup analysis of our data demonstrated that women of Black ethnic origin had an almost threefold increased risk of caesarean section delivery in comparison to White women after adjusting for age, BMI, smoking, presence of meconium, use of an epidural, fetal gender, birth weight, and head circumference. Nevertheless, no significant differences were noted in neonatal outcomes between the three ethnic groups (White European, Black, and Asian) in terms of Apgar scores, umbilical cord gases, or admissions to the neonatal unit. There are several studies reporting on the length of gestation amongst different ethnic groups. A large populationbased study showed that the median gestational length was 39 completed weeks in Black and Asian women in comparison to 40 completed weeks in White European women [8]. Another study has shown that gestational duration is more strongly associated with the mother's rather than the father's ethnic origin [13]. Ethnic differences have also been noted with preterm delivery rates where gestation was reported to be shorter for UK African and Afro-Caribbean women when compared to Caucasian women even after adjusting for confounding factors [14]. There is also evidence that Afro-Caribbean women have a twofold higher rate of stillbirths in all maternal age groups compared to Caucasian and Asian ethnicity, even when adjusted for parity and medical comorbidities [15,16]. Moreover, the incidence of stillbirth after 41 weeks of gestation has been reported to be higher for African women when compared to Caucasian women [10,17]. Ethnic disparities have also been reported regarding the perinatal mortality rates at 40 and 41 weeks of gestation with an almost twofold increased perinatal mortality rate for Black women when compared to White Europeans [17]. The hypothesis that has been stipulated to explain the shorter gestational length in spontaneous onset labours, the higher preterm delivery rates, the higher perinatal mortality, and stillbirth rates in the case of Black women is the earlier maturation of infants delivered within this ethnic group [8]. This hypothesis explains why the perinatal mortality rates in Black-ethnicity infants are reported to be lower than their White-ethnicity counterparts in the case of preterm delivery [18]. It may also explain why Black infants born at term have higher perinatal mortality rates compared to White infants, since they may be facing the complications of postmaturity at earlier gestation than the White infants [8,19]. 
Other evidence for the hypothesis of earlier fetal maturation is that Black infants, both at term and preterm, are more likely than White infants to pass meconium in utero, which is considered an indication of maturity [8,20]. In our study, we found similar rates of meconium passage and similar neonatal outcomes amongst the different ethnic subgroups. Perhaps our study was underpowered to detect any significant differences in this outcome. What our study has shown, however, in support of the theory of earlier maturation of Black infants, is a trend (p = 0.07) towards a slightly shorter gestational length in Black women when compared to White women in our cohort. As this was a retrospective analysis of a large set of data, there are certain weaknesses to be considered. First, all information was collected from an electronic database where the accuracy of the data is dependent on the practitioner inputting the information. Second, ethnicity was self-reported at booking and this may have led to misclassification. Third, there was a subgroup of women in our cohort with missing data on ethnicity (n = 93, or 5% of the sample). This may reflect either noncollected data or women unwilling to disclose their ethnicity. Had these women been classified into a specific ethnic group, the results might have been different. Fourth, on the basis of possible earlier maturation of Black and Asian fetuses, a higher proportion of women of Black and Asian ethnic origin, compared with those of White European origin, may have gone into spontaneous labour earlier and prior to the induction of labour date at 41 weeks of gestation. This means that the subgroups of Black and Asian women may be underrepresented in our cohort of women with induced labour, and this may have introduced a bias into our results. The main strengths of our study are its large sample size and the inclusion of only nulliparous women, so as to account for the significant confounding effect of parity. In conclusion, we have found that nulliparous women of Black ethnicity demonstrate an almost threefold increased risk of caesarean section delivery when induced for postdates pregnancy at 41 weeks of gestation. Our study lends support to the literature reports that ethnic differences are likely to play a role in postdates pregnancy. As there is evidence that babies of different ethnic origin most likely mature at different rates, it remains to be seen in future studies whether an earlier induction policy in certain ethnic groups may reduce the rates of caesarean delivery in these women.
IgA nephropathy in a patient presenting with scleritis Scleritis is a very uncommon manifestation in patients with IgA nephropathy. Here, we report the case of a patient presenting with diffuse anterior scleritis in which the laboratory disclosed microscopic haematuria and nephrotic range proteinuria. Renal function was normal. Serology for lupus, vasculitis and cryoglobulinaemia was negative. Rheumatoid factor was negative, and serum C3 and serum C4 were on the normal range. Serology for human immunodeficiency virus types 1 and 2, hepatitis B, hepatitis C, syphilis, and Lyme disease was also negative. A renal biopsy was performed and revealed IgA nephropathy. Oral steroids were then started, and 6 months later, the patient was asymptomatic. Scleritis did not recur, and ophthalmologic examination was normal; however, proteinuria was still in non-nephrotic range. Renal function still remains normal. Introduction IgA nephropathy (IgAN) is considered to be the most common form of glomerular disease in the world [1]. Although IgAN is clinically restricted to the kidneys in most cases, there are associations with other conditions, particularly with a number of immune and inflammatory diseases, commonly rheumatic (i.e. ankylosing spondilytis, rheumatoid arthritis and Reiter syndrome), gastrointestinal (i.e. celiac disease), hepatic (i.e. alcoholic and non-alcoholic liver disease, and schistosomiasis), pulmonary (i.e. sarcoidosis), and cutaneous (i.e. dermatitis herpetiformis) [2]. Human immunodeficiency virus infection and hepatitis B (in endemic areas) have also been associated with IgAN [2]. Ocular involvement in patients with IgAN is infrequent, and the most common association occurs with uveitis [2]. Reports of the association between scleritis and IgAN are very scarce [3][4][5]. Here, we report the case of a patient presenting with scleritis, in which the laboratory disclosed micro-scopic haematuria and nephrotic range proteinuria, and renal biopsy revealed IgA nephropathy. Case report A 34-year-old Caucasian female with prior history of hypertension was admitted at the Department of Ophthalmology of our hospital for bilateral red eye. She had no previous ocular trauma or surgery, and no other complaints. Ophthalmological examination showed bilateral diffuse anterior scleritis. Visual acuity was preserved. She was treated with subconjunctival injection of steroids with good result. On admission, blood pressure was of 150/100 mmHg, and heart rate was of 72 beats per minute. Ear, nose and mouth examination was normal, and cardiac, pulmonary, abdominal, neurological and lower limbs examination also revealed no changes. Neither joint tenderness or effusion nor rash, nor peripheral lymphadenopathies were detected. There was no genital ulceration. The laboratory disclosed normal renal function (creatinaemia 1.18 mg/dL; uraemia 40 mg/dL) and normal haemoglobin (12.9 g/dL). Urinalysis featured microscopic haematuria (150 erythrocytes/microlitre) and proteinuria of 3.59 g/day. Serum albumin was 3.4 g/dL, and there was evidence of high total cholesterol level (total cholesterol, 254 mg/dL). Serum protein electrophoresis, serum protein immunoelectrophoresis, hepatic function tests, erythrocyte sedimentation rate and C-reactive protein were normal. Serology for lupus (antinuclear, antidouble strand deoxyribonucleic acid, anti-Smith, extractable nuclear and anti-ribonucleoprotein antibodies) and vasculitis (anti-neutrophil cytoplasmic antibodies) as well as the search for cryoglobulins was negative. 
Rheumatoid factor was negative, and serum C3 and serum C4 were on the normal range. Serology for human immunodeficiency virus types 1 and 2, hepatitis B, hepatitis C, syphilis, and Lyme disease was also negative. Chest X-ray was normal. Renal ultrasound revealed normal-sized and normoechoic kidneys, normal differentiation, and no hydronephrosis. A kidney biopsy was performed, and revealed diffuse mesangial hypercellularity ( Figure 1) and diffuse granular mesangial deposits of IgA ( Figure 2). According to these, the diagnosis of IgAN was established. She started oral prednisolone (1 mg/kg/day), lisinopril (20 mg/day), losartan (50 mg/day) and simvastatin (20 mg/day). Six months later, she was asymptomatic, and no other episodes of scleritis occurred. Ophthalmological examination was normal. Proteinuria decreased but still persisted in nonnephrotic range (2.3 g/24 h). Renal function still remains normal. Discussion Although IgAN is clinically restricted to the kidneys in most cases, there are associations with other conditions, particularly with a number of immune and inflammatory diseases, commonly rheumatic (i.e. ankylosing spondilytis, rheumatoid arthritis and Reiter syndrome), gastrointestinal (i.e. coeliac disease), hepatic (i.e. alcoholic and nonalcoholic liver disease, and schistosomiasis), pulmonary (i.e. sarcoidosis), and cutaneous (i.e. dermatitis herpetiformis) [2]. Human immunodeficiency virus infection and hepatitis B (in endemic areas) have also been associated with IgAN [2]. Ocular involvement in patients with IgAN is infrequent, and the most common association occurs with uveitis [2]. Our case describes the association of scleritis with IgAN. Reports of the association between scleritis and IgAN are very scarce [3][4][5]. Nomoto et al. [3] followed up 113 patients with various types of primary glomerular diseases for 1-33 months and verified that, of those patients studied, six patients exhibited scleritis. All of these six patients with scleritis were identified as having IgAN. Importantly, none of the patients other than those with IgAN had scleritis during the study period. Scleritis is a scleral inflammatory disease that may also involve the cornea, adjacent episclera and underlying uveal tract. Scleritis is associated with a systemic disease in ∼50% of cases [6]. The most common association is with rheumatoid arthritis. A number of other systemic disorders are also associated with scleritis, such as systemic lupus erythematosus, relapsing polychondritis, systemic vasculitis, sarcoidosis and inflammatory bowel disease. In addition, infectious diseases (i.e. syphilis, Lyme's disease and herpes zoster) can also be the aetiology of scleritis [7]. Taking into account the association between scleritis and systemic diseases, a careful and complete examination of other organs should be carried out. In our patient, the absence of symptoms and signs of other organs involvement made the diagnosis of systemic disorder unlikely. On contrary, the presence of microscopic haematuria and nephrotic range proteinuria moved our attention towards a glomerular disease, and as such, a kidney biopsy was performed and revealed IgAN. Scleritis can occur in association with IgAN. In patients with scleritis and asymptomatic urine abnormalities, IgAN should be considered and properly investigated. We hypothesize that abnormalities of the IgA immune system similar to the IgA nephropathy may be involved in the development of scleritis [3]. 
As described for episcleritis associated with IgAN, in which large numbers of dimeric IgA-secreting plasma cells were observed in the episcleral tissue [8], we speculate that ocular IgA could also play an important role in the pathogenesis of scleritis in patients with IgAN.
Energy and Exergy Analysis on Zeotropic Refrigerants R-455A and R-463A as Alternatives for R-744 in Automotive Air-Conditioning System (AACs): The popularity of vehicles and the increased time spent in cars with air conditioning systems has led to regulations in many countries that require the use of environmentally friendly refrigerants with minimal global warming and zero ozone depletion potential (GWP and ODP). Cars need high-performance, eco-friendly air conditioning systems to reduce their impact on the environment, lower fuel consumption, and decrease carbon emissions. The aim of the current work was to propose CO2-based zeotropic refrigerant blends, R-455A (R-744/32/1234yf) and R-463A (R-744/32/125/1234yf/134a), to improve the thermodynamic performance of the pure CO2 refrigerant. A thermodynamic energy and exergy analysis and system optimization of an AAC system for the new zeotropic refrigerant blends, compared with carbon dioxide (R-744), were carried out using Aspen HYSYS software. The influence of cooler/condenser pressure, average evaporator temperature, cooler/condenser outlet temperature, and refrigerant flow rate on the cycles' COP and exergy efficiency was investigated and is presented. The results showed that, at the same operating conditions, the cycle COP improved by 57.6 and 76.5% when using R455A and R463A instead of R744, respectively, with the additional advantage of avoiding the leakage problems and heavy equipment associated with the higher operating pressure of R744 (5–7 times higher than those of R455A and R463A); at optimal operating conditions, however, R744 and R-463A had maximum COPs of 14.58 and 14.19, respectively. The maximum COPs of R744, R455A, and R463A based on the optimal pressure of the cooler/condenser were 3.1, 4.25, and 5.4, respectively. Additionally, given the need for environmentally friendly air conditioning systems with acceptable performance in cars, owing to their impact on the environment and their contribution to global warming, the blend R455A is recommended for use as a refrigerant in AAC systems. Introduction Most automobiles now come equipped with automotive air conditioning (AAC) systems due to their prevalence and the amount of time people spend in their vehicles. These systems typically use refrigerant fluids, which can be harmful to the environment and contribute to global warming. To mitigate this, many countries have mandated the use of eco-friendly refrigerants with minimal global warming potential (GWP) and zero ozone depletion potential (ODP). The popularity of cars and the increasing time that people spend in them have led to the installation of air conditioning systems in most vehicles. However, the use of refrigerant fluids in these systems can be damaging to the environment, prompting many countries to enforce laws requiring the use of environmentally friendly refrigerants with zero ozone depletion potential and low global warming potential. Due to scientific advancements and the detrimental effects of non-natural refrigerants on the ozone layer and greenhouse gas emissions, it is inevitable that current refrigerants will be phased out and replaced with new, ecologically friendly alternatives. As a result, R134a, with its 100-year GWP of 1430 [1], has been banned from air conditioning systems in the European Union since 2017 [2], being substituted with refrigerants that have a GWP of under 150 [3]. The Kyoto Protocol's focus on the impact of certain greenhouse gases on global warming has led to the gradual replacement of traditional hydrofluorocarbons. Natural CO2 (R744) has gained attention as a safe refrigerant due to its non-flammability and non-toxic properties [4].
Due to scientific advancements and the detrimental effects of non-natural refrigerants on the ozone layer and greenhouse gas emissions, it is inevitable that current refrigerants will be phased out and replaced with new, ecologically friendly alternatives.As a result, R134a, with its 100-year GWP of 1430 [1], has been banned from air conditioning systems in the European Union since 2017 [2], being substituted with refrigerants that have a GWP of under 150 [3].The Kyoto Protocol's focus on the impact of certain greenhouse gases on global warming has led to the gradual replacement of traditional hydrofluorocarbons.Natural CO 2 (R744) has gained attention as a safe refrigerant due to its non-flammability and non-toxic properties [4]. Research on refrigerant replacements is now divided into two key approaches that are driven by environmental concerns.The first approach involves seeking innovative refrigerants with low GWP and zero ODP, such as United Signal Company's azeotropic combination R410A and DuPont Company's R1234yf and R1234ze.The second approach explores the use of natural refrigerants [5,6].CO 2 has long been used as a refrigerant by humans [7,8], offering several advantages, such as non-toxicity, environmental friendliness, safety, affordability, and excellent thermal properties.Studies by Robinson and Groll [9] from Purdue University have demonstrated that the CO 2 transcritical cycle is a cutting-edge and reliable technique for automobile air conditioning, with great potential for efficiency enhancement.Research by Yu Binbin, Wang Dandong, Xiang Wei et al. [10] has shown that CO 2 electric car air conditioning systems perform comparably to R134a. Further studies have investigated the impact of R32 refrigerant on CO 2 water heat pump systems, revealing varying reductions in exhaust pressure when R32 was applied [11].Massuchetto LHP et al. conducted tests on three different refrigerant combinations (R744/R1270, R744/R717, and R744/RE170), determining that R744/RE170 yielded the optimal refrigeration outcome [12].Mancini et al. explored the use of CO 2 and dimethyl ether as an azeotropic combination refrigerant, finding that the addition of dimethyl ether benefited the CO 2 system [13].Adding propane to CO 2 refrigerant was also found to enhance the system's coefficient of performance by 7-9% according to the study by W. I. Mazyan et al. [14]. While CO 2 refrigerant is a popular research area for refrigerant replacement, it does have some disadvantages, including its high working pressure and relatively lower system efficiency.These factors somewhat hinder its rapid promotion and implementation [15,16].Sun et al. conducted an experimental investigation on CO 2 /R32 blends in a water-to-water heat pump system [17].Yu et al. studied the temperature difference in heat transfer using theoretical and experimental methods based on the nonlinear temperature enthalpy of R236fa/R32 mixtures, identifying distinct characteristics of the glide temperature depending on the composition [18].Liu et al. examined the detonation of R32/R1234ze(E) mixtures, analyzing the flame structure, explosion pressure, and lowest flammability boundaries of the refrigerant explosion [19]. The development of synthetic refrigerants such as R-32, R-125, R-1234yf, and R-134a with zero ODP and low GWP is hindered by safety concerns, processing complexity, and high cost [20,21].R744, an alternative to R134a, belongs to the Hydrofluoroolefin family [22].Abbood et al. 
[23] studied and presented environmentally friendly alternative refrigerants to R134a and compared the thermodynamic performance of an AAC system using R134a with blends of hydrocarbons (HCs) such as R290/R600a.Their theoretical analysis demonstrated that the blend (R600a/R290/134a) with a mass ratio of (43/35/22) exhibited a low global warming potential (GWP), while maintaining comparable refrigerant performance.This blend can be directly utilized in the system without requiring any modifications.A reviewed, compared, and comprehensively described outline of the main enhancements of recent advances and sustainable solutions in automobile air conditioning systems category was presented by Sagar and Rakshit [24].The thermodynamic and thermophysical properties for new mixed refrigerants, R13I1/R152a, R1234yf/R290, and R1234yf/R600a, as alternatives to R134a in automotive air conditioning systems, to solve the problem of the high global warming potential of R134a, were presented by [25,26].Savitha et al. [27] presented a review of the thermodynamic and flammability properties of the low GWP refrigerants and the compatibility of these refrigerants with construction material and lubricant.A novel design that integrated evaporative cooling with an automotive CO 2 air conditioning system was introduced and experimentally investigated by [28] to address the significant decrease in energy efficiency experienced in hot climates when using carbon dioxide (CO 2 ) refrigerant for vehicles.Lei et al. [29] presented thermotical and experimental studies to increase the heat transfer area of the evaporator and to improve the system performance of air condition systems using CO 2 in hot climate conditions.Vaccaro et al. [30] investigated and presented the effects of the addition of the second element in the CO 2 transcritical cycles to overcome the heavy expansion loss, requiring specific means for its mitigation.Luo et al. [31] theoretically studied the vapor-liquid equilibrium (VLE) properties of two eco-friendly zeotropics, CO 2 /R1234ze(Z) and CO 2 /R1336mzz(E), by using Peng-Robinson (PR) and Soave-Redlich-Kwong (SRK) models.Chen et al. [32] proposed new correlations, along with the application of deep learning-based modeling techniques, to analyze and predict saturated flow boiling heat transfer and two-phase pressure drops in the evaporating flow.Hussain et al. [33] predicted and optimized the two-phase pressure drop of refrigerant R1234yf across a diverse range of testing conditions.To achieve this, feature engineering techniques were employed to identify and select the most influential features that could accurately estimate the desired output. 
However, pure R744 systems can experience leakage problems due to their higher operating pressures (7-10 times higher than conventional R134a systems) and require heavy equipment, which reduces their coefficient of performance. These issues can be mitigated by using new blends, although the properties of such blends can differ from those of their original constituents. Therefore, the aim of this study was to explore the potential of two R744-based blends, R-455A (R-744/32/1234yf) and R-463A (R-744/32/125/1234yf/134a), to enhance the thermodynamic performance of the pure R744 refrigerant, leading to lower engine fuel consumption and hence lower carbon emissions. Environmentally friendly air conditioning systems with high performance are essential in cars because of their impact on the environment and their contribution to global warming. To achieve this, the thermodynamic energy and exergy characteristics of an AAC system were evaluated and compared for the proposed zeotropic refrigerants and for carbon dioxide (R-744) using Aspen HYSYS software. Additionally, the effects of cooler/condenser pressure, average evaporator temperature, cooler/condenser outlet temperature, and refrigerant flow rate on the cycle coefficient of performance (COP) and exergy efficiency were studied, analyzed, and presented. The anticipated outcomes of this investigation aim to demonstrate the most advantageous refrigerant performance in comparison to R-744 and to provide suitable operating conditions for the proposed refrigerants. Furthermore, these findings will serve as a foundation for constructing an AAC system that can be mass-produced economically and in an environmentally conscious manner. Table 2 displays the physical and environmental properties of the studied zeotropic refrigerants. The compositions of the two blends, R-455A (R-744/32/1234yf) and R-463A (R-744/32/125/1234yf/134a), by mass are (3.0/21.5/75.5) and (6.0/36.0/30.0/14.0/14.0), respectively. Furthermore, the blend R-463A has a high GWP of approximately 1386, while the GWP of the blend R-455A is less than one. Temperature Glide, P-T Envelope During the evaporation process, the refrigerant undergoes a transition from a saturated liquid state, known as the bubble point, to a saturated vapor state, known as the dew point. For a pure refrigerant, the temperature at which this transition occurs remains constant at constant pressure. However, when multiple components are present in a refrigerant mixture, a temperature glide can occur. The temperature glide refers to the difference between the dew point and bubble point temperatures. In this study, the temperature glide properties of R455A and R463A were investigated by analyzing the vapor-liquid equilibrium (VLE). The Peng-Robinson equation of state (EOS) was employed to calculate the saturated vapor pressure and saturated liquid density accurately. However, it is important to use appropriate mixing rules when evaluating the thermodynamic properties of the mixture using the EOS.
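To illustrate how a temperature glide follows from a blend's bubble- and dew-point temperatures, the sketch below evaluates the nominal R455A composition with CoolProp's multiparameter mixture backend, used here only as a stand-in for the Peng-Robinson model employed in the study. The evaporating pressure is an illustrative assumption, the mass fractions are converted to the mole fractions CoolProp expects, and it is assumed that CoolProp provides interaction parameters for these binary pairs and that the mixture flash converges.

```python
from CoolProp.CoolProp import PropsSI

# Nominal R455A composition by mass: CO2 / R32 / R1234yf = 3.0 / 21.5 / 75.5 %
components = ["CO2", "R32", "R1234yf"]
mass_frac = [0.030, 0.215, 0.755]
molar_mass = [PropsSI("M", c) for c in components]           # kg/mol

# Convert mass fractions to the mole fractions required by CoolProp
moles = [w / m for w, m in zip(mass_frac, molar_mass)]
mole_frac = [n / sum(moles) for n in moles]

fluid = "HEOS::" + "&".join(f"{c}[{x:.6f}]" for c, x in zip(components, mole_frac))

p = 400e3                                                     # assumed evaporating pressure, Pa
t_bubble = PropsSI("T", "P", p, "Q", 0, fluid)                # saturated liquid (bubble point)
t_dew = PropsSI("T", "P", p, "Q", 1, fluid)                   # saturated vapor (dew point)
print(f"bubble {t_bubble - 273.15:.2f} C, dew {t_dew - 273.15:.2f} C, "
      f"glide {t_dew - t_bubble:.2f} K")
```

The glide reported by such a calculation is the t_glide = t_f − t_i quantity discussed in the following paragraph; a pure fluid such as R744 would return identical bubble- and dew-point temperatures.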
Figure 1 illustrates the pressure-temperature (P-T) envelopes of the studied zeotropic refrigerants (R455A and R463A) compared to pure R744. Pure R744 does not exhibit a temperature difference between the bubble and dew point temperatures since it is a single-component refrigerant (see dashed lines in Figure 1a). Azeotropic blends behave similarly to pure refrigerants, where the boiling and condensation points coincide at the same composition for both vapor and liquid phases. However, for zeotropic blends, the more volatile component starts boiling first in the evaporator, followed by the less volatile component. This sequential boiling can lead to a change in concentration as the temperature changes, resulting in a temperature glide (t_glide > 2 K). If the refrigerant begins boiling at t_i and ends at t_f, then the temperature glide can be expressed as t_glide = t_f − t_i (see dashed lines in Figure 1b,c). The changing composition of the evaporating liquid, caused by the varying boiling point, contributes to the temperature glide. Additionally, leakage from the system can introduce changes in the composition and properties of the refrigerant, further contributing to t_glide.

System Modeling

A simulation system was constructed using Aspen HYSYS V12.1® Software (AspenTech, Bedford, MA, USA) [35] to assess the thermodynamic performance of zeotropic alternative refrigerants in AACs, as depicted in Figure 2. Aspen HYSYS has gained recognition among academics and engineers for its reliability and capability to evaluate complex industrial processes. The user-friendly nature of the Aspen HYSYS platform enables the optimization of conceptual design and operational parameters. This robust process simulator offers a vast library of pre-built component models and property packages. Aspen HYSYS allows for the seamless integration of multiple modules, facilitating the connection of material and energy streams. This capability enables the static and dynamic modeling of various complex chemical and hydrocarbon fluid-based processes. The simulation model developed for AACs in Aspen HYSYS offers easy integration with a range of energy systems, including compressors, condensers, gas coolers, heat exchangers, evaporators, expansion valves, and more. By leveraging Aspen HYSYS, the thermodynamic performance of zeotropic alternative refrigerants in AACs can be thoroughly examined. The software's extensive functionality and versatility contribute to the advancement of energy-efficient and environmentally conscious air conditioning technologies.
System Description and Assumptions

It is crucial to investigate and evaluate the performance of AAC cycles considering the physical and environmental characteristics of the newly studied zeotropic alternative refrigerants. Figure 2a illustrates the schematic diagram of the traditional vapor compression refrigeration cycle in AACs, comprising an evaporator, compressor, cooler/condenser, expansion valve, and internal heat exchanger. The system cycle is described as follows: (i) 1′-2: non-isentropic compression; (ii) 2-3: constant-pressure heat rejection in the cooler/condenser; (iii) 3-3′: subcooling in the internal heat exchanger (IHE); (iv) 3′-4: isenthalpic throttling in the expansion valve (EXV); (v) 4-1: constant-pressure heat absorption in the evaporator; (vi) 1-1′: superheating in the internal heat exchanger (IHE). By adjusting the average evaporation temperature, the cooler/condenser refrigerant outlet temperature, the cooler/condenser pressure, and the refrigerant mass flow rate, the operating conditions of the cycle can be modified. Because the critical pressure of the refrigerant can be low compared with the cooler pressure, a transcritical model of the cycle was developed according to the operating conditions. In transcritical cycles, a gas cooler is utilized instead of the condenser used in subcritical cycles, since there is no phase change above the critical point. As the pressure and temperature of the refrigerant are independent of each other above the critical point, both pressure and temperature need to be specified there. The coefficient of performance of the cycle varies at the same gas cooler outlet temperature but different operating pressures. Therefore, the simulation model's gas cooler was optimized by running it at the optimum gas cooler pressure [36]. Figure 2b presents the pressure-enthalpy (p-h) diagram of the subcritical/transcritical cycles using R-744 in AACs.

In order to ensure accuracy and simplicity in the current simulation, the following assumptions were considered to evaluate the performance of zeotropic alternative refrigerants in AACs:
• The entire system operates under steady-state conditions.
• The effects of kinetic energy and gravity are neglected.
• A reference analysis is conducted with an ambient temperature of 25 °C and atmospheric pressure of 101.325 kPa.
• Heat loss and pressure drop during the heat transfer process are disregarded.
• The refrigerant at the evaporator outlet is assumed to be in a saturated state.
• The operating parameter settings and modeling assumption values are presented in Tables 3 and 4.
These assumptions are made to simplify the simulation and focus on the primary aspects of the performance evaluation of zeotropic alternative refrigerants in AACs.
Thermodynamic Model Analysis

When evaluating the system, it is important to consider several key parameters, including the evaporator capacity, compressor power, pressure ratio, compressor discharge temperature, coefficient of performance (COP), and exergy efficiency. These parameters are used to analyze the energy and exergy performance of the AAC system when using different zeotropic alternative refrigerants. Furthermore, model validation is necessary to verify the accuracy of the AAC system simulation. The validation process involves comparing the model predictions with experimental or reference data to ensure the reliability of the results. The details of these parameters and the importance of model validation are discussed in the following sections.

Energy Analysis

The energy analysis of the modeled AAC system involves applying the principle of conservation of energy to each component of the system. This analysis assumes that the system is in a steady-state condition, with negligible changes in kinetic and potential energies. To calculate the evaporator capacity, the principle of conservation of energy is applied to the evaporator unit, as follows:

Q̇_evap = ṁr (h_1 − h_4)

where ṁr is the refrigerant mass flow rate, and h is the refrigerant's specific enthalpy. The same approach can also be used to determine the capacity of the cooler/condenser unit, as follows:

Q̇_cond = ṁr (h_2 − h_3)

Assuming the compressor operates adiabatically, the compressor power absorbed by the refrigerant during the compression process can be calculated as:

Ẇ_comp = ṁr (h_2 − h_1′)

where the actual outlet enthalpy h_2 follows from the compressor isentropic efficiency η_is, which is given by:

η_is = (h_2,s − h_1′) / (h_2 − h_1′)

where h_2,s represents the specific enthalpy of the refrigerant at the compressor outlet for isentropic compression, and h_2 represents the specific enthalpy of the refrigerant at the compressor outlet for the actual compression operation. The compressor pressure ratio is defined as:

γ = P_2 / P_1

where γ is the pressure ratio of the compressor in the system, P_2 is the compressor's exhaust pressure in MPa, and P_1 is the compressor's suction pressure in MPa. The cycle's coefficient of performance is:

COP = Q̇_evap / Ẇ_comp
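As a concrete illustration of these energy balances, the short sketch below evaluates the described cycle (evaporator, IHE superheating, compressor, gas cooler, IHE subcooling, expansion valve) for pure R744 at roughly the operating point used later in the parametric study. It is only a simplified stand-in for the Aspen HYSYS model, not a reproduction of it: it assumes the open-source CoolProp property library, an assumed 5 K superheat at the compressor inlet, and an assumed isentropic efficiency of 0.75, none of which are taken from the paper's Tables 3 and 4.

```python
# Minimal transcritical R744 (CO2) cycle following the state points 4-1-1'-2-3-3'-4.
# Assumptions (not from the paper): 5 K IHE superheat, isentropic efficiency 0.75,
# CoolProp installed for the fluid properties.
from CoolProp.CoolProp import PropsSI

fluid  = "CO2"
t_evap = 7.5 + 273.15      # average evaporator temperature [K]
t_3    = 35.0 + 273.15     # gas cooler outlet temperature [K]
p_2    = 8.9e6             # gas cooler pressure [Pa] (the optimum reported for R744)
m_dot  = 0.075             # refrigerant mass flow rate [kg/s]
dT_sh  = 5.0               # assumed IHE superheat [K]
eta_is = 0.75              # assumed compressor isentropic efficiency

p_1  = PropsSI("P", "T", t_evap, "Q", 1, fluid)            # evaporating pressure
h_1  = PropsSI("H", "T", t_evap, "Q", 1, fluid)            # state 1: saturated vapour
h_1p = PropsSI("H", "T", t_evap + dT_sh, "P", p_1, fluid)  # state 1': after IHE superheating
s_1p = PropsSI("S", "T", t_evap + dT_sh, "P", p_1, fluid)

h_2s = PropsSI("H", "P", p_2, "S", s_1p, fluid)            # isentropic compressor outlet
h_2  = h_1p + (h_2s - h_1p) / eta_is                       # state 2: actual compressor outlet
h_3  = PropsSI("H", "T", t_3, "P", p_2, fluid)             # state 3: gas cooler outlet
h_3p = h_3 - (h_1p - h_1)                                  # state 3': IHE energy balance
h_4  = h_3p                                                # state 4: isenthalpic expansion

Q_evap = m_dot * (h_1 - h_4)        # evaporator capacity [W]
W_comp = m_dot * (h_2 - h_1p)       # compressor power [W]
cop    = Q_evap / W_comp

print(f"Q_evap = {Q_evap/1e3:5.2f} kW, W_comp = {W_comp/1e3:5.2f} kW, COP = {cop:4.2f}")
```

The resulting numbers depend strongly on the assumed superheat and efficiency, so they should be read as an illustration of the balance equations rather than as the paper's results.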
Exergy Analysis

The study of exergy, unlike energy, reveals that exergy is not conserved and is subject to destruction in processes that involve both useful work and waste. Conducting a thermal exergy analysis is valuable in identifying the sources and magnitudes of thermodynamic inefficiencies. To achieve this objective, the following general form of the steady-state exergy rate balance for control volumes [38] can be applied to the components of the AAC system:

0 = Σ_j (1 − T_0/T_j) Q̇_j − Ẇ_cv + Σ_in Ė_in − Σ_out Ė_out − İ

where T_0 represents the environmental (dead-state) temperature, Q̇_j denotes the time rate of heat transfer at boundary location j, T_j is the boundary temperature, Ẇ_cv is the rate of work done by the control volume, Ė is the flow exergy rate, the subscripts in and out denote the inlet and outlet, respectively, and İ is the rate of exergy destruction in the control volume. The flow exergy rate at state point i is defined as follows:

Ė_i = ṁr [(h_i − h_0) − T_0 (s_i − s_0)]

where s is the refrigerant's specific entropy and the subscript "0" indicates the reference (dead-state) condition. The exergy destruction in the compressor is influenced by the internal heat transfer, gas friction, and mechanical friction of the moving elements. Assuming adiabatic compression, the exergy rate balance leads to the following expression for the rate of exergy destruction in the compressor:

İ_comp = Ė_1′ − Ė_2 + Ẇ_comp

The rate of exergy destruction in the condenser, resulting from heat transfer between the refrigerant and air streams, can be calculated using the following equation [22]:

İ_cond = Ė_2 − Ė_3 − Q̇_cond (1 − T_0/T_r)

where T_r is the refrigerant temperature. Assuming there is no heat transfer to or from the environment, the exergy destruction in the internal heat exchanger, which is caused by the internal heat transfer between the refrigerant streams, is:

İ_IHE = (Ė_1 + Ė_3) − (Ė_1′ + Ė_3′)

The exergy destruction in the expansion valve, which is caused by internal friction and the rapid pressure drop, can be calculated by disregarding the heat transfer with the environment:

İ_EXV = Ė_3′ − Ė_4

The rate of exergy destruction in the evaporator, caused by the heat transfer between the refrigerant and air streams, is given by [22]:

İ_evap = Ė_4 − Ė_1 + Q̇_evap (1 − T_0/T_r)

The rate of overall exergy destruction in the AAC cycle is obtained by summing the individual exergy destruction rates of the system's components:

İ_total = İ_comp + İ_cond + İ_IHE + İ_EXV + İ_evap

Finally, an exergetic efficiency, which relates the useful exergy output of the cycle to the compressor work input, is used to evaluate the overall exergetic performance of the AAC system.

Model Validation

The accuracy of the AAC system simulation model generated with Aspen HYSYS was verified through model validation. The transcritical CO2 refrigeration cycle with a regenerator was modeled using the same boundary and operating parameters as reported by Rigola et al. [39] and Elattar and Nada [37]. The experimental [39] and numerical [37] COP results were compared with the results obtained from the current model, and the maximum relative errors were found to be 11.24% and 2.79%, respectively, as presented in Table 5. These results demonstrate that the simulation approach implemented in this study using Aspen HYSYS software is sufficiently accurate.
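To show how the exergy relations above are evaluated in practice, the sketch below repeats the illustrative R744 cycle from the previous code example and adds the exergy bookkeeping: the flow exergy at each state point and the exergy destruction of every component. The dead state is taken at 25 °C and 101.325 kPa (the reference ambient conditions listed in the assumptions), while the 5 K superheat, the 0.75 isentropic efficiency, and the mean heat-rejection temperature used for the gas cooler term are assumptions made purely for this sketch.

```python
# Illustrative exergy accounting for the simplified R744 cycle (not the paper's model).
# Assumptions: 5 K IHE superheat, isentropic efficiency 0.75, dead state 25 degC /
# 101.325 kPa, mean refrigerant temperatures for the heat-transfer exergy terms.
from CoolProp.CoolProp import PropsSI

fluid, m_dot = "CO2", 0.075          # working fluid and mass flow rate [kg/s]
T0, P0 = 298.15, 101.325e3           # dead state
t_evap, t_3, p_2 = 280.65, 308.15, 8.9e6
dT_sh, eta_is = 5.0, 0.75            # assumed superheat [K] and isentropic efficiency

h0 = PropsSI("H", "T", T0, "P", P0, fluid)
s0 = PropsSI("S", "T", T0, "P", P0, fluid)

def flow_exergy(h, s):
    """E = m*[(h - h0) - T0*(s - s0)], the flow exergy rate in W."""
    return m_dot * ((h - h0) - T0 * (s - s0))

p_1 = PropsSI("P", "T", t_evap, "Q", 1, fluid)
h1, s1 = PropsSI("H", "T", t_evap, "Q", 1, fluid), PropsSI("S", "T", t_evap, "Q", 1, fluid)
T1p = t_evap + dT_sh
h1p, s1p = PropsSI("H", "T", T1p, "P", p_1, fluid), PropsSI("S", "T", T1p, "P", p_1, fluid)
h2 = h1p + (PropsSI("H", "P", p_2, "S", s1p, fluid) - h1p) / eta_is
s2 = PropsSI("S", "P", p_2, "H", h2, fluid)
h3, s3 = PropsSI("H", "T", t_3, "P", p_2, fluid), PropsSI("S", "T", t_3, "P", p_2, fluid)
h3p = h3 - (h1p - h1)                              # IHE energy balance
s3p = PropsSI("S", "P", p_2, "H", h3p, fluid)
h4 = h3p                                           # isenthalpic expansion
s4 = PropsSI("S", "P", p_1, "H", h4, fluid)

Q_evap = m_dot * (h1 - h4)
Q_gc   = m_dot * (h2 - h3)
W_comp = m_dot * (h2 - h1p)
T2     = PropsSI("T", "P", p_2, "H", h2, fluid)
T_gc   = 0.5 * (T2 + t_3)                          # assumed mean heat-rejection temperature

E1, E1p, E2 = flow_exergy(h1, s1), flow_exergy(h1p, s1p), flow_exergy(h2, s2)
E3, E3p, E4 = flow_exergy(h3, s3), flow_exergy(h3p, s3p), flow_exergy(h4, s4)

I_comp = E1p - E2 + W_comp                         # adiabatic compressor
I_gc   = E2 - E3 - Q_gc * (1 - T0 / T_gc)          # gas cooler / condenser
I_ihe  = (E1 + E3) - (E1p + E3p)                   # internal heat exchanger
I_exv  = E3p - E4                                  # expansion valve
I_evap = E4 - E1 + Q_evap * (1 - T0 / t_evap)      # evaporator
I_tot  = I_comp + I_gc + I_ihe + I_exv + I_evap

for name, val in [("compressor", I_comp), ("gas cooler", I_gc), ("IHE", I_ihe),
                  ("expansion valve", I_exv), ("evaporator", I_evap), ("total", I_tot)]:
    print(f"exergy destruction, {name:15s}: {val / 1e3:7.3f} kW")
```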
Zeotropic Refrigerants Environmental Impacts

Considering their potential for advancing refrigeration cycles, it is important to note the environmental characteristics of R134a, R32, R125, R1234yf, and R744: none of them depletes the ozone layer, but R134a and R32 contribute to global warming, whereas R1234yf and R744 make essentially no contribution. To assess the environmental impact of the proposed refrigerant mixtures, Table 1 provides the ozone depletion potential (ODP) and global warming potential (GWP) of the individual pure refrigerant components [14,40]. To evaluate the environmental impact of the proposed zeotropic refrigerants, Equations (16) and (17) were used, and the results are presented in Table 2. In these equations, w_pure,i represents the mass fraction of component i in the zeotropic blend. The evaluated ozone depletion potential (ODP) and global warming potential (GWP) of the proposed zeotropic refrigerants are presented in Table 2. It is worth noting that all the studied zeotropic blends have an ODP of zero, as their individual components have an ODP of zero. Furthermore, based on the classification by UNEP, 2019 [41], for 100-year global warming potential levels, R-455A is classified as having a negligible GWP level, while R-463A is classified as having a high GWP level.

Parametric Studies

A transcritical/subcritical cycle was developed for R744, R455A, and R463A according to the critical temperatures of these refrigerants. In transcritical cycles, a gas cooler is used instead of a condenser since there is no phase change above the critical point. At pressures above the critical point, the pressure and temperature of the refrigerant become independent variables and both need to be specified. The coefficient of performance (COP) of the cycle is influenced by the cooler/condenser outlet temperature and the operating pressures. Figure 3 illustrates the optimum cooler/condenser pressure, P_2, for the studied zeotropic refrigerants, R455A and R463A, compared to R744, based on the maximum COP of the cycle. The common operating conditions for the comparison are t_evap = 7.5 °C, t_3 = 35 °C, and ṁr = 0.075 kg/s. According to the figure, the maximum COP values achieved with R744, R455A, and R463A were 3.1, 4.25, and 5.4, respectively. These values were obtained at the optimum pressures of 8.9, 1.65, and 2.3 MPa for R744, R455A, and R463A, respectively.

Figure 4 presents the variations of the evaporator capacity (Q̇_evap), compressor power (Ẇ_comp), exergy efficiency (η_ex), and coefficient of performance (COP) with the cooler/condenser pressure (P_2) at t_evap = 7.5 °C, t_3 = 35 °C, and ṁr = 0.075 kg/s. In Figure 4a, it is evident that as P_2 increased, the evaporator capacity (Q̇_evap) sharply increased for R744, while it remained nearly constant for R455A and R463A. This behavior is due to the difference in the refrigerant enthalpy change through the evaporator. With R744, the refrigerant enthalpy at the inlet of the evaporator decreased, while it remained approximately constant for R455A and R463A as P_2 increased. This is because the cycle is transcritical for R744, using a gas cooler, while it is subcritical for R455A and R463A, using a condenser. Furthermore, R463A exhibited the highest evaporator capacity (Q̇_evap) among the three refrigerants, while R455A had a higher Q̇_evap than R744 for P_2 ≤ 8.9 MPa, after which R744 surpassed R455A. Figure 4b illustrates the compressor power (Ẇ_comp) of R744, R455A, and R463A. It was observed that Ẇ_comp increased with an increase in the cooler/condenser pressure (P_2). This behavior can be attributed to the thermodynamic relationship between pressure and temperature and the nature of the thermodynamic cycle.
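The search for the optimum cooler/condenser pressure illustrated in Figure 3 can be mimicked with a simple sweep. The sketch below evaluates the same illustrative R744 cycle (CoolProp properties, with the same assumed 5 K superheat and 0.75 isentropic efficiency, which are not the paper's settings) over a range of gas cooler pressures and reports the pressure at which the COP peaks.

```python
# Sweep of gas-cooler pressure for the illustrative R744 cycle to locate the COP maximum.
# Assumed (not from the paper): 5 K superheat, isentropic efficiency 0.75.
from CoolProp.CoolProp import PropsSI

def cop_r744(p_gc, t_evap=280.65, t_out=308.15, dT_sh=5.0, eta_is=0.75):
    """COP of the simplified transcritical R744 cycle at gas-cooler pressure p_gc [Pa]."""
    f = "CO2"
    p_low = PropsSI("P", "T", t_evap, "Q", 1, f)
    h1  = PropsSI("H", "T", t_evap, "Q", 1, f)
    h1p = PropsSI("H", "T", t_evap + dT_sh, "P", p_low, f)
    s1p = PropsSI("S", "T", t_evap + dT_sh, "P", p_low, f)
    h2  = h1p + (PropsSI("H", "P", p_gc, "S", s1p, f) - h1p) / eta_is
    h3  = PropsSI("H", "T", t_out, "P", p_gc, f)
    h4  = h3 - (h1p - h1)            # IHE balance followed by isenthalpic expansion
    return (h1 - h4) / (h2 - h1p)

pressures = [p * 1e5 for p in range(75, 121, 5)]        # 7.5 ... 12.0 MPa
cops = [cop_r744(p) for p in pressures]
best = max(range(len(pressures)), key=lambda i: cops[i])
print(f"optimum gas-cooler pressure ~ {pressures[best]/1e6:.1f} MPa, COP = {cops[best]:.2f}")
```

A finer pressure grid or a one-dimensional optimizer would sharpen the optimum, but even this coarse sweep shows the rise-and-fall of COP with P_2 that is discussed for Figure 4.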
Figure 4c,d illustrates the impact of the cooler/condenser pressure (P_2) on the exergy efficiency (η_ex) and coefficient of performance (COP). For R744, it can be observed that as the cooler/condenser pressure increased, both η_ex and COP initially rose, reached a peak around P_2 = 9 MPa, and then started to decline. This behavior is primarily attributed to the increase in evaporator capacity (Q̇_evap), which dominated over the increase in compressor power (Ẇ_comp) for P_2 ≤ 9 MPa. After P_2 > 9 MPa, the situation reversed, and the COP became inversely correlated with Q̇_evap and Ẇ_comp. On the other hand, for R455A and R463A, the exergy efficiency (η_ex) and coefficient of performance (COP) decreased with increasing P_2. This decrease is mainly due to the increase in compressor power (Ẇ_comp) and the nearly constant values of evaporator capacity (Q̇_evap) as P_2 rose. At P_2 = 15 MPa, the maximum Q̇_evap values for R744, R455A, and R463A were 12.2 kW, 9 kW, and 12.5 kW, respectively. Furthermore, within the studied range of P_2, the increase in evaporator capacity (Q̇_evap) for R463A compared to R744 was 614% at P_2 = 7 MPa and 2.5% at P_2 = 15 MPa. Additionally, within the studied range of P_2, the COP of R455A and R463A decreased by 27% and 37%, respectively.

Impact of Average Evaporator Temperature, t_evap

Figure 5 illustrates the impact of the average evaporator temperature (t_evap) on the following key performance parameters: evaporator capacity (Q̇_evap), compressor power (Ẇ_comp), exergy efficiency (η_ex), and coefficient of performance (COP). The analysis was performed at a condenser outlet temperature of t_3 = 35 °C, a mass flow rate of ṁr = 0.075 kg/s, and a condenser pressure of P_2 = P_opt. The optimum pressures for R744, R455A, and R463A were 8.9, 1.65, and 2.3 MPa, respectively, as shown in Table 2. Table 2 also provides the critical pressures and temperatures for R744, R455A, and R463A, indicating that the R744 cycle operates in the transcritical region, while the cycles for R455A and R463A are in the subcritical region. The Q̇_evap of R455A and R463A increased with rising t_evap, while it decreased for R744. Additionally, the Q̇_evap of R744 was higher than that of R455A until t_evap reached 9 °C.
In Figure 5b,c, it can be observed that for all refrigerants, Ẇ_comp decreased and COP increased with increasing average evaporator temperature (t_evap). This trend occurs because the compression work decreases with higher t_evap while the cooler/condenser pressure (P_2) is kept constant. Furthermore, Figure 5d demonstrates the effect of t_evap on the exergy efficiency (η_ex), clearly showing a decrease as t_evap increased. At t_evap = 15 °C, the maximum Q̇_evap values were 10 kW and 12.5 kW for R455A and R463A, respectively, while R744 achieved a maximum Q̇_evap of 9.48 kW at t_evap = 5 °C. Within the studied range of t_evap, using R455A and R463A instead of R744 resulted in Q̇_evap enhancements of 16.3% and 49%, respectively, at t_evap = 15 °C. The maximum COP values were 4.25, 6.7, and 7.5 for R744, R455A, and R463A, respectively, at t_evap = 15 °C. Additionally, within the studied range of t_evap, the COP for R744, R455A, and R463A increased by 52%, 76%, and 56%, respectively. The improvements in COP were 57.6% and 76.5% when using R455A and R463A instead of R744, respectively, at t_evap = 15 °C.

Impact of Cooler/Condenser Outlet Temperature, t_3

Figure 6 illustrates the impact of the cooler/condenser outlet temperature (t_3) on the evaporator capacity (Q̇_evap), compressor power (Ẇ_comp), exergy efficiency (η_ex), and coefficient of performance (COP). The analysis was conducted at a fixed average evaporator temperature of t_evap = 7.5 °C, a mass flow rate of ṁr = 0.075 kg/s, and a condenser pressure of P_2 = P_opt. The optimum pressures for R744, R455A, and R463A were 8.9, 1.65, and 2.3 MPa, respectively. As shown in Figure 6a,c, the Q̇_evap and COP decreased as the cooler/condenser outlet temperature (t_3) increased for all refrigerants. Furthermore, the decline in Q̇_evap and COP became more pronounced when t_3 exceeded 35 °C. This phenomenon occurs due to the reduction in the refrigerant enthalpy difference across the evaporator with increasing t_3, while the compression work remained constant. Figure 6b indicates that t_3 had no significant effect on the compressor power (Ẇ_comp), since the cooler/condenser pressure (P_2) and the average evaporator temperature (t_evap = 7.5 °C) were fixed. At t_3 = 20 °C, the maximum Q̇_evap values were 14.1 kW, 12.9 kW, and 15 kW for R744, R455A, and R463A, respectively. Within the studied range of t_3, the Q̇_evap for R744, R455A, and R463A decreased by 57%, 77%, and 67%, respectively. The maximum COP values were 4.8, 6.5, and 6.5 for R744, R455A, and R463A, respectively, at t_3 = 20 °C. Moreover, within the studied range of t_3, the COP for R744, R455A, and R463A decreased by 58.3%, 77%, and 66%, respectively. When compared to R744 at t_3 = 20 °C, the improvements in COP were 35.4% and 76.5% when using R455A and R463A, respectively. Exergy efficiency (η_ex) serves as a fundamental measure of the system's thermodynamic performance. Figure 6d illustrates that for R744, R455A, and R463A, as t_3 increased, η_ex initially rose, reached a peak, and then started to decline. Typically, this peak in η_ex occurs around t_3 = 35 °C. The decrease in η_ex after t_3 > 35 °C can be attributed to the increase in the total irreversibility of the cycle while the compressor power (Ẇ_comp) remains constant.
Impact of Refrigerant Flow Rate, ṁr

Figure 7 illustrates the impact of the refrigerant flow rate (ṁr) on the key performance parameters. The evaporator capacity (Q̇_evap) and compressor power (Ẇ_comp) increased with an increase in the refrigerant flow rate (ṁr). However, ṁr had no significant impact on the exergy efficiency (η_ex) and coefficient of performance (COP). The maximum Q̇_evap values were 18.8 kW, 17.7 kW, and 24.9 kW for R744, R455A, and R463A, respectively, at ṁr = 0.15 kg/s. Within the studied range of ṁr, the Q̇_evap for R744, R455A, and R463A increased by 208%, 200%, and 207%, respectively. Similarly, the maximum Ẇ_comp values were 6 kW, 4 kW, and 4.6 kW for R744, R455A, and R463A, respectively, at ṁr = 0.15 kg/s. Within the studied range of ṁr, the Ẇ_comp for R744, R455A, and R463A increased by 200%, 700%, and 206%, respectively. The maximum COP values were 3.15, 4.4, and 5.3 for R744, R455A, and R463A, respectively, at t_3 = 20 °C. Within the studied range of ṁr, the improvements in COP were 40% and 68% when using R455A and R463A instead of R744, respectively.

Figure 8 provides a cycle analysis and the state properties at the inlet and exit of each component for the R744 refrigerant using Aspen HYSYS software. The purpose of this analysis was to compare and assess the performance indicators of the AAC system when charged with pure R744 and with the R455A and R463A blend refrigerants as working fluids. The selected performance indicators include the evaporator capacity (Q̇_evap), compressor power (Ẇ_comp), exergy efficiency (η_ex), and coefficient of performance (COP). Figure 9 presents the system enhancements achieved with the different working fluids. It is evident that the blend R463A exhibited the highest values of Q̇_evap, COP, and η_ex (12.35 kW, 5.39, and 47.8%, respectively), surpassing both R744 and R455A. To further understand the system behavior, the percentage of exergy destruction for each component in the studied refrigerant blends compared to R744 is illustrated in Figure 10. This provides insight into the influence of each component on the overall exergy destruction. It can be observed that the expansion valve (EXV) contributed the highest irreversibility, accounting for 42% in the R455A cycle, followed by the compressor with 35% in the R463A cycle. The evaporator and condenser also contributed to the exergy destruction in all configurations, albeit to a lesser extent. Overall, the findings from these analyses demonstrate the performance advantages and trade-offs associated with the different refrigerant blends in the AAC system, highlighting the potential of R463A as the most promising option based on the evaluated performance indicators.

System Optimization

After conducting the parametric analysis, the performance of the two refrigerant blends (R455A and R463A) and pure CO2 (R744) was optimized to maximize the coefficient of performance (COP) within the studied ranges of the operating parameters. The optimal operating conditions, based on maximum COP, are presented in Table 6. Furthermore, Table 7 lists the exergy destruction and efficiency of each component, calculated for the optimal COP case. The table provides the maximum system performance and refrigeration capacity achievable for R744, R455A, and R463A, with values of 14.21 kW, 15.05 kW, and 19.15 kW, respectively. The corresponding maximum COP values were 14.58, 12.86, and 14.19, and the exergy efficiencies (η_ex) were 45.4%, 26.8%, and 26.7%, respectively.

The outcomes of the optimization study were further analyzed in terms of COP, cycle operating pressures, and refrigerant environmental impact. In terms of COP, R744 and R463A yielded the best results, with their cycle performance indicators being most similar. Regarding cycle operating pressures, R463A and R455A exhibited lower pressures compared to R744. This aspect may have implications for the materials used in the cycle components, cycle leakage, longevity, and compressor lubrication. In terms of environmental impact, R455A showed the least negative impact on the environment, with a global warming potential (GWP) of less than 1. However, it had a slightly lower COP compared to R744 and R463A. Additionally, the average difference in performance (COP) between R455A and both R744 and R463A was approximately 11%.
Considering the need for environmentally friendly air conditioning systems with acceptable performance in cars, given their impact on the environment and contribution to global warming, it is recommended to use the R455A blend as the refrigerant in the AAC system.

Conclusions

The growing popularity of vehicles has led to increased time spent in cars equipped with air conditioning systems. However, the refrigerant fluids used in these systems often have significant negative environmental effects. To address this issue, many countries have implemented regulations mandating the use of refrigerants with minimal global warming potential (GWP) and zero ozone depletion potential (ODP) in cars. Therefore, there is a pressing need for environmentally friendly and high-performance air conditioning systems in vehicles, given their detrimental impact on the environment and contribution to global warming, as well as the associated benefits of lower engine fuel consumption and hence lower carbon emissions.
The objective of this study was to propose the use of CO2-based zeotropic refrigerant blends, specifically R-455A (R-744/32/1234yf) and R-463A (R-744/32/125/1234yf/134a), to improve the thermodynamic performance of a pure CO2 refrigerant in automotive air conditioning systems. The study involved analyzing the thermodynamic energy and exergy of an AAC system and optimizing it using Aspen HYSYS software. The investigation focused on comparing the performance of the new zeotropic refrigerant blends with that of carbon dioxide (R-744). Additionally, the study examined the impact of cooler/condenser pressure, average evaporator temperature, cooler/condenser outlet temperature, and refrigerant flow rate on the cycles' coefficient of performance (COP) and exergy efficiency.

The results of the analyses indicate that R-463A provided the best system performance among the investigated refrigerants, resulting in the longest possible driving range when used in automotive air conditioning systems compared to R744 and R-455A at the same operating conditions. The study identified optimal cooler/condenser pressures for R744, R455A, and R463A of 8.9, 1.65, and 2.3 MPa, respectively. The maximum COP values at these optimal cooler/condenser pressures were 3.1, 4.25, and 5.4 for R744, R455A, and R463A, respectively. Furthermore, the study revealed maximum refrigeration capacities (Q̇_evap) for R744, R455A, and R463A of 12.2, 9, and 12.5 kW, respectively, at a specific cooler/condenser pressure of P_2 = 15 MPa.
Figure 4. Impact of cooler/condenser pressure on AAC output parameters.
Figure 5. Impact of average evaporator temperature on AAC output parameters.
Figure 7. Impact of refrigerant flow rate on AAC output parameters.
Figure 9. Comparisons of studied refrigerant blends on AAC performance indicators.
Figure 10. Exergy destruction percentages of each component for the studied refrigerant blends compared to R-744.
Table 4. Energy and exergy equations and modelling assumptions.
Table 6. Optimal operating parameters based on maximum COP.
Table 7. Exergy destruction and efficiency for each component for the optimal COP case.
THE SUSTAINABLE DEVELOPMENT IDEOLOGY

The author treats the concept of sustainable development as an ideology in which ecological assumptions have replaced economic assumptions, and humanity is still considered a means of action, not an end. The author illustrates the meaning of this ideology by showing its history from the report of the Club of Rome from 1972, through subsequent reports of this club and its head Lester R. Brown, up to UN reports entitled Agenda 2021 and Agenda 2030. The author analyses this latter document and the guidelines contained in it, which are based on the assumption that the population of the Earth should be limited, also mechanically, at the expense of the death of unborn children, rather than by a more even distribution of goods, which would decrease population growth in a natural way. According to the author, these assumptions can be seen in the approach of the signatories of these documents to the issue of the overpopulation of the Earth and in their uncritical support for the controversial theory of anthropogenic global warming. Many objectives of these agendas include slogans that sound extremely noble but are practically impossible to achieve, or their implementation would limit the freedom of farming and civil liberties in general.

Where is the Life we have lost in living?
Where is the wisdom we have lost in knowledge?
Where is the knowledge we have lost in information?
The cycles of Heaven in twenty centuries
Brings us farther from God and nearer to the Dust. 1

1 T.S. Eliot, Choruses from "The Rock", in: T.S. Eliot, Selected Poems, Harcourt Inc., 1934, p. 107.

HISTORY OF SUSTAINABLE DEVELOPMENT IDEOLOGY

According to Thérèse Delpech, one of the basic reasons why the contemporary world is going astray is the growing gap between the progress of science and technology and the lack of such progress in the field of ethics. 2 This has probably been the case since the beginning of the Modern Age, but we are now witnessing the widening of this gap. Following the development of informatics and the medical sciences in particular, one should really wonder whether our contemporaries have already been poisoned by the fruits of the Biblical tree of the knowledge of good and evil. 3 Too often we are guided by ideologies and not by scholarly reflection. Contrary to many contemporary definitions of ideology, 4 the author considers ideology to be a set of convictions not sufficiently based on facts but used for some political purposes. Therefore, like all ideologies, the sustainable development ideology should be carefully analyzed and criticized, since it revolves around some dangerous slogans. It seems that in our times reason and common sense have been substituted by political rhetoric and political correctness. Political agendas have always been wrapped up in brilliant words. The twentieth century can be called the age of illusions, when even the worst crimes were committed amidst wonderful slogans. People always want to believe in a better future and are quick to believe that it may be achieved by simple political measures. The new century has already produced terrifying developments, and their reasons are not only belied but often eliminated from public debate. Old utopias, such as the proletarian paradise or the free market "invisible hand", are being substituted by new ones, such as the sustainable development.
The political resolutions of world conferences and bombastic statements of global celebrities should not keep us from using common sense in analyzing what lays behind this brilliant offer of a "new brave world" which is becoming a common creed of the world political elites. The most representative manifestation of the sustainable development ideology is the 2030 Agenda adopted by the UN General Assembly by consensus without a vote on 25 September 2015. Nevertheless, the sustainable development ideology has a long history. In 1972 the Club of Rome published an alarming report entitled The Limits of Growth in which its members presented a vision of the coming overpopulation and exhaustion of natural resources and demanded introduction of a global birth control system. 5 Although not all of the alarming theses of the report have been confi rmed in practice, the principal assumption that humanity was a threat to itself was continuously developed. In the 1980s a new assumption was added in the shape of the man-made global warming theory. In December 1983 the UN General Secretary Javier Pérez de Cuéllar appointed Gro Harlem Brundtland, a former Socialist PM of Norway, to chair the World Commission on Environment and Development. In 1987 this commission published another alarming report entitled Our Common Future in which the necessity of reducing human population was a fundamental conclusion. 6 International deliberations concerning the sustainable development continued with the advancing new millennium in mind. During the UN Earth Summit conference in Rio de Janeiro in 1992 a new document entitled The 2021 Agenda was adopted. 7 A further step was made in Paris in March 2000, when the "Earth Charter" was passed, aimed at creating "an ethical framework for building a just, sustainable, and peaceful global society in the 21 st century." Among other demands, the charter stipulated that "an unprecedented rise in human population has overburdened ecological and social systems." It also called for a "sustainable reproduction and sexual health." These vague terms were already meant as global 5 Its text is available at: http://www.donellameadows.org/wp-content/userfi les/ Limits-to-Growth-digital-scan-version. birth control and easy access to abortion. 8 At this point, it is worth remembering that about one billion fetuses were killed in the second half of the 20 th century and in the fi rst 14 years of the new millennium. 9 This is real scale of the problem. In 2001, one of the leaders of the Club of Rome, Lester R. Brown announced a new Copernican revolution. "Economic theory -he stated -and economic indicators do not explain how the economy is disrupting and destroying the earth's natural systems." 10 Therefore Brown demanded an economy based on ecological premises; in other words, an economic system determined by resources. Brown's line of thinking went a long way towards what Janos Kornai referred to as the command economy. According to Kornai, there are two types of contemporary economic systems: those limited by demand and those limited by resources. Systems limited by demand face surplus of capital, labor, power, raw materials etc. but shortage of demand, while in the systems limited by resources the productive capacity is determined by the resource in the shortest supply. Shortage of any resource results in a decrease of output or in a forced substitution: utilization of a resource of worse quality or adjustment of the structure of production to available resources. 
Because of forced substitution, shortage in one fi eld involves other shortages what detaches the real supply from what was planned. Shortage has not only material effects, it also increases nervousness and confusion what leads to an even less effective utilization of the still available resources. 11 These remarks were true in relation to the Communist command economies but to a certain degree they also refer to the contemporary global economy which "is slowly destroying its support systems, consuming its endowment of natural capital." 12 Brown's alarming remarks concerning the advancing shortage of water, timber, land for grazing and other resources, should have been taken seriously into account, and his appeals to develop an eco-economy are generally right. Nevertheless, some of his alarming prophecies were not suffi ciently grounded and Brown himself was not always consistent. For instance, he noticed that no one regularly 8 What is the Earth Charter?, http://earthcharter.org/discover/what-is-the-earthcharter [access: January 29, 2019]. measured the water table level under the North China Plain the Indian Punjab or the southern Great Plains of the United States, but he warned against an "inevitable crash." 13 He also pointed at the progress in many fi elds of eco-economy. The weakest point of his argument is elsewhere. In the chapter concerning the population problems he pointed at the danger of the population growth rate exceeding the economic growth rate and at the inevitability of overusing natural resources. 14 He analyzed the potential scenarios for the world population: Demographers use a three-stage model to understand how population growth rate change over time as modernization proceeds. In the fi rst stage, birth and death rates are both high, resulting in little or no population growth. In the second stage, death rates fall while birth rates remain high, leading in rapid growth. In the third stage, birth rates fall to a low level, balancing low death rates and again leading to population stability (…) Today there are no countries in stage one; all are in stage two or stage three. 15 Brown failed to notice stage four in which death rates exceed birth rates leading to a serious decrease of population. This is now the case in a growing number of economically developed and post-Communist countries. Meanwhile Brown's main concern was limitation of birth rates. He widely described the family planning progress in the Third World Countries almost equalizing various methods of bringing the birth rates down. Sexual education, promoting family planning in media, encouraging girls to continue education instead of repeated pregnancies, contraceptives and abortion, for instance in the shape of the "morning after" pill-all these methods were generally approved of by Brown. 16 In his follow-up bestselling book "Plan B 3.0," Brown repeated earlier arguments concerning the overuse of natural resources and continued to stress the necessity to stabilize earth population. This time he made no mention of abortion, but one can only wonder whether he changed his mind on this topic. He also failed to notice the dramatic consequences of the decrease of the number of population in the economically developed countries given a steady high population growth rates in the Third World and in the "failed states" in particular. 17 13 L.R. Brown, Eco-Economy, p. 229. 
14 By the way Brown's graph on page 214 shows the largest man-made demographic disaster in history: some 35 million people starved to death as result of the Chinese Great Leap Forward. L.R. Brown, Eco-Economy, p The Club of Rome continued its work. In its "Green Agenda" of 2005 we read: The common enemy of humanity is man. In searching for a new enemy to unite us, we came up to the idea that pollution, the threat of global warming, water shortages, famine and the like would fi t the bill. All these dangers are caused by human intervention, and it is only through changed attitudes and behavior that they can be overcome. The real enemy then, is humanity itself." 18 Of course, the initial paradoxical statement could be understood as a simple warning or an appeal to the international community to change "attitudes and behavior," but the reference to humanity as an "enemy" explicitly pointed at the solution: the "enemy" should have been reduced in numbers. One should also pay attention to the style of this document. "The enemy to unite us" has been a typical slogan of totalitarian regimes of the 20 th century. The Communists fought against the "bourgeois enemy," while the Nazis fought against the "Jewish enemy." One can wonder why the Club of Rome authors failed to notice such allusions. THE 2030 AGENDA GOALS All the preceding efforts at the establishment of a world social and economic policy program were crowned in September 2015 when the UN General Assembly adopted the 2030 Agenda. It is the so far furthest reaching program of political control of mankind. In the Introduction we read: 1. We, the Heads of State and Government and High Representatives, meeting at the United Nations Headquarters in New York from 25-27 September 2015 as the Organization celebrates its seventieth anniversary, have decided today on new global Sustainable Development Goals. 2. On behalf of the peoples we serve, we have adopted a historic decision on a comprehensive, far-reaching and people-centered set of universal and transformative Goals and targets. We commit ourselves to working tirelessly for the full implementation of this Agenda by 2030. We recognize that eradicating poverty in all its forms and dimensions, including extreme poverty, is the greatest global challenge and an indispensable requirement for sustainable development. We are committed to achieving sustainable development in its three dimensions -economic, social and environmental -in a balanced and integrated manner. We will also build upon the achievements of the Millennium Development Goals and seek to address their unfi nished business. 3. We resolve, between now and 2030, to end poverty and hunger everywhere; to combat inequalities within and among countries; to build peaceful, just and inclusive societies; to protect human rights and promote gender equality and the empowerment of women and girls; and to ensure the lasting protection of the planet and its natural resources. We resolve also to create conditions for sustainable, inclusive and sustained economic growth, shared prosperity and decent work for all, taking into account different levels of national development and capacities. 4. As we embark on this great collective journey, we pledge that no one will be left behind. Recognizing that the dignity of the human person is fundamental, we wish to see the Goals and targets met for all nations and peoples and for all segments of society. And we will endeavor to reach the furthest behind fi rst. 
The fi rst 25 lofty statements of the agenda's introduction are followed by statement No 26: 26. To promote physical and mental health and well-being, and to extend life expectancy for all, we must achieve universal health coverage and access to quality health care. No one must be left behind. We commit to accelerating the progress made to date in reducing newborn, child and maternal mortality by ending all such preventable deaths before 2030. We are committed to ensuring universal access to sexual and reproductive health--care services, including for family planning, information and education. We will equally accelerate the pace of progress made in fi ghting malaria, HIV/ AIDS, tuberculosis, hepatitis, Ebola and other communicable diseases and epidemics, including by addressing growing anti-microbial resistance and the problem of unattended diseases affecting developing countries. We are committed to the prevention and treatment of non-communicable diseases, including behavioral, developmental and neurological disorders, which constitute a major challenge for sustainable development. The problem is that among many brilliant demands the promoted medical progress includes "reproductive health-care services" which in the current political newspeak mean free access to abortion, and, more and more often, to euthanasia. Therefore, if the basic assumption of the agenda is to be "people-centered," one should fi rst defi ne who the people are. Does the term "people" refer to the unborn children, the lethally sick or simply weary of life or not? Does it, ultimately, refer to human beings whose "quality of life" was found insuffi cient by many world authorities, such as Peter Singer? 19 The lack of such 19 The Princeton University professor of ethics Peter Singer is the top defender of the global ecological system against human activity. He has relativized differences between human beings and other living organisms, opening the road to acceptance of physical elimination of people according to arbitrary rules. His book Rethinking Life and Death (1994) was once called a "road map to a moral dead end." R.J. Neuhaus, Public Square, "First Things", June-July 2004, No. 144, p. 82. defi nition in the agenda is a serious problem. The consent to support road traffi c does not mean that everybody should support left-side traffi c and right-side traffi c at the same time. The agenda includes 17 sustainable development goals, mostly repeating statements of the introduction. These goals are specifi ed below. Each of these 17 goals have been developed into 169 targets. The verbal creativity of politicians proved to be hardly limited. Even "The Economist" editorial ridiculed the "169 Commandments" of the agenda. 21 For instance, let us take a look at the specifi c targets of goal No 8: 8.1 Sustain per capita economic growth in accordance with national circumstances and, in particular, at least 7 per cent gross domestic product growth per annum in the least developed countries. 8.2 Achieve higher levels of economic productivity through diversifi cation, technological upgrading and innovation, including through a focus on high--value added and labour-intensive sectors. 8.3 Promote development-oriented policies that support productive activities, decent job creation, entrepreneurship, creativity and innovation, and encourage the formalization and growth of micro-, small-and medium-sized enterprises, including through access to fi nancial services. 
8.4 Improve progressively, through 2030, global resource efficiency in consumption and production and endeavor to decouple economic growth from environmental degradation, in accordance with the 10-year framework of programmes on sustainable consumption and production, with developed countries taking the lead.

8.5 By 2030, achieve full and productive employment and decent work for all women and men, including for young people and persons with disabilities, and equal pay for work of equal value.

8.6 By 2020, substantially reduce the proportion of youth not in employment, education or training.

8.7 Take immediate and effective measures to eradicate forced labor, end modern slavery and human trafficking and secure the prohibition and elimination of the worst forms of child labor, including recruitment and use of child soldiers, and by 2025 end child labor in all its forms.

8.8 Protect labor rights and promote safe and secure working environments for all workers, including migrant workers, in particular women migrants, and those in precarious employment.

8.9 By 2030, devise and implement policies to promote sustainable tourism that creates jobs and promotes local culture and products.

8.10 Strengthen the capacity of domestic financial institutions to encourage and expand access to banking, insurance and financial services for all.

8.a Increase Aid for Trade support for developing countries, in particular least developed countries, including through the Enhanced Integrated Framework for Trade-Related Technical Assistance to Least Developed Countries.

8.b By 2020, develop and operationalize a global strategy for youth employment and implement the Global Jobs Pact of the International Labor Organization.22

Reading these goals and targets, one cannot help remembering a well-grounded ontological distinction between existence, non-existence, and planned vision. All of these goals and targets sound very nice and seem to aim at making people richer, healthier and happier. The problem is in the feasibility of this program and in the measures that are planned to be applied. These questions apparently escaped the attention of many signatories and even of the Holy See. During the preparatory stage of the 2030 Agenda, in April 2015, the Holy See organized a conference at which the Vatican committed itself to promoting the agenda. Asked about the reasons for this engagement, the Chancellor of the Papal Academy of Science, Archbishop Marcelo Sánchez Sorondo, replied that the sustainable development agenda does not mention abortion or birth control but "family planning, sexual and reproductive health, as well as reproductive rights."23 In fact, target 3.7 of the agenda calls to "ensure universal access to sexual and reproductive health-care services, including for family planning, information and education, and the integration of reproductive health into national strategies and programmes."24 Nevertheless, the Archbishop failed to notice that in the present political newspeak "reproductive rights" mean free access to abortion. The 2015 UN General Assembly Session that passed the 2030 Agenda was attended by Pope Francis.

There are also problems with the feasibility of the agenda goals. Among a plethora of the agenda's concerns for human economic and medical well-being, we can hardly trace a word about reciprocal human attitudes. The word "education" is crucial here. Its purposes are nowhere mentioned. What and how should we teach our youngsters? To know more? Where should this knowledge lead us?
There are calls for a changed attitude towards our ecological surroundings but not towards other humans. Bearing children boils down to "healthy reproduction." While the word "love" is now mostly understood in its physical dimension, the word "empathy" seemed a taboo to the agenda's authors. The whole burden of changing our world for the better is laid on governments and local authorities, while individuals are only obliged to care for the ecology.

As we said, the UN 2030 Agenda is a far-reaching program for the establishment of political control over humanity. The necessity to steer economic and social development is here justified by demographic and ecological premises. Morality is absent, since the thinking of the contemporary world elites is overwhelmed by relativism. While there is no solid moral foundation of the agenda, its basic assumptions - the threat of overpopulation and man-made global warming - are nowhere proved. They rather serve as philosophical dogmas. Although the agenda claims to be "people-centered," it looks rather earth-centered. In many commentaries earth appears as a kind of new goddess in which all humans are born, live, and die.25 Economic policies have always been based on some basic political and social assumptions. Instead of earlier theories, the idea of sustainable development is based on two very doubtful premises: first, that the earth is overpopulated, and, second, that global warming is caused by excessive anthropogenic carbon dioxide production.

OVERPOPULATION

This popular term has been discussed for more than two centuries. In earlier history, population growth was usually slow. Over the centuries, despite high birth rates, wars, plagues, and high infant mortality, the world population grew steadily but did not create serious worries. The first to raise the alarm about the number of people outgrowing available resources was Thomas Malthus in his Essay on the Principle of Population (1798). This Malthusian thesis gained massive support in the second half of the 20th century with its dramatic acceleration of world population growth. Quite recently, Paul Crutzen and Stanisław Wacławek presented an apocalyptical vision of human overpopulation and its impact on the global ecological system, calling the contemporary era the "anthropocene."26

The question of excessive population implies a concept of optimum population. "Over-" and "under-population" require a definition of the ideal population size. And here we face substantial problems.

25 Perhaps the closest to the spiritual background of the 2030 Agenda is the Wicca pagan movement, promoted among others by Gerald Gardner and Doreen Valiente, and drawing upon many ancient and Eastern hermetic motifs. Their Book of Shadows had many versions but generally refers to the ancient Greek cult of Kore, Indian pantheism, or the even more ancient cult of the Mother Goddess. There are also many associations with modern feminism and New Age.

The density of population alone tells nothing about the causes of and cures for the present situation. If we compare the economic performance of the Netherlands and Bangladesh on the one hand and Siberia and Canada on the other, the conclusions will be close to none. The first pair is characterized by a high density of population accompanied by, respectively, a high and a low level of economic development. The same diversity of economic performance may be found in the second pair, characterized by a low density of population.
These comparisons prove that population density alone is not an inevitable cause of economic development or economic backwardness. Also, the unprecedented economic growth of the rather "overpopulated" Korea and especially China in recent decades shows that the connection between population per square mile and economic prospects is more complicated. Therefore, one must take into account the density of population in relation to the level of income and analyze the mechanism of income changes.27

There can be no doubt that the extremely crowded cities of the Third World are a human disaster and that they create serious environmental problems. But they are a result of the lack of rational dislocation policies. The creation of new centers of economic development could really ease their problems. The Japanese, Korean and Chinese cases of overcoming the traditional "vicious circle of backwardness" must be seriously taken into consideration. Moreover, if the most developed countries paid more attention to the development of less developed and sometimes really crowded countries, for instance by decreasing revenues from arms exports and increasing investment in local infrastructure, the GNP of the latter countries would grow and their birth rates would decrease without enforcing instruments of "reproductive health-care services" such as abortion or sterilization. These measures are mentioned in the 2030 Agenda only marginally.

ANTHROPOGENIC GLOBAL WARMING

There can be little doubt that the average world temperature has recently been growing. Nevertheless, there is an ongoing debate on the human impact on this phenomenon. The theory of irreversible effects of global warming mostly caused by excessive human activities, that is, the Anthropogenic Global Warming (AGW) theory, is rather doubtful. Although a number of outstanding authorities claim the human impact to be decisive, one may ask to what extent global warming may be connected with human activity, since man-made carbon dioxide is responsible for a few percent of the total global carbon dioxide emission, while the rest comes from natural sources. The AGW theory is supported by a lot of authorities. Since they seem to constitute a majority of experts, there is no need to mention them. But the question is whether scholarly facts can be proved by a popular vote. The global warming theory has many rational and consistent critics. Among the most eminent scholars who criticize the AGW theory one may mention Ivar Giaever, the Norwegian 1973 Nobel Prize winner for physics; Richard Lindzen, an American atmospheric physicist and member of the American Academy of Sciences; Patrick Moore, former president of Greenpeace Canada; Nils-Axel Mörner, former head of the paleogeophysics and geodynamics department of Stockholm University and president of the International Union for Quaternary Research Commission on Neotectonics; Garth Paltridge, a retired Australian atmospheric physicist and Honorary Fellow of the Institute of Antarctic and Southern Ocean Studies at the University of Tasmania; Roger A.
Pielke, professor of ecology at the University of Colorado; Denis Rancourt, a retired professor of physics from the University of Ottawa; Harrison Schmitt, an American geologist and astronaut; Philip Scott, a retired professor of biogeography from the University of London; Hendrik Tennekes, former director of research at the Royal Dutch Meteorological Institute; Khabibulo Abdusamatov, an astrophysicist and head of the Space Research Laboratory at the Petersburg Observatory of the Russian Academy of Sciences; Sallie Baliunas, a retired American astrophysicist, former Deputy Director of the Mount Wilson Observatory; Vincent Courtilot, an emeritus French geophysicist; David Douglas, professor of physics at the University of Rochester; Ole Humlum, professor of geology at the University of Oslo; William Kininmoth, meteorologist and former Australian delegate to the World Meteorological Organization; Nir Shaviv, professor of astrophysics and climatology at the Hebrew University in Jerusalem; as well as many other experts.28

The AGW theory was also criticized by Burt Rutan, a prominent American aircraft engineer who pointed at several research abuses. Edward Smith and Joseph d'Aleo noticed that around 1990 the NASA Goddard Institute of Space Studies limited the number of temperature monitoring stations, eliminating those located in the coldest places on earth. One must remember that the 20th-century average annual growth of temperature amounted to 0.7 percent, while the accepted level of measurement error is one percent. This may pose a serious question mark over the whole theory of global warming. D'Aleo became one of the most competent critics of global warming. He claimed that (1) since 2002 the average world temperature has decreased rather than increased; (2) the effect of carbon dioxide on the rise of temperature is logarithmic, so the more CO2 in the atmosphere, the lower the temperature increase it produces; (3) no correlation of CO2 emissions and temperature has been proved since 2002; (4) CO2 is not a pollutant but a naturally occurring gas, an essential ingredient in photosynthesis, which may decrease its global volumes; (5) reconstruction of long-term CO2 concentrations demonstrates that today's concentration is the lowest since the Cambrian Era 50 million years ago; and (6) temperature changes lead rather than lag carbon dioxide changes, with the oceans playing a leading role here.29 Others pointed at the fact that the current level of CO2 concentration is 380 ppm, while plants grow best at 1000 ppm. All these facts led Rupert Darwall to the conclusion that the whole AGW theory results from Malthusian assumptions.30

THE 2030 AGENDA IN PRACTICE

The sustainable development ideology may not look as dangerous in itself as earlier ideologies. Although many earlier utopias proved to be very harmful as guiding principles of practical policies, the vague ideas represented by the 2030 Agenda may look quite decent. The danger is in the agenda's basic assumption that human life in itself is a grave problem that should be reduced at all cost. Moreover, the lofty goals of the agenda may be used by big powers as tools of political pressure on small countries. Fortunately, some of these dangers are far from materialization. One thing that is obvious is that the UN legitimization of the "reproductive health theory" serves some international actors, such as the European Union (to pressure its member countries to implement free abortion) or Planned Parenthood (to sponsor abortion worldwide).
On the other hand, the recent UN COP24 climate summit in Katowice became a forum for the presentation of contradictory standpoints. Generally speaking, ecological radicals failed to impose their beliefs, while representatives of individual countries defended their economic interests. With its defense of the coal mining industry, Poland was not alone. The Chinese and Turkish delegations demanded that their countries be rated among developing countries and given economic aid. A radical report by the Intergovernmental Panel on Climate Change was rejected by the US, Chinese, Russian, Kuwaiti and Saudi delegations. The ambitious European Union program of carbon dioxide emission reduction serves those union members that supply the relevant technologies, while it is harmful to countries whose power generation is still based on coal. The latter countries' competitiveness may be ruined in comparison with countries that ignore the CO2 reduction programs. This is why it was so important for Poland to pass the declaration "Forests for Climate," calling for the utilization of forest resources to balance carbon dioxide emissions.31

All in all, the sustainable development ideology may not be dangerous if the human dimension is taken into consideration in all its moral and economic aspects, but if it is implemented in the most radical, pro-abortionist version it may lead not only to a moral devastation of mankind but to disastrous demographic results similar to previous experiments of this kind.

* * *

A Norwegian expert in oil, Oystein Dahle, once observed that "socialism collapsed because it did not allow prices to tell the economic truth. Capitalism may collapse because it does not allow prices to tell the ecological truth."32 This may be the case, but in opposing economy and ecology, most experts treat human life as a factor and not as a goal. Considering policies that would optimize the well-being of mankind, they fail to notice that mankind is composed of billions of individual human beings, alive or unborn, each of them not a measure but an objective in itself. Whether we like it or not, each human has an individual genetic code and unique fingerprints. So, the basic question remains: what theory or what policy will tell the human truth?
Verifying the output of quantum optimizers with ground-state energy lower bounds

* In memoriam of Peter Wittek, who was a source of inspiration for all of us.

Solving optimisation problems encoded in the ground state of classical-spin systems is a focus area for quantum computing devices, providing upper bounds to the unknown solution. To certify these bounds, they are compared to those obtained by classical methods. However, even if the quantum bound beats them, this says little about how close it is to the unknown solution. We consider the use of relaxations to the ground-state problem as a benchmark for the output of quantum optimisers. These relaxations are radically more informative because they provide lower bounds to the ground-state energy. The chordal branch and bound algorithm we present provides a series of systematically improving confidence regions where the ground-state energy provably lies. Interestingly, each step in the process requires only an effort polynomial in the system size. Additionally, the algorithm exploits the locality and sparsity of relevant Ising spin models in a systematic way. This yields certified solutions for many of the problems that are currently addressed by heuristic optimisation algorithms more efficiently and for larger system sizes. We apply the method to verify the output of a D-Wave 2000Q device and identify instances where the annealer does not reach the ground-state energy and, more importantly, instances where it does, something impossible to do by means of standard variational approaches. Our work provides a flexible and scalable method for the verification of the outputs produced by quantum optimization devices.

I. INTRODUCTION

Classical Ising models are among the most paradigmatic and widely studied models in statistical physics. They are capable of describing an immense variety of interesting physics, ranging from ferromagnetic to frustrated and glassy phases. Moreover, they are important in fields as diverse as risk assessment in finance, logistics, machine learning [1], and image de-noising [2], because the solution of many optimisation and decision problems, such as partitioning, covering, and satisfiability, can be encoded in the ground state of such models [3]. Their generality and the exponentially growing spaces of spin configurations, however, preclude the existence of any efficient general purpose algorithm to obtain the ground state. It is hence no surprise that a wealth of approximate but more scalable classical techniques for the energy minimisation in such models have been developed. Recently, novel approaches that leverage the power of near-term quantum devices, such as quantum annealers, variational quantum eigensolvers [4], variational circuits [5,6] or networks of degenerate optical parametric oscillators [7], have been proposed for performing such tasks [8][9][10]. The quality of their outputs is usually benchmarked against some of the most scalable classical approaches, e.g., simulated annealing [11] or variational ansatz classes based on tensor-network states [12]. All of these methods share one common feature: they only provide upper bounds on the ground-state energy. On the one hand, this feature limits the verification power of these approaches, which are only able to identify instances where quantum devices do not reach the ground-state energy.
On the other hand, even when a quantum optimiser beats all classical variational methods, there is no way to know if the output of the quantum device is actually close to the true ground-state energy, unless the test is performed on problems for which the solution is already known [13]. To overcome these limitations, it is important to develop schemes that provide reliable lower bounds to the ground-state energy of spin problems, against which the results of quantum devices, and also classical variational methods, can be compared.

In this work, we tackle this issue by leveraging relaxations of polynomial optimisation problems through semidefinite programming (SDP). The proposed method provides lower bounds on the ground-state energy by optimizing over a larger set than the physical spin configurations. We improve the scalability of this type of relaxation by making use of a method known as the chordal extension, which allows us to exploit the physical locality and sparsity structure present in relevant problem instances. All in all, this yields an increasingly precise hierarchy of rigorous lower and, in fact, also upper bounds on the ground-state energy. Combining these bounds, we obtain a scalable and flexible method that provides, with polynomial effort, a confidence region inside which the ground-state energy provably lies. By making use of a branch-and-bound scheme, the confidence region can be systematically improved. Although the complexity of the general Ising problem implies that one might have to run an exponential number of steps to achieve convergence, our numerical experiments show that in many instances the confidence region collapses to the exact ground state after a few iterations. A benchmark on a D-Wave 2000Q device shows how our chordal branch-and-bound (CBB) method can be used both to detect situations in which the quantum solution differs from the optimum and, more importantly, to verify when the annealer has actually reached the ground-state energy and no further optimization is required. Our approach to verification consists of relaxing the initial optimization problem and, therefore, results in lower bounds to the ground-state energy, something impossible using standard variational techniques. Certification methods based on relaxations constitute a valid approach for benchmarking any heuristic optimisation, classical or quantum. The main purpose of this work is to demonstrate how these methods can be applied to assess the quality of the outputs of intermediate-scale quantum computing devices, whose timeliness is particularly motivated by recent progress in the field [14].

II. PRELIMINARIES

We consider classical spin systems whose configurations σ := (σ_1, . . . , σ_N) are vectors of N spin variables σ_i ∈ {−1, 1}, to each of which a Hamiltonian H assigns an energy H(σ). We are mostly interested in Hamiltonians of Ising type, which can be written in the form

H(σ) = ∑_{i<j} J_ij σ_i σ_j + ∑_i h_i σ_i,    (1)

with couplings J_ij and local fields h_i. The method we develop, however, is more general and can also be applied to Hamiltonians that are higher-order polynomials of the spin variables σ_i and couple three or more spins in a single term. Among all the configurations of such a system there are those that achieve the minimal possible energy, also known as the ground-state energy and defined as

E_g := min_{σ ∈ {−1,1}^N} H(σ).    (2)

This minimization is a polynomial optimization problem in the spin variables. If the Hamiltonian is of the form in (1), then it is quadratic.
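For very small systems, both (1) and (2) can be evaluated directly by enumeration, which serves as a useful sanity check for the bounds discussed below. The following is a minimal sketch in plain Python/NumPy (not taken from the paper's code base); the exhaustive search is only feasible up to a few tens of spins.

```python
# Minimal sketch: energy (1) of a spin configuration and brute-force solution of (2).
import itertools
import numpy as np

def ising_energy(sigma, J, h):
    """H(sigma) = sum_{i<j} J_ij sigma_i sigma_j + sum_i h_i sigma_i (J symmetric, zero diagonal)."""
    sigma = np.asarray(sigma, dtype=float)
    return float(sigma @ J @ sigma / 2.0 + h @ sigma)

def brute_force_ground_state(J, h):
    """Exhaustive minimisation over {-1, 1}^N; scales as 2^N."""
    best_energy, best_config = np.inf, None
    for config in itertools.product([-1, 1], repeat=len(h)):
        e = ising_energy(config, J, h)
        if e < best_energy:
            best_energy, best_config = e, config
    return best_energy, best_config

# Example: a frustrated antiferromagnetic triangle, whose ground-state energy is -1.
J = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]])
h = np.zeros(3)
print(brute_force_ground_state(J, h))
```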
For our purposes, solving the ground-state problem for a given Hamiltonian means finding E_g and outputting a configuration that achieves it. Obviously, finding the ground state is an optimisation problem that can, in principle, be solved by brute-force search. This, however, quickly becomes infeasible as the number of configurations grows as 2^N, restricting this approach to systems of a few tens of particles.

Many spin models of interest in physics and beyond are characterized by a locality structure defined by a graph G: spins are located at the nodes of the graph and the interaction terms J_ij are non-zero only between neighbouring sites, that is, spins connected by an edge of the graph. Local interactions also appear naturally in physical solvers of spin models, such as, for instance, a quantum annealer. Such locality of interactions implies a sparsity of the resulting Hamiltonian and hence of the optimisation problem. However, exploiting this structure remains a non-trivial task: even for rather restricted classes of graphs, finding the ground state is an NP-hard problem. This precludes the existence of any efficient and general algorithm. The complexity of the Ising ground-state problem thereby depends on subtle details of the problem class (see Appendix A for more details).

It is convenient for what follows to present an equivalent formulation of the ground-state problem in which the energy is computed over the set of expectation values of products of spin variables, such as the σ_i and σ_i σ_j that appear in the Hamiltonian, instead of spin variables directly. The connection with the original optimization problem (2) is made by introducing the notion of a state of an N-spin system as a generic probability distribution P over the set of configurations {−1, 1}^N. For every function f : {−1, 1}^N → R we can then define its expectation value in the state P, denoted by

⟨f⟩_P := ∑_{σ ∈ {−1,1}^N} P(σ) f(σ).    (3)

In the following, whenever the connection to a specific physical state P is not evident, we simply denote expectation values by ⟨f⟩. With this notation, the equivalent energy minimization problem reads

E_g = min_P ⟨H⟩_P,    (4)

where the minimization runs over all states P. We call any state P that is supported only on the ground-state space (i.e., the collection of all configurations that achieve the ground-state energy) of a model a ground state, and such P are manifestly those that achieve the minimal possible expectation value for the Hamiltonian, i.e., min_P ⟨H⟩_P = E_g. Optimizing over probability distributions instead of spin configurations requires a similar exponential effort in the system size. Nonetheless, it opens the way for a relaxation in terms of an SDP problem, which is one of the main ingredients of the method we present in the following Section.

III. THE CHORDAL BRANCH AND BOUND ALGORITHM

In this section we describe the main tools we build upon to devise the CBB algorithm. First, we present the SDP relaxation we exploit to obtain both a lower and an upper bound to the ground-state energy. These bounds define an energy confidence region in which the unknown ground-state energy provably lies. Then we introduce the branch-and-bound procedure as a tool to systematically improve this confidence region. Lastly, we describe the chordal extension method that exploits the sparsity of relevant physical models in order to increase the scalability of the SDP. Technical details of the algorithm and its implementation can be found in Appendices B and C.
A. SDP Relaxations

As mentioned in the previous Section, the ground-state minimization problem can be relaxed to obtain an efficient method to derive lower bounds on the ground-state energy through the formulation in terms of expectation values (4). Specifically, we use a method pioneered by Lasserre [15,16], which relaxes the polynomial optimization over any distribution P into an SDP. Let us consider a vector x := (x_α)_{α=1}^k of monomials of the spin variables. For any state P we define its moment matrix Γ(P) with respect to x as the k × k matrix of expectation values Γ_{αβ}(P) := ⟨x_α x_β⟩_P. Any such moment matrix Γ(P), being defined via an outer product, is manifestly positive semidefinite, i.e., Γ(P) ⪰ 0, and, depending on what the elements of x are, it further obeys certain linear constraints, which follow from the fact that the monomials are made of spin variables. More precisely, the constraints reflect the two basic properties of these variables, namely that they take dichotomic values σ_i ∈ {−1, 1} and commute with each other. This leads to conditions such as, for instance, ⟨σ_i σ_j σ_i⟩_P = ⟨σ_j⟩_P.

We illustrate this with an example: take as a generating set of monomials x the spin variables themselves together with the identity, i.e., x = {1, σ_1, . . . , σ_N}. The resulting Γ matrix takes the following form:

Γ = [ 1          ⟨σ_1⟩       ⟨σ_2⟩      . . .   ⟨σ_N⟩
      ⟨σ_1⟩      1           ⟨σ_1σ_2⟩   . . .   ⟨σ_1σ_N⟩
      ⋮                                 ⋱       ⋮
      ⟨σ_N⟩      ⟨σ_1σ_N⟩    . . .              1        ].    (5)

Notice how the expectation value of any Hamiltonian of the form given in (1) can be expressed as a linear function of the entries of Γ, given by tr(h Γ), where h is a matrix defined by the system Hamiltonian. A lower bound to its ground-state energy can then be obtained by minimizing tr(h Γ) over all positive semidefinite matrices Γ that fulfil the linear constraints discussed above, expressed also as linear functions of the entries of Γ in terms of some matrices F_m,

minimize_Γ  tr(h Γ)
subject to  Γ ⪰ 0,  tr(F_m Γ) = 0  ∀m.    (6)

This defines an SDP relaxation of the problem, since not every such positive matrix Γ satisfying the linear constraints encapsulated by the matrices F_m necessarily arises as a moment matrix Γ(P) of a physical state. In contrast with the original minimization, the presented SDP can be solved efficiently, since the number of variables involved in the moment matrix scales only quadratically with the number of spins.

Interestingly, from the solution of the considered relaxations one can also extract a spin configuration with no additional computational cost. Let Γ* be the optimal solution to the SDP. We can associate to it a configuration σ* by taking the sign of the entries in that matrix that correspond to the expectation values ⟨σ_i⟩, namely set σ*_i := sign(Γ*_{1,i+1}). The energy of that configuration, H(σ*), clearly provides an upper bound to the ground-state energy.

Moreover, the approximation to the exact ground-state energy can be improved by considering the moment matrix generated by the vector x^(ν) of all monomials of spin variables of degree up to ν. For every such vector, it is possible to construct the corresponding moment matrix Γ^(ν) and solve the corresponding relaxation, as in (6). This process defines a hierarchy of relaxations, ordered according to the degree ν of the considered monomials, that provides an asymptotically converging series of tighter and tighter lower bounds on the ground-state energy (see Appendix B for more details). All the steps in the hierarchy are efficient, as they define SDP problems involving matrices that scale polynomially with the number of spins.
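To make the first level of the hierarchy concrete, the sketch below builds the moment matrix (5) as an SDP variable and solves the relaxation (6) with the generic modelling package cvxpy. This is an illustration under stated assumptions rather than the authors' pipeline (the paper's relaxations are generated with Ncpol2sdpa and solved with Mosek); at level 1 the only linear constraints are the unit diagonal following from σ_i^2 = 1.

```python
# Level-1 Lasserre relaxation of the Ising ground-state problem, Eqs. (5)-(6).
# Simplified illustration with cvxpy; not the authors' Ncpol2sdpa/Mosek pipeline.
import cvxpy as cp
import numpy as np

def level1_bounds(J, h):
    """Return (lower bound, upper bound, rounded configuration) for H in Eq. (1)."""
    n = len(h)
    G = cp.Variable((n + 1, n + 1), symmetric=True)        # candidate moment matrix Gamma
    constraints = [G >> 0, cp.diag(G) == np.ones(n + 1)]    # Gamma >= 0 and <sigma_i^2> = 1
    # <H> as a linear function of Gamma: <sigma_i> = G[0, i+1], <sigma_i sigma_j> = G[i+1, j+1].
    energy = sum(J[i, j] * G[i + 1, j + 1] for i in range(n) for j in range(i + 1, n))
    energy = energy + sum(h[i] * G[0, i + 1] for i in range(n))
    lower = cp.Problem(cp.Minimize(energy), constraints).solve()
    # Rounded configuration sigma*_i = sign(Gamma*_{1,i+1}) gives the complementary upper bound.
    sigma = np.sign(G.value[0, 1:])
    sigma[sigma == 0] = 1.0
    upper = sum(J[i, j] * sigma[i] * sigma[j] for i in range(n) for j in range(i + 1, n))
    upper += float(h @ sigma)
    return lower, upper, sigma
```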
B. Branch and bound

The lower and upper bounds derived through the previous SDP relaxation can be combined with a so-called branch-and-bound (BB) technique to obtain a series of complementing bounds converging to the exact solution. This is a general iteration strategy that has been applied in several different ways (see, for instance, Ref. [17] for a review). The main ingredient of a BB iteration is the branching procedure, which consists in dividing the original problem into two sub-problems that correspond to the opposite cases of a dichotomic choice. In the ground-state problem, it can be done by choosing a spin i and considering the two subsets of spin configurations that have σ_i = ±1 fixed. Finding the ground state in both subcases can be cast as another ground-state problem for a modified graph where the vertex i has been removed and the couplings have been modified accordingly. Obviously, the value of the original ground-state energy is just the minimum between the solutions of the two sub-cases.

The trick is now to use the upper and lower bounds to reduce the number of branches to explore. The BB procedure does that as follows:

(i) start with the original graph and compute a lower and an upper bound z_L, z_U to the ground-state energy, in our case using the previous SDP relaxation;
(ii) if the bounds differ, choose a branching and compute lower and upper bounds for the two subcases;
(iii) keep track of the best upper bound z̄_U encountered so far and discard all the explored branches in which the lower bound is higher than z̄_U;
(iv) from the reduced list of branches, pick the one corresponding to the lowest z_L; if it still differs from the best upper bound z̄_U, go back to point (ii) and perform another branching;
(v) keep repeating until the lowest z_L and the best upper bound z̄_U coincide.

Although the BB procedure always converges to the solution, it may require an exponential number of steps when implemented on hard instances. Yet, the method presents two important properties: (i) it provides a constantly improving energy range for the ground-state energy and (ii) at all steps, it is known whether the searched solution has been reached, possibly up to numerical precision, and no more steps are needed. In fact, we have observed that in many situations the algorithm can be stopped after a few steps because it has been able to find the solution. Fig. 1 shows a typical instance of the BB procedure in which the lower and upper bounds converge after a few iterations. More details about the implementation of the BB procedure can be found in Appendix C.
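A compact sketch of this loop is given below; it treats the bound computation as a black box (for instance the level-1 relaxation sketched above) and uses a simple branching heuristic, both of which are illustrative choices rather than the exact rules used in the paper.

```python
# Schematic branch-and-bound following steps (i)-(v); illustrative, not the paper's code.
# `bounds(J, h)` must return (z_L, z_U) for the subproblem defined by (J, h).
import heapq
import itertools
import numpy as np

def fix_spin(J, h, i, value):
    """Fix sigma_i = value; fold its couplings into the remaining fields, return the energy offset."""
    keep = [k for k in range(len(h)) if k != i]
    J_new = J[np.ix_(keep, keep)]
    h_new = h[keep] + value * (J[i, keep] + J[keep, i])
    return J_new, h_new, value * h[i]

def branch_and_bound(J, h, bounds, tol=1e-6):
    tie = itertools.count()                                  # tie-breaker for the heap
    z_L, z_U = bounds(J, h)                                  # (i) bounds for the original problem
    best_upper = z_U
    heap = [(z_L, next(tie), 0.0, J, h)]
    while heap:
        z_L, _, off, Js, hs = heapq.heappop(heap)
        if z_L >= best_upper - tol:                          # (v) lowest lower bound meets best upper bound
            break
        i = int(np.argmax(np.abs(hs)))                       # (ii) branching spin (simple heuristic)
        for value in (-1, +1):
            Jb, hb, doff = fix_spin(Js, hs, i, value)
            lb, ub = (0.0, 0.0) if len(hb) == 0 else bounds(Jb, hb)
            best_upper = min(best_upper, off + doff + ub)    # (iii) best upper bound so far
            if off + doff + lb < best_upper - tol:           # (iii) prune dominated branches
                heapq.heappush(heap, (off + doff + lb, next(tie), off + doff, Jb, hb))
    return best_upper                                        # equals E_g (within tol) at termination
```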
C. Chordal extension

The last ingredient in our construction uses the fact that, for the considered spin systems with local interactions, the optimisation problem defined by the Hamiltonian (1) is sparse. As shown in Refs. [18,19], one can exploit this sparsity to derive a similar relaxation that is more scalable than the previous one. Intuitively, the idea behind the modification is the following: for any pair of non-interacting spins i, j, the corresponding two-body expectation value ⟨σ_i σ_j⟩ is not needed for the computation of the energy. Thus, a moment matrix with all two-body correlations includes some potentially unnecessary information. Finding the minimal set of moments that is sufficient to effectively constrain the optimisation of the energy helps in defining a more efficient, and therefore scalable, relaxation.

To illustrate the method, let us suppose that the graph G of the problem is already chordal. A graph is said to be chordal if all its cycles of four or more vertices have a chord, i.e. an edge that is not part of the cycle but connects two vertices of the cycle. If G is not chordal, it is always possible to associate to it a so-called chordal extension G_C by properly adding some edges (see Fig. 2 for an example). Notice that the chordal extension of a graph is not unique, but a chordal extension can always be found, because the fully connected graph is chordal. For a chordal extension to be useful for our CBB method it needs to remain relatively sparse, as in the example presented here. In Appendix B we provide a general polytime technique to find good chordal extensions for all the cases studied in this paper. Once the chordal extension has been derived, one can then introduce a relaxation where the original Γ matrix is replaced by a direct sum of smaller matrices Γ_l, constructed only from the spin variables belonging to each of the n_C maximal cliques, i.e. fully connected subgraphs of maximal size, of G_C. Note that some spin variables appear in more than one clique, which means that the SDP does not completely decouple into separate SDPs for each block Γ_l, so the optimization involves moments appearing in multiple blocks.

As an illustration, let us go back to the previous relaxation and suppose one wants to solve the 1D Ising chain

H(σ) = ∑_{i=1}^{N−1} J_{i,i+1} σ_i σ_{i+1} + ∑_{i=1}^{N} h_i σ_i.

The corresponding dependency graph G is already chordal and is composed of the N − 1 cliques C_i = {σ_i, σ_{i+1}}, with i = 1, . . . , N − 1. Then, the matrix (5) can be substituted by a direct sum of the N − 1 blocks

Γ_{C_i} = [ 1             ⟨σ_i⟩           ⟨σ_{i+1}⟩
            ⟨σ_i⟩         1               ⟨σ_i σ_{i+1}⟩
            ⟨σ_{i+1}⟩     ⟨σ_i σ_{i+1}⟩   1            ].

Unnecessary expectation values in (5), such as ⟨σ_i σ_{i+2}⟩, no longer appear in any of the smaller blocks Γ_{C_i}, but all the expectation values that are needed to define the Hamiltonian as a linear function of the moment matrices are still present. This simplification is particularly useful because it can significantly reduce the number of variables involved in the SDP, and it reduces the size of the largest positive semidefinite block, which dramatically reduces the computational footprint of the numerical algorithms solving the optimization. In this example, we go from a matrix whose size increases quadratically with the number of spins to a linearly increasing set of constant-size matrices. In general, the constraints between different blocks do not allow the problem to be split into n_C independent ones, but one can still see that the scaling of the computational effort is dominated by the size of the largest block alone (see Appendix B for more details on the chordal extension hierarchy).

As an illustration of the gain in scalability provided by the chordal extension, we compare the performance of the CBB method on a sparse Ising problem in the two cases of exploiting and not exploiting the chordal extension. As a benchmark of a sparse instance, we consider the standard 2D ferromagnetic Ising model in a statically disordered magnetic field (quenched disorder) that is picked independently from normal distributions of mean zero and variance σ for each site. Similar results are obtained for other models with local interactions. As a function of the disorder strength σ, the model undergoes a phase transition from a ferromagnetic ground state (in which all spins are aligned with each other) to a disordered phase (in which, for extremely large disorder, the spins are aligned with the local magnetic fields). For this model it is known that the ground state can in principle be found in polynomial time (see Appendix A).
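Before turning to the numerical comparison, note that chordal extensions and their maximal cliques can be computed with standard elimination-ordering heuristics. The sketch below uses a greedy minimum-degree ordering; this is a generic illustrative choice and not necessarily the polytime technique described in Appendix B.

```python
# Chordal extension via a greedy minimum-degree elimination ordering (illustrative sketch).
# `vertices` is a list of hashable labels; `edges` a set of two-element frozensets.
def chordal_extension(vertices, edges):
    adj = {v: set() for v in vertices}
    for e in edges:
        u, w = tuple(e)
        adj[u].add(w)
        adj[w].add(u)
    extended = set(edges)
    cliques = []
    remaining = set(vertices)
    while remaining:
        v = min(remaining, key=lambda u: len(adj[u] & remaining))   # minimum-degree vertex
        neigh = adj[v] & remaining
        cliques.append(frozenset({v} | neigh))                      # candidate clique of G_C
        for a in neigh:                                              # fill-in: make the neighbourhood a clique
            for b in neigh:
                if a != b and b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    extended.add(frozenset({a, b}))
        remaining.remove(v)
    maximal = [c for c in cliques if not any(c < d for d in cliques)]
    return extended, maximal

# Example: a 4-cycle acquires one chord and decomposes into two triangular cliques;
# one moment-matrix block would then be built per element of `cliques_C`.
verts = [0, 1, 2, 3]
cycle = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
edges_C, cliques_C = chordal_extension(verts, cycle)
```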
Indeed, the non-chordal BB method is able to find the solution with an effort that scales roughly with an N^5 dependence, see Figure 3. However, and especially in the interesting region close to the phase transition, fast-growing memory requirements and runtime make the method impractical for systems that are larger than 15×15 on the hardware we have at our disposal. The CBB method, in contrast, allows us to solve systems of over 35×35 sites on the same hardware, due to both lower memory requirements and a very significantly reduced runtime, both in terms of absolute numbers and in terms of scaling (see Figure 3 for a comparison). When using the chordal extension, the method scales roughly as N^3, as opposed to the N^5 dependence without it.

FIG. 3 (caption). The problem is finding the ground-state energy for a 2D Ising ferromagnetic model with a random Gaussian magnetic field, close to the phase transition at σ = 1.5, i.e. where the ground state is already partially disordered. The time estimate was averaged over 100 disorder realizations, except for the largest system size, where the averaging was reduced to 10 samples. Due to the large amount of disorder averaging, we limit ourselves to system sizes L ≤ 15, far below the maximum sizes we can tackle on our hardware. The comparison is shown on a linear scale and double-logarithmically in the inset. The dashed lines in the inset are power laws of the form L^10 and L^6, demonstrating the claimed polynomial scaling of the runtime: N^5 for BB vs. N^3 for CBB.

IV. VERIFYING THE SOLUTION OF A QUANTUM ANNEALER

Once all the ingredients of the method have been presented, we now turn to the main part of our work and show how to apply our approach to verify the results of energy minimizations performed on an actual quantum annealing device. Numerical computations in this work were run on a workstation with an Intel Xeon E5-1650v4 processor with six physical cores clocked at 3.60 GHz base frequency and 128 GByte RAM. Due to the polynomial scaling of the method, much larger system sizes can be reached with more powerful hardware. The sparse semidefinite relaxations were generated by Ncpol2sdpa [20], and the semidefinite programs were solved by Mosek [21]. The code for the experiments is available under an open source license [22].

A. Verifying solutions for a triangular graph

To show the flexibility of the CBB method and also to verify a quantum annealing solution for the largest system size simulable on a state-of-the-art annealer, we considered a 2D triangular lattice. In fact, spin models on triangular lattices display a wealth of interesting physical phenomena, many driven by the possibility of frustrated interactions. To remain in a regime that is comparable to the benchmarking we did before, we however concentrate on the interplay of ferromagnetic interactions with a disordered magnetic field (for a numerical analysis of the corresponding phase transition, see Appendix D). We used a D-Wave 2000Q quantum annealer with 2040 functional qubits. The chip had 8 faulty qubits, and the corresponding couplings were removed from a full-yield 16 × 16 Chimera graph. We used the virtual full-yield Chimera graph abstraction to ensure consistent embeddings and improve the quality of the results. The coupling strengths were automatically scaled to the interval [−1, 1], and the logical qubits used a coupling strength of −2 to hold a chain of physical spins together. The minor embedding was a heuristic method, yielding a chain length of 7.
We also tried chains up to length 22, without significant change in the results, showing that the scaling in the couplings ensures that the chains do not break. For each data point, we drew a thousand samples and chose the one with the lowest energy as the optimum. This takes constant time irrespective of the values set, in the range of milliseconds. The flux bias of the qubits was not offset. Both the quantum annealer and the CBB simulation were run for the same disorder realizations (the disorder in the annealer is fully programmable) to obtain directly comparable results.

We observe that there are indeed some cases in which, even after 1000 repetitions, the lowest energy found by the quantum annealer is still higher than the exact value computed by CBB. Here, the optimal spin configuration found by the quantum annealer typically differs markedly from the output of our method, which indicates that the quantum device probably got stuck in a local minimum. Interestingly, our method is also able to certify that, for some disorder realizations, the quantum annealer is able to find the exact ground-state energy. It does that typically in a very short time. This is true even for intermediate disorder strengths, around σ = 1.5, where the ground-state spin pattern shows macroscopic islands of aligned spins whose precise shape and positions depend non-trivially on the disorder realization, see Fig. 4. Verifying that the annealer did reach the correct solution is impossible with the standard variational approaches used so far and clearly demonstrates the relevance of the introduced CBB method for the benchmarking of quantum optimizers.

B. Towards the verification for a Chimera graph

Lastly, we consider the application of CBB to a denser graph. For this purpose we choose the Chimera architecture [23], which is the natural graph on the D-Wave 2000Q hardware. The corresponding graph is composed of K_{4,4} fully connected bipartite unit cells, consisting of 8 spins (4 horizontal and 4 vertical) with edges between each horizontal/vertical pair. These unit cells are arranged to form a 2D square lattice of size L and a total number of N = 8L^2 spins. Because of the in-cell connectivity, such a graph is clearly non-planar and thus has the potential to encode NP-hard Ising models. Even though the Chimera graph is non-planar and denser than the 2D rectangular and triangular lattices, using the chordal extension still gives a remarkable advantage, allowing us to reach system sizes of L = 9, compared to just L = 5 (on the same hardware) for an SDP-based BB method without the chordal extension. Although the D-Wave 2000Q quantum annealer currently implements a Chimera graph with 2040 functional physical qubits, they are seldom actually used as logical spins. Most recent studies encode each K_{4,4} cell as a single logical spin, in order to suppress errors due to the finite size and qubit quality of the system [10]. This results in effectively solving Ising models on a 2D square lattice which, being planar, is actually proven to be polynomially solvable (cf. Appendix A). The numerical test was performed on the actual Chimera graph. This opens up the way to benchmarking future annealing devices, once their physical qubit quality has improved to a point that makes the individual spins useful, in the much more interesting regime of non-planar graphs.

V. COMPARISON WITH OTHER VERIFICATION METHODS

The problem of verifying a quantum optimizer is very rich and has many interesting sides.
We can identify two relevant players in the problem: the provider and the user. A first problem consists of verifying that the quantum hardware performs some form of quantum process, say quantum annealing, with no classical analogue or that is hard to simulate classically. This type of verification is available to the provider when constructing and testing the device, but generally impossible for the user. Here, the verification methods must be quantum specific. The provider may tomographically reconstruct the different quantum steps in the optimization, or may simply want to certify that the device, when solving the optimization problem, generates a large amount of some given quantum properties, such as coherence, entanglement or quantum non-locality. This may be direct evidence of some form of "quantumness", but it does not guarantee that the device performs better than a classical approach, as at the moment it is not clear which quantum properties, if any, could provide a quantum computational advantage.
FIG. 4 (caption). Notice that in all the instances considered here the confidence region provided by CBB converged in a few steps and the algorithm returned the exact ground-state energy. Right: comparison of the corresponding ground-state spin configurations, both for a case where D-Wave achieves the lowest energy and for two cases where it does not. Yellow spins are +1 and black ones −1. It can be seen clearly that, even when the energies provided by the two methods are close, the corresponding spin configurations can be very different. This shows that the excited state that the D-Wave quantum annealer returns does not necessarily resemble the globally optimal solution.
Moreover, it is difficult to see how this approach could detect instances where the quantum device gets stuck in a local minimum, the typical challenge in the considered optimization problems.

Our work falls into a second class of methods, which attempt to verify the device only from the outputs it produces, without any reference to the quantum process that gave rise to it. Note that, being based only on a classical output, none of these methods is quantum specific and all can be applied to any optimizer, classical or quantum. This is the type of verification that is more relevant to the user. For optimization problems, this verification will mostly be based on heuristics: most of the relevant problems are NP-hard and it is expected that they will remain hard also for quantum computers. Yet, one cannot exclude the possibility that quantum devices might eventually give better solutions than any classical solution "on average", or at least for some families of practically relevant problems. In fact, the search for a quantum advantage is one of the main focuses of research on quantum algorithms for classical optimisation problems. One can identify three ways of verifying the outputs of quantum optimizers (or, as said, any optimizer).

Planted solutions: in this approach, an optimizer is run on problems for which the solution is known [13,24]. This is a way of testing that the considered algorithm is able to obtain the expected solution, in the hope that it will perform equally well for other problems for which the solution is unknown. In our opinion, this approach is especially relevant for the development of quantum optimizers, for instance to understand which instances are difficult for them. But it also has clear limitations. First, the considered optimization problems are so diverse that it is conceivable that a device may not be able to find the exact solution for problems where it is known in advance while giving reasonably good results for other problems of relevance. Second, testing a device on problems for which the solution is known cannot lead to any quantum advantage by definition. Therefore, if any quantum advantage were to be demonstrated, it would never be by running a quantum device on problems with a planted solution.

Variational methods: we group in this class any approach providing a candidate solution, not necessarily the optimal one, to the problem. By definition, these approaches provide upper bounds to the searched solution, as does the quantum optimizer. Simulated annealing [11] or tensor-network algorithms [12] are notable classical examples of this approach. If the value provided by the quantum optimizer, E_q, is larger than the best value obtained by one of these classical approaches, E_c, one can certify that the quantum device has not reached the searched solution, E_g. Classical variational methods have a long tradition and can deal with very large systems, at the moment larger than those available for quantum optimizers. If, however, quantum optimizers ever reach a quantum advantage, one will encounter a situation in which they will be able to provide the smallest available upper bound to the solution.
In the resulting scenario, where E_g ≤ E_q < E_c, the limitations of classical variational methods are clear: it is hard to determine whether the better performance of quantum optimizers is an indication of their intrinsically higher computational power or of the lack of a more efficient classical optimization algorithm.

Relaxations: this is the approach considered in this work and, more generally, we refer here to any method providing a lower bound to the searched ground-state energy (note, however, that our approach can also be used to provide an upper bound with the same computational effort). These methods have much less history and cannot reach, at the moment, problem sizes comparable to variational methods. However, and because of their complementarity, they have clear merits. As shown by our demonstration using the D-Wave machine, these methods can give a termination criterion for any optimisation heuristics, certifying that the searched solution has been reached and no more rounds are needed. But even when a gap remains, the lower bounds obtained through relaxations can be complemented with the best upper bound, be it quantum or classical, and provide an energy range in which the solution provably lies. Clearly, no quantum advantage is possible for those problems where the obtained lower bound matches the solution obtained by a classical optimiser. On the contrary, problems where there is a gap between the best known classical solution and the lower bound may be good candidates when searching for a quantum advantage. Finally, if the bound only matches the output produced by a quantum device, no classical approach will ever provide a strictly better solution. In our opinion, these properties make this approach valuable and, in particular, especially relevant to advance the study of and search for a quantum advantage in classical optimization problems.

VI. CONCLUSIONS

We introduced the chordal branch-and-bound (CBB) method that uses a hierarchy of efficiently computable upper and lower bounds on the ground-state energy of classical spin systems and exploits the physical locality structure of relevant Hamiltonians. Our numerical results show that the iterative branch-and-bound process often converges, providing an exact and certified value for the ground-state energy, together with a ground-state configuration. Even for those cases where convergence is not met, the method always provides, with a polynomial effort, an energy range where the searched solution provably lies. Moreover, the obtained lower bound can be used to benchmark the output of both quantum and classical optimization methods. We focused here on its use to assess the quality of current quantum annealers beyond their comparison to classical variational approaches. In particular, we were able to identify instances where the quantum optimizer reached the actual ground-state energy without resorting to planted-solution problems. To our knowledge, to date this is the only approach providing this type of certification. It would definitely be interesting to explore the performance of our method on other relevant Ising models that are proven to be NP-hard. The complexity of the model might result in an exponential number of branch-and-bound steps to achieve convergence to the ground-state energy. However, let us stress that NP-hardness is a worst-case feature, hence it might be the case that an efficient convergence to the ground states is achievable on average even for some NP-hard models.
In this respect, CBB can be essential to provide numerical evidence that identifies which Ising models are uniformly hard among the NP-hard ones. From a general viewpoint, the recent progress of quantum computing devices [14] urges the development of methods to benchmark them. Our work is an example of such an effort and we are confident that the approach adopted here for benchmarking, based on relaxations of the initial problem, may find applications in other similar contexts.

VII. ACKNOWLEDGEMENTS

A.A., F.B. and C.G. thank P.W. for many insightful discussions and his passion for research. We also acknowledge useful discussions with J. Tura and M. Navascués. This work is supported by the Spanish Ministry MINECO (QIBEQI FIS2016-80773-P and TRANQI PID2019-106888GB-I00, Severo Ochoa SEV-2015-0522 and PhD fellowship), the European Union's Marie Skłodowska-Curie Individual Fellowships (IF-EF) programme under GA: 700140, the Generalitat de Catalunya (CERCA Program, QuantumCAT and SGR 1381), Fundacio Privada Cellex and Mir-Puig, computational resources granted by the High Performance Computing Center North (SNIC 2015/1-162 and SNIC 2016/1-320), and a hardware donation by Nvidia, the Perimeter Institute for Theoretical Physics, the Government of Canada through Industry Canada and the Province of Ontario through the Ministry of Economic Development and Innovation, the ERC CoG QITBOX, AdG CERQUTE and the AXA Chair in Quantum Information Science.

Appendix A: Complexity of finding Ising ground states

There is a wealth of results on the worst-case complexity of finding the ground state of various Ising models [25]. Here, "worst-case complexity" is the complexity of the hardest instances within a class of families of problems of increasing size. How hard it is to solve the ground-state problem of such a family varies with the interaction graph and can crucially depend on seemingly unimportant details. We consider Hamiltonians that are polynomials (with fixed finite degree and finite-precision coefficients) in the spin variables, of the form

H(σ) = Σ_{(i,j) ∈ E(G)} J_{i,j} σ_i σ_j + Σ_{i ∈ V(G)} h_i σ_i,   (A1)

where G is the interaction graph, the J_{i,j} are coupling constants and the h_i are local fields. Finding the ground state of Hamiltonians of the form (A1) for arbitrary graphs is in general NP-hard [26], even without any fields. This is still true for J_{i,j} ∈ {−1, 0, 1} and G a finite 3D cubic grid graph, and even for G a cubic two-layer 2D grid [26]. In contrast, for planar graphs G and without local fields, the ground state can be found efficiently even without the restriction J_{i,j} ∈ {−1, 0, 1} [25]. With the restriction J_{i,j} ∈ {−1, 0, 1} this even holds for toroidal graphs (grids on a torus, i.e., systems with periodic boundary conditions) [25]. Similarly, if J_{i,j} ≥ 0, then even some systems with local fields can be solved in polynomial time [25]. On the other hand, for general planar graphs with interactions J_{i,j} = 1 and uniform external field h_i = 1, finding the ground state is again NP-hard [26]. Ref. [25] contains a list of further concrete models whose complexity is either known to be in P or proven to be NP-hard. These hardness results are typically obtained by reducing the ground-state problem to the so-called max-cut problem, which is known to be NP-hard. The polynomial-time algorithms that solve the other families of systems, in turn, work by finding perfect matchings [26] or rely on so-called max-flow/min-cut methods [25].
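To make the object under discussion concrete, the following minimal Python sketch evaluates a Hamiltonian of the form (A1) and finds its ground state by exhaustive enumeration; the 2^N cost of this enumeration is exactly what makes the relaxations of Appendix B attractive. The sign convention, the array layout of J and h, and the function names are illustrative choices, not taken from the source.

```python
import itertools
import numpy as np

def ising_energy(spins, J, h):
    """Energy of a +/-1 configuration under H = sum_{i<j} J[i,j]*s_i*s_j + sum_i h[i]*s_i."""
    s = np.asarray(spins, dtype=float)
    return float(s @ np.triu(J, k=1) @ s + h @ s)

def brute_force_ground_state(J, h):
    """Exact ground state by enumeration; feasible only for small N (2^N configurations)."""
    n = len(h)
    best_energy, best_config = np.inf, None
    for config in itertools.product((-1, 1), repeat=n):
        energy = ising_energy(config, J, h)
        if energy < best_energy:
            best_energy, best_config = energy, config
    return best_energy, best_config

# Example: antiferromagnetic triangle with no fields, a small frustrated instance.
J = np.zeros((3, 3)); J[0, 1] = J[0, 2] = J[1, 2] = 1.0
h = np.zeros(3)
print(brute_force_ground_state(J, h))
```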
Appendix B: Ingredients of the chordal branch-and-bound method

In this Appendix we describe in detail the Lasserre hierarchy and the chordal extension method used to derive lower bounds to the ground-state energy of a classical Hamiltonian. To make this part self-contained, we revise the general method using the notation used in the main text, while making comparisons to the more typical framework of polynomial optimisation. Moreover, for the sake of completeness we will repeat some notions that have already been introduced in the main text. We begin by recalling that a state of an N-spin system is a probability distribution P over the set of configurations {−1, 1}^N. Expectation values of a function f for the state P are denoted by ⟨f⟩_P, or simply by ⟨f⟩ when the state in question is clear from the context. The ground state is defined by any state P achieving the minimal possible expectation value for the Hamiltonian, that is,

E_g = min_P ⟨H⟩_P = min_{σ ∈ {−1,1}^N} H(σ).   (B1)

The hierarchy of lower bounds

Let x^(ν) be the vector of all monomials of the spin variables of degree up to ν. For such a vector and a state P we define its moment matrix Γ^(ν)(P) as the k × k matrix of expectation values with entries Γ^(ν)(P)_{a,b} = ⟨x^(ν)_a x^(ν)_b⟩_P. It is not hard to see that for any state P, any moment matrix Γ^(ν)(P) is positive semidefinite, i.e., Γ^(ν)(P) ⪰ 0, and that, depending on what the elements of x^(ν) are, it further obeys certain linear constraints that reflect the two basic properties of classical spin variables: taking dichotomic values σ_i ∈ {−1, 1} and commuting with each other. These properties imply relations among different expectation values such as, for example, ⟨σ_i σ_j σ_i⟩_P = ⟨σ_j⟩_P, which can be expressed in terms of some matrices F_m as tr(F_m Γ^(ν)(P)) = 0. For a sufficiently large value of ν, the corresponding moment matrix Γ^(ν) contains all the expectation values needed for the computation of the energy, in the sense that there is a matrix h (depending on the J_{ij} and h_i in case the Hamiltonian is of the form (A1)), such that for any physical state P it holds that ⟨H⟩_P = tr(h Γ^(ν)(P)). If this is the case, for that given ν, one can relax the ground-state problem by minimising the energy over all matrices Γ^(ν) that are positive semidefinite and fulfil the above-mentioned linear constraints, rather than over those that can actually arise from a physical state P. The resulting optimisation problem reads

E^(ν)_g = min_{Γ^(ν)} tr(h Γ^(ν))  subject to  Γ^(ν) ⪰ 0  and  tr(F_m Γ^(ν)) = 0 for all m ∈ {1, . . . , k}.   (B2)

The solution E^(ν)_g of this minimisation problem provides a lower bound on the true ground-state energy, i.e., E^(ν)_g ≤ E_g. This follows from the fact that the set of all matrices Γ^(ν) that are positive semidefinite and fulfil the above-mentioned linear constraints is larger than the set of moment matrices that can actually arise from a physical state P. It is further obvious that the E^(ν)_g values are ordered, in the sense that E^(ν)_g ≤ E^(ν+1)_g for any ν. Remarkably, if all the relevant F_m are taken into account, the bounds actually converge to the true ground-state energy for any fixed finite system size and Hamiltonian H, in the sense that lim_{ν→∞} E^(ν)_g = E_g [16,27]. This does not mean that convergence can only be attained in the limit. As a matter of fact, there are situations in which convergence is attained at a finite value of ν. To illustrate the first levels of the hierarchy, let us go back to the example presented in the main text and its moment matrix. It is clear now that that moment matrix represents the hierarchy (B2) at level ν = 1.
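A minimal sketch of the level-1 relaxation (B2) is given below, assuming the cvxpy modelling library with an SDP-capable solver (such as SCS) installed. At level 1 the monomial vector is (1, σ_1, ..., σ_N), so the constraints σ_i^2 = 1 reduce to a unit diagonal of the moment matrix; the function name, the sign convention and the layout of J and h are illustrative assumptions following (A1), not notation taken from the source.

```python
import numpy as np
import cvxpy as cp

def level1_lower_bound(J, h):
    """Level-1 relaxation of (B2): a certified lower bound E_g^(1) <= E_g for
    H = sum_{i<j} J[i,j]*s_i*s_j + sum_i h[i]*s_i."""
    n = len(h)
    gamma = cp.Variable((n + 1, n + 1), symmetric=True)  # rows/columns: (1, s_1, ..., s_N)
    constraints = [gamma >> 0,                            # positive semidefiniteness
                   cp.diag(gamma) == 1]                   # s_i^2 = 1 (the level-1 linear constraints)
    energy = sum(h[i] * gamma[0, i + 1] for i in range(n))            # entries <s_i>
    energy += sum(J[i, j] * gamma[i + 1, j + 1]
                  for i in range(n) for j in range(i + 1, n))          # entries <s_i s_j>
    problem = cp.Problem(cp.Minimize(energy), constraints)
    problem.solve()
    return problem.value, gamma.value

# Frustrated antiferromagnetic triangle: the exact ground-state energy is -1,
# while the level-1 bound is weaker, illustrating E_g^(1) <= E_g.
J = np.zeros((3, 3)); J[0, 1] = J[0, 2] = J[1, 2] = 1.0
lower_bound, _ = level1_lower_bound(J, np.zeros(3))
print(lower_bound)
```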
If we take the special case of N = 3 spins, the whole level-1 moment matrix reads

Γ^(1) =
[ 1          ⟨σ_1⟩       ⟨σ_2⟩       ⟨σ_3⟩     ]
[ ⟨σ_1⟩      1           ⟨σ_1σ_2⟩    ⟨σ_1σ_3⟩  ]
[ ⟨σ_2⟩      ⟨σ_1σ_2⟩    1           ⟨σ_2σ_3⟩  ]
[ ⟨σ_3⟩      ⟨σ_1σ_3⟩    ⟨σ_2σ_3⟩    1         ].

Going to level ν = 2 for a system of N = 3 spins, the corresponding moment matrix takes an analogous form, now generated by the seven monomials (1, σ_1, σ_2, σ_3, σ_1σ_2, σ_1σ_3, σ_2σ_3), with entries given by the expectation values of the products of the indexing monomials reduced via σ_i^2 = 1 (for instance, the entry indexed by σ_1σ_2 and σ_1σ_3 equals ⟨σ_2σ_3⟩). Matrix Γ^(1) is contained in Γ^(2) as a principal minor, the one generated by the first 4 rows and columns. This makes it so that the second level directly implies the constraints implied by the first one, while the non-negativity of a bigger moment matrix results in a generally more stringent test. Moreover, the expectation value of any Hamiltonian of the form given in (A1) can be expressed as a linear function of the entries of both moment matrices, ⟨H⟩ = tr(h Γ^(ν)), with each coupling J_{ij} multiplying the entry corresponding to ⟨σ_i σ_j⟩ and each field h_i multiplying the entry corresponding to ⟨σ_i⟩. As a comparison with previous similar methods, notice that the branch-and-bound technique introduced in Ref. [17] exploits a lower-bound method that is almost equivalent to the first level of the relaxation (B2), with the addition of some hand-crafted linear constraints. Indeed, the mentioned relaxation can be obtained by considering a moment matrix generated by the set of monomials composed of the spin variables only, without the identity operator. Hence, it results in the first level of the Lasserre hierarchy, diminished by the absence of the row of the matrix associated with the identity (the first row). In contrast, the hierarchy discussed here allows one to systematically construct an infinite family of increasingly precise relaxations that yield better and better bounds, at the price of an increasing computational cost.

Exploiting sparsity via the chordal extension

Depending on the kind of system considered, the optimisation problem defined by the Hamiltonian (A1) can be sparse. As shown in Refs. [18,19], we can exploit this sparsity to derive a more scalable relaxation than (B2). The method works as follows: take the dependency graph G of the problem and check if it is chordal. As mentioned in the main text, a graph is said to be chordal if all its cycles of four or more vertices have a chord, i.e., an edge that is not part of the cycle but connects two vertices of the cycle. If G is not chordal, construct a so-called chordal extension G_C of G by suitably adding edges until the graph is chordal. The chordal extension of a graph is not unique, but a chordal extension can always be found, simply because any fully connected graph is chordal. As will become clear in the following, for our purposes it is crucial to find a chordal extension that is still relatively sparse. A poly-time method that works well in this respect for all the cases studied here is to compute an approximate minimum-degree ordering of the graph nodes, followed by a Cholesky factorization of a positive semidefinite matrix with the associated sparsity pattern [28]. Once a specific chordal extension G_C is constructed, it will contain a number n_C of maximal cliques C_l ⊂ V. A clique, that is, a fully connected subgraph, is maximal if it cannot be extended by including any other adjacent vertex. Since the graph G represents a sparse Hamiltonian, and G_C is obtained from G by simply adding some edges, the function (A1) can be decomposed into a sum H = Σ_l H_{C_l} of terms that each contain only variables belonging to a given maximal clique C_l. One can then modify the optimisation problem in (B2) as follows: replace the big Γ^(ν) matrix by a direct sum of smaller matrices Γ^(ν)_l, one for each clique, constructed from the spin variables belonging to the clique C_l.
Some spin variables appear in more than one clique, which can be captured with additional linear constraints that involve variables of different blocks. The resulting block-structured relaxation (B6) minimises the energy, now written as a sum of contributions from the blocks Γ^(ν)_l, subject to each block being positive semidefinite, to the intra-block constraints tr(F_{m,l} Γ^(ν)_l) = 0 coming from the properties of the spin variables, and to the inter-block constraints G_{l,n} identifying expectation values belonging to different blocks. Interestingly, this relaxation still converges to the exact result [19]. Depending on the sparsity of the graph G (and its chordal extension G_C), substituting the original relaxation (B2) by (B6) leads to a substantial simplification and improved scaling in runtime and memory. In practical applications, the latter are typically dominated by the largest block, i.e., the largest maximal clique in G_C. Moreover, the block structure can be exploited to have a more finely tuned control of the lower-bound precision, essentially by replacing a general hierarchy level ν by a moment matrix with block-dependent levels ν_l. This allows one to define hybrid levels where, for instance, smaller blocks are generated at higher values of ν_l, thus improving the quality of the lower bound without significantly affecting the scalability of the SDP. Let us illustrate how this chordal extended relaxation works in practice by going back to the three-spin example introduced above and by considering the 1D Ising model with the Hamiltonian H = Σ_{i=1}^{2} J_{i,i+1} σ_i σ_{i+1} presented above. The corresponding dependency graph G is already chordal and is composed of two cliques, C_1 = {σ_1, σ_2} and C_2 = {σ_2, σ_3}. Since the blocks at level ν = 1 have been presented in the main text, here we show the relaxation at level ν = 2, where the matrix (5) can be substituted by the two blocks Γ^(2)_{C_1} and Γ^(2)_{C_2}, generated by the monomials (1, σ_1, σ_2, σ_1σ_2) and (1, σ_2, σ_3, σ_2σ_3), respectively. The constraints G_{l,n} derive from the fact that the variable σ_2 belongs to both cliques, hence several expectation values appear in both blocks. Some of the unnecessary expectation values in (5), such as ⟨σ_1 σ_3⟩ and ⟨σ_1 σ_2 σ_3⟩, no longer appear in the two smaller blocks Γ^(2)_{C_1} and Γ^(2)_{C_2}. Such a simplification is particularly useful because it reduces the number of variables involved in the SDP. Indeed, the resulting block structure implies that the only two-body expectation values ⟨σ_i σ_j⟩ that appear in the moment matrices correspond to spins i, j belonging to the same block. Hence, all the triangle inequalities that can actually be imposed have to involve triples i, j, k that appear in the same clique. The numerical effort for one step in the CBB method is mostly determined by the largest block in the moment matrix. Therefore, we choose to take a hybrid approach, introducing an intermediate level with ν = 2 for all the blocks Γ^(ν)_l involving fewer than n_t variables, while keeping all the bigger blocks at level 1. Taking such a hybrid level yields a significant improvement in the initial gap between the lower and upper bounds already for smaller values of n_t. In fact, we have checked numerically that this yields a test that corresponds at least to the case of level 1 plus the addition of all triangle inequalities between variables in the smaller blocks. Moreover, we also allow for additional triangle constraints between two-body correlations belonging to bigger blocks. In particular, we add them in an iterative way, as shown in Ref. [17], until the improvement in the lower bound is smaller than some numerical precision. In most cases we tested, there was actually no need to introduce these additional constraints.
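The chordal extension step described above can be sketched in Python as follows. This is a simplified greedy minimum-degree elimination rather than the approximate-minimum-degree-plus-Cholesky route of Ref. [28], and it assumes the networkx library; the fill-in edges added during elimination are what make the result chordal, and the maximal cliques of the extension define the blocks of the sparse relaxation.

```python
import networkx as nx

def chordal_extension(graph):
    """Chordal supergraph of `graph` obtained by greedy minimum-degree elimination.
    How sparse it stays (i.e., how small its largest maximal clique is) depends on the order."""
    extension = graph.copy()
    work = graph.copy()
    while work.number_of_nodes() > 0:
        v = min(work.nodes, key=work.degree)            # pick a minimum-degree vertex
        neighbours = list(work.neighbors(v))
        for i in range(len(neighbours)):                 # turn its neighbourhood into a clique
            for j in range(i + 1, len(neighbours)):
                work.add_edge(neighbours[i], neighbours[j])
                extension.add_edge(neighbours[i], neighbours[j])  # fill-in edge
        work.remove_node(v)
    return extension

# Dependency graph of a 4-cycle (not chordal); its extension gains one chord,
# and its maximal cliques give the blocks of the relaxation.
g = nx.cycle_graph(4)
g_c = chordal_extension(g)
print(nx.is_chordal(g_c), sorted(map(sorted, nx.find_cliques(g_c))))
```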
Upper bound

For the upper bound one needs a good guess for a spin configuration whose energy is close to the ground-state energy. In our case, this is straightforward: from the moment matrices Γ*^(ν)_l obtained from the solution of the SDP (B6), construct the configuration σ* where each spin σ*_i is aligned according to the sign of the entry corresponding to the expectation value ⟨σ_i⟩. Intuitively, this can be seen as a way to obtain the spin configuration "closest" to the optimal (but typically unphysical) solution achieved by the relaxation. A nice feature of our strategy for the upper bound is that it basically adds no extra computational cost, as it is derived directly from the moment matrix that is obtained by solving the SDP. The above method represents a substantial improvement over earlier approaches. Indeed, as described in Ref. [17], the moment matrix used in previous SDP relaxations is missing the first row and column vector, and thus exactly the entries we need to extract our deterministic configuration. That is why former approaches usually resort to performing a Cholesky decomposition of the moment matrices, Γ*_l = B_l^T B_l, and, by interpreting each row of the resulting matrices B_l as a vector v_{l,j}, assigning the deterministic values to the spin variables by taking scalar products of these vectors with a randomly chosen one. Notice that, apart from requiring the additional computational effort of having to perform a Cholesky decomposition, the above method is also hard to adapt to an SDP composed of several blocks, such as the one resulting from the application of the chordal extension technique. To conclude, once a valid spin configuration σ* has been extracted, we simply set the upper bound to be its corresponding energy H(σ*). Surprisingly, we noticed that by following this procedure, the exact ground state is usually recovered very early in the branching (see Fig. 1 in the main text for an example). It then takes additional time to find a matching lower bound to verify that this is indeed the lowest achievable energy. This makes us believe that our procedure is very efficient in finding the optimal deterministic configuration.

Branching rule

Here we follow the same method outlined in [17], but with a different choice of branching procedure. Indeed, the authors in [17] take the dichotomic choice to be the relative alignment of a pair of connected spins. That is, given a choice of indices i, j, the two branches correspond to the two cases σ_i ± σ_j = 0. However, as mentioned before, we prefer to branch on the value of a single spin, by choosing between the two values σ_i = ±1. The reason for this is that, in the latter case, the number of possible branching steps depends only on the number N of spins in the system. On the contrary, the former method involves a number of branching choices that depends on the number of edges in the dependency graph G, which can be much higher, often as high as N^2. For our branching strategy, there is the question of which spin i to choose for the next branching. The way we do this here is based on the expectation values ⟨σ_i⟩ recovered from the moment matrices Γ^(ν)_l and used for the construction of the upper bound.
The intuition is the following: spins with an expectation value close to zero are "difficult" choices, because flipping the value of such a spin is likely to lead to only a slight change in the energy of the system; conversely, expectation values very close to ±1 are identified as "easy" choices, because flipping such a spin is likely to lead to a significant change in the energy of the system, and it is easier to discard a branch during the evolution of the branch-and-bound process. We set the branching rule to "easy-first", that is, at the end of each optimisation round, the next branching is performed on the spin closest to deterministic in the Γ^(ν)_l.

Possible improvements

There is some freedom in the choices we outlined in the previous subsections. Given the huge difference in complexity that can be exhibited by various instances of the Ising model, we expect the optimal choice to be model dependent. Here we briefly discuss which modifications we imagine to be most useful for practical applications. Let us start by recalling that, in order to accelerate the convergence process and to keep memory requirements low, it is crucial to reduce the initial lower-upper bound gap as much as possible and as early as possible. One way to do that is to modify the hybrid hierarchy level introduced above. In our applications, it was always enough to set the threshold to at most n_t = 7. However, this value can be significantly increased without affecting too much the scalability of the CBB. Indeed, the main bottleneck of our method is the memory required to solve the largest SDP. This depends mainly on the block l* leading to the largest matrix Γ^(ν)_{l*}. Therefore, as long as increasing the level of the smaller blocks does not lead to matrices bigger than the one for the largest clique, the SDP will still have the same memory requirements, although the solving time will clearly increase. Other branching rules can also be adopted. For instance, one can replace the "easy-first" approach with a "difficult-first" one. In this case, one picks for the next branching the least deterministic spin in the Γ^(ν)_l. We expect the choice of the most effective branching rule to depend on the system under consideration. Lastly, there are instances in which CBB does not outperform other methods. This is true for specific cases of very sparse problems, where linear-programming relaxations were shown to work very well [29], or for some hand-crafted models for which exact polynomial algorithms are known [30]. Nevertheless, it would still be interesting to see whether one could combine the construction based on the chordal extension with those methods and provide some further advantage.
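A schematic rendering of the two rules described above (sign rounding for the upper bound, "easy-first" selection for the branching) is given below. The array `expectations` stands for the one-body expectation values ⟨σ_i⟩ read off the solved moment matrices; all names and the tie-breaking choice are illustrative assumptions rather than the source's exact procedure.

```python
import numpy as np

def rounded_configuration(expectations):
    """Deterministic spin guess for the upper bound: align each spin with the sign of
    its expectation value <s_i>; exact zeros are (arbitrarily) broken towards +1."""
    spins = np.sign(expectations)
    spins[spins == 0] = 1
    return spins.astype(int)

def next_branching_spin(expectations, already_fixed):
    """'Easy-first' branching: among the spins not yet fixed, pick the one whose
    expectation value is closest to deterministic (largest |<s_i>|)."""
    candidates = [i for i in range(len(expectations)) if i not in already_fixed]
    return max(candidates, key=lambda i: abs(expectations[i]))

# Illustrative values only: suppose the relaxation returned these <s_i>.
expectations = np.array([0.95, -0.10, -0.80])
print(rounded_configuration(expectations))       # -> [ 1 -1 -1]
print(next_branching_spin(expectations, set()))  # -> 0, the "easiest" spin
```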
2020-11-05T09:08:11.227Z
2018-08-03T00:00:00.000
{ "year": 2018, "sha1": "c43765efb5e4978a02114bfb3343b2ac0f08bbb5", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevResearch.2.043163", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "b2182b7c212b111286e14e8ef2b13a0ab3e0d9bf", "s2fieldsofstudy": [ "Physics", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Physics", "Mathematics" ] }
55225024
pes2o/s2orc
v3-fos-license
Executive and Non-Executive Cognitive Abilities in Teenagers: Differences as a Function of Intelligence

Intelligence and cognitive abilities, including executive functions (EF), have been addressed by psychometrics and cognitive psychology, respectively. Studies have found similarities and overlap among constructs, especially between EF and fluid intelligence (Gf). This study's aim was to investigate in teenagers: 1) the relationships among Gf, crystallized intelligence (Gc), cognitive, and executive abilities; and 2) the differences among groups with average, superior and very superior intelligence in regard to cognitive and executive functions. A total of 120 adolescents aged between 15 and 16 years old were assessed via IQ tests (the WISC-III and Raven's), EF tests (computer version of the Stroop Test, FAS Verbal Fluency Test, Trail Making Test - Part B), and cognitive-ability tests (Peabody Picture Vocabulary Test [PPVT], Repetition of Words and Pseudo Words Test, the Rey Complex Figure [Rey CF]). Low to moderate correlations were found among measures of intelligence and cognitive and executive functions. Even though interrelated, the measures seem to capture somewhat distinct aspects. Subsequently, the participants were divided into three groups according to their performance on Raven's Test: a group with very superior intelligence (VSI), a group with superior intelligence (SI), and a group with average intelligence (AI). The ANOVA revealed a significant effect of group (VSI, SI, AI): the VSI and SI groups tended to perform better on the WISC subtests, on the cognitive measures of the PPVT and Rey CF, and on an executive measure (FAS). A tendency of increasingly better performance in the various abilities across groups was observed, but the hypothesis of a greater specific association between Gf and EF was not confirmed. The results show better general performance according to the level of intelligence.

Introduction

Psychometrics has traditionally studied intelligence, while cognitive abilities, including the so-called executive functions (EF), have been addressed by cognitive psychology and neuropsychology. This study focuses on the relationships among these constructs, concepts originally derived from different theoretical fields. Intelligence can be understood as an ability to learn from experience and adapt to the environment (McGrew, 2009). Historically, various theories have been proposed in an attempt to delimit this concept, including Spearman's proposition (McGrew & Flanagan, 1998), which concerns a general factor known as g that would permeate all cognitive tasks and accomplishments. One of the most influential theories, even today, refers to the idea of fluid (Gf) and crystallized (Gc) intelligence proposed by Carroll (1993) and Cattell (1987). Gf is associated with non-verbal components, having little dependence on prior knowledge or on the influence of cultural aspects (Horn, 1991; McGrew, 1997), and has been strongly associated with g (Gustafsson, 1988; Härnqvist, Gustafsson, Muthén, & Nelson, 1994). Gc, in turn, refers to prior knowledge, represents abilities required in the solution of most everyday problems, and is developed from cultural and educational experiences (Cattell, 1998; Horn, 1991; Horn & Noll, 1997).
The development of the dichotomous Gf-Gc model resulted in the Cattell-Horn-Carroll (CHC) theory of cognitive abilities, described as an empirically assessed integration of conceptions developed by Cattell, Horn and Carroll (Schelini, 2006).The CHC model understands intelligence as consisting of a three-level hierarchy.The highest level refers to Spearman's g factor, suggesting that a common ability underlies all the cognitive abilities.There are 15 broad factors in the second level, and the lower level aggregates approximately 80 specific factors that correspond to abilities more directly assessed by IQ tests (Mcgrew, 2009). In turn, according to views based in neuropsychology, behavior is based on three major functional systems that, in addition to emotional aspects related to personality and emotion variables, include cognitive and executive functions (Lezak, Howieson, & Loring, 2004).Cognitive functions involve behavioral aspects related to information processing.Executive functions reflect an individual's ability to engage in independent and selfregulated behavior.Considering that EF are also, to a certain extent, cognitive functions, we use in this paper the terminology "non-executive cognitive functions" to refer to information-processing abilities in order to differentiate between both constructs. Cognitive functions encompass diverse abilities involved in information recording (input), its processing, maintenance and response (output).Among them, we address in this study linguist abilities, such as vocabulary and phonological short-term memory, and visual-spatial abilities, such as perception and visual short-term memory.Vocabulary corresponds to words with which the individual is familiar and is able to reproduce and/or understand, relating them to the language's semantic aspects (Sternberg, 2008).Phonological short-term memory refers to one's ability to retain and recover phonological information for short periods of time (Vance, 2004).Visual-spatial ability refers to the processing of visual mental representations and may be further divided into visual abilities, which include color and movement processing, and spatial abilities, which include visual localization, spatial attention, spatial knowledge and reasoning (Sternberg, 2008).According to the author, among visual abilities, visual perception is a set of processes that enables recognizing, organizing, and interpreting information based on visual sensory stimulation, while visual memory refers to one's ability to retain and recover visual representation in the absence of stimuli. 
EF, in turn, refer to one's ability to engage in objective-based behavior (Sullivan, Riccio, & Castillo, 2009).Three abilities are considered major EF: inhibition, which enables one to control inappropriate behavior and attention to distractors (selective attention); working memory that is responsible for maintaining and mentally handling information; and cognitive flexibility, which enables changing perspectives and adapting to different contexts (Diamond, 2013;Miyake et al., 2000).These main abilities are involved in and can promote other complex EF such as planning, decision-making, and even fluency (Dias & Seabra, 2014;Malloy-Diniz et al., 2008;Miyake et al., 2000).From this perspective, EF cover "how" an individual does something, while cognitive functions cover "what and how much" an individual is capable of.Some overlap is identified among these concepts.For instance, the proposition of Diamond (2013) regarding EF integrates the concept of Gf.The author considers the interaction among major EF, inhibition, working memory and flexibility to be the base for more complex executive abilities, such as reasoning and problem-solving, which the author considers to be synonymous with Gf.In addition, Arffa (2007) considers EF to overlap with the psychological concept of intelligent behavior.The multidimensionality of these constructs complicates the relationship and reveals overlaps, which can, in turn, change over development.This situation has led some researchers to investigate in more detail the relationships among intelligence, cognitive and executive functions in specific age ranges.For instance, Demetriou et al. (2014) analyzed speed and working memory as predictors of Gf, and found that speed-Gf and Working Memory-Gf relations change in different age ranges. The relationship between IQ tests and EF tests in general are usually not strong.A more careful analysis of this correlation in children, however, reveals that associations between EF tests and IQ tests with greater emphasis on Gc tend to be weak, while the relationship between EF tests and Gf tests tends to be stronger (Arffa, 2007).Additionally, even among EF-related abilities, some are more strongly associated with intelligence measures than others.For instance, in adults, working memory, and more specifically the central executive component, appears more strongly related both to Gc, and especially to Gf intelligence, while the relationship with other executive abilities is less consistent (Abreu, Siquara, Leahy, Nikaedo, & Engel de Abreu, 2014;Friedman et al., 2006).Also, Demetriou et al. (2014) demonstrated that working memory is a strong predictor of Gf in some periods of development, as 13 -16 age range.For this reason, given the amount of evidence involving working memory (see Abreu et al., 2014 for a review), this study focused on other executive abilities in teenagers: two major abilities (inhibition and flexibility) and one complex ability (verbal fluency). 
Another line of investigation has included studies addressing individuals with brain injuries.Evidence shows that both EF as Gf or Spearman's g factor share some neural substrates and are associated with the prefrontal cortex (Barbey et al., 2012;Roca et al., 2010).Studies show that prefrontal alterations are associated with Gf loss, however, they do not seem to change Gc (Roca et al., 2010).This relationship between prefrontal areas, known as the EF substrate, and Gf raises questions in regard to the relationship between these two constructs.In fact, studies have shown the contribution of Gf, together with working memory and inhibition, to solving EF traditional tests, such as the Tower of London, which assess planning (Zook, Dávalos, Delosh, & Davis, 2004).Neuroimaging studies agree that the prefrontal cortex is a neurological subtract common to both EF and Gf (Abreu et al., 2014). In addition to Gf-Gc, EF are also related to the CHC model's cognitive abilities.Based on the performances of children and adolescents in an EF battery (Kaplan Executive Function System-DKEFS) and Woodcock-Johnson III, which assesses cognitive abilities, the authors found positive and significant correlations between the participants' performances on both instruments concluding that there are similarities among the constructs assessed in IQ tests and neuropsychological batteries (Floyd, Bergeron, Hamilton, & Parra, 2010). Even though WISC-III and WAIS-III were not based on the Gc-Gf or CHC and are considered to present a heavy load of Gc (Primi, 2003), studies have shown some relationships between total IQ or score in subtests, and EF (Ardila, Pineda, & Rosselli, 2000;Riccio, Hall, Morgan, & Hynd, 1994;van Aken et al., 2014).Arffa (2007), for instance, assessed three groups of children and young individuals divided according to the total IQ obtained on the WISC-III into groups with average intellectual ability (IQ between 90 and 114), above average (IQ from 115 to 129), and gifted (IQ above 130).The author verified that the performance obtained on EF tests (except flexibility) is significantly correlated with total IQ.The group of gifted children presented higher performance on EF tests when compared to the other two groups, but this superior performance was not observed on non-executive tests assessing other cognitive abilities. In this context, this study has a twofold objective: 1) to investigate the relationships among Gf, Gc, executive and non-executive abilities; and 2) to investigate the differences among groups with average intelligence, superior intelligence, and very superior intelligence, considering cognitive functions (vocabulary, short-term memory, visual processing), including the WISC-III tests, and executive abilities (selective attention/inhibitory control, verbal fluency and cognitive flexibility).Arffa (2007) divided groups according to a measure with a high load of Gc.In this study, the groups were divided according to their performance in a test with a heavier load of Gf.The hypothesis is that there is a differential pattern of relationships among the variables, with more and stronger relationships established between the executive and Gf measures.Another assumption is that the groups with higher percentiles of intelligence will perform better on all the tasks, but more specifically in EF measures. 
Participants The initial sample was composed of 139 adolescents aged from 15 to 16 years old, attending high school (64% female) in both public and private schools in the state of São Paulo.Intellectual disability, identified through Raven's Progressive Matrices Test-General Scale, was an exclusion criterion.Hence, those classified with a percentile below 25 were excluded from the sample.There were no participants with any known, uncorrected severe sensory or motor disability.The final sample was composed of 120 adolescents, aged from 15 to 16 years old (68.3% females).Of these, 37.5% were attending the 1 st year and 62.5% were attending the 2 nd year of high school; 88.3% of the students were from private schools.The 120 participants were divided into three groups according to their performance on Raven's Progressive Matrices Test-General Scale: group with Very Superior Intelligence (VSI), with a percentile equal or above 90; group with Superior Intelligence (SI) with percentile between 75 and 89; and group with Average Intelligence (AI) with percentiles from 25 to 74).The characterization of the groups is presented in Table 1.This grouping (at the expense of other possible groupings) was intended to keep a reasonable number of participants per group. Raven's Progressive Matrices Test-General Scale Raven's Progressive Matrices Test-General Scale (Angelini, Alves, Custódio, Duarte, & Duarte, 1999) assesses one's general reasoning ability, suitable for assessing individuals 12 years old or older.It is composed of a book with 60 items divided into five series (A, B, C, D and E).Each series contains 12 problems ordered by level of difficulty.A figure in which one entry is missing is presented in each problem.A set of answer choices is presented for the missing entry.The participant must choose the option that completes the figure.Raven's test has a heavy load of g factor (Primi, 2003).In this study, scores obtained on Raven's test was used to exclude participants with intellectual disability and to assign the participants into groups (VSI, SI, AI).The instrument was applied collectively and the application lasted approximately 20 minutes. Wechsler Intelligence Scale for Children (WISC-III) The WISC-III (Figueiredo, 2002) assesses the intellectual ability of children aged from 6 to 16 years old.It is composed of subtests organized into two groups, Verbal and Performance, each assessing different aspects of intelligence.The verbal scale provides information on language processing, reasoning, attention, verbal learning and memory, while the performance scale enables the assessment of visual processing, planning ability, non-verbal learning and ability to manipulate visual stimulation.In addition to these two scales, the instrument yields a total IQ and the estimate of four factor scores: Factor 1-Verbal Comprehension; Factor II-Perceptual Organization; Factor III-Resistance to Distraction; and Factor IV-Processing Speed. 
Eight subtests were used in this study, which allowed the Factor Score Verbal Comprehension (Factor 1) and Perceptual Organization (Factor II) to be estimated.In regard to the scale Verbal Comprehension, the "information" subtest measures the level of knowledge acquired from formal education and family upbringing, enabling the verification of temporal organization.The "similarities" subtest examines one's ability to establish logical relationships and verbal concepts or categories."Vocabulary" assesses one's linguistic competence, lexical knowledge, and, mainly, ease at preparing speech.Finally, the "Comprehension" subtest refers to the individual's ability to express experiences and knowledge concerning social relationship rules.In regard to the Perceptual Organization scale, the "Complete Figures" subtest involves visual memory and lexical access because the participant is asked to indicate the part missing in a figure.The "Figure Arrangement" subtest requires perceptive analysis, as well as ability to integrate a set of available information.The "Cubes" subtest examines organization and visual-spatial/non-verbal processing, i.e. the ability to mentally decompose elements of a model to be replicated.It is considered a non-verbal problem-solving measure.Finally, the "Assembly Objects" measures one's ability to organize a whole from separated elements, assessing perceptive integration and problem-solving strategy.According to Furgueson, Greenstein, Mcguffin and Soffer (1999), the summarized versions of the WISC-III are a reasonable alternative to assess children's cognitive abilities.The instrument was individually applied and took one hour, on average. Computer Version of the Stroop Test (Stroop-Comp) The Stroop-Comp (Seabra, Dias, & Macedo, in press) assesses selective attention and inhibitory control.The test is composed of three parts, each with 24 stimuli.The first part presents, for an undetermined time, the names of four colors (yellow, blue, green, and red) written in black capital letters.These words must be read as fast as possible in order to assess reading abilities.The instrument's second part presents 24 colored circles (yellow, blue, green, and red) exposed on the screen for 40 thousandth of a second.The participant's task is to name the color of each circle.This part of the test serves as the baseline to analyze correct answers and reaction time on the third part of the instrument.In this part, the circles are replaced by written words that correspond to the four colors but which are printed in colors different from their meaning (e.g. the word green is written in blue and so on) and the individual is asked to name the color with which the word is written.This stage demands new processing (inhibit the tendency to read and select the relevant stimulus, in this case, color).The effect of interference is obtained by subtracting the score obtained on the second part of the instrument from the score obtained on the third part of the Stroop-Comp, both in terms of score and reaction time (RT).The instrument was individually applied for 20 minutes, on average.In this study we used the score and RT obtained for the third part of the instrument and for interference. 
FAS Semantic Verbal Fluency Test (FAS) The FAS assesses verbal fluency (Strauss, Sherman, & Spreen, 2006).A computer version was used, in which six screens were presented to the participants.The first screen prompts participants to "Say as many words starting with the letter F as you can.You have one minute."The screen remains empty and the participant has one minute to answer.The examiner controls time and the software records the participant's responses.The same procedure is then repeated with the letters A and S. The instrument was individually applied and took approximately 5 minutes.For this study, we used the total number of words correctly evoked. Trail Making Test-Part B (TMT) TMT (Montiel & Seabra, 2012) assesses cognitive flexibility.It consists of the presentation of 24 items represented by 12 letters (from A to M) and 12 numbers (from 1 to 12), randomly dispersed on a white sheet of paper.The participant's task is to link letters and numbers interchangeably in ascending order for the numbers and alphabetically for the letters.One minute is allowed for the task.The instrument was collectively applied.Scores obtained for the sequences and connections were used. Peabody Picture Vocabulary Test (PPVT) The PPVT (Dunn & Dunn, 1981) assesses receptive vocabulary in a wide variety of fields, including people, actions, qualities, body parts, time, nature, places, objects, animals, mathematical terms, tools and instruments.The test comprises 125 items and each item is composed of four drawings.The task consists of selecting, among the alternatives, the figure that best represents the word spoken by the examiner.The test was collectively applied and lasted 30 minutes, on average.The total score obtained on the instrument was used. Words and Pseudo Words Repetition Test (WPwRT) The WPwRT (Seabra, 2012) assesses short-term phonological memory.The examiner pronounces sequences from two to six words, with a one-second interval between words.The participant's task is to repeat the words in the same sequence.There are two sequences for each word grouping; that is, two sequences with two words, two sequences with three words, and so on.Afterwards, sequences with pseudo words were presented, that is, made-up words that do not have any meaning.There are also two sequences for each group of words, ranging from two to six pseudo words per sequence.The test was individually applied and took 10 minutes, on average.The total score obtained on the instrument was used. Rey Complex Figures Test (Rey-CF) Rey-CF (Rey, 1999) assesses visual perception and immediate recall, respectively, by the participant copying the figure and then reproducing the figure from memory.The test enables verifying how the individual perceives perceptual data and what is spontaneously stored in memory.It consists of a complex geometrical and abstract figure.First, the individual is asked to copy the stimulus figure with the highest level of detail possible.Three minutes later the participant is asked to draw the same figure, however, this time without the stimulus: the participant must rely on memory to reproduce the figure.Scoring is based on accuracy and detail, both in regard to the copy and to the reproduction from memory.Time taken to perform each part of the test is also recorded.The test was collective applied and lasted approximately 20 minutes.The scores and time used to perform each part of the test, for copying and immediate recall, were used. 
Procedure The project was approved by the Institutional Ethics Board and the legal guardians of the study participants signed free and informed consent forms.Data were collected during the 2 nd semester of the school year on the schools' premises during regular school hours.The participants were removed from their classrooms only after gaining their consent and that of their teachers.The instruments were applied in a collective session with 15 students, at most, in a classroom for approximately 90 minutes (Raven's Test, Rey-FC, TMT and PPVT).There were also two individual sessions of approximately 30 minutes (Stroop-Comp, FAS and WPwRT), and another session of approximately 60 minutes (WISC-III subtests). Data Analysis Pearson's correlation was used to verify the relationship between the percentiles obtained on Raven's test and the WISC-III's IQ of verbal comprehension and IQ of perceptual organization, as well as the correlation between measures of intelligence and the other instruments.Variance analysis (ANOVA) was performed in regard to the group's effect (VSI versus SI versus AI) on the measures of various instruments.A p ≤ 0.05 was used in all the comparisons.Tukey's pairwise comparison analysis was also used. Results We first verified the relationship between fluid intelligence, measured by Raven's Test, the IQ of verbal comprehension and IQ of perceptual organization measured by the WISC-III.A positive and significant relationship was found between the Raven's percentiles and both measures of verbal comprehension IQ (r = 0.61, p < 0.001) and perceptual organization IQ (r = 0.45, p < 0.001).Table 2 presents the correlations between measures of intelligence and performance on the tests of executive and non-executive abilities.All the measures of intelligence were significantly correlated with performance on the Trail and FAS verbal fluency tests, though correlations were weak.Only the measure of Verbal Comprehension was significantly correlated with the Stroop measures, though with a low magnitude.In regard to the non-executive measures, correct answers obtained in the Rey-CF immediate recall were significantly correlated with all the intelligence measures, but also with a low magnitude. Correct answers in the copy exercise were significantly correlated with Raven's measure.Moderate correlations were found only between performance on the PPVT and intelligence measures. 
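For readers who want to reproduce the analysis pipeline described in the Data Analysis subsection (Pearson's correlations, one-way ANOVA across the AI/SI/VSI groups, and Tukey's pairwise comparisons), the following hedged Python sketch shows the corresponding SciPy and statsmodels calls. The scores below are synthetic placeholders generated at random, not the study's data, and the equal group sizes are an arbitrary simplification (in the study, groups came from Raven percentile cut-offs).

```python
import numpy as np
from scipy.stats import pearsonr, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Synthetic placeholder scores (NOT the study's data), one value per participant.
raven = rng.normal(60, 20, 120)                    # Raven percentile-like scores
fas = 20 + 0.1 * raven + rng.normal(0, 4, 120)     # a verbal-fluency-like measure
group = np.repeat(["AI", "SI", "VSI"], 40)         # intelligence-level labels

r, p = pearsonr(raven, fas)                        # correlation between two measures
f_stat, p_anova = f_oneway(fas[group == "AI"],     # effect of group on one measure
                           fas[group == "SI"],
                           fas[group == "VSI"])
tukey = pairwise_tukeyhsd(endog=fas, groups=group, alpha=0.05)  # pairwise comparisons

print(f"Pearson r = {r:.2f} (p = {p:.3f}); ANOVA F = {f_stat:.2f} (p = {p_anova:.3f})")
print(tukey.summary())
```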
We opted in this study to assign participants to groups according to scores obtained on the Gf measure and three groups were formed, as previously discussed in the Participants session.The ANOVA showed that the groups had a significant effect on the various measures.A tendency for groups with superior and very superior intelligence to perform better was observed in all the cases.Significant effects were found in the scores obtained in Vocabulary, Information, Similarities, Comprehension, Figure Arrangement, Cubes, Assembly Objects, and Verbal Comprehension IQ and Perceptual Organization IQ.Only in regard to Complete Figures were no differences found in the performance of groups.Significant effect was also observed in regard to the remaining measures in vocabulary (PPVT) and measures of visual perception and memory (copy accuracy and reproduction from memory on the Rey-FC).No difference was found in the measure of short-term phonological memory (WPWRT).In regard to the EF tests, the effect of the groups was evidenced in the measure of verbal fluency, though there were no effects on the measures of flexibility (TMT) or attention/inhibitory control (Stroop-Comp).These results are summarized in Table 3. Descriptive statistics (means and SD) for the whole sample and full correlation matrix are provided in Appendix. Discussion The analysis of correlations among performances on the intelligence tests showed that, even though they are interrelated, the measures capture relatively distinct aspects.The relationships observed in the intelligence measures and executive and non-executive tests tended to be weak, except for the vocabulary measure, which presented a moderate relationship with all the measures of intelligence.The patterns of these relationships were virtually the same, with a few exceptions such as the relationships between Verbal Comprehension IQ and the Stroop-Comp measures (possibly reflecting the modality of the answer in comprehension, a verbal task) and between Raven's percentiles and correct answers on the Rey-CF.This pattern does not corroborate the hypothesis that there is a greater relationship between Gf measures and executive abilities (Arffa, 2007).In fact, other studies have observed a relationship between executive measures and performance on WISC (Ardila et al., 2000) or WAIS (van Aken et al., 2014), measures with a heavy load of Gc (Primi, 2003).Hence, there does not seem to be an important difference in the way performance in the executive and non-executive measures relates to either Gf or Gc intelligence performance. In regard to the ANOVA, the result revealed an effect of the level of intelligence on performance in three of the six instruments applied for the assessment of cognitive and executive abilities.The effects were observed in an EF test (FAS) and two tests of non-executive cognitive functions (PPVT and two measures of the Rey-CF).Considering the WISC-III's subtests and factor scores, the same tendency was observed with significant effect on all the measures, except for Assembly Objects.Therefore, these findings enable us to infer that the higher the intelligence measured by the Raven's Progressive Matrices-General Scale, i.e. a Gf measure, the better the performance in most executive and non-executive measures. 
Specifically regarding the results from the EF test, the relationship with Gf was already expected (Arffa, 2007;Barbey et al., 2012;Roca et al., 2010).The VSI group presented the best performance in verbal fluency, a complex measure of EF, which involves auditory working memory, switching and inhibition, in addition to oral language abilities (Dias & Seabra, 2014).There were, however, no differences among the groups in regard to the measures of cognitive flexibility and attention/inhibitory control.This pattern of association between intelligence and EF has been already reported in the literature (Abreu et al., 2014).Friedman and colleagues (2006) for instance, found an association between Gf and working memory but no association was found in regard to inhibitory control.Arffa (2007) using a Gc measure did not find an association between intelligence and flexibility.Therefore, the relationship between intelligence and verbal fluency found in this study could be explained by the different demands involved in a complex EF task or, specifically, by the requirement of working memory in this task, what is in agreement with previous findings suggesting a strong relation between working memory and Gf in the age range included in our study (Demetriou et al., 2014).The data allow the inference that, even though there is a relationship between EF and Gf, this relationship can be understood in a generic manner and seems to be specific to certain EF abilities (Abreu et al., 2014).Looking at the measures employed in this study, a more consistent relationship took place only between Gf and complex executive ability of verbal fluency, while associations with inhibition and flexibility were weak. In regard to non-executive functions, a difference among groups was observed in two of three measures.The VSI group presents higher scores than those presented by the SI in regard to vocabulary, which corroborates Lezak and colleagues (2004) and Malloy-Diniz and colleagues (2008) in regard to the relationship between general intelligence and vocabulary; the same is true for visual-spatial processing, which had been already observed in a previous study (Garderen, 2006) that assigned groups according to the WISC-III.Differences among groups were not observed in the performance of the phonological short-term memory task.Such a fact may be related to the Gf task we used, which involves visual-spatial processing. Concerning the WISC-III measures, the VSI group's performance was superior to the SI group's in regard to Information, which measures acquired knowledge and aspects related to long-term memory.Both groups, VSI and SI, performed better than AI in Similarities, which requires the formation of concepts and inductive reasoning, and in Cubes, which involves visual organization and problem-solving strategies.The SI group performed better than the AI group in Vocabulary, a measure of linguistic performance, and Figures Arrangement, which includes on the requirement to comprehend a sequence of events (temporal sequence) and planning.Finally, the VSI group presented higher scores in comparison to the other two groups in Comprehension, which involves verbal comprehension, knowledge of conventional behavior standards and the ability to think abstractly [see Simões (2002) for an analysis of cognitive demands associated with each WISC-III subtest]. 
The hypothesis that, if assigned on the basis of a Gf measure, the VSI and SI groups would perform better in EF measures was not confirmed; i.e., no differences were found in the pattern of results between EF tests and non-executive functions tests.In fact, the groups with higher percentiles of intelligence presented improved performances only in regard to complex executive measure in addition to better performances for other cognitive abilities and on the WISC-III subtests, which comprise different aspects, many of which have heavy loads of Gc.Hence, the adolescents grouped by level of intelligence measured by Raven's Test differ both in regard to EF and non-executive functions, in addition to their performance on the WISC-III itself.These results are corroborated by relationships found among the measures, which did not confirm a differential pattern of correlation among Gf, Gc and executive and non-executive measures.The results show that the relationships among cognitive, executive and non-executive functions and both Gf and Gc intelligence, are complex and further research should broaden knowledge on the topic.On the other hand, these results are coherent with the assertion that Gf is strongly associated with g (Gustafsson, 1988;Härnqvist et al., 1994).Being an ability that underlies all cognitive abilities, an increase in all the studied abilities, not only in the executive abilities, would in fact be expected given increased g. Further research is suggested in order to overcome this study's limitations, among them the use of a single non-verbal measurement of Gf.A better measurement of Gf could be achieved with a score composed of verbal and non-verbal tasks.Additionally, other instruments should be considered because it is possible that the tests employed here were too easy for this sample of adolescents, such as the Stroop-Comp, in which a tendency to produce a ceiling effect was observed in the measurement of correct answers; also, other variables could be investigated, such as working memory.Another limitation is the use of only one age range, so the findings obtained should not be generalized to other ages or populations.Future research should clarify these findings and even include other approaches.For instance, using the latent variable approach in which a higher control of interrelations among variables is possible could clarify, to some extent, the relationship among intelligence, cognitive and executive functions. 
Final Considerations

The study investigated the relationships among intelligence, EF and non-executive abilities in teenagers, as well as differences in the performances of groups with different levels of intelligence (very superior, superior, and average). A differential pattern of relationships among the measures due to Gf or Gc was not found. In addition to working memory, which has greater evidence in the literature, inhibition and flexibility do not seem to be consistently associated with intelligence, and in particular with Gf. Only the complex ability of verbal fluency was more consistently associated with intelligence. Hence, in addition to the measures used, the relationship between intelligence and EF seems to be due to specific abilities. Additionally, most abilities presented a tendency of progressive performance based on the groups: the higher the Gf, the better the performances in the remaining measures, which does not corroborate the hypothesis that there is a greater specific association between Gf and EF. The results corroborate improved general performance due to superior intelligence, that is, the g effect.

Table 1. Description of participants divided by groups: number of subjects, percentage by group, and percentile in Raven.
Table 2. Matrix of correlations between measures of intelligence and performance in executive and non-executive tests for the overall sample.
Table 3. Descriptive and inferential statistics of the effect of group on the measures.
2018-12-14T20:49:56.085Z
2014-11-24T00:00:00.000
{ "year": 2014, "sha1": "9a4be7f2a9b5b398f6f9a0ed2f656fe4c0c8c18d", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=51698", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "9a4be7f2a9b5b398f6f9a0ed2f656fe4c0c8c18d", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
245576321
pes2o/s2orc
v3-fos-license
EvoMBN: Evolving Multi-Branch Networks on Myocardial Infarction Diagnosis Using 12-Lead Electrocardiograms Multi-branch Networks (MBNs) have been successfully applied to myocardial infarction (MI) diagnosis using 12-lead electrocardiograms. However, most existing MBNs share a fixed architecture. The absence of architecture optimization has become a significant obstacle to a more accurate diagnosis for these MBNs. In this paper, an evolving neural network named EvoMBN is proposed for MI diagnosis. It utilizes a genetic algorithm (GA) to automatically learn the optimal MBN architectures. A novel fixed-length encoding method is proposed to represent each architecture. In addition, the crossover, mutation, selection, and fitness evaluation of the GA are defined to ensure the architecture can be optimized through evolutional iterations. A novel Lead Squeeze and Excitation (LSE) block is designed to summarize features from all the branch networks. It consists of a fully-connected layer and an LSE mechanism that assigns weights to different leads. Five-fold inter-patient cross validation experiments on MI detection and localization are performed using the PTB diagnostic database. Moreover, the model architecture learned from the PTB database is transferred to the PTB-XL database without any changes. Compared with existing studies, our EvoMBN shows superior generalization and the efficiency of its flexible architecture is suitable for auxiliary MI diagnosis in real-world. Introduction Nowadays, cardiovascular disease (CVD) has become one of the leading causes of death around the world, especially in developing countries [1]. Considering the detailed categories of CVDs, myocardial infarction (MI, or heart attack) is known to be a higher risk of morbidity and mortality, accounting for 15 million deaths every year [2]. As shown in Figure 1a, MI is mainly caused by a blockage of the coronary arteries that cuts off the blood supply to the heart. The reduction of oxygen and nutrients may result in lifethreatening damage to the myocardium, followed by an irreversible necrosis if not treated promptly [3]. Therefore, early MI diagnosis is crucial for patients to improve prognosis. Electrocardiogram (ECG) is widely used in MI diagnosis because it is non-invasive and convenient [4]. It usually consists of twelve leads, including three standard limb leads (I, II, III), three augmented limb leads (aVR, aVL, aVF), and six precordial leads (V1~V6). As shown in Figure 1b, MIs can manifest as abnormal waveforms in ECG signals, such as pathological Q-waves, ST elevations, T inversions, and so on [3,5]. Note that, MI can be categorized into several types based on location, corresponding to the aforementioned abnormal waveforms from specific leads, respectively. For instance, to detect anterior myocardial infarction (AMI), lead I, aVL, V5, and V6 deserve more analysis [6]. As for inferior myocardial infarction (IMI), the most significant leads are II, III, and aVF [6]. Cardiologists diagnose MIs by examining all the signals from the 12 leads, which is a tedious and time-consuming process. Thus, automated MI diagnosis algorithms are proposed and deployed to assist cardiologists. For the conventional MI diagnosis algorithms using ECGs, statistical machine learning is adopted to distinguish MIs from normal types or other CVDs. It requires complex featureengineering and classifier selection. 
In existing studies, waveform features (QRS-duration, QRS-amplitude, ST-segment level, T-amplitude, and so on) [7][8][9], transform features (coefficients of wavelet transform, discrete cosine transform, singular value decomposition, and so on) [10][11][12][13], and statistical features (entropy-based features, sub-band energy features, and so on) [14][15][16] are often employed to represent individuals. For classifier selections, Support Vector Machines (SVM) [12,13,15], K-Nearest Neighbors (KNN) [9,12,14,15], Decision Trees (DT) [8,12,16], and Random Forests (RF) [12] have demonstrated good performances. Obviously, feature-engineering requires much medical expertise, and the performances of these algorithms depend on the quality of the hand-crafted features. To overcome these limitations, Deep Learning (DL) models are introduced to the ECG-based MI diagnosis, which can learn critical features from data without manual intervention [17]. The most commonly used models are Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNN), and their variants. Particularly, 1-D CNNs were used in [18] to detect MIs using lead II. It achieved an accuracy of 95.22% without feature extraction and selection. In [19], a multi-layer Long Short-Term Memory (LSTM) network (a typical variant of RNN) was employed to analyze single-lead ECGs and identify MI patients. This model was tested on two different ECG databases, and the accuracies were 77.12% and 84.17%, respectively. Similar LSTM models for MI diagnosis were also developed and evaluated in [20]. For MI diagnosis on wearable devices, a lightweight Binary CNN (BCNN) was designed in [21]. All the parameters of BCNN are represented in binary, which can dramatically save the computational resources. In addition, an acceptable result (accuracy = 91.22%) was achieved by the model in MI detection using single-lead ECGs. To explore more leads, signals from lead II, III, and aVF were fed into a shallow CNN model to diagnose IMIs in [22]. An accuracy of 84.54% was obtained in subject-oriented experiments. An ML-CNN proposed in [23] is an impressive variant of standard CNN. For generalized anterior myocardial infarction (GAMI) detection, it utilized lead V2, V3, V5 and aVL to analyze and achieve an accuracy of 96.00%. Based on the same leads, an ML-Net was also developed in [24] for GAMI detection. Although the models using single lead or multiple leads (<12 leads) can produce accurate MI detection according to the experiment results, limited lead information may prevent these models from extending to a more complex application in the real-world [25]. Using all of the 12 leads, MFB-CNN, MFB-CBRNN, ML-ResNet, and MFB-LANN were proposed in [26][27][28][29], respectively. The MI detection accuracies were all greater than 93% in these three studies. Additionally, 12-lead ECGs can also be transferred to 2-D images and can be processed by existing deep networks of computer vision [30][31][32], but the rationality of these approaches may require more exploration since the ECG images are different from the natural 2-D images [33]. Compared with the conventional approaches, the DL-based algorithms have shown great advantages because of better generalization and robustness, which has gained increasing attention in the past few years. In fact, the aforementioned MFB-CNN, MFB-CBRNN, ML-ResNet, ML-Net, and MFB-LANN employ the same Multiple-Branch Network (MBN) skeleton as depicted in Figure 2. Each lead has its own CNN-based branch network for feature learning. 
A global fully-connected layer summarizes features from all of the leads and produces final results. Unlike normal DL models, the MBN skeleton is specially designed for multilead processing to exploit the diversity and integrity of 12-lead ECGs [26]. However, a fixed architecture is used for all of the branch networks, which may not be the best one for each lead. It limits the flexibility of the whole model, whereas manual architecture optimization is always a difficult task [17]. A genetic algorithm (GA) is a typical heuristic optimization algorithm that does not require much domain knowledge [34]. It mimics the biological evolution by performing crossover, mutation, selection, and fitness evaluation in an iterative manner. A GA and its variants have been successfully applied to Neural Architecture Search (NAS), a technique that can automatically design optimal architectures of neural networks [35]. For example, an EvoCNN for image classification was developed in [36] using GAs without manual tuning. Moreover, similar automatically designed CNNs were proposed in [37,38]. Compared with the manually designed architectures, these automatic models have shown significant advantages in terms of classification accuracy and the number of parameters. Unfortunately, the above GA-optimized models are only suitable for 2-D image classification using standard 2-D CNNs, which cannot be directly applied to ECG-based MI diagnosis using the MBN skeleton. Thus, evolving the MBN skeleton automatically through GAs is a critical problem for a more accurate and flexible MI diagnosis using 12-lead ECGs. In this paper, an evolutional MBN (EvoMBN) is proposed to model the 12-lead ECGs for MI detection and localization. In particular, it combines the GA-based NAS technique and the MBN skeleton to automatically learn an optimized architecture. The MBN skeleton ensures that it remains suitable for multi-lead ECG processing, and the automatic GA optimization enhances its flexibility to achieve a better generalization. Furthermore, it requires no hand-designed features since it is a DL model. In detail, the main contributions of this paper are as follows: (1) To balance computational burden and algorithm flexibility, the EvoMBN employs a GA to implement a constrained architecture optimization based on the MBN skeleton. Specifically, a limited number of branch net layers are given in advance. Then GA iterations are performed to automatically learn an optimal depth for each branch net. An efficient architecture encoding strategy is proposed to represent the whole model, making it possible to globally search the optimal solution. (2) To efficiently summarize all the leads and produce final results, a novel Lead Squeeze and Excitation (LSE) block that consists of a fully-connected layer and an LSE mechanism is established. The LSE extends the typical SE [39] to weight leads which are more relevant to the target categories. Compared with a simple fully-connected layer for feature summary, the LSE block can achieve a better performance in our experiments. (3) To comprehensively evaluate the generalization of EvoMBN, five-fold cross validation is performed on the Physikalisch-Technische Bundesanstalt (PTB) diagnostic ECG database [40] under the inter-patient paradigm [41]. The inter-patient paradigm is a more practical evaluation method, as it considers the model generalization on unseen patients. 
Furthermore, the best EvoMBN architecture learned from the PTB database is directly transferred to the MI detection and localization on the PTB-XL database [42], a larger ECG database which shares no records with the PTB database. To the best of our knowledge, there has not been any architecture transfer developed for crossdatabase evaluations in ECG-based MI diagnosis. Finally, the superior results in the experiments demonstrate the robustness of our model. The rest of this paper is organized as follows. The datasets used in the model development and the details of our model are introduced in Section 2. Section 3 shows the experimental design and results. A comprehensive discussion is provided in Section 4. Finally, Section 5 concludes the whole paper. Materials and Methods First, this section introduces the ECG datasets used for MI diagnosis, including the PTB database and the PTB-XL database. In addition, it presents the preprocessing method used for the ECG signals and the statistical information of the categories considered in MI detection/localization. Particularly, the PTB-XL database is employed to evaluate the automatically learned architecture transferred from the PTB database. Figure 3 shows the statistical information of these 2 databases. Moreover, the EvoMBN for MI diagnosis using 12-lead ECGs is elaborately described in this section. It consists of 3 main phases: separate training of the branch networks, joint GA-based architecture optimization, and final MI detection/localization. As a subnet for branch summary, the LSE block is used in both the architecture optimization and final classification. In addition, a flowchart of the proposed method is shown in Figure 4. The PTB diagnostic database is the most commonly used ECG database in studies related to MI diagnosis algorithms. According to [40], it contains 549 12-lead records sampled at 1000 Hz from 290 patients. A diagnostic result summarized by several cardiologists is attached to each record. As for MI detection/localization, 368 MI records from 148 patients and 80 healthy control (HC) records from 52 patients can be involved in the algorithm research. In detail, there are 6 location-based MI subcategories with sufficient records in the database, including anterior MI (AMI), antero-septal MI (ASMI), antero-lateral MI (ALMI), inferior MI (IMI), infero-lateral MI (ILMI), and other MI (OMI). Note that, the OMI is a collective term for several MI subcategories with insufficient records [43]. Therefore, the MI detection is a binary classification that distinguishes MIs from HCs, while MI localization is a multi-class classification that should determine the detailed MI subcategories. In order to achieve a trade-off between computational burden and information retention, all the ECG signals were downsampled to 250 Hz as in the existing studies [23,27,29]. In addition, Daubechies 6 wavelet filtering [44] was adopted to remove noise and baseline wander in the ECG signals. In particular, our algorithm is developed on ECG heartbeats (or beat for short). A heartbeat is a P-QRS-T cycle, which is a basic unit of ECG [4]. To segment beats from a whole record, a NeuroKit2 R-wave detection algorithm was employed [45]. Once an R-wave was detected, a segment that includes 127 samples to the left and 128 samples to the right of an R-wave position was selected as an ECG beat of 256 samples (127 + 128 + 1 = 256). The reason for setting the length to 256 was that it is more suitable for the processing of CNN models [46]. 
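Below is a minimal sketch of the beat-segmentation step described above (downsampling to 250 Hz, Daubechies-6 wavelet filtering, NeuroKit2 R-peak detection, and 127 + 1 + 128 = 256-sample windows). The soft-threshold rule, the choice of lead II for R-peak detection, and the helper names are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
import pywt
import neurokit2 as nk
from scipy.signal import resample

def denoise_db6(sig, wavelet="db6", level=6):
    # Wavelet filtering: soft-threshold the detail coefficients (threshold rule is an assumption).
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(sig)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(sig)]

def segment_beats(record, fs_in=1000, fs_out=250, left=127, right=128):
    """record: array of shape (12, n_samples); returns beats of shape (n_beats, 12, 256)."""
    n_out = int(record.shape[1] * fs_out / fs_in)
    ecg = np.stack([denoise_db6(resample(lead, n_out)) for lead in record])
    # R-peaks are detected on lead II (index 1) and shared across all leads (assumption).
    _, info = nk.ecg_peaks(ecg[1], sampling_rate=fs_out)
    beats = []
    for r in info["ECG_R_Peaks"]:
        if r - left >= 0 and r + right < ecg.shape[1]:
            beats.append(ecg[:, r - left : r + right + 1])  # 127 + 1 + 128 = 256 samples
    return np.asarray(beats)
```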
Furthermore, each beat was normalized by z-score to remove baseline offset and amplitude scaling, which can be formulated as:

\hat{x} = \frac{x - \mu}{\delta} \quad (1)

where x is an ECG signal, and µ and δ denote the mean value and the standard deviation of the signal, respectively. Moreover, the statistical information based on categories is shown in Figure 3a.

The PTB-XL Database

The PTB-XL is another open-source 12-lead ECG database used in this research; it was established by the same institution as the PTB diagnostic database [42]. It provides 21,837 12-lead records of 10 s from 18,885 patients and shares no records with the PTB database. The sampling rate of the ECG signals is 500 Hz or 100 Hz, corresponding to 2 versions. The version sampled at 500 Hz was selected and downsampled to 250 Hz in this study. The other preprocessing steps were similar to those applied to the PTB database. Note that the aforementioned OMI is not a specific MI subcategory, and the actual MI subcategories included in the OMI of the PTB-XL database are different from those in the PTB database; therefore, OMI is excluded here. The statistical information of the data used for MI diagnosis is illustrated in Figure 3b.

Separate Training of the Branch Networks

For the MBN skeleton, the role of the branch networks is to learn the critical features of each lead. Unlike the conventional MBNs [26][27][28][29] that synchronously train all the branch networks, a separate scheme was utilized here. It makes the model more flexible and reusable, since the multi-layer features learned by different branch networks can be combined arbitrarily without any extra training. The architecture of each branch network was developed based on the efficient Residual Network (ResNet) proposed in [46]. Particularly, a residual architecture with 17 convolutional layers was designed, as described in Figure 5. To make the branch networks more sensitive to detailed features, each branch network was trained to classify the 6 MI subcategories (AMI, ASMI, ALMI, IMI, ILMI, OMI) and HC. Note that this multi-class classification is not the final MI localization; it is just a strategy for the feature learning of the branch networks. To train the branch networks, a weighted cross-entropy loss was employed, which can alleviate the effects of the class imbalance in the PTB or the PTB-XL database. It can be computed as:

\mathcal{L} = -\sum_{i=1}^{c} \omega_i \, y_i \log(p_i) \quad (2)

where c is the number of classes considered in the training, ω_i is the weight of class i, and y_i and p_i denote the desired and actual output, respectively. Generally, larger weights should be assigned to classes that have fewer samples, making the network pay more attention to these classes. To this end, a weighting scheme inspired by [47] was utilized to balance the multi-class losses. Moreover, the loss was minimized using Stochastic Gradient Descent (SGD) with momentum. The initial learning rate was set to 0.1 and decreased by a factor of 10 every 10 epochs. The momentum factor was 0.9 and the batch size was 128. Each branch network was trained for 30 epochs. Finally, 12 branch networks were obtained as feature extractors. Hierarchical features can be generated by the multi-layer architecture of the branch networks [17]. However, conventional MBN models only exploit the top-level features from the tails of all the branch networks. This homogeneous level combination may not be optimal for all the leads, since each lead has its own particular pathological information [6].
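The branch-network training recipe described above (weighted cross-entropy, SGD with momentum 0.9, learning rate 0.1 decayed by 10 every 10 epochs, batch size 128, 30 epochs) could be reproduced roughly as follows. The inverse-frequency class weights stand in for the scheme of [47], whose exact form is not given here, and `build_branch_resnet17` is a hypothetical constructor for the 17-layer residual branch.

```python
import numpy as np
import tensorflow as tf

def train_branch(model, x_lead, y, epochs=30, batch_size=128):
    """x_lead: (n_beats, 256, 1) single-lead beats; y: integer labels over the 7 classes."""
    # Class weights: inverse-frequency stand-in for the weighting scheme of [47].
    counts = np.bincount(y)
    class_weight = {i: len(y) / (len(counts) * c) for i, c in enumerate(counts)}

    # Learning rate 0.1, divided by 10 every 10 epochs.
    def schedule(epoch, lr):
        return 0.1 * (0.1 ** (epoch // 10))

    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9),
        loss="sparse_categorical_crossentropy",  # class_weight turns this into a weighted cross-entropy
        metrics=["accuracy"],
    )
    model.fit(
        x_lead, y,
        epochs=epochs, batch_size=batch_size,
        class_weight=class_weight,
        callbacks=[tf.keras.callbacks.LearningRateScheduler(schedule)],
    )
    return model

# Usage sketch: one feature extractor per lead.
# branches = [train_branch(build_branch_resnet17(), beats[:, i, :, None], labels) for i in range(12)]
```

With the 12 trained branches available as feature extractors, the open question is which depth of each branch to draw features from.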
Therefore, the optimal feature level, corresponding to the features from the optimal depth of the branch networks, should be explored to implement a more accurate MI diagnosis.

LSE Block

Unlike the conventional MBN skeleton, a novel LSE block was employed to summarize all the features from the branch networks; it consists of a fully-connected layer and an LSE mechanism. The standard SE is designed to explicitly model channel-wise feature importance in a specific layer [39], whereas our LSE transfers the standard SE to a lead-wise version. Figure 6 illustrates the LSE block in detail. Note that the features from each lead were preprocessed by a Global Average Pooling (GAP) layer before being fed into our LSE block; GAP squeezes the multi-lead information from all the branch networks. After that, let u_i be the squeezed feature from lead i and u = [u_1, u_2, ..., u_12]^T the concatenation of all these features; the excitation values can then be computed by two fully-connected layers as:

e = \sigma\big(W_2 \, \gamma(W_1 u)\big) \quad (3)

where σ and γ denote the sigmoid and the Rectified Linear Unit (ReLU) function, respectively, W_1 ∈ R^{(12/r)×12} is the weight of the first layer, and W_2 ∈ R^{12×(12/r)} is the weight of the second layer. The reduction factor r was set to 1 here. The excitation vector e = [e_1, e_2, ..., e_12] was applied to scale the features from the multiple leads as:

o_i = e_i \cdot y_i \quad (4)

where o_i is the final output feature vector of lead i and y_i is the input feature vector of lead i. In addition, a fully-connected layer was employed to perform the final classification. In short, the LSE block can help the model discover critical features from the relevant leads. As for the MI diagnosis in this paper, the LSE block can implement MI detection by performing a binary classification. However, there are 2 approaches that can implement MI localization. As shown in Figure 7, MI localization can be regarded as a plain multi-class classification. Alternatively, it can be transformed into a group of binary classifications, where each element in the group distinguishes a specific category (positive) from the other categories (negative); the category with the maximum positive probability is the final output category. These 2 approaches are both evaluated and analyzed in the following sections. Moreover, the LSE block was trained for 30 epochs using the Adam optimizer [48] to minimize the weighted cross-entropy loss introduced in Section 2.2.

Encoding Strategy and Problem Formulation

To automatically discover the optimal feature levels, a GA was adopted to optimize the conventional MBN skeleton. Generally, a level combination can be formulated as L = [l_1, l_2, ..., l_i, ..., l_12], where l_i denotes the feature level of lead i. Once the feature level of a lead is given, the depth of the corresponding branch network is determined. Therefore, L can encode the architecture of the whole model, and the GA optimization is to discover the optimal L in a specific search space. According to the conventions of CNN models, a basic unit usually consists of a convolutional layer, a Batch Normalization (BN) layer, and an activation layer, regardless of additional residual connections. Thus, each proposed branch network stacks up 17 basic units, which can be treated as 17 feature levels. As shown in Figure 8, an index ranging from 1 to 17 was assigned to each level. Note that only the levels with even indices were considered in the optimization.
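Stepping back to the LSE block of Equations (3)–(4), one possible Keras reading is sketched below. It assumes each of the 12 per-lead feature vectors is first squeezed to a single scalar descriptor (by averaging the GAP output) so that u has length 12; the exact squeeze dimensionality is not fully specified above, so treat this as an illustrative interpretation rather than the authors' code.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_lse_head(feature_dim, n_leads=12, r=1, n_classes=2):
    """Takes 12 per-lead feature vectors (already GAP-pooled) and outputs class probabilities."""
    lead_inputs = [layers.Input(shape=(feature_dim,), name=f"lead_{i}") for i in range(n_leads)]

    # Squeeze: one scalar descriptor per lead, concatenated into u (length 12).
    u = layers.Concatenate()(
        [layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(x) for x in lead_inputs]
    )

    # Excitation: e = sigmoid(W2 * relu(W1 * u)), Eq. (3), with reduction factor r.
    e = layers.Dense(n_leads // r, activation="relu")(u)
    e = layers.Dense(n_leads, activation="sigmoid")(e)

    # Scale: o_i = e_i * y_i, Eq. (4).
    scaled = []
    for i, x in enumerate(lead_inputs):
        e_i = layers.Lambda(lambda t, idx=i: t[:, idx:idx + 1])(e)
        scaled.append(layers.Lambda(lambda pair: pair[0] * pair[1])([x, e_i]))

    merged = layers.Concatenate()(scaled)
    out = layers.Dense(n_classes, activation="softmax")(merged)  # final fully-connected classifier
    return tf.keras.Model(lead_inputs, out)
```

For localization as a group of binary classifiers, this head would simply be instantiated once per MI subcategory with n_classes = 2.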
The reasons for this restriction to even-indexed levels can be summarized in 2 aspects. First, it simplifies the task and alleviates the computational burden. Second, features from adjacent levels may be similar and redundant [49]; the level limitation reduces this information redundancy and enhances the robustness of the algorithm. Moreover, the features from the final GAP layer, which correspond to the top level (17th level) of the branch network, are usually critical for the final classification [26][27][28][29]. Thus, the top level was also considered in the optimization. In summary, the GA-based automatic optimization can be formulated as:

\max_{L} \; f(A(L)) \quad \text{s.t.} \quad |L| = 12, \; l_i \le 17, \; (l_i \bmod 2 = 0 \ \text{or} \ l_i = 17), \; i = 1, 2, \dots, 12 \quad (5)

where function A(·) decodes L into a specific architecture, and function f(·) evaluates the fitness of the architecture. The search space is defined by the 3 constraints, and |L| denotes the length of L. In theory, this problem could be solved by enumerating all the possible values of L, but that cannot obtain a good result within an acceptable time [50]. In contrast, GAs can implement a more efficient search by performing evolutional iterations that consist of selection, crossover, mutation, and fitness evaluation, and superior results are expected after several generations. The detailed operations of the GA are introduced in the following sections.

Initialization

As a population-based algorithm, a GA usually starts with a set of randomized individuals. In this study, a base population was randomly initialized via uniform distribution sampling. Each individual was represented by an L that corresponds to a specific architecture of the MBN. The size of the population was set to 100 here. After that, a fitness value was computed for each individual, using the method elaborately described in the next section.

Fitness Evaluation

For GA optimization, a fitness value indicates the quality of an individual in the population. Particularly, there were 2 phases for fitness evaluation in this study. First, the architectures represented by the individuals (denoted by L) were set up, and multi-level features were extracted from the branch networks to train an LSE block. Second, the fitness value of an L was calculated as:

fitness(L) = \alpha \cdot F1 + \beta \cdot Accuracy - \eta \sum_{i=1}^{12} l_i \quad (6)

where F1 and Accuracy denote the F1 score and the classification accuracy of the model represented by L, respectively. In addition, l_i is the ith element of L, and the summation of all the elements indicates the complexity of the model. Parameters α, β, and η are the weights (>0) that balance the factors. The GA aims to discover the individual with the maximum fitness value, which corresponds here to a lightweight model with a high F1 score and accuracy. To assign the priorities, α, β, and η were set to 1, 1 × 10^−1, and 1 × 10^−5, respectively. In other words, model performance is more significant for the GA than model complexity, since our basic target is to implement a more accurate MI diagnosis. To discover the best individual in a group of models with similar performances, the most lightweight one is preferred, since it can reduce the computational burden. Fitness evaluation is the fundamental step for the GA selection, as illustrated in the following part.

Selection

The selection process is designed to obtain the best individuals used to produce the next generation. Based on the fitness value, all the individuals were sorted in descending order. Then the first 10 individuals were selected as the parents to generate offspring.
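The encoding, random initialization, and fitness of Equations (5)–(6) could look roughly like the sketch below; `evaluate_lse_model` is a hypothetical helper that trains an LSE head on the features selected by L and returns its F1 score and accuracy on a validation fold.

```python
import random

ALLOWED_LEVELS = [2, 4, 6, 8, 10, 12, 14, 16, 17]  # even indices plus the top (17th) level
ALPHA, BETA, ETA = 1.0, 1e-1, 1e-5                  # priorities stated in the text

def random_individual(n_leads=12):
    # An individual L = [l_1, ..., l_12]: one feature level per lead.
    return [random.choice(ALLOWED_LEVELS) for _ in range(n_leads)]

def fitness(L, evaluate_lse_model):
    # evaluate_lse_model(L) -> (f1, accuracy); assumed to wrap feature extraction + LSE training.
    f1, acc = evaluate_lse_model(L)
    return ALPHA * f1 + BETA * acc - ETA * sum(L)   # Eq. (6): reward performance, penalize depth

population = [random_individual() for _ in range(100)]  # population size 100
```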
Selecting the first 10 individuals ranked by fitness means that the individuals with higher fitness values are always retained as parents. Finally, the parents and the new offspring constitute the next generation of the population. The essential operations to generate offspring are crossover and mutation, which are shown in Section 2.4.5.

Crossover and Mutation

To generate new offspring, crossover and mutation are performed on the parent individuals. For the crossover operation, 2 parent individuals were randomly selected at first. Then a one-point crossover scheme was performed, since it is widely used in GA-based optimization [51]. The separation position is the central point of an individual. As a result, 2 new individuals were generated from the 2 parent individuals. An example of the proposed crossover is depicted in Figure 9a. Specifically, the proposed crossover operation exactly exchanges the architecture information of the limb leads or the precordial leads. It preserves the completeness of the information from these 2 critical groups (limb and precordial leads). Thus, it is more suitable for the 12-lead system than a version based on a random separation position. Unlike the crossover using 2 parent individuals, mutation can produce offspring with only 1 parent individual. Given an individual L having 12 elements l_1 to l_12, the mutation randomly selects k elements and resets each one to 2 with probability p_1 or to 17 with probability p_2. Note that the mutation operator was only performed on a portion of all the individuals; the mutated individuals were randomly selected with probability p_m. In detail, p_1, p_2, and p_m were set to 0.8, 0.2, and 0.25, respectively, and the number of mutated elements k was set to 3. Figure 9b shows an example of the mutation.

Iteration

The selection, crossover, mutation, and fitness evaluation constitute a GA iteration. Multiple iterations were performed to promote the fitness of the whole population. After that, the individual with the maximum fitness value is regarded as the best feasible solution to problem (5). The upper bound of the iterations was set to 10 here. Moreover, an early stop strategy was proposed to save computational time: once the maximum fitness value of the population has not changed for 2 generations, the GA iterations are stopped.

Results

This section illustrates the experimental design and results of the MI detection and localization on the PTB database. There are two commonly used paradigms for performance evaluation: intra-patient and inter-patient. In particular, the inter-patient paradigm splits the training and the testing dataset according to the patients; in other words, no patient overlaps exist between the training and the testing dataset. However, beats from the same patient can be included in both the training and the testing set under the intra-patient paradigm. Therefore, inter-patient evaluation is more practical than intra-patient evaluation. In this study, all the experiments were performed under the inter-patient paradigm. The networks were implemented using Keras with a TensorFlow backend.

MI Detection

As mentioned in Section 2.3, MI detection is a binary classification task, which distinguishes MI from HC samples. Thus, accuracy (Acc), sensitivity (Sen), specificity (Spe), positive predictive value (Ppv), and F1 score were used to measure the performance of MI detection, as formulated in (7):

Acc = \frac{TP + TN}{TP + TN + FP + FN}, \quad Sen = \frac{TP}{TP + FN}, \quad Spe = \frac{TN}{TN + FP}, \quad Ppv = \frac{TP}{TP + FP}, \quad F1 = \frac{2 \cdot Ppv \cdot Sen}{Ppv + Sen} \quad (7)

where TP, TN, FP, and FN denote the numbers of true positives, true negatives, false positives, and false negatives, respectively. Under the inter-patient paradigm, five-fold cross validation was performed on the PTB database.
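The crossover and mutation operators just described can be sketched as follows, using the stated values p1 = 0.8, p2 = 0.2, pm = 0.25, and k = 3, and a cut point of 6 so that the limb-lead and precordial-lead halves are exchanged as whole groups.

```python
import random

def crossover(parent_a, parent_b, cut=6):
    # One-point crossover at the central position: swaps the precordial-lead half (V1-V6).
    child_1 = parent_a[:cut] + parent_b[cut:]
    child_2 = parent_b[:cut] + parent_a[cut:]
    return child_1, child_2

def mutate(L, k=3, p1=0.8, p2=0.2, pm=0.25):
    # With probability pm, reset k randomly chosen elements to level 2 (prob p1) or 17 (prob p2).
    L = list(L)
    if random.random() < pm:
        for idx in random.sample(range(len(L)), k):
            L[idx] = 2 if random.random() < p1 else 17
    return L
```

With these operators in place, the GA produced the architectures evaluated in the five-fold cross validation whose results follow.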
Then the confusion matrix across the folds was obtained, as shown in Figure 10. The performance metrics can be calculated according to this confusion matrix. Our EvoMBN achieved an accuracy of 97.11% in MI detection. The Sen, Spe, Ppv, and F1 were 98.53%, 90.02%, 98.01%, and 0.983, respectively. Specifically, the Spe was a little lower than the other four metrics, which means that the model is more prone to classify HC beats as MI beats. This may be caused by the class imbalance mentioned in Section 2.2. Although weighted cross entropy is employed to alleviate the imbalance, it cannot eliminate the impact completely. In summary, the EvoMBN has demonstrated not only accurate, but also robust MI detection on the PTB database, which indicates the efficiency of the automatic architecture optimization. Moreover, the five-fold cross validation can avoid overfitting for a specific dataset, making the results more credible. MI Localization Compared with MI detection, MI localization is a more complex multi-class classification task. As shown in Section 2, six MI-related classes and HC are involved in the PTB database. However, most existing studies [26,28,43] used five MI subcategories in the inter-patient MI localization, including AMI, ASMI, ALMI, IMI, and ILMI. In order to compare our results with these studies, a six-class (five MI subcategories and HC) MI localization was performed in the five-fold cross-validation experiments under the interpatient paradigm. As described in Section 2.3, the MI localization can be implemented in 2 classification manners: a single multi-class classifier and a group of binary classifiers, represented by model m and model b , respectively. Therefore, the experiments based on these two manners were performed and analyzed. Figure 11a,b provides the confusion matrices across the five folds of the MI localization experiments. Furthermore, Sen, Spe, Ppv, Acc, and F1 were also employed to evaluate the performance of each class, as presented in Tables 1 and 2. Obviously, model b achieved a more accurate MI localization. In detail, the overall Acc was 71.65%, the average Sen, Spe, Ppv, and F1 were 69.80%, 94.34%, 69.88%, and 0.694, respectively. However, the performance of model m was not as good as that of model b . The overall Acc was only 59.21%, the average Sen, Spe, Ppv, and F1 were 57.50%, 91.81%, 56.84%, and 0.569, respectively. According to the confusion matrices, the errors were mainly caused by the misclassifications of the similar categories. For example, AMI, ASMI, and ALMI manifest as similar abnormal waveforms in ECG [6], making it prone to misclassifications. Moreover, the similarities between IMI and ILMI also resulted in the classification errors. For model b , each classifier concentrates on the critical features of a specific category. It may help the model explore the special characteristics of each MI subcategory, which can reduce the errors caused by the aforementioned similarities. To summarize, although MI localization is a challenging task that requires superior generalization of the algorithm, the EvoMBN obtains acceptable results based on the evolutional architectures. In addition, the experiments have demonstrated the advantages of the implement method that combines a group of binary classifiers. It is beneficial for the GA to find the best individuals since each individual can be further optimized for a specific class. 
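Since model_b combines one binary classifier per class, its final label is simply the class whose classifier reports the highest positive probability; a minimal sketch of that aggregation step (with hypothetical probabilities) is:

```python
def localize(positive_probs):
    """positive_probs: dict mapping class name -> P(positive) from its binary classifier."""
    return max(positive_probs, key=positive_probs.get)

# Example with made-up probabilities:
print(localize({"HC": 0.05, "AMI": 0.32, "ASMI": 0.71, "ALMI": 0.40, "IMI": 0.12, "ILMI": 0.09}))
# -> "ASMI"
```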
Discussion In this section, the significant contributions of the EvoMBN are discussed based on a series of ablation experiments. Furthermore, to further verify the generalization of the algorithm, the architectures learned from the PTB database are transferred to the PTB-XL database without any changes. Moreover, a detailed comparison between the EvoMBN and the other existing methods is presented in the last part of this section. The Efficiency of the LSE and GA Optimization The LSE block is designed to replace the simple fully-connected layer of the conventional MBN skeleton. Then the architecture is further evolved by the GA iterations and achieves impressive performance in the experiments. The efficiency of these two strategies can be demonstrated by a series of ablation experiments. Figure 12 provides the results of the ablation experiments on MI detection. As the MI localization can be implemented by two methods, the ablation experiments using these two methods are performed. The results are shown in Figure 13a,b. Note that, all the ablation experiments are based on the inter-patient five-fold cross-validation. According to Figures 12 and 13, the LSE block and the GA optimization can improve the model performance to some extent. The overall accuracy of MI detection increases by 4.2% with the help of these two strategies according to Figure 12. For MI localization, the improvement is more significant. As illustrated in Figure 13, the accuracy of the model based on a single multi-class classifier has risen from 52.57% to 59.21%. In addition, the model that combines a group of binary classifiers achieves an accuracy of 51.80% without the LSE block and GA optimization, whereas its accuracy increases to 71.65% with the applications of the two strategies. Therefore, the efficiency of the LSE block and GA optimization can be verified by the obvious performance improvements. In particular, LSE can assign weights (excitations) to the leads, making the relevant leads more significant in the MI diagnosis. Thus, it is essential to analyze the excitation values of the 12 leads for different MI subcategories. Since the combination of binary classifiers achieves the best performance, the average lead excitation values across the five folds were computed for each MI subcategory based on these models, as presented in Figure 14. In addition, each lead corresponds to a special anatomical area of the heart [52], as illustrated in Table 3. A rough analysis was performed to check if the relevant leads are emphasized when diagnosing a specific MI subcategory. Table 3. Moreover, ST-segment changes in aVR are proved to be critical in the diagnosis of non-inferior MI and inferior MI [53]. Thus, aVR always has a fairly large weight (>0.7) in the MI localization, as shown in Figure 14. In the ASMI diagnosis, V2 has the largest excitation in the 12-lead system, which is associated with the septal aspect of the heart. Moreover, V3 and V4 are emphasized to a certain extent with weights greater than 0.8, corresponding to the anterior aspect. However, the LSE also assigns large weights to aVL and V6 (lateral aspect), making it more prone to misclassify ASMI as ALMI. This inference can be verified by the confusion matrix given in Figure 11. As for ALMI, the related leads include I, aVL, V5, V6, V1, and V2. Obviously, I and V6 are the most important leads for the LSE in ALMI detection according to the excitation values. 
Moreover, the emphasis on V3 results in the significant misclassification between ASMI and ALMI, as shown in Figure 11. In particular, II, III, and aVF are expected to have large weights in the IMI diagnosis. Actually, the LSE gives great excitation values to III and aVF. Similarly, II is regarded as one of the most critical leads for ILMI diagnosis according to the excitation values. Again, the inappropriate emphasis on V2 may lead to the considerable misclassification between ILMI and ASMI. In general, at least two relevant leads are emphasized by the LSE in the diagnosis of a specific MI subcategory, which can also indicate the efficiency of our LSE mechanism. Architecture Transferring To further evaluate the generalization of the automatically optimized model, the architectures learned from the PTB database were transferred to the MI diagnosis on the PTB-XL database. The branch networks trained on the PTB database were directly used to extract features and no additional training was performed. The architectures of the best fold in the five-fold cross validation were applied without any changes. Particularly, the implement method which is based on a combination of binary classifiers was used for MI localization, since it can achieve a better performance in the aforementioned experiments. Table 4 presents the detailed information on the transferred architecture. The LSE blocks that summarize all the features should be trained on the PTB-XL database, which can be regarded as a specific fine-tuning of the whole EvoMBN. Note that, the PTB-XL database recommends a train-test splitting method in [42] based on the inter-patient paradigm. Thus, all the experiments in this part adopted this splitting method to evaluate the models. To demonstrate the advantages of the EvoMBN, the model using conventional MBN skeleton was also employed to implement the MI diagnosis on the PTB-XL database. The confusion matrices are presented in Figures 15 and 16, corresponding to the MI detection and localization, respectively. Moreover, Acc, Sen, Spe, Ppv, and F1 score were computed, as shown in Tables 5 and 6. According to Tables 5 and 6, the EvoMBN shows better generalization than the conventional MBN. For MI detection, the EvoMBN achieves an overall accuracy of 90.80% and an F1 score of 0.936, whereas the overall accuracy and F1 score of the conventional MBN are 88.70% and 0.919, respectively. Furthermore, the EvoMBN obtains an overall accuracy of 75.18% and an F1 score of 0.546 in the MI localization. As for the conventional MBN, it achieves an accuracy of 70.79% and an F1 score of 0.530 in the MI localization. To summarize, the architecture learned from the PTB database still has advantages in the transferring experiments compared with the conventional MBN. It demonstrates the superior generalization of our EvoMBN. Comparison with the State-of-the-Art Models In this part, the proposed EvoMBN is compared with the other state-of-the-art methods for MI diagnosis using ECGs as listed in Table 7. Note that only the methods evaluated under the inter-patient paradigm are employed during the comparison. For the methods using conventional machine learning [16,54] should extract multiple hand-designed features to implement the MI diagnosis. In addition, they only perform MI detection on the PTB database. Considering their results for MI detection, the overall accuracies are only 81.71% and 92.69%, respectively. 
All the models in [24,27,28] employ the conventional MBN skeleton to implement the MI diagnosis without hand-designed feature extraction. The ML-Net in [24] achieves the best performance for MI detection and localization, according to the experimental results. However, the ML-Net concentrates on the detection and localization of GAMI, which only includes AMI, ASMI, and ALMI. Moreover, the MFB-CBRNNs in [27] are only evaluated by the MI detection experiments. The overall accuracy is less than 95%, whereas all the other MBN models can achieve better performances on MI detection (Acc > 95%). The ML-ResNet implements a more comprehensive MI diagnosis in [28]. For MI detection, it obtains an accuracy of 95.49% and an F1 score of 0.969. For MI localization, the accuracy and F1 score are 55.74% and 0.479, respectively. Note that, the ML-ResNet utilizes all five MI subcategories mentioned in this paper, but the performance still needs to be improved. In [43], a multi-lead attention model is proposed to detect and localize MIs. Using the five aforementioned MI subcategories, it demonstrates better generalization than the ML-ResNet, especially in the MI localization. The accuracies of MI detection and localization are 96.50% and 62.94%, respectively. All the aforementioned studies has been listed in Table 7. Considering all the aspects in Table 7, our EvoMBN shows significant advantages over the other methods. First, it is a DLmodel using the MBN skeleton, thus, no explicit feature engineering is required. Second, it employs a GA to automatically optimize the architecture to achieve a more accurate MI diagnosis. The efficient LSE mechanism can also improve the model generalization. Furthermore, it achieves a promising performance in the experiments and outperforms the other existing methods. On the PTB database, the overall accuracy and F1 score of MI detection are 97.11% and 0.983, respectively. For MI localization, the model obtains an accuracy of 71.65% and an F1 score of 0.694. To the best of our knowledge, the EvoMBN may be the first MI diagnosis model that is evaluated by the architecture transferring experiments. In detail, the accuracies of MI detection and localization are 90.80% and 75.18%, respectively. These superior results indicate the efficiency of the proposed method. Conclusions To overcome the limitations of the conventional MBNs, this paper develops an EvoMBN for MI diagnosis using ECGs. Using a novel fixed-length encoding method, it employs a GA to automatically optimize the architecture, which can be represented by an individual in a population. The operators are designed to implement the evolutional iterations, including crossover, mutation, selection, and fitness evaluation. In addition, a novel LSE mechanism is proposed to emphasize the critical leads for a specific MI subcategory. The model is evaluated under the inter-patient paradigm. Five-fold cross validation is performed on the PTB database. The GA optimization and LSE mechanism have shown superior efficiency in both MI detection and localization. The generalization of the model has been further verified by the architecture transferring experiment on the PTB-XL database. Therefore, the EvoMBN has the potential to assist in MI diagnosis in real-world applications as it shows good performance in all the experiments. In the future, the proposed model will be extended to the diagnosis of other CVDs. 
Moreover, the GA applied to the MBN should be further explored and improved to achieve better results, especially for MI localization. Institutional Review Board Statement: Not applicable. Conflicts of Interest: The authors declare no conflict of interest.
Effect of soaking prior to cooking on the levels of phytate and tannin of the common bean (Phaseolus vulgaris, L.) and the protein value. The effect of soaking in domestic processing, on the nutritive value of the common bean (Phaseolus vulgaris, L.) cv IAC-Carioca, was studied. Five treatments were carried out with experimental diets, and offered to male, recently weaned Wistar rats. The protein sources were, respectively, control diet (casein) (CC), casein plus the soluble solids found in the soaking water (CSS), freeze dried bean cooked without soaking (BNS), freeze dried bean cooked with the non-absorbed soaking water (BSW), freeze dried bean cooked without the non-absorbed soaking water (BSNW). and an aproteic diet (AP) for corrective purposes. The anti-nutritional factors (phytates and tannins), were determined in the differently processed beans and in the soaking water. The following values for the reduction of phytates were obtained: BNS (20.9%), BSNW (60.8%) and BSW (53.0%), and the tannins were reduced by: BNS (86.6%), BSNW (88.7%) and BSW (89.0%). No significant differences were observed between the various treatments using the common bean as protein source, with respect to the net protein ratio (NPR). With respect to the digestibility corrected by non-protein diet, values varying between 94.1% and 94.6% for casein, and between 57.5% and 61.4% for the common bean, were observed, the treatment BNS being more digestible. It was concluded that soaking did not interfere with the NPR of the experimental diets containing the common bean as protein source, nor did it reduce the tannin content. However soaking was capable of reducing the phytate levels in the common bean. On the other hand, soaking was unable to increase the protein digestibility of the common bean, since the treatment BNS showed the highest value for digestibility. The common bean (Phaseolus vulgaris, L.), important component of the diet in all social classes, has thus ac quired priority status amongst Brazilian agricultural products. It is cultivated by a wide range of agricultur ists, on different scales, using different production sys tems and in different physical and socio-economic envi ronments. Brazil is the largest world producer of beans, with an estimated annual production of three million tons, in an area of 4.1 million hectares (1). The elevated costs of producing protein of animal origin reflect in the use of other, cheaper protein sources, with the empha sis on the use of vegetables as a source of protein for human consumption. However, vegetable proteins are usually of lower quality than animal proteins, either as a result of lower digestibility, lack of essential amino acids or presence of substances which prejudice the bioavailability of proteins and minerals. According to Khokhar and Chauhan (2), domestic processing and cooking methods are known to reduce the anti-nutritional factors, improving the nutritional value of legumes. Some authors have observed that the soaking of beans made their cooking easier, reducing the cooking time. Other methods (3), such as removal of the cortex and cooking, autoclaving and pressure cook ing and germination, have been shown to be efficient in the reduction of the anti-nutritional factors of legumes. The question of whether to use soaking or not is contro versial, although the evidence tends to indicate its use. A further controversy is the question of whether to dis card the non-absorbed soaking water prior to cooking, or not. 
In practice, the use of both procedures can be observed, based purely on experience, with no scientific basis, such that more profound aspects, such as the ex istence of anti-nutritional factors and the relationship of these with the soaking process prior to cooking, have still not been well studied. The present study aimed at evaluating the effect of soaking prior to cooking on the phytate and tannin contents of beans, and the effect of these levels on the growth of experimental animals and on protein di gestibility, aiming at a better use of this legume in human feeding. Chemical characterization. MATERIALS AND The following composi tional characteristics were determined in the raw and cooked beans, with and without soaking: moisture ac cording to A.O.A.C. (4), crude protein by the semi micro Kjeldahl method (5), using a conversion factor of 5.4 for bean protein (6) and of 6.38 for casein, total lipids (7), ash (8), crude fibre (9) and carbohydrates by difference. The amino acid compositions of the processed bean cultivar and the casein used were deter mined according to the method of Spackman et al. (10) and Beckman Instruments (11). The triptophane was not determine, because it is destroyed in the acid hy drolysis. Phytate content was determined using the methodology proposed by Latta and Eskin (12). Biological assay. The assay lasted 15 d, 5 d being for adaptation and 10 d for the calculation of the net pro tein ratio (NPR) (16). The faeces of the animals were collected the first 5 d after adaptation, for a subsequent analysis of nitrogen and digestibility of the protein sources (17). All the nitrogen analyses were carried out in triplicate for each rat, also registering the weight gain and diet consumption during the period. The ani mals were submitted to the following diets: CC, casein control; CSS, casein+soluble solids: 7.50g of the solu ble solids, containing 2.69g phytate and 0.90g tannin, were added; AP, aproteic diet; BNS, bean cooked with out soaking; BSNW, bean cooked without the non-ab sorbed soaking water; BSW, bean cooked with the non absorbed soaking water. Statistical analysis. The results obtained in the bio logical assays and the chemical determinations were submitted to an analysis of variance (ANOVA) and to Tukey's means test, using the programme STATISTICA 6.0_??_, Stat Soft (Tulsa, OK, USA), considering p<0.05 as the minimum acceptable probability for the difference between the means. RESULTS AND DISCUSSION Chemical composition of the raw material Table 1 shows the proximate composition of the raw material used as protein source. The results found for the common bean are in agreement with those cited in the literature (18,19). The amino acid compositions of the processed bean cultivar (BNS) and the casein used are shown in the Table 2, and the results found are also in agreement with those determined by Oliveira (20). Contents of phytate and tannin The determinations of phytate and tannin were car ried out on the raw bean, the soaking water and on the differently processed beans, the results being presented in Table 3. With respect to the phytate content, it can be seen that treatments with soaking resulted in a reduc tion of these compounds. A reduction of 20.9% was ob tained for BNS, 60.8 for BSNW and 53.0% for BSW, as compared to the raw bean. A similar result was ob tained by other authors (3) when comparing the data of the raw bean (15.79mg/g) and soaked and cooked bean (8.34mg/g), showing a reduction of 47.18% of the phytate content. 
According to these authors, this reduction could be explained by the heat treatment (cooking), where an intensification of the fermentative effect occurs with respect to the phytic acid, decreasing the total content. Phytic acid is an abundant constituent of plants, ac counting for from 1 to 5% of the weight of legumes (21). The data for phytate found in Table 3 are in agree ment with those of Cheryan (21), who observed that the phytate content of raw beans was 1.38% and also those of Barampama and Simard (3), who found 16.50 mg/g phytate in bean. The cooking methods play an important role in the reduction of phytate. Other au thors (2,22) have shown that soaking, fermentation, autoclaving and boiling, all reduce the phytate content, by the action of the heat treatment or by the activation of phytases which act on phytic acid substrates, decom posing them. With respect to the tannin contents, it was shown that the cooking process effected a pronounced reduc tion of these components. This fact was also observed by other authors (3,23). The following reductions were obtained BNS 86.6%, BSNW 88.7% and BSW 89.0%. This reduction is probably due to diffusion of this anti nutrient into the water or by the formation of insoluble complexes between proteins and tannins. Salunkhe et al. (24) found a decrease in the tannin content in soaked beans. Considering these facts, it was concluded that the soaking procedure prior to cooking, with or without the use of the non-absorbed soaking water, did not present a significant effect on the reduction of tan nin, since the thermal process alone is capable of caus ing a considerable reduction in the content of this anti nutritional factor. Biological assay Table 4. Weight gain, diet consumption, protein consumption, and net protein efficiency ratio (NPR) in recently weaned rats fed diets containing casein or the common bean as protein sources. a,b,c Different letters in the same column indicate a statistical difference (p<0.05). * Casein+soluble solids . ** Bean cooked without soaking . *** Bean cooked without non -absorbed soaking water . **** Bean cooked with non -absorbed soaking water . Table 5. Determination of ingested nitrogen (IN), faecal nitrogen (FN), and absorbed nitrogen (AN) in recently weaned rats fed diets containing casein or the common bean as protein sources. a.b Different letters in the same column indicate a statistical difference (p<0 .05). * Casein+soluble solids . ** Bean cooked without soaking . *** Bean cooked without non -absorbed soaking water . **** Bean cooked with non -absorbed soaking water . Table 5 shows the data for faecal nitrogen (FN), in gested nitrogen (IN) and absorbed nitrogen (AN) in the biological assay carried out with the common bean. The animals fed on diets with the common bean as pro tein source excreted much more faecal nitrogen (4-5 times) than those fed on casein based diets. The differ ences found in the values for faecal nitrogen could be due to the lower digestibility associated with a higher faecal excretion of endogenous nitrogen in animals con suming bean based diets, even after cooking, as already observed by other researchers (25,26), also taking in account the fact that tannins interfere in the digestion of nutrients by stomachal competition, which explains in part this increase in the excretion of endogenous ni trogen by these animals (27). According to Marquez and Lajolo (26), there are var ious reasons, still not clearly explained, for the increase in faecal nitrogen excretion. 
The occurrence of probable interactions with the bean proteins or of the digestive enzymes with non-protein components present in the bean, such as fibre, carbohydrate and tannins (28), de crease the digestibility of the protein of these legumes. However, the tannins can be reduced by from 37.5 to 77.0% by procedures such as soaking and cooking (29). The common bean proteins presented a lower di gestibility than casein (Table 6), as widely described in the literature (30). However, it was shown that the do mestic procedures vary between themselves, conferring greater digestibility on the non-soaked bean, as com pared to the digestibility of the soaked beans, especially that in which the bean was cooked with the non-ab sorbed soaking water, this being in agreement with the literature (31). The values obtained for the apparent digestibility and digestibility corrected by non-protein diet of casein were respectively 93.0-94.6%, whilst for the common bean these values were 55.8-61.4%, in accordance with the literature (25). The lower values found for the bean, amongst other factors, confirm the possible role of greater endogenous nitrogen excretion, in the quality of the protein of legumes (25,32). Mendez et al. (33) sug gest that the food fibre present in legumes could form indigestible complexes with proteins and amino acids, mainly during heat treatment, reducing the availability Considering the results obtained for the phytate and tannin contents in the variously treated common bean samples, as presented in Table 3, there is apparently no explanation for the fact that the bean cooked without soaking showed a greater protein digestibility than that of the soaked beans (Table 6). One could consider the possibility that this be explained by the resulting struc ture of the soaked beans, and not by the discarding or otherwise of the non-absorbed soaking water, since this did not result in a difference in digestibility. This fact was observed by Melito
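For reference, the indices reported throughout these results reduce to simple formulas. The sketch below uses the usual definitions of NPR and non-protein-corrected digestibility associated with refs. (16, 17), and a statsmodels Tukey comparison in place of STATISTICA; the numeric values are placeholders, so treat this as an equivalent illustration rather than the authors' exact computation.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def npr(weight_gain_test, weight_loss_aproteic, protein_intake):
    # Net protein ratio (NPR): (gain of test group + loss of aproteic group) / protein consumed.
    return (weight_gain_test + weight_loss_aproteic) / protein_intake

def corrected_digestibility(n_ingested, n_faecal, n_faecal_aproteic):
    # Digestibility corrected by the non-protein diet: faecal N is reduced by the metabolic
    # faecal N measured in rats fed the aproteic diet.
    return 100 * (n_ingested - (n_faecal - n_faecal_aproteic)) / n_ingested

# Tukey's means test across diet groups at p < 0.05 (values and labels are placeholders).
values = np.array([2.1, 2.3, 2.2, 1.1, 1.2, 1.0, 1.3, 1.2, 1.1])
groups = ["CC"] * 3 + ["BNS"] * 3 + ["BSW"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```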
A novel fluorescent probe for real-time imaging of thionitrous acid under inflammatory and oxidative conditions Thionitrous acid (HSNO), a crosstalk intermediate of two crucial gasotransmitters nitric oxide and hydrogen sulfide, plays a critical role in redox regulation of cellular signaling and functions. However, real-time and facile detection of HSNO with high selectivity and sensitivity remains highly challenging. Herein we report a novel fluorescent probe (SNP-1) for HSNO detection. SNP-1 has a simple molecular structure, but showing strong fluorescence, a low detection limit, a broad linear detection range (from nanomolar to micromolar concentrations), ultrasensitivity, and high selectivity for HSNO in both aqueous media and cells. Benefiting from these unique features, SNP-1 could effectively visualize changes of HSNO levels in mouse models of acute ulcerative colitis and renal ischemia/reperfusion injury. Moreover, the good correlation between colonic HSNO levels and disease activity index demonstrated that HSNO is a promising new diagnostic agent for acute ulcerative colitis. Therefore, SNP-1 can serve as a useful fluorescent probe for precision detection of HSNO in various biological systems, thereby facilitating mechanistic studies, therapeutic assessment, and high-content drug screening for corresponding diseases. Introduction NO and H 2 S gases are endogenously generated by certain enzymes, and they affect a wide range of physiological and pathological processes [1][2][3][4]. Their biological effects can be the same and/or partially interdependent. As a result, various biochemical reactions may occur, which can negate, weaken, or enhance each other [5][6][7]. Previous studies indicated that the functions of NO and H 2 S inside our bodies are closely interlinked [8][9][10]. Therefore, there is increasing interest in exploring the interactions of H 2 S and NO in living organisms, to further understand the mechanisms underlying various physiological and pathophysiological reactions and processes using effective molecular tools [11,12]. It has been demonstrated that the NO/H 2 S cross-linking reactions can produce various reactive nitrogen species (RNS), such as nitroxyl (HNO), thionitrous acid (HSNO), nitroso-persulfide (SSNO − ), and S-nitroso-thiols (RSNO) [13,14]. Among them, thionitrous acid (HSNO), the smallest RSNO, has attracted much attention, since it can be used as a NO-H 2 S signaling molecule [15][16][17]. HSNO is the initially formed product by NO/H 2 S crosstalk reactions. Moreover, HSNO can diffuse through cellular membranes quickly, due to its small size, therefore it easily reach intracellular targets [16,18]. Consequently, HSNO can be regarded as a preferred RNS to study the crosstalk between H 2 S and NO. However, HSNO, formed as a mixture of rapidly interconverting isomers, possesses a high chemical reactivity due to either homolytic or heterolytic bond cleavage [18,19]. Because of its instability, it is highly challenging to develop effective methods for selective and sensitive detection of HSNO [20,21]. To date, different spectroscopic methods, such as UV-visible, Fourier transform infrared, 15 N NMR, and electrospray ionization time-of-flight mass spectrometry, have been employed for HSNO detection in the physiological environment [15]. However, these methods cannot be used for real-time detection and cellular imaging. 
In this regard, fluorescence techniques offer distinct advantages including high sensitivity, convenience, and high spatiotemporal resolution, in combination with microscopy. Analysis of some HSNO-relevant reactions revealed that HSNO has sulfane sulfur character and performs trans-nitrosation, similar to other RSNO compounds [22]. This preliminary information on the dual reactivity of HSNO enabled the development of a fluorescent probe (TAP-1) for HSNO detection [23]. TAP-1 is very sensitive and specific to the presence of HSNO, and it demonstrated effectiveness for visualizing cellular HSNO formation in HEK293 cells. However, the synthesis of TAP-1 is a complex and multi-steps procedure, involving multifarious and high cost starting materials. More importantly, the imaging capability of this probe remains to be examined in relevant disease models. To address the above issues, we aim to design a new fluorescence probe with a succinct structure for real-time and precision detection of HSNO in biological systems. HSNO possesses a high reactivity and it can "immediately" decompose into NO• and HS• (Fig. 1A) [15,18,24]. Consequently, NO•, as a part of the HSNO molecule can interact with o-phenylenediamine of SNP-1 to yield a N-nitroso adduct (benzotriazole). Meanwhile, H 2 S molecules are produced by the nucleophilic substitution reaction, which then reduce the azide group of the probe to an amine (Fig. 1A). Otherwise, SH radicals generated from HSNO homolysis can be transformed into a strong reducing compound H 2 S 2 , followed by rapid decomposition into H 2 S which also can reduce the azide group of SNP-1 [18,25]. Such dual nucleophilic and reducibile characteristics are unique to HSNO. NO, while H 2 S alone or other RSNO do not exhibit such duality. Therefore, the development of fluorescent probes based on the dual reactivity of HSNO is highly reasonable and feasible. Herein we developed a novel fluorescent probe (SNP-1) for HSNO detection (Fig. 1B). The SNP-1 fluorescence can be completely quenched by the activation of photo-induced electron transfer (PET) to o-phenylenediamine and the blockage of internal charge transfer (ICT) from the azide group [26,27]. As a result, SNP-1 exhibits very low background fluorescence. In the presence of HSNO, the electron withdrawn azide group will be quickly transferred to the electron-donating amine, which removes ICT blocking for the probe. Simultaneously, o-phenylenediamine of SNP-1 will be transformed to a benzotriazole derivative, thereby eliminating the fluorescence quenching effect of PET. The resulting product (SNP-G) exhibits strong fluorescence. In contrast to HSNO, H 2 S or NO only reacts with either o-phenylenediamine or azide groups, and the resulting products SNP-HS and SNP-NO do not show any significant fluorescence, because the state of the only one (out of two electrons) is changed during the electron transfer. Also, we speculate that our new probe can be turned on if H 2 S and NO coexist, which still leads to the HSNO formation. In this aspect, our probe SNP-1 is a very specific molecular sensor to probe HSNO under both physiological and pathological microenvironments. Design and synthesis of new fluorescent probe SNP-1 Briefly, the designed fluorescent probe SNP-1 was synthesized by the following steps ( Fig. 2A and Figs. S1-S9). First, compound 1 (4-bromo-1,8-naphthalic anhydride, commercial material) was coupled with 4nitrobenzene-1,3-diamine to form compound 2 [28]. 
The nitro group of compound 2 was reduced by stannous chloride dihydrate to yield compound 3. The bromine of compound 3 was then substituted by the azide group, forming SNP-1. Thus, SNP-1 could be synthesized by a straightforward method involving only three steps, all with high yields and low-cost starting materials.

Fluorescence measurements
The solutions of the various testing species were prepared from glutathione (GSH), cysteine (Cys), Na2S, GSNO, NaNO2, Na2S2, and pyrrolidine-NONOate in PBS buffer. The stock solution of Na2S2 was prepared in deionized water. Peroxynitrite (ONOO−) generation: a mixture of NaNO2 (0.6 M) and H2O2 (0.7 M) was acidified with HCl (0.6 M), and KOH (1.5 M) was added immediately to make the solution alkaline. Manganese dioxide (MnO2) was added and the resulting mixture was stirred vigorously at room temperature for 20 min to remove excess H2O2. The concentration of ONOO− was determined using the absorption at 302 nm. The stock solution of tert-butyl nitrite (t-BuONO) was prepared in ethanol. The stock solution of Angeli's salt was prepared in degassed 10 mM NaOH solution containing 50 μM diethylenetriaminepentaacetic acid (DTPA). The stock solution of HSNO (300 μM) was freshly prepared by mixing 1 mM GSNO and 300 μM Na2S in 50 mM PBS [23]. The stock solution of SNP-1 was prepared in DMSO. All of the test solutions need to be freshly prepared. In a test tube, PBS buffer and the stock solution of cetyltrimethylammonium bromide (CTAB) were mixed, followed by addition of a requisite volume of the testing species sample solution. The stock solution of SNP-1 was then added. To test the stability of SNP-G, fluorescence spectra of SNP-G (5 μM) in different buffers (pH 5 to 9), at different temperatures (10-60 °C), and with different incubation times (5 min-24 h) were recorded. Furthermore, the effects of different compounds, including Na2S (100 μM), GSNO (350 μM), AG (100 μM), and PAG (1 mM), on the fluorescence stability of SNP-G were also examined. The resulting mixture in every test tube was well shaken before scanning the fluorescence spectra of the sample. In the meantime, a blank solution containing no testing species was prepared and measured in the same manner.

Cell culture
The human hepatocellular carcinoma (HepG2) cell line (ATCC HB-8065) was cultured in MEM supplemented with 10% (v/v) FBS, 100 U/mL of penicillin, and 100 mg/mL of streptomycin at 37 °C in a humidified incubator with 95% air and 5% CO2.

In vitro cytotoxicity evaluation by CCK-8 assay
Cytotoxicity of SNP-1 was measured with the CCK-8 assay. After seeding of HepG2 cells in 96-well plates, cells were allowed to grow for 24 h. Then the medium was replaced with fresh medium and SNP-1 (0-60 μM) solutions were added. After 24 h of incubation, cells were washed three times with PBS to remove the excess probe. Culture medium containing 10% CCK-8 (100 μL, v/v) was added into each well. After incubation for 2 h at 37 °C, the plate was taken out of the incubator and placed in a plate reader to measure the absorbance of the samples at 450 nm. The cell viability was calculated by comparing the absorbance with that of the control.

In vitro fluorescence imaging in HepG2 cells
HepG2 cells were cultured according to the procedures of section 2.6 and seeded in 24-well plates. In these experiments, PAG was used as an H2S biosynthesis inhibitor and AG as an inducible nitric oxide synthase blocker. After 70% confluence, cells were treated with SNP-1 or SNP-1 + PAG/AG in FBS-free MEM at 37 °C for 60 min.
After removing the excess probe and washing cells with PBS, cells were treated with 100 μM HSNO (generated in situ from GSNO and Na2S), or with different concentrations of GSNO (40, 350, 400, and 800 μM) and Na2S (15, 100, 150, and 300 μM), correspondingly, in FBS-free MEM containing 50 μM CTAB for 30 min at 37 °C, while the control groups were treated with FBS-free MEM containing 50 μM CTAB. Cells were then washed with PBS and fixed with 4% paraformaldehyde for 10 min at room temperature, followed by fluorescence imaging under a confocal laser scanning microscope (CLSM). The blue channel of 4′,6-diamidino-2-phenylindole (DAPI) was recorded at 410-470 nm with excitation at 360 nm. The green channel of SNP-1 was recorded at 510-570 nm with excitation at 490 nm. In a separate study, the effects of NO/H2S sequential treatment on SNP-1 fluorescence in HepG2 cells were examined. To this end, cells were first incubated with SNP-1 (50 μM) plus PAG (1 mM) for 60 min, and fluorescence images were recorded. Then cells were treated with 175 μM GSNO for 30 min to acquire fluorescence images. After 60 min of incubation with FBS-free MEM to allow complete GSNO metabolism, cells were treated with 50 μM Na2S for 30 min, followed by fluorescence microscopy observation. According to similar procedures, cellular fluorescence signals were detected after sequential treatment with H2S and NO. As a positive control, fluorescence images were recorded after cells were treated with 175 μM GSNO and 50 μM Na2S simultaneously.

Animals
The procedures of the animal experiments in this study were approved by the Institutional Animal Care and Use Committee of the Third Military Medical University (Chongqing, China). Pathogen-free male BALB/c mice (6-8 weeks old, 19-21 g) were purchased from the Animal Center of the Third Military Medical University and housed in standard cages under standard conditions. All animals were acclimatized for one week before use.

In vivo biocompatibility evaluation of SNP-1
BALB/c mice (6-8 weeks old, 19-21 g) were administered SNP-1 by intravenous injection (50 mg/kg). Mice injected with saline were used as the control group. After administration, the body weight and behaviors of the mice were monitored each day. At the predefined time points (24 h and 30 days post administration), complete blood panel analysis and serum biochemistry tests, including aspartate aminotransferase (AST), alanine aminotransferase (ALT), urea (UREA), and creatinine (CREA), were conducted using the collected blood samples. The major organs (including heart, liver, spleen, lung, and kidney) of the mice were weighed for organ index calculation and fixed for histological analysis.

Fluorescence imaging of HSNO in colon and renal tissues by ex vivo imaging
Pathogen-free 6-8 week old male BALB/c mice were randomly assigned to the Control and HSNO groups. Mice in the control group were intravenously injected with 200 μL of saline, and mice in the HSNO group were intravenously injected with 200 μL of HSNO solution (300 μM, i.v.). After 45 min, both groups were intravenously injected with SNP-1 (10 mg/kg). Two hours later, mice were euthanized and dissected. Samples of colon and kidney were quickly excised, frozen in liquid nitrogen, embedded in optimal cutting temperature (OCT) cryoembedding medium, and subsequently cut into cryosections.
Then, the sections were washed with PBS three times and sealed with sealing liquid containing 4′,6-diamidino-2-phenylindole (DAPI) and an anti-fluorescence-quenching agent, followed by observation under the automatic digital slide scanning system.

Establishment of an acute ulcerative colitis model in mice
Acute ulcerative colitis in mice was induced by the addition of 3% (w/v) DSS to the drinking water for 7 days [29,30]. On day 1, mice were earmarked for identification. DSS solution was administered in the drinking bottles of the cages (5 mL of DSS solution per mouse per day), while mice in the control group received drinking water without DSS. On day 3 and day 5, bottles were emptied and filled with fresh DSS solution (5 mL of DSS solution per mouse per day); meanwhile, the leftover DSS solution from the bottles was measured to ensure that the changes in colitis activity were not due to differences in DSS consumption. The weight and disease activity index (DAI) (calculated from body weight decrease, stool consistency, and rectal bleeding) [30] were measured and assessed daily throughout the modeling period.

Establishment of a renal ischemia/reperfusion injury (IRI) model in mice
To establish a renal IRI model, mice were anesthetized (1% pentobarbitone, i.p.) and a heating blanket was used to maintain body temperature. Tissue-separating scissors were used to make a midline laparotomy incision through the avascular linea alba, giving access to the peritoneal cavity and exposing both kidneys. Bilateral renal blood flow was interrupted for 45 min by placing non-traumatic vascular clamps over the renal pedicles. Successful ischemia could be visually confirmed by a gradual uniform darkening of the kidney. After the period of ischemia, the clamps were removed and the kidneys rapidly changed color from a dark maroon to a healthy dark pink, suggesting successful reperfusion. The intestines were then replaced, followed by closing the peritoneum with a blanket stitch using 6/0 braided silk suture. Finally, iodine solution was applied to the surgical area to minimize the risk of post-operative infection [31,32].

Study on colonic HSNO in mice with acute ulcerative colitis
Acute ulcerative colitis in mice was induced by drinking water containing 3% DSS as aforementioned. First, we examined the colonic HSNO levels in mice with colitis. In this case, mice were randomly assigned to four groups: Control, normal mice; Colitis, DSS-induced colitis mice; Colitis + PAG, DSS-induced colitis mice treated with PAG (200 μmol/kg); and Colitis + AG, DSS-induced colitis mice treated with AG (2 mmol/kg). PAG (intraperitoneal injection, i.p.) and AG (subcutaneous injection, s.c.) were administered daily during the 7 days of DSS treatment. On day 8, all groups were intravenously injected with SNP-1 (10 mg/kg). Two hours later, mice were euthanized and dissected. Samples of colon were quickly excised, made into cryosections, and observed under the automatic digital slide scanning system as aforementioned. In another cohort study, mice were randomized to four different groups: Control, drinking water plus oral gavage of saline; Colitis, 3% DSS in drinking water plus oral gavage of saline; Colitis + 5-ASA (L), 3% DSS in drinking water plus oral gavage of 5-ASA (30 mg/kg); and Colitis + 5-ASA (H), 3% DSS in drinking water plus oral gavage of 5-ASA (75 mg/kg). 5-ASA was administered daily during the 7 days of DSS treatment. On day 8, all groups were intravenously injected with SNP-1 (10 mg/kg).
Two hours later, mice were euthanized and dissected. Samples of colon were quickly excised, made into cryosections, and observed under the automatic digital slide scanning system as aforementioned. Meanwhile, another section of colon tissue was excised from each mouse, fixed with 4% paraformaldehyde, and made into paraffin sections for hematoxylin and eosin (H&E) staining.

Study on renal HSNO in mice suffering from renal IRI
Renal IRI models in mice were established as aforementioned and the renal HSNO levels were examined. In this case, mice were randomly assigned to four groups: Control, normal mice; IRI, mice suffering from renal IRI; IRI + Na2S, mice suffering from renal IRI administered Na2S (100 μg/kg, i.v.); and IRI + AG + Na2S, mice suffering from renal IRI pretreated with AG (2 mmol/kg, s.c.) and then administered Na2S (100 μg/kg, i.v.). All groups were intravenously injected with SNP-1 (10 mg/kg) 24 h following surgery. Two hours later, mice were euthanized and dissected. Samples of kidney were quickly excised, made into cryosections, and observed under the automatic digital slide scanning system as aforementioned. In a separate study, mice were randomized to four different groups: Control, normal mice; IRI, mice suffering from renal IRI; IRI + IPoC, mice suffering from renal IRI treated with ischemic postconditioning (IPoC: immediately after 45 min of ischemia and prior to reperfusion, mice were subjected to 6 cycles of 10 s of reperfusion followed by 10 s of ischemia induced by clamping the left renal artery) [33]; and IRI + D-cys, mice suffering from renal IRI treated with D-cys (8 mmol/kg, oral gavage). After 24 h, all groups were intravenously injected with SNP-1 (10 mg/kg). Two hours later, blood samples were collected for the examination of serum levels of UREA and CREA. Then mice were euthanized and dissected. Samples of kidney were quickly excised, made into cryosections, and observed under the automatic digital slide scanning system as aforementioned. Meanwhile, part of the kidney tissue was excised from each mouse, fixed with 4% paraformaldehyde, and made into paraffin sections for H&E staining.

In vitro characterization of SNP-1
Fluorescence properties of SNP-1 were tested in the presence of HSNO. For this purpose, a mixture containing S-nitrosoglutathione (GSNO) and Na2S (prepared in PBS at pH 7.4) served as an HSNO source [15]. SNP-1 (5 μM) itself showed weak fluorescence emission. When SNP-1 was tested in an HSNO solution (100 μM), notable green fluorescence was observed, with the maximal emission wavelength at 540 nm. The fluorescence signal was time-dependent, showing the highest intensity at 7 min (Fig. 2B). The fluorescence turn-on could also be directly visualized under UV light (Fig. 2B). Also, the fluorescence properties of synthesized SNP-G were separately characterized, showing good quantum yields in different solvents (Table S1). Furthermore, the selectivity of SNP-1 for HSNO was tested by incubation with other reactive compounds, including glutathione (GSH), cysteine (Cys), GSNO, peroxynitrite (ONOO−), tert-butyl nitrite (t-BuONO), NaNO2, Angeli's salt (an HNO donor), Na2S, Na2SO3, MCPD (a persulfide donor) [34], Na2S2, Na2S3, and pyrrolidine-NONOate (an NO donor) [35]. In this case, only HSNO (generated in situ from GSNO and Na2S) triggered a notable increase in the fluorescence intensity (Fig. 2C). We also studied the sensitivity of SNP-1 towards HSNO.
For this purpose, SNP-1 at 10 μM was incubated with various concentrations of HSNO (the theoretical concentrations of HSNO were calculated based on the concentrations of GSNO and Na2S). The intensity of the SNP-1-derived fluorescence increased gradually as the HSNO content increased (Fig. 2D). Of note, the fluorescence intensity at 540 nm (F540 value) varied from 50 to 220 as the concentration of HSNO changed from 0.5 to 80 μM (Fig. 2E). Moreover, F540 values showed a linear dependence (with a Pearson correlation coefficient r = 0.96301) on HSNO concentrations varying from 0.5 to 10 μmol/L. Importantly, HSNO quantification with SNP-1 is consistent with the result based on a previously reported method (Fig. S10) [36]. Our data suggested that SNP-1 can be utilized to detect HSNO with a detection limit as low as 0.5 μM. Therefore, SNP-1 is potentially suitable for quantification of trace concentrations of HSNO. In addition, SNP-1 in PBS at various physiological pH values or at different temperatures proved to be stable, as indicated by negligible fluctuations of fluorescence (Fig. S11). Also, we found similar fluorescence emission spectra for SNP-1 after 10 min of incubation with HSNO at different temperatures (Fig. S12A). Nevertheless, higher temperatures could slightly accelerate the transformation of SNP-1 to SNP-G in the presence of HSNO (Fig. S12B).

Mechanisms responsible for HSNO-triggered fluorescence generation of SNP-1
To clarify the reaction triggering the fluorescence generation of SNP-1 in the presence of HSNO, we synthesized SNP-G (acting as a control) and then analyzed the reaction of SNP-1 with HSNO formed in situ from GSNO and Na2S using high-performance liquid chromatography (HPLC). The data obtained from this reaction confirmed the transformation of SNP-1 into SNP-G after reaction with HSNO (Fig. 2F). Of note, SNP-G was stable in different solutions, at various temperatures, and after incubation for varied time periods, since the fluorescence intensity showed negligible fluctuations under these different conditions (Figs. S13A-C).

Imaging of HSNO in living cells by SNP-1
Based on the excellent HSNO-responsive fluorescence properties of SNP-1, we then tested it for HSNO imaging in cells. Human hepatocellular carcinoma (HepG2) cells were incubated with 20 μM SNP-1 for 60 min and then rinsed with PBS three times. As expected, strong fluorescence was observed in cells after treatment with HSNO (generated in situ from GSNO and Na2S) for 30 min (Fig. 3A and B), while under the same conditions, cells treated with either an H2S donor (Na2S) or an NO donor (GSNO and sodium nitroprusside) alone exhibited very low fluorescence signals (Fig. 3A-B, Fig. S14), largely because their reactions with endogenous H2S/NO resulted in the formation of only a small number of HSNO molecules. To verify this hypothesis, cells incubated with SNP-1 were treated with the H2S biosynthesis inhibitor propargylglycine (PAG) [37] or the inducible nitric oxide synthase blocker aminoguanidine hydrochloride (AG) [38] for 60 min, followed by treatment with different concentrations of GSNO or Na2S to produce HSNO in situ. The fluorescence signal of SNP-1 in PAG-pretreated cells was weaker (Fig. 3C and D). By contrast, the fluorescence intensities of cells without PAG pretreatment depended notably on the GSNO concentration.
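As an aside, the quantification workflow described above (a linear calibration of F540 against HSNO concentration, followed by inversion of that calibration for unknown samples) can be made concrete with a short script. The sketch below is illustrative only: the fluorescence readings are placeholder values rather than the authors' raw data, and the assumption that the theoretical HSNO concentration equals that of the limiting reagent (1:1 GSNO/Na2S stoichiometry) is ours.

```python
# Minimal sketch (illustrative values only): linear calibration of SNP-1
# fluorescence at 540 nm (F540) versus HSNO concentration.
import numpy as np

def theoretical_hsno_uM(gsno_uM: float, na2s_uM: float) -> float:
    """Theoretical HSNO concentration from a GSNO/Na2S mixture, assuming the
    limiting reagent defines the yield (1:1 stoichiometry is our assumption)."""
    return min(gsno_uM, na2s_uM)

# Placeholder calibration points within the reported linear range (0.5-10 uM).
conc_uM = np.array([0.5, 1, 2, 4, 6, 8, 10])
f540    = np.array([52, 60, 75, 105, 135, 162, 190])  # hypothetical readings

slope, intercept = np.polyfit(conc_uM, f540, deg=1)   # least-squares line
r = np.corrcoef(conc_uM, f540)[0, 1]                  # Pearson r, cf. 0.96301 in the text

def estimate_hsno_uM(f540_sample: float) -> float:
    """Invert the calibration line to estimate HSNO in an unknown sample."""
    return (f540_sample - intercept) / slope

print(f"slope = {slope:.1f}, intercept = {intercept:.1f}, r = {r:.4f}")
print(f"Sample with F540 = 120 -> about {estimate_hsno_uM(120):.1f} uM HSNO")
```

In practice the calibration would be refitted from the measured F540 values, and samples falling above the linear range (about 10 μM here) should be diluted rather than extrapolated.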
Similarly, stronger fluorescence was observed in cells without AG pretreatment compared to cells pretreated with AG, while fluorescence in cells without AG pretreatment was Na2S concentration-dependent (Fig. 3E and F). Moreover, our experimental data confirmed that AG, PAG, Na2S, and GSNO had little influence on the fluorescence intensity of SNP-G (Fig. S13D). The differences in fluorescence intensities between cells with or without PAG/AG pretreatment indicated the potential intracellular production of HSNO. These results suggested that SNP-1 could detect HSNO in cells. Additionally, SNP-1 could react with intracellular HSNO formed by the reaction of exogenous H2S/NO (derived from Na2S/GSNO) and endogenous NO/H2S. Given that the above results showed that NO or H2S alone cannot transform SNP-1 into SNP-G, we further examined the possible effect of sequential treatment with NO/H2S or H2S/NO. As shown in Figs. S15-S16, the fluorescence signal of SNP-1 in cells simultaneously treated with GSNO and Na2S (HSNO formed in this case) was significantly stronger than that of cells sequentially treated with GSNO followed by Na2S, or Na2S followed by GSNO (without HSNO formation in both cases). These results suggested that the bright fluorescence signal in cells was mainly triggered by the reaction product of SNP-1 and HSNO. On the other hand, to examine the sensitivity of SNP-1 to HSNO, cells were incubated with SNP-1 for 60 min, followed by treatment with various concentrations of HSNO. It was found that the intracellular fluorescence intensity was proportional to the HSNO concentration (from 2 to 100 μM) in HepG2 cells (Fig. S17). Notably, SNP-1 can be utilized to detect exogenous HSNO in HepG2 cells at a level as low as 2 μM. Moreover, the physiological level of HSNO in HepG2 cells could also be detected by SNP-1 (Fig. S18). All these results indicated that SNP-1 can be used to detect HSNO in cells with high sensitivity and specificity.

[Fig. 4 caption, fragment: Control, drinking water + oral gavage of saline; Colitis, 3% DSS in drinking water + oral gavage of saline; Colitis + 5-ASA (L), 3% DSS in drinking water + oral gavage of 30 mg/kg 5-ASA; Colitis + 5-ASA (H), 3% DSS in drinking water + oral gavage of 75 mg/kg 5-ASA. Scale bar, 1 mm. (G) Quantitative analysis of fluorescence intensities. Data are presented as mean ± SD (n = 6). *p < 0.05, **p < 0.01, ***p < 0.001.]

Detection of colonic HSNO in mice with acute ulcerative colitis using SNP-1
Generally, impaired NO and H2S metabolism is associated with immune disorders [39,40]. Additionally, high levels of NO and H2S are present in colon tissues from patients with ulcerative colitis and have been reported to be associated with the occurrence and development of ulcerative colitis [41][42][43]. Thus, we detected HSNO in colitis mice and determined the changes of HSNO levels along the course of disease using the newly developed probe SNP-1. To test the capability of SNP-1 for detecting HSNO in colon tissues, mice were randomly assigned to two groups (Control and HSNO) and intravenously injected with saline and HSNO, respectively. The HSNO-treated mice showed significantly higher green fluorescence because of the SNP-1 response to HSNO in colon tissues (Fig. S19). Subsequently, we examined the imaging capability of SNP-1 in mice with dextran sulfate sodium (DSS)-induced colitis.
The fluorescence of the colitis group was 3.1 times higher than that observed for the control group (Fig. 4A and B), which corresponded to elevated HSNO levels in the colitis group, since both H2S and NO levels were found to be notably increased (Figs. S20A-B). Additionally, the fluorescence of the PAG- and AG-treated groups was only 1.52 and 1.69 times higher than that of the control group, respectively, mainly because of lower HSNO levels resulting from the suppressed generation of H2S/NO (Fig. 4A and B). Consistently, decreased H2S levels in the PAG-treated group and decreased NO levels in the AG-treated group were detected in colon tissues (Figs. S20A-B). All these results revealed up-regulation of HSNO in the colon tissues of mice with colitis, which could be controlled by the content of H2S/NO and identified using our new probe SNP-1. To further demonstrate changes of HSNO levels in colon tissues of mice with different degrees of colitis severity, 5-aminosalicylic acid (5-ASA) was used as a therapeutic agent [44,45]. Colitis was induced in mice by DSS, and the mice were treated with different formulations. Consistent with previous studies [46,47], 5-ASA treatment effectively reduced weight loss (Fig. 4C) and the disease activity index (DAI) (Fig. 4D). Furthermore, examination of hematoxylin and eosin (H&E)-stained sections showed that DSS-induced colitis caused significant damage to the colon structure, with epithelial disruption, goblet cell depletion, and significant granulocyte infiltration, which were improved after 5-ASA treatment (Fig. 4E). Decreased colonic H2S and NO levels, implying lowered HSNO levels, were detected in the 5-ASA-treated groups compared with the control group (Figs. S20C-D). As expected, the fluorescence intensities of SNP-1 were also weakened in the 5-ASA-treated groups (Fig. 4F and G). Additionally, a significant correlation between the fluorescence intensities of SNP-1 and DAI values (Spearman correlation coefficient r = 0.910, p < 0.01) was verified by correlation and regression analyses (Fig. S21). Together, these results demonstrated that the high HSNO level in colitis is reduced with the remission of colonic injuries and that the whole process can be monitored using SNP-1. This further confirmed the advantage of SNP-1 for precise HSNO imaging. Moreover, HSNO can serve as a biomarker to diagnose and assess colitis, while SNP-1 is a promising fluorescent probe for this purpose.

Imaging of renal HSNO in mice suffering from renal ischemia/reperfusion injury (IRI) using SNP-1
Subsequently, we examined whether the SNP-1 probe can be applied to other HSNO-related pathological conditions. In this respect, renal IRI was used, since H2S and NO play a critical role in its pathogenesis and treatment [48][49][50][51][52][53]. We first tested SNP-1 as an HSNO detection method in renal tissues and observed increased green fluorescence in the HSNO-treated group (Fig. S22), confirming the fluorescence response of SNP-1 to HSNO in renal tissues. Next, we established a renal IRI model in mice using a previously reported method [32]. The fluorescence intensity of the IRI group decreased by 41% relative to the control group (Fig. 5A and B), indicating a down-regulated HSNO level in renal tissues injured by ischemia-reperfusion. Meanwhile, the H2S level was found to be decreased in renal tissues in the IRI group compared with the normal control, although the corresponding NO levels increased (Figs. S23A-B). These findings are consistent with previous results showing that renal ischemia-reperfusion leads to lowered H2S levels [48,49,51] and elevated inducible nitric oxide synthase (iNOS) in kidneys [53,54].
On this basis, Na2S was intraperitoneally injected into IRI mice to exogenously increase the H2S level (Fig. S23A), thereby improving the HSNO level through the reaction between H2S and NO. As expected, the fluorescence of the IRI + Na2S group was notably higher (1.92 times relative to the control) than that of the IRI group, and this increase could be partially suppressed by AG pretreatment, since the NO level in the IRI + AG + Na2S group was lower than that of the IRI + Na2S group (Fig. S23B). Correspondingly, the fluorescence intensity observed in the IRI + AG + Na2S group increased only 1.29 times relative to the control group (Fig. 5A and B). Together, the high sensitivity of SNP-1 to HSNO in biological systems was fully demonstrated, since decreased HSNO levels resulting from H2S depletion in ischemia-reperfused kidneys [55] were accurately detected. Furthermore, only an increase in HSNO, but not an increase in NO alone, could improve the fluorescence intensity. These findings are in line with the in vitro results and adequately confirmed the high selectivity of SNP-1 for HSNO. To further explore the response of SNP-1 to purely endogenous variation of HSNO contents, ischemic postconditioning (IPoC) [33] and D-cystine (D-cys) [31] were administered to increase endogenous H2S levels for the treatment of renal IRI [56,57]. Compared with the IRI model group, IPoC and D-cys treatment significantly reduced representative renal function indicators, including UREA and creatinine (CREA) (Fig. 5C). Additionally, histopathological analyses revealed that the IRI group exhibited damage mainly to the structure of the renal tubules, with disordered arrangement, interstitial edema and hyperemia, necrotic tubules, casts formed from coagulated protein, and significant granulocyte infiltration, which were effectively alleviated in the IRI + IPoC and IRI + D-cys groups (Fig. 5D). Correspondingly, the fluorescence intensities of SNP-1 in the IRI + IPoC and IRI + D-cys groups were notably enhanced relative to the IRI group (Fig. 5E and F). In addition, the detected renal contents of H2S and NO are consistent with the SNP-1 fluorescence intensities (Figs. S23C-D), since the HSNO levels in the IRI + IPoC and IRI + D-cys groups were higher than that of the IRI group. These results further demonstrated the effectiveness of our new fluorescent probe SNP-1 for HSNO detection through imaging, even when HSNO levels fluctuate during various treatments.

Safety studies
Finally, the safety profile of SNP-1 was examined. In vitro cytotoxicity tests in HepG2 cells showed relatively high cell viability at tested concentrations up to 60 μM, indicating the good cytocompatibility of SNP-1 (Fig. S24). This result is in accordance with previous reports that this kind of naphthalic anhydride derivative is biocompatible at relatively low concentrations (<60 μM) [58,59]. Then, we evaluated the possible side effects of SNP-1 in mice after intravenous injection of SNP-1 at 50 mg/kg (fivefold higher than the dose used for imaging studies in mice). Different analyses were performed on day 1 or day 30 post-injection for short-term and long-term safety assessment, respectively. The results revealed no significant differences in body weight or major organ (heart, liver, spleen, lungs, and kidneys) indices between the control and SNP-1 groups (Figs. S25A-B and S26A-B). Complete blood counts showed normal hematological parameters for both groups.
Quantification of representative biomarkers relevant to hepatic and renal function indicated that treatment with SNP-1 did not lead to notable hepatotoxicity or nephrotoxicity (Figs. S25C-J and S26C-J). Moreover, examination of histopathological sections revealed no necrosis, congestion, hemorrhage, or distinguishable inflammatory lesions in the major organs (Figs. S27 and S28). Collectively, these results demonstrated that SNP-1 displays a good safety profile for intravenous injection.

Conclusion
In summary, we have designed and synthesized a novel fluorescent probe, SNP-1, for visualizing and quantifying HSNO. SNP-1 displayed excellent fluorescence performance for HSNO detection, owing to its rapid response, high selectivity, low detection limit, good quantum yield, and broad linear range. In cells, SNP-1 could effectively image exogenous and endogenous HSNO in HepG2 cells. Furthermore, SNP-1 enabled successful fluorescence imaging of HSNO changes in acute ulcerative colitis and renal ischemia/reperfusion injury in mice. Importantly, SNP-1 demonstrated a good safety profile. Consequently, HSNO displays high translational potential for the diagnosis and therapeutic assessment of HSNO-associated diseases, such as colitis and ischemia-reperfusion injury in different organs. Moreover, SNP-1 can serve as a promising HSNO probe for both mechanistic studies and high-content drug screening.

Declaration of competing interest
The authors declare no conflict of interest.
Rapid Reviews to Support Practice: A Guide for Professional Organization Practice Networks

Background. Occupational Therapists, among other healthcare decision makers, often need to make decisions within limited timeframes and cannot wait for the completion of large, rigorous systematic reviews and meta-analyses. Rapid reviews are one method to increase the integration of research evidence into clinical decision making. Rapid reviews streamline the systematic review process to allow for the timely synthesis of evidence; however, there is no single agreed-upon guide for the methodology and reporting of rapid reviews. Purpose. This paper proposes a rapid review methodology that is customized to professional organization practice and can feasibly be used by practice networks, such as those of the Canadian Association for Occupational Therapy, to conduct reviews. Implications. Practice networks provide a sustainable mechanism to integrate research evidence and foster communication amongst practitioners. This guide for conducting and reporting rapid reviews can be used across Occupational Therapy practice networks and similar groups to support the consistent and timely synthesis of evidence necessary to improve evidence-informed clinical decision making.

Introduction
Research evidence can take up to 17 years to be implemented into clinical practice (Morris et al., 2011). This delay in knowledge translation is prohibitive for Occupational Therapists who aim to integrate research evidence into their clinical decision making (Bennett & Bennett, 2000). To build capacity and improve the translation of research knowledge into Occupational Therapy policies and practice, it is important to understand what barriers are impeding the uptake of research evidence into clinical decision making. It has been consistently cited that a lack of timely and relevant research outputs, coupled with inadequate access to relevant sources, are among the largest barriers to integrating research into practice (Oliver et al., 2014; World Health Organisation, 2012). One way to improve the accessibility of research evidence and expedite its translation to clinical practice is through rapid reviews of the literature. Evidence syntheses, such as rapid reviews, have been suggested as an effective knowledge product for evidence-based decision making (Chambers et al., 2011; Grimshaw et al., 2012). Occupational Therapy practice areas that are developing very quickly and that require a rapid response (e.g., technological interventions, COVID-19 pandemic response, and the transition to telerehabilitation services) can be overwhelming for clinicians and decision makers (Dahl-Popolizio et al., 2020; Mattison et al., 2020; Wang et al., 2022). Professional practice networks provide a forum in which professionals can connect, communicate, and collaborate with others who have similar interests. Often these networks have a goal to promote excellence in practice and develop new knowledge or resources to improve practice (e.g., https://ppno.ca/; https://caot.ca/site/pd/otn?nav=sidebar). While reviews of the literature are often conducted within an academic setting, professional organization practice networks can play a leadership role in conducting relevant reviews that meet clinician needs and provide actionable evidence in an efficient manner.
If a rapid review is conducted by and for a specific practice network, it is more likely to result in actionable evidence that will be integrated into practice, while also building the capacity of professionals through the conduct of the review itself. Using rapid reviews, the Canadian Association for Occupational Therapists Technology for Occupation and Participation (CAOT TOP) practice network aims to provide timely information regarding the rapidly growing field of technology in practice and relevant information for practicing Occupational Therapists.

What is a Rapid Review?
The Canadian health and social care systems face increasingly complex challenges that require both the generation and amalgamation of knowledge in short periods of time. For example, policymakers often require evidence to support time-sensitive decision making pertaining to the efficiency, quality, and equity of programs and services. Systematic reviews provide a synthesis of existing evidence for a specific topic and are being increasingly employed in policymaking and practice (Bosch-Capblanch et al., 2012; Oxman et al., 2010); however, the time- and cost-intensive nature of systematic reviews poses a barrier to supporting time-efficient decision making (Oliver et al., 2014). Rapid reviews are one alternative to the cumbersome systematic review process, providing relevant evidence in a time- and cost-efficient manner (Tricco et al., 2015). Specifically, rapid reviews are a form of knowledge synthesis in which the steps taken in a systematic review are streamlined to produce actionable evidence in a shorter time (Polisena et al., 2015) (see Table 1 for key methodological differences between rapid and systematic reviews). Rapid reviews have been increasingly used in health systems policy-making, health-related intervention development, and health technology assessment (Harker & Kleijnen, 2012; Polisena et al., 2015; Watt et al., 2008). While rapid reviews provide a synthesis of information in a timely manner, they are not without their faults. In order to improve the timeliness of the review process, rapid reviews often sacrifice some of the rigour included in the review, resulting in a less accurate and robust review in comparison to a systematic review (Featherstone et al., 2015). Further, without a standardized methodology in place, as is seen for other review types such as systematic reviews (Higgins et al., 2019), it can be difficult to determine how best to conduct and report a rapid review. Based on previous reviews of the literature, numerous approaches to conducting rapid reviews have been identified (Haby et al., 2016; Hamel et al., 2021; Tricco et al., 2015). While other researchers and groups have proposed approaches for rapid reviews (Abrami et al., 2010; Featherstone et al., 2015; Khangura et al., 2012; Thomas et al., 2013; Tricco et al., 2016; Varker et al., 2015), there is no single agreed-upon methodology, and these approaches were primarily developed for rapid reviews conducted by individual researchers. This guide consolidates and builds on previous approaches and proposes a rapid review methodology that is tailored to professional organization practice and can be sustainably implemented within the CAOT TOP practice network.
The CAOT Practice Network: Technology for Occupation and Participation
As technology use continues to rapidly advance, engaging in evidence-based practice through the integration of research evidence into clinical decision making is particularly challenging for Occupational Therapists intending to use technologies in their practice to improve clients' occupational performance and engagement. As technology applications continue to expand and become increasingly integrated into daily occupations, Occupational Therapists' knowledge and skills within technology practice must also continually grow (Mattison et al., 2020; Wang et al., 2022). Within the CAOT, the practice network Technology for Occupation and Participation (TOP) aims to build capacity and take action in developing and implementing policies and practices involving technology by supporting research related to evaluating Occupational Therapy and participation outcomes with technology-oriented interventions (https://caot.ca/site/pd/networks/technology?nav=sidebar). The network is composed of volunteers, who typically have full-time jobs, and students with limited time, but who are eager to advance practice. As such, we included in our proposed methodology constraints that can help expedite the review process in a manner that can ensure, for example, the completion of one rapid review per year by the CAOT TOP Practice Network. Motivated by the purpose of strengthening Occupational Therapy practice relating to the development and provision of technology, we developed this guide to provide suggestions on how to plan, conduct, and disseminate rapid reviews in the context of professional organization practice networks. It is important to note that there is no "one-size-fits-all" approach for rapid reviews, and all suggestions within this guide are just that: suggestions for what may be included to streamline the review process (see Table 2).
Constraints placed on a review should depend on time and available resources while still upholding rigour in the review process.

Rapid Review Process: An Overview
Early and continued engagement with end-users is essential for rapid reviews to ensure that the limitations employed throughout still allow for actionable and relevant information for Occupational Therapists. Rapid reviews draw from systematic review methods and follow the same broad stages; however, rapid reviews can be streamlined at all review stages, from the research question to disseminating results. In order to maintain methodological rigour and transparent reporting, decisions and the rationale for all limits placed on a rapid review should be thoroughly documented (Plüddemann et al., 2018). See Table 2 for an overview of review stages and the approaches that can be taken to streamline those stages within a rapid review.

Table 2. Overview of Rapid Review Approaches
Topic selection and refinement. Rapid review approaches: those requesting a specific review should be included in topic refinement so the purpose of the review can be clearly defined, and assurances can be made from the onset that the resulting review answers key questions of importance to Occupational Therapy practice.
Develop search strategy. Rapid review approaches: to streamline the search, many rapid reviews place limits on date, language, study design, and geographical location. Many rapid reviews use a staged search approach in which they first limit the search to existing reviews (if they can answer the research question by summarizing existing reviews, the search stops here), then to studies with other designs that provide the most rigorous evidence to answer the research question. Potential bias introduced: excluding grey literature will invariably exclude unpublished data and negative results; excluding articles based on year of publication, language, or geographic location may not capture the full extent of the literature and may exclude potentially significant and relevant studies.
Screening and study selection. Rapid review approaches: common limits for rapid reviews include study screening and selection completed by a single reviewer. Potential bias introduced: less transparency and reproducibility.
Data extraction and risk of bias assessment. Rapid review approaches: common limits include data extraction and risk of bias assessment (where appropriate) completed by a single reviewer; data extraction is often limited to only the key study characteristics and outcomes needed to answer the key questions identified in stage 1. Potential bias introduced: increased errors.
Evidence synthesis and dissemination. Rapid review approaches: rapid reviews commonly result in a narrative summary of the literature; results often include implications for Occupational Therapists, policy or healthcare delivery recommendations, and limitations of the research. Rapid reviews can be reported in the way that best meets end-user needs (peer-reviewed journals, internal reports, rapid report briefs, social media, etc.).

Based on previous recommendations on how to conduct rapid reviews, we piloted a rapid review within the CAOT TOP Practice Network (MacPherson et al., 2022). Our decisions, and how this process will be implemented within the CAOT TOP practice network, will be described in the following steps where relevant. See Figure 1 for a process map outlining these steps.

Stage 1: Topic Selection and Refinement
Rapid reviews should be conducted using a person-centered and participatory action research approach, including Occupational Therapy scholars, clinicians, and other community members (e.g., patients, other health professionals, policy makers), to collaboratively identify and address health or social challenges faced by the community (Asaba & Suarez-Balcazar, 2018; Minkler & Wallerstein, 2011; Satcher, 2005). Professional organization practice networks are well poised to bridge the gap between scholars and clinicians, allowing for the co-creation of knowledge through rapid reviews towards developing applicable solutions to community-identified topics. By linking both theoretical and empirical knowledge with real-world issues within Occupational Therapy practice, all individuals within the CAOT TOP practice network (i.e., scholars and clinicians) can collaborate to impact both occupational science and Occupational Therapy practice (Asaba & Suarez-Balcazar, 2018). Within the CAOT TOP network, rapid reviews will only be conducted on topics nominated by an end-user (i.e., a member of the network or another Occupational Therapist or relevant community member) and agreed upon by the network as a high-value area for a rapid review. For example, topic ideas may be generated within TOP Practice Network knowledge sharing circles and via strategic planning surveys. Early and continued engagement with end-users allows researchers to narrow the scope of a review while ensuring that the question being answered is useful and relevant to knowledge users (Garritty et al., 2020). This is of the utmost importance because, in rapid reviews, there are often trade-offs dependent on end-user timelines.
To complete a review in a timely fashion, researchers should consult with end-users to narrow the scope of the review topic. Limiting the number of questions, interventions, and outcomes to be targeted in a review can decrease the time needed to answer those questions. In order to narrow the scope, it is imperative to understand the end-users' needs, their intended use of the review, and their timeframe (Hartling et al., 2015a; Khangura et al., 2012; Thomas et al., 2013). Within the CAOT TOP network, a brief abstract (3-5 sentences) will be posted internally to notify all network members of ongoing and upcoming reviews and to allow for iteration on the research question based on additional end-user needs. Other organizations are encouraged to do the same to allow the resulting review to have the greatest impact on policy and practice by being tailored to a variety of end-user needs.

Stage 2: Develop a Search Strategy
All rapid reviews should make available the full search strategy for at least one database they have searched (Moher et al., 2010). Two or more relevant databases are often searched for rapid reviews, with the following databases most commonly cited: PubMed, Embase, and the Cochrane Library (Abou-Setta et al., 2016; Polisena et al., 2015; Tricco et al., 2015). Additional databases that are commonly used within Occupational Therapy reviews and should be considered include CINAHL, Web of Science, and PsycInfo (e.g., Bauer et al., 2022; Marcotte et al., 2020; Wallis et al., 2020). Further, involving subject matter experts and healthcare librarians with experience in reviews can aid in the efficient development of a comprehensive search strategy (Dudden & Protzko, 2011; Lefebvre et al., 2019). Many rapid reviews streamline the systematic review process by limiting the dates, languages, geographical areas, or study designs within the search strategy (Abou-Setta et al., 2016; Hartling et al., 2015b; Polisena et al., 2015; Tricco et al., 2015). Grey literature (i.e., non-peer-reviewed reports) searches may be essential for certain topics, but excluding grey literature is another common way researchers can limit the search strategy if necessary. Based on the research experiences within the CAOT TOP, the following potentially relevant grey literature resources within the context of Canadian Occupational Therapy were identified: clinicaltrials.gov, theses and dissertations, professional associations' conference proceedings, and the McMaster Health Forum (https://www.healthsystemsevidence.org/; https://www.socialsystemsevidence.org/). Similar to Garritty et al. (2020), we recommend using a staged approach where appropriate, in which researchers start by identifying existing reviews of the literature. If a synthesis of existing reviews can be used to provide a robust answer to the research questions, the search stops here. If there are still gaps that the researcher and end-user feel are necessary to answer in this review, then the review can be updated by identifying recent studies that have been published since the most recent included review. Researchers can limit this additional search, if necessary, to only those study designs that provide the most rigorous evidence to answer their questions. If there are no current reviews of the literature, review authors can look at individual studies (but may limit the search to the study design that can most appropriately answer their question).
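Where a team wishes to script parts of a staged, limited search of this kind, a minimal sketch is given below. It assumes Biopython's Entrez interface and an NCBI-registered email address; the query terms, date window, and filters are hypothetical placeholders rather than a validated search strategy, and a health sciences librarian should still review any final strategy.

```python
# Minimal sketch (assumes Biopython is installed and an email is registered
# with NCBI). The query string and limits below are placeholders only.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # required by NCBI; replace with your own

# Stage 1 of a staged search: look for existing reviews first.
query = (
    '("occupational therapy"[MeSH Terms] OR "occupational therapy"[Title/Abstract]) '
    'AND (telerehabilitation[Title/Abstract] OR "mobile applications"[MeSH Terms]) '
    'AND (review[Publication Type] OR systematic[sb])'
)

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    mindate="2015", maxdate="2024", datetype="pdat",  # illustrative date limit
    retmax=200,
)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found; first IDs: {record['IdList'][:5]}")
# If existing reviews answer the question, the search can stop here; otherwise,
# a second, design-limited search for primary studies would follow.
```

A similar second search, limited to the most rigorous primary study designs, would be run only if the retrieved reviews leave gaps that end-users consider essential to answer.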
Stage 3: Screening and Study Selection As previously mentioned, it is important for a robust search to include multiple databases; however, this will likely result in overlap between the various databases. Within the screening step, management of data overlap through de-duplication must be conducted to prevent reviewers from screening one source multiple times (McKeown & Mir, 2021). Reviewers should consider using specialized software such as Covidence or reference managers such as EndNote, Zotero, or Mendeley to expedite the review process by automatically de-duplicating search results. Within a recent evaluation of the performance of different methods for de-duplicating references, McKeown and Mir (2021) noted that Covidence was among the most accurate and efficient methods for de-duplication, and it significantly outperformed reference management software. When choosing a software program to aid in the review process, McKeown and Mir (2021) note that reviewers should consider not only the de-duplication performance of a software, but also the availability of other program functionalities to aid in screening references, resolving conflicts, and data extraction. Covidence is one of the more comprehensive tools which can aid in all review steps, as such we recommend it to streamline the review process and reduce the need for the review team to use multiple software and programs to complete their rapid review. Following de-duplication, reviewers can screen each title and abstract then full text for eligibility. Whereas systematic reviews typically require two independent coders at the screening phase, rapid reviews can expedite the time to complete a review by limiting these phases to a single coder (Abou-Setta et al., 2016;Hartling et al., 2015b;Polisena et al., 2015;Tricco et al., 2015). It is important to note that, while use of a single reviewer has been estimated to reduce screening time by 60%, it also results in 8%-20% of eligible studies being missed (Edwards et al., 2002;Glasziou et al., 2002;Shemilt et al., 2016). Given this level of error, it is recommended that dual reviewers be used (when possible) in the screening phase to ensure the accuracy of the pool of articles to be synthesized. When this is not possible, we recommend that two reviewers dual screen approximately 25% of the studies (Garritty et al., 2020). Additionally, we recommend that authors include an audit trail (Carcary, 2009). Typically used in qualitative research, an audit trail can be applied to rapid reviews to improve transparency and trustworthiness of the research by providing a record of how the review was carried out and how decisions were arrived at by the reviewers (Carcary, 2009). By making available an audit trail, authors are challenged to be intentional with their record keeping and decision-making processes throughout the review. When beginning title and abstract screening, we recommend that a standardized screening form be developed which thoroughly defines the inclusion and exclusion criteria (Tricco et al., 2017). Eligibility criteria should be well-defined, use clear and unambiguous language, and thorough explanation and elaboration should be provided to reviewers to define concepts and terms. This explanation and elaboration document can support the reviewers with study selection and ensure that eligibility criteria are being applied consistently. 
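Returning to the de-duplication step described at the start of this stage, teams without access to specialized software can approximate it with a short script. The sketch below keys records on DOI (when present) or on a normalized title; it is a simplified stand-in for what tools such as Covidence or a reference manager do, and it will not catch every near-duplicate.

```python
# Minimal sketch: naive de-duplication of exported references by DOI or
# normalized title. A simplified stand-in for dedicated review software;
# near-duplicates with differing titles or missing DOIs may slip through.
import re

def normalize_title(title: str) -> str:
    """Lowercase and strip punctuation/whitespace so minor formatting
    differences between databases do not block a match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records: list[dict]) -> list[dict]:
    seen, unique = set(), []
    for rec in records:
        doi = (rec.get("doi") or "").lower().strip()
        key = doi if doi else normalize_title(rec.get("title", ""))
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Example with placeholder records exported from two databases.
records = [
    {"title": "Telerehabilitation in occupational therapy: a review", "doi": "10.1000/xyz123"},
    {"title": "Telerehabilitation in Occupational Therapy: A Review.", "doi": "10.1000/XYZ123"},
    {"title": "A different primary study", "doi": ""},
]
print(f"{len(records)} records in, {len(deduplicate(records))} after de-duplication")
```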
Additionally, the inclusion of content experts who have experience conducting reviews can be useful for an expedited screening process (Lefebvre et al., 2019). The screening form, along with the explanation and elaboration document, should be piloted and refined by the review team on 5 to 10 articles (Garritty et al., 2020), and an audit log should be used to document key decisions made (Tricco et al., 2017). By piloting these documents, the review team can identify potential discrepancies in how an eligibility criterion is being applied and provide additional examples and definitions where necessary. Eligibility criteria should be based on the research question; however, some limits commonly placed within rapid reviews to reduce the potential pool of eligible resources are as follows: date of publication, geographical location, language, age range, and study design (Garritty et al., 2020). If there are justifiable reasons to limit your search (e.g., you are reviewing apps, so you limit the date range to when the app stores were first introduced; no review team members speak another language; etc.), include these reasons within your audit log and place those constraints at the onset. Once these justifiable limits are placed on your search strategy, we recommend running a search without placing any additional arbitrary limits, and only applying additional limits if the pool of potential studies is too large for your given timeline; this can help to ensure that you are retrieving as many potentially relevant sources as possible prior to introducing bias with these arbitrary limits (Tricco et al., 2017). Previous work has described the screening phase as one of the most difficult and time-consuming phases of systematic reviews (Carver et al., 2013). To improve the flow within the screening process and reduce the clerical errors involved in keeping track of all potentially relevant articles, we recommend that those wanting to conduct a rapid review use abstract screening software such as Covidence and complete a PRISMA flow diagram to document the article search and selection process. If the review team does not have the resources available to purchase specialized software, Excel may also be used within the screening and data extraction steps of the review; however, use of specialized software such as Covidence can aid in streamlining the review process (McKeown & Mir, 2021). Covidence allows multiple reviewers to assess the eligibility of each source at the same time, select pre-specified reasons when excluding an article, track eligibility decisions as they are made, and remove an article from the review queue in real time (Kellermeyer et al., 2018). If two reviewers are included within the screening stage, Covidence has a conflict resolution workflow integrated into its design to allow all sources for which there was a conflict to be easily located and resolved (Kellermeyer et al., 2018). Further, Covidence produces a PRISMA flow diagram, thereby reducing the burden on researchers to manage the number of studies screened and reasons for exclusion.

Stage 4: Data Extraction & Risk of Bias Assessment
Data extraction. This stage in the rapid review process includes extraction of all relevant data from the resources identified as eligible during the screening stage. We at the CAOT TOP Practice Network will be working in collaboration with members of the network as well as Occupational Therapy students during the extraction stage.
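Before turning to how the extraction tool itself is developed, the sketch below illustrates one way an extraction form limited to a small set of end-user-prioritized items could be structured; the specific fields are hypothetical examples only and would be replaced by the items agreed upon with end-users for a given review.

```python
# Minimal sketch of a structured extraction record limited to a small set of
# end-user-prioritized items; the fields here are hypothetical examples only.
from dataclasses import dataclass, asdict
import csv

@dataclass
class ExtractionRecord:
    study_id: str          # citation or screening-software ID
    design: str            # e.g., RCT, cohort, existing review
    population: str        # brief description of participants
    intervention: str      # technology or intervention of interest
    outcomes: str          # only the outcomes prioritized by end-users
    key_findings: str      # one- or two-sentence summary
    extractor: str = ""    # who extracted (supports partial verification)
    verified: bool = False # set True once checked by a second reviewer

def write_extraction_csv(records: list[ExtractionRecord], path: str) -> None:
    """Save records to CSV so the team can review and verify a sample."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(records[0]).keys()))
        writer.writeheader()
        writer.writerows(asdict(r) for r in records)
```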
Those involved in creating the research question will collaboratively develop a customized data extraction tool which will help other reviewers extract relevant data and critically appraise the literature. The efficient translation of research evidence via rapid reviews into health systems requires community knowledge to ensure that the data being extracted are of interest to the community (Jull et al., 2017). Further, by working with those who are interested in the practice area, and including them in the data extraction process, we hope to build their capacity while also ensuring that the data collected in the review are relevant to those who will be using the review outcomes in practice. To streamline data extraction, reviewers should work with end-users to identify what information is necessary to extract, and the data extraction should be limited to a minimal set of required items (Tricco et al., 2017). Having engaged with Occupational Therapists, decision-makers, content experts, patients, and caregivers at the outset to develop specific research questions, we can prioritize which outcomes are most important and relevant to practice and ensure that all relevant information is collected and synthesized. Use of data extraction forms which provide explanation and elaboration on relevant items can ensure consistency in extraction (Garritty et al., 2020). Within the CAOT TOP Practice Network, when data extraction includes a novice reviewer, a network lead or expert reviewer will complete data extraction for a random 20% of studies. These data will be compared with those extracted by the novice reviewers to ensure the accuracy of data collection. We recommend a single extractor with verification, as past research has estimated that, compared to dual extraction, single extraction with verification resulted in 36% less time and 22% more errors; however, these errors rarely cause changes to a review's results or conclusions (Buscemi et al., 2006).

Risk of bias assessment. Critical appraisal of the quality of the research being synthesized and the quality of the methods employed for each included resource is standard within systematic reviews (Higgins et al., 2019); however, this step is conducted selectively within rapid reviews based on the aim of the review (Polisena et al., 2015). For example, when the purpose of a review is to scope the literature, not to evaluate specific effects, a risk of bias assessment may not be necessary (Tricco et al., 2017). When conducting a rapid review of reviews, researchers may wish to accept the summary assessment of risk of bias conducted within the existing reviews; however, they may wish to examine the risk of bias within the reviews themselves. As with data extraction, we recommend use of a single reviewer with verification by another reviewer when this step is necessary (Garritty et al., 2020). Different risk of bias and quality assessment tools are available and are often chosen based on the specific study designs being assessed. Below we provide a list of commonly used risk of bias tools (note this is not a comprehensive list): A Risk of Bias Assessment Tool for Systematic Reviews (ROBIS) (Whiting et al., 2016); the Physiotherapy Evidence Database (PEDro) scale (Maher et al., 2003) and Risk of Bias 2 (RoB 2) (Sterne et al., 2019) for RCTs; the Risk Of Bias In Non-randomised Studies of Interventions tool (ROBINS-I) (Sterne et al., 2016); and the Appraisal of Guidelines for Research & Evaluation Global Rating Scale (AGREE GRS) (Brouwers et al., 2012) to appraise clinical practice guidelines.
Additionally, the Critical Appraisal Skills Programme (CASP) (https://casp-uk.net/casp-tools-checklists/), the Scottish Intercollegiate Guidelines Network (SIGN) (https://www.sign.ac.uk/what-we-do/methodology/checklists/), and the Joanna Briggs Institute (JBI) (https://jbi.global/critical-appraisal-tools) provide a number of tools for different study designs, which can be found at their respective websites.

Stage 5: Evidence Synthesis & Dissemination

Results from research syntheses have the potential to play a major role within the healthcare system (Lavis et al., 2005). Whereas systematic reviews often conduct meta-analyses to answer causal questions, rapid reviews often provide a narrative synthesis of the included resources (Garritty et al., 2020; Tricco et al., 2017). Evidence synthesis and dissemination of rapid review outcomes should be tailored for use within specific contexts to ensure that research evidence can be effectively and efficiently implemented into clinical practice (Graham et al., 2006; Graham & Tetroe, 2010). How the evidence in a rapid review is synthesized depends on end-user needs and can include conclusions, recommendations, implications for policy, a table or repository of tools, etc. Experts recommend that all rapid reviews clearly state what steps were taken to streamline the process and discuss potential limitations arising from those methodological decisions (Abou-Setta et al., 2016; Hartling et al., 2015b; Watt et al., 2008). Examples of potential bias introduced by such methodological choices can be seen in Table 2. Dissemination of results involves communicating the results of the review to a specific audience with the goal of maximizing uptake and impact (Straus et al., 2013). As the purpose of rapid reviews is most often to help inform clinical and policy decision making, the ways in which they are disseminated should be customized for each review by considering the target audience and the anticipated impact of the review on practice (Tricco et al., 2017). This may include posting results on an organizational website, presenting at stakeholder meetings or workshops, publishing rapid response briefs (a summary of the best available evidence presented in direct response to the initial topic question) in Occupational Therapy practice magazines, publishing full reports in peer-reviewed journals, creating online databases or repositories, or sharing results via social media or email distribution to key knowledge users. Where important topics exist that are likely to evolve over time, we recommend "living" rapid reviews. Similar to living systematic reviews, living rapid reviews can provide up-to-date summaries which are updated as new information becomes available (Elliott et al., 2014; Kelly et al., 2022). For example, the research regarding mHealth apps is likely to grow continually. If wishing to develop a repository of apps for clinicians to recommend to clients, having a living review in which results are updated on an annual or semi-annual basis is warranted. Although producers of rapid reviews have access to the same dissemination tools and distribution channels as systematic reviews, they should prioritize the practical needs of the knowledge user over traditional or academic approaches to dissemination (Tricco et al., 2017).
While not all knowledge products will require in-depth discussion of the methods used, we recommend having available, at end-user request, a rapid review report and audit log detailing the methodologies, the strengths and weaknesses of the review process used, and the results which led to the conclusions within the final knowledge product presented to end-users (to improve transparency, these documents can be posted on online repositories such as the Open Science Framework, https://osf.io/). We also recommend acknowledging within all knowledge products that the results were garnered from a rapid review, along with the potential limitations this has for the impact of the review findings (Tricco et al., 2017). See Table 3 for a list of recommendations made within this guide.

Conclusion

Occupational Therapists, like other healthcare decision makers, require the timely synthesis of research evidence to improve their ability to integrate research evidence into their practice; however, traditional systematic reviews are too time and resource intensive. For example, within the field of health-related technology, technological advancement is likely to outpace research, resulting in limited utility of systematic reviews on outdated technology. Rapid reviews are an efficient way to translate valuable research evidence into policy and practice decision making; however, no standard methods exist for the conduct and dissemination of rapid reviews tailored to professional practice networks and Occupational Therapy practice. Practice networks and professional groups provide a sustainable mechanism by which rapid reviews can be conducted and efficiently communicated to end-users. This paper provides a guide to rapid reviews which will continue to be used within the CAOT TOP Practice Network, can be applied more broadly to Occupational Therapy rapid reviews, and can be used by other professional organizations. It is hoped that the recommendations presented in this article are helpful for the conduct of rapid reviews and professional organization practice, can help build capacity, and aid in the timely translation of relevant research evidence into clinical practice.

Table 3
Key Recommendations for Conducting Rigorous Rapid Reviews

Recommendation: Make transparent methodological choices to expedite the review process.
Explanation: These choices should be informed by stakeholder needs and must ensure that the review can appropriately answer key stakeholder-driven questions.

Recommendation: Use software to automate and track review steps.
Explanation: Use of computer software such as EndNote and Covidence can enhance the transparency and timeliness of a review by generating values for use in a PRISMA flow chart. Using this software can assist researchers by making various steps in the rapid review process more efficient.

Recommendation: Engage with end-users early and often.
Explanation: End-users should be the ones driving the research question, eligibility criteria, data extraction, and dissemination. It is integral to collaborate with end-users in decision making throughout the review process to ensure that the review meets end-user needs and to improve the uptake and impact of the review once completed.

Recommendation: Create explanation and elaboration documents to ensure consistency and transparency.
Explanation: Explanation and elaboration documents should be developed to define key terms, provide examples, and elaborate on specific eligibility criteria or items to be extracted. If reviewers have a thorough understanding of what information is needed and why (and have written guidance for when they are unsure), they are more likely to be able to consistently apply these criteria during the screening and extraction phases. Having these documents can also aid the rapid review report by providing a transparent record of which eligibility criteria and data extraction items were included and why.

Recommendation: Time permitting, use two reviewers for the screening stage.
Explanation: Given time and resources, you will have a more robust and rigorous review if title and abstract screening, then full-text screening, is conducted independently by two reviewers. If there is insufficient time, have screening completed by one reviewer and have ~25% double screened to ensure consistency.

Recommendation: Time permitting, have 20% of data extraction and risk of bias assessment verified by an expert coder.
Explanation: Having this additional check within data extraction can ensure that results are accurate and that the novice reviewer has consistently applied the data extraction form to all included resources.

Recommendation: Use experts throughout to streamline the process.
Explanation: Subject matter librarians can aid in developing the search strategy, review experts can aid in running the search and using review software, and content experts can aid in verification of screening and data extraction if necessary.

Recommendation: Create a rapid review report outlining key methodological choices and limitations, and provide a disclosure of limitations within all knowledge products.
Explanation: While not all end-users will value or need the full rapid review report, it is necessary for transparency and can aid end-users in decision making by providing insights into the methods used, the quality of the review and included studies, and limitations. Understanding the limitations of the methodological choices can aid decision makers in knowing the scope of the review and how far the results can be extended (i.e., can they make causal claims from your rapid review, or do they need to exercise caution when using these results to imply a causal relationship between an assistive device and a clinical outcome?).

Key Messages

• Rapid reviews, when conducted in a rigorous manner with transparent reporting, can be integral to the integration of research evidence into clinical decision making.
• This work synthesizes previous rapid review guides and proposes rapid review mechanisms relevant to professional organization practice. Use of this guide within the CAOT TOP network will help to build capacity and take action in implementing research-based technology into Occupational Therapy practice.
• It is anticipated that the rapid review guide presented may be easily adapted to other professional organizations and practice networks to improve the timely translation of knowledge to practice.
Inequivalent contact structures on Boothby-Wang 5-manifolds

We consider contact structures on simply-connected 5-manifolds which arise as circle bundles over simply-connected symplectic 4-manifolds and show that invariants from contact homology are related to the divisibility of the canonical class of the symplectic structure. As an application we find new examples of inequivalent contact structures in the same equivalence class of almost contact structures with non-zero first Chern class.

INTRODUCTION

The Boothby-Wang construction [2] associates to each symplectic manifold $(M, \omega)$, such that the symplectic form $\omega$ represents an integral class in $H^2(M; \mathbb{R})$, a contact structure $\xi$ on the circle bundle $X$ over $M$ whose Euler class is given by the class represented by $\omega$. In this article we are interested in the case where $X$ is a simply-connected closed 5-manifold. In Section 4 we will show that in this case the 4-manifold $M$ also has to be simply-connected and the Euler class $[\omega]$ indivisible. In addition, it follows that the integral homology of $X$ is torsion free. By the classification of simply-connected closed 5-manifolds due to D. Barden [1], it is possible to determine the 5-manifold $X$ up to diffeomorphism: $X$ is diffeomorphic either to the connected sum $\#k(S^2 \times S^3)$ or to $\#(k-1)(S^2 \times S^3)\,\#\,(S^2 \tilde{\times} S^3)$, depending on whether $X$ is spin or non-spin. Here $S^2 \tilde{\times} S^3$ denotes the non-trivial $S^3$-bundle over $S^2$. Moreover, the 5-manifold $X$ is spin if and only if $M$ is spin or the mod 2 reduction of the Euler class $[\omega]$ is equal to the second Stiefel-Whitney class of $M$. As a consequence of this diffeomorphism classification, one can construct Boothby-Wang contact structures on the same simply-connected 5-manifold $X$ using different simply-connected symplectic 4-manifolds $(M, \omega)$ and $(M', \omega')$. Up to the spin condition, the 4-manifolds only need to have the same second Betti number. In Section 3 we consider contact structures and almost contact structures on simply-connected 5-manifolds in general. In particular, we consider the notion of equivalence of these structures, i.e. when two such structures can be made identical by a sequence of deformations and self-diffeomorphisms of the manifold. We will show that two almost contact structures on a simply-connected 5-manifold are equivalent if and only if their first Chern classes have the same maximal divisibility. Since symplectic 4-manifolds exist in great number, it is likely that many of the induced Boothby-Wang contact structures on the same 5-manifold $X$ are not equivalent as contact structures, even if they are equivalent as almost contact structures. In Section 7 we will show that invariants derived from contact homology defined in [5] are related to the divisibility of the canonical class of the symplectic structure on the simply-connected 4-manifold. This is summarized in the main result, Corollary 43. It shows that the existence of inequivalent contact structures on simply-connected 5-manifolds with torsion free homology is connected to the geography question of simply-connected 4-manifolds with divisible canonical class. As an application we find new examples of inequivalent contact structures in the same equivalence class of almost contact structures with non-zero first Chern class. A related discussion has appeared in [18]. Inequivalent contact structures on simply-connected 5-manifolds with vanishing first Chern class have been found before by O. van Koert in [13]. Also I. Ustilovsky [19] found infinitely many contact structures on the sphere $S^5$ and F.
Bourgeois [3] on T 2 × S 3 and T 5 , both in the case of vanishing first Chern class. smooth, closed, oriented 5-manifold. For each pair of elements η, ξ in the torsion subgroup TorH 2 (X) there exists a linking number b(η, ξ) in Q/Z. These numbers define a skew-symmetric non-degenerate bilinear form b : TorH 2 (X) × TorH 2 (X) −→ Q/Z, called the linking form. Suppose that the 5-manifold X is simply-connected. Then the first integral homology group vanishes and the Universal Coefficient Theorem implies that there exists an isomorphism H 2 (X; Z 2 ) ∼ = Hom(H 2 (X), Z 2 ), via evaluation of cohomology on homology classes. Hence we can think of the second Stiefel-Whitney class w 2 (X) ∈ H 2 (X; Z 2 ) as a homomorphism The following theorem is the classification theorem for simply-connected 5-manifolds and was proved by Barden [1, Theorem 2.2] using surgery theory: Theorem 1. Let X, Y be simply-connected, closed, oriented 5-manifolds. Suppose that θ : H 2 (X) → H 2 (Y ) is an isomorphism preserving the linking forms on the torsion subgroups and such that w 2 (Y ) • θ = w 2 (X). Then there exists an orientation preserving diffeomorphism f : X → Y such that f * = θ. Since the linking number and the second Stiefel-Whitney class are homotopy invariants, it follows in particular that simply-connected, closed 5-manifolds which are homotopy equivalent are already diffeomorphic. It is possible to give a complete list of building blocks of simply-connected 5manifolds such that each simply-connected 5-manifold is a connected sum of some of those building blocks. In the following, we are particularly interested in simplyconnected 5-manifolds X whose integral homology is torsion free. By Poincaré duality and the Universal Coefficient Theorem the whole integral homology is torsion free if and only if the second homology H 2 (X) is torsion free. Simplyconnected 5-manifolds satisfying this condition have a simple structure, because they can be constructed using only two building blocks, which can be described in the following way. There exist up to isomorphism precisely two oriented S 3 -bundles over S 2 -the trivial bundle S 2 × S 3 and a non-trivial bundle denoted by S 2× S 3 . The manifold S 2× S 3 can be constructed as follows: Let B = S 2× D 3 denote the non-trivial D 3 -bundle over S 2 . Then the boundary ∂B is the non-trivial S 2 -bundle over S 2 , hence diffeomorphic to CP 2 #CP 2 . Let φ : ∂B → ∂B denote the orientation reversing diffeomorphism obtained by interchanging the summands of ∂B. Then the 5-manifold S 2× S 3 is obtained by gluing together two copies of B along their boundaries via the diffeomorphism φ. In particular, the manifold S 2× S 3 is nonspin, because a spin structure would induce a spin structure on B and hence on ∂B, which is non-spin. It follows from the list of building blocks in Barden's article [1] that S 2 ×S 3 and S 2× S 3 are the only building blocks with torsion free second integral homology. Hence every simply-connected 5-manifold with torsion free homology decomposes as a connected sum of several copies of these two manifolds. Moreover, one can show with Theorem 1 that there exists a diffeomorphism hence in every non-spin connected sum one S 2× S 3 summand suffices. This implies: Proposition 2. Let X be a simply-connected closed oriented 5-manifold with torsion free homology. Then X is diffeomorphic to The empty sum in (a) for b 2 (X) = 0 is the 5-sphere S 5 . 
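For ease of reference, the two cases in Proposition 2 can be written out explicitly (this is the standard form of Barden's classification in the torsion-free case, with notation as above): setting $b = b_2(X)$,

\[
X \;\cong\;
\begin{cases}
\#\,b\,(S^2 \times S^3), & X \text{ spin},\\[3pt]
\#\,(b-1)\,(S^2 \times S^3)\;\#\;(S^2 \tilde{\times} S^3), & X \text{ non-spin},
\end{cases}
\]

where, as noted above, the empty connected sum in the spin case with $b = 0$ is the 5-sphere $S^5$.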
CONTACT STRUCTURES ON SIMPLY-CONNECTED 5-MANIFOLDS Let X 2n+1 denote a connected, oriented manifold of odd dimension. By definition, an almost contact structure on X is a rank 2n-distribution ξ ⊂ T X together with a symplectic structure σ on the vector bundle ξ → X. A contact structure is an almost contact structure such that the symplectic form σ on ξ is of the form (dα)| ξ , where α is a nowhere vanishing 1-form on X that defines ξ in the sense that the kernel distribution ker α equals ξ. The 1-form α is called a contact form. If (ξ, σ) is an almost contact structure, we can choose a complex structure on ξ compatible with the symplectic form σ and hence define Chern classes c k (ξ) ∈ H 2k (X). These classes do not depend on the choice of compatible complex structure, because the space of complex structures compatible with a given symplectic form is contractible. However, they depend on the choice of symplectic structure. For a contact structure we can choose complex structures compatible with the symplectic form (dα)| ξ for a defining 1-form α. Since any two defining 1-forms only differ by multiplication with a nowhere zero function on X, it follows that the Chern classes c k (ξ) of a contact structure depend only on the contact distribution ξ, not on the choice of contact form α. The first Chern class of an almost contact structure ξ is related to the second Stiefel-Whitney class of the manifold X in the following way: Lemma 3. Let ξ be an almost contact structure on X. Then c 1 (ξ) ≡ w 2 (X) mod 2. Proof. By the Whitney sum formula for T X = ξ ⊕ R, Since ξ → X is a complex vector bundle, with complex structure compatible with σ, we have w 2 (ξ) ≡ c 1 (ξ) mod 2. This implies the claim. Suppose that ξ t , for t ∈ [0, 1], is a smooth family of contact structures on a closed manifold X. We can choose a smooth family of 1-forms α t defining ξ t . Using the Moser technique, one can prove that there exists a smooth family ψ t of self-diffeomorphisms of X with ψ 0 = Id X such that ψ * α t = f t α 0 , for smooth functions f t on X [16]. This implies the following theorem of J. W. Gray [8]. Because of this theorem, we call contact structures ξ, ξ ′ which can be deformed into each other by a smooth family of contact structures isotopic. We call almost contact structures homotopic, if they can be connected by a smooth family of almost contact structures. The contact structures in an isotopy class or the almost contact structures in a homotopy class all have the same Chern classes. We can also consider (almost) contact structures ξ, ξ ′ which are permuted by an orientationpreserving self-diffeomorphism ψ of X, in the sense that ψ * ξ ′ = ξ. Definition 5. We call almost contact structures and contact structures on an oriented manifold X equivalent, if they can be made identical by a combination of deformations (homotopies and isotopies, respectively) and by orientation-preserving self-diffeomorphisms of X. The existence question for almost contact structures on 5-manifolds was settled by the following theorem of Gray [8]. Theorem 6. Let X be a closed, orientable 5-manifold. Then X admits an almost contact structure if and only if W 3 (X) = 0. Here W 3 (X) ∈ H 3 (X) is the third integral Stiefel-Whitney class, defined as the image of w 2 (X) under the Bockstein homomorphism. The existence of contact structures on simply-connected 5-manifolds was proved by H. Geiges [6]. He also proved a classification theorem for almost contact structures on simply-connected 5-manifolds up to homotopy: Theorem 7. 
Let X be a simply-connected, closed 5-manifold. (a) Every class in H 2 (X) that reduces mod 2 to w 2 (X) arises as the first Chern class of an almost contact structure. Two almost contact structures ξ 0 , ξ 1 are homotopic if and only if c 1 (ξ 0 ) = c 1 (ξ 1 ). (b) Every homotopy class of almost contact structures admits a contact structure. A different proof for the existence of contact structures on simply-connected 5manifolds can be found in [13,14]. The fact, that two almost contact structures are homotopic if they have the same first Chern class holds more generally for closed, oriented 5-manifolds without 2-torsion in H 2 (X). For a proof see [9,Theorem 8.18]. We want to prove the following theorem, which is a consequence of Barden's classification theorem. Theorem 8. Suppose that X is a simply-connected, closed, oriented 5-manifold. Let c, c ′ ∈ H 2 (X) be classes with the same divisibility and whose mod 2 reduction is w 2 (X). Then there exists an orientation preserving self-diffeomorphism φ : X → X such that φ * c ′ = c. Here divisibility means the maximal divisibility as an element in the free abelian group H 2 (X). The divisibility is zero if and only if the class is zero itself. The proof of the theorem uses the following lemma. Lemma 9. Let G be a finitely generated free abelian group of rank n. Suppose α ∈ Hom(G, Z) is indivisible. Then there exists a basis e 1 , . . . , e n of G such that α(e 1 ) = 1 and α(e i ) = 0 for i > 1. Proof. The kernel of α is a free abelian subgroup of G of rank n−1. Let e 2 , . . . , e n be a basis of kerα. The image of α in Z is a subgroup, hence of the form mZ. Since α is indivisible, m = 1, so there exists an e 1 ∈ G such that α(e 1 ) = 1. The set e 1 , . . . , e n is linearly independent. They also span G, because if g ∈ G is some element, then α(g − α(g)e 1 ) = 0, hence g = α(g)e 1 + i≥2 λ i e i . We can now prove Theorem 8. Proof. By the Universal Coefficient Theorem, H 2 (X) ∼ = Hom(H 2 (X), Z) since X is simply-connected. Hence we can view c, c ′ as homomorphisms on H 2 (X) with values in Z. Let p : Z −→ Z 2 denote mod 2 reduction. The assumption on c and c ′ is equivalent to as homomorphisms on H 2 (X) with values in Z 2 . Since c and c ′ have the same divisibility, we can write c = kα, c ′ = kα ′ with α, α ′ ∈ Hom(H 2 (X), Z) indivisible. Let H 2 (X) = G ⊕ TorH 2 (X) with G free abelian. Since c and c ′ are homomorphism to Z they vanish on TorH 2 (X). By Lemma 9 there exist bases e 1 , . . . , e n and e ′ 1 , . . . , e ′ n of G such that α(e 1 ) = 1 = α ′ (e ′ 1 ), α(e k ) = 0 = α ′ (e ′ k ) ∀k > 1. Let θ be the group automorphism of H 2 (X) given by θ(e k ) = e ′ k for all k ≥ 1, and which is the identity on TorH 2 (X). Then Hence c ′ • θ = c on the free abelian subgroup G. This equality holds on all of H 2 (X) because c and c ′ vanish on the torsion subgroup. By the assumption above, this implies that w 2 (X) • θ = w 2 (X). Moreover, since θ is the identity on TorH 2 (X), it preserves the linking form. By Theorem 1, the automorphism θ is induced by an orientation preserving self-diffeomorphism φ : We get the following corollary for almost contact structures. Corollary 10. Let X be a simply-connected, closed, oriented 5-manifold. Then two almost contact structures ξ 0 and ξ 1 on X are equivalent if and only if c 1 (ξ 0 ) and c 1 (ξ 1 ) have the same divisibility in integral cohomology. One direction is clear, because homotopies do not change the Chern class and self-diffeomorphisms of the manifold do not change the divisibility. 
The other direction follows from Theorem 8 and the first part of Theorem 7. Definition 11. For an almost contact structure ξ on a simply-connected 5-manifold X, we denote the divisibility of c 1 (ξ) as a class in the free abelian group H 2 (X) by d(ξ). We call d(ξ) the level of the almost contact structure ξ. By Corollary 10, almost contact structures and hence contact structures on a simply-connected 5-manifold X naturally form a "spectrum" consisting of levels which are indexed by the divisibility of the first Chern class. Two contact structures on X are equivalent as almost contact structures if and only if they lie on the same level. By Lemma 3, simplyconnected spin 5-manifolds have only even levels and non-spin 5-manifolds only odd levels. In Section 7, we will use invariants from contact homology to investigate the "fine-structure" of contact structures on each level in this spectrum. For instance, O. van Koert [13] has shown that for many simply-connected 5-manifolds the lowest level, given by divisibility 0, contains infinitely many inequivalent contact structures. TOPOLOGY OF CIRCLE BUNDLES In this section we collect some results on the topology of circle bundles. In particular, we determine which simply-connected closed 5-manifolds can arise as circle bundles over 4-manifolds. Let M be a closed, connected, oriented n-manifold. Suppose that π : X → M is the total space of a circle bundle over M with Euler class e ∈ H 2 (M ). For the following proofs we will need two results which are probably well known, but included here for completeness. The first result is related to the exact Gysin sequence [17]: The homomorphism π * is called integration along the fibre and can be characterized in the following way. Lemma 13. Integration along the fibre π Proof. We only sketch the proof. Let π : D → M denote the disc bundle with Euler class e. Then X ∼ = ∂D and integration along the fibre is given by (see [17]) Here δ denotes the connecting homomorphism in the long exact sequence of the pair (D, ∂D) and τ −1 the inverse of the Thom isomorphism. The proof follows from this. The second result is related to the long exact homotopy sequence associated to the fibration Lemma 14. The map ∂ : π 2 (M ) → π 1 (S 1 ) ∼ = Z in the long exact homotopy sequence for fibre bundles is given by where h denotes the Hurewicz homomorphism. Proof. Let f : S 2 → M be a continous map and E = f * X the pull-back S 1 -bundle over S 2 . By naturality of the long exact homotopy sequence there is a commutative diagram Since f can represent any element in π 2 (M ) and the equation f * (e(X)) = e(E) holds by naturality of the Euler class it suffices to prove the claim for M equal to S 2 . We then have to prove that the map ∂ : By the exact sequence above it follows that π 1 (S 1 ) = Z maps surjectively onto π 1 (E). Hence π 1 (E) is an abelian group. Therefore we have to prove that is equal to Z/aZ. This follows from the following part of the Gysin sequence: Proof. Consider the following part of the Gysin sequence: This shows that e is indivisible if and only if π * : H n (X) → H n−1 (M ) is an isomorphism, in other words is an isomorphism. The long exact homotopy sequence of the fibration S 1 → X → M induces by abelianization an exact sequence Hence we see that e is indivisible if and only if the fibre S 1 ⊂ X is null-homologous. From the long exact homotopy sequence above, we see that the fibre is nullhomotopic if and only if ∂ : π 2 (M ) → π 1 (S 1 ) is surjective. 
By Lemma 14, this happens if and only if e, − is surjective on spherical classes. Both statements are equivalent to π * : π 1 (X) → π 1 (M ) being an isomorphism. Lemma 16. X is simply-connected if and only if M is simply-connected and e is indivisible. Proof. If X is simply-connected, the long exact homotopy sequence shows that π 1 (M ) = 1 and ∂ : π 2 (M ) → π 1 (S 1 ) is surjective. Hence M is simply-connected and the surjectivity of ∂ implies that e is indivisible. Conversely, suppose that M is simply-connected and e is indivisible. Then the Hurewicz map h : π 2 (M ) → H 2 (M ) is an isomorphism and it follows that ∂ is surjective. The long exact homotopy sequence then implies the exact sequence 1 → π 1 (X) → 1. Hence π 1 (X) = 1. Lemma 17. Suppose the first Betti number of Proof. We consider the following part of the Gysin sequence: By assumption, Proof. We consider the following part of the Gysin sequence: Since integration along the fibre is by Lemma 13 Poincaré dual to this proves the claim. We now determine when the total space X is spin. Proof. We claim that the following relation holds: This follows because the tangent bundle of X is given by T X = π * T M ⊕ R and the Whitney sum formula implies We consider the following part of the Z 2 -Gysin sequence: where e denotes the mod 2 reduction of e. Hence the kernel of π * is {0, e}. This implies the claim. We now specialize to the case where the dimension of M is equal to 4. Theorem 20. Let X be a simply-connected closed oriented 5-manifold which is a circle bundle over a closed oriented 4-manifold M . Then M is simply-connected and the Euler class e is indivisible. Moreover, the integral homology and cohomology of X are torsion free and given by: Proof. We only have to prove that the cohomology of X is torsion free and the formula for H 2 (X). The cohomology groups H 0 (X), H 1 (X) and H 5 (X) are always torsion free for an oriented 5-manifold X. We have the following part of the Gysin sequence: By assumption, H 3 (M ) = 0. Therefore the homomorphism π * injects H 3 (X) into H 2 (M ), which is torsion free by the assumption that M is simply-connected. Hence H 3 (X) is torsion free itself. It remains to consider H 2 (X) and H 4 (X). With Proposition 2, we get the following corollary (this has also been proved in [4]). Corollary 21. Let M be a simply-connected closed oriented 4-manifold and X the circle bundle over M with indivisible Euler class e. Then X is diffeomorphic to The first case occurs if and only if M is spin or w 2 (M ) ≡ e mod 2. Since every closed oriented 4-manifold is Spin c and hence w 2 (M ) is the mod 2 reduction of an integral class, it follows as a corollary that every closed simplyconnected 4-manifold M is diffeomorphic to the quotient of a free and smooth S 1 -action on #(b 2 (M ) − 1)S 2 × S 3 . THE BOOTHBY-WANG CONSTRUCTION We want to construct circle bundles over symplectic manifolds M whose Euler class is represented by the symplectic form. Since the Euler class is an element of the integral cohomology group H 2 (M ), the symplectic form has to represent an integral cohomology class in H 2 (M ; R), i.e. it has to lie in the image of the natural homomorphism By multiplication with a suitable integer, we can find a symplectic form which represents an integral class. If we want, we can choose the integer such that the class is indivisible. Note also that all symplectic forms in ω + B ǫ can be connected to ω by a smooth path of symplectic forms. This implies that they all have the same canonical class K as ω. 
We fix the following data: [ω] Z . By a theorem of Kobayashi [12] we can choose a U (1)-connection A on X −→ M whose curvature form F is 2π i ω. Then A is a 1-form on X with values in u(1) ∼ = iR which is invariant under the S 1 -action and there are the following relations, coming from the definition of a connection on a principal bundle: Here R denotes the fundamental vector field generated by the action of the element 2πi ∈ u(1). An orbit of R, topologically a fibre of X, has period 1. Proof. We have the relations This implies the corresponding relations for λ. The tangent bundle of X splits as T X ∼ = R ⊕ π * T M , where the trivial R-summand is spanned by the vector field R. Hence λ ∧ (dλ) n is a volume form on X, and λ is contact. Remark 23. If we define the orientation on X via the splitting T X ∼ = R ⊕ π * T M , where the trivial R-summand is oriented by R and T M by ω, then λ is a positive contact form if n is even and negative otherwise. Definition 24. The contact structure ξ on the closed oriented manifold X 2n+1 , defined by the contact form λ above, is called the Boothby-Wang contact structure associated to the symplectic manifold (M, ω). Since dλ(R) = 0, the Reeb vector field of λ is given by the vector field R along the fibres. For the original construction see [2]. Proposition 25. If λ ′ is another contact form, defined by a different connection A ′ as above, then the associated contact structure ξ ′ is isotopic to ξ. Proof. The connection A ′ is an S 1 -invariant 1-form on X with Hence A ′ − A = π * α for some closed 1-form α on M . Define A t = A + π * tα for t ∈ R. Then A t is a connection on X with curvature −2πiω for all t. Let λ t = λ + π * ( 1 2πi tα). Then λ t is a contact form on X for all t ∈ [0, 1] with λ 0 = λ and λ 1 = λ ′ . Therefore, ξ and ξ ′ are isotopic through the contact structures defined by λ t . The Chern classes of ξ are given by the Chern classes associated to ω in the following way. THE CONSTRUCTION FOR SYMPLECTIC 4-MANIFOLDS We fix the following data: (a) A closed, simply-connected, symplectic 4-manifold (M, ω) with symplectic form ω representing an integral cohomology class in H 2 (M ; R), given by the argument at the beginning of Section 5. Since H 2 (M ) is torsion free, [ω] has a unique integral lift, denoted by [ω] Z ∈ H 2 (M ). We somtimes denote the integral lift also by [ω] or ω. We assume that [ω] Z is indivisible. (b) Let π : X −→ M be the principal S 1 -bundle over M with Euler class e(X) = [ω] Z . Then X is a closed, simply-connected, oriented 5-manifold with torsion-free homology by Theorem 20. (c) Let λ be a Boothby-Wang contact form on X with associated contact structure ξ. By Proposition 25, the contact structure ξ does not depend on λ up to isotopy. By Corollary 21, the 5-manifold X is diffeomorphic to Hence the same abstract, closed, simply-connected 5-manifold X with torsion free homology can be realized in several different ways as a Boothby-Wang fibration over different simply-connected symplectic 4-manifolds M and therefore admits many, possibly non-equivalent, contact structures. Hence the possible levels of Boothby-Wang contact structures are restricted to the multiples of the divisibility of the canonical class. CONTACT HOMOLOGY In this section we consider invariants derived from contact homology. 
We only take into account the classical contact homology H cont * (X, ξ) which is a graded supercommutative algebra, defined using rational holomorphic curves with one positive puncture and several negative punctures in the symplectization of the contact manifold. We use a variant of this theory for the so-called Morse-Bott case, described in [3] and in Section 2.9.2. in [5]. We are going to associate to each Boothby-Wang fibration π : X → M as in the previous section a graded commutative algebra A(X, M ). Choose a basis Choose a class A 0 ∈ H 2 (M ) such that This is possible, because ω was assumed indivisible. The classes A 0 , A 1 , . . . , A N form a basis of H 2 (M ). We consider variables z = (z 1 , . . . , z N ), and where a = b 2 (M )+1 and N denotes the set of positive integers. They have degrees defined by where deg∆ i is given by In our situation the degree of all variables is even (hence the algebra we are going to define is truly commutative, not only supercommutative). Definition 31. We define the following algebras. • L(X) = C[H 2 (X; Z)] = the graded commutative ring of Laurent polynomials in the variables z with coefficients in C. • A(X, M ) = d∈Z A d (X, M ) = the graded commutative algebra of polynomials in the variables q with coefficients in L(X). A homomorphism φ of graded commutative algebras A, A ′ over L(X) and which is the identity on L(X). Then ψ preserves degrees and is invertible. We choose a class A 0 ∈ H 2 (M ) with ω(A 0 ) = 1 and denote c 1 (A 0 ) by ∆. Hence the degrees of the variables q k,i are equal to The integer ∆ has the following properties. and γ ≡ ∆ mod d(ξ). Hence there exist integers x, y ∈ Z such that d(K) = xd(ξ) + y∆. This proves the claim. We are interested in the algebra A(X, M ) because of the following result, described in [5], Proposition 2.9.1: Theorem 34. For a Boothby-Wang fibration X → M as above, the Morse-Bott contact homology H cont * (X, ξ) specialized 1 at t = 0 is isomorphic to A(X, M ). If two Boothby-Wang contact structures ξ and ξ ′ on X are equivalent, then their contact homologies are isomorphic. We now make the following assumptions: (a) The simply-connected 5-manifold X can be realized as the Boothby-Wang total space over another closed, simply-connected, symplectic 4-manifold (M ′ , ω ′ ) where ω ′ represents an integral and indivisible class. This implies in particular that b 2 (M ′ ) = b 2 (M ) and both are equal to a − 1. Denote the canonical class of (M ′ , ω ′ ) by K ′ and its divisibility by d(K ′ ) (b) We assume that ξ and ξ ′ are contact structures on the same level, hence This shows that the isomorphism type of the contact homology for Boothby-Wang contact structures on the same level is strongly related to the divisibility of the canonical class of the symplectic structure. The proof of this theorem is done in several steps. Let d denote the integer d(ξ). Remark 37. If c 1 (ξ) = 0, the variables z 1 , . . . , z n which generate the ring L(X) do not all have degree zero. Hence B(X, M ) = A(X, M )/L(X), which is an algebra over C, does not inherit a natural grading in this case. However, since the degrees of the variables z n are all multiples of 2d, the algebra B(X, M ) has a grading by elements in Z 2d . The images of the generators q k,i form generators for this infinite polynomial algebra and Q b is the set of generators of degree 2b mod 2d. 
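For ease of reference in the lemmas that follow, the degree conventions at play can be summarized as follows (this restates what is used in the proofs below, with $d = d(\xi)$): every generator satisfies

\[
\deg(q_{k,i}) \;=\; 2\Delta k - 2\epsilon \quad \text{for some } \epsilon \in \{-1,0,1\},
\qquad
\deg(z_j) \;\in\; 2d\,\mathbb{Z},
\]

so that $Q_b$ consists precisely of those generators $q_{k,i}$ with $\deg(q_{k,i}) \equiv 2b \pmod{2d}$.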
2 The following lemma shows that there is a relation between the cardinality of the set Q b of generators and the divisibility of the canonical class of the symplectic structure. for infinitely many k ≥ 1. Hence these q k,i are all in Q b . Conversely, suppose that d(K) does not divide any of the integers b + ǫ, with ǫ ∈ {−1, 0, 1}. Suppose that Q b contains an element q l,j . We have deg(q l,j ) = −2ǫ + 2∆l for some ǫ ∈ {−1, 0, 1}. By assumption, deg(q l,j ) = −2ǫ + 2∆l = 2b − 2dα, for some α ∈ Z. This implies b + ǫ = ∆l + dα. This is impossible, since d(K) divides the right side, but not the left side. Proof. Suppose that d(K) = d(K ′ ). By Lemma 38, the sets Q b and Q ′ b have the same cardinality for all 0 ≤ b < d. Conversely, suppose that d(K) = d(K ′ ); without loss of generality d( Using Lemma 40, we can prove the following. l,j }, which we still denote by the same symbols. By Lemma 40, there exists an integer 0 ≤ b < d such that Q b and Q ′ b have different cardinality. Without loss of generality, we may assume that Q b is empty and Q ′ b infinite (otherwise we consider φ −1 ). Let q ′ r,s be a generator in Q ′ b . Then q ′ r,s is a polynomial in the images {φ(q k,i )} k∈N,0≤i≤a , with coefficients in C and we can write . . , φ(q kv,iv )]. The images φ(q k,i ) are themselves polynomials in the variables {q ′ l,j } with coefficients in C. Expressed as a polynomial in the variables {q ′ l,j }, at least one of the images φ(q kw,iw ), 1 ≤ w ≤ v, must contain a summand of the form αq ′ r,s with α ∈ C non-zero. Since φ preserves degrees modulo 2, the element φ(q kw,iw ) is homogeneous of degree deg(φ(q kw,iw )) = deg(αq ′ r,s ) = deg(q ′ r,s ) ≡ 2b mod 2d. This implies deg(q kw,iw ) ≡ 2b mod 2d, hence q kw,iw ∈ Q b . This is impossible, The other direction of Theorem 35 follows from the next lemma. Proof. We can choose a basis B 1 , . . . , B N of H 2 (X) such that Choose elements A 0 ∈ H 2 (M ) and A ′ 0 ∈ H 2 (M ′ ) which evaluate to 1 on the symplectic forms and set We will use these bases to define the algebras A(X, M ) and A(X, such that deg(q k,i ) ≡ deg(q ′ ψ(k,i) ) mod 2d. Since z ′ 1 has degree −2d, there exists for each (k, i) ∈ N × {0, . . . , a} an integer α(k, i) ∈ Z, such that The map Using Theorem 35 and Proposition 28 we get the following corollary. The part concerning equivalent contact structures follows because equivalent contact structures have isomorphic contact homologies. Corollary 43. Let X be a closed, simply-connected 5-manifold which can be Proof. Let k = d(K). We can assume that ω is integral and choose a basis for H 2 (M ; Z) such that ω 2 , 0, . . . , 0). By a deformation we can assume that ω is not parallel to K, hence ω 2 = 0. We can also assume that ω 1 is negative while ω 2 is positive: Consider the change of basis vectors Hence if q is large enough, has the correct sign and the ± sign is chosen correctly, the claim follows. Definition 46. For integers d ≥ 4 and r ≥ 2 denote by Q(r, d) the number of integers in the following set:    k ∈ N k ≥ 4, k divides d and there exists a simply-connected symplectic 4-manifold (M, ω) with b 2 (M ) = r and b + 2 (M ) > 1 whose canonical class K has divisibility d(K) = k.    The numbers Q(r, d) are connected to the geography of simply-connected symplectic 4-manifolds with divisible canonical class. The following lemma relates knowledge about the numbers Q(r, d) to the existence of inequivalent contact structures on simply-connected 5-manifolds. Here we make essential use of Corollary 43 and Theorem 45. Lemma 47. 
Let d ≥ 4 and r ≥ 2 be integers. Suppose that either • d is odd and X the simply-connected 5-manifold (r − 2)S 2 × S 3 #S 2× S 3 , or • d is even and X the simply-connected 5-manifold (r − 1)S 2 × S 3 . In both cases, there exist at least Q(r, d) many inequivalent contact structures on the level d on X. Proof. Recall that a spin (non-spin) simply-connected 5-manifold has only even (odd) levels. Suppose that d ≥ 4 is an integer and (M, ω) a simply-connected symplectic 4-manifold with b 2 (M ) = r and b + 2 (M ) ≥ 2 whose canonical class has divisibility k = d(K) ≥ 4 dividing d. We can write d = mk. Since the divisibility of K is greater than 1, the manifold M is minimal, because non-minimal symplectic 4-manifolds contain a symplectically embedded 2-sphere S with intersection number K · S = −1. By Theorem 45 there exists a symplectic structure ω ′ on M that induces on the Boothby-Wang total space X with b 2 (X) = r − 1 a contact structure with d(ξ) = d. Since the symplectic form ω ′ is deformation equivalent to ω the canonical class K remains unchanged. By Corollary 43 the contact structures on the same non-zero level d on X coming from symplectic 4manifolds with different divisibilities k ≥ 4 of their canonical classes are pairwise inequivalent. We define the following purely number theoretic numbers. The following lemma gives a bound on the maximal number of inequivalent contact structures that can be distinguished with our method. Lemma 49. Let d ≥ 4 and r ≥ 2 be integers. Then there are the following upper bounds for Q(r, d). (a) For any r we have Q(r, d) ≤ N (d). (b) If d is even and r is not congruent to 2 mod 4, then Q(r, d) ≤ N (d ′ ). Proof. The first statement is clear by the definitions. For the second statement, suppose that M is a simply-connected symplectic spin 4-manifold. Then the intersection form Q M is even and b + 2 (M ) odd. Note that b − 2 = b + 2 − σ, hence b 2 (M ) = 2b + 2 (M ) − σ(M ). Since Q M is even, the signature σ(M ) is divisible by 8. This implies that b 2 (M ) is congruent to 2 mod 4 because b + 2 (M ) is odd for a simply-connected symplectic 4-manifold. Hence if r is not congruent to 2 mod 4 then there does not exist a simply-connected symplectic spin 4-manifold M with second Betti number r. Hence all elements of Q(r, d) are in this case odd. To calculate some of the numbers Q(r, d) we can use the geography work in [10]. For example, recall that a homotopy elliptic surface M is a closed, simplyconnected 4-manifold that is homeomorphic to a surface of the form E(m) p,q with p, q coprime. By definition, homotopy elliptic surfaces have topological invariants Proof. For part (a), let r = 12n − 2 and suppose that d ≥ 4 is odd. To prove the claim, we have to find for every divisor k ≥ 4 of d a simply-connected 4-manifold M with b 2 = r and b + 2 > 1 whose canonical class has divisibility k. Since d is odd, the integer k is odd also. If n > In a similar way we can use other geography results from [10] to find more inequivalent contact structures on the same level on simply-connected 5-manifolds X of the form rS 2 × S 3 and rS 2 × S 3 #S 2× S 3 .
Reduction of interface state density in SiC (0001) MOS structures by low-oxygen-partial-pressure annealing We report that annealing in low-oxygen-partial-pressure (low-p$_{\rm O2}$) ambient is effective in reducing the interface state density (D$_{\rm IT}$) at a SiC (0001)/SiO$_{\rm 2}$ interface near the conduction band edge (E$_{\rm C}$) of SiC. The D$_{\rm IT}$ value at E$_{\rm C}$$-$0.2 eV estimated by a high (1 MHz)-low method is 6.2$\times$10$^{12}$ eV$^{-1}$cm$^{-2}$ in as-oxidized sample, which is reduced to 2.4${\times}$10$^{12}$ eV$^{-1}$cm$^{-2}$ by subsequent annealing in O$_{\rm 2}$ (0.001%) at 1500${}^\circ$C, without interface nitridation. Although annealing in pure Ar induces leakage current in the oxide, low-p$_{\rm O2}$ annealing (p$_{\rm O2}$ = 0.001 - 0.1 %) does not degrade the oxide dielectric property (breakdown field ~ 10.4 MVcm$^{-1}$). Silicon carbide (SiC) is a suitable material for power device applications, owing to its superior physical properties, such as wide bandgap, high critical electric field, and high thermal conductivity. 1,2) A unique advantage of SiC over other compound semiconductors is that it can be thermally oxidized to give high-quality silicon dioxide (SiO2). 2) Thus, SiC metal-oxide-semiconductor field effect transistors (MOSFETs) have attracted much attention for low-loss and fast power switches. SiC MOSFETs have, however, suffered from the low channel mobility due to the high interface state density (DIT, ~ 10 13 eV -1 cm -2 ) of SiC/SiO2 systems. [2][3][4][5][6][7] Although the physical origin of the interface states remains uncertain, several methods were found to passivate the defect levels. For instance, post-oxidation annealing in nitric oxide (NO) 8,9) or nitrous oxide (N2O) 10,11) (interface nitridation) or in a gas mixture of phosphoryl chloride (POCl3), oxygen (O2), and nitrogen (N2) 12) (POCl3 annealing) is effective in reducing the DIT near the conduction band edge (EC) of SiC. However, since these methods rely on the incorporation of foreign atoms (nitrogen (N) or phosphorus (P)), generation of extrinsic defects at the interface and in the oxide have been pointed out as a problem. By interface nitridation, generation of very fast interface states 13) and oxide hole traps 14) was indicated, and for POCl3 annealing, generation of electron and hole traps in the oxide 15) was suggested. It is more desirable if high-quality interface can be obtained without introducing foreign atoms into the interface and oxide. In recent years, ultrahigh-temperature oxidation (~ 1400 -1600 • C), 16) thin (~ 15 nm) oxidation with rapid cooling (> 600 • C/min), 17) and post-oxidation argon (Ar) annealing 18) have been reported to be effective in reducing the interface states, without introduction of foreign atoms. In this study, we demonstrate that the DIT reduction by the post-oxidation Ar annealing is not the effect of "pure" thermal annealing but the effect of unintentional very-low-oxygen-partial-pressure (pO2) annealing. Samples employed in this study were n-type SiC (0001) MOS capacitors (donor concentration (ND): ~ 10 16 cm -3 ). Oxides were formed by dry oxidation at 1300 • C for 30 min followed by annealing in either pure Ar or Ar containing very small amount of oxygen (O2) (0.001 -0.1%; low-pO2) at 1300 -1500 • C for 1 min. In the case of pure-Ar annealing, Ar was purified so that the concentration of contained oxygen was below 100 ppt. 
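To put these mixing ratios in perspective, if the annealing is performed at atmospheric total pressure (an assumption, since the total pressure is not stated here), an O2 fraction of x% corresponds to an oxygen partial pressure of

\[
p_{\mathrm{O_2}} \;=\; \frac{x}{100}\, p_{\mathrm{total}} \;\approx\; \frac{x}{100} \times 101~\mathrm{kPa},
\]

i.e. roughly 1 Pa for 0.001% O2 and 100 Pa for 0.1% O2, many orders of magnitude below the oxygen pressure of standard dry oxidation in pure O2.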
For low-pO2 annealing, the partial pressure of oxygen was strictly controlled by supplying gas directly from a cylinder containing a mixture of Ar and O2. The low-pO2 annealing was performed in an induction heating furnace with a fast cooling rate (> 600 °C/min), in order to minimize additional oxidation during the cooling phase. The oxide thicknesses were about 51 - 58 nm and the gate electrodes were aluminum (Al) (diameter: 0.5 - 1 mm). All of the measurements were conducted at room temperature. Figure 1 shows the current density-electric field (J-E) characteristics of the prepared MOS capacitors. Here, the oxide electric field, E_OX, was estimated by E_OX = V/t_OX, where V and t_OX are the applied voltage and the oxide thickness, respectively. We see that the as-oxidized and low-pO2-annealed samples exhibit typical J-E characteristics with Fowler-Nordheim (F-N) tunneling current 2,19) at a sufficiently high oxide field (> 7 MV cm^-1). In contrast, a high leakage current is observed even at a very low field (< 0.2 MV cm^-1) in the case of the pure-Ar-annealed sample. In order to clarify the origin of the leakage current, secondary ion mass spectrometry (SIMS) measurements were performed. Figure 2 depicts the depth profiles of carbon concentration in the SiC/SiO2 samples acquired by SIMS. After annealing in pure Ar, a high concentration of carbon atoms (> 10^20 cm^-3) is detected in the oxide, as reported in Ref. 6. Note that such a pure Ar ambient cannot be realized simply by introducing Ar immediately after the oxidation process, since residual oxygen remains in the oxidation furnace. In our case, we excluded the effect of oxygen by performing the Ar annealing in a leak-tight resistive heating furnace different from the oxidation furnace. In the case of the low-pO2-annealed sample, the carbon concentration in the oxide is close to the detection limit (~ 10^18 cm^-3). Thus, carbon atoms are ejected from the interface during the annealing, and they remain in the oxide in the case of pure Ar annealing, which leads to severe degradation of the oxide dielectric property (Fig. 1). In the case of low-pO2 annealing, slight oxygen (0.001 - 0.1%) removes the ejected carbon atoms by oxidizing them into gas species such as CO or CO2, leading to suppression of the leakage current (Fig. 1). Energy distributions of DIT extracted by a high (1 MHz)-low method 19) are compared in Fig. 4. In the case of annealing in O2 (0.001%), the DIT is effectively reduced by increasing the temperature, and takes its minimum values (e.g. 2.4×10^12 eV^-1 cm^-2 at EC − 0.2 eV) after annealing at 1500 °C. For O2 (0.1%) annealing, in contrast, the DIT at EC − 0.2 eV increases from 4.9×10^12 eV^-1 cm^-2 to 6.3×10^12 eV^-1 cm^-2 upon increasing the temperature from 1300 °C to 1500 °C. The oxide thicknesses of the as-oxidized and low-pO2-annealed samples determined by spectroscopic ellipsometry are summarized in Fig. 5. We see that the oxide thickness hardly changes (< 1 nm) with annealing in O2 (0.001%), whereas the thickness increases by about 6 nm with annealing in O2 (0.1%) at 1500 °C. Such results indicate that the DIT is determined by the balance between the removal and creation of interface defects during the low-pO2 annealing, and that it is important to avoid excessive oxidation of SiC during the low-pO2 annealing to suppress additional defect generation.
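The DIT values quoted above follow from the high (1 MHz)-low capacitance comparison. In its usual textbook form, with C_LF and C_HF the low- and high-frequency capacitances per unit area and C_OX the oxide capacitance per unit area, the interface state density is

\[
D_{\mathrm{IT}} \;=\; \frac{1}{q}\left[\left(\frac{1}{C_{\mathrm{LF}}} - \frac{1}{C_{\mathrm{OX}}}\right)^{-1} - \left(\frac{1}{C_{\mathrm{HF}}} - \frac{1}{C_{\mathrm{OX}}}\right)^{-1}\right],
\]

evaluated at each surface potential (energy) of interest; whether ref. 19 uses exactly this variant is an assumption here.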
Note that, in SiC MOS systems, it is known that the incorporation of impurities, such as boron (B), 20) P, 12) and sodium (Na), 21) in a high concentration of about 10 20 -10 21 cm -3 leads to remarkable reduction of DIT. 12,20,21) From SIMS measurements, we confirmed that the concentration of B, P, and Na atoms near the SiC/SiO2 interface is at least below 2×10 16 cm -3 after the low-pO2 annealing, indicating that the observed DIT reduction by the low-pO2 annealing (Fig.4) is not due to the impurity contamination. We indicated that a high density of positive fixed charge (1.2×10 12 cm -2 ) resides in the sample annealed in O2 (0.001%) at 1500 • C. It should be noted that, it is difficult to estimate the real positive fixed charge density simply from the flat-band voltage shift in the case of as-oxidized sample, since electrons trapped at the acceptor-like interface states act as "effective" negative charge and compensates the positive charge. Thus, the positive charge may even reside in the as-oxidized sample and may become apparent by the low-pO2 annealing owing to the reduction of acceptor-like interface states (Fig.4). Here, we discuss the possible atomistic configurations of the major interface defects in SiC/SiO2 systems. It has widely been believed that the carbon byproducts at (or near) the interface are the origin of interface states in SiC MOS structures, 2,6,16,17,[22][23][24][25][26][27] since carbon is one of the host atoms of SiC. We confirmed that a high concentration of carbon atoms is ejected from the interface by the pure Ar annealing (Fig.2), which also suggests that the interface defects are related to carbon species. A result of density-functional calculations indicates that, among the various forms of carbon atoms that are frequently observed at a SiC/SiO2 interface during molecular dynamics (MD) simulations 28) , ethylenelike structure (SiO>C=C<SiO) creates defect levels near the EC of SiC. 23) Si2>C=C<Si2 defect [24][25][26][27] and Si2>C=O defect 24) are also possible candidates, since they also create defect levels near the EC. It is also suggested that oxygen helps the dissociation of interface carbon defects by reducing the energy of the structure after the dissociation by terminating the Si dangling bonds at the interface. 23) Thus, the low-pO2 annealing may reduce the DIT near EC (Fig.4) by dissociating the carbon defects while preventing the additional generation of carbon defects caused by excessive oxidation of SiC. In conclusion, we found that low-pO2 annealing is effective in reducing the DIT at a SiC (0001)/SiO2 interface without introduction of foreign atoms. For annealing in O2 (0.001%), the DIT decreased by increasing the temperature up to 1500 • C, whereas, for O2 (0.1%) annealing, the DIT increased by increasing the temperature from 1300 • C to 1500 • C. The oxide thickness hardly changed (< 1 nm) with the annealing in O2 (0.001%), whereas the thickness increased by about 6 nm with annealing in O2 (0.1%) at 1500 • C. Thus, during the low-pO2 annealing, it is of importance to remove the interface defects by oxidizing them, while preventing the excessive oxidation of SiC to minimize additional defect creation. In the case of pure Ar annealing, carbon atoms are ejected from the interface, and they remain in the oxide, which leads to the severe degradation of oxide dielectric property. 
In low-pO2 annealing, however, slight oxygen (0.001 -0.1%) removes these carbon atoms by oxidizing them into gas species such as CO or CO2, leading to the suppression of the leakage current. This work was supported in part by the JSPS KAKENHI (Grant Number 15J04823) and the Super Cluster Program from the Japan Science and Technology Agency.
Effect of Microstructure on Hydrogen Diffusion in Weld and API X 52 Pipeline Steel Base Metals under Cathodic Protection

1 Mechanical Engineering Department, Universidade Federal de São João Del Rei (UFSJ), 170 Praça Frei Orlando, 36307-352 São João Del Rei, MG, Brazil
2 Postgraduate Program in Materials Science and Engineering, Universidade Federal da Paraíba (UFPB), João Pessoa, PB, Brazil
3 Metallurgical and Materials Engineering Department, Universidade Federal do Rio de Janeiro (UFRJ), Ilha do Fundão, Bloco F, Rio de Janeiro, RJ, Brazil

Introduction

The great challenge of increasing oil and gas production has created the need for more detailed studies of steels for pipeline applications. Knowledge of the mechanical behavior and microstructure of the steels [1,2] used to manufacture these pipelines is important to assure integrity and safe operating conditions, which is extremely important for the oil and gas industry [3,4]. This requires continual improvement of steels of grade API X52 [5] in order to prevent failure. Han et al. [6] showed that the welding procedure involved in the manufacture of pipes can modify the microstructure of the base metal in the heat-affected zone (HAZ); therefore, the mechanical properties in this region are changed.

Metal fractures related to "environmentally induced cracking" are often associated with stress corrosion cracking (SCC) or hydrogen embrittlement (HE) mechanisms [7-9]. Some researchers believe that the external cracking of pipelines in contact with near-neutral pH soil is associated with HE rather than SCC [2,4,10].

The initial process in HE is the diffusion of hydrogen through the material. Hydrogen permeation starts when atomic hydrogen is present on the metal surface, from which it can diffuse into the metal. A large amount of atomic hydrogen can recombine inside the metal to form H₂, which is retained as gas bubbles under high pressure within the metal. Furthermore, it is well known that initiation and propagation of cracks occur from these points of hydrogen concentration [11]. Therefore, it is important to evaluate whether hydrogen diffusion differs between microstructures, such as base metal and weld metal.

In pipelines, external HE is associated with excessive imposed cathodic potentials and with soils contaminated with sulfate-reducing bacteria (SRB) [12]. Contreras et al. [13] pointed out that minimal amounts of H₂S are enough to cause HE. Thus, external cracking caused by hydrogen embrittlement could be associated with SRB. These bacteria use sulfate as an oxidizing agent, reducing it to sulfide (H₂S). They can also utilize oxidized sulfur compounds such as thiosulfate and sulfite, or even elemental sulfur. In the presence of the H₂S produced by these bacteria, the recombination of atomic hydrogen to molecular hydrogen is retarded, thereby permitting the diffusion of atomic hydrogen through the metal [10].

Microbiologically influenced corrosion (MIC) is a major problem in many industries, such as oil and gas. According to Xu et al. [14], many anaerobic MIC attacks can be classified into two types based on the two anaerobic metabolisms: respiration and fermentation. The SRB mechanism involves microorganisms that perform anaerobic respiration; for example, SRB respiration typically uses sulfate as the terminal electron acceptor. Venzlaff et al.
[15] reported that SRB gain biochemical energy for growth by reducing sulfate (SO₄²⁻) to sulfide (H₂S, HS⁻) with natural organic compounds as electron donors, which are oxidized to CO₂ (also referred to as sulfate respiration). However, if the SRB are in contact with carbon steel, the Fe acts as an electron donor for their respiration [16]. The reaction involved in anaerobic respiration using Fe⁰ is then Fe⁰ → Fe²⁺ + 2e⁻. In the absence of oxygen, electrons must be accepted by a non-oxygen oxidant [13]; thus, SRB use SO₄²⁻ as the oxidizing agent. SO₄²⁻ is reduced to H₂S and HS⁻ through a reaction of the form SO₄²⁻ + 9H⁺ + 8e⁻ → HS⁻ + 4H₂O.

Horowitz [17] showed an increase in the amount of hydrogen during permeation tests when a sodium thiosulfate solution was used. The sodium thiosulfate solution allows the generation and stabilization of H₂S on the metallic surface. These tests were carried out with cathodic potentials applied. According to the Pourbaix diagram for H₂S, when a cathodic potential is imposed the steel is located within the H₂S domain [18]; therefore, the sodium thiosulfate is reduced to H₂S. The reaction depends on the applied potential and the pH of the solution.

Another problem related to HE arises because the steels used in the manufacture of pipelines for transporting oil and derivatives are exposed to excessive cathodic protection. Cathodic potentials imposed on the external part of pipelines promote hydrogen reduction, which becomes thermodynamically spontaneous on the metal surface [19]. Bueno et al. [4,12] report that API X46 carbon steel exhibited decreasing ductility as cathodic potentials were imposed. This effect was more evident in soil solutions than in the NS4 standard solution. The deterioration mechanism is related to the influence of hydrogen: transgranular cracking occurred even under cathodic conditions where the anodic dissolution of the steel can be considered negligible.

Recent studies have emphasized the influence of the metal structure on hydrogen permeation [1,2,20] and discuss the effective diffusion coefficient. Lan et al. [1] studied the hydrogen permeation behavior in relation to the microstructural evolution of low-carbon bainitic steel weldments. They showed that the effective diffusion coefficient in the welded joint is highly affected by the heat input, mainly because of grain coarsening and inclusion sizes. Park et al. [2] tested the hydrogen trapping efficiency of API X65 and showed an increase in the order of ferrite/degenerated pearlite, ferrite/bainite, and ferrite/acicular ferrite. Haq et al. [20] showed that, due to hydrogen trapping, the X70 medium-Mn strip exhibits lower hydrogen diffusivity than the standard Mn strip, mainly because of finer ferrite grains and a higher density of carbonitride precipitates. Fischer et al. [21] indicated that under specific circumstances the diffusion of hydrogen cannot be described well by a constant effective diffusion coefficient, owing to the presence of hydrogen traps and the magnitude of the hydrogen concentration gradient.
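To make the idea of a constant effective diffusion coefficient concrete, the classical trap-free Fickian model of a permeation transient can be written down and evaluated numerically. The sketch below is only an illustration under the idealized assumptions that Fischer et al. [21] caution about (a single constant D and no trapping); the thickness and diffusivity values are placeholders rather than results from this work.

```python
import numpy as np

def normalized_flux(t, D, L, n_terms=100):
    """J(t)/J_ss for a membrane of thickness L under constant charging,
    from the standard Fickian series solution (trap-free, constant D)."""
    t = np.asarray(t, dtype=float)
    j = np.ones_like(t)
    for n in range(1, n_terms + 1):
        j += 2.0 * (-1.0) ** n * np.exp(-(n * np.pi) ** 2 * D * t / L ** 2)
    return np.clip(j, 0.0, 1.0)

L = 2.0       # specimen thickness in mm (value used later in the Devanathan tests)
D = 2.0e-4    # effective diffusivity in mm^2/s -- illustrative only
t = np.linspace(60, 24 * 3600, 500)          # 24 h of charging, in seconds
j_norm = normalized_flux(t, D, L)
print(f"classical time lag L^2/(6D) = {L**2 / (6 * D) / 3600:.2f} h")
```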
The aim of this paper was to evaluate the influence of microstructure and of some inclusions on the susceptibility to hydrogen permeation of API X52 carbon steel (base metal and weld metal) subjected to a cathodic protection system. Different types of microstructures were obtained by heat treatments, namely quenching and annealing at different temperatures. The hydrogen permeation behavior of these microstructures was compared and evaluated in a modified synthetic soil solution, NS4 + sodium thiosulfate at a concentration of 10⁻² M.

Methods and Materials

The material used was an API X52 pipeline carbon steel under different conditions: base metal, as received; weld metal; base metal after a quenching heat treatment; and base metal after an annealing heat treatment. The chemical composition was evaluated by Optical Emission Spectroscopy (OES). Table 1 shows the results, in weight percent (wt%), for the chemical elements present in the base metal (BM) and the weld metal (WM).

The microstructures of the samples were produced by different heat treatments. Base metals were heated at 900 °C for two hours. Some samples were then quenched in a solution of water, ice, and salt; the annealed samples were left in the furnace until they reached room temperature. All the tests were performed in triplicate. The specimens and heat treatment conditions are described in Table 2.

The metallographic analysis and microstructure characterization were performed according to Bott et al. [24]. The presence of austenite-martensite phases was detected by SEM after a double electrolytic etch, performed in the following steps: first, 5 g EDTA, 0.5 g NaF, and 100 ml of distilled water at 5 V for 15 seconds; secondly, 5 g of picric acid and 25 g NaOH; finally, 100 ml of distilled water at 100 V for 5 seconds.

Hardness tests were conducted to supplement the materials characterization. These were performed on the Rockwell B scale, using a 1/16 in. ball indenter with a load of 100 kg, and on the Rockwell C scale, using a diamond cone with a load of 150 kg.

The electrochemical test performed was potentiodynamic polarization. The potentiostat used in the polarization tests was an AUTOLAB (Autolab type III/FRA 2 and PGSTAT 128N) coupled to computers running NOVA 1.10 software. The scan rate adopted was 1.0 mV·s⁻¹, and the applied potential ranged from −1.5 V to 0.5 V. The measurements were performed at room temperature (25 °C ± 3 °C). A conventional three-electrode cell was used, with platinum as the counter electrode, a saturated calomel electrode (SCE) as the reference electrode, and samples of API X52 carbon steel as the working electrode. The specimens for the electrochemical tests were cut, embedded in cold resin, and ground with SiC paper up to 600 grit. The exposed area of the samples for the permeation tests in the Devanathan cell was 0.75 cm²; for the polarization experiments, the exposed area was 1 cm².

A synthetic soil solution, also called NS4 solution, with a pH of around 8.4, was used during the tests to simulate soil. The solution was made according to Parkins et al.
[25]. The composition was (in g/l): KCl 0.122, NaHCO₃ 0.483, CaCl₂ 0.093, and MgSO₄ 0.131. In addition, the synthetic solution NS4 + sodium thiosulfate was used to study the effect caused by sulfate-reducing bacteria; it was prepared by adding sodium thiosulfate at a concentration of 10⁻² M to the standard NS4 solution. Some studies [26] adjust the pH to 6.5-7, in order to evaluate soils with this characteristic, by bubbling a mixture of CO₂ and N₂.

The hydrogen permeation tests were carried out with the most aggressive solution, NS4 + sodium thiosulfate. The potentiostat used in the hydrogen permeation tests was an AUTOLAB (Autolab type III/FRA 2 and PGSTAT 128N) coupled to computers running NOVA 1.10 software. A Devanathan cell was used, with specimens of 2 mm thickness. Both sides of the steel specimen were in contact with different solutions controlled by independent potentiostats. The anodic side of the cell was filled with 1 M NaOH solution and the cathodic side was filled with the NS4 + sodium thiosulfate solution. The counter electrode of the anodic side of the cell was attached to a computer to measure the anodic current. The hydrogen permeation tests were carried out in the following steps. (1) Assemble the Devanathan hydrogen permeation cell containing the steel specimen. (2) The 1 M NaOH solution was introduced on the anodic side and the system was stabilized at the open circuit potential (OCP). (3) An anodic potential 100 mV above the free corrosion potential was applied on the anodic side until the anodic passive current density became stable and fell below 1 µA/cm². (4) The NS4 + sodium thiosulfate solution was introduced on the cathodic side, which remained at the open circuit potential for 20 h. (5) A cathodic potential of −1.5 V (SCE) was applied for 24 h. The test piece used was a flat plate of API X52 steel polished with diamond paste on both sides, with constant thickness and permeation area. The applied cathodic potential of −1.5 V (SCE), below the OCP, was chosen to simulate a cathodic protection system [27], since ISO 15589-1 indicates that at values lower than −1.2 V the steel already suffers hydrogen embrittlement effects.

The diffusion coefficient (D), in the transient state, can be measured by various methods found in the literature. In this research, the three most common methods were used: Time Lag, Breakthrough, and Fourier, calculated according to the literature [20,28,29].

Chemical Analysis. The API 5L standard [5] classifies carbon steels for the manufacture of pipes used in pipeline transportation systems in the petroleum and natural gas industries. The requirements in the standard are divided into two levels for seamless and welded pipelines: PSL1 and PSL2. PSL1 is a looser quality standard for line pipe, whereas PSL2 contains additional testing requirements and stricter chemical and physical limits, along with different ceilings for mechanical properties, and requires Charpy impact testing. According to Table 3, the base metal meets the chemical requirements of PSL1; however, it does not conform to PSL2 because of the carbon content limits (Table 1). The weld metal is in accordance with the PSL2 chemical composition specifications.

Metallographic Features.
The metallographic characterization of the samples was conducted for all the heat treatment conditions specified in Table 2: base metal (BM), weld metal (WM), annealed base metal (ABM), and quenched base metal (QBM). Hardness tests were performed to complement the materials characterization, as shown in Table 4. Figure 1 shows an optical microscopy image of the positions where the metallographic analyses were performed. Figure 2 presents the interface between the WM and the HAZ, showing the difference between the microstructures. The HAZ presents mainly pearlite grains and is clearly affected by the heat produced during the welding process. According to the SEM analysis of Vargas-Arista et al. [30], the HAZ generated by the welding thermal cycle showed a complex recrystallized microstructure located near the fusion line, formed by coarse-grained ferrite, acicular ferrite, small discontinuous pearlite colonies, and a few bainite grains.

The base metal (Figure 3) presented a heterogeneous distribution of ferrite and fine pearlite grains with well-defined grain boundaries. This microstructural arrangement shows an intermediate hardness value in Table 4. The same microstructure for API X52 steel, a ferritic-pearlitic combination, was found in several other studies [31,32].

The weld metal in region 4 (Figure 4) showed a microstructure with little recrystallization, in which a pearlitic microstructure and a decrease in grain size with degenerated pearlite regions can be observed. This was discussed by Park et al. [2] and can be explained because the degenerated pearlite structure, without the banding pattern, differs from pearlite evolved by normalizing and slow cooling. The cooling rate in the weld metal was higher than that required to form typical pearlite; thus carbon diffusion was not sufficient to create the lamellar cementite structure.

Figure 5 shows the heat-affected zone (HAZ), which has a great similarity to the microstructure of the BM; the small difference is due to the thermal effect caused by deposition of the weld bead, which increases the grain boundary density in the HAZ microstructure. Scanning electron microscopy (SEM) was performed at three different positions on the welded joint, as shown in the optical image in Figure 6. The BM (Figure 7(a)) presents predominantly ferrite and pearlite phases. The HAZ and WM (Figures 7(b) and 7(c)) present ferrite and pearlite phases with martensite/austenite (M/A) constituents. This constituent, called the M/A constituent or M/A microphase, cannot be observed by optical microscopy (OM); it consists of regions of microscopic dimensions, present in C-Mn and low-alloy steels, formed by cells of stabilized austenite. Chatzidouros et al. [31] emphasize that most pipeline steels are manufactured using thermomechanical processes that involve multiple heating and rolling stages, which favor the formation of M/A constituents in low-carbon steels. This microconstituent directly affects the toughness of the material because of its high hardness and brittleness; the high dislocation density in the substructure contributes to its formation. The M/A sites are mostly present at the boundaries of ferrite and bainite grains, as shown in Figure 7(c); however, they could occasionally also be observed within the pearlite phase, between the cementite lamellae.

Observations of the materials without etching revealed the presence of a significant amount of inclusions, as shown in Figure 8.
The EDS technique was used to evaluate the composition of the inclusions. Figure 9 shows the analyses of the WM inclusions. The inclusions present in the API X52 carbon steel showed, besides aluminum and calcium, significant concentrations of S and Mn. Haq et al. [20] concluded that MnS inclusions are strong irreversible trapping sites for hydrogen, which form as follows: during the solidification of the steel, Mn can combine with S, giving rise to MnS inclusions. The inclusion/metal-matrix interface is reported in the literature as a strong trapping site for hydrogen, consequently decreasing the hydrogen flux through the material.

After the heat treatments, SEM analyses show that the ABM (Figure 10(a)) and BM (Figure 7(a)) present the same microstructure; however, the BM grain size is slightly smaller, which is evidenced by the greater hardness exhibited by the BM. The quenched base metal specimens presented a martensitic structure, but, owing to the low carbon content, some ferrite sites were also observed, as shown in Figure 10(b). The effects of the heat treatments can also be noted in the hardness values (Table 4), where there is a significant difference in hardness between QBM and ABM.

Polarization. The cathodic and anodic polarization curves were measured in order to evaluate whether the microstructure could affect the corrosion resistance of the API X52 carbon steel. Curves obtained in the NS4 and NS4 + sodium thiosulfate solutions for the BM and WM are shown in Figure 11. The anodic current density was highest for the NS4 + sodium thiosulfate solution, which may be attributed to the reduction of sodium thiosulfate to H₂S; this makes it more aggressive than the standard NS4 solution. Thus, the electrochemical tests for the ABM and QBM specimens were performed only in this solution (Figure 12 and Table 5). Table 5 shows the open circuit potential (OCP) for each test condition as well as the current density values at 50 mV and 100 mV (SCE) above the OCP.

All the samples showed active dissolution under all tested conditions; no passivation domain was observed over a range of 700 mV of anodic polarization. The cathodic current densities observed in all tests can be attributed to the reduction reactions of hydrogen and oxygen.

A significant variation in current density occurred when sodium thiosulfate was added, as shown in Table 5. The addition of thiosulfate accentuated the corrosion process: the anodic current density increased with respect to the solution without sodium thiosulfate. The solutions with sodium thiosulfate thus presented a more anodic corrosion potential and were more aggressive, which corroborates the results obtained from the polarization curves.
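As an illustration of how the Table 5 quantities can be obtained, the sketch below reads the current density at 50 mV and 100 mV above the OCP from a measured polarization curve by interpolation. The potential/current arrays, the toy curve shape, and the zero-current estimate of the OCP are placeholders, not the actual data from this study.

```python
import numpy as np

def current_at_overpotential(E, i, delta_mV, ocp=None):
    """Interpolate |i| (A/cm^2) at a potential delta_mV above the OCP on an E (V) vs i sweep."""
    E = np.asarray(E, float)
    i = np.asarray(i, float)
    if ocp is None:
        ocp = E[np.argmin(np.abs(i))]          # crude OCP estimate: zero-current crossing
    order = np.argsort(E)
    value = np.interp(ocp + delta_mV / 1000.0, E[order], np.abs(i)[order])
    return float(value), float(ocp)

# Toy polarization sweep from -1.5 V to 0.5 V (Butler-Volmer-like shape, placeholder numbers)
E = np.linspace(-1.5, 0.5, 2001)
i = 1e-6 * (np.exp((E + 0.75) / 0.06) - np.exp(-(E + 0.75) / 0.12))

i50, ocp = current_at_overpotential(E, i, 50)
i100, _ = current_at_overpotential(E, i, 100, ocp=ocp)
print(f"OCP ~ {ocp:.3f} V (SCE); i(OCP+50 mV) ~ {i50:.2e} A/cm^2; i(OCP+100 mV) ~ {i100:.2e} A/cm^2")
```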
The open circuit potentials (OCP) in Figures 11 and 12 and Table 5 were analyzed according to the Pourbaix electrochemical equilibrium diagram for the Fe/H₂O system at 25 °C [18]. All the specimens, in both solutions, presented OCP values within the corrosion domain and below the H⁺/H₂ equilibrium line. In this case, the anodic dissolution reaction Fe/Fe²⁺ and the hydrogen reduction reaction are thermodynamically spontaneous. Thus, all the samples showed active dissolution, lying within the corrosion domain where the Fe²⁺ ion is soluble, together with hydrogen reduction on the metal surface. In addition, the anodic current densities increase with the applied potential at 50 mV and 100 mV above the OCP, confirming that all samples presented active dissolution. The anodic current densities measured at 50 and 100 mV above the OCP for all specimens tested in the NS4 + thiosulfate solution presented similar values (Table 5). In other words, the different microstructures have no significant effect on corrosion resistance.

Hydrogen Permeation. Figure 13 presents the permeation tests for all specimens. They were performed using the aggressive solution NS4 + sodium thiosulfate, already shown by the polarization tests and by other authors [33-35] to represent a synthetic soil solution contaminated with SRB. The permeation tests, with an applied cathodic potential of −1.5 V below the OCP, were carried out in order to simulate a cathodic protection system. The NS4 + sodium thiosulfate solution was able to induce absorption and permeation of hydrogen in all materials tested, and it was used to simulate the effect of H₂S in a synthetic soil solution. The effect of H₂S can be compared to the effect of SRB in the same environment, preventing atomic hydrogen (H⁰) from recombining into H₂. Owing to the addition of sodium thiosulfate, the potential of the cathodic side in contact with the API X52 carbon steel was located within the stability domain of H₂S (Figure 14). Therefore, there is an increase in hydrogen ion activity and in hydrogen reduction on the steel surface.

As found in the literature, different factors affect the hydrogen flow through the material. During the initial stage, the permeation process resembles stationary permeation behavior, but in a second stage a progressive increase in current starts as time goes by. This rise in current, however, occurs differently in each carbon steel condition. The difference in current flow is probably due to microstructural characteristics, such as the carbide morphology and grain size, which differ among the studied conditions [1,28].
The hydrogen diffusion coefficient in the steel matrix is generally very small at low temperatures. Therefore, most of the hydrogen is retained not in the interstices of the unit cells but in different sites commonly called traps. These traps have been related to microstructural features such as dislocations, interfaces, vacancies, impurity atoms, microvoids, or any other lattice defect [19,36]. Trap densities are inversely proportional to the diffusion coefficients [20]. The literature [37,38] reports that when a carbon steel is submitted to a heat treatment, the structural arrangement of the carbides (Fe₃C) changes, assuming a different form for each treatment. These different forms significantly modify the permeability properties in terms of the diffusion constant and the solubility of hydrogen in the carbon steel. Typical pearlite, formed by cementite (carbide) and ferrite in a lamellar shape, is a weak hydrogen trap because its continuous interphase acts as a fast path for hydrogen, easing diffusion. This feature is present in the BM and ABM, and it is one of the reasons why they display high diffusion compared with the other two conditions (Figure 13). On the other hand, the presence of irregular thin cementite, which holds hydrogen inside the metal and acts as a trap, contributes to the lower diffusivity shown by the WM. Similar results were obtained by Ramunni et al. [38].

There are reports in the literature that Mn, S, and other inclusions, as shown in Figure 9, are among the reasons for the variation in how easily hydrogen is dissolved or diffused in solid metallic materials at room temperature [20,39]. In other words, MnS inclusions are reported in the literature as strong irreversible trapping sites for hydrogen, consequently decreasing the hydrogen flux through the material. However, in this research it was not possible to perform hydrogen permeation tests directly on the inclusions to confirm that they alone affect the hydrogen permeation flux.

The permeation test data are listed in Table 6, showing the highest current density and the time needed to reach it for each microstructure of the API X52 carbon steel.
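From transients such as those in Figure 13, the effective diffusion coefficient can be estimated from single characteristic times of the normalized flux curve. The sketch below follows commonly used conventions for the Time Lag and Breakthrough methods (D_eff = L²/(6·t_lag) with t_lag read where J/J_ss = 0.63, and D_eff = L²/(15.3·t_b)); the constants and the toy data are illustrative, and the exact definitions used in this work are those of the cited references [20,28,29].

```python
import numpy as np

L = 2.0   # specimen thickness in mm, as in the Devanathan tests described above

def d_eff_time_lag(t_lag):
    """Time-lag estimate, with t_lag commonly read where J/J_ss = 0.63."""
    return L ** 2 / (6.0 * t_lag)

def d_eff_breakthrough(t_b):
    """Breakthrough estimate; L^2/(15.3 t_b) is one common convention."""
    return L ** 2 / (15.3 * t_b)

def first_time_at(t, j_norm, frac):
    """First sampled time at which the normalized flux reaches `frac`."""
    above = np.nonzero(j_norm >= frac)[0]
    return t[above[0]] if above.size else np.nan

# Toy transient (time in s, normalized anodic current J/J_ss); real data come from the anodic record
t = np.array([0, 600, 1200, 1800, 2400, 3000, 3600, 5400, 7200, 10800, 14400], float)
j = np.array([0.00, 0.01, 0.05, 0.14, 0.27, 0.41, 0.52, 0.71, 0.82, 0.93, 0.97])

t_lag, t_b = first_time_at(t, j, 0.63), first_time_at(t, j, 0.10)
print(f"D_eff (Time Lag)     ~ {d_eff_time_lag(t_lag):.2e} mm^2/s")
print(f"D_eff (Breakthrough) ~ {d_eff_breakthrough(t_b):.2e} mm^2/s")
```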
The values in Table 6 are in accordance with other authors [2,38,39]. These authors report that many parameters can influence hydrogen diffusion in the microstructure. Hydrogen permeation cannot be considered constant inside the metal during the Devanathan cell test because of the hydrogen trapping process; thus, only an apparent diffusion coefficient can be evaluated. Moreover, the microstructure, inclusions, dislocations, grain boundaries, grain shapes, vacancies, interfaces with nonmetallic inclusions, precipitated particles, and voids can act as traps and affect hydrogen movement through the material. Hydrogen diffusivity is therefore associated with the diffusion process controlled by Fick's laws and with the physicochemical reaction of hydrogen with traps inside the bulk. The effective diffusion coefficient (D_eff) is an important parameter used in studies of the diffusion of chemical elements in solid and liquid matrices. In the present work, this coefficient was determined for all four samples using three different calculation methods. The Time Lag and Breakthrough methods estimate D_eff from specific points of the permeation curves. The Fourier method is more complex, since it uses all the data points from the transient part of the permeation curve to determine D_eff; however, it is considered more accurate. Figure 15 shows the hydrogen permeation results for the BM samples using all three methods. The permeation times used to calculate D_eff are represented by t_L (Time Lag) and t_B (Breakthrough) in Figure 15(a). The Fourier method was used to estimate D_eff from the plot in Figure 15(b) [28].

Table 7 summarizes all the data collected from the electrochemical permeation tests for all conditions. Samples that presented higher stationary permeation currents (I_∞) also showed higher values of the effective diffusion coefficient (D_eff). WM showed the lowest effective diffusion coefficient, followed by ABM, BM, and QBM, respectively.

The values obtained for D_eff are in accordance with the literature values in Table 8. Comparing Tables 7 and 8, the Time Lag method presented the lowest D_eff values, while the Breakthrough and Fourier methods showed similar values, except for QBM. In contrast, the literature data showed less variation, and the Fourier method produced low values for API X52 steel. The distinct results obtained could be associated with the different parameters used in the tests. Also, the different steels used may contain higher amounts of alloying elements, increasing the amount of precipitates, which contributes to the reduction of hydrogen diffusion.

Annealed Base Metal (ABM). The highest hydrogen flux occurred in the ABM samples, as evidenced in Figure 13 and Table 6. The annealed samples showed in the micrographs (Figure 7) considerable ferrite grain growth and pearlite formation at the grain edges, together with a decrease in hardness. Consequently, the microstructure with a large grain size favored an increase in hydrogen flow through the metal. The annealed microstructure (Figure 7) had a lower dislocation density than the other samples. Therefore, according to Haq et al. [20], ferrite grains often show the highest diffusivity. At the grain boundaries, the pearlite does not act as a barrier to the flux; the lamellar interface of cementite and ferrite within pearlite creates an easy path for hydrogen to pass through. In addition, Svoboda et al.
[39] confirmed that an annealing heat treatment is enough to recover the majority of defects, decreasing the dislocation density, with only a small number of them remaining. Thereby, hydrogen atoms could easily pass through the metal, a fact also confirmed by Han et al. [6].

The diffusivity of hydrogen in pure α-iron (ferrite) is around 10⁻³ mm²·s⁻¹. The value obtained for the ABM samples (Table 6), 2.28 × 10⁻⁴ mm²·s⁻¹, is lower because of the presence of pearlite and inclusions. It is close to the value found by Park et al. [2] (9.27 × 10⁻⁴ mm²·s⁻¹) for a similar composition. The slight difference in values can be explained by the different parameters used in the two studies: the sample thickness and the current density applied on the cathodic side were different.

Base Metal (BM). The base metal was tested as received, and its micrographs show a microstructure similar to that of the ABM, consisting mainly of ferrite grains with pearlite formation at the grain edges. However, there is a difference in grain size. It is not possible to state which heat treatment the BM was submitted to during its production; nevertheless, the BM presented a smaller grain size than the ABM, which was heat treated in the laboratory.

The smaller grain size relative to the ABM causes an increase in the number of dislocations and defects, raising the hydrogen trapping density and decreasing the diffusion coefficient (Table 6). This was also observed by Haq et al. [20].

The BM had the second highest hydrogen diffusion, below only the ABM and above the other samples. These results are in accordance with Luu and Wu [40], who compared the diffusion coefficients of different microstructures and concluded that regular ferrite shows the highest values. Han et al. [6] found similar results and concluded that equiaxed ferrite grains and pearlite, as present in the BM, favor hydrogen diffusivity because of the low trap density compared with other microstructures.

Comparing Figures 3 and 10(a), the BM presented smaller grain sizes than the ABM. According to Haq et al. [20], ferrite grain sizes smaller than 45 µm can reduce the mobility of hydrogen by trapping at nodes and triple junctions. Thus, finer grains can increase the trapping of hydrogen and thereby give rise to a lower diffusion coefficient.

Quenched Base Metal (QBM). The tests conducted on the QBM (Figure 13 and Table 6) showed a lower current flow and a longer time to reach a stationary hydrogen permeation value than the ABM and BM. Similar results were obtained by Nagu et al. [37], where the quenched material had martensitic interlath interfaces with a body-centered tetragonal (BCT) matrix, small grains, a large extent of grain boundaries, a high density of dislocations, and carbide/matrix interfaces. All these characteristics acted as hydrogen traps. The grain boundaries reduce the mobility of hydrogen, acting as reversible hydrogen trapping sites at nodes and junction points [20].

The traps in the QBM samples were effective in delaying hydrogen transport compared with the ABM and BM samples. The fast cooling rate during the heat treatment promoted the phase transformation to martensite at lower temperature, with an increase in dislocation density arising from the transformation volume change (Figures 10(a) and 10(b)). This behavior is therefore probably due to the difference in grain size caused by the thermal treatments performed, which generated several changes in the structure of the material.
Considering the dislocations acting as traps for hydrogen, the combined effect of a smaller grain size and a higher dislocation density could result in strong hydrogen trapping. It is known that the quenched samples have a martensitic microstructure, which has a body-centered tetragonal (BCT) atomic arrangement. Thereby, the phases that are stable at room temperature (ferrite and cementite) cannot form because of the fast cooling, in contrast to the annealed samples (ABM) and the base metal (BM), which present a mixture of ferrite/cementite (pearlite) and grains of body-centered cubic (BCC) ferrite [20].

These results are in accordance with the literature: Luu and Wu [40] also showed that lower permeation and diffusivity of hydrogen occur in martensitic microstructures owing to the high density of defects and discontinuities imposed by fast cooling. In addition, the matrix is saturated with carbon that does not completely diffuse. These factors combine to act as strong traps and significantly decrease the hydrogen flow. The diffusion coefficient of martensite reported by Olden et al. [41] for API X70 steel is 1.26 × 10⁻⁵ mm²·s⁻¹, lower than that found for ferrite/pearlite, 7.60 × 10 mm²·s⁻¹. These values are in accordance with the present work, although they are one order of magnitude lower, which could be explained by a higher level of micro-alloying elements than in the API X52 steel, possibly forming precipitates that act as strong traps. Luppo and Ovejero-García [42] also reported similar results, affirming that the hydrogen diffusivity reaches a minimum value in fresh martensite because of the high density of lattice imperfections introduced by the martensitic structure. Thus, it is confirmed that the martensitic transformation creates traps for diffusing hydrogen atoms, with a consequent decrease in diffusivity and hydrogen permeation flux.

Svoboda et al. [39] reported that the main factor affecting hydrogen permeation is hardness, rather than microstructure or chemical composition, with a general trend of decreasing diffusion coefficient with increasing strength. However, it is important to note that heat treatment does not change the distribution and chemical composition of the inclusions inside the bulk. The grain boundaries, dislocations, and inclusions can then act not only as hydrogen traps but also as obstacles to physical diffusion through the metal [43].

Weld Metal (WM). The WM samples showed the lowest permeation rate of all the analyzed samples (see Table 6 and Figure 13). The WM microstructure was changed by melting and solidification during welding. The recrystallization and uncontrolled grain growth in the heat-affected zone (HAZ), caused by the thermal cycles, increase the dislocation density. In addition, these processes contribute to many other factors, such as large changes in the microstructure due to the localized heat input, phase additions, phase changes, precipitation, residual stresses, discontinuities in the matrix, and many others, according to Han et al. [6]. According to Fallahmohammadi et al.
[43], hydrogen diffusion decreases when the grain size decreases. Analyzing Figures 2 and 13, the WM had a smaller grain size than the other microstructures, resulting in a lower hydrogen permeation rate. In addition, during the welding process the weld metal microstructure is changed by melting and solidification. Recrystallization and grain growth occur differently in the heat-affected zone (HAZ). The welded joints can thus be affected by different welding heat inputs, which changes the hydrogen permeation behavior through the weld metal.

The results imply that the increase in the number of dislocations was one of the main factors for the decay of the diffusion coefficient (Table 6), as observed by [20,34]. Moreover, the presence of inclusions played an important role in holding the hydrogen. Variations of the microstructure and a significant presence of inclusions are shown in the metallographic analysis of the WM (Figure 9). Haq et al. [20] reported that a high level of S and Mn in the metal may form MnS precipitates, which are strong reversible traps; they also considered that trapping sites increase with S content. Table 3 shows that the S content in the WM is higher than in the BM; hence the number of trapping sites is higher as well, which is associated with the low diffusion coefficient presented by the WM.

The pearlitic phase is the dominant trap site for diffused hydrogen [2]. These sites are located at the interface between ferrite and cementite, in lamellar pearlite or at the pearlite boundary. Thus, the large number of fine cementite interfaces in a bainitic structure, such as the grains shown in Figure 7(c), acts as a strong inhibitor of hydrogen diffusion. The M/A constituents are expected to be a reversible trap; however, the retained austenite alone does not trap hydrogen significantly. Park et al. [2] attribute the great capacity to decrease diffusion to the interfaces between the retained austenite and the martensitic layer within the M/A constituent.

Conclusions. After the experiments, the current density was not affected by the changes in microstructure produced by the thermal treatments, which could imply that the thermal treatments do not affect the corrosion resistance. The low permeation and diffusivity of hydrogen occurred in the martensitic microstructure and were related to the high density of defects and discontinuities imposed by rapid cooling; in addition, the matrix is saturated with carbon that does not completely diffuse. These factors combine to act as traps and significantly decrease the hydrogen flow. Furthermore, the quenched material had martensitic interlath interfaces, a high density of dislocations, and carbide-matrix interfaces, all of which act as hydrogen traps. The WM samples showed the lowest permeation rate of all the analyzed samples, as can be seen from the diffusion coefficient calculations. This probably occurred because the weld metal microstructure was changed by the melting and solidification process during welding. The recrystallization and uncontrolled grain growth in the weld metal and in the heat-affected zone (HAZ), caused by the thermal cycles, increase the dislocation density. The lowest permeation rate occurred because of the large number of dislocations and inclusions, which act to retard hydrogen diffusion.

Figure 1: Optical microscopy image (low magnification) of the weld zone in API X52 carbon steel and regions analyzed.
Figure 2: Optical microscopy image of the interface between the weld metal and the heat-affected zone: (a) at position 1; (b) at position 2 (both at 200x magnification).
Figure 6: Optical image of the weld zone in API X52 carbon steel and the regions analyzed by SEM secondary electron imaging at high magnification.
Figure 7: SEM secondary electron image of (a) base metal at zone 1; (b) weld metal at zone 2, showing the M/A constituent and the inclusions; (c) HAZ at zone 3, showing the M/A constituent and regions formed by bainite.
Figure 9: SEM secondary electron image and EDS spectra of (a) inclusions present in the API X52 carbon steel and (b) an area without inclusions.
Figure 14: E versus pH for the sodium thiosulfate and H₂S thermodynamic equilibrium in aqueous solutions [18].
Figure 15: Effective diffusion coefficient of hydrogen in API X52 steel using different methods: (a) Time Lag, t_L, and Breakthrough, t_B; (b) Fourier.
Table 1: Chemical analysis of the base metal (BM) and the weld metal (WM) of the API X52 carbon steel.
Table 2: Terminology and heat treatment conditions of the API X52 carbon steel samples.
Table 4: Measured hardness of the samples studied.
Table 5: OCP and current density at 50 mV and 100 mV above the OCP.
Table 6: Permeation values for the different microstructures of API X52 carbon steel.
Table 7: Data obtained from analysis of the hydrogen permeability plots for all samples of API X52 steel.
Table 8: D_eff values of hydrogen for different steels obtained from the literature.
2018-12-07T16:45:52.174Z
2017-10-12T00:00:00.000
{ "year": 2017, "sha1": "851913dbab9eaa6bceb6757144b56bf061fb4843", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/ijc/2017/4927210.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "851913dbab9eaa6bceb6757144b56bf061fb4843", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
269613165
pes2o/s2orc
v3-fos-license
Arrayed in vivo barcoding for multiplexed sequence verification of plasmid DNA and demultiplexing of pooled libraries Abstract Sequence verification of plasmid DNA is critical for many cloning and molecular biology workflows. To leverage high-throughput sequencing, several methods have been developed that add a unique DNA barcode to individual samples prior to pooling and sequencing. However, these methods require an individual plasmid extraction and/or in vitro barcoding reaction for each sample processed, limiting throughput and adding cost. Here, we develop an arrayed in vivo plasmid barcoding platform that enables pooled plasmid extraction and library preparation for Oxford Nanopore sequencing. This method has a high accuracy and recovery rate, and greatly increases throughput and reduces cost relative to other plasmid barcoding methods or Sanger sequencing. We use in vivo barcoding to sequence verify >45 000 plasmids and show that the method can be used to transform error-containing dispersed plasmid pools into sequence-perfect arrays or well-balanced pools. In vivo barcoding does not require any specialized equipment beyond a low-overhead Oxford Nanopore sequencer, enabling most labs to flexibly process hundreds to thousands of plasmids in parallel. Introduction Sequence verification of plasmid DNA is a cornerstone of many cloning and molecular biology workflows.Sanger sequencing, a method developed in 1977 ( 1 ) and commercialized in 1986, remains one of the most commonly used methods.In its current form, Sanger sequencing uses a DNA primer, a DNA polymerase, and fluorescent dideoxy chain terminating nucleotides to produce DNA fragments, which are then separated by capillary electrophoresis to 'read' which nucleotide caused a termination at each position.The method requires a relatively clean plasmid template (e.g. a miniprep) or amplicon and results in reads of < 1 kb for ∼$2-5 / reaction, not including the cost of DNA purification ( ∼$2) or a sequencing primer ( ∼$8).Although essential for small-scale studies, Sanger sequencing can be expensive and time-intensive for applications that require sequencing many plasmid clones or long DNA molecules: each clone requires a separate plasmid purification and each ∼500-600 bp region of a long DNA molecule requires a separate sequencing primer and Sanger reaction.Because of this, alternative methods that leverage high-throughput sequencing technologies are being developed. 
In vitro methods have been developed to multiplex highthroughput sequencing by introducing DNA barcode 'indices' via PCR (2)(3)(4)(5) or Tn5 transposase tagmentation ( 6 ,7 ) (see also https:// www.octant.bio/blog-posts/ octopus-v3 ), enabling the short-read (e.g.Illumina, MGI, Element) or longread (e.g.Oxford Nanopore Technology (ONT), PacBio) sequencing platforms to sequence many barcoded samples at once.Illumina provides the high throughput and accuracy at low cost, but maximum read lengths of 125-300 bp make barcoding and read assembly schemes more challenging and can miss some common plasmid features such as long repeated elements, structural variation, or plasmid multimers ( 8 ,9 ).By contrast, the ONT platform generates long reads spanning entire plasmids at reasonable quality and offers better discrimination between these long DNA features ( 10 ,11 ).Additionally, the ONT instrument is inexpensive enough to be purchased and run by most academic labs, and the ability to vary the user-defined runtime enables labs to scale the sequencing throughput and cost to meet variable demand.This flexibility means that a highthroughput plasmid barcoding and sequencing method built on the ONT platform could be more widely accessible.Indeed various in vitro protocols ( 3 ,7 ) and commercial services ( https: // www.plasmidsaurus.comand https:// www.primordiumlabs.com ) have been developed to more efficiently multiplex ONT plasmid sequencing.However, all these methods require that each plasmid is purified from cells and barcoded individually, with these library preparation steps having an outsized impact on the overall cost and throughput of plasmid sequencing. Here, we develop a B acterial P ositioning S ystem ( BPS ): a platform that uses bacterial conjugation, in vivo DNA cutting, and in vivo recombination to barcode and index plasmids in large bacterial arrays.This platform enables different samples to be pooled before plasmid isolation, library preparation, and ONT sequencing, greatly increasing throughput of routine plasmid sequencing.We show that BPS can sequence, with high accuracy and recovery rate, tens of thousands of plasmids in parallel at a cost between $0.12 and $1.40 per plasmid (a 5-to 70-fold cost reduction relative to existing protocols).To demonstrate new capabilities that come with this increased scale, we show that BPS can be used to transform overdispersed error-containing oligonucleotide and gene library pools into sequence-verified arrays and well-balanced pools. BPS protocol for in vivo barcoding and pooled sequencing of plasmid arrays For the most current version of a 'wetbench' protocol, visit http:// darachm.gitlab.io/bps/ and navigate to the 'BPS Proto-col'.For questions, suggestions, or issues, please open an Issue at http:// gitlab.com/darachm/ bps/ -/ issues . 
Construction of barcoded donor plasmids and arrayed donor clones

The backbone of the donor vectors (pSL438 and pSL439, Supplementary Table S4) was constructed to contain (i) KanR (kanamycin resistance), (ii) oriT (origin of transfer), (iii) R6K ori γ (a conditional replication origin dependent on expression of the phage-derived pir1), and (iv) a swapping region, I-SceI-HU-HD-I-SceI, where I-SceI is the recognition site of the endonuclease I-SceI, and HU (5′-ttgccctctctcttcattcagggtcatgagaggcacgccattcaaggggagaagtgagatc-3′) and HD (5′-aagaacttttctatttctgggtaggcatcatcaggagcagga-3′) are the upstream and downstream homology regions for recombination. In the swapping region, a selection cassette containing HygR-SacB was cloned between HU and HD. To insert random barcodes into the donor backbones (pSL438 and pSL439), an oligonucleotide library (pXL633, Supplementary Table S5) that contains a NotI restriction site, a barcode region of 15 random nucleotides, and a region of homology to both donor backbones was ordered from IDT. pXL633, paired with pXL585, was used to amplify the barcodes by PCR with ∼1 ng of either pSL438 or pSL439 as template. The resulting PCR products were restriction digested and ligated into the corresponding donor vector via the NotI and XmaI sites. Following the same cloning protocol as above, the ligation products were transformed into competent BUN20 donor cells and the barcoded donor clones were selected on LB + Kan agar plates at 37°C. Transformant clones were then randomly selected and arrayed to generate two 96-well barcoded donor collections: pSL438_BC and pSL439_BC (Supplementary Table S4). To identify the barcode sequences in the arrayed donor collections, the regions containing the barcodes were amplified by colony PCR (15) using pXL583 and pXL584 (Supplementary Table S5) as primers. The amplicons were then purified and Sanger sequenced using pXL583. Barcodes were then extracted from the reads to compile two lists of donor barcode collections.

Construction of barcoded recipient plasmids and arrayed recipient clones

Plasmid pSL937, which is used as the backbone for inserting the random barcodes to generate the arrayed, barcoded recipient collection, was constructed from the following sources: (i) the plasmid backbone/origin of replication from pBR322 (16,17), (ii) GmR (gentamicin resistance marker) from pUC18-mini-Tn7T-Gm (18), (iii) the homology sequences HU and HD and two I-SceI recognition sites in a HU-I-SceI-I-SceI-HD configuration, and (iv) a rhamnose-inducible toxin relE (PrhaBAD-relE) from pSLC-217 (19), which was cloned between the two I-SceI sites. To insert random barcodes into the recipient backbone (pSL937), an oligonucleotide library (pXL631, Supplementary Table S5) that contains an XhoI restriction site, a barcode region of 20 random nucleotides, and a region of homology to pSL937 was ordered from IDT. pXL631, paired with pXL154 (Supplementary Table S5), was used to generate barcodes via PCR with ∼1 ng of pSL937 as template. The resulting PCR products were digested and ligated into pSL937 using the MluI and XhoI restriction sites. The ligation products were then transformed into competent BW28705 cells containing a spectinomycin-resistant helper plasmid, pSL361, which was constructed by integrating a multi-cloning site and the pBAD-I-SceI (Addgene) endonuclease gene into pML104 (Supplementary Table S4). Barcoded recipient clones were selected on LB + Sp + Gm + 2% glucose at 30°C. Transformants were then randomly selected and arrayed into ten 96-well plates.
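A minimal sketch of the barcode-extraction step from the Sanger reads is shown below. The 15-nt barcode is pulled out by its flanking sequence context; the flanks used here (the NotI and XmaI cloning sites) are assumptions for illustration, and the real script would use the exact sequence context of the pSL438/pSL439 swapping region.

```python
import re

# Assumed flanking context around the 15-nt random barcode; replace with the true
# sequences from the pSL438/pSL439 donor backbone design.
LEFT_FLANK = "GCGGCCGC"    # NotI site (assumed to lie just upstream of the barcode)
RIGHT_FLANK = "CCCGGG"     # XmaI site (assumed to lie just downstream)

BARCODE_RE = re.compile(LEFT_FLANK + r"([ACGT]{15})" + RIGHT_FLANK)

def extract_barcode(sanger_read):
    """Return the 15-nt donor barcode from a Sanger read, or None if the context is not found."""
    match = BARCODE_RE.search(sanger_read.upper())
    return match.group(1) if match else None

reads = {
    "A1": "TTAGGCGGCCGCACGTACGTACGTACGCCCGGGTTAA",   # toy read with a clean barcode
    "A2": "TTAGGCGGCCGCNNNNNNNNNNNNNNNCCCGGGTTAA",   # ambiguous base calls -> no barcode
}
donor_barcodes = {well: extract_barcode(seq) for well, seq in reads.items()}
print(donor_barcodes)   # {'A1': 'ACGTACGTACGTACG', 'A2': None}
```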
The previously constructed two donor barcode plates were repeatedly used to acquire position and sequence identity of unknown barcodes on recipient plasmids to construct recipient positional barcodes plates.Because sequencing chimeras may generate and mis-associate donor barcodes to recipient barcodes, each of these 96-array recipient barcode plates were mated with two donor plates to allow cross-validations and ensure accurate parsing results.The resultant recombinants containing known donor barcodes and unknown recipient barcodes were sequenced by Illumina MiSeq platform.A total of 831 unique recipient barcodes were detected and 768 (requiring a pairwise hamming distance > 5) were randomly rearrayed into 8 × 96-arrayed and 2 × 384-arrayed plates, which serve as positional barcode plates for parsing unknown DNA blocks in donor plasmids. Construction of donor plasmid libraries containing oligonucleotide pools and arrayed donor clones pSL1071 and pSL1064, which contain the NsrR-PheS or HygR-SacB cassettes, respectively, two I-SceI sites, and two homology regions for recombination (HU and HD), were used as the backbone to insert oligonucleotide pools.The 300-base oligonucleotide pools were ordered from IDT and Twist according to the following design, GCTTA TTCGTGCCGTGTTA TGGCGCGCCNN ... NNG CGGCCGCGGGC AC AGCAATCAAAAGTA, where GCTT A TTCGTGCCGTGTT A T and GGGC AC AGC AAT-CAAAAGTA are priming sites for the forward and reverse primers (skpp-101-F and skpp-101-R, Supplementary Table S5 ) to amplify the oligonucleotide pool, GGCGCGCC and GCGGCCGC are recognition sites for restriction enzymes AscI and NotI, and NN…NN denotes the 244-base sequences randomly selected from the human genome assembly (GRCh38) by using a custom Python script ( Supplementary Table S6 ).The amplification of the oligonucleotide pool was performed with ∼5 ng of template DNA and KAPA HiFi polymerase (Roche) with the annealing temperature at 53ºC and extension time of 15 s for 14-20 cycles.PCR products were purified using DNA Clean & Concentrator-5 (Zymoresearch).To clone PCR products into the donor plasmid pSL1071 or pSL1064, AscI and NotI restriction enzymes were used.The digestion reactions of PCR products and pSL1071 / 1064 were performed at 37ºC for 4 hours.Digested products were then size selected by running a 1.2% agarose gel and recovered using Zymoclean Gel DNA Recovery Kit (Zymoresearch).The ligation reaction was performed with 25 ng of digested vector and 3.8 ng of inserts using T4 DNA ligase (NEB) at 16ºC for 15 h.Donor plasmids were then transformed into the donor cells by heat shock at 42ºC for 60 s and recovery at 37ºC for 45 min.Resulting donor clones were randomly arrayed on PlusPlates.384-arrayed donor clones were generated by PIXL colony picker. 
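As a sketch of how the 244-base genomic inserts and the full 300-base oligonucleotides described above could be generated, the snippet below draws random N-free fragments from a genome FASTA (e.g. GRCh38) and wraps them in the fixed priming and restriction-site sequences from the design string. The file name, fragment count, and random seed are placeholders, and loading whole chromosomes into memory with Biopython is only practical here for illustration.

```python
import random
from Bio import SeqIO   # Biopython, as used by the original custom script

FWD_PRIME = "GCTTATTCGTGCCGTGTTAT"    # 5' priming site (skpp-101-F) from the design above
ASCI_SITE = "GGCGCGCC"                # AscI recognition site
NOTI_SITE = "GCGGCCGC"                # NotI recognition site
REV_PRIME = "GGGCACAGCAATCAAAAGTA"    # 3' priming site from the design above
INSERT_LEN = 244

def random_fragments(fasta_path, n, insert_len=INSERT_LEN, seed=1):
    """Draw n random insert_len-bp fragments without Ns from a genome FASTA (e.g. GRCh38)."""
    rng = random.Random(seed)
    chroms = [rec for rec in SeqIO.parse(fasta_path, "fasta") if len(rec.seq) > insert_len]
    fragments = []
    while len(fragments) < n:
        rec = rng.choice(chroms)
        start = rng.randrange(0, len(rec.seq) - insert_len)
        frag = str(rec.seq[start:start + insert_len]).upper()
        if "N" not in frag:            # skip assembly gaps and masked regions
            fragments.append(frag)
    return fragments

def oligo_300mer(fragment):
    """Assemble one full 300-base oligonucleotide from a 244-bp genomic fragment."""
    oligo = FWD_PRIME + ASCI_SITE + fragment + NOTI_SITE + REV_PRIME
    assert len(oligo) == 300
    return oligo

# pool = [oligo_300mer(f) for f in random_fragments("GRCh38.fa", n=1000)]
```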
Pooled amplicon sequencing on the Illumina platform

To extract the recombinant plasmids, cells were scraped from each 96-position array selection plate and a pooled plasmid extraction was performed using the Plasmid Plus Mini Kit (QIAGEN). The plasmid DNA was quantified and diluted to ∼1 ng/μl, which corresponds to approximately 1.5 × 10⁶ copies of each unique barcode-barcode pair per 96-array plate. A two-step PCR was performed, as described (20), with modifications. First, 4-5 cycles of PCR with OneTaq polymerase (New England Biolabs) were performed using the forward (pBPS_fwr) and reverse (pBPS_rev) primers listed in Supplementary Table S9. ∼1 ng of recombinant plasmid DNA was amplified in a single 50 μl PCR reaction with an annealing temperature of 55°C and an extension time of 20 s. To increase the multiplexing of sequencing samples, a unique pair of first- and second-step PCR primers (see Supplementary Table S9) was used to amplify the plasmid DNA from each mated plate; this uniquely barcodes each amplification reaction and enables pooling of multiple mated plates together in one sequencing library.

Primers for the first-step PCR have the following general configuration, and the sequences are listed in Supplementary Table S9:

pBPS_fwr: ACACTCTTTCCCTACACGACGCTCTTCCGATCTNNNNNNNNXXXXXXttcggttagagcggatgtg
pBPS_rev: GTGACTGGAGTTCAGACGTGTGCTCTTCCGATCTNNNNNNNNXXXXXXXXXaggtaacccatatgcatggc

The Ns in these sequences correspond to random nucleotides and are used in the downstream analysis to remove skew in the counts caused by PCR jackpotting. The Xs correspond to one of several multiplexing tags, which allows different plasmid pools to be distinguished when loaded on the same sequencing flow cell. The lowercase sequences correspond to the priming sites on the recombinant plasmids. The uppercase sequences correspond to the Illumina Read 1 or Read 2 sequencing primer. The PCR products were purified using NucleoSpin columns (Macherey-Nagel) and eluted into 33 μl of water. A second PCR of 23-25 cycles was performed with PrimeStar HS polymerase (Takara) or KAPA HiFi DNA Polymerase (Roche), with 33 μl of cleaned product from the first PCR as template in a 50 μl total volume per tube, an annealing temperature of 69°C, and an extension time of 20 s. Primers for this reaction were the standard Illumina TruSeq dual-indexed primers (D501-D508 and D701-D712) listed in Supplementary Table S9.

PCR products were cleaned using NucleoSpin columns. Amplicons from each mating plate were uniquely labeled with our customized primer indices (first-step PCR) as well as standard Illumina indices (second-step PCR). This quadruple-indexed strategy increases the multiplexing capacity for sequencing. Cleaned amplicons were pooled and paired-end sequenced at ∼800 reads per barcode-barcode pair on an Illumina MiSeq, HiSeq, or NextSeq with a 25% PhiX genomic DNA spike-in.
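Given this primer structure, a read can be split into its UMI, plate-multiplexing tag, and downstream barcode-containing sequence by anchoring on the constant priming site. The sketch below handles only Read 1 (the pBPS_fwr side, with an 8-nt UMI and 6-nt tag); Read 2 would be handled analogously with its 9-nt tag, and the toy read is a placeholder.

```python
import re

FWD_ANCHOR = "TTCGGTTAGAGCGGATGTG"    # constant priming site from pBPS_fwr (uppercased)
UMI_LEN, TAG_LEN = 8, 6               # the Ns and Xs in pBPS_fwr

READ1_RE = re.compile(
    r"^(?P<umi>[ACGTN]{%d})(?P<tag>[ACGTN]{%d})%s(?P<insert>[ACGTN]+)"
    % (UMI_LEN, TAG_LEN, FWD_ANCHOR)
)

def parse_read1(seq):
    """Split an Illumina Read 1 into (UMI, multiplexing tag, downstream plasmid sequence).
    Returns None if the constant anchor is not found where the primer design expects it."""
    m = READ1_RE.match(seq.upper())
    return (m.group("umi"), m.group("tag"), m.group("insert")) if m else None

# Toy read: 8-nt UMI + 6-nt plate tag + anchor + barcode-containing plasmid sequence
example = "ACGTTGCA" + "AAGGTC" + FWD_ANCHOR + "GATTACAGATTACAGATTACA"
print(parse_read1(example))
```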
Illumina sequencing data analysis for demultiplexing and sequence verification

Donor-recipient double barcode amplicon sequencing data were analyzed with customized Python scripts and Bartender (21). First, Illumina reads were demultiplexed using the Illumina indices; any sequence without an exact match to two Illumina indices was discarded. Barcodes were extracted from the demultiplexed sequences using regular expressions. Unique molecular identifiers (UMIs, the Ns in pBPS_fwr and pBPS_rev) were also extracted based on their expected position in the Illumina reads. Barcode reads, which contain a mix of true barcode sequences and sequences with errors stemming from PCR or sequencing, were next clustered into consensus sequences using Bartender (21). Each barcode cluster was then examined for replicate UMIs (indicating PCR duplicates) using Bartender, and all duplicates were removed to generate final counts of each barcode pair. Double barcodes with < 20 reads were excluded; many of these are expected to be PCR chimeras (barcodes fused during PCR amplification). The remaining reads were used to ascertain the position of each donor barcode from its corresponding recipient barcode. Customized scripts are available at https://github.com/Li-WY/BPS-data-analysis.

E. coli pre-LASSO probe design

The pre-LASSO probes (∼160-180 nt) used in this study were designed from the E. coli str. K-12 substr. MG1655 reference ORFeome (RefSeq: NC_000913.3) using a custom Biopython script. The algorithm was set up to select probes that capture E. coli ORFs ranging from 999 to 2000 bp in size. The ligation and extension arms of the pre-LASSO probes had similar melting temperatures, in the 65-70°C range (22). The complete list of pre-LASSO probes with their targeted ORFs is included in Supplementary Table S8. The 417 pre-LASSO probes were obtained as pooled oligonucleotides from Twist Bioscience and used for the assembly of mature LASSO probes, as described (22). The pre-LASSO design was: 5′-CAGACGACGGCCAGTGTCGAC, Ligation Arm, AACACTTCTTGCGGCGATGGTTCCTGGCTCTTCGATC, Extension Arm, GGATCCTACGGTCATTCAGC-3′.
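The melting-temperature constraint on the probe arms can be checked with Biopython's nearest-neighbour Tm routine. In the sketch below, the salt conditions, the 3 °C similarity window, and the example arm sequences are illustrative assumptions rather than the parameters of the original design script.

```python
from Bio.SeqUtils import MeltingTemp as mt

def arm_tm(seq, na_mM=50, mg_mM=0):
    """Nearest-neighbour melting temperature of a probe arm (illustrative salt conditions)."""
    return mt.Tm_NN(seq, Na=na_mM, Mg=mg_mM)

def arms_ok(ligation_arm, extension_arm, lo=65.0, hi=70.0, max_delta=3.0):
    """Accept an arm pair if both Tm values fall in the 65-70 C window and are similar to each other."""
    t_lig, t_ext = arm_tm(ligation_arm), arm_tm(extension_arm)
    in_window = lo <= t_lig <= hi and lo <= t_ext <= hi
    return in_window and abs(t_lig - t_ext) <= max_delta

# Hypothetical arm pair for one target ORF
print(arms_ok("ATGGCTAAACGTGTGGTCATCACCGG", "GGTGAAACCCTGGGTGAAGCTCTGGA"))
```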
A post-capture PCR was performed using AttB1CaptF and AttB1CaptR ( Supplementary Table S5 ) as described ( 22 ).The post capture PCR product was purified by using AMPure XP Beads (Beckman Coulter) and mixed with the Gateway 'donor vectors' (pDONR221 ( 24 )) and the BP Clonase enzyme mix (Invitrogen).The BP reaction was purified and used for electroporation in NEB 10-beta Electro-competent E. coli (c3020K) to generate cloned libraries. Integration of captured ORF pools into BPS donor plasmids Plasmid pSL1064, which contains the HygR-SacB cassette, two I-SceI sites, and two homology regions for recombination (HU and HD), was used as the backbone to insert pooled E. coli ORFs.The E. coli ORFs were integrated into the plasmid pDONR221.oSL1581 and oSL1582 ( Supplementary Table S5 ) were used to introduce AscI and NotI recognition sites to these ORFs for cloning into pSL1064 by PCR.The amplification of the pooled ORFs was performed with 10 ng of template ORFs and KAPA HiFi polymerase (Roche) for 20-cycle PCR with the annealing temperature at 57ºC and extension time of 3.5 min.Amplified ORFs were purified using DNA Clean & Concentrator-5 (Zymoresearch).To clone amplified products into the donor plasmid pSL1064, AscI and NotI restriction enzyme recognition sites were used.The digestion reaction of amplified products and pSL1064 were performed at 37ºC for 4 h.Digested products, ranging from 1-2 kb, were size selected by cutting bands from a 1.2% Agarose gel and PAGE 5 OF 13 isolating DNA using Zymoclean Gel DNA Recovery Kit (Zymoresearch).The ligation reaction was performed with 100 ng of digested vector backbone and 70.8 ng of inserts using T4 DNA ligase (NEB) at 16ºC for 15 h. Arrayed mating for barcoding DNA with BPS Each donor plate was mated to each barcoded recipient plate in an arrayed format on agar plates.The donor arrays were grown on LB + Kan + Hyg / clonNat plates overnight at 37ºC; the recipient arrays were grown on LB + Sp + Gm + 2% glucose overnight at 30ºC.The agar media for arrayed mating was LB + Ara + IPTG, pre-warmed in 37ºC for 1 hour.Both donor and recipient clones were transferred onto the mating plates using SINGER RO T OR HDA robot with 96-or 384position pin pads, and grown for 3-6 h at 30ºC.The mated cells were then transferred onto the selection plates containing LB + Ara + Rha + Gm + Hyg / clonNat.Recombinant clones were then selected at 37ºC overnight. Pooled sequencing of whole plasmid backbones on the Oxford Nanopore platform Bacterial clones on selection plates were scraped and pooled to extract recombinant plasmids containing the recipient barcodes and DNA blocks (donor barcodes, oligonucleotides, and E. coli ORFs) using Plasmid Plus Mini Kit (QIAGEN).The pooling capacity is determined by the number of unique positional barcodes.In this study, 768 positional barcodes were routinely used and up to 768 clones (2 × 384-array plates) containing unique positional barcodes can be pooled. Two fragmentation approaches were used to generate linearized plasmids for nanopore library constructions.One is to use the restriction enzyme PmlI (NEB) to cut circular plasmids by incubation at 37ºC for 2 h.Linearized plasmids were size selected by running a 1.2% Agarose gel and recovered using Zymoclean Gel DNA Recovery Kit (Zymoresearch).The second approach is to tagment circular plasmids using the transposome complex from Rapid Barcoding Kit (SQK-RBK114.96). 
BPS analysis pipeline

A bioinformatics pipeline was devised for appropriate, performant, and scalable analysis of BPS experiments; it uses a flexible configuration interface to enable diverse applications. The aims of the pipeline are to (i) gather long-read sequencing data from multiple runs, (ii) extract a small barcode from each read, (iii) use this small barcode to separate reads from each colony, (iv) perform one of two assembly strategies and (v) assess the 'purity' and 'correctness' of the assembly at that position.

Basecalled FASTQ files are filtered for size, and a known-contaminant file is optionally used to filter out reads by alignment (minimap2 (25) and samtools (26)). Within each sequencing library pool, demultiplexing barcodes introduced during ONT library preparation are used to assign sample membership to each read. Alignment to unique 'signature' sequences is used to separate different plasmids within each sample (minimap2 and awk). Alignment to known 'trimming' sequences is used to remove most of the backbone (minimap2 and Python), and fuzzy regular expressions are used to extract exactly the barcode from a known sequence context (itermae, https://gitlab.com/darachm/itermae/). Barcodes are clustered and assigned to an optionally provided list of known barcodes (starcode (27) and Python), and each combination of sample and positional barcodes is then used to separate raw FASTQ records into separate files (awk). These may be assembled using one of two options (A or B). (A) To assemble long (>1 kb) target sequences, read-length distributions per well are analyzed with a Gaussian mixture model to separate different species of plasmid in each well (Python); each cluster of reads is then used for de novo assembly (trycycler (28) and flye (29)) and polishing of the assembly (medaka). (B) To assemble short (<1 kb) targets, raw reads are trimmed by alignment to known trimming sequences (minimap2 and Python) before multiple-sequence alignment (kalign3 (30)), merging into a draft consensus (Python), and polishing of the consensus (racon (31) and medaka). The purity of either assembly is assessed by aligning each raw input sequence to the resulting assembly (either with minimap2 or with pairwise alignment using Biopython (32)). The resulting assemblies are subject to another round of post-assembly processing, with options for raw output, a 'rotation' to begin all circular assemblies at a similar location (minimap2 and Python), trimming of known sequences (minimap2 and Python), and/or exact extraction using fuzzy regular expressions (itermae). Positions with at least 5 reads, for which at least 90% of the reads matched the consensus reference with at least 90% end-to-end identity, were considered pure. Final processed assemblies are optionally compared to a known target sequence file to assess 'correctness', with a perfect match between the consensus and reference sequences considered correct (minimap2 or bwa (33)). Each successfully processed position is analyzed (using R (34)) to output a per-position, per-sample call of purity and correctness (in matching a particular reference in the provided set). The entire pipeline is written in Nextflow (35), uses Singularity (36) to execute Docker containers, and makes extensive use of GNU utilities (including GNU parallel (37)). The pipeline is available on GitLab under the BSD 3-clause license, documentation is available at darachm.gitlab.io/bps, and we will support users via Issues at gitlab.com/darachm/bps/-/issues.
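A minimal sketch of the read-length clustering used in assembly option (A) above is given below: a Gaussian mixture model over read lengths separates reads from different plasmid species that co-occupy a well, so that each cluster can be assembled independently. The component-count search by BIC and the use of scikit-learn here are assumptions for illustration; the pipeline's actual implementation may differ.

import numpy as np
from sklearn.mixture import GaussianMixture

def split_reads_by_length(read_lengths, max_components=3):
    # Fit 1..max_components Gaussian mixtures, keep the model with the lowest BIC,
    # then return a cluster label for each read.
    x = np.asarray(read_lengths, dtype=float).reshape(-1, 1)
    best_model, best_bic = None, np.inf
    for k in range(1, max_components + 1):
        gm = GaussianMixture(n_components=k, random_state=0).fit(x)
        bic = gm.bic(x)
        if bic < best_bic:
            best_model, best_bic = gm, bic
    return best_model.predict(x)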
Illumina sequencing to evaluate the uniformity of oligonucleotides before and after amplification

To approximate the abundance of the original single-stranded oligonucleotides obtained from the service providers, second strands were synthesized using a primer annealing and extension approach (38-42). The second-strand synthesis reaction was performed with ~50 ng (IDT) or ~8 ng (Twist) of template DNA and KAPA HiFi polymerase (Roche), with an annealing temperature of 53°C and an extension time of 30 s, for 2 (IDT) or 10 (Twist) cycles. Only one primer (skpp-101-R) was used for second-strand synthesis. The resultant dsDNAs, together with amplicons of oligonucleotides generated from 14 (IDT) or 20 (Twist) cycles of PCR, were ligated to xGen UDI-UMI Adapters (IDT), which contain full-length sequencing primers, i7/i5 indices, and unique molecular identifiers (UMIs). PE150 reads were generated on the Illumina iSeq platform. The absolute copy number of oligonucleotides/amplicons was estimated by counting the number of unique UMIs associated with each type of oligonucleotide/amplicon.

Results

Overview of the in vivo barcoding platform

MAGIC cloning (12), developed as an in vivo alternative to in vitro Gateway cloning (43), enables rapid subcloning of a DNA block from one plasmid to another using bacterial conjugation and in vivo recombination. A DNA block in a donor plasmid is conjugated into a recipient cell containing a recipient plasmid. A genetic program in the recipient cell recombines the DNA block from the donor plasmid into the recipient plasmid, using the endonuclease I-SceI to cut both plasmids and the lambda Red recombinase to stitch (recombine) the donor DNA block into the recipient plasmid backbone. We extended this platform for multiplexed plasmid sequencing by constructing arrays of cells carrying a barcode that is unique to a position on an array (positional barcodes) in either the donor plasmids or the recipient plasmids (Figure 1A). In addition, we made two major modifications to improve the reliability and portability of the platform. First, the PheS negative selection marker was replaced by relE, with flanking terminators to ensure deliberate control of expression (19). Second, the homing endonuclease I-SceI was placed on a helper plasmid (pSL361) with a temperature-sensitive replication origin (pSC101-ori ts), providing a convenient way to cure (remove) the helper plasmid prior to isolation of recombinant plasmid DNA.

In one design (Figure 1B), DNA constructs are sequenced by integrating them into donor plasmids that are subsequently conjugated into arrays of cells containing recipient plasmids with positional barcodes. In another design (Figure 1C), the whole backbone of a recipient plasmid is sequenced by conjugating positional barcodes from arrays of donor cells and recombining the barcodes into the recipient plasmids. The resultant recombinant plasmids from either workflow, including positional barcodes and the to-be-sequence-verified plasmid DNA, can be pooled and processed for ONT sequencing with a single DNA miniprep and a single ONT library prep. To process these data, we developed a flexibly configured Nextflow-based pipeline to enable read partitioning and scalable de novo assembly of tens of thousands of plasmids.
Accuracy and recovery rate

To test the accuracy and recovery rate of the in vivo barcoding and sequencing platform, we generated 192 clones of barcodes in donor cells/plasmids and 768 clones of barcodes in recipient cells/plasmids, in 2 × and 8 × 96-position arrays, verified the barcode sequence at each position using a combination of Sanger and Illumina sequencing (Materials and Methods), and used these arrays as a test set. We next mated each donor array to each recipient array (96 × 8 matings per donor array), selected for colonies containing barcode-barcode recombinant plasmids, and performed two minipreps (one for each donor array mating). For each plasmid pool (768 clones), we tagmented plasmids using the ONT Rapid Barcoding Kit 96 (V14). Each tagmentation reaction introduces one of 96 unique sample indices to the plasmid pool, enabling up to 73,728 (96 × 768) plasmids to be processed in parallel with our 768-recipient barcode array. We sequenced plasmid pools on an ONT MinION (R10.4.1) at an average sequencing depth of 130 reads per position. A custom BPS bioinformatics pipeline (Materials and Methods) was developed to partition ONT sequencing data by both the positional barcodes (introduced in vivo via conjugation) and the sample indices (introduced in vitro during ONT library preparation), and to assemble (using Trycycler (28) and Flye (29)) and polish (using medaka, https://github.com/nanoporetech/medaka) the consensus sequence for each position on each 96-position array. Consensus ONT sequences of barcode-barcode pairs matched the sequences determined by Illumina and Sanger sequencing in all cases (Materials and Methods). Using these ONT data, we calculated the recovery rate (percent of known positions that were detected) and the accuracy (percent of detections that were correct) of donor barcode sequences (Figure 1D and Supplementary Figure S1). We found perfect accuracy and a high recovery rate (>98%) that improves when the same donor barcode is assayed multiple times (Figure 1D). Missing positions were, in all cases, due to a lack of sufficient sequencing coverage (Supplementary Figure S2). ONT reads also enabled de novo (reference-free) assembly of the whole recombinant plasmids to detect various types of sequence variation from the expected recipient backbone sequence (Figure 1E).
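The recovery and accuracy summaries reported here (Figure 1D) reduce to simple counting, sketched below with a bootstrap over array positions for the error bars. The input layout (dictionaries keyed by array position) is an assumption about data organization, not the pipeline's actual output format.

import random

def recovery_and_accuracy(calls, truth):
    # calls: position -> assembled barcode or None; truth: position -> verified barcode
    detected = [p for p in truth if calls.get(p) is not None]
    correct = [p for p in detected if calls[p] == truth[p]]
    recovery = len(detected) / len(truth)
    accuracy = len(correct) / len(detected) if detected else float("nan")
    return recovery, accuracy

def bootstrap_recovery_se(calls, truth, n_boot=1000, seed=0):
    # Standard error of the recovery rate by resampling positions with replacement.
    rng = random.Random(seed)
    positions = list(truth)
    stats = []
    for _ in range(n_boot):
        sample = [rng.choice(positions) for _ in positions]
        stats.append(sum(calls.get(p) is not None for p in sample) / len(sample))
    mean = sum(stats) / n_boot
    return (sum((s - mean) ** 2 for s in stats) / (n_boot - 1)) ** 0.5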
Low-cost whole-plasmid backbone sequencing with a fast turnaround time

The number of plasmids that can be processed in parallel by in vivo barcoding is determined by the number of unique positional barcodes across arrays. Here, we constructed arrays of 768 positional barcodes to enable DNA isolation and library construction from pools of 768 plasmids in parallel. At this scale of multiplexing, using off-the-shelf robotics for colony picking and arrayed bacterial conjugation/mating, we estimate that thousands of plasmids can be sequenced in parallel for between $0.12 (100× sequencing depth) and $0.53 (1000× sequencing depth) per plasmid, including all consumables and labor costs (high throughput in Figure 1F and Supplementary Table S1). At lower scales of multiplexing, we estimate that manual colony picking and mating can be used to sequence hundreds of plasmids in parallel for between ~$1.00 (100× sequencing depth) and $1.40 (1000× sequencing depth) per plasmid, including all consumables and labor costs (low throughput in Figure 1F and Supplementary Table S2). Because most steps of the procedure are performed on cell or plasmid pools, both low-throughput and high-throughput protocols can be completed with ONT reads ready as soon as the next day (Figure 1G).
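The turnaround estimate in Figure 1G rests on explicitly stated flow-cell assumptions (250 active pores generating 100 bases per second, 7 kb plasmids, 100× depth); the arithmetic behind the roughly 500 plasmids in 4 h is reproduced below as a quick check.

pores, bases_per_second = 250, 100
plasmid_bp, depth = 7_000, 100

bases_per_hour = pores * bases_per_second * 3600          # 90 Mb of data per hour
plasmids_per_hour = bases_per_hour / (plasmid_bp * depth)  # ~128 plasmids per hour at 100x
print(round(plasmids_per_hour * 4))                        # ~514, i.e. the ~500 plasmids quoted for 4 h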
Demultiplexing and sequence verification of oligonucleotide pools

Generation of oligonucleotide pools using arrayed synthesis technologies can be several orders of magnitude less expensive than one-at-a-time column-based DNA synthesis ($0.0005-0.035/base versus $0.07-0.50/base) (44,45). Parallelized methods that capture long blocks of natural DNA can offer similar cost savings relative to de novo gene synthesis. Yet many testing modalities require arrays of DNA designs (e.g. mass spectrometry, microscopy, and enzymatic assays) and are unable to take advantage of these sources of low-cost DNA. We next explored whether the higher throughput of the BPS plasmid sequencing platform could be used to demultiplex such DNA pools.

To test the capability of BPS to demultiplex oligonucleotide pools, we designed a library of 1100 oligonucleotides, each containing a 244-base sequence chosen randomly from the human reference genome GRCh38 (Figure 2A) (46). Differences in synthesis efficiency across nucleotide sequences may result in oligonucleotide pools that are more or less dispersed in frequency (defined here as pool dispersion). Higher pool dispersion requires picking and sequencing more clones per design (higher sampling depths) to recover the same number of designs across an array (20). To minimize potential pool dispersion in our first test, we subset the 1100 oligonucleotide designs to a pool of 100 randomly chosen oligonucleotides that were scored as 'low complexity' by the IDT online sequence complexity analysis tool (https://www.idtdna.com/site/order/plate/gblocks). This pool was synthesized by IDT as an oPool and inserted into donor plasmids/cells by PCR, digestion, and ligation. Randomly arrayed clones were generated from this pool at ~20× sampling depth (the average number of clones picked per DNA block in the pool) for in vivo barcoding and sequencing, as described above, to generate a consensus sequence at each position on each plate. Given the relatively high error rate of ONT simplex reads, we needed to distinguish positions that contain only sequencing errors from those that contain a mixture of two or more plasmids. For each position, we determined a purity score (the fraction of ONT reads that are >90% end-to-end identical to the consensus sequence at that position) and assessed, across a range of purity scores, which clones were indeed pure by examining Sanger sequencing traces (Supplementary Figure S3). We found that clones with purity scores >0.8 were reliably pure (15/15), whereas those with purity scores <0.8 were frequently mixed (7/8). Based on these results, we set a conservative 0.9 purity score threshold for calling a clone 'pure' and flagged putatively 'impure' positions. Illumina sequencing was performed on a subset of samples to validate the correctness of consensus sequences (Materials and Methods); for unflagged positions, the consensus Illumina and ONT sequences agreed 99.7% of the time (with most differences presumably stemming from errors in PCR during the Illumina library prep). After discarding flagged positions, we recovered a sequence-perfect clone for 83% of the oligonucleotide designs in this pool.

We next scaled up the method by demultiplexing, at ~20× sampling depth, a pool containing all 1100 oligonucleotide designs, synthesized by Twist. This full set includes 100 oligonucleotides that were scored as 'intermediate complexity' by the IDT online oligonucleotide analysis tool, indicating that it may be more dispersed than the 100-oligonucleotide pool synthesized by IDT. We recovered sequence-perfect clones for 51.5% (567/1100) of the oligonucleotide designs in the full pool, and for 52.0% (52/100) of the designs in the subset matching the subpool synthesized by IDT.
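The purity score and the 0.9 calling threshold described above can be sketched as follows. difflib's similarity ratio is used here as a crude stand-in for the minimap2/Biopython alignment identity computed by the pipeline, so the exact numbers it produces are only indicative.

from difflib import SequenceMatcher

def purity(reads, consensus, min_similarity=0.9):
    # Fraction of reads whose similarity to the position's consensus passes the cutoff.
    if not reads:
        return 0.0
    hits = sum(SequenceMatcher(None, consensus, r).ratio() >= min_similarity for r in reads)
    return hits / len(reads)

def call_pure(reads, consensus, min_reads=5, threshold=0.9):
    # A position is kept only with sufficient coverage and a purity score at or above the threshold.
    return len(reads) >= min_reads and purity(reads, consensus) >= threshold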
Examination of factors impacting the recovery rate of demultiplexing

Given the high recovery rate of BPS using pre-arrayed colonies as inputs (see the accuracy and recovery rate experiments above), we sought to understand what factors limit the recovery rate from commercial oligonucleotide pools. The recovery rate could be influenced by sequence errors introduced by (i) synthesis and/or assembly of linear DNA, (ii) PCR amplification or (iii) introduction into plasmids and cells. The vendor-reported oligonucleotide synthesis error rate of ~4 × 10^-4 per nucleotide predicts that ~11.3% (1 - 0.9996^300) of oligonucleotides in the pool contain erroneous sequences. Additional sequence errors introduced during amplification and cloning in the BPS protocol appeared to have little impact on the recovery rate: we observed a base substitution error rate roughly the same as the rate reported by the vendors (4.75 × 10^-4 and 3.81 × 10^-4 per nucleotide for the IDT and Twist pools, respectively; Supplementary Figure S4).

We next explored the impact of variation in sequence abundance on the recovery rate by examining the level of pool dispersion at different stages of the protocol for both the IDT (Figure 2B) and Twist (Figure 2C) pools: following second-strand synthesis (Original dsDNA), PCR amplification (Amplicon dsDNA), introduction into plasmids and plating of cell colonies (Randomly Plated Clones), and construction of colony arrays and barcoding by BPS (Arrayed Clones). The lowest abundance correlations were observed between the Amplicon dsDNA and Randomly Plated Clones steps. Using Gini coefficients as a measure of pool dispersion, we found that this step also produced the greatest increase in pool dispersion, driven by large changes in the abundance of particular oligonucleotide designs (Figure 2D and F). To determine whether the pool dispersion at this cloning step was due to inadequate clone sampling, we estimated the expected recovery rate based on the frequency distribution observed in the Amplified dsDNA pools (Figure 2E and G). We found that, at a 7× sampling depth, >93% of sequence-perfect designs are expected to be present at the Randomly Plated Clones step, whereas we could recover only 86% and 49%, for the IDT and Twist pools respectively, by resampling the observed Randomly Plated Clones (Figure 2E and G). These data suggest that the cloning step, which includes introduction of the oligonucleotide designs into plasmids and cells, has the greatest impact on frequency dispersion and therefore on demultiplexing performance.
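Two of the calculations invoked in this section are reproduced below as simple sketches: the expected fraction of error-containing oligonucleotides given a per-base error rate, and the expected recovery rate when clones are drawn at random from a pool with a given empirical frequency distribution. The exact in silico resampling procedure used for Figure 2E and G may differ in detail from this sketch.

import numpy as np

def frac_with_error(error_rate_per_nt, length_nt):
    # Probability that a molecule of the given length carries at least one error.
    return 1.0 - (1.0 - error_rate_per_nt) ** length_nt

print(round(frac_with_error(4e-4, 300), 3))  # 0.113, i.e. the ~11.3% quoted above

def expected_recovery(frequencies, sampling_depth):
    # Mean probability that a design is sampled at least once when depth * n_designs
    # clones are drawn from the empirical frequency distribution.
    f = np.asarray(frequencies, dtype=float)
    f = f / f.sum()
    n_draws = int(round(sampling_depth * len(f)))
    return float(np.mean(1.0 - (1.0 - f) ** n_draws))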
Demultiplexing a captured open reading frame (ORF) library

To demonstrate the capability of BPS to demultiplex pools of longer DNA (1-2 kb) with variable sizes, we parsed a library of ORFs captured using long-adapter single-strand oligonucleotide (LASSO) probes (47). This DNA capture technique uses pools of long inversion probes to selectively hybridize and amplify multiple target regions from genomic DNA. While LASSO probes have been demonstrated to capture >3000 E. coli ORFs in parallel (47), off-target sequences are also captured, desired on-target sequences may contain mutations from PCR, and, as expected for any pooled technique, the relative abundance of each type of sequence can vary significantly. These features can limit the cost-efficiency and data quality of downstream pooled assays. To determine whether BPS could be used with LASSO probes to generate ORF arrays or well-balanced sequence-perfect pools, we captured 417 E. coli ORFs (1-2 kb) with LASSO probes, cloned them as a pool into the BPS donor plasmid backbone, and picked a total of 10,752 clones (~25× sampling depth) for in vivo barcoding and sequence verification (Figure 3A). Of the 8562 pure clones isolated by BPS, 30.7% were sequence perfect, 61.2% had at least one mismatch, and 7.5% were off-target DNA. The high number of sequences containing at least one mismatch is likely a result of the many cycles of PCR required by the process: 25 cycles for amplification after ORF capture and 20 cycles for cloning into the BPS donor plasmids. Sequence-perfect clones covered 52.3% (218/417) of the targeted ORFs (Figure 3B). The length distribution of these error-free ORFs suggests that our protocol can capture ORFs across all size ranges present in the original pool (Figure 3C).

When DNA pools contain sequence errors or are overdispersed, the power of sequencing-based pooled functional assays (e.g. massively parallel reporter assays (48)) can suffer because (i) constructs with errors may not provide useful data, (ii) more reads are needed to assay low-abundance constructs and (iii) some sequence errors (such as those resulting in premature stop codons) may contaminate selection schemes. To demonstrate the capability of our platform to address these issues by constructing error-free, well-balanced pools, we arrayed 111 clones carrying sequence-perfect ORFs, grew replicates of these arrays both in liquid multi-well plates and on agar pads, and pooled each replicate separately. For each replicate, we extracted plasmids and ONT sequenced the library (without additional amplification). We found that demultiplexing with BPS and re-pooling drastically improved the uniformity of the pool (Figure 3D). ORFs that were highly overrepresented following LASSO capture were not overrepresented in the balanced libraries. The relative abundance of each ORF in a balanced pool was highly correlated between replicate pools using the same outgrowth procedure (Pearson's r = 0.73, P < 2.2 × 10^-16 for liquid versus liquid; Pearson's r = 0.82, P < 2.2 × 10^-16 for agar versus agar) or different outgrowth procedures (Pearson's r = 0.82, P < 2.2 × 10^-16 for liquid versus agar) (Figure 3E), suggesting that the remaining abundance differences are due to reproducible growth rate differences between clones.
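Pool uniformity in Figures 2 and 3D is summarized with Gini coefficients over normalized design abundances (0 for a perfectly uniform pool, values approaching 1 for a highly skewed one). A standard Lorenz-curve formulation is sketched below; whether it matches the exact estimator used for the figures is an assumption.

import numpy as np

def gini(abundances):
    # Gini coefficient of a vector of non-negative abundances (0 = perfectly uniform).
    x = np.sort(np.asarray(abundances, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    cumulative_share = np.cumsum(x) / x.sum()
    return float((n + 1 - 2 * cumulative_share.sum()) / n)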
Discussion

We have developed the Bacterial Positioning System (BPS), an in vivo plasmid barcoding platform for high-throughput sequence validation of plasmid DNA. In contrast to in vitro barcoding methods that require each sample to be prepared independently (using microwell plates, seals, thermocycler time, high-fidelity polymerase, and primers for individual barcoding reactions), BPS barcodes plasmid DNA in vivo, enabling arrays of cells to be pooled prior to sample processing. This dramatically reduces the cost and hands-on time required, and using BPS with low-overhead ONT sequencing enables most labs to flexibly process hundreds to thousands of plasmids.

One application of high-throughput plasmid sequencing is demultiplexing of DNA pools that are the products of next-generation DNA synthesis (49), pooled DNA assembly (50-52) or pooled DNA capture (47,53) methods. While construction of these pooled libraries is inexpensive relative to constructing each design independently, demultiplexed plasmids are required for many testing modalities (e.g. mass spectrometry, microscopy and enzymatic assays) and for downstream DNA engineering. We previously developed an arrayed in vivo barcoding platform in S. cerevisiae and used it to demultiplex and sequence-verify pools of oligonucleotides encoding gRNA variable regions (54). However, that platform requires that both the barcode and the DNA to be sequenced are placed at a similar location in the yeast genome. This limitation, combined with yeast's slower growth rate relative to E. coli, makes the yeast platform too cumbersome for most applications. Another demultiplexing solution is dial-out PCR (55-58), which uses pre-designed unique tags to prime specific sequences from a pool. Although this in vitro approach can recover designs present at low relative abundance, it is expensive and time-consuming to scale up: isolation of each design requires a PCR with a unique set of primers. By-products among the retrieved sequences also limit the applications of dial-out PCR (56). Scalable low-cost plasmid sequencing with technologies such as BPS offers an alternative 'shotgun' approach (59-62) to demultiplexing: instead of isolating each design by bespoke methods, clones are oversampled from a pool and sequenced, with the aim of recovering a large fraction of designs.

However, shotgun demultiplexing approaches have several constraints that limit their utility. As with the high sequencing depths required for shotgun sequencing, the number of clones sampled (the sampling depth) must be several fold the size of a library to have a good chance of recovering most designs, even when the frequency dispersion is low. Frequency dispersion was not low in some of our experiments, meaning that, even at high cloning depths, we would be unable to recover many designs (Figure 2G). Frequency dispersion can be introduced at several steps before and during the BPS protocol: DNA library construction (e.g. pooled DNA synthesis), PCR amplification, integration into a plasmid backbone, transformation into host cells, and cell outgrowth. In our experiments, the measured oligonucleotide design abundances show the lowest correlation before and after the cloning step. Several factors, such as the size, initial concentration, and complexity of a DNA library (Supplementary Figure S5), differences in the efficiency of plasmid integration between designs, undersampling of transformants, and jackpotting of some designs following transformation, may all contribute to dispersion. Despite these potential sources of variation, several groups have achieved pooled design libraries with relatively low levels of dispersion and high recovery rates (5,50). More research is needed to determine which differences between protocols contribute to this cloning variance and how it can be minimized. Nevertheless, shotgun demultiplexing is a strategy made viable by inexpensive plasmid screening as described here, and even overdispersed pools can be useful when an investigator only needs to sparsely sample the design space (4) (e.g. randomly sampling 10^3 designs from a pool of 10^6).
In addition, demultiplexing and subsequent pooling of sequence-verified clones could be used to transform error-prone, overdispersed pools into error-free, low-dispersion pools that can be assayed more cost-effectively by sequencing readouts.

BPS provides two methods for sequencing plasmid DNA: (i) a DNA block of interest is transferred to become adjacent to a positional barcode (Figure 1B), or (ii) a positional barcode is transferred into a plasmid of interest (Figure 1C). Both methods use ONT long-read sequencing, meaning that large DNA blocks and/or plasmids with repetitive features (tandem repeats and long interspersed repeats) can be characterized more accurately than with short-read methods. The first method is readily applicable to routine sequence validation of DNA blocks (e.g. oligonucleotides, genes and variants) that have been synthesized, captured, or assembled. Practicing this method only requires cloning of the DNA blocks of interest into the donor plasmid using standardized approaches. The provided donor plasmid backbone contains a multicloning site, including 8-mer AscI and NotI restriction sites, for inserting DNA blocks, and we routinely insert DNA blocks into the linearized plasmid using ligation or Gibson assembly in 96-position arrays. The donor plasmid also contains a required R6K ori γ, facilitating plasmid propagation in donor cells carrying the pir-116 or pir+ gene, and an oriT, enabling transfer to the recipient cell through bacterial conjugation. One limitation of the first BPS approach (Figure 1B) is potential incompatibility between a DNA block sequence and the donor plasmid or strain. For example, a DNA block that contains an additional origin of replication, an oriT, or an element that is toxic to the donor strain may not behave properly, although we expect such cases to be rare.

With the second BPS method (Figure 1C), the entire plasmid is sequenced, enabling not only verification of a DNA part of interest but also detection of undesired changes to the plasmid backbone, such as point mutations, insertions, deletions, duplications, and rearrangements. However, plasmid backbones sequenced by this method must be engineered to contain a BPS 'landing pad' composed of homology regions (HRs), I-SceI restriction sites, and the relE negative selection marker. The plasmid of interest must also be transformed into a recipient cell that contains a helper plasmid with a temperature-sensitive replication origin (pSC101 ori) and a spectinomycin resistance marker, which precludes the use of this origin or marker in the recipient plasmid. With further development of the BPS platform, it may be possible to remove the requirement for a landing pad on the recipient plasmid or for the helper plasmid. For example, new BPS barcode cassettes could be designed with different selection markers and/or gRNA cut sites that integrate directly at sequences common to most plasmids in laboratory use (i.e. natural landing pads). Furthermore, the genetic elements that facilitate homologous recombination (the I-SceI endonuclease and the lambda Red system) could be engineered into the chromosome of recipient cells. With these advances, BPS could be used on most plasmids without any modification, dramatically reducing the cost and hands-on time for sequencing most plasmid DNA.
Figure 1. Workflow and performance of BPS, an in vivo barcoding platform. (A) Donor cell arrays (blue) containing donor plasmids are conjugated to recipient cell arrays (gray) containing recipient plasmids. A DNA cassette from the donor plasmid is recombined into the recipient plasmid backbone to join an indexing DNA barcode with a sequence of interest. Cells from one or more plates can subsequently be pooled and prepared for sequencing. (B) In one design, a cassette with a DNA block on a donor plasmid is recombined into a recipient plasmid backbone containing a positional barcode. (C) In another design, where a whole plasmid backbone needs to be sequenced, a 'landing pad', including a homology region for recombination (HR), I-SceI cut sites and the relE negative selection marker, is first engineered into the plasmid backbone of interest. A donor positional barcode is then recombined into the recipient plasmid. In both (B) and (C), a scissor icon is an I-SceI cut site, HR is a region of sequence identity that mediates homologous recombination, dashed lines are homologous recombination events, and + or - icons are selection (Hyg/clonNat) or negative selection (relE) markers. (D) The recovery rate (percent of positions that were detected, orange) and accuracy (percent of detections that were sequence correct, blue) for the design in (B). Number of matings (x-axis) indicates the number of times the same DNA block was mated to a barcode and sequenced. Error bars indicate standard errors calculated by bootstrapping. (E) In the design illustrated in (C), donor barcodes were mated to recipient plasmids and plasmid sequences at each position were assembled de novo by sequencing pools of clones. Variation from the a priori reference expectation, including insertions, deletions, and substitutions, is shown by colored dots. Successful de novo assemblies are ranked by decreasing ONT read coverage. (F) The cost of plasmid sequencing when BPS is performed at high and low throughput. Low throughput assumes barcoding is performed in 96-well plates and 4608 plasmids are sequenced at 100-400× coverage per flow cell. High throughput assumes barcoding is performed on 384-position agar arrays and 9216 plasmids are sequenced at 100-200× coverage per flow cell. Detailed cost assumptions are listed in Supplementary Tables S1 and S2. (G) Experimental timeline for sequence verification by BPS. We assume a MinION flow cell contains 250 active pores generating data at 100 bases/second, enabling ~500 (7 kb) plasmids to be sequenced at 100× depth in 4 h.
Figure 2. Sequence verification and demultiplexing of oligonucleotide pools. (A) Oligonucleotide pools from vendors were cloned into donor plasmids and donor cells, and randomly arrayed. For the experiments here, oligonucleotides were designed with fixed priming sites (red) and restriction endonuclease sites (blue, AscI and NotI) to facilitate cloning. (B, C) The distribution of oligonucleotide abundances from the IDT (B) and Twist (C) pools, which contain 100 and 1100 oligonucleotide designs, respectively, assessed at multiple stages of the in vivo barcoding workflow: (i) original oligonucleotide pools after second-strand synthesis (Original dsDNA), (ii) following 14-20 rounds of PCR amplification (Amplicon dsDNA), (iii) random clones after inserting the oligonucleotides into plasmid backbones and cells and randomly plating for clones (Randomly Plated Clones), and (iv) parsed clones after in vivo barcoding and sequencing (Arrayed Clones). Pairwise comparisons of the abundance of each oligonucleotide design at different stages are displayed in scatter plots with Pearson correlation coefficients (r). The level of corresponding significance is indicated by asterisks (***P < 10^-7). Histograms represent frequency distributions of oligonucleotide designs with different abundance levels. Gini coefficients, G, are used to quantify the degree of pool dispersion; a uniform distribution has a Gini coefficient of 0. (D, F) The change in normalized abundance of each oligonucleotide design from amplicons to clones for the IDT and Twist pools, respectively. ΔG denotes the increase in Gini coefficient from the Amplified dsDNA to the Randomly Plated Clones pools. (E, G) Recovery curves for all oligonucleotide designs at different sampling depths for the IDT and Twist pools, respectively. The expected recovery rate at different in silico sampling depths is calculated by assuming that the probability of sampling one oligonucleotide design equals its empirical frequency observed in the corresponding pool.

Figure 3. Sequence validation and construction of a balanced ORF library. (A) Pooled capture of E. coli ORFs and cloning into the BPS donor cells. ORF libraries were amplified from E. coli genomic DNA using long-adapter single-stranded oligonucleotide (LASSO) probes and inserted into a Gateway vector. ORFs were then amplified by PCR and cloned into BPS donor plasmids and cells. (B) A Sankey diagram showing the sequencing results of the ORF library following BPS. Mixed clones (orange, purity score < 0.9, see Materials and Methods) indicate that the same position on an array is likely to contain more than one ORF. Pure clones (blue, purity score ≥ 0.9) are further classified into sequence-perfect ORFs, ORFs with errors (point mutations or indels), off-target DNA (cannot be mapped to the target ORFs but can be mapped to the E. coli genome), and others (cannot be mapped to any region of the E. coli genome).
(C) The length and abundance of sequence-perfect ORFs identified from pure donor clones. (D) The distribution of normalized ORF abundance in the original amplicon pools after PCR from genomic DNA (original amplicons), after cloning into Gateway vector plasmids (original plasmids), and after re-pooling following growth of sequence-perfect clones in liquid or on agar. ORFs were ranked according to their abundance from lowest to highest. Gini coefficients were calculated to quantify the degree of uniformity of ORF abundances in a pool; a lower value indicates a higher degree of uniformity. (E) The correlation of the relative abundance of each ORF between balanced libraries constructed using the agar and liquid protocols.
2024-05-08T06:17:05.085Z
2024-05-06T00:00:00.000
{ "year": 2024, "sha1": "22fbce624523166f56dedc6563be109fd007123c", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/nar/advance-article-pdf/doi/10.1093/nar/gkae332/57420306/gkae332.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d94f568e59d721efc2624c30fdebbbcf0f342324", "s2fieldsofstudy": [ "Biology", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
15030774
pes2o/s2orc
v3-fos-license
ABUSE VICTIMIZATION IN CHILDHOOD OR ADOLESCENCE AND RISK OF FOOD ADDICTION IN ADULT WOMEN Objective Child abuse appears to increase obesity risk in adulthood, but the mechanisms are unclear. This study examined the association between child abuse victimization and food addiction, a measure of stress-related overeating, in 57,321 adult participants in the Nurses’ Health Study II (NHSII). Design and Methods The NHSII ascertained physical and sexual child abuse histories in 2001 and current food addiction in 2009. Food addiction was defined as ≥3 clinically significant symptoms on a modified version of the Yale Food Addiction Scale. Confounder-adjusted risk ratios (RRs) and 95% confidence intervals (CIs) were estimated using modified Poisson regression. Results Over eight percent of the sample reported severe physical abuse in childhood, while 5.3% reported severe sexual abuse. Eight percent met the criteria for food addiction. Women with food addiction were 6 units of BMI heavier than women without food addiction. Severe physical and severe sexual abuse were associated with roughly 90% increases in food addiction risk (physical abuse RR=1.92; 95% CI: 1.76, 2.09; sexual abuse RR=1.87; 95% CI: 1.69, 2.05). The RR for combined severe physical abuse and sexual abuse was 2.40 (95% CI: 2.16, 2.67). Conclusions A history of child abuse is strongly associated with food addiction in this population. Introduction National survey results suggest that more than a third of girls in the United States experience some form of physical or sexual child abuse by the time they turn 18 years old (1)(2). Published studies indicate that child abuse is related to adult obesity (3)(4)(5)(6)(7), with potentially serious consequences for long-term health. In our own work, we have found that child abuse is associated with substantial increases in obesity-related disease risk in adulthood, including hypertension (8), type 2 diabetes (9), and cardiovascular events (10). The mechanisms linking abuse to weight gain remain largely unexplored. A compelling body of animal and clinical evidence suggests that stress can dysregulate eating, promoting a preference for highly palatable (high fat/high sugar) foods and disrupting homeostasis of body weight (11)(12). Laboratory research has used social stressors to induce consumption of palatable foods in humans (13), and studies have demonstrated a dose-response relationship between exogenous administration of glucocorticoid stress hormones and food intake in both animals (14)(15) and humans (16). Studies further suggest that stress-related overeating has important similarities to drug addiction. Palatable food ingestion and drug use both stimulate reward systems in the brain (17) that dampen the physiologic stress response (18). Rats exposed to stressors seek palatable food in much the same way that they seek cocaine (19)(20). Over time, exposure to both palatable foods and addictive drugs appears to disrupt brain reward function, resulting in withdrawal symptoms when the foods or drugs are removed (21), and encouraging continued consumption even in the face of adverse stimuli (21); brain imaging studies have highlighted neurological overlaps between uncontrolled eating and drug use (22)(23)(24). These and other findings have motivated recent calls to define certain eating behaviors as "food addiction" (25)(26). 
The push to define food addiction as a psychiatric disorder is controversial, but regardless of its status as a diagnosis, the food addiction construct may be useful for identifying uncontrolled eating in response to distress. We hypothesized that addiction-like eating behaviors may be one potential pathway from child abuse to obesity. There are currently no published studies on child abuse as a risk factor for food addiction. In this study, we examined the association between history of child physical or sexual abuse and food addiction among women in the Nurses' Health Study II.

Data sources

The Nurses' Health Study II (NHSII) follows 116,430 female registered nurses recruited at ages 25-42 in 1989. Biennial questionnaires gather sociodemographic, behavioral, and medical data. In 2001, a supplemental Violence Questionnaire asking about experiences of physical and sexual abuse in childhood was sent to 91,297 NHSII participants who had responded to the previous biennial questionnaire within three mailings. Questionnaires were returned by 68,376 (75%) of the supplemental questionnaire recipients. Violence Questionnaire respondents were more likely to be white than the NHSII cohort as a whole, but had similar childhood socioeconomic status and childhood body size. The 2009 biennial questionnaire included a modified version of the Yale Food Addiction Scale (27), which ascertains the extent to which participants' eating behavior can be characterized as a dependency. The Institutional Review Board of Partners Health Care System (Boston, MA) approved this study.

Variables and variable definitions

Exposures: Our main exposures were physical and sexual abuse experienced in childhood or adolescence (up to and including age 17). Child and/or adolescent physical abuse was assessed using questions from the Revised Conflict Tactics Scale (28), which asked participants to report the frequency with which a parent, step-parent or adult guardian pushed, grabbed, or shoved; kicked, bit, or punched; hit with something that hurt; choked or burned; or physically attacked the participant when they were children (ages 0-10) and when they were adolescents (ages 11-17). As in previous analyses (9-10), we categorized physical abuse into the following four categories: none, mild (being pushed, grabbed, or shoved at any frequency or being kicked, bitten, or punched once or hit with something once), moderate (being hit with something more than once or physically attacked once), and severe (being kicked, bitten, or punched or physically attacked more than once or ever choked or burned). Child and/or adolescent sexual abuse was ascertained by asking participants (1) whether and how often, as a child (ages 0-10) or adolescent (ages 11-17), they had ever been touched in a sexual way by an adult or an older child or forced to touch an adult or an older child in a sexual way when they did not want to, and (2) whether an adult or older child had ever forced or attempted to force them into any sexual activity by "threatening you, holding you down, or hurting you in some way when you did not want to?" (29). We categorized sexual abuse into the following four categories (9): none, sexual touching only, one experience of forced sexual activity, and more than one experience of forced sexual activity.

Outcome: Food addiction was assessed in 2009 using a modified version of the Yale Food Addiction Scale (YFAS) (27), which parallels measures of drug and alcohol addiction.
The original YFAS uses 25 questionnaire items to assess 7 diagnostic criteria for food addiction, and has been shown to have adequate internal reliability, high convergent validity with other eating pathology constructs, and discriminant validity with related but distinct disorders such as alcohol abuse (27) in a non-clinical sample of undergraduate students. The modified Yale Food Addiction Scale (mYFAS) uses a core of 9 questionnaire items, with one question for each of the symptom groups included in the 7 diagnostic criteria, plus two items assessing clinically significant impairment and distress. In an application of the mYFAS to the validation data for the YFAS, 9% of participants met the criteria for food addiction using the mYFAS, while 11% met the criteria in the original YFAS validation (30). The mYFAS shows good construct validity and reasonably high sensitivity (79%), providing a valid, though conservative, measure of food addiction. The mYFAS defines food addiction as 3 or more of the following 7 symptoms plus clinically significant impairment or distress: (a) eating when no longer hungry four or more times per week, (b) worrying about cutting down on certain foods four or more times per week, (c) feeling sluggish or fatigued from overeating two or more times per week, (d) experiencing negative feelings from overeating that interfere with other activities two or more times per week, (e) having physical withdrawal symptoms when cutting down on certain foods two or more times per week, (f) continuing to consume the same amount of food despite significant emotional or physical problems due to overeating at any frequency, and (g) feeling the need to eat an increasing amount of food to reduce distress at any frequency. The mYFAS defines clinically significant impairment or distress as either (a) experiencing significant distress related to eating behavior two or more times per week or (b) experiencing a decrease in ability to function due to issues related to food two or more times per week. Supplemental Table 1 presents the frequency with which each symptom was endorsed among women meeting the criteria for food addiction. We use the term 'food addiction' throughout the manuscript, as shorthand for this set of uncontrolled eating behaviors. But we caution the reader that the extent to which this reflects a physical dependency on certain foods has not been fully established. Covariates-We included the following covariates, plausible predictors of both child abuse and food addiction, as potential confounders in adjusted models: age at baseline (continuous with a squared term), race (indicators for African American, Asian, Hispanic, and other, with non-Hispanic white as the referent), mother's and father's educational attainment when participant was an infant (indicators for <9, [9][10][11]12, and 13-15 years, with 16+ as the referent), indicators for mother in professional occupation and father in professional occupation, indicator for parental home ownership when participant was an infant, recalled body size at age 5 (continuous; the participant could choose one of nine female figures ranging from very lean, a score of 1, to very obese, a score of 9, that best represented her body type at age five years(31)), and parental lifetime history of depression (indicator if either parent had depression history). 
Data analysis We used modified Poisson regression (Poisson regression with robust standard error estimation (32)) to estimate risk ratios for food addiction comparing women with histories of abuse to women without histories of abuse. Analyses included those women who responded to both the 2001 Violence Questionnaire, on which abuse was ascertained, and the 2009 biennial questionnaire, on which food addiction was assessed (n=63,002). Women who left 3 or more food addiction symptom questions blank were excluded (n=5,344), as were 337 additional women missing information on clinical significance of symptoms, leaving 57,321 participants for our analyses. To assess the sensitivity of our results these exclusions, we reran our models in datasets recreated under various assumptions about the value of the missing outcome data; results indicated that our findings are robust to a variety of missingness patterns (see Supplemental Tables 2a and 2b). For each of our abuse exposures, we ran an age-adjusted model and a model adjusted for additional potential confounders. Physical abuse and sexual abuse were first modeled separately. We then examined physical abuse-food addiction associations with and without any sexual abuse exposure, and sexual abuse-food addiction associations with and without any physical abuse exposure. Women missing physical abuse data were excluded from physical abuse analyses (n=176), and women missing sexual abuse data were excluded from sexual abuse analyses (n=374). Analyses of combined physical and sexual abuse excluded women missing either physical or sexual abuse (n=428). We examined the possibility of a multiplicative interaction between physical abuse and sexual abuse by running a modified Poisson model with a physical abuse indicator, a sexual abuse indicator, and a physical-by-sexual abuse interaction term. We used a Wald test to assess the impact of the interaction term on model fit. Likewise, we assessed the potential additive interaction between physical and sexual abuse with a Wald test of the interaction term in a Poisson model with an identity link. To determine whether food addiction risk varies with abuse timing and duration, we estimated the effects of abuse in childhood only, adolescence only, and in childhood and adolescence relative to no abuse in either period. Finally, to estimate absolute age-adjusted food addiction risks at each level of physical and sexual abuse severity, we ran a Poisson model with an identity link, including indicator variables for each of the 16 cross-classifications of the four physical and four sexual abuse severity levels, plus age, centered at the baseline mean (35 years), coded as a continuous variable with a squared term. Food addiction risks for each level of physical and sexual abuse severity were calculated by adding the intercept of this model to the parameter estimates for each physical and sexual abuse category. Results There were 57,321 Violence Questionnaire respondents with complete food addiction data. Of 57,145 women with complete physical abuse data, 18.5% reported mild physical abuse in childhood or adolescence, 26.3% reported moderate physical abuse, and 8.5% reported severe physical abuse. Of 56,947 women with complete sexual abuse data, 22.4% reported sexual touching only, 5.8% reported one experience of forced sexual activity, and 5.3% reported repeated experiences of forced sexual activity. Table 1 presents the distribution of covariates by physical and sexual abuse severity categories. 
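For concreteness, the modified Poisson approach described in the Data analysis section (a Poisson model with robust standard errors, used to obtain risk ratios for a binary outcome) can be sketched as below. This is an illustrative sketch only, not the study's code; the variable names, the abbreviated covariate list, and the use of Python's statsmodels in place of the original analysis software are all assumptions.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_modified_poisson(df):
    # df is assumed to hold one row per participant with a 0/1 food_addiction outcome,
    # a categorical physical_abuse severity, and (a subset of) the confounders listed above.
    model = smf.glm(
        "food_addiction ~ C(physical_abuse) + age + I(age**2) + C(race) "
        "+ C(mother_edu) + C(father_edu) + parental_depression",
        data=df,
        family=sm.families.Poisson(),
    )
    res = model.fit(cov_type="HC0")        # robust (sandwich) standard errors
    rr = np.exp(res.params)                # risk ratios
    ci = np.exp(res.conf_int())            # 95% confidence intervals
    return pd.concat([rr.rename("RR"), ci.rename(columns={0: "lo", 1: "hi"})], axis=1)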
Of the confounders examined, parental depression was most strongly related to child abuse. Physical and sexual child abuse were highly correlated; for example, repeated forced sex was reported by 21.3% of women with a history of severe physical abuse, compared to just 2.4% of women with no physical abuse history. Overall, 8.2% of our sample met the criteria for food addiction. Women meeting the criteria for food addiction were 6 units of BMI heavier in 2009 than women not meeting the food addiction criteria; almost two thirds of women with food addiction were obese (body mass index >30 kg/m 2 ) in 2009, compared to a quarter of women without food addiction. Table 2 shows risk ratios (RRs) and 95% confidence intervals (CIs) for food addiction as a function of physical and sexual child abuse severity. We saw a dose-response relationship between physical abuse severity and food addiction risk, with confounder-adjusted RRs of 1.24 (95% CI: 1.14, 1.34), 1.39 (95% CI: 1.30, 1.49), and 1.92 (95% CI: 1.76, 2.09) for mild, moderate, and severe physical abuse, compared to no physical abuse. The relationship of sexual abuse severity to food addiction was similar (Table 3), with an RR for the most severe category of sexual abuse, repeated forced sex, of 1.87 (95% CI: 1.69, 2.05). Combined exposure to both types of abuse imparted greater risk for food addiction than either type alone. For example, compared to women with neither physical nor sexual abuse histories, the RR associated with severe physical abuse when sexual abuse was also present was 2.40 (95% CI: 2.16, 2.67; Table 2). Likewise, women with repeated experiences of forced sex in addition to a history of physical abuse had an RR for food addiction of 2.32 (95% CI: 2.07, 2.59; Table 3). There was no evidence of a multiplicative interaction (Wald test p-value=0.68). A test of additive interaction between any physical abuse and any sexual abuse approached statistical significance (p=0.06). We estimated similar effects of abuse occurring in childhood only and abuse occurring in adolescence only (Table 4), with RRs of 1.21 (95% CI: 1.12, 1.30) and 1.29 (95% CI: 1.13, 1.47) for physical abuse and comparable RRs for sexual abuse. Longer duration of abuse conferred a greater food addiction risk: when experienced in both childhood and adolescence, physical abuse was associated with an RR of 1.61 (95% CI: 1.51, 1.71) and sexual abuse was associated with an RR of 1.79 (95% CI: 1.65, 1.94). The age-adjusted risks of food addiction by cross-classifications of child physical and sexual abuse severity ranged from 6.1% among women with no history of physical or sexual abuse to 16.1% among women with a history of both severe physical and severe sexual abuse ( Figure 1). Discussion In this study, we found dose-response associations between physical and sexual child abuse severity and the likelihood of adult food addiction. Combined experiences of physical and sexual abuse conferred the greatest food addiction risks, with RRs approaching 2.5. Likewise, abuse that occurred in both childhood and adolescence was associated with greater food addiction risk than abuse in a single time period. No previous study has examined child abuse and food addiction. A relatively large number of cross-sectional studies suggest a relationship between child abuse and both anorexia nervosa and bulimia nervosa (33)(34)(35). 
A small number of studies provide suggestive evidence for an association between child abuse and binge eating disorder (36-37), which captures a distinct but related (38) uncontrolled eating phenotype that affects an estimated 2-3% of the US population (39). In general, studies of child abuse and eating behaviors have relied on small sample sizes, and have been unable to distinguish different types of eating disorder outcomes; few have examined eating behaviors prevalent enough to contribute importantly to obesity rates. One exception is an examination of childhood abuse and "using food in response to stress" among 1650 adult respondents in the National Survey of Midlife in the US. The authors reported that childhood exposure to frequent physical and psychological abuse was associated with using food in response to stress, which partially explained a 40% increase in obesity incidence in adults reporting child abuse (7). The use of food in response to stress was ascertained with two questionnaire items and was associated with a two-fold increase in obesity prevalence. Our large cohort allowed us to conduct in-depth examinations of the associations between the type and timing of child abuse and food addiction, a measure of uncontrolled eating reported by 8% of our sample. Nearly two thirds of women meeting the food addiction criteria were obese in 2009, compared to a quarter of women without food addiction, suggesting that food addiction may contribute significantly to obesity rates. The NHSII has rich data on possible confounders, including childhood socioeconomic status and family history of depression, allowing us to adjust for common causes of abuse and eating behaviors that are frequently overlooked. Despite these strengths, our study has several important limitations. First, we were unable to date the onset of food addiction symptoms, and were therefore unable to establish with certainty that child abuse occurred prior to food addiction. While we believe that a mechanism from child abuse to food addiction is more plausible than one from food addiction to child abuse, this study would have been strengthened by data on timing of food addiction symptoms, which would have allowed us to identify periods of vulnerability to food addiction. Second, the degree to which food addiction represents a valid eating disorder phenotype-and in particular the extent to which the food addiction construct reflects a physical dependency-remains in question; the potential application of these study results will depend in part on whether food addiction is ultimately seen as physical or behavioral in nature and whether effective treatments are identified. Third, the NHSII is comprised primarily of white women, and thus our findings may not be generalizable to the US population as a whole. Future studies in more diverse population would enrich our understanding of eating behavior sequelae of child abuse. Finally, as with many studies of child abuse, we relied on women's self-report of child abuse, which could not be validated. Because child abuse goes underreported (40), it is unclear what gold standard should be used to validate self-reports of child abuse. Using reported and substantiated cases of child abuse would identify only the most severe cases of abuse and would miss the majority of the exposed population. While we could not validate self-reports of child abuse, the prevalence in our study is very similar to the child abuse prevalence self-reported on national surveys (1-2). 
Our finding that child abuse victimization is associated with food addiction adds to accumulating evidence of the importance of stress in the etiology of some obesity phenotypes (12), and may help to inform the development of weight-loss regimens for women with abuse histories. Our study also contributes to a growing body of literature documenting widespread and long-lasting mental and physical health repercussions of child abuse, which help to clarify the true societal costs of child maltreatment and lend urgency to abuse prevention efforts. The epidemic prevalence of obesity and its toll on health call for focused efforts to understand widespread obesity risk factors that may be modified to improve public health. A better understanding of the mechanisms by which child abuse, experienced by over a third of girls (1-2), influences weight gain is likely to be important in addressing obesity risk in women. Our study suggests that uncontrolled eating in response to distress may be one important element of this pathway. Future work should further articulate the pathways from abuse to weight gain, to identify critical periods of vulnerability and targets for intervention that can inform prevention and treatment efforts. Supplementary Material Refer to Web version on PubMed Central for supplementary material. What is already known about this subject • Women with a history of child abuse victimization appear to be at increased risk of obesity in adulthood, but the mechanisms are not known. • Child abuse is associated with the development of rare eating disorders such as anorexia nervosa and bulimia nervosa, but less is known about more prevalent obesity-related eating behaviors. • Animal studies suggest that chronic stress may provoke increased consumption of high-calorie food, possibly leading to "food addiction." What this study adds • Food addiction is relatively common in this population, with 8% of the sample meeting food addiction criteria. • Women who meet the criteria for food addiction are on average 6 units of BMI heavier than women without food addiction. • Severe physical and sexual abuse victimization in childhood are both associated with 90% increases in adult food addiction risks in this population. Table 2 Risk ratios for food addiction by physical child abuse severity in all women and by exposure to sexual child abuse: Nurses' Health Study 2. * Adjusted for age in 1989, race, mother's educational attainment, father's educational attainment, mother in professional occupation, father in professional occupation, parental home ownership, parental history of depression. † Any sexual abuse, including sexual touching only, forced sexual activity once, and forced sexual activity more than once. ‡ Mild physical abuse was defined as being pushed, grabbed, or shoved at any frequency or being kicked, bitten, or punched once or hit with something once; moderate physical abuse was defined as being hit with something more than once or physically attacked once, and severe physical abuse was defined as being kicked, bitten, punched, or physically attacked more than once or ever choked or burned. Table 3 Risk ratios for food addiction by sexual child abuse severity in all women and stratified by exposure to physical child abuse: Nurses' Health Study 2. Table 4 Risk ratios for food addiction by timing of child and adolescent physical and sexual abuse: Nurses' Health Study 2.
2018-04-03T06:12:58.534Z
2013-05-16T00:00:00.000
{ "year": 2013, "sha1": "d55934012d51d2902d780325ab79d7c847bf4395", "oa_license": null, "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/oby.20500", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "d55934012d51d2902d780325ab79d7c847bf4395", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
3466072
pes2o/s2orc
v3-fos-license
Integrating biogeochemistry and ecology into ocean data assimilation systems

ABSTRACT. Monitoring and predicting the biogeochemical state of the ocean and marine ecosystems is an important application of operational oceanography that needs to be expanded. The accurate depiction of the ocean's physical environment enabled by Global Ocean Data Assimilation Experiment (GODAE) systems, in both real-time and reanalysis modes, is already valuable for various applications, such as the fishing industry and fisheries management. However, most of these applications require accurate estimates of both physical and biogeochemical ocean conditions over a wide range of spatial and temporal scales. In this paper, we discuss recent developments that enable coupling new biogeochemical models and assimilation components with the existing GODAE systems, and we examine the potential of such systems in several areas of interest: phytoplankton biomass monitoring in the open ocean, ocean carbon cycle monitoring and assessment, marine ecosystem management at seasonal and longer time scales, and downscaling in coastal areas. A number of key requirements and research priorities are then identified for the future. GODAE systems will need to improve their representation of physical variables that are not yet considered essential, such as upper-ocean vertical fluxes that are critically important to biological activity. Further, the observing systems will need to be expanded in terms of in situ platforms (with intensified deployments of sensors for O2 and chlorophyll, and inclusion of new sensors for nutrients, zooplankton, micronekton biomass, and others), satellite missions (e.g., hyperspectral instruments for ocean color, lidar systems for mixed-layer depths, and wide-swath altimeters for coastal sea levels), and improved methods to assimilate these new measurements.
In this context, data assimilation is a relevant approach to achieve: (1) better control of the physical circulation that enhances the quality of biogeochemical dynamics, (2) initialization of the biological variables for prediction, (3) estimation of physical and biogeochemical model parameters, and (4) data-based assessments of modeling hypotheses.

By Pierre Brasseur, Nicolas Gruber, Rosa Barciela, Keith Brander, Maéva Doron, Abdelali El Moussaoui, Alistair J. Hobday, Martin Huret, Anne-Sophie Kremeur, Patrick Lehodey, Richard Matear, Cyril Moulin, Raghu Murtugudde, Inna Senina, and Einar Svendsen

The importance of mesoscale activity in primary production is the subject of much discussion in the literature (e.g., Oschlies and Garçon, 1998), and the example below confirms that the resolution of GODAE products is key to realistically monitoring phytoplankton biomass in ocean basins. During the EU-funded MERSEA (Marine Environment and Security for the European Area) project, a prototype of a coupled physical/biological assimilation system based on the PISCES model (Aumont et al., 2003) has been

INTRODUCTION
In its original design, the Global Ocean Data Assimilation Experiment (GODAE) was conceived as a practical demonstration of real-time ocean data assimilation in order to provide a regular and complete depiction of global ocean circulation at eddy resolution and better consistency with observations of physical parameters and dynamical constraints (Le Traon et al., 1999). This type of oceanic information was identified in the early stages of GODAE as potentially very valuable to different applications related to the "living ocean," such as the fishing industry and fisheries management (Griffin et al., 2002). Numerous examples of GODAE products used for environmental applications are found in the literature, ranging from real-time temperature products for monitoring seasonal fish migration (Hobday and Hartmann, 2006) and ecological regime shifts (Brander, in press), to ocean currents and velocities used to understand the transport and spread of fish larvae (Bonhommeau et al., 2009; Johnson et al., 2005), to sources and sinks of atmospheric CO2. Some of these examples will be discussed in the following sections. Today, advances in modeling biogeochemical processes and increased computer power have made it possible to couple physical and biogeochemical models online to address wider environmental and societal issues by providing hindcasts, nowcasts, and forecasts from short lead times (Siddorn et al., 2007) to climate time scales (Johns et al., 2006). (Gregg et al., 2005; Behrenfeld et al., 2006; Polovina et al., 2008). Accurate reanalyses of the physical ocean during the past 60 years, such as those expected from the Mercator reanalysis systems, will offer new insight into how phytoplankton biomass and production have varied over this period.
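No assimilation algorithm is spelled out in this excerpt, so the following is only a schematic sketch of the kind of analysis update such coupled systems perform: a single optimal-interpolation (Kalman-type) correction of a one-dimensional surface chlorophyll field using a few ocean-color-like observations. The grid size, error covariances, and observation values are invented for illustration and do not represent the PISCES/MERSEA prototype.

```python
# Schematic optimal-interpolation update of a 1-D chlorophyll field.
# x_a = x_f + K (y - H x_f),  with  K = P H^T (H P H^T + R)^-1
import numpy as np

n = 50                                                       # model grid points
x_f = 0.3 + 0.1 * np.sin(np.linspace(0, 2 * np.pi, n))       # forecast chlorophyll (mg m-3)

# Background error covariance: 0.02^2 variance, Gaussian correlation over ~5 grid cells
i = np.arange(n)
P = 0.02 ** 2 * np.exp(-((i[:, None] - i[None, :]) ** 2) / (2 * 5.0 ** 2))

obs_idx = np.array([5, 20, 35])                              # locations with "ocean color" data
H = np.zeros((len(obs_idx), n))
H[np.arange(len(obs_idx)), obs_idx] = 1.0                    # observation operator (point sampling)
R = 0.03 ** 2 * np.eye(len(obs_idx))                         # observation error covariance
y = np.array([0.42, 0.25, 0.38])                             # synthetic observations

K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)                 # Kalman gain
x_a = x_f + K @ (y - H @ x_f)                                # analysis state

print("forecast at obs points:", np.round(x_f[obs_idx], 3))
print("analysis at obs points:", np.round(x_a[obs_idx], 3))
```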
OCEAN CARBON CYCLE MONITORING AND ASSESSMENT
The operational systems available as GODAE ended are not yet able to accurately provide real-time monitoring of global or basin-scale air-sea carbon fluxes as required (e.g., for attempts to obtain reliable regional carbon budgets).

MARINE ECOSYSTEM MANAGEMENT
Marine exploitation issues need to be addressed from an ecosystem perspective, that is, using an ecosystem-based management approach (e.g., Svendsen et al., 2007). The long-term vision is to develop the systems required for the study, management, and monitoring of exploited and protected marine species populations that are interacting with their modeled prey.

Figure 2. Comparison of the ocean inversion estimate of the contemporary sea-to-air CO2 flux of Gruber et al. (2009) [axis label: sea-to-air CO2 flux, Pg C yr-1].

other approaches, for example, a continuous size spectrum (Maury et al., 2007). Ultimately, collected data should be used for parameter optimization and data assimilation. From the user's perspective, oceanographic information is used in fisheries management in two basic ways. First, traditional fisheries oceanography has involved a search for explanation of historical catch-related patterns. Stock assessment scientists then use this information to correct observed catch rates, and derive historical population estimates (e.g., Bigelow et al., 2002). These estimates lead to an understanding of how fishing mortality changes population size, and the mortality is the control variable that management seeks to adjust via gear restrictions, quotas, and spatial management. The relationships with the environment are often quite weak, because although the selected environmental variables may be correlated with fish distributions and abundances, they are not strong drivers (Myers, 1998). Derived variables such as mixed layer depth, productivity, and prey biomass may be more relevant to the distribution and abundance of fish. Thus, improved "physical" and "biological" variables from coupled models may improve understanding of historical patterns, and offer improved management for ocean resources (e.g., Senina et al., 2008). (Brander, in press). Decadal changes are observed in the North Pacific as well, with significant impacts on fisheries (Chavez et al., 2003; Brander, in press). Because give rise to prolonged system changes. (Huret et al., 2007; Gruber et al., 2006). Biological tracers or particles need to be transferred in a conservative way at the chosen boundary between the global and regional models in a two-way mode. For that to occur, the slope region, with its complex exchange processes of biological material, should not represent a boundary for regional applications, but rather it needs to be part of the regional high-resolution model.
2017-09-14T02:17:43.861Z
2009-09-01T00:00:00.000
{ "year": 2009, "sha1": "682533f524b7f401a0f62b118b70e3d46d414fa6", "oa_license": "CCBY", "oa_url": "https://tos.org/oceanography/assets/docs/22-3_brasseur.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "46cc60e4773898bb3a99dba11eadb5d1caa87b32", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science", "Geology" ] }
34286200
pes2o/s2orc
v3-fos-license
Zoonotic infection of Brazilian primate workers with New World simian foamy virus. Simian foamy viruses (SFVs) are retroviruses present in nearly all nonhuman primates (NHPs), including Old World primates (OWP) and New World primates (NWP). While all confirmed human infections with SFV are from zoonotic transmissions originating from OWP, little is known about the zoonotic transmission potential of NWP SFV. We conducted a longitudinal, prospective study of 56 workers occupationally exposed to NWP in Brazil. Plasma from these workers was tested using Western blot (WB) assays containing NWP SFV antigens. Genomic DNA from blood and buccal swabs was analyzed for the presence of proviral SFV sequences by three nested PCR tests and a new quantitative PCR assay. Exposure histories were obtained and analyzed for associations with possible SFV infection. Ten persons (18%) tested seropositive and two persons were seroindeterminate (3.6%) for NWP SFV. Six persons had seroreactivity over 2-3 years suggestive of persistent infection. All SFV NWP WB-positive workers reported at least one incident involving NWP, including six reporting NWP bites. NWP SFV viral DNA was not detected in the blood or buccal swabs from all 12 NWP SFV seroreactive workers. We also found evidence of SFV seroreversion in three workers suggestive of possible clearance of infection. Our findings suggest that NWP SFV can be transmitted to occupationally-exposed humans and can elicit specific humoral immune responses but infection remains well-controlled resulting in latent infection and may occasionally clear. Introduction Emerging infectious diseases in humans are an important public health concern with most having a zoonotic origin and over 70% originating from wildlife [1]. Recent examples of emerging infectious diseases with high public health significance include the Ebola virus outbreak in West Africa and the Zika virus outbreak in Central and South America. Among retroviruses, simian immunodeficiency viruses (SIVs) and simian T-cell lymphotropic viruses (STLVs) are prevalent in nonhuman primates (NHPs), and crossed into and spread amongst humans to become the human immunodeficiency viruses (HIV) and human T-lymphotropic viruses (HTLV), respectively (2). Similarly, simian foamy virus (SFV) is another complex retrovirus that is highly prevalent in NHPs, including Old Word primates (OWP) and New World primates (NWP) that can zoonotically infect humans [2]. All confirmed SFV infections identified so far in humans have resulted from zoonotic transmission of SFV infecting OWPs [3]. SFV DNA integrates as a provirus in the genomic DNA of many tissues and organs from infected animals, including peripheral blood mononuclear cells (PBMCs), lung, kidney and liver [4,5]. However, SFV replication has been reported to be limited to superficial epithelial cells of the oral mucosa [6][7][8][9], supporting horizontal transmission through bites as the major route of SFV infection. Humans infected with SFV have been reported in occupationally and naturally exposed individuals with direct contact with NHPs and/or their body fluids, including veterinarians, zoo keepers, hunters and butchers [10][11][12][13][14][15]. SFV seroprevalence in exposed humans can reach 37%, depending on the studied population and the involved risk activity, with persons reporting severe bites having greater risk of infection [9,13,[16][17][18][19]. 
SFV DNA and antibodies can be detected in both blood and saliva of infected humans, but there are no reports of viral RNA expression at those sites [20][21][22][23]. So far, evidence of SFV transmission between humans has not been described [17]. SFV is known for its long persistence and non-pathogenic effects in infected mammalian hosts, including zoonotically infected humans, despite the cytopathic effects and cellular death observed in infected cell cultures (21,22). Nonetheless, most studies have focused on infection of healthy workers and hunters and it is not known what effects SFV may have in unhealthy persons or immunocompromised hosts. Co-infection with SFV and HIV-1 has been characterized in a commercial sex worker from the Democratic Republic of Congo, a blood donor from Cameroon, and two persons from Ivory Coast [24,25] raising questions about possible SFV pathogenicity in human hosts immunocompromised by HIV [24,25]. Interestingly, rhesus macaques co-infected with simian immunodeficiency virus (SIV) and SFV show increases in SIV plasma viral loads, faster CD4 + T-cell decline and accelerated progression to simian AIDS compared with SIV mono-infected macaques [26]. Combined, these results suggest that further studies are needed to define the pathogenic potential of SFV. SFVs have been identified in many species of all three major NWP (Platyrrhini) families (Cebidae, Atelidae, Pitheciidae) in both captive and wild-caught monkeys with high prevalences in some species (32)(33)(34)(35). In addition, NWPs are common members of zoological collections, are frequently used in research studies [27], and are hunted in the wild by Amerindians and other groups and kept as pets or for consumption [28]. NWP SFV can also infect a variety of human cells in vitro (34). Combined, these findings suggest that humans in contact with NWPs may be at risk for infection with SFV as has been shown for OWP exposures. A single cross-sectional study by Stenbak et al. detected antibodies to SFV in 12% (8/69) of primatologists exposed to NWP, but failed to detect SFV DNA in those persons [29]. Hence, additional studies to further investigate the zoonotic potential of SFV from NWP are needed. In the present report, we enrolled primate workers in a voluntary longitudinal, prospective study and collected their occupational primate exposure information. Blood and buccal swab specimens were collected and tested for evidence of infection with NWP SFV using previously validated, sensitive and specific serologic and PCR assays. We document persistent antibody to NWP SFV in these workers suggestive of chronic, persistent infection but also demonstrate seroreversion in others. We found no evidence of NWP SFV DNA sequences in any seroreactive workers. Material and methods Study population, sample collection and classification of NHP exposure risks Workers occupationally exposed to NWP from Centro de Primatologia do Rio de Janeiro (CPRJ) and Fundação Jardim Zoológico da Cidade do Rio de Janeiro (RIOZOO) consented to participate in a study to investigate the occupational risk of simian retrovirus infection. Workers at CPRJ and RIOZOO were recruited to participate in the study following an information sharing presentation about SFV in NWPS and the exposure risks. All participants signed a consent form with information about this study and potential benefits of participation. 
The human subjects' research protocols were reviewed and approved by the Universidade Federal do Rio de Janeiro Review Board (54654616.0.0000.5257) and a project determination was approved for retroviral testing of these anonymized samples at CDC. Subjects were asked to answer a questionnaire describing type and length of exposure to NHPs (NWPs and OWPs) using a questionnaire adopted from a previous study of exposure to Old World monkeys and apes [13]. Specimens were collected at three different time points at each institution with multiple collections from most participants. Risk for NHP exposure was classified into four levels based on type of contact. Level 1 included activities with indirect contact with simians, such as feeding and cage cleaning. Level 2 included activities with direct NHP contact but with no reported accidents, including capture and containment, medicine administration, blood collection, teeth extraction and cleaning, surgery and necropsy. Level 3 exposure included direct contact resulting in a reported accident, such as bites, scratches and injuries with sharp objects (scalpels, needles, etc.) containing simian body fluids. All workers without any reported NHP contact were classified as Level 0. Each individual was classified according to the highest contact risk level they reported. Sample preparation and confirmation of genomic DNA integrity Ten milliliters of EDTA-treated whole blood was collected from each participant and refrigerated at 4˚C for up to 8 hrs prior to processing. Plasma was separated from cells by centrifugation, aliquoted and stored at -80˚C. Peripheral blood mononuclear cells (PBMCs) were isolated from whole blood with Ficoll-Paque™ Plus (GE Healthcare BioSciences, Pittsburgh, PA). Buccal swabs were collected using a cotton swab that was then placed immediately into a sterile tube containing 0.8% saline and stored at -20˚C. Genomic DNA (gDNA) was extracted from PBMCs and oral swabs using the PureLink 1 Genomic DNA kit (ThermoFisher Scientific, Grand Island, NY) following the manufacturer's protocol and stored at -20˚C. The integrity of the gDNA for PCR analysis was checked by PCR amplification of β-actin sequences using primers BAF1 (5'-GTG CTG TCC CTG TAC GCC TCT-3') and BAR1 (5'-GGC CGT GGT GGT GAA GCT GTA-3') as previously described [30]. All DNA samples testing positive for β-actin sequences were further considered suitable for SFV PCR detection. SFV serology To insure detection of a broad range of genetically diverse NWP SFV, plasma samples were first screened for antibodies to NWP SFV using a Western blot (WB) assay containing antigens from both a marmoset (Callithrix jacchus, SFVcja) and spider monkey (Ateles species, SFVasp) as previously described [31,32]. Briefly, plasma samples were diluted 1:50 and reacted separately to 150 μg of infected and uninfected cell lysates overnight at 4˚C after protein separation through 4-12% polyacrylamide gels and transfer to Nytran membranes. Seroreactivity was detected using peroxidase-conjugated protein A/G (Pierce, Rockford, IL) and chemiluminescence (Amersham, Uppsala, Sweden). Seroreactivity to both Gag p68 and p72 precursor proteins with an absence of similar reactivity to antigen from uninfected Cf2Th cells was interpreted as seropositive. Samples with seroreactivity to a single Gag protein were considered seroindeterminate. Specimens without reactivity to either Gag protein were considered seronegative. 
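The Western blot interpretation rule stated above (reactivity to both Gag p68 and p72 with no comparable reactivity to uninfected Cf2Th antigen is seropositive, a single Gag band is seroindeterminate, and no Gag band is seronegative) can be written down directly. The short sketch below simply encodes that stated rule; the function name and input format are ours and are not part of the study's analysis pipeline.

```python
# Encode the Western blot interpretation rule described in the Methods.
def classify_wb(p68_reactive: bool, p72_reactive: bool, uninfected_reactive: bool) -> str:
    """Classify a plasma sample from its Gag p68/p72 band pattern."""
    if uninfected_reactive:
        # Comparable reactivity to uninfected Cf2Th lysate suggests nonspecific binding
        # (this extra label is our addition, not a category named in the paper).
        return "uninterpretable (nonspecific reactivity)"
    if p68_reactive and p72_reactive:
        return "seropositive"
    if p68_reactive or p72_reactive:
        return "seroindeterminate"
    return "seronegative"

# Examples mirroring cases described in the Results section.
print(classify_wb(True, True, False))    # typical positive worker -> seropositive
print(classify_wb(False, True, False))   # p72 band only (e.g., C15H, C18H) -> seroindeterminate
print(classify_wb(False, False, False))  # negative control -> seronegative
```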
All participants with plasma samples reactive for NWP SFV were also tested by WB analysis for cross-reactivity to OWP SFV antigens derived from humans infected with SFVcsp (Chlorocebus species, African green monkey, previously referred to as SFVagm) and SFVptr (Pan troglodytes, chimpanzee, previously referred to as SFVcpz) and to SFV isolated from the prosimian Galago crassicaudatus panganiensis (SFVgal also known as SFVgpa) as described [33,34]. PCR analysis All gDNA samples were first screened by PCR testing for detection of NWP SFV integrase sequences (192-bp) using a semi-nested approach that utilizes generic polymerase gene (pol) primers and conditions previously reported [35]. We refer to this test as the shorter pol PCR assay. Two additional NWP SFV subgenomic regions were targeted by nested PCR (398-bp LTR and gag sequences consisting of 225-bp in LTR and 173-bp in gag, and a 520-bp pol fragment). The latter is referred to as the longer pol PCR assay. Primers and PCR conditions for these additional fragments have been previously described [35]. PBMC DNA from persons with plasma seroreactivity to OWP SFV antigens were also tested using OWP SFV PCR analysis using nested PCR primers DNHF1/DNHR2 and DNHF3/DNHR4 to generate a 465-bp pol fragment as previously described (13). We also applied a newly developed real-time quantitative PCR (qPCR) assay to simultaneously detect and quantify NWP SFV integrated in host gDNA of both PBMC and buccal swab specimens as described in detail elsewhere (Muniz et al., accepted PLoS ONE). Primers and probes were designed using an alignment of available pol sequences from NWP SFV, including representatives from all three NWP families [35]. Briefly, two forward and one reverse primers were used (QSIP4Nmod (for) 5'-TGC ATT CCG ATC AAG GAT CAG C-3', QSIP4Nmod2 (for) 5'-YTT TGC YRC TTG GGC MAM RGA VA-3', and QSIR1Nmod2 (rev) 5'-TTC CTT TCC ACY WTY CCA CTA CT-3') with the probe DIAPR2 5'-FAM-TGG GGI TGG TAA GGA G"T"A CTG WAT TCC A-SpC6-3', where "T" is labeled with the black hole (BHQ1) quencher and SpC6 is a six carbon spacer arm on the 3' terminus to block polymerase activity by preventing the probe from priming. Following a 10 min incubation at 95˚C to activate the Taq polymerase, a three step PCR was performed at 95˚C for 15 sec, 50˚C for 15 sec, and 62˚C for 15 sec for 55 cycles using a BioRad CFX96 instrument. Sensitivity of the qPCR assay was evaluated using replicative serial dilutions of seven different NWP SFV plasmids, including SFVcja, SFVasp, SFVcme (Cacajao melanocephalus, uakari monkey), SFVagu (Alouatta guariba, howler monkey), SFVssp (Saimiri species, squirrel monkey, also known as SFVsqu), SFVsxa (Sapajas xanthosternos) and SFVpsp (Pithecia species, saki monkey). The SFVcja and SFVssp plasmids were kindly provided by Joseph Sodroski (30). The SFVasp plasmids were generated by cloning of a pol PCR fragment amplified from infected tissue culture DNA from PBMCs of a spider monkey (Ateles species). The SFVcme, SFVagu, SFVsxa, and SFVpsp plasmids were generated by cloning of a pol PCR sequence amplified from PBMC DNA, all using the nested primers SNF3 and SNR3 as previously described (20). PCR fragments were cloned using the TA cloning kits per the manufacturer's instructions (ThermoFisher Scientific, Grand Island, NY). 
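The serial plasmid dilutions described above are the raw material for a qPCR standard curve. The authors' quantification scripts are not included in this excerpt, so the sketch below is only a generic illustration of how such a curve is typically used: regress Cq on log10 input copies, derive the amplification efficiency from the slope, and interpolate copy numbers for unknown samples. The Cq values are fabricated for the example.

```python
# Generic qPCR standard-curve sketch (illustrative Cq values, not study data).
import numpy as np

copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2, 20])      # plasmid copies per reaction
cq_std = np.array([18.1, 21.5, 24.9, 28.3, 31.8, 34.2])  # measured Cq for each standard

slope, intercept = np.polyfit(np.log10(copies), cq_std, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0               # ~1.0 corresponds to 100% efficiency

def copies_from_cq(cq: float) -> float:
    """Interpolate the copy number per reaction from a measured Cq."""
    return 10 ** ((cq - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
print(f"unknown sample with Cq 30.0 ≈ {copies_from_cq(30.0):.0f} copies/reaction")
```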
Specificity of the qPCR assay was assessed by testing PBMC gDNA from 35 persons seronegative for SFV, HIV, and HTLV and from 30 SFV-seronegative NWPs. To normalize the amount of diploid cells per reaction, a new generic qPCR assay was developed for the housekeeping gene ribonuclease P/MRP 30 kDa subunit (RPP30). Primers (RPP30FM 5'-GCA CAT TTG GAC CCT GCG AGC G-3' and RPP30RM 5'-GTG AGC GGC TGT CTC CAC AAG-3') and probe (RPP30PM 5'-HEX-TTC TGA CCT GAA GGC "T"CT GCG CGG-SpC6-3') were designed using the human RPP30 sequence (GenBank # U77665). The RPP30 qPCR assay was performed using an initial 95˚C incubation for 10 min followed by 55 cycles of 95˚C for 15 sec and 62˚C for 30 sec. The sensitivity of the RPP30 qPCR assay was determined by using replicative serial dilutions of a positive control standard generated by PCR amplification of RPP30 sequences using PBMC DNA from a pedigreed HIV-, HTLV-, and SFV-negative blood donor and the primers RPP30FM and RPP30RM, cloned as described above.

Statistical analyses
The Mann-Whitney U test was used to evaluate an association between NWP SFV WB seropositivity and length of exposure to NWP. Workers classified with level 3 NHP exposure were divided into two groups, those with negative and positive NWP SFV WB results, the latter including two persons with indeterminate WB results. For length of exposure to NWP SFV, we considered the most recent time point at which the individual was seronegative in the NWP SFV WB assay. Statistical tests were considered significant at the level of p ≤ 0.05.

Study population
Fifty-six workers occupationally exposed to NHPs at CPRJ (n = 18) and RIOZOO (n = 38) participated in our study (Table 1). Two workers at CPRJ and 12 at RIOZOO decided not to join the study. At CPRJ, blood samples were collected in 2011 from 15 participants, in 2013 from 18 persons and in 2014 from 11 workers. At RIOZOO, specimens were collected in 2011 from 26 participants, in 2012 from 33 persons and in 2014 from 20 workers. Buccal swabs were collected in 2012 for 13 participants from CPRJ and 11 workers from RIOZOO, and in 2014 from 11 persons at CPRJ and 20 workers from RIOZOO. Overall, we obtained serial blood specimens from 56 subjects (one sample from seven persons, two samples from 30 and three samples from 19 subjects) and serial buccal swabs from 40 subjects (one sample from 23 and two samples from 17 subjects). Nine volunteers performing laboratory research in Brazil consented to join the study, reported no contact with NHPs, and their blood samples were used as negative assay controls. (Table 1 excerpt: exposed only to NWP, 21; exposed only to OWP, 0.)

Detection of SFV seropositivity in workers occupationally exposed to NHP
Plasma samples were screened for antibodies to NWP SFV using a WB test that combines viral antigens from a spider monkey (SFVasp) and common marmoset (SFVcja). This WB assay has been shown to be highly sensitive and specific for detecting SFV in all three Platyrrhini families (33,35). Results from a representative WB assay are shown in Fig 1; ten workers were seropositive (Table 2), showing reactivity to the diagnostic Gag doublet proteins p68 and p72. Five seropositive samples were from persons working at CPRJ, corresponding to 27.8% (5/18) of the workers, while WB seropositivity for workers at the RIOZOO was 13% (5/38). Occupations of the NWP SFV WB-positive workers included four general helpers (4/7, 40%), 3/7 (43%) veterinarians, and 3/18 (16%) animal handlers.
For the 12 NWP SFV seroreactive workers, seven (58.4%) reported accidents with only members of the Cebidae family, three (25%) with only the Atelidae family and two (16.6%) reported accidents with both NWP families (Table 2). Two samples (C15H and C18H) from CPRJ were classified as seroindeterminate, showing strong seroreactivity to only a single Gag band (p72). All nine plasma samples from negative control individuals with no direct or indirect contact with NHPs were negative for NWP SFV antibodies. All 10 NWP SFV-positive and the two indeterminate plasma samples were subjected to additional WB analyses with OWP and prosimian SFV antigens to check for possible cross-reactivity. All 12 specimens tested negative in both the OWP and prosimian WB tests, including three workers (Z10H, Z17H and Z19H) who had reported exposure to both NWP and OWP but who were only reactive in the NWP WB tests (Tables 2 and 3). Forty-four samples were not reactive to NWP SFV, of which 32 were tested for antibodies to OWP SFV using specimens collected from at least one time point. Two samples (C04H and Z01H) were weakly OWP SFV-positive and three (Z05H, Z13H and Z14H) were indeterminate with reactivity to only a single band; all samples were collected in 2011. Worker C04H did not report contact with OWP but reported a bite accident by a Leontopithecus chrysomelas (golden lion tamarin). PCR testing of triplicate PBMC gDNA specimens from these five persons was negative for OWP SFV sequences, a finding that may represent cross-reactivity to NWP SFV or nonspecific seroreactivity.

Fig 1. Detection of simian foamy virus (SFV) antibodies in workers exposed to New World primates. SFV antigens from a spider monkey (Ateles species, SFVasp) and a common marmoset (Callithrix jacchus, SFVcja) were prepared by expansion in canine thymocyte cells (Cf2Th) and were combined and reacted with test plasma and sera in the upper panel and simultaneously to uninfected Cf2Th antigens in the lower panel to check the specificity of the seroreactivity to the SFV antigens. Worker codes beginning with "C" and "Z" are from the Centro de Primatologia do Rio de Janeiro (CPRJ) and Fundação Jardim Zoológico da Cidade do Rio de Janeiro (RIOZOO), respectively. All human samples were reactive to both Gag proteins except for participants C15H and C18H, whose samples were reactive for only the 72 kD Gag protein and which were classified as seroindeterminate. Two seropositive serum controls from an SFV-infected spider monkey (Ateles species) and a capuchin (Cebus apella) and a pedigreed seronegative human plasma sample (negative control) are included in each assay run. Molecular markers in kDa are provided on the left and the location of the 68/72 kDa Gag doublet proteins are shown with blue arrows. https://doi.org/10.1371/journal.pone.0184502.g001

Contact risk levels and time of exposure to NHP
Workers were categorized into four contact risk levels based on severity of exposure to NHPs, with level 3 being the most severe; it included bites, scratches and injuries with sharp objects (scalpels, needles, etc.) containing simian body fluids. All workers without any reported NHP contact were classified as Level 0. Each individual was classified according to the highest contact risk level they reported. All ten persons with NWP SFV WB-positive results were classified with level 3 contact risk (Table 2). Six of the 10 level 3 contact risk individuals reported being bitten by NWPs. Two persons (C15H and C18H) with indeterminate NWP SFV WB results were classified as level 3 and 2, respectively, and reported contact exclusively with NWPs (Table 2). Among 44 persons with NWP SFV WB-negative results, 22 (50%) were classified as level 3, five (11.3%) as level 2, eight (18.2%) as level 1 and two (4.5%) as level 0, while contact risk information was not provided for seven workers (16%) (Table 1). Two workers were WB-positive for OWP SFV; person C04H only reported contact risk level 3 exposure to NWP, while person Z01H reported contact risk level 3 exposure to OWP. Both workers were negative for OWP SFV sequences by PCR testing. Thirty-three workers classified with level 3 exposures were further divided into two groups according to WB-positivity for NWP SFV and the length of historical contact with primates, and were then compared by statistical analysis. Interestingly, the WB-negative persons (n = 21) had a median contact duration of 12.3 years, whereas the WB-positive workers (n = 12) had a median duration of NWP contact of 3 years. This difference was statistically significant (p = 0.033; Mann-Whitney U test). Of the 33 workers with level 3 exposures, six reported NWP contact exposure for less than 3 years. Five of these six persons (83.3%) were NWP SFV WB-positive and one (16.7%) was NWP SFV WB-negative. Of 12 workers with NHP contact exposure between 3-10 years, three (25%) were NWP SFV WB-positive and 9 (75%) were NWP SFV WB-negative. Fifteen persons reported contact exposure over 10 years, of which four (26.7%) were NWP SFV WB-positive and 11 (73.3%) were WB-negative. Thus, high-risk occupational exposures are common in this population and there appears to be an inverse correlation of NWP SFV seropositivity and duration of exposure to NWPs.

Anti-SFV antibody clearance
Three of 10 (30%) workers who were initially NWP SFV WB-positive (C06H, Z02H and Z19H) did not retain seroreactivity at all three time points tested (Table 2, Fig 1). Samples from persons C06H and Z02H showed strong reactivity to both Gag precursor proteins in the WB assay at the first two collection time points (2011 and 2012/2013 (not shown)) but were WB-negative in 2014. The same pattern was observed for samples from person Z19H (Fig 1).

Nested PCR and qPCR for NWP SFV DNA in blood and buccal swabs
gDNA was extracted from PBMC samples from the 56 workers at the three different time points (a total of 124 PBMC gDNA samples) and was analyzed by nested PCR targeting three NWP SFV regions (shorter pol (192-bp), longer pol (520-bp) and LTR-gag primers) in order to confirm SFV infection molecularly and to allow identification of the virus by sequence analysis. All samples were tested in triplicate for each genomic region using 500 ng of gDNA. All 124 PBMC gDNA specimens were negative using the three different PCR tests. Given that our PCR assays are highly sensitive, that the assay controls performed as expected, and that the quality of the extracted DNA was verified by PCR amplification of the β-actin gene, these results show that the 10 NWP SFV seropositive workers have undetectable levels of NWP SFV DNA in PBMCs. Buccal swabs were also collected from persons who participated in the last two sample collections in 2012/2013 and 2014.
Due to the low concentration of gDNA obtained from the buccal swabs, samples from some workers were not suitable for SFV PCR testing following the DNA quality PCR screening. For the 2012/2013 time point, the shorter pol PCR test was performed for 25 samples, whereas the longer pol region PCR was done for 17 samples, including all 10 NWP SFV WB-positive samples. Thirty-one buccal samples collected in 2014 were tested using the shorter and longer pol PCR assays, including all 10 specimens from the NWP SFV WB-positive persons. All buccal swab samples with sufficient gDNA amounts tested negative for pol sequences by nested PCR, corroborating the results obtained in the blood for those participants. We validated the novel qPCR assay by determining the sensitivity of the assay to detect a broad range of highly genetically diverse NWP SFV from seven different NWP genera across all three NWP families: SFVcja (Callithrix jacchus), SFVasp (Ateles species), SFVcme (Cacajao melanocephalus), SFVagu (Alouatta guariba), SFVssp (Saimiri species), SFVsxa (Sapajas xanthosternos), and SFVppi (Pithecia pithecia). The qPCR assay could reliably detect > 90% of replicates containing at least 20 copies of each SFV strain per μg DNA, with all strains being detected in 100% of the 20-copy replicates except SFVcme, which was detected 90% of the time at that concentration. All SFV control strains could be detected at five copies in 60-100% of the replicates, at three copies in 50-70%, and at a single copy in 10-60%. Assay specificity was verified by the absence of signal in PBMC DNA from 35 persons known to be negative for SFV, HIV, and HTLV using serology testing and by obtaining negative results using PBMC DNA from 30 SFV-seronegative diverse NWPs. All available PBMC gDNAs and buccal swab specimens from 2012/2013 and 2014 tested negative using the new qPCR assay, which also targets pol sequences, indicating that if SFV is present in seropositive individuals the viral load is less than 20 copies/μg DNA. Insufficient amounts of gDNA from the 2011 PBMC specimens were available for qPCR testing.

Discussion
Compared to OWP SFV, the potential of NWP SFV zoonotic transmission to humans is largely unknown. Recent studies have shown that SFV can be highly prevalent in both captive and wild NWPs, increasing potential infection risks for persons in contact with NWPs. In addition, the potential numbers and types of exposures to NWP SFVs are similar to those for OWP SFVs. For example, because of their relatively small size, NWPs are cheaper to maintain and have simpler husbandry (high reproductive efficiency, with sexual maturity reached in about one year); they are thus frequently used in biomedical research, are common in zoological collections, and are often kept as pets. An estimated 15,500-31,000 NWPs are used in research worldwide [27], and among 129 American Zoological Association (AZA) member institutions responding to a 2009 questionnaire, about 2,200 NWPs consisting of at least ten genera were at the zoos. NWPs are also frequently hunted. An estimated 2.2-5.4 million NWPs are hunted each year in the Brazilian Amazon alone [28]. Estimating numbers of monkeys kept as pets is difficult because these are illegal activities, are common among Amazonian river communities and are not typically reported. Thus, similar to OWP SFV, abundant opportunities exist for exposure and possible human infection with NWP SFV.
Until now only a single study has been published investigating the zoonotic potential of NWP SFV. In 2014 Stenbak et al. reported finding a WB seropositivity of 11.6% among 69 primatologists that reported exposure to NWP, and four of the eight SFV-positive subjects reported accidents involving NWP [29]. However, all eight seroreactive persons tested negative for NWP SFV sequences by PCR testing of blood specimens, which the authors explained could either represent latent infections occurring in other body compartments or viral replication controlled by host immune responses. In the present study, we found a higher serological prevalence (18%) to NWP SFV in primate workers using WB testing, considering results from at least one of three time points collected for each person. The greater prevalence found in our study may be attributed to a possible higher exposure of our population to NWP, with a high number (59%) of workers reporting accidents involving NWP body fluids, including bites and scratches. However, the total number of persons reporting accidents involving NWP in the former study were not reported for direct comparison [29]. We also used a prospective study design with longitudinal sampling of the workers over a three-year period which may have permitted detection of additional positives not found by a cross-sectional study design. This study design also allowed us to detect persistent SFV antibody in some individuals over a period of two to three years, suggestive of possible ongoing viral replication. All ten NWP SFV-positive individuals in our study reported accidents with NWP. Interestingly, six of these workers were bitten by NWP, reaffirming other evidence that biting is very efficient for SFV transmission [17]. All ten NWP SFV-positive samples were not seroreactive to OWP and prosimian SFV, showing further the specificity of our WB assay. Two individuals (C15H and C18H) were seroindeterminate in the NWP SFV WB test at all time points. These two workers reported scratches/bites and contact with feces, respectively, for the same NWP species, Brachyteles arachnoides (wooly spider monkey). While reactivity to a single Gag protein band may reflect a specific antibody response to SFV from this primate species, it is atypical of other SFV infections where reactivity to both Gag proteins is seen. For example, another worker (C11H) also reported bite exposures to Brachyteles arachnoides, but showed reactivity to the characteristic double Gag bands. Therefore, additional factors such as nonspecific seroreactivity to the NWP SFV antigens might explain the seroindeterminate WB pattern observed in these two persons. Additional serologic testing such as measurement of antibody titers and/ or neutralizing antibodies may help to further characterize these infections but adequate plasma volumes were not available for this testing. PCR assays targeting four viral regions, three by nested PCR and one using qPCR, were used to detect NWP SFV sequences in gDNA prepared from both PBMC and buccal swabs of the seroreactive workers [35,36]. However, we did not detect NWP SFV sequences in any sample. Nonetheless, our findings are consistent with those of Stenbak et al. [29] and our combined findings suggest that human infection with NWP SFV may persist with extremely low SFV viral DNA loads in both the blood and the oral mucosa. 
These results contrast with those obtained from humans infected with OWP SFV, in which proviral sequences can be consistently detected in PBMCs in most cases using nested PCR and qPCR and also in oral specimens [13,14,20,37]. For example, we have previously detected SFV DNA in PBMCs from all nine persons with those specimens and in saliva from three of seven SFV-positive humans occupationally exposed to NHP by nested PCR and isolated SFV from the oral cavity from a person infected with a chimpanzee SFV [13,20]. More recently Rua et al. showed that SFV DNA, but not RNA, could be detected by nested PCR and qPCR in PBMCs in all 14 hunters infected with ape SFV but in only 8/14 (57.1%) saliva samples from these persons [21]. Proviral loads in these persons were very low (< 20 copies/10 5 PBMCs) which is close to the assay limit of detection of three copies/150,000 PBMCs used in that study and may explain why virus was not detected in all compartments of all persons by qPCR [21]. Detectable viral loads in the oral mucosa in that study were associated with a long duration of infection (median > 16 yrs.) but were present in less than five SFV DNA copies/10 5 cells [21]. In contrast, persons in our study with seroreactive NWP SFV results reported primate exposures for an estimated median duration of less than three years, which could help explain the negative PCR results in the oral fluids. The sensitivity of our qPCR assay was determined to reliably detect 20 copies/ug DNA (about 150,000 cell equivalents) though the assay could also detect from one to five copies/ug DNA for some NWP SFV strains, like that in other studies, but less frequently. It's possible that the NWP SFV DNA levels in infected humans is lower than what can be reliably detected by qPCR or nested PCR. Little is also known about viral loads in infected NWPs. For example, Stenbak et al. found that proviral loads in blood specimens from two squirrel monkeys were as low (177 copies/ 150,000 PBMCs) as those in four OWPs (macaques) and a human infected with a macaque SFV [29]. Testing of NHP buccal swabs were not reported in that study for comparison with our results [29]. Thus, more studies are needed to investigate SFV expression and replication sites in NWP and in humans to better interpret the lack of proviral sequence detection in seropositive persons. The undetectable NWP SFV proviral loads in exposed humans may also reflect a latent or well-controlled infection. The activity of human restriction factors that mediate innate immunity against retroviruses may play in a role in controlling this infection. TRIM5α proteins, for example, mediate a species-specific blockage of infection by particular retroviruses. Pacheco et al. showed that squirrel monkey SFV replication was inhibited by human TRIM5α, whereas marmoset and spider monkey SFV were not inhibited [38]. Other restriction factors, like tetherin and APOBEC3G, were shown to inhibit FV replication but their specific activity on NWP SFV is still undetermined [39,40]. It is plausible that these intrinsic factors modulate NWP SFV infection in humans by stabilizing the virus and maintaining very low proviral loads. FV are also potent inducers of type I interferon (IFN) by plasmacytoid dendritic cells and by PBMC, and such activation of the innate immune responses may be involved in the control of SFV replication in humans [41]. 
Alternatively, the undetectable viral levels in blood and buccal specimens and longitudinal detection of SFV antibodies may be from sequestration and active replication of virus in lymph nodes or some other tissue compartment. We also found serological evidence of antibody clearance after a first seropositive result for NWP SFV in three workers over a two-to three-year period. These results suggest possible immune control of SFV infection in these workers. To investigate seroreversion in these persons further we also compared the duration of exposure to NWP between the NWP SFV WBpositive and negative workers in our study and found a significant difference in length of exposure and seropositivity in these two groups. NWP SFV WB-positive individuals had an overall shorter duration of NWP exposure compared with seronegative workers, though some SFVpositive workers were consistently seroreactive and had decades of exposure. Although this is the first evidence of antibody clearance in SFV infection, other retroviruses, such as simian type D retrovirus (SRV), can present as latent infections with the presence of integrated virus detected by PCR and/or tissue culture but without antibody detection and with each SRV analyte waxing and waning overtime without consistent detection [42]. Antibody clearance could also reflect possible abortive infections. A complete understanding of these results will require further studies, including additional follow-up of SFV-seropositive persons for longer periods of time. In summary, using a prospective study design we examined a group of workers occupationally exposed to Neotropical primates in Brazil to better understand the zoonotic transmission potential of SFV infecting these monkeys. We used validated serological assays, and nested PCR and qPCR assays which are highly sensitive and specific for detecting infection with NWP SFV. We identified a high prevalence of seropositive workers against NWP SFV, documenting human susceptibility to these viruses and show that none had detectable levels of SFV DNA reflecting latent or well-controlled infections. We also document for the first time antibody reversion in several NWP SFV WB-positive workers, suggesting possible latent infection or clearance of infection. Our findings contribute to the current understanding of human infection with SFV from NWP. Supporting information S1 File. Study questionnaire. This file contains the study questionnaire in Portuguese and translated into English. (DOCX)
2018-04-03T03:52:47.927Z
2017-09-20T00:00:00.000
{ "year": 2017, "sha1": "4294754d40220c1efeb3a79e04613a78a631e2a7", "oa_license": "CC0", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0184502&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4294754d40220c1efeb3a79e04613a78a631e2a7", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
246902041
pes2o/s2orc
v3-fos-license
Clinical needs and technical requirements for ventilators for COVID-19 treatment critical patients: an evidence-based comparison for adult and pediatric age The spread of severe acute respiratory syndrome coronavirus 2, taking on pandemic proportions, is placing extraordinary and unprecedented demands on healthcare systems worldwide. The increasing number of critical patients who, experiencing respiratory failure from acute respiratory distress syndrome, need respiratory support, has been leading countries to race against time in arranging new Intensive Care Units (ICUs) and in finding affordable and practical solutions to manage patients in each stage of the disease. The simultaneous worldwide emergency caused serious problems for mechanical ventilators supply. This chaotic scenario generated, indeed, a frenetic race to buy life-saving ventilators. However, the variety of mechanical ventilators designs, together with the limitations in time and resources, make the decision-making processes on ventilators procurement crucial and not counterbalanced by the evaluation of devices quality. This paper aimed at offering an overview of how evidence-based approach for health technologies evaluation, might provide support during Corona Virus Disease 2019 (COVID-19) pandemic in ICUs management and critical equipment supply. We compared and combined all the publicly available indications on the essential requirements that ICU ventilators might meet to be considered acceptable for treating COVID-19 patients in severe to critical illnesses. We hope that the critical analysis of these data might help readers to understand how structured decision-making processes based on evidence, evaluating the safety and effectiveness of a given medical device and the effects of its introduction in a healthcare setting, are able to optimize time and resources allocation that should be considered essential, especially during pandemic period. Introduction The Severe Acute Respiratory Syndrome -Coronavirus ([SARS]-CoV-2) was identified in December 2019 from a group of patients presenting severe pneumonia symptoms in Wuhan, China [1,2]. From that moment to April 22, 2020, the Corona Virus Disease 2019 (COVID-19) caused more than 7,016,794 confirmed cases and more than 402,874 deaths worldwide. 1 A report from the Chinese Centre for Disease Control and Prevention [3], including approximately 44,500 COVID-19 confirmed cases, reported that 81% of patients showed mild symptoms, 14% severe disease and 5% critical illnesses with an overall case fatality rate of 2.3%. Data from the same report showed that 87% of patients were between 30 and 79 years old, 3% were 80 years old or older, while relatively few cases of infants confirmed with COVID-19 have been reported (1% were aged 9 years or younger, 1% were between 10 to 19 years old), and they experienced principally mild illness. It appears as a respiratory infection that can vary from mild respiratory symptoms with spontaneous resolution, to severe pneumonia that in some cases, can be fatal. Data showed that patients who experienced severe clinical symptoms revealed diffuse alveolar damage resulting in end-stage respiratory failure. Unfortunately, at present time even if several therapies have proven to be effective, none of them has been a game changer so far. Severe cases, presenting acute respiratory failure, can receive only respiratory support therapies throughout ventilators (invasive and non-invasive) [4]. 
Currently available data indeed suggest that a significant number of subjects diagnosed with COVID-19 presented acute respiratory failure requiring Intensive Care Unit (ICU) admission. However, data collection methods vary significantly across countries, which makes the ratio between the number of patients who required intensive care and the total number of those hospitalized difficult to compare. In China, data updated to February 11th, 2020, showed that slightly less than 20% of the infected population required hospitalization, comprising 14% of cases with severe symptoms and about 5% of critical cases that required intensive care [3]. In Italy, from February 24th, 2020 to today, the ratio between patients who required intensive care and the total of those hospitalized halved (from 20% to 10%). Data from France suggest that this ratio is currently 18%. In addition to the significant number of COVID-19 patients who have been requiring intensive care, the reported duration of ICU stay for patients needing ventilation support varies considerably, from a few days to several weeks. Data from China, for instance, report that patients require a mean of 12.8 days of respiratory support [5]. In Italy, where the mean age of patients is much higher, data show that some patients need ventilation for several weeks [6]. These differences in duration and intensity of ICU treatment for COVID-19 patients depend on several factors, including age and comorbidities, together with the total number of patients in the country, which could affect the time delay to receive care, ICU capacity and the availability of COVID-19 rapid testing. Compared to the duration of ICU stay for patients with community-acquired pneumonia in influenza season, these values appear much higher and vary widely across countries. A paper published in 2018 showed that in Turkey, during a non-emergency situation, the median duration of ICU stay for patients with community-acquired pneumonia in influenza season was 5.0 days [7]. It is general knowledge that hospital admissions due to pneumonia are likely to increase during influenza outbreaks, and that a longer duration of ICU stay may lead to problems in terms of bed and hospital resource allocation. This problem is exacerbated during the current global pandemic, which has been straining health care systems worldwide. As a reference, consider the Italian scenario. In Italy, before the COVID-19 emergency, the number of ICU beds was 5090 [8]. Assuming all the ICU beds are available for 365 days (i.e., no downtime), this equates to a maximum capacity (5090 ICU beds × 365 days) of 1,857,850 ICU bed-days in 12 months (about 154,820 per month). The number of COVID-19 admissions in Italian ICUs was 4068 from March 5th to April 4th, with an average length of stay of 30 days (compared to the normal ICU stay of 14 days) [9]. This equates to a demand of 122,040 ICU bed-days in one month just for COVID-19, which is 79% of the Italian monthly ICU capacity. Considering that on average 48.3% of ICU beds are occupied by non-COVID-19 patients, this saturates Italian ICUs, with a demand for an extra 41,998 ICU bed-days per month (i.e., roughly 1,400 additional ICU beds required). It must be remarked that ICU saturation affects all hospitalizations: many clinical procedures and elective surgeries cannot be performed if ICU beds are not available in case of complications.
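The capacity arithmetic above can be reproduced directly from the stated inputs. The short sketch below re-derives the quoted figures (about 154,820 bed-days per month, 122,040 COVID-19 bed-days in one month, 79% of monthly capacity, and roughly 1,400 additional beds), so readers can substitute their own national numbers.

```python
# Reproduce the Italian ICU capacity estimate quoted in the text.
icu_beds = 5090                       # ICU beds in Italy before the emergency [8]
annual_bed_days = icu_beds * 365      # 1,857,850 bed-days/year, assuming no downtime
monthly_bed_days = annual_bed_days / 12           # ~154,820 bed-days/month

covid_admissions = 4068               # ICU admissions, March 5 - April 4 [9]
covid_los_days = 30                   # average COVID-19 ICU length of stay
covid_demand = covid_admissions * covid_los_days  # 122,040 bed-days in one month

non_covid_occupancy = 0.483           # share of beds occupied by non-COVID-19 patients
free_bed_days = (1 - non_covid_occupancy) * monthly_bed_days
shortfall_bed_days = covid_demand - free_bed_days
extra_beds_needed = shortfall_bed_days / covid_los_days

print(f"COVID-19 demand = {covid_demand:,.0f} bed-days "
      f"({covid_demand / monthly_bed_days:.0%} of monthly capacity)")
print(f"Shortfall = {shortfall_bed_days:,.0f} bed-days "
      f"-> about {extra_beds_needed:,.0f} additional ICU beds")
```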
However, the real situation is even worse than the estimation presented above by the fact that the infected people and the consequent demand for ICU beds in March in Italy was not equally distributed on the entire country, but was almost exclusively limited to Northern regions. As this situation is common to most countries worldwide, based on data acquired until now, World Health Organization (WHO) published its recommendations for the European Region as a technical guidance for health systems to respond to COVID-19 outbreak for increasing ICU surge capacity [10]. In Italy, for instance, similar indications aiming at strengthening ICU departments were published on March 4th, 2020. In order to deal with the unexpected influx of patients, the Italian hospitals drastically increased the number of ICU beds, as well as ICU staff and life-saving ventilators and other related supplies [11]. As a direct consequence, and following the example of China, some countries decided, where possible, to build new hospitals, others to reshape several departments into ICUs, others to create new ICUs into buildings different from hospitals or to renovate old hospitals to become new COVID-19 centres. However, increasing ICU beds means increasing the related medical equipment including ICU ventilators. These life support devices, providing temporary or permanent respiratory assistance to patients who cannot breathe on their own, or who require assistance maintaining adequate ventilation because of illness [12], were agreed to be considered pivotal to the care of COVID-19 critical patients. During this critical situation, in which countries are racing against time in arranging new ICUs, the main problem of ventilators' supply has emerged. The high technological complexity of these devices makes the time required for their production crucial. In addition, as life support devices, ventilators have to pass robust regulatory tests before they are receiving the approval and can be delivered to hospitals. Considering the complicated variety of ICU ventilators designs, currently offered by a number of manufacturers, together with the limitations in time and resources during the emergency, stakeholders necessitate affordable solutions to rapidly understand the real needs of a specific health context. This means learning how to correctly evaluate the coherent amount of ventilators really needed in a specific context as well as the essential technical requirements that ventilators should have to ensure effective treatment for COVID-19 critical patients. On April 4th, for instance, a number of UK journals reported that the more than 250 ICU ventilators purchased from China, as an important step in the country's fight against the COVID-19 outbreak, were ditched because serious concerns over the basic quality of ventilators emerged. UK Government was looking forward to the withdrawal and replacement of these ventilators with devices able to ensure safety and effectiveness in providing ventilation for critical patients [13,14]. This reflects that the current pandemic emergency is requiring multidisciplinary efforts to evaluate ICU ventilators costeffectiveness. We wrote this manuscript aiming at offering an overview of how the application of a structured and reliable evidencebased approach for technologies evaluation might provide support during COVID-19 pandemic, resuming and providing all the main information currently publicly available on the essential requirements for the ventilators' management for critical patients. 
The expression "evidence-based" is well known in the medical field, while it has been spreading in biomedical engineering field in the recent years. As evidence-based medicine is based on using the best available evidence to make decisions about individual patients' care, evidence-based clinical engineering uses the best available evidence to make decisions on medical devices and healthcare setting. Historically, clinical engineering has been based on best practices and clinical engineers have been less involved in research activities as well as in publishing their activities' results in peer reviewed journals. However, in the past 10 years, clinical engineering has been growing globally [15], with the first peer reviewed journal papers on evidence-based being published in 2019 [16,17]. Similarly to what Pecchia et al. did for personal protective equipment [18], starting from the clinical indications spread by WHO and the Italian Ministry of Health, we joined our efforts to conscientiously share the minimum technical requirements that an ICU ventilator might meet to be considered acceptable for treating COVID-19 patients in severe to critical illnesses, comparing and, at the same time, combining in an easily readable way, all the indications publicly available in order to provide a unified and global vision. We hope this could significantly help and support manufacturers in designing a basic product which is able to provide a sufficient ventilation support for COVID-19 patients, and, on the other side, hospitals decision-makers in prioritizing, during the COVID-19 pandemic, the most important mechanical ventilators' technical characteristics for the definition of tender specifications. Clinical indications for suspected and confirmed COVID-19 patients A general search on PubMed and google search engines was carried out in order to retrieve as much information as possible about indications and guidelines for the management of suspected and confirmed COVID-19 patients. The most relevant documents were gathered from the Italian Ministry of Health 4 and from WHO 5 web pages. The two health organizations provided guidelines to manage severe acute respiratory infections in suspected and confirmed COVID-19 patients. As the online documentation confirmed, they agreed on the suggested strategies to manage these critical patients. WHO provided a detailed description of the clinical syndromes' profiles associated with COVID-19 for mild, severe and critical illnesses and it suggested strategies to manage patients for each of the described conditions [19]. As the patients who experienced mild symptoms do not need ventilation support, our attention was focused on patients with severe and critical illnesses. Considering adults with severe symptoms, the two health organizations suggested to start the oxygen therapy at 5 L/min and titrate flow rates to reach target SpO2 ≥ 90% in nonpregnant adults and children, SpO2 ≥ 92-95% in pregnant patients [20,21]. They strongly recommended to closely monitor these patients in order to early detect any signs of clinical decline and, if necessary, immediately turn to mechanical ventilation. For those patients with Acute Respiratory Distress Syndrome (ARDS) who required endotracheal intubation they recommended to use a low Tidal Volume (TV) (4-8 mL/kg per predicted body weight, PBW) with a plateau pressure (Pplat) of less than 26-28 cmH 2 O and a driving pressure less than 12-14 cmH 2 O [19,22]. 
WHO stated that for children, TV should be in the range 3-6 mL/kg PBW in the case of poor respiratory system compliance, or in the range 5-8 mL/kg PBW if respiratory system compliance is better preserved [23]. The Italian Ministry of Health also suggested using higher Positive End-Expiratory Pressure (PEEP) rather than lower PEEP in patients with moderate and severe ARDS. A table was provided indicating the possible PEEP/FiO2 combinations (FiO2 being the fraction of inspired oxygen) to obtain the following therapeutic objectives: oxygen saturation (SpO2) of 88-95%, PaO2 of 55-80 mmHg, Pplat less than 26 cmH2O (or less than 28 cmH2O if the Body Mass Index (BMI) is greater than 30 kg/m2), and a driving pressure less than 12 cmH2O [22]. Moreover, WHO [19] suggested treating ARDS patients with hypoxemic respiratory failure with non-invasive or high-flow oxygen systems, provided they are closely monitored because they are likely to deteriorate clinically; in the case of Non-Invasive Ventilation (NIV) failure, the patient needs immediate endotracheal intubation. Essential technical requirements for ICU ventilators The urgent demand for ICU ventilators to guarantee continuous care for COVID-19 patients has led the major health organizations to publish the minimum acceptable technical requirements for ventilators to be used during the current pandemic. In this respect, a detailed search of the publicly available indications on the essential technical requirements of mechanical ventilators was carried out in order to summarize and compare the indications that the major worldwide health organizations have been providing during this time. In our analysis we considered data from the UK National Health Service (NHS) [24,25], WHO [12] and CONSIP S.p.A. [26] (held by the Italian Ministry of Economy and Finance, it is the purchasing centre of the Italian Public Administration sector). Table 1 summarizes the three sets of indications. The technical specifications were grouped into eleven classes: (i) controls/setting ranges, (ii) invasive and non-invasive ventilation modes, (iii) patient assessment tools, (iv) integrated capabilities, (v) monitored/displayed parameters, (vi) patient alarms, (vii) equipment alarms, (viii) display, (ix) patient transport capability, (x) on-board air compressor or turbine, (xi) internal back-up battery. Each detailed indicator was made explicit according to the data retrieved from the three organizations' websites. Taking into account the clinical guidelines for suspected and confirmed COVID-19 adult and pediatric patients discussed above, even though the three organizations provided different ranges for the ventilator parameters, they do not appear significantly different from each other. More specifically, while CONSIP S.p.A. and WHO provided indications for adult and pediatric patients, the information retrieved from the NHS focused only on adults. The indications for TV, as well as for plateau airway pressure, are covered by the minimal technical specifications in terms of both volume and pressure. The former clinical requirement is ensured because a TV of less than 1000 mL is enough to ventilate a patient with a predicted body weight of up to 125-250 kg (considering the WHO indication of 4-8 mL/kg). As for the latter, the ranges of inspiratory pressure (0-40 cmH2O) and PEEP (0-20 cmH2O) allow the clinician to set the recommended Pplat and driving pressure values.
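To make the tidal-volume check above concrete, here is a small sketch; the helper names are ours, the 4-8 mL/kg PBW range is the WHO indication quoted above, and the 1000 mL ceiling is the maximum TV assumed in the specification comparison:

```python
def tidal_volume_range_ml(pbw_kg: float, low: float = 4.0, high: float = 8.0):
    """Lung-protective tidal volume range (mL) for a given predicted body weight,
    using the 4-8 mL/kg PBW indication quoted above."""
    return low * pbw_kg, high * pbw_kg

def covered_by_spec(pbw_kg: float, max_tv_ml: float = 1000.0) -> bool:
    """True if a ventilator limited to max_tv_ml can still deliver a TV inside
    the protective range (at least the lower bound must be reachable)."""
    low_tv, _ = tidal_volume_range_ml(pbw_kg)
    return low_tv <= max_tv_ml

# A 1000 mL ceiling covers 8 mL/kg up to 125 kg PBW and 4 mL/kg up to 250 kg PBW:
print(tidal_volume_range_ml(125))   # (500.0, 1000.0)
print(tidal_volume_range_ml(250))   # (1000.0, 2000.0)
print(covered_by_spec(250))         # True (only at the 4 mL/kg end of the range)
```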
Regarding the Respiratory Rate (RR), WHO suggested a range of 10-60 breaths per minute, the NHS recommended a range of 10-30 breaths per minute, whereas CONSIP S.p.A. did not specify any RR values. Moreover, the ventilator must accurately control the fraction of inspired oxygen (FiO2) provided to the patient, allowing its full-range adjustment (21-100% for CONSIP S.p.A. and WHO versus 30-100% for the NHS) to meet different patients' needs. The ratio between the duration of the inspiratory and expiratory phases needs to be adjustable (the NHS specified the range 1:1-1:3) in order to allow full expiration of the diseased lung, increase CO2 clearance and prevent gas trapping. A feature for air leak compensation is also required by CONSIP S.p.A., so that the ventilator can adjust the delivered flow accordingly: leaks in the breathing circuit can deliver a different volume than the one selected and prevent accurate PEEP delivery and gas flow measurement. The ventilator must provide several ventilation modes. The three organizations agreed that volume-controlled modes (which still guarantee the selected TV when changes in lung compliance and/or airway resistance occur) and pressure-controlled modes (particularly for ARDS patients, since they facilitate lung recruitment) should be guaranteed. For ventilation modes that rely on the recognition of the patient's efforts, trigger detection systems, both flow-based and pressure-based, are required. More specifically, CONSIP S.p.A. required that the device be provided with Continuous Mandatory Ventilation (CMV) modes and Pressure and Volume Assisted/Controlled Ventilation modes (P/VCAC), while for patients partly capable of breathing on their own, the NHS and WHO suggested that the ventilator should be equipped with a Synchronized Intermittent Mandatory Ventilation (SIMV) mode. In addition, for CONSIP S.p.A. and WHO, a Pressure Support Ventilation (PSV) mode is required for patients capable of spontaneous breathing: a mode in which every respiratory act is still assisted by the device, in order to prevent the risk of barotrauma and to decrease the work of breathing. The capability of switching to NIV, including Continuous Positive Airway Pressure (CPAP) and Bilevel Positive Airway Pressure (BiPAP) modes, is also required (Table 1 compares the intensive care ventilator technical specifications gathered from the NHS, WHO and CONSIP S.p.A., grouped into the eleven classes listed above). More advanced ventilation modes, such as Pressure Regulated Volume Controlled (PRVC) and High Frequency Oscillatory (HFO) ventilation, are also required. The NHS and WHO agreed that PRVC, through which the selected volume is delivered at the lowest possible pressure level, is needed. CONSIP S.p.A., instead, recommended the inclusion of an HFO mode, targeting more technologically complex ventilators, since they need to generate high-frequency ventilation; this mode is combined with conventional ventilation modes in some commercially available neonatal ventilators and is less frequently used to ventilate adult patients. Regarding the monitoring of the ongoing treatment, the three organizations agreed that the display of the device should show numeric values for the main respiratory parameters (TV, PEEP, FiO2, respiratory rate).
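The setting ranges quoted above can be collected into a simple structure for side-by-side comparison; only the values stated in the text are encoded, and the checking function is our own illustration of how a tender specification might be screened, not part of any cited document:

```python
# Setting ranges quoted in the text; None means the body gave no explicit value.
FIO2_PERCENT = {"WHO": (21, 100), "CONSIP": (21, 100), "NHS": (30, 100)}
RESP_RATE    = {"WHO": (10, 60),  "NHS": (10, 30),     "CONSIP": None}
INSP_PRESSURE_CMH2O = (0, 40)   # common minimum range quoted above
PEEP_CMH2O          = (0, 20)

def in_range(value: float, rng) -> bool:
    """True if the requested value falls inside the quoted range (None = unspecified)."""
    return rng is None or rng[0] <= value <= rng[1]

# Example: an FiO2 of 24% is inside the WHO/CONSIP range but below the NHS floor.
print({org: in_range(24, rng) for org, rng in FIO2_PERCENT.items()})
```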
Furthermore, monitoring of both static and dynamic lung compliance (typically lower in ARDS patients) and airway resistance is required by CONSIP S.p.A. More sophisticated tools for visualizing respiratory mechanics allow a better assessment of respiratory function and of the obstructive or restrictive nature of disorders. CONSIP S.p.A. and WHO require that the device also be able to display at least three respiratory parameter waveforms at the same time, which are important for the visual assessment of ventilation trends and for setting optimization. Pressure/volume and flow/volume loops are also required, in order to allow the clinician to observe possible anomalies in pulmonary recruitment and obstructive or restrictive alterations. A sufficiently large (>12″) touchscreen display is required, for quick and easy use by clinicians. The integration of a capnography/CO2 monitoring system is required by both CONSIP S.p.A. and the NHS for a more accurate assessment of the cause and severity of respiratory disorders, and to guide therapeutic choices (and their follow-up). Furthermore, CONSIP S.p.A. suggested the inclusion of a system for compensating the pressure drop across endotracheal/tracheostomy tubes, for more accurate ventilation. The device must be provided with several patient alarms, which must be adequately visible and audible even in the noisy ICU environment. CONSIP S.p.A. and WHO agreed that the system must check for high and low values of the main respiratory parameters (FiO2, minute volume, apnea, respiratory rate, PEEP) to alert the clinician to possible changes in the patient's condition. Furthermore, the three organizations agreed on low and high inspiratory pressure alarms, as well as alarms for high PEEP, high/low TV and breathing circuit disconnection. Alarms must also warn about device malfunctions (breathing circuit disconnection, power failure, supply gas failure, low battery). Finally, the device must be provided with an internal back-up battery in case of power supply failure and for in-hospital transport (for which the ventilator must be equipped with a cart). As discussed, some differences can be found between the minimal specifications recommended by the three organizations. The NHS recommends fewer ventilation modes, requiring neither PSV nor assist/control modes (VCAC, PCAC), as well as fewer alarms than WHO and CONSIP S.p.A. Moreover, while CONSIP S.p.A. proved more demanding than the NHS and WHO regarding the recommendations for monitored/displayed parameters, it is somewhat vague about respiratory parameters; at the same time, it provides detailed specifications about patient assessment tools, the display and patient transport capability. Discussion and conclusions The differences highlighted between the points of view of the three health organizations might be explained by the fact that the NHS proposed ventilators for short-term stabilisation of a few hours. They also stated that these might be used for up to one day for a patient in critical condition. Ideally, these models might work as broader-function ventilators supporting patients for a limited number of days, before more advanced ventilator support becomes necessary [24].
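As a rough illustration of how the patient-alarm requirements above could be exercised, the following sketch uses placeholder thresholds chosen only for the example; they are not values taken from the cited specifications:

```python
# Illustrative alarm check for the monitored parameters listed above.
# The limits below are placeholders for the example, not specification values.
ALARM_LIMITS = {
    "FiO2_percent":        (21, 100),
    "minute_volume_L":     (3.0, 12.0),
    "resp_rate":           (8, 35),
    "PEEP_cmH2O":          (0, 15),
    "insp_pressure_cmH2O": (5, 35),
}

def active_alarms(measurements: dict) -> list:
    """Return the parameters whose measured value lies outside its alarm limits."""
    alarms = []
    for name, value in measurements.items():
        lo, hi = ALARM_LIMITS[name]
        if not lo <= value <= hi:
            alarms.append(name)
    return alarms

print(active_alarms({"FiO2_percent": 60, "PEEP_cmH2O": 18, "resp_rate": 22}))
# ['PEEP_cmH2O']
```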
In line with this, Alison Pittard, the dean of the Faculty of Intensive Care Medicine, confirmed that the initial request to UK industry was for the production of simple ventilators for the early-stage treatment of COVID-19, and that if patients needed a prolonged period of ventilation, a more sophisticated device would be more suitable [27,28]. At the end of March, UK ministers ordered 10,000 extra ventilators, prioritising the production of basic devices. However, it was noted that while the overall number of new confirmed COVID-19 cases in the UK was slightly decreasing, critical cases were more complicated than expected [29]. For these reasons, the UK Government decided to prioritise more sophisticated devices and cancelled an order for thousands of units of a simple ventilator model developed by industrial companies to treat COVID-19. Similar examples could be observed in Spain [30] and in France [31]. In France, for instance, the Government requested the production of 10,000 ventilators, half of which were sophisticated models, while the others had a more basic design (dating from 1998). As the former are more complex and take more time to manufacture, the Government required the production of a greater number of basic-design ventilators, approving the manufacture of 8500 basic and 1600 more sophisticated devices in 50 days. Unsurprisingly, the 8500 basic devices were considered useful for patient transport but not for treating critical patients in ICUs [32]. These examples are suggestive of a waste of time and resources, while during critical and emergency situations, costs and time are the driving indicators for the manufacturing of new medical devices. However, we strongly believe that incentivizing cost-effectiveness analysis and the application of evidence-based health technology assessment methods, through the gathering, analysis and synthesis of the best available scientific evidence, could have reduced unnecessary expenses, keeping the real health needs and time as the driving indicators. With these reflections, we would like to highlight how important the prioritization of healthcare needs is. From these last examples, it clearly emerged that the frantic race to increase the supply of ventilators was not initially counterbalanced by a quality evaluation of the devices. During an emergency, the absolute priority is saving lives, and the best way to do that should be guided by scientific evidence. Structured, evidence-based decision-making processes are able to optimize the allocation of time and resources, evaluating the safety and effectiveness of a given medical device together with the effects of its introduction into a healthcare setting. In this context, the comparison among different countries' points of view might be crucial. We combined the responses to the COVID-19 pandemic of three institutions that had to deal with different settings, populations and policies. However, it could be noted that the general guidelines issued by the three bodies are similar but not identical, also because while CONSIP S.p.A. and WHO extended the indications to the pediatric population, the information retrieved from the NHS focused only on adults. This means that institutions should collaborate more and more in order to provide consistent, reliable indications to help the world emerge from this crisis.
We strongly believe that elaborating clear indications from the available scientific evidence and disseminating them through the most renowned world institutions can provide significant help in this pandemic emergency, guiding and supporting the crucial decision-making processes that are affecting the lives of millions of people and the economic burden of all countries dealing with this common enemy. Availability of data and material Not applicable. Code availability Not applicable. Ethics approval Not applicable. Consent to participate Not applicable. Consent for publication Not applicable. Conflict of interest The authors declare that they have no competing interests.
3D Vehicle Detection Using Camera and Low-Resolution LiDAR Nowadays, Light Detection And Ranging (LiDAR) is widely used in autonomous vehicles for perception and localization. However, the cost of a high-resolution LiDAR is still prohibitively expensive, while its low-resolution counterpart is much more affordable. Therefore, using low-resolution LiDAR instead of high-resolution LiDAR for autonomous driving perception tasks is an economically feasible solution. In this paper, we propose a novel framework for 3D object detection in Bird's-Eye View (BEV) using a low-resolution LiDAR and a monocular camera. Taking the low-resolution LiDAR point cloud and the monocular image as input, our depth completion network is able to produce a dense point cloud that is subsequently processed by a voxel-based network for 3D object detection. Evaluated on the KITTI dataset, the experimental results show that the proposed approach performs significantly better than directly applying the 16-line LiDAR point cloud for object detection. For both easy and moderate cases, our detection results are comparable to those from a 64-line high-resolution LiDAR. The network architecture and performance evaluation are analyzed in detail. I. INTRODUCTION In recent years, much research has focused on autonomous driving technology. LiDAR is one of the most important sensors for perception tasks such as drivable region segmentation [1] [2], object detection [3] and vehicle tracking [4]. Different from images captured by cameras, the point cloud generated by a LiDAR supplies 3D spatial information about objects in the form of (X, Y, Z) coordinates and intensity. This removes the barrier of distance estimation and makes 3D object detection and tracking much more accurate. However, the price of high-resolution LiDARs is much higher than that of their low-resolution counterparts. The specifications of the most popular Velodyne 64-line LiDAR, the HDL-64E, and the 16-line LiDAR, the VLP-16, are compared in Table I. As we can see, the cost of a low-resolution LiDAR is only about 1/10 that of the high-resolution one. Therefore, it is necessary to pay attention to low-resolution LiDARs in order to build low-cost autonomous driving systems. However, performing object detection on the point cloud produced by a low-resolution LiDAR is a major challenge, since it is too sparse to even show the shapes of objects. As illustrated in Fig. 2, we can barely find objects in the depth map captured from a 16-line LiDAR, while the 64-line LiDAR data is much more visible. A. Low-Resolution LiDAR for Perception Some research works have focused on segmentation using low-resolution LiDARs. [5] introduced the local normal vector in the LiDAR's spherical coordinates as an input channel. Based on the existing LoDNN architecture [2], its road segmentation performance using a low-resolution LiDAR was comparable to that from a high-resolution LiDAR within a reasonable degradation. A supervised domain adaptation was utilized by [6] to predict a high-resolution point cloud from the low-resolution point cloud in spherical coordinates, and the results were further evaluated on a 3D semantic segmentation task. Low-resolution LiDARs have also been employed for object tracking tasks. In [7], a LiDAR-based system was proposed for estimating the actual position and velocity of the detected vehicles. The tracking results were evaluated in terms of the distance between the LiDAR and the target vehicles; the tracking performance decreases sharply with increasing distance.
Some other works utilized depth completion for 2D object detection, such as [12] and [11]. In [12], a weighted depth filling algorithm was proposed to make the high-resolution (HDL-64E) LiDAR depth map even denser. Subsequently, this dense depth map was concatenated with the corresponding RGB image as the input of a YOLOv3 [13] network for 2D object detection. Similarly, the authors of [11] introduced a self-supervised depth completion network to fill the high-resolution (HDL-64E) LiDAR depth map. 2D object detection networks such as Faster R-CNN [14] and SSD [15] were trained using the dense depth map and the image as inputs. B. High-Resolution LiDAR for BEV Object Detection Nearly all state-of-the-art object detectors utilize high-resolution LiDAR. In [16], the point cloud is first transformed into a BEV map, and then the ground is extracted and objects are proposed in two separate branches. Finally, the objects are predicted by a post-processing block. [17] further refined the previous version into an end-to-end model and achieved better performance. A single-stage detector, PIXOR, was proposed in [9], using 2D convolution on the voxelized BEV map. Without any anchors, it achieved real-time processing speed. As mentioned earlier, due to its extreme sparsity, the low-resolution LiDAR depth map does not supply enough shape information about the objects, but rather a sub-sample of precise depth information. Meanwhile, the RGB image supplies rich context information. Thus, we argue that when the sparse depth map and the RGB image are fused together, object detection becomes possible. III. PROPOSED CNN-BASED FRAMEWORK In this paper, we investigate the possibility of using a low-resolution LiDAR for the BEV object detection task. In Fig. 2, the red box, orange box and blue box represent vehicles at short range, medium range and long range, respectively. For short-range vehicles, their shapes are clearly visible in dense depth maps. In sparse depth maps, the shapes are very blurry but still recognizable, since the number of points hitting the vehicles is still large enough. As for the medium- and long-range vehicles (in orange and blue boxes), we can only get a small number of points even using a 64-line LiDAR, while in the sparse depth map from a 16-line LiDAR, the number of hit points is few to none. Taking the medium-range vehicles in the orange boxes in Fig. 2 (h) for instance, it is easy to recognize them as obstacles due to the sharp distance contrast, but difficult to recognize them as vehicles. This also applies to vehicles with occlusion (green boxes in Fig. 2 (c), (f) and (i)). The long-range vehicles in the blue boxes (in both Fig. 2 (e) and (h)) get too few points to be correctly localized and classified. According to the analysis above, we found that unlike the depth map from a 64-line LiDAR, the 16-line LiDAR depth map does not provide reliable context information, but it does provide accurate distance information. This implies that the 16-line LiDAR depth map is more useful for depth estimation than for context information extraction. Therefore, to better use the information from the 16-line depth map, we put a depth completion network before the object detector to generate a dense depth map with context information (Fig. 3). After the dense depth map is generated, it is sent to the 3D object detector, as demonstrated in Fig. 4. A. Depth Completion Network The depth completion network aims to fill in the sparse depth map from the 16-line LiDAR point cloud with the help of the RGB image. The state-of-the-art depth completion network [8] is adapted here with some modifications.
Fig. 4: The proposed framework for object detection using a low-resolution point cloud and an image. The network requires two inputs, the RGB image and the low-resolution sparse depth map. The RGB image supplies detailed context information, while the sparse depth map supplies precise depth information for some pixels of the image. The sensor fusion strategy adopted here is also referred to as early fusion. To make the network more compact, we first replace the ResNet-34 backbone with ResNet-18. For performance improvement, global attention modules and an Atrous Spatial Pyramid Pooling (ASPP) [18] module are placed to bridge the encoder and decoder. As shown in Fig. 5, the global attention module extracts the global context information of the feature map through a global pooling layer and then fuses the global information back to guide feature learning. By adding this module, the global information is merged into the features without an up-sampling layer, which helps the decoder achieve better performance. Besides, an ASPP module (Fig. 6: structure of the atrous spatial pyramid pooling module) is placed between the encoder and the decoder, with dilation rates of 2, 4, 8 and 16. The ASPP module concatenates feature maps with different receptive fields, so that the decoder has a better understanding of the context information. The loss function of the depth completion network is shown in Eq. 1; it is the Mean Square Error (MSE) between the predicted depth map and the ground truth. B. Object Detection Network The object detection network adopted in this framework is PIXOR [9]. Its main idea is to take advantage of 2D convolution and an anchor-free network to realize super-fast point cloud object detection in BEV. PIXOR consists of two steps. The first step is to reform the representation of the input point cloud: it reduces the 3 degrees of freedom to 2 in BEV, and extracts the third (z, or height) as another input feature map channel, so that 2D convolution can be used instead of 3D convolution to greatly decrease the computational complexity. The second step is to feed the reformed input feature map into an anchor-free, one-stage object detector network (Fig. 7: the network architecture of PIXOR [9]). For highly efficient computation on dense predictions, a fully convolutional architecture is used to build the backbone and header of PIXOR. Without any predefined anchors or proposals, PIXOR outputs the predicted class and orientation from the header in a single network. Concerning the loss function, the total loss of object detection consists of the classification loss and the regression loss (Eq. 2), where λ_cls and λ_reg are the corresponding coefficients. The classification loss L_cls aims to correctly predict the object (cars in our case), and the regression loss L_reg aims to refine the size, center and orientation of the predicted bounding boxes. C. Implementation Details The depth completion network is first trained on the KITTI Depth Completion dataset, with a batch size of 4 and a learning rate starting at 1e-4 that decreases every 5 epochs; the total number of training epochs is 10. After training the depth completion network and keeping it fixed, we move on to train the object detector from scratch. The KITTI Object Detection dataset is split into training and validation parts according to [10]. The optimizer is Adam, with batch size 8.
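As a rough sketch of the BEV input encoding described above for PIXOR (an occupancy channel plus a height channel), assuming an illustrative grid extent and resolution rather than the exact values used in the paper:

```python
import numpy as np

def point_cloud_to_bev(points: np.ndarray,
                       x_range=(0.0, 70.0), y_range=(-40.0, 40.0),
                       resolution=0.1):
    """Project an (N, 3) point cloud onto a BEV grid.
    Returns a 2-channel map: occupancy and maximum height per cell.
    Ranges and resolution here are illustrative assumptions."""
    nx = int((x_range[1] - x_range[0]) / resolution)
    ny = int((y_range[1] - y_range[0]) / resolution)
    occupancy = np.zeros((nx, ny), dtype=np.float32)
    height = np.zeros((nx, ny), dtype=np.float32)   # empty cells keep 0

    # Keep only points inside the chosen BEV extent.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]

    ix = ((pts[:, 0] - x_range[0]) / resolution).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / resolution).astype(int)
    occupancy[ix, iy] = 1.0
    # Record the maximum z value falling into each cell as the height channel.
    np.maximum.at(height, (ix, iy), pts[:, 2])

    return np.stack([occupancy, height], axis=0)   # shape (2, nx, ny)

# Example: 1000 random points in front of the sensor.
bev = point_cloud_to_bev(np.random.uniform([0, -40, -2], [70, 40, 1], size=(1000, 3)))
print(bev.shape)   # (2, 700, 800)
```

A 2D convolutional detector can then consume this (channels, height, width) tensor directly, which is the point of the BEV reformulation.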
The learning rate starts at 1e-3 and is reduced by a factor of 2 when the validation loss does not decrease. Finally, we fine-tune the entire framework, with both the depth completion network and the object detection network trained together, taking the 16-line point cloud and images as input and vehicles in BEV as output. A. Dataset Training and evaluation of the whole framework employ the KITTI dataset (both Depth Completion and Object Detection). Before being fed into the framework described above, the point clouds are down-sampled to emulate the VLP-16 low-resolution LiDAR. The KITTI Depth Completion dataset contains 85,898 training samples and 1,000 selected validation samples. Its ground truth is produced by aggregating consecutive LiDAR scan frames into a semi-dense depth map, with about 30% of the pixels annotated. The KITTI Object Detection dataset has 7,481 training samples and 7,518 testing samples. Evaluation is categorized into three regimes: easy, moderate and hard, representing objects at different occlusion and truncation levels. B. Depth Completion Performance Evaluation As described in Sec. III-A, in order to enhance the depth completion performance, multiple GAM (global attention) modules have been added to bridge the encoder and the decoder of the depth completion network. The performance comparison on the validation dataset is shown in Tab. II. Adding GAM modules results in a performance improvement of about 3.6% and 7.0% measured by the Root Mean Square Error (RMSE) and the Mean Absolute Error (MAE), respectively. Fig. 8 (b) and (c) show the predicted depth maps of the depth completion network with and without GAM modules, respectively, and the bottom figure shows the ground truth. In this example, the depth map from the depth completion network with the GAM module gives the objects a slightly better shape representation. The performance of our framework on the KITTI object detection validation dataset is shown in Tab. III and Fig. 9. The results are reported for two settings, IoU=0.5 and IoU=0.7. When IoU=0.5, our framework achieves 89.0%, 75.8% and 68.1% detection accuracy for the easy, moderate and hard cases, respectively. In the case of IoU=0.7, the prediction accuracy decreases to 75.4%, 61.2% and 55.2%, respectively. Compared to feeding the 16-line point cloud directly into PIXOR, our framework improves the detection accuracy significantly in all cases. Compared to PIXOR with the 64-line point cloud as input, the performance of our framework is comparable in the easy and moderate cases, but in the hard case, the prediction accuracy drops by around 20% under both IoU criteria. The precision-recall curve is shown in Fig. 10. D. Analysis and Future Work 1) Location prediction: The depth completion task in KITTI adopts RMSE and MAE as its benchmark. However, when trained with RMSE or MAE as the loss function, the network cannot decide whether object border pixels are foreground or background pixels. Instead, to achieve better (lower) RMSE and MAE, the depth completion network tends to predict the depth of the car edge as a value between the distance of the vehicle and the distance of the ground in front of the car (Fig. 11). This "long-tail" boundary effect can make object localization more difficult. In our future work, a new benchmark will be proposed for LiDAR depth completion in order to remove the boundary effects mentioned above. 2) Vehicles at long range: Due to the extreme sparsity of the low-resolution LiDAR depth map, vehicles at long distances from the LiDAR are only visible in the images.
Thus, for these vehicles, the depth-map-assisted depth prediction is effectively reduced to depth estimation from the image alone. This explains the sharp performance drop on the hard cases in the KITTI dataset. V. CONCLUSION In this paper, a 3D object detection framework is proposed for low-resolution LiDAR point clouds. By cascading a depth completion network before the object detector, it first converts the sparse point cloud into a much denser depth map that is subsequently processed for 3D object detection. This makes object detection possible with a sparse, low-cost LiDAR by fusing it with images captured by a camera. When evaluated on the KITTI dataset, the network achieves object detection accuracy in both the easy and moderate cases comparable to that obtained using a high-resolution point cloud.
Effects of multirepresentation-based creative problem-solving learning model on students' critical thinking and diet nutritional quality This research investigated how the multirepresentation-based creative problem-solving (MBCPS) learning model could enhance students' critical thinking skills in relation to the nutritional quality of their diet. Introduction The object of the science learning process is procedural scientific work. Science involves factual, conceptual, and procedural knowledge, as well as metacognition (Wisudawati et al., 2015). Effective science teaching and learning through the application of cognitive psychology within a systematic instructional framework can help to motivate students (Meagher et al., 2018), a systematic instructional framework based on balanced learning (Sackes et al., 2020). Factual knowledge in the science context can relate to daily nutritional behavior. It can cover the concepts of balanced nutrition, health, and food quality (Hardinsyah, 2017). Balanced nutrition is the arrangement of daily food that contains nutrients of the required types and amounts. The four pillars of balanced nutrition include: 1) consuming a variety of foods, 2) practicing clean living behaviors, 3) doing physical activity, and 4) monitoring body weight regularly to maintain a normal weight (Ministry of Health of the Republic of Indonesia, 2014). Nutritional status is also relevant for university students. However, the nutritional status of students has been shown to vary, and the prevalence of underweight and overweight is high. The World Health Organization (WHO) (2021) reported that 39% of adults were overweight and 13% were obese. A higher percentage was found for teens (Development Initiatives, 2018). In villages in Central Java, Indonesia, 68.8% of women aged 35-49 were overweight (Lowe et al., 2021), while in Ecuador the obesity level was 44.2% (Hajri et al., 2021). Research has reported that the proportions of students at the Faculty of Engineering, Universitas Negeri Semarang with underweight, normal and overweight status were 23.5%, 64.2%, and 9.2%, respectively (Fathonah et al., 2018). Two years later, at the same location but with different students, the number of overweight students had drastically increased to 22.0%, while the number of underweight students was almost unchanged (21.9%) (Fathonah et al., 2020). Improving students' nutritional status is therefore extremely important, for example through a learning process covering the concepts of balanced nutrition in a "nutrition science" course. Educating children, teens, and adults on nutrition and healthy eating practices could create a healthy food environment. Various learning models have been used in teaching nutrition and health, such as interprofessional intervention (educating two professions together) (Asprey 2016) and active learning (Santos et al., 2020). E-learning has been trialled with medical students in China (Luo et al., 2017) and teachers in Greece (Katsagoni et al., 2019), as well as animated videos with Latinos (Calderon et al., 2014) and online health information (Aydin et al., 2015; Smith et al., 2019).
A learning process that supports students in collaboratively exploring creative problem solving, drawing on the various solutions offered, is well suited to the characteristics of nutrition and health concepts. The creative problem-solving model involves a collective effort by students to solve problems (Kim, et al., 2019) and find more original solutions (Hooijdonk, et al., 2020). This model can promote students' higher-order thinking skills (Skeriene, et al., 2020), including creative, critical, logical, reflective, and metacognitive thinking skills. This study addressed students' critical thinking skills in selecting diets and foods to maintain good nutritional status through creative problem-solving learning. Critical thinking is one of the thinking skills students must master in this century, in addition to three other skills, i.e., creativity, communication and collaboration (National Education Association, 2019). This reflects the occupational requirements set by employers, for whom critical thinking and problem-solving skills are the main requirements for job applicants, not science competence alone (Kyllonen, 2012). Since critical thinking skills involve taking various perspectives on problems (Dekker, 2020), explicitly integrating creative problem-solving learning into the learning process is extremely important for improving students' critical thinking skills. The four steps in the creative problem-solving learning model, namely problem identification, generating ideas, evaluating ideas, and validating solutions, can be modified and adjusted to make the learning process more meaningful and to promote students' creativity (Nazzal, et al. 2020). The third step of the creative problem-solving model can be integrated with the multirepresentation approach to offer various forms of representation of the same concept, to communicate the meaning of science (Nielsen et al., 2022) and develop literacy (Nielsen & Yeo, 2022). The CPS model is an educational model for collective creative endeavors, or thought processes used by groups to solve problems. It is a continuous process of using divergent and convergent thinking together at each stage; the stages are not segmented but interrelated and cyclical (Kim et al., 2019). Many modern models of CPS, such as the Geneplore model, have two main phases, namely generation and exploration. In the generation phase, mental representations of possible solutions are collected. In the exploration phase, the possible solutions are explored until the best one is selected (Finke et al., 1996).
A four-stage CPS model has been applied in engineering (Cropley, 2015; Nazzal, et al., 2020; Next Generation Science Standards Lead States, 2013) and in mathematics, with the stage of creating ideas changed into brainstorming (Pepkin, 2000). The Pepkin model has been applied to students and was shown to affect higher-order thinking skills more strongly than conventional models (Adila et al., 2020). This study used a CPS model with the four syntax stages above, with the stage of idea evaluation expanded through multirepresentation. Multirepresentation learning is able to improve the mastery of science concepts (Hubber et al., 2018; Gillies et al., 2020), scientific literacy (Prain, 2019), the modeling dimension (Parent et al., 2017), and digital multimodal learning (Andersen et al., 2018). Multirepresentation-based worksheets were able to cultivate critical thinking skills (Abdurrahman et al., 2019). Multirepresentation-based inquiry learning was reported to effectively train the critical thinking skills of high school students in physics (Amanati, 2020). Chemistry teachers need to apply multirepresentation to improve students' understanding of chemistry concepts (Li & Arshad, 2014). The representation of the concept of balanced nutrition in learning has been conveyed verbally, visually, and mathematically. The model developed in this research is called Multirepresentation-Based Creative Problem Solving (MBCPS). The components of the learning model include syntax, social systems, reaction principles, supporting systems, and instructional and accompaniment impacts (Joyce et al., 2015). The syntax developed in this research is the CPS syntax with an emphasis on creative thinking skills, combined with multirepresentations. This multirepresentation-based creative problem-solving (MBCPS) learning model can be considered a promising collective, creative way for groups of students to solve problems by presenting the same concept in different forms. This study used the MBCPS learning model to improve students' critical thinking skills in relation to planning and implementing proper diets. MBCPS is intended to develop critical thinking skills, which are performed through activities such as thinking of solutions that address the problems, thinking deeply, thinking in a focused way, and selecting the best solution. The objectives of this research are to analyze the learning activity around the concepts of balanced nutrition and to analyze its effectiveness in improving students' critical thinking skills in relation to the nutritional quality of their diet. Hence, two questions have guided the current research: (i) how can the learning activity around the concepts of balanced nutrition be analyzed? and (ii) how can its effectiveness in improving students' critical thinking skills in relation to the nutritional quality of their diet be analyzed? The Subjects of the Research This research investigates the effect of the MBCPS learning model on students' understanding of the concepts of balanced nutrition. A purposive sampling method was used to select the subjects of the research. The subjects were the students taking the course in two separate classes, with 36 students in each class. The MBCPS model was applied to one class (the experiment group), while the other used conventional problem solving (the control group). The MBCPS model is supported by a textbook compiled by the researchers. The textbook was prepared based on the Regulation of the Ministry of Health of the Republic of Indonesia No.
41 concerning the guidelines for balanced nutrition, which contains the four pillars (used for the control group); these were modified for the MBCPS textbook. The four pillars of balanced nutrition were explained, supplemented with examples of the MBCPS model syntax, nutrition literacy, and recent research results from reputable scientific journals. The sequence of idea evaluation activities was: 1) determining a specific, measurable, and best solution design; 2) presenting clear, easy-to-understand, and correct verbal solutions according to the problem identification; 3) presenting visual/image solutions that contain the correct concept, have a communicative display, and interest the reader, by searching the web to obtain high-quality images that represent the best solutions, for example on obesity; 4) presenting mathematical solutions as a coherent way to give students opportunities to complete tasks or problems, for example calculating BMI and determining normal weight. Solution validation consisted of: 1) applying the best solution design; 2) comparing the solution design against standard values or sources (if any exist in the literature); 3) re-examining the best solution for the problem based on the correct concepts. The Learning Process of Multirepresentation-Based Creative Problem Solving The learning activities observed were in line with the CPS syntax reported by Nazzal et al. (2020), which includes (1) problem identification, (2) idea generation, (3) idea evaluation, and (4) solution validation. At the evaluation stage, the idea was spelled out in three representations of the answer: verbal, visual, and mathematical. Table 1 lists the modified syntax of the MBCPS learning model, which was adapted from Nazzal et al. (2020). The student and lecturer activities at each stage of the syntax, which served as the source of data, are displayed there. The experimental activity began with the lecturer's explanation of the research and learning objectives of MBCPS. The activities carried out by students for each pillar of balanced nutrition were: 1) studying the material of each pillar of balanced nutrition; 2) studying problems according to the pillars; 3) working on problems with MBCPS in groups through the stages of problem identification, idea generation, evaluation of ideas with multirepresentation, and validation of solutions; 4) listening to explanations of the material; 5) presenting the results of the problems in class; 6) problem-solving class discussion with MBCPS; and 7) revising the problem solutions. The activities carried out by the lecturer were: 1) explaining each pillar via the Zoom application, with various examples of problem solving with the MBCPS model; 2) accompanying and guiding students working both individually and in groups; 3) observing discussion and presentation activities; and 4) evaluating the results of problem solving with MBCPS and providing feedback. The support system for implementing the MBCPS learning model for balanced nutrition consisted of: 1) learning tools in the form of Semester Learning Plans and Student Activity Sheets; 2) learning media in the form of 5 learning videos, the first presenting the learning model and the remaining 4 covering the 4 pillars of balanced nutrition; 3) the textbook on balanced nutrition with the MBCPS model, entitled "Balanced Nutrition Literacy in Science Learning", with ISBN 978-623-02-3790-4 (Fathonah et al., 2022); 4) research instruments covering critical thinking skills and food nutrition quality; and 5) other learning resources related to balanced nutrition materials in the form of
e-books, textbooks, and national and international journals. The learning outcomes included 1) mastering the concept of balanced nutrition in overcoming nutritional problems, 2) being able to solve the problems of each pillar of balanced nutrition critically and with multirepresentation, and 3) being able to improve the nutritional quality of food. The instructional impact of MBCPS learning was that students mastered the concept of balanced nutrition. The accompaniment impact was to improve 1) critical thinking ability and 2) the nutritional quality of food. The learning environment required by the learning model comprised 1) information technology such as laptops, tablets, iPads, and smartphones (Kaup et al., 2020); 2) online platforms such as Zoom (Fuady et al., 2021); 3) smooth and powerful internet connectivity (Naji et al., 2020); and 4) the quality information technology team at Universitas Negeri Semarang (Kaup et al., 2020). The students' activities during the learning process were observed and analyzed. The learning process showed how students followed the four stages of the MBCPS learning model. A written test with closed questions and the measurement of the students' Nutritional Quality of Diet using a food recall form were carried out before and after the learning activities. Multirepresentation in Science Learning Multirepresentation refers to a variety of ways or forms of presenting an idea or concept. In science learning, the use of multirepresentation can facilitate the understanding of concepts because it supports students' various learning styles, so that competencies are more easily realized in the learning process. In the learning process, students are able to produce explanations of science concepts, exhibit higher-order reasoning, and create interesting representations for communicating meaning (Nielsen & Yeo, 2022). Research on pre-service science teachers showed that parallel patterns, quality improvement, and the use of multirepresentations are interrelated. Students use, produce, and reflect on levels of representation as part of their involvement in coherent argument-based laboratory activities (Yaman, 2020). Another study, on pre-service teachers' knowledge of representations of physical and chemical changes, focused on the development of cognitive structures. The use of multirepresentation makes it easier for students to master the concepts of physical and chemical changes effectively. The instructive findings clearly addressed the difficulties teachers have experienced with particle and symbolic representations of physical and chemical changes (Derman & Ebezener, 2020). The MBCPS learning model is a development of the CPS model that has been shown to significantly foster critical thinking skills, which contribute to the quality of students' diets in health education. The stages of multirepresentation in MBCPS balanced nutrition learning are as follows. 1. Students write down the answers verbally and correctly, in the order given by the problem identification. 2.
The lecturer explains how to find various images related to the key concepts of the verbal answer by searching/googling appropriate sources on the internet. Applications or web sites that provide such images include google.com (image search) and freepik.com. For example, a person's nutritional status can be depicted with a picture of a person at various body mass index (BMI) sizes accompanied by the formula for calculating BMI, and the risk of obesity can be depicted by looking for images of obese people annotated with the locations of potentially diseased organs. 3. Students search for and discuss with their group the drawings of the key concepts and determine the most appropriate one. 4. The image is accompanied by a brief caption. The images of the key concepts are connected with arrows to explain their interrelationships, so that a clear concept map is built. 5. Representation is also used for mathematical concepts, such as the BMI formula and calculating normal weight. The Learning Process of Multirepresentation-Based Creative Problem Solving Four problems related to balanced nutrition, in accordance with the multirepresentation-based creative problem-solving learning model, were developed and examined by three examiners, with a minimum score of 0 if the students did not provide any answer and a maximum score of 3 if the students provided a correct and complete answer, as listed in the assessment rubric in Appendix 1. An example of an MBCPS problem on balanced nutrition in the weight-monitoring pillar is presented in Appendix 2. Each question involved 11 indicators and the four pillars of balanced nutrition. Therefore, the minimum score was 0 and the maximum was 132 (4 problems x 11 indicators x a maximum score of 3) (Table 1). The average value was calculated using Equation (1). The achievement score was calculated by dividing the achieved score by the maximum score. This score was used to determine the students' achievement criteria as listed in Table 2. Meanwhile, the achievement criteria of the multirepresentation-based creative problem-solving learning activities and the gain factor criteria are presented in Table 3 and Table 4, respectively. The effectiveness of implementing the MBCPS learning model in the experimental group compared with the control group is measured by the N-gain or <g> (Fadaei, 2019). The N-gain value for each respondent is calculated using Equation (3), <g> = (<Spost> - <Spre>) / (100 - <Spre>), while the N-gain value for MBCPS learning is the average over all respondents. The <g>, <Spre> and <Spost> in Equation (3) denote the N-gain or gain factor, the average pre-test score on the Nutrition and Health concepts (%), and the average post-test score on the Nutrition and Health concepts (%), respectively.
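A minimal sketch of the normalized-gain computation in Equation (3) as reconstructed above; the category thresholds used below are the commonly used Hake cut-offs and are an assumption to be checked against Table 4:

```python
def n_gain(pre_percent: float, post_percent: float) -> float:
    """Normalized gain <g> = (Spost - Spre) / (100 - Spre), scores in percent."""
    return (post_percent - pre_percent) / (100.0 - pre_percent)

def gain_category(g: float) -> str:
    """Commonly used gain categories (assumed here; see Table 4)."""
    if g >= 0.7:
        return "high"
    if g >= 0.3:
        return "medium"
    return "low"

# Example with the experiment group's learning-process scores reported below
# (40% -> 53%); the paper's 0.21 comes from averaging per respondent.
g = n_gain(40.0, 53.0)
print(round(g, 2), gain_category(g))   # 0.22 low
```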
The Students' Critical Thinking Skills The data collected in this research included 1) observations of students' learning activities during the MBCPS learning process, 2) written test results on the concepts of balanced nutrition (critical thinking skills applied to balanced nutrition), as shown in Appendix 3, and 3) the food recall forms on dietary intake filled out by the students. The students' critical thinking skills were measured using a multiple-choice test instrument on the concept of balanced nutrition, following the stages of the thinking process needed to solve the problems correctly. The critical thinking indicators used in this study are analysis, inference, evaluation, and decision making; other frameworks use application, evaluation, analysis and synthesis (Nasution et al., 2023). The students' answers were scored based on the critical thinking steps using a reference scale of 1-4. Diet Nutritional Quality The Nutritional Quality of Diet is measured quantitatively with a seven-day food recall form as published by the Ministry of Health of the Republic of Indonesia. The nutritional value of each food ingredient consumed by the students was calculated based on the reference list of food ingredient composition and compared with the nutritional adequacy given by the 2019 Recommended Dietary Allowances (RDA). The percentage of the Nutrition Adequacy Level (NAL) was calculated following Lim & Choue (2013), and the categories are listed in Table 5. The students' Nutritional Quality of Diet was then calculated from the nutrition adequacy level values using Equation (5) (Kementerian Kesehatan Republik Indonesia, 2019). All the collected data were analyzed descriptively to determine the effectiveness of the MBCPS learning model during the teaching and learning process, as well as its effect on students' critical thinking abilities and the Nutritional Quality of Diet. Differences in students' critical thinking abilities and Nutritional Quality of Diet before and after the learning process were analyzed using the paired-sample t test (Bonamente, 2017), and the experimental and control groups were also compared using the N-gain (Fadaei, 2019). Findings The Multirepresentation-Based Creative Problem-Solving Learning Process A slight improvement in the MBCPS learning for the "experiment group" was observed, with maximum scores before and after the implementation of the MBCPS learning model of 40% and 53%, respectively. The N-gain value due to the implementation of the MBCPS learning model for the "experiment group" was 0.21, in the low category. The detailed learning achievement for each syntax stage of the MBCPS learning model for the experiment group is presented in Table 6.
The implementation of the MBCPS learning model was carried out for the "balanced nutrition" concepts. Four syntax stages of creative problem solving were involved during the learning activities, namely problem identification, idea generation, idea evaluation, and validation of the solution. The increase in problem identification scores in the high category, from 61% before to 78% after implementing the MBCPS learning model, was significant at p < 0.05. The problems given in the student worksheets are explicitly problems that students must solve, so the problem identification stage is relatively easy for students to complete. Meanwhile, the idea generation stage only achieved about half of the maximum score, with no significant change in learning achievement observed. More effort is needed to find other solutions to the given problems through literature review of e-books and published research papers. Moreover, the learning achievement of the idea evaluation stage, in which the best alternative answers are concluded, increased significantly after the implementation of the MBCPS learning model, almost doubling from 24% before to 44% after the learning. Meanwhile, an insignificant increase in learning achievement, from 43% to 49%, was observed for the validation of solutions. This suggested a low and negligible change in the students' creativity. The students found it difficult to state complete verbal answers and to translate verbal answers into visual ones. Therefore, the students were required to improve their literature study and their understanding of the concepts. The Students' Critical Thinking Skills The students' critical thinking skills in this study were measured based on the stages of the thinking process, namely the skill to argue, the skill to make inferences, the skill to evaluate, and the skill to generate the best solution to the problems. An example of the critical thinking skills test is listed in Appendix 3. The students' critical thinking skills were determined before and after the learning process for each group involved in the research. The results showed that there was no significant difference between the students' critical thinking skills of the "experiment" and the "control" groups before the implementation of the respective learning processes. The indicators of inference skill and decision-making skill even showed the same average value. Overall, the students' critical thinking skills of the "experiment" group and the "control" group after the learning process were significantly different, with a p value of 0.00 (Table 7). This indicates that the implementation of the MBCPS learning model could enhance the students' critical thinking skills. Among the indicators of critical thinking skills, analytical skill and evaluation skill showed a significant difference, as indicated by p values of 0.01 and 0.00, respectively. However, no significant difference was observed for the indicators of inference skill and decision-making skill, with p values of 0.82 and 0.11, respectively.
The Students' Diet Nutritional Quality The students' Nutritional Quality of Diet consumption includes the adequacy level of energy, carbohydrate, protein and fat.Before the implementation of the MBCPS learning model, the students' Nutritional Quality of Diet consumption measured in this study was quite concerning, as indicated by the value of the nutritional adequacy level mostly (above 50%) in the deficit category (nutrition adequacy level below 70%).These conditions were observed in all indicators of the adequacy of energy, carbohydrates, protein, fat, and Nutritional Quality of Diet (as is shown in Table 8). After the implementation of MBCPS and conventional learning model for the "experiment" and "control" group, respectively, the overall Nutritional Quality of Diet, energy, protein, and fat for both groups has improved towards an adequate nutrition level.However, the carbohydrate adequacy level for both the "experiment" and "control" groups was totally in a deficit level.Furthermore, the protein adequacy level was the only average nutritional quality that was in the medium category (80-99%).The four other nutritional adequacy aspects, i.e., energy, carbohydrates, fats and the overall Nutritional Quality of Diet were below 70% on average.Moreover, the nutritional quality of carbohydrates was only about 45%.By comparing the results of the students' nutritional quality of before-after learning state, there was no significant difference of that before and after learning, except for the fat adequacy level with a p-value of 0.01 (as is shown in Figure 1).The Nutritional Quality of Diet presented in Figure 1 clearly shows the level of the students' nutritional adequacy.It indicated that the highest level of protein adequacy and the lowest level of carbohydrate adequacy had been achieved.Overall, the nutritional quality of diet was still at deficit level (59.2-73.6%).The learning outcomes showed that the N-gain of the "experiment" group was higher in the critical thinking skills with moderate criteria compared to that of the "control" group (as is shown Table 9).This suggested that the MBCSP model was quite effective in enhancing the students' critical thinking skills, and significant at p < 0.05.In contrast, the Nutritional Quality of Diet of the students in the "experiment" group achieved a low-level N-gain or low effectiveness, while those in the "control" group obtained the N-gain with a medium category or a moderate effectiveness level, but significant at p <0.05. Figure 1 Average Score of the Students' Nutritional Quality of Diet before and after the Implementation of Conventional and MBCPS Learning Models for the "Experiment" and "Control" Groups t-test 0.01 Discussion The implementation of the MBCPS learning model during the nutrition science course was carried out in this study.It was shown that the students' critical thinking skills have improved to the moderate level, except for the Nutritional Quality of Diet for the students in the "experiment" group.This indicated the significance of the implementation of the MBCPS learning model in enhancing the students' critical thinking skills.Moreover, the improvement of the critical thinking skills and nutritional quality of diet for the students in the "experiment" was higher in comparison with those for the students in the "control" group, indicated by a very significant t-test value (both variables with a p value of 0.00). 
During the implementation of the MBCPS learning model for the "experiment" group, several typical learning activities were employed, i.e., 1) working in groups to share knowledge and experiences (Mayseless et al., 2019;Sophonhiranraka et al., 2015), 2) generating beneficial solutions for all group members and between groups, 3) deciding the best solution for all groups, 4) developing creative thinking skills (Cetinkaya, 2014), 5) developing critical thinking skills (Kim et al., 2019;Nazzal et al., 2020), and 6) improving creativity (Cetinkaya, 2014;Nazzal et al., 2020;Phaksunchai et al., 2014).During the MBCPS learning process, the students worked in peer groups with different skill levels to articulate the concept understanding and solve the higher-level problems (Mahalingam et al., 2017).This type of learning could enhance the students' self-reflective learning and engagement through providing feedback.Furthermore, the students enjoyed the challenging learning experiences (Strohfeldt et al., 2015). The students' ability in generating ideas and validating solutions did not significantly change after the implementation of MBCPS learning model in the "nutrition science" course.It was in the low category.It suggested that the students' ability to solve each problem has been done correctly, although not all of them are correct, with various alternative solutions.The students' skill to locate and find the solutions to problems from various available literature (e.g., modules, textbooks, and scientific journals) has not developed maximally.Moreover, the students could not find the most correct solutions for the problem in the stage of validation of solutions.Validation of solution activities in the problem-solving process could be a good challenge.In addition to problem solving, the results of these activities could help bringing positive impacts and feedback (Sennewald et.al., 2021).In advanced, collaborative activities involving knowledge and experience sharing between students in groups were highly required (Grott, 2019, O'Neil, 2014).Specifically, the divergent way of thinking to create widely various thoughts was necessary for generating ideas (Phaksunchai et al., 2014). Related to the idea evaluation stage, where students are required to choose and decide on the best alternative solution with various logical considerations and correct arguments.This activity can increase flexibility and detail, and student evaluation abilities.An improvement in the students' ability to evaluate ideas by 16.6 points (before and after the implementation of the MBCPS learning model from low to moderate category) was observed.The effectiveness of the CPSBM group was higher because this stage was carried out in a multi-representation manner, and not in the control group.Answering multiple visual representations (images), in the form of linking alternative solutions, requires high thinking and creativity.According to Ainsworth et al. 
(2011) drawing activities can increase engagement, to communicate, explore and justify understanding in science.The precision of the images provides an opportunity to exchange and clarify ideas.Besides, people prefer image stimulation to written words (Sweet, 2021).Various research related to multirepresentations has been used in various materials.Multiple representations have been proven to be effective in increasing understanding of scientific concepts (Carolan et al., 2008).Various scientific concepts, such as chemical concepts (Ferreira & Lawrie, 2019;Olaleye, 2012), work-energy concepts (Suhandi & Wibowo, 2012), solving Newton's law problems (Rizky et al., 2014), cognitive abilities physics (Widianingtiyas, 2015). The lowest stage of multiple representation is visualizing verbal answers into visuals.Students have not been able to immediately find pictures that match the verbal answer key concepts, connected by arrows to explain the relationship.Some students visualize using leaflets that are available on the internet so that it is not suitable for verbal answers and a clear concept map has not been formed.This happens because visualizing verbal answers requires high imagination, the ability to explore and is something new for students.This is in line with previous research which states that students focus more on visualizations of other people.Creating visualizations is an integral part of scientific thinking and improving understanding (Ainsworth et al., 2011;Ferreira et al., 2019).Visualization in class needs to be thoroughly integrated into the curriculum (da Silva et al., 2022). Research that supports the importance of multiple representations in learning includes developing knowledge about scientific concepts and processes and having the potential for effective learning (Waldrip et al., 2006), increasing scientific literacy (Prain, 2019), increasing scientific literacy (Gillies et al., 2020;Waldrip et al., 2006), and quality learning (Hubber et al., 2018).Therefore, mastery of multiple representations needs to be maintained and even developed further in the learning process by extending the duration of the learning process. After the MBCPS learning process, there was an increase in critical and different thinking skills in the experimental group and the control group, which was supported by indicators of analytical ability and evaluation ability.This happened because analysis and evaluation skills were the most important abilities in critical thinking that had been taught since learning.Various studies with various learning models show the same results, namely increasing critical thinking skills, including the Discovery Learning Model (Nurrohmi et al., 2017), the Wiki Learning Project (Crist et al., 2017), the Teaching Factory Based on Troubleshooting learning model (Maksum et al., 2022), and Scratch-assisted Wave teaching materials (Negoro et al., 2023).Critical thinking skills are very necessary in the learning process and dealing with various problems that arise.This is in accordance with research (de la Sienra, 2020) which states that critical thinking skills have been considered an important skill for future success and encouraging innovation. 
For comparison, students' critical thinking skills investigated in a related study showed an insignificant difference in all critical thinking indicators, both within groups (pre-test to post-test) and between groups (simulation versus written case studies) (Blakeslee, 2020). Moreover, no significant difference was found between students with high and low critical thinking in explicit textual reading; however, the students' ability to read implied texts and item-based scripts in both "experiment" and "control" groups differed significantly (Heidari, 2020).

Nutritional adequacy, including energy, carbohydrates, protein and fat, increased after learning in both the experimental and control groups. However, the levels of energy and carbohydrate adequacy were still in the deficit category (< 70%). This shows that the students do not have a good diet; they still lack macronutrients. This result is consistent with Indonesian national data on adolescents and adults, which show deficits in energy and carbohydrate, and even in protein and fat (Rahmawati et al., 2016). If this lack of energy is not corrected promptly, it can result in malnutrition. Energy adequacy is very important because energy intake is the main predictor of micronutrient adequacy (Gibson, 2007). Energy adequacy is also essential to support health and life (Sizer et al., 2020). According to the new WHO recommendations, carbohydrate intake for everyone aged 2 years and over should come from whole grains, vegetables, fruit and nuts, and WHO recommends that adults consume at least 400 grams of vegetables and fruit and 25 grams of natural dietary fiber per day (WHO, 2023). Conditions are better for the level of protein adequacy, which is in the medium category at around 91-93%. Adequate protein nutrition will support the students to study better, as proteins play a role in behavioral and neurocognitive development (Garcia et al., 2018; Kadosh et al., 2021).

The nutritional quality of food is indicated by the average nutritional adequacy in the moderate category (70% in the control group and 74% in the experimental group), but it is close to a deficit. The same result occurred for German students, who fell in the same moderate category as measured by the Healthy Eating Index (HEI-NVS) (Nossler et al., 2022). Therefore, it is recommended that students increase their food consumption towards balanced nutrition. The students' responses after the MBCPS and conventional learning on problems in balanced nutrition showed that the lessons encouraged them to pay more attention to their eating patterns, both in terms of the types of food and the portions. The students understand which foods are good to consume and which foods to reduce while still paying attention to nutritional adequacy.
Conclusion and Implications The achievement of student activities in the four stages of learning before and after the implementation model increased from 52% to 68%.Learning achievement in evaluating ideas to conclude the best alternative solution after implementing MBCPS learning increased significantly by two times and they were actively involved in learning.The effectiveness rate of the MBCPS learning model is still relatively reasonable with an achievement of 53%; whereas in the experimental group students improved critical thinking skills (p < 0.05) and Nutritional Quality of Diet (p < 0.05).MBCPS learning on balanced nutrition material is effective and able to improve critical thinking skills and Nutritional Quality of Diet with N-gains of 0.47 (moderate level) and 0.28 (low level), respectively.From the research analysis, it was found that the development of students' critical thinking skills, on the topic of "balanced nutrition" was sufficient.Research findings that students' critical thinking skills can assist students in choosing food and eating patterns to maintain a good daily diet and increase the level of nutritional adequacy to normal. Through learning MBCPS, students can obtain good health and well-being with normal nutritional adequacy status, both from excess and deficiency levels of nutritional adequacy.Improving the quality and effectiveness of the developed model needs to be applied to more sample questions and extending the learning process from 5 weeks to a minimum of 7 weeks so that student activities are more detailed and can be assessed more accurately.The MBCPS model has prospects for development in a wider branch of knowledge in the scope of University Health Education. 6. Based on the answers or information obtained at the stage of idea generation is analyzed in group discussions.In the group discussion are compiled alternative answers to all questions in order to make verbally correct decisions in detail, in order, and interrelated.A verbal answer is an answer that is presented in easy-to-understand sentences. Idea Evaluation 1) Verbal: The nutritional status of adults is most appropriate using the BMI indicator.BMI = weight (kg)/ Height 2 (m) = 62.7/1.6122= 24.1 (normal BMI 18.5 -25.0).Based on the BMI formula, the minimum normal weight can be calculated at a BMI of 18.5 and a maximum weight at a BMI of 24.9 and obtained a value of 48.1 to 64.7 kg.The risk of diseases that may be experienced is heart disease, diabetes and kidneys.7. Based on verbal answers, you look for images or photos from various sources (textbooks, journals, leaflets) that are appropriate, and combine them with arrows that make them look interesting, and easy to understand, answers. 2) Visual 8. If there is a calculation in the verbal answer, you rewrite the formula and the calculation result in the mathematics section. 9. At the solution validation stage, you must conduct an analysis to find the correct reason and strong evidence for each answer in the evaluation of ideas from various sources of textbooks, journals and research results. 
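The BMI arithmetic in the worksheet example above can be checked in a few lines; the height of about 1.61 m is inferred from the example, and the normal-range limits of 18.5 and 24.9 kg/m2 follow the worksheet. Small differences from the worksheet's 24.1 and 48.1-64.7 kg come only from rounding the height.

```python
# Quick check of the worksheet's BMI arithmetic (values taken from the example above).
weight_kg, height_m = 62.7, 1.61

bmi = weight_kg / height_m ** 2
low = 18.5 * height_m ** 2    # minimum "normal" body weight
high = 24.9 * height_m ** 2   # maximum "normal" body weight

print(f"BMI = {bmi:.1f}")                                  # ~24.2 (worksheet: 24.1)
print(f"normal weight range = {low:.1f}-{high:.1f} kg")    # ~48.0-64.5 kg (worksheet: 48.1-64.7)
```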
Table 1 The Syntax of MBCPS Learning Model has been Developed Consisting of Four Modified Stages of CPS in The Table 2 Component and Score the Multirepresentation-Based Creative Problem-Solving Learning Activities Table 3 The Achievement Criteria of the Multirepresentation-Based Creative Problem-Solving Learning Activities Table 4 Gain Factor Criteria <g> Table 5 Categories of Nutritional Adequacy Levels (NAL) Detailed Results of The Learning Activities and Its Achievement in The Implementation Syntax of the MBCPS Learning Model for the "Experiment Group" Table 7 Results of the Assessment and the Differential Test of The Students' Critical Thinking Skill Note.* Significant difference before and after learning Table 8 Description of the Students' Nutrition Quality of Diet of the "Experiment" and "Control" Groups Before andAfter the Implementation of the Learning Model Table 9 The Effectiveness Level of the Multirepresentation-Based Creative Problem-Solving Learning
2024-01-08T16:52:55.142Z
2023-12-31T00:00:00.000
{ "year": 2023, "sha1": "992fee1154b391ed81ac212e64012ccceafb7cec", "oa_license": null, "oa_url": "https://www.tused.org/index.php/tused/article/download/2005/870/10199", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "13354ce16d3b33ed14339dd9265bfd2f46964776", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
216047987
pes2o/s2orc
v3-fos-license
Comparison of four sarcopenia screening questionnaires in community-dwelling older adults from Poland using six sets of international diagnostic criteria of sarcopenia Introduction There are four screening sarcopenia questionnaires (SARC-F, SARC-CalF, MSRA-5, MSRA-7). To unambiguously determine which of them is the most effective tool in community-dwelling older adults, we performed a diagnostic accuracy study. The aim of the analysis was to assess the diagnostic values of SARC-F, SARC-CalF, MSRA-5, MSRA-7 and compare their psychometric properties against six criterion standards (EWGSOP1, EWGSOP2, FNIH, AWGS, IWGS, SCWD criteria). Materials and methods We included 100 community-dwelling volunteers aged ≥ 65yrs. The sensitivity/specificity analyses were performed. Receiver operating characteristic (ROC) curves and area under the ROC curves (AUC) were calculated to compare the overall diagnostic accuracy of the four questionnaires. Ideal screening tools should have reasonably high sensitivity and specificity, and an AUC value above 0.7. Results With respect to the six criterion standards used, the sensitivity of SARC-F, SARC-CalF, MSRA-5, and MSRA-7 ranged 35.0–90.0%, 20.0–75.0%, 64.7–90.0%, 76.5–91.7%, respectively, whereas the specificity ranged 86.9–91.1%, 80.0–90.0%, 45.8–48.8%, 28.9–31.0% respectively. The AUCs of SARC-F, SARC-CalF, MSRA-5, and MSRA-7 ranged from 0.655–0.882, 0.711–0.874, 0.618–0.782 and 0.588–0.711 respectively. Only SARC-CalF had AUC >0.7 and <0.9 against the six criterion standards but obesity was a confounding factor, which may affect the diagnostic power of SARC-CalF. MSRA-7 had the smallest AUC of all the questionnaires and MSRA-5 had slightly larger AUC than MSRA-7. Conclusion Based on our analysis, the standard sarcopenia screening questionnaires deliver contradictory results in many practically occurring cases. It appears that SARC-CalF is an optimal choice for screening sarcopenia in community-dwelling older adults. Introduction Sarcopenia is a significant public health concern which causes a substantial economic burden. Mijarends et al. [1] found that the average costs of health care provided to Dutch communitydwelling older adults with sarcopenia were almost three times higher than in non-sarcopenic individuals, amounting to 4,325 euro and 1,533 euro, respectively. It is thus essential to detect sarcopenia at the earliest possible stage, when there are yet no apparent symptoms of the condition (e.g. muscle weakness), to limit these over-expenses. Timely recognition of sarcopenia makes early treatment possible which, in turn, minimises the risk of severe consequences in the future (e.g. falls, injuries, hospitalisation, and even death) [2]. Despite the widespread interest in sarcopenia for over three decades (the term 'sarcopenia' was first proposed in medicine in 1988 by Rosenberg [3]), there exists no effective screening tool for this condition. The SARC-F questionnaire developed by Malmström and Morley, and first published in 2013, appears to be the most popular screening test [4]. A range of studies has found SARC-F to be characterised by low sensitivity but high specificity [5][6][7][8]. It is stressed, though, that high sensitivity is hugely desirable for a screening test, resulting in a good ability to detect individuals who actually have the condition. Given the low sensitivity of SARC-F, Barbosa-Silva et al. [9] proposed a new questionnaire, called SARC-CalF, in 2016. 
It evaluates the same domains as SARC-F, but it uses calf circumference (CC) as an additional measurement. In a few studies, SARC-CalF was found to have superior sensitivity than SARC-F, and similar specificity [9][10][11]. Another promising questionnaire proposed for sarcopenia screening is Mini Sarcopenia Risk Assessment Questionnaire (MSRA), available in two versions: short (MSRA-5) with five items, and full (MSRA-7) with seven items. The questionnaire, developed by Rossi et al. [12], was first published in 2017. However, the number of studies on its diagnostic value is minimal [12,13]. As already mentioned, SARC-F is currently the most popular of the four available sarcopenia screening questionnaires. In September 2018 the Extended European Working Group on Sarcopenia in Older People (EWGSOP2) revised the criteria for sarcopenia initially published in April 2010 and recommended the application of SARC-F as a screening tool in the first step of the practical algorithm: the so-called Find-Assess-Confirm-Severity (FACS) pathway [14,15]. Furthermore, the European Union Geriatric Medicine Society, Sarcopenia Special Interest Group, has taken action to validate different language versions of this questionnaire [16]. To unambiguously determine which questionnaire is the most effective tool for sarcopenia screening, analyses are necessary to compare the diagnostic values of each of the tools against gold standards, both in community-dwelling older people and high-risk groups, i.e. hospitalised older patients and residents of nursing homes. To the best of our knowledge, the only study comparing the SARC-F, SARC-CalF, MSRA-5 and MSRA-7 questionnaires is the analysis by Yang et al. [17], performed in residents from nursing homes. No studies comparing all four tools in non-institutionalised older subjects have been published. For community-dwelling older adults, there are as few as three reports that compare the diagnostic values of the SARC-F and SARC-CalF questionnaires [9,10,18], and one analysis comparing MSRA-5, MSRA-7 and SARC-F [19]. Thus, our study aims to assess the diagnostic value of these tools and compare the obtained results to fill the research gap in this area. Materials and methods We performed a diagnostic accuracy study from March until July 2019, for which we recruited older adults, living in the community in Poznan, one of the largest cities in Poland. The inclusion criteria were as follows: age (65 years or more), lack of cognitive impairment [defined as Abbreviated Mental Test Score (AMTS) � 8 points)], the ability to take a vertical position (necessary for measuring body height and analysing body composition for the assessment of Appendicular Lean Mass), and the ability to perform a 4-m usual walking speed test. The exclusion criteria were designed based on what makes the measurement of body composition impossible (e.g., implanted artificial pacemaker or the presence of metal implants). One hundred ten persons volunteered for the study. Ten of them were excluded for the following reasons: cognitive impairment (n = 5), having a pacemaker (n = 2), physical disability preventing a 4-m usual walking speed test (n = 3). The study protocol was approved by the Bioethics Committee of the Poznan University of Medical Sciences, Poland (approval No: 872/18). Informed consent was obtained from each subject prior to the study. According to the EWGSOP1 criteria [14], sarcopenia is defined as low muscle mass (LMM) and strength, and/or low physical performance. 
We used cut-off points for LMM for the Polish population defined by the ALM index and young, healthy reference population aged 18-40 years, i.e. 7.4 kg/m 2 for men and 5.6 kg/m 2 for women [24]. Each subject was considered to have low muscle mass if their ALM index was less than or equal to the sex-specific Polish cutoff points. The cut-off point for low handgrip strength (HGS) was <30 kg for men, <20 kg for women and the cut-off point for low physical performance was a gait speed (GS) of � 0.8 m/s both sexes. According to EWGSOP2 definition [15], sarcopenia is defined as low muscle strength, ie. HGS < 27 kg for men and <16 kg for women and/or chair stand test (CST) > 15 s for both sexes and low muscle quantity (i.e. low muscle mass). To define low muscle mass, we used the same as in the EWGSOP1 algorithm sex-specific Polish cut-off points (i.e. �7.4 kg/ m 2 for men and �5.6 kg/m 2 for women [24]). In accordance with the recommendations of FNIH [20] sarcopenia is defined as low muscle mass [appendicular lean mass (ALM)/body mass index (BMI): <0.789 for men and <0.512 for women], and weakness (HGS: <26 kg for men <16 kg for women), and slowness (GS �0.8 m/s for both sexes). According to the diagnostic criteria of AWGS [21], sarcopenia is defined as low muscle mass (ALM index <7.0 kg/ m 2 for men and <5.6 kg/m 2 for women), accompanied by low muscle function (HGS < 26 kg for men and < 18 kg for women and/or GS < 0.8 m/s for both sexes). According to the IWGS criteria [22], sarcopenia is defined as an ALM index value �7.23 kg/m 2 for men and �5.67 kg/ m 2 for women, and a GS value of <1 m/s both sexes. According to the diagnostic criteria of the SCWD [23], sarcopenia is defined as low muscle mass and low physical performance. Following the recommendations of SCWD [23], we used Polish cut-off points determined earlier from a study of healthy subjects between 20 and 30 years of age of the same ethnic group, i.e. 7.29 kg/m 2 for men and 5.52 kg/m 2 for women [25]. Each participant was considered to have low muscle mass if their ALM index was below or equal these sex-specific cut-off points for LMM. The cut-off point for low physical performance was a gait speed (GS) of � 1.0 m/s for both sexes. Assessment of muscle mass The muscle mass level was assessed in each study participant using the BIA method (InBody 120, Biospace, Seoul, South Korea). The InBody 120 is a segmental impedance device which uses a tetrapolar 8-point tactile electrode method. The device has built-in hand and foot electrodes. Ten impedance measurements are performed using two different frequencies (20 and 100 kHz) at each segment (right arm, left arm, trunk, right leg, and left leg). The subject's identification number, age, sex and height were entered into the analyser. The analyser gives immediate and detailed results, including quantitative values of weight, BMI and other body composition parameters. Only segmental lean mass data were used for further analysis for calculating the Appendicular Lean Mass (ALM) index. The ALM index [the ratio of ALM (kg) and squared height (m 2 )] was calculated for each subject. Height assessment was performed by means of a mobile stadiometer (Tanita, Poznan, Poland). Assessment of muscle strength Muscle strength was assessed by handgrip strength with a dynamometer (Saehan, Changwon, South Korea). Participants performed the handgrip strength test in a sitting position, with arms bent to 90 degrees in the elbow and shoulder joint. Both the left and right arms were measured twice. 
The results were recorded in kilograms (kg). The mean value of all measurements was used as the final score for each individual. We also assessed lower limb strength using The Chair Stand Test (CST), which was necessary to apply the EWGSOP2 criteria [15]. Each subject was asked to rise five times from a chair with arms folded across the chest, and the time needed to complete the test was measured. The results were recorded in seconds (s). Assessment of physical performance Physical performance was assessed using the 4-m usual walking speed test. This test measures the walking pace at the distance of 4 meters-subjects are asked to walk the course at their usual gait speed. Time taken to perform the walk was recorded, and the result expressed as meters per second. If necessary, canes or walkers were permitted during this test. Screening for sarcopenia The risk of sarcopenia was evaluated in each studied subject using four questionnaires: SARC-F, SARC-CalF, MSRA-7, and MSRA-5. The SARC-F questionnaire. The SARC-F [4] examines five domains: 1) strength, 2) assistance with walking, 3) rising from a chair, 4) climbing stairs, and 5) falls, scored from 0 to 2. A score of �4 out of the maximum of 10 points indicates a risk of sarcopenia. The SARC-CalF questionnaire. SARC-CalF [9] is composed of six items, the first five items being and scored the same as the SARC-F and the sixth additional item being the calf circumference item (CC; measurement of the right calf in standing position).The measurement of CC requires the use of an anthropometric measuring tape. The CC score is interpreted separately for each gender. The cut-off points of CC are 34 and 33 cm for men and women, respectively. The CC item is scored as 0 points if its value is above the cut-off points and as 10 if its value is below or equals the cut-off points. A score of �11 points indicates a risk of sarcopenia. The MSRA questionnaires. The full version of the MSRA questionnaire (MSRA-7) examines seven domains including 1) age, 2) hospitalisation in the last year, 3) level of activity, 4) regularity of meals, 5) daily dairy consumption, 6) protein intake, and 7) weight loss >2 kg in the last year. The short version (MSRA-5) excludes dairy and protein consumption. A total score of MSRA-7 �30 and MSRA-5 �45 points indicates a risk of sarcopenia. Covariates Assessment of cognitive function. Cognitive functions were assessed with the Abbreviated Mental Test Score (AMTS) [26]. The test is composed of 10 questions. Every subject scores 1 point for a correct answer and 0 points for an incorrect answer or no answer. Individuals who score 8 points or more are considered cognitively intact. Only subjects who scored at least 8 points were qualified for this study. Nutritional assessment. To evaluate the nutritional condition of the participants, the Mini Nutritional Assessment-Short Form (MNA-SF) was used [27]. The MNA-SF is composed of 6 items and assesses decrease in food intake, weight loss, mobility, psychological stress or acute disease, neuropsychological problems (dementia or depression), and BMI. The maximal score of the MNA-SF is 14 points. A score below 7 points indicates malnutrition, 8-11 points-a risk of malnutrition, and 12 points or more-normal nutritional status. Assessment of independence in activities of daily living. Independence in basic and instrumental activities of daily living was assessed with the Katz scale and Lawton scale, respectively [28,29]. 
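Returning to the diagnostic cut-offs and screening questionnaires described above, the decision rules can be summarised in a short sketch. The field names and the example record are illustrative; the numeric cut-offs are the ones quoted in this section (the Polish low-muscle-mass cut-offs combined with the EWGSOP1 and EWGSOP2 strength and performance cut-offs, the SARC-F/SARC-CalF item weights, and the calf-circumference thresholds). It is a sketch of the rules as stated in the text, not the authors' analysis code.

```python
from dataclasses import dataclass

@dataclass
class Subject:
    sex: str              # "M" or "F"
    alm_index: float      # appendicular lean mass / height**2, kg/m**2
    handgrip_kg: float
    gait_speed_ms: float
    chair_stand_s: float
    calf_cm: float
    sarc_f_items: tuple   # five SARC-F answers, each scored 0-2

def low_muscle_mass(s: Subject) -> bool:
    # Polish cut-offs quoted in the text: <= 7.4 kg/m^2 (men), <= 5.6 kg/m^2 (women)
    return s.alm_index <= (7.4 if s.sex == "M" else 5.6)

def sarcopenia_ewgsop1(s: Subject) -> bool:
    # EWGSOP1: low muscle mass AND (low strength and/or low physical performance)
    low_strength = s.handgrip_kg < (30 if s.sex == "M" else 20)
    slow = s.gait_speed_ms <= 0.8
    return low_muscle_mass(s) and (low_strength or slow)

def sarcopenia_ewgsop2(s: Subject) -> bool:
    # EWGSOP2: low strength (handgrip or chair stand test) AND low muscle quantity
    low_strength = s.handgrip_kg < (27 if s.sex == "M" else 16) or s.chair_stand_s > 15
    return low_strength and low_muscle_mass(s)

def sarc_f_positive(s: Subject) -> bool:
    # Five items scored 0-2; a total of >= 4 flags a risk of sarcopenia
    return sum(s.sarc_f_items) >= 4

def sarc_calf_positive(s: Subject) -> bool:
    # SARC-F items plus 10 points if calf circumference is at or below the
    # cut-off (34 cm for men, 33 cm for women); a total of >= 11 flags a risk
    calf_points = 10 if s.calf_cm <= (34 if s.sex == "M" else 33) else 0
    return sum(s.sarc_f_items) + calf_points >= 11

example = Subject(sex="F", alm_index=5.4, handgrip_kg=15.0, gait_speed_ms=0.7,
                  chair_stand_s=17.0, calf_cm=32.0, sarc_f_items=(1, 0, 1, 1, 0))
print(sarcopenia_ewgsop1(example), sarcopenia_ewgsop2(example))   # True True
print(sarc_f_positive(example), sarc_calf_positive(example))      # False True
```

The example record is constructed so that the subject meets the diagnostic criteria yet scores only 3 SARC-F points, which illustrates why the calf-circumference item can raise the sensitivity of SARC-CalF relative to SARC-F.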
The Katz scale is composed of six tasks: bathing, dressing and undressing, toileting, transferring from and to bed, and continence (bowel and bladder), scored as 0, 0,5 or 1. According to the ADL score, participants were classified as: dependent (0-2 points), partially dependent (3-4 points) and independent (5-6 points). The Lawton scale assesses performance in eight dimensions: the ability to use the telephone, ability to use different modes of transportation, shopping, food preparation, housekeeping (doing laundry and cleaning), control over one's own medications and ability to handle finances, scored from 1 to 3. The maximum score is 24 points. As far as the Lawton scale is concerned, there are no cut-off points that would define different levels of independence. However, it does allow for profiling the patient's needs for assistance or care, as lower results indicate a higher level of dependence. Statistical analysis Statistical analysis was performed using the STATISTICA 12.0 package (StatSoft, Poland). Continuous data were presented as mean ± SD and compared using a Student's t-test or the Cochran-Cox test or Mann-Whitney test as appropriate. Categorical variables were expressed as number (percentage) and compared with the χ2 test (applying the Yates correction when necessary). The EWGSOP1 [14], EWGSOP2 [15], FNIH [20], AWGS [21], IWGS [22], and SCWD criteria [23] were used as the criterion standards for sarcopenia (gold standards). Next, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of SARC-F, SARC-CalF, MSRA-5 and MSRA-7 were calculated. The sensitivity is the proportion of subjects actually presenting sarcopenia (based on the gold standard), having been correctly identified as sarcopenic using the screening test (i.e., positive screening test). The specificity represents the proportion of individuals who do not have sarcopenia (based on the gold standard), which were correctly identified as non-sarcopenic using the screening test (i.e., negative screening test). The PPV is a measure of the probability of presenting sarcopenia in case of a positive screening test; in turn, the NPV represents the probability of not having sarcopenia in case of a negative screening test [30]. All of these parameters were specified with 95% confidence intervals (CI). The ROC curve was used for comparing the overall diagnostic accuracy. Areas under the ROC curve (AUC) with 95%CI were calculated. A higher AUC corresponded to a higher overall diagnostic accuracy. It was assumed that the AUC values >0.9, 0.7 to 0.9, and 0.5 to 0.7 corresponded to the high, moderate and low diagnostic accuracy of the screening test, respectively [10,31]. The areas under the ROC curve were compared using the Hanley-McNeil non-parametric method [32,33]. Characteristics of the study group The analysis included a total of 100 community-dwelling volunteers aged 65 years and older (age range: 65-93 years); 21% of them were male. Table 1 shows the characteristics of the whole study group by gender. The mean age of women and men was comparable (p>0.05). Comparing the women to men, women were statistically significantly shorter (156.9±6.0 vs 173.5±6.6 cm, p<0.001) and thinner (65.9±13.8 vs 78.1±11.7 kg, p<0.001) but had similar BMI to men (26.8± 5.6 vs 25.9±3.6 kg/m 2 , p>0.05). Almost 1/5 of the study group had low BMI, and this feature was observed twice as often in women. Almost 1/3 of the participants had poor nutritional status (i.e. malnutrition or risk of malnutrition). 
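Before the results are presented, the diagnostic-accuracy measures defined in the statistical analysis above reduce to simple ratios over a 2x2 table of screening result versus criterion standard. The sketch below shows the computation on made-up counts; it is not the study data and the numbers are purely illustrative.

```python
def diagnostic_accuracy(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV from a 2x2 table in which the
    criterion standard defines true sarcopenia status and the questionnaire
    defines the (positive/negative) screening result."""
    return {
        "sensitivity": tp / (tp + fn),   # sarcopenic subjects correctly flagged
        "specificity": tn / (tn + fp),   # non-sarcopenic subjects correctly cleared
        "ppv": tp / (tp + fp),           # probability of sarcopenia given a positive screen
        "npv": tn / (tn + fn),           # probability of no sarcopenia given a negative screen
    }

# Illustrative counts only: 20 sarcopenic and 80 non-sarcopenic subjects.
print(diagnostic_accuracy(tp=14, fp=9, tn=71, fn=6))
# sensitivity 0.70, specificity ~0.89, PPV ~0.61, NPV ~0.92
```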
Almost all participants were independent according to the ADL scale. The group of studied women had a statistically significantly lower score for the activities of daily living (5.7±0.5 vs 5.9±0.3 points, p<0.05, respectively), but not for the instrumental activities of daily living (25.5±2.9 vs 26.0±1.7 points, p>0.05). More than half of the participants took four or more drugs a day-this affected women and men equally. An assessment of muscle function showed that men were stronger than women (32.8±8.0 vs 19.0±5.0 kg, p<0.001, respectively). However, both groups were characterised by a similar level of physical performance assessed by the 4-m usual walking speed test. Lower appendicular lean mass (ALM) was found in the studied women compared to men (15.7±2.8 vs 23.4±2.6 kg, respectively, p<0.001). The ALM index was statistically significantly lower in women than in men as well. Table 1 also contains the mean values for the studied sarcopenia screening questionnaires for the all study group and according to gender. No statistically significant differences were noted. Table 2 summarises the answers given to the questions from the SARC-F questionnaire, with additional calf circumference measurement (for the SARC-CalF questionnaire). Almost half of the respondents reported difficulties with lifting and carrying a weight of 5 kg, and this problem was statistically significantly more frequently reported by women (p<0.05). 1/3 of the participants indicated problems standing up from a chair or bed. Almost a quarter of the study group reported problems climbing a flight of 10 stairs and experienced at least one fall in the past year. About 15% of participants declared moderate or major difficulties in walking across a room. Calf circumference below the recommended cut-off points (� 33 cm for women and � 34 cm for men) was observed in more than 1/4 of the subjects (comparably often in the group of women and men). Table 3 presents the answers given to the questions from the MSRA questionnaire. Over 2/ 3 of the study group was aged 70 years or above. More than 1/3 of participants reported that they had been treated in hospital at least once in the last year. A similar percentage indicated that they lost weight >2kg in the last year, and this issue affected women 2.5 times more often than men (p = 0.0621 was close to statistical significance). About 1/5 of respondents skip a meal up to twice per week, and a quarter of the participants in this analysis did not consume protein-rich products (e.g. meat, eggs, legumes, milk or dairy products). 1/5 of the study group was unable to walk more than 1000 metres. Prevalence of sarcopenia The frequency of sarcopenia varied from 17% to 72%, depending on the questionnaire used (Table 4). SARC-F identified the lowest number of subjects with a risk of sarcopenia (17 persons, including 16 women), whereas MSRA-7 -the highest (72 persons, including 56 women). A large spread of results was observed when we used six sets of international diagnostic criteria for sarcopenia ( Table 4). The lowest percentage of patients with sarcopenia (10%) was diagnosed with FNIH criteria. In contrast, the highest percentage of patients with sarcopenia (20%) was identified by the EWGSOP1 criteria. The same frequency of sarcopenia was recognised by IWGS and SCWD criteria (n = 12). Regardless of the type of screening test or diagnostic criteria for sarcopenia, the condition was found to be more prevalent in women than in men. 
However, due to a low number of men with sarcopenia in our study, the statistical analysis including gender was not performed. Diagnostic value of the analysed questionnaires for sarcopenia screening Concerning the six criterion standard for sarcopenia (gold standards) used in the study, the sensitivity of the compared tools varied in the following ranges: SARC-F 35.0-90.0%, Discussion Since sarcopenia has serious health implications, early detection of the condition through screening in the general population is an important task. Several sarcopenia screening tools are currently available, but there have scarcely been any studies to determine which of them has superior efficacy in detecting sarcopenia in community-dwelling older people. Our analysis fills this gap. To the best of our knowledge, the results reported in this paper are the first analysis of this type in Caucasian community-dwelling older adults (from Central and Eastern Europe). The purpose of screening is to detect sarcopenia in as early a stage as possible, so that early therapeutic intervention is possible. However, the screening results must be verified with a subsequent professional diagnosis, due to the risk of a false positive. Ideal screening tools should thus have reasonably high sensitivity and specificity, and an AUC value above 0.7 [10,34]. The larger the AUC, the better the overall diagnostic accuracy [10,18]. In our analysis, SARC-F was shown to have the highest sensitivity (90.0%), high specificity (91.1%) and large AUC (0.882), but only against the FNIH criteria [20]. At the same time, the FNIH criteria recognised the lowest percentage of people with sarcopenia (ten persons). In turn, based on the literature, SARC-F had low sensitivity, but high specificity and overall good diagnostic accuracy [5][6][7][8]. That was confirmed by our study for five out of six sets of international diagnostic criteria of sarcopenia (except the results related to the FNIH criteria). In response to the reported unsatisfactory sensitivity of SARC-F, Barbosa-Silva et al. proposed an extension of the questionnaire for sarcopenia screening, called SARC-CalF [9]. In an analysis of 179 older Brazilians, a comparison of SARC-CalF against SARC-F showed the former to have higher sensitivity (66.7% vs 33.3%, respectively) and AUC (0.736 vs 0.592, respectively), and comparable specificity (82.9% vs 84.2%, respectively). Only the EWGSOP1 criteria were used as the gold reference standard in this analysis. SARC-CalF differs from SARC-F by the evaluation of an additional parameter (calf circumference). This measurement should be regarded as a surrogate measure for muscle mass, which, in addition to low muscle strength, represents an essential component of sarcopenia. In our analysis, SARC-CalF, depending on the reference standard, exhibited highly varied sensitivity (20.0 to 75.0%), a less varied specificity (80.0 to 90.0%) and moderate diagnostic accuracy (AUC: from 0.711 to 0.874). SARC-CalF was shown to have the lowest sensitivity against the FNIH criteria (only 20.0%), with sarcopenia identified in only two older person, even though in relation to the same criteria SARC-F exhibited 90.0% sensitivity and detected this condition in 9 out of 10 subjects. Such discrepancies may be attributed to obesity (BMI>30 kg/m 2 ) and large calf circumference in six of these ten individuals, which exceeded the CC cut-off points in the SARC-CalF questionnaire. 
It should be noted here that, according to the SARC-CalF questionnaire, a score of �11 points already indicates a risk of sarcopenia. As a consequence, if the calf circumference is small (� 33 for women and � 34 cm for men, which gives 10 points), a slight deterioration in one of the other five evaluated domains is sufficient to be screened as sarcopenic. Accordingly, if large deficits are present in those five domains, the maximum score of 10 points can be obtained, but that alone is not enough to detect sarcopenia with SARC-CalF. In addition, Mohd Nawi et al. stressed that calf circumference measurements might be unreliable in many [2]. In our analysis, obesity was a confounding factor. Obesity does not exclude the coexistence of sarcopenia (i.e. sarcopenic obesity) but often masks low muscle mass [35]. Also, Yang et al. reported that using SARC-CalF may bear a risk of masking sarcopenia in older subjects with obesity [17]. The literature lists just one study in which the diagnostic values of the MSRA-5 and MSRA-7 questionnaires were compared with SARC-F in community-dwelling elderly individuals [19]. In this analysis, conducted by Yang et al., 384 elderly Chinese individuals were included, in which only one gold standard was used-the AWGS criteria. In contrast to our results, they showed a similar frequency of sarcopenia risk when using both the MSRA-7 and MSRA-5 questionnaires (34.4% and 39.0% respectively). In turn, SARC-F identified sarcopenia risk in 12.2%. Unfortunately, the possible causes of these discrepancies were not discussed by the authors. Similarly to our results, MSRA-5 showed higher sensitivity, specificity and AUC than MSRA-7, and SARC-F had much lower sensitivity but higher specificity than both MSRA-5 and MSRA-7. However, in the study by Yang et al. [19], MSRA-5 and SARC-F had similar overall diagnostic accuracy, which is not consistent with our results. It is worth pointing out that MSRA is based on low muscle mass risk factors, whereas SARC-F is based on the symptoms of sarcopenia, focusing on parameters related to the assessment of muscle strength. It should also be noted that four out of seven questions from the MSRA-7 questionnaire address issues related to the problem of malnutrition in old age (skipping meals, inadequate protein intake and dairy products consumption, weight loss). The use of MSRA-7 in our study group indicated a risk of sarcopenia in almost 3/4 respondents, while MSRA-5 (version without two questions about protein intake and dairy products)-in over 1/2 of them. In both cases, the indicated percentage of respondents with possible sarcopenia seems overestimated, especially so since the prevalence of sarcopenia in Poland is below 13% [36]. These results may be affected by the nutritional status of the respondents (almost 1/3 of them had poor nutritional status). In Poland, almost every second older person presents inadequate nutritional status, as demonstrated by the Polsenior study (representative of the Polish population) [37]. In addition, many of our respondents' answers indicated a poorly balanced diet (i.e. irregular consumption of protein-rich products and/or skipping main meals). If the intake of calories and protein is low, it may contribute to weight loss and protein-energy malnutrition. In turn, malnutrition increases the risk of sarcopenia, as noted in 2012 by Vandewounde et al. who introduced the concept of Malnutrition-Sarcopenia Syndrome [38]. 
Moreover, in the original version of the MSRA-7 and MSRA-5 questionnaires, to have a positive screening result, it is enough to be aged 70 or over and lose weight >2 kg in the last year, or be hospitalised in the previous year. Many of our subjects met these conditions, but after using various diagnostic algorithms for sarcopenia, it turned out that they did not have it. We think that the cut-off points for MSRA-7 and MSRA-5 proposed by Rossi et al. [12] (� 30 points and � 45 points, respectively) may not be suitable for populations similar to the Polish one. Our study has some limitations. Firstly, a relatively small group of men (n = 21) was included in this analysis-this is mainly due to the feminisation of old age in Poland and the fact that older men are less likely to volunteer for research. Moreover, due to a low number of men with sarcopenia in our study, the comparative analysis for sarcopenia prevalence according to gender was not performed. Secondly, in our study, we collected neither the socio-demographic data (i.e., marital status, living alone, level of education) nor information on the number of chronic diseases or those potentially related to sarcopenia. Thirdly, we used the BIA method for the assessment of ALM instead of CT, MRI or DEXA, which are considered more precise but are hardly available in Poland. Moreover, BIA is free of x-ray exposure and seems to be a more practical (because analysers are portable) and inexpensive choice. A strong point of our analysis is that we were the first to use all currently available sets of international diagnostic criteria for sarcopenia as a gold standard [there are six of them, developed independently by European Working Group on Sarcopenia in Older People 1 (EWG-SOP1) [14], European Working Group on Sarcopenia in Older People 2 (EWGSOP2) [15], Foundation for the National Institutes of Health (FNIH) Sarcopenia Project [20], Asia Working Group for Sarcopenia (AWGS) [21], the International Working Group for Sarcopenia (IWGS) [22], and Society on Sarcopenia, Cachexia and Wasting Disorders (SCWD) [23]. Conclusions Based on our analysis, the standard sarcopenia screening questionnaires deliver contradictory results in many practically occurring cases. It appears that SARC-CalF is an optimal choice for screening sarcopenia in community-dwelling older adults. However, the SARC-CalF may be inappropriate for use in obese subjects (those who often present a large calf circumference). The original cut-off points for the MSRA questionnaires may not be suitable for countries that have a high proportion of older people with poor nutritional status and inadequate diet. Perhaps, for such populations, it would be justified to set new cut-off points.
2020-04-22T13:04:54.136Z
2020-04-20T00:00:00.000
{ "year": 2020, "sha1": "2f30a3cef8d08e83e2789fc7f5dfec32c3bd4875", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0231847&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b4496b9c0057ee6095fdc48db47f526f496f6979", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
67753932
pes2o/s2orc
v3-fos-license
On Estimation of Discretization Error Norm via Ensemble of Approximate Solutions The issue of single-grid discretization error estimator, operating in the postprocessor mode, is addressed in the paper. An ensemble of numerical solutions, obtained using solvers of different accuracy, is shown to provide an upper estimate for the norm of the discretization error. Numerical tests for the supersonic flows, governed by two dimensional Euler equations, confirm the feasibility for the norm estimation, if the ensemble of numerical solutions is separated into clusters of accurate and inaccurate solutions. Introduction The standard grid convergence strategy is based on the heuristic rule by C. Runge [1]. From this perspective, if the difference between two approximate solutions on coarse grid and on the fine grid is small, then solutions are close to exact one. However, from a practical needs perspective one should desire a quantitative estimate of the form d £ -u u h~ with computable d . Formally, the Richardson method [2] is close to this ideal. It enables us to determine the refined solution and the error estimate using a set of solutions computed on different meshes, which should belong to the asymptotic range of convergence. Two meshes are necessary if a single error order exists in flow field. Unfortunately, in most CFD problems the error order on different flow structures varies, so the order should be determined additionally, requiring at least three consequent meshes and causing an ill-posed statement. The present paper addresses an alternative to the Richardson method. The set of solutions is collected on the same mesh using different solvers that provide an estimate of the global error norm. Calculations may be terminated if a preassigned error level d £ -u u h~ is attained. At present, CFD uses a wide selection of numerical methods that are characterized by a rich variety of properties such as monotonicity, conservativity, order of approximation etc. This is naturally caused by the search for more "accurate" numerical solutions. The abundance of numerical methods may provide some additional opportunities for quantitative analysis of CFD results, which we consider herein. The "accurate" and "inaccurate" numerical schemes are often compared in terms such as the truncation error and the discretization error. The truncation error u d is obtained via Taylor series decomposition of the discrete operator For linear problems, the approximation error ) ( n h O u = D tends to zero with the same order n (by Lax theorem, [3]), if the discrete operator is well-posed (i.e. the inverse operator is uniformly bounded ). The estimation of the error order is significantly more complicated for the case of nonlinear equations with discontinuities [4, 5,6, and 7]. In this event, the discretization error comprises the components of different orders, which occur at various elements of the flow structure (such as shock waves or expansion fans). Thus, the observed order of local convergence may not be equal to the nominal order of the approximation error even for the asymptotic range. There are several general directions for the error estimation. A priori error estimation is the most widely used approach for the error analysis and may be expressed in the form which contains the unknown constant independent of the current numerical solution. It is the theoretical basis both for the development of numerical algorithms and for the mesh refinement strategy commonly used in CFD. 
A posteriori error estimation [1, 8] has the form $\|\Delta u\| \le C_h \varepsilon_h$, where $C_h$ is the computable stability constant, which depends on the numerical solution, and $\varepsilon_h$ is the computable indicator of the truncation error. At present, the main successes in this direction have been achieved for elliptic partial differential equations and finite element methods. In most practical applications the stability constant is not estimated, while the error indicator is used for mesh adaptation. Estimates of the truncation error $\delta u$ may serve as the simplest computable error indicator. It may be computed by the action of the high order scheme stencil on the precomputed flow field [9, 10], by the action of the differential operator on the interpolation of the numerical solution [11], or via the differential approximation [12, 13].

The application of the truncation error $\delta u$ implies the calculation of the discretization (global) error $\Delta u$. A survey of the error calculation methods may be found in [14]. In the simplest form, the estimation of this error may be performed using a defect correction [9, 15]. In the defect correction frame, the truncation error $\delta u$ is used as a source term inserted into the discrete algorithm in order to correct the solution. However, the total subtraction of the error implies the elimination of the scheme viscosity, which may cause oscillations near discontinuities or an activation of some additional dissipation sources, which engender their own error. The estimation of the error may also be performed via a linearized problem [16], complex differentiation [17] or adjoint equations [10, 11, 13, 18]. Usually, adjoint equations are applied to the estimation of the uncertainty of certain valuable functionals (drag, lift coefficients etc.). Nevertheless, the approach of [13] enables one to estimate the norm of the solution error. Unfortunately, it requires solving a number of adjoint problems proportional to the number of grid nodes, which implies an extremely high computational burden.

The presence of unknown components of the truncation error is the general disadvantage of the residual-based error estimation methods discussed above. The methods based on the differential approximation use the minor terms of the Taylor series [13] and do not account for the remaining higher order terms. The postprocessor based methods do not account for the higher order terms of the scheme truncation error [10] or for the interpolation errors [11].

The present paper considers the feasibility of finding the discretization error norm using an ensemble of calculations performed by solvers of different approximation order on the same mesh. We shall refer to this operation as the "ensemble based error estimation". Since the analysis is conducted in the space of numerical solutions, the truncation error is accounted for implicitly and completely. It is important that mesh refinement is not used, so only moderate computational costs are required. The $L_1$ norm seems to be natural for problems dealing with shocks, since most results on the approximation error are obtained in this norm [4, 5]. However, most practical research interests are related to valuable functionals (lift, drag, etc.). Their uncertainty may be related to the $L_2$ error norm via the Cauchy-Bunyakovsky-Schwarz inequality. For this reason, the discrete $L_2$-equivalent norm is used herein, while some results for other norms may be found in [19].

The presentation of this paper is organized as follows.
In Section 2 we discuss the estimation of the discretization error norm based on a priori information regarding the error magnitude rating. Section 3 considers the a posteriori analysis of the error norm relations provided by the ensemble of numerical solutions obtained by different solvers. The supersonic shocked flows, described by the two dimensional Euler equations, are considered as the test problems in Section 4. The results of the ensemble based error norm estimation in comparison with the true error are presented for a set of solvers. Section 5 provides a discussion of the features of the considered approach to the error analysis. Conclusions are presented in Section 6.

The estimate of the discretization error norm via the set of approximate solutions

Let us consider an ensemble of numerical solutions obtained using finite difference or finite volume schemes of different accuracy orders on the same grid. Let the relation of the approximation errors of these schemes be known a priori. We denote the numerical solution obtained by the $i$-th scheme as the vector $u^{(i)}$. In the simplest event of two numerical solutions $u^{(1)}$ and $u^{(2)}$ the following theorem may be stated.

Theorem 1. Let the norm of the difference of two numerical solutions $\|u^{(1)} - u^{(2)}\|$ be known from computations and let there be the a priori information

$\|u^{(1)} - \tilde{u}\| \ge 2\,\|u^{(2)} - \tilde{u}\|$;  (2)

then the norm of the error of the approximate solution $u^{(2)}$ is less than the norm of the difference of the solutions:

$\|u^{(2)} - \tilde{u}\| \le \|u^{(1)} - u^{(2)}\|$.  (3)

Proof. The triangle inequality [20] for $u^{(1)}$, $u^{(2)}$ and $\tilde{u}$ assumes the form $\|u^{(1)} - \tilde{u}\| \le \|u^{(1)} - u^{(2)}\| + \|u^{(2)} - \tilde{u}\|$. By accounting for (2) this yields $2\,\|u^{(2)} - \tilde{u}\| \le \|u^{(1)} - u^{(2)}\| + \|u^{(2)} - \tilde{u}\|$ and, finally, the desired expression (Eq. 3): $\|u^{(2)} - \tilde{u}\| \le \|u^{(1)} - u^{(2)}\|$.

A posteriori analysis of discretization error norm rating

The widespread opinion that schemes of higher order are more accurate has an asymptotic origin and, usually, is not supported by quantitative error norm estimates. So, the evident weakness of Theorem 1 from the standpoint of applications is the assumption of the existence of solutions with a priori ranged errors. Herein, we consider an a posteriori check of the error ranging. Let the distances between the solutions of the ensemble split into two clusters: the cluster of small distances between the "accurate" solutions, whose size we denote as $d_1$, and the cluster of large distances; we define the smallest of the distances between the "accurate" solutions and the most inaccurate one as the down border of the second cluster, $d_2$. The separation of the distances between solutions into clusters may be considered as evidence of the existence of solutions with significantly different error norms. The quantitative criterion for the applicability of Theorem 1, based on the dimension of the first cluster and the distance between the clusters, may be stated as the following heuristic Criterion 1: if the distance between the clusters is greater than the size of the cluster of accurate solutions, the error rating assumed by Theorem 1 is considered to hold. The pronounced separation of the distances into clusters serves as the justification of Criterion 1. Criterion 1 may be rigorous only in the limit of an infinite set of solutions computed by independent methods. Nevertheless, the numerical check of the confirmation or violation of this criterion is of interest from the viewpoint of its applicability as a heuristic.

Numerical Tests

The results of the error norm estimation using the above mentioned criterion are presented below for several test flows governed by the two dimensional unsteady Euler equations:

$$\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u_k)}{\partial x_k} = 0, \qquad \frac{\partial (\rho u_i)}{\partial t} + \frac{\partial (\rho u_i u_k + P \delta_{ik})}{\partial x_k} = 0, \qquad \frac{\partial (\rho E)}{\partial t} + \frac{\partial (\rho u_k h_0)}{\partial x_k} = 0.$$

Here the summation over repeating indexes is assumed, $u_k$ are the velocity components, $h_0$ and $E$ are the total enthalpy and the total energy, $P = \rho R T$ is the state equation, and $\gamma = C_p / C_v$ is the specific heat ratio. The single oblique shock wave and the interactions of shock waves of the I and VI kinds according to the Edney classification [21] were used as the test problems. Only steady-state solutions were considered, so only the spatial discretization error is addressed. Several analytical solutions were constructed for these problems. The shock wave is the main element of these solutions.
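Before the construction of the test cases is detailed, the procedure implied by Theorem 1 and Criterion 1 can be illustrated with a minimal sketch. It assumes the discrete $L_2$-equivalent norm mentioned earlier, splits the sorted pairwise distances at their largest gap to detect the two clusters, and uses synthetic fields; the names, the clustering rule and the inputs are illustrative assumptions, not the authors' implementation.

```python
import itertools
import numpy as np

def l2_norm(field: np.ndarray) -> float:
    """Discrete L2-equivalent norm (root mean square over grid nodes)."""
    return float(np.sqrt(np.mean(field ** 2)))

def error_norm_bound(solutions: dict):
    """Pairwise distances between ensemble members; if they split into a tight
    cluster (distances between 'accurate' solutions) well separated from the
    rest (Criterion 1), the distance of an accurate solution to the remote
    member bounds its error norm from above (Theorem 1)."""
    pairs = {(a, b): l2_norm(solutions[a] - solutions[b])
             for a, b in itertools.combinations(solutions, 2)}
    dist = np.sort(np.array(list(pairs.values())))
    split = int(np.argmax(np.diff(dist))) + 1   # index of the largest gap
    d1 = float(dist[:split].max())              # size of the cluster of accurate solutions
    d2 = float(dist[split:].min())              # lower border of the second cluster
    criterion_1 = (d2 - d1) > d1                # gap between clusters exceeds cluster size
    return pairs, d1, d2, criterion_1

# Synthetic ensemble: three "accurate" solvers close to each other, one "inaccurate" one.
rng = np.random.default_rng(0)
exact = np.zeros((64, 64))
sols = {f"S{k}": exact + eps * rng.standard_normal(exact.shape)
        for k, eps in zip((2, 3, 4), (0.02, 0.015, 0.01))}
sols["S1"] = exact + 0.2 * rng.standard_normal(exact.shape)

pairs, d1, d2, ok = error_norm_bound(sols)
print(f"cluster size d1 = {d1:.3f}, lower border of second cluster d2 = {d2:.3f}")
if ok:
    # Theorem 1: each "accurate" solution's error norm is bounded from above by
    # its own distance to the remote ("inaccurate") member of the ensemble.
    print("Criterion 1 satisfied: the distances to S1 bound the error norms of S2, S3, S4.")
```

Splitting at the largest gap is only one possible way to detect the two clusters; the paper does not prescribe a specific clustering procedure, and in practice any violation of the a priori rating simply makes the estimate (3) approximate rather than guaranteed.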
First, the shock wave angle β was computed from the flow deflection angle α and the upstream Mach number M using the standard oblique-shock relation [22], tan α = 2 cot β (M² sin²β − 1) / (M²(γ + cos 2β) + 2). For the single shock wave test these parameters are sufficient for the flow field generation. For the Edney-I shock structure (Fig. 1), an additional iteration was used to determine the angles of the shock waves past the crossing. It was performed by fitting the contact line direction such that the pressures on both sides of the contact line coincide. For the Edney-VI shock interaction (Fig. 2) the flow field was computed using two consecutive oblique shocks and the single shock past the point where the initial shocks cross. The contact line and an additional weak wave of the opposite family [22] emerge at the crossing point. The additional wave (shock or expansion fan) should be computed to equalize the pressures on both sides of the contact line. In the present tests it was an expansion fan (Prandtl-Meyer flow). The resulting flow field was projected onto the computational grid and the result was considered as the "exact" solution. The flow field contains undisturbed domains (where the nominal order of error declared by the authors of the numerical methods is expected), shock waves (error order about n = 1 [7]), and a contact discontinuity line (error order about n = 1/2 [6]). As a result, one may hope to obtain a nontrivial error composed of components with different error orders.

The paper contains an analysis of an ensemble of computations performed by the methods listed below. The first-order scheme by Courant, Isaacson and Rees (CIR) [23] is referred to as S1. The second-order scheme using the MUSCL method [24] and the algorithm of [25] at cell boundaries is denoted as S2. The second-order TVD scheme of relaxation type by [26] is referred to as S2TVD. The third-order modified Chakravarthy-Osher scheme [27,28] is marked as S3. The fourth-order scheme by [29] is marked as S4. The distance between solutions was calculated using the discrete L_2-equivalent norm ||u^(i) − u^(k)|| = ( Σ_m (u^(i)_m − u^(k)_m)² h_x h_y )^(1/2), where the sum runs over the grid nodes and h_x, h_y are the mesh steps. It should be noted that the methods S1, S2, S3, S4 (of nominal (declared) truncation orders 1, 2, 3 and 4) demonstrate a global order of convergence a bit below unity on these shocked flows. The comparison with the analytical solution allows us to conclude that scheme S1 (as "inaccurate") and schemes S2, S3 and S4 (as "accurate") enable one to find the upper bound of the error norm if Criterion 1 is satisfied. For all tests, if Criterion 1 is not satisfied (there are no clusters, or the distance between them is less than the size of the cluster of "accurate" solutions), the error norm estimation fails. The numerical tests for the single oblique shock demonstrate the feasibility of the error norm estimation if Criterion 1 is satisfied. However, the set of distances between solutions splits into clusters in only about half of the tests, more frequently for finer meshes. For the Edney-I shock interaction (Fig. 1), the set of distances between solutions also splits into clusters in about half of the tests, independently of the mesh size. However, when the distance between the clusters approximately equals the size of the cluster, the error estimation may fail. The worst result over all tests was obtained in the calculations summarized in Fig. 4, where the differences of solutions, marked as Si−Sk (S2−S1, S3−S1, S4−S1, S3−S2, S4−S2, S3−S4), are plotted with the norm laid out along both axes. The results demonstrate the successful estimation of the error and a convergence order of about 1/2 for all tested methods, without dependence on the formal order of approximation.
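The oblique-shock relation quoted above can be inverted numerically for the shock angle β. The following sketch is not taken from the paper; the scanning resolution and the selection of the weak-shock branch are illustrative choices made here to show one possible way of constructing the analytical fields.

import numpy as np

def deflection(beta, M, gamma=1.4):
    # theta-beta-M relation: flow deflection angle produced by a shock of angle beta
    return np.arctan(2.0 / np.tan(beta) * (M**2 * np.sin(beta)**2 - 1.0)
                     / (M**2 * (gamma + np.cos(2.0 * beta)) + 2.0))

def shock_angle(theta, M, gamma=1.4, n=20000):
    # Weak-branch solution: scan beta from just above the Mach angle up to 90 degrees
    betas = np.linspace(np.arcsin(1.0 / M) + 1e-6, np.pi / 2.0, n)
    thetas = deflection(betas, M, gamma)
    weak = betas[:np.argmax(thetas) + 1]      # the weak branch ends at maximum deflection
    return np.interp(theta, deflection(weak, M, gamma), weak)

# Example: a Mach 3 flow deflected by 20 degrees gives a weak shock angle of about 37.8 degrees
beta = shock_angle(np.deg2rad(20.0), 3.0)
print(np.rad2deg(beta))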
The convergence order of about 1/2 observed in these tests stresses the difference between the current approach and the "p-refinement" (p-FEM) widely used in the domain of finite elements [30,31]. Thus, for the estimation of the upper bound of the error norm, one should have either a priori information regarding the error rating (Theorem 1) or an ensemble of at least three solutions with the distances between them split into two clusters. The distance between the clusters should be greater than the size of the cluster of distances between the more accurate solutions (Criterion 1). Two thirds of the numerical tests for two-dimensional supersonic inviscid flows confirm the estimate (3) if the heuristic Criterion 1 is satisfied. In one third of the tests the estimate (3) failed; however, the maximum observed violation of expression (3) is found to be about 15%. The relation of the errors obtained in the paper is not necessarily attributable to the properties of the considered schemes. Strictly speaking, it may be caused by imperfections of the numerical realizations implemented by the authors. So, the authors do not pretend to provide a definitive assessment of the methods considered. Our purpose is rather to verify the single-grid error estimator based on the numerical results obtained by solvers (algorithm realizations) of different accuracy.

Discussion

The determination of whether a scheme is "accurate" or "inaccurate" has an asymptotic sense in a priori analysis. It may also be performed by comparison with a limited number of analytic solutions. The above results demonstrate the feasibility of distinguishing "accurate" and "inaccurate" numerical solutions in the sense of the discretization error norm rating. For example, a violation of (3) above 15% (Fig. 4) is not detected in the tests. At first glance, the present approach is similar to the "p-refinement" widely used in the domain of finite elements [30]. However, "p-refinement" estimates the error of the lower-order (less precise) solution by the difference between it and the higher-order solution. Herein, we majorize the error of the more precise solution by the difference between solutions under specific CFD conditions (shock waves and contact lines), when schemes of any formal order of approximation have the same real order of convergence. Some works discuss an analogue of Richardson extrapolation [31], which utilizes three finite element solutions of consecutive orders of accuracy and bears some analogy to our technique. However, the algorithm of [31] is based on a specific asymptotic of energy norms and is not related to the triangle inequality and the formation of clusters. The single-grid discretization error estimator considered above operates with the total error, including the discretization error of the flow field, the initial and boundary condition errors, and round-off errors. It is used in a postprocessor mode like Richardson extrapolation. However, it does not require any mesh refinement and may be used away from the asymptotic range. The dependence on the set of numerical methods and on the analyzed solution is the drawback of the ensemble-based estimator. The same set of methods may provide segregation into clusters for one flow pattern and may not provide it for another. So, this approach cannot replace mesh refinement and only aims to supplement it with an inexpensive algorithm.

Conclusions

It is feasible to estimate the discretization error norm using a collection of numerical solutions obtained on the same grid by solvers of different orders of approximation.
If the collection of solutions is split into separate clusters corresponding to "accurate" and "inaccurate" schemes, and the distance between the clusters is greater than the size of the "accurate" cluster, then the norm of the error of the more accurate solution is majorized by the norm of the difference of the solutions. Numerical tests demonstrated the applicability (with a reasonable tolerance) of this heuristic rule in the L_2 norm for two-dimensional supersonic problems (containing shocks and contact discontinuities) governed by the Euler equations. The single-grid discretization error estimator considered above may be constructed using an ensemble of numerical solutions obtained by different solvers of various orders of accuracy. It is used in a non-intrusive postprocessor mode and does not require mesh refinement.
2019-04-13T17:51:40.659Z
2017-04-17T00:00:00.000
{ "year": 2017, "sha1": "06e4613c5a07ad0a33d0db895cc8c9966e179bec", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "06e4613c5a07ad0a33d0db895cc8c9966e179bec", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics" ] }
234374507
pes2o/s2orc
v3-fos-license
Infrastructural Cyber-Physical Energy Systems: Transformations, Challenges, Future Appearance

The directions for the transformation of the key hierarchically-structured infrastructure energy systems, i.e., electric power, heating, and gas supply systems, as influenced by the intensively expanding adoption of technology innovations in the physical (power) and information-and-communication subsystems are studied. The peculiarities of the transformation of the energy systems structure at their different hierarchical levels are analyzed. Changes in the properties of future energy systems as influenced by innovation-driven processes and facilities are discussed. Interpretations of new properties of the transforming energy systems, i.e., flexibility and resiliency, which have recently been the subject of much research effort, are analyzed. It is argued that in the future the above energy systems will acquire the features of integrated cyber-physical systems. The fundamental role of developing control systems for future infrastructure cyber-physical energy systems is emphasized.

I. Introduction

The key energy infrastructure systems are electric power systems (EPS), heating systems (HS), and gas supply systems (GSS). They have an elaborate multi-level structure and play a crucial role in ensuring a guaranteed energy supply to consumers with their increased requirements for reliability and quality of energy supply in accordance with the new paradigm of user-oriented energy systems. By way of illustration, Figure 1 [1] presents the structure of the EPS as a super-mini-micro-system, where the level of the super-system incorporates large power plants, system energy storages of large capacity, and the main electrical power network of high and ultra-high voltages; the level of the mini-system is that of mini-sources of electric power of up to 25 MW of unit capacity, mini-storages, and the distribution network of 6-110 kV; and the level of the micro-system subsumes micro-sources and micro-storages of electric power of unit capacity up to 25 kW and the local power grid of 0.4-10 kV.

Heating systems differ from electric power and gas supply systems only in their scale and location in urban areas. They are structured as a hierarchical system as well (Figure 2) and are differentiated into super-systems that cover large heat sources (combined heat and power (CHP) plants, boilers, etc. with a capacity above 25 Gcal/hour) and the transit and main heat networks (HN) coming from them; mini-systems with a heat capacity below 25 Gcal/hour comprising heat sources (cogeneration plants, boilers, non-conventional and renewable energy sources), distribution networks of small diameters, and individual heat points; and micro-systems in the form of individual non-conventional and renewable energy sources built directly at one or several consumers with their heat consumption systems, as well as micro-systems of buildings and structures.

As for GSSs, at the level of the super-system one should deal with process facilities that supply gas to the network of its supply mains, starting from the fields and main compressor stations at the field outlets and down to the points of its delivery to gas distribution stations and units. The main facilities of GSSs at this level are main and intermediate compressor stations, high-pressure gas trunklines, and underground gas storage facilities that compensate for the seasonal unevenness of gas consumption. The share of natural gas transportation allotted to liquefied natural gas (LNG) transport, mostly via waterways as shipped by gas carriers, is ever increasing. At the same time, natural gas liquefaction plants, tankers, and regasification terminals themselves should also be counted as belonging to the facilities of the GSS super-system. At the level of the GSS mini-system, one should consider gas distribution networks of high pressure (0.3 to 1.2 MPa) from gas distribution stations, as well as medium (0.05 to 0.3 MPa) and low (up to 0.05 MPa) pressure networks, from gas distribution units down to shut-off devices at the inlets to gas-fired thermal power plants, boiler houses, gas chemical complexes, and buildings with gas equipment. Moreover, at this level one should take into account the systems of discrete transport (primarily road transport) of liquefied and compressed natural gas. The level of micro-systems is represented by internal gas pipelines running from their entries into buildings and structures to the connection points of consuming devices and appliances: gas boilers, turbines, equipment of gas-chemical enterprises, and other gas-consuming receivers.

The above energy systems, in the process of their development, are transformed in terms of their structure and properties due to the expanding adoption of technology innovations in the production, transport, distribution, storage, and consumption of energy, the intensive development of renewable energy sources, as well as the activity of consumers in the processes of their energy supply, etc. This study analyzes the main directions of transformation of the structure and properties of future EPSs, HSs, and GSSs; discusses the transformation trends these energy systems have in common and their unique features; and addresses the topical problems and challenges associated with the discussed trends that determine the future appearance of these key infrastructure systems. Taking into account the above-mentioned specific features of infrastructural energy systems, Chapter 2 of this paper deals with the structural trends in energy systems development. The transformation of energy systems properties is discussed in Chapter 3. Integrated cyber-physical energy systems of the future are the focus of Chapter 4. Directions of control systems development for integrated energy systems are discussed in Chapter 5. Some conclusions are presented in Chapter 6.

II. Structural Trends in Energy Systems Development

A. Electric power systems

Trends in the transformation of the EPS structure at the super-system level are determined by a number of long-established and emerging factors. The key long-established factor is the realization of system effects from the joint operation of EPSs, which are considered as system services. The Association of System Operators of the world's largest EPSs GO15 draws attention to the need for a new understanding of such system services under market conditions [2]. A relatively new factor is the formation of mega-centers of power generation with the use of renewable energy resources, such as the largest hydroelectric power plants (the "Three Gorges" HPP on the Yangtze River in China, a large-scale hydroelectric power plant on the Congo River in Central Africa, etc.), mega-parks of wind power plants in the North Sea and on the Arctic coast of Russia, solar power plants in the Sahara and Gobi deserts, and others. By way of illustration, Figure 3 shows a mega-project for the development of the Western European Interconnection based on wind parks in the North Sea and solar power plant parks in the Sahara desert [3].
Electricity generated by these mega-centers should be distributed over ultra-long distances, which is feasible given the active development and reduction of the cost of long-distance power transmission technologies of ultrahigh AC and DC voltages. This served as the basis for the proposal to form the Global Energy Interconnection (see Figure 4) [4]. On the other hand, active development of distributed generation continues, which radically redefines the structure and properties of mini-systems. However, the sometimes proposed extreme scenario of EPS development that makes use of distributed generation only when major power plants are withdrawn from operation seems unrealistic. The mixed scenario of development of relatively large power plants at the level of super-systems (centralized power supply) and distributed generation units at the level of mini-systems (decentralized power supply) has a high probability of implementation success. The function of power supply of large consumers remains the responsibility of large centralized sources, which is practically impossible to attain by means of distributed generation, along with ensuring the reliability of power supply and quality of electricity in terms of frequency and voltage. The general structural trend for super-systems and mini-systems is the continuing increase in the density and complexity of main and distribution power networks. The structure of large EPSs is becoming more and more heterogeneous, with cases of system "voltage-wise" instability in concentrated parts of the EPS being more and more probable, while the problems of "angle-wise" instability in its extended parts with long electrical connections persevere. The growing complexity of the structure of developing EPSs with the general growth of installed capacity and scale of EPSs lead to the more devastating aftermath of severe system accidents of cascade nature, which is confirmed, in particular, by the US EPS historical data for 1991-2005, shown [5] (see Figure 5). Micro-systems have traditionally been formed as based on alternating current. Nowadays, many electrical receivers operate on direct current. Rectifier-inverter units are used for their connection with the EPS. On this basis, DC micro-systems or hybrid DC/AC micro-systems are developed. The unique project of implementation of DC micro-systems is the program of power supply of isolated individual consumers of Mongolia "100,000 Solar Houses (gers)" [6]. Both stand-alone and joint operation of micro-systems with EPSs generates a number of urgent problems for studies on substantiation of micro-systems development and control of their operation under various conditions. Thus, structural changes in EPSs at all three levelssuper-, mini-and micro-systems-lead to changes in their properties and new problems that are to be solved. B. Heating systems Heating systems, often integrated with electric power and gas supply systems through heat sources (CHP plants), heat networks, and consumers, stand out as relatively local in nature. In accordance with the energy policy of Russia, their development is aimed at the prevalence of district heating. In terms of capacity and production of thermal energy, they reach a large scale and therefore are aptly classified as large energy systems. Thus, the heating systems of Moscow in terms of capacity (over 52 thousand Gcal/h) and thermal energy production (over 95 million Gcal/year) exceed the HSs of such industrial regions as Irkutsk, Novosibirsk, Sverdlovsk regions, Krasnoyarsk Krai, etc. 
in terms of their size. All three process structures of systems (super-, mini-, and micro-systems) can exist either as part of a single centralized system or separately, each performing its own functions. Relatively new trends are manifested at all structural levels and are related not only to the processes and equipment used in the systems of production, transport, and consumption of thermal energy, but also to changes in the processes of their operation, administration, and control models (see Figure 6). At the level of large systems, there is a growing tendency to merge scattered systems at the level of main heat networks, with the organization of heat sources into unified HSs. Gas-powered steam-turbine CHP plants are being upgraded into combined cycle and gas turbine power plants based on cogeneration units. They have a wide power range depending on whether they belong to super- or mini-systems. In both types of HSs, new materials (composites, metal-filled plastics, etc.) and modern technology of trenchless laying of thermal networks are actively adopted. The system of automation and regulation in heat networks at the consumer side undergoes significant changes; it is focused on providing remote control with variable regulation of heat transfer medium flows, with automated units for maintaining and altering pressure and flowrate values, monitoring of the consumption level, and the possibility of its regulation. A very pronounced trend of transformation can be seen at the level of mini- and micro-systems, in which non-mechanized manual labor is replaced by automation and regulation systems. Structural transformations on the basis of the outline-based division into subsystems (sources, networks, consumers) are accompanied by the transition to new technologies of real-time systems operation and consumer activity. Changes in the principles of construction and operation of the HSs being developed determine their structural complexity and require the appropriate development of methodological and theoretical support.

Fig. 3. Mega-project of West European interconnection development.

C. Gas supply systems

The main trends in the transformation of the GSS structure at the super-system level are related to the features of prospective changes in the structure of production, trunkline transport, and natural gas consumption. As for the structure of gas consumption, it is based on various options for the development of energy consumption in the world, including Europe [7, 8, etc.]; even taking into account a significant anticipated growth of the role of renewable energy sources (RES) in the generation of electricity and heat (by 2040, RES may take up 15% of the world fuel balance as compared to 3% today), it can be assumed that consumption of natural gas will not decrease at least until 2040, and it may even increase slightly due to the active development of the gas chemical industry. Of all regions of the world, the largest growth in gas consumption will take place in developing but not yet sufficiently industrialized regions. Pipeline infrastructure in such regions of the world is undeveloped or non-existent. This creates prerequisites for active development of the LNG transportation system with further development of the network of natural gas liquefaction and regasification plants.
The same holds true for the entire world as well, where for reasons of lesser dependence of gas producers on binding to specific consumers, it seems that the mainstream LNG transportation by sea will be developing at a faster pace than pipeline transportation. In this case, the main structural transformations may concern the increase of maneuverability in response to changes in supply and demand. For example, over the last few years natural gas shortages in the USA have been transfigured into its abundance, which led to the development of plans to expand the area of regasification terminals to establish natural gas liquefaction capacities [9, etc.]. Implementation of such projects in the world will allow both exporting and importing natural gas using the same infrastructure depending on the market situation. As for the structure of natural gas production, due to the high level of development of already proven conventional gas-bearing fields, costly access to new fields located mostly in hard-to-reach conditions, and engineering challenges posed by shale gas extraction, as well as taking into account the environmental problems of this process, the technology behind gas production from gas hydrates and world ocean-dissolved gas is the most promising in the world. Estimates of methane content in gas hydrates in the world are huge and according to various estimates they reach several hundred trillion cubic meters [10, 11, etc.]. However, with all the successes of industrial and [12, etc.] there are still obviously serious issues with the economic feasibility of extracting methane trapped inside the frozen lattice of gas hydrates molecules. This requires either lowering the pressure in the deposit, or heating the area near the well, or injecting carbon dioxide to replace methane in the hydrate. All these methods are still unacceptably expensive. In any case, humanity's quest for utilizing these resources will be answered in the future. The structure of sales gas production will shift towards gas hydrates development, and the gas trunkline system, in this case it is likely to be maritime LNG transport, will be tied to the point of its production facilities. With potential changes in sources of sales gas supply due to depletion of the old fields and the need to access new ones, gas trunkline systems as well will undergo structural transformations. Individual gas trunklines tied to specific depleted fields will lose their relevance. One will have to build new gas trunklines tied specifically to new sources. Apparently, gas trunkline systems tied to regasification terminals and associated with LNG transportation via waterways will be developing at a faster pace. These terminals serve as gas suppliers to the gas pipeline network regardless of changes in gas production locations. At the level of mini-systems of gas supply to the consumers, the GSS will change structurally in line with the scientific and engineering progress at the gas consumers side. Thus, as the share of NGV fuel use in transport increases, the networks of liquefied and compressed natural gas refueling will increase. Such refueling stations are adapted to the transport infrastructure and can be built at considerable distances from the gas transmission networks. Accordingly, the structure of discrete (road, railway, and water) transport of both LNG and compressed natural gas (CNG) will be developing at a faster pace as the number of such refueling stations grows. 
Micro-systems of gas supply are unlikely to undergo drastic structural changes in the future. In all likelihood, they will remain the same pipeline distribution across buildings, enterprises, and facilities, using innovative intelligent shut-off and control gas pipeline valves with the incorporation of various levels of smart systems to control the energy consumption of the respective consumers.

Fig. 7. Effect of HS intellectualization. Fig. 8. Integrated energy systems. Fig. 9. Architecture of integrated intelligent energy systems.

III. Transformation of Energy Systems Properties

A. Electric power systems

Recently, due to the broader adoption of generating units running on renewable energy resources, which are characterized by unsteady power output, and the increased activity on the part of consumers in controlling their own electricity consumption in real time, the uncertainty of the current operation mode of the EPS has increased significantly, which has encouraged the study of the flexibility of the EPS and the justification of the means to increase it. The flexibility of the EPS is a relatively new concept characterizing its ability to maintain a normal or close-to-normal state under the influence of internal (sudden changes and fluctuations of generation and load, flows along lines) and external (random disturbances) random (uncertain) factors [13]. Modern EPSs, while utilizing conventional energy and electrical power engineering technologies and control systems, possess a sufficiently high level of flexibility due to the presence of self-adaptation and self-stabilization properties in relation to internal and external destabilizing factors. These properties of the EPS are determined by the action of the voltage/frequency governing effects of the load, the frequency characteristics of generators, the inertia of the rotating masses of the rotors of synchronous and asynchronous machines, and the action of regulation and automation systems. Due to the presence of these properties, EPSs adapt to abrupt operation mode changes within permissible limits, and when the operation mode parameters exceed permissible limits, the emergency control system comes into operation. Electric power systems of the 21st century are undergoing significant changes in their properties not only due to the transformation of their structure but also due to the use of technology innovations in the production, transport, storage, distribution, and consumption of electric power. The factors internal to the EPS that significantly reduce the ability of systems to self-adapt and self-stabilize are related to the mass use of power electronics and rectifier and inverter units for coupling with the EPS of high-speed gas turbine and gas-reciprocating generators, wind turbines, photovoltaic units, electric energy storage, and frequency-controlled load motors. The growth in the scale of the use of these technologies at the levels of super-systems and mini-systems significantly reduces the ability of the EPS to self-adapt and self-stabilize and, as a result, reduces the level of its flexibility. On the other hand, the growth of the share of randomly fluctuating generation based on renewable energy resources (wind turbines, photovoltaic panels, small HPPs) also leads to a decrease in the flexibility of the EPS. At the same time, the control systems of many devices using power electronics (FACTS, energy storage, DC lines and links) have high efficiency of control and stabilization.
Their wide adoption in future EPSs will ensure a drastic increase in controllability and, consequently, in flexibility of these systems. In general, there are numerous possibilities for future EPSs to ensure their flexibility and to choose reasonable means for this purpose: this is a far from easy problem than can be solved for standard conditions of operation and development of these systems [13]. At the same time, the relevance of studies of off-standard (extreme) conditions under the impact of corresponding external and internal factors that require careful consideration increases. These factors are associated with the notion of EPS survivability [13], and recently with the closely related property of resiliency, which is considered in relation to large-scale disturbances and cascade accidents in EPSs [14, etc.]. To counteract such complex emergencies, the emergency control and protection system is available and is being developed, including effective procedures for restoring EPSs. An important role is relegated to regular dispatcher training sessions, as well as analysis and generalization of the nature and mechanisms of occurrence and development of such unique accidents. B. Heating systems Heating systems starting from the end of the 19th century to the present day have undergone the transformation in four generations. Heating in Russia is now at the stage of the transition from second-to third-generation systems, although European countries are already at the stage of development of systems of the fourth generation. In order for domestic heating systems to level up, it is necessary to move from a single-loop system to a two-or three-loop system with individual intelligent heat consumption systems. This will allow changing the technology of systems operation by advancing to the regulation of heating to consumers in real-time, which will complement the energy-consuming centralized regulation with local and individual settings and will provide an opportunity to strengthen existing and implementing new properties such as flexibility (adaptation to the current level of energy consumption); customer-centricity (the ability of the system to respond to consumer demands); integration (integration into the urban infrastructure); efficiency (compliance with energy efficiency requirements); competitiveness (economic advantage); reliability (meeting the growing energy demand, resistance to accidents), interoperability (the ability of subsystems and their elements to exchange energy and information so as to attain the maximum gains), etc. Technological transformations of HSs will allow implementing other properties listed above that are typical also of electric power and gas supply systems and various types of their structures. An important change in the properties of the energy systems under consideration is the transition from a unidirectional (from the source down to the consumer) scheme of movement of energy flows to a multidirectional one (i.e., with flows in multiple directions). This increases the mobility of HSs, their maneuverability, implements the bilateral nature of supply to consumers, significantly enhancing the property of reliability of heating, but at the same time, it gets way more difficult to control their development and operation. 
The transition from systems with a high inertia of the process to flexible information-and-energy structures controlled in real time seems to be the key for HSs, as it provides them with quality performance of their heating functions for consumers while reducing energy and financial costs (see Fig. 7).

C. Gas supply systems

Under the influence of technology innovations, the properties of GSSs are transformed at all levels considered herein, from the fields and facilities of sales gas production to the processes of its use. Thus, the concept of the intelligent gas field as a system of automatic control of gas production operations, which provides for the automatic optimization of all the most important technological processes in their direct interrelationships, is already beginning to be implemented in practice. The advantages of introducing this concept are expressed in the emergence of a new property of the facilities and the system as a whole, that of the possibility of remote control. It also provides opportunities to monitor energy consumption and improve the efficiency of equipment operation and the deliverability of wells through the monitoring and regulation of well yields, the prediction of well exhaustion parameters based on machine learning methods, the forecasting of the behavior of new wells, the centralized control of a large number of wells through remote monitoring systems, rational personnel management, and transparency of information. As for improving the reliability of operation of the line pipe section of the GSS, so-called "intelligent tie-ins", i.e., remote monitoring systems for the strain-stress state of pipelines, are already being adopted at newly constructed and reconstructed gas trunklines. Such systems are tied into the gas pipeline and allow monitoring mechanical loads by comparing them against the current strength characteristics of the pipe. Their use proves feasible both in the gas trunklines themselves and in the piping of compressor and gas distribution stations, etc. Incidentally, cathodic protection parameters of gas pipelines are also monitored. Such a concept can be applied to the full extent, with consideration of the respective differences, at all levels of the GSS. As a result of all these innovations, individual elements of the system and the entire GSS as a whole have significantly increased their reliability. This is achieved by means of receiving reliable information on the state of the elements of the system, arriving at optimum parameters of their operation and of the systems of protection against negative influences (galvanic corrosion, etc.), and also the possibility of timely decision-making when preventive measures are required to maintain the pre-defined level of operability of the GSS. All that having been said, at the micro-level, novel technologies will also have a positive impact on the energy efficiency of consumer processes, incorporating them into a common friendly interface between gas suppliers and consumers within the framework of intelligent energy consumption processes, both for process and domestic needs.

IV. Integrated Cyber-Physical Energy Systems of the Future

A. Integrated intelligent energy systems

The current objectively observable trends in the development of energy infrastructural systems are characterized by their tighter integration at the levels of production and consumption of energy and energy resources (see Fig. 8) [1, etc.].
The processes of integration of electric power, heating, and gas supply systems into a meta-system increase its level of integrity and organization while contributing to the higher volume and intensity of relationships and interactions between individual systems. As a result, a high level of comfort in residential, public, and production buildings is achieved, including quantitative and qualitative growth of the array of energy services (related to electric power, heating, and gas supply) at an affordable price; ensuring controllability, reliability, safety, and efficiency of energy systems; reducing their negative impact on the environment, including greenhouse gas emissions. Combining multiple energy systems into a single energy and process meta-system with a common coordinated control system yields a synergistic effect in many aspects. An integral property of such an integrated meta-system is its intelligent nature. It is based on the agent-based paradigm: each consumer, receiving information through its intelligent agents about all other participants in the energy supply process, determines its own behavior. The technology of intelligent meta-system operation is also getting new. Due to the complex structure, possible conflicts, and competition in this meta-system, the classic hierarchical principle of integrated energy systems control fails to deliver on the targets they share. The new system design should combine certain independence of multiple decision-making centers and their coordination in ensuring reliable energy supply to consumers. It should be based on the principles of subsidiarity (ability to delegate control functions to the system levels remote from the center) and self-regulation, according to which control is implemented from the inside instead of by acting on the controlled system from the outside. The implementation of this principle presupposes arranging the interaction of agents with each other, which results in the introduction of internal control factors. At the same time, the systems have their own control, goals, and tasks and operate relatively independently, coordinating themselves with other systems by pursuing a common target. These provisions predetermine a network model of relationships, based on the principle of complementarity, where the actions of one participant in achieving their tasks simultaneously contribute to achieving certain tasks of other participants. The architecture of integrated intelligent energy systems is shown in Fig.9. The network organization is a principle of higher order in comparison with the existing hierarchical subordinate structure of energy systems control. As a result of the integration of energy systems into a higher level meta-system, the old properties get enhanced, while the new ones manifest themselves, among which the most significant are: flexibility; intelligent nature; integration; efficiency; competitiveness; reliability; complementarity in the performance of a shared task; unity of principles of organization and operation, and independence in the implementation of local functions. This generates new tasks for the management and control of integrated intelligent infrastructure systems of the energy industry. B. 
Cyber-physical energy systems of the future Modern energy infrastructure systems (EPS, HS, and GSS) are the entities of utmost complexity in terms of their structure operation, with each of them being made up of two closely interconnected subsystems: physical (process) and information-and-communication (ICS) subsystems. Already at present, and even more so in the future, the process and information-and-communication subsystems are getting comparable in complexity and responsibility in terms of ensuring the normal operation of each of these subsystems. Currently, the digitalization of the above integrated intelligent infrastructure systems of the energy sector is being actively carried out. It implies not only the acceleration of information processing in digital form but also an increase in the efficiency of technological processes in energy systems due to optimal intelligent control of processes. These, as well as the noted factors of ICS complexity and responsibility, predetermine the necessity to treat EPSs, HSs, and GSSs as complex cyber-physical systems [15, etc.]. Within such systems, the ICS can operate inadequately due to internal defects (errors in algorithms, etc.) and can also be exposed to unauthorized external impacts (cyber attacks) [16, etc.]. Taking into account internal and, especially, external factors (cyber attacks) the problem of cybersecurity becomes urgent [17, etc.]. Reliability of information on the current state of the energy system or its loss due to internal defects of digital devices or external cyber attacks on the ICS may be the reason for working out and implementing incorrect control actions and unfolding of the emergency process in the physical subsystem. In turn, a failure or accident of an element in a physical subsystem may not only cause an emergency in that subsystem but may also contribute to the failure of the ICS elements. Taking into account these interrelationships, the integration of physical and information-and-communication factors should be implemented at the level of substantiation of the development of cyber-physical energy systems, as well as in solving various problems of controlling their operation modes. Thus, for the present, and even more so for the future, cyber-physical energy systems, the scope of factors that largely determine the transformation of the structure and properties of energy systems and form a list of relevant problems for research and ensuring the flexibility and survivability (resiliency) of these systems is drastically expanding. A. Electric power systems The presented analysis of the transformation of the structure and properties of future EPSs testifies to the key role of control in ensuring the normal operation of these cyber-physical systems. Taking into account the growing complexity of processes that take place in EPSs, control systems should match them in their developments. Forecasts that deal with this direction indicate that future EPS control systems should have a hierarchical structure [18, etc.]. To counteract the development of cascade accidents, the need for coordinated hierarchical control is discussed. The important role of artificial intelligence methods in improving control efficiency is formulated. The ideology of Wide Area Monitoring, Protection, and Control Systems based on vector measurements with the prediction of state variables to ensure adaptive control is developed. It is typical for electricity storage devices, FACTS devices, etc. for control purposes. 
In the above plan, the currently operating and developing highly efficient system of automatic emergency control of EPSs of Russia including the key hierarchical subsystem of adaptive emergency control automatics [19, etc.] should be represented. The lower level covers microprocessorbased automatiс devices that implement specific control actions, which are cyclically adjusted at the upper level, thus providing adaptive control. The multi-tier principle of automatics operation is realized: if at the first stage automatics failed to ensure maintaining of stability of the EPS, the next group of automatic devices counteracting the cascade development of accident comes into operation. The above tenets of the transformation of EPS control systems are mainly related to the level of super-systems and can be considered as a baseline for mini-systems. The ideology of control systems of micro-systems on a multiagent basis with the aid of consensus control algorithms when using appropriate protocols of the interaction of agents in the process of control is actively developed [20, etc.]. B. Heating systems The fundamental thesis for HSs that is implemented as a result of their technological transformation, is the transition from a qualitative method of regulation of heat supply to a quantitative method of its supply. This radically changes the organization of systems and the principles of control of their thermal and hydraulic modes. The availability of automatiс equipment, intelligent systems, and the horizontally-organized control structure ensures the control of heating in accordance with the needs, increases the efficiency of HSs while improving the quality of performing their functions. Consumers are increasingly beginning to exercise active load control functions. They assume a part of the tasks aimed at creating comfortable conditions delegated to them by centralized control structures. Active consumers not only manage their power consumption mode but also influence the operation of the HS as a whole. If they have their own heat source, they can provide heat not only to meet their own heat needs but also to meet the demand for heat by neighboring consumers. The presence of such consumers in the HS expands the functionality of the systems, creates the necessary conditions for controlling the reliability of their heating, and enhances resiliency. For many years, computational and optimization computer models have been and continue to be used to control the development and operation of HSs, as well as other energy systems. With the development of information and hardware technologies as well as methods of computational mathematics, they were transformed into their digital doubles with a wider range of information and intellectual resources. They represent digital models of the interconnected elements of the HS and are used for remote information acquisition, its processing, and control of operation modes of heat sources, heating networks, and ensuring heat load profiles of consumers with automatic monitoring and maintenance of comfortable indoor temperature. At the same time, there are new opportunities emerging for controlling heating systems, improving efficiency, timely identification and localization of damage locations in visually unobservable heat networks distributed over a large area, assessment of causes of excess heat losses, prioritization within repair schedules, etc. 
Many energy companies, such as Gazprom Energoholding, are already implementing online application services for contracts, the exchange of billing documents, and online payment for consumed heat. An automated system of heat energy metering is being implemented, which allows automatically collecting and storing the readings coming from the metering units, as required for the analysis and monitoring of heating parameters. Their processing and digital structural representation for integration into automated dispatching control complexes for system operation are being carried out.

C. Gas supply systems

Just as with electric power systems, in connection with the global intellectualization of the GSS there are certain fears associated with the increased vulnerability of the system to the negative impact of intelligently crafted cyber attacks. In this case, the task of reducing GSS vulnerability should be solved simultaneously with the growth of the system's intellectualization. This can be done from the standpoint of the developing information technology and administration of the corresponding systems, as well as from the perspective of improving the optimality of dispatch control in the event of emergencies of various kinds. One should provide for the possibility of switching individual process operations and logical chains thereof over to the "manual" control mode, with a return to the mode of interconnected intellectual operation once the possibility of the threat being realized has been eliminated. Intelligent dispatching control systems of the GSS should have a hierarchical structure with independent modules of individual facilities and subsystems, linked into a single control system. At the super-system level, dispatching of strategic processes in each element and in the GSS as a whole is implemented, starting from the main compressor stations at the field outlets and down to the points of gas delivery to gas distribution stations and units, natural gas liquefaction plants with the corresponding infrastructure, gas carriers, and regasification terminals. The same control ideology should apply to the level of gas distribution systems (the level of mini-systems in the GSS). As for the operation of gas supply systems inside buildings, structures, and industrial enterprises of different levels, the control systems of the micro-level GSS fit nicely into the ideology of intelligent systems of "smart" houses and enterprises that are nowadays being developed and put into operation. Gas supply control systems of this level of the GSS will be actively developed inseparably from the directions and rates of intellectualization of the control of internal utility systems of buildings and structures, in direct connection with the unified dispatching centers of the mini-systems level.

VI. Conclusion

The development of infrastructure energy systems on the basis of technology innovations in the physical and information-and-communication subsystems, under the digitalization and intellectualization of operation processes, will lead to a decisive transformation of the structure and properties of these systems. As a result, future energy systems will take the form of elaborate intelligent cyber-physical systems, radically different from the systems of today. This transformation will require a significant reconsideration of the existing principles and methods of modeling such systems, the analysis of their new properties, and the justification of their development and the control of their operation.
The basis of the new methods and models, along with the traditional ones, should be the proven-to-be-effective apparatus of artificial intelligence. The key role in ensuring the normal operation of the transforming energy systems will be played by future control systems, the ideology behind the construction and operation of which should stay ahead of the needs of the transforming cyber-physical systems.

Acknowledgement

This study was supported by grant # III.17.4.1 (AAAA-A17-117030310432-9) of the State Program for Basic Research of the Siberian Branch of the Russian Academy of Sciences.

References
2021-05-13T00:03:46.007Z
2020-12-27T00:00:00.000
{ "year": 2020, "sha1": "945958641d7ba6a7645dba2e51827396f49bed91", "oa_license": "CCBYNC", "oa_url": "http://esrj.ru/index.php/esr/article/download/2020.03.0003/2020.03.0003", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9aea8b2710c194dfaef1e69724645589f5326e85", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
249812671
pes2o/s2orc
v3-fos-license
Optimal Placement of Vibration Sensors for Industrial Robots Based on Bayesian Theory : This paper presents an optimal sensor placement method for vibration signal acquisition in the field of industrial robot health monitoring and fault diagnosis. Based on the general formula of Bayes and relative entropy, the evaluation function of sensor placement is deduced, and the modal confidence matrix is used to express the redundancy of sensor placement. The optimal placement of vibration sensors is described as a discrete variable optimization problem, which is defined as whether the existing sensor layout can obtain joint state information efficiently. The initial layout of the sensor was obtained from the structural simulation results of the industrial robots, and the initial layout was optimized by the derived objective function. The efficiency of the optimized layout in capturing joint state information is proven by the validation experiment with a simulation model. The problem of popularizing the optimization method in engineering is solved by a verification experiment without a simulation model. The optimal sensor placement method provides a theoretical basis for industrial robots to acquire vibration data effectively. Background and Significance of the Study As more and more robots are introduced into space, industry, and private homes, fault monitoring is becoming a more critical problem. In the past 30 years, fault monitoring and diagnosis methods for various nonlinear systems and robotic systems have been studied. Model-based analytical redundancy methods have been used for fault detection and isolation of nonlinear and robot systems [1][2][3]. The purpose of fault detection technology is to generate fault-sensitive diagnostic signals. In existing automatically controlled systems, faults can occur in both the mechanical and electrical parts of the plant. Fault isolation allows fault-related inputs to be located from all other system inputs and generates specific residual signals for each fault. For example, in an electromechanical system, such as a robot, a single fault may occur in a specific driver, a specific sensor, or a system on a specific component [4,5]. For multi-joint robots, dynamics [6], kinematics [7], joint clearance [8], and friction models [9] have been studied. The results show that the mechanical transmission system is an essential part of multi-joint robots to transmit motion and force [10]. When the transmission accuracy of the robot is reduced, the working efficiency and output product quality decrease, and the positioning accuracy of the robot is also affected by various factors. Therefore, when the sensors are arranged, the motion state information of the joints should be captured as efficiently as possible. In theory, how to define the validity of sensor distribution is the core of solving this problem. Related Work Juri [11] proposed a greedy frame sense algorithm to select the optimal sensor location when estimating parameters from the measured data of sensors. This algorithm is the first one that is close to optimal in the mean square error. In the last 10 years, the optimal placement of sensors in mechanical systems and structures has become a hot research topic. Applications include modeling, identification, fault detection, and active control of systems, such as bridges [12]. 
To ensure safety and functionality, more and more structures are equipped with various types of sensors, such as accelerometers, displacement sensors, strain gauges, and fiber optic sensors for monitoring [13]. The modal confidence matrix is an excellent tool for evaluating the correlation of modal shape vector space. The calculated scalar value is between 0~1 or expressed as a percentage. For industrial robots, the redundancy of sensor placement can be evaluated using a modal confidence matrix. Yi [14] proposed a hybrid optimization method to optimize sensor placement when constructing an effective structural health detection system. In this method, the modal confidence matrix is introduced, and dual structure coding based on a generalized genetic algorithm is used to determine the sensor position. Hanis [15] argued that sensor configurations should also minimize unnecessary high-mode spillovers in addition to the classic EFI approach. Castro-Triguero [16] used four classical sensor location methods: two based on the Fisher information matrix and two based on the rank optimization of the energy matrix. Methods based on information theory have been developed to provide reasonable solutions to the problem of selecting the optimal sensor configuration in modal identification and structural parameter estimation [17,18]. Li [19] considered that, in the sensor placement of the structural health monitoring system, the structural array and natural frequency should be considered, along with the degree of participation in the structural response. Therefore, a sensor placement method considering both the dynamic characteristics of the structure and the actual load conditions is proposed, and it is verified that the method has a better modal identification effect. Brehm [20] focused on the problem of determining the optimal reference sensor position under random excitation in a weakly stationary process, combined with the set design variables of the sensor location, and the genetic algorithm was used to avoid evaluating all possible combinations of reference sensor positions. The proposed method was verified by a numerical benchmark study of a supported beam and a practical sample. David used a theoretical estimation framework to calculate the optimal geometric sensor formations that would yield the best achievable performance in terms of target positioning accuracy by maximizing the determinant of the appropriately defined Fisher Information Matrix (FIM) [21]. Flynn [22] considered the optimal sensor placement problem in structural health inspections. Based on the general Bayes formula, optimal placement was established as a process of minimizing the expectation of specific errors. Finally, the optimal solution generated by the algorithm was discussed. Using the influence of spatial correlation prediction error on optimal placement, Costas Papadimitriou [23] used information entropy as the performance measure of sensor configuration, and he expressed the optimal position of sensors as the optimization problem of discrete variables, which solved the problem of modal identification and parameter estimation of the structure-related model. Sun Hao [24] transformed the optimization problem into an integer optimization problem and then proposed a discrete optimization scheme based on an artificial bee colony algorithm to solve the optimization problem. 
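Since several of the approaches surveyed above rely on the modal confidence (assurance) matrix to quantify the redundancy of candidate sensor locations, a minimal sketch of its computation is given below. It is not drawn from any of the cited studies; the mode-shape matrix and the selected sensor subset are random placeholders used only to show the calculation.

import numpy as np

def mac_matrix(phi):
    # Modal confidence (assurance) matrix for a mode-shape matrix phi of shape
    # (n_sensor_locations, n_modes); every entry lies between 0 and 1.
    n_modes = phi.shape[1]
    mac = np.zeros((n_modes, n_modes))
    for i in range(n_modes):
        for j in range(n_modes):
            num = (phi[:, i] @ phi[:, j]) ** 2
            den = (phi[:, i] @ phi[:, i]) * (phi[:, j] @ phi[:, j])
            mac[i, j] = num / den
    return mac

# Off-diagonal values close to 1 indicate that the chosen sensor set cannot
# distinguish the corresponding mode shapes, i.e., the placement is redundant.
phi = np.random.default_rng(0).normal(size=(12, 4))   # placeholder mode shapes at 12 candidate locations
selected = [0, 3, 5, 8, 10]                            # hypothetical sensor subset
print(mac_matrix(phi[selected, :]))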
Health monitoring and fault diagnosis of industrial robots are essential for safe and reliable operation, and a practical sensor layout is essential for fault diagnosis and other work. From the above situation, it can be concluded that most of the existing sensor placement methods are based on an optimization algorithm or optimization matrix, and the objective function cannot meet the functional characteristics of industrial robots. Furthermore, as flexible equipment, the operational characteristics and the error problem in the process of sensor signal acquisition need to be considered. To solve the sensor placement problem of industrial robots, we emphasize the importance of joint state information acquisition in this paper, and the optimal sensor placement method for joint state information acquisition and its corresponding theoretical framework are proposed. By modeling the error between the theoretical velocity and the actual velocity with a probability distribution and evaluating the posterior probability of the joint motion, the evaluation function of the sensor placement based on relative entropy is derived. The constraint function is established by using the modal confidence matrix of different sensor layouts. Finally, the evaluation function is combined with the constraint function as the objective function of the sensor placement. To the best of our knowledge, this is the first time such a method has been applied in the field of industrial robot fault diagnosis and health assessment. The motion state of the joints can be obtained more effectively by this method, which is of great significance to the fault diagnosis and health assessment of industrial robots. Sensor Placement Method First, the kinematic and dynamic simulation of the industrial robot is considered in this study. Through the simulation, the deformation nephogram of the industrial robot and the velocity distribution at different positions can be obtained. The position of the sensor placement is determined by the Bayesian optimal design, which is realized by maximizing the information gain of the joint motion state information. Derivation of Forward Kinematics of Industrial Robots The mechanical arm of the six-axis industrial robot used in the simulation in this paper is assembled from a series of connecting rods, so a corresponding coordinate system should be constructed to express the robot. At present, the two commonly used link coordinate system construction methods in robotics are the standard and the improved D-H coordinate system. Among them, the improved D-H refers to adding a new parameter on the basis of the standard four parameters, through which the singularity that occurs when adjacent connecting rods are in a parallel relationship can be resolved. Since the general six-axis industrial robot does not have parallel links, the improved D-H coordinate system that uses more parameters is not used. On the other hand, for the standard D-H coordinate system, there are also two different establishment methods. The first is that the origin of the coordinate system o i−1 coincides with joint i; correspondingly, the second is that the coordinate system o i−1 coincides with joint i − 1. Due to the problem of the tree structure, the first coordinate system will be ambiguous when dealing with it. Considering the diversity of industrial robots, the second method of establishing the system was chosen after a comprehensive comparison.
The chosen method of establishing the system is described in detail below, and the MATLAB model effects corresponding to the two coordinate systems are given. The establishment method of the connecting rod coordinate system o i x i y i z i is shown in Table 1: when the axis of joint i is out of plane with the axis of joint i + 1, the origin is taken at the intersection of the common perpendicular of the two axes with the axis of joint i; the x i axis lies on the common perpendicular of links i and i + 1, and its direction is from i to i + 1; when the axis of joint i is parallel to the axis of joint i + 1, the origin is taken at the intersection of the common perpendicular of the two joint axes with the axis of joint i. The coordinate system fixed on the robot base (link 0) is coordinate system {0}. This coordinate system has the property of being invariant and can be used as a reference. The reference coordinate system {0} itself is not specially set, but considering the subsequent calculation, the reference coordinate system {0} is set to coincide with the coordinate system {1}. For the revolute joint n, θ n is set to 0. According to the coordinate system establishment criteria described in this paper, the connecting rod coordinate system of the six-axis industrial robot is solved, as shown in Figure 1. Considering the established six-axis industrial robot model, the model parameters need to be calculated according to the specified connecting rod parameter calculation method: 1. Link length a i : along the x i axis, the distance from z i−1 to z i ; 2. The torsion angle α i of the connecting rod: the angle from z i−1 to z i around the x i axis; 3. Link offset d i : along the z i axis, the distance from x i−1 to x i ; 4. Joint angle θ i : the angle of rotation from x i−1 to x i around the z i axis. For a rotary joint, due to its joint characteristics, the link length a i , link torsion angle α i and link offset d i are fixed, and the joint angle θ i is the joint variable. For the six-axis industrial robot model shown in Figure 1, the corresponding link D-H parameters are calculated, as shown in Table 2. Through the D-H parameters of the industrial robot in the table, the configuration of the connecting rod of the industrial robot can be described, which can provide calculation support for the subsequent kinematics analysis and provide importable related parameters for the subsequent kinematics and dynamics simulation.
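As an illustration of how a single link transform is formed from the four parameters just listed, and how the per-link transforms are chained for the six-axis arm, a minimal Python sketch follows; it assumes the standard D-H convention described above, and the numeric values at the bottom are placeholders rather than the actual Table 2 parameters.

import numpy as np

def dh_transform(a, alpha, d, theta):
    # Homogeneous transform between adjacent link frames for one set of standard
    # D-H parameters: link length a, twist alpha, offset d, joint angle theta.
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(dh_table, joint_angles):
    # Chain the per-link transforms from the base to the last link of a six-axis robot.
    T = np.eye(4)
    for (a, alpha, d), theta in zip(dh_table, joint_angles):
        T = T @ dh_transform(a, alpha, d, theta)
    return T

# Placeholder (a, alpha, d) constants per link, NOT the robot's actual D-H table.
dh_table = [(0.00, np.pi / 2, 0.40), (0.35, 0.0, 0.0), (0.04, np.pi / 2, 0.0),
            (0.00, -np.pi / 2, 0.35), (0.00, np.pi / 2, 0.0), (0.00, 0.0, 0.08)]
print(forward_kinematics(dh_table, np.zeros(6)))

Evaluating the chained transform at a given set of joint angles provides the pose information needed for the velocity calculations that follow.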
i−1 T i is defined as the homogeneous transformation from the coordinate system {i} to the coordinate system {i − 1}. According to the description method of the homogeneous transformation of the robotic transformation matrix, i−1 T i can be obtained according to the principle from left to right, i.e., i−1 T i = Rot(z, θ i ) Trans(z, d i ) Trans(x, a i ) Rot(x, α i ). The parameters corresponding to each joint are brought in, and the motion transfer matrix of each joint is solved, where s i = sin θ i , c i = cos θ i . After obtaining the transformation matrices of the connecting rod coordinate systems, the kinematic equation of the industrial robot is derived. The overall transfer matrix is 0 T 6 , and its expression is 0 T 6 = 0 T 1 1 T 2 2 T 3 3 T 4 4 T 5 5 T 6 . Numerical Method of Velocity To obtain the theoretical velocity value of the placement position, we first have to study the relevant knowledge of robotics. To calculate the velocity value of a precise position, we first need to calculate the velocity of the robot connecting rod, that is, the velocity transfer formula between connecting rods, i+1 v i+1 = i+1 R i ( i v i + i w i × i+1 P i ), where i+1 v i+1 is the velocity of the i + 1 connecting rod relative to the {i + 1} coordinate system, i v i is the velocity of the origin of the coordinate system {i} relative to the coordinate system {i}, i w i is the angular velocity of the connecting rod i relative to the coordinate system {i}, i+1 R i is the rotation transformation matrix from the coordinate system {i} to the coordinate system {i + 1}, which is the upper-left 3 × 3 submatrix of i+1 T i , and i+1 P i is the position of the i + 1 connecting rod relative to the ith connecting rod. By introducing the parameters of each link into the expression, a theoretical solution for the speed of each link can be obtained. In the actual sensor layout, the speed of each measuring point is not the same, so the corresponding speed needs to be calculated for each specific measuring point. Therefore, based on the calculation formula for connecting rod speeds given by robotics, and according to the distance between each point and the adjacent joints, the speed calculation method of a specific measuring point is given. First, the velocity of each joint or connecting rod relative to its coordinate system {i + 1} is calculated according to the formula. Then, the velocity of the point relative to the coordinate system {i + 1} is calculated according to the distance from the point to the joint: i+1 v p = i+1 v i+1 + i+1 w i+1 × d, where i+1 v p is the velocity of the sensor point p relative to the coordinate system {i + 1}, i+1 v i+1 is the velocity of the origin of the coordinate system {i + 1} relative to the coordinate system {i + 1}, i+1 w i+1 is the angular velocity of the connecting rod i + 1 relative to the coordinate system {i + 1}, and d is the position of the sensor point relative to the origin of the coordinate system {i + 1}. Finally, the velocity in the base coordinate system is obtained through the corresponding transfer matrix, 0 v p = 0 R i+1 · i+1 v p . The mode shapes required for sensor placement can be obtained, for example, through finite element models [25]. Considering that, before the optimization of the sensor location, there are many measuring points available for the actual industrial robot, to determine a more reasonable initial layout scheme, a modal simulation of the industrial robot is carried out to analyze its deformation under the influence of vibration. The parameters of the industrial robot are shown in Table 3. The modal simulation of the model is carried out using ANSYS Workbench software. The first six modal shapes of the model are shown in Figure 2.
Figure 2. The first six modes of the industrial robot: (a) 1st order mode shape, (b) 2nd order mode shape, (c) 3rd order mode shape, (d) 4th order mode shape, (e) 5th order mode shape, and (f) 6th order mode shape. The simulation parameters are as follows: 254,880 divided nodes, 150,163 divided units, and the bottom of the base is set as the fixed support. The maximum natural frequencies and relative amplitudes of the first six orders of the whole machine are shown in Table 4. The results of the modal analysis are as follows: First-order vibration: The natural frequency is 12.6 Hz, and the maximum amplitude is 0.11 m. The deformation mainly shifts along the x-axis and rotates around the z-axis. The closer to the end, the greater the amplitude. Second-order vibration: The natural frequency is 20.0 Hz, and the maximum amplitude is 0.09 m. The deformation mainly moves along the y-axis and rotates around the x-axis. The closer to the end, the greater the amplitude. Third-order vibration: The natural frequency is 30.0 Hz, and the maximum amplitude is 0.10 m. The main deformation is the torsion of the main arm around the y-axis, with a large amplitude at the elbow and end. Fourth-order vibration: The natural frequency is 70.3 Hz, and the maximum amplitude is 0.12 m. The main deformation is that the end of the main arm swings around the z-axis, and a larger amplitude is concentrated in the elbow and forearm. Fifth-order vibration: The natural frequency is 116.7 Hz, and the maximum amplitude is 0.15 m. The main deformation is the rotation of the arm and elbow around the z-axis, and a larger amplitude is concentrated in the elbow. Sixth-order vibration: The natural frequency is 338.3 Hz, and the maximum amplitude is 0.14 m. The main deformation is that both ends of the jib swing around the y-axis, and a larger amplitude is concentrated at the two ends and the middle of the forearm. According to the simulation results of the modal analysis, there is a large amount of vibration deformation at the forearm, elbow, and related joints of the manipulator. When considering the initial location, the end of the forearm and the elbow should be considered. Therefore, the initial placement of the sensors is selected as shown in Figure 3.
Optimal Sensor Placement Based on Bayesian Theory In the existing practical working environment, the sensor placement of industrial robots relies more on work experience. There is no complete theoretical system to guide the sensor placement for industrial robot running state monitoring, which cannot effectively improve the efficiency and accuracy of domestic robot fault diagnosis and predictive maintenance. Moreover, the sensor placement data of industrial robots are noisy, can easily result in a heavy data-processing burden, and do not readily permit effective data collation. Therefore, it is necessary to quantify the advantages and disadvantages of the sensor layout, reduce the number of points, and find the optimal placement scheme through theoretical analysis. Bayesian Estimation of the Moving Joint Position Whether the running state of the industrial robot can be well expressed is an important index for measuring the sensor placement of the industrial robot, and the joint motion state of the industrial robot is the main component of the running state of the robot. First, according to the operational characteristics of the industrial robot, an appropriate event is established, and the corresponding probability distribution is given. Then, assuming that the running joint of the industrial robot in the current state is r and that the running joint is the event A, the uncertainty of event A is quantified by the probability distribution, which is updated according to the data measured by the sensors arranged on the industrial robot. If a sensor arranged on the industrial robot can detect the motion of its position due to the motion of joint r, it can detect the number of the moving joint, and then it can correctly judge event A. Therefore, the optimal sensor placement problem can also be understood as follows: the sensor placement we determine should allow the best estimation of event A (the number of the joint in motion). Since the model of the industrial robot has been established in the previous section, and the theoretical calculation formula of the industrial robot speed has been given, now suppose that the distance from the manipulator to the origin of the base is taken as the coordinate of sensor placement, and a reasonable initial sensor layout is determined according to the simulation results in Section 2.2. A corresponding acceleration sensor is arranged at each position to obtain the acceleration of the point, and then the acceleration is integrated to obtain the corresponding velocity. V (r; s) is the predicted value of the velocity measured at the point s, which is obtained by calculating the theoretical velocity at s when joint r moves. Moreover, assuming that the prior distribution of event A exists and is known, let the prior probability distribution be p(r). Then, when the measured value V t of the sensor is known, the posterior distribution p(r|V t , s) of event A can be determined. According to the Bayesian principle, the posterior distribution p(r|V t , s) is proportional to the product of its prior distribution p(r) and the likelihood p(V t |r, s), namely p(r|V t , s) ∝ p(r)·p(V t |r, s). The likelihood represents the probability that the measured value V t comes from the real moving joint r for a given sensor placement s.
Since there is an error between the real measured value and the theoretically calculated value, assuming that the error is ε(s), the relationship among them is V t = V (r; s) + ε(s). The principle of maximum entropy is a criterion for selecting the distribution of random variables whose statistical characteristics are most consistent with the objective conditions; among the admissible distributions, it selects the one with maximum entropy. In the discrete case, the entropy of the equiprobability model is the maximum, but the detection of the joint motion state of the industrial robot is not an equiprobability model. Therefore, the discrete model does not meet the requirements. The multivariate Gaussian distribution is the most natural expression of a random variable about which little is known: when the mean and covariance are fixed, the normally distributed random variable has maximum entropy. Therefore, it can be assumed that the error in the above formula conforms to this definition and that ε(s) obeys a multivariate Gaussian distribution with a mean value of 0 and a specific covariance matrix. Therefore, according to the error formula between the theoretical velocity and the real velocity, the likelihood function p(V t |r, s) of the real velocity should obey the multivariate Gaussian distribution with the mean value V (r; s) and the specific covariance matrix, expressed as p(V t |r, s) = N(V t ; V (r; s), Σ(s)). Optimal Sensor Placement for Industrial Robots Based on Information Gain The optimal sensor placement problem is to find the sensor positions that can obtain the most information about the joint position. The information gain can be measured by the Kullback-Leibler divergence between the prior distribution and the posterior distribution. The utility function, defined as the expected value of the Kullback-Leibler divergence over all possible measurements, U(s) = E V t [D KL (p(r|V t , s) ‖ p(r))], is maximized to determine the optimal sensor placement. In the evaluation function, p(r) is a known distribution, while p(r|V t , s) and p(V t |s) are unknown distributions, so it is necessary to use the Bayesian principle to transform the evaluation function. According to the Bayesian formula, the relationship between two conditional probabilities is P(A|B) = P(B|A)·P(A)/P(B). According to this formula, p(r|V t , s) = p(V t |r, s)·p(r)/p(V t |s), where event A and the sensor layout s are assumed to be independent, so that p(r|s) = p(r). Through this transformation, the evaluation function is rewritten in terms of p(r), p(V t |r, s), and p(V t |s). In the current evaluation function, p(r) is a known prior distribution, p(V t |r, s) has been mathematically expressed by the multivariate Gaussian distribution, and only the distribution p(V t |s) is unknown. When the probability distribution p(V t |r, s) of the measured velocity is known, the mathematical expression of the p(V t |s) distribution is obtained by integrating over the joint variable r [26]: p(V t |s) = ∑ i=1,...,N r p(V t |r i , s)·p(r i ), where N r is the number of joint positions and, for i = 1, . . . , N r , the integral over r is approximated by the points r i . With this substitution, the evaluation function can be expressed entirely through known quantities, and Monte Carlo sampling over the measured velocities can be used to estimate it, where N V t is the number of initial layouts of vibration sensors. Thus far, the sensor placement evaluation function that can be expressed by a known mathematical formula has been obtained. After obtaining the evaluation function, the theoretical derivation of the sensor optimal placement model is complete.
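The computation described above can be sketched in Python as follows; this is a minimal illustration assuming a shared error covariance for all joints, and the function and variable names (expected_information_gain, v_pred, sigma, v_samples) are illustrative rather than the paper's Equations (16)-(19).

import numpy as np
from scipy.stats import multivariate_normal

def expected_information_gain(v_pred, sigma, prior, v_samples):
    # v_pred   : (N_r, N_s) theoretical velocity at the N_s measuring points for each joint r_i
    # sigma    : (N_s, N_s) covariance of the measurement error eps(s)
    # prior    : (N_r,) prior probability p(r_i) that joint r_i is the moving joint
    # v_samples: (N_k, N_s) measured velocity vectors V_t used as Monte Carlo samples
    gains = []
    for v_t in v_samples:
        like = np.array([multivariate_normal.pdf(v_t, mean=m, cov=sigma) for m in v_pred])
        like = np.maximum(like, 1e-300)            # guard against numerical underflow
        evidence = np.sum(like * prior)            # p(V_t | s) = sum_i p(V_t | r_i, s) p(r_i)
        post = like * prior / evidence             # Bayes: p(r | V_t, s)
        gains.append(np.sum(post * np.log(post / prior)))  # D_KL(posterior || prior)
    return float(np.mean(gains))                   # Monte Carlo estimate of U(s)

A candidate layout is scored by restricting v_pred, sigma, and v_samples to the columns that correspond to its measuring points and comparing the resulting gains.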
Then, the initial placement points need to be imported to calculate the corresponding optimal placement. In practical applications, due to the large number of sensor locations, it is unrealistic to calculate the evaluation function for all combinations of different numbers of sensors. Moreover, it is not easy to obtain the global optimum by using optimization algorithms such as genetic algorithms, and the number of optimization runs, that is, the hyperparameters, needs to be given. In this case, a more efficient method should be considered for multi-point optimization. Since the evaluation function of the optimization system has been given, and the data collection of the sensors can be considered independent of each other, the heuristic sequential sensor placement method is considered, and the evaluation function is used to arrange the sensors iteratively, one sensor at a time. First, the maximum evaluation function value of each initial location is obtained under the real speed and theoretical speed, and the corresponding initial location is the best location s 1 of the first sensor. By using the heuristic sequential placement method, the first sensor's optimal placement s 1 and each remaining initial sensor placement are combined to obtain the placement combinations (s 1 , s i ). The maximum evaluation function values of the different combinations of distribution points are calculated, respectively, the combination with the largest value is taken as the optimal combination of distribution points, and the second sensor's optimal placement s 2 is obtained. Then, s 1 and s 2 are combined with the remaining points to obtain the placement combinations (s 1 , s 2 , s i ). The maximum evaluation function values of the different combinations of sensor points are calculated, respectively, the corresponding combination (s 1 , s 2 , s i ) with the largest value is taken as the optimal combination of sensor points, and the third sensor's optimal placement s 3 is obtained. The above steps are repeated until the number of optimized sensors reaches the preset number or the difference between the current evaluation function value and the previous evaluation function value is less than the set threshold; then, the optimal sensor points are obtained. Constraint Equation After the initial selection of the optimal layout is completed through the optimal layout model, considering the structural characteristics of the industrial robot, it is necessary to further optimize the completed optimal layout by using a redundancy index, so that the minimum number of sensors is used to represent the overall state of the industrial robot. The most common method is to carry out an overall modal analysis and calculate the MAC matrix (modal confidence matrix) of different points. The maximum value of the off-diagonal elements in the MAC matrix is minimized as the constraint function of the subsequent optimization algorithm. Finally, the global optimization algorithm is used to find the optimal layout. The calculation formula of the MAC matrix is MAC(i, j) = (φ i^T φ j)^2 / [(φ i^T φ i)(φ j^T φ j)], where φ i and φ j are the ith and jth order vectors of the modal matrix, evaluated at the degrees of freedom corresponding to the N sensors. The off-diagonal elements of the MAC matrix represent the intersection angles of the corresponding modal vectors i and j.
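The MAC matrix and the constraint value can be computed, for example, as below; this follows the standard MAC definition, with phi restricted to the degrees of freedom covered by the layout under evaluation.

import numpy as np

def mac_matrix(phi):
    # phi: (n_dof, n_modes) mode shape matrix sampled at the sensor degrees of freedom.
    # Returns the modal assurance criterion matrix with entries in [0, 1].
    num = (phi.T @ phi) ** 2
    norms = np.sum(phi ** 2, axis=0)
    return num / np.outer(norms, norms)

def max_off_diagonal(mac):
    # Constraint value M: the largest off-diagonal MAC entry for a given layout.
    m = mac.copy()
    np.fill_diagonal(m, 0.0)
    return float(m.max())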
The smaller the off-diagonal elements of the modal confidence matrix, the better the independence of the calculated mode shapes and the better the effect of the sensor configuration. On the contrary, the greater the correlation of the calculated mode shapes, the worse the effect of the sensor configuration. Therefore, the maximum value of the off-diagonal elements of the modal confidence matrix can constrain the evaluation function. In addition, to ensure the unity of dimensions, it is necessary to standardize the constraint value and the evaluation value, respectively; z-score standardization can eliminate the influence of dimensions. The final objective function value is obtained by subtracting the standardized constraint value from the standardized evaluation value, i.e., F(s) = z[U(s)] − z[M(s)], where M is the maximum value of the off-diagonal elements for each layout and z[·] denotes z-score standardization. Verification Method for Layout Because the sensor layout of industrial robots has not yet formed a complete theoretical system, the judgment basis of the sensor layout should be given according to the above probabilistic method. The flow chart of the layout verification method is shown in Figure 4. 1. According to the given initial positions, sensors are arranged at the corresponding positions of the real industrial robot; 2. The joint speed of the industrial robot is set to a fixed speed, the industrial robot moves accordingly, and the acceleration at the sensor positions is collected during the motion; 3. The acceleration signal is processed to obtain the velocity of each point, and the probability of motion of each joint is calculated by using the probability model; 4. The probability of each joint is compared with the real moving joint to determine whether the maximum probability corresponds to the real moving joint. If so, it is considered that the sensor layout can capture the whole-machine state. In this paper, two types of experiments are conducted, for industrial robots with and without models; each type of experiment acquires a series of measured experimental data through multi-channel sensors, and the data pre-processing and the sensor-optimized layout are calculated in MATLAB from the measured experimental data. Algorithm 1 is the pseudocode of the sensor-optimized layout algorithm. Algorithm 1: Optimal sensor placement based on Bayes and the constraint equation. Input: the measured velocity V t and the predicted velocity V (r; s) of the sensor. Output: the optimal sensor placement s_best. The pseudocode initializes rn = 3; for the first sensor (sn = 1), it loops over i = 1 to rn and j = 1 to le and computes the likelihood p(i, j) according to Equation (16); after the iterative placement, s_best = s_best(n_best) returns the optimal sensor placement. In the pseudocode, rn is the number of joints; s is the initial sensor placement; sn is the number of sensors currently optimized; le is the number of sensors in the initial layout; p(i, j) is the likelihood function; U(i, j) is the sensor placement evaluation function according to Equation (19); Us(i) is the sum of the regression values of each sensor coordinate; snew is the new sensor placement based on heuristic sequential sensor placement; and n_best is the optimal number of sensors according to the change of the objective function.
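An interpretive Python sketch of the heuristic sequential loop that Algorithm 1 outlines is given below; gain_fn and mac_fn stand in for the evaluation function of Equation (19) and the MAC constraint above, and the loop structure is an assumption rather than a line-by-line transcription of the pseudocode.

import numpy as np

def zscore(values):
    # Standardize a list of scores so that the gain and the constraint share one scale.
    x = np.asarray(values, dtype=float)
    s = x.std()
    return (x - x.mean()) / s if s > 0 else np.zeros_like(x)

def greedy_placement(candidates, gain_fn, mac_fn, n_sensors):
    # candidates: indices of the initial layout points.
    # gain_fn(subset) -> information-gain value U for that subset of points.
    # mac_fn(subset)  -> max off-diagonal MAC value M for that subset.
    chosen, remaining = [], list(candidates)
    for _ in range(n_sensors):
        trials = [chosen + [c] for c in remaining]
        objective = zscore([gain_fn(t) for t in trials]) - zscore([mac_fn(t) for t in trials])
        best = int(np.argmax(objective))            # keep the point with the best F(s)
        chosen.append(remaining.pop(best))
    return chosen

In practice the loop would also stop early once the improvement of the objective between iterations falls below the threshold mentioned above.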
Verification of Optimal Sensor Placement for Industrial Robots Based on the Simulation Model To verify the effectiveness of the sensor placement method proposed in this paper, we carry out a single joint motion verification experiment for a six-axis industrial robot. The experimental scene is shown in Figure 5. To obtain the real speed, we combined the measured acceleration with the theoretical speed as noise. Moreover, because the frequency of the real collected signal is generally high, it is not advisable to reduce all acceleration points to one dimension. At the same time, bringing all acceleration points into the calculation would lead to too much computation. Therefore, an appropriate dimension reduction method is needed to process the original signal. Considering the signal preprocessing method of the vibration signal, we select the typical time-domain characteristics of the vibration signal. The time-domain characteristics of the input vibration signal are taken as the input of the probability model for calculation. The selected features should reflect the amplitude and fluctuation characteristics of the signal. Therefore, the selected time-domain characteristics include the mean value, root mean square value, absolute mean value, skewness, kurtosis, and variance. The above-mentioned original signals are divided into two groups, 40,000 sampling points are taken for each group of original signals, the time-domain signal characteristics of the two groups of signals are calculated, respectively, and the real speed is combined with the time-domain characteristics as the overall input. The input is shown in Figure 6. The signal features of the three joints in motion are input into the probability model program.
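A minimal sketch of this feature extraction, assuming one acceleration channel per measuring point and the 40,000-sample grouping mentioned above:

import numpy as np
from scipy.stats import skew, kurtosis

def time_domain_features(signal, window=40000):
    # Split an acceleration record into consecutive windows and compute the six
    # features named above: mean, RMS, absolute mean, skewness, kurtosis, variance.
    feats = []
    for start in range(0, len(signal) - window + 1, window):
        x = np.asarray(signal[start:start + window], dtype=float)
        feats.append([x.mean(),
                      np.sqrt(np.mean(x ** 2)),
                      np.mean(np.abs(x)),
                      skew(x),
                      kurtosis(x),
                      x.var()])
    return np.array(feats)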
The optimal layout evaluation is calculated iteratively, and the corresponding redundancy is calculated. The optimal layout order is [0.7, 1.47, 0.7, 0.8, 1.18, 0.8, 1.56, 1.47, 1.37, 1.37, 1.18, 0.8, 0.5, 0.5, 0.2, 0.2]. By observing the distribution of objective function values, as shown in Figure 7, and considering the degrees of freedom of the industrial robot, ten sensors are selected to be arranged to form the optimal sensor layout. According to the process shown in Section 3.1, the second and third joint motions are judged by probability, respectively, and the optimal layout, empirical layout, and uniform layout are selected for comparison. According to the results in Figure 9, in the layout verification experiment with the simulation model, the empirical layout and the optimal layout have a higher discrimination effect. In the judgment results of the third joint motion, the empirical layout shows a better discrimination effect than the optimal layout. In general, both the empirical layout and the optimal layout perform well in the layout verification experiment with the model. However, due to the complex process of model simulation and kinematic analysis, this method cannot be applied to large-scale industrial scenes. Optimal Sensor Placement Method and Verification of Industrial Robots without Simulation Models In the implementation process of the sensor layout optimization method, first, the kinematics and dynamics simulation analyses are carried out based on the simulation model and the D-H parameters, and the theoretical speed is taken as the theoretical value of the probability model. The subsequent optimization is carried out through the error distribution. However, in the actual scene, in some cases, the simulation model of the industrial robot is not easy to obtain, and the calculation process based on simulation analysis is more complex, which is not suitable for scenes that need to conclude the sensor placement quickly. Therefore, based on the defects of the above placement method, a sensor-optimal placement method that discards the simulation model is considered. Considering that the calculation of the theoretical velocity serves the quantization of the error distribution, in the new layout method the acceleration signal collected by the sensor is directly taken as the real value. Because of the uniform motion of each joint, the theoretical acceleration value of each layout point is 0, so the error between the two is the real acceleration signal collected by the sensor. According to the maximum entropy principle, the acceleration signal collected by the sensor includes the interference of the theoretical acceleration value with the environmental noise and the structure, in which the environmental noise and the structural interference can be regarded as random variables; that is, the error also conforms to the multivariate Gaussian distribution.
Hence, the subsequent optimization process is the same as above. The industrial robot without the 3D model used in the experiment is also a six-axis industrial robot, as shown in Figure 10. The original signals are divided into two groups, 40,000 sampling points are taken for each group of original signals, and the time-domain signal characteristics of the two groups of signals are calculated. The results are shown in Figure 11. By observing the distribution of objective function values, as shown in Figure 12, and considering the degrees of freedom of the industrial robot, ten sensors are selected to be arranged. The optimal sensor layout is [1.56, 1.37, 1.18, 0.8, 0.8, 1.47, 0.7, 1.18, 0.8, 1.37]. The corresponding objective function value is 2.716. The empirical layout and uniform layout are shown in Figure 13. According to the process shown in Section 3.1, the second and third joint motions are judged by probability, respectively, and the optimal layout, empirical layout, and uniform layout are selected for comparison. According to the results in Figure 14, the uniform layout and the optimal layout have an excellent distinguishing effect, and the empirical layout has a high distinguishing effect. According to the maximum entropy principle, when the peak power is limited, a random variable with a finite domain has maximum entropy when it is uniformly distributed.
Therefore, uniform distribution has a good effect when the simulation model is unknown. In general, for the layout verification experiment without a simulation model, the optimal layout still has obvious advantages. Considering the calculation process of optimal placement, we take the acquisition of the joint state signal as an important basis. The objective function of the optimization is quantified by relative entropy so that the optimal placement can effectively obtain the motion state of the robot. Result In the experiment of this paper, a verification experiment of the optimal layout method of industrial robot sensors is completed. According to the connection of the joint motion of the industrial robot, the sensor layout verification method of the industrial robot is first designed: the probability of each joint is compared with the real moving joint to determine whether the maximum probability corresponds to the real moving joint, and joint 2 and joint 3 are determined as the judgment kinematic joints (the two joints move in the same plane). For the industrial robot with a simulation model, its optimal layout is obtained, and the optimal layout, empirical layout, and uniform layout are used to verify the layout of the industrial robot sensors. It is calculated that, compared with the empirical layout, the optimal layout has a probability increase of 0.0244 and a probability decrease of 0.0399, respectively, and, relative to the uniform layout, probability increases of 0.1869 and 0.0339, respectively. For the industrial robot without a simulation model, its optimal layout is obtained. Compared with the empirical layout, the optimal layout has a probability increase of 0.2693 and 0.2630, respectively, and a probability increase of 0.0277 and 0.0255, respectively, compared to the uniform layout. Based on the above experimental conclusions, the effectiveness of the optimal layout method for industrial robot sensors can be proved, and the applicability of the method in the industrial field is proved. Conclusions This paper studies the optimal placement method of sensors to obtain better data sources for the health assessment and fault diagnosis of industrial robots. The work in this paper can be summarized as follows. Combining the 6-DOF industrial robot speed calculation formula with Bayesian optimization, taking redundancy as the constraint, and finally determining the layout method of the industrial robot acceleration sensor.
Considering the importance of joint motion state information for the health assessment and fault diagnosis of industrial robots, we want this layout method to capture joint motion state information to the greatest extent. To obtain the initial sensor layout, we carry out the modal simulation of the industrial robot model. At the same time, considering the need to calculate the speed of specific points, the original speed transfer formula of the robot is rewritten, and the speed calculation formula of specific points is obtained.
Considering that the evaluation function cannot give the number of sensors, the modal confidence matrix is improved by using the kinematic characteristics of the industrial robot, and the improved modal confidence matrix is used to constrain the evaluation function. For different experimental objects, verification experiments with models and without models are carried out. Compared with the current common layout and uniform layout, the optimized layout obtained by this method can capture joint state information more effectively. The effectiveness of the optimal layout is verified by a model-based sensor optimal layout experiment, which verifies the effectiveness of the optimization method. The sensor optimal layout experiment realizes the extension of the layout method without a model. It is also proved that the optimal layout of the sensor depends on the structure of the industrial robot itself and the real signal collected by the sensor. The layout is evaluated by the error between the real signal collected by the sensor and the theoretical calculation value, and redundancy is taken as the constraint function of the layout by using part of the results of the modal simulation. Finally, the optimal layout objective function of the industrial robot acceleration sensor is obtained, which can provide better data sources for health assessment and fault diagnosis of industrial robots.
2022-06-18T15:21:39.415Z
2022-06-15T00:00:00.000
{ "year": 2022, "sha1": "15220c072d305382566c0c5c88047005bde6d8f4", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/12/12/6086/pdf?version=1655295990", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "d27f4e8754717466a1be8e42fc83515d0908ec19", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [] }
219162136
pes2o/s2orc
v3-fos-license
THE EFFECT OF FUEL MIX, MODERATED BY INDONESIA CRUDE PRICE AND FOREIGN EXCHANGE, AND POWER LOSSES ON PROFITABILITY OF PT PLN (PERSERO) Like any other corporate, state owned enterprises (SOE) main objective is to achieve optimum profit to ensure its sustainability. In Indonesia, electricity business in run by PT PLN (Persero), a corporate owned 100% by Government of Indonesia. Thus, PLN should maximize its effort on cost reduction and sales volume optimization to ensure its profitability growth. Looking at the structure of operational cost and revenue, this study aims to explore the effect of fuel mix and power losses on profitability (operating profit/loss) of PLN, where the moderating role of Indonesia crude price (ICP) and exchange rate on fuel mix effect toward profitability also take into account. By using moderated regression analysis (MRA), this study found out that the effect of fuel mix on profitability, and power losses on PLN’s profitability exist negatively. ICP and exchange rate also strengthen the effect of fuel mix on PLN’s profitability. This study suggested PLN to continue exploring ways to optimize its fuel mix and power losses, as well as increasing sales volume. This could not be done without Government intervention that favors PLN i.e., tariff adjustment or incentives, especially when exchange rate and ICP value worsen. This is to ensure that PLN has a healthy financial sustainability. INTRODUCTION Electricity has now shifted to be primary need in our daily life. Every activity needs electricity. The urgency could also be seen in the increasing demand for electricity in public spaces. Moreover, as an infrastucture that affect lives of many people, electricity is not only needed for supporting daily activities, but also as a business driver that supports the growth of economy. It is seen as the mother of investment climate, where its availability, quality and reliability effects the investment significantly (Fontana and Arifin, 2016). Considering its great economic impact, it is not surprising that major electricity business in Indonesia is run by state owned enterprises (SOE). PT PLN (Persero), further refers to as PLN, is a SOE that runs its business in electricity where its ownership belongs 100% to Government of Indonesia. Its line of business includes generation, transmission, distribution, and other services related to electricity that run by 7 regional operating segments and 11 subsidiaries. business with profit maximization as its foundation. A company with big profit indicates good efficiency in terms of operation and invesment, and a company with adequate profit could survive when economic situation worsens and financial pressure increases (Khan, 2017). Profitability in this regard is not narrowly defined as a tool to dividend payment (as this is also PLN obligation) as argued by Gantino and Iqbal (2017); Azmal et al. (2019), that it effects yet significant toward dividend payment policy, as the company use the profit for business expansion purpose. In this regard, to achieve optimum profit, PLN profitability will grow up if electricity sales revenue increases and cost of production reduces (Fontana and Arifin, 2016). Aforementioned ideal profitability concept is unfortunately not fully applied in PLN. The increasing sales of electricity increased PLN revenue, yet did not automatically increase PLN profitability because electricity selling tariff is regulated by Government. PLN electricity tariff has not risen up since 2015. 
Tariff adjustment policy for unsubsidized customers as per Peraturan Menteri Energi dan Sumber Daya Mineral Republik Indonesia No. 31/2014 should have enabled tariff adjustment to follow the economic price. In that case, the effort to increase profitability (under PLN control) should have been undertaken in terms of efficiency improvement and increasing sales volume. The analysis of cost reduction could be started by identifying PLN operating expenses, and the sales volume maximization analysis could be started through operating revenue identification. According to the PT PLN (Persero) Annual Report 2018, the biggest factor contributing to operating expenses is fuel and lubricant (around 45%). In detail, the unique factor in fuel and lubricant expenses is oil. According to PLN internal data, even though oil contributed 6.8% to the whole fuel volume, its cost contributed 23.2% to the whole fuel and lubricant expenses. This contrasts with coal, which contributed 58.3% in volume yet only 33.8% in expense. From this point of view, oil is the priciest fuel in generating electricity. PLN then should be able to control the oil expenses. PLN could control its consumption volume, known as fuel mix (total energy produced from oil-fueled generators compared to total energy produced). However, the oil price is out of PLN control, as it is affected by the fluctuation of the exchange rate and the Indonesia Crude Price (Waluyo, 2015). In this understanding, the cost reduction effort is seen from both perspectives: internal and external. The analysis of the revenue optimization effort was started by looking at the PLN operating revenue structure. Its 2018 Annual Report stated that 96% of operating revenue came from electricity sales. Sales revenue is calculated as the function of sales volume and selling tariff. The increasing sales volume was seen as the only way to generate more profit because the selling tariff must not be higher. In this regard, PLN should reduce power losses so that the total generated energy can be optimally absorbed, which contributes directly to the increase of sales volume. Previous studies on PLN profitability (at the corporate level) are very limited. In order to align with the problem identification, this study refers to the findings of several previous studies in other sectors, such as banking, transportation, energy, even agriculture, by identifying their issues. Skalsky et al. (2008) studied the effect of fuel and fertilizer on the profitability of the plantation sector. Qianqian (2011) studied the effect of oil prices on China's economy. Hoffman et al. (2018) studied the impact of oil prices on logistics transportation providers' financial capability. Wattanatorn and Kanchanapoom (2012) studied the effect of oil price on the profitability of multisectoral companies listed on the Thailand Stock Exchange. Hidayati (2014) studied the effect of inflation, the Bank Indonesia rate, and the exchange rate on the profitability of Islamic banks in Indonesia. Baum et al. (2001) studied the long-term and short-term effects of currency value on profitability. Kandir et al. (2015) studied the effect of the exchange rate on energy companies in Turkey. Gawlak studied the profitability of electricity companies. Sadugol (2012) studied the impact of system load factor improvement on the profitability of Karnataka utilities. Kwakwa (2018) studied the impact of power losses on Ghana's economy. This study further builds on them by applying and analyzing the case of PLN.
From the aforementioned background, the objectives of this study are: (1) To find out the effect of fuel mix on PLN profitability; (2) To find out the ICP role in enhancing the effect of fuel mix on PLN profitability; (3) To find out the exchange rate role in enhancing the effect of fuel mix on PLN profitability; (4) To find out the effect of power losses on PLN profitability; (5) To provide recommendation for both PLN and government as the way forward from the study. Efficiency Efficiency refers to the ratio of used resources toward production output (Kurtz and Boone, 2011). Jumono et al. (2019) argued that profitability is used to evaluate the internal performance of a company by determining whether it has succeeded in achieving its ultimate objective. Efficiency happens when human resources, business process and technology altogether utilised to increase productivity and product value of a business operation by lowering the routine operating expenses at some desired level. Operational efficiency, along with asset utilization and financial leverage, could be used to increase company value (Jumono et al., 2016). In other words, efficiency happens when lower input produces same output, or same input produces more output (Mubyarto and Hamid, 1987). According to Tasman and Hafidz (2013), the effort to optimize output with minimum input could be categorized in two: (1) Cost efficiency, related to management decision toward the allocation of overall production input under corporate control; (2) Technical efficiency, related to the corporate permanent resources. Fuel Mix In a broader context, the combination of several primary energy resources used to generate electricity within a specific area is called energy mix. The primary energy resources include fossil fuels (coal, oil, and gas), renewable energies (hydro, wind, solar, geothermal, biomass, etc) even waste. In PLN, the percentage of energy produced from oil-fueled power plant is called fuel mix. Fuel mix reducing is one of PLN strategic action toward efficiency enhancement and cost reduction. This is one of risk mitigation effort toward oil price volatility that is significantly affected by various factors out of PLN control. In other sector, such as transportation, the reduction of oil consumption is also a major issue. The rising of oil price affected public transport company where tariff adjustment must take place. For that reason, government should seek alternative transportation system with the lowest oil consumption per passenger, such as broadening railway coverage, MRT for urban transport, and highway development (Abdulkadir, 2000). Power Losses Power losses is defined as the total energy loss caused by external and internal factor that happens in the transmission line between power plant and consumers (Anumaka, 2012). Power losses could be categorized as two: technical and non-technical losses. Technical losses is identified as the energy loss when being transformated, transmitted, and distributed through conductors and appliances. Non-technical, or commerical losses, is identified as the total energy loss caused primarily because of theft or any other non-technical issues such as unpaid bills and accounting errors (Smith, 2004). Power losses could be seen from the perspective of cost reduction and sales volume optimization. In the formulation of electricity production cost, a utility could count power losses as an allowable cost and consider it as sold energy. 
However, if the actual power-losses percentage is higher than the regulated level, the utility must bear the difference and record it as a loss. Conversely, the difference between the allowed revenue (as per the regulated power-losses percentage) and total energy sales, when the actual power-losses percentage is lower than the regulated level, counts as additional profit for the company (Antman, 2009). In the context of operating revenue, the bigger the power losses, the bigger the revenue opportunity forgone, even though, technically, the total transmitted energy can never reach 100%.

Exchange Rate

The exchange rate is defined as the rate of conversion of a foreign currency into the local currency, or vice versa (Karim, 2013). The exchange rate affects the economy as the currency appreciates or depreciates. A sharp depreciation inflates a company's debt denominated in foreign currency; this condition threatens the company's financial capability and also contributes to macroeconomic instability (Salvatore, 2014). Exchange-rate fluctuations can affect a company's cost structure in its operations and production processes, where most production factors are indexed in foreign currency. The affected components include contracts, debt, and interest payments in foreign currency.

ICP

The Indonesia Crude Price (ICP) refers to the crude-oil price set by a government formula to implement oil and/or gas cooperation contracts as well as crude-oil sales on the government's behalf (Febriyanto, 2016). The primary energy price (oil) is set by reference to the Mean of Platts Singapore (MOPS), as per Presidential Decree No. 55/2005. ICP is affected by the global oil price. The first (fundamental) factor is the supply-and-demand mechanism, which includes production, stocks, refinery conditions, piping facilities, production policy, economic growth, needs, season, and the availability of alternative-resource technologies. The second (non-fundamental) factor covers issues beyond supply and demand, such as political disturbances, security, and speculation in the oil market (DPR, n.d.).

Profitability

Profitability is defined as a company's ability to generate profit (Sartono, 2014). It is an important indicator, bearing in mind that a company's main purpose is to generate maximum profit, and it reflects management's capability in running the business. Profitability can be understood as the ratio of profit gained to sales or investment: the larger the profitability ratio, the better the organization's capability to generate profit (Jamil, 2013). In general, profitability analysis aims to measure management effectiveness, with the objectives: (1) to detect the cause of profit or loss in a certain period; (2) to showcase company success in terms of management capability and motivation; (3) to forecast company profit by showing the correlation between profit and investment; and (4) to serve as a management control tool for setting targets, budgets, coordination lines, and evaluations, and as a justification in the decision-making process (Kasmir, 2013).

RESEARCH HYPOTHESIS

Fuel mix, i.e., oil consumption, affects overall operating expenses. One of the main sectors most easily affected by oil-price fluctuations is transportation. When the oil price rises, logistics tariffs mostly rise, because the previous tariff may no longer provide adequate profit for a transportation company to operate; this tariff adjustment is required to maintain the company's sustainability (Soemarno and Darza, 1990).
In other sectors, such as industry, when the oil price rises by around 29.43% and the electricity price by 12%, industrial oil and electricity expenses may rise by around 20.3% (Abdulkadir, 2000). The same applies to the air transportation industry: according to the IATA Economic Chart of the Week (2019), rising fuel prices weaken the outlook for airline profitability. This is also the finding of Skalsky et al. (2008), who stated that fuel affects profitability. Looking at PLN operating expenses, the production cost of electricity generated from coal was Rp 650, from gas Rp 945 (at an average price of US$8 per MMBTU and an exchange rate of Rp 13,500), and from oil Rp 1,600 when the oil price was Rp 6,450/l (Satrianegara, 2018). Given this comparison, if the oil price rises, PLN operating expenses will also rise significantly, eroding its operating income. From the above data and literature, the proposed hypothesis is:

H1: Fuel mix affects PLN profitability negatively.

The oil price affects every aspect of the national economy: production and consumption, cost and price, trade and investment. A rise in the oil price reduces national output (Qianqian, 2011). It also raises commodity prices, which burdens business expenses. This finding seconds the study by Wattanatorn and Kanchanapoom (2012), who found that the oil price affects company profit. According to PLN internal data, a rise in ICP of USD 1/barrel raises operating expenses by Rp 268 billion. From the aforementioned data and literature, the proposed hypothesis is:

H1a: ICP strengthens the effect of fuel mix on PLN profitability.

Indonesia adopts a free-floating exchange-rate system, meaning that if the Indonesian Rupiah weakens against the US dollar, commodity prices may rise. The exchange rate may thus also affect company profitability, specifically through export-import transactions in foreign currency and through foreign debt: Rupiah depreciation bloats a company's debt in local-currency terms (Rahardjo, 2009). This is also found in Baum et al. (2001) and Hidayati (2014), who report that the exchange rate affects profitability. Kandir et al. (2015) likewise found that exchange-rate fluctuations affect the financial capability of energy companies, even though the effect may vary. In the transportation sector during Indonesia's 1997 crisis, for example, the prices of imported goods rose, purchasing power declined, and transportation companies' revenues were no longer adequate to sustain their finances (Abdulkadir, 2000). At PLN, looking at the profit-and-loss structure, the effect of the exchange rate can be seen in the operating-expense components, specifically contract payments and purchases of primary energy (oil and coal) indexed in foreign currency. According to PLN internal data, if the Rupiah weakens by Rp 100/USD, operating expenses rise by Rp 1,263 billion. From the aforementioned data and literature, the proposed hypothesis is:

H1b: Exchange rate strengthens the effect of fuel mix on PLN profitability.

In general, increasing power losses raise utility operating expenses and production costs (Anumaka, 2012). Furthermore, Jamil (2013) explains the effects of increasing power losses: (1) losses decrease the utility's profitability; (2) losses prevent new investment that should build the company's capacity; (3) losses affect electricity reliability; and (4) losses affect the company's financial capability.
Some studies, such as Kwakwa (2018), estimate that insufficient and unreliable electricity costs the country a 2-6% loss of gross national product. Gawlak (2019) found that losses affect the profitability of electricity companies. Sadugol (2012) claimed that annual savings can be achieved through load-factor improvement, which reduces power losses in Karnataka State; he estimated the potential annual saving from load-factor improvement at Rs 220.86 crores. From the aforementioned data and literature, the proposed hypothesis is:

H2: Power losses affect PLN profitability negatively.

Based on the above explanation, the research model for this study is shown in Figure 1.

RESEARCH METHOD

This study uses a quantitative approach with the moderated regression analysis (MRA) method. According to Arikunto (2012), this approach is data-intensive, running from data collection through data interpretation to the presentation of results. The independent variables used in this study are fuel mix, power losses, the exchange rate (Indonesian Rupiah to US Dollar), and the Indonesia Crude Price; the exchange rate and ICP act as moderating variables. The dependent variable is PLN profitability, in terms of its net profit margin after interest, tax, and other financial expenses. The variables are time-series data from 2016-2018.

Measurement

To prove the hypotheses, the data were examined using classical assumption tests and the MRA test. The classical assumption tests comprised the multicollinearity test, autocorrelation test, normality test, and heteroskedasticity test. The multicollinearity test examines whether there is intercorrelation in the regression model, i.e., a strong correlation between one independent variable and the others. Intercorrelation is assessed from the correlation coefficients between independent variables, whose variance inflation factor (VIF) values must be smaller than 10 (Wijaya, 2009). The autocorrelation test examines whether serial correlation with the t-1 period exists; it can be assessed through a run test by inspecting the Asymp. Sig. (2-tailed) value, and if this is higher than 0.05, it is concluded that autocorrelation does not exist (Ghozali, 2013). The heteroskedasticity test examines whether, in the regression model, the residual variance is not constant across observations (Ghozali, 2013); the result is read from the scatterplot of the predicted value of the dependent variable (ZPRED) against its residual (SRESID), whose dots should be scattered above and below 0 on the Y-axis. The normality test, meanwhile, examines whether the data are normally distributed; it is read from the P-plot, whose dots should spread around the diagonal line and form a pattern aligned with it.

Moderated regression analysis (MRA) is a special technique for examining whether a multiple linear regression equation contains an interaction between its independent variables (Ghozali, 2013). The analysis aims to find out whether the moderating variable strengthens or weakens the effect of the independent variable on the dependent variable (Ghozali, 2013). The justification is the Sig. value, which must be smaller than 0.05.
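For readers who want to reproduce the workflow, the sketch below sets up an MRA of this kind in Python. It is a minimal illustration, not the study's actual code: the column names, the synthetic 2016-2018 monthly data, and the coefficient values are all hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical monthly panel, 2016-2018 (36 rows); names are illustrative,
# not PLN's actual variables.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "fuel_mix": rng.uniform(0.04, 0.08, 36),    # share of oil-fired energy
    "losses":   rng.uniform(0.08, 0.11, 36),    # power-losses fraction
    "icp":      rng.uniform(40, 70, 36),        # USD per barrel
    "fx":       rng.uniform(13000, 15000, 36),  # IDR per USD
})
# Toy net profit margin with negative fuel-mix and losses effects
df["npm"] = (0.2 - 2.0 * df["fuel_mix"] - 0.5 * df["losses"]
             + rng.normal(0, 0.01, 36))

# MRA: interaction terms capture the moderating roles of ICP and FX
df["fuel_x_icp"] = df["fuel_mix"] * df["icp"]
df["fuel_x_fx"] = df["fuel_mix"] * df["fx"]

X = sm.add_constant(df[["fuel_mix", "losses", "fuel_x_icp", "fuel_x_fx"]])
model = sm.OLS(df["npm"], X).fit()
print(model.summary())   # inspect Sig. (p) values against the 0.05 threshold

# Multicollinearity check: VIF for each regressor (constant excluded)
for i, col in enumerate(X.columns[1:], start=1):
    print(col, variance_inflation_factor(X.values, i))
```

In practice, mean-centering the variables before forming the interaction terms is a common way to keep the VIFs of the interaction regressors low.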
RESULTS

The outputs of the multicollinearity, autocorrelation, heteroskedasticity, normality, and MRA tests indicate that the data satisfy the requirements for testing the effect of the independent variables on the dependent variable. In the multicollinearity test, the VIF values of all independent variables were 1.000 (< 10). The Asymp. Sig. (2-tailed) value in the autocorrelation test was 0.237 (> 0.05). The scatterplot from the heteroskedasticity test indicated that the data spread randomly and did not form any special pattern, as shown in Graph 1. The P-plot from the normality test indicated that the data are normally distributed, as shown in Graph 2: the dots spread around the diagonal line and form a pattern aligned with it. The hypothesis-testing results using the MRA method are shown in Table 1.

DISCUSSION

From the H1 test result, it is found that fuel mix affects PLN profitability negatively. The fuel mix, in terms of oil consumption (volume and price), should be managed efficiently to ensure PLN profitability. This result supports the study by Skalsky et al. (2008), which stated that fuel affects profitability. Reducing oil consumption is also an important effort under the priority principle of energy development regulated in the Rencana Umum Energi Nasional (RUEN)/National Energy General Planning under Presidential Decree No. 22 Year 2017, which stipulates "minimizing the use of oil." From a cost perspective, generating electricity from an oil-fueled power plant costs six times more than from a coal-fired power plant. For that reason, PLN should consistently seek the most rational primary-energy mix by reducing oil consumption, for example through conversion programs to non-oil energy resources, ensuring the commercial operation date (COD) of non-oil power plant projects, fostering the use of biodiesel, and other relevant strategies.

The result of the H1a test shows that ICP strengthens the effect of fuel mix on PLN profitability. This supports the study of Wattanatorn and Kanchanapoom (2012), in which the oil price affects the profitability of the energy and food industries in Thailand. The volume of oil consumption should therefore be reduced to a minimum as a risk-mitigation effort against an oil price driven by ICP fluctuations. ICP fluctuations will stress PLN's financial ability if they are accompanied by Rupiah depreciation, because ICP is denominated in USD. Since ICP is not under PLN's control, PLN should pursue operational excellence to minimize its impact, and the government should provide space for PLN if ICP threatens PLN's financial capability.

The result of H1b shows that the exchange rate strengthens the effect of fuel mix on PLN profitability. This supports the studies of Baum et al. (2001) and Kandir et al. (2015), which show that the exchange rate affects profitability, and it reinforces the effort to reduce the fuel mix as part of risk mitigation against oil-price fluctuations. Under a free-floating system, it is no surprise that the exchange rate fluctuates constantly; PLN should therefore minimize its impact on PLN's financial condition. In particular, looking at the profit-and-loss structure, the exchange rate hits PLN's finances twice: in the calculation of operating expenses and in that of financial expenses.

The result of the H2 test also shows that power losses affect PLN profitability negatively.
This supports the studies of Nwanna and Oguezu (2017) and Gawlak (2019), which indicate that losses affect the profitability of the electricity industry. The effect of losses can be seen from two perspectives: its effect on the production-cost calculation, and on the optimization of electricity sales volume. If the actual power-losses percentage is above the target set by the government, PLN counts the gap as lost revenue. PLN should therefore carry out strategies to lower the loss percentage, e.g., providing additional feeders for the medium- and low-voltage networks, installing additional substations in the distribution network, reconductoring, transformer load balancing, maintaining kWh-meter measuring devices, consistently controlling illegal electricity use, intensifying the revenue assurance program, improving the procurement of main distribution materials (MDU), improving the billing management system, and ensuring the completion of power plant and transmission construction.

PLN should then put efficiency at its core, especially in providing electricity at the least cost through performance improvement. It has been argued that performance improvement aims to increase shareholder wealth, so strategies are needed to deal with environmental uncertainty. Restructuring is also essential to adapt to external challenges; Jumono et al. (2017) argued that restructuring in line with external changes can reshape the asset, financial, and profit structures.

As managerial implications of the hypothesis-test results, five issues should be highlighted: (1) low production cost should be the core output of PLN's efficiency strategy, meaning that all actions taken should aim to cost less; (2) the significant effect of oil consumption/fuel mix on PLN operating expenses indicates the urgency of a fuel-switching strategy toward cheaper energy resources and of ensuring the COD of non-oil-fueled power plants; (3) risk-mitigation strategies against ICP and exchange-rate fluctuations are urgent, including hedging, swaps, and transacting in Rupiah as per the Bank Indonesia regulation; (4) technological innovation is needed to lower the losses percentage; and (5) tariff restructuring should come with achievable targets.

CONCLUSION

The conclusions of this study are: (1) fuel mix affects PLN profitability negatively; (2) ICP strengthens the effect of fuel mix on PLN profitability; (3) the exchange rate strengthens the effect of fuel mix on PLN profitability; (4) power losses affect PLN profitability negatively; (5) PLN should optimize efforts to reduce the fuel mix and power losses to increase profitability; and (6) ICP and exchange-rate fluctuations affect PLN profitability, but since these factors are not under PLN's control, government intervention favoring PLN is needed to ensure its financial sustainability.

RESEARCH LIMITATION AND SUGGESTION FOR FURTHER RESEARCH

The factors affecting PLN profitability are not limited to fuel mix, ICP, the exchange rate, and power losses. Other factors might matter even more if the data period were expanded. As an initial study, these four factors can represent the analysis of PLN profit. To enhance accuracy, further research should also examine the effects of other factors, through quantitative or qualitative analysis.
Such work could build on this study, for example through analysis of other operating-expense components, such as personnel and administration expenses, or through analysis of the managerial implications, such as evaluating the fuel-switching or loss-reduction strategies from an investment point of view. The opportunity to explore this issue is wide open, toward a more comprehensive study.
A Case of Post-Myocardial Infarction Ventricular Septal Rupture Complicated by Postoperative Septal Rupture

We present the case of a 60-year-old man who presented with a post-myocardial infarction ventricular septal rupture caused by a delayed presentation of myocardial infarction. Despite revascularization, hemodynamic stability, and a 10-day delay until operative management to allow for tissue healing, the patient experienced a fatal recurrent postoperative ventricular septal rupture. (Level of Difficulty: Beginner.)

Symptoms had begun 5 days earlier with poor appetite, nausea, and diaphoresis. The following day, he endorsed dyspnea at rest and was evaluated at an urgent care facility. He was prescribed steroids and antibiotics for bronchitis. His symptoms persisted, and he presented to our institution. Physical examination showed a blood pressure of 109/90 mm Hg and tachypnea. An electrocardiogram on arrival revealed 2-mm ST-segment elevations with T-wave inversions in the inferior leads, with reciprocal ST-segment depressions in leads I and aVL (Figure 1). Pathologic Q waves were noted in the inferior leads. High-sensitivity troponin was 9,543 ng/L (reference range: 3-20 ng/L), which remained flat on repeat at 9,586 ng/L.

PAST MEDICAL HISTORY

The patient had no past medical history but had smoked one-half pack of cigarettes daily for the past 30 years and drank 2 to 3 liquor-containing drinks daily.

LEARNING OBJECTIVES

To identify a post-MI VSR with left-to-right shunt physiology and review the expected hemodynamic changes. To evaluate approaches to the management and repair of post-MI VSR with subsequent cardiogenic shock.

DIFFERENTIAL DIAGNOSIS

The differential diagnosis included subacute inferior ST-segment elevation myocardial infarction (MI), papillary muscle rupture, ventricular septal rupture (VSR), free wall rupture, and ventricular aneurysm.
INVESTIGATIONS

Coronary angiography revealed thrombus throughout the right coronary artery (RCA) with plaque rupture in the mid-RCA (Figure 2, Video 1). Multiple runs of aspiration thrombectomy were performed, and intravascular ultrasound was used to guide the placement of 6 overlapping drug-eluting stents throughout the RCA, which were selected because of institutional availability (Figure 3, Video 2). Because of persistent dyspnea, an intraprocedural echocardiogram was performed and demonstrated a VSR measuring 0.8 cm at the largest diameter, with a serpiginous path in the midinferoseptum, and severe right ventricular (RV) dilation. No aneurysm was visualized. The left ventricular (LV) ejection fraction was normal, and the basal, midinferior, midinferoseptal, and midinferolateral segments were hypokinetic (Figure 4, Videos 3 and 4). Following revascularization, a right heart catheterization revealed elevated filling pressures and a depressed cardiac index (Table 1). A shunt run confirmed the presence of a left-to-right interventricular shunt, with a step-up in oxygen saturation from the right atrium (50%) to the RV (93%). The shunt fraction (Qp/Qs) was 4.08.

MANAGEMENT

An intra-aortic balloon pump was placed to temporize the shunt physiology. The serum lactic acid improved from 3.4 mmol/L to 1.2 mmol/L (reference range: 0.5-2.2 mmol/L), and blood pressure stabilized within 24 hours after placement. A heart team approach was taken, and surgical repair of the VSR was planned.

Figure 2: Preintervention coronary angiography showing significant thrombus throughout the right coronary artery.

Abbreviations: LV = left ventricle; MI = myocardial infarction; RCA = right coronary artery; RV = right ventricle; VSR = ventricular septal rupture.

The patient was upgraded to an axillary LV assist device to allow greater mobility. ST-segment elevations resolved on electrocardiogram within 48 hours of revascularization, and the patient remained free of chest pain and electrically stable and did not require vasopressor support. However, he progressively developed pulmonary edema and RV dysfunction despite diuresis, so surgical repair was undertaken 10 days later (Video 5). Intraoperatively, a 3-cm defect from the RV to the LV was noted posterior to the septal leaflet of the tricuspid valve, with a large, circumferential, partial-thickness tear around the VSR extending nearly 3 cm beyond the distal margin of the defect itself (Figure 5A). A patch was sewn deeply into the endocardium of the LV, spanning the entire partial-thickness injury (Figure 5B). The tricuspid valve was replaced given the proximity to the septal leaflet. Postoperatively, there was no evidence of shunt on or off bypass. On postoperative day 1, the patient had a mucus-plugging event with hypoxia, triggering ventricular tachycardia arrest. Emergent bedside sternotomy and
cardiac massage were performed, and the patient was cannulated for extracorporeal membrane oxygenation. Repeat transesophageal echocardiogram revealed a new VSR with bidirectional flow, located near the commissure between the right coronary cusp and noncoronary cusp of the aortic valve, with severe depression of RV systolic function (Figure 6). His hospital course was further complicated by renal failure, atrial fibrillation with rapid ventricular response, and incessant polymorphic ventricular tachycardia. Ultimately, because of clinical decline, ongoing arrhythmia, and the positioning of the new VSR near the aortic valve, he was not a candidate for repeat surgical repair. The family decided to pursue a comfort-based approach, and the patient died 23 days from the index percutaneous coronary intervention.

DISCUSSION

This case highlights the high mortality from mechanical complications conferred by delayed-presentation MI. A post-MI VSR can often be diagnosed clinically by the presence of a new systolic murmur, a palpable thrill, and shock [1-4]. Even with prompt revascularization and surgical repair, post-MI VSR still carries a mortality risk of between 20% and 87% [1]. A greater delay between symptom onset and presentation for lytics or percutaneous coronary intervention also increases the likelihood of post-MI septal rupture, highlighting the importance of a prompt door-to-balloon time [5]. Guidelines recommend surgical repair when a post-MI VSR is suspected, regardless of hemodynamic stability, but this often differs from clinical practice [1]. Clinical instability often deters surgical teams from urgent operative intervention, given the higher mortality risk [7,8]. This may contribute to some degree of surgical bias, with more stable patients being selected more often for surgery to reduce negative surgical outcomes. However, more than half of patients with post-MI VSR experience cardiogenic shock requiring invasive support devices and early surgical repair, and mortality remains high [7]. Inferior-basilar septal ruptures also carry a 1.73-times higher mortality than anterior-apical defects because of the more difficult intraoperative access [7].

Figure 5: Intraoperative photos.

Table 1: Hemodynamics on right heart catheterization.
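For illustration, the snippet below evaluates the standard Fick-based shunt fraction used in a shunt run of this kind. Since the report quotes only the RA (50%) and RV (93%) saturations, the systemic arterial and pulmonary venous saturations here are hypothetical, chosen only to show how a Qp/Qs near the reported 4.08 arises; they are not the patient's measured values.

```python
def qp_qs(sao2, mv_sat, pv_sat, pa_sat):
    """Fick shunt fraction: Qp/Qs = (SaO2 - MVsat) / (PVsat - PAsat).

    sao2:   systemic arterial saturation
    mv_sat: mixed venous saturation (proximal to the shunt)
    pv_sat: pulmonary venous saturation
    pa_sat: pulmonary arterial saturation (distal to the step-up)
    """
    return (sao2 - mv_sat) / (pv_sat - pa_sat)

# RA saturation of 0.50 is from the case; the other three values are
# hypothetical illustrations.
print(f"Qp/Qs ~ {qp_qs(sao2=0.95, mv_sat=0.50, pv_sat=0.98, pa_sat=0.87):.2f}")
```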
Cluster Assembly Times as a Cosmological Test

The abundance of galaxy clusters in the low-redshift universe provides an important cosmological test, constraining a product of the initial amplitude of fluctuations and the amount by which they have grown since early times. The degeneracy of the test with respect to these two factors remains a limitation of abundance studies. Clusters will have different mean assembly times, however, depending on the relative importance of initial fluctuation amplitude and subsequent growth. Thus, structural probes of cluster age such as concentration, shape or substructure may provide a new cosmological test that breaks the main degeneracy in number counts. We review analytic predictions for how mean assembly time should depend on cosmological parameters, and test these predictions using cosmological simulations. Given the overall sensitivity expected, we estimate the cosmological parameter constraints that could be derived from the cluster catalogues of forthcoming surveys such as Euclid, the Nancy Grace Roman Space Telescope, eROSITA, or CMB-S4. We show that by considering the structural properties of their cluster samples, such surveys could easily achieve errors of $\Delta \sigma_8$ = 0.01 or better.

INTRODUCTION

The 'concordance' Λ-Cold Dark Matter (ΛCDM) cosmological model is now well established as a single theoretical framework that is consistent with many different observational tests. The present-day abundance of dark matter and dark energy and the statistical properties of the matter distribution are increasingly well constrained, as expressed by cosmological parameters with gradually decreasing uncertainties (e.g. Planck Collaboration et al. 2020). As the uncertainties in parameter values shrink, however, they reveal tension in several places in the model. Most notably, the Hubble parameter H0 appears to differ significantly between high-redshift and low-redshift tests, with the tension in independent measurements of this parameter now exceeding 4σ (see Verde et al. 2019 for a review). In addition to the H0 tension, there is also growing evidence that the amplitude of the matter fluctuations (typically expressed as σ8, the r.m.s. of fluctuations in the matter density on a scale of 8 h⁻¹ Mpc) may display a similar tension at the 2-3σ level (or ∼0.1 in this parameter; e.g. Battye et al. 2015; Douspis et al. 2019; To et al. 2021; Heymans et al. 2021). More generally, the fundamental natures of dark energy and dark matter remain unknown, raising the possibility of new, exotic physics not yet included in the standard cosmological model. Given the evidence for tension in the current results, multiple, independent tests of the standard cosmological model are needed, on different mass and length scales and at different redshifts, to either reconcile the current results, or to reveal the physical origin of the disagreements. Current and forthcoming space missions, including Euclid, the Nancy Grace Roman Space Telescope (Roman), and eROSITA (Pillepich et al. 2012), together with data from large ground-based surveys such as UNIONS (Chambers et al. 2020), DESI (DESI Collaboration et al. 2016) or Rubin LSST (LSST Science Collaboration et al. 2009), or experiments such as CMB-S4 (Abazajian et al. 2019), will provide remarkable new datasets, mapping out structure over a significant fraction of the observable universe, out to redshifts of a few.
Given, on the one hand, the enormous potential of this data, and, on the other hand, the exacting precision required to resolve current parameter tensions, there is a need for new tests of the cosmological model that make full use of our growing understanding of structure formation.

The measured abundance of massive galaxy clusters is a classic cosmological test. Cluster abundance has been estimated using samples detected in the X-ray (e.g. Henry et al. 2009; Mantz et al. 2010; Böhringer et al. 2014), via the Sunyaev-Zel'dovich (SZ) effect (e.g. de Haan et al. 2016; Planck Collaboration et al. 2020), in optical galaxy redshift surveys (e.g. Abdullah et al. 2020), in weak lensing surveys (Kacprzak et al. 2016), or using combinations of these techniques (e.g. Abbott et al. 2020; Costanzi et al. 2021). These rare objects evolve from peaks in the matter distribution present at early times, and their present-day abundance places a tight constraint on the combination σ8 Ωm^γ, where Ωm is the present-day matter density parameter, and γ ≈ 0.5 is the growth index. Cluster abundance measurements alone do not place very tight constraints on Ωm or σ8 individually, due to the degeneracy between them.

Simply counting clusters does not leverage the full potential of the underlying datasets, however. The structural properties of clusters - their projected shape, central concentration, substructure and non-axisymmetry - are all related to their degree of dynamical relaxation, which in turn traces their formation history (see Taylor 2011, for a review). Thus, measurements of these properties provide a separate constraint on the growth rate. While measurements of structural parameters in individual clusters may be noisy, the sheer number of systems expected in forthcoming surveys should allow us to make robust measurements of the average trends, using the expertise developed in fields like weak lensing.

The idea of using the structural properties of clusters to constrain cosmological parameters is not new (e.g. Richstone et al. 1992; Evrard et al. 1993; Mohr et al. 1995), but the context for these tests has changed radically in the 30 years since the idea was first proposed. First, the size of the datasets has grown enormously, giving better statistics. Second, our understanding of the systematics in individual structural measurements has developed considerably. Third, there is increasing sophistication in understanding and exploiting large, complex datasets. In particular, fields such as cosmic shear have illustrated how it is possible to extract parameter constraints from large sets of noisy measurements, even when the link between parameters and observables is indirect and non-linear. Finally, simulations of structure formation have progressed dramatically, allowing us to calibrate some aspects of non-linear structure formation at the per cent level, even if other aspects remain uncertain. Thus, it seems high time to reconsider cosmological tests based on the internal structure of haloes.

In this paper, we consider the possibility of estimating, from their structural properties, the mean assembly time or formation epoch for a large sample of galaxy clusters. This measurement of mean 'age' would leverage the same data already collected for cluster abundance studies, providing an independent constraint on the cosmological parameters.
We focus in particular on the parametric dependence of halo age, and its sensitivity to the parameters Ωm and σ8; in a subsequent paper, we will consider the (non-trivial) path to developing practical observational tests based on age estimates.

The outline of the paper is as follows. In Section 2, we review theoretical models of cluster abundance and age, and use them to predict how these properties vary as a function of the cosmological parameters. Given the approximate nature of the theoretical estimates, in Section 3 we test these predictions using catalogues from several different N-body simulations. We show that with some careful analysis, we can reconcile the analytic and numerical results to reasonable accuracy. In Section 4, we estimate the sensitivity a realistic observational program could achieve, using concentration as a proxy for age. Finally, in Section 5, we review and summarize our results. The details of the analytic calculations and the dependence of several important quantities on the cosmological parameters are discussed in the appendices. We consider a range of cosmologies throughout the paper, but assume flatness (Ωm + ΩΛ = 1), and take a model with Ωm = 0.3 as the fiducial case.

COSMOLOGICAL SENSITIVITY OF HALO ABUNDANCE AND HALO AGE

We will begin by estimating the potential sensitivity of age tests, using theoretical predictions of how cluster abundance and age depend on the cosmological parameters. We use analytic models of abundance and age based on the extended Press & Schechter formalism, calculated using standard tools and techniques summarized in Appendix A.

Analytic Models of the Halo Mass Function

The Press-Schechter (PS - Press & Schechter 1974; Bond et al. 1991) and extended Press-Schechter (EPS - Lacey & Cole 1993) formalisms provide a convenient analytic framework for computing the number density of dark matter haloes and their growth rate, given a background cosmological model. The basic expression for the halo mass function, derived assuming spherical collapse, is

$$\frac{dn}{dM} = \sqrt{\frac{2}{\pi}}\, \frac{\rho_0}{M^2}\, \frac{\delta_c}{\sigma}\, \left| \frac{d\ln\sigma}{d\ln M} \right| \exp\left( -\frac{\delta_c^2}{2\sigma^2} \right),$$

where ρ0 is the matter density at the redshift of interest, σ = σ(M) is the r.m.s. amplitude of fluctuations in the density field smoothed on mass scale M, and δc is the threshold for collapse to a virialized halo. Although fluctuations grow in amplitude as redshift decreases to zero, the condition for collapse by redshift z can also be considered at some fixed, early redshift, taking σ(M) to be a function of mass only, and δc = δc(z) to be a function of the collapse redshift. The mass function can also be written in the more compact form

$$\frac{dn}{dM} = \frac{\rho_0}{M^2}\, \nu f(\nu)\, \left| \frac{d\ln\nu}{d\ln M} \right|,$$

where ν(M, z) ≡ δc(z)/σ(M) is the height of the collapse threshold at redshift z, relative to typical fluctuations on mass scale M, and f(ν) is the mass fraction that has collapsed per unit interval of ν.

It is well known, however, that this basic form of the mass function fails to reproduce the halo abundance found in N-body simulations, particularly for low-mass haloes (Sheth & Tormen 1999, 2002; Jenkins et al. 2001). This failure is due to several simplifying assumptions made in the model, the most important one being a fixed threshold for (spherical) collapse that is independent of mass and environment. To solve this problem, Sheth & Tormen (1999, ST hereafter) considered a mass-dependent collapse threshold (or 'moving barrier'), to derive a functional form that better fits the mass function from simulations, with parameters A = 0.322, a = 0.707 and p = 0.3. A number of subsequent studies have improved our understanding of the mass function.
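As a quick numerical illustration of the difference between the two forms, the sketch below evaluates the PS and ST multiplicity functions f(ν). The ν grid is arbitrary, and normalization conventions vary in the literature; the forms used here are the standard ones with the ST parameters quoted above.

```python
import numpy as np

def f_ps(nu):
    # Press-Schechter multiplicity: collapsed mass fraction per unit nu
    return np.sqrt(2.0 / np.pi) * np.exp(-0.5 * nu**2)

def f_st(nu, A=0.322, a=0.707, p=0.3):
    # Sheth-Tormen form, with the moving-barrier parameters from the text
    return (A * np.sqrt(2.0 * a / np.pi)
            * (1.0 + (a * nu**2) ** (-p)) * np.exp(-0.5 * a * nu**2))

# Compare the two across the cluster-relevant range of peak heights
for nu in np.linspace(0.5, 5.0, 10):
    print(f"nu = {nu:4.2f}   f_PS = {f_ps(nu):9.3e}   f_ST = {f_st(nu):9.3e}")
```

The printout shows the behaviour discussed below: ST predicts more high-ν (massive, rare) haloes and fewer low-ν ones than PS.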
Tinker et al. (2008) demonstrated that the HMF is not completely universal, but evolves with redshift; allowing the parameters of the fit to evolve as a power law of 1 + z provides a better match to simulations. This non-universality has since been confirmed by other groups (e.g. Watson et al. 2013). Despali et al. (2016) argued that it is, in fact, an artifact of the halo mass definition, and that the common choices of an overdensity of 200 or 178 times the critical density induce much of the non-universality. Finally, several authors (Velliscig et al. 2014; Bocquet et al. 2016; Castro et al. 2021) have studied the impact of baryonic effects on the HMF by measuring halo masses, profiles, and abundance in hydrodynamic simulations. These improvements to the HMF fit are required in precision applications, but are generally secondary in importance (≤ 20%; Tinker et al. 2008; Velliscig et al. 2014; Bocquet et al. 2020), relative to the large variations in abundance with cosmology shown below. Thus, for simplicity in what follows, we will assume the ST form of the collapsed fraction (Eq. 4), in order to study how abundance and age depend on cosmology. We discuss the possible effect of baryons on the internal structure of haloes in Section 4.4 below.

Cosmological Dependence of Halo Abundance

Cluster abundance depends on the cosmological parameters both through the cluster mass function and through the survey volume. Within a survey volume subtending a solid angle ΔΩ, the expected number of clusters in the mass bin i: [M_i, M_{i+1}] and redshift bin j: [z_j, z_{j+1}] is

$$N_{ij} = \Delta\Omega \int_{z_j}^{z_{j+1}} \frac{dV}{dz\, d\Omega}\, dz \int_{M_i}^{M_{i+1}} \frac{dn}{dM}\, dM,$$

where dn/dM is the HMF given above and dV/(dz dΩ) is the volume element per unit solid angle and per unit redshift. As discussed previously, the HMF is calculated as the fraction of the material in a region that has collapsed to form haloes on some mass scale. Thus, rather than relating the halo abundance to the volume probed by the survey, we can express it in terms of the total mass of material in the survey volume, schematically

$$N_{ij} = \frac{M_{\rm tot}(\Delta z_j)}{M_i}\, f(\nu_i)\, \Delta\nu .$$

The advantage of this form is that we can now separate the cosmological dependence of the first factor, the total mass probed by the survey in a given redshift range Δz, from that of the second factor, the fraction f(ν)Δν of that mass that has collapsed to form haloes in the mass range ΔM = (dM/dν)Δν by that redshift. To make explicit the cosmological dependence of the HMF, in Appendices B and C we consider each of these factors separately. Over a realistic range of (Ωm, σ8), the survey mass varies by a factor of 2, while the collapsed fraction can vary by several orders of magnitude, and thus dominates the parametric dependence of the HMF.

As demonstrated in Appendix C, the peak height ν(M, z) varies roughly in inverse proportion to σ8, since σ(M) scales with σ8. The resulting behaviour in the Ωm-σ8 plane is shown in Fig. 1, for several mass/redshift combinations. We see that peak height depends mainly on σ8; variations in Ωm introduce a slight tilt in the contours, going from negative at low mass/redshift to slightly positive at high mass/redshift (and low Ωm) (see also Appendix C, and the lower right panel of Fig. C1, which shows the dependence on Ωm for fixed σ8, at several different masses and redshifts).

In both the spherical collapse (PS - Eq. 3) and ellipsoidal collapse (ST - Eq. 4) models, the collapsed fraction exhibits power-law growth for ν < 1, and an exponential decay for ν > 1. Thus, there are two main regimes, the first where the abundance of haloes increases with ν, and the second where it decreases rapidly.
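A toy evaluation of the number-count integral above is sketched below. It is not the calculation used in the paper: the power-law σ(M) (slope α = 0.25), the EdS-like growth factor D(z) = 1/(1 + z), and the survey footprint are all illustrative assumptions, standing in for the full Appendix A machinery.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

Om, s8, h, alpha, dc = 0.3, 0.8, 0.7, 0.25, 1.686
cosmo = FlatLambdaCDM(H0=100 * h, Om0=Om)

rho_m = 2.775e11 * Om * h**2                        # mean matter density, Msun/Mpc^3
M8 = 4.0 / 3.0 * np.pi * (8.0 / h) ** 3 * rho_m     # mass in an 8/h Mpc sphere

def dndM(M, z):
    # Toy sigma(M, z): power law in M, EdS-like growth in z
    sigma = s8 * (M / M8) ** (-alpha) / (1.0 + z)
    nu = dc / sigma
    A, a, p = 0.322, 0.707, 0.3                     # ST parameters
    f = (A * np.sqrt(2 * a / np.pi) * (1 + (a * nu**2) ** (-p))
         * nu * np.exp(-a * nu**2 / 2))
    return rho_m / M**2 * f * alpha                 # |dln sigma / dln M| = alpha

# N in one mass/redshift bin over a hypothetical 15,000 deg^2 survey
dOmega = 15000 * (np.pi / 180.0) ** 2
zz = np.linspace(0.1, 0.5, 41)
MM = np.logspace(14.0, 14.5, 41)                    # Msun
dV = cosmo.differential_comoving_volume(zz).value   # Mpc^3 per steradian
inner = np.array([np.trapz(dndM(MM, z), MM) for z in zz])
N = dOmega * np.trapz(dV * inner, zz)
print(f"N(clusters) ~ {N:.2e}")
```

Rerunning this with different (Ωm, σ8) pairs reproduces the qualitative behaviour of Fig. 2: the count is exponentially sensitive to σ8 in the cluster regime.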
The effect of the cosmological parameters on the two regimes is shown, for instance, in Fig. C3. The amplitude of fluctuations σ8 controls the mass at which the transition between regimes occurs, while Ωm controls the sharpness of the transition. In the case of clusters, we are generally in the second regime, where increases in ν produce an exponential decrease in abundance. Thus, the parametric dependence of the collapsed fraction (shown in the right panel of Fig. B1) is very similar to the corresponding figure for peak height, Fig. 1, but with an inverted and logarithmic colour scale, since f(ν) ∝ exp(−ν²/2) implies that, on a log scale, ln f(ν) ∝ −ν²/2.

Finally, we can combine the parametric dependence of the survey volume and the collapsed fraction (shown in the left and right panels of Fig. B1 respectively) to plot the dependence of number counts on Ωm and σ8. This is shown in Fig. 2, for the same mass/redshift combinations considered previously. We note that the variation of the total mass within the survey volume has a minimal effect, and aside from the change in the overall scale, the contours are almost identical to those for the collapsed fraction.

Theoretical Estimates of Halo Assembly Time

In CDM cosmologies, dark matter haloes grow through repeated, stochastic mergers, gradually assembling their mass from a large number of smaller progenitors. Thus, deciding when a given halo has 'formed' is rather arbitrary. Most definitions in the literature are based on the Mass Accretion History (MAH) (van den Bosch 2002). This is calculated by tracing the growth of the halo backwards in time and selecting the largest progenitor at each merger, to produce a single monotonic growth sequence M(z); the MAH is then defined as the relative value M(z)/M(0). Given a MAH, the formation epoch is often defined as the redshift by which a halo has reached some fixed fraction f of its final mass (e.g. z50 for f = 0.5 - Lacey & Cole 1993).

As for the HMF, the EPS formalism provides an analytic framework for exploring the cosmological dependence of halo age. We will first consider the predicted z50 distribution derived by Lacey & Cole (1993) assuming spherical collapse, and then give two different models of the ellipsoidal collapse equivalent, derived by Sheth & Tormen (2002) and Zhang et al. (2008) respectively.

Given a halo of mass M0 at redshift z0, the fraction of its mass that was in progenitor haloes of mass M1 ± dM1/2 at redshift z1 is given by the conditional probability

$$f(S_1, \omega_1 \,|\, S_0, \omega_0)\, dS_1 = \frac{1}{\sqrt{2\pi}}\, \frac{\omega_1 - \omega_0}{(S_1 - S_0)^{3/2}}\, \exp\left[ -\frac{(\omega_1 - \omega_0)^2}{2 (S_1 - S_0)} \right] dS_1,$$

where ω(z) = δc(z), S(M) = σ²(M), and the other variables are as in Section 2.1. Multiplying by the factor M0/M1, we get the progenitor mass function (PMF), that is, the number of progenitors of mass M1 per interval dS1. Following Lacey & Cole (1993), if we integrate the PMF from masses M0/2 to M0, we are calculating the average number of progenitors at redshift z1 that have more than half the final halo mass at z0. Since the halo cannot have more than one progenitor with more than half of its final mass, this quantity is also the probability that the halo had built up at least half of its mass in a single progenitor by redshift z1. Thus, it gives the cumulative distribution of the formation redshift z50:

$$P(z_{50} > z_1) = \int_{S(M_0)}^{S(M_0/2)} \frac{M_0}{M(S_1)}\, f(S_1, \omega_1 \,|\, S_0, \omega_0)\, dS_1 .$$

(We note that this approach only works for formation redshifts defined with f ≥ 0.5; there is no simple analytic way to obtain the distribution of z_f for f < 0.5.) As with the HMF, this estimate of halo formation redshift is limited by the assumption of spherical collapse.
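The sketch below integrates this cumulative distribution numerically for a toy model. The EdS-like barrier ω(z) = δc(1 + z), the power-law σ(M), and the parameter values are illustrative assumptions, not the paper's full calculation.

```python
import numpy as np

# Toy Lacey & Cole (1993)-style z50 distribution. Assumptions: barrier
# omega(z) = delta_c * (1 + z) (EdS-like growth) and a power-law
# sigma(M) = sigma8 * (M / M8)**(-alpha), with illustrative parameters.
delta_c, sigma8, alpha, M8 = 1.686, 0.8, 0.25, 6e14   # M8 in Msun (toy)

def S(M):                        # S(M) = sigma^2(M)
    return (sigma8 * (M / M8) ** (-alpha)) ** 2

def P_z50_gt(z1, M0, n=2000):
    """Probability that the halo had assembled half its final mass by z1."""
    S0, w0 = S(M0), delta_c                 # final halo, observed at z0 = 0
    w1 = delta_c * (1.0 + z1)               # barrier height at z1
    S1 = np.linspace(S0 * 1.0001, S(M0 / 2.0), n)   # progenitors in [M0/2, M0]
    M1 = M8 * (np.sqrt(S1) / sigma8) ** (-1.0 / alpha)  # invert sigma(M)
    dw, dS = w1 - w0, S1 - S0
    f = dw / np.sqrt(2.0 * np.pi) / dS ** 1.5 * np.exp(-dw**2 / (2.0 * dS))
    return np.trapz((M0 / M1) * f, S1)

for z in (0.1, 0.3, 0.5, 1.0):
    print(f"P(z50 > {z:.1f}) ~ {P_z50_gt(z, M0=2e14):.3f}")
```

Differentiating P(z50 > z) with respect to z then gives the differential formation-time distribution used in the figures below.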
Sheth & Tormen (2002) provided a version of the conditional mass function using a Taylor expansion of their moving barrier from Sheth & Tormen (1999), which can be used to calculate z50 (e.g. Giocoli et al. 2007). Their conditional probability takes the form

$$f(S_1, \omega_1 \,|\, S_0, \omega_0)\, dS_1 = \frac{1}{\sqrt{2\pi}}\, \frac{|T(S_1, \omega_1 \,|\, S_0, \omega_0)|}{(S_1 - S_0)^{3/2}}\, \exp\left[ -\frac{\left( B(S_1, \omega_1) - B(S_0, \omega_0) \right)^2}{2 (S_1 - S_0)} \right] dS_1,$$

where B is the moving barrier, with parameters a = 0.7, β = 0.485 and γ = 0.615, while T contains the first terms of a Taylor expansion of the barrier around S0. Inspired by the ellipsoidal collapse model, Zhang et al. (2008) also developed a fitting function for the conditional probability based on ellipsoidal collapse.

The z50 distributions obtained by inserting each of the three conditional-probability models into the cumulative distribution above are shown in Fig. 3. The ellipsoidal collapse models predict earlier formation times z50 at all masses, although the difference is largest at low mass. The figure also illustrates a well-known feature of hierarchical structure formation, that massive haloes have formed more recently. The predictions of the two ellipsoidal collapse models are very similar, so we will use the model from Zhang et al. (2008) as our base model in what follows, as it is slightly faster to calculate.

Cosmological Dependence of Halo Assembly Time

Given a prediction for the distribution of halo formation redshifts, we can study how it varies with cosmological parameters. We have tested the dependence of three quantities in particular:

• the median formation redshift, defined as the value z_med such that P(z50 > z_med) = 0.5;
• the peak of the differential probability distribution, z_peak = argmax_z (dP/dz);
• the average formation redshift, ⟨z⟩ = ∫ z (dP/dz) dz.

Of these three, we will focus on the median formation redshift, noting that the average formation redshift is slightly higher. Both the spherical and ellipsoidal collapse models predict the same behaviour of the median formation time, as shown in Fig. 4: increasing the fluctuation amplitude causes typical peaks in the density field to cross the threshold for collapse earlier in the process of structure formation. The Ωm-dependence is less trivial, and differs between low-mass and high-mass haloes. As explained in Appendix C, high-Ωm cosmologies have more power on small scales relative to large ones. Thus, at fixed σ8, low-mass haloes form earlier in higher-Ωm universes, while high-mass haloes form slightly later. This agrees with the previous findings of Giocoli et al. (2012).

The general dependence of the formation epoch z50 on Ωm and σ8 is shown in Fig. 5. The main trend is for age to increase with σ8; since the masses shown here are all beyond the cross-over point in Fig. 4, the median age also decreases slightly with Ωm, particularly at large masses. We explore how the dependence of z50 on Ωm and σ8 arises in more detail in Appendix D.

Comparing Figs. 2 and 5 closely, we note an important feature of halo age relative to halo abundance: for lower mass haloes and/or at lower redshift, the contours for the two are fairly orthogonal over much of the Ωm-σ8 plane. To highlight this point, Fig. 6 shows the two sets of contours superimposed, for the ranges of mass and redshift accessible to large cluster surveys. In the region of particular interest, around the concordance model (Ωm = 0.3, σ8 = 0.8), the two sets of contours are almost exactly orthogonal for lower masses and/or redshifts, where ν ∼ 2-3 (top and middle left-hand panels). They only become similar for the most massive clusters, at z ≥ 1, where ν ∼ 4-6 (bottom right panel). This implies that age or age proxies, measured for clusters with masses < 5 × 10^14 M⊙ at z < 1, can potentially break the main degeneracy in cluster abundance measurements.
Simulation Data

In Section 2.3, we provided analytic EPS estimates of how the halo formation time z50 depends on Ωm and σ8. Given the approximations made in these models, it is worth testing the accuracy of their predictions in N-body simulations. Previous work has demonstrated that different merger tree algorithms can produce significantly different MAHs (Avila et al. 2014; Srisawat et al. 2013). The exact value of z50 may be particularly sensitive to these differences, as discussed in Srisawat et al. (2013). In particular, some merger tree algorithms allow fragmentation events, where haloes lose mass with time, such that MAHs are not always monotonic. Our previous EPS estimates assume strictly hierarchical growth, and thus we anticipate that the numerical results may disagree with them to some degree. To test the effect of different methods of analysis, we consider three sets of simulations (two public, and one of our own), that employed three different merger tree algorithms:

• the Illustris TNG simulation (Nelson et al. 2019), using the Sublink merger tree algorithm (Rodriguez-Gomez et al. 2015);
• the Bolshoi simulation, using merger trees built with the Rockstar halo finder;
• our own MxSy simulations, analysed with the AHF halo finder and merger trees.

The simulation parameters are summarized in Table 1. Data from the TNG and Bolshoi simulations were obtained directly from their respective websites. In particular, we used the Rockstar merger tree data available for the Bolshoi simulation. For the TNG and MxSy simulations, mass accretion histories were calculated using the Sublink and AHF codes, respectively. For the Bolshoi simulations, they were generated by following the main progenitor sequence in the Rockstar files.

Formation Time Distributions Compared

In each numerical MAH, we define z50 to be the lowest redshift at which the mass of the halo has dropped to less than half of its mass at z = 0. Fig. 7 compares the distribution of these formation redshifts to the analytic (EC) predictions. For all three simulations considered, but particularly for the Bolshoi and M3S8 simulations, we see a clear offset between the numerical results and the EC predictions, that is largest at small masses. In general, the numerical formation redshifts are larger than predicted, by up to 0.1-0.2 on average.

One possible explanation for this shift lies in the different definitions of merger time assumed. Given a particular merger event, EPS theory takes the corresponding collapse redshift (that is, roughly, the time by which newly-accreted mass has first fallen to the centre of the halo) to be the moment at which a halo's virial mass is said to increase. In contrast, numerical group finders may link haloes when their outer virial surfaces first touch. Thus, numerical mergers may occur up to one infall time earlier than analytic ones. Adding a delay equal to the infall time to the numerical results, we obtain the z50 distributions in Fig. 8. The discrepancy between the numerical and analytic results is greatly reduced, although some differences remain, as seen most clearly in the cumulative distributions.

The remaining differences may have several possible explanations. There are slight offsets between the distributions for the three simulations, suggesting that the different halo finders and merger tree algorithms used to analyze them affect the results. A detailed comparison of halo finders and merger tree algorithms, including AHF, Rockstar/ConsistentTrees and Subfind/Sublink, was presented in Knebe et al. (2011). They highlight a number of significant differences between methods, notably in how they treat fragmentation events and non-monotonic MAHs.
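As an aside, extracting z50 from a tabulated MAH is operationally a threshold-crossing search plus interpolation. A minimal sketch, applied to a hypothetical toy history, is given below; real pipelines must also handle the non-monotonic MAHs discussed above.

```python
import numpy as np

def z50_from_mah(z, mass):
    """Lowest redshift at which M(z) drops below half of M(z=0).

    z, mass: arrays ordered from z = 0 upward; assumes a monotonised
    main-progenitor history (inputs here are hypothetical).
    """
    m0 = mass[0]                           # final mass at z = 0
    below = np.where(mass < 0.5 * m0)[0]
    if len(below) == 0:
        return None                        # half mass not reached in the data
    i = below[0]                           # first snapshot below half mass
    # linear interpolation in z between the bracketing snapshots
    z_lo, z_hi = z[i - 1], z[i]
    m_lo, m_hi = mass[i - 1], mass[i]
    return z_lo + (0.5 * m0 - m_lo) * (z_hi - z_lo) / (m_hi - m_lo)

# toy MAH: mass declines with redshift
z = np.array([0.0, 0.2, 0.5, 1.0, 1.5, 2.0])
m = np.array([1.0, 0.85, 0.7, 0.48, 0.3, 0.15]) * 2e14
print(f"z50 ~ {z50_from_mah(z, m):.2f}")
```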
We note that the discrepancy between numerical and analytic results is greatest at low mass, so resolution may also play a role. Finally, even the revised EC version of EPS remains an approximate theory, so its predictions may be inaccurate at some level.

We summarize the comparison between numerical and analytic results in Fig. 9, which shows the median z50 of each of the simulations, together with its uncertainty (points with errorbars), compared to the analytic predictions from the EC model of Zhang et al. (2008). As discussed in Section 2.4, age is most sensitive to the amplitude of fluctuations σ8, and depends only weakly on Ωm. At high mass, the numerical results agree well with the analytic prediction, at least in terms of the median value of z50. Since this mass range is the one relevant for cluster surveys, we conclude that our previous analytic estimates are reasonably valid, although the details of the halo age distribution require further study in future work.

OBSERVATIONAL PROSPECTS

A number of ongoing and future surveys are expected to produce very large samples of galaxy clusters, with O(10^5) significant detections, out to redshifts of z = 1 or higher (e.g. Pillepich et al. 2012; Sartoris et al. 2016; Abazajian et al. 2019). Supposing such a sample were available, with age information based on one or more observational proxies, we can ask what sensitivity this dataset would have to the cosmological parameters, or equivalently how large a sample would be needed to provide a significant improvement on parameter constraints.

To estimate age observationally, we need a structural proxy for age (as expressed, say, by the formation epoch z50). There are several known examples of structural properties that correlate with z50 (Wong & Taylor 2012), including concentration (Zhao et al. 2003; Wang et al. 2020), shape (as a product of major mergers - Drakos et al. 2019a), substructure (e.g. Gao et al. 2004; Taylor & Babul 2005; Diemand et al. 2007), or the overall degree of relaxation, as measured by a centre-of-mass offset (Macciò et al. 2007; Power et al. 2012). We will take concentration as an example here, as its age dependence is the best studied. We note that on some mass scales and at some redshifts, baryons may have an important effect on halo structure, and on concentration specifically. We will start by discussing concentration measurements ignoring these possible effects, but then consider them separately in Section 4.4 below.

Mean Concentration versus z50

Since the discovery of the universal density profile (Navarro et al. 1997, NFW hereafter), the value of the concentration parameter c = r_vir/r_s has been linked to the halo's formation history. In NFW, concentration depends on how early a critical fraction of the final mass was first assembled into (any number of) progenitors. Subsequent models (e.g. Bullock et al. 2001; Wechsler et al. 2002; Zhao et al. 2003, 2009; Ludlow et al. 2014; Correa et al. 2015a) related concentration instead to the growth history of a single main progenitor, as expressed by the mass accretion history (MAH). In the simplest picture (Wechsler et al. 2002; Zhao et al. 2003),

$$c(a_0) = c_0\, (a_0 / a_c),$$

where a_c is the scale factor at the end of the period of rapid growth in the MAH, and c_0 ∼ 3-4 is the concentration of newly-formed systems at this time. In these models, there is, therefore, a direct correlation between c and z50, or any similar estimate of the formation epoch z_f (Wong & Taylor 2012).

All of these models focus on the relation between the average growth rate and the mean concentration of a sample of haloes in a given range of mass and redshift (although Ludlow et al. 2013 also consider individual mass accretion histories). The structural response to an individual merger can differ,
All of these models focus on the relation between the average growth rate and the mean concentration of a sample of haloes of a given range of mass and redshift (although Ludlow et al. 2013 however, depending on the net input of (orbital) energy (Drakos et al. 2019b). Most recently, Wang et al. (2020) have shown that the measured value of the concentration parameter oscillates during major mergers, and that these fluctuations may dominate the statistics of the average values measured for large ensembles. Clearly, the subject is complicated and requires further study; we will not consider it in further detail here, but will assume a correlation between and 50 , that makes mean concentration measurements sensitive to mean age. Table 1. son noise in the figure). The basic pattern is similar for each set of cosmological parameters, and has been explored extensively in the literature (e.g. Wechsler et al. 2002;Zhao et al. 2003Zhao et al. , 2009Giocoli et al. 2012;Correa et al. 2015b;Ludlow et al. 2013), though interestingly, there is also a slight change in the mean relation over the range of parameters explored. In particular, the intercept of the linear regression relation between log and log(1 + ) increases monotonically, both with Ω , and with 8 . This indicates that the concentration has an additional cosmological sensitivity, beyond its main dependence on formation history. Here too, there is clearly further complexity to explore in the concentration-mass-redshift relation; in future work, we will focus on understanding and calibrating the mean ( , ) and -50 relationships, and consider more generally the links between concentration, mass accretion history, and cosmology. For the purpose of our present calculations, we will assume a power-law correlation with a fiducial scatter of 30%, which provides a reasonable fit to the results from all nine cosmologies. Measured Concentration versus Mean Concentration From the preceding results, the actual 3D concentration of an individual halo should scatter by ∼ 30% for a given value of 50 . This actual concentration can be estimated in various ways, including weak lensing convergence, detailed modelling of the lensing potential in strong lensing systems, X-ray or SZ emission, or even the galaxy distribution within a group or cluster. Each of these techniques will add observational errors and biases; see, for instance, Groener et al. (2016), which provides a fairly recent review of individual concentration estimates in clusters. Generally, the observational errors are 0.1 or 0.2 dex, i.e. 25-60%, for each individual system. There are also systematic uncertainties, both identified and unidentified, associated with each method. In principle, future work with large samples, dedicated simulations, and comparison between observational modalities may help reduce these. Overall, we will assume typical errors of either 30% or 50% in going from an actual 3D concentration to an observational estimate. Combining these errors with the intrinsic scatter in the -50 relation, we expect a net scatter of 50 = 40-60% in the relation between an observational estimate of concentration and the formation epoch 50 . This large uncertainty makes individual measurements relatively uninteresting; we can compare the situation to weak gravi- tational lensing, however, where shape measurements for individual galaxies are extremely noisy, but careful averaging extracts the mean value in an unbiased way. 
z_50 versus Cosmology

The remaining factor in our calculation is the connection between an estimate of the mean value of z_50 and the values of the cosmological parameters. From Fig. 9, to achieve a nominal precision of 0.01 in σ_8, we need 0.55% precision in the estimate of ⟨z_50⟩. Assuming unbiased averaging over a sample of N clusters, σ_⟨z50⟩ = σ_z50/√N. Solving, we get N = (0.40/0.0055)² to (0.60/0.0055)², i.e. N ∼ 5,000–12,000. Thus, with low-precision but unbiased concentration measurements for O(10,000) clusters, we could obtain constraints on the value of σ_8 that correspond to 1/10 or less of the current range of uncertainty in this parameter.

Baryonic Corrections

A major uncertainty in the preceding calculations is the net effect of baryons on cluster concentrations. Baryons may alter the halo density profile, increasing halo concentration via adiabatic contraction, or reducing it through outflows driven by stellar or AGN feedback. These effects are complex, and depend on mass, redshift, and radius within the halo. Overall, these processes can affect halo concentrations and masses significantly (Debackere et al. 2021). Although simulations suggest that baryonic effects are largest in galaxy-scale haloes (e.g. Velliscig et al. 2014), they may still be significant when measuring concentration or internal structure in clusters - Debackere et al. (2021), for instance, find a 10% bias in estimates of the scale length at masses of 5×10^14 M⊙/h. Baryonic effects are not yet well enough understood to include in our predictions; in particular, their detailed dependence on the cosmological parameters is not yet known. We can point out a few possible avenues, however, for calibrating and correcting for their effects on structural measurements. First, simulations now model these effects with increasing accuracy, allowing the potential for calibration of any net bias in structural properties. Second, observations of nearby, well-studied systems allow verification of the simulations, independent of the samples used for cosmological tests. Third, as pointed out in Section 2.4, over some mass and redshift ranges, cluster age and abundance are predicted to vary almost identically with the cosmological parameters Ω_m and σ_8. This should provide an independent test of any assumed concentration-mass relationship, at least for this range of mass and redshift: the constraint in the Ω_m–σ_8 plane derived from age/concentration measurements must agree with the one derived from abundance, and since the contours for the two are parallel, there is little room for bias in one relative to the other. A final, and basic, reason for optimism is the differential nature of structural tests, whether based on concentration or on other structural properties. These would depend on the relative distribution of structural properties, measured across a population of systems. A simple proxy for the mean age of the sample, for instance, might be the number of high-concentration systems relative to low-concentration ones. Thus, to lowest order, a net shift in concentration for the whole population would largely cancel out, reducing the bias in the final results. At the same time, the shape of the measured concentration distribution for the whole sample would provide another test of the consistency of the method, and of any biases or effects due to sample selection. Overall, it is clear that the impact of baryonic effects on halo structure requires much more detailed study, to see whether and to what degree they would compromise structural tests of cosmology, for a given survey and methodology.
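Before turning to specific surveys, the √N averaging estimate above can be spelled out in a few lines; the 0.0055 target precision and the 40–60% per-cluster scatter are the numbers quoted in the text, while the helper function itself is just the standard-error formula.

```python
import numpy as np

def clusters_needed(sigma_per_cluster, target_precision):
    """Number of clusters needed so that sigma / sqrt(N) <= target_precision."""
    return int(np.ceil((sigma_per_cluster / target_precision) ** 2))

# Per-cluster scatter in the z50 estimate (40-60%) and the target precision on <z50>.
for sigma in (0.40, 0.60):
    n = clusters_needed(sigma, 0.0055)
    print(f"scatter = {sigma:.2f}  ->  N ~ {n:,}")
# scatter = 0.40  ->  N ~ 5,290
# scatter = 0.60  ->  N ~ 11,901
```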
What is Achievable?

In summary, in the previous sections we have shown that with low-precision but unbiased concentration estimates for O(10^4) clusters, one could obtain excellent constraints on σ_8, assuming the net effect of baryons is small or can be corrected using simulations. While measuring concentration observationally is challenging, our method does not require particularly accurate measurements for individual systems - given the intrinsic scatter in the c–z_50 relation, individual estimates need only be accurate at the ∼30% level. We can consider this goal in more detail for a particular survey. The Euclid mission is a 1.2m space telescope, operating in the visible and near-infrared (NIR). As part of its wide survey, it will image 15,000 deg² of the sky in one optical and three NIR bands, detecting galaxies down to an AB magnitude of 24 or fainter. Photo-zs will be derived for these objects, using ancillary data from ground-based surveys such as UNIONS (Chambers et al. 2020). The Euclid wide survey should provide large catalogues of clusters detected photometrically (that is, by clustering in projection and in photo-z space), and also as peaks in weak lensing maps. Based on the forecasts of Sartoris et al. (2016), the photometric detections should include all clusters with masses ≳ 10^14 M⊙ out to redshift z ∼ 2. Below redshift z = 0.5, the wide survey is expected to detect 1.5 million clusters at 3σ or greater significance, and 200,000 clusters at 5σ significance. Extrapolating from these predictions, the number of 7σ detections is in excess of 20,000 (with half that number at redshifts z < 0.7). Admittedly, these objects may be slightly more massive than the example considered above in Section 4.3 (3.5 × 10^14 M⊙/h, versus 2 × 10^14 M⊙/h), but they are close enough that the slope of the z_50–σ_8 relation should be similar. The Roman Space Telescope mission is a 2.4m wide-field space telescope, with optical/NIR imaging and slitless spectroscopy capabilities. Its High Latitude Survey will image ∼2000 deg² of sky in four NIR bands, reaching depths 1–2 AB magnitudes deeper than Euclid Wide, as well as providing slitless spectroscopy of brighter targets. This deeper data over a smaller area should produce a cluster sample that extends to lower masses and higher redshifts, and thus provides an interesting counterpoint to Euclid data. In particular, we note that at high redshift, the complementarity of age and abundance is reduced (cf. the left-hand panels of Fig. 6). This could provide an important consistency check on age estimates, as discussed previously. Finally, a number of other forthcoming experiments expect to detect large numbers of clusters, including eROSITA (Pillepich et al. 2012) in the X-ray, CMB-S4 (Abazajian et al. 2019) in the mm, and the ground-based UNIONS survey (Chambers et al. 2020). Overall, we conclude that multiple samples of O(10^4) clusters with sufficient signal-to-noise ratio (SNR) to allow structural measurements should become available in the near future. One could imagine using a large, uniform survey such as Euclid for the low-redshift sample selection, together with other complementary observations to make structural measurements on individual clusters.
A high-redshift sample could then be used to test and calibrate age proxies, as mentioned above. It might also be possible to stack clusters and use a mean projected density profile, measured at high SNR, to derive constraints. We will consider these and other approaches in future work. As in weak lensing studies, the challenge of averaging over large numbers of low-SNR measurements will be in controlling for and reducing systematics. Beyond the baryonic effects discussed in the previous section, systematics related to basic structure formation could also affect sample selection (e.g. by preferentially highlighting or neglecting disturbed systems); they could bias individual mass estimates (although the slope of the concentration-mass relation is fairly shallow, so accurate masses are less important than in abundance studies); or they could bias concentration measurements, e.g. by biasing the sample selection towards objects with a particular 3D shape, or with a disturbed IGM (if the confirmation or structural measurements are performed in the X-ray). In lensing-based studies, false peaks and projections could be a particular problem, as these may look less regular and have lower concentrations, biasing the average. Environment can also have an impact on formation time through assembly bias, as simulations have shown that haloes form earlier in dense environments (Gao et al. 2005; Wechsler et al. 2006; Harker et al. 2006), so unbiased sampling of large volumes is important. Here again, there is much future work to be done considering the possible biases for different observational modalities and survey strategies.

CONCLUSION

The enormous success of CMB analyses and large cosmological surveys over the last few decades has been driven, for the most part, by a robust and detailed understanding of structure formation in the linear regime. Cluster number counts are an important exception, but they only probe one limited aspect of non-linear structure formation. As cluster catalogues grow by several orders of magnitude in size over the next decade, it is worth considering what other cosmological information we might extract from them. Measurements of internal halo structure can, in principle, tell us about the rate of non-linear structure formation, and are worth considering as a next-generation cosmological test. Previous work has established that as haloes grow through hierarchical merging, this process leaves structural signatures that can last for many dynamical times, that is, for many Gyr at low redshift. As a result, structural measurements provide several different avenues to estimate cluster assembly times or "ages". In this paper, we have shown that for typical clusters at z < 1, age varies almost orthogonally to abundance in the space of the cosmological parameters Ω_m and σ_8. The same datasets that provide abundance constraints could be used to estimate mean values for structural parameters, and thus age, providing significantly tighter parameter constraints from a single set of observations. Of course, given the accuracy of current constraints from the CMB, it may seem less interesting to invest further in other techniques. A survey of current results hints at tension between the different measurements, however, emphasizing the importance of redundant cosmological tests over different ranges of redshift, mass and/or spatial scale.
To resolve the deep mysteries of dark energy and dark matter, and to rule out yet-undiscovered variations on the current cosmological model, we need to test it as sensitively as possible, across as broad a range of parameter space as possible. In pursuit of this goal, our growing understanding of non-linear structure formation will open up many exciting possibilities for new tests and new tools.

APPENDIX A: DETAILS OF THE ANALYTIC CALCULATIONS

We use standard tools and techniques for the analytic calculations in Section 2. In particular, the growth factor D(z) is calculated using the approximation given by Carroll et al. (1992), which is accurate to a few percent:

D(z) ∝ g(z)/(1 + z), with g(z) = (5/2) Ω_m(z) { Ω_m(z)^{4/7} − Ω_Λ(z) + [1 + Ω_m(z)/2][1 + Ω_Λ(z)/70] }^{−1}.

The critical overdensity δ_c is the value a linearly-extrapolated density perturbation needs to reach to collapse and form a virialized object, and can be estimated using the spherical collapse model. Its value is close to the Einstein–de Sitter result δ_c ≃ 1.686, with only a weak dependence on cosmology. The linear matter power spectrum is computed using the approximation to the transfer function T(k) given by Equation 16 in Eisenstein & Hu (1998). We take the primordial amplitude to be A = 1 initially, and then adjust this value retroactively to set the correct value for σ_8. The index of the primordial spectrum is taken to be n_s = 0.965. The variance of the density fluctuation field, σ², is computed numerically by convolving the power spectrum P(k) with a top-hat smoothing filter:

σ²(M) = (1/2π²) ∫ P(k) Ŵ²(kR) k² dk,

where Ŵ(x) = 3(sin x − x cos x)/x³ is the Fourier transform of a top-hat window of radius R, and the smoothing scale is related to the mass by M = (4π/3) ρ̄_m R³. We have compared our derived values of σ(M) and D(z) to values calculated using the Colossus python package (Diemer 2018), and find good agreement.

APPENDIX B: SURVEY MASS VERSUS Ω_m

Halo abundance depends on the total mass of material sampled in a survey volume, and on the collapsed fraction at that scale and redshift. The survey mass obviously depends on the mean matter density and thus on Ω_m directly, but it also depends on Ω_m indirectly, through the volume sampled for a given solid angle and redshift range. The left panel of Fig. B1 shows the total mass contained within a survey volume per unit redshift per unit solid angle at two different redshifts, as a function of the cosmological parameter Ω_m. While the volume element is a decreasing function of Ω_m for flat ΛCDM cosmologies (since volume at a given redshift grows as Ω_Λ increases), the total mass enclosed increases overall, with the greatest Ω_m sensitivity at low redshift. In the end, however, the total mass has relatively little influence on the overall shape of the abundance contours in the Ω_m–σ_8 plane, because its variation is of order a factor of 2 or less, while the collapsed fraction varies over several orders of magnitude as Ω_m changes (right panel of Fig. B1; see Appendix C for a discussion of the parametric dependence of the collapsed fraction).

Figure B1. Left: total mass contained within a survey volume, per unit solid angle and per unit redshift interval, as a function of Ω_m, at redshifts 0.3 and 1 (note a flat ΛCDM cosmology is assumed). Right: variation of the (EC) collapsed fraction F_ST in the Ω_m–σ_8 plane, for the particular choice of halo mass and redshift indicated. Note the similarity to the dependence of peak height (Fig. 1), although the colour scale here is now inverted, and logarithmic.
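For readers who want to reproduce the basic ingredients of Appendices A and B, the following sketch evaluates the Carroll et al. (1992) growth-factor approximation and a top-hat-filtered variance σ(R); it uses a BBKS-style transfer function as a stand-in for the Eisenstein & Hu (1998) fit, so the numerical values are illustrative rather than exact.

```python
import numpy as np
from scipy.integrate import quad

# --- Growth factor, Carroll, Press & Turner (1992) approximation ---
def omega_m_z(z, om0=0.3):
    ez2 = om0 * (1 + z) ** 3 + (1.0 - om0)        # flat LCDM, radiation ignored
    return om0 * (1 + z) ** 3 / ez2

def growth_factor(z, om0=0.3):
    om = omega_m_z(z, om0)
    ol = 1.0 - om                                  # flat-universe shortcut
    g = 2.5 * om / (om ** (4.0 / 7.0) - ol + (1 + om / 2) * (1 + ol / 70))
    return g / (1 + z)

# --- Top-hat-filtered variance sigma(R), BBKS-style transfer function as a stand-in ---
def transfer_bbks(k, gamma=0.21):
    q = k / gamma
    return (np.log(1 + 2.34 * q) / (2.34 * q) *
            (1 + 3.89 * q + (16.1 * q) ** 2 + (5.46 * q) ** 3 + (6.71 * q) ** 4) ** -0.25)

def window_tophat(x):
    return 3 * (np.sin(x) - x * np.cos(x)) / x ** 3

def sigma_R(R, ns=0.965, gamma=0.21):
    integrand = lambda k: k ** (2 + ns) * transfer_bbks(k, gamma) ** 2 * window_tophat(k * R) ** 2
    var, _ = quad(integrand, 1e-4, 1e2, limit=200)
    return np.sqrt(var / (2 * np.pi ** 2))

# Normalize the unit-amplitude spectrum so that sigma(8 Mpc/h) = sigma_8.
sigma8_target = 0.8
norm = sigma8_target / sigma_R(8.0)
print("D(z=1)/D(0)      =", growth_factor(1.0) / growth_factor(0.0))
print("sigma(R=2 Mpc/h) =", norm * sigma_R(2.0))
```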
APPENDIX C: PEAK HEIGHT AND COLLAPSED FRACTION VERSUS Ω_m AND σ_8

We can write peak height as the product of three factors:

ν(M, z) = δ_c(z)/σ(M, z) = [δ_c,0/σ_8] × [1/Γ(M)] × [D(0)/D(z)],

where we have defined Γ(M) ≡ σ(M)/σ_8, σ is the fluctuation amplitude, D is the linear growth factor, and a = 1/(1 + z) is the scale factor. The redshift evolution of the collapse threshold stems from the linear growth of fluctuations. Combining these factors, and using D(z) ∝ g(z)/(1 + z) from Appendix A, ν ∝ (1 + z) [g(0)/g(z)] δ_c,0/[σ_8 Γ(M)]. Thus, while peak height scales simply as 1/σ_8, the dependence on Ω_m is through both the shape of the matter power spectrum, Γ(M), and the relative growth factor, D(z)/D(0). We consider each of these in turn.

C0.1 Γ(M)

The function Γ(M) describes the shape of the amplitude of fluctuations as a function of mass, σ(M), independent of its normalization σ_8. This shape will depend on the value of the matter density parameter Ω_m. More specifically, in cosmologies with larger matter densities, matter-radiation equality occurs sooner. Growth is suppressed by radiation on all scales below the horizon scale at recombination, but these scales are smaller and spend less time inside the horizon when Ω_m is larger. Thus, for a fixed amplitude of the primordial power spectrum, small-scale power at recombination will increase with Ω_m (dashed curves in the top left panel of Fig. C1). Fixing σ_8, the amplitude of fluctuations at scales of 8 h⁻¹ Mpc, reduces some of the difference in power (solid curves in the top left panel of Fig. C1), but the shape of σ(M) remains steeper in cosmologies with larger values of Ω_m (top middle panel of Fig. C1). Thus, over the range of interest, the ratio Γ = σ(M)/σ_8 increases with Ω_m, especially at lower mass (top left panel of Fig. C1). We can model this dependence as Γ(M) ∝ Ω_m^{α(M)}, where α(M) decreases with mass, and becomes negative at M = M_8 ≈ 2 × 10^15 M⊙/h, where σ(M) = σ_8.

C0.2 Growth factor

The lower panels of Fig. C1 show how D(z) and D(z)/D(0) vary with Ω_m. In a flat ΛCDM model, the growth factor D(z) at a fixed redshift is reduced for low Ω_m, as dark energy suppresses the growth of fluctuations. The relative amplitude of the effect is greater for very low values of Ω_m, or for large redshift ranges, and thus the ratio D(z)/D(0) is largest for low Ω_m and/or high redshift (middle panel). Mathematically, we can estimate the dependence on the density parameter by using the approximation g(z) ∝ Ω_m^{3/7}(z) (Carroll et al. 1992), with Ω_m(z) = Ω_m (1 + z)³/E²(z), where E(z) = H(z)/H_0 is the Hubble ratio; this gives D(z)/D(0) ∝ Ω_m^{−b(z)}. The index 0 < b(z) < 1 is an increasing function of z, and approaches the value 3/14 at high redshift.

C0.3 Peak Height

Given these results, the overall dependence of the peak height on Ω_m and σ_8 can be written

ν ∝ σ_8^{−1} Ω_m^{b(z) − α(M)},

where b > 0 and increases with redshift to the limiting value 3/14, while α(M) is a decreasing function of mass, and is negative at large masses. Combining the positive slope of Γ with the negative slope of D(z)/D(0), the bottom right-hand panel of Fig. C1 shows how ν generally decreases with Ω_m, but can increase for large masses and high redshifts. This explains the slight dependence seen in Fig. 1. The summary of how the peak height varies with mass and redshift for different values of Ω_m and σ_8 is shown in Fig. C2.

C0.4 Collapsed Fraction

Combining the peak height dependence described above with the functional form of the collapsed fraction as a function of peak height (Eqs. 3 or 4), we obtain the collapsed fraction as a function of mass for different values of σ_8 and Ω_m, shown in the left and right panels of Fig. C3 respectively. It shows two regimes: a power-law increase at low masses, followed by an exponential decrease for cluster-mass haloes. The figure also shows why the collapsed fraction's sensitivity to σ_8 and Ω_m varies with mass, and why the transition between the two regimes occurs at different masses for different values of σ_8.
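To see the exponential sensitivity at cluster masses numerically, the sketch below pushes a toy power-law σ(M) through the simple Press–Schechter collapsed fraction erfc(ν/√2); both the σ(M) slope and the PS form are illustrative stand-ins for the fitted spectrum and the EC expressions (Eqs. 3–4) used in the paper.

```python
import numpy as np
from scipy.special import erfc

DELTA_C = 1.686

def sigma_of_m(m, sigma8=0.8, m8=2e15, slope=0.27):
    """Toy power-law sigma(M), normalized so that sigma(M_8) = sigma_8 (illustrative only)."""
    return sigma8 * (m / m8) ** (-slope)

def collapsed_fraction_ps(m, z, sigma8=0.8):
    """Press-Schechter F(>M) = erfc(nu/sqrt(2)), a stand-in for the EC forms (Eqs. 3-4)."""
    nu = DELTA_C * (1.0 + z) / sigma_of_m(m, sigma8)   # EdS-like growth, D ~ 1/(1+z)
    return erfc(nu / np.sqrt(2.0))

masses = np.logspace(13, 15.5, 6)
for s8 in (0.75, 0.85):
    f = collapsed_fraction_ps(masses, z=0.3, sigma8=s8)
    print(f"sigma_8 = {s8}:", np.array2string(f, precision=3))
# At cluster masses the collapsed fraction is exponentially sensitive to nu,
# so small changes in sigma_8 shift it by large factors, and the transition
# mass between the two regimes moves with sigma_8.
```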
We can express the progenitor mass function (PMF) as a function of a single variable,

ν̃ ≡ [δ_c(z_1) − δ_c(z_0)] / [σ²(M_1) − σ²(M_0)]^{1/2}.

Thus, we see that the conditional probability has the same form as the unconditional one, but with the argument ν̃ rather than ν. The formation redshift distribution is proportional to the average value of the PMF between M_0/2 and M_0, and since the factor M_0/M_1 does not vary much over this range, it is also approximately equal to the conditional probability evaluated around the middle of the range. This probability in turn goes as exp[−ν̃²/2], so we expect the parametric dependence of z_50 to resemble an inverted, logarithmic version of the dependence for ν̃. Fig. D1 shows ν̃ as a function of mass and redshift, while the top 4 panels of Fig. D2 show the value of ν̃ as a function of z_1 and mass fraction M_1/M_0, for various values of M_0 and z_0. We see that the shape of the contours is generally similar to those for ν, except when z_1 is close to z_0 (bottom of the plot), or when the mass fraction is close to 1 (right-hand side of the plot). The second set of 4 panels in Fig. D2 shows the value of the conditional probability. As expected, the conditional probability is similar to an inverse, logarithmic mapping of ν̃. Finally, we can consider the behaviour of ν̃ and the PMF in the Ω_m–σ_8 plane. The top 4 panels of Fig. D3 show the value of ν̃ in this plane, for the values of (M_0, z_0) indicated, a mass fraction M_1/M_0 = 0.5, and Δz = z_1 − z_0 = 0.1. The overall pattern is very similar to that of ν (cf. Fig. 1). The bottom four panels show the value of the PMF, for the same choices of (M_0, M_1, z_0, z_1). Relative to the top panels, we see that the colour scale is inverted and logarithmic, as expected. The overall behaviour explains the shape of the contours in Fig. 5, and their relative orthogonality to the abundance contours in the same plane.

This paper has been typeset from a TeX/LaTeX file prepared by the author.
RingCNN: Exploiting Algebraically-Sparse Ring Tensors for Energy-Efficient CNN-Based Computational Imaging In the era of artificial intelligence, convolutional neural networks (CNNs) are emerging as a powerful technique for computational imaging. They have shown superior quality for reconstructing fine textures from badly-distorted images and have potential to bring next-generation cameras and displays to our daily life. However, CNNs demand intensive computing power for generating high-resolution videos and defy conventional sparsity techniques when rendering dense details. Therefore, finding new possibilities in regular sparsity is crucial to enable large-scale deployment of CNN-based computational imaging. In this paper, we consider a fundamental but yet well-explored approach -- algebraic sparsity -- for energy-efficient CNN acceleration. We propose to build CNN models based on ring algebra that defines multiplication, addition, and non-linearity for n-tuples properly. Then the essential sparsity will immediately follow, e.g. n-times reduction for the number of real-valued weights. We define and unify several variants of ring algebras into a modeling framework, RingCNN, and make comparisons in terms of image quality and hardware complexity. On top of that, we further devise a novel ring algebra which minimizes complexity with component-wise product and achieves the best quality using directional ReLU. Finally, we implement an accelerator, eRingCNN, in two settings, n=2 and 4 (50% and 75% sparsity), with 40 nm technology to support advanced denoising and super-resolution at up to 4K UHD 30 fps. Layout results show that they can deliver equivalent 41 TOPS using 3.76 W and 2.22 W, respectively. Compared to the real-valued counterpart, our ring convolution engines for n=2 achieve 2.00x energy efficiency and 2.08x area efficiency with similar or even better image quality. With n=4, the efficiency gains of energy and area are further increased to 3.84x and 3.77x with 0.11 dB drop of PSNR. Recognition and detection CNNs aim to extract high-level features and usually process small images with a huge amount of parameters. In contrast, computational imaging ones need to generate low-level and high-precision details and often deal with much larger images with fewer parameters. For example, the state-of-the-art FFDNet for denoising [50] requires only 850K weights but can be used to generate 4K UHD videos at 30 fps. This will demand as high as 106 TOPS (tera operations per second) of computation, and the precision of multiplications could be at least 8-bit for representing sufficient dynamic ranges. Therefore, for computational imaging it is the intensive computation for rendering fine-textured, high-throughput, and high-precision feature maps to pose challenges for the deployment in consumer electronics. Exploiting sparsity in computation is a promising way to reduce complexity for CNNs. Many approaches have been analyzed in detail, but most of them are discussed only for recognition and detection. The most common one is to explore natural sparsity for feature maps [3], [34] and/or filter weights [16], [38], [54]. Utilizing such sparsity, like unstructured pruning [17], will induce computation irregularity and thus significant hardware overheads. For example, the state-ofthe-art SparTen [16] only delivers 0.43 TOPS/W on 45 nm technology for the dedicated designs to tame irregularity. If, instead, structured pruning [35], [40] is applied to improve regularity, model quality will then drop quickly. 
Thus, natural sparsity is hard to support high-throughput inference with low power consumption. Another common approach is to explore the low-rank sparsity in over-parameterized CNNs by either decomposition [27], [30], [37] or model structuring [12], [19], [23], [41]. It aims high compression ratios and approximates weight tensors by regular but radically-changed inference structures. This lowrank approximation works well for recognition CNNs which extract high-level features. But it could quickly deteriorate the representative power of computational imaging ones for generating local details. For example, merely applying depthwise convolution can lead to 1.2 dB of peak signal-to-noise ratio (PSNR) drop for SR networks [21]. Therefore, low-rank sparsity may not be suitable for fine-textured CNN inference. A recent alternative for providing regular acceleration is to enforce full-rank sparsity on matrix-vector multiplications [13], [52], [53]. It partitions them into several n × n sub-block multiplications and then replaces each one by a componentwise product between n-tuples. This is equivalent to a group convolution with data reordering [53]; therefore, for restoring representative power additional pre-/post-processing is required to mix information between components or groups. CirCNN [13] equivalently applies Fourier transform on each sub-block for this purpose by forcing weight matrices to be block-circulant. ShuffleNet [52] instead performs global channel shuffling, and HadaNet [53] adopts simpler Hadamard transform. However, the applicability of this approach is unclear for computational imaging because CirCNN aims very high compression ratios (66× for AlexNet [29]), and Shuf-fleNet and HadaNet focus only on bottleneck convolutions. Lastly, a fundamental but less-discussed approach is to exploit algebraic sparsity. In contrast to using real numbers, CNNs can also be constructed by complex numbers [44] or quaternions [15], [39], [57]. By their nature, the number of real-valued weights can decrease two or four times, respectively. Moreover, their multiplications can be accelerated by fast algorithms. For example, the quaternion multiplication is usually expressed by a 4 × 4 real-valued matrix and can be simplified into eight real-valued multiplications and some linear transforms [20]. Regarding activation functions, the realvalued component-wise rectified linear unit (ReLU) is mostly adopted, and its efficiency over complex-domain functions is demonstrated in [44]. Since this algebraic sparsity can reduce complexity with moderate ratios and high regularity, it is a good candidate for accelerating computational imaging. However, previous work only discusses the two traditional division algebras and thus poses strict limitations for implementation. In this paper, we would like to lay down a more generalized framework-RingCNN-for algebraic sparsity to expand its design space for model-architecture co-optimization. Observing that division is usually not required by CNN inference, we propose to construct models by ring, a fundamental algebraic structure on n-tuples with definitions of multiplication and addition. In particular, we consider a bilinear formulation for ring multiplication to have transform-based fast algorithms and thus include full-rank sparsity into this framework. For constructing CNN models, we also equip non-linearity to the rings. Then several ring variants are defined properly and compared systematically for joint quality-complexity optimization. 
This algebraic generalization also brings architectural insights on ring non-linearity. We observe that conventional methods mostly adopt the component-wise ReLU for nonlinearity and use the linear transforms in ring multiplication for information mixing. However, for fixed-point implementation these transforms will increase input bitwidths for the following component-wise products and bring significant hardware overheads. Inspired by this, we propose a ring with a novel directional ReLU to serve both non-linearity and information mixing. Then we can avoid the transforms before the products to eliminate the bitwidth-increasing overheads. Extensive evaluations will show that the proposed ring can achieve not only the best hardware efficiency for multiplications but also the best image quality for its compact structure for training. Finally, we design an accelerator-eRingCNN-to utilize the proposed ring for high-throughput and energy-efficient CNN acceleration. For comparison purposes, we adopt eCNN [21], the state-of-the-art for computational imaging, as our architecture backbone. Then we devise highly-parallel ringconvolution engines for efficient inference and simply replace the real-valued counterparts in eCNN thanks to their regularity and similarity in computation. For the directional ReLU which involves two transforms, conventional MAC-based accelerators may need to perform quantization before each transform and cause up to 0.2 dB of PSNR drop. Instead, we apply an onthe-fly processing pipeline to avoid unnecessary quantization errors and facilitate fixed-point inference on 8-bit features. With 40 nm technology, we implement two sparsity settings, n = 2 and 4, for eRingCNN to show the effectiveness. In summary, the main contributions and findings of this paper are: • We propose a novel modeling framework, RingCNN, to thoroughly explore algebraic sparsity. The corresponding training process, including quantization, is also established for in-depth quality comparisons. • We propose a novel ring variant with a directional ReLU which achieves better image quality and area saving than complex field, quaternions, the rings alike to CirCNN and HadaNet, and all newly-discovered ones. Its image quality even outperforms unstructured weight pruning and sometimes, when n = 2, can be better than real field. • We design and implement accelerators with two configurations: eRingCNN-n2 (50% sparsity) and eRingCNN-n4 (75%). They can deliver equivalent 41 TOPS using only 3.76 W and 2.22 W, respectively, and support high-quality computational imaging at up to 4K UHD 30 fps. • Our ring convolution engines achieve near-maximum hardware efficiencies ( ∼ = n). Layout results show that for n = 2 they have 2.00× energy efficiency and 2.08× area efficiency compared to the real-valued counterpart. Those for n = 4 can increase the corresponding efficiency gains to 3.84× and 3.77×, respectively. • RingCNN models provide competitive image quality. Compared to the real-valued models for eCNN, those for eRingCNN-n2 even have an average PSNR gain of 0.01 dB and those for eRingCNN-n4 only drop by 0.11 dB. When serving Full-HD applications on eR-ingCNN, they can outperform the advanced FFDNet [50] for denoising and SRResNet [31] for SR. II. MOTIVATION We aim to enable next-generation computational imaging on consumer electronics by achieving high-throughput and high-quality inference with energy-efficient and cost-effective acceleration. 
However, computational imaging CNNs require dense model structures to generate fine-textured details. Thus, before deploying any complexity-reducing method, we need to examine the impact on image quality and the gain in computation complexity as a whole.

Without loss of generality, we demonstrate this quality-complexity tradeoff using the advanced model SRResNet as an example. In Fig. 1, two conventional sparsity techniques are examined. One is unstructured magnitude-based weight pruning for exploring natural sparsity. It shows graceful quality degradation when compression ratios are up to 2×, 4×, and 8×. However, its irregular computation will erode the performance gain due to induced hardware overheads and load imbalance. For example, only 11.7% of power consumption and 5.6% of area are spent on MACs in the sparse tensor accelerator SparTen [16]. The other examined technique is depthwise convolution (DWC), which exploits low-rank sparsity. The quality drops very quickly and can even be worse than the old-fashioned VDSR [26]. As a result, weight pruning and DWC are unfavourable for computational imaging due to computation irregularity and quality distortion, respectively.

A more straightforward approach is to reduce the model size in a compact way, and here we consider two cases: shrinking either model depth or feature channels. For SRResNet, the depth reduction causes sharp quality loss. In contrast, the channel reduction provides a good quality-complexity tradeoff which shows a similar trend as weight pruning and performs much better than DWC. In particular, this approach maintains high computation regularity and can be accelerated by energy-efficient dense tensor accelerators, such as eCNN [21], in which 94.0% of power consumption and 72.8% of area are spent on convolutions. Therefore, compact model configurations should also be considered before applying sparsity.

In this paper, we would like to explore the possibility of having the quality of weight pruning and the regularity of compact modeling at the same time. We will approach this goal by using ring algebra for the elementary operations in CNNs. In this way, we can achieve local sparsity and assure global regularity simultaneously. Our results, RingCNN, for SRResNet are also shown in Fig. 1 to demonstrate the effectiveness. The details of our approach are introduced in the following.

Fig. 1. Computation efficiency versus image quality. Different complexity-reducing methods are applied to SRResNet for four-times SR tasks (scaling up by four times for both image width and height). The models are trained using the same training strategy. The image quality is measured by the averaged PSNR over test datasets Set5 [7], Set14 [48], BSD100 [36], and Urban100 [22].

III. RING ALGEBRA FOR NEURAL NETWORKS

Deep neural networks consist of many feed-forward layers. These layers are usually defined over the real field R and formulated by its three elementary operations: addition +, multiplication ·, and non-linearity f. With tensor extensions, we can have a common formulation for each l-th layer:

z^(l) = f^(l)( g^(l) ∗ x^(l) + b^(l) ),   (1)

where x^(l), g^(l), b^(l), and f^(l) represent the input feature tensor, weight tensor, bias tensor, and non-linear tensor operation, respectively. Conversely, as long as we define the three operations properly, we can construct neural networks at will using other algebraic structures. An example using the complex field is shown in Fig. 2. Each complex number z can be expressed by either a complex form z_0 + z_1 i or an equivalent 2-tuple (z_0, z_1).
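To make the isomorphism of Fig. 2 concrete, the sketch below constructs the 2×2 real-valued matrices through which a complex weight g_0 + g_1 i and a component-wise 2-tuple ring weight act on a 2-tuple feature; the function names are our own, and the component-wise case anticipates the ring R_I2 defined later, so this is purely an illustrative reading of the figure.

```python
import numpy as np

def complex_weight_matrix(g0, g1):
    """Real 2x2 matrix isomorphic to multiplication by the complex weight g0 + g1*i."""
    return np.array([[g0, -g1],
                     [g1,  g0]])

def componentwise_weight_matrix(g0, g1):
    """Real 2x2 matrix for a component-wise 2-tuple ring (identity transforms)."""
    return np.array([[g0, 0.0],
                     [0.0, g1]])

x = np.array([0.7, -0.3])            # one 2-tuple input feature
g = (1.5, 0.25)                      # one 2-tuple weight (2 DoF instead of 4)

print("complex-field output:       ", complex_weight_matrix(*g) @ x)
print("component-wise ring output: ", componentwise_weight_matrix(*g) @ x)
# Either way, a dense real-valued 2x2 weight block (4 DoF) is replaced by 2 real weights.
```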
With this representation, weight storage can be reduced by a half, and arithmetic computation can be accelerated by the complex multiplication algorithm, i.e. the complexity of each complex multiplication can be reduced from four real multiplications to three. In the following, we will first consider ring algebras to generalize this idea and define proper ring multiplications for discussion. Then we will analyze their demands on hardware resources and, finally, propose a novel ring variant with directional non-linearity to maximize hardware efficiency.

Fig. 2. A simple neural-network layer for four real-valued inputs (x_0, x_1, y_0, and y_1) and two real-valued outputs (ẑ_0 and ẑ_1) by (top) real field R, (middle) complex field C, and (bottom) a proposed 2-tuple ring (R_I2, f_H2). For the latter two algebras, the inputs are equivalently two 2-tuples (x and y) and the output becomes one 2-tuple (ẑ). Their tensor formulations, ẑ = (g h)(x y)^t in C or R_I2, have isomorphic operations in the real field: ẑ = Gx + Hy in R. Then the degrees of freedom (DoF) in real numbers for each weight sub-matrix, e.g. G, are reduced from four (g_00, g_01, g_10, and g_11) to two (g_0 and g_1).

A. Ring Algebra

A ring R is a fundamental algebraic structure: a set equipped with two binary operations + and ·. Here we consider the set of real-valued n-tuples, i.e.

R = { x = (x_0, x_1, ..., x_{n−1})^t | x_i ∈ R }.

For clarity, x is a ring element and x_i is its real-valued component. And we simply use component-wise vector addition for the ring addition +. As for ring multiplication ·, it plays an important role in the properties of different rings. Given

z = g · x,   (2)

where z, g, x ∈ R, we consider it to have a bilinear form so as to admit a general formulation for fast algorithms, which will be discussed in Section III-B. In particular, the components of the three ring elements are related by

z_i = Σ_{k=0}^{n−1} Σ_{j=0}^{n−1} M_{ikj} g_k x_j,   (3)

where M is a 3-D indexing tensor with only 1, 0, and −1 as its entries. In other words, the products of input ring components in the form g_k x_j are distributed to output components z_i through M_{ikj}. With the bilinear form (3), the ring multiplication (2) will be isomorphic to a matrix-vector multiplication

z = G x,   (4)

where the matrix G has entries G_ij = Σ_{k=0}^{n−1} M_{ikj} g_k. Without loss of generality, we will use g for filter weights and x for feature maps in the following. After having definitions of + and ·, we still need to define a unary non-linear operation f for a ring to construct neural networks. A conventional choice, which is usually adopted by previous methods for full-rank or algebraic sparsity, is a component-wise ReLU

f_cw(x) = ( max(0, x_0), max(0, x_1), ..., max(0, x_{n−1}) )^t,   (5)

where max(0, ·) is the commonly-used real-valued ReLU.

B. Fast Ring Multiplication

Now we will integrate transform-based full-rank sparsity into this framework. For the bilinear-form ring multiplication (3), from [46] we know that its optimal general fast algorithm over the real field can be expressed by the following three steps:

filter/data transform: g̃ = T_g g, x̃ = T_x x,   (6)
component-wise product: z̃ = g̃ • x̃,   (7)
reconstruction transform: z = T_z z̃,   (8)

where T_g and T_x are m × n transform matrices for g and x respectively, and T_z is n × m for z. And • represents a component-wise product, i.e. z̃_i = g̃_i x̃_i for i = 0, 1, ..., m − 1, for the three m-tuples z̃, g̃, and x̃. If the transform matrices involve only simple coefficients, e.g. ±1 or 0, then they can be implemented by adders, and the component-wise product will dominate the computation complexity. In particular, the number of real-valued multiplications can be reduced from the general n² for the matrix G to m in (7).
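A minimal numerical sketch of the three-step algorithm (6)–(8) is given below, instantiated for a 2-tuple ring whose isomorphic G is diagonalized by the Hadamard transform (the R_H2-style case); placing the whole 1/2 normalization in the reconstruction transform is our own convention for this sketch and may differ from the exact matrices tabulated in the paper.

```python
import numpy as np

H = np.array([[1.0,  1.0],
              [1.0, -1.0]])          # 2x2 Hadamard transform (H @ H = 2 * I)

def ring_mul_fast(g, x, Tg, Tx, Tz):
    """Generic three-step bilinear ring multiplication (transform, product, reconstruct)."""
    g_t = Tg @ g                     # filter transform       (Eq. 6)
    x_t = Tx @ x                     # data transform         (Eq. 6)
    z_t = g_t * x_t                  # component-wise product (Eq. 7)
    return Tz @ z_t                  # reconstruction         (Eq. 8)

def ring_mul_direct(g, x):
    """Reference: the isomorphic matrix-vector product z = G x for the R_H2-style ring."""
    G = np.array([[g[0], g[1]],
                  [g[1], g[0]]])
    return G @ x

g = np.array([0.8, -0.2])
x = np.array([1.0,  0.5])

z_fast   = ring_mul_fast(g, x, Tg=H, Tx=H, Tz=0.5 * H)   # 2 real multiplications + adds
z_direct = ring_mul_direct(g, x)                          # 4 real multiplications
print(z_fast, z_direct)                                    # identical results
```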
Therefore, the complexity of fast ring multiplication depends on how we decompose the indexing tensor M or its isomorphic matrix G into (6)- (8). When G is diagonalizable over real field, i.e. G = T −1 DT , this complexity can be minimized as m = rank(G) for g = diag(D). The proof is given in Appendix A. In this perspective, a ring R H alike to HadaNet has a full-rank G, i.e. rank(G) = n, which is diagonalized by Hadamard transform. Another example is a ring R I equivalent to group convolution which applies component-wise products for a diagonal full-rank G, and its invertible T is simply the identity matrix I. In contrast, if G is not diagonlizable over real field, we can instead apply the tensor rank decomposition for the indexing tensor M as mentioned in [20]. However, the complexity is usually larger than rank(G) in this case, and the generic rank (grank) represents the lower bound for real-valued multiplications: m ≥ grank(M ). For example, the rotation matrix for complex field C leads to three real-valued multiplications as its grank(M ) = 3 while rank(G) = 2, and the circulant matrix in CirCNN also belongs to this category. The related properties of C as well as R H and R I with ring dimension n = 2 are as shown at the top of Table I. C. Proper Ring Multiplication In addition to the rings at hand, we would like to search more proper variants for in-depth analysis. We make three practical assumptions to confine the scope of discussion. The first one is exclusive sub-product distribution: each input subproduct g k x j in (3) is distributed to one output component z i exclusively. It provides complete and non-redundant information mixing between ring components for maintaining compact model capacity. Then G has full rank and can be formulated by a sign matrix S and a permutation indexing matrix P : where S ij ∈ {1, −1}, and each row or column of P is a permutation of {0, 1, ..., n − 1}. For example, the rotation matrix g0 −g1 g1 g0 for C has S = 1 −1 1 1 and P = 0 1 1 0 . Furthermore, given the existence of a ring unity 1, we consider the following explicit condition: gn−1 g0 Without loss of generality, the condition on the first column of G and 1 is drawn from the permutation definition of P and g ·1 = g. The diagonal of G is then derived by 1·g = g which states that the isomorphic matrix of 1 is the identity matrix, and therefore G = g 0 I if g = g 0 1. The second assumption is commutativity. It is not necessary for constructing neural networks, e.g. quaternions H are not commutative. But it is sufficient to enable the demanded associativity for a ring, together with the exclusive sub-product distribution and an additional condition on commutative permutation. The details are discussed in Appendix B. Then, by examining the matrix form Gx = Xg for g · x = x · g, we have a cyclic-mapping condition for reducing candidates: Finally, the last assumption is that a smaller grank(M ) is preferred for saving computations and leads to this rule: Consider only S ∈ arg min S grank(M (S ; P )). (C3) In practice, for each P satisfying (C1) and (C2) we ran the CP-ARLS algorithm [6] in MATLAB to evaluate grank(M (S ; P )) for all possible S and determined ring variants based on the results. In the following, we consider moderate sparsity for computational imaging with n = 2 and 4. We searched new ring variants as mentioned above and determined their transform matrices as discussed in Section III-B. 
Our findings are listed in Table I, where we distinguish ring symbols by indicating n in the subscripts for clarity. For n = 2, only R_H2 and C can satisfy these conditions. For n = 4, we found, by exhaustion, that there are two such non-isomorphic permutations. After applying (C3), the minimum grank(M) of them is found to be 4 and 5. The grank-4 permutation leads to two ring variants: R_H4 and R_O4, which are diagonalized respectively by the Hadamard transform H and a reflected Householder transform O. On the other hand, there are four grank-5 ring variants. Two of them, R_H4-I and R_H4-II, have transform matrices related to H, and the other two, R_O4-I and R_O4-II, are similarly connected to O. In particular, R_H4-I applies circular convolution as CirCNN does and needs five real multiplications for the complex Fourier transform. The details of the isomorphic G and fast algorithms are summarized in Table II.

TABLE II. DETAILS OF ISOMORPHIC G AND FAST ALGORITHMS.

D. Hardware Efficiency

Now we can systematically examine the benefits of these rings in terms of hardware resources. For concise hardware analysis, we assume that different algebraic structures have the same bitwidths for layer inputs and parameters. Then the weight storage is directly proportional to the degrees of freedom (DoF), and the multiplier complexity can be evaluated on the same basis. Regarding the amount of filter weights, a real-valued network would require n² weights for an n-tuple pair of input and output features. But using n-tuple rings instead will only need n real-valued weights to represent the matrix G, i.e. the DoF of G is reduced from n² to n. Therefore, the efficiency of weight storage with respect to real-valued networks is n×, e.g. 2× and 4× for 2- and 4-tuple rings respectively. Similarly, the corresponding efficiency in terms of real-valued multiplications can be derived as n²/m. In Table I, only R_I, R_H, and R_O4 can reach the maximum efficiency, n×, for full-rank G.

More importantly, for practical implementation we need to consider fixed-point computation and include the bitwidths of the multiplications for precise evaluations. Fig. 3 shows such an example for the fast algorithm (6)-(8). The main overheads brought by the transforms are the increased bitwidths for x̃ and g̃, e.g. T_x and T_g will transform w-bit x and g into wider w_x-bit x̃ and w_g-bit g̃. The circuit complexity of a multiplier can be approximated by the product of its input bitwidths. We further consider this factor, w_x × w_g, for evaluating the multiplier complexity for 8-bit features and weights, as shown in the rightmost column of Table I. In this case, only R_I can reach the maximum efficiency by using identity transforms, and the other rings all suffer the corresponding overheads induced by their transforms. For example, R_H4 and R_O4 merely achieve 2.6× efficiency, which is 1.6× worse than R_I4.

E. Proposed Ring with Directional ReLU

The above discussions only involve the linear operations of neural networks. For non-linearity, the component-wise ReLU f_cw is conventionally adopted even when we actually operate on n-tuples. As a result, R_I will have the worst model capacity, although it has the best hardware efficiency. This is because the information between different components of an n-tuple is not communicated or mixed, which is the same as the discussion on group convolution in [52]. This is also the reason why we assumed the complete information-mixing property for searching ring multiplications in Section III-C.
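As a sanity check on the statement that R_I with a component-wise ReLU reduces to a group convolution, here is a short PyTorch sketch of an R_I ring convolution expressed with groups = n; the class name and channel layout are illustrative choices on our part, not the accelerator's actual dataflow.

```python
import torch
import torch.nn as nn

class RingConvRI(nn.Module):
    """K x K ring convolution over the component-wise ring R_I.

    Features are laid out as (batch, n * C, H, W) with the tuple component as the
    slowest-varying channel index, so the ring convolution is exactly a group
    convolution with `groups = n` (layout and naming are our illustrative choices).
    """
    def __init__(self, n, in_tuples, out_tuples, k=3):
        super().__init__()
        self.conv = nn.Conv2d(n * in_tuples, n * out_tuples, k,
                              padding=k // 2, groups=n, bias=True)

    def forward(self, x):
        return self.conv(x)

n, C = 4, 8
layer = RingConvRI(n, in_tuples=C, out_tuples=C)
x = torch.randn(1, n * C, 64, 64)
print(layer(x).shape)              # torch.Size([1, 32, 64, 64])
# Real-valued weights: n * C * C * 3 * 3, i.e. 1/n of a dense convolution.
# With a component-wise ReLU the n groups never exchange information,
# which is why a different non-linearity is needed to recover model capacity.
```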
In the following, we apply algebraic-architectural co-design to have the hardware advantages of R I while recovering the model capacity. By examining the fast algorithm (6)- (8), we found that the information is in fact mixed by the transforms for data, T x , and reconstruction, T z . In addition, for neural networks this should be required only near non-linearity because cascaded linear operations will simply degrade to another single linear operator. Based on these two observations, we propose to mix information only before and after non-linearity and thus can adopt R I for linear operations to have its architectural benefits. This proposal leads to a novel algebraic function for ring nonlinearity: directional ReLU f dir (y) U f cw (V y), where U and V are two n × n matrices for an input n-tuple y. It is equivalent to performing non-linearity in the directions of the row vectors of V , instead of the conventional standard axes, and then turning the axes to the column vectors of U . Thus the components of an n-tuple are considered as a whole, not separately, for non-linearity. The computation of U and V induces complexity overheads. But they are only linearly proportional to the number of output channels, unlike the bitwidth-increased products in (7) which grow quadratically. To further reduce the overhead, we consider the simple Hadamard transform in Table II and propose a novel ring (R I , f H ) with the directional ReLU as shown in Fig. 4: For n = 4, another similar variant (R I4 , f O4 ) with f O4 (y) Of cw (Oy) is also possible. They have the same hardware advantages as R I and possess better model capacity for additional information mixing. Note that for constructing neural networks they are different from R H and R O4 , especially when skip connections exist or some convolutions are not followed by non-linearity. A. Model Construction We propose a unified RingCNN framework to include all the considered rings for in-depth comparisons on qualitycomplexity tradeoffs. By extending ring algebra to ring tensors z, g, and x, we formulate a K×K ring convolution (RCONV): where c o and c i are indexes for output and input n-tuple channels, p and q for feature positions, and s and t for weight positions. Then a real-valued convolution layer, either with non-linearity or not, can be converted into an RCONV layer as shown from Fig. 5(a) to (b). In this way, we can convert any existing real-valued model structure into an RingCNN alternative. B. Model Training An RingCNN model can be treated as a conventional realvalued CNN if we implement it in form of the matrixvector multiplication (4). Then the Backprop algorithm can flow gradients as usual without any special treatment. For the completeness of ring algebras, we can also represent the gradients in terms of ring operations and then express Backprop using only the ring terminology. For example, we have ∇ x L = G t ∇ z L from (4) for a training loss L. Then ∇ x L = g · ∇ z L for R I , R H , and R O4 since G is symmetric for them. Similarly, the gradient ∇ x L equals to g c · ∇ z L for R H4-I and g * · ∇ z L for H, where g c and g * represent circular folding and quaternion conjugate of g respectively. The same approach can be applied to express ∇ g L in ring operations. C. Efficient Implementation Dynamic fixed-point quantization. We prefer fixed-point computation for hardware implementation. It has been shown effective to apply dynamic quantization with separate per-layer Q-formats [1] for real-valued feature maps and parameters [21]. 
We found that this approach also works well for the RingCNN models that adopt the component-wise ReLU. But when the directional ReLU is applied, image quality is deteriorated in many cases. It is because after this non-linearity different ring components have different dynamic ranges, and using one single Q-format for them causes large saturation errors. Therefore, for the directional ReLU we propose to use component-wise Q-formats for feature maps to address this issue. In other words, there are n different feature Q-formats in one layer, and each component of n-tuple features follows its corresponding Q-format. Fast algorithm. We use the fast algorithm to formulate a fast ring convolution (FRCONV): whereg andx are the ring tensors after the transforms T g and T x respectively. For minimizing overheads, we avoid redundant transform operations by applying T g , T x , and T z only once for each of weight, input, and output ring elements respectively. Then each RCONV layer can then be efficiently implemented in hardware by applying FRCONV to its fixedpoint model as shown in Fig. 5(c). Note that for R I FRCONV is the same as RCONV for its identity transform matrices. V. ERINGCNN ACCELERATOR To show the efficiency on practical applications, we further design an RingCNN accelerator, named eRingCNN, over the proposed ring (R I , f H ). For supporting high-throughput computational imaging, we use the highly-parallel eCNN as a backbone architecture and simply replace its real-valued convolution engine by a corresponding one for RCONV. This portability of linear operations is an advantage of algebraic sparsity, but we need a new and specific design for the directional ReLU. We implement two sparsity settings for n = 2 and 4, and the details are introduced in the following. System diagram. Fig. 6 presents the overall architecture. In one cycle, it can compute (32/n)-channel n-tuple output features from (32/n)-channel n-tuple inputs for 4×2 spatial positions. For both 3×3 and 1×1 convolution engines, the number of MACs is reduced by 50% and 75% for the settings n = 2 and 4 respectively. Similarly, the size of the weight memory can be reduced by the same ratios, e.g. from 1280 KB in eCNN to 640 KB for n = 2 and 320 KB for n = 4. However, for simplicity the parameter compression in eCNN was not implemented; instead, we increase the size by 1.5× to 960 KB and 480 KB, respectively, to support large models. The rest architectural differences from eCNN are mainly on the designs of the RCONV engines and the novel directional ReLU. RCONV engine. To have local sparsity while maintaining global regularity, we increase the computing granularity from real numbers to n-tuple rings. Fig. 7 shows such a modification for the 3×3 convolution engine with n = 4. It is a channelwise 2D computation array for 8-channel 4-tuple inputs and outputs. Each of the 8×8 computing units is responsible for the 2D 3×3 ring convolution for the corresponding input-output pair with ring tensor weights g coci g[ : , : , c i , c o ]. Thanks to (R I , f H ), it simply computes component-wise 2D convolutions for saving complexity. Finally, a novel directional ReLU block, including dynamic quantization with component-wise Q-formats, is devised to replace their real-valued counterparts. Directional ReLU unit. It mixes information for RCONV outputs to recover model capacity; however, the mixing demands Hadamard transforms on high-bitwidth accumulated outputs, e.g. 24-bit for n = 4. 
These wide accumulations induce two issues for conventional accelerator architectures. Firstly, the two transforms for f_H are likely to be implemented by the same fixed-point MACs used for convolutions in order to meet the high computation throughput. But since the weights are only −1 and 1, the hardware efficiency of the multipliers would be low. Secondly, and more importantly, the features before the Hadamard transforms will need to be quantized for the MACs. We found that these additional quantizations, e.g. 24-bit to 8-bit for n = 4, would cause up to 0.2 dB of PSNR drop for denoising and SR tasks. Therefore, we propose an on-the-fly processing pipeline for this novel function, and Fig. 8 shows our implementation for n = 4. It specifically implements the butterfly structures for the Hadamard transforms to optimize hardware efficiency and keeps full-precision operations to preserve image quality. In this case, the internal bitwidths are up to 33-bit, in which the component-wise Q-formats contribute 5 bits for aligning components (through the left-shifters). This circuit is the major overhead for using (R_I, f_H) and also appears in the inference datapath for the non-linearity after skip or residual connections.

Fig. 8. On-the-fly directional ReLU circuit for n = 4 (cf. Fig. 7). The input is y = (y_0, y_1, y_2, y_3)^t, and the output is x = (x_0, x_1, x_2, x_3)^t. With component-wise Q-formats, their numbers of fractional bits are given as n_y,i and n_x,i, respectively. The numbers of shift bits are then derived as s_i = max_j n_y,j − n_y,i and t_i = max_j n_y,j − n_x,i. Green lines represent minus terms of the adders for the Hadamard transform.

VI. EVALUATIONS

We show extensive evaluations for (A) ring algebras, (B) image quality on eRingCNN, and (C) hardware performance of eRingCNN. For clarity, the two sparsity configurations for eRingCNN are denoted by eRingCNN-n2 and eRingCNN-n4.

A. Ring Algebras

Training setting and test datasets. For quality evaluations, we use the advanced ERNets for eCNN [21] as the real-valued backbone models. Then RingCNN models are converted from them as shown from Fig. 5(a) to (b). To fairly compare RingCNNs and real-valued CNNs, we evaluate their best performance by increasing their initial learning rates as high as possible before the training procedures become unstable. Note that the real-valued ERNets in this paper will therefore perform better than those in [21] because of the higher learning rates. The models are trained using the lightweight settings summarized in Table III unless mentioned otherwise. Finally, we test denoising networks on the datasets Set5 [7], Set14 [48], and CBSD68 [36], and super-resolution ones on Set5, Set14, BSD100 [36], and Urban100 [22].

Quality comparison for different rings. Fig. 9 compares image quality in PSNR for the rings in Table I. When the component-wise ReLU is used, R_I performs the worst due to the lack of information mixing. The two traditional algebra alternatives, C and H, also do not perform well, considering that more real-valued multiplications are required. Between the two grank-4 variants for n = 4, the newly-discovered R_O4 performs better than the HadaNet-alike R_H4. Similar results can be found for their corresponding grank-5 variants, e.g. the newly-discovered R_O4-I is better than the CirCNN-alike R_H4-I. However, by using the directional ReLU, the proposed (R_I, f_H) can give better quality and consistently outperform the others. Since (R_I4, f_O4) shows inferior quality, we therefore focus on (R_I, f_H) and adopt it for our implementation.

Ablation study between (R_I, f_H) and R_H.
They share similar structures but have two major differences. First, (R I , f H ) multiplies input features by weights g directly while R H does that after applying the filter transform. Second, (R I , f H ) applies Hadamard transform only when non-linearity is required, but R H always does that and results in a redundant structure. Therefore, R H can imitate (R I , f H ) by making up the differences: first training on transformed weightsg and then modifying model structures accordingly. Fig. 10(a) shows an example for modifying a residual block, and Fig. 10 (b) illustrates typical PSNR results using two SR×4 networks as examples. Training ong is occasionally helpful, but structure modification improves image quality most of the time. Therefore, the compact model structure is the main reason why (R I , f H ) outperforms R H for computational imaging. Comparison with weight pruning. We also compare image quality between the proposed algebraic sparsity and the unstructured magnitude-based weight pruning. While RingCNNs are trained directly, real-valued CNNs are first pre-trained, then pruned, and finally fine-tuned. Fig. 11 shows the comparison results, and RingCNNs over (R I , f H ) can deliver better image quality than the weight pruing for compression ratios 2×, 4×, and 8×. In particular, the 2-tuple networks can even outperform the original (1×) real-valued networks in many cases. This shows that the algebraic sparsity can serve strong prior for CNN models. As a result, (R I , f H ) not only provides more regular structures but also achieves better quality for computational imaging. A case for recognition tasks, though not the focus of this paper, is also studied in Appendix C, where convolutions and corresponding non-linear functions are implemented with (R I , f H ), and batch normalization is remained as real-valued operations. Fixed-point implementation. For comparisons in hardware efficiency, we implemented highly-parallel FRCONV engines, as depicted in Fig. 5(c) with non-linearity, for different rings. Their RTL codes are synthesized with 40 nm CMOS technology. For quality comparison, the models are quantized in 8-bit and then fine-tuned using the setting at the bottom of Table III. The quality loss due to quantization is similar for each ring, and Fig. 12 shows the comparison results. The area efficiencies are very close to the estimated 8-bit complexity in Table I because at the same time. Compared to the CirCNN-alike R H4-I and HadaNet-alike R H4 , it has nearly 0.1 dB PSNR gain for the SR×4 task and provides 1.8× and 1.5× area efficiencies respectively. In summary, R I can save area efficiently, and f H can recover image quality significantly. B. Image Quality on eRingCNN Training setting and model selection. To show competitive image quality, we further train models using the polishment setting in Table III with two large datasets, DIV2K [2] and Waterloo Exploration [33]. We consider two throughput targets for hardware acceleration: HD30 for Full HD 30 fps and UHD30 for 4K Ultra-HD 30 fps. For each throughput target and application scenario, we adopt the compact ERNet configuration for the real-valued eCNN in [21]. It has been optimized over model depth and width in terms of PSNR, and we build its corresponding RingCNN models with (R I , f H ). Floating-point models. The PSNR results are shown in Table IV. The RingCNN models show significant gains over the traditional CBM3D [11] for denoising and VDSR for SR×4. 
B. Image Quality on eRingCNN

Training setting and model selection. To show competitive image quality, we further train models using the polishment setting in Table III with two large datasets, DIV2K [2] and Waterloo Exploration [33]. We consider two throughput targets for hardware acceleration: HD30 for Full HD 30 fps and UHD30 for 4K Ultra-HD 30 fps. For each throughput target and application scenario, we adopt the compact ERNet configuration for the real-valued eCNN in [21]. It has been optimized over model depth and width in terms of PSNR, and we build its corresponding RingCNN models with (R I , f H ).

Floating-point models. The PSNR results are shown in Table IV. The RingCNN models show significant gains over the traditional CBM3D [11] for denoising and VDSR for SR×4. Compared to the advanced FFDNet and SRResNet, the models for eRingCNN-n2 can outperform them with PSNR gains up to 0.15 dB at HD30 and have similar quality at UHD30. With 75% sparsity, the models for eRingCNN-n4 still give superior quality and only show noticeable PSNR inferiority for denoising at UHD30 due to the shallow layers.

Dynamic fixed-point quantization. At the top of Fig. 13, we show the effect of the 8-bit dynamic quantization which is used to save area. The quality degradation for ring tensors is an average PSNR drop of around 0.11-0.12 dB, which is similar to the case of using real numbers. We also show the effect of applying sparse ring algebras (eCNN⇒eRingCNN) at the bottom of Fig. 13. The degradation is not obvious for n = 2, and the models for eRingCNN-n2 even outperform those for eCNN by 0.01 dB on average. For n = 4, the models for eRingCNN-n4 suffer only a small PSNR degradation of 0.11 dB.

C. Hardware Performance of eRingCNN

Implementation and CAD tools. We developed RTL codes in Verilog and verified the functional validity based on bit- and cycle-accurate simulations. The verified RTL codes were then synthesized using Synopsys Design Compiler with a TSMC 40 nm CMOS library, and SRAM macros were generated by ARM memory compilers. We used Synopsys IC Compiler for placement and routing and for generating the layouts of five well-pipelined macro circuits which collectively constitute eRingCNN. Finally, we performed time-based power analysis using Synopsys PrimeTime PX based on post-layout parasitics and dynamic signal activity waveforms from RTL simulation.

Hardware performance. We show the design configurations and layout performance in Table V. The areas of eRingCNN-n2 and eRingCNN-n4 are 33.73 and 23.36 mm² respectively, and the corresponding power consumptions are 3.76 and 2.22 W. They mainly differ in the ring dimension, and eRingCNN-n4 uses only half the number of MACs and half the weight memory of eRingCNN-n2.

Area and power breakdown. The details are shown in Table VI. For eRingCNN-n2, the convolution engines contribute 57.42% of area and 86.51% of power consumption for the highly-parallel computation. For eRingCNN-n4, their contributions go down to 45.63% and 76.56%, respectively, because of the saving of MACs. In addition, for a larger n the directional ReLU uses more adders and requires wider bitwidths. Therefore, within the RCONV-3×3 engines it occupies only 3.4% of area for eRingCNN-n2 but up to 8.9% for eRingCNN-n4. Accordingly, the inference datapath in eRingCNN-n4 is also 0.53 mm² larger than that in eRingCNN-n2.

Comparison with eCNN. As shown in Fig. 14, the RCONV engines reach near-maximum hardware efficiencies (≈ n). Those in eRingCNN-n2 achieve 2.08× area efficiency and 2.00× energy efficiency, and those in eRingCNN-n4 further increase the efficiency gains of area and energy to 3.77× and 3.84×. The numbers for the whole accelerator are as high as 1.64× and 1.85× for eRingCNN-n2, and 2.36× and 3.12× for eRingCNN-n4. In addition, we compare their quality-energy tradeoff curves in Fig. 15. The eRingCNN accelerators show clear advantages over eCNN; in particular, the low-complexity eRingCNN-n4 is preferred when less energy is allowed to be consumed for generating one pixel. Finally, like eCNN, the eRingCNN accelerators demand only 1.93 GB/s of DRAM bandwidth for high-quality 4K UHD applications.
Comparison with Diffy. We also compare with Diffy [34], another state-of-the-art accelerator for computational imaging, along with eCNN. Diffy applies optimizations at the bit-computation level, which makes a direct comparison with eRingCNN difficult; therefore, we perform the comparison based on the same application target: FFDNet-level inference at Full HD 20 fps. In this case, the energy efficiencies of eRingCNN-n2 and eRingCNN-n4 over Diffy are 2.71× and 4.59× respectively, when running at 167 MHz.

Comparison with SparTen, TIE, and CirCNN. Table VIII compares with the state-of-the-art accelerators for different sparsity approaches: SparTen (natural) [16], TIE (low-rank) [12], and CirCNN (full-rank). Here we compare synthesis results because only such numbers are reported for SparTen and CirCNN. To compare across different compression ratios, we consider an equivalent throughput which corresponds to the computing demand of the target uncompressed or real-valued model. With only 2-4× compression, our eRingCNN accelerators already provide competitive energy efficiencies of an equivalent 19.1-28.4 TOPS/W. In contrast, SparTen achieves merely 2.7 TOPS/W due to significant overheads for handling irregularity. Although TIE is very efficient for highly-compressed fully-connected (FC) layers, it shows inefficiency for the CONV layers with lower compression ratios. Finally, CirCNN only provides 10.0 TOPS/W using as high as 66× compression. The potential of algebraic sparsity is therefore demonstrated, in particular on moderately-compressed CNNs for computational imaging.

VII. RELATED WORK

Low-rank sparsity. This line of research also provides regular structure for efficient hardware acceleration. One approach is low-rank approximation, including tensor-train (TT) [37], canonical polyadic (CP) [30], and Tucker [27]. Another one is building networks using low-rank structures, such as MobileNet (v1/v2) [19], [41] and SqueezeNet [23]. However, this sparsity mainly aims to provide high compression ratios, and its effectiveness for computational imaging needs further exploration.

Block-based inference flows. They eliminate huge external memory bandwidth for feature maps, and two approaches were proposed to handle the boundary features across neighboring blocks. One is feature reusing, such as fused-layer [4] and Shortcut Mining [5], and the other one is recomputing, like eCNN [21]. In this paper, we adopt the latter only for the purpose of implementation and comparison.

VIII. CONCLUSION

This paper investigates the fundamental but seldom-explored algebraic sparsity for accelerating computational imaging CNNs. It can provide local sparsity and global regularity at the same time for energy-efficient inference. We lay down the general RingCNN framework by defining proper ring algebras and constructing corresponding CNN models. By extensive comparisons with several rings, the proposed one with the directional ReLU achieves near-maximum hardware efficiency and the best image quality simultaneously. We also design two high-performance eRingCNN accelerators to verify practical effectiveness. They can provide high-quality computational imaging at up to 4K UHD 30 fps while consuming only 3.76 W and 2.22 W, respectively. Based on these results, we believe that RingCNN exhibits great potential for enabling next-generation cameras and displays with energy-efficient and high-performance computational imaging.

ACKNOWLEDGMENT

The author would like to thank the anonymous reviewers for their feedback and suggestions, and Chi-Wen Weng for his help on the layout implementation.
2021-04-20T01:15:53.553Z
2021-04-19T00:00:00.000
{ "year": 2021, "sha1": "c0573a1d175f0c1ebbef318bba9f2a1b46275d18", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2104.09056", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c0573a1d175f0c1ebbef318bba9f2a1b46275d18", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
18497044
pes2o/s2orc
v3-fos-license
Corneal topography with high-speed swept source OCT in clinical examination We present the applicability of high-speed swept source (SS) optical coherence tomography (OCT) for quantitative evaluation of the corneal topography. A high-speed OCT device of 108,000 lines/s permits dense 3D imaging of the anterior segment within a time period of less than one fourth of second, minimizing the influence of motion artifacts on final images and topographic analysis. The swept laser performance was specially adapted to meet imaging depth requirements. For the first time to our knowledge the results of a quantitative corneal analysis based on SS OCT for clinical pathologies such as keratoconus, a cornea with superficial postinfectious scar, and a cornea 5 months after penetrating keratoplasty are presented. Additionally, a comparison with widely used commercial systems, a Placido-based topographer and a Scheimpflug imaging-based topographer, is demonstrated. Introduction Anterior segment optical coherence tomography (OCT) was introduced in 1994 [1]. It was based on a time domain OCT method presented three years earlier, in which the distribution of scattering points along the scanning beam is detected indirectly by varying the length of the reference arm of the Michelson interferometer [2]. The first commercial time domain OCT instrument, called OCT1 (produced by Humphrey Instruments, Inc., later acquired by Carl Zeiss, Inc.), used a light source with central wavelength of 830 nm and was designed to measure the human retina. After some modifications, it was possible to reconstruct crosssectional images of only a small portion of the anterior segment, with an imaging speed of 100-400 lines/s [3]. Results from a prototype, based on 1310 nm superluminescent diode capable of covering larger portion of the anterior segment-entire cross-sectional information about the cornea-were reported in 2000 [4], and 5 years later a commercial 1310 time domain OCT system for anterior segment imaging was launched under the name Visante OCT (Carl Zeiss, Inc.). Currently this is the most widely used OCT device for dedicated in vivo anterior segment imaging. It provides cross-sectional images with axial resolution of 18 µm and an imaging speed of 2000 lines/s. Visante OCT creates pachymetry maps but cannot perform topographic analysis of the cornea, mostly because of limitations in acquisition time. The introduction of Fourier domain OCT in 2002 [5][6][7], with the primary advantage of increased sensitivity or speed and the possibility of three-dimensional imaging [8][9][10][11], promised to improve the ability of OCT to quantitatively assess the corneal topography [12][13][14][15][16]. Although commercial Fourier domain OCT systems using a spectrometer (Spectral OCT -SOCT) have reached an axial resolution of 3-6 µm and imaging speed of 20,000-50,000 lines/s, they are not able to provide adequate assessment of anterior and posterior corneal topography because of significantly strong fringe washout and sensitivity drop off typical for SOCT [17][18][19]. High-speed swept source OCT (SS OCT), first reported in 2003, employs wavelength tunable lasers (and is a promising new modality for quantitative evaluation of the cornea. SS OCT provides improved imaging speed and higher sensitivity in comparison with previous OCT technologies [20]. Two major modalities, depending on the method used for wavelength selection, can be used here: diffraction grating with polygonal mirror [21-23] or fiber Fabry-Perot tunable filter [24,25]. 
SS OCT performance might be additionally improved by Fourier domain mode locking (FDML) lasers achieving a repetition rate of up to several hundreds of kilohertz [26-28]. Apart from improved speed and sensitivity, there is another advantage of 1300 nm SS OCT for anterior segment imaging, due to its better tissue penetration for the central wavelength comparing with 800 nm light sources, especially for those applications when iris, sclera, or irido-scleral angle are investigated. Recently, Tomey Corporation (Japan) introduced an instrument for anterior segment evaluation based on SS OCT. It offers an axial resolution of 10 µm and a scanning speed of 30,000 lines/s, which is equivalent to commercial SOCT instruments. One of the most important features of this system is the ability to provide topographic maps of both anterior and posterior surfaces of the cornea. In current clinical and interventional ophthalmic practice, precise quantitative assessment of the cornea is more important than ever. Robust development in the field of refractive surgery has resulted in the need for very early diagnosis of corneal ectatic diseases, ideally at a subclinical stage. Knowledge of the exact corneal shape is essential, not only to determine patients' suitability to undergo surgery, but also to properly plan the treatment. Placido-based computerized videokeratoscopy, proposed first by Klyce in 1984 [29], is incapable of providing all required information. In this technology, multiple concentric light rings are projected onto the cornea. The reflected image is captured on a camera, and computer software analyzes the data and displays the results. The deviation of reflected rings is measured, and the curvature of the corneal surface points in the axial direction is calculated, primarily. Surface curvature measures how fast the surface bends at a certain point in a certain direction. The error of Placido-based corneal topography under optimal conditions is ±0.25 D, but in abnormal corneas seen in clinical practice it is often ±0.50-1.00 D [30]. This technology has several inherent limitations: -There is no information on the posterior corneal surface, but many ectatic disorders initially present changes posteriorly. -Pachymetric maps depicting the distribution of corneal thickness cannot be made. -The imaging requires an intact epithelial surface and healthy tear film. -The data over the apex of the cornea and the periphery are commonly extrapolated from the surrounding curvature. Recently introduced elevation-based topography overcomes the shortcomings of Placidobased systems. True topographic imaging requires the reconstruction of the corneal image in three dimensions [31]. It is very difficult to detect pattern differences on a map produced with raw elevation data; thus a reference shape must be used. The most commonly used is the bestfit sphere (BFS), but the best fit ellipse and the best fit toric ellipsoid can also be used. The elevation of a point on the corneal surface is displayed as the height of the point on the corneal surface relative to a spherical reference surface. The best fit sphere is calculated by dedicated software for every elevation map separately. At the moment, four systems use optical cross sectioning to triangulate both the anterior and the posterior corneal surfaces. Orbscan (Bausch & Lomb, USA) uses a scanning-slit technology, and Pentacam (OCULUS, Germany), Galilei (Ziemer, Switzerland), and TMS-5 (Tomey, Japan) utilize rotating Scheimpflug imaging. 
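To illustrate the best-fit-sphere elevation computation described above, the following NumPy sketch fits a sphere to 3D surface points by linear least squares and reports the signed deviation of each point from it. This is a generic algebraic sphere fit; the fitting zones, float/pinned options, and other constraints used by the commercial instruments are not reproduced here.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: rewrite ||p - c||^2 = r^2 as a linear system in
    (c, r^2 - |c|^2).  points is an (N, 3) array of surface coordinates (e.g. in mm)."""
    A = np.column_stack([2.0 * points, np.ones(len(points))])
    b = np.sum(points ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def elevation_map(points, center, radius):
    """Signed radial deviation from the best-fit sphere: positive where a point lies
    farther from the BFS centre than the fitted radius."""
    return np.linalg.norm(points - center, axis=1) - radius

# quick self-check on synthetic points sampled from a 7.94 mm sphere
rng = np.random.default_rng(1)
theta, phi = rng.uniform(0, 0.6, 500), rng.uniform(0, 2 * np.pi, 500)  # ~9 mm wide cap
pts = 7.94 * np.column_stack([np.sin(theta) * np.cos(phi),
                              np.sin(theta) * np.sin(phi),
                              np.cos(theta)])
c, r = fit_sphere(pts)
assert abs(r - 7.94) < 1e-6 and np.allclose(elevation_map(pts, c, r), 0.0, atol=1e-6)
```

An elevation map is then simply this signed deviation plotted over the measured zone against the chosen reference shape.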
Although the instruments based on rotating Scheimpflug cameras are considered the most comprehensive and accurate, they also have some limitations. The most important is the imaging speed. Pentacam requires 2.0 seconds to obtain 50 radial scans. This increases the risk of motion artifacts, even though there is an inbuilt second camera to control for eye movements. Moreover, radial scanning may not provide sufficient scan density of the corneal periphery, resulting in a requirement for interpolation. Another limitation is that the instruments using the Scheimpflug principle are less accurate in comparison to Placido-based ones in providing traditional curvature maps of the anterior surface, and only moderate agreement in simulated keratometry values between these technologies are reported [32]. Also, from the perspective of using different OCT techniques, we must consider the speed as one of the most important factors. Reasonably high corneal curvature leads to strong signal drop in spectrometer-based OCT or to a resolution decrease in SS OCT due to the fringe washout effect [17]. To avoid this drawback, dense transverse scanning is required, and the same applies to higher speed OCT systems. Another reason for choosing a SS OCT instrument for corneal topography is the much less profound effect of depth dependent signal decay in SS OCT comparing with spectrometer-based systems [33]. The aim of this paper is to present the applicability of a next-generation system using a swept laser working with a repetition rate of 108,000 lines/s, along with OCT technology, for quantitative evaluation of the cornea. The first elevations maps generated with a SS OCT setup have been already presented [33]. However, no deeper analysis and verification of published results, especially for eyes with pathologies, has been presented so far. Here we would like to perform such preliminary verification by comparing the performance of the prototype SS OCT system with currently used technologies: Placido-based topography and Scheimpflug imaging-based topography. According to our best knowledge, this is the first study presenting SS-OCT-based quantitative analysis of the anterior segment for different clinical pathologies in comparison with commercial systems. Figure 1 shows the experimental setup of the SS OCT system with a 1300 nm FDML laser operating as a light source, both constructed in the Institute of Physics at Nicolaus Copernicus University, Torun, Poland. Details of the design of the SS OCT instrument are given elsewhere [33]. For the purpose of this study, two changes in laser operation were introduced: the frequency of the Fabry-Perot filter driving signal was set to 54 kHz (108 kHz effective sweep rate), and a filter scanning range of 82 nm (Fig. 1a) was chosen-resulting in 20 µm of axial resolution in tissue and an imaging depth of 9 mm. These are necessary settings to get information about corneal surfaces and the iris. The average output power of the FDML laser was 21 mW. The eye was illuminated with 2.5 mW of optical power, which is consistent with safety exposure limits for the human eye given by the American National Standards Institute (ANSI) [34]. The system sensitivity was measured to be 105 dB at 9.2 µs exposure time. To generate the reference fringe pattern for numerical recalibration to the optical frequency space, a silica glass plate was used. This approach is an alternative to using a Michelson or Mach-Zehnder interferometer. 
Because of the very low dispersion of silica at 1300 nm, the interference between glass surfaces produces more stable fringes and provides an invariable parameter for conversion to real geometry in the z-axis. The SS OCT system presented here uses a scanning protocol of 50 B-scans with 500 Ascans each. Such a raster scanning was selected in order to obtain dense 3D information of corneal topography and to minimize artifacts caused by involuntary eye movements. A single tomogram was measured within 4.6 ms. Each 3D volume was captured within 230 ms. The lateral dimension of the scanned area was 11.2 mm × 11.2 mm. Collected data were postprocessed (k-space resampling, shaping, and fast Fourier transformation) to achieve cross-sectional images. The segmentation procedure was employed to delineate the corneal surfaces and mark pupil edges. Segmented data were corrected for index of refraction and tilt misalignment. At this level, topography maps of both cornea surfaces as well as cornea thickness were calculated. Subsequently, the sphere was fitted (best-fit sphere) to anterior and posterior cornea surfaces, and differences between the fitted surface and real data were plotted on elevation maps. Supplementary to measurements performed by a prototype SS OCT system, two other standard instruments were used: a conventional Placido-based topographer (PTC 110, Optopol Technology, Poland) and a rotating Scheimpflug camera (Pentacam HR, Oculus, Germany). We used sagittal (axial) anterior surface maps from the Placido-based topographer and elevation subtraction maps created with the use of the best-fit sphere algorithm (BFS). It should be mentioned here that Scheimpflug images were processed (segmentation, fitting, calculating for elevation maps) with software provided by the manufacturer. For elevation-based systems, the same color scales and the same 9 mm area of interest were used. All examinations of the same patient were performed in a single day in a noninvasive, noncontact manner. Swept-source-based system for corneal topography Written informed consent was obtained from all patients. The Ethics Committee of the Collegium Medicum in Bydgoszcz, Nicolaus Copernicus University in Torun, Poland, approved the study. Results and discussion Fan and optical distortions can significantly affect image reconstruction in OCT crosssectional images, and the same can strongly obstruct corneal topographic reconstruction. The fan distortion is associated with the OCT scanning system, which includes a pair of separate scanner mirrors and optical elements. Optical distortions are related to imaging optics and can cause, for example, pincushion or barrel effects in projection OCT images. When flat surfaces are imaged with OCT, they become curved. However, this effect can be minimized by careful adjustment of the optical setup. Following Ortiz et al. [35,36], we have adjusted the position of the objective lens with respect to scanners to decrease the magnitude of fan distortion. During this procedure a flat surface was imaged on a screen in the preview mode in two orthogonal directions. Careful adjustment of the objective lens position enabled us to flatten reconstructed images, which are presented in Figs. 2a and 2b for two perpendicular directions. To investigate the influence of fan and optical distortion of our OCT system in three dimensions, we performed a simple experiment using 2D gridlines printed on paper, which were then measured by the SS OCT system. 
Figure 2c shows an en face projection image generated from 3D OCT data. The view is limited to the 9 mm area (the same area size used for elevation map calculation). A slight defocusing is visible at the edge of the scanned area. However, the shape of the grid is not significantly distorted. We can assume that for demonstration of the applicability of SSOCT for corneal topography the optical distortions are negligible. To verify the performance of the SS OCT system for anterior segment imaging, a reference sphere was measured (EyeSys Technologies) (Fig. 3c). The reference sphere was measured at nine different randomly set tilt-shift positions (Figs. 3a-3b), which was consistent with the scanning protocol used for patient examination. Since the reference sphere consists only of one surface, the postprocessing was performed without the refraction correction step. The reference shape was segmented automatically by using custom designed software written in LabView. In the method proposed by Gora et al. [33] the pupillary plane is used for mathematical correction of tilt or misalignment of the measured eye with respect to the optical axis of the instrument. As the reference sphere has no pupil, we used the basal surface of the target. For each of nine measurements the best fit sphere was found, and its radius of curvature was calculated (Fig. 3d). A mean value of 7.96 mm (standard deviation of 0.07 mm) was consistent with the value given by the manufacturer of the reference sphere (7.94 mm) and the value measured by the Placido-based topographer (7.94 mm). The 0.07 mm error for a 7.96 mm radius will produce a 0.37 D error. Quantitative analysis of three corneas from three different patients (keratoconus, a cornea with superficial postinfectious scar, and a cornea 5 months after penetrating keratoplasty) was performed with three different instruments. Figure 4 shows the topographic analysis of a keratoconic cornea measured with three different instruments. The confirmation of the clinically significant keratoconous is based on Placido topography in which typical pattern is detected on anterior sagittal curvature map. We revealed good agreement between anterior elevation topographies provided by Pentacam and SS OCT in this case (Fig. 4). The central keratometry value (K1) and axis in a flat meridian is almost the same for Placido-based topography, Pentacam, and SS OCT, but readings for a steep meridian (K2) show differences above 1 D and above 20° between the Placido-based topographer and elevation-based systems ( Table 1). Agreement of the results from elevation topographers is also high for posterior corneal topography and pachymetry. Figure 5 shows results of the examination of the cornea with a scar in the anterior stroma after superficial infectious ulcer. Placido topography shows irregular astigmatism, and keratometric values are quite different from elevation-based systems. Pentacam shows an area below the reference shape in the center and paracentrally, whereas SS OCT topography shows a more regular anterior surface. Posterior topography patterns are similar for both instruments; thus a distinct decrease of corneal thickness in the center is visible on Pentacam pachymetry map compared with that calculated by the SS OCT system (Table 1). Figures 6(a-b) show magnified tomograms of the cornea with a scar measured with the Pentacam HR and the SS OCT prototype. Blue light from the 475 nm LED of the Pentacam imaging system is strongly reflected back from the opaque scarring tissue. 
The signal is oversaturated, which results in difficulties in segmentation of the anterior surface. Moreover, the posterior surface delineation is poor. In contrast to SS OCT, Pentacam segmentation causes a falsely decreased thickness of the central cornea (Fig. 6a). On SS OCT images, both anterior and posterior surfaces are clearly visible and easy to delineate. In addition, because of its better resolution and sensitivity, SS OCT offers a much better view of the corneal structure. In this pathological case, SS OCT images reveal that the lack of superficial stromal tissue due to the scarring process is compensated by increased thickness of the epithelium (Fig. 6(b)), and the resultant corneal thickness is larger than that measured by the Pentacam instrument (Table 1). Results obtained from the eye 5 months after penetrating keratoplasty are shown in Fig. 7. Placido reflections are much distorted. Only a couple of rings in the center could be partially distinguished and analyzed by the computer. Despite the fact that part of the cornea is covered with the upper lid, the Placido system provides a curvature map of the whole cornea. Also, the differences in the central keratometry reading are very large in comparison with Pentacam and SS OCT. Anterior topographies from Pentacam and SS OCT show comparable patterns, but the elevated central island is located slightly higher on the SS OCT map, resulting in a significant difference in central keratometry readings. The area where the donor and host tissues are interconnected is more elevated on maps generated with SS OCT. As a result, SS OCT shows much thicker pachymetry in the area of graft-host junction. A closer look on a single cross-sectional image has to be done to fully understand origin of differences in elevation maps generated with both Pentacam and SS OCT. Figure 8 shows tomograms of the corresponding parts of the cornea after penetrating keratoplasty acquired with two different instruments. The Scheimpflug image is moderately clear, and segmentation of the tissue by the software is not correct (Fig. 8(a)). This inaccuracy is caused by the simplified segmentation algorithm's assuming a certain level of corneal smoothness, which is obviously not true in the case of very complex corneal topography like that after penetrating keratoplasty. SS OCT provides tomograms with a more homogenous distribution of the backscattered intensity. Therefore it is easier to apply a segmentation algorithm that does not assume the smoothness of the corneal surface ( Fig. 8(b)). Conclusions In this paper, we demonstrated the applicability of the high-speed SS OCT system for quantitative evaluation of the cornea. Full 3D anterior segment imaging necessary to perform further topographic analysis was achieved with the custom-designed SS OCT system running with speed of 108,000 lines/s, 20 µm axial resolution in tissue, and imaging depth of 9 mm. A raster scanning protocol with 25000 lateral points was applied to obtain better coverage of the cornea. Dense sampling is especially important in cases with focal pathologic changes. The accuracy of keratometry readings was verified on the reference sphere from EyeSys Technologies, showing good correlation with values given by the manufacturer and high repeatability of measurements. Quantitative analysis of corneas with pathological changes was performed with three different instruments: the SS OCT system, a Placido-based topographer, and a rotating Scheimpflug camera system. 
In conclusion, high-speed SS OCT is a promising technology for quantitative corneal evaluation that has some advantages over the Scheimpflug system, including better tomogram quality and shorter measurement time. However, the most important advantage is that topographic analysis can be done along with the high-quality cross-sectional imaging. SS OCT could be used not only for elevation-based topography but also for evaluation of the corneal structure, including the epithelium. Thus, as a decrease in epithelial thickness masks the presence of an underlying cone on front surface topography [37], SS OCT may be helpful in detecting keratoconus at a very early stage.

[Caption of Fig. 7: Qualitative evaluation of a cornea 5 months after penetrating keratoplasty with three different instruments. K1, K2: central keratometry readings. The red lines on the Scheimpflug images correspond to the lateral size of the cross-sectional images from SS OCT.]
2018-04-03T02:12:01.920Z
2011-08-29T00:00:00.000
{ "year": 2011, "sha1": "73d6d10d899009e1e31f0ae15afc39c4b988c6f2", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1364/boe.2.002709", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "73d6d10d899009e1e31f0ae15afc39c4b988c6f2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
2298833
pes2o/s2orc
v3-fos-license
Lower bounds for oblivious subspace embeddings An oblivious subspace embedding (OSE) for some eps, delta in (0,1/3) and d<= m<= n is a distribution D over R^{m x n} such that for any linear subspace W of R^n of dimension d, Pr_{Pi ~ D}(for all x in W, (1-eps) |x|_2<= |Pi x|_2<= (1+eps)|x|_2)>= 1 - delta. We prove that any OSE with delta<1/3 must have m = Omega((d + log(1/delta))/eps^2), which is optimal. Furthermore, if every Pi in the support of D is sparse, having at most s non-zero entries per column, then we show tradeoff lower bounds between m and s. Introduction A subspace embedding for some ε ∈ (0, 1/3) and linear subspace W is a matrix Π satisfying An oblivious subspace embedding (OSE) for some ε, δ ∈ (0, 1/3) and integers d ≤ m ≤ n is a distribution D over R m×n such that for any linear subspace W ⊂ R n of dimension d, That is, for any linear subspace W ⊂ R n of bounded dimension, a random Π drawn according to D is a subspace embedding for W with good probability. OSE's were first introduced in [16] and have since been used to provide fast approximate randomized algorithms for numerical linear algebra problems such as least squares regression [4,11,13,16], low rank approximation [3,4,13,16], minimum margin hyperplane and minimum enclosing ball [15], and approximating leverage scores [10]. For example, consider the least squares regression problem: given A ∈ R n×d , b ∈ R n , compute The optimal solution x * is such that Ax * is the projection of b onto the column span of A. Thus by computing the singular value decomposition (SVD) A = UΣV T where U ∈ R n×r , V ∈ R d×r have orthonormal columns and Σ ∈ R r×r is a diagonal matrix containing the non-zero singular values of A (here r is the rank of A), we can set x * = V Σ −1 U T b so that Ax * = UU T b as desired. Given that the SVD can be approximated in timeÕ(nd ω−1 ) 1 [6] where ω < 2.373 . . . is the exponent of square matrix multiplication [18], we can solve the least squares regression problem in this time bound. A simple argument then shows that if one instead computes for some subspace embedding Π for the (d + 1)-dimensional subspace spanned b and the columns of A, then Ax − b 2 ≤ (1 + O(ε)) Ax * − b 2 , i.e.x serves as a near-optimal solution to the original regression problem. The running time then becomesÕ(md ω−1 ), which can be a large savings for m ≪ n, plus the time to compute ΠA and Πb and the time to find Π. It is known that a random gaussian matrix with m = O((d + log(1/δ))/ε 2 ) is an OSE (see for example the net argument in Clarkson and Woodruff [4] based on the Johnson-Lindenstrauss lemma and a net in [2]). While this leads to small m, and furthermore Π is oblivious to A, b so that its computation is "for free", the time to compute ΠA isÕ(mnd ω−2 ), which is worse than solving the original least squares regression problem. Sarlós constructed an OSE D, based on the fast Johnson-Lindenstrauss transform of Ailon and Chazelle [1], with the properties that (1) m =Õ(d/ε 2 ), and (2) for any vector y ∈ R n and Π in the support of D, Πy can be computed in time O(n log n) for any Π in the support of D. This implies an approximate least squares regression algorithm running in time O(nd log n) +Õ(d ω /ε 2 ). A recent line of work sought to improve the O(nd log n) term above to a quantity that depends only on the sparsity of the matrix A as opposed to its ambient dimension. The works [4,11,13] give an OSE with m = O(d 2 /ε 2 ) where every Π in the support of the OSE has only s = 1 non-zero entry per column. 
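To make the s = 1 construction and the sketch-and-solve reduction concrete, here is a minimal NumPy sketch. It builds a sparse embedding with a single random ±1 entry per column and solves the sketched regression problem; the target dimension m = O(d²/ε²) follows the bound quoted above for s = 1, the constants are illustrative, and in practice ΠA would be formed in nnz(A) time by hashing rather than by an explicit dense multiply.

```python
import numpy as np

def sparse_embedding(m, n, rng):
    """s = 1 OSE: each column has a single +/-1 entry in a uniformly random row."""
    P = np.zeros((m, n))
    P[rng.integers(0, m, size=n), np.arange(n)] = rng.choice([-1.0, 1.0], size=n)
    return P

def sketched_least_squares(A, b, eps=0.5, rng=None):
    """Sketch-and-solve: minimize ||Pi(Ax - b)|| instead of ||Ax - b||."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = A.shape
    m = min(n, int(np.ceil((d + 1) ** 2 / eps ** 2)))  # m = O(d^2 / eps^2) suffices for s = 1
    Pi = sparse_embedding(m, n, rng)
    x, *_ = np.linalg.lstsq(Pi @ A, Pi @ b, rcond=None)
    return x

rng = np.random.default_rng(0)
A, b = rng.standard_normal((5000, 10)), rng.standard_normal(5000)
x_opt, *_ = np.linalg.lstsq(A, b, rcond=None)
x_skt = sketched_least_squares(A, b, eps=0.5, rng=rng)
# ||A @ x_skt - b|| should be within a (1 + O(eps)) factor of ||A @ x_opt - b||
```

The embedding is chosen without looking at A or b, which is exactly the obliviousness the definition asks for.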
The work [13] also showed how to achieve m = O(d 1+γ /ε 2 ), s = poly(1/γ)/ε for any constant γ > 0. Using these OSE's together with other optimizations (for details see the reductions in [4]), these works imply approximate regression algorithms running in time O(nnz(A) Interestingly the algorithm which yields the last bound only requires an OSE with distortion (1 + ε 0 ) for constant ε 0 , while still approximately the least squares optimum up to 1 + ε. As seen above we now have several upper bounds, though our understanding of lower bounds for the OSE problem is lacking. Any subspace embedding, and thus any OSE, must have m ≥ d since otherwise some non-zero vector in the subspace will be in the kernel of Π and thus not have its norm preserved. Furthermore, it quite readily follows from the works [9,12] that any OSE must have m = Ω(min{n, log(d/δ)/ε 2 }) (see Corollary 5). Thus the best known lower bound to date is m = Ω(min{n, d + ε −2 log(d/δ)}), while the best upper bound is m = O(min{n, (d + log(1/δ))/ε 2 }) (the OSE supported only on the n × n identity matrix is indeed an OSE with ε = δ = 0). We remark that although some problems can make use of OSE's with distortion 1 + ε 0 for some constant ε 0 to achieve (1 + ε)-approximation to the final problem, this is not always true (e.g. no such reduction is known for approximating leverage scores). Thus it is important to understand the required dependence on ε. We also make progress in understanding the tradeoff between m and s. The work [14] observed via a simple reduction to nonuniform balls and bins that any OSE with s = 1 must have m = Ω(d 2 ). Also recall the upper bound of [13] of m = O(d 1+γ /ε 2 ), s = poly(1/γ)/ε for any constant γ > 0. Our proof in the first contribution follows Yao's minimax principle combined with concentration arguments and Cauchy's interlacing theorem. Our proof in the second contribution uses a bound for nonuniform balls and bins and the simple fact that for any distribution over unit vectors, two i.i.d. samples are not negatively correlated in expectation. Notation We let O n×d denote the set of all n × d real matrices with orthonormal columns. For a linear subspace W ⊆ R n , we let proj W : R n → W denote the projection operator onto W . That is, if the columns of U form an orthonormal basis for W , then proj W x = UU T x. We also often abbreviate "orthonormal" as o.n. In the case that A is a matrix, we let proj A denote the projection operator onto the subspace spanned by the columns of A. Throughout this document, unless otherwise specified all norms · are ℓ 2 → ℓ 2 operator norms in the case of matrix argument, and ℓ 2 norms for vector arguments. The norm A F denotes Frobenius norm, i.e. ( i,j A 2 i,j ) 1/2 . For a matrix A, κ(A) denotes the condition number of A, i.e. the ratio of the largest to smallest singular value. We use [n] for integer n to denote {1, . . . , n}. We use A B to denote A ≤ CB for some absolute constant C, and similarly for A B. Dimension lower bound Let U ∈ O n×d be such that the columns of U form an o.n. basis for a d-dimensional linear subspace W . Then the condition in Eq. (1) is equivalent to all singular values of ΠU lying in the interval [1 − ε, 1 + ε]. Let κ(A) denote the condition number of matrix A, i.e. its largest singular value divided by its smallest singular value, so that for any such U an OSE has κ(ΠU) ≤ 1 + ε with probability 1 − δ over the randomness of Π. 
Thus D being an OSE implies the condition We now show a lower bound for m in any distribution D satisfying Eq. (2) with δ < 1/3. Our proof will use a couple lemmas. The first is quite similar to the Johnson-Lindenstrauss lemma itself. Without the appearance of the matrix D, it would follow from the the analyses in [5,8] using Gaussian symmetry. [7]). Let g = (g 1 , . . . , g n ) be such that g i ∼ N (0, 1) are independent, and let B ∈ R n×n be symmetric. Then for all λ > 0, Lemma 2. Let u be a unit vector drawn at random from S n−1 , and let E ⊂ R n be an mdimensional linear subspace for some 1 ≤ m ≤ n. Let D ∈ R n×n be a diagonal matrix with smallest singular value σ min and largest singular value σ max . Then for any 0 < ε < 1 for some σ min ≤σ ≤ σ max . Proof. Let the columns of U ∈ O n×m span E, and let u i denote the ith row of U. Let the singular values of D be σ 2 1 , . . . , σ 2 n . The random unit vector u can be generated as g/ g for a multivariate Gaussian g with identity covariance matrix. Then We have and DUU T D ≤ D 2 · UU T = σ 2 max . Therefore by the Hanson-Wright inequality, Similarly E g 2 = n and g is also the product of a matrix with orthonormal columns (the identity matrix), a diagonal matrix with σ min = σ max = 1 (the identity matrix), and a multivariate gaussian. The analysis above thus implies Therefore with probability 1 − C(e −Ω(ε 2 n) + e −Ω(ε 2 m) ) for some constant C > 0, We also need the following lemma, which is a special case of Cauchy's interlacing theorem. Lastly, we need the following theorem and corollary, which follows from [9]. A similar conclusion can be obtained using [12], but requiring the assumption that d < n 1−γ for some constant γ > 0. Theorem 4. Suppose D is a distribution over R m×n with the property that for any t vectors Proof. The proof uses Yao's minimax principle. That is, let U be an arbitrary distribution over t-tuples of vectors in S n−1 . Then Switching the order of probabilistic quantifiers, an averaging argument implies the existence of a fixed matrix Π 0 ∈ R m×n so that The work [9, Theorem 9] gave a particular distribution U hard for the case t = 1 so that no Π 0 can satisfy Eq. (5) unless m min{n, ε −2 log(1/δ)}. In particular, it showed that the left hand side of Eq. (5) is at most 1 − e −O(ε 2 m+1) as long as m ≤ n/2 in the case t = 1. For larger t, we simply let the hard distribution be U ⊗t hard , i.e. the t-fold product distribution of U hard . Then the left hand side of Eq. Rerranging terms proves the theorem. Proof. We have that for any d-dimensional subspace W ⊂ R n , a random Π ∼ D with probability 1 − δ simultaneously preserves norms of all x ∈ W up to 1 ± ε. Thus for any set of d vectors x 1 , . . . , x d ∈ R n , a random such Π with probability 1 − δ simultaneously preserves the norms of these vectors since it even preserves their span. The lower bound then follows by Theorem 4. Now we prove the main theorem of this section. Theorem 6. Let D be any OSE with ε, δ < 1/3. Then m = Ω(min{n, d/ε 2 }). Proof. We assume d/ε 2 ≤ cn for some constant c > 0. Our proof uses Yao's minimax principle. Thus we must construct a distribution U hard such that cannot hold for any Π 0 ∈ R m×n which does not satisfy m = Ω(d/ε 2 ). The particular U hard we choose is as follows: we let the d columns of U be independently drawn uniform random vectors from the sphere, post-processed using Gram-Schmidt to be orthonormal. That is, the columns of U are an o.n. basis for a random d-dimensional linear subspace of R n . 
Let Π 0 = LDW T be the singular value decomposition (SVD) of Π 0 , i.e. L ∈ O m×n , W ∈ O n×n , and D is n × n with D i,i ≥ 0 for all 1 ≤ i ≤ m, and all other entries of D are 0. Note that W T U is distributed identically as U, which is identically distributed as W ′ U where W ′ is an n × n block diagonal matrix with two blocks. The upper-left block of W ′ is a random rotation M ∈ O m×m according to Haar measure. The bottom-right block of W ′ is the (n − m) × (n − m) identity matrix. Thus it is equivalent to analyze the singular values of the matrix LDW ′ U. Also note that left multiplication by L does not alter singular values, and the singular values of DW ′ U and D ′ MA T U are identical, where A is the n × m matrix whose columns are e 1 , . . . , e m . Also D ′ is an m × m diagonal matrix with D ′ i,i = D i,i . Thus we wish to show that if m is sufficiently small, then Henceforth in this proof we assume for the sake of contradiction that m ≤ c·min{d/ε 2 , n} for some small positive constant c > 0. Also note that we may assume by Corollary 5 that m = Ω(min{n, ε −2 log(d/δ)}). Assume that with probability strictly larger than 2/3 over the choice of U, we can find unit vectors z 1 , z 2 so that A T Uz 1 / A T Uz 2 > 1 + ε. Now suppose we have such z 1 , z 2 . Define y 1 = A T Uz 1 / A T Uz 1 , y 2 = A T Uz 2 / A T Uz 2 . Then a random M ∈ O m×m has the same distribution as M ′ T , where M ′ is i.i.d. as M, and T can be any distribution over O m×m , so we write M = M ′ T . T may even depend on U, since M ′ U will then still be independent of U and a random rotation (according to Haar measure). Let T be the m × m identity matrix with probability 1/2, and R y 1 ,y 2 with probability 1/2 where R y 1 ,y 2 is the reflection across the bisector of y 1 , y 2 in the plane containing these two vectors, so that R y 1 ,y 2 y 1 = y 2 , R y 1 ,y 2 y 2 = y 1 . Now note that for any fixed choice of M ′ it must be the case that occurs with probability 1/2 over T , and the reverse inequality occurs with probability 1/2. Thus for this fixed U for which we found such z 1 , z 2 , over the randomness of M ′ , T we have κ(D ′ MA T U) ≥ D ′ MA T Uz 1 / D ′ MA T Uz 2 is greater than 1 + ε with probability at least 1/2. Since such z 1 , z 2 exist with probability larger than 2/3 over chioce of U, we have established Eq. (7). It just remains to establish the existence of such z 1 , z 2 . Let the columns of U be u 1 , . . . , u d , and defineũ i = A T u i andŨ = A T U. Let U −d be the n×(d−1) matrix whose columns are u 1 , . . . , u d−1 , and letŨ where the columns of A are the projections of the columns of A onto the subspace spanned by the columns of Since the singular values ofŨ andŨ T are the same, it suffices to show κ(Ũ T ) > 1 + ε. For this we exhibit two unit vectors Let (A ⊥ ) T = CΛE T be the SVD, where C ∈ R m×m , Λ ∈ R m×m , E ∈ R n×m . As usual C, E have o.n. columns, and Λ is diagonal with all entries in [ Condition on U −d . The columns of E form an o.n. basis for the column space of A ⊥ , which is some m-dimensional subspace of the (n − d + 1)-dimensional orthogonal complement of the column space of U −d . Meanwhile u d is a uniformly random unit vector drawn from this orthogonal complement, and thus (1) by Lemma 2 and the fact that d ≤ εn and m = Ω(ε −2 log d). Note then also that ΛE T u d = ũ d = (1 ± C 6 ε) m/n with probability 1 − d −Ω(1) since Λ has bounded singular values. 
Also note E T u/ E T u is uniformly random in S m−1 , and also B T C has orthonormal rows since B T CC T B = B T B = I, and thus again by Lemma 2 with E being the row space of We first note that by Lemma 3 and our assumption on the singular values ofŨ −d ,Ũ T has smallest singular value at most (1 + C 2 ε) m/n. We then set x 2 to be a unit vector such It just remains to construct x 1 so that Ũ T x 1 > (1 + ε)(1 + C 2 ε) m/n. To construct x 1 we split into two cases: Sparsity Lower Bound In this section, we consider the trade-off between m, the number of columns of the embedding matrix Π, and s, the number of non-zeroes per column of Π. In this section, we only consider the case n ≥ 100d 2 . By Yao's minimax principle, we only need to argue about the performance of a fixed matrix Π over a distribution over U. Let the distribution of the columns of U be d i.i.d. random standard basis vectors in R n . With probability at least 99/100, the columns of U are distinct and form a valid orthonormal basis for a d dimensional subspace of R n . If Π succeeds on this distribution of U conditioned on the fact that the columns of U are orthonormal with probability at least 99/100, then it succeeds in the original distribution with probability at least 98/100. In section 3.1, we show a lower bound on s in terms of ε, whenever the number of columns m is much smaller than ε 2 d 2 . In section 3.2, we show a lower bound on s in terms of m, for a fixed ε = 1/2. Finally, in section 3.3, we show a lower bound on s in terms of both ε and m, when they are both sufficiently small. Proof. We first need a few simple lemmas. Lemma 8. Let P be a distribution over vectors of norm at most 1 and u and v be independent samples from P. Then E u, v ≥ 0. Proof. Let δ = E u, v . Assume for the sake of contradiction that δ < 0. Take t samples u 1 , . . . , u t from P. By linearity of expectation, we have 0 ≤ E( i u i ) 2 ≤ t + t(t − 1)δ. This is a contradiction because the RHS tends to −∞ as t → ∞. Proof. We prove the contrapositive. If P(X ≤ −δ) > 1/(1 + δ), then Let u i be the i column of ΠU, r i and z i be the index and the value of the coordinate of the maximum absolute value of u i , and v i be u i with the coordinate at position r i removed. Let p 2j−1 (respectively, p 2j ) be the fractions columns of Π whose entry of maximum absolute value is on row j and is positive (respectively, negative). Let C i,j be the indicator variable indicating whether r i = r j and z i and z j are of the same sign. If the pairs (i 1 , i 2 ) and (i 3 , i 4 ) share one index then P(C i 1 , Thus for this case, The last inequality follows from the fact that the ℓ 3 norm of a vector is smaller than its ℓ 2 norm. We have Therefore, . Thus, with probability at least 1 − O(ε), we have C ≥ 4ε −2 . We now argue that there exist 1/ε pairwise-disjoint pairs (a i , b i ) such that r a i = r b i and z a i and z b i are of the same sign. Indeed, let d 2j−1 (respectively, d 2j ) be the number of u i 's with r i = j and z i being positive (respectively, negative). Wlog, assume that d 1 , . . . , d t are all the d i 's that are at least 2. We can always get at least t i=1 (d i − 1)/2 disjoint pairs. We have For each pair (a i , b i ), by Lemmas 8 and 9, P[ v a i , v b i ≤ −ε] ≤ 1 1+ε and these events for different i's are independent so with probability at least 1 − (1 For Π to be a subspace embedding for the column span of U, it must be the case, for all i, Proof. We first prove a standard bound for a certain balls and bins problem. 
The proof is included for completeness. Proof. Let X i be the indicator r.v. for bin i having t = α/(2γ) balls, and X def = i X i . Then Next we prove a slightly weaker bound for the non-uniform version of the problem. Proof. The following procedure is inspired by the alias method, a constant time algorithm for sampling from a given discrete distribution (see e.g. [17]). We define a set of m virtual bins with equal probabilities of receiving a ball as follows. The following invariant is maintained: in the ith step, there are m − i + 1 values p 1 , . . . , p m−i+1 satisfying j p j = (m − i + 1)/m. In the ith step, we create the ith virtual bin as follows. Pick the smallest p j and the largest p k . Notice that p j ≤ 1/m ≤ p k . Form a new virtual bin from p j and 1/m − p j probability mass from p k . Remove p j from the collection and replace p k with p k + p j − 1/m. By Lemma 11, there exist d 1−α /2 virtual bins receiving at least α/(2γ) balls. Since each virtual bin receives probability mass from at most 2 bins, there exist d 1−α /2 groups of balls of size at least α/(4γ) such that all balls in the same group land in the same bin. Finally we use the above bound for balls and bins to prove the lower bound. Let p i be the fraction of columns of Π whose coordinate of largest absolute value is on row i. By Lemma 12, there exist a row i and α/(4γ) columns of ΠU such that the coordinates of maximum absolute value of those columns all lie on row i. Π is a subspace embedding for the column span of U only if ΠUe j ∈ [1/2, 3/2] ∀j. The columns of ΠU are s sparse so for any column of ΠU, the largest absolute value of its coordinates is at least s −1/2 /2. Therefore, e T i ΠU 2 ≥ α/(16γs). Because ΠU ≤ 3/2, it must be the case that s = Ω(α/γ). Proof. Let u i be the i column of ΠU, r i and z i be the index and the value of the coordinate of the maximum absolute value of u i , and v i be u i with the coordinate at position r i removed. Fix t = α/(4γ). Let p 2i−1 (respectively, p 2i ) be the fractions of columns of Π whose largest entry is on row i and positive (respectively, negative). By Lemma 12, there exist d 1−α /2 disjoint groups of t columns of ΠU such that the columns in the same group have the entries with maximum absolute values on the same row. Consider one such group G = {u i 1 , . . . , u it }. By Lemma 8 and linearity of expectation, E u i ,u j ∈G,i =j v i , v j ≥ 0. Furthermore, u i ,u j ∈G,i =j v i , v j ≤ t(t − 1). Thus, by Lemma 9, P( u i ,u j ∈G,i =j v i , v j ≤ −t(t − 1)(εγ)) ≤ 1 1+εγ . This event happens independently for different groups, so with probability at least 1 − (1 + εγ) −1/(εγ) ≥ 1 − e εγ/2−1 , there exists a group G such that The matrix Π is a subspace embedding for the column span of U only if for all i, we have u i = |ΠUe i ≥ (1−ε). We have |z i | ≥ s −1/2 u i ≥ s −1/2 (1−ε). Thus, u i ,u j ∈G,i =j u i , u j ≥ t(t − 1)((1 − ε) 2 s −1 − εγ). We have Because ΠU ≤ 1 + ε, we must have s ≥ (α/γ−4)(1−ε) 2 (16+α)ε .
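To make the balls-and-bins intuition behind the s = 1 bound concrete, the following small simulation (assuming, for simplicity, that the non-zero rows are spread uniformly; the non-uniform case is what the virtual-bin argument above handles) estimates how often d random coordinate vectors collide in the same row. When two of them do, their sketched images are parallel, so some unit vector in their span is mapped to zero and the embedding fails. This is only an illustration of the intuition, not the paper's proof.

```python
import numpy as np

def failure_probability(d, m, trials=2000, rng=None):
    """Estimate how often an s = 1 sketch fails on d random coordinate vectors:
    a repeated row among the d columns forces a vector in their span to zero."""
    rng = np.random.default_rng(0) if rng is None else rng
    fails = sum(len(np.unique(rng.integers(0, m, size=d))) < d for _ in range(trials))
    return fails / trials

# Birthday-paradox scaling: the failure rate stays bounded away from zero until m = Omega(d^2).
# For d = 20, failure_probability(20, 200) comes out around 0.6, while failure_probability(20, 4000)
# drops to roughly 0.05 (compare 1 - exp(-d * (d - 1) / (2 * m))).
```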
2013-08-14T17:23:36.000Z
2013-08-14T00:00:00.000
{ "year": 2013, "sha1": "1ea2edd6fe6cec9efac98b9c94bb93ff9979e91d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1308.3280", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "402a6c206bfdcc34712745a77ee3dc0fc0cb3161", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
251779044
pes2o/s2orc
v3-fos-license
Save Money to Lose Money? Implications of Opting Out of a Voluntary Audit Review for a Firm’s Cost of Debt* ABSTRACT An audit review (AR) is a mechanism used by boards to assess the quality of interim financial reports on a timely basis. In Canada, the AR is voluntary, with listed firms mandated to disclose when they choose to not purchase additional audit verification. Given the relatively low cost of an AR, opting out of it can be regarded as a negative signal, especially in the context of lenders’ sensitivity to downside risk. Using a sample of 7,585 firm-year observations from 1,616 public firms in Canada over the period 2004-2015, we document that firms without a voluntary AR have a higher cost of debt than firms with an AR. Furthermore, after firms opt out of the AR, the increase in the cost of debt is accompanied by a rise in discretionary abnormal accruals and managers’ stock-based compensation. Moreover, no-AR firms are more likely to reduce post-switch private borrowing and have lower equity analyst following. Our study is the first to document that although listed borrowers that opt out of an AR have a higher cost of debt financing, they are concurrently able to engage in more earnings management and grant their managers higher stock-based compensation because of lower external monitoring. I. Introduction Auditing provides an essential verification of the information disclosed in financial statements (Fama, 1985;Jensen & Meckling, 1976;Watts & Zimmerman, 1983) and is important for lenders in both mandated and voluntary forms (Minnis, 2011). This study assesses the implications of voluntarily choosing to not have audit verification by focusing on the audit review (AR) of quarterly reports in Canada. The Canadian context is relevant because Canadian listed firms are not mandated to subscribe to an external AR of their interim reports but have to disclose that the interim reports have not been reviewed by an external auditor (OSC, 2004). Given lenders' asymmetrical sensitivity to negative information (Ball, Bushman, & Vasvari, 2008a;Ball, Askon, & Sadka, 2008b;Hasan, Hoi, Wu, & Zhang, 2014), the absence of an external AR is likely to have adverse consequences from a debt market perspective. We, therefore, examine whether a firm's choice to not purchase an AR is associated with its cost of debt financing. The relatively little empirical evidence on the debt market impact of the AR is inconclusive and based mainly on data from small, privately held firms (DeFond & Zhang, 2014). This stream of literature is limited by the scarcity of unaudited publicly available data because ARs are mandatory in jurisdictions such as the United States (US), and the identification of public firms with voluntary ARs in other jurisdictions is cumbersome due to a lack of uniform disclosure requirements. This limitation has led extant research to focus on private firms to assess the effects of voluntary AR purchases. 1 However, these settings lack generalizability to public firms because of the high level of heterogeneity among non-listed firms (DeFond & Zhang, 2014). In this study, we attempt to address this limitation by using data from publicly listed firms in Canada, which can voluntarily choose to purchase/not purchase an AR. 2 Previous research documents that although the choice to voluntarily purchase an AR entails costs in the form of marginally higher audit fees (Bédard & Courteau, 2015), 3 it also brings benefits. 
For example, the AR decreases fourth-quarter adjustments (Ettredge, Simon, Smith, & Stone, 2000) and strengthens the association between returns and earnings (Krishnan & Zhang, 2005;Manry, Tiras, & Wheatley, 2003). 4,5 Although these findings are insightful regarding the importance and the effects of the AR, they focus on the costs and benefits of ARs mainly from an equity holder's perspective. Despite calls to assess the usefulness of auditing for a wide array of financial statement users (Church, Davis, & McCracken, 2008;DeFond & Zhang, 2014), the literature has yet to thoroughly explore the implications of purchasing/not purchasing an AR from the perspective of lenders. This omission is important because lenders have different information needs from equity holders (Chen, 2016;Chiu, Guan, & Kim, 2018;Florou & Kosi, 2015;Hasan et al., 2014) and assess borrowing firms on an intermittent basis. More specifically, lenders are unlikely to systematically follow borrowing firms over periods in which they do not have a contractual relationship; once a financing request is registered, lenders will likely use the most recent interim financial information to complement year-end financial information. For example, credit rating agencies (CRAs) acknowledge using interim reports to analyse clients' credit risk (S&P, 2008). The ability of interim earnings to forecast rating downgrades reduces the information asymmetry between lead and participating lenders in syndicated loans (Ball et al., 2008a) and the rating dispersion in the bond market (Akins, 2018). 6 Given this setup, a borrower's voluntary choice to not purchase an AR is likely to be observed by lenders and incorporated into the cost of debt financing. Nonetheless, it is also possible that lenders do not price the absence of the AR, since its findings are available only to internal parties within the firm, where it serves as a monitoring mechanism for potential accounting distortions. Therefore, whether and to what extent firms that opt out of the AR will have a higher cost of borrowing remains an empirical question. Our study attempts to shed light on this issue by addressing the following research question: Does the cost of debt differ between no-AR firms and AR firms? 7 Research indicates that the monitoring of independent auditors reduces the information asymmetry between lenders and borrowers (Balsam, Krishnan, & Yang, 2003;Watts & Zimmerman, 1983) and provides reliable and valuable information for lending decisions (Minnis, 2011). The AR allows auditors to evaluate internal controls and accounts throughout the fiscal year (Bédard & Courteau, 2015). 8 Therefore, they can detect and signal potential financial reporting misstatements made by the management to the audit committee in a timely manner (Ettredge, Simon, Smith, & Stone, 1994). This characteristic of the AR is likely to reduce lenders' screening efforts in assessing borrowers' riskiness. Nonetheless, the no-AR firms forfeit the benefits associated with the AR and assume the potential debt-market costs entailed by their choice. Opting out of the AR is likely to be priced by lenders as a negative signal because it informs them about the borrower's information and credit risks (Chow, 1982;Lennox & Pittman, 2011;Melumad & Thoman, 1990). 
Using hand-collected data for a sample of 1,616 non-financial listed Canadian firms from 2004 to 2015 and a propensity-score-matching (PSM) approach to reduce sample heterogeneity based on observable firm characteristics, we test whether no-AR firms have a higher cost of debt than AR firms. We find that no-AR firms obtain debt capital at a higher cost than AR firms by 20 basis points on average. When we conduct our tests on samples of public bonds (610 bond issuances from 174 firms) and private loans (358 loan facilities from 135 firms), we find that the no-AR firms have a higher bond yield spread and loan interest rate spread. For no-AR firms, the yield spread of the public bonds is 90 basis points higher, and the interest rate spread of the private loans is 40 basis points higher than the corresponding values for matched AR firms. To further strengthen identification and increase confidence in our results, we examine subsamples of firms that switch (1) from a no-AR to an AR status (positive switchers) and (2) from an AR to a no-AR status (negative switchers). We find that, for negative switchers (positive switchers), the pre-to post-switch change in the cost of debt (using various windows of up to three years around the switch) increases (decreases) more than for the matched firms. These results 6 Moreover, anecdotal evidence suggests that ARs are important for borrowers to obtain bank lending (Forbes, 2016). 7 Extant literature that assesses the debt market benefits of auditing concludes that annual financial statements' verification is important for the cost of debt. For example, Blackwell, Noland, and Winters (1998) suggests that financial statement audits reduce creditors' information gathering costs and interest rates on loans. In a similar vein, Kim, Simunic, Stein, & Yi (2011a) and Lennox and Pittman (2011) indicate that firms with voluntary audits are perceived as less risky and are compensated by banks with lower interest rates. Finally, Robin, Wu, and Zhang (2017) find that individual auditor quality and financial covenants in debt contracts are negatively associated. Although we build on this emerging stream of literature, our study differs from previous research by focusing on the impact of the AR on quarterly financial statements from both the private and the public debt market perspective. 8 According to Section 7060, 'Auditor Review of Interim Financial Statements' prepared by the Auditing and Assurance Standards Board (AASB) in 2014, 'members of audit committees have indicated that the guidance on interim review procedures is particularly useful. Similarly, many practitioners have commented that carrying out interim review procedures has assisted them in identifying financial reporting matters to management and audit committees on a timely basis.' confirm that lenders are sensitive to the borrowers' voluntary AR status and punish (reward) the firms with negative (positive) switches. The tests of switchers strengthen our confidence in the positive link between no-AR and the cost of debt. However, given the associated economic costs, it is difficult to explain why firms choose to discontinue the AR. Besides, we are not sure of the mechanism via which the value of AR manifests in the cost of debt. Melumad and Thoman (1990) predict that voluntary auditing will reveal the borrower type, as borrowers with higher information and credit risk may not opt for external auditing. 
In line with the predictions of agency theory (Jensen & Meckling, 1976), managers could also behave in a self-serving manner at the expense of principals when voluntarily opting out of the AR. Because the external monitoring likely constrains earnings manipulation, managers with strong incentives to manipulate reported earnings may therefore be reluctant to purchase an AR. 9 If this prediction holds, we anticipate that the information quality of the financial statements will be lower for the negative switchers. Our tests show that the negative switchers have higher abnormal discretionary accruals following the switch than the matched control firms (i.e. the matched AR firms that do not switch). These results indicate that the no-AR firms are likely to engage in more aggressive financial reporting and, as suggested by Mansi, Maxwell, & Miller (2004), are unlikely to commit to higher audit quality via purchasing an AR. To further identify a potential self-serving behavior of managers in connection with the AR, we investigate whether the decision to opt out of the AR is associated with potential managerial benefits. Our analysis indicates that managers of no-AR firms have a higher total stock-based compensation following the negative switch. Moreover, we find that negative switchers are more likely to be followed by fewer equity analysts. These findings are particularly insightful because they motivate why firms may voluntarily choose to opt out of the AR. The increases in earnings management and managers' stock-based compensation for the negative switchers are likely to go unobserved due to the reduction in external monitoring. Moreover, we observe that the negative switchers reduce the amount of post-switching borrowing, thereby partially offsetting the increase in the cost of debt. Together, these findings indicate that financial statement quality is the bridge that links AR and the cost of debt and that managers derive personal benefits through higher stock-based compensation when opting out of the AR. We also conduct cross-sectional tests to examine whether the association between the voluntary choice to opt out of the AR and debt financing cost is moderated by the information asymmetry between the lender and the borrower. Our findings indicate that the impact of not purchasing an AR on the cost of public bonds is greater for firms with a more opaque information environment. Concurrently, we do not find a significant effect for private debt, indicating that bondholders are likely to respond more to the negative signal provided by the absence of the AR than syndicated loan lenders. Relative to the US setting, where the AR is mandatory for listed firms, Canadian firms can choose to not have their interim financial statements reviewed by external auditors. Consequently, we are able to assess the cost of not having an AR from a debt market perspective. To the best of our knowledge, our study is the first to analyse the implications of opting out of an AR for public firms' cost of debt financing. The literature on this topic provides contradictory predictions, with the empirical evidence being based primarily on limited data from privately held firms. Our study attempts to address this issue; it documents that publicly listed firms without a voluntary AR incur a higher cost of debt. Moreover, it shows that the cost increases with the borrower's level of information opacity. 
The significant cross-country variation in regulatory approaches regarding the AR suggests a lack of consensus regarding the desirability of the review (Bédard & Courteau, 2015), and our study provides empirical evidence regarding the costs associated with choosing to not purchase it. We therefore contribute to the worldwide debate on mandatory AR by providing empirical evidence from the Canadian experience. 10 Although prior research assesses whether the AR is associated with better disclosure quality of interim financial statements (Bédard & Courteau, 2015), it does not examine the real economic implications of the AR. Our study documents one important real economic implication of opting out of the AR, the increased cost of debt financing. Overall, our findings support the voluntary AR setting in Canada by showing that it allows the debt market to better differentiate between high-risk and low-risk borrowers. Characteristics of the AR in Canada A unique feature of the Canadian setting that makes it especially appropriate for analysing reporting and auditing practices is that the AR of quarterly reports is done on a voluntary basis. In contrast, in most countries (e.g. the US and Australia), the AR is mandatory for listed firms' interim financial reports. The main reason for this difference in policy is an ongoing debate regarding the asymmetry between the costs of purchasing an AR and the benefits it would bring (OSC, 2000;TSX Venture Exchange, 2002). On the cost side, the AR represents additional work for auditors, which results in increased audit fees (Bédard & Courteau, 2015). Critics of the review argue that mandating the AR would most likely be disadvantageous for small, listed firms because they would bear the additional costs but not benefit commensurately from the additional verification. Despite this criticism, in 2014, the AASBC initiated a discussion on potentially modifying the current standard on Auditor Review and Interim Financial Statements to make the AR mandatory (AASBC, 2014). The discussion did not end up revising the previous requirements, as the debate on whether to make the AR mandatory did not produce definitive conclusions. Consequently, Canadian firms maintain the right to voluntarily purchase the AR, even when publicly listed in the US, given the exemption granted for 'foreign private issuers' (SEC, 1999a). In addition to allowing the purchase of the AR on a voluntary basis, Canadian law has clear requirements regarding how firms should disclose the purchase of the AR to their stakeholders. National Instrument 51-102 'Continuous Disclosure Obligations' requires firms to disclose in their financial reports if their interim reports have not been reviewed by an auditor (OSC, 2004). Moreover, firms are not allowed to reveal the outcome of the AR to external parties. Because its use is restricted to internal purposes, it mainly serves as a monitoring mechanism for potential accounting distortions in the quarterly financial statements. Although the AR is not mandatory, its purchase is highly recommended by the Canadian Securities Administrators (CSA) because it aims to address, in a timely fashion, potential accounting misstatements in annual reports (OSC, 2004). Audit Reviews and Debt Financing A growing body of research investigates the role of auditing in assisting lending decisions. In the context of quarterly ARs, previous evidence links their use to improvements in the quality of financial statements (Ettredge et al., 1994(Ettredge et al., , 2000. 
Specifically, external auditors are able to detect potential financial misstatements during the interim periods and not just during the yearend audit, which reduces fourth-quarter adjustments. In this study, we assess the debt market implications of the decision to opt out of the AR, by examining whether the voluntary choice to not purchase the AR by a sample of listed Canadian firms adversely influences their cost of debt. The AR is likely to be priced by lenders, given that its purchase is voluntary. According to Kausar, Shroff, and White (2016), borrowers that voluntarily purchase external audit verification send lenders a signal regarding their future investment opportunities and, therefore, the ability to repay their loans. However, given the relatively small cost of the AR, the positive signal sent through its purchase is likely to have low credibility because other firms can easily replicate it. In contrast, we propose that borrowers will negatively signal their lenders when they voluntarily opt out of external verification because such an action implies higher riskiness (Melumad & Thoman, 1990). In a similar vein, according to Lennox and Pittman (2011), the choice to not have a voluntary audit implicitly suggests that the firm forfeits any assurance benefits associated with the external verification. Our proposition fits the Canadian context since the mandatory disclosure about the absence of the AR can be observed by lenders. The relevance of the AR for lenders could also be explained through auditing's established channels: (1) providing 'verification' to the disclosure quality of financial statements (information role) and (2) providing 'insurance' to investors through the auditor's legal liability (insurance role). Regarding the information role, an AR of quarterly financial statements has a lower assurance level compared to a statutory audit of year-end financial statements. 11,12 Despite its reduced scope, the AR aims to improve the reliability of financial information reported in quarterly financial statements by a timely verification of accounting misstatements. This verification is important because managers have significant incentives to use their discretion in preparing quarterly reports. According to Myers, Myers, and Skinner (2007), the main motivations for manipulating interim reports are reporting flows of increasing earnings. Manry et al. (2003) indicate that interim reports are also manipulated to beat financial analysts' and budget targets. The AR, therefore, represents a mechanism through which the financial reporting decisions of management are assessed continuously. If the AR caters to the informational needs of 11 According to Kajüter et al. (2016), the review verifies whether the reported numbers in financial statements are plausible or not. 12 While an audit provides a positive assurance, i.e., an indication that the financial statements are prepared, in all material aspects, in accordance with the applicable Generally Accepted Accounting Principles (GAAP), the review provides a negative assurance, i.e., an indication of no evidence to assume that the financial statements are not presented in accordance with the applicable GAAP (Gay, Schelluch, & Baines, 1998). For example, Barton, Hodder, and Shepardson (2015) find for a sample of firms in the financial industry, the audit review is associated with reduced likelihood of bank failure. 
lenders, they are likely to require a lower risk premium for lending due to the improvement in borrowers' reporting quality (Graham, Li, & Qiu, 2008). Therefore, the reduced screening costs will be associated with the cost of debt financing (Minnis, 2011). 13 Nonetheless, the effect of AR through the insurance role of auditing is doubtful, given that the AR in Canada is limited to internal use (CICA, section 7050.08). Since external parties are not able to access the outcome of the AR, its insurance role is constrained. In summary, it is unclear whether the voluntary decision to not purchase the AR is likely to represent a negative signal that would significantly impact the cost of debt financing for AR firms relative to no-AR firms. We, therefore, formulate our research question as follows: RQ: Does the cost of debt differ between no-AR firms and AR firms? IV. Data Although the review of interim financial statements is voluntary in Canada, starting from fiscal years on or after January 1, 2004, listed Canadian firms are required to disclose whether their interim financial statements have not been reviewed by an auditor (OSC, 2004). This new regulation makes Canada an ideal institutional setting for examining the benefits of the AR because (1) the purchase of the review is voluntary and (2) the disclosure of notice to not have a review is mandatory. Therefore, we begin by selecting all Canadian listed firms included in the Compustat database between 2004 and 2015. We use SEDAR, the official website that provides access to public security documents filed by Canadian firms, to hand-collect the information on the AR purchase and the auditor's name from firms' interim reports. 14 We obtain data on all firm-specific controls from Compustat. This process results in an initial sample of 22,026 firm-year observations from 3,575 firms. We exclude 7,283 observations of financial firms (SIC codes 6000-6799) and 5,903 observations without available interest expense and short-term and long-term debt data in Compustat to compute the average interest rate. We also eliminate 543 observations for firms with non-listed status, 276 observations with missing review information due to the unavailability of the annual reports, and 436 observations with missing firm-specific controls identified in our research models. These criteria result in a sample of 7,585 observations from 1,616 listed firms for our main analysis. The sample includes 4,815 observations for firms with a voluntary AR (AR sample) and 2,770 observations for firms without an AR (no-AR sample). 15 In addition to the full sample, we use a subsample of publicly listed firms that issue public debt, which we refer to as the bond sample, and a subsample of firms that issue private debt, which we refer to as the loan sample. The bond sample for the public debt analysis includes 610 straight bond issues from 174 non-financial firms during the 2004-2015 period, of which no-AR firms issued 55 bonds and AR firms issued 555 bonds. We obtain bond data from the SDC Platinum database. The loan sample for the private debt analysis includes 358 loan facilities from 135 firms during the 2004-2015 period, of which 36 facilities are for no-AR firms and 322 facilities for AR firms. 16 We obtain loan data from the Dealscan database. 13 This is consistent with Bharath, Sunder, and Sunder (2008), who find a negative association between the quality of accounting information and the cost of debt. 
14 According to the National Instrument 51-102 'Continuous Disclosure Obligations', Canadian firms are mandated to disclose in their quarterly reports if their auditors do not perform an audit review (OSC, 2004). 15 Our sample composition is consistent with previous literature, as no-AR observations make up 37 per cent of the overall sample, compared with 41 per cent for Bédard and Courteau (2015). 16 Over the period 2004-2015, 2,170 loan facilities from 345 Canadian firms are available on Dealscan. However, spread all-in-drawn information is only available for 431 loan facilities and 148 firms. Finally, out of 431 loan facilities, we exclude 73 facilities pertaining to firms with missing reviews, firm characteristics, and auditor information. V. Methodology We use the full, bond, and loan samples to answer our research question. We follow the empirical approach of Kim et al. (2011a) and Minnis (2011) and estimate the following model: The dependent variable, Cost of Debt, is the average interest rate on outstanding debt (Inter-estRate), bond spread (BondSpread), and loan spread (Spread All-in-Drawn) for the full, bond, and loan samples, respectively. For the full sample analysis, we follow Kim et al. (2011a) and Minnis (2011) and use the average interest rate on outstanding debt (InterestRate) as the dependent variable. 17 We compute InterestRate as the firm's total interest expense in year t divided by the total short-term and long-term debt outstanding in year t. 18 We use BondSpread as the dependent variable for the bond sample analysis. We compute BondSpread as the yield-to-maturity (YTM) difference between the firm's public bonds and the maturity-matched Canadian government marketable bonds. For the bond sample analysis, we also use bond-specific controls (Bond Amount, Bond Maturity, Foreign Currency, and Senior Bond Dummy) in addition to the firm-specific control variables in Model (1). We use Spread All-in-Drawn as the dependent variable for the loan sample analysis. We compute Spread All-in-Drawn as the interest rate on the loan that a borrower pays in basis points over LIBOR or LIBOR equivalents for each dollar drawn down as provided in the DealScan database, divided by 10,000. 19 Similar to the bond sample analysis, for the loan sample analysis, we also include loan-specific controls (Foreign Currency, Loan Amount, Loan Maturity, and Number of Lenders) in addition to the firm-specific control variables in Model (1). We acknowledge that InterestRate is a coarse proxy for the cost of debt. While recent literature tends to use the interest rates charged on loans and bonds directly from debt contracts (which is what we do for the subsamples of public bonds and syndicated loans), we note that because Canadian firms do not rely on the public bond market and the syndicated loan market as frequently as their American counterparts, we do not observe sufficient bond and loan issues from Canadian firms. In addition, our database does not provide the details of the debt structure (e.g. maturity, proceeds, and interest rate of all liabilities in a given year) of each Canadian public firm, which means we cannot calculate a precise cost of debt for a firm by weighting different debt instruments. Given these data limitations, we use InterestRate as the proxy for the cost of debt for the full sample and complement this analysis by using BondSpread and Spread All-in-Drawn as the cost of debt proxy for the smaller bond sample and loan sample, respectively. 
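To make the construction concrete, the sketch below condenses the sample screens described above and the three cost-of-debt proxies into one small data-preparation step. It is an illustration only: the column names (sic, interest_expense, debt_st, debt_lt, bond_ytm, gov_ytm_matched, all_in_drawn_bps, and the control columns) are hypothetical placeholders rather than the fields of the actual Compustat, SDC, or DealScan extracts, and the snippet is not the authors' code.

```python
import pandas as pd

# Hypothetical control-variable columns used in Model (1).
CONTROLS = ["size", "roa", "cr", "lev", "neg_e", "tang", "mb",
            "invest", "cross_listed", "bond_dummy", "loan_dummy"]

def build_full_sample(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the sample screens described in the Data section to a firm-year table."""
    out = df.copy()
    out = out[~out["sic"].between(6000, 6799)]            # drop financial firms
    out = out.dropna(subset=["interest_expense", "debt_st", "debt_lt"])
    out = out[out["listed"] & out["review_disclosed"]]    # listed firms with review info
    return out.dropna(subset=CONTROLS)                    # require firm-specific controls

def add_cost_of_debt_proxies(df: pd.DataFrame) -> pd.DataFrame:
    """InterestRate, BondSpread, and Spread All-in-Drawn as defined in the text."""
    out = df.copy()
    # InterestRate: interest expense over total short- plus long-term debt.
    out["interest_rate"] = out["interest_expense"] / (out["debt_st"] + out["debt_lt"])
    # BondSpread: bond YTM minus the maturity-matched government bond YTM.
    if {"bond_ytm", "gov_ytm_matched"}.issubset(out.columns):
        out["bond_spread"] = out["bond_ytm"] - out["gov_ytm_matched"]
    # Spread All-in-Drawn: basis points over LIBOR, scaled by 10,000.
    if "all_in_drawn_bps" in out.columns:
        out["spread_all_in_drawn"] = out["all_in_drawn_bps"] / 10_000
    return out
```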
Our main variable of interest in Model (1) is No_Review, an indicator variable that equals one if a firm does not have a voluntary AR and zero otherwise. 20 A positive (negative) coefficient for No_Review would indicate that the lack of voluntary purchase of an AR is associated with a higher (lower) cost of debt. In Model (1), we control for firm-specific determinants of the cost of debt. We include firm size (Size) because previous literature on debt financing suggests that firm size is negatively associated with the cost of borrowing (e.g. Blackwell et al., 1998). We include return on assets (ROA) because lenders charge firms with higher profitability a lower cost of borrowing (Kim et al., 2011a). We include current ratio (CR) because firms' ability to meet their short-term obligations is negatively associated with borrowing costs. As indicated by Jensen and Meckling (1976), agency costs increase with the level of debt. We include leverage (LEV) and an indicator variable for negative earnings (NegE) to control the risks of distress and agency costs. Previous literature on loan contracting suggests that the cost of borrowing is negatively associated with tangible assets, as their use as collateral represents an additional assurance for lenders (e.g. Kim, Tsui, & Yi, 2011b; and Florou & Kosi, 2015). Therefore, we include asset tangibility (TANG). Following Denis and Mihov (2003), Bharath et al. (2008), and Florou and Kosi (2015), we include the market-to-book ratio (MB) to control the impact of forward-looking growth opportunities, which are expected to decrease firms' average interest rate. Firms with a higher credit rating have a lower cost of borrowing. Therefore, we include an indicator variable for investment-grade credit rating (INVEST) to control the impact of firms' credit ratings on the cost of debt. We control the impact of cross-listing and public bond and syndicated loan offerings by including indicator variables for cross-listed firms (Cross Listed), firms with bond issuances (Bond Dummy), and firms with syndicated loan issuances (Loan Dummy). We also include industry and year indicator variables to control for the effects of unobserved heterogeneity on our estimation. Table 1 defines all the variables used in our analyses. Because errors of observations with the same individual auditor may be correlated (Francis, Hunter, Robinson, Robinson, & Yuan, 2017; Gul, Wu, & Yang, 2013), we use standard errors clustered by auditor to correct for unobserved within-auditor correlations. Propensity Score Matching The potential non-random assignment of firms to AR and no-AR groups resulting from the voluntary nature of the review in Canada could systematically bias our results. To reduce sample heterogeneity based on observable firm characteristics, we use a PSM approach to match firms from the AR and the no-AR groups. 21 Specifically, we use logistic regression to estimate the following audit choice model and the propensity score for each firm, and match firms from the no-AR group (treatment sample) with firms from the AR group (control sample) based on their estimated propensity scores. Table 1. Variable definitions. InterestRate Interest expense in year t divided by average total short-term plus long-term debt at the beginning and the end of year t. No_Review An indicator variable that equals one (zero) if a firm's interim reports are not reviewed (reviewed) in year t-1.
Bond and Syndicated Loan Sample Variables BondSpread The difference in the yield-to-maturity (YTM) between the public bond and the maturity-matched Canadian government marketable bond as provided in the SDC database. Spread All-in-Drawn The interest rate on loan contracting that a borrower pays in basis points over LIBOR or LIBOR equivalents for each dollar drawn down as provided in the DealScan database, scaled by 10,000. BondAMT The log of the proceeds of a public bond. BondMaturity The log of bond maturity measured in months. BondSeniority An indicator variable that equals 1 if the public bond is a senior security; 0 otherwise. DForCurr An indicator variable that equals 1 if the public bond (syndicated loan) is not quoted (set) in Canadian or U.S. dollars; 0 otherwise. LoanAMT The log of the amount of a loan facility. LoanMaturity The log of the loan maturity measured in months. Nlender The total number of lenders in each loan facility. Firm-Specific Controls Size The log of total assets in year t-1. ROA Net income divided by total assets in year t-1. TANG Net property, plant, and equipment divided by total assets in year t-1. CR Current assets divided by current liabilities in year t-1. LEV Total liabilities divided by total assets in year t-1. MB A firm's market value divided by its book value in year t-1. NegE An indicator variable that equals 1 if total year-end liabilities are greater than total year-end assets in year t-1; 0 otherwise. INVEST An indicator variable that equals 1 if a firm's Standard & Poor's or Predicted credit rating is BBB-or higher; 0 otherwise. 22 Cross Listed An indicator variable that equals 1 if a firm is cross-listed; 0 otherwise. Bond Dummy An indicator variable that equals 1 if a firm has issued a new bond for year t-1; 0 otherwise. Loan Dummy An indicator variable that equals 1 if a firm has a new loan for year t-1; 0 otherwise. Other Variables Dec_FYEnd An indicator variable that equals 1 if a firm's fiscal year end is in December in year t-1; 0 otherwise. Big4 An indicator variable that equals 1 if a firm is audited by Big4 auditors in year t-1; 0 otherwise. AltmanZ Altman Z-score is computed using the following equation: AltmanZ = 1.2 * (working capital / total assets) + 1.4 * (retained earnings / total assets) + 3.3 * (earnings before interest and tax / total assets) + 0.6 * (market value of equity / total liabilities) + 1.0 * (sales / total assets) InfAsym Information asymmetry is proxied using analyst forecast dispersion, which is the square of the difference between the mean analyst forecast and the firm's actual earnings divided by stock price, measured at the end of the eleventh month of year t-1. We rank the analyst forecast dispersion into deciles each year and assign a value of 10 to the decile with the highest analyst forecast dispersion and a value of 1 to the decile with the lowest analyst forecast dispersion. The decile ranking is based on all Canadian public firms. 22 Following Florou and Kosi (2015) and (Barth et al., 1998) Analyst following A ranked measure of the number of equity analysts following the firm. We rank the number of equity analysts into deciles each year and assign a value of 10 to the decile with the smallest number of equity analysts and 1 to the decile with the largest number of equity analysts. The decile ranking is based on all Canadian public firms. A higher value represents a lower financial analyst following. Compensation Total stock-based compensation in the form of company stock divided by total assets. 
Abs(ABN_ACC) Following Dechow and Dichev (2002) and Peek et al. (2013), we estimate abnormal accruals by regressing working capital accruals on current cash flow, previous year's cash flow, and next year's cash flow for each country, industry, and year group. The covariates of the audit-choice model (Model (2)) are as follows: Dec_FYEnd indicates whether a firm has a fiscal year-end in December. We include this variable to control for the additional working pressures auditors face because most of their clients have a December fiscal year-end. To deal with this issue, auditors encourage clients to have interim reviews in order to shift some procedures to less busy seasons and better utilize their capacity (Hay, Knechel, & Wong, 2006; López & Peters, 2012). Therefore, we posit that firms with a December fiscal year-end are more likely to have an interim review. Big4 indicates whether the firm uses a Big4 auditor for external verification. Bédard and Courteau (2015) show that a Big4 auditor decreases the likelihood of not having an interim review. Given that creditworthiness and bankruptcy risk are fundamental factors that might affect firms' choices regarding the AR, we also include the Altman Z-score (AltmanZ) in the choice model. Moreover, information asymmetry is another important factor that may affect a firm's decision to purchase or opt out of the AR. We, therefore, control for InfAsym, a ranked measure of analyst forecast dispersion. Furthermore, we include Size, TANG, and MB in the model because Ettredge et al. (1994) show that No_Review is negatively associated with these variables. Because agency cost and firms' financial strength are likely related to purchasing an AR, we include CR, LEV, and NegE in the model. We expect No_Review to be negatively associated with CR, LEV, and NegE. We also include ROA, INVEST, Cross Listed, Bond Dummy, and Loan Dummy. We expect firms with lower profitability, non-investment-grade credit rating, and cross-listing to have a higher propensity to not purchase a review. Descriptive Statistics and Univariate Analysis We present descriptive statistics for the unmatched full sample and the subsamples with no-AR and AR firms in Table 2. We also report univariate test statistics for the mean differences between the no-AR and the AR groups. The average interest rate for the full sample is 9.2 percent, with no-AR firms having a higher average interest rate (10.3 percent) than AR firms (8.6 percent). No-AR firms exhibit significantly lower firm size, ROA, tangible assets, leverage, market-to-book ratio, credit rating, and Altman-Z, and higher information asymmetry and probability of having negative equity. We also document that 21% of the no-AR firms and 33% of the AR firms are cross-listed. Lastly, 0.6% (3.6%) of the no-AR firms and 5.9% (18%) of the AR firms have public bonds (syndicated loans). The significant differences between the no-AR and the AR subsamples justify our use of the PSM approach. Table A1 of the online appendix presents the summary statistics for the bond and loan samples. Table 3 presents Pearson correlations. Our variable of interest, No_Review, is significantly positively correlated with InterestRate (Pearson correlation = 0.14). In line with previous studies, Size is significantly negatively correlated with No_Review (Pearson correlation = −0.43), indicating that smaller firms are less likely to purchase an AR voluntarily. Matched Sample We match the no-AR firms with the AR firms based on propensity scores derived from Model (2).
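The audit-choice estimation can be sketched as follows, assuming hypothetical column names that mirror the variable labels in Table 1. The Altman Z-score follows the formula given in Table 1, the decile-ranking helper mirrors how InfAsym and Analyst Following are defined, and the covariate list follows the description above; this is a minimal illustration rather than the authors' implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def altman_z(df: pd.DataFrame) -> pd.Series:
    """Altman Z-score as defined in Table 1."""
    ta = df["total_assets"]
    return (1.2 * df["working_capital"] / ta
            + 1.4 * df["retained_earnings"] / ta
            + 3.3 * df["ebit"] / ta
            + 0.6 * df["mv_equity"] / df["total_liabilities"]
            + 1.0 * df["sales"] / ta)

def decile_rank_by_year(df: pd.DataFrame, col: str, high_is_10: bool = True) -> pd.Series:
    """Yearly decile rank (1-10), as used for InfAsym and Analyst Following."""
    pct = df.groupby("year")[col].rank(pct=True)          # percentile rank within year
    deciles = np.ceil(pct * 10).clip(1, 10).astype(int)   # map to deciles 1..10
    return deciles if high_is_10 else 11 - deciles

# Model (2): logistic audit-choice model; covariates follow the description above.
CHOICE_FORMULA = ("no_review ~ dec_fyend + big4 + altman_z + inf_asym + size + tang"
                  " + mb + cr + lev + neg_e + roa + invest + cross_listed"
                  " + bond_dummy + loan_dummy")

def propensity_scores(df: pd.DataFrame) -> pd.Series:
    """Fit the choice model and return the estimated Pr(No_Review = 1)."""
    fit = smf.logit(CHOICE_FORMULA, data=df).fit(disp=False)
    return fit.predict(df)
```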
We employ the nearest neighbor matching approach with no replacement within a caliper of 0.01. Our matching yields a sample of 4,066 firm-year observations (2,033 matched pairs) for the full-sample test of our RQ. Table 4, Panel A, reports the estimation results of the audit choice model. The pseudo R 2 of 0.19 suggests that the voluntary review is not random and warrants the use of PSM. In line with our expectations, the no-AR firms are less likely to have a December fiscal yearend and be audited by a Big4 auditor. Also, the no-AR firms have a higher Altman Z-score (lower bankruptcy risk). Although positive, the coefficient of InfAsym is not significant. Firm size, market-to-book ratio, and tangibility are negatively related to the no-AR decision. Consistent with signaling theory, a firm with negative equity is less likely to decline an AR. Firms with a high return on assets have a higher tendency not to purchase an AR. Furthermore, the probability of not purchasing an AR decreases when the firm has a bond issuance or a syndicated loan borrowing. Table 4, Panel B, documents the univariate test comparisons of the PSM matched samples. The differences between the no-AR and the AR subsamples are insignificant for the matched sample, indicating that the PSM approach is efficacious. Table 5 presents the Model (1) estimation results for the matched full, bond, and loan samples. The coefficient of No_Review for the matched full sample is significantly positive (β = 0.002; t-value = 2.40). 23,24 Ceteris paribus, no-AR firms exhibit a 20-basis point higher interest rate 23 We control industry-and year-fixed effects and use standard errors clustered by auditor in our main analyses. In additional tests, we use AltmanZ and InfAsym as additional controls in our matched full sample, and for the bond and loan samples. Moreover, in a separate test, we repeat our analysis after controlling for auditor opinion on internal controls. Untabulated results are statistically similar to those presented in Table 5. 24 The PSM approach used in Table 5 matches observable characteristics and aims to reduce sample heterogeneity. Therefore, to control for potential bias due to differences in unobservable firm characteristics, we follow Minnis (2011), Powers (2007), Hsu, Troy, and Huang (2015), and Ireland and Lennox (2002), and use a two-stage Heckman procedure that includes the exogenous instrument Dec_FYEnd. We posit that Dec_FYEnd satisfies the exclusion restriction because auditors may advance some procedures to less busy interim periods when they have the excess capacity by encouraging clients to use interim reviews (Hay et al., 2006;López & Peters, 2012). Therefore, Dec_FYEnd is directly related to the decision to not have an AR but not directly related to the firm's cost of debt. Using the inverse Mills ratio from the first stage, we separately estimate coefficients for no-AR and AR firms in the second stage to capture the endogenous switching effect and predict the average interest rate for each no-AR and AR firm. Untabulated results of the endogenous switching model approach suggest that the no-AR firms are associated with higher interest rates, in line with the findings of our main test. 9.445 This table presents the summary statistics for the full sample, AR sample, and no-AR sample. The mean difference test is conducted between the AR and no-AR firms for each variable. All variables are described in Table 1. * * * , * * , and * denote significance at 1%, 5%, and 10%, respectively. 
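The text specifies one-to-one nearest-neighbour matching without replacement within a 0.01 caliper, but not the software used; the greedy routine below is therefore only one plausible way to implement that step on the estimated propensity scores.

```python
import pandas as pd

def nn_match_no_replacement(treated: pd.Series, control: pd.Series,
                            caliper: float = 0.01) -> list:
    """Greedy one-to-one nearest-neighbour matching on propensity scores.

    `treated` holds the scores of no-AR firm-years (treatment group) and
    `control` those of AR firm-years; indices identify observations. Each
    control is used at most once, and pairs farther apart than the caliper
    are discarded.
    """
    available = control.copy()
    pairs = []
    for t_idx, t_score in treated.sort_values().items():
        if available.empty:
            break
        dist = (available - t_score).abs()
        best = dist.idxmin()
        if dist.loc[best] <= caliper:
            pairs.append((t_idx, best))
            available = available.drop(best)
    return pairs
```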
We have 6,815 and 6,954 observations in the full sample for Compensation and Abs(ABN_ACC), respectively. (1) InterestRate 1.00 (2) No_Review 0.14 1.00 1918.07 (0.000) Observations included in the matching 7,520 than AR firms. Consistent with our expectation, larger firms and firms with better operating performance (proxied by ROA) are associated with a lower cost of debt. The coefficient of Loan Dummy is negative and significant, consistent with firms with bank loans being monitored more closely and the increased monitoring being priced in new loan facilities. We present the results of Model (1) for the bond sample in the second column of Table 5. The dependent variable in this model is bond spread (BondSpread). The coefficient of No_Review is significantly positive (β = 0.009; t-value = 2.40) and indicates that no-AR firms, on average, have a 90-basis-point higher bond spread than AR firms. 25 Our findings are consistent with the review assisting the screening of borrowers' credit risk, which bondholders reward through a lower cost of debt. We present the results of Model (1) for the loan sample in the third column of Table 5. The dependent variable in this model is the all-in-drawn spread (Spread All-in-Drawn). The coefficient of No_Review is significantly positive (β = 0.004; t-value = 3.41). The results for the loan sample document that the spread in private loan contracting is higher by 40 basis points for no-AR firms than for AR firms, suggesting that private bank lenders reward borrowers that voluntarily purchase and AR. 26 25 We also use the S&P's issue rating as the proxy for the cost of public debt financing (untabulated results). The voluntary review is significantly associated with better credit ratings after controlling the determinants of credit ratings. These results are available upon request. 26 In the main tests, for the bond and loan analyses, we present the results from the unmatched samples. In additional tests, we match the no-AR firms with the AR firms based on propensity scores from the choice model presented in Equation (2). We exclude industry dummies, AltmanZ and InfAsym from the choice model to increase the number of matched firms. We employ the nearest neighbour matching approach with replacement within a calliper of 0.01. However, given This table presents the coefficients of the audit-choice model (Panel A) and the univariate tests post-matching (Panel B). All variables are described in Table 1. T-values are presented in parentheses. * * * , * * , and * denote significance at 1%, 5%, and 10%, respectively. To summarize, the results in Table 5 indicate that if an external auditor does not review a firm's interim financial statements, the cost of debt is higher relative to the cost of debt for a matched AR firm. Furthermore, this result holds for both the bond and the loan samples. The results also show that the effect of the AR on the cost of debt is asymmetric between private and public debt; it is less strong for private lenders than for bondholders. We contend that this difference is due to banks having alternative channels to access borrowers' private information, which lowers their reliance on the signaling value of the AR. By contrast, the signaling effect of the AR is likely to be more pronounced for the cost of public debt financing because public bondholders do not have access to borrowers' private information and therefore price the borrowers' lack of commitment to have a timely AR higher. 
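The pricing tests summarised in Table 5 amount to estimating Model (1) with industry and year indicators and standard errors clustered by auditor. A minimal sketch, again with hypothetical column names, is given below; for the bond and loan samples the dependent variable is replaced by BondSpread or Spread All-in-Drawn and the deal-level controls listed in the text are added.

```python
import statsmodels.formula.api as smf

FIRM_CONTROLS = ("size + roa + cr + lev + neg_e + tang + mb + invest"
                 " + cross_listed + bond_dummy + loan_dummy")

def estimate_model1(df, depvar="interest_rate"):
    """Model (1): a cost-of-debt proxy regressed on No_Review, the firm-specific
    controls, and industry/year indicators, with auditor-clustered standard errors.
    Assumes rows with missing model variables were dropped beforehand so that the
    cluster groups align with the estimation sample."""
    formula = f"{depvar} ~ no_review + {FIRM_CONTROLS} + C(industry) + C(year)"
    return smf.ols(formula, data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["auditor"]})

# Example (hypothetical data frame `matched_full_sample`):
# fit = estimate_model1(matched_full_sample)
# print(fit.params["no_review"])   # the coefficient interpreted as basis points of cost of debt
```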
In sum, the findings show that the cost of debt is lower for review firms relative to no-review firms and that the additional monitoring brought by the audit review is more important for public debt holders relative to private lenders. Results using Newey-West 1987 robust standard errors that correct for heteroskedasticity and first-order autocorrelations, as well as standard errors clustered by unique firms using CIK codes, are similar and even stronger in some instances. Given the small sample size and many control variables, our final matched bond and loan samples have 65 and 34 observations, respectively. The coefficient of No_Review is still positive and significant for both samples (β = 0.014; t-value = 2.100 for the bond sample and β = 0.012; t-value = 2.712 for the loan sample). This table presents the relation between the cost of debt and the absence of an audit review for the matched full sample, bond sample, and syndicated loan sample. All variables are described in Table 1. T-values are presented in parentheses. * * * , * * , and * denote significance at 1%, 5%, and 10%, respectively. Additional Tests Switching to/from an AR Our sample includes firms that switch their review status from no-AR to AR and others that change from AR to no-AR. We refer to the former as positive switchers and the latter as negative switchers. The subsamples of positive and negative switchers allow us to conduct sharper tests that better identify the relation between the cost of debt and AR. Depending on the direction of the switch, the decision to purchase (discontinue) an AR provides a positive (negative) signal to creditors. Specifically, relative to no-AR firms, positive switchers are more likely to be rewarded by lenders with a lower cost of debt, given their commitment to increase verification of their interim financial statements. By contrast, relative to AR firms, negative switchers are likely to experience an increase in the cost of debt, given the AR discontinuation and, consequently, the lapse of their commitment to maintain verification of their interim financial statements. Again, it is likely that the decision to switch is not random. To examine the effect of switching on the cost of debt, we start with the audit choice model and match switchers to non-switchers with similar firm characteristics. Following Francis et al. (2017), we estimate the audit choice model annually and match switchers to non-switchers using a one-to-one matching without replacement. We use the switching year of the treatment firm as a pseudo switching year for its matched non-switching firm. We construct 36 matched pairs for positive switchers and 38 matched pairs for negative switchers. 27 We then use the following model to test whether the pre-switch to post-switch change in the cost of debt differs between switchers and non-switchers: 28 ΔInterestRate = α0 + α1 Switch + Σ αk ΔControls_k + ε. ΔInterestRate is the difference in the firm's cost of debt between the post-switch period and the pre-switch period. We use pre-switch and post-switch periods of: (1) one year before and one year after the year of the switch; (2) two years before and two years after the year of the switch; and (3) three years before and three years after the year of the switch. For (2) and (3), we calculate the average InterestRate in the pre-switch and the post-switch periods.
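The switching test can be sketched as follows. The panel layout (firm, year, interest_rate columns), the d_* change variables, and the specific set of differenced controls are assumptions made for illustration, since the text only states that Switch and the changes in the (time-varying) control variables enter the change regression with auditor-clustered standard errors.

```python
import pandas as pd
import statsmodels.formula.api as smf

def change_in_interest_rate(panel: pd.DataFrame, switch_year: dict,
                            window: int) -> pd.DataFrame:
    """Pre- to post-switch change in InterestRate for each (pseudo-)switcher.

    `panel` holds firm-year rows (firm, year, interest_rate, ...);
    `switch_year` maps firm identifiers to the (pseudo-)switching year.
    For window > 1 the pre and post values are averages, as in the text.
    """
    rows = []
    for firm, sy in switch_year.items():
        series = (panel.loc[panel["firm"] == firm]
                        .set_index("year")["interest_rate"].sort_index())
        pre = series.loc[sy - window:sy - 1].mean()
        post = series.loc[sy + 1:sy + window].mean()
        rows.append({"firm": firm, "d_interest_rate": post - pre})
    return pd.DataFrame(rows)

def estimate_switch_model(df):
    """Change regression: Switch plus changes in the time-varying controls,
    with standard errors clustered by auditor (control set illustrative)."""
    formula = ("d_interest_rate ~ switch + d_size + d_roa + d_cr + d_lev"
               " + d_tang + d_mb")
    return smf.ols(formula, data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["auditor"]})
```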
29 Switch is an indicator variable that equals one if a firm voluntarily switches its review status from no-AR to AR (AR to no-AR) and 0 if a firm does not switch. Switch is our primary variable of interest because it captures the difference of the impact of AR on the cost of debt between switchers and non-switchers. We use standard errors clustered by auditor. We no longer include the time-invariant control variables because we use differences in all the control variables. Table 6 reports the effects of positive switching (from no-AR to AR) in Columns 1-3 and negative switching (from AR to no-AR) in Columns 4-6 on the change in firms' cost of debt. Column 1 shows that the change in InterestRate from t-1 to t + 1 (where t is the year of the switch) is lower for positive switchers than for non-switchers; however, the difference is insignificant. We find a negative and significant coefficient of Switch in Columns 2 and 3 when we expand the the pre-switch and the post-switch periods to t-2 to t + 2 (β = − 0.026 and t-value = − 2.21) and t-3 to t + 3 (β = − 0.014 and t-value = − 2.34), respectively. Overall, the results show that positive switching is associated with a reduction in the cost of debt. Columns 4-6 show that the coefficients of Switch are positive and significant for all three windows (β = 0.026 and 27 We start with a sample of 56 unique positive switchers and 53 unique negative switchers from 2006 to 2015. The number of positive (negative) switchers for the change analysis declines to 36 (38) because we require them to have available data for the dependent and independent variables in pre-and post-switching periods. 28 To further control omitted variables that may simutaneously influence the firm's AR decision and the cost of debt, we include additional variables, such as change in the firm's discretionary abnormal accruals and Altman Z-score. Untabulated tests show that our findings are robust. 29 We do not use the public bond and the syndicated loan samples in the switching test because the number of observations is insufficient to draw meaningful statistical inferences. We present the characteristics of bond and loan issues of the switching and controlling firms in Appendix A2. t-value = 3.51; β = 0.014 and t-value = 2.77; β = 0.014 and t-value = 2.36). These results suggest that firms' debt costs significantly increase after they discontinue the AR of their interim financial statements. Although the impact of switching on the change in the cost of debt is pervasively and statistically significant, we interpret the economic magnitude of the effects with caution. We acknowledge that InterestRate is a rough proxy of the firm's cost of debt. Besides, after the switching, the negative switchers experience an increase in the cost of debt which is 140 basis points more than the change in the cost of debt for the matched firms (Column 6) and is larger than the documented effects of voluntary audit on the cost of debt in other studies. 30 Considering that Canadian firms do not issue debt securities as frequently as their U.S. counterparts, the impact of AR on the cost of debt could be different between Canadian and U.S. firms. We present the characteristics of the bond and loan issuances by the switchers and the control firms in Table A2 of the online appendix. Compared to the matched control firms, positive switchers experience a larger decrease in bond and loan spreads of 98 (the difference between − 20 and 78) and 127 (the difference between − 95 and 32) basis points, respectively. 
In contrast, negative switchers experience a larger increase in bond and loan spreads of 99 (the difference between 229 and 130) and 77 (the difference between 256 and 179) basis points, respectively. Even though our estimates are likely to be noisy because of the small samples of bond and loan issues, the changes in the cost of public bonds and bank loans are comparable to the changes in the InterestRate. Characteristics of Switchers The switching analysis in Table 6 shows that positive (negative) switching is associated with a lower (higher) cost of debt. However, we have not demonstrated why no-AR firms are willing to forgo the benefit of an AR and bear a higher cost of debt. We conjecture that lenders interpret the AR as the borrower's commitment to higher financial statement quality and the discontinuation of the AR as a refusal to make such a commitment. Therefore, creditors can differentiate the risk of the AR and the no-AR firms. One possibility is that after opting out of the AR, some firms may reduce the frequency of accessing external debt financing. 31 More likely, the no-AR decision could also signal self-serving managerial behavior at the expense of the firm's principals. Under this scenario, managers would have incentives to forgo the benefits of the AR so that they could manipulate reported earnings to extract private benefits. We, therefore, expect that the information quality of the financial statements will be lower for the negative switchers. We examine the changes in firm characteristics after the switching to empirically validate this reasoning. Table 7, Column 1 presents a difference-in-differences analysis to test whether the change in absolute abnormal accruals surrounding the switch differs between switchers and non-switchers. We find that after switching, positive switchers experience a reduction in absolute abnormal discretionary accruals, which is significantly larger than the change for the control firms. In contrast, negative switchers have significantly higher absolute abnormal accruals relative to their control firms after the switch. To further validate the existence of private managerial benefits, we assess whether the decision to opt out of the AR is associated with changes in managers' stockbased compensation. Table 7, Column 2 shows that, following the negative switch, managers of no-AR firms receive higher total stock-based compensation. Table 6. Relation between changes in cost of debt and AR switching. Dechow and Dichev (2002) and Peek et al. (2013), we estimate abnormal accruals by regressing working capital accruals on current cash flow, previous year's cash flow, and next year's cash flow for each country, industry, and year group. Analyst Following is a ranked proxy for the number of equity analysts following the firm at the end of the eleventh month of year t-1. We rank this into deciles each year and assign a value of 10 to the most asymmetric decile and 1 to the least asymmetric decile. A higher value represents a lower financial analyst following. Compensation is the managerial stock-based compensation scaled by the firm's total assets. Following Francis et al. (2017), we conduct a PSM to select constant no-AR firms as controlling firms for the positive switchers and constant AR firms as the controlling firms for the negative switchers. We then present difference-in-difference tests based on the switchers and their controlling firms. T-values are presented in parentheses. 
* * * , * * , and * denote significance at 1%, 5%, and 10%, respectively. Lastly, we consider why the self-serving behavior of no-AR firms' managers goes unobserved. We reason that other stakeholders may not be able to monitor no-AR firms efficiently. Table 7, Column 3 presents the difference in equity analyst following between switchers and nonswitchers. In particular, the variable Analyst Following is a ranked measure of the number of equity analysts following the focal firm. We sort all Canadian public firms into decile portfolios by year and assign a rank of 1 to the firms with the highest analyst following and 10 to those with the lowest analyst following. Therefore, the higher the variable Analyst Following, the higher the information opaqueness between firm managers and outsiders. After opting out of the AR, we find that fewer equity analysts follow negative switchers than non-switchers. Overall, the findings in Table 7 indicate that negative switchers engage in more earnings management activities and have higher levels of managerial stock-based compensation. Concurrently, this opportunistic behavior is likely to go unsanctioned due to independent equity analysts' decrease in external monitoring. The discontinuation of the review is therefore likely to be associated with self-serving managerial behavior. No-AR, Cost of Debt Financing, and Information Asymmetry Lenders have less information on which to base their financing decisions for borrowers with a more opaque information environment than borrowers with a more transparent information environment. 32 The value of the signal provided by the no-AR is likely to be higher when borrowers have a more opaque information environment. It will lead to a greater increase in lender information asymmetry than a similar signal for borrowers with a more transparent information environment. Therefore, we expect that lenders will increase the cost of lending more for no-AR firms with higher levels of information asymmetry for not committing to a higher verification level by purchasing an AR, than for borrowers with lower levels of information asymmetry. In the case of the AR, lenders are likely to assess the value of the signal transmitted by the lack of voluntary purchase of the additional audit verification contingent upon the borrower's level of information asymmetry. We, therefore, examine the moderating role of information asymmetry on the cost of debt-AR relationship by interacting InfAsym and Analyst Following with our main variable of interest, No_Review. The results shown in Table A3 of the online appendix indicate that the impact of not purchasing an AR on the cost of public bonds is more pronounced for firms with a more opaque information environment. Unlike private lenders, public bondholders lack access to the issuer's private information and rely more on public information. Therefore, they are more sensitive to the signaling effect of not purchasing the AR when they lend to firms with higher information asymmetry. Our findings suggest that public debt holders' inability to access private firm information increases their reliance on publicly available financial information relative to private lenders. Moreover, since bondholders cannot renegotiate debt agreements after bond issuance, they are more likely to reward borrowers with more reliable financial reports and resist borrowers' lack of commitment to external auditor verification. VII. 
Conclusion In this study, we use (1) a sample of 7,585 firm-year observations from non-financial 1,616 public firms in Canada over the 2004-2015 period, (2) a subsample of 610 straight bond issuances from 174 non-financial Canadian listed firms, and (3) a subsample of 358 loan facilities from 135 nonfinancial Canadian listed firms to examine the impact of not purchasing an AR on a firm's cost of debt. Our results indicate that opting out of the AR is associated with a higher cost of debt, with public debt mostly driving this effect. We also find that the effect is stronger for firms with higher information asymmetry. We add to the literature by being the first to document that the lack of a voluntary AR purchase is associated with debt market costs for public firms. Relative to previous studies highlighting that the AR is associated with costs and (marginal) benefits for firms, we focus on the debtmarket implications of the voluntary decision to not purchase the additional audit verification. We observe a decrease in the negative switchers' lending volume, which partially offsets the 32 Information asymmetry between lenders and borrowers influences lending decisions (Diamond, 1991;Aghion & Bolton, 1992;Holmstrom & Tirole, 1997). Accounting information plays a crucial role in public and private debt markets because it decreases information asymmetry and better indicates borrower quality (Chen, 2016). Low-quality public information leads to higher perceived borrower risk, affecting loan terms (Bharath et al., 2008(Bharath et al., , p. 2011. Lenders value the verification offered by auditors (Minnis, 2011) because it improves the reliability of publicly available information. effect of the increase in the cost of debt. Moreover, our analysis indicates that the managers of no-AR firms engage in more earnings management activities, receive increased stock-based compensation, and have reduced equity analyst following. Overall, our findings provide evidence of why firms voluntarily opt out of the AR despite an increased cost of debt financing. We conclude by identifying some avenues for future research and discussing the potential limitations of our study. As the external monitoring from auditors mitigates the information asymmetry between banks and borrowers (Balsam et al., 2003;Watts & Zimmerman, 1983), private information obtained via an AR is likely to influence lending decisions. However, Biddle and Hilary (2006) and Beatty, Liao, and Weber (2010) suggest that the quality of accounting information might not be a relevant factor for lending decisions if the financiers have alternative ways of reducing agency costs. Future research could examine whether the association between opting out of the AR and the cost of debt persists in the presence of additional agency cost-reducing channels. A limitation of our study concerns the endogenous nature of the no-AR decision and potential structural differences between the no-AR and AR firms. Although we perform multiple tests in our empirical analysis to alleviate endogeneity concerns, our results should be interpreted with caution. Further, our analysis does not include the effect of the outcome of the AR, as we cannot access the content of Canadian firms' ARs. Future research could examine whether the output of the AR has a significant effect on lending decisions, particularly on the cost of debt.
2022-08-25T15:21:15.311Z
2022-08-23T00:00:00.000
{ "year": 2022, "sha1": "79d48139c17602ca8bbb92c5bd7ce464eac86275", "oa_license": "CCBY", "oa_url": "https://www.pure.ed.ac.uk/ws/files/266300019/2022.05.04_EAR_manuscript.pdf", "oa_status": "GREEN", "pdf_src": "TaylorAndFrancis", "pdf_hash": "8cdf4da27dfbf9d141dff4a326d18c9ece51d206", "s2fieldsofstudy": [ "Economics", "Business" ], "extfieldsofstudy": [] }
232117166
pes2o/s2orc
v3-fos-license
Does endometrial morular metaplasia represent odontogenic differentiation? The nature of endometrial morular metaplasia (MorM) is still unknown. The nuclear β-catenin accumulation and the not rare ghost cell keratinization suggest a similarity with hard keratin-producing odontogenic and hair matrix tumors rather than with squamous differentiation. We aimed to compare MorM to hard keratin-producing tumors. Forty-one hard keratin-producing tumors, including 26 hair matrix tumors (20 pilomatrixomas and 6 pilomatrix carcinomas) and 15 odontogenic tumors (adamantinomatous craniopharyngiomas), were compared to 15 endometrioid carcinomas with MorM with or without squamous/keratinizing features. Immunohistochemistry for β-catenin, CD10, CDX2, ki67, p63, CK5/6, CK7, CK8/18, CK19, and pan-hard keratin was performed; 10 cases of endometrioid carcinomas with conventional squamous differentiation were used as controls. In adamantinomatous craniopharyngiomas, the β-catenin-accumulating cell clusters (whorl-like structures) were morphologically similar to MorM (round syncytial aggregates of bland cells with round-to-spindled nuclei and profuse cytoplasm), with overlapping squamous/keratinizing features (clear cells with prominent membrane, rounded squamous formations, ghost cells). Both MorM and whorl-like structures consistently showed positivity for CD10 and CDX2, with low ki67; cytokeratins pattern was also overlapping, although more variable. Hard keratin was focally/multifocally positive in 8 MorM cases and focally in one conventional squamous differentiation case. Hair matrix tumors showed no morphological or immunophenotypical overlap with MorM. MorM shows wide morphological and immunophenotypical overlap with the whorl-like structures of adamantinomatous craniopharyngiomas, which are analogous to enamel knots of tooth development. This suggests that MorM might be an aberrant mimic of odontogenic differentiation. Supplementary Information The online version contains supplementary material available at 10.1007/s00428-021-03060-2. Based on the presence of nuclear β-catenin accumulation and the ghost cell keratinization, it has been suggested that MorM might represent a differentiation towards hair [10]. In fact, tumors originating from the hair matrix, such as pilomatrixoma and pilomatrix carcinoma, often show nuclear β-catenin expression (due to CTNNB1 mutation) and ghost cell keratinization (due to the presence of hard keratin). However, the described features are also present in tumors exhibiting odontogenic differentiation, such as calcifying odontogenic cyst and adamantinomatous craniopharyngioma (ACP) [16,17]. In these tumors, the presence of ghost cells and hard keratin seems to represent an irregular deposition of tooth enamel matrix rather than aberrant hair formation [18,19]. In spite of these similarities, a systematic comparison between MorM and hard keratin-expressing tumors has never been performed. Furthermore, to our knowledge, hard keratins have never been assessed in MorM. The aim of this study was to assess whether MorM may represent a differentiation towards hair or tooth, by comparing MorM to hard keratin-expressing tumors. Case selection All specimens were retrieved from the archives of the Pathology Unit of the Federico II University Polyclinic of Naples. The study sample included 15 endometrioid carcinomas with MorM (out of which 11 showed overt squamous/keratinizing features) diagnosed between January 2019 and October 2020; 10 of these cases were previously described [9]. 
Twenty pilomatrixomas diagnosed in the 2019-2020 period and 6 pilomatrix carcinomas diagnosed in the 2010-2020 period were selected as tumors representative of hair matrix differentiation. Fifteen cases of adamantinomatous craniopharyngioma (ACP) diagnosed in the 2003-2019 period were also selected as tumors representative of odontogenic differentiation; 11 out of 15 ACP were previously reported [20]. Ten previously described cases of endometrioid carcinoma with conventional SqD were used as controls [9].
Histological and immunohistochemical methods
Histological methods for endometrioid carcinomas were previously reported [9]. For pilomatrixomas and pilomatrix carcinomas, the lesion was sampled in toto and surgical margins were assessed separately. Calcifying odontogenic cysts and ACP were sampled in toto. All samples were fixed in formalin for 24-48 h.
Pathological evaluation
All cases underwent complete immunohistochemical assessment. ACP cases were used as positive controls for hard keratin based on published data [17], while glandular epithelium and epidermis served as internal negative controls in endometrioid carcinomas and hair matrix tumors, respectively. For the other markers assessed, representative cases of endometrioid carcinomas with MorM and conventional SqD from our previous study [9] were used as controls, since the presence of different tumoral components (glands, MorM, squamous features within MorM, conventional SqD) offered both positive and negative controls. Hair matrix and odontogenic tumors were screened by immunohistochemistry for β-catenin, in order to identify tumoral areas suitable for comparison with MorM. Cases were considered relevant if they exhibited (i) distinct nuclear accumulation of β-catenin and (ii) morphological similarity to MorM (i.e., syncytial aggregates of bland cells with round/ovoidal-to-spindled nuclei and profuse cytoplasm) in the same area; the squamous/keratinizing features associated with MorM and previously described (i.e., clear cells with distinct cell borders, overtly squamous cells in round aggregates, ghost cells) [9] were also assessed. All specimens were evaluated by one pathologist (AT) and a second subspecialized pathologist at a double-headed microscope (DR, EG, MDBDC, LI, MM). All data were anonymized.
Endometrioid carcinomas with MorM
MorM areas were observed as both rounded formations and irregular sheets (Fig. 1). Eleven out of 15 cases also exhibited squamous/keratinizing features within MorM areas (Figs. 1 and 2). The distribution of squamous/keratinizing features was diffuse in 7 cases and focal in 4 cases. Morphological and immunohistochemical features of 10/15 cases were previously detailed [9]. In brief, all MorM areas consistently showed positivity for β-catenin (nuclear), CD10, and CDX2, with low ki67 expression (Fig. 3); CK8/18 and CK19 positivity, variable CK5/6 positivity, and weak-to-null CK7 expression were observed (Fig. 4). Rudimental squamous features within MorM consisted of clear cells with evident cell borders (Fig. 1) and strong p63 positivity. More mature squamous features consisted of rhomboidal cells with wide eosinophilic cytoplasm and prominent cell membranes, often in a rounded arrangement (Fig. 1); such areas showed loss of the MorM markers (nuclear β-catenin, CD10, and CDX2); p63 expression decreased with the keratinization process [9]. Keratinization within MorM appeared either as a central ghost cell aggregate or as multiple single ghost cells (Fig. 2).
Hard keratin showed multifocal positivity in 3 cases and focal positivity in 5 cases; other 3 cases showed a slight nuance within keratinizing areas which was not considered significant; the 4 MorM cases with no squamous/keratinizing features were negative. In the 10 conventional SqD cases, hard keratin staining was negative in all but one case, which showed focal and weak positivity (Fig. 5). Hair matrix tumors Compared to MorM, the cell nuclei of pilomatrixomas were almost invariably round/ovoidal and regularly spaced, with no spindling. The cytoplasm of basaloid cells was invariably scant, and the squamous cells consistently showed a finely dispersed chromatin. Unlike MorM, the ghost cells of pilomatrixoma were always arranged in tightly cohesive sheets. Out of 17 pilomatrixomas with adequate epithelial component, 13 cases showed nuclear/cytoplasmic β-catenin accumulation in both basaloid and squamous cells. CD10 showed distinct positivity in the basaloid cells, which became weaker in the squamous cells. CDX2 was positive in 13 cases (diffuse in 5 cases and focal/multifocal in 8 cases). Ki67 expression was high in the basaloid cells and low in the squamous cells. P63 was strongly expressed in both components. The CK expression pattern was variable for CK5/6, negative-to-focal for CK7, and CK8/18, and multifocal-to-diffuse for CK19 expression. Pilomatrix carcinomas showed β-catenin accumulation in 3/6 cases; compared to pilomatrixomas, they showed diffuse striking nuclear atypia with high mitotic index and high ki67 expression in all cases. Hard keratin showed distinct expression in the ghost cell layers nearest to the squamous cells in 11 hair matrix tumors; the remaining cases showed weak and focal expression. Morphological and immunophenotypical features of hair matrix tumors are shown in Supplementary Figures 1-3. Odontogenic tumors All ACP showed nuclear β-catenin accumulation in the socalled whorl-like structures (WS). Analogously to MorM, the WS were mostly observed in the form of small, round syncytial aggregates of bland cells, with round/ovoidal-to-spindled nuclei and variable amount of eosinophilic-to-amphophilic cytoplasm (Fig. 1). Cluster of clear cells with evident cell membranes were observed within WS (Fig. 1). Overt squamous features including rhomboidal cells with wide eosinophilic cytoplasm and prominent membranes were also present; these areas often maintained a round, "morular" shape ( Fig. 1). Foci of ghost cell keratinization in WS were often associated with clear/squamous cells; in some areas, keratinization occurred in the form of multiple, single ghost cells (Fig. 2). On immunohistochemistry, WS were sharply distinguished from the background by CD10 and CDX2 positivity, as observed in MorM; ki67 expression was low-to-null (Fig. 3). Compared to the other markers assessed, the expression of CK was positive with no appreciable difference from the surrounding tissue (Fig. 4). Unlike MorM, the expression of p63 in WSs was prominent even in the absence of squamous features, resulting similar to the surrounding stellate reticulum; however, analogously to MorM, p63 expression decreased with the development of terminal squamous/keratinizing features. All cases showed diffuse hard keratin expression in the ghost cells, although with variable intensity (Fig. 5). Similarities and differences among MorM, SqD, WS, and hair matrix tumors are summarized in Table 2. The nature of MorM has long since remained a mystery. 
On the one hand, several features have led MorM to be considered a putative immature form of SqD, e.g., the profuse eosinophilic cytoplasm, the expression of Bcl2 intermediate between glands and squamous areas, and the focal presence of tonofilaments on electron microscopy [3,8,12].
Fig. 1 Morphological features of morular metaplasia in endometrioid carcinoma and whorl-like structures in adamantinomatous craniopharyngioma (hematoxylin-eosin). a Round shape (magnification ×200). b Cytological detail (×400). c Clear cells with evident cell membrane (×200). d Round aggregates of squamous cells (×200).
On the other hand, the unique immunophenotype of MorM (nuclear β-catenin+, CD10+, CDX2+) contrasts with such a hypothesis [1,9,15]. In addition, some results reported in the literature might have been misleading in this field. It was reported that MorM showed evidence of neuroectodermal differentiation [7], although such findings have been considered inconclusive due to the inconsistency of the results [21]. Furthermore, several studies separated MorM from conventional SqD based on the absence of overt squamous features and keratinization [5-8, 12, 13]. Owing to this view, many cases of conventional SqD with positivity for the MorM markers (nuclear β-catenin, CD10, CDX2) have been reported [5,12]. As discussed in our previous study, we believe that squamous/keratinizing features are not a criterion to differentiate between MorM and conventional SqD, since they can be observed in both entities. Instead, MorM seems to differ from conventional SqD based on the presence of the typical MorM cellularity with the typical MorM immunophenotype, the variable tendency to maintain a "morular" shape, and the ghost cell keratinization [9]. Therefore, we believe that part of the previously described cases of SqD with MorM immunophenotype, or of mixed MorM/SqD, may in fact represent MorM with squamous/keratinizing features. The evidence of squamous/keratinizing features within MorM appears in agreement with previous observations [10,11].
Discussion
The characteristic features of MorM, in particular the nuclear β-catenin accumulation and the ghost cell keratinization, suggest its similarity with hard keratin-producing tumors. Such tumors mainly include hair matrix tumors (i.e., pilomatrixoma and pilomatrix carcinoma) and odontogenic tumors (i.e., calcifying odontogenic cyst and ACP). Analogously to MorM, these tumors harbor CTNNB1 mutations (which underlie the nuclear β-catenin accumulation); the ghost cells are related to the presence of hard keratin [16,17]. The analogy between MorM and hard keratin-producing tumors was previously postulated by Tanaka, who proposed that MorM could represent a differentiation towards hair [10]. However, contrary to what was previously suggested [17], hard keratin in odontogenic tumors seems to represent a component of the tooth enamel and not of hair [18,19]. In fact, the presence of enamel-related proteins has been shown in the ghost cells of ACP [18]. Nonetheless, to the best of our knowledge, MorM has never been systematically compared to hard keratin-producing tumors. In the present study, we did not find significant analogies between MorM and hair matrix tumors, since the latter showed diffuse nuclear β-catenin accumulation in the absence of the typical MorM cellularity.
Fig. 3 Immunohistochemical features of endometrioid carcinomas with morular metaplasia and whorl-like structures of adamantinomatous craniopharyngioma (magnification ×200). a Nuclear β-catenin accumulation. b CD10 positivity. c CDX2 positivity. d Low/absent ki67 expression.
By assessing ACP, we noticed that the WS showed strong morphological analogies with MorM. Indeed, the rounded shape, the syncytial appearance, the bland round/ovoidal-to-spindled nuclei, and the profuse cytoplasm were identically observed in MorM and WS. The squamous/keratinizing features were also similar, with the presence of round/ovoidal clear cells with prominent membranes, overtly squamous cells in a "morular" arrangement, and ghost cell keratinization. We noticed that, while the ghost cells of pilomatrixoma were always densely stratified, those of MorM and ACP often appeared as multiple single, round ghost cells within areas with syncytial cellularity. Although ACP and MorM also show layers of stratified ghost cells, Rumayor et al. showed that these were dyscohesive on electron microscopy, while the ghost cells of pilomatrixoma appeared tightly cohesive [22]; this ultrastructural finding is in agreement with our observations. The immunophenotype of MorM and WS was widely superimposable, with consistent β-catenin (nuclear), CD10, and CDX2 positivity that contrasted with the negative background. Interestingly, CDX2 was previously reported to be negative in ACP [23]. Instead, we found CDX2 positivity limited to the WS, which could be missed at low magnification. The proliferation marker ki67 was low/absent in both MorM and WS. In endometrioid carcinomas, the low proliferation index of MorM contrasted with the highly proliferating glandular component.
Fig. 4 Cytokeratins (CK) expression in endometrioid carcinomas with morular metaplasia and whorl-like structures of adamantinomatous craniopharyngioma (magnification ×200). a Variable CK5/6 expression. b Decreased CK7 expression compared to the background. c Increased CK8/18 expression compared to the background. d CK19 positivity similar to the background.
In ACP, the proliferation index was also low in the stellate reticulum around WS; however, one of us previously reported that ki67 expression remained low in WS even when the surrounding cells showed increased proliferation [20]. The CK pattern was also similar between MorM and WS, despite being less consistent than β-catenin, CD10, and CDX2; by contrast, hair matrix tumors showed weak/absent expression of CK8/18, which was the most strongly expressed CK in both MorM and WS. The only evident difference was p63, which was positive in MorM only in the presence of overt squamous features, while it was consistently positive in WS. However, such a difference might be attributed to the difference between endometrioid carcinoma and ACP, since p63 is negative in the former and positive in the latter. In both endometrioid carcinoma and ACP, positivity for β-catenin (nuclear), CD10, and CDX2 was observed around several ghost cell-keratinizing areas, consistent with their derivation from MorM/WS [9]. In addition to these findings, most MorM cases in our series showed focal/multifocal positivity for hard keratin. Such positivity, although not comparable to that of ACP (which showed diffuse positivity in the ghost cells), was completely absent in all but one SqD case, which showed weak and focal staining. The described findings suggest that MorM of endometrioid carcinomas might be biologically similar to the WS of ACP. Interestingly, a recent study demonstrated that WS are analogous to structures termed "enamel knots," which play a crucial role in tooth development [19].
On account of these findings, we suggest that MorM, just like WS, might be analogous to enamel knots. This would mean that MorM is a form of odontogenic differentiation and that the keratinizing process found in MorM is an aberrant mimic of normal tooth development. Remarkably, while MorM resembles enamel knots, the equivalents of other odontogenic components (such as stellate reticulum) are not found in endometrioid lesions. The reason why endometrium would differentiate into a specific odontogenic structure might lie in the different gene expression among different odontogenic components. In fact, enamel knots differ from the other odontogenic components by nuclear β-catenin accumulation and increased expression of several proteins (such as p21 and ectodysplasin A receptor) [19]. It is possible that endometrioid proliferations share the expression of key molecules with enamel knots; in such a scenario, a specific molecular event (e.g., CTNNB1 mutation) might trigger the development of MorM. This would explain why MorM is considerably more common in the endometrium than in other tissues. Further studies are necessary in this field.
Conclusion
MorM shows nuclear β-catenin expression and not infrequently exhibits ghost cell keratinization, as observed in hair matrix and odontogenic tumors. However, we did not find striking morphological or immunophenotypical analogies between MorM and hair matrix tumors. On the other hand, MorM showed considerable morphological and immunophenotypical overlap with the WS of ACP. This may suggest that MorM mimics odontogenic differentiation, in particular the enamel knot of normal tooth development. Further studies are warranted in this field.
Author contribution AT: study conception, data collection, pathology review, data interpretation, manuscript writing; AR: study conception, data interpretation, manuscript writing; DR: data collection, pathology review, data interpretation, manuscript writing, supervision; EG: data collection, pathology review, data interpretation, manuscript writing, supervision; SP: data collection, data interpretation, manuscript writing; PM: data collection, data interpretation, manuscript writing; FZ: study conception, data interpretation, supervision; MDBDC: study conception, data collection, pathology review, data interpretation, supervision; LI: study conception, data collection, pathology review, data interpretation, supervision; MM: study conception, pathology review, data interpretation, manuscript writing, supervision.
Funding Open access funding provided by Università degli Studi di Napoli Federico II within the CRUI-CARE Agreement.
Declarations This study was based on retrospective analyses of routine archival formalin-fixed, paraffin-embedded tissue. All patients provided routine written consent for the use of their biological specimens for research purposes at the time of surgery, in agreement with the Declaration of Helsinki and the indications of Italian Legislative Decrees no. 196/03 and 101/18 (Codex on Privacy). All data were anonymized.
Conflict of interest The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2021-03-05T14:32:32.578Z
2021-03-05T00:00:00.000
{ "year": 2021, "sha1": "089e5de7367a63e377a8bf423d57ddb443af08d2", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00428-021-03060-2.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "089e5de7367a63e377a8bf423d57ddb443af08d2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
230593289
pes2o/s2orc
v3-fos-license
Satisfaction in Old Age: Activity or Disengagement? The aging process of human beings is intertwined with two vital aspects of life experience, work and retirement, a period in which elderly people face greater uncertainties than other age groups as they have to re-establish themselves in new environs with shifting roles. Thus, in this process, researchers ask whether the elderly disengage or withdraw, whether their disengagement or activity brings satisfaction, and what their attitude is towards the functionality of disengagement. To measure these, disengagement and activity theories have been used within a descriptive research design in which respondents were selected purposively and interviewed face-to-face. Most of the elderly in Bangladesh believe themselves to have been forced to retire. A significant portion of the retired elderly answered that they would rather have remained engaged than retire but, in reality, most of them enjoy disengagement, escaping from earlier activities, which ensures their quality of life and satisfaction. Overall, it is found that disengagement is functional in the sense that elderly people give up their positions to the young, whom they are not able to match in activity level.
Introduction
In the postmodern era, where the economy is primarily based on information and technology, concern for the elderly has been rising. Young people, with their up-to-date education and knowledge of technology, are increasingly in demand in the information-based economy, which makes the knowledge of the elderly outdated and obsolete. Aged workers have to retire to make room for the younger generation. After retirement, they lose their statuses and vital roles in society and decrease their interaction with others, which makes them feel worthless (Whiteside, 1965). If retired persons do not get the opportunity to re-engage themselves, they often lose their self-esteem and become self-centred. The elderly are a vital part of our society. After retirement, some of them want to be disengaged from activities, some want to remain active despite being old, and many of them are actually forced to disengage by society. The study uses two significant theories of aging, activity theory and disengagement theory, to explore whether disengagement or activity brings satisfaction in old age and whether the elderly disengage themselves from society or are forced to withdraw by society. Havighurst, Neugarten, and Tobin (1964), in their study, tested disengagement theory and their findings support the theory. They found that with increasing age, people's interaction with others and their psychological as well as social engagement with others reduced. A similar finding of decreased social interaction with growing old was also reported by Maddox (1964). In contrast, Somers' study (1977) did not support the disengagement theory; he found that, instead of being disengaged, the elderly tried to resist disengagement and kept themselves busy by continuing their activities. Zborowski's findings (1962) were also against the proposition of the disengagement theory that people disengage themselves from society as they grow old. His findings did not support the voluntary disengagement of the elderly.
Objectives of the Study • To explore whether the elderly disengage themselves or withdrawn by the society forcefully • To explore whether disengagement or activity ensures the quality of life of the elderly • To examine the attitude of the elderly about the functionality of their disengagement Literature Review and Theoretical Connections Human being completes a cycle through physical maturation and the succession of Journal of Sociological Research ISSN 1948-5468 2021 age-related roles: child, adolescent, adult, parent, senior etc. At each point, as the individual sheds the previous roles, copes with a new one. Being elderly, one has to involve with new institutions and circumstances. With the dramatic and revolutionary societal changes occurred in the last few decades by the process of rapid industrialization, modernization and urbanization, the attitudes of and toward elderly have been affected mainly with their waning of power, influence and prestige once they held in the social institutions in traditional societies. In the last seventy years the percentage of older has been increased by 5 to 7 percent (Lee, 2009). This percentage has a massive effect on the dependency ratio: the number of productively employed people to non-productive; young, disabled, and elderly (Bartram and Roe, 2005). The rapid social change brings about mixed blessings for the elderly; getting excellent health benefits and losing economic role. There have been greater cultural variations of caring for the elder worldwide. For example- (Yap, Thang, and Traphagan, 2005) in their book "Introduction: Aging in Asia-Perennial Concerns on support and Caring for the Old" gave attention on the demographics of population Aging in Asia in the epoch of globalization and discussed that in Asian society, a family is solely answerable to care the elders (Yap, Thang, and Traphagan, 2005) while in the most western societies, the elders are taken as independent and probable to tend to their personal maintenance. Nevertheless, it is common for family members to interfere voluntarily only if the elderly relative needs support in poor health condition. In North America, the young members care for the elders in the condition of having future returns such as legacy or, in some cases, the amount of care the elderly delivered to the caregiver in the past (Hashimoto, 1996). In China, it is a great virtue to respect and care the elders and ancestors (Hamilton 1990 andHsu, 1971). In Japan, the elderly always deserve support (Ogawa and Retherford, 1993). However, the dramatic changes in dominant social and economic institutions (like family and economy) created the call for community and government care (Raikhola and Kuroki, 2009). The people of North America feel elderly as a burden and most of the caregivers could not be able to support as they generally work outside the home. Another crucial fact is that many middle-class families cannot bear the financial burden of "outsourcing" professional health care (Bookman and Kimbrel, 2011). However, the Chinese Canadians are more supportive than the Caucasian Canadians (Funk, Chappell, and Liu, 2013). It is really interesting that different demographic groups treat elderly differently (Bookman and Kimbrel, 2011). Globally, wealthy nations are equitably well prepared to support an exponentially growing elderly population while peripheral and semi-peripheral countries face similar increases but lack resources. Poverty among elder especially elderly women is a big concern. 
The feminization of the Aging poor is evident in peripheral nations because a lot of elderly women in those countries are single, illiterate, and not a part of the labour force (Mujahid, 2006). Aging, a lifelong process happens in the changes of physical, psychological, and social levels (Railey, 1978). Age is also considered as hierarchal like race, class and gender and is valued differently. Though children want more independence, many young people view aging as negative (Packer and Chasteen, 2006). A complete distinct line might be drawn between the old and the young at different levels like-the institutional, societal, and cultural (Hagestad and Uhlenberg, 2006). Elderly females are also seen in terms of negative stereotype considering less successful than older men (Bazzini, Mclntosh, Smith, Cook, and Harris, 1997). On the contrary, at the presence of other men, Aging men may lack prospects to proclaim the masculine identities (e.g., sports participation) (Drummond, Newton and Yemm, 1998). According to some social scientists, in the western world, aging male bodies are sometimes considered as genderless (Spector-Mersel, 2006). Widows (the living female wife of a dead male companion) and widowers (the living male wife of a dead female companion) lead their post-marital lives differently while men feeling loss of something are more likely to marry after the death of his partner, many women do not re-marry instead enjoy being alone (Davidson, 2002). Elderly people have to cope with new challenges with the lack of partial independence and physical ability. They have to face age discrimination. Some have self-sufficiency; others need more caring as they typically no longer hold jobs. Elderly people might be the target of abuse, mockery and stereotype. According to the functional view on elderly, it is observed that the aged who are equipped with more resources enjoy their post-retirement being active and adjust well with the new challenges (Crosnoe and Elder, 2002). As functionalist perspective disengagement theory denotes that shirking social roles is a regular fact of growing old. There are some salient facets of this theory. Firstly, everyone fears death and has a declination of physical and mental maturation with the passage of time which is an entirely natural process. For this reason, elderly withdraw themselves from others and society. Secondly, for their escaping social roles, they have to accept less reinforcement to adapt to social norms. Therefore, they become freer from the pressure to conform. Finally, Men and women experience this situation differently as men generally seek for giving attention to work and women in the marriage and family, at the time of their withdrawn might be hopeless and depressed unless the shift their accustomed roles which are compatible with the disengaged state (Cumming and Henry, 1961). The theory is also criticized as it is not accepted as a classic form. Criticisms stereotypically emphasis on that elders universally naturally withdraw from society with the process of Aging, and that it does not permit for a varied distinction in the approach people experience Aging (Hothschild, 1975). Cumming and Henry recognized (1961) the withdrawn of social roles and opined that elderly have to find the possible replacement of those roles that is addressed anew in activity theory. 
For the happiness and enjoyment of elderly, activity levels, social participation and connection are very rudimental (Havinghurst, 1961;Havinghurst, Neugarten, and Tobin, 1968;Neugarten, 1964). The happiness is entirely depending upon the active participation of the elderly. Critics of this theory mentioned that the access to social opportunities and activity are totally unevenly distributed. The theory recommends that activity is a resolution to the comfort of elders without being capable of maintaining the distribution of access to these social opportunities and activities reflects broader issues of power and inequality in society. Moreover, in the presence of others or participation in activities, everyone may not find fulfillment. Reformulations of this theory propose that involvement in comfortable events, such as hobbies, become more fruitful in later life satisfaction (Lemon, Bengtson, and Petersen, 1972). Cumming and Henry (1961), the proponents of disengagement theory, explain the process of Aging from the functionalist perspective and use the term disengagement to describe a mutual process by which people inevitably tend to withdraw themselves from society, social roles and relationships as well as society reliefs them of many social responsibilities, makes them less obligatory to follow social norms, as they grow old and thereby provides them with greater freedom through mutual disengagement. As people gradually be older, they limit their social interaction as well as social relations and become self-centred (Cumming and Henry, 1961). Obligatory retirement is one of the processes by which society disengages the elderly, and a radical change occurs in an individual's life with retirement. The nature and type of social interaction as well as the number of people one communicates and the nature and types of activities one engages significantly alter after retirement (Whiteside, 1957). The process of retirement is not only functional for the aged but also for the younger generations as well and above all for the society as a whole. For the elderly, it brings an opportunity to pursue the desires that they were not able to fulfil before retirement by giving up their previous statuses, roles and responsibilities. The retirement of the aged creates job vacancy for the younger generations, and the young people with their full energy, up-to-date knowledge and technological skills replace the elderly, the overall innovation, creativity and productivity of the society significantly increase that bring change in every segment of society (Thompson, 2012). Men and women differently experience social withdrawal as men's primary focus is work, and women's main focus is family. In response to the withdrawal of old roles, if they don't take a new role their life will be miserable and meaningless (Cumming and Henry, 1961). Activity Theory On the contrary, the activity theory, developed by Havighurst (1961), states that the quality of life of the elderly largely depends on their involvement in significant social activity and society disengages itself from the elderly against their will. Benjamin, Edwards and Bharati (2005) found that those who remain active in old age are more likely to stay physically sound and to delay physical limitations of functioning than those who are not active. 
The activity theory, closely connected with symbolic interactionist perspective, contends that the statuses people occupy and the roles they perform together create a person's sense of self and social identity and therefore when they lose those, they also lose their sense of self and social identity that reduce their life satisfaction (Thompson, Zack, Krahn, Andresen, & Barile, 2012). In order to lead a satisfying life after retirement, the retiree must replace the lost social identities and roles and long-familiar habits, situations, roles, contacts, that they have to abandon forcefully (Whiteside, 1967), with new ones by adopting new statuses, roles and social identities, for example-pursuing hobbies remain unfulfilled before retirement, engaging in various social and religious organizations, doing various social welfare activities, travelling, grandparenting, forming new contacts with old friends/peers, gardening and religious activities etc. ISSN 1948-5468 2021 Research Design and Method A descriptive quantitative research design has been chosen based on the objectives of the study. To collect data from the field, a Face-to-Face structured interview has been scheduled. The Study Area and Unit of Analysis Barisal city is selected as the study area. Retired government officials are considered as the unit of analysis. Study Population The elderly people of the city who are retired from government jobs are considered as the population of this study. Sampling Technique and Sample Size The study uses purposive sampling technique to draw the necessary sample from the target population. A total of 89 samples are selected who were retired government employees from various professions in Barisal city. Data Collection Techniques, Tools and Ethical Considerations In order to conduct the study, Face-to-Face Interview is used as the main data collection technique of this study. A structured questionnaire has been used as the tools for data collection. Data were coded, analyzed and processed using Statistical Package for Social Sciences (SPSS). The frequency distributions of the study variables have been measured by descriptive analyses. Confidentiality and privacy of the respondents are highly ensured. Demographic Profile of the Respondents The demographic traits of the respondents like sex, age and marital status have been presented in the below table. A significant portion of the respondents is male, estimating 83.1 percent. As the participants are all retired personnel, they are aged above 57. The highest representation of the age between 62-65, measuring 28.1 percent and the lowest is 87-91 (5.6 percent). In the case of marital status, 85.4 percent elderly are married, and the rest percentage consists of widows or widowers. ISSN 1948-5468 2021 In table 2 the scenarios of disengagement have been depicted. Everyone believes in one thing that retirement gives greater freedom. Only 7.9 percent older people believe that they are unable to perform the job as they grow old and about 97 percent think that they have got forceful retirement while very few accept it normal or have to accept the reality of bearing forceful retirement while they consider that they have available power to continue their ISSN 1948-5468 2021 performance. In the case of capability concern, 92.1 percent thinks that they still feel capable of working. All of the respondents think that there will be no boundary of retirement. If anyone wants a retirement, he may accept it right after whereas society must not poke its nose. 
In the question of status, 94.4 percent think that retirement causes their lowering status and shrinking role in 86.6 percent cases. Feeling of nothingness or unimportance is heavy in them, and they do not get that much opportunity for work. 30.3 percent indicates that they are withdrawing themselves from their roles, while 39.3 percent shirks social responsibilities as they grow old. Table 3 describes the percentage of the interaction of the elderly after retirement. They like to interact with others, but all have opined that their social engagement is on the decrease and 97.8 percent think that growing old forces them to decrease interaction. 70.8 percent like mingling with all ages while only 47.2 percent prefer their same-aged people. Around 70 percent prefers social gathering, making new friends and pursuing personal moments, respectively. Only 58.7 percent think that interaction contributes to life satisfaction. Table 4 describes the activities of the elderly after retirement. Their activities mostly revolve around their family. The unfulfilled hobbies have been pursued by 57.3 percent. Around 87 percent has grandchildren staying at home and 94.8 percent like grandparenting. Very few; all cases are below 25 percent engage themselves with activities at home like cooking, daily shopping, grandparenting, gardening, religious activities, childcare. 27 percent think that they are to do some household chores imposed by others. In case of engagement or relief, they are at almost fifty-fifty positions. In social and religious events, around 40 percent respectively have interests while in economic, political, cultural, sports have below 6 percent respectively. About 40 percent engage themselves in social and religious organizations, respectively, on the other hand, 62.9 percent adds themselves with social welfare organizations. After all, 74.2 percent still feel themselves as a part of the broader community after retirement. ISSN 1948-5468 2021 ISSN 1948-5468 2021 Source: Field Data 2019, N=89 Table 5 delineates the consideration of the best thing of the elderly. About 58 percent believes that it would be very convenient if there is no age restriction where 41.6 percent think that they can perform their religious activities being free. 18 percent can do the things which they could not be able to do. In the case of directing family (5.6 percent) and giving time to the family (4.5 percent), they are entirely unavailable. Some think that they are free from all responsibilities and some do nothing estimating 14.6 percent and 15.7 percent respectively. Feel free to participate in leisure activities of your choice as relieved of many of previous responsibilities, such as parenting, working, and other social and professional activities 60.7 Journal of Sociological Research Doing things that were unable to do due to your job or other responsibilities 52.8 The family still expects the same responsibility that you performed before retirement 53.9 Have nothing more to do in life as accomplishing all duties like parenting, job responsibilities, household responsibilities etc. 52.8 Still want to be active if they get a new opportunity 41.6 Source: Field Data 2019, N=89 In table 6, the options about the functionality of disengagement have been described. All respondents think that the retirement of the elderly creates new jobs and other opportunities ISSN 1948-5468 2021 for the young generation. 
The second highest percent that is 87.6 percent think that elderly people are replaced through retirement by the new generation providing renewed interest and energy for productivity, innovation, creativity, and change. 60.7 percent feels free to participate in leisure activities of your choice as relieved of many of previous responsibilities, such as parenting, working, and other social and professional activities. They can do things that were unable to do due to their job or other responsibilities, their families still expect the same responsibility that they performed before retirement, some have nothing more to do in life as accomplishing all duties like parenting, job responsibilities, household responsibilities etc. estimating around 53 percent each. 41.6 percent still wants to be active if they get new opportunities. Discussion and Findings Elderly people encounter a new reality to face their family as well as society after their retirement as there happens a mutual withdrawal or disengagement between them and others in the social system. Therefore, Aging is entirely a give-and-take process where the process should literally be influenced by the features the individual and the society carry (Cumming, 1963, p. 377-8). From this point of argument, it is mentionable that the elderly of Bangladesh are a no longer anomaly of the fact of the aging process and retirement. After their retirement, there emerged a distinct two-way relationship where the elderly had to adjust with their surroundings. They have to mingle and reshape themselves for the sake of survival. In this process, the environment eases the way of successful Aging while the culture promotes it. On the contrary, the aging persons have to accept the challenges to make them fit to fulfil their functions within the familial or societal settings (Simmons, 1962, p. 40). In this respect, older people chose to be disengaged or active. Disengagement might be the release and freedom of the individual from engrossing ties, involvements, and relationships while activity denotes the opposite proposing that successful Aging is tied up with activity and feeling useful. In the activity, a person must undergo steady expansion throughout life. As frequent contacts, roles, and situations are removed, the Aging individual must establish new interests to substitute for those they are forced to give up. This issue draws the attention of the researchers to frame the questions whether the retired in Bangladesh disengaged or accepts activity. Whether the Elderly Disengage Themselves or Withdrawn by the Society Forcefully In the question of whether the elderly disengage themselves or are withdrawn by society forcefully, it is found that only about 8 percent believe that they are not able to be part of other jobs soon after their retirement. On the issue of their retirement, 96 percent call it a forceful disengagement by society when they were totally unwilling to accept it generally as it is a rule. The elderly of Bangladesh opined that retirement should be based on one's own choice rather than forced by society. They have the feeling of losing status after their retirement by delimiting their previous multifarious roles and responsibilities played for the sake of family ISSN 1948-5468 2021 and society. Only a few older people feel unique and essential as before after retirement. Stephen J. Miller (1965) states that occupational retirement is "possibly the most crucial life change requiring a major adjustment on the part of the older person" (p. 
78). Individual's identity crisis begins with the forced retirement because, as Miller continues: "Work not only provides the individual with a meaningful group and a social interaction." Streib and Schneider (197l) found the same as retirement as a form of disengagement narrowing the life processes. Journal of Sociological Research It is an exciting finding that about 21 percent retired though they had the opportunity to continue their job. Around 40 percent willingly withdrawn themselves from social roles and responsibilities as they think they grew old. At the same time, the retired assume themselves having greater freedom shirking their job roles and responsibilities. It is clear that a large portion of the elderly do not want to retire at the same time they want to limit their familial and social activities. Whether Disengagement or Activity Ensures the Quality of Life of the Elderly In the process of disengagement, Aging people being disengaged reduce themselves from the number of active roles he holds; such as-a friend, neighbor, worker, churchgoer, club member, etc. Strong evidence exists to show that the aged in Bethnal Green reduced the number of roles they held. Over half of the men were retired, sixty-one per cent of those studied were not interested in belonging to older people's clubs, and the role of neighbor was communicative but not an intimate one. Also, these older people rarely, if ever went to church. They gave up part-time occupations, visits to the cinema, shopping, cleaning and washing services for neighbors and associations with them, friendships outside the family, holidays and weekends with relatives, the care of grandchildren, the provision of meals for children, and finally their own cooking and budgeting. . . (Townsend 1957, p. 55.) By comparing with Townsend's finding, more significant similarities have been found here that the elderly people largely remain inactive during their post-retirement period. They have very minimal participation in household activities (about 7.9 percent), Cooking (4.5 percent), Daily shopping (20.2 percent), Grand parenting (7.9 percent), gardening (4.5 percent), Religious activities (19.1 percent), Child care (24.7 percent). About 50 percent try to pursue their hobbies which were unfulfilled. It is also found that over 85 percent elderly have their grandchildren whom they live with and 94 percent elderly enjoy their grandparenting. For example-strong evidence exists to show that the Hopi loses roles as he grows older even though he continues to work at whatever tasks he can perform. Cumming and Henry (1961) state that the close, indulgent grandparent's role can be assumed only after they give up the mother or father role. Dennis (1940) and Titiey (1944) state that warmth of attachment exists between grandparents and grandchildren. The Hopi child does not tease his true grandfathers. The old grandparents spend much of their time teaching Hopi songs and legends and the Hopi way of life to the young and for as a man grows older, he learns more concerning the conventional forms and his knowledge of songs, ritual, and traditions may become greater and more significant (Dennis, 1940, p. 86). Streib and Schneider (1971), modified the view of disengagement and spoke of "differentia1 disengagement," and conclude that "disengagement in one sphere, such as retirement, does not signal withdrawal and retrenchment in all spheres" (p. 180) and it matches the disengagement process of elderly in Bangladesh. 
In some cases, elderly people are imposed by others to do household chores. There have been fifty-fifty positions of their opinion on engagement or relief of responsibilities. Actually, they have doubt in it as they do not have much interest in activities in reality. They have little interest rated below 10 percent in economic, political, sports. At the same time, only around 40 percent have an interest in social and religious events and engage themselves in social organizations. In this way, 74.5 percent still feel that they are part of the broader community after retirement. This finding may support Streib and Schneider's (197l) concept of "activity within disengagement" (pp.180-182). In the case of older people of Bangladesh, growing old makes elderly bound to shrink their interactions with others although 70 percent elderly people prefer to participate in social gathering and about 60 percent form new contact with peers most of them were at same personality. 73 percent enjoy their own time being secluded from others and make their own choice of passing the time. About 59 percent opine that interacting with others contributed to their life satisfaction. Elderly's Consideration about the Best Thing at This Age The best thing in the post retirement period for the elderly that they can perform religious activities and half of them believe that it would be convenient if there is no age restriction of retirement. However, most of them want to be in passive mode as well as not to be engaged themselves with anything. In a comparison between normal aged subjects and young normal groups, Laken and Eisdorfer (1960) state that at first glance they disagreed with the disengagement theory but upon closer examination it went along with it. The Attitude of the Elderly about the Functionality of Their Disengagement There are lucid emblems of the functionality of disengagement. Young skillful generation can occupy posts which were emptied by the retirement of elders. Evidence of disengagement was also found by Maddox (1964) in the Duke Geriatrics Project research. In the subjects over the age of 60, he found a tendency for decreased social interaction and for decreased contact with the environment as age increases. In a comparison between normal aged subjects and young normal groups, Laken and Eisdorfer (1960) state that the aged subjects were exceeded .by the younger group in a number of affective descriptions and in activity level. With this replacement, the young generation may bring forth more considerable developments with their renewed interest, energy for productivity, innovation, creativity and change. About 60 percent elderly feel free to participate in leisure activities of their choice as they are relieved of many of previous responsibilities, such as parenting, working, and other social and professional activities. Over 50 percent elderly do things that were unable to do before due to their job or other responsibilities, their family still expects the same responsibility that you performed before retirement and they have nothing more to do in life as accomplishing all duties like parenting, job responsibilities, household responsibilities etc. respectively. In the case of the functionality of disengagement, Cumming and Henry's views can be compared with. 
They are thought to be the pioneer sociologists who viewed Aging theory from the psychosocial viewpoint and the concepts of disengagement theory (Growing Old, 1961), was widely discussed when hastily extoled by some, as hastily condemned by others (Kastenbaum, 1969). Accepting Cumming and Henry's theory, Parsons (1963) sought for adding his version; consummatory phase: a harvesting period when an individual gathers fruits for his previous instrumental commitments, (p. 53). Streib (1968) supported the theory and felt that "there is abundant evidence to support the proposition that disengagement is universal," although personality and situational variables may influence its incident (p. 70). Streib and Schneider point to flaws in both activity and disengagement theories and propose a third approach of "activity within disengagement" which would make available to the aged new roles in the areas of leisure and citizenship service (pp. 180-182). Thus, it proves the functionality of disengagement of the elderly and to some extent it may be called 'activity within disengagement'. Conclusion Industrialization, modernization and information capitalism bring forth new socio-spatial changes apparently lowering the power, influence, and prestige of the elderly once held that carry not only numerous blessings but also utter sufferings for them as they merely cope with new technologies and new production system. In Bangladesh, elderly have to retain themselves from a job at a certain age which creates opportunities for the young. After their retirement, they were asked whether they accept the reality of disengagement. A significant portion of them answered that they would like to be engaged rather than retire. However, in the end, when they were asked about their life after retirement, it is found that most of them enjoy disengagement escaping from earlier activities. After retirement, being disengaged, elderly shift their active roles to passive roles like enjoying grandparenting. A significant portion of them believes that disengagement ensures the quality of life as they have more time to interact with the same personality and more importantly, they can give more time and attention to themselves. The best thing they get in this time is that they can perform their religious and social activities. Disengagement is functional as the sense that elderly give up their position to the young as they are not able to defeat them in the activity level. Retirement is not entirely the negation of activities of the elderly, it is also an opportunity of 'being active within disengagement' to give special service to family and society.
2020-12-24T09:09:22.782Z
2020-12-17T00:00:00.000
{ "year": 2020, "sha1": "b74b22619ccbedc4a8ecd15fc2051445e2dfc2b1", "oa_license": "CCBY", "oa_url": "https://www.macrothink.org/journal/index.php/jsr/article/download/17399/14030", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "ad40c396c9e9dadc2c3385e5d4245c7608b903e2", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
244657634
pes2o/s2orc
v3-fos-license
DNA Microarray Image Segmentation Using Markov Random Field Algorithm A deoxyribonucleic acid (DNA) microarray image requires a three-stage process to enhance and preserve the image's important information. These are gridding, segmentation, and intensity extraction. Of these three processes, segmentation is considered the most difficult, as its function is to differentiate between features in the foreground and background. The elements in the foreground form the object, or the vital information of the image, while the background features less critical information for DNA microarray image analysis. This paper presents a study that utilises the Markov random field (MRF) segmentation algorithm on a DNA microarray image. The MRF algorithm evaluates the current pixel based on its neighbouring pixels. The experimental results show that the MRF algorithm works effectively in the segmentation process for a DNA microarray image.
Introduction
Scientists can investigate thousands of gene expressions simultaneously using a DNA microarray image [1]. Initially, these gene expressions are kept on a glass slide containing thousands of probes [2]. The glass slide is then used to perform hybridisation between two samples. The two cDNA (complementary DNA) samples are stained with different fluorescent dyes; Cy3 dye is used for the normal sample, and Cy5 dye is used for the malignant sample [3]. When the hybridisation step is completed, the DNA microarray image is created, and the intensity of the spots on the image is calculated. The intensity of the spots shows their state, and the aggregate results allow scientists to evaluate and study gene expression [4], [5]. A high-quality DNA microarray image is required to generate this information. The DNA microarray image may become contaminated during the scanning procedure, compromising gene expression analyses [6]. One way of improving and optimising the microarray image is image processing [7]. The processing of the microarray image is divided into three parts. First, gridding (addressing) is employed to determine each spot's location. Second, segmentation is employed to detect the features in the image's foreground (the object). Finally, intensity extraction is employed to calculate the intensity of each spot [8]. An MRF segmentation for a DNA microarray image is presented in this research. Based on the experimental results, the performance of this algorithm is then evaluated. Section II introduces and explores various approaches to image segmentation using MRF algorithms. Section III describes the approach employed in this investigation. Section IV discusses and analyses the experimental data, and Section V concludes this study.
Markov Random Field
The MRF algorithm evaluates the current pixel value, taking into consideration the neighbouring pixels [14]. Figure 1 shows an example of a neighbourhood system, where N0 denotes the site of interest and N1, N2, N3, N4, and N5 its neighbours. This neighbourhood system and its groupings, known as cliques, can be understood as follows. Pixels labelled N1 indicate the sites of the first-order neighbourhood system, as shown in Figure 1 (a). Pixels labelled N1 and N2 indicate the sites of the second-order neighbourhood system, as shown in Figure 1 (b). Figure 1 (c) shows the nth-order neighbourhood system, in which n = 5. The neighbouring sites can be viewed as a single element enclosing the site (N0). Figure 2 shows some examples of several types of cliques; the cliques for the first-order neighbourhood system are shown in (a).
The cliques in (a), (b), (c), and (d) constitute the second-order neighbourhood system. This shows that the number of types of cliques increases as the order of the neighbourhood increases. The probability in equation (1) below is defined as the Gibbs distribution [14]. The parameter Z is the normalising constant, β is a positive constant, and U(x) is the energy function, also known as the Gibbs energy [15], where U(x) is the sum of the clique potentials Vc(x) over all cliques of the given neighbourhood system. The sites s and n are neighbours of each other, where n ranges over the elements of the neighbourhood system [15]. Equation (5) defines Bayes' theorem, represented by equation (6). P(Y) has a total probability equal to one and is thus considered a constant. Therefore, the posterior probability P(X|Y) is proportional to the prior probability P(X) and the likelihood probability P(Y|X), as expressed in equation (7) [15]. Equation (8) defines the conditional probability P(Y|X), which follows a Gaussian distribution, by considering the image intensity as representing either the foreground or the background [15], where y is the observed image, and μs and σs are the parameters of the distribution associated with the label xs. Equation (9) is the maximum a posteriori (MAP) estimation of the posterior given by equation (7). Equation (10) is then produced by substituting equation (1) and equation (8) into equation (9). Equation (10) is optimised further by taking the negative, which yields the minimisation stated in equation (11) [15].
Methodology
This section discusses the segmentation of a DNA microarray image [16] (2200×7300 pixels) using the MRF algorithm [15]. Figure 3 shows a fragment of an image of size 446×431 pixels from the DNA microarray image, which is used as the input image for this work. The yellow box shown in Figure 3 marks the worst case for this segmentation process. Figure 4 presents a flow chart of MRF segmentation. Firstly, the input image is converted into a greyscale image. Then, an initial labelled image, based on the input image, is generated. Next, MRF segmentation is applied to the initial labelled image to generate a new labelled image. This process continues until the maximum number of iterations is reached. Finally, the segmented image is produced after the iterations are completed. In this study, a second-order neighbourhood system was chosen, and the number of iterations was set to 5. All experiments were performed using MATLAB R2019a software on a Windows 7 operating system with a 2.50 GHz Intel Core i5 CPU and 8 GB of RAM. The first labelling was generated as part of the initial segmentation of the input image, with the initial foreground labelled as two and the initial background labelled as one. In this study, the labelling process used the K-means algorithm. In this investigation, the second-order neighbourhood system is selected to guide the Gibbs energy, as stated in equation (3). Under this system, the comparison for each site N0 includes only the neighbourhood sites labelled N1 and N2, as shown in Figure 5 (a). Following this system, horizontal, vertical, and diagonal pair-sites are used to calculate the Gibbs energy, as shown in Figure 5. The total energy is the sum of the Gibbs energy and the log-likelihood, calculated as stated in equation (8), for each possible label. The possible labels of the site N0 are '1' and '2'.
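The equations referenced above as (1)-(11) are not reproduced in the text; under the standard MRF formulation that the surrounding definitions describe, they take roughly the following form. This is a hedged reconstruction written from those definitions, not the paper's exact notation, and the pair-site (Potts-style) form of the clique potential is our assumption:

P(x) = \frac{1}{Z}\exp\big(-U(x)\big), \qquad U(x) = \sum_{c \in C} V_c(x)

V_c(x_s, x_n) = \begin{cases} -\beta & \text{if } x_s = x_n \\ +\beta & \text{otherwise} \end{cases}

P(X \mid Y) \propto P(Y \mid X)\, P(X)

P(y_s \mid x_s) = \frac{1}{\sqrt{2\pi}\,\sigma_{x_s}} \exp\!\left(-\frac{(y_s - \mu_{x_s})^2}{2\sigma_{x_s}^2}\right)

\hat{x} = \arg\min_{x} \left\{ \sum_{c \in C} V_c(x) + \sum_{s} \left[ \log\!\big(\sqrt{2\pi}\,\sigma_{x_s}\big) + \frac{(y_s - \mu_{x_s})^2}{2\sigma_{x_s}^2} \right] \right\}

The last expression is the minimisation of the negative log-posterior: the first term is the Gibbs (prior) energy over the cliques and the second is the Gaussian log-likelihood term, which together give the total energy evaluated for each candidate label in the procedure described above.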
After both total energies are computed, the label with the minimum total energy is selected as the new label for the site (N0). Then, the total energy of the following site is calculated until the new labelled image is generated. This new labelled image results from the MRF segmentation of the initial labelled image, and the process is repeated until the maximum number of iterations is reached. After five iterations, the final segmented image is generated. In the segmented image, label '2' represents the foreground, while label '1' represents the background. Results and Discussion The previous section described the MRF algorithm and the steps used in the segmentation in this experiment. The segmentation experiment was performed using MATLAB simulation tools on a Windows operating system. The experimental results of all steps are presented here. Firstly, the microarray image is cropped to the size described above. Next, this cropped image is converted from true colour (RGB) to a greyscale image, as shown in Figure 3. Figure 6 presents the initial labelled image and the MRF segmentation result after five iterations. Firstly, the initial labelled image is generated based on the input image using the K-means algorithm, as shown in Figure 6 (a). The labels are determined by the values of the foreground and background means. A label of 2 is assigned if the intensity is close to the foreground mean; otherwise, a label of 1 is assigned. Next, based on the initial labelled image, the likelihood energy and the Gibbs energy are computed. A new labelled image is produced based on the total energy computed. Finally, the process is repeated until the maximum number of iterations is reached, and the MRF segmentation result is shown in Figure 6 (b). The value of 2 represents the foreground, while the value of 1 represents the background. Figure 6 (b) shows that the result improves the foreground identification compared to the initial labelled image. Conclusion In this paper, a segmentation process based on the MRF algorithm is used, demonstrating that this approach is suitable for performing segmentation on a DNA microarray image. An evaluation of each pixel that considers its neighbouring pixels offers improvements in classifying pixels into the foreground and background features. The results show that the MRF algorithm performs well in the segmentation of a DNA microarray image.
Nanomaterials for Fighting Multidrug-Resistant Biofilm Infections Multidrug-resistant bacterial infections represent a dire threat to global health. The development of antibiotic resistance in bacteria coupled with the lack of development of new antibiotics is creating infections requiring antibiotics of last resort, and even some infections for which we have no available treatment. Biofilm-based infections present some of the most challenging targets for treatment. The biofilm matrix provides a physical barrier that can impede access of antibiotics and antimicrobials to resident bacteria. The phenotypic diversity found in biofilms further exacerbates the difficulty of eliminating infections, with quiescent "persister" cells evading therapeutics and re-initiating infections after treatment. Nanomaterials provide a tool for combatting these refractory biofilm infections. The distinctive size regime and physical properties of nanomaterials provide them with the capability to penetrate and disrupt biofilms. Nanomaterials can also access antimicrobial pathways inaccessible to conventional antimicrobials, providing a synergistic strategy for treating biofilm infections. This review will summarize key challenges presented by antibiotic resistance and biofilms when treating infection and provide selected examples of how nanomaterials are being used to address these challenges. Introduction Multidrug-resistant (MDR) bacterial infections present a global healthcare crisis [1]. The development of antibiotic resistance through antibiotic use and overuse has created infections that are difficult or impossible to treat using traditional small-molecule therapies [2,3]. The challenges presented by MDR bacteria are exacerbated when these bacteria form biofilms. The biofilm matrix provides physical protection of bacteria from the host immune system and many antimicrobials [4][5][6]. The biofilm matrix also provides a heterogeneous environment that fosters phenotypic diversity in the resident microbes [7]. This diversity includes persister cells that have slow metabolic rates, with concomitant innate resistance to antibiotics targeting energy-dependent mechanisms [8]. Bacterial biofilms are present in many types of infections [9], including implants/joint replacements, circulatory infections (e.g., endocarditis), and bone infections [10,11]. Wound biofilm infections are a particularly acute therapeutic challenge [12,13]. The economic cost of these infections is high, costing $25B in the US alone [14]. Treatment of wound biofilm infections typically involves surgical removal of infected tissues (debridement) combined with an extended regimen of antibiotic therapy [15]. Surgery is both expensive and invasive, and the long-term administration of antibiotics can induce drug resistance [16].
Nanomaterials provide a unique toolkit for tailored interactions with biological systems [17,18]. Nanoparticles (NPs) can be fabricated to have unique physical properties, including optical [19,20] and magnetic behavior [21,22]. NPs can be fabricated to cover a size range commensurate with biosystems ranging from proteins to small bacteria. This size range, coupled with engineering of the NP surface, can be used to provide tailored and selective interactions with bacteria that can kill bacteria without harming mammalian cells [23,24]. Nanomaterials can also be engineered to penetrate and eradicate biofilms [25], providing the potential for a comprehensive strategy for combatting biofilm infections [26,27]. This review will discuss the challenges presented by biofilm infections, ground rules for NP-bacteria and biofilm interactions, and how nanomaterials can be used to address the challenge of MDR biofilm infections. A Brief Introduction to MDR Biofilm Infections Microbial species have been in conflict for billions of years, competing for resources and preying on each other [28]. Over this time, microbial life has developed an array of offensive and defensive weapons and structures. One of the key weapons in their arsenal is the ability to biosynthesize small molecules capable of selectively killing other competing microorganisms [29]. This selectivity is useful to medicine, providing a subset of molecules that kill bacteria with reduced or no toxicity to mammalian cells. These biomolecules generally kill microbes through specific mechanisms targeting essential cellular processes. These small molecules are the basis for the antibiotics that are the current front line in fighting infections in the clinic. The fact that antibiotics are derived from microbial bioweapons means the microorganisms they target have likewise had billions of years to develop defenses against them [30]. There are 3 main defenses used by planktonic (dispersed) bacteria against antibiotics (Fig. 1A). First, bacteria have developed cell surface protection, for example, the bacterial outer membrane in the envelopes that protect Gram-negative bacteria from a number of antibiotics [31]. The second is enzymatic deactivation of the antibiotic, which can occur via a wide variety of enzymes including β-lactamases, esterases, and oxygenases [32]. The third strategy is the use of energy-dependent (ATP or sodium/proton gradient) efflux pumps that remove antibiotics from bacteria [33]. The rapid development of resistance in bacteria is facilitated by sharing of genetic information, spreading resistance genes across bacterial populations [30]. Building shelter can be an effective response to external threats. Biofilms present a defensive "fort" for protection of microbes against other microbes as well as environmental hazards (Fig. 1B) [34]. More than just protection, biofilms can harbor a complex and cooperative community. Bacterial biofilms are composed of bacteria embedded in a matrix of extracellular polymeric substance (EPS). There is an incredible array of biomolecular materials found in EPS, including proteins, polysaccharides, lipids, and DNA, with composition varying by species and strain [35]. This dense matrix provides a physical barrier that can protect the bacteria from external agents such as antibiotics through exclusion, diffusion, and dilution within the matrix [6]. Further, the proximity of bacteria in biofilms facilitates horizontal gene transfer that transmits resistance across the community.
Beyond protection, the biofilm provides a phenotypically diverse community that enhances microbial survival [36]. The heterogeneous structure of the biofilms creates regions where bacteria are isolated from nutrients and become quiescent. These metabolically slowed bacteria are called persister cells [37]. Most antibiotics target active processes in bacteria; hence, the lack of metabolic activity in persister cells provides a form of protection. These persister cells provide reservoirs of bacteria when biofilms are under attack, making their eradication difficult and promoting antimicrobial resistance [38]. Engineering Nanomaterials to Kill Bacteria The most important step in eradicating biofilm infections is killing of bacteria. Biofilm dispersion without killing would foster the spread of bacteria, potentially exacerbating the infection. Fortunately, nanomaterials provide an almost infinite array of platforms for the creation of antimicrobials [39]. The ability to control size, composition, and surface properties of NPs provides ample dimensions for engineering antibacterial activity through a wide range of mechanisms [40]. The most commonly used NP-based antimicrobial strategies include direct damage of cell walls and membranes, delivery of antimicrobials, generation of reactive oxygen species (ROS), and binding to intracellular machinery [23]. Direct damage of cell walls and membranes Integrity of the envelope (cell wall/membrane) surrounding bacteria is central to their survival. Disruption of this envelope provides an effective means of killing bacteria, and is the operative principle for a wide range of surfactant-based antimicrobials. The key challenge to applying this strategy therapeutically is obtaining sufficient selectivity for killing bacteria relative to the host (mammalian/human) cells. The difference in surface charge between bacteria and mammalian cells provides a route to selectivity. The surfaces of bacteria are overall more negatively charged than those of mammalian cells, making them more electrostatically attractive to cationic species [41]. Small-molecule cationic surfactants are widely used for cleaning surfaces. These surfactants, however, interact with membranes on a local level and are generally unable to discriminate between bacterial and mammalian cell surfaces based on surface charge. Natural products, including antimicrobial peptides, present multivalent surfaces that can provide bacterial/mammalian cell selectivity. Nanomaterials likewise present larger multivalent surfaces that can be engineered to provide selectivity against bacteria. In general, amphiphilic cationic NPs bind and disrupt bacterial membranes. The key feature in obtaining selectivity is control of charge and hydrophobicity: overly cationic or hydrophobic nanomaterials will have increased interactions with mammalian cells with concomitantly less selectivity against bacteria. Antimicrobial delivery Nanomaterials provide versatile carriers for antimicrobials. NPs can protect cargo and improve solubility and stability of antimicrobials. Silver NPs are the most widely employed nanoformulation for antimicrobial use [42]. With these systems, the high surface area of the NP provides controlled dissolution to release antimicrobial silver ions. Small-molecule therapeutics can likewise be attached to or encapsulated inside NPs [43]. Responsive nanomaterials provide the capability of controlled release at infection sites, e.g., through antibiotic release in acidic infection sites [44].
ROS generation ROS can kill bacteria through a variety of mechanisms [45], with the most prominent pathway being through deactivation of membrane surface receptors via reaction of superoxide and hydroxyl radicals with thiols [46]. NPs can generate ROS through a variety of mechanisms. NPs can leach ions (e.g., Cu+) that generate ROS in bacteria [47]. NPs (in particular metallic) can generate ROS through excitation using light and other electromagnetic radiation, a photocatalytic process known as photodynamic therapy that can be particularly useful for wound treatment [48]. Inactivation of cellular machinery NPs can be engineered with sizes commensurate with proteins and nucleic acids, making them ideally sized for binding and disrupting intracellular processes. NPs have been used to interfere with gene expression and to bind intracellular proteins, leading to killing of the bacteria [49]. Penetration of Nanomaterials into Biofilms The protected and diverse community presented by biofilms makes biofilm infections difficult to treat [15,16]. The ability of small molecules and nanomaterials to kill bacteria provides the first step in fighting biofilm infections. There remains, however, the key question of how to transport these antimicrobial agents into biofilms. Design strategy can be derived from the structure of biofilms. Bacteria are typically negatively charged, and the nucleic acids, polysaccharides, and proteins comprising the EPS are likewise rich in anionic and hydrophobic constituents. The biofilm matrix also features water-filled pores (≤350 nm in diameter) that allow nutrient transport into biofilms [7]. The overall negative charge of the biofilm suggests that charge will be a strong factor for the interaction of NPs with biofilms. Quantum dots (QDs) of ~10 nm diameter were used to probe the role of charge in NP-biofilm interactions [50]. Predictably, anionic QDs did not interact with the biofilms (electrostatic repulsion with the anionic bacteria and EPS), with neither attachment nor penetration observed. Neutral QDs likewise were non-interacting (no driving force for interaction). Cationic QDs can be predicted to attach to the anionic biofilm; however, transport through this "sticky" environment can be difficult to predict. It turns out that both hydrophilic and hydrophobic cationic QDs are readily transported into biofilms. Intriguingly, this process appears to occur via 2 different mechanisms. Hydrophilic QDs localize in the ECM, suggesting transport through this pathway. In contrast, hydrophobic QDs localized in cells, consistent with an "island hopping" mechanism that proceeds through the bacteria (Fig. 2). Size presents another key determinant for NP-biofilm interactions. Water pores in biofilms come in a range of diameters. Studies show that uncharged NPs <350 nm are able to access much of the biofilm depth through the water channels, providing an upper limit for uncharged particles [51,52]. Given that transport along a pore would be expected to be much faster than through the biofilm, the 350-nm scale provides a good target for cationic particles as well (Fig. 3).
Nanomaterials as Therapeutics against Biofilms Penetration of nanomaterials into biofilms brings all the tools described above for killing planktonic bacteria to bear on the treatment of biofilm infections. The ability to access embedded cells (and in particular persister cells) provides a promising strategy for overcoming both the physical barriers presented by the EPS and the challenges arising from phenotypic diversity in biofilms. Disruption of biofilms using nanomaterials As described above, cationic nanomaterials have the ability to penetrate into biofilms. This penetration can occur with disruption of the EPS, providing a means of completely disrupting biofilms. Then, cationic nanomaterials can kill the resident bacteria through membrane disruption [53]. These capabilities make nanomaterials a "one-stop shop" for the treatment of biofilm infections [54]. An example of penetration of biofilms and killing of resident bacteria is provided by poly(oxanorborneneimide) [55] polymers self-assembled into cationic NPs (Fig. 4A) [56]. This system provides an example of a cationic amphiphile with hydrophobicity tuned by the alkyl spacer on the quaternary ammonium side chain. With this family, amphiphilic polymers with a greater amount of hydrophobicity provided effective killing of bacteria (Fig. 4B). These NPs provided an antimicrobial with high efficacy against bacterial biofilms (including P. aeruginosa and methicillin-resistant Staphylococcus aureus [MRSA]). Of equal importance, high selectivity was observed for these NPs toward bacteria relative to mammalian cells (Fig. 4C). Perhaps most importantly, no resistance was observed with the polymer NPs during serial passaging, in stark contrast to that observed with antibiotics (Fig. 4D). This lack of resistance development suggests that this is one of the bactericidal pathways that nanomaterials can access that bacteria do not have defenses for. The structure provided by nanomaterials is an important determinant of their activity against biofilms. Barman et al. [57] developed a family of polymers featuring alternating co-polymers of polyurethane (Fig. 5A). This study showed that chain-folded structures generated from systems with flexible linkers F-PU-6a-c (Fig. 5B) provided effective killing of bacteria, whereas polymer F-PU-6 with a rigid backbone was not effective due to lack of stable nanoassembly formation. Efficient killing was demonstrated for biofilms of Gram-positive S. aureus and Gram-negative Escherichia coli (Fig. 5B). Delivery of antimicrobials into biofilms Nanomaterials have the capability of delivering therapeutics deep into biofilms. This penetration imparts antibiofilm activity to therapeutics that would not normally penetrate the biofilm matrix [58]. An example is provided by polymer-based essential oil delivery. Essential oils are used by plants to resist bacterial infection, and are another front on the antimicrobial war [59]. The hydrophobicity of these oils, however, inhibits their penetration into the highly charged EPS. Nanomaterials provide an effective tool for essential oil delivery [60]. Integration of these oils into nanoemulsion "nanosponges" provides efficient delivery into biofilms. As an example, carvacrol (from oregano) was incorporated into a crosslinked gelatin nanosponge (Fig. 6A) that effectively disrupted biofilms and killed resident bacteria [61]. This system was effective against multiple species of biofilms in vitro, and reduced bacterial load (Fig. 6B) and enhanced wound healing (Fig.
6C) in a 4-day established MRSA wound biofilm in a mouse model. Silver ions (Ag+) are highly effective at killing bacteria [42]. These ions are quite "sticky" and have difficulty penetrating into biofilms. Silver nanoclusters and NPs provide a strategy for the delivery of Ag+ [62]. Incorporation of these particles into a biofilm-penetrating nanomaterial featuring a polymethacrylate matrix functionalized with sugars and cationic amines enables treatment of the corresponding biofilms in vitro [63]. Localized activity is an important strategy for increasing efficacy while minimizing off-target effects. Nanomaterials have unique physical properties that can be used to enhance delivery into biofilms with spatial control. Superparamagnetic NPs (i.e., magnetic particles with easily flipped dipoles) can be heated using an alternating magnetic field [64]. This magnetic field heating was used to enhance the penetration and therapeutic efficiency of superparamagnetic iron oxide particles coated with silver rings [65]. In a related study, an alternating magnetic field was likewise used to disrupt biofilms treated with 70-nm Fe3O4 particles (Fig. 7) [66]. Far greater disruption, however, was observed using a rotating magnetic field that mechanically perturbed the biofilms. These modes of magnetically controlled biofilm disruption provide strategies that could readily be co-deployed with a wide range of other therapeutic agents. Catalytic generation of antimicrobial agents at infection sites provides another strategy for localization of antimicrobial activity. Biomimetic nanocatalysts (nanozymes) provide versatile "drug factories" for in situ generation of antimicrobial species [67]. The most common embodiment of nanozymes mimics the activity of peroxidase enzymes, activating hydrogen peroxide to ROS species [68]. Iron oxide (Fe3O4) efficiently generates ROS from peroxide and was used to treat Streptococcus mutans biofilms in vitro and in vivo [69]. Strategic use of the Food and Drug Administration-approved nanotherapeutic dextran-coated Fe3O4 NP ferumoxytol for ROS generation (Fig. 8A) moves this strategy substantially closer to clinical use [70]. In this work, effective elimination of biofilms was demonstrated ex vivo (Fig. 8B and C) and prevention of caries was established using a mouse model (Fig. 8D). Bioorthogonal catalysis employs reactions inaccessible to natural enzymes, opening new chemical pathways for in situ generation of therapeutics [71,72]. The use of NP scaffolds for bioorthogonal catalysis can be used to stabilize reactive catalysts [73,74] and impart useful properties such as biofilm penetration [75]. Antibiofilm activity has been demonstrated using a number of different nanozyme platforms. As an example, thermally gated iron porphyrin/AuNP nanozymes were used to generate antibiotic inside biofilms, effectively killing bacteria and disrupting the biofilm [76]. Notably, antibiotic activation and bactericidal activity could be controlled by temperature. Using the appropriate nanozyme, no antibiotic was generated and no killing was observed at ambient temperature (25 °C), whereas effective prodrug uncaging and antimicrobial and antibiofilm activity occurred at 37 °C (Fig. 9).
Summary and Outlook MDR bacteria have evolved to survive a wide range of chemical threats, using both chemical (drug resistance) and physical means (biofilms). Nanomaterials provide us with a full toolkit of novel (to bacteria) threats that can be employed against these pathogens. Importantly, nanomaterials can access bactericidal pathways for which bacteria have not developed resistance mechanisms. The size and surface properties of NPs can be engineered to provide antimicrobial activity that can be coupled with the ability to penetrate into biofilms. This biofilm penetration capability can be used to enhance other therapeutic strategies, including both direct antimicrobial delivery and in situ therapeutic generation using nanocatalysts. Full use of these capabilities will require selectivity for pathogens relative to the host cells, a requirement that is being addressed as the field evolves. Additionally, there are a number of emerging areas where nanotechnology can be used to treat biofilm infections, including inhibition of EPS matrix formation [25] and interruption of quorum sensing [77]. The challenges in bringing antibiofilm nanomaterials to the clinic are the same as for other nanotherapeutics. The in vivo activity and safety of nanomaterials is harder to predict than for small molecules, in large part due to the rich diversity of structures possible with nanomaterials. This barrier will be diminished as our understanding of nanomaterial behavior in vivo increases. The hurdle can also be avoided through use of already approved therapeutics [70] and diminished through the use of "safe" components for nanomaterial fabrication [61]. Taken together, we can hope that nanomaterials will enable us to avoid the predicted catastrophe caused by "untreatable" MDR biofilm infections. Fig. 1. (A) Bacterial resistance strategies used by planktonic and biofilm bacteria. (B) Additional protection of bacteria afforded by biofilms, along with phenotypic variation (e.g., persister cells) to evade threats. Fig. 2. Penetration of quantum dots (green) into red fluorescent protein-expressing E. coli biofilms. Micrographs indicate that (A) neutral and (B) anionic QDs do not interact with the biofilm. (C) Hydrophilic cationic nanoparticles do not co-localize with cells, indicating a through-EPS transport process. (D) Hydrophobic QDs co-localize with cells, suggesting a cell-to-cell transport process. Adapted by the author from Ref. [50], Royal Society of Chemistry. Fig. 4. Effective and selective killing of biofilms. (A) Structures of cationic poly(oxanorbornene) polymers. (B) MIC values of polymers against E. coli (CD-2). Log P is the calculated hydrophobicity value of the corresponding monomers. (C) Viability of E. coli biofilms and the 3T3 fibroblast cell co-culture model after incubation (3 h) with P5 NPs. (D) Resistance development during serial passaging in the presence of sub-MIC levels of antibiotics, and lack of inhibition observed with P5 nanoparticles. Adapted from Ref. [56]. © 2018, with permission of the American Chemical Society. Fig. 5. (A) Chain-folding of polyurethane polymers to provide self-assembled nanostructures. (B) Polyurethanes featuring rigid backbones (R-PU-C6) do not fold, aggregate, and are inactive against bacteria. (C) Polyurethanes with flexible backbones fold and effectively kill bacteria in biofilms. Adapted from Ref. [57] with permission of the Royal Society of Chemistry. Fig. 6.
(A) Fabrication of carvacrol-loaded gelatin nanoemulsions through emulsification and riboflavin-mediated UV-crosslinking. Gelatin nanoemulsions were tested against a 4-day-old MRSA biofilm in a mouse wound model. (B) Nanoemulsion reduces bacterial load as shown by colony counts from the infected wounds. (C) Wound healing is enhanced by the gelatin nanoemulsions after 4 days with one treatment per day. Adapted by the author from Ref. [61], Royal Society of Chemistry. Fig. 7. Confocal imaging of MRSA biofilms after treatment with 70-nm Fe3O4 nanoparticles (10 mg/ml) for 15 min. (A) No magnetic field (control); (B) partial disruption from heating of NPs in an AC magnetic field; and (C) complete disruption of biofilm from mechanical disruption through treatment of NPs with a rotating magnetic field. Adapted from Ref. [66] with permission of the Royal Society of Chemistry. Fig. 8. Ferumoxytol nanozymes are effective against oral biofilms. (A) Activation of ROS generation at low pH. (B) Confocal imaging of biofilms from pooled human plaque sample showing vehicle-control-treated biofilm. (C) Biofilm treated with ferumoxytol/H2O2 to generate ROS in situ. Total bacteria (blue), S. mutans cells (green), and EPS (red); scale bars: 50 μm. (D) Therapeutic efficacy of topical ferumoxytol/H2O2 against tooth decay in a mouse model (S. mutans UA159). Graph shows the number of moderate caries after a daily regimen of treatment for 22 days. Similar results were observed for initial and extensive lesions. Adapted from Ref. [70] under Creative Commons Attribution 4.0 International license. Fig. 9. Thermally gated bioorthogonal nanozymes. (A) Structure of the FeTPP catalyst. (B) Conversion of pro-antibiotic pro-Mox to moxifloxacin (Mox) through the FeTPP-catalyzed reduction of the aryl azide. (C) Nanoparticle AuTTMA used to encapsulate FeTPP. (D) Reversible thermal gating of catalysis through controlled aggregation of FeTPP. Gating temperatures were from 25 °C (Fe-NZ_25) to 37 °C (Fe-NZ_37) with 3 °C steps. (E) E. coli biofilms treated with pro-Mox and Fe-NZ_37 showing the effective killing of biofilms at 37 °C. (F) At 25 °C, no pro-Mox activation occurs with Fe-NZ_37 and concomitantly no biofilm killing was observed. Adapted by the author from Ref. [76], Cell Press.
Metabolomics of clinical samples reveal the treatment mechanism of lanthanum hydroxide on vascular calcification in chronic kidney disease Previous studies showed that lanthanum hydroxide (LH) has a therapeutic effect on chronic kidney disease (CKD) and vascular calcification, which suggests that it might have clinical value. However, the target and mechanism of action of LH are unclear. Metabolomics of clinical samples can be used to predict the mechanism of drug action. In this study, metabolomic profiles in patients with end-stage renal disease (ESRD) were used to screen related signaling pathways, and we verified the influence of LH on the ROS-PI3K-AKT-mTOR-HIF-1α signaling pathway by western blotting and quantitative real-time RT-qPCR in vivo and in vitro. We found that ROS and SLC16A10 genes were activated in patients with ESRD. The SLC16A10 gene is associated with six significant metabolites (L-cysteine, L-cystine, L-isoleucine, L-arginine, L-aspartic acid, and L-phenylalanine) and the PI3K-AKT signaling pathway. The results showed that LH inhibits the ESRD process and its cardiovascular complications by inhibiting the ROS-PI3K-AKT-mTOR-HIF-1α signaling pathway. Collectively, LH may be a candidate phosphorus binder for the treatment of vascular calcification in ESRD. Introduction Drug discovery is a long and complex process, and elucidation of the mechanism of action of drugs is difficult and costly. 1) Most financial losses caused by the failure of new drugs result from the inability to accurately predict the pharmacological mechanism of candidate drugs. Many methods are available to study the mechanism of action of drugs, such as serum pharmacology to determine the structure of drugs and targets. 2) However, these traditional strategies have limitations; they mainly focus on epigenetic and morphological observations or only identify the molecular targets. 3) To fully understand the wide range of molecular interactions and their effects required for drug action, novel methods are being developed. Omics is an emerging discipline that has increasingly gained interest in research on drug mechanism of action. This discipline includes metabolomics, genomics, and proteomics. Metabolomics is the comprehensive study of biochemical substances (or small molecules) present in the metabolome, cells, tissues, and body fluids. Metabolic research is a rapidly developing field worldwide that will have a profound impact on medical practice. Metabolic profiles provide a quantifiable reading of the biochemical state, ranging from normal physiology to various pathophysiologic states. A series of pioneering studies on human samples, supported by the National Institutes of Health through the Pharmacometabonomics Research Network, provided new ideas for the elucidation and discovery of many drug mechanisms. 4) Thereafter, the Pharmacometabonomics Research Network has become a complementary and powerful tool in precision medicine. 5) Given the long-term research on the metabolism of kidney diseases, metabolomics have quickly been incorporated into nephrology research. In recent years, metabolomics is increasingly being used in research and has proved to be useful for studying the mechanism of action of existing drugs and candidates in the drug discovery stage. 6), 7) CKD is characterized by gradual loss of kidney function, decrease in glomerular filtration rate, increase in serum urea and creatinine levels, and abnormal phosphorus excretion. 
According to the National Kidney Foundation, 26 million adults suffer from CKD in the United States and millions of adults are at an increased risk of CKD. 8) Epidemiological studies have shown that the prevalence of CKD in Chinese adults is 10.8%, representing 120 million patients. 9) As of 2017, there are about 1 million patients with CKD in China, associated with a huge economic and health burden on affected families and the entirety of society. There has also been a significant increase in the global mortality rate caused by CKD; CKD was the 27th most common cause of death in 1990 and became the 18th most common in 2010. 10) At present, more than 2 million people worldwide receive dialysis or have undergone kidney transplantation, accounting for only 10% of the population who need these treatments. 11) Vascular calcification is a common complication of CKD. The histological anatomy and degree of vascular calcification predict subsequent vascular death in patients with CKD. 12) In patients with CKD, a variety of factors, such as oxidative stress, dyslipidemia, elevated glycation end products, and mineral metabolism disorders, cause vascular calcification. 13) Intimal and medial calcification increases the risk of cardiovascular death in patients with ESRD by 10-100 times. 14) Therefore, the development of new drugs for treatment of CKD is essential. Previous studies reported that LH has a high affinity with phosphate, and it is easy to form a lanthanum phosphate complex with phosphate. Lanthanum phosphate has low water solubility, does not pass easily through the intestinal wall, and can be excreted through the feces, thereby achieving the effect of alleviating renal failure in CKD rat models, decreasing serum phosphorus levels, and inhibiting vascular calcification, equivalent to the effects of lanthanum carbonate (LC). 15), 16) LC has been used clinically and is regarded as safe, 17) so it is suggested that the safety of LH may also be good. Because of the long-term administration in our experiment, there is a very small amount of absorption of lanthanum ions. In order to more accurately determine the toxicity of lanthanum ions, we conducted a safety evaluation experiment. We found no obvious toxicity for the time being when the dose of LH was more than 50 times the current dose. Previous studies on CKD and vascular calcification mainly focused on rat models. However, there are interspecies differences in ESRD models; therefore, it is difficult to accurately predict the mechanism of action of candidate drugs in humans, which explains why many drugs fail in clinical research. In the current study, we used comprehensive omics of humans and animals to analyze the metabolites and signal pathways in humans. We also evaluated the possible in vivo and in vitro signal pathways in rats and human vascular smooth muscle cells (VSMCs). The use of clinical omics research can improve the success rate of pre-clinical drug research and development. Clinical omics research also provides a basis for LH to be studied as a candidate phosphorus binder. Materials and methods 2.1. Liquid chromatography-tandem mass spectrometry analysis of serum metabolites. Samples were incubated for 10 min with pre-chilled methanol at a ratio of 1:3 to precipitate proteins. The samples were centrifuged at 12000 r/min for 15 min at 4°C. The supernatants were analyzed using Dionex UltiMate3000 Rapid Resolution Liquid Chromatography and Q Exactive mass spectrum (Thermo Fisher Scientific, Sunnyvale, CA, U.S.A.). 
The analytes were separated in an XBridge BEH Amide chromatographic column (2.1 × 100 mm; Waters Co., Milford, MA, U.S.A.) using 0.1% formic acid and acetonitrile as mobile phases A and B, respectively. The flow rate was set at 0.4 mL/min, with an injection volume of 5 µL and column temperature of 25°C. The mass spectrometry signals were obtained using the positive and negative ion scanning modes. Participants. This study included 36 adult patients with CKD and 30 healthy controls for experiments performed between June 2020 and December 2020 at Ordos Central Hospital in Inner Mongolia. The patients had CKD stage 3-5 based on estimated glomerular filtration rate for at least 3 months. Patients were excluded if they had acute kidney injury, liver disease, gastrointestinal pathology, active vasculitis, cancer, need for dialysis, need for immunosuppressive therapy or chemotherapy, or history of kidney transplant. Blood samples were obtained after an overnight fast, and the serum was separated and stored at −80°C for subsequent studies. The research strictly complied with the ISHLT ethics statement, and was approved by the Ethics Committee of Ordos Central Hospital (no. ZXYY2021014), and all patients provided informed consent. Animals and experimental protocol. Six-week-old male Wistar rats, weighing 200 ± 20 g, were purchased from Weitong Lihua Biotechnology Co., Ltd. (Beijing, SCXK 2016-0006, China). The rats were housed in specific pathogen-free IVC cages. The rats were maintained in a 12-h light/dark cycle, ambient temperature of 21-22°C, and humidity of 40-50%. The rats were allowed to eat and drink freely. Before the start of the experiment, rats were adaptively reared for 1 week. The animal experiments were approved by the Medical Ethics Committee of Inner Mongolia Medical University (no. YKD2019019). Figure 1 depicts the process of establishing the CKD animal model. For the first 2 weeks, rats in the model group were administered 2% adenine suspension daily (dose: 200 mg/kg). During weeks 3-4, all model rats were administered the same concentration of adenine suspension on alternate days, according to their body weight. After the adenine suspension had been administered for 4 weeks, blood was collected from the fundus venous plexus of the rats to measure creatinine and urea nitrogen levels to determine the successful establishment of the CKD model. Then, we evaluated the serum phosphorus level in CKD rats to determine the progression of vascular calcification. The rats were randomly assigned to seven groups (15 rats per group): control group, model group, LH group (0.1 g/kg), LH group (0.2 g/kg), LH group (0.4 g/kg), LC group (0.3 g/kg), and calcium carbonate group (CC, 0.42 g/kg). During the experiment, rats in the control group were fed ordinary food, and those in the CKD group were fed 1.2% high-phosphorus food (Beijing Keao Xieli Feed Co., Ltd. 2021070107, Beijing, China). Serum biochemical indicators, such as creatinine (Scr), blood urea nitrogen (BUN), phosphorus, superoxide dismutase (SOD), malondialdehyde (MDA), parathyroid hormone (PTH), fibroblast growth factor 23 (FGF23), and lipid peroxide (LPO), were measured after 4 and 8 weeks of treatment. At the end of 12 weeks, all animals were sacrificed. Blood samples were collected from the abdominal aorta, and serum was separated and stored at −80°C. We collected the aorta from each animal by quick freezing in liquid nitrogen for subsequent western blotting.
The right kidney and thoracic aorta samples were immersed in 10% phosphate-buffered formalin for histological analysis. 2.4. Cell culture. Human aortic VSMCs (item #C740; Shanghai Yubo Biotechnology Co., Ltd., Shanghai, China) were cultured in growth medium consisting of high-glucose Dulbecco's modified Eagle's medium containing 10% fetal bovine serum and 1% penicillin-streptomycin. After the cells reached confluence, cells at passage 5-7 were used for experiments. The medium was changed every 2 days, and 3 mmol/L of Pi (consisting of Na2HPO4/NaH2PO4, pH = 7.4) was added for 6 days to induce cell calcification. On the 4th day, different concentrations of lanthanum chloride (LaCl3) were added for continuous culture for 2 days. Cells were divided into a control group (normal phosphorus, 1 mmol/L phosphate buffer), a control … 2.6. Histologic analyses. For histological analysis, we fixed rat kidney and thoracic aorta tissues from each group with 10% phosphate-buffered formalin, then embedded them in paraffin, and stained with hematoxylin and eosin (H&E), Masson stain, Von Kossa stain, and Verhoeff Van Gieson (EVG) stain in accordance with the method described below (Wuhan Saville Biotechnology Co., Ltd., Wuhan, China). Images were analyzed using a Leica DM2000 microscope (Leica Microsystems, Wetzlar, Germany). 2.6.1. H&E staining. Rat kidney and thoracic aorta tissues were fixed in 4% paraformaldehyde for 24 h, and then the kidney was cut in the coronal plane whereas the thoracic aorta was cut in the sagittal plane. The tissues were placed in a dehydration box with an alcohol gradient. After removal from the dehydration box, tissues were placed on a shelf for embedding with melted wax. Then, the tissues were cooled at −20°C. After the wax block was solidified, it was removed from the shelf and trimmed. The block was cut into 4-micron slices using a microtome. The slices were floated on warm water at 40°C to flatten the tissues. Then, the tissues were removed with a glass slide and baked in an oven at 60°C. After dehydration, the slices were stained with an aqueous solution of hematoxylin for several minutes. The slices then underwent color separation in acid water and ammonia water, each for a few seconds. Then, they were rinsed with running water for 1 h and distilled water was added over 2-3 min. The slices were dehydrated with 70% and 90% alcohol for 10 min each. Then, alcohol eosin staining solution was added for 2-3 min, and the samples were dehydrated and made transparent using xylene. The transparent sections were mounted with gum and covered with a cover glass for sealing. After the gum had slightly dried, a sticker was attached and microscopic examination was performed. Microscopic images were acquired and analyzed. 2.6.2. Masson staining. The nucleus was stained with Weigert hematoxylin solution for 5-10 min, washed thoroughly with Masson Ponceau acid red for 5-10 min, immersed in 2% glacial acetic acid aqueous solution for 2-3 min, differentiated with 1% phosphomolybdic acid aqueous solution for 3-5 min, and stained with aniline blue or light green solution for 5 min. Then, the sample was soaked in 0.2% glacial acetic acid aqueous solution for a period and the slides were mounted with natural resin glue. Microscopic images were collected and analyzed. 2.6.3. Von Kossa staining. The blood vessel slices were removed from the glass slides, treated with APES, and dried. Toluene was used twice for dewaxing for 15 min each time, and 100%, 95%, and 80% gradient ethanol was added.
The slices were washed with water, and 1 mL of 1% silver nitrate solution was added. The solution was irradiated under sunlight for 30 min. The silver nitrate solution was removed and 1 mL of 5% sodium thiosulfate solution was added and left for 1 min. Basic fuchsin was used for backstaining for 10 seconds, followed by dehydration twice with 95% and absolute ethanol. After dehydration, the sections were cleared in xylene; the slides were mounted with natural resin glue, and microscopic images were obtained and analyzed. The cells were washed with phosphate-buffered saline (PBS; pH: 7.2-7.4) and fixed with 4% paraformaldehyde for 30 min at 37°C. The cells were washed again with PBS, treated with reagent A in the Von Kossa staining kit (Cat. #G1452, Solarbio), and irradiated with ultraviolet light until the calcium phosphate crystals turned black. The cells were washed three times with PBS to remove excess dye. After 2 min of sodium thiosulfate treatment, the cells were washed three times with PBS. The cells were stained with eosin staining solution (Cat. #G1120, Solarbio) and observed using a microscope. 2.6.4. EVG staining. The blood vessel slices were stained with EVG dye solution (hematoxylin, ferric chloride, and iodine solution = 5:2:2) for 30 min and rinsed with tap water. The slices were differentiated with ferric chloride differentiation solution and rinsed with tap water. This process was repeated to control the degree of differentiation visible using a microscope and continued until the elastic fibers appeared purple-black and the background was almost colorless. EVG dye solution was used for 1-3 min for counterstaining. The slices were rapidly washed and dehydrated with absolute ethanol. The film was sealed with neutral gum for 1-5 min and examined using a microscope. Microscopic images were acquired and analyzed. Alizarin red staining. The cells were washed with PBS (pH: 7.2-7.4) and fixed with 4% paraformaldehyde for 30 min at 37°C. The cells were washed again with PBS and stained with 1% Alizarin red stain (pH: 4.2, Cat. #G1452, Solarbio) for 5 min. The cells were washed with PBS until the staining solution was washed out. The culture plate was analyzed using a microscope and photomicrographs were obtained. Then, calcium crystals were dissolved with 10% acetic acid staining solution. A multi-detection microplate reader (Dynex, Lincoln, U.K.) was used to measure the absorbance at 420 nm to quantify the calcification. 2.8. Western blotting analysis. A cocktail of complete phosphatase and protease inhibitors (Thermo Fisher Scientific) was used to lyse rat aortic tissue and VSMCs. Then, the lysate was centrifuged for 10 min at 12000 rpm and 4°C, and the supernatant was collected. The supernatant was mixed with Roti-Load1 buffer (Carl Roth GmbH, Karlsruhe, Germany) at a ratio of 4:1. The proteins were boiled at 100°C for 10 min. Then, equal amounts of protein were separated by SDS-PAGE and transferred to a PVDF membrane. Thereafter, we incubated the proteins overnight at 4°C with the primary antibody. The primary antibodies used were as follows: monoclonal rabbit anti-PI3K (1:1000), monoclonal mouse anti-BMP-2/4 (1:600), monoclonal mouse anti-RUNX2 (1:600), polyclonal rabbit anti-phospho-PI3K (1:500), monoclonal rabbit anti-phospho-AKT antibody (1:1000), monoclonal rabbit anti-AKT (1:1000), monoclonal rabbit anti-phospho-mTOR (1:1000), monoclonal rabbit anti-mTOR (1:1000), polyclonal rabbit anti-β-actin (1:2000), and anti-HIF-1α antibody (1:5000).
Afterward, the proteins were incubated with the secondary antibody for 1 h at room temperature in the dark, using secondary anti-rabbit IgG (HDL) (1:5000) or anti-mouse IgG (HDL) (1:5000). Finally, we used stripping buffer (Thermo Fisher Scientific) to strip the membrane at room temperature for 40 min for subsequent testing and loading control. The protein bands were exposed on an Odyssey CLX dual-color infrared laser imaging system (LI-COR) and quantified using ImageJ software. The data were standardized as the ratio of total protein/β-actin, nuclear protein/lamin A, or phosphorylated/total protein. 2.9. Quantitative real-time RT-qPCR. The RNA Simple Total RNA Kit (Cat. #DP419, Tiangen Biochemical Technology [Beijing] Co., Ltd.) was used to extract total RNA from cells and thoracic aortic tissues, in accordance with the manufacturer's instructions. Then, a reverse transcription kit (AceqPCRRT kit, Cat. #FSQ-101, ToYoBo Life Science, Osaka, Japan) was used to synthesize cDNA using the extracted total RNA. In addition, a Piko real-time PCR detection system (Thermo Fisher Scientific) was used for quantitative analysis of RT-qPCR. The primer sequences are shown in Table S1. The melting curve of the PCR product was used to prove its specificity, and each procedure was repeated twice. The relative mRNA expression fold change was measured by the 2^−ΔΔCt method, using GAPDH as the internal control. 2.10. Statistical analysis. Data are expressed as mean ± standard error of the mean (SEM). Statistical analysis and drawing were performed using GraphPad Prism 8.0 (GraphPad Software, San Diego, CA, U.S.A.). Using one-way analysis of variance, data were compared between the groups. A difference of p < 0.05 was considered to be statistically significant. Partial least squares discriminant analysis (PLS-DA) in SIMCA-PD13.0 (Umetrics, AB, Umeå, Sweden) and principal component analysis (PCA) were used to assess normalized gas chromatography-mass spectrometry data. Variable influence on projection values were used to identify significant variables with values >1.0 and p < 0.05. The significant variables were used to identify the spectral peaks. Student's t-test was used to analyze differences between the groups. The taxonomic rank differential between groups was determined using Student's t-test (v3.1.2; R programming language). Correlations between genera abundance and rat behavior were assessed using Spearman's correlation coefficients (R language). p < 0.05 was considered statistically significant. LH delays the process of renal failure and protects residual renal function in CKD rats. After administering adenine gavage for 4 weeks, we measured serum creatinine and BUN levels in CKD rats. We found significant elevation of serum BUN and creatinine levels in the model group compared with the control group (Table S2) (p < 0.01). This suggested that the CKD hyperphosphatemia rat model was successfully established. Compared with the CKD group, the creatinine and BUN levels of the LH-treated groups were significantly reduced (p < 0.01 and p < 0.05, respectively) after LH administration for 4 weeks. After LH was administered for 8 weeks, serum creatinine levels in the LH groups (0.1 g/kg, 0.2 g/kg, and 0.4 g/kg) decreased by approximately 24%, 26%, and 40%, respectively, compared with the model group; BUN was reduced by approximately 20%, 25%, and 31%, respectively. These reductions were observed in a dose-dependent manner (Figs. 2A and 2B).
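As a brief illustration of the relative-quantification step described above, the following Python sketch computes a 2^−ΔΔCt fold change with GAPDH as the internal control; the Ct values are invented for illustration and do not come from the study.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-DDCt method (reference gene, e.g., GAPDH)."""
    dct_treated = ct_target_treated - ct_ref_treated   # normalize treated sample to the reference gene
    dct_control = ct_target_control - ct_ref_control   # normalize control sample to the reference gene
    ddct = dct_treated - dct_control                   # difference relative to the control sample
    return 2 ** (-ddct)                                # fold change versus control

# Illustrative (invented) Ct values: a target gene in a model-group aorta versus a control aorta.
print(fold_change_ddct(ct_target_treated=24.1, ct_ref_treated=18.0,
                       ct_target_control=26.3, ct_ref_control=18.2))   # >1 indicates up-regulation
```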
Furthermore, we also observed these protective effects in renal pathological examination after LH treatment. Compared with the control group, CKD rat specimens exhibited many cystic dilated tubules, adenine crystallization, protein casts, and glomerular necrosis (Fig. 2C). We also observed that the renal interstitium was accompanied by a large number of inflammatory cell infiltrates and interstitial fibrosis in the model group (Fig. 2D). However, after LH treatment, renal injury, cystic dilatation of renal tubules, and inflammatory cell infiltration were improved. The LC group and CC group also showed a protective effect on the kidneys. Notably, compared with the LC group and CC group, there was more significant improvement in kidney pathology in the LH group (0.4 g/kg), suggesting that LH has greater therapeutic effects. However, there was no significant improvement in renal fibrosis in any group (Fig. 2D). LH inhibits vascular calcification induced by CKD. Vascular calcification in ESRD is the main cause of the high mortality in these patients. In patients with ESRD, phosphate excretion is impaired and the phosphorus load in the body is exacerbated, eventually leading to vascular calcification. Therefore, we measured serum phosphorus and calcium levels after LH treatment and found that, compared with CKD rats, serum phosphorus in the LH-treated groups was significantly lower (p < 0.01). After 8 weeks of treatment, serum phosphorus levels were reduced by approximately 28%, 34%, and 47% in the LH groups (0.1 g/kg, 0.2 g/kg, and 0.4 g/kg, respectively) compared with the model group. These effects were seen in a dose-dependent manner. Unfortunately, no significant difference was observed in the serum calcium levels after treatment for 4 and 8 weeks. We also measured the levels of FGF23 and PTH after administering the adenine diet (Table S2). We found that, compared with the control group, PTH levels in the model group were significantly increased by approximately 40%. Additionally, the FGF23 level was also significantly increased (p < 0.01). After 8 weeks of treatment, compared with the model group, PTH level in the LH groups (0.1 g/kg, 0.2 g/kg, and 0.4 g/kg) was significantly reduced by approximately 11%, 15%, and 17%, respectively. Additionally, the FGF23 level was significantly reduced by approximately 17%, 20%, and 25%, respectively, in a dose-dependent manner. This suggested that LH decreases serum PTH and FGF23 levels, regulating calcium and phosphorus metabolism and reducing the incidence of cardiovascular diseases (Fig. 3). Morphologic analysis of rat thoracic aortic tissues showed distinct pathologic changes (Fig. 3C), with enlargement and reduced elasticity of the aortic tissue. Additionally, a large amount of black calcification was observed in the media (Fig. 3B), and broken elastic fibers were observed in the calcified area (Fig. 3C). This showed that LH significantly inhibits vascular calcification and the breakage of elastic fibers in CKD rats. Notably, LH (0.4 g/kg) significantly inhibited vascular calcification. Vascular calcification did not improve in the CC group.
The demographic characteristics of healthy controls and patients with ESRD are shown in Tables S3 and S4, including age, gender, blood pressure, serum phosphorus, and information related to primary disease. This study included 36 healthy controls (Table S3) and 36 patients with ESRD (Table S4). Serum phosphorus levels of healthy controls and patients with ESRD were 1.19 ± … Using liquid chromatography-tandem mass spectrometry, we identified 2,190 variables in positive ion mode and 794 variables in negative ion mode from 66 subjects. PCA was used to analyze the serum metabolites of healthy controls and patients with ESRD. The serum metabolites from the two groups were well distinguished (Fig. 4A). PLS-DA was used to identify significant metabolites that differentiated the two groups. The results showed that the metabolites of the two groups were well separated, with a reduced dispersion within the ESRD group. This model had a high explanatory and prediction rate (R²Y = 0.92 and Q²Y = 0.92). Permutation experiments demonstrated that the R² and Q² intercepts were 0.49 and −0.013, respectively (Fig. 4B). Meanwhile, a Q² intercept < 0 indicated a lack of overfitting. The control group (1-12) and ESRD group (1-12) samples were used as prediction samples. The prediction samples were left unclassified. The results showed that the model has good predictive ability (correlation coefficient, R² = 0.870819 and RMSEE = 0.111856) (Figs. 4C-4E). A significant change was observed in 32 metabolites (log-fold change > 1, p < 0.05) between the two groups. In a network analysis of significant genes and metabolites (Figs. 4G and 4H), the SLC16A10 gene was activated in ESRD. Furthermore, the SLC16A10 gene was related to six significant metabolites (Fig. 4F) (L-cysteine, L-cystine, L-isoleucine, L-arginine, L-aspartic acid, and L-phenylalanine) and the PI3K-AKT signaling pathway (Fig. 4I). We found that MDA, a product of oxygen free radicals, was increased by 76% in the serum of patients with ESRD, and the oxygen free radical scavenger SOD was decreased by 57%, compared with healthy people (Fig. 4J). Furthermore, we analyzed the levels of the six significant metabolites in CKD rats treated with adenine. Compared with the control group, L-arginine and L-aspartic acid increased, whereas L-cysteine, L-cystine, L-isoleucine, and L-phenylalanine decreased, in the model group. Compared with the control group, the administration group had decreased L-arginine and L-aspartic acid, and increased L-isoleucine and L-phenylalanine (Fig. 5). These results indicated that these six metabolites affected the PI3K-AKT signaling pathway. LH inhibits vascular calcification in rats by inhibiting the PI3K-AKT-mTOR-HIF-1α signaling pathway, activated by reactive oxygen species in vivo and in vitro. CKD causes oxidative stress. Many studies have reported that reactive oxygen species (ROS) activate the PI3K-AKT signaling pathway. 18)-22) Although LH may be a promising candidate drug for CKD treatment compared with LC, its mechanism of action is not clear. This metabolomics, proteomics, and network pharmacology study showed that CKD and its complications (vascular calcification) are related to the PI3K-AKT signaling pathway (Figs. 4, 5, and S1). Previous studies showed that activation of the PI3K-AKT-mTOR-HIF-1α signaling pathway promoted osteoblast growth and proliferation. Additionally, the signaling pathway can be activated by ROS.
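For orientation, the following Python sketch shows how an unsupervised PCA overview and a cross-validated Q² statistic can be obtained for a metabolite feature table; it uses scikit-learn's PLS regression as a simple stand-in for the PLS-DA performed in SIMCA-P, and the data are randomly generated placeholders rather than the study's LC-MS/MS features.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: samples x metabolite-feature matrix; y: 0 = healthy control, 1 = ESRD (placeholder data).
rng = np.random.default_rng(0)
X = rng.normal(size=(66, 200))          # a real analysis would load the LC-MS/MS feature table here
y = np.array([0] * 30 + [1] * 36)
X[y == 1, :10] += 1.5                   # inject a group difference so the toy example separates

# Unsupervised overview (Fig. 4A-style score plot) on autoscaled data.
scores = PCA(n_components=2).fit_transform((X - X.mean(0)) / X.std(0))
print(scores[:2])

# Supervised model with a cross-validated Q2 as a rough analogue of the reported model statistics.
pls = PLSRegression(n_components=2).fit(X, y)
y_cv = cross_val_predict(pls, X, y, cv=7).ravel()
q2 = 1 - ((y - y_cv) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"Q2 = {q2:.2f}")
```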
23), 24) We aimed to explore whether LH inhibits vascular calcification in patients with CKD through the ROS-PI3K-AKT-HIF-1α signaling pathway. Therefore, we verified the changes in ROS in CKD rats (Figs. 6A and 6B), and we found significantly increased levels of free oxygen radicals in the model group compared to the control group (p < 0.01). After treatment with LH, free oxygen radicals decreased in CKD rats in a dose-dependent manner, and SOD was significantly increased (p < 0.01). Furthermore, we examined the influence of ROS on osteogenic differentiation using western blotting analysis. Compared with the control group, the expression of p-PI3K, p-AKT, and p-mTOR in the CKD group was significantly increased (p < 0.01), the expression of HIF-1α in the nucleus was significantly increased by 67%, and the expression of the downstream protein vascular endothelial growth factor (VEGF) was significantly increased by 89% (Fig. 6C). Compared with the control group, the expression of BMP-2 and RUNX2 proteins in the model group was significantly increased by 60% and 62%, respectively (p < 0.01), whereas the VSMC marker SM22α was significantly reduced by 49% (p < 0.01). This indicated activation of the PI3K-AKT-HIF-1α signaling pathway and transdifferentiation of VSMCs into osteoblasts. After LH treatment, compared with the model group, expression of SM22α was significantly increased (p < 0.01), and the expression levels of p-PI3K, p-AKT, p-mTOR, BMP-2, and RUNX2 were significantly decreased (p < 0.01). Next, we used real-time PCR technology to detect the expression levels of VEGF, SM22α, BMP-2, and RUNX2 mRNA. Compared with the WT rats, the expression level of the VSMC marker SM22α mRNA in CKD rats was significantly reduced (p < 0.01), and the expression levels of the osteogenic markers BMP-2 and RUNX2 mRNA were significantly increased (p < 0.01). Additionally, the expression level of VEGF mRNA in the HIF-1α pathway was significantly increased (p < 0.01) (Fig. 6G). Treatment with LH reversed the expression levels of these genes in a dose-dependent manner. In hyperphosphatemia complicated by CKD, there are various causes of vascular calcification, including the formation of CPP or nano-hydroxyapatite, matrix vesicles, and transdifferentiation of VSMCs induced by a high-phosphate environment. 25)- 28) In addition, the above three processes are completely different mechanisms. However, in this study, we only focused on the phosphate-induced VSMC transdifferentiation process. The other two processes are not involved. In our experiments, we examined whether ROS affects PI3K-AKT-HIF-1α expression in VSMC transdifferentiation. We used western blotting to determine whether lanthanum ions can block VSMC calcification by inhibiting the activation of the ROS-PI3K-AKT-HIF-1α signaling pathway. Our results showed that increased phosphorus load increases ROS production by VSMCs (Fig. S2), which activates the PI3K-AKT-HIF-1α signaling pathway. Protein detection and mRNA detection showed similar results in vitro and in vivo. The addition of inhibitors partially inhibited osteogenic transdifferentiation of VSMCs, suggesting that LH indirectly inhibits PI3K-AKT-HIF-1α (Figs. 7A-7D). Alizarin red staining and Von Kossa staining results suggested that high phosphorus induces an increase in mineralized nodules (Fig. S3). The addition of lanthanum ions prevents ROS production induced by high phosphorus and the production of mineralized nodules (Figs. S2 and S3).
Inhibition of the PI3K-AKT-HIF-1α signaling pathway (Fig. 7) prevents the transdifferentiation of VSMCs.

Discussion

In patients with CKD, mechanisms such as anemia, microvascular disease, hyperphosphatemia, and mitochondrial dysfunction contribute to a hypoxic state. Under hypoxic conditions, mitochondria produce large quantities of ROS, which impair nephron excretory function; ROS also disrupt the maintenance of homeostasis and cause metabolites to accumulate.29) Renal regulatory mechanisms, such as tubuloglomerular feedback, the myogenic reflex of the supplying small arteries, and the renin-angiotensin-aldosterone system, are also affected by ROS; the kidneys are therefore unable to compensate for imbalances in water, electrolytes, and the acid-base system. As a result, renal phosphorus excretion is further impaired and the body's phosphorus load increases; this positive feedback loop aggravates oxidative stress and the hypoxia-stress response.30),31) ROS activate the PI3K-AKT signaling pathway, promote the expression of its downstream target genes (mTOR and RUNX2), and promote VSMC calcification and osteoblast proliferation.32),33) The level of oxidative stress was significantly increased in patients with ESRD (Fig. 4), which is consistent with the signaling pathway that we identified.

HIF-1 mediates a series of cellular responses to hypoxia and plays an important role in vascular calcification. Under hypoxia, oxygen-dependent hydroxylation of HIF-1α is limited, so the subunit is stabilized and activates downstream signaling pathways, including VEGF signaling. VEGF signaling plays a central role in angiogenesis and bone formation through its effects on chondrocytes and osteoblasts. Previous studies reported that ROS activate the PI3K-AKT-HIF-1α signaling pathway to promote the osteogenic marker RUNX2.34) Those previous results were consistent with our metabolomics and proteomics results (Figs. 3 and S1); our study therefore confirmed the signaling pathway in vivo and in vitro. The results showed that p-PI3K, p-AKT, and p-mTOR levels in the model group were significantly higher than in the control group (Fig. 6). Additionally, in the model group the expression level of HIF-1α in the nucleus was significantly increased by 67%, the expression of the downstream protein VEGF was significantly increased by 89%, and BMP-2 and RUNX2 expression levels were increased by 60% and 62%, respectively, compared with the control group. The expression level of SM22α decreased by 49% (Fig. 6) in vitro and in vivo. Notably, after adding the inhibitor, the expression of the above-mentioned proteins was reversed (Fig. 7). Similarly, after administration of lanthanum ions, expression of the above-mentioned proteins was reversed in a dose-dependent manner, although lanthanum ions influence the pathway indirectly (Fig. 7). Our results suggest that VSMCs differentiate into osteoblast-like cells and that lanthanum ions inhibit this transdifferentiation. Therefore, LH inhibits the progression of CKD and vascular calcification by blocking the ROS-PI3K-AKT-HIF-1α signaling pathway (Fig. 8).

Vascular calcification is the main reason for the high mortality of patients with CKD. CKD is mainly treated with kidney transplantation and dialysis, but these methods are very expensive. The development of drugs to treat vascular calcification is therefore key to reducing CKD mortality.
However, drug development, from preclinical research to clinical research, is a long and complicated process.35) The development of new therapeutic agents is a multistage process with ethical, scientific, and economic challenges.36) The probability of successful clinical development is about 10%, and few drugs prove effective in clinical trials.37) For example, the clinical success rates of systemic anti-infective, neuropharmacological, and cardiovascular drugs are 27%, 8%, and 7%, respectively.38) Lack of efficacy is the main reason for the failure of drug development in clinical studies, which may be explained by the poor external validity of preclinical (cell, tissue, and animal) models of human diseases and by species differences.39) In our study, we used metabolomics to predict the possible mechanism of an effective candidate drug in patients with ESRD and to simulate the disease process in a preclinical study for new drug discovery. In previous metabolomics studies, metabolites were usually administered to rats to observe changes in signaling pathways; other studies only assessed the metabolites themselves.40)-42) In this study, we propose a method that combines human and animal metabolomics to screen signaling pathways, thereby avoiding interspecies differences. It was also demonstrated for the first time that LH (a white powdery solid with a molecular weight of 189.92752 g/mol, hardly soluble in water; its chemical formula is shown in Fig. 9) can be used as a new phosphorus binder to inhibit the occurrence of vascular calcification through the ROS-PI3K-AKT-mTOR-HIF-1α signaling pathway, which increases the likelihood of successful clinical studies of the candidate phosphorus-binding agent LH.
Imagined Immunities: Abjection, Contagion and the Neoliberal Debt Economy

This paper addresses the pervasiveness of contagion as a structure of feeling by putting Maurizio Lazzarato's biopolitics of indebtedness in dialogue with Roberto Esposito's insight that debt is the very condition of both community and its dialectical opposite, immunity. Where Esposito does not sufficiently engage the role of financialized or neoliberal capitalism within the contemporary crisis, Lazzarato develops a Marxian account of debt that complements Esposito's "immunitarian biopolitics," revealing it as an intrinsically capitalist one, and allows us to ground it in contemporary power structures through Marx's figure of M-M'.
Abjection: Two "Chiral" Scenarios

In the 2009 science fiction film Surrogates (Jonathan Mostow), an implausible 98% of the world's inhabitants interact with one another using remote-controlled, humanoid robotic avatars known, eponymously, as "surrogates." From the "comfort and safety of their own homes," we are told, people experience the world, including the company of others, "without risk of disease or injury," such that violent crime and communicable disease have "dropped to record lows." The premise, if on the one hand a logical extrapolation of our tendency to buffer social interactions through emails, text messages, social media, and the like, nonetheless raises a few vexing questions (not least logistical ones), but it also speaks to something more deeply entrenched in the forms of subjectivity and social life proper to late capitalist modernity, projecting its particular biopolitics onto a fantasized future. To wit, a scenario in which people continue to occupy a shared space, but from within the security and isolation of their own private dwellings, is one of total immunity-not just in the biomedical sense (e.g., from airborne threats like toxins and disease), or even from criminal violence, but also from the sheer inconvenience of encounters with other persons and the corresponding obligations that ensue. Indeed it is in this quite precise sense that I invoke Roberto Esposito's etymological inquiry into the term, according to which munus denotes a duty, obligation, or debt, and the prefix im-, therefore, an exemption from such, with the juridical usage predating the medical one by a few centuries at least. In this reading too, its dialectical counterpart, community, points to a shared obligation or debt that in its constitutive un-payability, and in the modern context of secularization and individualization, demands the erection of an "apparatus of immunization" that for Western modernity comprises the dispositifs of property, liberty, and civil and political rights. Premised on "the proper"-and by extension on its cognates, such as property and proprietary individualism-this "immunitarian biopolitics" is specifically a capitalist one, most effectively given form in the market, where subjects are enjoined to participate in social and economic exchange without exposing themselves to "obligation" in either its existential sense or the hierarchical one demanded by feudalism. For Esposito's contemporary Luigino Bruni, indeed, it is precisely the "rationale of the market" that points, in theory, to the kind of futuristic vision depicted in Surrogates. Bruni asks us to imagine a world

in which each family has its own house acoustically and visually isolated from others . . . the few remaining skyscrapers are constructed so as to avoid all encounters on the stairs or on the landings; . . . office and workplace communications are solely via email . . . Conflicts have been eliminated because the precondition for conflict, that is, the need to maintain a common ground, has itself been eliminated.

Though robotic avatars are not included in Bruni's vision, Surrogates plays out the culmination of Esposito's "immunitarian biopolitics," bringing the immunitary right to carve out "one's own" from "the common" to the point where community, conceived as a subjective threshold or rupture, is safely reconfigured as pure simulacrum, to be navigated and withdrawn from at will by fully "immune" individuals safely ensconced in their homes.
Needless to say, events take an unfortunate turn. Inevitably people's surrogates start getting "killed," and ultimately all of them, mutually networked as they are, fall victim to (of all things) a virus. In a particularly jarring scene towards the end, a bustling rush-hour crowd of surrogates collapses en masse as a total systemic failure takes hold; subsequent scenes depict a cityscape strewn with these ambiguous "bodies," cars veering helplessly into one another and onto sidewalks. Offices, subways, and shops are littered with uncanny/abjected figures that we "know" not to confuse with their actual human operators, but that deploy nonetheless the semiotics of a large-scale and properly human catastrophe. Indeed it is at precisely this moment that the remote-controlled avatars are most uncannily similar to their more familiar biohorror counterparts, the virally transformed ex-humans or "zombies" whose fate is to be slaughtered without hesitation or regret. Indeed the same, eerie "posthuman" silence hangs over these abjected, slaughtered figures in scenes from, for instance, The Walking Dead as over the abruptly disabled avatars of Surrogates. And the resonance between the two ostensibly disparate scenarios, I believe, suggests that the total immunity of Bruni's extrapolative vision, and the total contagion of the more Hobbesian, post-apocalyptic one, may be anything but mutually exclusive. This resonance might be described as "chiral," in the sense of that "mirroring" property of certain chemical compounds that, while identical, are non-superimposable and have very different effects. I have previously borrowed this term from an early episode of Breaking Bad, in which a chemistry lecture delivered by the protagonist, Walter White, foreshadows his transformation from hapless, insolvent family man to criminal mastermind, and used it to read the criminal underworld of that series as an inverted mirror-image of our own, late capitalist or neoliberal world. In the present context, "chirality" helps to illuminate how a "structure of feeling" in which an excess of immunity (Surrogates) is "mirrored" by its catastrophic opposite (e.g., The Walking Dead) derives from a particular moment in late capitalism, namely, the ascendancy of what Maurizio Lazzarato calls the "neoliberal debt economy."
If it seems counterintuitive to read Esposito's poststructuralist take on immunity-for which "debt" is of an existential nature-alongside Lazzarato's autonomist Marxist reading of a more distinctly financial debt, it is nonetheless on the plane of the biopolitical that the two can be articulated together. The linchpin is what Esposito conceives as the dispositif of "the proper," expressed in, among other things, the relation of ownership that, under capitalist modernity, comes to ground the whole complex of "immunitary" rights in the structural and symbolic order of private property (including, of course, capital). Esposito's "immunitarian"-and in principle egalitarian-biopolitics is indeed distinctly capitalist, coming into its own, so to speak, just as "individualistic models" of social organization begin taking the place of communitarian ones. For Lazzarato, in turn, it is precisely property (a modality of the proper) that under conditions of financialization is "deterritorialized" into the realm of capital securities and debt instruments. In the process, I would argue, the "immunitary" mediations of Western modernity as described by Esposito, grounded as they are in "the proper," are deterritorialized as well. Yet neoliberal ideology doubles down in its insistence on, precisely, the immunitary functions of ownership, i.e. property, as the foundation of existential security, even as the generalized crisis precipitated by financialization exposes more and more citizens to the risks of precarity and expropriation. Herein lies the "chiral" center between visions of total immunity on the one hand and total contagion on the other. As a structure of feeling-the articulation of affective investments and tensions pointing to the "omissions," the "consequences, as lived," of otherwise "formally held and systematic beliefs," namely those of the "ownership society" promulgated by these ideologies-this tension between immunity and contagion expresses a crisis in the dispositif of the proper, in the mediation between power and life.

In the following pages, I will review the salience of the contagion metaphor in particular, and flesh out Esposito's claims about immunity and the "immunized community" as the biopolitical objective of Western capitalist modernity. But where Esposito's otherwise immensely productive inquiry tends to bracket the function of capitalism in this immunitary configuration, I will turn to Lazzarato's biopolitics of debt, with particular attention to the shift from the Marxian equation for capitalist accumulation (M-C-M') to that for financial accumulation (M-M') as encoding the dissolution of "C" as a modality of the proper with an "immunitary" function. The figure of "human capital" that emerges in this financialized biopolitics is not just the "productive" one of Foucauldian theory (the "entrepreneur of the self") but also a liminal and constitutively exposed one for whom immunity is both an inescapable imperative and an utter impossibility, and it is the inevitable anxieties arising from this condition of perpetual exposure that find their cultural mediations not only in biohorror but in the pervasive sense of crisis that permeates contemporary politics.

Contagion as the 'Hermeneutics of Everything'

There is no question that contagion has in recent years taken on the function of what Angela Mitropoulos calls a "hermeneutics of everything."
What Greg Bird and Jonathan Short call the "crisis of the proper" is one in which contact in all of its forms threatens the ontological integrity of the subject, from body to body politic. Diseases, chemicals, vaccinations, endocrine disruptors, glutens, and parabens threaten bodily boundaries and the systems internal to them, while distracted driving, sexting, obesity, drug abuse, and mass shootings take on "epidemic" proportions at a social level, and while terrorism and climate change threaten to breach our best defenses on the national and the global scale alike. Though body panic and its analogues are nothing new, what seems to distinguish the present is the way in which disparate threats merge together, generating a permanent immunological state of exception that promises no return to any norm; indeed it is difficult to imagine a future in which immunological boundaries would need to be anything but perpetually reinforced. Bird and Short's "crisis of the proper," indeed, entails a sense that "everything is . . . brought into proximity and correlation" and that "nothing . . . can be effectively isolated, insulated [and] immunized as proper to itself"; the loss of sovereignty individual, popular and national, the breakdown of the law's protective or immunizing function, the implosion of the symbolic order of liberal-democratic capitalism, and the concomitant exposure to violence and expropriation would seem to implicate the very ontology of the modern subject, above and beyond any particular ideological commitments.

Modern subjectivity is for Esposito an "invention" of modernity's "immunization paradigm," starting with the Hobbesian social contract whereby persons relinquished their acquisitive instincts in exchange for sovereign protection; "immunity" in this context means the legally protected right to mark out "one's own," institutionalized in property, liberty, and civil and political rights. If "community" denotes not a positive entity but a liminal space between self and non-self, with the munus a threshold of exposure and contamination binding the subject to an unchosen debt, then the modern Western subject is invented as an insulated one for whom the dangers of that threshold have been neutralized, and who is thereby relieved of any such debt. To be clear, there is no golden-ageism here connecting a "genuine" community (as "shared obligation") to some precapitalist utopia-the "debts" of feudalism were non-reciprocal and rigidly hierarchical, "immunizing" those at the top from the contaminations to which the rest were constitutively exposed (or, to put it another way, the "immunized community" eluded us long before capitalism and will probably do so long after). But what distinguishes a specifically capitalist "immunitarian biopolitics" is that the "law's protection of [the subject's] possessive capacity"-the capacity, precisely, to withdraw from community-becomes, under capitalism, the very condition of community as such. Thus, Esposito reads immunitas dialectically with communitas as its negative but constitutively enabling form, operating very much by the logic of medical immunization. In order to survive, he claims, a community "[introjects] the negative modality of its opposite," in this case, in the form of proprietary individualism. It is perhaps in Rousseau's social contract rather than Hobbes' that the dialectic can be seen to function most seamlessly, perfectly harmonizing sacrifice with sovereignty. As Terry Eagleton explains, "[i]f all citizens alienate their rights entirely to the community, 'each man, in giving himself to all, gives himself to nobody', and so receives himself back again as a free, autonomous being."
For 20th century capitalism, it is the postwar compromise between capitalism and labour that comes closest to enacting this immunitary vision. But the dialectic is now, it would seem, irreparably broken; in Esposito's words, "we are no longer inside the immunitary semantics of the classical modern period," and the mediation "between politics and the preservation of life" has now "[diminished] in favour of a more immediate superimposition between power and life." Esposito describes the contemporary crisis as a consequence of "excessive demands" for immunity in a globalized world, with the "small walls" erected by various fundamentalisms replacing the Cold War's "Big Wall" as an immunitary "counterweight" to the increasing interpenetration of (imagined) communities and cultures. If there is some truth to this, it nevertheless fails to account for globalization as a specifically capitalist project, and now more than ever a financialized one. It is curious, too, that an entire philosophical oeuvre that turns on the question of debt overlooks the possibility that debt in its neoliberal specificity may have something to do with the contemporary erosion of the "mediations" between power and life that modernity's "immunitarian biopolitics" had put into place. This is not to suggest a one-to-one correspondence between the existential debt of the human condition as such and the financial debt that pervades contemporary capitalist power; one is the condition for social life in any and all of its forms while the other is indelibly stamped with the specificities of our late-capitalist age. But the two meet nonetheless in the nexus of the proper. If "immunity" turns on property, and finance "deterritorializes" the latter, then the indebtedness that Lazzarato calls the "most universal [condition] of modern-day capitalism" might be placed in dialogue with Esposito's immunitarian biopolitics in order to "tell a better story" about the contemporary crisis, one that figures in, among other things, the "chiral" tension between immunity and contagion with which I started out.

Lazzarato, like Esposito, sees a biopolitics at work in the dispositif of property, and thus a "biopolitics of indebtedness" wrought by its deterritorialization under the latest round of financialization. The story he tells is a familiar one among critics of neoliberalism: the abandonment of the gold standard and the Bretton Woods agreement effectively undercut the state's monetary sovereignty and "[brought] together a neoliberal alliance" that has "systematically taken aim at the logic of the Welfare State"-a configuration in which the state imposed some limits and redistributive pressures on private capital, and which Zygmunt Bauman, notably, referred to as the "ultimate modern embodiment of the idea of community."
Sovereignty-once the guarantor of immunity-is increasingly the prerogative of the "Universal Creditor," which, with the dismantling of regulatory frameworks, comes to impose the same burdens on the state as it does on individual citizens, compelling it, through discipline, evaluation, and metrics-based assessment, to abandon its "immunitary" functions. The "social rights" that mitigated the asymmetry of the capital-labor relationship are eclipsed by "social debt," with access to state services a question no longer of "right" but "eligibility," measured always under suspicion and in the shadow of a Nietzschean "bad conscience." The creditor-debtor relation displaces, indeed, that between capital and labor, as stagnant wages force a large-scale turn to consumer credit, replacing the collective struggle over wages with a more atomized, solitary one over repayment terms and interest rates. Debt, for Lazzarato, is not just a supplement to commodity capitalism but the "economic and subjective engine of the modern day economy" as well as the "strategic heart of neoliberal politics," representing a "transversal power relation unimpeded by State boundaries, the dualism of production . . . and the distinctions between the economy, the political, and the social." In other words, it erodes the "immunitary" mediations between power and life, insofar as these are taken to reside in the modern state's obligation towards its citizens.

Debt's particular biopolitics-its assault on the ontology of the always-already immunized subject of modernity-is more fully explained in an early essay by Karl Marx that Lazzarato cites at length, namely, "Comments on James Mill, Éléments d'économie politique." For Marx, capitalism makes the medium of (commodity) money already an alienating one, "[estranging] from man" the "human, social act by which man's products mutually complement one another"; yet for the classical political economy that is the object of Marx's critique, this estrangement is undone by the credit relation, given its ostensible foundation in trust. It appears, in the credit relation, "as though the power of the alien, material force were broken . . . and man had once more human relations to man." To be sure, Marx asserts, this "trust" is only an illusion, for, in actual fact, in credit "the dehumanization is all the more infamous and extreme because its element is no longer commodity, metal, paper, but man's moral existence, man's social existence, the inmost depths of his heart." Trust, in other words, has nothing to do with what Lazzarato calls "some noble sentiment toward oneself, others, and the world" but is rather "limited to a trust in solvency, [making] solvency the content and measure of the ethical relationship." This is even more the case when, for Marx, a "rich man gives credit to a poor man" instead of a capitalist borrower: "the life of the poor man and his talents and activity serve the rich man as a guarantee of the repayment of the money lent . . . All the social virtues of the poor man, the content of his vital activity, his existence itself, represent for the rich man the reimbursement of his capital with the customary interest." Thus credit appropriates, through debt, "not only the physical and intellectual abilities the poor man employs in his labor, but also his social and existential forces"; man becomes "a mediator of exchange, not however as a man, but as the mode of existence of capital and interest."
Lazzarato offers the wording that connects debt in the monetary sense with Esposito's munus or existential obligation. What the credit relation exploits is "'existential' life," with "existence" meaning the "power of self-affirmation, the forces of self-positioning, the choices that found and bear with them modes and styles of life"-to wit, "the ethico-political constitution of the self and the community." Even at the most stripped-down level of the credit relation, its encoding in the Marxian equation M-M', the analysis holds true. The market's promise of "immunized" exchange was predicated on the equation for simple circulation, C-M-C, occluding, necessarily, the persistent, deep-structural asymmetries built into M-C-M' (which of course has never entailed a remotely equitable distribution of immunity). As Marx well knew, labour power becomes a commodity only under "definite, historically developed conditions" that make subsistence conditional on its sale and exploitation. Yet the fact that alienation and precarity have always been the generalized experience of capitalism-that immunity, in other words, has always been exclusive-should not deter us from conceiving the conjuncture as a dissolution of the latter. If property and its modalities are central to the immunitarian biopolitics of Western modernity, then the "failure of proprietary individualism" most recently and visibly exposed in 2008 portends a crisis that is more than just "financial." For even in M-C-M', with "C" encoded as commodity or commodity labour power, property remained "territorialized," embedded within a productive economy in which it could be exchanged, even if under duress. The "C" as the "mediating" space of value production necessarily incorporated the capacities of living labour and was thus also the site of the "long, frenetic uphill struggle" that brought capitalist modernity to its "golden age" in the imperfect "compromise" of Keynesianism. But the displacement of the capital-labor relation by that between creditor and debtor erodes the "mediating" C and disperses it among an atomized and disenfranchised precariat; in this, the function of property, along with those of its modalities around which this "struggle" was fought-protections, wages, rights-is catastrophically undone. As Mario Tronti once observed, the history of capitalism consists in its drive to "[emancipate] itself from the working class," and the disembedding of property from the productive economy and its dissolution in the formula for "self-valorizing money," M-M', brings this ultimate objective that much closer. In the process, the "sovereign individual" of Western modernity is left increasingly exposed, with immunity increasingly the privilege of the creditors.

Monetary debt thus inscribes itself along the fault-line of originary debt, the threshold of the munus, the exposure to the "other-than-self" that the immunitary mechanisms of modernity maintained in a state of "regulated permeability."
Dissolving the mediating C, it inscribes itself as pure and unmediated alienation, forcing the debtor to concede to the creditor not only his instincts but his person itself, exposing it unbearably to the appetites of an always more powerful Other. What the debtor cedes to the creditor, in effect, is her immunity. Disarticulated from community, immunity is now concentrated in the privileges and protections enjoyed by the wealthiest: exemption from regulatory requirements across all sectors, including those most directly destructive of the natural environment; exemption from accountability vis-à-vis what remains of the commons; exemption from levels of taxation that once ensured a degree of existential security to all citizens, these latter now left to manage this burden on their own, with all of its risks and liabilities.

Nothing encodes the exclusivity of this privilege more succinctly than the very phrase "too big to fail," which marks the synthesis of total immunity with structural power and exposes the "immunized community" for the chimera that it is. If the "immunized community" has not been realized at any stage of capitalist social relations yet encountered, nonetheless the discourse of neoliberalism, obsessed as it is with "ownership," keeps the dream alive and well. The discourse of the "ownership society" advanced by George W. Bush, exhorting Americans to "empower" themselves and exercise more "choice" and "control" over their futures not only by purchasing real estate but by shrewdly managing their health care and pension plans, extended the immunitary promise of the proper into the very realm of existential security that the proverbial 1% were busily dismantling. It also produces the figure of "human capital" in its Foucauldian sense as the "entrepreneur" encouraged to "invest" in itself in order to secure its future value.

The biopolitics of indebtedness, however, produces a more sinister form of "human capital," one that brings us back to the question of contagion with which this paper began. As Alessandra De Marco notes, the "buried commodity" (C) absented from the equation M-M' can reemerge as a sort of "haunting presence," and this, I propose, is the biopolitical synthesis of subject and property that, far from empowering the subject, transforms it into a sort of "standing reserve" for the creditors. The figure of "human capital" is therefore marked by liminality, estranged from itself as "capital-with-interest," as the unmediated source of surplus value. As the munus or "ethico-political" threshold of subjectivity is pressed into the service of capital accumulation, the debtor's existential security is subordinated to "finance's goal of reducing what will be to what is"; meanwhile the total immunity of the creditors, the Wall Street manager, the corporate CEOs, is the inevitable obverse of that inescapable exposure that conditions the experience of Lazzarato's "indebted man."
It is easy to see, then, how the "conceptual metaphor" of contagion has come to infuse the "affective economy" of neoliberal capitalism, encoding the "crisis of the proper" occasioned by the universalization of indebtedness and its particular biopolitics. The logic of immunitary ownership pressed to extremes is the chiral counterpart of that pervasive dread of contagion, contamination and dissolution that has come to frame nearly every threat to body and body politic alike. Hence a structure of feeling that, whether mediated by the "surrogates" of Mostow's film or the shambling cadavers of the biohorror genre, foregrounds permutations of liminality and the Uncanny-the latter, as Nicholas Royle has defined it, entailing precisely a "crisis of the proper," a "critical disturbance . . . of the very idea of personal or private property" and a corresponding "strangeness of framing and borders." Contagion as a structure of feeling encodes the contradiction between the "received consciousness" that promises total self-sufficiency through immunitary ownership and a lived experience that is inescapably indebted and precarious. Indeed, there may be no better metaphor for the equation M-M' than the virus itself, or the viral zombie, which seeks less to consume than to replicate itself by estranging the host from its body, transforming it into a source of virus-plus-interest. If capital has long been allegorized as viral or vampiric, in the total debt economy capital is the virus: the appropriation of its "purchasing power" is coterminous with subjectivation to a "structural power" that reduces the body to an "interest-generating machine." Given the identification of ownership with exposure, and the unsustainable top-heaviness of the immunitary structure that leaves all but the wealthiest permanently exposed, it is easy to see how the "chirality" between the closing scenes of Surrogates and the more common ones of biohorror encodes the paradox of total immunization as the condition for total contagion.

Fredric Jameson long ago labelled conspiracy theory the "poor person's cognitive mapping"-an attempt to apprehend, and make sense of, the deterritorializing flows of a global capitalist totality. Perhaps, then, the boundary work performed by the dread of contagion can be seen as the "indebted person's cognitive mapping," an attempt to preserve the subject's ontological integrity from expropriation, to erect immunitary barriers around the body in keeping with neoliberalism's exhortations to "manage" one's own precarity and vulnerability. Such boundary work encodes precisely the breached immunity of the indebted subject, the liminality engendered by the risks and losses it must continually manage as the threshold demarcating bare from disposable life becomes ever more salient to lived experience. To be sure, as the "poor person" in Jameson's formulation is one not defined by net worth but who substitutes a paranoid fiction for true comprehension, so too is the "indebted person" here a figure for all of us not in the creditor class-if not those in it as well, given the susceptibility of global markets to sudden, systemic collapse.
Though the most obvious political manifestation of the crisis would seem to be, at first blush, the resurgent nativism of the Right-said to hold particular appeal among the so-called "white working class," if the diagnoses issued by the liberal commentariat are taken at face value-in actual fact no one is (so to speak) immune: indeed, the fantasy of Surrogates is nothing if not a fantasy of the relatively privileged, and it is this class also for whom the broadly caricatured Trump voter emerges from the proverbial woodwork, from the darkest corners of the Internet and Appalachia alike. It is also this "urban," "liberal" class for whom apps like "SketchFactor" and "Ghetto Tracker" were developed, to help them steer clear of inner city neighborhoods (the latter was renamed "Good Part of Town" following accusations of racism). That the Right has no monopoly on the trope of contagion is perhaps nowhere better illustrated, in recent popular culture, than in the darkly satirical early episodes of American Horror Story: Cult, in which a white, middle-class, lesbian couple find their insular world breached, following Trump's election, by a baffling coalition of murderous alt-righters and grotesque, terrifying clowns.

The disarticulation of community from immunity has, it might be said, reached an intolerable extreme. While it is increasingly hard to tell whether the attendant crisis will resolve itself in "business as usual" or something altogether different, we can nonetheless discern from the overall structure of feeling and its popular mediations the extent of that crisis, its depth, and the intensity of its grip on lived experience. One can only hope that whatever lies on the other side allows us to reinvigorate debt as an ethico-political rather than a power relation, and immunity as the introjection of otherness rather than the feverish and futile erection of barriers against it.

Notes

defensive drawing and redrawing of boundaries, similar to the work of abjection as famously theorized by Julia Kristeva in her essay The Powers of Horror
Bacterial biosensors for screening isoform-selective ligands for human thyroid receptors α-1 and β-1

Subtype-selective thyromimetics have potential as new pharmaceuticals for the prevention or treatment of heart disease, high LDL cholesterol and obesity, but only a few methods can detect agonistic behavior of TR-active compounds. Among these are the rat pituitary GH3 cell assay and transcriptional activation assays in engineered yeast and mammalian cells. We report the construction and validation of a newly designed TRα-1 bacterial biosensor, which indicates the presence of thyroid-active compounds through their impact on the growth of an engineered Escherichia coli strain in a simple defined medium. This biosensor couples the configuration of a hormone receptor ligand-binding domain to the activity of a thymidylate synthase reporter enzyme through an engineered allosteric fusion protein. The result is a hormone-dependent growth phenotype in the expressing E. coli cells. This sensor can be combined with our previously published TRβ-1 biosensor to detect potentially therapeutic subtype-selective compounds such as GC-1 and KB-141. To demonstrate this capability, we determined the half-maximal effective concentration (EC50) of the compounds T3, Triac, GC-1 and KB-141 using our biosensors, and determined their relative potency in each biosensor strain. Our results are similar to those reported by mammalian cell reporter gene assays, confirming the utility of our assay in identifying TR subtype-selective therapeutics. This biosensor thus provides a high-throughput, receptor-specific, and economical method (less than US$ 0.10 per well at laboratory scale) for identifying important therapeutics against these targets.

Introduction

Thyroid hormones play an essential role in the physiological regulation of different tissues, as well as in overall metabolic rate, cholesterol level and heart rate. The targets of thyroid hormones are the thyroid receptors (TRs), which belong to the nuclear receptor (NR) superfamily. Two major classes of TRs are known, TRα and TRβ, each of which is expressed in multiple isoforms (TRα-1, TRα-2, TRβ-1, TRβ-2). The thyroid receptors TRα-1 and TRβ-1 each contain six domains (A-F), similar to estrogen receptors α and β and other NRs. The DNA-binding (C), hinge (D) and ligand-binding (E) domains of the TRα-1 and TRβ-1 isoforms are respectively 88%, 71% and 86% identical, while no homology has been observed in the activation function-1 (AF-1) domain (A/B), which is isoform specific. The E/F domains effect transcriptional activation upon ligand binding and receptor dimerization, and the E domain contains activation function-2 (AF-2). The TR isoforms are expressed at different levels in different tissues. For example, the TRα-1 isoform is dominant in the heart (70%), while the TRβ-1 isoform is dominant in the liver (80%), suggesting that these receptors may be important targets for subtype-selective thyroid hormone receptor modulator (STRM) therapeutics [1-3]. Thyroid hormone receptors are essential for proper infant central nervous system (CNS) development, and their production is regulated by the hypothalamic-pituitary-thyroid feedback system. Among non-isoform-selective TR-binding compounds, T3 is the native hormone in the human body, and is produced by the follicular cells of the thyroid gland [1,4]. These cells accumulate iodide from plasma through their membranes and use it for the production of secreted human thyroid hormones.
A deficiency or excess of these hormones, referred to as hypothyroidism or hyperthyroidism, may lead to myxedema coma, cretinism, and other serious disorders. Several therapeutic strategies have been devised to treat thyroid-related disorders. For example, thioureylenes can be used in hyperthyroidism to inhibit thyroid hormone production, as well as the conversion of the less active 3,5,3′,5′-tetraiodo-L-thyronine (thyroxine; T4) to the more active 3,5,3′-triiodo-L-thyronine (triiodothyronine; T3). Direct administration of T3 is also used to treat hypothyroidism and associated obesity. Unfortunately, the use of T3 is limited by its agonist activity against both TR isoforms and the resulting cardiovascular side effects such as tachycardia. The presence of additional tissue-specific side effects, arising from varying TR isoform levels in different tissues, suggests that it may be desirable to develop subtype-selective TR modulators (STRMs). Work in this area led to the finding that a single amino acid residue difference (Ser277→Asn331) in the ligand-binding pockets of TRα and TRβ has a direct effect on the binding selectivity of potential STRMs [5,6]. Triiodothyroacetic acid (Triac) has been found to be TRβ selective as well [7], but the exact mechanism of its selectivity is not well understood. Martinez et al. suggested that the observed 2- to 3-fold selectivity of Triac for TRβ is connected to conformational changes in Triac itself, possibly caused by the high flexibility of its carboxylate group [8]. These studies have facilitated the recent development of several potentially therapeutic isoform-selective TRβ agonists, including Sobetirome (GC-1), which lowers LDL cholesterol with no effect on the cardiovascular system, as well as Eprotirome (KB-2115) and MB07811 for dyslipidemia [9-11]. In addition, several TR antagonists have been developed for potential therapeutic uses, such as 1-850, DIBRT (low potency), NH-3 (high potency), and the partial antagonist GC-14 (low potency) [12-14]. Developing an ideal STRM is challenging: several TRβ-selective agonists, such as Axitirome and KB-141, have been discontinued during clinical development due to unexpected side effects [1,2,15,16]. The desire for isoform-selective compounds, coupled with the difficulties associated with their development, provides a strong impetus for the creation of new screening methods for isotype-selective TR modulators.

The detection of thyromimetic compounds is commonly carried out using a growth hormone 3 (GH3) cell assay [17], as well as various protein microarray methods [18], in vitro time-resolved Fluorescence Resonance Energy Transfer (TR-FRET) assays (LanthaScreen™; Invitrogen, Carlsbad, CA), and a number of transcriptional activation assays [1,12,19-21]. The main disadvantage of the GH3 cell assay compared to other biosensor assays is that it does not report receptor isoform specificity, since the cells contain TRβ-1, TRα-1 and TRα-2 receptors [22]; the cells are also derived from rat rather than human. Mammalian or yeast transcriptional activation assays rely on reporter proteins such as luciferase or β-galactosidase, whose expression is driven by a TR-responsive promoter engineered into the host strain. Several additional strategies for detecting and identifying hormone-like compounds rely on fusions between the hormone receptor ligand-binding domain (LBD) and various other functional proteins in yeast and mammalian cells [23].
One group of these includes direct end-to-end or insertional fusions of LBDs to functional enzymes, in which the binding of a ligand by the LBD directly activates the fused reporter protein [24-26]. A second group involves fusion of the NR LBD to a GAL4 DNA-binding domain to generate a highly sensitive transcriptional assay for ligand function [27,28]. Both of these assay types are highly effective and allow the generation of new assays by simple LBD swapping. Strengths of these assays include their ability to function in yeast and mammalian cells, high sensitivity to ligands, and the lack of a requirement for NR-specific cofactors and coactivators. These strengths have led a few of these assays to be commercialized, including the HEK 293T cell-based GeneBLAzer beta-lactamase reporter system (Invitrogen, Carlsbad, CA) for the detection of agonists and antagonists against a variety of available NRs. One drawback of these assays is the potential for misclassification of the tested ligands. This can arise from the cellular context of a given assay (e.g., yeast versus human tissue), which may exhibit differences in coactivator levels, membrane transport characteristics, and genetic background [29-32]. Additional artifacts can arise from the use of isolated NR LBDs fused to non-native reporter protein domains, which can misreport the relative activities of antagonists and agonists. A final, yet significant, drawback is the cost of some of these assays, which can approach one thousand U.S. dollars per 384-well plate in the case of the GeneBLAzer reporter system mentioned above.

The biosensor assay presented here is an Escherichia coli (E. coli) growth-based technique. In this assay, the conformation of the TR LBD is linked to the activity of a thymidylate synthase (TS) reporter enzyme through an engineered allosteric biosensor protein. The engineered sensor protein consists of the TS reporter enzyme linked to an intein splicing domain and maltose binding protein. In previous work, we showed that the activity of the TS reporter is repressed when it is fused to the intein splicing domain, likely due to steric blockage of TS dimerization [33]. In the engineered biosensor protein, the NR LBD is inserted into the splicing domain, which appears to stabilize the correct fold of the LBD while simultaneously blocking TS dimerization. Our hypothesis is that the repositioning of helix 12 of the TR LBD upon ligand binding induces a conformational change in the intein domain, which leads to dose-dependent activation of the TS domain [34]. Since TS activity is required for E. coli cell growth, the configuration of the TR LBD is reflected in the TS phenotype of the cells expressing the biosensor protein. The TS phenotype can be observed and quantified using positive selection in a defined liquid growth medium that lacks thymine (-Thy medium), or negative selection using -Thy medium supplemented with thymine and trimethoprim (TTM medium) [35]. An important aspect of the screen is thus its ability to confirm the effect of a given ligand on LBD-dependent TS activity through the mirror-image phenotypes observed in -Thy and TTM media: a general growth effect (e.g., a nutritional effect or toxicity) may produce a positive growth phenotype in one medium, but would fail to produce the mirror-image phenotype in the alternate growth medium. This mirror-image logic is expressed as a simple decision routine in the sketch below.
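The following sketch expresses that mirror-image logic as a small classification routine. It is illustrative only: the inputs (ligand-to-vehicle growth ratios in each medium) and the 1.5-fold threshold are assumptions, not values from the study.

```python
def classify_ligand(growth_minus_thy, growth_ttm, threshold=1.5):
    """Interpret paired growth readings from the two selection media.

    growth_minus_thy, growth_ttm: ligand / vehicle-only growth ratios
    (e.g., final OD600 ratios) in -Thy and TTM media, respectively.

    -Thy medium: TS activity is required for growth (positive selection).
    TTM medium:  TS activity inhibits growth (negative selection).
    A TS-specific agonist should raise growth in -Thy AND lower it in TTM;
    a general nutritional or toxic effect fails the mirror-image test.
    """
    up_in_thy = growth_minus_thy > threshold
    down_in_ttm = growth_ttm < 1.0 / threshold
    if up_in_thy and down_in_ttm:
        return "TS-specific agonist candidate"
    if up_in_thy or down_in_ttm:
        return "ambiguous: possible general growth effect"
    return "no detectable effect"
```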
Generation of dose-response curves in -Thy and TTM liquid media permits an estimate of the relative binding affinities of test compounds for the TR LBD targets, providing a rapid means of detecting and characterizing isoform-selective ligands. Because this assay relies on simple E. coli growth in liquid medium, it is nonradioactive, economical and simple to use. Further, only the LBD of the desired NR is cloned into the E. coli cells, which greatly simplifies the generation of specific NR biosensors. In this work, we demonstrate the capability of the system to readily detect several TR ligands (Fig. 1) and to identify subtype-selective thyromimetic ligands.

The construction of pMIT::TRβ-1 was based on our previously reported pMIT::TR* biosensor plasmid [33] by simple replacement of the TRα-1 LBD in pMIT::TRα-1 with the TRβ-1 LBD (corresponding to TRβ-1 ORF amino acids E203 to D461), to ensure identical plasmid construction. Its construction also relied on the silent NheI and SacII restriction sites within the N- and C-terminal segments of the mini-intein (see Supplemental Materials, Table S1 for primer and TR LBD sequences).

Phenotype determination

The TS-deficient E. coli strain D1210 ΔthyA::KanR [F− Δ(gpt-proA)62 leuB6 supE44 ara-14 galK2 lacY1 Δ(mcrC-mrr) rpsL20 (StrR) xyl-5 mtl-1 recA13 lacIq] was transformed with pMIT::TRα-1 and pMIT::TRβ-1 for growth phenotype determinations. Fresh transformant colonies were used to inoculate 5 ml cultures of Luria-Bertani (LB) medium supplemented with 100 μg/ml ampicillin and 50 μg/ml thymine. The cultures were then diluted into -Thy medium containing 100 μg/ml ampicillin, and the diluted cells were transferred to 96-well plates at 198 μL/well, with each well supplemented with 2 μL of the relevant compound diluted in DMSO at the desired concentration. Importantly, the DMSO concentration in each well was kept constant at 1% throughout each experiment, regardless of the final ligand concentration. The 96-well plates were then incubated at 34 °C with 150 rpm agitation and 80% humidity to assure equal volumes across the wells. Over time, the growth of the E. coli cells in each 96-well plate was measured by optical absorbance at a wavelength of 600 nm (OD600) using a BioTek Synergy 2 spectrophotometer. To confirm the results of the -Thy medium test, the cells were also grown in -Thy medium supplemented with 10 μg/mL trimethoprim and 50 μg/mL thymine (TTM medium) and incubated at 37 °C. The TTM medium reverses the phenotype of the -Thy medium, providing direct evidence of a specific effect of the ligand on TS activity, as opposed to a more general effect on cell growth.

Statistical analysis

The statistical significance of our results was verified by calculating the Z factor for each test as described by Zhang et al. [38]. Additionally, the signal-to-noise (S/N) and signal-to-background (S/B) ratios were analyzed to determine the significance of the observed signal [39].

Detection of agonism

The reported TR agonists Triac, T3, GC-1 and KB-141, and the negative control estradiol (E2), were each tested at a concentration of 10 μM in the presence of the pMIT::TRα-1 and pMIT::TRβ-1 biosensor strains (Figs. 2, 3A and B). Each well contained cells growing in -Thy medium with a constant final concentration of either 1% (v/v) DMSO or 1% (v/v) DMSO with dissolved ligand. Control experiments indicate that this concentration of DMSO does not impact cell growth in -Thy medium, suggesting that the use of DMSO as a delivery vehicle has minimal impact on E. coli cell viability (see Fig. S2).
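Growth responses like those described above are typically quantified from the plate-reader OD600 time series. The following is a minimal sketch of one common approach (a log-linear fit over the exponential window); the method and the blank cutoff are assumptions, since the source does not specify how growth rates were extracted.

```python
import numpy as np

def growth_rate(od600, times_h):
    """Estimate an exponential growth rate (1/h) from a plate-reader series.

    Fits ln(OD600) vs. time by least squares over readings above a nominal
    blank level; a simple stand-in for whatever quantification was used.
    """
    od = np.asarray(od600, dtype=float)
    t = np.asarray(times_h, dtype=float)
    mask = od > 0.01                      # blank cutoff: illustrative value
    slope, _intercept = np.polyfit(t[mask], np.log(od[mask]), 1)
    return slope

# e.g., growth_rate([0.02, 0.05, 0.12, 0.30], [15, 18, 21, 24])
```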
The time-dependent growth of cells harboring either pMIT::TRα-1 or pMIT::TRβ-1, incubated at 34 °C in the presence of 10 μM T3 (in 1% DMSO) or solvent only (1% DMSO) in -Thy medium, is presented in the Supplemental Material for the time period 15-24 h (see Figs. S1A and S1B). In all cases, the growth rate of the biosensor cells increased in the presence of the reported agonists (Table 1). This result is presumably due to an increase in TS activity upon ligand binding, and is consistent with the behavior of our previously reported NR biosensors exposed to known agonists (e.g., ERα and ERβ strains exposed to estrogen). A specific ligand-LBD interaction is further supported by the observed decrease in growth [46] in TTM medium in the presence of ligands. This confirms that the phenotype effects arise specifically via the TS reporter enzyme, and not from a more general effect of the ligand on cell growth. An additional control study compared the T3 and E2 dose responses of the TRα-1 and TRβ-1 biosensor strains, as well as an ERβ biosensor strain containing pMIT::ERβ (Fig. 4). As previously reported, high TS activity was observed with the ERβ biosensor strain exposed to its native estrogen ligand, E2 (Fig. 4A). In addition, E2 showed no significant effect on the TRα-1 biosensor (Fig. 4B), and very slight agonistic activity with the TRβ-1 biosensor at high concentration (Fig. 4C). As expected, T3 was found to be a potent agonist for both TR sensors, while high micromolar T3 produced only a decrease in growth with pMIT::ERβ (Fig. 4A). Under these conditions, cell viability is compromised due to thymine starvation, and very high ligand concentrations can lead to further decreases in cell growth. This effect is generally not observed in cases where the ligand stimulates healthy cell growth and high viability. Thus, to detect specific cytotoxicity of ligands, cells were grown under non-selective conditions (in the presence of thymine) with high ligand concentrations. In these tests, none of the tested compounds showed significant toxicity against our bacterial sensor strains (data not shown).

Potency and selectivity of ligands

The relative potencies of the ligands were based on dose-response determinations, in which the ligand concentrations were varied by serial dilution from a high concentration of 100 μM (final concentration in the growth medium) to a low concentration at which no growth effect was observed (typically below 1 nM) (Fig. 5). To determine EC50 values, growth rates at the various ligand concentrations were normalized and fitted to a standard sigmoidal dose-response equation using nonlinear regression with variable slope (Prism ver. 5.01, GraphPad Software, San Diego, CA). Once the EC50 values had been determined for each ligand/sensor combination, the subtype selectivity of each ligand was defined as the ratio of its EC50 values at the two receptors, EC50(TRα-1)/EC50(TRβ-1), so that values above 1 indicate TRβ selectivity. In all cases, the results generated by our system qualitatively matched those reported by other investigators (Table 1). For example, in our system, the native ligand T3 showed similar EC50 values of 0.52 and 0.58 μM for TRα-1 and TRβ-1, respectively (Tables 2 and S2). The most potent ligands for TRα-1 were Triac and T3, while KB-141 and GC-1 were less potent and exhibited binding similar to each other. Triac was also the most potent ligand for TRβ-1, while the GC-1 and KB-141 potencies were 2-fold lower and the T3 potency was 8-fold lower than that of Triac.
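As a rough illustration of the curve fitting described above, the sketch below fits a variable-slope (four-parameter logistic) dose-response model and reads off the EC50. It mimics the Prism fit in spirit but is not the authors' procedure, and the initial guesses are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, slope):
    """Variable-slope sigmoidal dose-response (4-parameter logistic)."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** slope)

def fit_ec50(conc_uM, norm_growth):
    """Fit normalized growth vs. ligand concentration; returns EC50 (uM).
    Initial guesses (p0) are illustrative, not from the source."""
    conc = np.asarray(conc_uM, dtype=float)
    resp = np.asarray(norm_growth, dtype=float)
    p0 = [resp.min(), resp.max(), np.median(conc), 1.0]
    popt, _cov = curve_fit(hill, conc, resp, p0=p0, maxfev=10000)
    return popt[2]

# Selectivity ratio as defined above: EC50(TRalpha-1) / EC50(TRbeta-1);
# e.g., Triac: 0.31 / 0.07 ~ 4.4, i.e., TRbeta-selective.
```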
The dose-response curves used to calculate the EC 50 values for each compound indicated that the detection limits, which we define as the lowest concentration of a test compound that generates an unambiguous growth signal by visual inspection, of our TRα and TRβ biosensors for T 3 , KB-141 and GC-1 is approximately 100 nM, and the detection limit for Triac is approximately 10 nM (Fig. 5). The calculated EC 50 values also indicated some subtype-selective behavior in several of the compounds. Although the native T 3 ligand showed no significant selectivity for either TR receptor, Triac showed higher potency when bound to TRβ (EC 50 = 0.07 μM) vs. TRα (EC 50 = 0.31μM), corresponding to a selectivity ratio of 4.43 (Tables 2 and S2). Notably, KB-141 and GC-1 were designed to bind selectively to TRβ, and this behavior was confirmed by our biosensors. Specifically, the selectivity for TRβ over TRα was 4.04 for GC-1 and 4.66 for KB-141 (Tables 2 and S2). Further, GC-1 and KB-141 were both approximately 3-fold more potent than T 3 when bound to the TRβ sensor. Several aspects of the experimental design were examined for impacts, including growth media pH and plate edge effects. These tests indicate that the growth medium pH can lead to final OD variations of up to 25% over the range pH 6.9-7.1 (data not shown), and therefore great care was taken to adjust the growth media pH to precisely 7.0 during all experiments. Edge variations on the 96-well microtiter plates were as high as 10% in cases where cell growth levels are low, Table 3 Statistical analysis of the TR biosensor responses derived from three separate 96-well plates with three dose-response tests on each plate (nine total tests). Abbreviations: S/N-signal-to-noise ratio; S/B-signal-to-background ratio; Z factor-determines the statistical quality of the test as described by Zhang et al. [38]. To verify the ability of our sensor to report a statistically significant result for each test ligand, Z factors were calculated using the averages and standard deviations of the measured growth values at the highest and lowest concentrations of each ligand tested. This analysis yielded Z factors greater than 0.5 for all of our tests, indicating that the biosensor response to each of the tested compounds was unambiguously significant (Table 3). Further, all of the S/N ratios were above 66 for TRα and 16 for TRβ, while the S/B ratio was consistently above 3 for both biosensors. Discussion The first subtype-selective thyromimetics have appeared during the last 10 years [11]. In conventional transcriptional activation assays, the potency and selectivity (TRα/TRβ) of ligands can vary depending on the co-regulators present, which typically include SRC1-2, SRC3-2 and NCoR1-2 [19,40]. Other factors can also influence outcomes, such as the in vivo or in vitro method type, the physicochemical characteristics of the compounds, and the type of solvents used, which may enhance ligand penetration through cell membranes depending on the assay. Despite the large variety of assays available, there is a need for comparable qualitative and quantitative data for evaluating thyromimetic TR-subtype-selectivity [41]. These biosensors provide an alterative method to mammalian cell reporter gene assays for characterizing potency and isoformselectivity of ligands. Although the results obtained from this biosensor method follow the qualitative trends observed in conventional in vitro studies, the sensitivity of the bacterial sensors is currently lower. 
For example, the native thyroid hormone T 3 has been shown to bind both TRα and TRβ with similar affinity (K d = 0.1 nM) and potency (EC 50 = 2 nM), as determined via binding and transcriptional activation assays [21]. Our system reproduced the qualitative aspects of these results, indicating non-selective binding of T 3 to the TR receptors, but yielded much lower apparent potencies in the context of the biosensor assay (potencies of 0.52 μM and 0.58 μM for TRα and TRβ, respectively). In all cases, however, our biosensor system qualitatively reproduced important therapeutically relevant characteristics of the control ligands, including binding affinity relative to T 3 and subtypeselective binding. For example, in one previous study, Triac was reported to have 6-fold higher potency than T 3 when binding to TRβ [8], while in another, Triac was found to have approximately 3-fold higher affinity than T 3 for TRβ and identical affinity to T 3 for TRα [7] . Those subtype-selective differences are consistent with the relative potencies obtained in our work, where Triac was observed to be 8-fold more potent than T 3 against TRβ, and 1.7 times more potent for TRα. The in vitro affinity of GC-1 for the TR subtypes has also been studied previously, where it showed stronger binding to TRβ (K d of 0.1 ± 0.02 nM) than TRα (K d of 1.8 ± 0.2 nM) [4]. Chiellini et al. also reported EC 50 values for GC-1 using a transcriptional activation assay, which again indicated preferential binding of GC-1 to TRβ (7 nM vs. 45 nM for TRα) [4]. These results are quantitatively similar to our results in terms of subtype-selectivity, although our actual EC 50 values are 3-fold different (0.2 μM vs. 0.6 μM for TRβ and TRα, respectively). In additional previous work on KB-141, an in vitro radioactive displacement assay indicated a 10-fold TRβ binding selectivity for KB-141, while an in vivo transactivation assay confirmed the agonistic behavior of KB-141 and indicated an 8-fold greater binding affinity for TRβ when normalized to T 3 [15]. In our study, GC-1 and KB-141 were respectively observed to be approximately 4-and 4.7-fold selective for TRβ over TRα. Although our calculated EC 50 values are substantially higher than those determined from previous in vitro binding and transactivation assays, our results are qualitatively consistent with these assays in terms of agonistic behaviors and relative potencies. The differences in EC 50 between our and other assays likely arise from the nontranscriptional nature of the assay, and its reliance on membrane diffusion in bacterial cells. Further, the EC 50 values exhibited by our system are reasonable for the detection of therapeutically relevant compounds (e.g., T 3 , GC-1 and KB-141). These compounds typically must have nanomolar binding affinities in order to exhibit a reasonable therapeutic index. Since our assay can tolerate concentrations several orders of magnitude above this, it can be used to detect the activity of these compounds up to their solubility limits. Since these limits are typically greater than 10 μM, we feel that the testable range of concentrations is adequate for initial library screening. The calculated EC 50 values can then be benchmarked against the standard compounds described in this work. Importantly, our overall results were obtained with excellent reproducibility and robust statistical significance, with Z factors between 0.92 and 0.66. 
Finally, the signal-to-noise and signal-to-background measurements were also excellent and indicative of very clear results (66 < S/N < 90 and 3.5 < S/B < 4.5 for TRα; 16.5 < S/N < 46.5 and 3.4 < S/B < 4 for TRβ). In previous studies, compounds with low affinity were also characterized using similarly engineered ERβ biosensors. The relative binding affinity (RBA, expressed as a percentage of the response to the reference ligand) of bisphenol A for the human ERβ biosensor was reported as 1.15% (relative to 100% for 17β-estradiol), whereas for the porcine ERβ biosensor it was only 0.13% [39]. However, we have not determined the minimum detectable affinity for thyromimetics, and this is planned for future work with a greater variety of compounds. Our thyroid hormone biosensors provide a means to identify TR agonists and determine relative EC50 values across a variety of ligands, which allows identification of subtype-selective compounds within large chemical libraries. Further, methods based on these biosensors are both simple and economical, and similar approaches have shown utility in the discovery of subtype-selective compounds for ERα and ERβ [33,34,42]. It is therefore possible that these biosensors will become an important primary screen for TR-selective compounds that might be used to treat a wide range of metabolic disorders.
Efficacy and safety of cefoperazone-sulbactam in empiric therapy for febrile neutropenia Supplemental Digital Content is available in the text Introduction Febrile neutropenia is defined as the development of a fever during a period of significant neutropenia. [1] Despite improvements in cancer management, febrile neutropenia remains a severe complication for patients undergoing chemotherapy for cancer; approximately 1% of patients receiving chemotherapy develop febrile neutropenia. [2] Febrile neutropenia is associated with morbidity and mortality. [2] Patients with febrile neutropenia should be administered empiric antimicrobial agents intravenously; currently, broad-spectrum antibiotics such as antipseudomonal beta-lactam, carbapenems, and piperacillin-tazobactam are recommended. [3,4] Cefoperazone-sulbactam is a broad-spectrum antibiotic and approved for the treatment of several acute bacterial infections. Even for multidrug-resistant organisms, such as extendedspectrum b-lactamase-producing Enterobacteriaceae and carbapenem-resistant Acinetobacter baumannii, cefoperazone-sulbactam exhibits potent in vitro activity that is unaffected by inoculum effects. [5][6][7] Therefore, cefoperazone-sulbactam can be considered a therapeutic option for febrile neutropenia. Several clinical studies [8][9][10][11][12][13][14][15][16][17] have investigated the efficacy and safety of cefoperazone-sulbactam for the treatment of febrile neutropenia. However, no meta-analysis has compared the efficacy and safety of cefoperazone-sulbactam with those of other antibiotics commonly used for treating febrile neutropenia. Therefore, we conducted a comprehensive meta-analysis to provide highquality evidence of the efficacy and safety of cefoperazonesulbactam for treating febrile neutropenia. Data sources and search strategy We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses when searching for articles, selecting studies, evaluating article quality, and analyzing data. [18] We searched for candidate articles published before May 10, 2019, on the PubMed, Web of Science, EBSCO, Cochrane Library, Ovid Medline, EMBASE, and ClinicalTrial.gov databases. The search terms were "febrile neutropenia," "cefoperazone," "sulbactam," "cefoperazone-sulbactam," "sulperazone," "neutropenic fever," "and "neutropenic sepsis." We applied no publication year or language limitations. The definitions of febrile neutropenia varied; the cutoff neutrophil counts per liter were either 500 or 1000, and the definitions of fever were either a single oral temperature of >38.3°C (101°F) or a temperature >38.0°C (100.4°F) sustained for >1 hour. We permitted simultaneous administration of granulocyte colony-stimulating factor and cefoperazone-sulbactam as well as the use of the same anti-MRSA drug or aminoglycoside in both the study and control groups. Three investigators reviewed the full texts of the candidate articles to finalize the experimental and control groups included for meta-analysis. Three investigators reviewed the study methods, site, duration, and population as well as the treatment regimen reported in the articles. Initially, 2 investigators (Lan and Chang) examined the publications independently to avoid bias, and the third author (Lu) resolved any disagreements. We recorded the year of publication; study design, duration, site, and population; antibiotic regimen of cefoperazone-sulbactam and comparators; outcomes; and adverse effects reported in the included studies. 
Definitions and outcomes The primary outcome was treatment success without modification of the initial antibiotic regimen. Although some researchers also count success with regimen modification as treatment success, this was not the primary outcome of our meta-analysis. The secondary outcomes were all-cause mortality and adverse events (AEs).

Quality assessment and data analysis The investigators used the Cochrane Collaboration criteria to assess the methodological quality of the study designs; the quality of the included randomized controlled trials (RCTs) and observational studies was evaluated using the Cochrane risk-of-bias tool and the standardized critical appraisal instruments from the Joanna Briggs Institute, respectively. Differences in opinion among the investigators were resolved through discussion and voting. Meta-analysis of drug efficacy and safety was conducted using Review Manager software (RevMan 5.3; Cochrane Informatics & Knowledge Management Department). The heterogeneity of the studies was measured using the I² statistic and the Q test (heterogeneity χ²). A Q test result of P < .1 or I² > 50% indicates heterogeneity; in such cases, a random-effects model was used. If heterogeneity was absent, a fixed-effects model was used instead. Pooled odds ratios (ORs) and 95% confidence intervals (CIs) were calculated for the outcome analyses (an illustrative sketch of this pooling calculation is given after the conclusion below).

Discussion This meta-analysis of 11 clinical studies [8][9][10][11][12][13][14][15][16][17][19] determined that cefoperazone-sulbactam has clinical efficacy similar to that of comparators in the empiric treatment of febrile neutropenia. First, the success rate of cefoperazone-sulbactam in treating febrile neutropenia was similar to that of comparators in the pooled population of all 11 studies. [8][9][10][11][12][13][14][15][16][17][19] The similar clinical efficacy persisted in the analysis restricted to the 10 RCTs [8][9][10][11][12][13][14][15][17,19] and in the subsequent sensitivity test. Second, subgroup analyses comparing cefoperazone-sulbactam with the two antimicrobial agents commonly recommended for the treatment of febrile neutropenia, piperacillin-tazobactam and carbapenems, revealed no significant differences in efficacy.

In addition to the clinical response, AEs during antibiotic treatment are a concern in the management of patients with febrile neutropenia. The most common AEs among patients receiving cefoperazone-sulbactam in this meta-analysis were rash and nausea/vomiting. The pooled risks of rash, nausea/vomiting, and superinfection were similar for cefoperazone-sulbactam and comparators. Another side effect of the study drug is the inhibition of vitamin K metabolism, which can induce abnormal coagulation and hemorrhage. [20,21] In this meta-analysis, only Winston et al [17] reported data relevant to this AE, noting a 10% incidence of prolonged prothrombin time; however, no significant hemorrhage related to cefoperazone-sulbactam was noted in that report. [17] These findings suggest that cefoperazone-sulbactam is as safe as its comparators in the treatment of febrile neutropenia.

However, this meta-analysis has several limitations. First, we did not evaluate the efficacy of cefoperazone-sulbactam by sex, age, or underlying conditions, such as the type of cancer (e.g., solid or hematologic) or the risk level of febrile neutropenia.
Second, we did not assess the specific association between the in vitro activity and the in vivo response of different microorganisms, particularly antibiotic-resistant ones, among patients with febrile neutropenia and documented microbial infection. Third, the numbers of studies and patients in this meta-analysis were small; therefore, a large-scale study is warranted to confirm our findings. In conclusion, the findings of 11 clinical trials indicate that the efficacy and tolerability of cefoperazone-sulbactam are as high as those of its comparators for empiric treatment of patients with febrile neutropenia.
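The pooling described in the Methods above was performed in RevMan 5.3; purely as an illustration of that calculation, the sketch below pools study-level odds ratios by inverse-variance weighting, computes Q and I², and switches to a DerSimonian-Laird random-effects model when I² > 50% or P(Q) < .1. The 2x2 counts are invented, and RevMan's defaults (e.g., Mantel-Haenszel weighting for dichotomous outcomes) may differ from this simplified version.

```python
import numpy as np
from scipy.stats import chi2, norm

def pooled_or(events_t, n_t, events_c, n_c):
    """Inverse-variance pooled OR with a Q/I-squared heterogeneity check.

    events_t/n_t: events and totals in the cefoperazone-sulbactam arms
    events_c/n_c: events and totals in the comparator arms
    """
    a = np.asarray(events_t, float); b = np.asarray(n_t, float) - a
    c = np.asarray(events_c, float); d = np.asarray(n_c, float) - c
    log_or = np.log((a * d) / (b * c))
    var = 1/a + 1/b + 1/c + 1/d                  # per-study variance of log OR
    w = 1.0 / var                                # fixed-effect weights

    fixed = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - fixed) ** 2)
    dfree = len(log_or) - 1
    i2 = max(0.0, (q - dfree) / q) * 100 if q > 0 else 0.0
    p_q = 1 - chi2.cdf(q, dfree)

    if i2 > 50 or p_q < 0.1:                     # DerSimonian-Laird random effects
        tau2 = max(0.0, (q - dfree) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w = 1.0 / (var + tau2)
    pooled = np.sum(w * log_or) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    lo, hi = pooled - norm.ppf(0.975) * se, pooled + norm.ppf(0.975) * se
    return np.exp(pooled), (np.exp(lo), np.exp(hi)), i2, p_q

# Invented counts for three studies (treatment successes / patients per arm):
or_, ci, i2, p = pooled_or([30, 45, 25], [40, 60, 35], [28, 42, 24], [40, 58, 36])
print(f"OR = {or_:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, I2 = {i2:.0f}%, P(Q) = {p:.2f}")
```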
Draft Genome Sequence of Listeria monocytogenes CIIMS-NV-3, a Strain Isolated from Vaginal Discharge of a Woman from Central India. We present here the draft genome sequence of Listeria monocytogenes CIIMS-NV-3, a serovar 4b strain isolated from the vaginal swab of a female patient from central India. The availability of this genome may provide useful information on virulence characteristics for comparative genomic analysis. L isteria monocytogenes is a Gram-positive bacterium responsible for listeriosis in humans. Globally, more than 90% of cases of listeriosis are caused by serovar 1/2a, 1/2b, or 4b (1). Clinical manifestations of an invasive form of infection are meningoencephalitis, meningitis, and septicemia (2). This pathogen can be transmitted via vertical transmission from mothers to neonates during passage through the birth canal. The probability of occurrence of neonatal infection may increase with a biofilmproducing clinical isolate (3). Here, we present the draft genome sequence of L. monocytogenes CIIMS-NV-3, isolated from the vaginal discharge of a female with a history of gynecological problems hospitalized in the Government Medical College in Nagpur, India. For isolation of the Listeria isolate, the vaginal swab was collected in a sterile vial and stored at 4°C until further analysis. The sample was processed according to the method of the U.S. Department of Agriculture (USDA). Briefly, a swab was directly inoculated into 10 ml of University of Vermont medium 1 (UVM 1) and incubated overnight at 30°C. The enriched UVM 1 inoculum (0.1 ml) was then transferred to UVM 2 and again incubated overnight at 30°C. The inoculum from enriched UVM 2 was streaked on PALCAM agar (HiMedia Laboratories, Mumbai, India). The inoculated plates were incubated at 37°C for 48 h. The presumed Listeria colonies were further characterized morphologically and biochemically. Typical colonies were verified with Gram staining, a catalase reaction, an oxidase test, a tumbling motility test at 20°C to 25°C, methyl red-Voges-Proskauer reactions, a CAMP test with Staphylococcus aureus, nitrate reduction, fermentation of sugars (rhamnose, xylose, mannitol, and methyl-␣-D-mannopyranoside), and hemolysis on 5% sheep blood agar. The isolate was tested for its pathogenicity with a phosphatidylinositol-specific phospholipase C (PI-PLC) assay. The genomic DNA was isolated using a Qiagen genomic DNA extraction kit from an 18-h culture on brain heart infusion (BHI) agar. Libraries were constructed using a paired-end library (2 ϫ 250 bp) using a v2 chemistry reagent kit. The sequencing of the isolate was performed on an Illumina HiSeq 2500 platform. The reads were trimmed with Trimmomatic v0.36 (4). De novo assembly was performed with SPAdes v3.11.1 (5). For characterization of the Listeria isolate, a multilocus sequence typing (MLST) scheme was used. In MLST, each species is characterized by a series of integers, which correspond to the alleles at the housekeeping loci. MLST was carried out with the FASTA sequence on the multilocus sequence typing (MLST) server provided by the Center of Genomic Epidemiology (https://cge.cbs.dtu.dk/services/MLST/). In silico MLST identified the isolate as belonging to sequence type 328 (ST-328, clonal complex 1, lineage I), which had been reported to be the predominant sequence type in India (7). All bioinformatics analysis was carried out with the default settings. The genome will be further studied to determine the characteristics of the strain for comparative analysis. 
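The read-processing steps described above (Trimmomatic v0.36 trimming followed by SPAdes v3.11.1 de novo assembly, both with default settings) are command-line tools; the wrapper below shows one plausible way to chain them from Python. File names, the adapter file, and the trimming parameters shown are placeholders for illustration, not the values used in this study.

```python
import subprocess

def run(cmd):
    """Run a shell command and fail loudly if it returns a non-zero status."""
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Quality/adapter trimming of the 2 x 250 bp paired-end reads
#    (parameters are illustrative; the study reports default settings).
run([
    "java", "-jar", "trimmomatic-0.36.jar", "PE", "-phred33",
    "reads_R1.fastq.gz", "reads_R2.fastq.gz",
    "R1_paired.fq.gz", "R1_unpaired.fq.gz",
    "R2_paired.fq.gz", "R2_unpaired.fq.gz",
    "ILLUMINACLIP:TruSeq3-PE.fa:2:30:10", "SLIDINGWINDOW:4:20", "MINLEN:36",
])

# 2. De novo assembly of the trimmed read pairs with SPAdes.
run([
    "spades.py", "-1", "R1_paired.fq.gz", "-2", "R2_paired.fq.gz",
    "-o", "spades_assembly",
])

# The resulting contigs (spades_assembly/contigs.fasta) can then be submitted to
# the CGE MLST web server for in silico sequence typing, as described above.
```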
In developing countries such as India, the incidence of L. monocytogenes from clinical cases is underreported due to the lack of awareness and proper diagnostic assays. Nonetheless, public accessibility of the genomes of L. monocytogenes isolates, such as that from the vaginal discharge of a female patient, is important from the clinical and epidemiological points of view (8,9). Data availability. The annotated whole-genome sequence of this strain has been deposited in GenBank under accession number CP031674. The SRA accession number for the raw reads is SRR8383222. ACKNOWLEDGMENT This study was supported by the Indian Council of Medical Research (ICMR), Government of India (grant number Zon. 15/11/2014-ECD-II).
Deep Metric Learning with Density Adaptivity The problem of distance metric learning is mostly considered from the perspective of learning an embedding space, where the distances between pairs of examples are in correspondence with a similarity metric. With the rise and success of Convolutional Neural Networks (CNN), deep metric learning (DML) involves training a network to learn a nonlinear transformation to the embedding space. Existing DML approaches often express the supervision through maximizing inter-class distance and minimizing intra-class variation. However, the results can suffer from overfitting problem, especially when the training examples of each class are embedded together tightly and the density of each class is very high. In this paper, we integrate density, i.e., the measure of data concentration in the representation, into the optimization of DML frameworks to adaptively balance inter-class similarity and intra-class variation by training the architecture in an end-to-end manner. Technically, the knowledge of density is employed as a regularizer, which is pluggable to any DML architecture with different objective functions such as contrastive loss, N-pair loss and triplet loss. Extensive experiments on three public datasets consistently demonstrate clear improvements by amending three types of embedding with the density adaptivity. More remarkably, our proposal increases Recall@1 from 67.95% to 77.62%, from 52.01% to 55.64% and from 68.20% to 70.56% on Cars196, CUB-200-2011 and Stanford Online Products dataset, respectively. I. INTRODUCTION L EARNING to assess the distance between the pairs of examples or learning a good metric is crucial in machine learning and real-world multimedia applications. One typical direction to define and learn metrics that reflect succinct characteristics of the data is from the viewpoint of classification, where a clear supervised objective, i.e., classification error, is available and could be optimized for. However, there is no guarantee that classification approaches could learn good and general metrics for any tasks, particularly when the data distribution at test time is quite different not to mention that some test examples are even from previously unseen classes. More importantly, the extreme case with enormous number of classes and only a few labeled examples per class practically stymies the direct classification. Distance metric learning, in contrast, aims at learning a transformation to an embedding space, which is regarded as a full metric over the input space by exploring not only semantic information of each example in the training set but also their intra-class and inter-class structures. As such, the learnt metric generalizes more easily. T. Yao The recent attempts on metric learning are inspired by the advances of using deep learning and learn an embedding representation of the data through neural networks. Deep metric learning (DML) has demonstrated high capability in a wide range of multimedia tasks, e.g., visual product search [2], [3], [4], image retrieval [5], [6], [7], [8], [9], clustering [10], zero-shot image classification [11], [12], highlight detection [13], [14], face recognition [15], [16] and person reidentification [17], [18]. The basic objective of the learning process is to preserve similar examples close in proximity and make dissimilar examples far apart from each other in the embedding space. 
To achieve this objective, a broad variety of losses, e.g., contrastive loss [19], [20], N-pair loss [7] and triplet loss [15], [21], are devised to explore the relationship between pairs or triplets of examples. Nonetheless, there is no clear picture of how to control the generalization error, i.e., difference between "training error" and "test error," when capitalizing on these losses. Take Cars196 dataset [22] as an example, a standard DML architecture with N-pair loss fits the training set nicely and achieves Recall@1 performance of 99.2% but generalizes poorly on the testing set and only reaches 56.5% Recall@1 as shown in Figure 1(d). Similarly, the generalization error is also observed when employing contrastive loss and triplet loss. Among the three losses, utilizing contrastive loss expresses the smallest generalization error and exhibits the highest performance on the testing set. More interestingly, the embedding representations of images from each class in the training set are more concentrated by using N-pair loss and triplet loss than contrastive loss as visualized in Figure 1(a)-1(c). In other words, optimizing contrastive loss leads to low density of example concentration. Here density refers to the measure of data concentration in the representation. This observation motivates us to explore the fuzzy relationship between density of examples in the embedding space and generalization capability of DML. By consolidating the idea of exploring density to mitigate overfitting, we integrate density adaptivity into metric learning as a regularizer, following the theory that some form of regularization is needed to ensure small generalization error [23]. The regularizer of density could be easily plugged into any existing DML framework by training the whole architecture in an end-to-end fashion. We formulate the density regularizer such that it enlarges intra-class variation while the loss in DML penalizes representation distribution overlap across different classes in the embedding space. As such, the embedding representations could be sufficiently spread out to fully utilize the expressive power of the embedding space. Moreover, considering that the inherent structure of each class should be preserved before and after representation embedding, relative relationship with respect to density between different classes is further taken into account to optimize the whole architecture. Technically, the target density of each class can be viewed as an intermediate variable in our designed regularizer. It is natural to simultaneously learn the target density of each class and the neural networks by optimizing the whole architecture through the DML loss plus density regularizer. As illustrated in Figure 1(d), contrastive embedding with our density adaptivity further decreases the generalization error and boosts up Recall@1 performance to 77.6% on Cars196 testing set. The main contribution of this work is the proposal of density adaptivity for addressing the issue of model generalization in the context of distance metric learning. This also leads to the elegant view of what role the density should act as in DML framework, which is a problem not yet fully understood in the literature. Through an extensive set of experiments, we demonstrate that our density adaptivity is amenable to three types of embedding with clear improvements on three different benchmarks. The remaining sections are organized as follows. Section II describes the related works. 
Section III presents our approach of deep metric learning with density adaptivity, while Section IV presents the experimental results for image retrieval. Finally, Section V concludes this paper. II. RELATED WORK The research on deep metric learning has mainly proceeded along two basic types of embedding, i.e., contrastive embedding and triplet embedding. The spirit of contrastive embedding is to make each positive pair from the same class in close proximity and meanwhile push the two samples in each negative pair to become far apart from each other. That is to pursue a discriminative embedding space with pairwise supervision. [16] is one of the early works to capitalize on contrastive embedding for deep metric learning in face verification task. The method learns embedding space through two identical sub-networks with the input pairs of samples. Next, an amount of subsequent works are presented to leverage contrastive embedding in several practical applications, e.g., person re-identification [24], [25] and image retrieval [26], [27]. As an extension of contrastive embedding, triplet embedding [28], [29], [30], [31], [32] is another dimension of DML approaches by learning embedding function with triplet/ranking supervision over the set of ranking triplets. For each input triplet consisting of one query sample, one positive sample from the same class and another negative sample from different classes, the training procedure can be interpreted as the preservation of relative similarity relations like "for the query sample, it should be more similar to positive sample than to negative sample." Despite the promising success of both contrastive embedding and triplet embedding in aforementioned tasks, the two embeddings rely on huge amounts of pairs or triplets for training, resulting in slow convergence and even local optimization. This is partially due to the fact that existing methods often construct each mini-batch with randomly sampled pairs or triplets and the loss functions are measured independently over individual pairs or triplets without any interaction among them. To alleviate the problem, a practical trick, i.e., hard sample mining [33], [34], [35], [36], is commonly leveraged to accelerate convergence with the hard pairs or triplets selected from each mini-batch. In particular, [35] devises an effective hard triplet sampling strategy by selecting more positive images with higher relevance scores and hard in-class negative images with less relevance scores. In another work [34], the idea of hard mining is incorporated into contrastive embedding by gradually searching hard negative samples for training. Recently, a variety of works design new loss functions for training, pursuing more effective DML. For example, [19], [37] present a simple yet effective method by combining deep metric learning with classification constraint in a multi-task learning framework. [7] develops N-pair embedding which improves triplet embedding by pushing away multiple negative samples simultaneously within a mini-batch. Such design of Npair embedding constructs each batch with N pairs of samples, leading to more efficient convergence in training stage. Song et al. define a structured prediction objective for DML by lifting the examples within a batch into a dense pairwise matrix in [3]. Later in [38], another structured prediction-based method is designed to directly optimize the deep neural network with a clustering quality metric. Ustinova et al. 
propose a new Histogram loss [4] to train the deep embeddings through making the distribution of similarities of positive and negative pairs less overlapped. Huang et al. introduce a Position-Dependent Deep Metric (PDDM) unit [2] which is capable of learning a similarity metric adaptive to local feature structure. Most recently, in [39], a Hard-Aware Deeply Cascaded embedding (HDC) is devised to handle samples of different hard level with sub-networks of different depths in a cascaded manner. [40] presents a global orthogonal regularizer to improve DML with pairwise and triplet losses by making two randomly sampled non-matching embedding representations close to orthogonal. In the literature, there have been few works, being proposed for exploiting the adaptation of density in deep metric learning. [6] is by arbitrarily splitting the distributions of classes in representation space to pursue local discrimination. Technically, the method maintains a number of clusters for each class and adaptively embraces intra-class variation and inter-class similarity by minimizing intra-cluster distances. As such, high density of data concentration is encouraged in each cluster. Instead, our work adapts data concentration through maximizing the feature spread or seeking to low density of feature distribution for each class, while guaranteeing all the classes separable. As a result, the expressive capability of the representation space could be fully endowed to enhance model generalization, making our model potentially more effective and robust. Moreover, relative relationship with respect to density between different classes is further taken into account to optimize DML architecture in our framework. III. DEEP METRIC LEARNING WITH DENSITY ADAPTIVITY Our proposed Deep Metric Learning with Density Adaptivity (DML-DA) approach is to build an embedding space in which the feature representations of images could be encoded with semantic supervision over pairs or triplets of examples, under the umbrella of density adaptivity for each class. The training of DML-DA is performed by simultaneously maximizing inter-class distance and minimizing intra-class variation, and dynamically adapting the density of each class to further regularize intra-class variation, targeting for better model generalization. Therefore, the objective function of DML-DA consists of two components, i.e., the standard DML loss over pairs or triplets of examples and the proposed density regularizer. In the following, we will first recall basic methods of DML, followed by presenting how to estimate and adapt the density of each class as a regularizer. Then, we formulate the joint objective function of DML with density adaptivity and present the optimization strategy in one deep learning framework. Specifically, a DML loss layer with density regularizer is elaborately devised to optimize the whole architecture. Suppose we have a training set with .., C} is the class label of image x i . With the standard setting of deep metric learning, the target is to learn an embedding function f (x i ; θ) : x i → R d for transforming each input image into a d-dimensional embedding space through a deep architecture, where θ represents the learnable parameters of the deep neural networks. Note that length-normalization is performed on the top of the deep architecture, making all the embedded representations {f (x i ; θ)} L 2 -normalized. 
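As a concrete illustration of this setup, the snippet below sketches the embedding head just described: a pooled backbone feature vector mapped by a single fully connected layer to a d = 128 embedding, followed by L2 length-normalization. It is written in PyTorch for brevity, whereas the paper's implementation uses Caffe with a GoogleNet backbone, so it should be read as an illustrative stand-in rather than the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingHead(nn.Module):
    """Pooled backbone features -> fully connected embedding layer -> L2 norm."""

    def __init__(self, in_dim=1024, embed_dim=128):
        # 1024 matches GoogleNet's pool5 output; 128 is the embedding size d.
        super().__init__()
        self.fc = nn.Linear(in_dim, embed_dim)

    def forward(self, pooled_features):
        emb = self.fc(pooled_features)
        return F.normalize(emb, p=2, dim=1)   # unit-length embedding vectors

# Toy usage: a batch of 100 pooled features (the mini-batch size used later).
head = EmbeddingHead()
z = head(torch.randn(100, 1024))
print(z.shape, z.norm(dim=1)[:3])             # every embedding has norm 1.0
```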
Given two images x i and x j , the most natural way to measure the relations between them is to calculate the Euclidean distance in the embedding space as (1) After taking such Euclidean distance as a similarity metric, the concrete task for DML is to learn the discriminative embedding representation by preserving the semantic relationship underlying in pairs [41], [42], or triplets [15], [21] or even more critical relationships (e.g., N-pair loss [7]). Contrastive Embedding. Contrastive embedding is the most popular DML method which aims to generate embedding representations to satisfy the pairwise supervision, i.e., making the distance between a positive pair of examples from the same class minimized while maximized on a negative pair from different classes. Concretely, the corresponding contrastive loss function is defined as where m p is the functional margin in the hinge function. M and C denotes the set of positive pairs and negative pairs, respectively. Triplet Embedding. Different from pairwise embedding which only considers the absolute values of distances between positive and negative pairs, triplet embedding focuses more on the relative distance ordering among triplets The assumption is that the distance between negative pair (x i , x − k ) should be larger than that of positive pair (x i , x + j ). Hence, the triplet loss function is measured by where m t is the enforced margin in the hinge function and T is the triplet set generated on S. N-pair Embedding. N-pair embedding is one recent DML model which generalizes triplet loss by encouraging the joint distance comparison among more than one negative pair. Specifically, given a (C + 1)-tuplet of training samples are the (C − 1) negative samples from the rest (C − 1) different classes, the N-pair loss function is then formulated as , where N is the (C +1)-tuplet set constructed over S. Through minimizing this N-pair loss, the similarity between positive pair is enforced to be larger than all the rest (C − 1) negative pairs, which further enhances triplet loss in triplet embedding with more semantic supervision. [21], N-pair Embedding [7]) and our proposed DML with Density Adaptivity. The three DML models are all optimized by maximizing inter-class distance and minimizing intra-class variation, often resulting in overfitting problem as the examples of each class are enforced to be concentrated tightly, i.e., the density of each class is very high. In contrast, for DML with our proposed density regularizer, at each iteration the density of each class is estimated and adapted towards a target of low density which encourages to enlarge intra-class variation while guaranteeing all the classes seperable. Meanwhile, the objective in DML penalizes representation distribution overlap across different classes. Such balance between inter-class similarity and intra-class variation leads to better generalization capability of DML model. B. Density Regularizer One of the key attributes which the recent DML methods aforementioned in Section III-A have in common is that their objectives are predominantly designed for maximizing interclass distance and minimizing intra-class variation. Although such optimization matches the intention of encoding semantic supervision into the learnt embedding representations, it may stymie the intrinsic intra-class variation by enforcing the examples of each class to be concentrated together tightly, which often results in overfitting problem. 
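Before turning to the proposed fix, it helps to make the baseline objectives of Section III-A concrete. The PyTorch-style sketch below implements standard hinge forms of the contrastive and triplet losses and the N-pair loss over L2-normalized embeddings, with the margins set to 1 as in the experiments; the paper's exact equations are not reproduced in this extracted text, so these should be read as standard formulations approximating Eqs. (2)-(4), not a verbatim transcription.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(f_i, f_j, same_class, margin=1.0):
    """Pairwise loss: pull positive pairs together, push negative pairs past the
    margin. f_i, f_j are L2-normalized embeddings; same_class is a 0/1 vector."""
    d = F.pairwise_distance(f_i, f_j)
    pos = same_class * d.pow(2)
    neg = (1 - same_class) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge on the relative ordering d(anchor, positive) + m < d(anchor, negative)."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos.pow(2) - d_neg.pow(2) + margin).mean()

def n_pair_loss(anchor, positive, negatives):
    """N-pair loss: the positive similarity must exceed every negative similarity.
    negatives has shape (batch, C-1, dim), one sample per other class."""
    pos_sim = (anchor * positive).sum(dim=1, keepdim=True)        # (batch, 1)
    neg_sim = torch.einsum('bd,bkd->bk', anchor, negatives)       # (batch, C-1)
    return torch.log1p(torch.exp(neg_sim - pos_sim).sum(dim=1)).mean()

# Toy usage on random 128-D embeddings (the embedding size used in the paper):
emb = lambda n: F.normalize(torch.randn(n, 128), dim=1)
print(contrastive_loss(emb(8), emb(8), torch.randint(0, 2, (8,)).float()))
print(triplet_loss(emb(8), emb(8), emb(8)))
print(n_pair_loss(emb(8), emb(8), F.normalize(torch.randn(8, 5, 128), dim=2)))
```

All three objectives reward tight intra-class clustering, which is exactly the behavior the density regularizer introduced next is designed to relax.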
To overcome this issue, we devise a novel regularizer for DML that encourages low density of data concentration in the learnt embedding space to achieve a better balance between inter-class distance and intra-class variation. A caricature illustrating the intuition behind the devised density regularizer is shown in Figure 2. Density Adaptivity. In our context, density is a measure of data concentration in the representation space. We assume that, for the image examples belonging to the same class, high density is equivalent to the fact that all the examples are close in proximity to the corresponding class centroid. Accordingly, for class c, to estimate its density, one natural way is to measure the average intra-class distance between examples and the class centroid in the embedding space, which is written as where S c denotes the set of samples from the same class c and µ c is the corresponding class centroid. Here we directly obtain the class centroid by performing mean pooling over all the samples in S c for simplicity. The higher the density of one class, the smaller the average intra-class distance between examples belonging to this class and the class centroid of this class in the embedding space. Based on the observations of the fuzzy relationship between density and generalization capability of DML, we propose a density regularizer to dynamically adapt the density of data concentration in the learnt embedding space for enhancing the generalization capability. The objective function of density regularizer is defined as where D avg (S c ) represents the density measurement of class c in the embedding space as defined in Eq. (5). α c is a newly incorporated intermediate variable which can be interpreted as the target density of class c corresponding to an appropriate intra-class variation. Note that α c is an intermediate which can be interpreted as the target density of class c. Similar to the density estimation of class c in Eq.(5), α c corresponds to an appropriate target intra-class variation of class c. The larger the value of α c , the lower the density of data concentration for class c. By minimizing this regularizer, the density of each class is enforced to be adapted towards the target density via the first term. Meanwhile, when minimizing the second term, each α c is enlarged (i.e., the target intra-class variation of each class is maximized), pursuing the lower density of data concentration in each class to enhance model generalization. The rationale of our devised density regularizer is to encourage the spread-out property in a way that the regularizer adapts data concentration and maximizes the feature spread in the embedding space, while guaranteeing all the classes separable. As such, the expressive capability of the embedding space could be fully endowed. Please also note that the devised density regularizer should be jointly utilized with a basic DML model in practice, as the objective in DML is required to and update the corresponding target density value. 9: Compute the overall gradient with respect to input embedding representations and backward it to lower layers for updating the parameters θ of the embedding function f (x i ; θ). simultaneously prevent the intra-class variation from increasing endlessly by penalizing representation distribution overlap across different classes. Inter-class Density Correlations Preservation Constraint. 
Inspired by the idea of structure preservation or manifold regularization in [43], the inter-class density correlation here is integrated into the density regularizer as a constraint to further explore the inherent density relationships between different classes. The spirit behind this constraint is that the target densities of two classes with similar inherent structures should still be similar on the embedding space. The intrinsic structure of the data in each class can be appropriately measured by the original density measurement before embedding. Specifically, our density regularizer with the constraint of inter-class density correlations is defined as where D (0) avg (S ci ) denotes the original average intra-class distance of class c i corresponding to the original density and it is calculated based on the image representations before embedding, i.e., the output of 1,024-way pool5/7×7 s1 layer of GoogleNet [44] in our experiments. η is utilized to control the impact of the original density and reflects what degree of the inherent density relationship between different classes is considered for measuring the density. To make the optimization of our density regularizer easy to be solved, we relax the constraint of inter-class density correlations by appending the converted soft penalty term to the objective function and then Eq.(7) is rewritten as By minimizing the converted soft penalty term in Eq.(8), the inherent inter-class density correlations can be preserved in the learnt embedding space. C. Training Procedure Without loss of generality, we adopt the widely used contrastive embedding as the basic DML model and present how to additionally incorporate the density regularizer into it. It is also worth noting that our density regularizer is pluggable to any neural networks for deep metric learning and could be trained in an end-to-end fashion. In particular, the overall objective function of DML-DA integrates the contrastive loss in Eq.(2) and the proposed density regularizer in Eq. (8). Hence, we obtain the following optimization problem as where λ is the tradeoff parameter. With this overall loss objective, the crucial goal of its optimization is to learn the embedding function f (x i ; θ) with its parameters θ and the target density of each class {α c } C c=1 . Inspired by the success of CNNs in recent DML models, we employ a deep architecture, i.e., GoogleNet [44], followed by an additional fully-connected layer (an embedding layer) to learn the embedding representations for images. In the training stage, to solve the optimization according to overall loss objective in Eq.(9), we design a DML loss layer with density regularizer on the top of the embedding layer. The loss layer only contains parameters of target density. During learning, it evaluates the model's violation of both the basic DML supervision over pairs and density regularizer, and backpropagates the gradients with respect to target density of each class and input embedding representations to update the parameters of loss layer and lower layers, respectively. The training process of our DML-DA is given in Algorithm 1. IV. EXPERIMENTS We evaluate our DML-DA models by conducting two object recognition tasks (clustering and k-nearest neighbour retrieval) on three image datasets, i.e., Cars196 [22], CUB-200-2011 [45] and Stanford Online Products [3]. The first two are the popular fine-grained object recognition benchmarks and the latter one is a recently released object recognition dataset of online product images. 
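Before the dataset details, it is worth making Section III concrete. Because the extracted text drops Eqs. (5)-(9), the sketch below is a hypothetical reconstruction rather than the authors' exact formulation: the density of each class is taken as its mean embedding-to-centroid distance, a learnable target density α_c is pulled toward the measured value while being encouraged to grow, and the regularizer is added to the base DML loss with weight λ (λ = 10 in the experiments). The inter-class density-correlation term of Eq. (8) is omitted here for brevity.

```python
import torch

def class_density(embeddings):
    """Eq. (5)-style measure: mean distance of a class's embeddings to their centroid
    (a larger value corresponds to lower density of data concentration)."""
    centroid = embeddings.mean(dim=0, keepdim=True)
    return (embeddings - centroid).norm(dim=1).mean()

def density_regularizer(embeddings, labels, alpha):
    """Hypothetical reconstruction of the density regularizer.

    alpha is a trainable vector of per-class target densities (indexed by label).
    The first term adapts each class's measured density toward its target; the
    second term enlarges the target, i.e., pushes toward lower concentration.
    """
    reg = embeddings.new_zeros(())
    classes = labels.unique()
    for c in classes:
        d_avg = class_density(embeddings[labels == c])
        reg = reg + (d_avg - alpha[c]).pow(2) - alpha[c]
    return reg / classes.numel()

def dml_da_objective(base_dml_loss, embeddings, labels, alpha, lam=10.0):
    """Overall objective in the spirit of Eq. (9): DML loss + lambda * regularizer."""
    return base_dml_loss + lam * density_regularizer(embeddings, labels, alpha)
```

In training, the target densities α would live in the loss layer and be updated alongside the network parameters, mirroring the flow of Algorithm 1.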
Cars196 contains 16,185 images belonging to 196 classes of cars. In our experiments, we follow the settings in [3], taking the first 98 classes (8, [38]. Cars196 Method Tri A. Implementation Details For the network architecture, we utilize GoogleNet [44] pre-trained on Imagenet ILSVRC12 dataset [46] plus a fully connected layer (an embedding layer), which is initialized with random weights. For density regularizer, its parameters (i.e., the target density of each class) are all initially set to 0.5. The control factor η in Eq.(7) is set as 0.5 and the tradeoff parameter λ in Eq.(9) is fixed to 10. All the margin parameters (e.g., m p and m t ) are set to 1. We fix the embedding size d as 128 throughout the experiments. We mainly implement DML models based on Caffe [47], which is one of widely adopted deep learning frameworks. Specifically, the network weights are trained by ADAM [48] with 0.9/0.999 momentum. The learning rate is initially set as 5 × 10 −4 , 10 −5 and 3 × 10 −5 on Cars196, CUB-200-2011 and Stanford Online Products, respectively. The minibatch size is set as 100 and the maximum training iteration is set as 30,000 for all the experiments. In the experiments on Cars196 and CUB-200-2011, to compute the density of each class with sufficient images in a mini-batch, we first randomly sample 10 classes from all training classes and then randomly select 10 images for each sampled class, leading to the mini-batch with 100 training images. In the experiments on Stanford Online Products dataset, since each training class contains only 5 images on average, we construct each minibatch by accumulating all the images for randomly sampled classes until the maximum size of mini-batch is achieved. B. Evaluation Metrics and Compared Methods Evaluation Metrics. For the clustering task, we adopt the Normalised Mutual Information (NMI) [49] metric, which is defined as the ratio of mutual information and the average entropy of clusters and labels. For the k-nearest neighbour retrieval task, Recall@K (R@K) is utilized for quantitative evaluation. Given a test image query, its Recall@K score is measured as 1 if an image of the same class is retrieved among the k-nearest neighbours and 0 otherwise. The final metric score is the average of Recall@K for all image queries in the testing set. All the metrics are computed by using the codes 1 released in [3]. Compared Methods. To empirically verify the merit of our proposed DML-DA models, we compared the following stateof-the-art methods: (1) Triplet [21] adopts triplet loss to optimize the deep architecture. (2) Lifted Struct [3] devises a structured prediction objective on the lifted dense pairwise distance matrix within the batch. (3) N-Pair [7] trains DML with N-pair loss. (4) Clustering [38] is a structured prediction based DML model which can be optimized with clustering quality metric. (5) Contrastive [19] uses contrastive loss for DML training. (6) HDC [39] trains the embedding neural network in a cascaded manner by handling samples of different hard level with models of different complexities. (7) DML-DA is the proposal in this paper. DML-DA tri , DML-DA np and DML-DA con denotes that the basic DML model in our DML-DA is equipped with triplet loss, N-pair loss and contrastive loss, respectively. Moreover, a slightly different settings of the three runs are named as DML-DA − tri , DML-DA − np and DML-DA − con , which are all trained without inter-class density correlations preservation constraint. 
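As a concrete reference for the retrieval metric defined above, the snippet below computes Recall@K over a set of test embeddings, excluding each query itself from its neighbour list. It is a generic implementation written for illustration, not the released evaluation code of [3].

```python
import numpy as np

def recall_at_k(embeddings, labels, ks=(1, 2, 4, 8)):
    """Recall@K for k-nearest-neighbour retrieval.

    A query scores 1 at level K if any of its K nearest neighbours (itself
    excluded) shares its class label, and 0 otherwise; scores are averaged
    over all queries.
    """
    # Squared Euclidean distances between all pairs of embeddings.
    sq = (embeddings ** 2).sum(1)
    dist = sq[:, None] + sq[None, :] - 2.0 * embeddings @ embeddings.T
    np.fill_diagonal(dist, np.inf)               # never retrieve the query itself

    order = np.argsort(dist, axis=1)             # neighbours sorted by distance
    hits = labels[order] == labels[:, None]
    return {k: float(hits[:, :k].any(axis=1).mean()) for k in ks}

# Toy example with random 128-D embeddings and 10 classes (illustration only):
rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 128)); emb /= np.linalg.norm(emb, axis=1, keepdims=True)
lab = rng.integers(0, 10, size=200)
print(recall_at_k(emb, lab))
```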
Table I shows the NMI and k-nearest neighbors performance with Recall@K metric of different approaches on Cars196, CUB-200-2011 and Stanford Online Products dataset, respectively. It is worth noting that the dimension of the embedding space in Triplet, N-pair, Contrastive, HDC and our three DML-DA runs is 128, and in Lifted Struct and Clustering, the performances are given by choosing 64 as the embedding dimension. In view that the embedding size is not sensitive towards performance during training and testing phase as studied in [3], we compare directly with results. C. Performance Comparison Overall, the results across all evaluation metrics (NMI and Recall at different depths) and three datasets consistently indicate that our proposed DML-DA con exhibits better performance against all the state-of-the-art techniques. In particular, the NMI and Recall@1 performance of DML-DA con can achieve 65.17% and 77.62%, making the absolute improvement over the best competitor HDC by 3.0% and 6.2% on Cars196, respectively. DML-DA tri , DML-DA np and DML-DA con by integrating density adaptivity makes the absolute improvement over Triplet, N-pair and Contrastive by 19.97%, 14.82% and 9.67% in Recall@1 on Cars196, respectively. The performance trends on the other two datasets are similar with that of Cars196. The results indicate the advantage of exploring density adaptivity in DML training to enhance model generalization. Triplet which only compares an example with one negative example while ignoring negative examples from the rest of the classes performs the worst among all the methods. Lifted Struct, N-pair and Clustering distinguishing an example from all the negative classes lead to a large performance boost against Triplet. the objective function that Lifted Struct, N-pair and Clustering tend to push positive pairs closer through negative pairs and encourage small intra-class variation, while Contrastive could flexibly balance inter-class distance and intra-class similarity by seeking a tradeoff of impact between positive pairs and negative pairs. As indicated by our results, advisably enlarging intra-class variation leads to better performance and makes Contrastive generalize well. This is also consistent with the motivation of our density adaptivity, which is to regularize the degree of data concentration of each class. With our density adaptivity, DML-DA con successfully boosts up the performance on the two datasets. In contrast, the NMI performance of Contrastive is inferior to that of Lifted Struct, N-pair and Clustering on Stanford Online Products. This is expected as the number of classes in Stanford Online Products is too large (more than 11K test classes) and thus Lifted Struct, Npair and Clustering are benefited from the outcome of small intra-class clustering, making the chance of distinguishingly distributing such a large number of classes on the embedding space better. The improvement is also observed by DML-DA con in this extreme case. Furthermore, HDC by handling samples of different hard levels with sub-networks of different depths improves Contrastive, but the performances are still lower than our DML-DA con . Figure 3 compares the Recall@K performance of our DML-DA framework with or without inter-class density correlations preservation constraint on Cars196 dataset. 
The results across different depths (K) of Recall consistently indicate that additionally exploring inter-class density correlations preservation exhibits better performance when exploiting triplet loss, Npair loss and contrastive loss in our DML-DA framework, respectively. Though the performance gain is gradually decreased when going deeper into the retrieval list, our DML-DA framework still leads to apparent improvement, even at Recall@128. In particular, DML-DA tri makes the absolute Fig. 7. Barnes-Hut t-SNE visualization [50] of image embedding representations learnt by our DML-DAcon on the test split of Cars196. Best viewed on a monitor when zoomed in. By integrating density adaptivity in DML training, our DML-DAcon effectively balances the inter-class similarity and intra-class variation, which enhances model generalization. As such, the learnt embedding representation is more discriminative to cluster semantically similar cars despite of the significant variations in pose and body paint. improvement over DML-DA − tri and Triplet by 0.5% and 2.56% in terms of Recall@128, respectively. E. Effect of Different Regularizer Next, we compare our density regularizer with Entropy (EN) regularizer [51] and Global Orthogonal (GOR) regularizer [40] by plugging each of them into DML architecture with triplet loss, N-pair loss and contrastive loss, respectively. The entropy regularizer aims to maximize the entropy of the representation distribution in the embedding space and thus implicitly encourages large intra-class variation and small inter-class distance. The global orthogonal regularizer is to maximize the spread of embedding representations following the property that two non-matching representations are close to orthogonal with a high probability. Figure 4 details the NMI performance gains when exploiting each of the three regularizers on Cars196, CUB-200-2011 and Stanford Online Products dataset, respectively. The results across DML architecture with three types of losses and three datasets consistently indicate that our DA regularizer leads to a larger performance boost against the other two regularizers. Compared to EN regularizer, our DA regularizer is more effective and robust, since we uniquely consider the balance between enlarging intra-class variation and penalizing distribution overlap across different classes in the optimization. GOR Fig. 8. Barnes-Hut t-SNE visualization [50] of image embedding representations learnt by our DML-DAcon on the test split of CUB-200-2011 dataset. Best viewed on a monitor when zoomed in. By integrating density adaptivity in DML training, our DML-DAcon effectively balances the inter-class similarity and intra-class variation, which enhances model generalization. As such, the learnt embedding representation is more discriminative to cluster semantically similar birds despite of the significant variations in view point and background. regularizer targeting for an uniform distribution of examples in the embedding space improves EN regularizer, but the performance is still lower than that of our DA regularizer. This somewhat reveals the weakness of GOR regularizer which performs a strong constraint of pushing two randomly examples from different categories close to orthogonal. In addition, the improvement trends on other evaluation metrics are similar with that of NMI. F. 
Effect of trade-off parameter λ To further clarify the effect of the tradeoff parameter λ in Eq.(9), we illustrate the performance curves of DML-DA with three types of losses by varying λ from 0.5 to 25 in Figure 6. As shown in the figure, our DML-DA architecture with three types of losses constantly indicate that the best NMI performance is attained when the tradeoff parameter λ is set to 10. More importantly, the performance curve for each DML-DA model is relatively smooth as long as λ is larger than 7, that practically eases the selection of λ. [50] of image embedding representations learnt by our DML-DAcon on the test split of Stanford Online Products dataset. Best viewed on a monitor when zoomed in. By integrating density adaptivity in DML training, our DML-DAcon effectively balances the inter-class similarity and intra-class variation, which enhances model generalization. As such, the learnt embedding representation is more discriminative to cluster semantically similar products despite of the significant variations in configuration and illumination. G. Embedding Representations Visualization Contrastive, our DML-DA tri , DML-DA np , and DML-DA con , respectively. Specifically, we utilize all the training 98 classes in Cars196 dataset and the embedding representations of all the 8,054 images are then projected into 2-dimensional space using t-SNE. It is clear that the intra-class variation of the embedding representations learnt by DML-DA tri is larger than those of Triplet, while guaranteeing all the classes separable. Similarly, the increase of intra-class variation is also observed in t-SNE visualization when integrating density adaptivity into N-pair loss and contrastive loss, respectively. To better qualitatively evaluate the learnt embedding representations, we further show the Barnes-Hut t-SNE [50] vi-sualizations of image embedding representations learnt by our DML-DA con on Cars196 dataset, CUB-200-2011 and Stanford Online Products datasets in Figure 7, 8 and 9, respectively. Specifically, we leverage all the images in the test split of each dataset and the 128-dimensional embedding representations of images are then projected into 2-dimensional space using Barnes-Hut t-SNE [50]. It is clear that our learnt embedding representation effectively clusters semantically similar cars/birds/products despite of the significant variations in view point, pose and configuration. V. CONCLUSION In this paper we have investigated the problem of training deep neural networks that are capable of high generalization performance in the context of metric learning. Particularly, we propose a new principle of density adaptivity into the learning of DML, which could lead to the largest possible intra-class variation in the embedding space. More importantly, the density adaptivity can be easily integrated into any existing DML implementations by simply adding one regularizer to the original objective loss. To verify our claim, we have strengthened three types of embedding, i.e., contrastive embedding, N-pair embedding and triplet embedding, with density regularizer. Extensive experiments conducted on three datasets validate our proposal and analysis. More remarkably, we achieve new stateof-the-art performance on all the three datasets. One possible future research direction would be to generalize our density adaptivity scheme to other types of embedding or other tasks with a large amount of classes.
2019-09-09T15:04:26.000Z
2019-09-09T00:00:00.000
{ "year": 2019, "sha1": "b2227dede5b64dc40041e1b7517c1f00fd8dbed1", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://doi.org/10.1109/tmm.2019.2939711", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "b2227dede5b64dc40041e1b7517c1f00fd8dbed1", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science" ] }
263637433
pes2o/s2orc
v3-fos-license
The Implementation of Integrated Management of Childhood Illness (IMCI) in Sick Children from 2 Months up to 5 Years Age Old with Diarrhea in Community Health Center Integrated Management of Childhood Illness (IMCI) is a program that focuses on the main childhood diseases occurring in children under five years of age, with a focus on pneumonia (acute respiratory infection), diarrhea, malaria, measles, dengue hemorrhagic fever (DHF), ear problems, and malnutrition. Diarrhea is a global health problem that causes high morbidity and mortality in developing countries due to poor environmental sanitation and hygiene, inadequate water supply, poverty, and limited access to education. IMCI is a strategy that focuses on the child as a whole, not on a single disease or condition, but on a combination of illnesses that need to be treated in an integrated manner at home and in primary health care facilities. According to the Becora Community Health Center report, cases handled using the IMCI strategy totaled 2,204 in 2021, 4,273 in 2022, and 3,160 from January to June 2023. Diarrhea ranked first, with 584 cases in 2021, an increase to 758 cases in 2022, and 416 cases up to June 2023. The purpose of this residency was to describe the management of diarrhea cases using the IMCI strategy at the Becora Health Center, Dili, Timor Leste. A descriptive design was used, since the aim was only to describe the frequency distribution of cases handled with IMCI and, specifically, to observe the management procedures for cases with diarrhea. During the residency period from 17-29 July 2023, 598 cases were recorded that were not handled according to the IMCI strategy, of which 10.37% (n=62) were diarrhea cases and 0.5% (n=3) were dysentery cases. Observations showed that the IMCI management procedures were not in accordance with the standards or policies issued by the Ministry of Health of Timor Leste. During the residency it appeared that health professionals conducting consultations relied more on general consultations than on the IMCI strategy.
INTRODUCTION According to the United Nations International Children's Fund and the World Health Organization (UNICEF, 2017), Integrated Management of Childhood Illness is the standard for the integrated management of a sick child when the caregiver brings the child to a health facility. IMCI is a strategy that focuses on the whole child rather than a single disease or condition; sick children often come with various diseases related to family health, so they need to be managed in an integrated manner at home and in primary health care facilities. Reports from the World Health Organization (WHO, 2017) show that 1.2 million children under five die from diarrhea each year, namely 23,365 children every week, 3,338 every day, 139 every hour, 2.3 every minute, and 1 child every twenty-six seconds. The report explained that the mortality rate for children under five years of age in 1990 was 90 deaths per 1,000 live births (LH), dropping to 46 deaths per 1,000 LH in 2013. Likewise, the report from the Timor Leste Ministry of Health (MOH-TL) estimates that 2,000 children under five years of age (six children per day) died in 2003. This means a decrease in the Under-five Mortality Rate (U5MR) from 165 to 55 and in the Infant Mortality Rate (IMR) from 126 to 41 deaths per 1,000 LH in 2015. IMCI is an integrated, unified approach to managing sick children with a focus on healthy children aged 0-5 years. IMCI activity is an effort to help reduce morbidity and mortality and to increase the quality of service at various levels of essential health services such as the health center, health post, mobile clinic, and home visits (Pinto J, 2023). According to the results of (Pinto J. G., 2020), 70.6% of cases of diarrhea were handled using the IMCI strategy and the rest were handled using general procedures. However, Timor Leste is committed to improving child health programs based on progress indicators: by 2030 there should be a reduction in the under-five mortality rate from 61 to 27, and a reduction in the infant mortality rate (IMR) from 44 to 21, and to 15 deaths per 1,000 live births in 2030 (MoH, 2011-2030). According to the Becora Health Center Health Information System (HIS) report for 2021 to June 2023, the number of cases reported through IMCI consultation is as follows (Table 1.1): the number of cases of diarrhea and dysentery continued to increase, namely 584 cases of diarrhea and 76 cases of dysentery in 2021; 758 cases of diarrhea and 65 cases of dysentery in 2022; and a spike from January to June 2023, namely 416 cases of diarrhea and 72 cases of dysentery. Likewise, during the period of residency, from 17-29 July 2023, 598 cases were recorded that were handled with the IMCI strategy, of which 62 cases were diarrhea without dehydration and 3 cases were dysentery at the Becora Community Health Center. Factors that affect the management of IMCI include the lack of available supporting facilities such as timers, scales, and digital thermometers, and also limited resources. According to (Leonard A. S.
Dewi, 2021), diarrhea, often called acute gastroenteritis, is the passage of stools with a softer or liquid consistency occurring with a frequency of ≥3 times within 24 hours. Things to note are the frequency of defecation, the consistency of the stool, and the amount of stool. If the stool consistency is not softer or runny, even if defecation is frequent, it is not diarrhea. Babies who are breastfed often have loose bowel movements, and this is not diarrhea either. According to the study of (Utami, 2016), the factors that influence the incidence of diarrhea in children are environmental factors, sociodemographics, and human behavior. According to the research results of (Daviani, 2019), several factors cause diarrhea.
METHOD This exploratory, descriptive study used both quantitative and qualitative methods in conjunction with each other to generate a more in-depth understanding of the quantitative results. While collecting data, the researchers also observed whether health workers conducted consultations using the IMCI strategy, and the availability of supporting facilities for IMCI implementation. The cases selected for interviews, namely caregivers who brought a child aged 2 months to 5 years with complaints of diarrhea, were considered to meet the inclusion criteria. The sampling technique used was non-probability sampling with an accidental sampling technique.
RESULT During the two-week data collection period, 598 cases were recorded, of which 10.4% (n=62) were classified as diarrhea without dehydration and 0.5% (n=3) were dysentery cases. This corresponds to an average of 59-60 cases per day; cases classified as diarrhea without dehydration or dysentery averaged 6-7 per day. The 11 mothers who agreed to undergo in-depth interviews during the period said that they were satisfied with the services provided by the health workers; they knew that their child had diarrhea and immediately took the child for consultation, and the officers provided counseling on how to give zinc and ORS at home, explained the danger signs indicating when to return immediately, and gave the time for a follow-up visit. Supporting facilities such as recording forms, booklet charts, wall charts, ARI timers, and an ORS corner were lacking, and only one officer conducted consultations, assisted by a nurse, covering all children aged 0-59 months and also those aged 6-15 years. One of the important things that motivates health workers to carry out IMCI consultations is regular supervision from the district health office and the Ministry of Health. It was found that health workers blamed one another over IMCI implementation in the health facility, and that routine monitoring, evaluation, and supervision from the competent authority were lacking.
DISCUSSION The number of visits per day reached 59-60 cases, but only one doctor conducted consultations; this exceeded his capacity, so it was not possible to follow the IMCI implementation procedure. In accordance with the results of Setiawan's research (2019), there should be at least 3 IMCI implementing officers who have attended IMCI training, to increase knowledge and understanding of IMCI implementation. Similarly, according to the research (Pinto J.,
"Integrated Management of Childhood Illness (IMCI) implementation at community health centers in Aileu municipality, Timor Leste: Health Workers' Perceptions", 2023), mutual accusations occurred between officers who had attended IMCI training regarding its implementation in the workplace. In accordance with IMCI implementation standards, implementation needs to be supported with adequate facilities to help expedite the IMCI process. A lack of supporting facilities such as recording forms, booklet charts, wall charts, ARI timers, and ORS corners can hamper the IMCI implementation process. The results of the research (Setiawan, 2019) show that the availability of facilities and infrastructure, consisting of the presence of polyclinics and the completeness of tools, was 83.5%, and this was one of the dominant factors that hindered the implementation of IMCI. Likewise, according to the results of research (Wasliah, 2022), supporting infrastructure for the implementation of IMCI includes medical equipment and medicines. Medical equipment consists of ARI timers, digital thermometers, scales, and sterile needles. NLEM drugs are classified as first-choice antibiotics (Co-trimoxazole, Trimethoprim, Sulfamethoxazole syrup or tablets), second-choice antibiotics (Amoxicillin, Nalidixic acid, Tetracycline syrup, tablets, capsules), Paracetamol tablets/syrup, vitamin A 200,000 IU or 100,000 IU, iron syrup (ferrous sulfate) or iron tablets, ORS 200 cc, eye ointment, pyrantel pamoate tablets, 1% gentian violet, and infusion fluids such as RL and 5% Dextrose. Traditional medicines include soy sauce, honey, and lime for mild coughs, warm sweet tea or sugar water to prevent low sugar levels, and sugar-salt solution for diarrhea. Also, according to the IMCI implementation guidelines in Timor Leste (MoH W., 2015), each IMCI consultation room must be equipped with an ORS corner, digital thermometer, ARI timer, booklet chart, wall chart, and recording form to make it easier for officers to implement IMCI. Treatment of sick children who visited the Becora Health Center. The IMCI program is a national program that has contributed to reducing morbidity and mortality in Timor Leste, but it has received little attention from the health authorities, as shown by the unavailability of supporting facilities and the lack of supervision by the Dili Health Office and the Timor Leste Ministry of Health.
CONCLUSION IMCI is a national program and has contributed to reducing morbidity and mortality in Timor Leste, namely that by 2030 the rate could drop to 15 per 1,000 live births. A lack of support for facilities and infrastructure and a lack of trained IMCI personnel can affect the management of IMCI.
Table 1.2. Distribution of Enabling Factors on the Incidence of Rangkah Diarrhea
2023-10-05T15:40:38.502Z
2023-09-30T00:00:00.000
{ "year": 2023, "sha1": "677dbd13ea02430bb92cdfd3dc16d743d654a717", "oa_license": "CCBYSA", "oa_url": "https://jceh.org/index.php/JCEH/article/download/533/303", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "6d40f4c50225bcef511db171251ff5fc341b395c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
236402273
pes2o/s2orc
v3-fos-license
To Calculation Of Bended Elements Working Under The Conditions Of Exposure To High And High Temperatures On The Lateral Force By A New Method The article presents a new method for calculating bending reinforced concrete elements made of conventional and heat-resistant concrete, operating under conditions of elevated and high technological temperatures, for the action of transverse forces. The advantage of the proposed calculation method over the method adopted in the current design standards is shown on the basis of a comparison of the calculation results with the experimental data. INTRODUCTION When designing bending reinforced concrete structures of thermal units, special attention is paid to their calculation for the effect of transverse forces, since these structures often have short lengths and large transverse forces arise during operation. The design of bending reinforced concrete elements operating under conditions of elevated and high technological temperatures for shear force is being improved as experimental and theoretical research accumulates. For this purpose, complex studies of the resistance of bending reinforced concrete elements made of ordinary and heat-resistant concrete to the action of transverse forces under the influence of elevated and high temperatures were carried out, and proposals were developed for calculating the strength of inclined sections [1]. For this, the accumulated experimental material of both the authors and other researchers was analyzed [1][2][3][4][5][6][7][8][9][10]. The essence of the analysis was to compare the results of calculations by the KMK method [11,12] and by the proposed method. RESEARCH METHODOLOGY When developing the new calculation method, simple methods of statistical processing were used. As a result of comparing the experimental data with the calculation results using the new method, the following data were obtained. Analysis of the strength of inclined sections of beams tested with one-sided heating showed that the method of calculating the shear force, developed for elements operating at normal temperatures, can be applied to elements operating at elevated and high temperatures. At the same time, the need was revealed to take into account the change in the strength and deformation properties of concrete and reinforcement during heating, and the features of the stress-strain state of a bent element under conditions of one-sided heating. A system of longitudinal and transverse forces is introduced into the design scheme of the forces of an inclined section of a bent reinforced concrete element operating under one-sided heating: in the concrete above the inclined crack, Nb1 and Qb1; below the inclined crack, Nb2 and Qb2; engagement forces in the inclined crack, Nз and Qз; in the longitudinal reinforcement, Ns and Qs; and axial forces in the transverse reinforcement crossing the inclined crack, Qw (Figure 1). The forces in the longitudinal reinforcement and the forces of engagement in the inclined crack are considered in the form of the total values Nsз = Ns − Nз and Qsз = Qs + Qз, applied at the point of intersection of the inclined crack with the longitudinal reinforcement.
The design condition for the strength of inclined sections of bent reinforced concrete elements operating with one-sided heating is as follows: Q = Qx + Qb1 + Qb2 (1). The transverse force Qx is determined taking into account the maximum heating temperature of the stirrups according to the formula: The transverse force Qb1, carried by the concrete in the compressed zone above the inclined crack, is determined by the formula: The transverse force Qb2, which characterizes the thrust force in the longitudinal reinforcement and the engagement forces in the inclined crack, is determined by the formula: The value of the concrete shear resistance during heating, Rsht, is determined depending on the concrete temperature at a distance of 0.2h0 from the most compressed edge of the section by the expression: where σy is the vertical stress from the local action of the load or of the support reaction. The heights of the compressed concrete zone above the normal crack, X0, and above the inclined crack, X, are determined by the formulas: where ν = 1.5·Est/Ebt; for ordinary concrete elements Z1 = 0.7h0, and for heat-resistant concrete Z1 = 0.6h0. The length of the projection of the inclined crack, C, on the longitudinal axis of the element is determined from the equation of equilibrium of the moments in the lower block below the inclined crack. If the height of the compressed zone of concrete X above the inclined crack, determined by formula (7), turns out to be negative, then the bending moment and shear force that can be carried by the section are calculated by taking the calculated stress diagram in the concrete above the normal crack as triangular, with a maximum Rbtem on the most compressed face of the section. In this case, the height of the compressed zone X0 above the normal cracks is determined as when calculating the strength of normal sections. For small values of the relative shear span (0.5 ≤ a/h0 ≤ 1.5), the design shear force is determined from the condition of the strength of short elements: for elements without transverse reinforcement: for elements with transverse reinforcement: where kbt = 0.7 and kst = 0.9 are coefficients that take into account the deviation of the adopted calculation schemes from the actual ones; γb and γs are coefficients that take into account the effect of the surrounding concrete and reinforcement on the strength of the inclined strip, determined according to the rules for calculating local shear in KMK 2.03.04-98; μw is the transverse reinforcement coefficient; α is the angle of inclination of the calculated strip to the horizontal; β is the angle of inclination of the transverse reinforcement; lр is the calculated width of the inclined strip. Before the failure of reinforced concrete beams along the inclined section under one-sided heating, the total transverse force is carried as follows. In beams without stirrups: compressed concrete above the inclined crack, 16-44%; the total value of the thrust forces in the longitudinal reinforcement and the engagement forces in the inclined crack, 56-84%;
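A minimal numerical sketch of the strength condition in Eq. (1) is given below. The component capacities Qx, Qb1, and Qb2 are hypothetical inputs only; in the paper they are computed from the stirrup heating temperature, Rsht, and the section geometry through formulas not reproduced in this excerpt, so this sketch shows the final check only, not the method itself.

```python
# Illustrative check of the inclined-section strength condition Q <= Qx + Qb1 + Qb2 (Eq. 1).
# All numerical values are made up for demonstration.

def inclined_section_capacity(q_x_kn: float, q_b1_kn: float, q_b2_kn: float) -> float:
    """Total transverse force the inclined section can carry, in kN."""
    return q_x_kn + q_b1_kn + q_b2_kn

def check_strength(q_applied_kn: float, q_x_kn: float, q_b1_kn: float, q_b2_kn: float) -> bool:
    """True if the applied transverse force does not exceed the section capacity."""
    return q_applied_kn <= inclined_section_capacity(q_x_kn, q_b1_kn, q_b2_kn)

# Example with hypothetical forces (kN): applied shear 120, component capacities 40 + 35 + 55.
print(check_strength(q_applied_kn=120.0, q_x_kn=40.0, q_b1_kn=35.0, q_b2_kn=55.0))  # True
```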
2021-07-27T00:05:07.891Z
2021-05-31T00:00:00.000
{ "year": 2021, "sha1": "ee52dd5756962776d06f6f122abaedc5e639d92a", "oa_license": "CCBY", "oa_url": "http://usajournalshub.com/plugins/themes/manuscript-jats/templates/frontend/images/ManuscriptTamplate.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "c4b8674441100a628fff0e46e18ce0701611a845", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [] }
248350016
pes2o/s2orc
v3-fos-license
Enantioselectivity and residue analysis of cycloxaprid and its metabolite in the pile and fermentation processing of Puer tea by ultraperformance liquid chromatography–high‐resolution mass spectrometry Abstract The residues of cycloxaprid enantiomers and metabolites are investigated by ultraperformance liquid chromatography–high-resolution mass spectrometry (UPLC-HRMS) during raw and ripen Puer tea processing. A Chiralpak AG column with a chiral stationary phase of amylose tris(3-chloro-5-methylphenylcarbamate) succeeded in separating 1R,2S-cycloxaprid, 1S,2R-cycloxaprid, and their metabolite, which is identified as nitrylene-imidazolidine. 1R,2S-cycloxaprid is not converted into 1S,2R-cycloxaprid during Puer tea processing. The estimated half-lives of 1R,2S-cycloxaprid and 1S,2R-cycloxaprid are 0.97 and 1.1 h, respectively, and 1R,2S-cycloxaprid decreases more quickly than 1S,2R-cycloxaprid. During raw Puer tea processing, the half-lives of 1R,2S-cycloxaprid and 1S,2R-cycloxaprid are 1.68 h and 1.77 h, but the residue is still detected even after 730 days. However, the half-lives of 1R,2S-cycloxaprid and 1S,2R-cycloxaprid are 0.60 day and 0.63 day during ripen tea processing. The amounts of metabolite are higher in raw tea than in ripen tea; terminal residues are still detected up to 730 days in raw tea. A significant enantioselectivity between 1R,2S-cycloxaprid and 1S,2R-cycloxaprid is observed during raw tea and ripen tea processing. The degradation results show the enantioselectivity of cycloxaprid in raw and ripen Puer tea processing. | INTRODUCTION Neonicotinoids are the most important class of synthetic insecticides for tea protection against piercing-sucking pests (Tomizawa & Casida, 2003). Cycloxaprid is a new neonicotinoid insecticide that has been synthesized and industrialized in China (Li et al., 2011). It is different from traditional neonicotinoids, which act as agonists of native and recombinant nicotinic acetylcholine receptors (nAChRs) (Liu & Casida, 1993; Matsuda et al., 1998; Nishimura et al., 1994; Tomizawa & Casida, 2003), and it shows high insecticidal activity against a broad spectrum of sucking and biting insects (Cui et al., 2012; Shao et al., 2010), which suggests that it can be considered the third generation of neonicotinoids. The molecular structure of cycloxaprid contains a chiral oxabridged cis-configuration leading to a pair of enantiomers, 1R,2S-cycloxaprid and 1S,2R-cycloxaprid (Figure 1). Cycloxaprid is commonly produced and used as a racemic mixture, and stereoselective degradation has been found in soils (Liu et al., 2015). Zhang et al. observed stereoselective uptake and translocation of cycloxaprid in edible vegetables from roots (Zhang et al., 2013). However, Chen et al. (Chen et al., 2017) found opposite results, as evidenced by the lack of a significant difference between the stereoisomers in their fate in aerobic soils, and three main metabolites were found in soil. Metabolites of cycloxaprid are readily formed through photolysis, hydrolysis, and oxidation reactions. Liu et al. (Liu et al., 2015) identified 11 metabolites of cycloxaprid and tracked their changes in flooded and anoxic soils. Shuang et al. (Hou et al., 2017) studied the photostability of cycloxaprid in water and detected 25 photodegradation products; the predominant photodegradation product was named NTN 32,692. Fang et al.
(Fang et al., 2017) reported the degradation dynamics of two neonicotinoids during Lonicera japonica planting, drying, and tea brewing processes. Hou et al. (Hou et al., 2013) compared the dissipation behavior of three neonicotinoid insecticides in tea and found high transfer rates through green or black tea brewing of 80.5% or 81.6% for thiamethoxam, 63.1% or 62.2% for imidacloprid, and 78.3% or 80.6% for acetamiprid. However, the degradation behavior and metabolites of cycloxaprid were still unknown in Puer tea processing. Liquid chromatography-high-resolution mass spectrometry (LC-HRMS) has also been explored and has shown great potential for untargeted profiling in tea (Gao et al., 2019; Jia et al., 2020). Analytical methods for cycloxaprid in tea by LC-HRMS are scarce; only Liu et al. (Liu & Jiang, 2020) reported the stereoisomer behavior of sulfoxaflor determined by LC-HRMS during Puer tea and black tea processing. Therefore, a new analytical method was developed to determine the stereoisomers of cycloxaprid and its metabolite in Puer tea using LC-HRMS. The method was applied to investigate the stereoselective degradation of cycloxaprid during Puer tea processing. The stock solutions were produced by dissolving cycloxaprid in acetonitrile. All solutions were stored in a refrigerator at -18°C. HPLC-grade acetonitrile and methanol were provided by Tedia Company Inc. The initial dose of cycloxaprid of 50 mg/L was prepared with 1 g of cycloxaprid completely dissolved in 5 L of water. Water was purified using a Milli-Q system. | Separation of the metabolite of cycloxaprid One gram of 25% cycloxaprid powder is placed in the sun for 3 d. The sample is dissolved in 20 ml of water, then extracted with acetonitrile. The metabolite for the degradation test is obtained by semipreparative HPLC. The molecular structure of the metabolite is analyzed by LC-HRMS. | The transformation of optically pure compounds in Puer tea processing Optically pure standards of 1R,2S-cycloxaprid or 1S,2R-cycloxaprid (1 mg/L) are respectively added to investigate the transformation of the optically pure compounds in Puer tea processing. Sampling intervals are set at 0, 2 h, 15 h, 24 h, 48 h, 96 h, and 140 h. | Degradation in raw Puer tea Sun-dried Puer tea (20 kg) is obtained and sprayed with a 50 mg/L aqueous solution (25% powder) in March 2019. The raw Puer tea is stored at air temperature (5-28°C) in a dark place. Sampling intervals are set at 0 (2 h), 4 h, 10 h, 16 h, 1 d, 3 d, 6 months, 12 months, 18 months, 24 months, and 36 months. The sample is dried to constant weight, and the residue amounts are calculated on a dry-sample basis. | Ripen Puer tea processing To ferment the ripen Puer tea, 20 kg of sun-dried Puer tea is sprayed with 25% powder as a 50 mg/L aqueous solution to maintain a moisture content of 35%. During the pile fermentation, the fermented tea is artificially turned and piled again at the 7th day. | Calculation of enantiomer fraction The enantiomer fraction (EF) was calculated as EF = R/(R + S), where R is the concentration of 1R,2S-cycloxaprid and S is the concentration of 1S,2R-cycloxaprid. | Sample preparation Five grams of sample was exactly weighed, and 10 ml of water and 20 ml of acetonitrile were added. After the mixture was vortexed, 5 g of NaCl was added. The tube was shaken vigorously for 1 min using a vortex mixer and centrifuged at 5000 rpm for 5 min. The upper-layer solution was mixed with 150 mg PSA and 150 mg anhydrous MgSO4 for cleanup.
After shaking and centrifugation at 5000 rpm for 3 min, 0.5 ml of the upper layer was filtered through a 0.22 μm filter for LC-HRMS analysis. | UPLC-HRMS Analysis Sample analysis was performed on an ultraperformance liquid chromatography-Q Exactive high-resolution mass spectrometry (Thermo Fisher Scientific) system. | Method validation The method was validated with the following parameters: matrix effect, accuracy, linear range, limit of detection (LOD), limit of quantification (LOQ), specificity, and precision. The standard solutions were determined from 2.0 to 100 μg/ml for each enantiomer. Three times the signal-to-noise (S/N) ratio was taken as the LOD for each enantiomer, whereas the LOQ was based on the lowest spiked concentration level. | Chromatographic separation optimization Because of the absence of the oxabridged ring, the metabolite is not a stereoselective molecule: once cleavage occurs on the oxabridge, the metabolite no longer shows enantioselectivity. To simultaneously separate chiral cycloxaprid and the metabolite, reverse-phase chiral columns containing cellulose- and amylose-based polysaccharide materials were employed; a cellulose-based column (Chiralcel OJ-3R) and two amylose-based columns (Chiralpak AD-RH and Chiralpak IG) were tested using a variety of reverse-phase mobile phase combinations. | Stereoselective dissipation of cycloxaprid in Puer tea processing Fermentation lasting from several months to several tens of years is unique to raw Puer tea, so the degradation of cycloxaprid was followed over this period. Stereoselectivity is expressed as the EF value. As shown in Figure 4, the initial EF value of cycloxaprid is >0.50, and the decrease of EF is obvious from 2 h to 730 days. The results show that enantioselectivity is clearly observed during raw Puer tea processing; that is, the degradation of cycloxaprid is enantioselective under raw Puer tea processing. | Stereoselective dissipation of cycloxaprid in ripen Puer tea processing Ripen Puer tea involves a unique process due to the pile fermentation at 45 or 60 days, so the degradation of cycloxaprid is detected from the start of the process. The decrease of EF from 0.56 to 0.44 during ripen Puer tea processing is obvious in Figure 6. The statistical analysis shows that there is a stereoselective preference between the cycloxaprid enantiomers, as evidenced by a significant difference among the stereoisomers and the racemate in ripen Puer tea processing. The results show that the degradation of cycloxaprid is enantioselective under ripen Puer tea processing. | Dissipation of the metabolite in raw and ripen Puer tea processing When metabolites are produced during Puer tea processing, they are not easy to decompose, so the terminal residue is still detected at 730 days in raw Puer tea processing and at 45 days in ripen Puer tea processing (Figure 7). The maximum residue appears at one day (raw Puer tea processing) or earlier (ripen Puer tea processing). The metabolite residues are higher in raw Puer tea than in ripen Puer tea. ACKNOWLEDGEMENT We are grateful for support funded by the National Natural CONFLICT OF INTEREST All authors declare that they have no conflict of interest.
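The two quantities reported throughout this paper, the enantiomer fraction EF = R/(R + S) and the enantiomer half-lives, can be sketched numerically as follows. This is an illustrative calculation only: the residue values below are made up, and the half-life estimate assumes simple first-order kinetics (t1/2 = ln 2 / k), which is the usual basis for such values but is not stated explicitly in this excerpt.

```python
import numpy as np

def enantiomer_fraction(r_conc, s_conc):
    """EF = R / (R + S) for the 1R,2S (R) and 1S,2R (S) enantiomer concentrations."""
    return r_conc / (r_conc + s_conc)

def first_order_half_life(times_h, concentrations):
    """Fit ln(C) = ln(C0) - k*t and return the half-life ln(2)/k (same unit as times)."""
    k = -np.polyfit(times_h, np.log(concentrations), 1)[0]   # negative slope of ln(C) vs t
    return np.log(2.0) / k

# Hypothetical residue data (mg/kg) at sampling times (hours):
t = np.array([0.0, 2.0, 4.0, 10.0, 16.0])
c_r = np.array([1.00, 0.44, 0.19, 0.017, 0.0014])   # 1R,2S-cycloxaprid (example values)
c_s = np.array([1.00, 0.46, 0.21, 0.021, 0.0020])   # 1S,2R-cycloxaprid (example values)

print("EF at t=0:", enantiomer_fraction(c_r[0], c_s[0]))
print("t1/2 (1R,2S):", first_order_half_life(t, c_r), "h")
print("t1/2 (1S,2R):", first_order_half_life(t, c_s), "h")
```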
2022-04-24T15:11:11.858Z
2022-04-22T00:00:00.000
{ "year": 2022, "sha1": "7382b1276c2fc7dd99d91416272fc33073645e19", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "84839d1e8834a9c713b4816183eca4decb229e87", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
51704472
pes2o/s2orc
v3-fos-license
Research and Fabrication of High-Frequency Broadband and Omnidirectional Transmitting Transducer A wide-band cylindrical transducer was developed by using the wide band of the composite material and a matched matching layer for multimode coupling. Firstly, the structural dimensions of the transducer's sensitive component were designed using ANSYS simulation software. Secondly, the piezoelectric composite ring-shaped sensitive component was fabricated by the piezoelectric composite curved-surface forming process, and the matching layer was coated on the periphery of the ring-shaped piezoelectric composite material. Finally, it was encapsulated and the electrodes were drawn out to make a high-frequency broadband, horizontally omnidirectional underwater acoustic transducer prototype. After testing, the working frequency range of the transducer was 230-380 kHz, and the maximum transmitting voltage response was 168 dB in the water. Introduction Sonar is an electronic system that detects and identifies underwater objects by means of acoustic waves. The underwater acoustic transducer is the key component of the sonar system. In recent years, the application of Unmanned Underwater Vehicles (UUVs) has driven the rapid development of medium- and high-frequency underwater acoustic transducers. Generally, the high Q value of a high-frequency transducer results in a narrow working bandwidth and limits information acquisition. However, due to the small beam angle of a planar transducer, the angle over which signals can be transmitted and received is limited. Thus, a hot topic among scholars at home and abroad is how to obtain wider bandwidths and larger beam angles for transducers. In terms of expanding the bandwidth of the transducer, S. Cochran and others in Britain fabricated a transducer with a bandwidth of more than one octave by adding a matching layer to the 1-3 type piezoelectric composite material. Zhang Kai and others fabricated a dual-matching-layer high-frequency broadband transducer with a working range of 43-155 kHz. Although the above-mentioned transducers had expanded bandwidths, their beam angles were small, so it was difficult to realize the transmission or reception of large-angle underwater acoustic signals [1][2][3]. In terms of enlarging the transducer's beam angle, MSI company made an arc-shaped transducer array of 6 rows and 4 columns; its working range was 8-16 kHz, and the horizontal opening angle was 150°. Zhang Kai and others fabricated a high-frequency broadband transducer whose radial vibration frequency was 47.5 kHz and whose working range was 40-80 kHz. Such transducers expanded the beam angle, but the bandwidth was relatively small [4][5][6]. Composite materials The concept of composite materials was proposed in the 1970s. It was defined as a material that combines a piezoelectric ceramic and a polymer in a certain connectivity mode, a certain volume or weight ratio, and a certain spatial geometric distribution. Adding a three-dimensionally connected polymer material to a one-dimensional piezoelectric material to fabricate a 1-3 type piezoelectric composite can increase the loss of the transducing material and increase the bandwidth [7,8]. The mechanical quality factor of the transducing material can be expressed as Q = ωM/R (1), where Q is the mechanical quality factor of the transducing material, ω is the circular frequency, M is the equivalent mass of the transducing material, and R is the sum of the loss resistances of the transducing material.
From Equation (1), it can be seen that increasing the loss of the transducing material can reduce its mechanical quality factor. In addition, the mechanical quality factor can be expressed as Q = fr/Δf (2), where fr is the resonant frequency of the transducing material and Δf is the frequency bandwidth over which the conductance response drops by 3 dB. From Equation (2), it can be seen that a decrease in the mechanical quality factor of the transducing material expands its bandwidth. So, adding a flexible polymer to the piezoelectric material can extend the bandwidth of the transducing material. Adding a matching layer When no matching layer is added, the impedance of the water load is the impedance seen at the surface of the transducing material. After adding a matching layer with a specific acoustic characteristic impedance to the acoustic radiation surface of the transducing material, the impedance of the water load includes the impedance produced by the matching layer in addition to the impedance of the surface of the transducing material. The difference between the two impedances produces two resonant frequencies. Adjusting the thickness of the matching layer brings the two resonant frequencies close enough to couple, expanding the working bandwidth of the transducing material [9,10]. At the same time, using a curved-surface forming process to fabricate a circular piezoelectric composite can increase the beam angle of the transducer. PZT-5A (produced by Risheng Electronics Co., Ltd., Kunshan, China) was selected as the piezoelectric material for fabricating the transducer. At present, in addition to the traditional piezoelectric materials, the high-performance requirements of transducers have led to the development of new piezoelectric materials, including piezoelectric composites, relaxor ferroelectric single crystals, etc. At this stage, the relaxor ferroelectric single crystal has been a research hot spot due to its high piezoelectric coefficient (d33) and high mechanical quality factor (Qm) [11,12]. A transducer based on the relaxor ferroelectric single crystal can increase the sensitivity by 12 dB, the bandwidth by 2-3 times, and the sound source level by 12 dB. In terms of physical properties, the conventional PZT piezoelectric ceramic is relatively hard and is able to exert and withstand large stresses. From a chemical point of view, it is inert and unaffected by moisture and other atmospheric conditions. Its manufacturing method is also relatively simple, and as a transducing material it has excellent piezoelectric properties. Compared with the relaxor ferroelectric single crystal, the PZT-5A piezoelectric material has a higher Curie temperature, so its temperature stability is high, the aging rate is small, the time constant is large, the manufacturing cost is low, it can be molded over a large area, and commercialized application is more mature. Compared to the transducer designed here, the cost of preparing a transducer using a single-crystal material of the same size can be 5-6 times higher. A relaxor ferroelectric single-crystal PMN-PT29 material (produced by the Materials Research Institute, Pennsylvania State University, State College, PA, USA) was selected and compared with the piezoelectric ceramic PZT-5A material. The material parameters of the two are shown in Table 1: For transmitting transducers, low mechanical loss (larger Qm) is generally required to improve the efficiency of emission.
However, sometimes it needs to increase the bandwidth and requires a smaller Q m material. In summary, PZT-5A was chosen as the transducer material to fabricate the transducer. Furthermore, the PZT-5A material was used to prepare the 1-3 type piezoelectric composite. Compared with pure PZT-5A piezoelectric ceramics, the hydrostatic pressure constant g h is 1-2 orders of magnitude higher. Due to the addition of the flexible polymer, the equivalent density of the transducing material is reduced, and the acoustic medium has a good acoustic matching. As a "soft" piezoelectric material, PZT piezoelectric ceramics can be prepared into a desired shape by adding a flexible polymer. This soft nature makes it more resistant to vibration and mechanical shock, which can increase the service life of the transducer in a complex seawater environment. From the beginning to the present, piezoelectric composite materials have been a research hotspot. Many scholars did a large number of theoretical and experimental research on piezoelectric composites, and also tested the impact of ceramic volume fraction on the performance of piezoelectric composites. For transducer applications, the key material parameter is the electromechanical coupling factor and variation of charge constant, which are closely related to device bandwidth and sensitivity. For example, Smith W.A. and Auld B.A. et al. studied the impact of different piezoelectric ceramic volume fractions on electromechanical coupling factor. The results showed that the electromechanical coupling coefficient shows an upward trend within 20% of the volume fraction; it remains stable in the range of 20% to 80%; and it shows a downward trend in the range of 80% to 100% [13]. Chan H.L.W. and Unsworth J. et al. studied the impact of different piezoelectric ceramic volume fractions on the piezoelectric charge constant d 33 . The results showed that the piezoelectric charge constant d 33 shows an upward trend within 40% of the ceramic volume fraction, and basically stabilizes after 40% [14]. T.R. Gururaja et al. tested the impact of different volume fractions on mechanical quality factor. It was found that the composite material has a lower mechanical quality factor than the pure piezoelectric ceramic material, which is advantageous for expanding the bandwidth of the transducer [15]. Based on the above studies, we found that the volume fraction of piezoelectric ceramics is in the range of 40-60%, and its comprehensive performance is optimal. Therefore, we chose to prepare a 1-3 type piezoelectric composite with a piezoelectric ceramic volume fraction of 50%. Finally, the experimental results show that the piezoelectric composite transducer fabricated in this paper has achieved the target of high-frequency wideband and wide beam angle. The design theory and fabrication process will greatly promote the study of the extended bandwidth and the beam angle of the high-frequency transducer. Structure of a High-Frequency Wideband Composite Cylindrical Transducer The structure of the high-frequency broadband composite cylindrical transducer is shown in Figure 1. It consists of piezoelectric composite material, matching layer, hard foam backing, waterproof sound transmission layer, and electrode lead. The piezoelectric composite material is composed of piezoelectric ceramic and flexible polymer. One-dimensional connected piezoelectric ceramic is arranged in a three-dimensional connected flexible polymer to form a 1-3 type piezoelectric composite. 
Its advantage is that it has a purer thickness vibration mode and can realize the transducer working in the high-frequency range. At the same time, this composite structure provides the possibility for the curved-surface forming of the transducer. Therefore, we use the 1-3 type piezoelectric composite material as the sensitive component of the transducer to realize the performance of high-frequency broadband and wide beam angle. Adding a matching layer to the acoustic radiation surface of the sensitive component forms two kinds of vibration modes. Adjusting the thickness of the matching layer enables the coupling of two vibration modes in the water to expand the bandwidth of the transducer. The matching layer can also produce the effect of prestress, which makes the amplitude difference of each point on the vibration surface smaller. The most important part of the transducer is the 1-3 type piezoelectric composite material with matching layer. Its structure with the individual ceramic dimensions and the dimensions of the polymer part is shown in Figure 2. Piezoelectric ceramics are used as active components and their size determines the parameters of the transducer. Since the thickness vibration mode of the piezoelectric ceramic is used in this design, the influence of the thickness of the piezoelectric ceramic on the frequency of the transducer is mainly considered. The finite element simulation of piezoelectric ceramics with different thicknesses was carried out by ANSYS software, and the variation curve of thickness resonance frequency with ceramic thickness was obtained as shown in Figure 3.
It can be seen from Figure 3 that as the thickness increases, the thickness resonance frequency decreases. A piezoelectric ceramic of 5 mm thickness was selected based on the design requirement of operation near 300 kHz. The height of the transducer is determined by the directivity requirements of the transducer in the vertical direction. For composite materials, the vertical directivity calculation formula is shown in Equation (3), where DI is the directivity angle in the vertical direction, h is the height in the vertical direction, λ is the wavelength of the sound wave in the water, and θ is the calculation range. The variation of the directivity angle in the vertical direction with the height can be obtained by Matlab software calculation, as shown in Figure 4. The vertical directivity angle required for the transducer of this design is about 5°, so the height selected from Figure 4 is 50 mm.
For composite materials, the vertical direction directivity calculation formula is shown in Equation (3): where DI is the directivity angle in the vertical direction, h is the height in the vertical direction, λ is the wavelength of the sound wave in the water, and θ is the calculation range. The variation of the directivity angle of the vertical direction with the vertical direction can be obtained by Matlab software calculation, as shown in Figure 4: It can be seen from Figure 3 that as the thickness increases, the thickness resonance frequency decreases. A piezoelectric ceramic of 5 mm thickness was selected based on design requirements near 300 kHz. The height of the transducer is determined by the directivity requirements of the transducer in the vertical direction. For composite materials, the vertical direction directivity calculation formula is shown in Equation (3): where is the directivity angle in the vertical direction, h is the height in the vertical direction, λ is the wavelength of the sound wave in the water, and θ is the calculation range. The variation of the directivity angle of the vertical direction with the vertical direction can be obtained by Matlab software calculation, as shown in Figure 4: The vertical directivity angle required for the transducer of this design is about 5°, so the height selected by Figure 4 is 50 mm. The vertical directivity angle required for the transducer of this design is about 5 • , so the height selected by Figure 4 is 50 mm. Finite Element Simulation of Sensitive Component in the Air The harmonic response of the circular sensing component structure in the air was analyzed by ANSYS finite element simulation software. Firstly, the material parameters of the piezoelectric and polymer material and matching layer were set up. The parameters of PZT-5A were used for piezoelectric materials and the parameters of 618 epoxy resin were used for polymer and matching layer. The reason for selecting PZT-5A piezoelectric ceramics is as described above. Epoxy resin is one of the most widely used thermosetting resins in polymer materials. The most commonly used one is bisphenol A epoxy resin, which is characterized by: thermoplastic resin, good processability, high strength and bonding strength of the cured product, good corrosion resistance and electrical insulation, and small volume shrinkage after curing [16,17]. For the preparation of curved composite materials, liquid bisphenol A epoxy resin is required. The common domestic liquid bisphenol A epoxy resin and the main parameters are shown in the following Table 2: The higher the epoxy value, the lower the viscosity and the greater the brittleness after curing. The flexible polymer required for the composite material designed in this paper must ensure a certain degree of brittleness and a certain degree of toughness. By comparison, the 618-type epoxy resin has a moderate epoxy value and a moderate viscosity. Therefore, the 618-type epoxy resin was selected. The parameters of PZT-5A are density ρ, dielectric constant matrix ε S , piezoelectric constant matrix e, and elastic constant matrix c E . The specific values are shown in Table 3: The parameters of the epoxy resin are density ρ, Young's modulus E, and Poisson's ratio δ. The specific values are shown in Table 4: Secondly, a model of circular ring sensing component was established. To save computing time, the model was designed to be 180 • arc with a thickness of 5 mm. The model diagram is shown in Figure 5. 
The ceramic volume of the 1-3 type composite without the matching layer accounted for 50% of the total volume. Then, the model was meshed into finite elements. Finally, a 1 V voltage was loaded on the outer surface of the arc and a 0 V voltage was loaded on the inner surface of the arc. Symmetric boundary conditions were loaded in the height direction. Harmonic response analysis was carried out in the air over the 200 kHz to 400 kHz frequency range [18]. The damping coefficient was set to 0.02. The design of the matching layer was based on the quarter-wavelength theory [19,20], and its thickness can be calculated by the following formula: t = λ/4 (4), where t is the thickness of the matching layer and λ is the wavelength of the sound wave in the matching layer. The matching layer was made of epoxy resin, and the wavelength of the acoustic wave in the epoxy resin was about 8 mm, so the thickness of the epoxy matching layer was selected to be about 2 mm. Arc models with different thicknesses of the epoxy resin matching layer were established and simulated by ANSYS finite element simulation software. The conductance curves with frequency of the sensitive component with different matching layer thicknesses from 1.7 mm to 2.3 mm are shown in Figure 6. The difference of the resonant peak values of the two vibration modes with different matching layer thicknesses is shown in Figure 7.
It can be seen that two vibration modes are formed after adding the matching layer in Figure 4. The resonant peaks of the matching layers with different thicknesses are also different. The frequency difference between the two resonant peaks increases as the thickness of the matching layer increases. The conductance of the first resonant peak decreases, and the conductance of the second resonant peak increases. From Figure 6, it can be concluded that the difference of the two resonant peaks decreases with the increase of the thickness of the matching layer. The damping coefficient in the air is small and the modes cannot be coupled into one resonant peak. However, the damping coefficient in the water is larger than that in the air, and the two vibration modes can be coupled to expand the bandwidth of the transducer. At the same time, a matching layer of 2 mm thickness is easier to fabricate. So, the thickness of the matching layer to be added is 2 mm. Based on the above ideas, a composite material model without a matching layer and a composite material model with a 2 mm thick epoxy resin matching layer on the outside surface were established. The harmonic response of the two models was calculated, and the conductance curves are shown in Figure 8. Finite Element Simulation of Sensitive Component in the Water Based on the simulation steps of the sensitive component in the air, the near-field water, far-field water, and boundary water models were added to simulate the sensitive component in the water environment.
The parameters set for the simulation of the water environment include density, sound velocity, and boundary admittance, as shown in Table 5. In order to save computing time, we continued to simplify the simulation model. The model was 1/144 of the actual transducer. The height direction and the circumferential direction were respectively loaded with symmetrical boundaries, and the damping coefficient was increased to 0.04. Finally, harmonic response analysis was carried out, and the calculated transmitting voltage response curve is shown in Figure 9. The maximum value of the transmitting voltage response simulated in the water is 166.8 dB in Figure 9. The −3 dB bandwidth of the transmitting voltage response is 128 kHz. This shows that the transducer bandwidth can be expanded by adding a 2 mm thick matching layer on the composite surface.
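The −3 dB bandwidth quoted above can be read off a simulated or measured response curve with a few lines of code. The following is only an illustrative sketch on a synthetic curve (not the paper's data), chosen so that the numbers come out close to those reported:

import numpy as np

# Synthetic example curve: a single broad peak centred near 290 kHz (NOT the simulated data)
freq = np.linspace(200e3, 400e3, 2001)              # Hz
tvr = 166.8 - 3.0 * ((freq - 290e3) / 64e3) ** 2    # dB; -3 dB span is ~128 kHz by construction

def minus3db_bandwidth(freq, tvr):
    """Frequency span over which the response stays within 3 dB of its peak."""
    peak = tvr.max()
    inside = freq[tvr >= peak - 3.0]
    return inside[0], inside[-1], inside[-1] - inside[0]

f_lo, f_hi, bw = minus3db_bandwidth(freq, tvr)
print(f"-3 dB band: {f_lo/1e3:.0f}-{f_hi/1e3:.0f} kHz, bandwidth = {bw/1e3:.0f} kHz")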
Fabrication and Testing of the High-Frequency Broadband Circular Sensitive Element

PZT-5A piezoelectric ceramic blocks produced by the Institute of Acoustics of the Chinese Academy of Sciences were used as the piezoelectric material. The thickness vibration mode of the piezoelectric ceramic was adopted, so polarization in the thickness direction was chosen. The ring-shaped sensitive component was fabricated by an improved cut-filling method. Its fabrication flowchart is shown in Figure 10.

The 1-3 type piezoelectric composite ring was fabricated according to the above fabrication process, and the electrode was coated on the inside and outside surfaces. The conductance was tested with an Agilent 4294A impedance analyzer (Agilent Technologies, Inc., Santa Clara, CA, USA) in the air. The epoxy resin matching layer with a thickness of 2 mm was then added to the outer surface of the composite. After curing, the conductance was again tested with the Agilent 4294A impedance analyzer in the air. The conductance curves of the two models are shown in Figure 11.
It can be seen from Figure 11 that the resonance peak of the composite material without the matching layer appears at 285 kHz and its bandwidth is only 8.5 kHz. However, after adding the 2 mm thick epoxy matching layer, there are two resonant peaks in the conductance curve, at 241 kHz and 356 kHz, respectively. The matching layer and the composite material produce two vibration modes in the air, and the two vibration modes are coupled in the water. The deviation between the experimental results and the simulation results is less than 3%, which meets the design requirements.

Fabrication of the High-Frequency Broadband Circular Transducer

The high-frequency broadband circular transducer is mainly composed of the sensitive component, a rigid foam lining, upper and lower cover plates, a waterproof sound transmission layer, and the electrode lead. The rigid foam lining was embedded in the inside surface of the sensitive component and bonded with epoxy resin. The electrode lead was passed through the upper cover plate and drawn from the circular hole in the middle of the upper cover plate. Then, the sensitive component was put into the mold used to pour the waterproof sound transmission layer. The waterproof sound transmission layer used polyurethane with acoustic properties similar to those of water. The polyurethane was slowly poured into the mold and solidified for 12 h in a 60 °C temperature chamber. Finally, the gap between the circular hole and the electrode lead was sealed. The fabricated cylindrical transducer is shown in Figure 12.
During the fabrication of the sensitive component and the transducer, it was necessary to pay attention to the cleaning of the mold to ensure that there were no impurities in the filling material that could affect the performance of the transducer. The process of curved-surface forming was rather complicated, so in the process of perfusion we had to control the operation time in every step to ensure the consistency of the ceramic arrays. The outer dimension of the circular transducer was φ112 mm × 60 mm in this experiment, in which the composite material element height of the transducer was 50 mm.

Testing of the High-Frequency Broadband Circular Transducer

The impedance performance of the transducer in the water was measured by the Agilent 4294A impedance analyzer after the fabrication of the transducer. The conductance curve in the water is shown in Figure 13.

As can be seen from Figure 13, there is only one resonant peak in the conductance curve, because the damping coefficient in the water is larger than that in the air and the two resonant peaks are coupled in the water into one wide-band resonant peak. Its working frequency range is from 203 kHz to 351 kHz. The bandwidth of the transducer can be assessed not only from the bandwidth of its conductance curve, but also from its bandwidth in the transmitting voltage response curve [21,22]. The transmitting voltage response and directivity of the transducer in the water were measured by the electroacoustic measurement system for underwater transducers. The instruments included in the electroacoustic measurement system are shown in Table 6. The measurement range was set from 180 kHz to 420 kHz. The transmitting voltage response curve with frequency is shown in Figure 14.
The maximum transmitting voltage response is 168 dB, and the working frequency range is 230-380 kHz. Comparing the measured and simulated transmitting voltage responses, the maximum values are basically the same, while the bandwidths differ. Since the bandwidth is related to the damping coefficient, and the damping coefficient in the simulation was set to a fixed value whereas in reality it changes with frequency, some error is expected. Nevertheless, the fabricated transducer has a transmitting voltage response bandwidth that reaches half an octave. The sound source level Lp is generally used to describe the strength of the acoustic signal emitted by an active sonar, that is, the sound power level of the single-frequency transmission. The emitted sound source level curve obtained with the electroacoustic measurement system is shown in Figure 15. It can be seen from Figure 15 that when the highest voltage applied across the transducer is 30 V, the maximum sound source level can reach 199 dB.
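As a hedged illustration of the relation between these numbers (not a calculation from the paper), if the quoted 168 dB transmitting voltage response is referenced to a 1 V drive, the source level for a given drive voltage follows from adding 20·log10 of that voltage:

import math

def source_level(tvr_db, drive_voltage):
    # Source level (dB re 1 uPa·m) from transmitting voltage response (dB re 1 uPa·m/V)
    # and sinusoidal drive amplitude in volts, assuming a 1 V reference for the TVR.
    return tvr_db + 20.0 * math.log10(drive_voltage)

print(source_level(168.0, 30.0))   # ~197.5 dB, of the same order as the ~199 dB quoted above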
The measurement result of the transducer's directivity is shown in Figure 16. It can be seen from Figure 16 that the transducer has omnidirectional directivity in one dimension, and the fluctuation stays within −3 dB. After testing, the transducer parameters were obtained and compared with some advanced curved-surface composite transducers. For example, MSI in the USA provides communication sonar for the US Navy, and M.S. Martins et al. produced a high-frequency wide-beam PVDF transmitting transducer for underwater wireless communication and other fields [23,24]. The specific parameters are compared in Table 7.

Through the above comparison, it is found that the transducer designed in this paper has a lower operating frequency, a bandwidth of 1/2 octave, a higher response voltage, and omnidirectional emission of sound waves in the horizontal direction compared with other advanced transducers. The advantages of this transducer are obvious.

Discussion and Conclusions

In this paper, the bandwidth of the transducer was expanded by means of a composite material and multimode coupling, and a curved-surface forming process was adopted. Finally, an underwater acoustic transducer was fabricated with a working frequency band of 230-380 kHz. The maximum transmitting voltage response in the frequency band was 168 dB. The transducer radiates omnidirectionally in one dimension. The test results are basically consistent with the simulation results. Compared with the same type of transducer, the working bandwidth of the transducer is approximately doubled. The transducer can be applied to underwater vehicles, deep-sea target detection, fine imaging, and so on.
Conflicts of Interest: The authors declare no conflicts of interest.
2018-08-06T13:07:18.873Z
2018-07-01T00:00:00.000
{ "year": 2018, "sha1": "edec41d6d32b3cedb9345dd4439c9b6152e2b898", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/18/7/2347/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "edec41d6d32b3cedb9345dd4439c9b6152e2b898", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Medicine", "Computer Science", "Materials Science" ] }
247617015
pes2o/s2orc
v3-fos-license
Development and external validation of a nomogram for predicting renal function based on preoperative data from in-hospital patients with simple renal cysts Objective To develop and validate a nomogram for predicting renal dysfunction in patients with simple renal cysts (SRCs). Methods We performed a multivariable logistic regression analysis of an in-hospital retrospective cohort of patients with SRCs in the Urology Department of the First Affiliated Hospital of Anhui Medical University. For prognostic model development, 386 patients with SRCs were enrolled from January 2016 to December 2018. External validation was performed in 46 patients with SRCs from January 2019 to April 2019. The primary outcome was renal dysfunction. Results Patients were divided into normal or abnormal estimated glomerular filtration rate groups (293 vs. 93) based on the cut-off value of 90 mL/minute/1.73 m2. Logistical regression analysis determined that age, haemoglobin, globulin, and creatinine might be associated with renal dysfunction, and a novel nomogram was established. Calibration curves showed that the true prediction rate was 77.42%, and decision curve analysis revealed that the nomogram was more effective with threshold probabilities ranging from 0.1 to 0.8. The area under the curves were 0.829, 0.752, and 0.888 in the overall training, internal, and external validation cohorts, respectively. Conclusions We established a nomogram to predict the probability of developing renal dysfunction in patients with SRCs. Introduction Simple renal cysts (SRCs), one of the most frequent kidney diseases, exhibit a prevalence of approximately 27% in patients over 50 years old. 1 The incidence, number, and size of SRCs increase with age. 2 Generally, operative management, including percutaneous aspiration with ethanol injection, open surgery, endoscopic cyst opening, and laparoscopic decortication, is considered in patients with cysts >4 cm in diameter or severe complications. 3 SRCs are found during routine health check-ups, and small cysts cause symptoms and complications, including dull flank pain, hypertension, haematuria, infection, and urinary tract obstruction. 4 Furthermore, whether SRCs are related to renal dysfunction and hypertension has attracted researchers' attention. [5][6][7] An early study showed that the presence of SRCs was related to hypertension but not renal dysfunction. 6 However, other studies have shown that SRCs are associated with renal dysfunction, and an increased cyst diameter contributed to a more rapid decline in renal function in patients with SRCs. 5,7 Therefore, we aimed to further explore the relationship between SRCs, hypertension, and renal dysfunction. Currently, the estimated glomerular filtration rate (eGFR) is routinely used to assess kidney function, but specific tools for the prediction of renal function in patients with SRCs have not been explored. 8 A nomogram, which is a graphic calculation method, can be used to determine the likelihood of a clinical event through approximate graphical computation based on a two-dimensional diagram. Nomograms are widely used to predict oncological outcomes by integrating different patient variables. [9][10][11][12] In addition, a nomogram has been applied to evaluate eGFR in patients with suspected renal cell carcinoma undergoing robot-assisted partial nephrectomy. 13 Therefore, we aimed to use a nomogram to estimate eGFR in patients with SRCs. 
In the present study, clinical and haematological features were collected from in-hospital patients with SRCs who were prepared for the decortication of renal cysts. In addition, multivariable logistic regression was performed to select statistically significant variables. After establishing the nomogram, calibration curves and decision curve analyses (DCA) were applied to validate this model, and the internal and external validation cohorts were used to further assess the model.

Participants

This study is a retrospective cohort study and included patients with Bosniak III and Bosniak IV cystic masses. The risks of observing Bosniak I cystic masses and Bosniak II cystic masses are particularly low; therefore, these patients were treated conservatively. 14 Consecutive patients who underwent laparoscopic renal cyst decortication between 1 January 2016 and 31 December 2018 for SRCs in the First Affiliated Hospital of Anhui Medical University were enrolled in this retrospective study. Additionally, in-hospital patients with SRCs were collected as the external validation cohort from 1 January 2019 to 30 April 2019. Because this study is a retrospective study, it did not require patient consent. This study was approved by the Institutional Review Board of the First Affiliated Hospital of Anhui Medical University on 13 January 2021 (approval number: Quick-PJ 2021-01-10). All surgical patients used the same antibiotic regimen after surgery, and patients did not receive nonsteroidal anti-inflammatory drugs after surgery. No patients received other therapies. The exclusion criteria were patients with renal cysts other than SRCs, other renal diseases except for kidney stones, severe systemic disorders, or severe dysfunction of important organs. We de-identified all patient details to protect their identity.

Datasets

All patients were divided into two groups based on eGFR: the normal eGFR group (≥90 mL/minute/1.73 m²) and the abnormal eGFR group (<90 mL/minute/1.73 m²). This dataset included patients' demographics (age, sex, height, and weight) and health status (hypertension, diabetes mellitus, cyst location, dull flank pain, kidney stones, liver cysts, and haematological features) obtained through the hospital electronic medical record system by two independent researchers. Data regarding serum leukocytes, erythrocytes, thrombocytes, haemoglobin, albumin, globulin, blood urea nitrogen (BUN), creatinine, uric acid (UA), eGFR, triglycerides, high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol, and very-low-density lipoprotein cholesterol were included as haematological features. Body mass index (BMI) was calculated as the weight (kg) divided by the square of the height (m). All recorded variables were measured approximately 1 week before surgery. Participants were removed because of missing relevant indicators (mostly because of missing lipid-related indicators). In the current study, we performed a sample size calculation. We calculated the minimal sample size for the four factors in the nomogram for the training cohort using the following formulas. The formula for categorical independent variables:

n = Z_{α/2}² × p(1 − p) / e²

The formula for continuous independent variables:

n = Z_{α/2}² × σ² / e²

In these formulas, n is the sample size, Z_{α/2} is the critical value of the normal distribution at α/2 (e.g., for a confidence level of 95%, α is 0.05, and the critical value is 1.96), σ² is the population variance for continuous data, p is the population proportion for categorical data, and e is the allowable difference.
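As an illustrative aside, the two sample-size formulas above can be evaluated directly. The p, σ, and e values below are placeholders for illustration only; they are not the study's actual inputs, which are reported in Table S1:

from math import ceil

Z = 1.96  # critical value of the normal distribution for a 95% confidence level

def n_categorical(p, e, z=Z):
    """Minimal sample size for a categorical variable: n = z^2 * p * (1 - p) / e^2."""
    return ceil(z**2 * p * (1 - p) / e**2)

def n_continuous(sigma, e, z=Z):
    """Minimal sample size for a continuous variable: n = z^2 * sigma^2 / e^2."""
    return ceil(z**2 * sigma**2 / e**2)

# Placeholder inputs for illustration only
print(n_categorical(p=0.24, e=0.05))    # e.g., a proportion of 24% with a 5% allowable difference
print(n_continuous(sigma=15.0, e=2.0))  # e.g., a continuous predictor with SD 15 and difference 2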
Then, we determined that the minimal sample sizes for age, haemoglobin, globulin, and creatinine are 8, 310, 237, and 163, respectively. The training cohort contained 386 patients for the analysis; therefore, the sample size is sufficient for the current study. The results are shown in Table S1.

Construction of the nomogram

All patients were assigned to the training cohort to establish the nomogram based on the variables identified in the logistic regression analysis. Continuous variables were transformed into categorical variables based on cut-off values determined from the receiver operating characteristic (ROC) curve through the identification of maximum sensitivity and specificity. [15][16][17] The nomogram was established based on the results of multivariate linear regression using the RMS package in R version 3.5 (www.r-project.org). Calibration curve analysis was performed to describe the consistency of the predicted and observed risks of renal failure. DCA was conducted to assess the clinical utility of the nomogram in the training cohort. The differences between the "true" positive rate and the weighted false-positive rate across threshold probabilities were used to determine the net benefit of the nomogram. The predictive discrimination of the nomogram was evaluated using the ROC curve and the area under the curve (AUC). 18

Validation of the nomogram

Validation of the nomogram was achieved by randomly allocating 100 patients into the internal validation cohort and by using the newly enrolled external in-hospital patient cohort. The AUC values of ROC analysis in the internal and external validation cohorts were used to assess the predictive discrimination of the nomogram. 19 The greater the AUC, the better the accuracy and stability of the established nomogram. 20

Statistical analysis

All collected data were compared to assess significant differences between the two groups, and patient data were excluded from the analysis when an item was missing. The Mann-Whitney U test was used to compare continuous variables that did not exhibit a normal distribution, and the chi-square (χ²) test was used to analyse categorical variables. In addition, the forward LR method of logistic regression analysis was performed to identify significant independent predictors of renal dysfunction by screening all variables in the study. The Hosmer-Lemeshow test was applied to compare the nomogram-predicted probability of an abnormal eGFR and its actual value. All statistical analyses were performed using IBM SPSS Statistics for Windows, Version 22.0 (IBM Corp., Armonk, NY, USA) and the R software package (version 3.5; http://www.r-project.org). P < 0.05 was considered statistically significant, and the reporting of this study conforms to TRIPOD guidelines. 21
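The paper does not give the code used for the cut-off step described above. As a hedged sketch of that step (maximising sensitivity and specificity on the ROC curve, i.e., Youden's index), one could do something like the following, where y is the abnormal-eGFR indicator and x a continuous predictor such as creatinine; the data below are synthetic placeholders:

import numpy as np
from sklearn.metrics import roc_curve

def youden_cutoff(y, x):
    """Cut-off for a continuous predictor that maximises sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(y, x)
    j = tpr - fpr                     # Youden's J at each candidate threshold
    return thresholds[np.argmax(j)]

# Illustrative synthetic data (not the study's cohort)
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 300)                   # 0 = normal eGFR, 1 = abnormal eGFR
x = 70 + 20 * y + rng.normal(0, 15, 300)      # hypothetical creatinine-like predictor

print(youden_cutoff(y, x))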
Patient characteristics

The inclusion process for all patients is shown in Figure 1. All variables obtained from patients with SRCs are listed in Table 1. Among the 386 patients with SRCs, 293 (75.91%) patients were included in the normal eGFR group, and 93 patients (24.09%) were included in the abnormal eGFR group. The following factors significantly differed between the two groups: age (P < 0.001), cyst location (P = 0.002), hypertension (P = 0.005), erythrocyte counts (P = 0.006), haemoglobin (P = 0.033), globulin (P = 0.008), BUN (P = 0.004), creatinine (P < 0.001), UA (P < 0.001), and HDL-C (P = 0.037). In most patients, cyst location, BMI, kidney stones, and hypertension were associated with renal dysfunction. The distribution of these four variables in the normal eGFR and abnormal eGFR groups is presented in Figure 2. Regarding the location of cysts, the rate of bilateral renal cysts was 62.37% in the abnormal eGFR group but only 44.37% in the normal eGFR group. Regarding the distribution of BMI, the prevalence of patients with a BMI >25 kg/m² was greater in the normal eGFR group (33.79%) than in the abnormal eGFR group (26.88%). Kidney stones were not associated with abnormal eGFR, and a similar proportion of patients in the two groups had kidney stones (22.58% vs. 19.45%). Regarding hypertension, 39.78% of patients in the abnormal eGFR group showed increased blood pressure, compared with only 24.57% of patients in the normal eGFR group.

Independent predictive factors and predictive nomogram for renal dysfunction

Logistic regression analysis was performed to identify predictive variables for renal dysfunction, and Table 2 shows that age, haemoglobin, globulin, and creatinine were significantly associated with renal dysfunction in patients with SRCs (all P < 0.05). The cut-off values for haemoglobin, globulin, and creatinine were 135 g/L, 27.2 g/L, and 85 µmol/L, respectively, as identified by ROC analysis (Figure S1). The nomogram for predicting renal dysfunction in patients with SRCs was established by integrating the four independent predictive factors. As shown in Figure 3, age contributed the most to renal dysfunction, followed by creatinine and haemoglobin.

Performance and clinical utility of the nomogram

For the calibration curve, the Hosmer-Lemeshow test was applied to assess the goodness of fit of the nomogram. The bootstrap-corrected coefficient of determination (R²) between the nomogram-predicted probability of eGFR abnormality and its actual value was 0.812. In addition, the Brier score was 0.051, indicating that the rate of correct prediction by the nomogram was approximately 77.42% (Figure 4a). For the DCA, the decision curve (blue line) showed that, over a threshold probability range of 0.1 to 0.8 (i.e., when the risk generated from the nomogram is 0.1 to 0.8), patients whose clinical management was guided by the nomogram benefited more than those under the treat-all scheme (grey line) or the treat-none scheme (black line) (Figure 4b).
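The decision curve analysis reported above follows the standard net-benefit construction. The snippet below is only an illustrative sketch of that calculation on synthetic data; the study's own cohort and software settings are not reproduced here:

import numpy as np

def net_benefit(y_true, risk, threshold):
    """Net benefit of treating patients whose predicted risk exceeds `threshold`:
    NB = TP/n - FP/n * threshold / (1 - threshold)."""
    treat = risk >= threshold
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * threshold / (1.0 - threshold)

# Illustrative synthetic data (not the study's cohort)
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 400)
risk = np.clip(0.25 * y + rng.uniform(0, 0.7, 400), 0, 1)

for pt in (0.1, 0.3, 0.5, 0.8):
    print(pt, round(net_benefit(y, risk, pt), 3))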
Accuracy and stability of the nomogram model

An additional 46 in-hospital patients with SRCs were collected as the external validation cohort. As shown in Figure 5, the x- and y-axes represent the false-positive rate (1 − specificity) and the true positive rate (sensitivity), covering a range of values from the predictive nomogram model. The AUC was 0.829 in the overall training cohort (Figure 5a), 0.752 in the internal validation cohort (Figure 5b), and 0.888 in the external validation cohort (Figure 5c, Table S2). Additionally, Table S3 shows a comparison of the differences between the validation cohort and the training cohort. Together, these results demonstrated that the established nomogram model exhibited sufficient accuracy and stability and could be applied to evaluate the renal function status of patients with SRCs.

Discussion

In recent years, many nomogram models have been explored to predict renal function in the context of urinary disease. For example, a nomogram for assessing significant eGFR reduction in patients with renal cell carcinoma after robotic partial nephrectomy was internally validated and displayed excellent calibration. 13 In addition, two nomograms for predicting renal function 1 year after partial nephrectomy in patients with renal tumours have been established and internally validated using preoperative variables. 22 However, nomograms evaluating eGFR in patients with SRCs are rare. Furthermore, whether renal cysts and renal dysfunction exhibit a causal relationship remains controversial. Tatar et al. 23 reported that SRCs are associated with poor renal function outcomes in patients with a solitary kidney. In Korea, Choi et al. 24 revealed that eGFR is an independent factor associated with the presence of renal cysts and that age, BMI, and hypertension represent other risk factors. Chin et al. 6 demonstrated that the differences in eGFR between the control and cyst groups and the presence of cysts were not related to renal dysfunction, but patients with peripheral cysts had a lower eGFR than patients in the perihilar cyst subgroup. Therefore, investigating the association between SRCs and renal dysfunction and constructing a nomogram for patients with SRCs are urgently needed. In the present study, we provide a predictive nomogram model to evaluate eGFR and the probability of renal dysfunction. Moreover, the accuracy and stability of the nomogram model were assessed. Our major findings were as follows: (1) age, haemoglobin, globulin, and creatinine may be significant predictive factors for renal dysfunction in patients with SRCs, and the cut-off values for haemoglobin, globulin, and creatinine were 135 g/L, 27.2 g/L, and 85 µmol/L, respectively; and (2) the predictive nomogram is accurate and stable, with AUCs of 0.829, 0.752, and 0.888 in the training, internal, and external validation cohorts, respectively. Age is an essential factor affecting eGFR because of the loss of renal mass in ageing people, and a proposed nomogram revealed that age plays a vital role in predicting renal function after partial nephrectomy. 13,25 In the current study, age was significantly associated with renal dysfunction in patients with SRCs and was included in the predictive nomogram. Another study reported that the haematocrit value, haemoglobin, and erythrocyte counts were significantly elevated in patients with SRCs and significantly reduced after surgery, 26 which implicated potential crosstalk between blood parameters, SRCs, and renal dysfunction. Similarly, we revealed that haemoglobin (cut-off 135 g/L) might be a risk factor for renal dysfunction. Previous reports demonstrated that serum β2-microglobulin concentrations could be used to predict neonatal renal function and estimate GFR in infants and adults. [27][28][29] Interestingly, we also found that globulin (>27.2 g/L) was a predictive factor affecting renal function. Regarding creatinine, two retrospective cohort studies that enrolled 1380 and 577 individuals demonstrated that increased serum creatinine may be a risk factor for the development of SRCs. 30,31 Serum creatinine was also identified as one of the preoperative predictors used to evaluate eGFR after partial nephrectomy. 22 Consistently, serum creatinine (>85 µmol/L) was screened as a predictor to assess renal function in our nomogram. Based on a previous study, endogenous creatinine clearance overestimated the GFR and was a poor predictor of renal function in patients with nephrotic syndrome. 32 Therefore, we recommend serum creatinine as a predictive factor of renal function in patients with SRCs.
Collectively, our nomogram established for predicting renal function will serve as a valuable tool in assisting decision-making in patients with SRCs. Our nomogram was established by incorporating four variables, including age, haemoglobin, globulin, and creatinine, and the calibration curve for internal validation showed that the R² was 0.812, indicating a preferable prognostic value of the nomogram. Additionally, DCA demonstrated that the established nomogram could provide greater net benefit in evaluating the abnormal eGFR status of patients with SRCs. For further validation, the AUCs in the training cohort and the two validation cohorts were all >0.75, which demonstrated good accuracy and stability. Therefore, internal validation demonstrated that our nomogram was reliable for clinicians to identify the probability of renal dysfunction. This study has some limitations that cannot be ignored. (1) The small number of patients in the external validation cohort may restrict its widespread use. In future research, we will further validate the clinical use of the nomogram in a multicentre study. (2) The odds ratios of the four independent predictors were very close to 1, potentially because the sample size was not large enough. The relationship between these predictors and renal function requires further research. (3) Moreover, although 22 significant variables were analysed in our study, other important predictive parameters, such as cyst size and precise location, septa in the cyst, and cyst infection, were not included in this study. 33

Conclusion

We established a nomogram that incorporated four preoperative covariates, including age, haemoglobin, globulin, and creatinine, which may predict the probability of renal dysfunction in patients with SRCs. Internal validation analyses based on a calibration curve, DCA, and AUCs indicated that the nomogram exhibited good accuracy and stability. This nomogram might be a useful tool for clinicians to evaluate the risk of renal dysfunction in patients with SRCs.

Acknowledgements: The authors thank a colleague from the School of Public Health, Anhui Medical University, for his help in performing the statistical analysis.

Author contributions: YDC, CZL, and GYL conceived and designed the study. YDC and LC contributed to data collection. YDC, YCX, and MZ analysed the data. YCX, SF, JLM, and MZ contributed materials and analysis tools. YDC, LC, and GYL wrote the manuscript. All authors have read and approved the final version of the manuscript and agreed with the order of the authors. YDC and LC contributed equally to the current work.

Declaration of conflicting interest: The authors declare that there is no conflict of interest.

Funding: This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Supplemental material: Supplemental material for this article is available online.
2022-03-24T06:22:57.174Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "333cca6800acc948ce642a5a8988bac1054bdcc5", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "Sage", "pdf_hash": "c5717a23ecbc1bf6d75530744a66ebdfa74f4a09", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16834019
pes2o/s2orc
v3-fos-license
Embolism of the popliteal artery after anterior cruciate ligament reconstruction: a case report and literature review Arterial complications after anterior cruciate ligament reconstruction (ACLR) are rare. We present a case report of a 44-year-old male patient with a subtotal occlusion of the popliteal artery, with sensory loss in the foot, 17 days after ACLR. Embolectomy and anticoagulant therapy led to full recovery of the peripheral arterial circulation. The sensory loss of the foot also fully recovered. To our knowledge, this is the first case report of an embolus of the popliteal artery after ACLR without relation to graft fixation. A literature review on vascular complications after ACLR is presented.

Introduction

Vascular complications after anterior cruciate ligament reconstruction (ACLR) are rare. Only a few peer-reviewed case reports have been published, involving various reconstruction techniques [1, 3-9]. Allum did not report vascular lesions as a complication after ACLR in a review article on this subject [2]. The origin of vascular lesions after ACLR may be venous or arterial [4]. We present a case of arterial embolism of the popliteal artery after ACLR. A literature review on vascular complications after ACLR is presented.

Case report

A 44-year-old male had a previous medical history of open medial and lateral ligament repair of the right knee 15 years previously (motor accident). Functional instability of the right knee due to ACL deficiency was the reason for referral to our service. There were no signs of posterior or posterolateral instability. An ACLR was performed with a quadruple hamstring graft. The graft was fixed with a Bone Mulch Screw on the femoral side and a WasherLoc device in the tibia (Arthrotek, Inc., Warsaw, USA). The latter is a spiked washer with bicortical screw fixation. Total tourniquet time was 90 min. No complications were noted during or after surgery. Thromboprophylaxis for deep venous thrombosis was given by means of low molecular weight heparin (2500 IU daily) during the hospital stay. The hospital recovery was uneventful. On the 17th postoperative day, he experienced pain and swelling in the popliteal fossa of the right knee. The complaints partially resolved with physiotherapy. Two days later, the fossa pain returned with alterations of skin color, sensory loss and an increasingly cold right foot. He was referred to a vascular surgeon. Adequate dorsal pedal and posterior tibial pulses were noted. Duplex ultrasound examination showed no sign of venous thrombosis. Angiography revealed a subtotal occlusion of the popliteal artery at the level of the superior genicular artery (Fig. 1). An embolectomy was performed using a Fogarty catheter inserted in the femoral artery. The pedal pulses were diminished after embolectomy and a second angiography was performed. The occlusion at the level of the popliteal artery was no longer detected. No further emboli were noted; however, the peripheral flow was qualified as too slow and suspicious for small distal occlusions. Anticoagulant therapy with intravenous heparin as well as epidural analgesia was administered until complete recovery of the peripheral circulation. The patient developed a superficial infection of the groin wound, treated with antibiotics. He was mobilized and discharged after 8 days. Sensory loss of the foot slowly recovered after 4 months. Vascular analysis at rest and during strenuous activity was performed at 4 months.
He had no further complaints, a symmetrical ankle-brachial index in both legs, and intact pulses at the foot and ankle. Vascular analysis did not reveal any other possible cause for arterial emboli. The patient had full range of motion of the right knee, with a Lachman and anterior drawer test of 0-2 mm (International Knee Documentation Committee) and an absent pivot shift test.

Discussion

Vascular complications after ACLR are rare. The origin of vascular lesions may be venous or arterial [4]. A case of fatal pulmonary embolism of venous origin after ACLR has been published recently [5]. The hypothesized cause was a hereditary coagulopathy. Arterial lesions of the popliteal artery after ACLR have been presented in only a few peer-reviewed case reports, even with an all-inside technique of arthroscopic ACL reconstruction and fixation, and with all types of grafts [4]. Roth et al. [8] described an occlusion of the proximal popliteal artery. A composite graft consisting of a polypropylene ligament augmentation and the middle third of the quadriceps-patellar tendon was fixed to the lateral femur with a staple. The artery was trapped between the graft and the femur. A saphenous bypass was performed 6 weeks after surgery. Spalding et al. [9] reported a case of unilateral claudication 8 years after ACL reconstruction with use of a Gore-Tex ligament. A cyst had formed around the femoral insertion of the ruptured synthetic ligament and was excised without vascular repair. Evans et al. [3] reported a pseudoaneurysm of the medial inferior genicular artery following ACL reconstruction using a central third patellar tendon graft fixed with interference screws. At 5 weeks, ligation of the artery and removal of the thrombus led to full recovery. The cause of the lesion was elevation of the periosteum on the medial side of the tibia for tibial tunnel preparation. Aldridge et al. [1] described an avulsion of the middle genicular artery after a bone-patellar tendon-bone autograft fixed with interference screws. Surgical exploration at 4 weeks revealed a tear in the popliteal artery. There was no rupture of the posterior capsule. The probable cause of the avulsion of the middle genicular artery was the debridement of the femoral ACL remnant tissue. We have previously published two cases of popliteal artery lesions caused by a drill for bicortical tibia fixation after quadruple hamstring ACLR [6,7]. In the first case, the drill had caused an intimal lesion at the level of the infragenicular popliteal artery which led to the pseudoaneurysm. Vascular repair was performed 12 days after ACL reconstruction, but sensory loss of the saphenous and medial plantar nerves was still present at 4 months of follow-up [7]. The second case was a simultaneous traumatic pseudoaneurysm and thrombosis of the popliteal artery after ACLR. At surgical exploration, the thrombosis was in line with the drill hole for bicortical tibial fixation. There was no apparent relation of the femoral fixation device to the pseudoaneurysm of the supragenicular popliteal artery. This pseudoaneurysm was thought to be pre-existent. The pseudoaneurysm was ligated and a venous jump graft was performed to bypass the thrombosis located more distally in the popliteal artery [6]. In this review of the literature on arterial complications after ACLR, all cases are associated with direct damage to the popliteal artery at the time of ACLR [1, 3, 6-9]. There was no apparent direct damage to the popliteal artery in the 44-year-old male patient presented in this case report.
The popliteal artery occlusion was not in line with either the femoral or the tibial fixation device. Vascular analysis did not reveal any pre-existent vascular causes for arterial embolus formation proximal to the popliteal artery. Our hypothesis of the cause was the traumatic knee dislocation 15 years previously. Precursors could have been pre-existent intimal vascular damage or adhesions of the artery at the level of the superior genicular artery, in combination with the use of the tourniquet and ACLR.

Conclusion

Awareness of possible arterial complications after ACL reconstruction is essential for early diagnosis. Clinical symptoms of pain in the popliteal fossa and sensory deficits in the lower leg and foot should prompt the physician to analyze possible injuries of the popliteal artery. The differential diagnosis should include compartment syndrome and deep venous thrombosis. Doppler examination as well as intact pedal arterial pulses are unreliable in diagnosing arterial lesions after ACL reconstruction. Contrast-, CT- or MRI-angiography are the diagnostic tools of choice. Immediate surgical exploration is indicated to limit limb ischemia and neurological damage [4].
2014-10-01T00:00:00.000Z
2007-06-20T00:00:00.000
{ "year": 2007, "sha1": "a1c231979948e1b18fa198b9442edc9a196ab556", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00167-007-0363-3.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "a1c231979948e1b18fa198b9442edc9a196ab556", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
148654772
pes2o/s2orc
v3-fos-license
Entrepreneurship Education in the Caribbean: Learning and Teaching Tools This article reports on research that took place over two academic years (September 2013 to April 2015). It provides a rich understanding of entrepreneurship education based on experiential knowledge and best practices from five entrepreneurship educators who have all worked as consultants to entrepreneurs and advisors to the government on entrepreneurship, and who have taught entrepreneurship at the tertiary level for several years in the Caribbean. The findings illustrate that experiences, sense of purpose, reflective practice, lecturer's passion, mentoring, simulation and practice are seen to collectively offer a significant contribution to learning. Further, the findings support the view that teachers of entrepreneurship should draw upon highly developed techniques in their range of teaching methods that demonstrate aptitude for the subject matter. The participants agreed that, ideally, the ultimate course goal is to support students in remembering techniques learned in an entrepreneurship class that contribute to gaining confidence in setting up their own venture and that assist with avoiding pitfalls. The purpose of the research article is to provide methodical insight that will improve the entrepreneurial orientation of students in entrepreneurship classes. The present study provides particular insights about teaching and learning entrepreneurship in the Caribbean setting, which contribute to the debate on integrating innovative teaching practices for non-business students.

Entrepreneurship Education

The concept of entrepreneurship education is highly contested. Earlier studies narrowly define entrepreneurship education as education that provides the needed skills for setting up new business ventures (Alberti, Sciascia & Poli, 2004; Cho, 1998; Vesper, 1993). While this definition of entrepreneurship education has survived over the years, it only provides a basic understanding of what entrepreneurship education really is, what it comprises, and its impact (Rideout & Gray, 2013). An expanded view raised by Martin (as cited in Birdthistle, Hynes & Fleming, 2007) suggested that entrepreneurship education involves the creation of entrepreneurial attitudes and skills and not simply training for business start-up. Jones and English (2004) referred to entrepreneurship education as "the process of providing individuals with the ability to recognise commercial opportunities and the insight, self-esteem, knowledge and skills to act on them" (p. 416). Jones and English further posited that entrepreneurship education incorporates content from traditional business disciplines such as management, marketing and finance. This perspective challenges Kirby's (2002) position that entrepreneurship education is different from traditional management studies, as traditional management education may impede the development of the necessary entrepreneurial qualities and skills. Meanwhile, a number of policy makers, practitioners and educators in developed economies still believe that entrepreneurship education should only be concerned with the creation of new ventures and new jobs (Fayolle & Gailly, 2008). Samwel Mwasalwiba (2010) suggests that entrepreneurship education stakeholders, such as policymakers, academicians, and students, have an interest in this field of study due to the 'perceived socio-economic benefits' that can be achieved both at the individual and societal level (p.
21). This speaks to the potential impact that entrepreneurship education has on society. According to Raposo & Do Paço (2011): Entrepreneurship education seeks to propose people, especially young people, to be responsible, as well as enterprising individuals who became entrepreneurs or entrepreneurial thinkers who contribute to economic development and sustainable communities…through entrepreneurship education, students learn how to create business, but they also learn a lot more. (p. 454) Many universities have entrepreneurship classes as part of their business schools' programmes. This format leads to the marginalization of the needs of non-business school students: as Standish-Kuon and Rice (2002) put forward, introducing engineering and science students to entrepreneurship requires better understanding, while even less is known about teaching entrepreneurship in non-technical disciplines such as nursing, law and the educational sciences. Entrepreneurship education needs a different teaching pedagogy. This premise has been explored through assessing the relationship of entrepreneurship education to work-related learning (Dwerryhouse, 2001), experiential learning (Kolb, 1984), action-learning (Smith, 2001), and entrepreneurial training (Gibb, 1999). As this research seeks to highlight aspects of pedagogy observed at the University of the West Indies, the next section of this article focuses on the learning theories as a core part of the development of the pedagogy of entrepreneurship education.

Learning Theories

Learning theories help describe how people learn and thus help in identifying best practices for teaching (Pounder, 2014). In the case of entrepreneurship education, the author suggests using an approach that integrates and allows the processing of knowledge through inductive and deductive reasoning (respectively known as the "bottom-up" approach, which is more open-ended and exploratory, and the "top-down" approach, which is more focussed and linked to proving hypotheses), practice-based learning, stakeholder-driven assessment priorities, and meaningful shared experiences (Blenker, Dreisler, & Kjeldsen, 2006; Charney & Libecap, 2000; Duval-Couetil, 2013; Neck & Greene, 2011; Nelson & Johnson, 1997). In this regard, similar to Neergaard, Tanggaard, Krueger & Robinson (2012), the author seeks to contribute to the development of entrepreneurship education teaching pedagogy, as suggested by Yu Cheng, Sei Chan, & Mahmood (2009).

Behaviorism

Behaviorism offers a particular perspective on how learning occurs and how teaching influences the process. The main pedagogical reason for teaching entrepreneurship using behaviourism is that it encourages learning of 'facts'. In addition, it addresses the content of that which is being taught, such as skills and tools, which include business plans and simulations with regard to decision making (Neergaard, Tanggaard, Krueger, & Robinson, 2012). The focus of behaviorism is on observation of movement and activities in response to external stimuli (Alzaghoul, 2012; Tomic, 1993; Williams, 1986). It stresses the importance of specific, measurable, attainable and observable performance and the impact of the environment on the learning experience (Brown & Green, 2006; McLeod, 2003; Pham, 2011; Shield, 2000). Thus, it embraces 'learning about' entrepreneurship, representing the traditional way of understanding learning (Neergaard, Tanggaard, Krueger, & Robinson, 2012).
To benefit from behaviourist theories in entrepreneurship teaching, students should be guided to connect with the learning process through positive reinforcement from the lecturer, reinforcing positive actions of engagement, contributions, feedback and questioning. Although numerous entrepreneurship courses still tend to invoke behaviourist methods, in many universities these have been replaced by more experiential approaches (Neergaard, Tanggaard, Krueger & Robinson, 2012).
Cognitivism
Cognitivism as a learning theory takes a different pathway than behaviourism; in this particular pedagogy, learning is understood to be structural and computational (Clarke, 2013). The focus of cognitivism is based on evaluating, processing and memory, as it is concerned with the internal workings of the brain and how the mind processes information to endorse effective learning (Cooper, 1993; Ally, 2004; Siemens, 2004). Kohler (1947) in his research emphasized that some information is retained while part is lost during the initial learning process, as it is only stored in short-term memory. It is then up to the teacher to institute active learning, which allows the student to engage in learning experiences that create long-term memory. Bruner (1976) developed a set of principles adapted by the individual which speak to acquiring initial new information, transforming the information, and then evaluating the information. From an entrepreneurial perspective on education, the learner requires assistance to develop prior knowledge and integrate new knowledge to identify and take advantage of opportunities.
Constructivism
Constructivist learning theory bases its principles on aiding learning rather than controlling learning, as is the case with behaviourism (Lober, 2006). This is especially relevant where the learning outcome is not predictable, which is potentially the case with entrepreneurship education. In the teaching of entrepreneurship, students develop a level of insight and confidence from practicing methods for navigating unknown territories and from experiencing success and failure as in the real world. Entrepreneurship education allows for constructivist methodologies, given the innovative and active nature of entrepreneurship, where students engage as active agents in the learning process, requiring them to do and reflect upon meaningful learning activities (Romero, 2013; Solomon, Duffy, & Tarabishy, 2002). Kohler (1947) hypothesized that learning occurs when an individual has insight that shows a relationship between two distinct components of a larger system or problem. Thus, as Lober (2006) suggested, the constructivist approach needs a special learning environment that has to be created by the teacher, who is not the governor of the student's learning process but rather supports and facilitates learning from a student-centred point of view. From an entrepreneurial perspective, this encourages a speculative approach to new venture development, as high risks are involved at this stage; but the entrepreneur must be trained to spot and handle these opportunities as they arise.
Transformative
Principles of transformative theory focus on effective change and the application and transfer of learning into action (Cranton, 1996; Mezirow, 1991, 1996). Dyson (2010) emphasized the importance of teacher education and bases this on the theory of transformative learning. The transfer of learning into a decision-making form is the main focus of such learning techniques.
Caffarella (2002) defined transfer of learning as the effective application by program participants of what they learned as a result of attending an education or training program. It should also be noted that there is a natural barrier highlighted in transformative theory, as research has shown that there is little match between the learning environment and the implementation and execution phases of entrepreneurship.
Connectivism
Connectivism's focus is on recognition and bonding (Clarke, 1997). Recognition implies the identification of something as having been previously seen, heard, and/or known. The recognition and exploitation of business opportunities in the market are core functions of entrepreneurship (Casson, 1982; Hills & Shrader, 1998; Kirzner, 1979; Schumpeter, 1971). Bonding, on the other hand, speaks to the emotional and physical attachment occurring between the student and the information shared, and is the basis for further emotional affiliation. In addition, connectivism bases its principles on knowledge, which is distributed across an information network and can be stored in a variety of digital formats (Kop and Hill, 2008). One way in which students of entrepreneurship can be distinguished is by the style with which they engage in entrepreneurial learning and their interaction with existing entrepreneurs, as this is an effective way of gathering experiential knowledge.
Methods
The author uses a thematic qualitative research design, which allows for a systematic subjective approach to describe the experiences of faculty and give them meaning. The approach examines the uniqueness of faculty members' teaching and learning situations, with each faculty member having their own reality. The approach further emphasizes identifying, assessing, and highlighting themes, with additional assessment observing their linkages to the learning theories. Interviews were conducted among five entrepreneurship faculty members to gain insight: to explore the depth, richness, and complexity inherent in their teaching practices. The interview questions focused on areas related to the philosophy of education, teaching techniques, and evaluation. The rationale for the questions on teaching philosophies was to gather as much information as possible on how faculty members view the ways a student prefers to learn and to identify what teaching practices they have found to be successful. The justification for the questions on teaching techniques was to identify best practices and how to incorporate them in classroom sessions. The reason for the questions on evaluation was to gain insights on how faculty members gauge their performance. This led to further probing, selecting themes of interest, and reporting on them (Tuckett, 2005). The selection of the interviewees for this study was driven by the researcher's belief that each respondent would bring worthwhile information to entrepreneurship education, which was a core part of the study under investigation. The interviewees came from a variety of backgrounds that make up the entrepreneurship ecosystem and were faculty members teaching entrepreneurship. They also had a reputation for promoting and developing youth entrepreneurship curricula. The current study seeks answers to the following research question: RQ: Which of the themes identified in entrepreneurship education are grounded in the five learning theories?
Feedback Instrument
Interviews and feedback forms were facilitated by the author of the research.
Components of the feedback instrument included: background information on the lecturer and classes taught, the lecturer's philosophy on teaching entrepreneurship, and teaching techniques utilized in sessions. Thematic analysis, which is a widely used qualitative analytic method (Boyatzis, 1998; Roulston, 2001; Tuckett, 2005), was conducted, and seven techniques utilized in teaching entrepreneurship resulted: experiences, sense of purpose, reflective practice, lecturer's passion, mentoring, and simulation and practice.
Participants
The faculty interviewed had an average age of 45 years and were relatively young for university faculty, but each member had over 15 years working in the entrepreneurship ecosystem. Table 1 gives a profile of the faculty members and their assorted teaching expertise. The two part-time faculty members worked for the Government, offering technical assistance and various forms of financing to entrepreneurial ventures.
Description of Data Collection and Analysis
Feedback was collected through interviews with entrepreneurship faculty members. The interview questions focused on areas related to the philosophy of education, teaching techniques, and evaluation. Participants were given opportunities to elaborate on their responses, and follow-up face-to-face or phone interviews were conducted to expand on teaching styles. In some instances, this was thought to bring about important insights into what entrepreneurship education methods entailed. Major trends and patterns were highlighted and synthesized in the findings as they became apparent. From the analysis of the reviewed literature and the methods of teaching used, a conceptual discussion of the themes and their integrated approach to entrepreneurship education and the learning theories was presented. The analysis and conclusion discuss where the themes identified in entrepreneurship education are grounded in the five learning theories.
Ethical Considerations
Consideration was given to the position of the University of the West Indies Ethics Committee and policy for ensuring that the research conformed to approved principles and conditions. Each lecturer was made aware of the study through a phone call or face-to-face contact and then offered a chance to participate in the research. A brief outline of the research study was discussed prior to the research being conducted. Lecturers were also informed of how much time they would be expected to give and what use would be made of the information they provided. It was noted that where the researcher observed any direct use of lecturers' material, the confidentiality policy of the University of the West Indies would be respected.
Findings and Discussion
The findings are presented in this section and documented based on the major themes coming out of the research. The research question below is also emphasized in this section: RQ: Which of the themes identified in entrepreneurship education are grounded in the five learning theories?
Experiences
The experiences of the students and the lecturer as they interact with entrepreneurs are a good basis for teaching and learning aspects of entrepreneurial learning. Based on student experiences and within the context of behaviourism, entrepreneurship students can reproduce and reinforce appropriate entrepreneurial and enterprising behaviour observed in the business environment (Neergaard, Tanggaard, Krueger & Robinson, 2012).
The constructivist learning theory is also prevalent and observed when taking student experiences into consideration, as it argues that people produce knowledge and form meaning based upon their experiences. From a cognitive standpoint, students' personal mental models of what it takes to be an entrepreneur are developed during this learning process (Krueger, 2009). It is through forms of students' interaction among entrepreneurs and themselves that a constructivist teaching approach is created through meaningful shared experiences (Blenker, Dreisler, & Kjeldsen, 2006; Charney & Libecap, 2000; Duval-Couetil, 2013; Neck & Greene, 2011; Nelson & Johnson, 1997). The lecturers indicated that using appropriate methodologies added value to the experiences discussion. Lecturers A and C noted that they bring examples that are practical, while Lecturer B stressed that they conduct probing, stating "I provide context on what is going on in the environment" and then have further discussion. The constructivist theory maintains that students should learn to build their own knowledge rather than having knowledge given to them, thus supporting probing as an appropriate teaching method. The lecturers concurred that discussions force students to articulate and defend positions to display their reasoning to others and to accept and respond to criticism (Christensen et al., 1991). At the end of the discussion on experiences, the lecturer should have been able to work towards a specific goal, while clarifying students' understanding and views in respect to the discussion. Lecturer A also indicated that they focus on fashioning learning experiences for members in the class. Lecturer A further highlighted: "the more experiences that come to light, the richer the class discussion becomes", especially as students are going through the transformation process of changes in behavior which are intended to alter the desired outcome. The findings reflect the thoughts of Stansberry and Kymes (2007), which show that the values of constructivism are essential to transformative learning because knowledge and meaning are a direct result of experience. As an opportunity to learn from other class members is created, the concept of agreeing to disagree at times is emphasized. Lecturer B tells the class in their first session to "check all inhibitions, sensitivities and insecurity at the door". This changes the class mode to one where freedom of expression is dominant and open-mindedness is encouraged. The lecturers agreed that fostering dialogue in class or through online forums is essential to having students discuss matters related to entrepreneurship. This is in keeping with Ravenscroft (2011), who suggested that connectivism in education has given rise to a new type of dialogue through social, networked learning. Lecturer D stated "I talk about extreme cases to capture their attention." Lecturer E stated "tutorial sessions are more practical since students are usually given activities where they are either acting as a business advisor or an entrepreneur." Experiential learning is further formulated based on the student and not the teacher. The student is involved in carrying out activities, formulating questions, conducting experiments, solving problems, being creative and creating meaning from the acquired experience (Esters, 2004). Lecturer E identified a synthesis approach based on the development of new concepts.
The real advantage here is that the experiential learning practice is a learner-centred approach that caters to individual learning styles.
Sense of Purpose
The teaching methods utilized for entrepreneurship were focused on the purpose of the activity, which gives rise to behaviorism (Tomic, 1993; Alzaghoul, 2012). The rationale behind most students wanting to take entrepreneurial classes is that it develops their understanding of the entrepreneurial business process and how they might become involved in those processes in their future careers. It is necessary to relate the course work not just to creating entrepreneurs but also to supporting entrepreneurs. The goal here is to have students identify their entrepreneurial interests through a combination of exploration, role-play, readings, and close interaction with successful entrepreneurs and service providers. This technique gave students a sense of purpose as they gained the courage to envisage and pursue opportunities in a constructivist way. This form of constructivism aligns with Solomon, Duffy, and Tarabishy (2002) and Romero (2013), who state that students who are engaged as active agents in the learning process (requiring them to do and reflect upon meaningful learning activities) are taking part in active learning. Lecturer D stated: "I make students present on topics after further research." The willingness to go after things and take on role play showed the benefits of behaviourism and constructivism in the classroom. All the lecturers recognized the importance of exposing students to guest lecturers that represent varying forms of the entrepreneurship ecosystem, e.g., successful entrepreneurs, informal sector entrepreneurs, serial entrepreneurs, social entrepreneurs and service providers. The theory of behaviorism and exposing students to appropriate entrepreneurial and enterprising behaviour proved positive for student learning (Neergaard, Tanggaard, Krueger, & Robinson, 2012). What is also noticeable is that Lecturers A, B, C, D and E, as specialists in their own right, also facilitate guest lectures for each other. Because knowledge is distributed across a network of faculty, this guest lecturing among faculty members meant that an account of connectivism was showcased within the department through recognition of in-house talents and bonding (Clarke, 1997). For lasting benefits, each interactive teaching method must be designed around the intentions and desired outcomes. While all the lecturers use some form of interactive teaching method, Lecturers A and B specifically utilize games and simulations that showcase entrepreneurship behaviour, opportunity and finance issues like cash flow. These simulations showcase a connectivist approach. The activities must instil a newfound purpose within the student. Lecturer C creates purpose by telling the class "Everyone should leave here with marketable skills," while Lecturer A states "I am preparing you for the test of the world." From a teaching perspective, faculty engaged students' sense of purpose by exposing them to relevant current readings and case studies to allow for closer interaction on topical areas. All lecturers were also involved in developing Caribbean case studies for teaching. All lecturers identified some seminal readings and prolific authors of entrepreneurship that they were exposed to and that they in turn exposed students to as well.
International authors like Jeffrey Timmons, Peter Drucker and Donald Kuratko provided much of the seminal reading for original thought. Faculty also engaged guest speakers as part of panel discussions and the presentation process; students found this style more engaging than watching video clips, which they thought were more removed from their current situation.
Reflective Practice
Reflection is the active process of witnessing one's own experience in order to take a closer look at it every now and then, to direct attention to it briefly, but often to explore it in greater depth. This is a cognitivist approach based on structural and computational actions (Clarke, 2013) and a heightened activity that some lecturers use when teaching entrepreneurship. Reflecting on what is learned is a sure way to make students own their own knowledge (Banner et al., 1993, p. 32). This highlights a behaviourist approach, as students are motivated by success and place more importance on reflection of acceptance and extrinsic rewards. Reflection can be done in the midst of an activity or as a separate activity in itself. Through reflective practice, students should reflect frequently, bringing a high level of awareness to their thoughts and actions, perhaps stopping occasionally to consider what could be learned by exploring their patterns of thinking across different entrepreneurial situations. Lecturer C uses early feedback as a means of engaging reflection. Lecturer D states "students reflect on the course through their life experiences". Lecturer B uses role playing as part of a reflection exercise. Encouraging reflection along with the activity structure has proven to be an effective component of the cycle for students (Miettinen, 2000). Role playing is also viewed as a level of connectivism, as shown by its focus on bonding (Clarke, 1997). This is in alignment with cognitivists, as they emphasize the motivating effect of learners as problem solvers or information seekers. Especially where learning is understood to be structural and computational (Clarke, 2013), Lecturer A highlights the use of a reflection journal as a core learning strategy.
Lecturer's Passion
When entrepreneurship is taught, the type of person the educator is will emerge. The findings show that the lecturer's passion must reflect positive and enterprising behaviourism to be successful, and this is in alignment with Neergaard, Tanggaard, Krueger, and Robinson (2012). The lecturer must try to instil a culture that allows the learning of entrepreneurship to take place without prejudiced tutoring. The lecturer's principles need to be in tune with the course of teachings. Ironically, many of the same characteristics that make a good entrepreneur make a good entrepreneurship teacher: being resilient, adding value, willingness to explore, seeking opportunity, visionary planning, ability to adapt to change easily, and understanding the customer. Lecturers need to think about curriculum and lesson plans like entrepreneurs and entrepreneurial support groups think about business development. Such teaching plans will only enhance the student experience and improve the entrepreneurial educational experience. Sustaining the lecturer's passion and being a good teacher is challenging, as the topics surrounding entrepreneurship are very complex.
The critical issue here is creating a connectivist learning environment inside and outside the classroom (Kop and Hill, 2008) that enhances the students' ability to really understand the material and to stimulate an interest in the entrepreneurship process. This stimulated interest of the student is in agreement with the effective change highlighted by transformative theory (Mezirow, 1991). The lecturer's passion is what makes students want to study more. Lecturer A stated, "I let students see how classroom topics apply to the world beyond the classroom." A passionate entrepreneurship lecturer will get students interested and even excited about what they are learning. Lecturer B states "I have individual heart-to-heart discussions where students express fears, expectations and tensions"; it was also mentioned that sessions undertake some psychological and spiritual components. Further to this, teachers can encourage entrepreneurship speakers as guest lecturers to make presentations or join online discussions; this method would allow students to draw on famous and successful entrepreneurs who visit the educational institution to discuss ideas, opportunities and new venture management. This form of information networking and digital format is harmonious with connectivism as defined by Kop and Hill (2008). It was the view of the entire faculty that before individuals can teach entrepreneurship, they must have a passion and love for the topic. They must also be willing to share this passion with the students. Each faculty member highlighted their level of passion for teaching entrepreneurship through various perspectives. Lecturers A and B facilitate site visits, and these usually highlight entrepreneurs and other key people doing what they love. Words like "obsession," "infatuation," and "enthusiasm" have been used to describe the teaching philosophy of the faculty interviewed. Lecturers B and C talked about using local vernacular to spark discussion in classes.
Mentoring
Mentoring and connecting directly with someone practicing in the field is a worthwhile strategy to be pursued in entrepreneurship classes. This is supported by the connectivism theories for bonding (Clarke, 1997) and for knowledge sharing over an information network (Kop and Hill, 2008). Mentoring is the establishment of a personal relationship for the purpose of professional instruction and guidance, which is supported by behaviorism. Mentors support students in improving problem solving and social skills, which supports cognitivism, and in achieving the attitudinal and behavioral change which aligns with the behaviorist approach and the transformative approach. In education, the value of mentoring has been recognized in the use of teachers and other professionals in one-on-one instruction of students for vocational education, science, and reading (Evenson, 1982). Being able to enlist the experiences and advice of a practitioner to complement learnt principles discussed in the text and the classroom can add a feature that creates a more interactive learning experience. As an interactive system, mentoring benefits the mentor and the student, as well as adding value to the teaching system. Getting buy-in from mentors is key and can be seen as easy, as mentors gain the satisfaction of being able to transfer skills and knowledge accumulated through extensive professional practice (Krupp, 1984).
In most cases, the mentor sees their contribution as a part of their corporate social responsibility and a philanthropic way of developing their legacy. Lecturers A and B see factory visits as part of the mentoring process as students interact with entrepreneurs. Lecture B also stated "I help build confidence through one-on-one sessions." Lecturer E stated that they also offer guidance to students who completed previous courses. Emphasis should be placed on building a relationship that last beyond the course of study and such strategies can be rewarding well into the life of the student. Entrepreneurship teachers therefore should advise students to build meaningful relationships through connectivism (Clarke, 1997); as they may want to rely on their mentors for help long into the future. Role models have been recognized in general as an important source of vicarious learning (Bandura, 1986). As role models, Lecturers A, B and C (fulltime staff) facilitate many past students with references to undertake future endeavours. The faculty views mentoring as a positive exercise that is critical in developing confidence. They saw mentoring as a necessary piece of the pie to offer guidance and opportunities for entrepreneurial growth. Lecturers allowed students to build up trusting one-on-one relationships that focused students on developing individual strengths and interests. Outside of individual faculty a general concern was not being able to get more mentors from outside the teaching system. Lecturer A stated "I recognize mentoring to be a key piece of the puzzle in teaching entrepreneurship but it is also a very difficult puzzle piece to find." Mentoring seemingly is an area for concern as entrepreneurship is not a classroom exercise. In the Caribbean, there is definitely a need for entrepreneurship to be highlighted in the media and other forms if more persons are going to recognize what their contribution as a mentor can do to develop the entrepreneurial system. Overall the faculty believes that mentoring is an essential part of teaching and learning. Simulation and Practice Simulation and practice are vital in the teaching of entrepreneurship. Elements of modelling can be found in role-play exercises and simulation. Noticeably, both of these tools are representative of behaviourism as suggested by Peltier (2001) and connectivism (Kop and Hill, 2008). It is clear that entrepreneurship is not based on a read and repeat model. The key advantage of simulations is that they mimic real life situations as closely as possible. As a lecturer in entrepreneurship one has to be careful to create a simulation, which is underpinned by a sense of reality of what is happening in the world of business or should create a brand new reality for a changed environment. This setting can quite easily be created through connectivism; which is an approach based on interactions within networks (Downes, 2012). Ideally, it should be relevant to the lives and interests of the students who are in entrepreneurship class. The entrepreneurship teacher, after offering guidance, should unobtrusively supervise the actions and note students' ability to handle situations. This feature of simulation increases students' autonomy and motivation, and lowers their anxiety levels since they are interacting as equals within a small group of their peers rather than performing for the teacher. This form of transformative learning is a route to the development of critical thinking. 
The final outcome is the model used to evaluate their performance. It is assumed that with repeated practice a student will develop in such a way that they can make decisions faster and enhance their outcomes through different learning experiences. Realism can be enhanced, particularly for longer-term simulations, by adapting the classroom so that it simulates the environment in which the exercise is said to be taking place. Lecturers B and C use market place simulation as a means to provide a valuable platform for assessing a number of learning objectives. Some faculty found it difficult to formulate simulation exercises among students. The thinking behind simulations was that it is supposed to represent an event or situation made to resemble real world experiences and that perspective was found hard to emulate in the class room. This shortcoming in teaching entrepreneurship through simulation exercises meant that application and integration of knowledge, skill development and critical thinking was lacking in most entrepreneurial classroom sessions. Lecturer A indicated that it is difficult to run a full simulation in a semester long 12-week session. Further to this, the Caribbean tertiary teaching system for entrepreneurship is at the crossroads in this regard and may need to engage student learning through more complex skills via simulation; especially as technology is advancing rapidly. This comment supports the view that connectivism is a useful approach to technology-enabled learning (Cormier, 2008). Essentially, simulation sessions need to be incorporated into the curriculum of all entrepreneurship courses at the tertiary level. Conclusion The learning theories classify into five general groups: behaviorism, cognitivism, constructivism, transformative and connectivism. This conclusion discusses each of them relative to the themes identified for entrepreneurship education. In general, the author concludes that the themes align with at least two or more of the theories and this is desirable and useful for grounding the learning and teaching tools. The behaviourist theory focuses on means of observation, response to external stimuli and the impact of the environment (Alzaghoul, 2012;Neergaard, Tanggaard, Krueger, & Robinson, 2012;Pham, 2011). This theory was seen as a very successful method as it was established in all the themes identified: 'experiences,' 'sense of purpose,' 'reflective practice,' 'lecturer's passion,' 'mentoring,' 'simulation and practice.' It is viewed as a broad-based approach to teaching entrepreneurship and has demonstrated usefulness in facilitating teaching in this field. The cognitive theories are an effective method for exploring problem solving, processing, encouraging and motivating (Clarke, 2013). They are a virtuous foundation for teacher-student relationship as they open the way for the development of the students. They have demonstrated effectiveness in "sense of purpose" and "reflective practice". The constructivist theories are a proficient process for teaching which allows for building on prior knowledge (Romero, 2013). In this case faculty used "experiences" and "sense of purpose" as building blocks to aiding learning in this field. This allowed for meaningful learning to take place as this system allows the learner to go beyond what is already known and create new ideas. The transformative theories are modern. 
They were seen as an effective way to change the application and transfer of learning into an implementable and executable form (Cranton, 1996; Mezirow, 1991, 1996). The relation of this theory to learning is more noteworthy than the other traditional learning theories because this theory develops applicability of skill sets specific to entrepreneurship. Several of its principles can be used to improve the teaching and learning process. This theory was seen as a very useful method as it was established in the themes identified: 'experiences,' 'lecturer's passion,' and 'simulation and practice.' The connectivism theory is also a modern theory which focuses on recognition and bonding (Clarke, 1997). It is important in formulating the relationship chain that is key to accessing new information, communicating and networking within the entrepreneurship ecosystem. It is essential in the necessary interaction between teacher and learner. Another key point is that it gives a chance for relating in a relevant socio-cultural context. This theory was seen as a very useful method that was showcased in the themes: 'experiences,' 'sense of purpose,' 'lecturer's passion,' 'mentoring,' and 'simulation and practice.' In summary, using a variety of learning strategies in entrepreneurship education can be desirable and useful. The study provides a repertoire of proven soft-skill approaches that have been successfully implemented strategically by entrepreneurship teachers in the Caribbean and can be used by other educators in their pursuit to educate students in the entrepreneurship process. The author highlights a series of themes related to entrepreneurship education that positively impact a learner in a wide number of circumstances. This may explain why there are such a wide variety of learning strategies, all of which can provide important outcomes to the student learner engaged in an entrepreneurship education program.
Limitations
One of the limitations of this study is that it was only conducted among five faculty members who teach entrepreneurship at the University of the West Indies. The findings are based solely on the way faculty perceive their practices. Faculty only gave relative strengths within their individual teaching experiences and not in relation to others. A better knowledge and understanding of learning styles may become increasingly critical as students come from across varying faculties. The context in other geographical locations around the world may vary and therefore require further investigation into the learning theories. Nevertheless, this study acknowledges the role lecturers play and the tools they use in teaching entrepreneurship education.
2018-12-12T09:38:23.337Z
2017-07-10T00:00:00.000
{ "year": 2017, "sha1": "0266bec71cd295cf32f9b3804c81cda568e3f7fd", "oa_license": "CCBY", "oa_url": "https://doi.org/10.26522/brocked.v26i1.437", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "0266bec71cd295cf32f9b3804c81cda568e3f7fd", "s2fieldsofstudy": [ "Education", "Business" ], "extfieldsofstudy": [ "Sociology" ] }
257333151
pes2o/s2orc
v3-fos-license
Getting glued in the sea Inspired by ocean organisms, scientists have been developing adhesives for application in the marine environment. However, water and high salinity, which not only weaken the interfacial bonding by the hydration layer but also induce the deterioration of adhesives by erosion, swelling, hydrolysis, or plasticization, are detrimental to adhesion, resulting in specific challenges in the development of under-seawater adhesives. In this focus review, current adhesives that are capable of macroscopic adhesion in seawater were summarized. The design strategies and performance of these adhesives were reviewed based on their bonding methods. Finally, some future research directions and perspectives for under-seawater adhesives were discussed. Introduction The application of adhesives plays a crucial role in the exploration and utilization of marine resources. Many marine applications, such as underwater sonar equipment, protective/antifouling layer coatings, and offshore structures, require under-seawater adhesives [1][2][3]. Early studies in this area mainly focused on the effect of seawater on the strength aging of epoxy-based and polyurethane-based adhesives [4,5]. In these studies, bonding joints were generally first prepared in air and then immersed in (artificial) seawater to study their long-term stabilities. However, with the development of science and technology, applications in some emerging fields require adhesives to be directly applied under-seawater, for example, in the case of under-seawater repair, marine cultivation, ocean animal health care, electronic devices, marine robots, and so on ( Fig. 1) [6,7]. However, achieving strong interfacial bonding in seawater is challenging [8][9][10]. First, water and hydrated ions are strongly associated atop the surface of the substrate in seawater and impede the molecular contact of the adhesive and substrate. Second, the high-ionic strength of seawater (Table 1) dramatically weakens interfacial interactions, such as electrostatic interactions and dispersing forces, which prevents molecular-level bonding between the adhesive and the substrate [11][12][13][14]. Moreover, water droplets can be trapped at the interface, which reduces the real contact area, and such a phenomenon is particularly pronounced for soft tape-type adhesives [15][16][17]. In addition, the long-term exposure of submersed bonded joints results in the diffusion of water and salt into the adhesive, causing the swelling, erosion, degradation, or hydrolysis of adhesives, consequently leading to adhesive/cohesive failure. These effects become more significant in real applications due to the nonstatic conditions of the marine environment. For example, variations in temperature and hydraulic pressure and the scour of seawater are particularly detrimental to adhesion [7,18,19]. Notably, compared with adhesion under pure water, adhesion under-seawater is more difficult; the high concentration of salt ions in seawater further increases the difficulty of interfacial bonding and accelerates the deterioration of existing adhesion. Therefore, the development of adhesives used in seawater remains a huge challenge. In contrast to the failure of man-made adhesives in water, successful underwater adhesives have been wonderfully demonstrated in nature [10,20,21]. 
Distinguished examples include mussels and barnacles, which use adhesive proteins to attach to wet rocks to resist the scour of seawater, and octopuses, whose arms have suckers that allow them to capture prey in the sea [22,23]. Inspired by these unique adhesion strategies, the development of underwater adhesives has come a long way in the past decade. To date, various underwater adhesives have been fabricated by using diverse strategies, which have been summarized in several excellent reviews [24][25][26][27][28][29][30]. Nevertheless, few studies have been conducted that focus on their adhesive performance in seawater, and a review of the related topic has yet to be produced. In this focus review, I summarized the currently reported adhesives that can directly adhere to substrates under (artificial) seawater/high-salinity water. Based on their bonding methods, the adhesives were classified into gluetype and tape-type adhesives. An overview of the development and performance of typical examples was provided. In addition, some perspectives for under-seawater adhesives were discussed. It should be mentioned that this paper only reviewed the adhesives that were applied on a submersed substrate and cured in seawater/saline water without a drying process in air. Under-seawater adhesives Similar to underwater adhesion, achieving strong underseawater adhesion also requires three steps. The first and most important step is the removal of the hydration and salt ion layer of the substrate. Recent studies have found that cationic groups can clear surfaces of bound salts and that hydrophobic groups can breakdown the hydration layer to "dry" the underwater surface, making way for adhesion [13,30,31]. In addition, using water-absorbing fillers and pattern surfaces for water drainage has also been proven to effectively remove surface water [17,32,33]. Following the breakdown of the hydration layer, strong interfacial bonding should be achieved. Since the target adherents used in seawater are usually metals, glass, plastics, etc., noncovalent interactions (hydrogen bonds, electrostatic interactions, hydrophobic interactions, ion-dipole interactions, etc.) and physical suction are the general adhesion mechanisms [29,[34][35][36]. The other key to strong underwater adhesion is the tough mechanical strength of bulk adhesives, which prevents fast crack propagation during the debonding process. Particularly for under-seawater adhesives, adhesive joints are generally sustained by constant loading originating from the complex dynamic environment in the ocean. Therefore, antifatigue properties are essential for long-term adhesion. The basic design principle for toughening is introducing dissipation energy systems in the polymer network by using, for example, dynamic bonds or multiple networks [37][38][39]. In addition, due to the harsh environment in the ocean, adhesives are required to possess anti-erosion and antiswelling properties for long-term stability. Based on their bonding method, the current underseawater adhesives can be classified into glue-type and tapetype adhesives. The former is a liquid that needs to be solidified to glue the joint, whereas the latter is a soft solid material that can directly adhere to the substrate. The following sections review the current adhesives that can be used in artificial seawater or water with high salinity in terms of glue and tape systems. Glue-type adhesives Glues are a kind of liquid adhesive that require a solidification process to achieve adhesion. 
The current under-seawater glues mainly include organic solvent-based polymer glues and water-borne coacervate glues. In general, the glue is applied between two adherents that are immersed in seawater. Then, the joint is maintained in seawater for a particular time (hours to days) under external pressure to allow the glue to fully cure. The curing processes are mainly based on the polymerization of monomers and the crosslinking of polymers through the formation of chemical bonds and/or physical interactions. Currently reported glue-type under-seawater adhesives and their adhesion performances are listed in Table 2. Catechol-based polymer solutions are typical glues that have been widely studied, since the catechol group has strong and versatile adhesion ability to a wide range of surfaces [40][41][42]. Although various catechol-based copolymers have been synthesized and exhibited excellent adhesion on dry substrates, their adhesion performance in seawater or saline water is rarely studied [43,44]. A pioneering study was reported by White and Wilker in 2011 [32]. They synthesized three-component copolymers bearing cationic, catechol, and benzene groups (Fig. 2a). They dissolved the polymers in a chloroform/methanol mixture and then applied the solutions onto an aluminum substrate immersed in artificial seawater. Due to its higher density compared to that of water, the glue did not float up and off the substrate. Then, a second aluminum sheet was placed on the first sheet and allowed to cure in water for 24 h. The test results showed that the adhesive strength of the polymer glues in seawater first increased and then decreased with increasing cationic content. They explained that the glue with the higher cationic fraction has better surface wetting ability based on the contact angle data, which contributed to improved adhesion, while introducing too much charge (above ~11%) caused strong cohesion but not adhesion. In my view, the weak adhesion may have been caused by the increase in the hydrophilicity of the polymer. Furthermore, North et al. synthesized a series of poly[(3,4-dihydroxystyrene)-co-styrene] polymers with different molecular weights and studied their under-seawater adhesion [45]. The adhesive strength in artificial seawater was found to first increase and then decrease with an increasing molecular weight of the polymer. The glue showed its highest adhesive strength of ~2.5 MPa at a molecular weight of ~85,000 g/mol. In glue systems, a low molecular weight is beneficial for the wetting behavior of the solution, while a high molecular weight can enhance the cohesion of the glue. Therefore, there is an optimum molecular weight for achieving the best adhesion performance. Zhan et al. studied the influence of the number of hydroxyl substituents on the benzene ring on underwater adhesion [46]. Copolymers bearing three types of phenolic groups (phenol, catechol, and gallol) were synthesized and dissolved in a chloroform/methanol mixture (Fig. 2b). The gallol-based copolymer exhibited the strongest adhesiveness compared to the other two polymers under all tested environments (in air, water, and seawater). The results suggested that the tridentate structure has a stronger underwater interfacial bonding ability than the bidentate and monodentate structures. Sha et al. synthesized a mussel-inspired alternating copolymer by using click chemistry [47]. In their strategy, dopamine and 2,2-bis(4-glycidyloxyphenyl) propane (BGOP) were copolymerized via an epoxy-amino click reaction (Fig. 2c).
Compared with introducing a vinyl group onto the catechol group, directly using dopamine is much simpler. Moreover, the click reaction is fast, almost quantitative, and allows polymers with high catechol contents and regular sequences to be obtained. To test its adhesion ability, the polymer was dissolved in a chloroform/methanol mixture to obtain a glue adhesive. They found that the introduced polar groups and rigid bisphenol A structures in the polymer enhanced the cohesion of the adhesive, while the high content of the catechol group provided strong adhesion. As a result, the copolymer showed high adhesiveness under both dry and seawater conditions. This work demonstrated a facile strategy for designing catechol-functionalized copolymers with controlled sequences; however, unfortunately, this work did not study the sequence effect on adhesion performance, which is critical for our understanding of structure-property relationships. In addition to the catechol group, the backbones of the polymers were investigated by Li et al. [48]. Inspired by mussel foot proteins, they raised the question of the relationship between the polarity of the polymer backbone and polymer underwater adhesion. To answer this question, four polymers with similar catechol contents and molecular weights but different backbones were synthesized (Fig. 2d) [49]. The under-seawater adhesion test results showed that the polymer with the higher polarity had stronger adhesiveness.
Fig. 2 Organic solvent-based polymer glue. a Copolymers bearing cationic, catechol, and benzene residues synthesized by Wilker's group [32]. b Adhesive copolymers possessing three types of phenolic groups (phenol, catechol, and gallol) and their under-seawater adhesiveness [46]. c Catechol-based alternating sequential copolymer synthesized by Sha et al. [47]. d Catechol-based copolymers with different polarities and their adhesion performance in seawater [49]
Fig. 3 Water-borne coacervate glue. a Precipitation of a triazole-based polymer in water and its adhesion performance on different substrates and in different solutions [7]. b Design strategy of a polymer with a cation-methylene-phenyl motif inspired by the proteins of membraneless organelles [18]. c Ionic complex-based adhesives fabricated by the polymerization of 4-AS initiated by zwitterions [51]
According to my understanding, this phenomenon may have been caused by the modulus difference of the cured adhesives. After curing in seawater, the adhesive film of the low-polarity polymer was more rigid than that of the high-polarity polymer. In general, a rigid adhesive has low resistance against crack growth and exhibits poor adhesive strength. Since glues based on hydrophobic polymers usually require an organic solvent, which may be harmful to organisms and the environment, water-borne glues have received much attention in recent years. For example, Zheng et al. developed a polymer glue possessing rapid, strong, long-lasting adhesion and ionic conductivity in water and seawater environments [7]. In their strategy, they synthesized a hydrophobic polymer bearing benzene and nitrogen heterocyclic groups and dissolved it in a water-soluble ionic liquid (Fig. 3a). When injecting the polymer solution into water, the fast water exchange triggered the precipitation of the hydrophobic polymer, resulting in rapid adhesion. With increasing immersion time, the glue was further self-strengthened through the formation of strong hydrophobic interactions, which led to an increase in adhesive strength.
For example, the instant under-seawater adhesive strength on glass was ~100 kPa, while it increased to ~400 kPa after 7 days of soaking. In addition, they found that ~44.38% of the ionic liquid was retained in the adhesive even after long-term soaking, which provided considerable ionic conductivity and allowed the adhesive to work as an in situ strain sensor. Zhu et al. proposed a strategy to synthesize a water-borne polymer adhesive inspired by the proteins of membraneless organelles [18]. In their study, they synthesized a polymer with a cation-aromatic sequence by using a monomer with a cation-methylene-phenyl (C-M-P) motif (Fig. 3b). The obtained aqueous polymer solution formed coacervates through the addition of NaCl because the high ionic strength of the solution reduced the electrostatic repulsion between cationic groups but enhanced the interchain cation-π interactions [30,50]. They further synthesized polymers by the copolymerization of a C-M-P monomer and post-curable monomers and tested their adhesion performance under aqueous conditions. The results showed that the copolymer glue exhibited outstanding adhesion in normal saline water and seawater. For example, a copolymer obtained by copolymerizing the C-M-P monomer with a curable ionic liquid monomer was capable of gluing glass together in a simulated deep-sea environment (3 °C, oxygen-free, and dark) after overnight curing (12 h). In contrast with the difficulty of linear sequence-controlled copolymer synthesis, this work provides a simple strategy to develop polymers with sequential cation-π motifs. One of the shortcomings of glue adhesives is the long-term curing process. To achieve strong underwater adhesion in a short time, Cui et al. developed a water-borne glue system by utilizing the rapid polymerization of 4-aminostyrene (4-AS) in the presence of an acidic polyanion (Fig. 3c) [51]. In general, pure 4-AS is difficult to polymerize because the reactivity of the vinyl group is inhibited by the charge density transfer from the amino group to the conjugated double bond. However, in the presence of 4-AS salts, 4-AS can be spontaneously polymerized via the zwitterionic mechanism. Based on this phenomenon, they prepared two aqueous solutions, namely, a polyacrylic acid (PAAc) solution and a 4-AS solution. When mixing them, PAAc not only provided acidic conditions for the polymerization of 4-AS but also formed a complex with P(4-AS) through the formation of multiple ionic bonds. As a result, a bright-yellow sticky putty was immediately formed, and it could be fully cured within 20 min to glue diverse substrates in various environments (water, seawater, oil, etc.). In addition, this adhesive could maintain strong adhesion for 6 months. Recently, Peng et al. developed a coacervate glue that exhibited instant, robust, and repeatable underwater adhesion [52]. The reported coacervate was formed by mixing aqueous solutions of tannic acid (TA) and poly(ethylene glycol)77-b-poly(propylene glycol)29-b-poly(ethylene glycol)77 (F68). The abundant gallol groups on TA provided strong interfacial adhesion, while the hydrogen bonds between TA and the polymer and the hydrophobic cores of the F68 micelles facilitated tough cohesion. As a result, the coacervate showed robust and instant adhesion in water as well as in NaCl solutions (0.1-1.0 M). It also displayed excellent repeatability up to 1000 cycles, which is seldom tested in other adhesive systems. Moreover, the biological activities of TA endowed the adhesive with anticancer and antibacterial properties.
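As a point of reference for the strength values quoted in this section, the macroscopic adhesive strength is simply the maximum transferred force divided by the bonded area. Assuming, purely for illustration, a lap-shear-type overlap of 25 mm x 25 mm (the test geometries are not specified here and this overlap area is an assumption),
\tau = \frac{F_{\max}}{A}, \qquad F_{\max} = \tau A \approx 400\ \mathrm{kPa} \times 6.25 \times 10^{-4}\ \mathrm{m^2} \approx 250\ \mathrm{N},
so a joint reported at ~400 kPa would carry roughly 250 N over such an overlap, and the ~2.5 MPa catechol glue mentioned above would carry on the order of 1.6 kN.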
Tape-type adhesives
Soft solid materials that can directly adhere to substrates are regarded as tapes. Compared with glues, it is more challenging for tapes to achieve strong under-seawater adhesion. This is because, on the one hand, wetting the surface and breaking down the hydration layer with crosslinked polymers is much more difficult than with unconfined dissolved polymers. On the other hand, water droplets can be easily trapped at the interface, which decreases the real contact area and induces crack defects. To tackle this dilemma, several strategies have been proposed recently, inspired by natural underwater adhesion mechanisms. The tape-type under-seawater adhesives and their adhesion performances are listed in Table 3 (abbreviations used there: AAcDA, acrylic acid-dopamine; MATAC, methacryloxyethyltrimethyl ammonium chloride). Niu et al. developed an elastomer that exhibited high adhesion under water and seawater conditions [53]. The adhesive was synthesized by the copolymerization of butyl acrylate and acrylic acid (Fig. 4a). After adjusting the monomer ratio to balance the adhesion and cohesion, they optimized the viscoelastic properties of the elastomer so that it achieved the best adhesion performance in seawater. The adhesive strength of the elastomer was in the range of 120-150 kPa depending on the substrate. In addition, the adhesion of the elastomer was reversible due to the physical interfacial interactions. Compared with hydrophobic elastomers/gels, it is more difficult for a hydrogel to achieve underwater adhesion due to its hydrophilic and highly swollen polymer network. To effectively break down the hydration layer and suppress the swelling ratio to improve adhesion, introducing hydrophobic functional groups into hydrogels has been proven to be an effective approach [16,55]. In this strategy, gels are usually fabricated in a water-miscible organic solvent, followed by a solvent exchange process to obtain hydrogels. For example, Liu et al. copolymerized a hydrophobic and an anionic monomer in DMSO and obtained an organogel [56,57]. They found that the as-prepared organogel could adhere to diverse substrates under various solvents, including water, seawater, and oil, during the exchange of solvent. In the case of adhesion under aqueous conditions, the water diffused into the gel and replaced the DMSO, resulting in the dehydration of the surface and the aggregation of hydrophobic groups, which enhanced both adhesion and cohesion. After 24 h of immersion, the adhesive strength reached ~120 kPa. Zhang et al. developed an organohydrogel with a hydrophobic and hydrophilic heteronetwork by the in situ emulsion polymerization of oleophilic and zwitterionic monomers (Fig. 4c) [58]. In the organohydrogel, the oleophilic and hydrophilic polymer chains were spatially restrained in the interpenetrating heteronetwork, which led to antiswelling behavior in both water and oil. The authors further found that the organohydrogel exhibited adhesiveness in seawater as well (Fig. 4c and Table 3). Many solid surfaces, including rocks, glass, and metals, are negatively charged in the marine environment [11]. Therefore, a facile strategy using electrostatic interactions as a bonding mechanism would be effective on these surfaces. However, the electrostatic interactions between oppositely charged surfaces in high-ionic-strength environments such as seawater are normally weakened due to the Debye screening effect [11] (an order-of-magnitude estimate of this screening is given below). To tackle this dilemma, specialized protein sequences have been obtained through evolution in biological systems.
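As an illustrative aside (order-of-magnitude values only, not taken from the cited studies), the range of this electrostatic screening can be estimated from the standard Debye length for a 1:1 electrolyte in water at 25 °C,
\kappa^{-1} = \left( \frac{\varepsilon_r \varepsilon_0 k_\mathrm{B} T}{2 N_\mathrm{A} e^{2} I} \right)^{1/2} \approx \frac{0.304\ \mathrm{nm}}{\sqrt{I/\mathrm{M}}},
where I is the ionic strength. With I ≈ 0.7 M for seawater, the Debye length is only about 0.4 nm, compared with roughly 10 nm in a 1 mM solution, so bare charge-charge attraction is screened beyond sub-nanometer separations. This is the screening that the protein-inspired, cation-π-reinforced sequence strategies discussed next are intended to overcome.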
Cationic and aromatic amino acids are always adjacent to each other in many adhesive proteins, such as mussel foot proteins, barnacle cement proteins, and coronavirus spike proteins [59][60][61]. Such a specific characteristic enables the proteins to adhere to negatively charged surfaces through electrostatic interactions in a saline environment, providing a design model for developing marine adhesives. However, sequence-controlled polymerization is still a central challenge in polymer chemistry [62]. Recently, our group discovered that copolymers with adjacent cation-aromatic sequences can be synthesized through simple free-radical polymerization at equimolar ratios [30,63]. It was found that the polymerization behavior of cationic and aromatic monomers in the precursor solution highly depends on their vinyl groups, the monomer concentration, and the solvent. When the cationic and aromatic monomers have the same double bond, at high concentrations and with DMSO as the solvent, the reactivity ratios (r) of the cationic and aromatic monomers are close to 1; namely, they show ideal random copolymerization (Fig. 5a) [63]. In this case, the resulting copolymer has an adjacent-rich sequence, and it is water soluble, although this copolymer has 50% hydrophobic monomers. If it lacks one of these three preconditions, a random copolymer with only one component-rich sequence is obtained. Such a polymer cannot dissolve in water.
Fig. 4 Tape-type under-seawater adhesives. a Rheological behavior of P(BA-co-AAc) elastomers and their underwater adhesiveness [53]. b Schematic illustration of the structure of a poly(ionic liquid) adhesive and its adhesion performance in various solutions [54]. c Schematic illustration of the preparation and structure of organohydrogels and their adhesiveness in seawater [58]
By using one-pot free-radical copolymerization, copolymers with different sequences can be synthesized on a large scale, meeting the requirements for material fabrication and studies. For example, due to their different monomer sequences, P(cation-adj-π) and P(cation-co-π) hydrogels showed completely different macroappearances in water and saltwater (Fig. 5b). The P(cation-adj-π) hydrogels were almost transparent, while the P(cation-co-π) hydrogels were opaque owing to the aggregation of their hydrophobic aromatic-rich segments. These hydrogels with different sequences also showed dramatically different mechanical strengths. In a 0.7 M NaCl solution, the P(cation-adj-π) hydrogels were soft but more stretchable and tough than the P(cation-co-π) hydrogels. The monomer sequence also had a strong effect on the underwater adhesion of these hydrogels (Fig. 5c). Adhesion tests showed that the P(cation-adj-π) hydrogels exhibited fast, strong, but reversible adhesion to negatively charged glass in artificial seawater because the aromatic groups enhanced the electrostatic interactions of their adjacent cationic residues with the counter surfaces in highly ionic media. In contrast, the P(cation-co-π) hydrogels exhibited weak adhesion on glass substrates in saltwater. This work indicated that the monomer sequence has a strong influence on the network structures and the properties of hydrogels, which has always been overlooked. Soft materials with bioinspired microstructures are another type of tape-like adhesive.
Soft materials with bioinspired microstructures are another type of tape-like adhesive. Although the corresponding works mainly focused on adhesion in air or water, instead of in saline water or seawater, it is expected that these materials have similar adhesion performances regardless of salinity due to their physical suction mechanism [29,[64][65][66][67]. This type of underwater adhesive has been widely reviewed in the literature and is not covered here [8,23,25,27].

Conclusion and outlook

With an in-depth understanding of the underwater adhesion mechanisms of marine organisms, remarkable progress has been made in the development of adhesives applied in the marine environment. Especially in the last five years, various under-seawater adhesives have been fabricated by applying diverse bioinspired strategies [30,32,54]. However, compared with the development of adhesives used in the air, the development of adhesives that can be directly applied under-seawater is still in its infancy. Currently, the rapid formation of strong adhesion remains a major challenge. In real-world applications, the formation of adhesive joints occurs under nonstatic conditions, such as the constant undulation of seawater, which requires adhesives to form an effective adhesion in a short time. The currently reported glue-type adhesives exhibit strong bonding strength but require hours or even days of curing, while the tape-type adhesives show instant adhesion but relatively weak strength. Various length-scale adhesion mechanisms are used in nature to achieve excellent underwater adhesion. Therefore, further exploring the underwater adhesion mechanisms with various length scales of natural organisms and mimicking them is one direction for the development of under-seawater adhesives.

In addition, most adhesive joints used in marine environments are subjected to cyclic loadings throughout their life [68]. Therefore, the study of the long-term stability of adhesives under nonstatic conditions is essential for their real application in seawater. More comprehensive and systematic characterizations of the material properties of adhesives, such as fatigue, creep, and the effect of environmental changes (e.g., temperature, salinity, and hydraulic pressure), are required in future studies. Moreover, to meet the requirements of diverse application scenarios, future under-seawater adhesives should also have controlled adhesion and multiple functionalities and be ecofriendly (recyclable, biodegradable, nontoxic, etc.), low cost, and easy to mass produce.
Influence of aggregate spatial distribution of concrete against projectile penetration Coarse aggregate settlement may occur during concrete pouring, which affects the ability to resist projectile attack. In order to study the anti-penetration performance of concrete, and obtain high penetration resistance concrete, mesoscale finite element model of different aggregate space distribution of concrete was established, the numerical simulation of long rod rigid projectile penetrating into different concrete was conducted. The results showed that aggregate settlement had great influence on target scabbing; aggregate size and settlement had limited influence on projectile residual velocity. Introduction Concrete is widely used in civil engineering, and its performance to resist projectile penetration has attracted wide attention in military field [1]. Aggregate settlement may occur after concrete vibration. The maximum aggregate size is different of different concrete. Therefore, it is of engineering practical significance to study the effect of aggregate spatial distribution on the anti-penetration performance of concrete. The influence of aggregate on penetration was studied by experiment and numerical simulation. The effects of aggregate gradation, size and type on cratering and scabbing were experimental studied by Bludau et al. [2]. The effect of aggregate size on the residual velocity of projectile was experimental studied by Werner et al. [3]. Wriggers et al. [4] gave the method of establishing the mesoscale finite element model of concrete. Fang et al. [5] numerically simulated the effects of aggregate size, aggregate strength and aggregate volume fraction on projectile deflection in deep penetration. At present the numerical simulation of penetration was based on thick target, there was a lack of research on thin targets. The influence of the spatial distribution of aggregate needs to be further studied. In this paper, numerical simulation of penetrating mesoscale concrete was carried out, and the influence of aggregate size and aggregate settlement on target scabbing and projectile residual velocity was studied. Mesoscale finite element model and scheme Projectile. The mass of the projectile was 556 g, the diameter d was 30 mm, the ratio of length to diameter was 5, the caliber-radius-head was 3, and the initial velocity was 800 m/s. Concrete target. The diameter of the target was 20d, the thickness was 5d, the volume fraction of coarse aggregate was 40%, and the continuous gradation was adopted. In this paper, four targets are studied, as shown in table 1, taking into account the maximum aggregate size and aggregate settlement. Modeling process. The large aggregate was easier to settlement, so the large aggregate was generated first. Under the dynamic load, the crack is mainly through the aggregate, so the connection between aggregate and mortar could be simplified as a common joint. The cracks are sensitive to the mesh size, so the whole target plate was refined, and the non-reflective boundary was added to the cylindrical side to simulate the large-size target. Material constitution Projectile. When the initial velocity is less than 1000 m/s, the head of the projectile usually does not erode and can be simplified as a rigid projectile, thus improving the computational efficiency. Concrete target. The concrete around the projectile head was plastic under high pressure of penetration load. 
HJC constitutive model can describe the mechanical characteristics of concrete like material under high pressure; due to the lack of mechanical parameters of mortar, 43MPa concrete was used as a substitute in this paper [6]. The JH-2 constitutive model describes the high pressure damage behavior of brittle materials [7]; granite was used as coarse aggregate in this paper, and its constitutive parameters were shown in [8]. The concrete far away from the projectile head was brittle under the action of the tensile wave. The minimum hydrostatic erosion criterion was used to simulate the crack. Result analysis The scabbing and the perforation projectile could effectively kill the objects in the building, which were characterized by the scabbing diameter (figure 1) and the projectile residual kinetic energy, respectively. Target damage Using the dynamic software LS-DYNA to solve the problem, the damage result of target 2 is shown in figure 1. Radial cracks occurred in the far region of the target under tensile stress. Sparse wave was transmitted from the target surface, and then scabbing occurs. It took a long time for the scabbing to completely separate from the target. Here only calculated the initial moment of the scabbing formation. table 2 show the comparison of dimensionless scabbing diameters (ds/d) of four targets. The size difference of coarse aggregate is limited, which did not reflect the difference in scabbing diameter. The volume fraction of the aggregate on the back of the target 3 increased and the target 4 decreased. The tensile strength of the aggregate is higher than that of the mortar, and the increase of the volume fraction will help to reduce the scabbing diameter. Compared with the target 2, the dimensionless scabbing diameter of the target 3 decreased by 10.6 %, and the target 4 increased by 9.8 %, as shown in table 2. Table 2 shows that the size and settlement of aggregate have a limited effect on the residual kinetic energy of the projectile. During the penetration process, the projectile passed through a large amount of aggregates, which reduced the randomness, so the influence of the spatial distribution of aggregates was limited. Conclusions In this paper, the influence of aggregate size and aggregate settlement on penetration was studied, and the following conclusions were obtained: 1. The scabbing diameter of the target was not sensitive to the aggregate size (31.5 mm and 40 mm), but sensitive to the aggregate settlement. 2. The influence of aggregate size and aggregate settlement on the residual kinetic energy of projectile was limited.
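As a quick numerical companion to the set-up and to the Table 2 comparisons above: the impact kinetic energy follows directly from the stated projectile mass and velocity, and the relative changes in dimensionless scabbing diameter are formed as shown below. The absolute scabbing diameters used here are hypothetical placeholders, chosen only so that the printed changes match the reported -10.6% and +9.8%; the actual table values are not quoted in the text.

```python
# Impact kinetic energy from the stated projectile parameters.
m_p, v0, d = 0.556, 800.0, 30.0   # projectile mass (kg), velocity (m/s), diameter (mm)
ke0 = 0.5 * m_p * v0**2           # ~1.78e5 J
print(f"impact kinetic energy: {ke0/1e3:.1f} kJ")

def ds_over_d(ds_mm: float, d_mm: float = d) -> float:
    """Dimensionless scabbing diameter ds/d."""
    return ds_mm / d_mm

def rel_change_percent(value: float, reference: float) -> float:
    """Signed change relative to a reference target, in percent."""
    return 100.0 * (value - reference) / reference

# Hypothetical scabbing diameters (mm) for targets 2-4, for illustration only.
ds = {"target 2": 210.0, "target 3": 187.7, "target 4": 230.6}
ref = ds_over_d(ds["target 2"])
for name, ds_mm in ds.items():
    val = ds_over_d(ds_mm)
    print(f"{name}: ds/d = {val:.2f}, change vs target 2 = {rel_change_percent(val, ref):+.1f}%")
```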
Electroweak&QCD corrections to Drell Yan processes The relevance of single-W and single-Z production processes at hadron colliders is well known: in the present paper the status of theoretical calculations of Drell-Yan processes is summarized and some results on the combination of electroweak and QCD corrections to a sample of observables of the process $p p \to W^\pm \to \mu^\pm + X$ at the LHC are discussed. The phenomenological analysis shows that a high-precision knowledge of QCD and a careful combination of electroweak and strong contributions is mandatory in view of the anticipated LHC experimental accuracy. One of the authors (O.N.) dedicates these notes to Prof. S. Jadach, in honour of his 60th birthday and grateful for all that Prof. Jadach taught him during their fruitful collaboration. Introduction Single-W and single-Z production processes are today considered of utmost relevance for the physics studies at contemporary hadron colliders such as Tevatron and the LHC. Actually, charged and neutral current Drell-Yan (D-Y) processes, i.e. pp (−) → W → lν l + X, and pp (−) → Z/γ → l + l − + X play a very important role, since they have huge cross sections (e.g. σ(pp → W → lν l + X) ∼ 20 nb at LHC and about a factor of ten less for σ(pp → Z/γ → l + l − + X)) and are easily detected, given the presence of at least a high p ⊥ lepton, which to trigger on. For this reasons and also because the physics around W and Z mass scale is presently known with high precision after the LEP and Tevatron experience, D-Y processes will provide standard candles for detector calibration during the first stage of LHC running. Moreover, single-W as signal by itself will allow to perform a precise measurement of the W mass with a foreseen final uncertainty of the order of 15 MeV at LHC (20 MeV at Tevatron), a very important ingredient for precision tests of the Standard Model, when associated with a top mass uncertainty of the order of 1-2 GeV. Also, from the forward-backward asymmetry of the charged lepton pair in pp → Z/γ → e + e − the mixing angle sin 2 ϑ W could be extracted with a precision of 1 × 10 −4 . Useful observables for the measurement of the W mass are the transverse mass distribution and the charged lepton transverse momentum distribution. While the latter is in principle experimentally cleaner, the former is less sensitive to the effects of higher order radiative corrections affecting the theoretical predictions. The few per cent level precision in principle achievable in the cross sections motivated a proposal to use these observables as luminosity monitor for the LHC. Last, single-W and single-Z processes will provide important observables for new physics searches: in fact the high tail of the l + l − invariant mass and of the W transverse mass is sensitive to the presence of extra gauge bosons predicted in many extension of the Standard Model, which could lie in the TeV energy scale detectable at LHC. For the above reasons, it is of utmost importance to predict the W and Z observables with as high as possible theoretical precision. The sources of uncertainty in the theoretical predictions are essentially of perturbative and non-perturbative origin. The latter ones comprise the uncertainties related to the parton distribution functions and power corrections to resummed differential cross sections, which will not be discussed here. 
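Since the transverse mass is the central observable here, it may help to recall its standard hadron-collider definition in terms of the charged-lepton transverse momentum and the missing transverse energy; the short snippet below evaluates it for a made-up event and is only meant to illustrate the definition.

```python
import math

def w_transverse_mass(pt_lep: float, phi_lep: float,
                      pt_miss: float, phi_miss: float) -> float:
    """Standard W transverse mass, M_T = sqrt(2 pT^l pT^miss (1 - cos dphi))."""
    dphi = phi_lep - phi_miss
    return math.sqrt(2.0 * pt_lep * pt_miss * (1.0 - math.cos(dphi)))

# Toy event (GeV, radians): a back-to-back lepton / missing-ET configuration
# sits right at the Jacobian edge, M_T ~ 2 * pT when dphi = pi.
print(w_transverse_mass(40.2, 0.3, 40.2, 0.3 + math.pi))  # ~80.4
```

The Jacobian edge of this distribution near M_W is what makes it sensitive to the W mass, and also to the radiative corrections discussed below.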
In the following we review the current state-of-the-art on the calculation of higher order QCD and electroweak (EW) radiative corrections and their implementation in simulation tools, and we present some original results about the combination of QCD and EW corrections to W production at the LHC. Higher-order QCD/EW calculations and tools In the present section, a sketchy summary of the main computational tools for EW gauge boson production at hadron colliders is presented. Concerning QCD calculations and tools, the present situation reveals quite a rich structure, that includes next-to-leading-order (NLO) and next-to-next-toleading-order (NNLO) corrections to W/Z total production rate [1,2], NLO calculations for W, Z + 1, 2 jets signatures (available in the codes DYRAD and MCFM) [3,4] , resummation of leading and next-to-leading logarithms due to soft gluon radiation (implemented in the Monte Carlo ResBos) [5,6], NLO corrections merged with QCD Parton Shower (PS) evolution (in the event generators MC@NLO and POWHEG) [7,8], NNLO corrections to W/Z production in fully differential form (available in the Monte Carlo program FEWZ) [9,10], as well as leading-order multi-parton matrix elements generators matched with vetoed PS, such as, for instance, ALPGEN [11], MADEVENT [12], HELAC [13] and SHERPA [14]. As far as complete O(α) EW corrections to D-Y processes are concerned, they have been computed independently by various authors in [15,16,17,18,19] for W production and in [20,21,22,23] for Z production. EW tools implementing exact NLO corrections to W production are DK [15], WGRAD2 [16], SANC [18] and HORACE [19], while ZGRAD2 [20], HO-RACE [22] and SANC [23] include the full set of O(α) EW corrections to Z production. The predictions of a subset of such calculations have been compared, at the level of same input parameters and cuts, in the proceedings of the Les Houches 2005 [24] and TEV4LHC [25] workshops for W production, finding a very satisfactory agreement between the various, independent calculations. A first set of tuned comparisons for the Z production process has been performed and is available in [26]. From the calculations above, it turns out that NLO EW corrections are dominated, in the resonant region, by final-state QED radiation containing large collinear logarithms of the form log(ŝ/m 2 l ), whereŝ is the squared partonic centre-of-mass (c.m.) energy and m l is the lepton mass. Since these corrections amount to several per cents around the jacobian peak of the W transverse mass and lepton transverse momentum distributions and cause a significant shift (of the order of 100-200 MeV) in the extraction of the W mass M W at the Tevatron, the contribution of higher-order corrections due to multiple photon radiation from the final-state leptons must be taken into account in the theoretical predictions, in view of the expected precision (at the level of 15-20 MeV) in the M W measurement at the LHC. The contribution due to multiple photon radiation has been computed, by means of a QED PS approach, in [27] for W production and in [28] for Z production, and implemented in the event generator HORACE. Higher-order QED contributions to W production have been calculated independently in [29] using the YFS exponentiation, and are available in the generator WINHAC. They have been also computed in the collinear approximation, within the structure functions approach, in [30]. 
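The typical size of the final-state collinear enhancement quoted above is easy to estimate; the snippet below evaluates (α/π) log(ŝ/m_l²) for a muon, assuming ŝ ≈ M_W² as an illustrative choice of scale.

```python
import math

alpha = 1.0 / 137.035999   # fine-structure constant (Thomson limit)
m_w = 80.4                 # W boson mass, GeV
m_mu = 0.1057              # muon mass, GeV

# Leading final-state collinear enhancement (alpha/pi) * log(s_hat / m_l^2),
# evaluated at the resonance, s_hat ~ M_W^2.
enhancement = (alpha / math.pi) * math.log(m_w**2 / m_mu**2)
print(f"(alpha/pi) log(M_W^2/m_mu^2) ~ {enhancement:.3f}")   # ~0.031
```

The result, roughly 3%, is consistent with the statement that these corrections amount to several per cent around the Jacobian peak.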
It is worth noting that, for what concerns the precision measurement of M_W, the shift induced by higher-order QED corrections is about 10% of that caused by one-photon emission and of opposite sign, as shown in [27]. Therefore, such an effect is non-negligible in view of the aimed accuracy in the M_W measurement at the LHC. A further important phenomenological feature of EW corrections is that, in the region important for new physics searches (i.e. where the W transverse mass is much larger than the W mass or the invariant mass of the final state leptons is much larger than the Z mass), the NLO EW effects become large (of the order of 20-30%) and negative, due to the appearance of EW Sudakov logarithms ∝ −(α/π) log²(ŝ/M_V²), V = W, Z [15,16,19,20,21,22]. Furthermore, in this region, weak boson emission processes (e.g. pp → e⁺ ν_e V + X), which contribute at the same order in perturbation theory, can partially cancel the large Sudakov corrections when the weak boson V decays into unobserved νν or jet pairs, as recently shown in [31].

Combination of EW and QCD corrections

In spite of this detailed knowledge of higher-order EW and QCD corrections, the combination of their effects is presently under investigation. Some attempts have been explored in the literature [32,33,34]. Here our approach will be discussed in some detail. A first strategy for the combination of EW and QCD corrections consists in the following formula

[dσ/dO]_QCD&EW = {dσ/dO}_MC@NLO + {dσ/dO_EW − dσ/dO_Born}_HERWIG PS,   (1)

where dσ/dO_MC@NLO stands for the prediction of the observable dσ/dO as obtained by means of MC@NLO, dσ/dO_EW is the HORACE prediction for the EW corrections to the dσ/dO observable, and dσ/dO_Born is the lowest-order result for the observable of interest. The label HERWIG PS in the second term in the r.h.s. of eq. (1) means that EW corrections are convoluted with QCD PS evolution through the HERWIG event generator, in order to (approximately) include mixed O(αα_s) corrections and to obtain a more realistic description of the observables under study. In eq. (1) the infrared part of QCD corrections is factorized, whereas the infrared-safe matrix element residue is included in an additive form. It is otherwise possible to implement a fully factorized combination (valid for infrared-safe observables) as follows:

[dσ/dO]_QCD&EW = {dσ/dO}_MC@NLO × {dσ/dO_EW / dσ/dO_Born}_HERWIG PS,   (2)

where the ingredients are the same as in eq. (1) but the QCD matrix element residue is now factorized as well. Eqs. (1) and (2) have the very same O(α) and O(α_s) content, differing by terms of order αα_s. Their relative difference has been checked to be of the order of a few per cent in the peak region, and can be taken as an estimate of the uncertainty of the QCD & EW combination.

Numerical results

In order to assess the phenomenological relevance of radiative corrections to D-Y processes, we show the effect of purely EW corrections to Z-boson production at the LHC (√s = 14 TeV) in Fig. 1. Input parameters, cuts and lepton identification criteria can be found in ref. [22]. The set of PDFs used in our study is MRST2004QED [35]. As can be seen, EW corrections give huge contributions around the tail, mainly due to combined photonic and Sudakov effects. Multiple photon corrections are at the level of some per cent. As far as the combination of QCD & EW corrections is concerned, we study, for definiteness, the production process pp → W± → µ± + X at the LHC, imposing the cuts shown in Tab.
1, where p_⊥^µ and η_µ are the transverse momentum and the pseudorapidity of the muon, and /E_T is the missing transverse energy, which we identify with the transverse momentum of the neutrino, as typically done in several phenomenological studies. For set up b., a severe cut on the W transverse mass M_W⊥ is superimposed on the cuts of set up a., in order to isolate the region of the high tail of M_W⊥, which is interesting for new physics searches.

Table 1. Selection criteria imposed for the numerical simulation of the single-W production process at the LHC.
  a. p_⊥^µ ≥ 25 GeV, /E_T ≥ 25 GeV and |η_µ| < 2.5
  b. the cuts as above ⊕ M_W⊥ ≥ 1 TeV

The QCD factorization/renormalization scale and the analogous QED scale (present in MRST2004QED) are chosen to be equal, as usually done in the literature [15,16,19], and fixed at µ_R = µ_F = In order to avoid systematic theoretical effects, all the generators under consideration have been properly tuned to reproduce the same LO/NLO results. A sample of our numerical results is shown in Fig. 2 for the W transverse mass M_W⊥ and muon transverse momentum p_⊥^µ distributions according to set up a. of Tab. 1, and in Fig. 3 for the same distributions according to set up b. In each figure, the upper panels show the predictions of the generators MC@NLO and MC@NLO + HORACE interfaced to HERWIG PS (according to eq. (1)), in comparison with the leading-order result by HORACE convoluted with HERWIG shower evolution. The lower panels illustrate the relative effects of the matrix element residue of NLO QCD and full EW corrections, as well as their sum, which can be obtained by appropriate combinations of the results shown in the upper panels.

From Fig. 2 it can be seen that QCD corrections are positive around the Jacobian peak and tend to compensate the effect due to EW corrections. Therefore, their interplay is crucial for a precise M_W extraction at the LHC, and their combined contribution cannot be accounted for in terms of a pure QCD PS approach, as can be inferred from the comparison of the predictions of MC@NLO versus the leading-order result by HORACE convoluted with HERWIG PS. The interplay between QCD and EW corrections in the region interesting for new physics searches, i.e. in the high tail of the M_W⊥ and p_⊥^µ distributions, is shown in Fig. 3. For both M_W⊥ and p_⊥^µ, NLO QCD corrections are positive and largely cancel the negative EW Sudakov logarithms. Therefore, a precise normalization of the SM background to new physics searches necessarily requires the simultaneous control of QCD and EW corrections.

Conclusions

During the last few years, there has been a big effort towards high-precision predictions for D-Y-like processes, addressing the calculation of higher-order QCD and EW corrections. Correspondingly, precision computational tools have been developed to keep theoretical systematics under control in view of the future measurements at the LHC. We presented some results about EW and QCD corrections to a sample of observables of the Z and W production processes at the LHC. Our investigation shows that a high-precision knowledge of QCD and a careful combination of electroweak and strong contributions is mandatory in view of the anticipated experimental accuracy. We plan, however, to perform a more complete and detailed phenomenological study, including the predictions of other QCD generators and considering further observables of interest for the many facets of the W/Z physics program at the LHC.
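For completeness, the selection of Table 1 amounts to a simple per-event filter. The sketch below encodes set ups a. and b. for a toy event record; the field names and the event values are hypothetical, and the transverse-mass definition is the standard one rather than anything specific to the generators discussed above.

```python
from dataclasses import dataclass
import math

@dataclass
class MuonEvent:
    """Toy event record (hypothetical field names; GeV and radians)."""
    pt_mu: float
    eta_mu: float
    phi_mu: float
    met: float
    phi_met: float

def mt_w(evt: MuonEvent) -> float:
    """W transverse mass built from the muon and the missing transverse energy."""
    return math.sqrt(2.0 * evt.pt_mu * evt.met
                     * (1.0 - math.cos(evt.phi_mu - evt.phi_met)))

def passes_setup_a(evt: MuonEvent) -> bool:
    """Set up a.: pT(mu) >= 25 GeV, missing ET >= 25 GeV, |eta(mu)| < 2.5."""
    return evt.pt_mu >= 25.0 and evt.met >= 25.0 and abs(evt.eta_mu) < 2.5

def passes_setup_b(evt: MuonEvent) -> bool:
    """Set up b.: the cuts of set up a. plus M_T(W) >= 1 TeV."""
    return passes_setup_a(evt) and mt_w(evt) >= 1000.0

evt = MuonEvent(pt_mu=600.0, eta_mu=0.8, phi_mu=0.1, met=620.0, phi_met=0.1 + math.pi)
print(passes_setup_a(evt), passes_setup_b(evt), round(mt_w(evt), 1))
```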
Challenges for the Integration of Water Resource and Drought-Risk Management in Spain Droughts are risks characterized by their complexity, uncertainty, and a series of other features, which differentiate them from other natural disasters and affect the strategies designed to manage them. These characteristics highlight the close relationship between drought management and water resources management. The following hypothesis is raised in this study—unsatisfactory integration of a drought-risk and water resources management strategies, increases the vulnerability to drought. To corroborate this hypothesis, the Spanish case was analyzed, where droughts are a recurrent phenomenon, due to the Mediterranean climate. Starting from the Intergovernmental Panel on Climate Change (IPCC) framework, which has been proposed to characterize vulnerability as a function of exposure, sensitivity, and adaptive capacity, this study analyzed the vulnerability in the Spanish River Basin Districts, through—(i) the integration of the predictable effects of climate change and the increased risk of exposure in hydrologic planning; (ii) the pressure on water resources that determines the sensitivity of the systems; and (iii) the development and implementation of drought management plans as a fundamental tool, in order to adapt before these events occur. The results showed that despite important advances in the process of conceiving and managing droughts, in Spain, there are still important gaps for an adequate integration of droughts risk into the water resource strategies. Therefore, despite the improvements, drought-risk vulnerability of the systems remained high. Introduction When we talk about drought it is important to differentiate between drought as a natural phenomenon and drought as a risk.As a natural phenomenon, drought refers to the levels of rainfall dropping below normal, for one area, during a set period [1].By contrast, drought as a risk refers to the effect that the decreased rainfall may have on the available water resources trying to keep up with the demand [2].In the same way, when we talk about the relationship between drought and water management it is also important to highlight the differences between drought (which is a temporary and natural phenomena), water scarcity (which is permanent and anthropogenic), and water deficit (which is temporal and anthropogenic) [3].A period of drought can lead to water deficit situations that make it difficult to meet the water demands.So, this situation could be prevented and adequately managed to avoid restriction in use.Scarcity, finally, refers to a permanent situation of water deficit, where demand is greater than the available resources and structural measures that are needed to revert this situation [4].Droughts understood to be risks are the result of a complex process which includes different natural and human factors, in continuous interaction, and operate in different spatial and temporal scales [2].On the one hand, a drought depends on the atmospheric and hydrologic processes, which vary depending on location and topography, which determine the spatial and temporal distribution of rainfall, and also the effectiveness, intensity, and number of rain events that occur [5].These processes are also important in the incidence of drought factors, such as geology and vegetation, which determine the capacity of water retention, infiltration, runoff, evapotranspiration, etc., and which ultimately determine the amount of water available to meet 
demands.Droughts also have important spatial variations and a strong dependence on the context in which it is produced [6].The level of impact of a drought is also related to the ways in which societies relate to the environment, through the management and exploitation of water resources and the development of strategies and tools to cope with these types of events, which determine the vulnerability of different systems to the onset of a drought period by conditioning their exposure, sensitivity, and ability to adapt. Like other natural hazards, drought-risk is associated with high levels of uncertainty.Moreover, the role that the human component plays in the lower or greater incidence of this risk by subjecting the systems to high levels of water stress and, thus, high levels of vulnerability, as well as the ability of the human being to intervene in the process and make decisions as the drought progresses, incorporates into the system a type of uncertainty that is typical of human intervention [7]. Furthermore, droughts present a number of characteristics that differentiate it from other natural disasters.(1) The effects of a drought can accumulate over long periods of times and persist for many years [8].(2) The temporal and spatial limits of drought can often be difficult to determine [9][10][11]. (3) Unlike other natural disasters, the impacts of drought rarely result in structural damage, so impact quantification is much more difficult [10,12].(4) Water is a basic resource needed for ecosystem maintenance, the majority of productive processes, and life.Droughts impacts are defined by a decrease or absence of this basic resource, hence, the relationship between ordinary resource management and risk management is critical in determining the impact of a drought. These special characteristics of drought as a risk phenomenon, place drought management strategies under important processes of perception and interpretation.In many cases, perception of drought derives from understanding and managing of the resources. In cases like that of Spain's, where drought is an inherent feature of the climate and a period of drought appears approximately every ten years (the most severe episode in terms of economic losses took place in 1941-45, 1979-83, 1990-95, 2004-08, and 2015-18), these drought events affected virtually the entire Spanish territory.In this context, it is essential to perceive and assume the normality of the occurrence of periods of drought and to integrate this into the ordinary water resources management, to reduce vulnerability of the system, against the emergence of this type of phenomenon [13]. As already pointed out, the hypothesis of this work is that a poor integration of a drought-risk in water resource planning strategies, generates an increase in the vulnerability to these risks.Hence, the structure of the paper is as follows.Section 2 explains the main changes in the drought management strategies that have been reached at the institutional level, derived from a change in the perception of droughts towards the assumption of normality of these phenomena, in Spain.Section 3 presents the methodology for analyzing the degree of real application of these institutional and normative changes.Results are then presented in Section 4, followed by a discussion in Section 5, which offers a debate and finally the conclusions. 
The Evolution of Drought Management Strategies in Spain In Spain, it rains an average of 650 mm every year (series 1981-2010).However, rainfall levels vary greatly annually, which creates a habitual succession of one or more wet years followed by one or more dry years.There is also an important spatial variability on the Iberian Peninsula, with regions in the northwest receiving excess of 800 mm of rainfall, annually, and the regions in the southeast that barely reach 250 mm.Additionally, rainfall in Spain has a marked seasonality, most of the rainfall is concentrated in the fall and winter months, while summer precipitation is almost nonexistent.This irregularity is due in part to the geographic position of the Iberian Peninsula which is in a transitional zone between two climates (Oceanic, Mediterranean).It is also affected by its size and varied orography [14,15].This natural irregularity has been the pretext of the country's hydraulic policy, throughout the twentieth century, which has tried to correct the territorial and seasonal imbalance in water supply by building important infrastructure, such as reservoirs and transfers between watersheds [16][17][18][19][20][21]. Derived from this manner of interpretation of water resource management, Spain's drought management strategies have been dominated by the crisis management model, as the Ministry of the Environment has acknowledged [22].From this paradigm, droughts have been interpreted as extraordinary events for which little or nothing could be done.The main tools that have been used for management have focused on the application of reactive and emergency measures, once the impacts are already perceived, usually through the approval of royal drought decrees, which are only used in emergency situations, for the general interest [23][24][25].Perceptions about drought and insufficient rainfall are keys to gaining a good understanding of drought management and hydraulic policy, in general.This results in unrealistic scenarios which puts the system under severe pressure and increases vulnerability to episodes of drought [13]. Due to its geographical location, drought episodes constitute one of the main natural risks of atmospheric origin in Spain [26].Drought is a recurrent risk in Spain and a drought or quasi-drought takes place, approximately, every ten years.In addition, climate change models predict an increase in the frequency and intensity of droughts in the Southern European regions [27], and Spain is highlighted as one of the most vulnerable countries in terms of future water availability [28]. For this reason, drought can no longer be regarded as an extraordinary event.This is an approach that promotes the separation of regular water management and drought management-i.e., calculating the supply and demand without taking into consideration the potential periods of drought, and addressing it by implementing mitigation measures, only after the event has become a reality. 
The need to reorient the management of droughts towards the risk management paradigm is recognized at the institutional level, although certain reluctance is still evident [28].In this sense, we can highlight three key milestones that practically coincide with the times that mark the way towards a risk management paradigm.First, the consequences of the drought of 1991-95-with more than 12 million affected-included agricultural losses, which lowered the country's GDP by 1 percent, and caused considerable environmental damage, which was never evaluated.The management of this drought included as many as twelve drought decrees, five ministerial orders that modified some aspect of the decrees, and two state secretary resolutions that were passed with emergency measures.These measures were the maximum expression of the reactive approach, and as the White Paper on Water recognizes, they were undertaken with little planning and were unable to solve most of the problems caused by the event [28]. Secondly, the initiatives promoted by the European Union that assumed important changes in the interpretation of water risks and drought management.Concretely, the approval of the Water Framework Directive (WFD) (2000) led to important changes regarding water management.For one, the directive moved away from a quantitative focus on the hydraulic paradigm and emphasized the ecological status of bodies of water.Concerning drought management, although it was outside the strict remit of the WFD to deal with extreme meteorological events (none of the articles in the WFD directly addressed the management of such events), the directive aimed to contribute to palliate the effects of flood and drought (Article 1), which paved the way for including drought in planning mechanisms, and adopting preventive and recovery-oriented measures [29].On the other hand, The publication by the Communication Department of the European Commission, "Addressing the challenge of water scarcity and drought situations", from the European Commission, was adopted in 2007 (COM (2007) 414), whose objective was to encourage the management of resources and water risks, in order to prevent, and to mitigate water scarcity and drought situations, with the priority to move towards a water-efficient and water-saving economy. Thirdly, although still reproducing some ideas from the traditional hydraulic paradigm, the National Hydrological Plan Act (2001) included an article (article 27) which demanded the preparation of Special Action Pans during Conditions of Alert and Drought (PES is the Spanish acronym) and Emergency Plans (PEM is the Spanish acronym), for supply systems that serve urban agglomerations with a population in excess of 20,000 inhabitants.This was a unique disposition in the legal corpus of European countries.It also directed the Ministry for the Environment to elaborate a global system of indicators, aimed at predicting drought, which the basin authorities might use as a universal reference.This act was, according to the Ministry, the inflexion point which marked the paradigm shift from crisis management to risk management [2]. 
However, despite these advances in both the interpretation of drought and the development of new management tools, which were related to a risk-management approach, the integration of risk into the water management strategies and planning was still, in practice, insufficient [24].As a result of this separation between resource and risk management, systems remained vulnerable to the onset of a period of drought, and the new management tools developed were insufficient to deal with drought-risk.The result was that drought decrees still played a relevant role because they perpetuated, in practice, the ideas of the traditional hydraulic paradigm.This was demonstrated by the publication in 2015 of the 'Royal Decree Law 356/2015 for the Segura basin, the 'Royal Decree 355/2015 for the Júcar basin, and the 'Royal Decree Law 10/201 , which adopted urgent measures to palliate the effect of drought, in a number of river basin districts, and modified the Water Act-a modification which was later endorsed as the 'Royal Decree Law 1/2001 . Materials and Methods To corroborate this hypothesis-poor integration of water resource management and drought-risk management strategies increases the vulnerability to drought-risk-three aspects were analyzed that were directly related to the three components of vulnerability, proposed by the Intergovernmental Panel on Climate Change (IPCC) [30].The exposition, defined in the function of the characteristics of the threat, such as, the frequency, magnitude, or duration of a disturbance [31][32][33]; sensitivity, defined by those conditions of the exposed system that make it more likely to experience damage and be adversely affected by a natural hazard [34]; and the ability to adapt, defined as the set of characteristics and capacities of the society that allow it to cope with drought as the phenomenon advances (response in the short term) and also those that are part of a constant process of learning, experimentation, and change in the way to deal with these risks [2]. In the first place, it analyzed the degree of integration of the effects of climate change in the Spanish River basin district plans.The analysis of this variable allowed us to know if the predictable increase of exposure to drought in the ordinary water resource planning was integrated.To analyze this variable, we reviewed all Spanish River basin district plans and analyzed the extent to which climate change forecasts have been introduced on water resources in the future (increased demands, decreased availability of resources, and worsening of the state of water bodies). Secondly, to characterize sensitivity, the state of pressure on water resources was analyzed, as a characteristic of the ordinary management of resources.Sensitivity determines the hydrological state of the different systems to respond to a period of drought.To analyze this variable, an analysis of the different river basin districts was carried out by using two indicators-the Water Exploitation Index (WEI) and the ecological state of water bodies was measured, according to the provisions of the Framework Water Directive (FWD). 
Finally, to characterize the degree of adaptation and preparation to respond to drought-risk, compliance with Article 27 of the National Hydrological Plan (NHP) was analyzed. This article established the obligation to elaborate special action plans for alert situations and eventual droughts (PES) in all river basin districts, as well as emergency plans (PEM) for urban supply systems serving more than 20,000 inhabitants. This was done through a review of the documents, analyzing the established obligations and the current administrative state of the different plans.

Results

The results of the analysis carried out are presented in connection with the three variables analyzed. The results are presented for each of the Spanish River Basin Districts (Figure 1), so we could carry out a discretized analysis on each of them, for the three variables, and could also do an assessment of the whole country. Figure 1 shows the differences in the water management administration authorities in Spain (regional governments for the intra-community river basin districts, the Spanish Central Government for the inter-community river basin districts, and the transboundary river basin districts). Due to the different orography and climate conditions, there were also some important differences in the water resources availability, water demands, and the use of non-conventional resources among the different Spanish River Basin Districts (Table 1).

Climate-Change Effects on Water Resources

As a result of the fourth report of the IPCC [35] and on the basis of the models of climate change that were published for Spain in the year 2012, the public works research and experimentation center (Centro de Estudios y Experimentación de Obras Públicas, CEDEX) published a series of reports on the evaluation of the effects of climate change on the hydrological resources and water bodies for all Spanish River Basin Districts [36]. These works were developed from the regionalized climatic scenarios for Spain, within the framework of the IPCC's fourth report [35]. The emission scenarios selected and transferred to all hydrological demarcation plans (SRES A2 and SRES B2) were part of the set of greenhouse gas emission scenarios established in the year 2000 in the IPCC Special Report on Emission Scenarios (SRES A2 reflects the situation of non-adoption of measures to reduce emissions; SRES B2 incorporates reduction measures to alleviate the pernicious effects of climate change).
These reports conclude that climate change will have an impact on rising temperatures, a widespread decrease in rainfall and runoff, and an increase in rainfall irregularities.The consequences of these changes on water resources, according to these reports, will be: (1) Increased evapotranspiration and, therefore, a general increase in consumption and water demands (mainly plant and agricultural demand); (2) reduction of rainfall, runoff, and natural contributions will lead to a decrease in available water resources; (3) worsening of the ecological state of rivers (other types of bodies of water were not evaluated); and (4) increased pluviometry irregularity that would cause an increase in the uncertainty of water availability. All of these consequences, obviously, negatively impact the vulnerability of the systems for coping with periods of drought in the future, which is exacerbated in the absence of adequate adaptation to the new scenario that climate change poses.In this sense, the consideration and application of climate change forecasts in the planning of water resources in Spain, has been insufficient, as shown in Table 2. Mainly, the Spanish River Basin Districts introduce these considerations weakly and are limited to apply percentages of reduction contained in the Hydrologic Planning Instruction for calculating the resources available in scenarios 2027 and 2033.Recently CEDEX [37] has published an update on the climate change assessment of water resources and droughts in Spain.Although it has not yet taken the time to incorporate the new conclusions on the effects of climate change on water resources in the river basin district plans, it should be included in the new planning cycle (2021-2027).Unlike the document published in 2010 [36], the new regionalized scenarios for Spain are based on the AR5 (Fifth Climate Change Assessment Report) [38], which introduces updates to the General Circulation Models (GCM), as in the emission scenarios used.The GCM used in the AR5 are called.The General Circulation Coupled Atmosphere-Ocean Models (MGCAO) simulate the dynamics of the physical components of the climatic system (atmosphere, ocean, earth, and ice cap); and the most complete Terrestrial System Models (TSM), and include the representation of several biochemical cycles, such as those involved in the carbon, sulfur, and ozone cycles. Additionally, the AR5 has defined four new emission scenarios, the so-called Representative Paths of Concentration (RCP), which replace the SRES used in previous reports (SRES 2 and SRES 4).News RCP used are RCP 2.6, RCP 4.5, RCP 6.0, and RCP 8.5.Each RCP is associated with a high spatial resolution database of pollutant emissions (classified by sectors), GHG emissions and concentrations, and soil uses up to the year 2100, based on a combination of models of different complexity of atmospheric chemistry and the carbon cycle (IPCC 2013). 
In the Spanish case, the Meteorology State Agency (Agencia Estatal de Meteorología, AEMET) has carried out a regionalization of the climatic scenarios of the RCP 4.5, 6.0, and 8.5, and the CEDEX has used RCP 4.5 (scenario of stabilization of the GHG emissions, where the maximum CO 2 concentration in the atmosphere is estimated at 528 ppm) and RCP 8.5 (scenario with very high levels of GHG emissions, where the concentration of maximum atmospheric CO 2 is estimated at 936 ppm) for the report on the impact of climate change on water resources and droughts in Spain.The conclusions of the report for the variables, which can determine the availability of water resources in the future, are presented in Table 3. Estimates in both scenarios show a widespread decrease in rainfall, soil moisture, aquifer recharge and runoff, and an increase in evapotranspiration, for the entire country, which would become more acute as the 21st century progresses.It is true that the RCP 8.5 scenario estimates are more pronounced.An increase in the potential evapotranspiration is also generally seen. In general, it recognizes a reduction of water resources in the whole peninsula, more intense in the south and in the archipelagos, and a minor reduction in eastern parts of the Iberian Peninsula.This decrease in the availability of resources will result in an increase in the scarcity of water resources to meet demands [39].In addition, the same report predicts a change in the drought regime.Most climatic predictions show a future in which droughts would be more frequent and intense, blaming the progress of the 21st century for increasing exposure to these kinds of episodes in the future. The hydrological plans of the different river basin district is to partially incorporate the forecasts of climate change in such a way that scenarios 2027 and 2033 include the decline in the availability of resources (but not the estimates of the increase in evapotranspiration), the consequent increase in demand, and the effects of climate change on the state of the water masses.The usual recurrence of droughts in Spain and the predicted increase in their frequency and intensity make it necessary to incorporate the period of drought's pluviometry normality and a review of the availability of resources that takes into account the variability of drought periods, as a standard, and not as an exception, in order to make a resource allocation much more adjusted to the actual availability of the resources necessary. Water-Stress Level for Different River Basin Districts In Spain, it usually rains enough to meet demands.Furthermore, the country also has a great regulation capacity (56,000 hm 3 ).However, the mentioned rainfall variability (interannual, seasonal, and spatial) is combined with a growing demand of water for different purposes [39].In the Spanish River Basin Districts (2015-2021), the total demand of water amounts to 31,355.4 hm 3 , of which 80.4% of which is agrarian, 15.6% urban, and 4.2% industrial demands [39]. 
The level of pressure to which the water resources are subjected in the ordinary hydrologic planning determines the response of the system when a drought occurs and negatively impacts the sensitivity of the system.The greater the pressure on water resources the more difficult it will be to meet demands when the availability of resources decrease due to drought.The level of pressure on the water resources in a system of exploitation or river basin district, can be characterized by two variables-the stress to which the water resources are subjected and the state of the water bodies after various demands. In Table 4, a synthesis of natural contributions is presented (hm 3 /year).Available resources (hm 3 /year) were calculated as the natural conventional resource to which non-conventional (reuse and desalinization) was added, also subtracting the transferred flows to other basins and adding those received by transfers from other areas of planning and consumption demand (urban, agricultural, and industrial) (hm 3 /year).The calculations were done on the official published data of the WEI. Water Exploitation Index is a widely recognized indicator used to characterize the level of pressure on the water resources of certain territories or river basins [40].This indicator relates the total use of resources (consumptive and non-consumptive) to the total renewable resources, expressed as a percent.According to Eurostat, WEI values below 10% imply that the analyzed system is not subjected to any form of stress, if the value is between 10-20% it implies a low water-stress level.If the indicator exceeds 20% it is considered to have raised the alarm for water-stress levels, and if it exceeds 40% the system is in severe stress.Another very similar indicator is the WEI+ (which is an evolution of the WEI) and it relates the total amount of resources consumed (consumption with no returns) to the total amount of renewable resources.Source: Author's own elaboration from CEDEX (2017b) [39] and García and Martínez (2016) [41]. Due to the variety of methodologies used in each river basin district to calculate these parameters, we have used two different methods of calculation.First, we used an adaptation of the WEI proposed by García and Martínez (2016) [41].This entails dividing the total consumption demands (excluding hydroelectric use) between the natural contributions, which results in the quantitative relationship between water availability and human pressure.In addition, the official results presented are the same as that published by CEDEX itself (2017), in which the WEI+ was calculated on the basis of the relationship between the demands of the available resource, which is also expressed as a percentage.The results obtained through the two methodologies (WEI+ and *WEI+) showed similar results for all river basin districts.The main difference lay in the Segura River Basin demarcation, due to the amount of water this area receives from the Tajo-Segura transfer which significantly increases the amount of resources available.For the rest of the demarcations both indexes (WEI+ and *WEI+) present similar values that can be used as a reference to determine the state of pressure on the water resources. 
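The WEI/WEI+ bookkeeping and the Eurostat thresholds described above are straightforward to encode. The sketch below uses hypothetical basin figures purely for illustration; they are not the values of Table 4.

```python
def wei_percent(total_demand_hm3: float, renewable_resources_hm3: float) -> float:
    """Water Exploitation Index: total water use over renewable resources, in %."""
    return 100.0 * total_demand_hm3 / renewable_resources_hm3

def stress_class(wei: float) -> str:
    """Eurostat-style classification quoted in the text."""
    if wei < 10.0:
        return "no stress"
    if wei < 20.0:
        return "low stress"
    if wei < 40.0:
        return "water-stress alert"
    return "severe stress"

# Hypothetical basins (hm3/year), for illustration only: (demand, renewable resources).
basins = {
    "Basin A": (1200.0, 14000.0),
    "Basin B": (4500.0, 16000.0),
    "Basin C": (9000.0, 15000.0),
}
for name, (demand, resources) in basins.items():
    wei = wei_percent(demand, resources)
    print(f"{name}: WEI = {wei:.1f}% -> {stress_class(wei)}")
```

Replacing total demand with net consumption (demand minus returns) in the numerator gives the WEI+ variant used alongside the WEI in the analysis.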
According to the data obtained, only four of the fifteen demarcations analyzed (Cantábrico Occidental, Cantábrico Oriental, Galicia Costa, and Miño-Sil) had levels of stress below 10% which shows that they had little to no water stress.Three demarcations (Duero, Tajo, and the internal basins of Catalonia) had values between 20% and 40%, presenting alarming levels of stress on their resources.The rest of the demarcations had very high values, well above 40%, which indicated severe stress.The average value for all fifteen demarcations was slightly above 31%.Although both values place the entire country in a water-stress alert level, the mean value for all demarcations was skewed by the extreme values of some of the demarcation's data, which were not representative of the country.If the median value is used, where extreme values do not affect the average as much, a WEI value of 45.4% and a WEI+ value of 46.9% is achieved. The results obtained show that many of the Spanish demarcations were subject to severe water-stress levels.In addition, the demarcations with higher rates of water stress were also those that had the greatest availability of resources (except Cantábrico Occidental, Miño-Sil, and Galicia Costa) indicating that in most cases it was not a water availability issue but that of excessive pressure on the existing resources. The other indicator used to characterize the existing pressure on the hydrological resources was the state of the water bodies.Water bodies are a fundamental element which is articulated in the Water Framework Directive (WFD) and in the hydrologic planning.The purpose of the Directive is to reach a good status, for all water bodies, to ensure the adequate supply of surface and groundwater, and to require a sustainable, balanced, and equitable use of water, as stated in Article 1 of the WFD.Source: Author's own elaboration from CEDEX (2017b) [39]. Different demands are assigned to the water bodies in order to meet the water use needs of the different sectors.Therefore, the state of the water masses is not merely a goal, but also a condition for adequate water supply to meet the demands.When a period of drought occurs, the qualitative and quantitative status of a water body can be affected and can limit the ability to satisfy demands, either by reducing the amount of available water or by worsening the quality of water.The state of water bodies affects the way in which the water body responds to the decreased precipitation.In Table 5, data on the global state of surface water and groundwater bodies, for each of the Spanish River Basin Districts, are presented and analyzed. For surface water bodies, 43.4% of the bodies of the water presented had a global status of "worse than good", while 56.6% had a status of "good".Duero (70.2% the bodies of water had a "worse than good" status), Guadiana (69.6%),Tinto-Odiel-Piedras (50%), Guadalete-Barbate (54.6%), and Jucar (63.3%) had the highest percentage of bodies of water that did not reach the "good" status; being always above 50%. Both the amount of pressure on water resources and the global state of bodies of water showed a generalized situation of high pressure on the Spanish water resources of the different river basin districts, which increased the sensitivity to suffer impacts (economic, environmental, and social), before the onset of a drought. 
Development and Implementation of Drought Planning The main tool used to cope with droughts in the risk management paradigm are drought plans, as recognized by the major international agencies [42][43][44].The theoretical planning of these tools are based on two basic ideas.First, the probability of eliminating the risk does not exist, but can be minimized, and second, planning while anticipating the problem has many advantages: (1) It allows researchers to learn and analyze what causes systems to become vulnerable to periods of drought and how to fix the problem preemptively; (2) it prevents hasty decisions being made in response to the crisis.Therefore, the objective of the drought plans is to determine the arrival and departure of the different drought phases and the specific measures of action necessary for each social phase [11].Hence, the prevention and gradual adaptation to a drought as it progresses to reduce the economic, environmental, and social impacts. In Spain, Article 27 of the National Hydrological Plan Act establishes the obligation to prepare special plans of action for alert situations and eventual droughts in all River Basin Districts (PES), and urban supply system emergency plans (PEM). Special Action Plans during Conditions of Alert and Eventual Drought (PES) PES are an important step forward in conceptual and operational terms, as it aims to evaluate drought, objectively, and to implement progressive measures with which to prevent droughts from becoming more severe [44].These plans, which must be made by the watershed agencies in each of the river basin districts and must be coordinated with the hydrological demarcation plans, make up the system of indicators and thresholds used to identify the different states of drought (Normal, pre-alert, and emergency), as well as exploitation rules and the measures to be implemented concerning public ownership of water at each stage (Article 27.2.National Hydrological Plan Act). In any case, the first steps of this new policy have suffered from a number of teething problems. 
(1) The PES, due in 2005, were only published by the inter-regional basins in 2007, while the first-cycle hydrological plans (2009-2015) were published even later (some of them as much as six years late). These hydrological plans must be incorporated into the drought plans, but there is no guarantee that the targets and measures contemplated in both will be compatible. The late publication and inadequate updating of the PES make it very difficult to integrate the current (but out-of-date) PES (2007) and the second-cycle hydrological plans (2015-2021), published in late 2015, with up-to-date information concerning resource inventories, demand, and, especially, ecological flows. (2) The drought indicators in use are, in fact, indicators of scarcity. While indicators of scarcity may be useful in assessing and comparing supply and demand, they fall short of discriminating between 'meteorological drought' (caused by below-average rainfall) and water scarcity related to reservoir and aquifer levels, which depend, to a large extent, on the management model in place and on how water was used before and after rainfall declined. As such, these plans are little more than contingency plans to address shortcomings in supply, but are largely unrelated to meteorological drought. The WFD permits the temporary deterioration of bodies of water, but only in unforeseeable and exceptional circumstances (Article 4.6, WFD). Adjusting the indicators to ensure that the reasons behind the deterioration of bodies of water are adequately accounted for is essential for the correct implementation of the WFD.

In 2017, the Ministry of Agriculture, Fisheries, and Food (MAGRAMA) published technical guidelines concerning special drought plans and prolonged drought and scarcity indicators (TG), which established a number of rules for the updating of the PES. Certain basin authorities have already published drafts of the updated PES. The revision of the PES, which is currently under way, is an opportunity to continue advancing towards a drought-management model based on prevention, mitigation, and progressive adaptation. We must take this opportunity to discriminate, at long last, between prolonged meteorological droughts and socially constructed, management-induced scarcity. In addition, we must ensure that the various planning processes, for instance PES and PEM, are better coordinated in the future.

4.3.2. Emergency Plans (PEM) for Supply Systems That Serve Urban Agglomerations with a Population in Excess of 20,000 Inhabitants

These plans are only concerned with urban areas with a population in excess of 20,000 inhabitants and are, therefore, the responsibility of the authorities in charge of overseeing the water supply. The aim of these PEM is to prevent drought from affecting the urban supply. However, although the deadline for their publication was 2005, only a few have been published and their impact has, therefore, been very limited.
The publication of the new PES drafts in December 2017 meant that progress could be assessed with regard to PEM. These drafts identified two hundred and thirteen supply systems for urban areas with over 20,000 inhabitants, which are therefore required to provide a PEM. Only 8.5% of these supply systems have a PEM approved under the relevant PES (2007); 11.3% of systems have already delivered their PEM, which is currently being evaluated by the relevant basin authorities; 9.9% have a PEM in place, but the plans need to be revised and adapted to the relevant PES, either because the PEM was published before the PES and, in consequence, does not follow the PES guidelines, or because, despite the PEM being published after the PES, the indicators used do not match those used in the relevant PES. Most systems (70.4%) have not yet submitted their PEM to the relevant basin authority.

Discussion

Developing strategies for managing natural hazards implies accepting the complexity and uncertainty associated with this type of phenomenon. Droughts have special characteristics as a natural hazard (a slow, creeping onset rather than the sudden appearance and immediate destructive capacity of hazards such as hurricanes, tornados, or tsunamis), and these characteristics condition their management strategies. In the case of drought, in addition, the way in which the resource is managed determines the vulnerability of the system when facing a period of drought. Indeed, the level of impact of a drought is related not only to the magnitude of the rainfall decrease but also to the way in which societies relate to the environment, through the management and exploitation of water resources and the development of strategies and tools to cope with these types of events. This relationship determines the vulnerability of different systems to the onset of a drought period by conditioning their exposure, sensitivity, and ability to adapt.

In this sense, from the perspective of the traditional hydraulic paradigm, droughts are interpreted as catastrophic and extraordinary acts of nature for which little can be done beyond applying reactive and emergency measures once the impacts have already occurred. This interpretation has not only failed to deal with drought effectively but has also legitimized expansionist water policies, based on constructing major infrastructures in order to increase water supply and thus palliate water scarcity (which is increasingly a socially constructed phenomenon) without challenging current patterns of water usage.
Despite important advances in the perception and understanding of drought-risk at the institutional and normative levels, the results obtained for the three variables selected to assess real integration (integration of climate change forecasts into water resource planning, the state of pressure on water resources, and the state of development and implementation of drought plans) show a clear gap in the proper integration of drought-risk into ordinary water planning and, thus, an increase in drought-risk vulnerability. This presents us with major challenges: (1) integration of the idea of the non-exceptionality of drought into the data on hydrological balances (supply/demand) and into water management; an effective integration would reduce the pressure on water bodies and exploitation systems and, thus, would reduce the vulnerability to meteorological drought; (2) assuming that these events will become increasingly frequent as a result of climate change, supply/demand balances and planning must realistically adapt to the predicted effects of climate change on water bodies; and (3) abandonment of the notion of drought as an extraordinary event, and development of specific tools which allow us to confront meteorological drought with the same instruments used during non-drought periods. In this regard, the potential of drought plans as management tools is clear. Planning means anticipating the problem before it becomes unmanageable, by identifying strengths and weaknesses and avoiding crisis-driven contingency measures as far as possible. However, the notion that these plans may be capable of confronting the problem single-handedly, without a serious review of the prevailing water-exploitation model, can lead to a false sense of security. In a country such as Spain, where we cannot predict when the next drought will come, or how severe it will be, but where we are certain that it will come, the only valid strategy for confronting drought is to tackle vulnerability. In order to achieve this, we must: (1) reduce the sensitivity of systems by reducing consumption and adapting planning strategies to the predicted effects of climate change; (2) presume that drought is a normal occurrence and that the frequency of drought episodes will increase with climate change; and (3) increase our ability to adapt and respond to this sort of event by incorporating drought plans into ordinary hydrological planning.

Conclusions

In Spain, drought is a recurrent phenomenon rather than the exception. Moreover, water resources are subject to high levels of demand. Thus, the adequate management of drought-risk lies in integrating drought events into the climatic regularity of the country and, therefore, into the ordinary management of water resources.
However, despite important advances both in the interpretation of drought-risk and in the development of new techniques and tools for its management from approaches based on prevention and adaptation, there are still important challenges for the proper integration of risk-management strategies into water resources planning and management processes. This is shown by the analysis carried out on the integration of drought-risk into water resource management, which revealed an important gap. The results obtained show how this gap increases vulnerability to drought-risk.

Regarding climate-change forecasts, we confirm that, despite differences between river basin districts, the consideration of the effects of climate change in hydrological planning is practically non-existent in Spain. The plans do not incorporate estimates of the increase in evapotranspiration and the consequent increase in demands, nor the effects of climate change on the status of water bodies. The increasing frequency and intensity of droughts expected in the future is also not integrated into the hydrological plans. Increased exposure to drought [27] and effects on the availability and quality of water resources [38] have been identified, but they are not considered in the water management plans.

Regarding the state of pressure on water resources, which determines the sensitivity of the system when coping with the onset of a drought period, the results obtained for the two selected variables showed high pressure on water resources in all Spanish River Basin Districts. About 53% of the river basin districts in Spain reached a severe level of stress, and the assessment of water body status showed that only 59.9% of the 5582 water bodies reached a "good" global status, very far from the objective set by the WFD of achieving good ecological status for all water bodies by 2015. This means that Spanish water bodies are highly sensitive to the onset of a drought.

Finally, the analysis of the state of development and application of the drought plans showed the late publication and inadequate updating of the Special Action Plans during Conditions of Alert and Eventual Drought (PES) and a very limited development of the Emergency Plans (PEM) for supply systems that serve urban agglomerations with a population in excess of 20,000 inhabitants, with just 8.5% in force thirteen years after the mandatory date. This means that the main tool for dealing with drought under the drought-risk paradigm is still incipient and currently non-operational.

This situation (high exposure, high sensitivity, and low adaptive capacity) reveals a high vulnerability of the systems when coping with the emergence of a period of drought and reduces the effectiveness of the tools and plans that are operational; thus, the hypothesis raised is corroborated.

Regardless of the differences that may exist between the river basin districts in terms of average precipitation, non-conventional resources, or types of demand (which did not fall within this analysis), the little or no integration of drought-risk into the ordinary management strategies for the resource is confirmed as a common feature and a pending challenge in Spanish water policy. This challenge also highlights certain forms of resistance that exist in Spain to achieving sustainable management of water resources and drought-risk, which have not yet been analyzed.
Figure 1. Location and type of management of the Spanish River Basin Districts.

Table 1. Main characteristics on demands and resources in the Spanish river basin districts. * Water transfer accountability is included in the "available resources" column. Source: Author's own elaboration from the Spanish River Basin Districts Plans (2016-2021).

Table 2. Reduction of natural contributions in the River Basin Districts due to climate change. Source: Author's own elaboration from the Spanish River Basin Districts Plans (2016-2021).

Table 3. Average values of change in the hydrological variables analyzed for the different projections of the emission scenarios RCP 4.5 and RCP 8.5.

Table 5. Global status of the surface and groundwater bodies of the Spanish Peninsular River Basin Districts.
2019-02-06T08:26:15.847Z
2019-01-09T00:00:00.000
{ "year": 2019, "sha1": "cf7cdd371ebdb4302b36d9c99a2cff6c10c52064", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/11/2/308/pdf?version=1547029065", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "cf7cdd371ebdb4302b36d9c99a2cff6c10c52064", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
269961791
pes2o/s2orc
v3-fos-license
ROX Index Variation as a Predictor of Outcomes in COVID-19 Patients Background: During the COVID-19 pandemic, emergency departments were overcrowded with critically ill patients, and many providers were confronted with ethical dilemmas in assigning respiratory support to them due to scarce resources. Quick tools for evaluating patients upon admission were necessary, as many existing scores proved inaccurate in predicting outcomes. The ROX Index (RI), a rapid and straightforward scoring system reflecting respiratory status in acute respiratory failure patients, has shown promise in predicting outcomes for COVID-19 patients. The 24 h difference in the RI accurately gauges mortality and the need for invasive mechanical ventilation (IMV) among patients with COVID-19. Methods: Study design: Prospective cohort study. A total of 204 patients were admitted to the emergency department from May to August 2020. Data were collected from the clinical records. The RI was calculated at admission and 24 h later, and the difference was used to predict the association with mortality and the need for IMV, a logistic regression model was used to adjust for age, sex, presence of comorbidities, and disease severity. Finally, the data were analyzed using ROC. Results: The difference in respiratory RI between admission and 24 h is a good predictor for death (AUC 0.92) and for mechanic ventilation (AUC: 0.75). Each one-unit decrease in the RI difference at 24 h was associated with an odds ratio of 1.48 for the risk of death (95%CI: 1.31–1.67) and an odds ratio of 1.16 for IMV (95% IC: 1.1–1.23). Conclusions: The 24 h variation of RI is a good prediction tool to allow healthcare professionals to identify the patients who will benefit from invasive treatment, especially in low-resource settings. Introduction The coronavirus disease 2019 (COVID-19) pandemic represented a major global health threat and most health systems were easily overwhelmed [1].The global mortality rate of COVID-19 deaths is far more than the published data [2].In its latest report, the WHO stated a mortality rate of greater than 66% [3].During the first and most devastating wave of COVID-19, Ecuador had one of the highest mortality rates in South America [4]. In low-resource settings, the number and severity of cases resulted in high mortality due to a lack of mechanical ventilators and as well as other medical supplies.The prioritization of patients through triage tools was essential to provide opportunities to those with the best chance of survival and to facilitate decisions regarding their appropriate referral level.During triage, using simple and cost-effective methods is warranted in any situation, especially in low-resource settings [5]. Acute respiratory distress syndrome (ARDS) was present in up to 20% of patients with COVID-19, most of them with an urgent need for mechanical ventilation [6,7] so the development of a rapid prediction tool for COVID-19 severity needed to be established as soon as possible. In COVID-19, the main concern was the rapid respiratory decompensation that led to establishing early intubation and IVM protocols based on the increased O2 needs [8][9][10].Identifying the correct moment to intubate is important, as the risk of delaying this procedure leads to severe complications in a short time [11][12][13][14]. 
One of the challenges emergency physicians confronted during the emergency room visit was to estimate the severity of illness and prognosis [15], ideally completed with simple, precise, and practical scales applied in conditions like this pandemic [16]. The need for early predictors to identify high-risk patients and those with the best chance of survival was a high priority.Many scales have been used but with limitations in their predictive power.Previous tools were often time-consuming and complex to calculate.One possible solution, the RI, described by Roca [17], expresses the relation between pulse oximetry/inspired oxygen fraction (FiO2)/respiratory rate.It is a useful tool in decisionmaking and in the immediate therapeutic management of patients [17,18].Originally used for in-patients with pneumonia with acute respiratory failure treated with high flow nasal cannula (HFNC), the RI can help identify those patients with low and those with high risk for intubation [18]. Although the predictive capacity of RI is adequate, it is not high enough to be used as the sole criterion to predict failure of the HFNC, and those who require IMV.Therefore, it is important to add to this tool clinical variables or serial measurements to improve the predictive value. The objective of the present study was to evaluate the 24 h RI variation as a noninvasive tool in the emergency department to define the possible outcome of COVID-19 patients [12,13] in the pandemic scenario. Design and Study Population This is a prospective cohort study.Between 1 May to 31 August 2020, a total of 204 consecutive patients were admitted to the COVID-19 treatment area in a reference second-level hospital in Quito, Ecuador (2850 m altitude).The following inclusion criteria were utilized: age 18 or older, met one or more of the WHO criteria for confirmed or strongly suspicious cases of COVID-19, which are a positive PCR-rt test in the first case, or chest CT (computed tomography) scan result highly suspicious (CORADS) of COVID-19, and clinical-epidemiological criteria for the second one.In the health public system at the time of the study, the availability of COVID-19 molecular diagnostic tests was scarce and limited, and it was therefore impossible to test everyone.Only 25 patients out of 204 patients had the test performed.The CT, on the other hand, was available to every patient with a clinical suspicion of COVID-19. The protocol was approved by the ethics committee of the hospital MSP-CZ9HGDC-2021-0700-O. No intervention was made in this research, there is no individual identification data shown, and data were obtained from the clinical records.There was no need for informed consent. 
Information Collected In 204 patients with a high likelihood of COVID-19 infection, respiratory parameters like FiO2, respiratory rate, and O 2 saturation were recorded, and the RI was calculated at time of presentation to the emergency department and 24 h later.Other variables considered were the presence of comorbidities defined by presence/absence and tomographic compromise defined by the CORADS classification.The main outcomes were death and the need for IMV.Regarding the FiO2 during the pandemic, normal nasal cannulas and oxygen masks with and without reservoirs were available.Estimation of FiO2 was through the flow used for those devices.Regarding nasal cannula, for each liter of oxygen, there was a 4% increment in FiO2 until 40% or a maximum of 5 L of flow.Oxygen mask can provide 28% to 50% and non-rebreathing oxygen mask with reservoir can provide 60-90% depending on flow in each case.Respiratory rate was taken in one minute with the patient lying down.Oxygen saturation level was measured through the monitors available in our hospital. Analysis Plan Description of respiratory parameters was made presenting means and standard deviations of each parameter at time of presentation and 24 h later.Differences in respiratory parameters between those who survived and those who deceased or those who needed and did not need IMV were tested using t-test with correction for unequal standard deviation when necessary.ROC analysis was performed for each respiratory parameter and the RI difference was used to calculate the AUC, for both defined outcomes of death and the need for IMV.The difference between time of presentation (0 h) and 24 h respiratory RI was explored by subtracting 0 h minus 24 h values.Cut-offs for parameters were obtained by calculating the Youden index for each analyzed parameter.An ROC analysis was performed to generate a logistic regression model adjusting for age treated as continuous variable, sex, presence of comorbidities treated as binary variables (presence/absence), and radiographical disease severity as defined by CORADS score using scores 1 and 2 as lower severity, scores 3 and 4 middle severity, and 5 and 6 higher severities [19,20]. To analyze changes in 24 h RI difference, the RI scores were categorized as: >20 = normal respiratory function; 15-19.9= mild respiratory deterioration; 10-14.9= moderate respiratory deterioration; and <10 severe respiratory function deterioration.Logistic regression analysis was used to explore the association between the change in RI using the improving status (change from a worse to better category) as baseline category and compared with those who did not change category, or with those who deteriorated from one category to the next worse level, or with those who deteriorate more than 1 category.The logistic regression model was adjusted by age, sex, comorbidities, and severity as previously indicated.The results were also explored to investigate if the change in RI category was modified by sex, age, presence of comorbidity, and severity.The analysis was performed in STATA software version 16. 
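As a rough, non-authoritative illustration of the bookkeeping described in this section, the following Python sketch computes the ROX Index from pulse oximetry, FiO2, and respiratory rate, estimates FiO2 from nasal cannula flow as described above, takes the 0 h minus 24 h difference, and assigns the four severity categories used in this study. The thresholds come from the text; the example patient values are hypothetical.

```python
# Minimal sketch (not the study's code) of the ROX Index bookkeeping described
# above: RI = (SpO2 / FiO2) / respiratory rate, FiO2 estimated from nasal
# cannula flow (+4% per L/min over 0.21, up to 40% at 5 L/min), the 24 h
# difference taken as RI(0 h) - RI(24 h), and the four severity categories.
# Thresholds follow the text; the example patient values are hypothetical.

def estimate_fio2_nasal_cannula(flow_l_min: float) -> float:
    """Approximate FiO2 for a simple nasal cannula, as described in Methods."""
    flow = min(max(flow_l_min, 0.0), 5.0)            # flow capped at 5 L/min
    return round(min(0.21 + 0.04 * flow, 0.40), 2)   # FiO2 capped at 40%

def rox_index(spo2_percent: float, fio2: float, resp_rate: float) -> float:
    """ROX Index: (SpO2 [%] / FiO2) / respiratory rate [breaths/min]."""
    return (spo2_percent / fio2) / resp_rate

def ri_category(ri: float) -> str:
    """Severity categories used in this study (adjusted for Quito's altitude)."""
    if ri >= 20:
        return "normal respiratory function"
    if ri >= 15:
        return "mild respiratory deterioration"
    if ri >= 10:
        return "moderate respiratory deterioration"
    return "severe respiratory deterioration"

# Hypothetical patient: room air at presentation, 3 L/min nasal cannula at 24 h.
ri_0h = rox_index(spo2_percent=88, fio2=0.21, resp_rate=24)
ri_24h = rox_index(spo2_percent=90, fio2=estimate_fio2_nasal_cannula(3), resp_rate=28)
delta_ri = ri_0h - ri_24h                            # 0 h minus 24 h, as in the text

print(f"RI 0 h   = {ri_0h:.1f} ({ri_category(ri_0h)})")
print(f"RI 24 h  = {ri_24h:.1f} ({ri_category(ri_24h)})")
print(f"Delta RI = {delta_ri:.1f} (positive values mean the index fell over 24 h)")
```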
Characteristics of the Population

A total of 204 patients were admitted to the hospital with a COVID-19 diagnosis. The mean age was 57 years and 60% of the patients were male. Eighty-eight patients (43%) had comorbidities, the most frequent being hypertension and diabetes mellitus. Fever, cough, dyspnea, and malaise-myalgia were the most common presenting symptoms. Most patients (93%) had a CT scan highly suspicious of pulmonary infection according to the CORADS imaging classification. Approximately one-quarter of patients required IMV, and 56 patients (27%) died. There was no statistical difference in age or comorbidity type between the groups compared by death and IMV requirement. Male patients were more frequent among those who died and those on IMV. Symptoms were similar between the compared groups, except for dyspnea, which was more frequent in those requiring ventilatory support.

Of the 204 patients, 89.8% arrived at the hospital without supplemental O2 (FiO2 of 0.21); only 11.2% were on oxygen at different concentrations and came by ambulance. After 24 h from the time of presentation, oxygen demands were higher (FiO2 > 0.4) in those who died or needed IMV (Table 1).

Respiratory Associated Factors for Death and Mechanical Ventilation

The oxygen saturation levels were statistically higher at the time of presentation and 24 h later in those patients who survived than in those who did not, and in those who did not require IMV when compared with those who did. The FiO2 administered and the respiratory rate were statistically higher in deceased patients and in those requiring IMV, both at the time of presentation and after 24 h. RI values were significantly higher at presentation and 24 h later in patients who survived and in those who did not need IMV. Differences in RI values were negative in survivors and in those who did not require IMV. These differences were statistically higher, showing deterioration of the RI, in those who did not survive and in those who had IMV (Table 2).

Prediction Analysis of Respiratory Parameters and Difference between RI at Entry versus 24 h Later for Risk of Death and Mechanical Ventilation

The difference in RI at entry versus 24 h is a better predictor for death (AUC 0.92) than for mechanical ventilation (AUC 0.75) (Table 3) (Figures 1 and 2). Each one-unit decrease in the difference of RI correlated with an increased risk of death and IMV of 48% and 16%, respectively. This yielded an odds ratio for the risk of death of 1.48 (95% CI: 1.31-1.67) and for IMV of 1.16 (95% CI: 1.1-1.23), adjusted for age, sex, comorbidities, and severity of the disease.
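To make the adjusted odds ratios above more concrete, the following sketch shows how a per-unit odds ratio compounds over several units of adverse change in the 24 h RI difference and how the resulting odds translate into a probability. The per-unit odds ratio of 1.48 for death is the adjusted value reported above (1.16 for IMV is analogous); the baseline risk used in the example is purely illustrative and is not taken from the study.

```python
# Hedged illustration: compounding a per-unit odds ratio and converting odds to
# probability. The baseline risk is a made-up placeholder; the per-unit OR for
# death (1.48) is the adjusted value reported in the text.

def compounded_or(or_per_unit: float, units: float) -> float:
    """Odds ratio implied by `units` one-unit changes, assuming a log-linear model."""
    return or_per_unit ** units

def updated_probability(baseline_prob: float, odds_ratio: float) -> float:
    """Apply an odds ratio to a baseline probability via the odds scale."""
    baseline_odds = baseline_prob / (1 - baseline_prob)
    new_odds = baseline_odds * odds_ratio
    return new_odds / (1 + new_odds)

OR_DEATH_PER_UNIT = 1.48      # adjusted OR per one-unit change, from the text
baseline_death_prob = 0.10    # illustrative baseline risk, not study data

for units in (1, 3, 5):
    or_k = compounded_or(OR_DEATH_PER_UNIT, units)
    p = updated_probability(baseline_death_prob, or_k)
    print(f"{units} unit(s) of adverse change: OR = {or_k:.2f}, "
          f"risk rises from {baseline_death_prob:.0%} to {p:.0%}")
```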
Association between 24 h Changes in Respiratory ROX Index and Death

A greater proportion of patients were admitted with moderate to severe respiratory deterioration (62%), and 24 h later around 39% of patients remained in those categories. Similarly, a minority of patients were admitted with normal respiratory function, but this proportion increased 24 h later (Table 4). The greater the 24 h deterioration in the RI score, the higher the likelihood of death or need for IMV. Even in the absence of change in the first 24 h, there is a statistical increase in the likelihood of death and IMV compared with those whose RI scores improved in the first 24 h. The probability of death and IMV increases further when the deterioration of the RI is greater. This association was independent of age, sex, disease severity, and presence of comorbidities, and there was no effect modification of this association by these confounders (Table 5). For the categories in Table 5, improvement means a change from a worse to a better functional respiratory category (the baseline category). No change refers to no change in the functional respiratory category at 24 h compared to the initial category. Deteriorating 1 category means a change from normal to mild, mild to moderate, or moderate to severe respiratory deterioration in 24 h. Deteriorating 2 categories means a change from mild to severe, or from normal to moderate, respiratory deterioration in 24 h.

Discussion

The RI is a valuable tool for quick evaluation of the condition of COVID-19 patients at the time of presentation, and it has good prognostic value for predicting mortality (AUC 0.786) and the need for IMV (AUC 0.761). Following stabilization and initial treatment, the 24 h RI variation became an even better prognostic tool, with an AUC of 0.92 for mortality and 0.75 for IMV; this also shows that a 24 h improvement in the functional respiratory category is associated with survival and no need for IMV. Conversely, a 24 h deterioration in the RI, or no change in the RI, is associated with an increasing risk of death and eventual need for IMV support.

In a health disaster like the COVID-19 pandemic [21], conceptually, the healthcare provider's efforts should be directed to those with the better chance of survival [22], especially in low-resource settings or countries where infrastructure is poor and access to respiratory equipment and supplies like HFNC and non-invasive ventilation was limited. As far as we have searched the literature, there are some publications about the value of the RI in predicting mortality or mechanical ventilation in COVID-19 patients [23,24]. Vega et al. establish that the RI is a useful tool to guide intubation, especially in moderate respiratory categories, and Basoulis et al.
also show that a 12 h RI is useful in predicting mortality.The 24 h RI difference seems to be a good tool to differentiate between those who will survive and allows for directing resources and therapy in the right way, but maybe this is a long interval to improve or change therapeutic measures.It would be beneficial to calculate the RI difference in shorter intervals to seek a prognostic benefit for individuals.The results of other authors show different values of the RI as predictors of the same outcomes considered in this paper, for example, Vanni et al. found that RI < 5.83 on ED admission was 79.6% sensitive and 63.5% specific for predicting mortality/mechanical ventilation need; and Bartoletti et al. show that an RI of <3.85 predicted mortalities with a sensitivity of 76.9% and a specificity 65.8%, [25,26] a much lower predictive power than the difference in RI at 24 h.This research was conducted in Quito, Ecuador and 204 patients were enrolled during the critical period of the pandemic.We have identified some limitations with our study.First, this was a monocentric study, second, it was conducted in a public hospital, and therefore patient characteristics might not be similar to other healthcare facilities, so our results cannot be generalizable to all COVID-19 patients.Third, Quito is a high-altitude city (2850 m) and it represents a special environmental situation.The inhabitants of high altitudes usually develop hypocapnic hypoxia and have respiratory rates higher than the sea level, as West B. et al. mention [27], and this fact may have an impact on the RI score and its utilization in other regions.The equation to calculate the RI has three components: oxygen saturation, FiO2, and respiratory rate, the component that remains the same between high altitude and sea level is inspired oxygen fraction (21%) [28,29].Due to this, we introduced four groups of the RI based on the normal parameters for Quito (Table 4).This may have become a source of bias during the analysis.Gianstefany et al. described sea level values for an RI higher than those in Quito [16], for instance, the RI cut-off for ambulatory care in the Gianstefany research was 26, which is different from 20 in our study.Lastly, the FiO2 was estimated based on clinical practice guidelines [30,31] rather than direct measurement studies.In the pandemic, the exact measurement of the administered FiO2 was not feasible in our setting as we lacked the necessary devices to do it; this could also become a limitation in our research. We describe four categories to show different stages of respiratory compromise in patients with COVID-19 from normal to severely compromised respiratory function.If patients do not improve the RI score within 24 h after implementing all the initial treatments or if the score deteriorates in one or two categories, the risk of death and need for mechanical ventilation increases.The greater odds ratio values were explained by the small study population in each category.However, even in the small groups, the association between respiratory deterioration and death or IMV was consistent. Patients with RI at admission superior to 20 or those who improved their RI in 24 h were assigned to observation or discharged for ambulatory treatment. 
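For readers unfamiliar with the cut-off selection mentioned in the Methods, the sketch below shows how the Youden index (J = sensitivity + specificity - 1) ranks candidate cut-offs. The two operating points are those quoted above from Bartoletti et al. and Vanni et al., used here purely to illustrate the arithmetic; comparing J across cut-offs derived in different cohorts is not meaningful and is shown only as an example of the computation.

```python
# Hedged sketch: ranking candidate cut-offs by the Youden index
# (J = sensitivity + specificity - 1), the criterion named in the Methods.
# The operating points below are the published values quoted in the Discussion
# (Bartoletti et al.: RI < 3.85; Vanni et al.: RI < 5.83); they come from
# different cohorts, so the comparison is purely illustrative.
candidates = {
    3.85: (0.769, 0.658),
    5.83: (0.796, 0.635),
}

def youden_j(sensitivity: float, specificity: float) -> float:
    return sensitivity + specificity - 1

for cutoff, (sens, spec) in sorted(candidates.items()):
    print(f"cut-off {cutoff}: J = {youden_j(sens, spec):.3f}")

best_cutoff = max(candidates, key=lambda c: youden_j(*candidates[c]))
print(f"highest J among these illustrative points: cut-off {best_cutoff}")
```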
The in-hospital mortality was 27% (56/204), with an average length of hospital stay of 7.27 days. The average RI in this group deteriorated from 9.8 to 4.9, showing the rapid progression and severity of the disease. Twelve patients died at home after being discharged from the hospital. Ten of these patients had improved their 24 h RI difference and only two had deteriorated. Perhaps this group of patients died because of complications of the disease, but this analysis is beyond the scope of this report.

Conclusions

The use of the ROX Index at the time of admission for COVID-19 pneumonia patients is an easy and quick method to predict the severity and outcome of patients when used in emergency departments. Our study shows that its use yields the same or more accurate prediction of outcomes compared to other similar scales. Interestingly, the 24 h difference in RI has the greatest potential to predict the outcome of patients and help identify those who will benefit from intensive measures.

More research on this topic is needed to understand and implement the use of this tool, especially in low-resource settings where equipment is scarce and infrastructure is poor.

Table 1. Characteristics of the studied population by main outcomes.

Table 2. Associated factors for death and mechanical ventilation.

Table 3. Predictors of death and mechanical ventilation.

Table 4. Frequency of respiratory severity categories at entry and 24 h later.

Table 5. Association between 24 h ROX Index change and death or mechanical ventilation.
2024-05-23T15:19:45.424Z
2024-05-21T00:00:00.000
{ "year": 2024, "sha1": "4e9b15dbb282e38f9c45e64b6360aec5c5d4bf09", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0383/13/11/3025/pdf?version=1716287724", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "0a07108e666cce21cdd6cad4e661e028c90e8f19", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
236162742
pes2o/s2orc
v3-fos-license
Redox regulation by TXNRD3 during epididymal maturation underlies capacitation-associated mitochondrial activity and sperm motility in mice During epididymal transit, redox remodeling protects mammalian spermatozoa, preparing them for survival in the subsequent journey to fertilization. However, molecular mechanisms of redox regulation in sperm development and maturation remain largely elusive. In this study, we report that thioredoxin-glutathione reductase (TXNRD3), a thioredoxin reductase family member particularly abundant in elongating spermatids at the site of mitochondrial sheath formation, regulates redox homeostasis to support male fertility. Using Txnrd3−/− mice, our biochemical, ultrastructural, and live cell imaging analyses revealed impairments in sperm morphology and motility under conditions of TXNRD3 deficiency. We find that mitochondria develop more defined cristae during capacitation in wildtype sperm. Furthermore, we show that absence of TXNRD3 alters thiol redox status in both the head and tail during sperm maturation and capacitation, resulting in defective mitochondrial ultrastructure and activity under capacitating conditions. These findings provide insights into molecular mechanisms of redox homeostasis and bioenergetics during sperm maturation, capacitation, and fertilization. During epididymal transit, redox remodeling protects mammalian spermatozoa, preparing them for survival in the subsequent journey to fertilization. However, molecular mechanisms of redox regulation in sperm development and maturation remain largely elusive. In this study, we report that thioredoxin-glutathione reductase (TXNRD3), a thioredoxin reductase family member particularly abundant in elongating spermatids at the site of mitochondrial sheath formation, regulates redox homeostasis to support male fertility. Using Txnrd3 −/− mice, our biochemical, ultrastructural, and live cell imaging analyses revealed impairments in sperm morphology and motility under conditions of TXNRD3 deficiency. We find that mitochondria develop more defined cristae during capacitation in wildtype sperm. Furthermore, we show that absence of TXNRD3 alters thiol redox status in both the head and tail during sperm maturation and capacitation, resulting in defective mitochondrial ultrastructure and activity under capacitating conditions. These findings provide insights into molecular mechanisms of redox homeostasis and bioenergetics during sperm maturation, capacitation, and fertilization. Testicular sperm are functionally immature in that they lack the ability to naturally fertilize an egg. Epididymis transit is an indispensable step for mammalian sperm cells to fully develop the fertilizing ability. During epididymal descent, spermatozoa acquire the potential to develop progressive motility and capacitation (1,2). One major threat to sperm for the subsequent journey to fertilization is oxidative damage since sperm plasma membrane is rich in polyunsaturated fatty acids (3). Redox mechanisms prepare epididymal spermatozoa for their protection and survival during the fertilization journey (4). For example, oxidation of the intracellular milieu is involved in mammalian sperm maturation and capacitation (5,6). At the same time, oxidative damage correlates with lower motility of human sperm (7) and causes DNA fragmentation and protein crosslinking (8). However, it remains unclear how redox homeostasis is maintained during mammalian sperm capacitation. 
Thioredoxin and glutathione systems are two major redox systems that utilize the thiol redox biology to maintain cellular redox homeostasis and protect against oxidative stress. Thioredoxin reductases, a family of selenium-containing pyridine nucleotide-disulfide oxidoreductases, are key redox regulators of mammalian thioredoxin system. They are comprised of three enzymes: thioredoxin reductase 1 (TXNRD1), thioredoxin reductase 2 (TXNRD2) (9,10), and thioredoxinglutathione reductase (TXNRD3, also known as TGR) (11,12). TXNRD1 and TXNRD2 are essential proteins that support redox homeostasis in the cytosol and mitochondria, respectively (13)(14)(15)(16). TXNRD3 is predominantly expressed in testis (12) and particularly abundant in elongating spermatids at the site of mitochondrial sheath formation (17). Structurally, TXNRD3 contains an additional N-terminal glutaredoxin domain compared with the other two TXNRD isozymes (12). Thus, it has affinity to both thioredoxin and glutathione systems in vitro (12) and can reduce both. Disulfide bond isomerase activity of TXNRD3 was also previously reported in vitro (17), yet the physiological significance and underlying mechanisms of TXNRD3 in male germ cell development and sperm function have been largely unknown. Here, we report that targeted disruption of Txnrd3 in mice induces capacitation-associated impairment in sperm morphology and motility. We find that TXNRD3 functions during epididymal sperm maturation in regulating chromatin organization and capacitation-associated mitochondrial activity. Biochemical analyses demonstrate that TXNRD3 protein levels diminish during sperm maturation, impairing redox homeostasis of head and flagellar proteins. Using ultrastructural analyses and live cell imaging, we reveal that mitochondria in Txnrd3 −/− sperm undergo structural defects and lose control of its activity especially during sperm capacitation. These findings provide insights into sperm redox regulation during epididymal maturation and its effect on capacitationassociated energy metabolism. Txnrd3 −/− sperm cells display defects in morphology and motility under capacitating conditions In the accompanying study by Dou et al. (2022) (18), we showed that Txnrd3 −/− mice compromise fertility in vivo and sperm fertilizing ability in vitro. To understand why Txnrd3 −/− males exhibit sub-fertility in vivo and in vitro (18), we first examined sperm morphology and count from the cauda epididymis. The number of sperm produced by Txnrd3 −/− males was smaller than that from 2 to 3 months old heterozygous littermates (Fig. S1A). Among them, some Txnrd3 −/− sperm displayed bending within midpiece, specifically evident after 90 min incubation under capacitating conditions (Fig. 1, A and B), whereas sperm bent around annulus (i.e., hairpin) were negligible in both wildtype and Txnrd3 −/− mice. Scanning electron microscopy analysis did not reveal any gross abnormalities in the sperm midpiece (Fig. 1C). Prompted by the observation of capacitation-associated bending within midpiece of Txnrd3 −/− sperm, we next analyzed sperm motility. Flagellar waveform analysis revealed that Txnrd3 −/− sperm become stiff in the midpiece and beat abnormally after incubating under capacitating conditions (Figs. 1D and S1B and Video S1). Accordingly, CASA analysis found that overall motility of Txnrd3 −/− sperm was diminished (Fig. S1C). 
By contrast, hyperactivated motility was not significantly affected, likely because the Txnrd3 −/− sperm that are not bent still developed hyperactivated motility. To better understand the effect of TXNRD3 deficiency on freeswimming pattern, sperm cells were placed in 0.3% methylcellulose which mimics viscous environment in the female reproductive tract. Swimming trajectory traced by time-lapse video microscope demonstrated that Txnrd3 −/− sperm move a shorter distance at a given time after capacitation ( Fig. 1E and Video S2), suggesting that the bending within midpiece prevents them from swimming as efficiently as wildtype sperm. Absence of TXNRD3 affects the redox state of both head and tail of sperm TXNRD3 is enriched in the testis but not readily detected in the epididymis (17). Our analysis of the publicly available scRNA-seq dataset of mouse testis (19) found that Txrnd3 mRNA expresses gradually more from spermatogonia, spermatocytes, to early spermatids but the expression level is much reduced in late spermatids (Fig. S2). We hypothesized that diminishing TXNRD3 mRNA and protein levels during sperm development underlies the changes in the overall redox state of sperm proteins along the epididymal tract and subsequent capacitation of sperm. Thus, we examined the distribution and the extent of protein thiol modification in the absence of TXNRD3 by labeling epididymal sperm cells with Thio-lTracker that reacts with reduced thiols (free "-SH") irreversibly (Fig. 2, A and B). Consistent with the previously reported increasing disulfide bond formation in sperm during their transit along the male reproductive tract (20), wildtype sperm from corpus and cauda region displayed gradually diminishing fluorescence intensity, more noticeably in the head, whereas caput sperm exhibited strong fluorescence over the entire cell (Fig. 2, A and B). Using BODIPY-NEM, another thiol-reactive dye, we also found that, in the absence of TXNRD3, free thiol content is further decreased modestly, yet significantly, in cauda sperm (Fig. S3A). Loss of TXNRD3 did not affect the overall diminishing fluorescence patterns from caput to cauda. It is possible that other reductases function redundantly and compensate for the loss of TXNRD3. Indeed, both TXNRD1 and TXNRD2 proteins are equally detected in wildtype and Txnrd3 −/− sperm (Fig. S3B). In the absence of TXNRD3, however, free thiol content was significantly lower in the head of cauda sperm as well as in the midpiece of caput and cauda sperm than those of wildtype (Figs. 2, A and B and 3A, arrowheads). We next examined whether changes in TXNRD3 protein levels resulted in more oxidized sperm proteins on thiols in cauda than caput. As the oxidized proteins in cauda sperm might be more resistant to be extracted in SDS lysis buffer likely due to more extensive cross-linking involving disulfide bond, we employed urea to solubilize sperm proteins. Under this condition, we found no significant difference in TXNRD3 protein levels from caput, corpus, and cauda sperm (Fig. 2C). Yet, TXNRD3 in sperm was indeed less soluble in SDS lysis buffer during their descent through epididymis (Fig. 2, D and E). This suggests that TXNRD3 itself is likely a substrate of thioredoxin system and is oxidized during epididymal maturation, affecting its enzymatic activity. Sperm midpiece harbors compartmentalized mitochondria (21) (see also Fig. 1C). 
As Txnrd3 −/− sperm specifically exhibit the bending within the midpiece, next we tested whether the TXNRD3 loss-of-function affects the redox states of mitochondrial proteins more specifically. In the SDS extracted fraction, the protein level of glutathione peroxidase 4 (GPX4)-a mitochondrial structural protein and potential substrate of TXNRD3 (17,22)-was much lower in the cauda sperm ( Fig. S3, C and D) just as seen for TXNRD3 (Fig. 2, D and E). By contrast, urea readily extracted GPX4 even from cauda sperm (Fig. 2F, supernatant), indicating that GPX4 becomes gradually more oxidized during epididymal transit, therefore SDS-insoluble. Intriguingly, we observed a ureainsoluble GPX4 fraction in Txnrd3 −/− cauda sperm, but not in wildtype, when the pellet was further solubilized by guanidine following urea treatment (Fig. 2E, pellet). The altered thiol redox status of GPX4 might affect its function in reducing hydrogen peroxide and lipid peroxides against oxidative stress (23,24). Other nonstructural but membrane proteins localized in sperm mitochondria or midpiece such as ANT4 (ADP/ATP translocase 4), P2X2 (P2X purinoceptor 2), TOM20 (translocase of outer mitochondrial membrane 20), and the other two thioredoxin reductases-TXNRD1 and TXNRD2-were well solubilized in all detergents tested (Figs. 2F and S3B). Taken together, our data clearly showed that TXNRD3 is involved in thiol redox regulation of sperm proteins during epididymal transition. In the accompanying manuscript (18), we provide further evidence that TXNRD3 reduces a broad range of substrates during epididymal sperm maturation. TXNRD3 regulates sperm maturation and mitochondrial function Deregulation of capacitation-associated oxidation in chromatin organization in Txnrd3 −/− sperm It has been known that redox signaling is involved in sperm capacitation (5). We next examined how free thiol levels change during sperm capacitation. Under capacitating conditions, wildtype sperm isolated from cauda epididymis were less reactive with ThiolTracker, particularly in the head, whereas Txnrd3 −/− sperm was not reactive regardless (Fig. 3A, TXNRD3 regulates sperm maturation and mitochondrial function arrowheads). These results suggest that sperm proteins were more oxidized in the absence of TXNRD3 during sperm maturation, which remain relatively stable during capacitation. To test this idea, we examined protein oxidation levels by detecting carbonyl groups inserted into proteins. Incubating sperm under capacitating conditions dramatically increased sperm protein carbonylation in general, consistent with the idea of active oxidative reactions occur during capacitation. We observed that the protein oxidation level was further enhanced in Txnrd3 −/− sperm (Figs. 3B and S4A). The cellular oxidation during sperm capacitation might be associated with superoxide dismutase 1 decrease (Fig. S4B), consistent with the note that ROS level increases during sperm capacitation (25,26). The identity of proteins subject to capacitation-associated oxidation remains to be determined. As the free thiol level changes were most dramatic in the head, we next investigated the loss of TXNRD3 function on sperm chromatin organization. We examined the extent of DNA integrity by halo assay where radial dispersion of the DNA fragments from the nucleus was measured upon artificial acid denaturation; the bigger the halo size is, the less damaged DNA fragment is present (27)(28)(29). 
Intriguingly, Txnrd3 −/− sperm resist the denaturation better, developing smaller halos than those of wildtype sperm (Fig. 4, A and B). This result suggests that more damaged, double-stranded DNA fragments are present in the nucleus in the absence of TXNRD3, likely due to more DNA fragments crosslinked to proteins. The nuclei of sperm incubated under capacitating conditions, developed smaller halos in both wildtype and Txnrd3 −/− sperm (Fig. 4, A and B). We further evaluated the altered resistance of DNA to acid denaturation by acridine orange (AO) assay; AO produces different emission when bound to single versus double stranded DNA (30,31). We observed that double-stranded DNA was more stable and resistant to acid denaturation in both nucleus and mitochondria in capacitated Txnrd3 −/− sperm (Fig. S5). Taken together, the thiol oxidoreductase function is diminished in Txnrd3 −/− sperm compared to wildtype sperm. The data show that TXNRD3 supports redox regulation of epididymal sperm maturation, protecting mitochondrial structure and chromatin from oxidative damage during sperm capacitation. Lack of TXNRD3 dysregulates capacitation-induced remodeling of sperm mitochondrial ultrastructure To maintain motility and support capacitation after ejaculation, mammalian sperm utilize nutrient molecules in the seminal plasma and in the female reproductive tract environment to generate energy (32). As terminally differentiated and highly polarized cells, sperm uniquely compartmentalize metabolic pathways in the flagella: the glycolytic enzymes are specifically localized in the principal piece, whereas mitochondrial oxidative phosphorylation (OXPHOS) occurs in the midpiece (21). A classic view on mouse sperm bioenergetics has been that glycolysis is the main metabolic pathway to support energy production for sperm motility although both glycolytic and OXPHOS are functional (33)(34)(35). Recent studies reported that oxygen consumption is accelerated in sperm incubated under capacitating conditions (36,37), supporting increase of mitochondrial OXPHOS during capacitation. As mitochondria cristae organization-dimerization of the V-like shaped ATP synthase and its self-assembly into rows-determines respiratory efficiency (38)(39)(40)(41), we analyzed mitochondrial morphology by transmission electron microscopy. We found that mitochondria display more well-defined cristae after incubating under capacitating conditions in both wildtype TXNRD3 regulates sperm maturation and mitochondrial function and Txnrd3 −/− sperm (Fig. 5, A and B). In addition, we observed that capacitation induces electron-dense cores with few or no cristae in the mitochondrial matrix of capacitated sperm cells (Fig. 5, A and B). Notably, significantly a greater number of Txnrd3 −/− sperm displayed more mitochondrial condensation and collapsed cristae than wildtype sperm, which becomes more prominent in capacitated sperm. Thus, our results suggest that TXNRD3 deficiency compromises capacitation-associated mitochondrial dynamics and energetics, leading to energy imbalance, consistent with decreased sperm motility (Fig. S1). Txnrd3 −/− sperm lose control of mitochondrial membrane potential Mitochondrial membrane potential (ΔΨm) is an essential driving force for ATP synthesis and Ca 2+ uptake (42). To better understand the loss-of-function effect of TXNRD3 on the capacitation-associated mitochondrial ultrastructure and sperm motility, we performed functional imaging to probe ΔΨm and measured ATP content. 
ΔΨm maintained by pumping hydrogen ions is coupled to the transfer of electrons through the electron transport chain (43). We observed that antimycin A, a potent electron transport chain inhibitor, dissipated ΔΨm probed by MitoTracker DeepRed (Fig. 6A, wildtype), a fluorescent dye which accumulates in mitochondria ΔΨm-dependently (44)(45)(46). Thus, the extent to which mitochondrial membrane is polarized can be compared with this functional yet indirect approach. Under noncapacitated conditions, ΔΨm is not significantly different between wildtype and Txnrd3 −/− sperm (Figs. 6B and S6). Under capacitating conditions, however, we found two distinct populations among TXNRD3-deficient sperm in terms of ΔΨm (Figs. 6 and S6); one group did not show any change in the fluorescent intensity of the dye, i.e., an ΔΨm impervious to 10 μM antimycin A, suggesting an altered manner of regulating mitochondrial activity (red trace). By contrast, the other group of capacitated Txnrd3 −/− sperm exhibited ΔΨm at a similar level to that of capacitated wildtype sperm, which is more polarized than in noncapacitated cells. Intriguingly, ATP level is dramatically decreased in both wildtype and Txnrd3 −/− sperm after incubating under capacitating conditions (Fig. 6C, compare noncapacitated versus capacitated), which is in line with the direction of ΔΨm change during sperm capacitation. This suggests that ATP consumption is faster than synthesis to meet the energy demand for sperm capacitation (47) while both glycolysis and mitochondrial activity are accelerated (36,37). We observed a trend toward the decrease of ATP level in Txnrd3 −/− sperm but found it not statistically significant (Fig. 6C), presumably due to a net result of two mixed sperm populations. Treatment of antimycin A reduced ATP level in noncapacitated sperm by 34% (wildtype) versus 35% (Txnrd3 −/ − ) but had no effect on capacitated sperm (Fig. 6C), suggesting the potential adaptation in the contributions of mitochondria and glycolysis to ATP content during sperm capacitation. Taken together, these results suggest that the loss of TXNRD3 dysregulates the capacitation-associated mitochondrial activity in mouse sperm. Discussion Evidence that TXNRD3 supports sperm function and redox regulation in male fertility The findings in this study demonstrate that the subfertility of Txnrd3 −/− sperm likely result from the combinatorial effects of defective chromatin organization and capacitationinduced tail bending within the midpiece. The high degree of sperm DNA fragmentation was reported to correlate with low fertilization rates, embryo quality, and pregnancy after IVF and intracytoplasmic sperm injection (28,48). The bent TXNRD3 regulates sperm maturation and mitochondrial function tail would force the head backward, opposite to the swimming direction of the sperm. Additionally, TXNRD3 in sperm is gradually oxidized during epididymal transit, which is consistent with reduced free thiol group levels in cells along the epididymal tract. Therefore, the main timing of TXNRD3 function is during spermatogenesis in the testis and epididymal maturation rather than the subsequent fertilization process. Moderate changes in the free thiol levels and redox status of the Txnrd3 −/− sperm may be due to the increased level of thioredoxin 1 (18), which is a conserved thiol oxidoreductases and substrate of TXNRDs, and/or other thiol reductases such as TXNRD1 and TXNRD2 that could partly compensate for the loss of TXNRD3. 
Other thiol oxidoreductases specific to the testis and sperm, such as thioredoxin domain-containing 2 (TXNDC2) (49, 50), thioredoxin domain-containing 3 (TXNDC3) (50, 51), and thioredoxin domain-containing 8 (TXNDC8) (52), remain to be examined. The data are also consistent with a mode wherein TXNRD3 supports accelerated redox remodeling during sperm maturation, which may also occur, albeit at a lower rate, in the absence of this enzyme. Consistent with this notion, the accompanying manuscript shows that, overall, more oxidized proteins, based on their thiol status, were identified in epididymal Txnrd3 −/− sperm, including RNA-binding proteins and mitochondrial proteins involved in metabolism (18). Alternatively, the thiol redox status of certain proteins may be only marginally changed in the absence of TXNRD3, but the pattern of disulfide bond formation could be jumbled, sufficient to make Txnrd3 −/− sperm vulnerable to capacitation-associated oxidative damage.

Mitochondrial function and control of metabolism during capacitation

The mitochondrion is a key organelle still preserved in fully differentiated spermatozoa, unlike other cellular organelles. Previous knock-out studies of sperm-specific glyceraldehyde 3-phosphate dehydrogenase and phosphoglycerate kinase 2 identified glycolysis as the main source of ATP to sustain the motility of mouse spermatozoa (53, 54). Because OXPHOS yields more ATP per mol of glucose than glycolysis, the discordant contribution of the two metabolic pathways to providing the energy for mouse sperm motility has long been an unresolved puzzle. A comparative study revealed that sperm of closely related mouse species with higher oxygen consumption rates were able to produce higher amounts of ATP, achieving higher swimming velocities (55). Recent observations of capacitation-associated accelerated oxygen consumption in mouse sperm (36, 37) further support increased mitochondrial activity to meet the higher energy demand of sperm. Here we show, to our knowledge for the first time, that capacitation induces sperm mitochondria to form more defined cristae (Fig. 5). As ATPase dimerization and self-assembly drive cristae formation (38)(39)(40), this result further corroborates the idea that capacitation enhances the OXPHOS pathway to supply additional ATP.

Figure 6. Mitochondrial activity, indicated by membrane potential, is impaired in the absence of TXNRD3. A, representative dynamics of mitochondrial membrane potential (ΔΨm) indicated by MitoTracker. Mouse sperm were capacitated and then stained with 500 nM MitoTracker Deep Red, which accumulates ΔΨm-dependently. ΔΨm was dissipated by antimycin A, an inhibitor of the electron transport chain during oxidative phosphorylation. B, transition of the mitochondrial membrane potential. The changes in fluorescence intensity were calculated as ΔF/F0 (F0, the mean fluorescence intensity of the sperm midpiece before adding antimycin A (at 10 s); F, the fluorescence of the midpiece after adding antimycin A; ΔF = F − F0). Capacitated Txnrd3 −/− sperm show a heterogeneous response to antimycin A, suggesting disrupted cellular respiration. C, ATP level from wildtype and Txnrd3 −/− sperm before and after 90 min capacitation. 10 μM antimycin A was added 30 min before sample collection. Statistical analyses within the same genotype were performed using paired Student's t test. Mean ± SD. *p < 0.05.
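The ΔF/F0 normalization described in the Figure 6 legend can be sketched as follows; the synthetic fluorescence trace, the one-second sampling interval, and the 10 s pre-antimycin window are illustrative stand-ins rather than actual imaging data.

```python
# Minimal sketch of the dF/F0 calculation from the Figure 6 legend: F0 is the
# mean midpiece fluorescence before antimycin A is added (at 10 s), and
# dF/F0 = (F - F0) / F0 for every subsequent frame. Values are synthetic.
import numpy as np

frame_interval_s = 1.0                                 # assumed sampling interval
trace = np.array([100, 102, 99, 101, 100, 98, 100, 101, 99, 100,   # 0-9 s
                   95, 88, 80, 72, 65, 60, 57, 55, 54, 53], float) # after antimycin A
time_s = np.arange(len(trace)) * frame_interval_s

baseline = trace[time_s < 10.0]                        # frames before antimycin A
f0 = baseline.mean()
df_over_f0 = (trace - f0) / f0

for t, val in zip(time_s[time_s >= 10.0], df_over_f0[time_s >= 10.0]):
    print(f"t = {t:4.0f} s  dF/F0 = {val:+.2f}")
```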
At the same time, we made observations that are surprising but consistent with those of other groups: first, capacitating sperm cells display a more hyperpolarized ΔΨm than noncapacitated cells (Fig. 6B), which was also seen with a different dye (56). Second, capacitated sperm contain considerably less ATP (Fig. 6C), suggesting that capacitation favors ATP consumption over synthesis (47). These results are seemingly incompatible with the decline of mitochondrial membrane potential coupled to ATP synthesis by the F0-F1 ATP synthase (57), as well as with capacitation-associated accelerated oxygen consumption (36, 37). The dynamics and relative partitioning of glycolytic versus mitochondrial contributions to sperm bioenergetics, and the potential role of ΔΨm in regulating these processes, remain to be further investigated. Interestingly, we observed that a significant portion of TXNRD3-deficient sperm not only fail to control ΔΨm but also develop a more apparent electron-dense core in the mitochondrial matrix. As similar dense cores were previously observed in Pmca4−/− sperm, which might potentially experience Ca2+ overload (58), TXNRD3-dependent redox signaling might impact Ca2+ homeostasis during sperm capacitation. Further work is required to determine whether the TXNRD3-deficient sperm cells that lost control of ΔΨm are the same populations that carry the electron-dense core and/or deregulated Ca2+ homeostasis. Generation of a mouse model expressing a genetically encoded Ca2+ indicator specifically targeted to sperm mitochondria will help to directly address the functional relationship between redox regulation of mitochondrial dynamics and Ca2+ homeostasis in sperm metabolism during capacitation.

In summary, our study investigated TXNRD3 function and potential cellular mechanisms in sperm maturation and male fertility. The absence of TXNRD3 deregulates the redox control of sperm chromatin and mitochondrial structure proteins during capacitation, resulting in a distinctly bent midpiece and defective mitochondrial respiration; the loss of function prevents Txnrd3−/− sperm from properly positioning themselves to penetrate the egg and presumably also from efficiently producing ATP. Txnrd3−/− mice are fertile in a controlled laboratory environment without resource limitation or male competition. Yet our findings from in vitro experiments show that TXNRD3 and the thioredoxin system intimately contribute to sperm function and male reproduction. This new knowledge could help to define conditions for assisted reproductive technology in the clinic.

Mice

Txnrd3-null mice (18) were generated and maintained on a C57BL/6 background. Wildtype C57BL/6 mice were purchased from Charles River Laboratory. Mice were cared for according to the guidelines approved by the Institutional Animal Care and Use Committee of Yale University (#20079).

Single-cell RNA-seq analysis

The raw count matrices of the mouse testis single-cell RNA (scRNA)-seq datasets (GSE109033) (61) were downloaded from the Gene Expression Omnibus (GEO) database (https://www.ncbi.nlm.nih.gov/geo/). The downloaded raw count matrices were processed for quality control using the Seurat package (ver. 3.2.3) as previously described (59, 62). Briefly, cells with fewer than 200 expressed features, with more than 9000 (GSE109033) or 10,000 (GSE109037) expressed features, or with a mitochondrial transcript fraction above 20% (GSE109037) or 25% (GSE109033) were excluded, to select single cells with high-quality mRNA profiles. The data were normalized by the total expression, scaled, and log transformed.
Identification of 2000 highly variable features was followed by PCA to reduce the number of dimensions representing each cell. Fifteen statistically significant PCs were selected based on the JackStraw and elbow plots and provided as input for constructing a K-nearest-neighbors graph based on the Euclidean distance in PCA space. Cells were clustered by the Louvain algorithm with a resolution parameter of 0.1. Uniform Manifold Approximation and Projection (UMAP) was used to visualize and explore the cluster data. Marker genes that define each cluster were identified by comparing each cluster to all other clusters using MAST (63) as provided in the Seurat package. In order to correct batch effects among samples and experiments, we applied the Harmony package (ver. 1.0) (64) to the datasets. The Markov Affinity-based Graph Imputation of Cells algorithm (ver. 2.0.3) (65) was used to denoise the count matrix and impute the missing data. In the testis datasets from adult humans, we identified 23,896 high-quality single cells, which were clustered into seven major cell types, including spermatogonia, spermatocytes, early spermatids, late spermatids, peritubular myoid cells, endothelial cells, and macrophages. Similarly, we used 30,268 high-quality single cells from eight adult and three 6-day postpartum mouse testis tissue samples and subsequently defined 11 major cell populations.

Sperm immunocytochemistry and free thiol labeling assay

As described previously (59, 66), mouse sperm were washed twice in PBS, attached to glass coverslips, and fixed with 4% paraformaldehyde (PFA) in PBS at room temperature (RT) for 10 min. Fixed samples were permeabilized using 0.1% Triton X-100 in PBS at RT for 10 min, washed in PBS, and blocked with 10% goat serum in PBS at RT for 1 h. Cells were stained with antibody in PBS supplemented with 10% goat serum at 4 °C overnight. After washing in PBS, the samples were incubated with goat anti-rabbit Alexa 647 or Alexa 555plus (Invitrogen, 1:1000) in 10% goat serum in PBS at RT for 1 h. Hoechst dye was used to stain the nucleus to mark the sperm head. For BODIPY-N-ethylmaleimide labeling, reduced thiols within proteins were alkylated with BODIPY FL maleimide (final concentration of 10 nM, ThermoFisher) for 30 min in the dark after permeabilization. The sample was then quenched by the addition of 500 mM 2-mercaptoethanol for 30 min in the dark, followed by three washes in PBS. For ThiolTracker (ThermoFisher) labeling, fixed sperm were stained with 20 μM dye, followed by three washes in PBS. Stained sperm samples were mounted with ProLong Gold (Invitrogen) and cured for 24 h, followed by imaging with a Zeiss LSM710 Elyra P1 using a Plan-Apochromat 63×/1.40 or alpha Plan-Apochromat 100×/1.46 oil objective lens (Carl Zeiss).

Protein oxidation

Carbonyl groups inserted into proteins by oxidative reactions were evaluated with a protein carbonyl assay kit (Abcam, ab178020) (56, 69). Briefly, samples were homogenized and processed according to the manufacturer's protocol. The carbonyl groups in the solubilized protein samples were derivatized using DNPH (2,4-dinitrophenylhydrazine) or control solution for 15 min and then neutralized. The samples were then loaded onto SDS-PAGE gels, and DNP-conjugated proteins were detected by western blotting using a primary anti-DNP antibody and an HRP-conjugated secondary antibody.
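The single-cell analysis described above was carried out in R with Seurat, Harmony and MAGIC. For orientation only, an approximately equivalent quality-control and clustering pipeline can be sketched in Python with Scanpy; the thresholds mirror those quoted for GSE109033, while the input path and remaining parameter choices are illustrative assumptions rather than the authors' code:

import scanpy as sc

adata = sc.read_10x_mtx("GSE109033_counts/")  # raw count matrix (placeholder path)

# QC: drop cells with <200 or >9000 expressed features or >25% mitochondrial reads
adata.var["mt"] = adata.var_names.str.startswith("mt-")
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], inplace=True)
adata = adata[(adata.obs.n_genes_by_counts > 200)
              & (adata.obs.n_genes_by_counts < 9000)
              & (adata.obs.pct_counts_mt < 25)].copy()

# Normalize by total expression, log-transform, select 2000 variable genes, scale
sc.pp.normalize_total(adata)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.scale(adata)

# 15 PCs -> K-nearest-neighbors graph -> Louvain clustering -> UMAP embedding
sc.tl.pca(adata)
sc.pp.neighbors(adata, n_pcs=15)
sc.tl.louvain(adata, resolution=0.1)
sc.tl.umap(adata)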
Flagellar waveform analysis

To tether the sperm head for planar beating, noncapacitated or capacitated spermatozoa (2-3 × 10^5 cells) from adult male mice were transferred to a fibronectin-coated 37 °C chamber for the Delta T culture dish controller (Bioptechs) filled with HS or Hepes-buffered HTF medium (H-HTF) (68), respectively. Flagellar movements of the tethered sperm were recorded for 2 s at 200 fps using a pco.edge sCMOS camera mounted on an Axio Observer Z1 microscope (Carl Zeiss). All movies were taken at 37 °C within 10 min of transferring sperm to the imaging dish. ImageJ software (v1.53) (70) was used to measure the beating frequency of the sperm tail and to generate overlaid images tracing the waveform of the sperm flagella, as previously described (68).

Sperm motility analysis

Aliquots of sperm were placed in a slide chamber (CellVision, 20 μm depth), and motility was examined on the 37 °C stage of a Nikon E200 microscope under a 10× phase-contrast objective (CFI Plan Achro 10×/0.25 Ph1 BM, Nikon). Images were recorded (40 frames at 50 fps) using a CMOS video camera (Basler acA1300-200um, Basler AG) and analyzed by computer-assisted sperm analysis (CASA; Sperm Class Analyzer version 6.3, Microptic). Total and hyperactivated sperm motility were quantified simultaneously. Over 200 motile sperm were analyzed for each trial, and at least three biological replicates were performed for each genotype. To track the swimming trajectory, sperm motility was videotaped at 50 fps. The images were analyzed using ImageJ software (v1.53) (70) by assembling overlays of the flagellar traces, generated by hyperstacking binary images of 20 frames of the 2 s movies coded in a gray intensity scale.

Scanning electron microscopy

As previously described (59), sperm cells were attached to glass coverslips, fixed with 2.5% glutaraldehyde in 0.1 M sodium cacodylate buffer (pH 7.4) for 1 h at 4 °C, and post-fixed in 2% osmium tetroxide in 0.1 M cacodylate buffer (pH 7.4). The fixed samples were washed three times with 0.1 M cacodylate buffer and dehydrated through an ethanol series to 100%. The samples were dried using a 300 critical point dryer with liquid carbon dioxide as the transitional fluid. The coverslips with dried samples were glued to aluminum stubs and sputter coated with 5 nm platinum using a Cressington 208HR (Ted Pella) rotary sputter coater. Prepared samples were imaged with a Hitachi SU-70 scanning electron microscope (Hitachi High-Technologies).

Transmission electron microscopy

Collected epididymal sperm cells were washed, pelleted by centrifugation, and fixed in 2.5% glutaraldehyde and 2% PFA in 0.1 M cacodylate buffer (pH 7.4) for 1 h at RT. Fixed sperm pellets were rinsed with 0.1 M cacodylate buffer and spun down in 2% agar. The chilled blocks were trimmed, rinsed in 0.1 M cacodylate buffer, and incubated in 0.1% tannic acid in the same buffer for 1 h. After rinsing in the buffer, the samples were postfixed in 1% osmium tetroxide and 1.5% potassium ferrocyanide in 0.1 M cacodylate buffer for 1 h. The postfixed samples were rinsed in the cacodylate buffer and distilled water, followed by en bloc staining in 2% aqueous uranyl acetate for 1 h. Prepared samples were rinsed and dehydrated in an ethanol series to 100%. Dehydrated samples were infiltrated with Embed 812 epoxy resin (Electron Microscopy Sciences), placed in silicone molds, and baked for 24 h at 60 °C. The hardened blocks were sectioned at 60-nm thickness using a Leica UltraCut UC7.
The sections were collected on formvar/carbon-coated grids and contrast stained using 2% uranyl acetate and lead citrate. The grids were imaged using an FEI Tecnai Biotwin transmission electron microscope (FEI) at 80 kV. Images were taken using a MORADA CCD camera and iTEM software (Olympus). Sperm were classified based on the predominant proportion of defined cristae (defined cristae, >50%; less defined cristae with gaps, <50%). Sperm were classified as dense-cored when more than one mitochondrion with a dense core was observed per sperm.

Sperm chromatin dispersion test (halo assay)

The halo assay was modified according to a previous report (29). In brief, samples were diluted to a concentration of 0.5 to 1 × 10^7 cells/ml and then mixed with low-melting-point aqueous agarose to obtain a final concentration of 0.7% agarose. A 50 μl aliquot of the mixture was dispensed onto a glass slide precoated with 0.65% standard agarose and covered with a coverslip for solidification at 4 °C for 4 min. The coverslip was then removed carefully before the sample was immersed, in sequence, in freshly prepared acid denaturation solution (0.08 N HCl) for 7 min, neutralizing and lysing solution 1 (0.4 M Tris-HCl, 0.8 M DTT, 1% SDS, 50 mM EDTA, pH 7.5) for 10 min, and neutralizing and lysing solution 2 (0.4 M Tris-HCl, 2 M NaCl, 1% SDS, pH 7.5) for 5 min at RT. The slide was then transferred to Tris-borate EDTA solution (90 mM Tris-borate, 2 mM EDTA, pH 7.5) for 2 min and dehydrated in sequential 70%, 90%, and 100% EtOH (2 min each). After drying, the slide was stained with DAPI (2 μg/ml) and examined under the microscope as described above. The halo size (diameter) was classified as big (>17 μm), medium (12-17 μm), or small (<12 μm).

Acridine orange assay

Staining was performed as described previously (30, 31), with minor modifications. Briefly, sperm cells were smeared on a glass slide until dry, followed by washing in PBS (pH 7.2). Cells were fixed with 4% PFA for 15 min at RT, covered with methanol for 5 min, and washed with PBS for 5 min. The sample was incubated in RNase solution at 37 °C for 30 min. A 5-min wash in PBS (pH 7.2) was performed between each step. DNA was denatured with 0.1 M HCl for 30 to 45 s, followed by acridine orange (AO) staining working solution for 2 min. After gentle washing and drying, the slide was examined by fluorescence microscopy.

Mitochondrial membrane potential (ΔΨm) measurement

Epididymal sperm were attached to a Delta T culture chamber coated with poly-D-lysine (2 mg/ml), alone or together with Cell-Tak, for 30 min (71). Unattached sperm were removed by a gentle pipette wash. MitoTracker Deep Red (working concentration 500 nM, ThermoFisher) was loaded into sperm for 30 min, followed by a gentle rinse with HS or Hepes-buffered HTF medium (H-HTF). All movies were taken at 37 °C with the pco.edge sCMOS camera mounted on the Axio Observer Z1 microscope (Carl Zeiss). The data were analyzed with Zen (Blue).

ATP measurement

1 to 1.5 × 10^6 sperm were left in HS (noncapacitated) or incubated in HTF under capacitating conditions for 1 h, followed by treatment with 10 μM antimycin A or vehicle for 30 min. Sperm suspensions were shock-frozen in liquid nitrogen, and ATP was then extracted by boiling for 10 min (53, 72). After cooling on ice, the suspensions were centrifuged for 5 min at 20,000g, and the supernatants were collected.
ATP levels were measured in triplicate samples on 96-well plates with a luciferase-based ATP bioluminescence assay kit (ThermoFisher, A22066), according to the manufacturer's protocol, using a luminometer (Tecan Infinite M1000).

Data availability

All data are contained within the article and supporting information.

Supporting information: This article contains supporting information.

Funding: School of Medicine and NIH R01HD096745 to J.-J. C. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Conflict of interest: The authors declare that they have no conflicts of interest with the contents of this article.
Characterization of yield criteria for zinc coated steel sheets using nanoindentation with Knoop indenter

An indentation-based method to characterize the yield locus for steel sheets is developed and implemented. Knoop hardness indentation experiments have been performed on the surface as well as on the cross-sections of an uncoated steel sheet to obtain the corresponding yield loci in the deviatoric and plane-stress planes. Stress ratios following the indenter's geometry are used to plot the yield locus from indentation data. The stress ratios have been corrected for the anisotropy of the material by an optimization algorithm. Points are then plotted in the plane-stress plane using the corrected stress ratios, the strain increment vectors and the indentation hardness data. The parameters for the Hill's quadratic yield criterion are obtained from the indentation data based on a curve-fitted yield locus. The results obtained using nano-indentation have been compared with those obtained from standard characterization tests for steel sheet and shown to be in good agreement. The method is also applied to the yield locus characterization of zinc coatings on steel sheet for multi-scale modelling of friction in deep drawing.

Introduction

Plastic deformation (yielding) in metals results from shear stress, which causes the formation and movement of dislocations at the slip systems in the crystals. The deformation modes in most crystals are directional, resulting in anisotropic yielding of the metal on loading. In sheet metal forming processes like deep drawing, metallic sheets are typically cold rolled beforehand, which induces a deformation texture in the sheet [1]. The textures, i.e. preferred orientations of the crystals along the direction of applied stress, are oriented along the rolling, transverse and normal directions. Hence most sheet metals exhibit anisotropic yielding, which is described by a yield function and represented by the corresponding yield surface in a three-dimensional principal stress space. Of the available criteria, Hill's quadratic yield criterion has been most commonly used to describe the anisotropic yielding of most metals [2]. Further, the von Mises and Tresca yield criteria [3,4] for isotropic materials have been generalized into a non-quadratic model defined in the principal stress space by Hosford in Ref. [5]. Likewise, an anisotropic extension of the Hosford model has been proposed by Hill in Ref. [6]. Yield criteria have typically been plotted as yield loci in the octahedral (π) plane with the deviatoric stresses along the axes. Since the out-of-plane stresses are neglected in sheet metal forming processes, the yield loci can also be plotted in the plane-stress plane. Anisotropic yield criteria specific to sheet metals have also been developed in the principal stress space in Refs. [7-9]. Among them, the Vegter yield criterion [7,8] has been used to develop and characterize yield loci for sheet metals in the plane-stress plane. The measurement of anisotropic yield parameters (Lankford coefficients: R values) is typically done by uniaxial testing (tensile loading) of sheet material at varying orientations relative to the rolling direction. For planar isotropic materials the R values are obtained by bulk loading methods such as a pure shear, uniaxial tensile, plane-strain tensile or equi-biaxial tensile test.
These experiments are combined to measure stress points in the plane-stress plane, between which Bezier curves are used to describe the yield locus, independent of R values, in the Vegter yield criterion [7,8,10]. Virtual field methods (VFM) have also been used in combination with digital image correlation of strain fields to obtain parameters for anisotropic yield criteria [11]. However, VFM has been applied successfully for characterizing only certain types of yield criteria, mostly for uncoated sheet metals [12]. Indentation-based characterization techniques have also been modelled and designed to estimate anisotropic plastic properties [13-15], and the stress ratio has been derived for planar isotropy [14,15]. FE models involving indentation near a free edge, a free corner, or the interface of bonded samples, as well as linear and circular scratch tests, have also been used to characterize anisotropic yield parameters [13]. However, a rigorous development of a measurement and characterization method based on such indentation techniques is unavailable. Among the pyramidal indenters commonly used for hardness measurements [16], the asymmetric Knoop indenter [17], with a rhombic base whose diagonal lengths are in the ratio 7:1, has been used to measure the anisotropy in plastic deformation of metals. The dependence of the measured hardness on the orientation of the long diagonal of the Knoop indenter relative to the crystal planes has been observed and explained using the resolved shear stress of the slip systems for hexagonal single crystals of WC, zinc and zircalloy-2 in Refs. [18-20]. However, for larger indentations over multiple grains, a flow surface theory relating the deviatoric shear stress to the indentation hardness and the geometry of the Knoop indenter was first proposed by Wheeler and Ireland [21]. By aligning the diagonals of the Knoop indenter along the principal axes of stress (the axes of anisotropy), six indentations were performed on zircalloy-2 specimens. The Knoop hardness number (KHN) specific to each indentation was plotted in the octahedral (deviatoric stress) plane by taking the ratio of the corresponding deviatoric stresses as equal to the ratio of the diagonal lengths of the Knoop indenter, i.e. 7:1. The yield locus was plotted using the points on the plane, and the strain ratios were compared with those obtained from bulk tensile tests along the anisotropic axes, with good agreement. This technique of characterizing yield loci from KHN data of anisotropic metals was modified, by relating the hardness number to plane-stress yield loci, and implemented in Ref. [22] for two titanium alloys. By relating the ratio of plastic strain underneath the indenter to the ratio of its diagonal lengths, and using the constancy of volume of the deformed substrate and the Lévy-Mises equations, the strain ratios could be related to the stress ratios in the plane-stress plane. The yield loci were plotted by equating the KHN with the equivalent stress of the Hosford yield criterion [22] and compared with the yield loci obtained by tensile tests [23] at various strains, with the best agreement at 0.01 strain. The KHN-based yield loci of highly anisotropic single magnesium crystals, polycrystalline magnesium sheets and magnesium alloys in the plane-stress plane were also compared with conventional yield loci for small strains in Ref. [24] but did not show good agreement.
Wonsiewicz and Wilkening [24] also observed that the KHN-based yield loci were insensitive to the difference between compression and tension, and that the KHN yield locus bulged excessively into the quadrants of the plane-stress plane for stress ratios computed using the techniques of Ref. [22]. Hence, the R value was included in the calculation of the stress ratios from the strain ratios in Ref. [24]. Using these initial methods to plot a yield locus from KHN [21,22,24], the yield loci of pure polycrystalline titanium [25], titanium alloys [26] and zircalloys [27,28] have been determined. Although plotting yield loci from indentation hardness is convenient for bulk polycrystalline metals and alloys, as discussed above, micro-hardness indentation of a coating along the anisotropic axes (surface and cross-section) is challenging. Hence, attempts to utilize depth-sensing nano-indentation techniques with a Knoop indenter were made in Refs. [29,30]. The hardness of the indent is measured from the maximum penetration depth in the load-depth curve or from the long diagonal length. However, compared to the standard Berkovich tip, the elastic recovery along the shorter diagonal of the residual impression left by the Knoop indenter must be accounted for. In order to express the elastic modulus, the ratio of the short and long diagonals of the residual impression of the Knoop indenter is used [30]. Knoop indentation has also been analysed by modelling the indentation response of substrates with elastic and elastoplastic material behaviour, without [31] and with strain-hardening effects [32]. Numerical simulations of depth-sensing indentation tests with a Knoop indenter on substrates with various hardening and material properties have also been performed recently using 3D finite element models to measure the area function, hardness and elastic modulus [33,34]. Knoop indentations have also been performed on ductile metals and brittle ceramics, and the results have been compared with other pyramidal indenters (Berkovich and Vickers) [35,36]. The slip anisotropy has been determined from the indentation anisotropy by Knoop nano-indentation of SiC-6H single crystals [37]. The available research on measuring yield criteria for anisotropic materials has mostly focussed on bulk metals and alloys, not on surfaces and coatings. Thus far, experimental techniques to characterize the parameters of yield criteria or the yield locus for coated systems have not been investigated. Moreover, zinc coatings applied on steel sheets used in deep drawing have a high degree of anisotropy [38,39] due to the orientation of the zinc grains during the prior galvanization and temper rolling processes. However, the yield criterion for the zinc layer in galvanized steel sheets is unavailable in the literature. Among the available characterization techniques for coatings, depth-sensing nano-indentation has received the most attention [40]. Nano-indentation of zinc coatings has been done to quantify the elastic anisotropy by measuring the elastic modulus and hardness for various grain orientations in Ref. [41]. Furthermore, yield loci have been plotted for various anisotropic materials using Knoop indentation [42] and have shown fair agreement with conventional yield loci. So far, the Knoop indenter has not been used in nano-indentation to obtain the yield locus of zinc coatings on steel sheet.
Given the current advances in the technology, modelling and analysis of nano-indentation using Knoop indenters, Knoop indentation of both the surface and the cross-section of the zinc coating on a steel substrate has been performed in the current research. A methodology to determine the parameters of the Hill '48 yield criterion [2] from the Knoop hardness number is designed. The current method accounts for the anisotropic behaviour of the coating by measuring the stress ratios induced by the asymmetric Knoop indenter. The yield parameters are compared with those obtained by standard tests for a cold-rolled DC04 steel sheet, thereby validating the methodology. The same method is then implemented for measuring the yield parameters of the zinc coating on temper-rolled, galvanized steel sheets, as modelled by the Hill '48 yield criterion.

Calculation of yield parameters from KHN

A systematic procedure to derive the yield criterion for the zinc coating from indentation hardness data is laid out in this section. Hill's quadratic yield function [2] is chosen for its ability to be expressed in matrix-vector product form and hence its ease of implementation in numerical codes. Hill's yield parameters have been related to the Lankford coefficients. Accounting for the anisotropy, the stress ratios in the plane-stress condition have been expressed in terms of the Lankford coefficients and the strain ratio corresponding to each of the six Knoop indentations, as explained in Ref. [21]. The key assumptions, taken from the literature and used in the method described below to obtain the yield parameters from Knoop indentation hardness, are as follows:

1. The Knoop hardness number KHN (in kgf/mm²) is approximated to be equal to the equivalent flow stress, i.e. KHN is taken proportional to the shearing stress on the octahedral plane and to the (uniaxial) yield strength in plane stress [22,43].
2. The ratio of the deviatoric strain along the long and short diagonals of the Knoop indenter is assumed to be 1/7 [21].
3. The loading (strain) path of the Knoop indentation, which brings the KHN-based stress points onto the yield locus, is assumed to be linear [21,22].
4. The contact between the Knoop indenter and the experimental specimen is assumed to be frictionless and adhesionless.

Hill's yield criterion and flow rule

The quadratic Hill yield function f(σ_ij), also known as the Hill 48 yield criterion [2], has been used in the current work to quantify the anisotropy in sheet metals; see equation (1.1):

2f(σ_ij) = F(σ22 − σ33)² + G(σ33 − σ11)² + H(σ11 − σ22)² + 2Lσ23² + 2Mσ31² + 2Nσ12² = 1  (1.1)

The Hill 48 yield criterion assumes no difference between the tensile and compressive yield stresses in a particular stress direction. The yield criterion depends on the deviatoric stresses and is pressure independent. Here σ_ij are the stress components, with i, j = 1, 2, 3 denoting the anisotropic axes, and F, G, H, L, M and N are experimentally determined constants. For a rolled sheet metal, 1 is the rolling direction RD, 2 is the transverse direction TD and 3 is the normal direction ND. Typically, the constants F, G and H are determined from uniaxial yield stresses with respect to the axes of anisotropy, with the principal stress axes (k = 1, 2, 3) aligned with the directions of anisotropy. Further assuming associated flow for plasticity in metals, the flow rule is expressed in equation (1.4), where dλ is the rate of the plastic multiplier:

dε^p_ij = dλ ∂f/∂σ_ij  (1.4)
The flow rule expresses the coincidence between the plastic potential and the yield surface and gives the plastic deformation rate ε^p as orthogonal to the yield surface.

Relationship between yield parameters and Lankford coefficients

Typically, for thin rolled sheets, a plane-stress condition is assumed where σ3 = 0. The yield criterion in the plane-stress condition, i.e. for a planar anisotropic material, is given in terms of the principal stresses, the uniaxial yield stress in the rolling direction σy1 and the Lankford coefficients in equation (2.1). The Lankford coefficients, or plastic strain ratios R0, R90 and R45, are the ratios of in-plane plastic strain to out-of-plane (through-thickness) plastic strain due to loading under uniaxial stress σ1, σ2 and σ12 (at angles of 0°, 90° and 45° relative to the rolling direction), respectively, as defined in equation (2.2). The relationship between the Lankford coefficients and the Hill parameters in equation (2.2) has been derived from the flow rule in equation (1.4). The Hill yield criterion is written for the plane-stress condition in the anisotropic axes, taking σ13, σ23, σ33 = 0, in equation (2.3). The Hill yield criterion in plane stress is written in terms of the Lankford coefficients in equation (2.4) [44]. By comparing the individual terms in equations (2.3) and (2.4), the relationships between the Hill yield parameters and the Lankford coefficients are obtained in equation (2.5). For planar isotropy, the plastic strain ratio R0 is independent of the angle and R90 = R45 = 1. By substituting the values of the yield parameters F, G and H into equation (1.4), the equivalent flow stress is expressed in the principal stresses as given in equation (2.6).

Derivation of stress ratios using anisotropy constants

The ratio of the plastic strain underneath the Knoop indenter along the long and short diagonals is assumed proportional to the inverse of the ratio of the diagonal lengths, i.e. 1:7, as shown in Fig. 1a. This follows because the plastic deformation (displacement of the substrate) along the diagonals of a penetrating Knoop indenter follows the projected lengths of the edges of the indenter faces in contact with the substrate. Following the work done in Ref. [21], the diagonals of the Knoop indenter are oriented along the principal coordinate directions, which are also aligned with the axes of anisotropy (RD, TD and ND). In the analysis, the axes of anisotropy of the measured specimen therefore coincide with the principal stress axes. The strain ratios in the σ1-σ2 plane are given in equation (3.1) as a vector B using the diagonal ratio of 7. By using the associated flow rule, the expression for the strain ratio vector B has been expanded in equation (3.2), after which the stress ratio vector, i.e. the ratio of principal stresses in the plane-stress plane, α = σ2/σ1, for each indentation (i_a to i_f) has been expressed in terms of the Lankford coefficients and the strain ratio vector B. The expression for α in equation (3.3) accounts for the anisotropy of the specimen whose yield criterion parameters are being characterized.

Plotting points on the σ1-σ2 plane from KHN data

The Knoop hardness number is taken proportional to the deviatoric shearing stress on the octahedral plane. Therefore, the KHN is assumed to be equivalent to the equivalent flow stress (uniaxial yield stress) in plane stress, as explained in Ref. [22] and in agreement with [43].
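To make these relations concrete before they are applied to the KHN data, the plane-stress Hill '48 locus written in terms of the Lankford coefficients (the standard form consistent with equations (2.1)-(2.6)) can be evaluated in a few lines of Python. This is a minimal sketch with illustrative names, not code from the paper:

import numpy as np

def hill48_plane_stress(s1, s2, R0, R90, sy1):
    # Hill '48 yield function in plane stress with principal axes along RD and TD;
    # returns zero on the yield locus, negative inside, positive outside.
    f = (s1**2
         - (2.0 * R0 / (1.0 + R0)) * s1 * s2
         + (R0 * (1.0 + R90)) / (R90 * (1.0 + R0)) * s2**2)
    return f - sy1**2

# Isotropy check: R0 = R90 = 1 recovers the von Mises ellipse s1^2 - s1*s2 + s2^2 = sy^2
print(hill48_plane_stress(1.0, 1.0, 1.0, 1.0, 1.0))  # 0.0: equibiaxial point lies on the locus

Setting R0 = R90 = 1 recovers the isotropic von Mises ellipse, which also corresponds to the initial values used later in the fitting procedure.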
For the Hill 48 yield criterion, the equivalent stress in the plane-stress condition is given as the uniaxial yield stress along the (rolling direction) principal axis 1, σy1. The Knoop hardness number (KHN) is measured using the dimensions of the indentation mark, as shown in Fig. 1a. The KHN is given as the ratio of the applied load P to the indentation area Ac. The area of the indent is expressed in terms of the major diagonal length Dl of the indentation mark and a constant factor, yielding the KHN in GPa units [29]. The Young's modulus E of the specimen is calculated from the elastic recovery of the indented material during unloading of the indenter [45]. However, the elastic recovery along the shorter diagonal of the Knoop indenter is higher than that along the longer diagonal [30]. The difference between the recovered (final) contact length ratio and the ratio at maximum contact (7.114, the ideal diagonal ratio of the Knoop geometry) accounts for this recovery [30].

Optimizing the yield parameters based on KHN-data points

The yield parameters R0, R90 and σy obtained for the yield locus fitting the KHN-based points for all six orientations plotted in the σ1-σ2 plane (equation (4.2)) are optimized by minimizing the distance between the yield locus and the KHN-based points in the σ1-σ2 plane. The distance between the KHN data points (σ1, σ2) and the yield locus is measured along a straight line using either of the two methods described below. In both methods, a linear strain path is assumed for the Knoop indentation to find the intersection of the KHN data points with the yield locus, as shown in Fig. 2. The distance between the points plotted using equation (4.2) from the KHN data (σ1, σ2) and the yield locus, plotted from the R0 and R90 values obtained by solving the equation of the ellipse in the σ1-σ2 plane for the plotted KHN data points, is calculated by two methods. In the first method, the shortest distance between the points and the yield locus is calculated by taking the perpendicular line from the points to its intersection with the yield locus. The slope of the line perpendicular to the ellipse is given as mn, and the slope of the ellipse (yield locus) at the point of intersection is given as me, as shown in Fig. 2. In the second method, the distance between the KHN points and the yield locus is minimized along the line through the origin and the KHN points, intersecting the yield locus. The slope of the line passing through the origin of the plane-stress plane and the points measured from the KHN data is given as ms, as shown in Fig. 2. The point of intersection (σ1, σ2) of the normal to the yield locus passing through the KHN data point is given by solving equations (6.1) and (6.2); the point of intersection for the line through the origin is found analogously. The initial values of R0 and R90 are varied across a range of values until the distance between the points and the yield locus is minimized. To further analyse the data, an algorithm to plot the yield locus from the KHN data and to calculate R0, R90 and σy has been developed, as shown in Fig. 3. Initially, points are plotted in the deviatoric plane from the KHN data. However, the yield locus obtained in the deviatoric plane, or for that matter in the plane-stress plane under an isotropic assumption, is not accurate for an anisotropic material. Also, sheet metal forming analyses have typically used plane-stress assumptions in expressing anisotropic material behaviour. Hence, an algorithm has been developed to optimize the values of R0 and R90 and to use the optimized values to plot the yield locus in the plane-stress plane.
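The second distance measure has a convenient closed form: a ray from the origin through a KHN point (σ1, σ2) meets the ellipse where the quadratic form of the locus equals σy², so the whole iterative scheme can be condensed into a least-squares fit. The following SciPy sketch illustrates this with placeholder data and illustrative names; it is a compact restatement of the idea, not the algorithm of Fig. 3:

import numpy as np
from scipy.optimize import least_squares

def radial_residuals(params, pts):
    # Distance from each KHN point to the Hill '48 ellipse, measured along the
    # line of slope m_s through the origin (the second method above).
    R0, R90, sy = params
    s1, s2 = pts[:, 0], pts[:, 1]
    Q = (s1**2 - (2.0 * R0 / (1.0 + R0)) * s1 * s2
         + (R0 * (1.0 + R90)) / (R90 * (1.0 + R0)) * s2**2)
    scale = sy / np.sqrt(Q)          # the ray hits the locus at scale * (s1, s2)
    return (1.0 - scale) * np.hypot(s1, s2)

# Six (sigma1, sigma2) points plotted from KHN data (placeholder values, MPa)
pts = np.array([[420., 60.], [60., 420.], [430., -55.],
                [-60., 430.], [300., 300.], [380., 0.]])
fit = least_squares(radial_residuals, x0=[1.0, 1.0, 400.0], args=(pts,),
                    bounds=([0.1, 0.1, 1.0], [5.0, 5.0, 2000.0]))
R0_opt, R90_opt, sy_opt = fit.x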
The algorithm minimizes the distance between the yield locus and the points plotted in the plane-stress plane from the KHN data. The yield locus is then plotted in the plane-stress plane using the optimized values of R0, R90 and the value of σy.

Experimental procedure

Anton Paar's NHT 3 nano-indentation set-up, together with the Knoop indenter, is used to perform nano-indentation of both zinc coated and uncoated steel sheets. For higher loads, Leco's LM100 micro-hardness test set-up was used with a Knoop indenter. The geometry of the Knoop indenter is explained in the current section. Metallographic preparation of the steel sheets was also done prior to the indentation, as explained in this section. The polished sheets are also used for EBSD (electron backscatter diffraction) analysis in SEM (scanning electron microscopy) to study the grain size and grain orientation in the sheets. The roughness of the specimen surface is measured using a confocal microscope after polishing.

Preparation of specimens

Knoop indentation based characterization has been done on low carbon DC04 steel sheets and on the zinc coating on steel sheets. Prior to characterization, the sheets are polished, which serves a two-fold purpose. First, polishing minimizes the effect of friction on the maximum resolved deviatoric stress on the surface of the Knoop indenter; the aim is to equate the Knoop hardness to the deviatoric stress resulting in plastic flow of the sheet due to indentation. Second, by comparing the size of the indent with the grain size, the indentation loads are adjusted such that the indentation measurements cover multiple grains. Both the DC04 steel sheets and the zinc coated steel sheets have been obtained from cold rolling mills with a marked rolling direction. They have been cut into rectangular sheets of different sizes for polishing of the surface and the cross-section. Rectangular sheets of length 10 mm and width 15 mm, with the rolling direction along the length, have been laser cut from the rolled sheets and used for polishing the surface. The cross-sections of rectangular sheets of length 10 mm cut along the transverse direction, of length 15 mm cut along the rolling direction, and of width 2.5 mm have been polished as well. The surfaces of the 10 × 15 mm sheets and the cross-sections of the 10 × 2.5 mm and 15 × 2.5 mm sheets are hot mounted in a bakelite disc of 25 mm diameter for the metallographic preparation. An automatic polishing machine was used to polish the mounted bakelite discs. The following polishing steps are used for the surface and cross-section of the DC04 steel sheets. Grinding of the steel sheets is done initially using 220 grade silicon carbide paper at 25 N load and 300 rpm for 3 min, with water as lubricant. Further grinding was done using a diamond suspension of 9 μm particle size at 40 N load and 150 rpm for 5 min. The grinding steps removed unevenness on the surface of the steel sheet. Further polishing was done in three steps. Diamond suspensions with 3 μm and 1 μm particle sizes were used at 20 N load and 150 rpm for 4 min. For the final etching step, a silica suspension was used; the polished surface is shown in Fig. 4a. Polishing the zinc coating using similar steps is challenging due to the softness of the coating, the shrinkage gap between the coating and the bakelite resin mount, and the reaction of water with the coating resulting in discoloration.
Hence, water and water-based lubricants are used only in the initial grinding of the zinc coating cross-section, with 320 grade sandpaper at 300 rpm and a 9 μm particle size diamond suspension at 150 rpm, both at 30 N load for 4 min. To avoid the reaction with water, an alcohol-based lubricant with diamond slurries of 3 μm and 1 μm is applied at 25 N and 20 N, respectively, at 150 rpm for 4-6 min. The final etching of the zinc coating was done using de-agglomerated gamma alumina powder of 0.05 μm mixed with ethanol denatured with isopropyl alcohol, at 15 N and 150 rpm for 2 min. In order to polish the surface of the zinc coating on the steel sheet without removing the soft zinc coating, the grinding steps are avoided and only polishing and etching steps, similar to those for the zinc coating cross-section, are followed. The polished surface and cross-section of the zinc coating are shown in Fig. 4b and c, respectively. The size and thickness of the zinc grains can be estimated from Fig. 4b and c to be around 100-200 μm and 10-20 μm, respectively. Slip deformation marks due to the rolling process can be seen on the zinc grains in Fig. 4b.

Indentation test set-up

Anton Paar's NHT 3 nano-indentation set-up has been used to perform indentation based characterization of the DC04 and zinc coated specimens, as shown in Fig. 5a. Nano-indentation is a depth sensing indentation technique in which the applied load and the penetration depth of the indenter into the specimen are recorded and used to determine the mechanical properties of the test specimen [46]. The force during indentation is applied by a piezo-electric actuator with feedback control. In the current work, the nano-indentation tester can apply a load up to a maximum force of 500 mN at a resolution of 0.02 μN, and a maximum penetration depth of 200 μm at a resolution of 0.01 nm. The nano-indentation tester used has a noise floor of ±0.5 μN for load controlled indentation, which indicates the maximum resolution at which noise is precisely measured [47]. Schematics of the nano-indentation test set-up and the Knoop indenter tip are shown in Fig. 5a and b. In a depth sensing nano-indentation technique [45], the hardness and the elastic modulus of the specimen are measured using the loading and unloading curves, respectively, as shown in Fig. 6b. During loading, the load is increased to the set load Pm over a duration of 30 s. At the maximum load Pm, the load is kept constant for a dwell time of 10 s to avoid creep effects. Unloading is done at a rate similar to loading, over a duration of 30 s. The loading and unloading sequence and the corresponding penetration depth are plotted in Fig. 6a. The hardness of the specimen Hi is measured from the ratio of the applied load to the indentation area Ac corresponding to the maximum penetration depth hm, as given in equation (7.1) for the Knoop indenter. Unloading of the indenter is followed by elastic recovery of the substrate. The stiffness of the substrate system is given by the slope of the unloading curve in Fig. 6b. The reduced elastic modulus of the indenter-substrate system Er is calculated by equation (7.2). The elastic modulus of the substrate Es is calculated from equation (7.3), given the elastic constants of the indenter.

Calibration of indenter shape function

A schematic of an indent on the cross-section of a coating is given in Fig. 7a. The applied load is maintained such that the size of the indent is less than the coating thickness (20 μm).
However, prior to the indentation experiments, the tip of the Knoop indenter must be calibrated for its shape by a shape function. The shape function of the indenter gives the projected area of the indentation at the contact depth hc and is approximated by fitting a polynomial function to the experimental calibration data. The shape function takes into account the curvature of the indenter tip in the measurement of the projected contact area. By indenting a calibrated fused silica specimen and curve-fitting the experimental data as shown in Fig. 7b, the shape function of the Knoop indenter in nano-indentation was obtained (equation (8)). For very large loads in micro-hardness measurement methods, where the elastic recovery is negligible compared to the indent size, the hardness can be measured from the final indent size after indentation. However, the KHN is measured with the depth sensing method at loads of 10-100 mN using equation (7.1).

Results and discussion

The grain sizes and orientations of the DC04 steel sheet and the zinc coated steel sheet have been studied using SEM with EBSD analysis. Based on the grain size, the loads and the distribution of the Knoop indentations have been varied such that the anisotropy at the crystal scale is minimized. The grain orientations are also helpful in understanding the slip systems and the anisotropy in the grains with respect to the rolling process. The Knoop hardness number is measured for all six orientations for both the DC04 steel sheet and the zinc coating on the steel sheet. The six orientations, namely i_a, i_b, i_c, i_d, i_e and i_f, correspond to the orientations of the longer diagonal Dl and the shorter diagonal ds along the anisotropy axes: rolling direction RD, transverse direction TD and normal direction ND. Hence a given orientation can, for instance, be written as ND_rd, where the long diagonal Dl is oriented along the normal direction ND and the shorter diagonal ds along the rolling direction rd (RD). The results are plotted in the plane-stress plane. The distance d between the plotted points and the yield locus is minimized along the slope ms. The anisotropy parameters are then optimized using the algorithm in section 2.5 and used to plot the yield locus in the plane-stress plane. The KHN-based yield curve and anisotropy parameters have been validated against the yield curve and anisotropy parameters obtained using bulk tests.

Grain size and orientation of DC04 steel and zinc coating

The cross-section of the zinc coated steel substrate and the surface of the zinc coating, showing the individual grains, are given in Fig. 8. The average grain size of the zinc coating can be estimated to be around 100-200 μm. It can be seen that the zinc grains are aligned as pancakes with a thickness of 20 μm. It can be deduced from Fig. 8a and b that the size of the grains is typically of the order of the size of the Knoop indent (see Fig. 7a). If the grain size is larger than the indentation size, then the grain size and orientation have a major effect on the properties obtained from the indentation [41]. Hence, larger loads are chosen for nano-indentation of the zinc coated specimens, keeping the coating thickness in mind.

Fig. 6. (a) Loading and penetration curves for nano-indentation using Knoop indentation; (b) hardness and stiffness obtained from the load-depth curve [46].
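The load-depth analysis summarized in the Fig. 6 caption corresponds to the standard depth-sensing relations of equations (7.1)-(7.3): hardness from the load divided by the area function at the contact depth, and the substrate modulus from the unloading stiffness. The following is a minimal sketch of these relations; the geometry factor beta and the diamond indenter constants are generic textbook assumptions, not the calibrated values used in this work:

import numpy as np

def hardness(P_max, A_c):
    # Indentation hardness H = P / A_c (cf. equation (7.1))
    return P_max / A_c

def reduced_modulus(S, A_c, beta=1.0):
    # Reduced modulus from unloading stiffness S (cf. equation (7.2)):
    # E_r = sqrt(pi) * S / (2 * beta * sqrt(A_c))
    return np.sqrt(np.pi) * S / (2.0 * beta * np.sqrt(A_c))

def substrate_modulus(E_r, E_i=1141e9, nu_i=0.07, nu_s=0.3):
    # Invert 1/E_r = (1 - nu_s^2)/E_s + (1 - nu_i^2)/E_i for E_s (cf. equation (7.3));
    # E_i and nu_i are typical values for a diamond indenter, nu_s is an assumed Poisson ratio.
    return (1.0 - nu_s**2) / (1.0 / E_r - (1.0 - nu_i**2) / E_i)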
Furthermore, multiple indentations (20-50 in number) have been performed as a matrix spread out over a region of the specimen, and the average of the data obtained from the indentations is taken. This helps to average out the effect of local grain orientation and size on the data obtained from the Knoop indentation. Fig. 9b shows the Euler angles in the inverse pole figure (IPF Z) map of the SEM scan of the area shown in Fig. 8b. The zinc grains (hcp crystals) are mostly oriented with their c axis almost normal to the sheet plane. However, certain grains can be seen elongated and aligned along the rolling direction in Fig. 9b. Multiple pyramidal slips and twins can be seen throughout the grain matrix as well. The deformation of the zinc grains during the (temper/cold) rolling process orients the zinc grains in a preferred direction, as can be seen in Fig. 8a and the IPF map in Fig. 9b. The grain size of the steel substrate and the DC04 steel is much smaller, around 10-20 μm (Figs. 8a and 9a). The DC04 steel grains (bcc, body centred cubic, crystals) are predominantly aligned with their axes along the 111 direction.

Yield locus of DC04 steel

The hardness of the DC04 steel sheet specimen was measured for each of the six orientations using 20 measurements. The average values of the Knoop hardness numbers for the six indentation orientations at 50 g load are plotted in Fig. 10a. A maximum load of 50 g, equivalent to 490.05 mN, was applied using a Knoop indenter to avoid any grain effects on the yield behaviour of the DC04 steel. Multiple indentations have been done in an indentation matrix, shown in Fig. 10b. The sizes of the Knoop indentations have been measured by observing them under a confocal microscope. The lengths of the long diagonal Dl and short diagonal ds and the indentation depth h of the Knoop indent mark are measured from the height profile of the indent, as shown in Fig. 10c. The symbols in the bar plots below represent the orientation of the longer and shorter diagonals of the Knoop indenter along the axes of anisotropy, e.g. ND_rd represents the long diagonal Dl along the normal direction ND and the short diagonal ds along the rolling direction rd. The yield locus of the DC04 steel sheet is initially plotted in the deviatoric plane, as shown in Fig. 11. By assuming the Lévy-Mises criterion for isotropic materials, the stress ratios are taken equal to the strain ratios from equation (3.1), and the resulting parameters are listed in Table 1. However, the anisotropy parameters obtained from the deviatoric yield locus are typically not the same as those obtained from bulk tests for DC04 steel sheet [48]. Hence, the yield locus is plotted in the plane-stress plane and optimized to obtain the anisotropy parameters. The initial KHN data points are plotted in the plane-stress plane by taking initial values of R0 = R90 = 1. The yield locus is solved for the plotted points, from which values of R0, R90 and σy are obtained. The values of R0 and R90 are used to correct and re-plot the yield locus until the difference in the distance between the iterated KHN data points and the yield loci from the modified values of R0, R90 and σy is below a specified tolerance. The optimized KHN yield locus is plotted in the plane-stress plane in Fig. 12, and its optimized R0, R90 and σy are listed in Table 1 and compared with those obtained from the bulk loading tests of DC04 steel [48]. The close agreement between the two methods, shown in Fig. 12, opens the possibility of using Knoop (nano-)indentation with the developed algorithm to characterize the yield locus of thin zinc coatings on galvanized steel sheets [41].
Yield locus of zinc coating

Multiple indentations have also been performed on the cross-sections of the zinc coating, as shown in Fig. 13b and c. The size of the indents for indentation along the cross-section of the coating at higher loads (>50 mN) exceeds the thickness of the coating cross-section. Hence, the indentation load is taken as 20 mN in order to keep the indentation size well within the coating thickness of 20 μm, as shown in Fig. 13b. The length of the longer diagonal for the 20 mN load can be seen to be 10-12 μm in Fig. 13c, which is lower than the average coating thickness of 20 μm. This corresponds to a penetration depth of h = Dl/30, which is approximately 0.4 μm. For such a low penetration depth, the plastic flow along the coating thickness direction, which is also along the longer diagonal, is minimal. The plastic flow occurs along the short diagonal, which lies along the coating. Therefore, it can be concluded that for indentations with 20 mN loads along the coating cross-section, the effect of the substrate mechanical properties is minimal. The hardness values measured for the six indentation orientations are plotted in Fig. 13a. A large deviation in the hardness values is obtained for each of the six indentation orientations, resulting from the effect of the orientation of the zinc grains on the indentation hardness (see Fig. 13a). The average of the measurements of the indents at the same orientation is taken to reduce the local effects of anisotropy due to grain size and orientation. The average of the hardness values (KHN) obtained from indents with an applied load of 20 mN is used to calculate the yield locus of the zinc coating, in order to avoid effects of the coating-substrate interface, the properties of the steel substrate, individual grain orientations and grain boundaries (see Fig. 13b and c). The anisotropy determined using Knoop indentation accounts for the anisotropy in the zinc coating attributed to the hcp crystal structure of zinc as well as to the deformation textures induced in the zinc coating by the temper rolling of the galvanized steel sheet. Initially, the yield locus for the KHN data is plotted in the deviatoric plane, as shown in Fig. 14. The points in the deviatoric plane are plotted along the lines following the stress ratios from equation (3.1) (α = B) and scaled according to the KHN data. Comparing the yield loci of the DC04 steel and the zinc coating in Figs. 11 and 14, it can be seen that the zinc coating has higher induced anisotropy than the DC04 steel sheet. The anisotropy in the plastic deformation of the zinc coating and the steel is induced by the rolling process. The difference in the size and amount of zinc grains along the thickness and in the surface plane could also contribute to the anisotropy in the mechanical properties of zinc coatings. Additional anisotropy is inherent to the zinc grains due to their hcp (hexagonal close-packed) crystal structure. The difference in the critical resolved shear stress of the various slip systems in the zinc grains, e.g. basal slip, pyramidal slip and twinning, results in this anisotropy [41,49]. After validating the yield locus obtained from the Knoop indentations of the DC04 steel sheet against the bulk tests, the yield locus of the zinc coating on steel sheets has been characterized by Knoop indentation. The KHN data for the six orientations are listed in Table 2.
The points are plotted in the plane-stress σ1-σ2 plane taking initial values of R0 = R90 = 1 in Fig. 15. The values of R0, R90 and σy are then varied (increased/decreased) using constants a and b, as shown in the algorithm in Fig. 3. The distance d between the measured KHN data points in the plane-stress plane and the yield locus based on R0, R90 and σy is computed. Finally, the optimized yield locus for the zinc coating on the steel sheet is plotted by optimizing on the KHN data points in Fig. 15. The parameters of the yield locus obtained using the procedure above are listed in Table 2. Using equations (2.2) and (2.5), the parameters of the Hill's quadratic yield criterion for the zinc coating can be obtained from the R values. As of now, the value of R45 (the Lankford coefficient, or strain ratio, at 45° orientation), which is required to obtain the value of N, has not been obtained from the Knoop indentation method described above. The objective of the current work is to develop a new indentation-based method to characterize the yield locus of metallic coatings with rolling-induced anisotropy. To demonstrate the method, a hot-dip galvanized, temper-rolled zinc coating has been used as an example. The yield locus of the zinc coating has been successfully plotted from Knoop hardness data using the above method, after an initial validation of the Knoop-indentation yield locus against the yield locus from standard tests for an uncoated, cold-rolled DC04 steel specimen.

Conclusion

To conclude, a method to obtain the yield parameters and to plot the anisotropic yield locus based on Knoop indentation has been developed. The yield parameters are optimized to plot the best-fit yield locus from the KHN data. The method has been implemented to plot the yield locus of DC04 steel sheet and validated against bulk tests; both experimental characterization procedures have been shown to be in good agreement. The method has then been extended to plot the yield locus of the zinc coating on steel sheet. The parameters of the Hill's quadratic yield criterion have been derived from the plotted yield locus of the zinc coating and can be used in the modelling of material deformation behaviour in various numerical simulations.

Declaration of competing interest

None.

Fig. 15. The yield loci obtained from KHN data in the plane-stress plane, optimized for anisotropy, for the zinc coating on steel sheet.
Cryogen-free dissolution Dynamic Nuclear Polarization polarizer operating at 3.35 T, 6.70 T and 10.1 T

Purpose: A novel dissolution dynamic nuclear polarization (dDNP) polarizer platform is presented. The polarizer meets a number of key requirements for in vitro, pre-clinical and clinical applications. Method: It uses no liquid cryogens, operates in continuous mode, accommodates a wide range of sample sizes up to and including those required for human studies, and is fully automated. Results: It offers a wide operational window both in terms of magnetic field, up to 10.1 T, and temperature, from room temperature down to 1.3 K. The polarizer delivers a 13C liquid-state polarization of 70% for [1-13C]pyruvate. The build-up time constant in the solid state is approx. 1200 s (20 min), allowing a sample throughput of at least one sample per hour, including sample loading and dissolution. Conclusion: We confirm the previously reported strong field dependence in the range 3.35 to 6.7 T, but see no further increase in polarization when increasing the magnetic field strength to 10.1 T for [1-13C]pyruvate and trityl. Using a custom dry magnet, cold head and recondensing, closed-cycle cooling system, combined with a modular DNP probe, automation and fluid handling systems, we have designed a unique dDNP system with unrivalled flexibility and performance.

Introduction

Despite the widespread utility of NMR and MRI, emerging applications are often limited by low sensitivity. To improve the sensitivity, it is possible to either increase the available signal or decrease noise levels, or both. Improving the signal with a stronger magnetic field is challenging, since superconducting magnet technology permits only relatively modest increases in magnetic field, which come at relatively high cost [1,2]. On the other hand, noise reduction is possible, chiefly driven by the development of cryogenically cooled probes [3]. However, this yields a relatively modest noise reduction of approximately a factor of four. It is clear that an alternative scheme is necessary to make a dramatic change to the signal-to-noise ratio. Using hyperpolarization, the sensitivity can be augmented by several orders of magnitude by boosting the polarization of the nuclear spins close to unity [4,5].
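The scale of that gain can be illustrated with the thermal-equilibrium polarization of a spin-1/2 nucleus, P = tanh(h·ν/2kT). The short Python sketch below compares the 13C polarization under typical in vivo detection conditions with the 70% level reported in this work; the comparison is an illustration of the orders of magnitude involved, not a measurement:

import numpy as np

H = 6.626e-34         # Planck constant, J s
KB = 1.381e-23        # Boltzmann constant, J/K
GAMMA_13C = 10.708e6  # 13C gyromagnetic ratio / 2*pi, Hz/T

def thermal_polarization(B, T, gamma=GAMMA_13C):
    # Spin-1/2 equilibrium polarization P = tanh(h * gamma * B / (2 * kB * T))
    return np.tanh(H * gamma * B / (2.0 * KB * T))

p_room = thermal_polarization(3.0, 310.0)  # ~2.5e-6 at 3 T, body temperature
print(f"70% polarization is ~{0.70 / p_room:.0e} times thermal at 3 T, 310 K")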
One of the leading means of hyperpolarization is dynamic nuclear polarization (DNP). DNP itself was predicted by Overhauser in 1953 and subsequently experimentally confirmed using paramagnetic metals [6][7][8]. The premise of DNP is the use of microwave (MW) irradiation to transfer polarization from electron spins with larger polarization to nuclear spins. While early work used the conduction electrons of metals, modern work uses exogenous stable free radicals, such as e.g. nitroxides or trityls. There are four different DNP mechanisms that drive the polarization transfer, namely the Overhauser effect (OE), cross effect (CE), solid effect (SE), and thermal mixing (TM) [9]. These mechanisms have different dependency on magnetic field strength and temperature. It is well established that for most samples at low temperature (<4.2 K), TM is the dominant mechanism. With increasing magnetic field strength, the ESR line width typically increases due to g-anisotropy, and the resonance becomes more inhomogeneously broadened. It is therefore well established that at higher magnetic field strength, modulation of the MW frequency [10] to saturate more electron spin packets, or increasing the radical concentration to make the line more homogeneous [11], improves the DNP efficiency. The other DNP mechanisms should not be relevant in this study, but will also depend on the magnetic field strength. Nuclear and electron spin relaxation times will also depend on both magnetic field strength and temperature, and both play an important role in the efficiency of DNP. Griffin et al. pioneered the use of DNP with solid-state NMR at MIT, where it has been applied to a variety of systems [12][13][14]. One limitation of working in the solid state, at low temperature, is the lack of information on dynamics and chemical reactions. Therefore, it is desirable to get the best of both worlds, by performing DNP at low temperature, while observing in the liquid state at ambient temperature. This method, known as dissolution DNP (dDNP), involves the rapid dissolution of the frozen sample into hot solvent [15].

A key factor, which impairs the widespread implementation of dDNP, is the need for copious quantities of liquid helium for maintaining the sample at temperatures below 2 K. Helium is a costly, non-renewable resource, and dDNP systems typically consume more than 2 L of liquid helium per sample. While this can be offset by the use of helium recovery systems, these are also costly, and 100% recovery is quite difficult to achieve. Equally important is the inconvenience of frequent handling of cryogens in a hospital environment, or even in a typical NMR laboratory.

While a system for clinical use has been designed with zero cryogen consumption [16], the goals for a research system differ. Our aims were:
• Zero cryogen consumption with ability to run in continuous mode
• Fully automated operation for routine use
• Dual channel solid-state NMR probe with broadband capability
• Base temperature of less than 1.5 K under DNP conditions and operational at any temperature up to room temperature
The basic principles of the polarizer were first presented at the 5th International DNP Symposium, Egmond aan Zee, Netherlands, Sep 2015 [17], which inspired others to adopt a similar design [18]. More recently, we have presented preliminary results at ENC [19] and EUROMAR [20].
In this work, we present a detailed description of the HYPERMAG polarizer, and use the unique capability of the polarizer to study the DNP enhancement as a function of magnetic field strength for the most important imaging agent for hyperpolarized metabolic MR, [1-13C]pyruvate [21,22]. Hyperpolarized pyruvate is now in man and the first clinical studies have been published [23][24][25][26].

The magnet is equipped with a persistent mode switch for routine operation. In case of power failure to the compressor (the compressor starts automatically when power returns), the magnet can remain at field for several minutes. Beyond this, the magnet will harmlessly quench. In case of a quench from full field, the magnet is cold within two hours. The magnet is unshielded and provides a strong background field for the hyperpolarized solution. The dissolution is performed 100 mm from the magnet center, where the field is still 80% of the center field.

DNP probe
Fig 3 shows a drawing of the DNP probe [27,28]. We designed it for Cryogenic Ltd as a template for similar dDNP systems based on their cryostat and VTI. The DNP probe is isolated from the VTI to avoid contamination of the VTI circuit with air from sample loading and unloading. It consists of a stainless steel upper section mated to a copper MW cavity. The interior of the DNP probe is connected to a buffer volume of 2 L charged with 2 bar of He gas, i.e. 4 L of standard temperature and pressure helium gas. This corresponds to approx. 5 mL of liquid helium, and fills the cavity to half when a sample is loaded. The DNP probe has a Cernox temperature sensor (CX1030, Lakeshore, USA) and a 1 W heater mounted on the cavity. Below 4.0 K the temperature was derived from the VTI helium vapor pressure per ITS-90. The vapor-pressure-based temperature reading was verified against the Cernox calibration at zero magnetic field. Above 4 K, the Cernox reading was used, ignoring any magnetic field corrections. The MW cavity has an inner diameter of 27 mm and an outer diameter of 29 mm. It has an interior height of 30 mm and is fitted with a saddle coil with 13 mm diameter and 22 mm height [27]. The DNP probe has an approx. 400 mm long, 4.2 mm diameter circular waveguide terminating at the cavity. A UT85SS-CuBe coaxial cable connects the coil to a tune-and-match box external to the VTI. The saddle coil can be single tuned for any frequency up to 428 MHz for 1H at 10.05 T, or it can be double tuned for 13C and 1H for the same frequency range.

MW source(s)
MW are generated by a 94 GHz source (Fig 1I, Quinstar, USA) with 500 MHz analog tuning range and 300 mW output power with analog attenuation. The source can be connected to either a frequency doubler (D200, Virginia Diodes Incorporated, USA) or tripler (D282, Virginia Diodes Incorporated, USA). The doubler or tripler is biased by -36 V from the power supply (Fig 1E). The efficiencies of the doubler and tripler are 25% and 10%, respectively. Depending on the frequency, the source is connected to a transition from rectangular waveguide (WR10, WR5.1 or WR3.4, respectively) to the circular waveguide (316L stainless steel) of the DNP probe. The transition is vacuum tight with an O-ring against the circular waveguide and with Kapton tape (DuPont, USA) on the input. When the magnetic field is changed, the frequency doubler or tripler and its respective transition are either inserted or removed as required.
Dissolution system
Liquid-state polarization measurements were acquired with a 400 MHz Direct Drive console running VnmrJ 4.2 software and a 5 mm 1H/X broadband probe (Agilent, Palo Alto, CA, USA). The 13C Larmor frequencies at the three field strengths are 35.9, 71.8 and 108 MHz, respectively. A flip angle of 5° every 5 s was used to measure the decay of the DNP enhanced NMR signal. T1 and the initial polarization (at the time of dissolution) were calculated from the exponential decay. Liquid state polarization was measured by normalization of the DNP enhanced signal with the NMR signal at thermal equilibrium for the same sample using a 90° flip angle.

Sample preparation
[1-13C]pyruvic acid (Sigma-Aldrich, Denmark) with the trityl radical AH111501 (GE Healthcare, Denmark) was used in all experiments. The concentration of radical was optimized in coarse increments of 15, 30 and 45 mM at the three magnetic field strengths. 1 mM Gd (Dotarem, Guerbet) doping was tested at the three field strengths for the 15 mM AH111501 sample. Pyruvic acid was loaded into the SPINlab fluid path (GE Healthcare, Denmark) as described in [16,29].

Results
The magnet operated in persistent mode at 10.1 T (full field) with a drift rate of -0.06 ppm/h, corresponding to 2.8 MHz/week on the MW frequency at 282 GHz. The supplier verified the magnetic field homogeneity at the factory. The system cooled from room temperature to operating temperature in less than 24 h and recovered from a quench in less than 2 h. The magnet was energized to full field in less than 30 min.

In persistent mode the base temperature of the VTI was <1.3 K with the pumping capacity of the ACP40 pump and all heat loads (under polarizing conditions). For most of the results presented here, the VTI was maintained at 1.4 K (2.8 mbar helium vapor pressure) by the Lakeshore 350 controller and heater on the probe. Maintaining the probe at fixed temperature permits a steady state to be established where no adjustments to the helium flow or vacuum pump speed are required. In this mode, reductions in heater power offset MW induced heating and permit stable operation for extended periods. This is in contrast to conventional VTI based systems, which can be operated under a variety of heat loads with only minimal changes in performance. Coupling between the recirculating system and magnet leads to dramatic changes in performance with varying heat loads.

An additional benefit of this operating scheme is the ability to measure the MW power deposited in the sample. The MW power transmission to the cavity at 188 GHz was determined by measuring the heater power required to maintain a fixed temperature as a function of applied MW power, Fig. 4. The power dissipated in the cavity was 5.7 dB (10 log10(0.27)) less than the source output power. The loss is consistent with the attenuation that we have measured for circular stainless steel waveguide using a power meter (Ericsson PM-5, Virginia Diodes Inc, USA) at this frequency. This loss increases to 8.5 dB (10 log10(0.14)) at 282 GHz. The DNP build-up time constant and final value at 282 GHz for [1-13C]pyruvic acid with 45 mM AH111501 as a function of MW power are shown in Fig. 6. The figure shows that adequate MW power is available, and that a time constant of about 1800 s (30 min) can be obtained for [1-13C]pyruvic acid. In contrast to the MW frequency sweeps, the MW power sweeps are very biased towards higher power if short build-up periods are used. The data reported here are obtained by fitting of full build-up curves at each MW power.
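A minimal sketch of the liquid-state polarimetry described above follows: the decay sampled with 5° pulses is fitted, back-extrapolated to the time of dissolution, and normalized to the thermal-equilibrium signal acquired with a 90° pulse. The signal values, the apparent T1 and the delay to the first spectrum are hypothetical, and the small per-pulse loss from the 5° sampling pulses is simply folded into the apparent decay.

```python
# Sketch of the liquid-state polarization calculation: fit the decay of the DNP-enhanced
# signal, extrapolate back to the dissolution time, and normalize against the thermal signal.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 300, 5.0)                  # s, one spectrum every 5 s with a 5-degree pulse
signal = 1000.0 * np.exp(-t / 60.0)         # hypothetical decay, apparent T1 ~ 60 s
S_thermal = 0.19                            # hypothetical thermal signal (90-degree pulse)
P_thermal = 8.1e-6                          # thermal 13C polarization at 9.4 T, 298 K (~8 ppm)
t_dissolution = -20.0                       # dissolution assumed 20 s before the first spectrum

(S0, T1), _ = curve_fit(lambda x, a, tau: a * np.exp(-x / tau), t, signal, p0=(signal[0], 50.0))
S_at_diss = S0 * np.exp(-t_dissolution / T1)                        # back-extrapolation
# correct for the different flip angles of the DNP-enhanced (5 deg) and thermal (90 deg) scans
P_liquid = P_thermal * (S_at_diss / S_thermal) * (np.sin(np.pi / 2) / np.sin(np.radians(5)))
print(f"apparent T1 = {T1:.0f} s, liquid-state polarization = {100 * P_liquid:.0f}%")
```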
Examples of DNP build-up curves at 94, 188, and 282 GHz for [1-13C]pyruvic acid for close to optimal conditions (MW frequency, modulation and power) are shown in Fig. 7. The radical concentration has been varied to approach optimal conditions, and a higher radical concentration is required at higher magnetic field to obtain maximum polarization with the shortest possible build-up time constant. The 15 mM AH111501 sample has a 13C nuclear relaxation time, T1, of 90,000 s (25 h), 33,000 s (9.2 h) or 6,400 s at 10.1, 6.7 and 3.35 T, respectively. The DNP build-up time constants at the three field strengths are 26,000 s (7.2 h), 3,700 s and 1,000 s, respectively, for the 15 mM AH111501 concentration. A build-up time constant of approx. 1200 s was obtained at 6.7 T for 30 mM AH111501. The build-up curves have been obtained by acquiring a spectrum with a 1° flip angle every 300 s. At the two highest field strengths, MW modulation was applied at a rate of 1 kHz and amplitude of 25 or 50 MHz. No effect of modulation was seen at 3.35 T, whereas the effect was pronounced at 10.1 T, with a two to three times increase in the DNP enhancement for 20 to 50 MHz modulation amplitude. At the higher radical concentrations, the effect of MW modulation was significantly reduced, but still provided an approx. 50% improvement at 10.1 T. MW modulation had a similar effect on the build-up time constant as on the final polarization. We estimate an uncertainty of ±5% based on the three replicates for all numbers. The fitting uncertainties were typically 1% or less.

The samples in Fig. 7 have been dissolved, giving liquid-state polarizations of 27% at 3.35 T, 70% at 6.7 T, and 70% at 10.1 T. The standard deviation on the liquid-state polarization is ±5% with three replicates at each magnetic field strength. Fig. 8A shows a representative sample loading profile (position within the DNP probe; -630 mm is the magnet center) and the corresponding temperature of the DNP probe for a 1.5 g sample. With such a large sample, heat loads are at the upper end of the operating range and the trends are most pronounced. The loading profile has been empirically optimized to impose a low, but consistent, heat load on the helium bath throughout the loading process. The temperature at the first stop at -400 mm is less than 70 K, ensuring that the sample freezes rapidly. If the sample has been frozen prior to the loading process, this pause could be shortened. Smaller samples can be loaded considerably faster, with the loading profile for a 50 mg sample taking less than 3 min. Fig 8B shows the DNP probe temperature during a dissolution of the 1.5 g sample. The dissolution takes place at -530 mm, 100 mm above the magnet center, and the sample is retracted immediately at the end of the dissolution. The peak temperature of the DNP probe is 1.9 K, which settles to base temperature within two minutes. A smaller sample would have less impact on the DNP probe temperature. Thus, sample loading and unloading (dissolution) add between 5 and 15 min to the polarization cycle, depending on sample size, and contribute only a fraction of the overall cycle time.
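The build-up time constants quoted above come from fits of the form P(t) = Pmax(1 − exp(−t/τ)); a minimal sketch with hypothetical data points follows, neglecting the polarization consumed by the 1° monitoring pulses.

```python
# Sketch of the solid-state build-up analysis: fit an exponential build-up to spectra
# acquired every 300 s. The data points below are hypothetical (synthetic with noise).
import numpy as np
from scipy.optimize import curve_fit

def buildup(t, P_max, tau):
    return P_max * (1.0 - np.exp(-t / tau))

t = np.arange(0, 7200, 300.0)  # s, one spectrum every 300 s with a 1-degree flip angle
signal = buildup(t, 1.0, 1200.0) + 0.01 * np.random.default_rng(0).normal(size=t.size)

(P_max, tau), _ = curve_fit(buildup, t, signal, p0=(signal[-1], 1000.0))
print(f"build-up time constant = {tau:.0f} s, plateau = {P_max:.2f} (arbitrary units)")
```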
Discussion and conclusion
In comparison to NMR magnet design, there are some notable differences in the requirements for dDNP systems. One such area is the desirable effect of stray magnetic field. The dissolution process benefits, in principle, from taking place at high magnetic field. Furthermore, the dissolved sample must be handled in a background magnetic field to avoid low-field relaxation and non-adiabatic field changes. The most straightforward manner to accomplish these goals is to eschew the use of active shielding, which may produce abrupt magnetic field gradients, and thereby maximize the stray field. Another consideration is minimization of the length of the VTI. On a traditional cryogen cooled magnet, this distance is largely defined by the cryogen vessels contained within the cryostat, which require a certain volume and benefit from long heat conduction paths. As a cryogen-free magnet is free of these constraints, it is possible to reduce the height of the VTI dramatically. Therefore, the hyperpolarized solution is always within 1 m of the magnet center and the background field never drops below 2 mT (when at 6.7 T or above).

A further advantage of this is a concomitant reduction in the overall height of the system, a lower weight and cost, and the ability to take the magnet off field for periods and, within hours or days, bring the polarizer operational again. One consequence of shortening the VTI is a relatively high thermal gradient along the length of the VTI. This thermal gradient must be considered when designing the probe, as it is possible to inadvertently introduce large heat loads through e.g. conduction or thermoacoustic oscillations.

With regard to the magnet itself, requirements for dDNP are generally less demanding than those for NMR. In particular, magnetic field homogeneity and drift requirements are relaxed by several orders of magnitude as compared to high-resolution NMR magnets. One caveat is that the homogeneity requirements must be met without the use of passive shims, which are optimized for use at a single magnetic field strength. Independent superconducting shims are best avoided, as they would need to be ramped at the same time as the main magnetic field. While drift requirements are relatively relaxed (0.1 ppm/hr), the magnet must be run in persistent mode, as the drift rate of commercially available magnet power supplies is not able to meet this specification. Furthermore, persistent mode reduces the thermal load on the magnet by eliminating resistive heating in the current leads. The reduced thermal load increases the power available for helium liquefaction.

The obtained drift rate of the magnet is acceptable for weekly calibration of the optimal MW frequency, even for radicals with narrow ESR linewidth. With a tuning range of approx. 2 GHz (at 10.1 T) for the MW source, magnetic field correction would be required less than annually. We have implemented an automated procedure for resetting the magnetic field and typically perform this monthly. The large homogeneous volume was specified to allow multiple human doses to be polarized simultaneously. In this work, we only polarized a single sample of full human dose equivalent (1.5 g of pyruvic acid), but it should be possible to fit three human dose samples (SPINlab fluid paths) into the DNP probe and into the homogeneous volume of the magnet.
Another key thermal consideration is the heat introduced in the sample loading and dissolution processes. These processes are critical, because they have the ability to introduce a sudden heat load that will rapidly propagate through the system, increase the temperature of the second stage of the cold head, and potentially quench the magnet. We have demonstrated that, even under the most demanding conditions, the sample loading and dissolution processes introduce little heat into the system. The sample vial is lifted 100 mm prior to dissolution and immediately removed after the dissolution process is complete. However, the loading process must be conducted with care, as it has the potential to evaporate significant quantities of liquid helium. It is preferable to direct as much of the heat from the sample as possible to the enthalpy of the helium gas instead of the helium bath. This is more easily accomplished in this system due to the higher operating temperature in comparison to the SPINlab polarizer. The higher operating temperature leads to a higher operating pressure, leading to more efficient thermal transfer between the sample and the cold helium gas, i.e. a faster cooling rate of the sample. In the case of the SPINlab, the challenge is to reject the heat to the cold head to preserve the finite helium volume. For this system, the limitation is the capacity of the cold head to condense the warm exhaust helium.

The most popular substrate for hyperpolarized metabolic MR, [1-13C]pyruvic acid, was studied in this work. We show that the previously reported strong field dependence in the range 3.35 to 4.6 T [32,33] does not extrapolate to higher magnetic fields, up to 10.1 T. It seems that, for this sample, the maximum achievable polarization is approx. 70%, in agreement with other work at high magnetic field strength [11].

The g-tensor for the OX063 trityl (AH111501 is the methylated OX063) was determined by Lumata et al [34] to be axially symmetric with g⊥ = 2.00319(3) and g∥ = 2.00258(3). Thus, g∥ - g⊥ is -0.00061, which corresponds to a spectral width of 8.5 MHz/T. This is less than the 13C Larmor frequency of 10.71 MHz/T. Electron-electron dipolar broadening contributes to the line width, in a first approximation linearly with concentration. Chen et al [35] measured the trityl line width in frozen solution for 20, 40 and 60 mM at X-band (minimal contribution from g-anisotropy). They found that the line shape changed towards Lorentzian at the highest concentration, and that the line width was less than the dipolar coupling of 20 MHz between a minimal-distance (approx. 1 nm) trityl spin pair. The 8.5 MHz/T g-anisotropy, convoluted with a dipolar broadening of 5-10 MHz at 45 mM, corresponds well with the peak-peak separation of the DNP spectrum of 93 MHz, which is significantly less than the 13C Larmor frequency of 107.1 MHz at 10.1 T.
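The two frequencies compared above can be checked directly by converting the g-anisotropy into a spectral width per tesla and comparing it with the 13C gyromagnetic ratio; the constants below are published values, rounded.

```python
# Worked check of the numbers quoted above.
mu_B_over_h = 13.996245e9   # Hz/T, Bohr magneton divided by Planck's constant
delta_g = 0.00061           # magnitude of g_parallel - g_perpendicular for the trityl
gamma_13C = 10.7084e6       # Hz/T

print(f"g-anisotropy spectral width: {delta_g * mu_B_over_h / 1e6:.1f} MHz/T")  # ~8.5 MHz/T
print(f"13C Larmor frequency:        {gamma_13C / 1e6:.2f} MHz/T")              # ~10.71 MHz/T
```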
We therefore conclude that the ESR spectral width of the trityl is well matched for direct 13C DNP via a CE or TM mechanism up to approx. 6.7 T, but above this field strength, DNP via these two mechanisms becomes less efficient due to a too-narrow ESR line relative to the 13C Larmor frequency. Consequently, we observe that a much higher radical concentration is required when the magnetic field strength increases from 3.35 to 10.1 T in order to broaden the ESR spectrum. Thus, the optimal radical concentrations are 15, 30 and 45 mM at 3.35, 6.7 and 10.1 T, respectively. Furthermore, the higher radical concentration makes the ESR line more homogeneous. At the lowest radical concentration, 15 mM, we observed a significant increase of the DNP enhancement and shortening of the build-up time constant with modulation of the MW frequency. The effect of MW frequency modulation was significantly offset when the radical concentration was increased, to the point that MW frequency modulation had little effect at 6.7 T and 30 mM, but still a significant effect at 10.1 T and 45 mM. Thus, at high field the ESR line becomes inhomogeneous as expected, and the DNP mechanism shifts from TM to CE. Increased radical concentration is unable to make the ESR line effectively homogeneous.

Both Lumata et al and Chen et al studied the electron longitudinal relaxation time, T1e, for trityls as a function of temperature and magnetic field. An approximately linear temperature dependence was reported, characteristic of a direct process, which favors increasing the temperature to shorten T1e. Surprisingly, they reported almost no magnetic field dependence, as would be expected for the dominating Orbach and direct mechanisms, implying that T1e should not be a bottleneck at higher magnetic field strengths either. This is supported by the fact that we have not observed any benefit of Gd doping at the two highest magnetic field strengths for this sample.

Since the nuclear relaxation time, T1, continues to increase with approximately the square of the magnetic field strength up to 10 T, it seems that the leakage factor cannot explain the stagnation at 70% polarization. The nuclear longitudinal relaxation time, T1n, is due to the presence of trityl. The direct dipolar relaxation by the electron spin is modulated by the electron spin relaxation time (T2e, on the time scale of tens of nanoseconds) [9]. Since we would expect ω0·T2e ≫ 1 at low temperature and high field, the relaxation rate should follow a B0^-2 dependence. This is consistent with our observation of T1n increasing approximately quadratically with magnetic field. However, this is probably coincidental, since this relaxation mechanism, electron-nucleus dipolar relaxation, should further scale with the (1 - P0^2) factor, where P0 is the electron polarization. At 1.4 K and 3.35, 6.7 and 10.1 T, P0 is 92.3%, 99.7% and 99.99%, respectively, and, thus, the factor becomes 0.147, 0.00632 and 0.000253, respectively. This is approx. a factor of 23 from 3.35 to 6.7 T, and another factor of 25 from 6.7 to 10.1 T.
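These factors follow from the thermal electron polarization P0 = tanh(hνe/2kT); the short check below reproduces the quoted numbers using the electron Larmor frequencies 94, 188 and 282 GHz at 1.4 K.

```python
# Numerical check of the (1 - P0^2) factors quoted above.
import math

h = 6.62607015e-34   # J s
k = 1.380649e-23     # J/K
T = 1.4              # K

for B, nu_e in [(3.35, 94e9), (6.7, 188e9), (10.1, 282e9)]:
    P0 = math.tanh(h * nu_e / (2 * k * T))       # thermal electron polarization
    print(f"B = {B:5.2f} T: P0 = {100 * P0:6.2f} %, (1 - P0^2) = {1 - P0**2:.3g}")
# reproduces ~0.147, ~0.0064 and ~0.00026, i.e. factors of roughly 23 and 25 between fields
```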
A more reasonable interpretation of the radical-induced nuclear T1 is that it proceeds through the same three-spin transitions that give rise to the DNP enhancement, i.e. indirect nuclear relaxation [9]. Similar to the discussion above for DNP, the observation that the 13C Larmor frequency starts to exceed the ESR line width makes these transitions less probable. Furthermore, the non-linear shortening of T1n with radical concentration as it increases from 15 to 45 mM supports this interpretation. In [36] we observed an almost linear shortening of T1n with radical concentration at 3.35 T and 1.2 K, and no effect of Gd doping on T1n. However, another effect may be that small amounts of clusters of trityls form fast-relaxing centers [35] that shorten T1e. Their implication for T1n and DNP remains to be further investigated.

In general, a technical limitation for high field DNP has been the availability of MW sources. However, improved waveguide transmission, the short DNP probe, and efficient multipliers have extended the accessible range. In this work, we show that this is no longer a limitation of high field dDNP. At 10.1 T approx. 10 mW is required for maximal DNP and the shortest build-up time constant. At 6.7 T the required MW power is slightly higher, approx. 20 mW, to achieve the fastest build-up, and at 3.35 T the optimal MW power had increased to approx. 40 mW. These numbers are influenced by technical performance, but illustrate the general trend that the narrower ESR line at the lower magnetic field strengths absorbs more MW power and leads to faster DNP time constants. In principle, any field strength in the range defined by the cut-off frequency of the circular waveguide (approx. 42 GHz) and the maximum field strength of 10.1 T can be used. In recent years, MW sources with good properties for DNP at 196 GHz (7 T) and 263 GHz (9.4 T) have become available.

The HYPERMAG polarizer provides several important features. It runs in continuous mode (in contrast to the SPINlab, which requires overnight regeneration of the contained helium) without consumption of any cryogens. It offers the widest range of operating conditions for dDNP, up to 10.1 T magnetic field strength and any temperature from room temperature down to 1.3 K. The system is very convenient, with a high level of automation for the user, and reliable. A liquid state polarization of 70% was obtained for [1-13C]pyruvate with a solid state build-up time constant of approx. 1200 s (20 min), allowing a throughput of at least one sample per hour including sample loading and dissolution. We have built two systems; the first has been running for 15,625 h/651 days (compressor running hours), and the second has been in operation with Dr Mikko Kettunen, University of Eastern Finland, since Aug 2017. In conclusion, using a custom dry magnet, cold head and recondensing, closed-cycle cooling system, combined with a modular DNP probe, automation and fluid handling systems, we have designed a unique dDNP system with unrivalled flexibility and performance.
Fig 2 shows a schematic of the cryostat (Fig 1J, Cryogenic Ltd, UK). The cooling power of the system is provided by a 1 W pulse tube cooler (Fig 1M: RP-082B2, SHI Cryogenics, Japan) and an F-70H compressor (SHI Cryogenics, Japan). A temperature monitor (Fig 1D: Model 218, Lakeshore, USA) monitors the magnet.

Figure 1: Photographs of HYPERMAG polarizer: A: magnet power supply; B: LS350 temperature controller; C: Watlow temperature controller for solvent heating; D: LS218 temperature logger; E: power supply; F: pneumatics; G: heater pressure module for fluid path syringe; H: insertion module for fluid path, air lock and gate valve on top of DNP probe; I: MW source; J: dry magnet and cryostat; K: dry pump; L: buffer tanks; M: cold head; N: National Instruments cRIO controller; O: pneumatics valve block for vacuum and pressure; P: high pressure valves for syringe drive; Q: vacuum pump for air lock.

Figure 2: Schematic of the cryostat with the 10.1 T magnet. A pulse tube cold head and compressor provide the cooling power to the magnet and helium cooling circuit. 100 L of helium gas from a cylinder is charged into the closed cooling circuit. The cold head condenses the helium gas into the helium pot. A needle valve controls the flow of liquid helium into the VTI. A dry pump reduces the vapor pressure in the VTI, and the exhaust of the pump is buffered by tanks before entering the cryostat through a charcoal trap. The interior of the DNP probe is isolated from the VTI with a separate helium gas volume of 2 L at 2 bar. Sample loading into the DNP probe is through an airlock.

Figure 3: 2D CAD drawing of the DNP probe. The DNP probe fits into the 30 mm diameter VTI. The distance from the KF-40 flange to the magnet center is 444 mm. The probe has a KF-16 flange at the sample loading port to fit the gate valve and airlock. An overpressure relief valve is fitted. The 4.2 mm diameter waveguide extends to the copper cavity seen on the right. The photo shows the NMR saddle coil without the cavity mounted.

Figure 4: Probe heater power required to maintain a fixed temperature of 1.4 K as a function of MW power at the frequency for optimal positive enhancement. The measurements were performed at 188 (blue empty square) and 282 GHz (green empty circle), and the measured attenuations were 5.7 dB (10 log10(0.27)) and 8.5 dB (10 log10(0.14)), respectively.

Figure 6: Time constant and final value for DNP build-up at 282 GHz for [1-13C]pyruvic acid with 45 mM AH111501 as a function of MW power.

Figure 7: DNP build-up curves at 94, 188, and 282 GHz for [1-13C]pyruvic acid for close to optimal conditions (MW frequency, modulation and power). The radical concentration has been varied to approach optimal conditions.

Figure 8: A. Temperature profile of the helium bath (DNP probe) during loading of 1.5 g of [1-13C]pyruvic acid with 15 mM AH111501. The sample is lowered gradually according to an insertion profile that has been empirically adjusted to minimize the heat load to the helium bath. Sample position is indicated on the secondary ordinate. B. Temperature profile of the helium bath (DNP probe) during a dissolution of the same sample. The sample is raised 100 mm before dissolution, and immediately retracted when the dissolution is complete.
• Operational at multiple field strengths up to 10.1 T, corresponding to a MW frequency of 282 GHz
• Any sample size up to 2 g, corresponding to a human dose of [1-13C]pyruvic acid [29][30][31]

The sample port has a gate valve (VAT 01224-KA24, Switzerland) mounted between the air lock and insertion modules (Fig 1H: GE Healthcare, USA) adapted from the SPINlab polarizer [16]. The airlock is connected to helium purge gas (approx. 1.3 bar) or vacuum (approx. 7 mbar) through the automation valves. A small diaphragm pump (Fig 1Q: KNF, Germany) provides the vacuum. A temperature controller (Fig 1C: Watlow, USA) controls a heater-pressure module (Fig 1G: GE Healthcare, USA), also adapted from the SPINlab, and mounted in proximity to the insertion module. The complete assembly accommodates the use of the fluid paths [29][30][31] available for the SPINlab polarizer, which is the key feature to ensure sterility for human use (the SPINlab also includes a quality control system to enable human studies).

Automation
The system is controlled by a real time controller (Fig 1N: cRIO-9035, National Instruments, Austin, TX, USA) running LabVIEW software. The controller has modules for motor control, digital and analogue input and output, and serial communication. A 24 V power supply (Fig 1E) powers the controller. The system runs under computer control with a graphical user interface to control the basic user functions (sample loading, polarization and dissolution). The software logs key parameters such as temperatures, pressures, and sample position from the temperature controller and monitor, as well as pressure gauges. Pneumatics are controlled by a valve block (Fig 1O: Festo, Germany) that delivers compressed air to various pneumatic valves and actuators (Fig 1F). For compatibility with standard house compressed air supplies, the 16 bar pressure required for driving the heater-pressure module is generated by a four-times pressure booster (SMC, Japan). High pressure gas, as well as vacuum, are controlled with pneumatically actuated high purity diaphragm valves (Fig 1P: Swagelok, Solon, OH, USA).

NMR
Solid-state NMR data were acquired with a 400 MHz Unity INOVA NMR console (Agilent, Palo Alto, CA, USA) running VnmrJ 4.2 software. The flip angle was calibrated from a series (hundreds) of NMR spectra acquired with a short repetition time (typically 0.1 s). An exponential fit to the signal decay provided the actual flip angle. Solid-state polarization was measured by normalization of the DNP enhanced NMR signal (integral) with the NMR signal at thermal equilibrium at the same conditions. No 13C background signal from the probe was observed.
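The flip-angle calibration mentioned above relies on the fact that, with a repetition time much shorter than T1, the signal of the n-th pulse decays as sin(θ)·cos(θ)^(n−1); a minimal sketch with hypothetical data follows.

```python
# Sketch of the flip-angle calibration: an exponential fit to the pulse-train decay
# yields ln(cos(theta)), from which the actual flip angle is recovered.
import numpy as np

theta_true = np.radians(3.0)                         # hypothetical "actual" flip angle
n = np.arange(1, 400)                                # pulse index
signal = np.sin(theta_true) * np.cos(theta_true) ** (n - 1)   # T1 recovery neglected

slope = np.polyfit(n - 1, np.log(signal), 1)[0]      # slope = ln(cos(theta))
theta_fit = np.degrees(np.arccos(np.exp(slope)))
print(f"calibrated flip angle: {theta_fit:.2f} degrees")
```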
An empty E1−, E3−, E4− adenovirus vector protects photoreceptors from light-induced degeneration

We have previously identified a neuroprotective effect associated with empty (E1−, E3−, E4−) adenovirus vector delivery in a model of light-induced, photoreceptor cell death. In this study, we further characterize this protective effect in light-injured retina and investigate its molecular basis. Dark-adapted BALB/c mice, aged 6–8 weeks, were exposed to standardized, intense fluorescent light for 96 or 144 h. Prior to dark adaptation, all mice received intravitreous injection of 1 × 10⁹ particles of an empty (E1−, E3−, E4−) adenovirus vector in one eye and vehicle in the other. Following light challenge of 96 or 144 h, histopathological analysis and quantitative photoreceptor cell counts were conducted. Semiquantitative assessment of messenger ribonucleic acid (mRNA) for the apoptosis related genes p50, p65, IkBa, caspase-1, caspase-3, Bad, c-Jun, Bax, Bak, Bcl-2, c-Fos, and p53 using quantitative reverse transcriptase polymerase chain reaction was performed on eyes following 12 h of light exposure. Following 96 h of light exposure, the photoreceptor cell densities for E1−, E3−, E4− adenovirus vector and vehicle-injected eyes were 87.5 ± 9.5 and 79.3 ± 10.1, respectively (p = 0.79). After 144 h of light exposure, the photoreceptor cell density was preserved in vector-injected eyes as compared to vehicle treated eyes, 68.9 ± 10.0 and 49.2 ± 4.6, respectively (p = 0.016). Relative mRNA levels of c-Fos and c-Jun at 12-h light exposure after injection differed significantly between vector- and vehicle-injected eyes (p = 0.036, 0.016, respectively). The expression of the other apoptosis-related genes evaluated was not significantly affected. This study investigates the molecular basis of photoreceptor neuroprotective pathway induction associated with E1−, E3−, E4− adenovirus vectors. The results indicate that empty adenovirus vectors protect photoreceptors from light-induced degeneration by the modulation of apoptotic pathways. Gene expression changes suggest that the suppression of c-Fos and c-Jun upregulation contributes significantly to the neuroprotective effect. Understanding the molecular basis of the neuroprotective pathway induction in photoreceptors is critical to the development of novel therapies for retinal degenerations.

Introduction
Adenovirus (AdV), adeno-associated virus, and lentivirus vector platforms continue in active development for ocular gene therapy [1][2][3]. A number of factors determine the relative advantages and disadvantages of each platform and include but are not limited to vector tropism, transduction efficiency, transgene size, latency, duration of expression, vector-related toxicity, and integration requirements for transgene expression. Associated and potentially therapeutic effects of the empty or null vectors themselves are described but are not well understood [4,5]. While the magnitude of the reported protective effect is small compared to that resulting from, e.g., overexpression of a specific neuroprotective transgene, such as pigment epithelium-derived factor (PEDF), the effect is significant [5], and to date, there has been no attempt to optimize for therapeutic benefit. Understanding the mechanistic origin of these phenomena may lead to greater understanding of neuroprotection, photoreceptor degeneration, and the requirements for future therapy. There are clinical settings (e.g., chronic disease) in which prolonged transgene expression may be desired.
Currently, the risk of prolonged gene expression is unknown for most proteins and must be evaluated on a protein-by-protein basis. In the eye, for example, even increased expression of wildtype rhodopsin or peripherin/rds can result in the degeneration of photoreceptors [6,7]. Expression changes induced by the vector platform itself are potentially long-term results of vector administration that must be considered. It is conceivable that platform-induced gene expression changes could persist for longer than those induced by transgene expression and should be understood. Replication-deficient AdV vectors are typically characterized by a large capacity, short latency, transduction of both dividing and nondividing cells, high expression levels, and relative ease of production. AdV vectors, however, induce a multigene response, including dose-dependent ocular immune responses that are associated with inflammation and shorter periods of expression [8][9][10]. Among the genes induced by AdV vectors is nuclear factor kappa B (NFkB), a key regulator of apoptosis [11]. We have previously reported that intraocular AdV-mediated gene transfer of PEDF significantly increased retinal cell survival following retinal ischemia-reperfusion injury [12] and light-induced photic injury [5]. The protective effects of PEDF were in part attributed to the modulation of apoptotic pathways [5,12]. The molecular basis of a separate protective effect noted with the empty (E1−, E3−, E4−) AdV vector is not yet understood and is evaluated in this study.

Animals
Female BALB/c mice were used at 4 to 8 weeks of age. The animals were housed under a 12-h (7:00 A.M. to 7:00 P.M.) light/dark cycle with 60 lux at the center of the cage prior to the start of experiments. The animals were anesthetized by intramuscular injection of 80 mg/kg of ketamine hydrochloride. All of the animals were treated under deep sedation in accordance with the Association for Research in Vision and Ophthalmology resolution on the use of animals in research. Light-induced retinal degeneration alone was induced in six control mice by a predetermined level of fluorescent light exposure sufficient to induce degenerative change. Twenty-four mice received intravitreous injection of adenoviral vectors or vehicle (3% trehalose) followed by light exposure.

Adenoviral vectors and intraocular injection procedures
The vectors are deleted for E1A, E1B, E3, and E4 and lack an inserted transgene. The specific empty vector has been reported in a prior publication as AdNull.11 (GenVec, Gaithersburg, MD, USA) [13]. Mice in the experimental group were injected prior to excessive light exposure. Mice received either no injection, vehicle injection, or intravitreous injection of 1 × 10⁹ particles of AdNull.11. Intravitreous injection was performed with a Hamilton syringe fitted with a 33-gauge beveled needle (Hamilton, Reno, NV, USA). The needle was passed through the sclera at the equator into the vitreous cavity. The injection occurred with direct observation of the needle in the center of the vitreous cavity. Eyes with intraocular hemorrhages, lens trauma, or other complications during the viral injection were excluded from this study.

Exposure to light
Immediately after vector delivery, the mice were dark adapted for 3 or 72 h. Following dark adaptation, all eyes were carefully examined by slitlamp biomicroscopy and indirect ophthalmoscopy, to rule out the presence of ocular injury or toxicity. Eyes with any signs of trauma or inflammation were excluded from further study.
The animals were then housed in an animal cage that was surrounded on all sides by commercially available fluorescent tubes (EFD21EN, Toshiba, Tokyo). Light exposure was continuous at a constant 2,500 lux as measured at the center of the cage. No area of the cage allowed avoidance of the diffused 2,500-lux light. The temperature in the center of the cage during the period of light exposure was maintained at room temperature. All experiments were conducted in a well-ventilated space. The animals had access to water at all times. Animals in the control and treatment groups were exposed to identical environmental conditions throughout the experimental period.

Morphometric analysis
Eyes with or without viral vector administration prior to light exposure were enucleated after 96 or 144 h of light exposure. All eyes were immediately fixed in 4% paraformaldehyde in phosphate-buffered saline (PBS) for 60 min. After rinsing with PBS, the eyes were oriented in optimum cutting temperature embedding compound (OCT; Miles Diagnostics, Elkhart, IN, USA) with the cornea facing forward and with 12 o'clock positioned superiorly, and then snap frozen in liquid nitrogen, after which they were stored at −80°C until sectioning. At cryosectioning, five serial sections (10 μm), beginning at the superior edge of the optic nerve, were obtained at 100-μm intervals. In sections including the optic nerve, the optic nerve tissue was excluded from cell counts. All specimens were processed using hematoxylin staining (Contrast-blue, KPL Laboratory, Gaithersburg, MD, USA). The number of nuclear cells in the outer nuclear layer (ONL) was counted in two sample areas, in each of ten standard sections, per eye. The areas to be counted were assigned in a standard fashion such that retina located approximately 200 μm from the optic nerve, lacking artifacts such as retinal detachment, tissue distortion, and staining artifact, was used. The mean ONL cell count was then calculated for each eye and analyzed statistically.

Apoptosis-related gene expression analysis
Eyes treated with no injection, vehicle injection, or injection with AdV vector were enucleated after 12 h of continuous light exposure. Naïve eyes, without injection or light exposure, were also enucleated following 15 h of dark adaptation for gene expression analysis. The retina was removed from experimental eyes and immediately frozen in liquid nitrogen. Retinal tissue was stored at −80°C until ribonucleic acid (RNA) preparation. Total retinal RNA was isolated by the acid guanidine thiocyanate-phenol-chloroform extraction method using TRIzol® (Invitrogen, Carlsbad, CA, USA). We utilized DNaseI (RNase-free; TAKARA BIO) to remove genomic deoxyribonucleic acid (DNA) contamination. Two hundred nanograms of total RNA was applied to reverse transcription with 25 U of SuperScript II reverse transcriptase (Invitrogen) in a thermal cycler (GeneAmp PCR system 9700, Applied Biosystems) to generate complementary DNA (cDNA). Quantitative reverse transcriptase polymerase chain reaction (qRT-PCR) was carried out with 10 ng of cDNA using Assay-on-Demand™ Gene Expression assays for 12 apoptosis-related genes (p53, c-Fos, c-Jun, Bad, Bak, Bax, Bcl-2, Caspase-1, Caspase-3, IkB, p50NFkB, and p65NFkB). We used acidic ribosomal phosphoprotein P0 (Applied Biosystems) as an endogenous control gene [14]. qRT-PCR was performed in triplicate for each sample with a commercial system (ABI PRISM® 7900HT Sequence Detection System, Applied Biosystems).
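The relative expression values reported in the next section are derived with the comparative Ct (ΔΔCt) normalization against the ARBP/P0 reference gene described above; a minimal sketch of that step follows, with hypothetical Ct values.

```python
# Sketch of the comparative Ct (delta-delta Ct) normalization: each target gene is
# normalized to the endogenous control and expressed relative to baseline eyes.
# NOTE: the Ct values and the choice of target gene below are hypothetical.
ct = {
    # gene: (mean Ct in light-exposed retina, mean Ct in baseline retina)
    "c-Fos": (24.1, 27.3),
    "ARBP":  (18.0, 18.1),   # endogenous control (acidic ribosomal phosphoprotein P0)
}

d_ct_sample   = ct["c-Fos"][0] - ct["ARBP"][0]     # delta Ct in the treated sample
d_ct_baseline = ct["c-Fos"][1] - ct["ARBP"][1]     # delta Ct in the baseline sample
fold_change = 2.0 ** -(d_ct_sample - d_ct_baseline)  # relative expression, arbitrary units
print(f"c-Fos relative expression vs. baseline: {fold_change:.1f}-fold")
```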
The expression level of each gene was assigned arbitrary units (relative to baseline samples) using previously described comparative Ct methods [15,16].

Statistical analysis
Statistical analysis of all eyes was performed using the paired or unpaired t test. p values less than 0.05 were prospectively assigned as the value required for the reporting of significance.

Effect of intravitreous injection of empty AdV in light-induced photoreceptor cell death
Three days prior to initiation of light exposure, female BALB/c mice received no injection or intravitreous injection of 1 × 10⁹ particles of empty AdV. Typical histologic changes are shown in Fig. 1. In uninjected eyes, the photoreceptor cell density and the thickness of the ONL were predictably reduced corresponding to the time course of light injury (Fig. 1a-c). In contrast, photoreceptor cell density was relatively preserved in eyes following intravitreous injection of empty AdV as compared to corresponding uninjected eyes (Fig. 1d,e). Morphometric analysis was performed to quantitatively compare photoreceptor cell counts in each group. The photoreceptor cell counts of uninjected eyes at baseline, 96 h, and 144 h were 108.8 ± 7.3, 79.3 ± 10.1, and 49.2 ± 4.6 (mean ± SD), respectively. Following intravitreous injection with empty AdV, the photoreceptor cell counts at 96 and 144 h were 87.5 ± 9.5 and 68.9 ± 10.0, respectively. These results represent significant preservation of the photoreceptor layer in eyes treated with empty AdV at 144 h (p = 0.016) but not at 96 h (p = 0.79; Fig. 2).

Apoptosis-related gene expression analysis
The apoptosis-related genes tested in this study were p53, c-Fos, c-Jun, Bad, Bak, Bax, Bcl-2, Caspase-1, Caspase-3, IkB, p50NFkB, and p65NFkB. The four anti-apoptosis genes were Bcl-2, IkB, p50NFkB and p65NFkB; the eight pro-apoptosis genes were p53, c-Fos, c-Jun, Bad, Bak, Bax, Caspase-1 and Caspase-3. Relative messenger RNA (mRNA) expression levels of c-Fos increased in the retinas with light exposure alone (p = 2.37 × 10⁻¹⁰) and with light exposure and vehicle injection (p = 0.017) when compared to untreated baseline eyes. Relative mRNA levels of c-Fos in the retinas treated with light exposure and empty AdV injection were preserved and less than those in the retinas with light exposure alone (p = 1.51 × 10⁻⁷) and with light exposure and vehicle injection (p = 0.036; Fig. 3a). The results for relative mRNA expression levels of c-Jun were similar to those for c-Fos. c-Jun gene expression levels were induced and significantly higher in the retinas with light exposure alone (p = 3.10 × 10⁻⁵) and with light exposure and vehicle injection (p = 1.16 × 10⁻³) when compared to untreated baseline eyes. The intravitreous injection of empty AdV in eyes with light exposure inhibited the induction of mRNA expression of c-Jun relative to the retinas with light exposure alone (p = 7.69 × 10⁻⁴) and with light exposure and vehicle injection (p = 0.016; Fig. 3b). The mRNA expression levels of the other genes tested (p53, Bad, Bak, Bax, Bcl-2, Caspase-1, Caspase-3, IkB, p50NFkB, and p65NFkB) were not significantly changed between eyes with light exposure and vehicle injection and eyes with light exposure and empty AdV injection (Table 1).

Discussion
The molecular basis of a retinal neuroprotective effect associated with intravitreous delivery of an empty AdV vector, in the setting of intense retinal light exposure, is not yet well understood. The photoreceptor response to light injury is complex and the result of a multigenic gene response. Current understanding in this area is reviewed by Wenzel et al. [17].
Acute bright light exposure is reported to induce changes in the mitochondrial membrane potential that may be associated with the induction of photoreceptor apoptosis [18]. Bcl-2 family members are known to regulate mitochondrial membrane permeability and integrity [19,20]. However, the ameliorative effect of Bcl-2 overexpression in a transgenic model following excessive light exposure is incompletely understood, with points of controversy remaining [21,22]. Reports indicating that the ablation of the proapoptotic Bcl-2 family members Bax and Bak protects the retina against light damage support a neuroprotective effect for Bcl-2 [23]. In general, photo-oxidative stress is believed to downregulate NFkB via involvement of caspase-1, resulting in apoptosis of photoreceptor cells [24,25]. In this study, gene expression changes show significant suppression of c-Fos and c-Jun upregulation, both constituents of the transcription factor AP-1. Intensive visible light exposure induces apoptotic photoreceptor cell death by activation of the transcription factor AP-1, and AP-1 activation is believed to be essential for light-induced photoreceptor apoptosis [17]. AP-1 is a complex that consists either of heterodimers of members of the Fos and Jun families or homodimers of members of the Jun family [26,27]. Light exposure induces complexes of c-Fos, c-Jun, and JunD proteins [28], but JunD is not essential for retinal light damage [29]. The absence of c-Fos is reported to completely prevent light-induced apoptotic photoreceptor cell death [30]. Grimm et al. have evaluated activation of several apoptosis-related genes during light-induced photoreceptor degeneration in wild-type mice (strain 129SV/Bl6 or BALB/c) and found that intensive light exposure induced c-Fos and c-Jun gene upregulation [31]. Although the settings (duration and intensity) of light exposure were different from those in this study, these findings are consistent with those reported here. Lastly, while the Grimm study observed upregulation of the caspase-1 gene [31], we did not detect significant change in caspase-1 gene expression following empty AdV injection. Reichel et al. have reported that an AdV vector expressing the β-galactosidase reporter gene had a protective effect in the rd mouse model of retinal degeneration [4]. While it was not determined whether the protective effect was related to the β-galactosidase protein or the vector, it was negated by immune suppression with depletion of both CD4+ and CD8+ T cells. They therefore hypothesized that the immune response to vector and/or transgene products was protective. The vector used in the current study is a human adenoviral vector, serotype 5, similar to that used in the study by Reichel et al. [4]. We may therefore speculate that the downregulation of c-Fos and c-Jun could result, at least in part, from immune responses initiated by intraocular injection of the AdV vector. AdV vectors have been tested in human subjects [32,33], and safety data are available from a phase I clinical trial [33]. An inflammatory response has been considered a disadvantage of this vector platform, but induced immune responses may also have beneficial effects in the setting of retinal degeneration. We have previously demonstrated that the intravitreous injection of an AdV vector with a genetic backbone similar to the vector used in this study resulted in the transduction of cells predominantly in the iris, cornea, and ciliary body but not in photoreceptors [34].
It is interesting to note that several studies have demonstrated that the AdV vector induces modification of endogenous multigene expression [34][35][36][37]. We thus hypothesize that the intravitreous injection of the AdV vector modifies the endogenous gene expression in the transduced cells of the eye, which might then secrete a neuroprotective protein. In future experiments, we will test this hypothesis and others regarding the mechanism of the effect of the AdV vector on neuroprotection, which could provide additional opportunities for the development of new treatments. In summary, our data indicate that intravitreous injection of an E1−, E3−, E4− AdV vector increases photoreceptor cell survival following intense light exposure. Associated gene expression changes suggest that the protective effect involves suppression of the transient upregulation of the c-Fos and c-Jun genes, both constituents of transcription factor AP-1. The findings provide insight into AdV vector-induced neuroprotective pathways associated with photoreceptor rescue and may have eventual therapeutic implications for retinal degenerations.
Zebrafish homologs of genes within 16p11.2, a genomic region associated with brain disorders, are active during brain development, and include two deletion dosage sensor genes

SUMMARY
Deletion or duplication of one copy of the human 16p11.2 interval is tightly associated with impaired brain function, including autism spectrum disorders (ASDs), intellectual disability disorder (IDD) and other phenotypes, indicating the importance of gene dosage in this copy number variant (CNV) region. The core of this CNV includes 25 genes; however, the number of genes that contribute to these phenotypes is not known. Furthermore, genes whose functional levels change with deletion or duplication (termed 'dosage sensors'), which can associate the CNV with pathologies, have not been identified in this region. Using the zebrafish as a tool, a set of 16p11.2 homologs was identified, primarily on chromosomes 3 and 12. Use of 11 phenotypic assays, spanning the first 5 days of development, demonstrated that this set of genes is highly active, such that 21 out of the 22 homologs tested showed loss-of-function phenotypes. Most genes in this region were required for nervous system development – impacting brain morphology, eye development, axonal density or organization, and motor response. In general, human genes were able to substitute for the fish homolog, demonstrating orthology and suggesting conserved molecular pathways. In a screen for 16p11.2 genes whose function is sensitive to hemizygosity, the aldolase a (aldoaa) and kinesin family member 22 (kif22) genes were identified as giving clear phenotypes when RNA levels were reduced by ∼50%, suggesting that these genes are deletion dosage sensors. This study leads to two major findings. The first is that the 16p11.2 region comprises a highly active set of genes, which could present a large genetic target and might explain why multiple brain function, and other, phenotypes are associated with this interval. The second major finding is that there are (at least) two genes with deletion dosage sensor properties among the 16p11.2 set, and these could link this CNV to brain disorders such as ASD and IDD.

A key general question is: how do genes in a CNV contribute to an associated disorder? In the most direct case, duplication or deletion of a gene would increase or decrease cognate RNA and protein levels proportionally, and this change would be pivotal in the development of the pathology (Nord et al., 2011). We term genes with such properties 'dosage sensors'. In other cases, structural effects caused by the chromosomal rearrangement might contribute to a phenotype (Ricard et al., 2010). For the 16p11.2 region, a mouse model in the syntenic region shows that levels of brain expression in 79% of genes are affected by deletion or duplication, as predicted.

Conservation and expression of zebrafish 16p11.2 homologs
In order to use the zebrafish as an effective tool for functional analysis of the 16p11.2 CNV, we used the strategy shown in Fig. 1A. The 16p11.2 core spans 593 kbp and includes 25 protein-coding genes. In total, 21 of these genes in the human interval were identified in the zebrafish genome (Fig. 1B). Of the remaining genes, SPN, TMEM and C16ORF54 are limited to mammals. SPN has a regulatory role in adaptive immunity (Kyoizumi et al., 2004), whereas TMEM and C16ORF54 are of unknown function, and all three are of unknown importance in neurodevelopment.
Finally, QPRT has a teleost homolog in fugu (52% identity to the human protein) and medaka (54% identity to the human protein), suggesting that a zebrafish homolog exists, but is not yet annotated in the genome. Zebrafish homologs of 16p11.2 were found to be clustered on either chromosome 3 or 12, previously identified as the genomic counterparts of human chromosome 16 (Taylor et al., 2003), with the exception of ino80e, which is located on chromosome 16 (supplementary material Table S1). These regions are not syntenic with human chromosome 16, because gene order is not conserved. Two sets of syntenic genes were found on chromosome 3: one region comprises kctd13, sez6l2 and asphd1, for which the order is not conserved with that in humans; and the other comprises mapk3, gdpd3 and ypel3, for which the order is conserved. Interestingly, the first set of syntenic genes (kctd13, sez6l2 and asphd1) is found in a microdeletion associated with ASD (Crepel et al., 2011). Five homologs were present in two copies [aldolase a (aldoa), fam57b, gdpd3, ppp4c and taok2], reflecting the partial duplication of the teleost genome (Fig. 1B) (Postlethwait et al., 2000). Genes in each pair had similar but not identical sequences (supplementary material Table S1), with one found on zebrafish chromosome 12 and the other on chromosome 3. Such duplication might result in split or divergent function of the gene (Yamamoto and Vernier, 2011). These data indicate that the zebrafish genome includes homologs of 84% of the human 16p11.2 core genes, arranged primarily on two homologous chromosomes. Homologs were all expressed during the first 48 hours of development, as the brain and other organs are forming, with almost all genes showing some expression in the brain. Expression data, chromosomal arrangement of the genes and their sequence conservation with their human counterparts indicated that this gene set was appropriate for further analysis.

[Fig. 1 legend: The mapk3, gdpd3 and ypel3 loci are syntenic, whereas the kctd13, sez6l2 and asphd1 genes are grouped, but their order on the human chromosome is different. The cluster of tbx24, ppp4cb and aldoab genes has conserved order, but the region includes intervening genes. Single fish icon, single homolog; two fish icons, multiple homologs; blue dot, teleost homolog, but no Danio rerio homolog; red box, no teleost homologs identified; black bar, synteny; H. s., Homo sapiens; D. r., Danio rerio; Chr., chromosome.]

Changes in brain and body morphology accompany loss of function in 16p11.2 homologs
In order to determine which zebrafish 16p11.2 homologs were active as the brain formed and began to function, we screened these for activity during brain development, from 24 hpf through 5 days post-fertilization (dpf), the human developmental equivalent of 5 weeks of gestation to toddlerhood. Loss of function (LOF) was performed by injection of antisense morpholino oligonucleotides (MOs) into one- to two-cell embryos. Where possible, MOs binding to an exon-intron boundary were utilized (Table 1; supplementary material Table S2), to target zygotic RNA. Where a splice site MO did not give a phenotype, an MO directed against the translational start site was tested to determine whether maternal RNA could have prevented observing a phenotype with a splice site MO (Table 1; supplementary material Table S2). The resulting effects of MO action on RNA coding capacity and the predicted protein are included in Table 2.
Where two copies of a gene had been characterized, in some cases, only one was highly expressed in the brain and this was assayed for a LOF phenotype. In the case of taok2, both genes showed strong brain expression (supplementary material Fig. S1), and both were assayed. Thus, 22 genes were tested for LOF phenotypes. MO methodology is rapid and allows functional analysis of many genes; however, specificity of phenotypes associated with MOs was carefully tested because off-target effects can sometimes be observed (Bedell et al., 2011;Eisen and Smith, 2008). The criteria employed to test specificity were as follows, and are described more fully in Methods, with results documented in Table 1 and quantified in Tables 2 and 3. First, for MOs targeting a splice donor or acceptor site, a change in RNA splicing and coding capacity should be observed. Second, the corresponding amount of normal RNA should be reduced, and a phenotype should correlate with RNA reduction. Use of splice site MOs allows these assays to be performed quantitatively, because normal RNA can be distinguished from abnormally spliced RNA. Third, a key assay for specificity is the ability of RNA derived from the corresponding human or zebrafish cDNA to prevent the LOF phenotype, when co-injected with the appropriate MO. Such 'rescue' RNAs do not contain the MO-binding site, owing to species differences or because the MO-binding site lies across a splice junction. Fourth, off-target effects of MOs might cause cell death, which can effectively be suppressed by injection of a p53 MO (Robu et al., 2007). In cases in which severe phenotypes were observed, the effect of p53 suppression is tested and, if a resulting phenotype was milder, it is the one scored. Finally, a phenotype obtained with MOs was compared with that of mutants (or with that induced by shRNAs) to test similarity of phenotypes. Because mutants are available for a very limited number of genes, we focused on MO-mediated LOF, which is the most feasible way to assay the activity of the large 16p11.2 gene set. LOF embryos were first examined at 24 hpf for brain morphology, after injection of the brain ventricles with Texas Red dextran (Gutzman and Sive, 2009), and scored for brain shape, presence of forebrain, midbrain and hindbrain hingepoints, brain ventricle size, forebrain truncation, and eye morphology. Tail and body phenotypes were assayed as additional indicators. For each gene, each phenotype reported was observed in at least two independent experiments, and observed for at least 70% of embryos examined, using MO amounts that had been titrated (Table 2) and gave a clear phenotype (with quantification presented in supplementary material Table S3). Strikingly, LOF for almost all genes (20 out of the 22 assayed), with the exception of prrt2 and tbx24, led to changes in brain or eye morphology ( Fig. 2A, Fig. 3; supplementary material Table S3). These phenotypes are characterized further in Fig. 2B,C. tbx24 LOF was associated only with a tail phenotype, consistent with a lack of brain expression of this gene, and with the mutant phenotype (supplementary material Fig. S2) (Thisse and Thisse, 2005). Thus, a total of 21 out of the 22 genes examined gave a LOF phenotype. In addition to their brain phenotypes, LOF in all but six genes (cdipt, doc2a, fam57ba, kctd13, prrt2 and taok2a) led to tail or body defects, including failure of the yolk cell to extend, a short bent tail and abnormally shaped muscle segments ( Fig. 2A, Fig. 3; supplementary material Table S3). 
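The inclusion rule applied here (a phenotype observed in at least 70% of embryos, in at least two independent experiments) can be made explicit with a small amount of bookkeeping. The sketch below is illustrative only: the per-experiment counts are invented and are not the scoring data behind Table S3.

```python
# Sketch: check the ">=70% of embryos affected, in >=2 independent experiments"
# inclusion criterion described in the text. Counts below are invented.

PENETRANCE_CUTOFF = 0.70
MIN_EXPERIMENTS = 2

# gene -> list of (affected, total) per independent experiment (hypothetical data)
scoring = {
    "aldoaa": [(41, 50), (78, 90), (66, 80)],
    "prrt2":  [(3, 60), (5, 55)],
}

def passes_criterion(experiments):
    ok = [affected / total >= PENETRANCE_CUTOFF for affected, total in experiments]
    return sum(ok) >= MIN_EXPERIMENTS

for gene, experiments in scoring.items():
    rates = ", ".join(f"{a}/{t} ({a / t:.0%})" for a, t in experiments)
    verdict = "include" if passes_criterion(experiments) else "exclude"
    print(f"{gene}: {rates} -> {verdict}")
```

With these made-up numbers the first gene clears the threshold in all three experiments and the second never does, mirroring the distinction drawn in the text between genes with reproducible loss-of-function phenotypes and those, such as prrt2, without one.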
Several genes gave very strong phenotypes, suggesting early embryonic defects. In particular, coro1a, ino80e, mapk3 and mvp 'morphants' (defined as LOF embryos caused by MO injection) showed abnormal body length and defective neural tubes. mapk3 LOF embryos showed a defective forebrain and eyes, and a short body, consistent with a recent study (Krens et al., 2008). coro1a, maz and fam57ba morphants have small eye cups with protruding lenses (Schmitt and Dowling, 1994). LOF in asphd1 and sez6l2 gave weak phenotypes, whereas no phenotype was observed after prrt2 LOF. Because prrt2 is not highly expressed until 48 hpf, gene function could be required later (supplementary material Fig. S1A). Two clear groups of brain morphology phenotypes were apparent (Fig. 2B,C). LOF in the first group of genes was associated with reduced brain ventricle size, and the midbrain-hindbrain boundary (MHB) was less sharply defined than in controls. This group includes c16orf53, cdipt, doc2a, hirip3, kctd13 and taok2b (Fig. 2B), which have not previously been shown to regulate brain morphology, although cdipt is required later in lens and Zebrafish 16p11.2 homologs RESEARCH ARTICLE photoreceptor development in zebrafish (Murphy et al., 2011), and Doc2a is a modulator of synaptic transmission in mice (Yao et al., 2011). These phenotypes could result from changes in neuroepithelial specification, morphogenesis, or a reduction in cerebrospinal fluid volume (Gato and Desmond, 2009;Lowery and Sive, 2009). Table S3). f MOs were administered in titrations. For splice site MOs where the mass correlated with an increase in phenotype severity, qPCR was performed to measure the normal RNA (Fig. 6). g RNA was co-injected with MOs to confirm phenotypes were specific effects of the MO (Fig. 3, Table 3). h Similar phenotypes were observed for additional knockdown methods ( Fig. 7; supplementary material Fig. S2). i A decrease in protein was also observed by western blot analysis of 24 hpf embryos ( Fig. 6 and Methods). j shRNA was used to target deletion dosage sensors. Similar phenotypes were observed (Fig. 7). k Mutant lines were available for cdipt and tbx24, and similar phenotypes were observed (supplementary material Fig. S2 and Methods). l No splicing change was observed in RNA. m The rescue for the tbx24 MO was previously published in Nikaido et al. (Nikaido et al., 2002). 1,2,3 mvp MOs 1, 2 and 3, correspond to those included in Table 2 and supplementary material Tables S2 and S3, respectively, although no mvp LOF conditions were rescued. ND, not determined; NA, not applicable. RESEARCH ARTICLE A second group of genes leading to defective brain morphology after LOF showed a straight midbrain (Fig. 2C), where the midbrain hingepoint was essentially absent. This phenotype was seen in aldoaa, fam57ba, gdpd3, maz, ppp4ca and ypel3 morphants, a group of genes with previously undefined contributions to brain development. gdpd3, ppp4ca and ypel3 LOF embryos showed a wider opening at the MHB, relative to the narrowing seen in control embryos. A subset of embryos from both groups showed a narrowed forebrain, such as those seen in aldoaa, fam57ba and maz LOF embryos, whereas the area rostral of the eyes was reduced after fam57ba, gdpd3, hirip3 and maz LOF. Expression of pax2a at the MHB was normal, indicating correct specification of this region, and suggesting that later steps resulted in the phenotypes observed (not shown). Although the groupings shown in Fig. 
2B,C suggest a similar contribution to brain development by all genes in the group, close comparison reveals distinct phenotypes; for example, aldoaa and ypel3 morphants have similar midbrain phenotypes, but their MHB and hindbrain phenotypes were unique. For all but three genes, co-injection of RNA derived from the human cDNA together with the MO generally restored the phenotype, indicating fish-human orthology, and confirming the specificity of the phenotype (Tables 1, 3 and Fig. 4). For kctd13 and maz, the zebrafish, but not the human, gene rescued the LOF phenotype. mvp LOF gave a similar phenotype with three tested MOs, but was not reproducibly rescued by fish or human cDNAs, perhaps reflecting a stoichiometric requirement for other components with which mvp complexes (Berger et al., 2009). Embryos were scored as rescued if morphological and behavioral phenotypes were ameliorated in ~50% or more embryos (see Fig. 4 and Table 3). For strong phenotypes, such as gdpd3 and mapk3 LOF, rescues vastly improved brain and body morphology, but did not fully restore a wild-type phenotype. Assays for rescue included Table S1 were targeted in LOF experiments and quantification is included in Table 3 and supplementary material Table S3. b The MOs (supplementary material Table S2) were injected at the one-to two-cell stage in titration experiments. The mass of MO that was used for phenotype characterization, scoring (Figs 2, 3; supplementary material Table S3) and rescue experiments (Figs 4, 5 and Table 3) is indicated. c Splicing changes were monitored by RT-PCR (Methods) using primers indicated in supplementary material Table S2 and detected changes were sequenced to identify changes in mRNA, including frame shifts. d The predicted changes in protein, based on the sequencing data obtained, are included as the number of amino acids in truncated vs normal protein. Truncated proteins listed might contain abnormal amino acids, because amino acids encoded by an intron inclusion or a frame shift are predicted (in some cases) before an early stop codon. e The p53 MO was used to eliminate off-target effects where indicated. 1,2,3 The mvp MOs are included in the same order as in Table 1, and supplementary material Tables S2 and S3. 1 Indicates the mvp MO that was used for the LOF collation in Fig. 2A and Fig. 3. ND, not determined. RESEARCH ARTICLE observation of the shape of the brain, ventricle size, and eye morphology, movement and axon tracts. In summary, 21 out of the 22 zebrafish 16p11.2 homologs tested were required for early brain and/or body development, with the majority showing conserved function with the cognate human gene. The data therefore show that this set of genes is highly active during early development. Movement deficiency is associated with LOF in some 16p11.2 homologs We next assessed motor function as a read-out of neural circuitry, by assaying two early behaviors: spontaneous movement at 24 hpf and the touch response at 48 hpf as described in Methods (Liu et al., 2012;Naganawa and Hirata, 2011). LOF in seven genes [coro1a, fam57ba, gdpd3, hirip3, kinesin family member 22 (kif22), maz and ppp4ca] was associated with spontaneous movement defects. LOF in 14 genes led to a defect in touch response in at least 70% of embryos for each gene examined ( Fig. 3; supplementary material Table S3). In the most severe cases, aldoaa and fam57ba, LOF embryos exhibited no response to touch, and these severe phenotypes were rescued by co-injection with human RNA. 
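Rescue was scored against a separate threshold, noted above: amelioration of the morphological and behavioral phenotypes in roughly 50% or more of co-injected embryos. A minimal sketch of that call, again with invented counts, might be:

```python
# Sketch: call a LOF condition "rescued" when ~50% or more of embryos co-injected
# with MO plus cognate mRNA look normal. Counts below are invented.

RESCUE_CUTOFF = 0.50

conditions = {
    # condition label: (normal-looking embryos, total embryos injected)
    "aldoaa MO + mGFP balancer RNA": (4, 60),
    "aldoaa MO + human ALDOA RNA":   (38, 62),
}

for label, (normal, total) in conditions.items():
    fraction = normal / total
    status = "rescued" if fraction >= RESCUE_CUTOFF else "not rescued"
    print(f"{label}: {normal}/{total} normal ({fraction:.0%}) -> {status}")
```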
A sluggish response to touch, rather than the normal rapid C-bend and brief swim, was observed in coro1a, gdpd3, hirip3, kctd13, kif22, maz and ppp4ca morphants. The response to touch was improved by the addition of rescue RNA. Abnormal, U-shaped muscle segments and/or a short bent tail were seen in the spontaneous-movement-and touchresponse-defective aldoaa, coro1a, fam57ba, gdpd3, kif22 and hirip3 LOF embryos, perhaps explaining the movement defects ( Fig Table S3). These data indicate that multiple 16p11.2 homologs are required for normal motor activity, as reflected by spontaneous movement or touch-responsiveness. A subset of genes is required for normal axon tract development Motor deficiencies observed in LOF embryos led us to investigate whether axon tract formation was affected. Both forebrain and hindbrain axon tracts were analyzed by immunostaining for acetylated -tubulin and confocal imaging at 36 hpf, when initial scaffolding has formed (Fig. 5). Embryos with deficient axon tracts were seen in more than 80% of embryos, after LOF in each of six genes: coro1a, fam57ba, kctd13, kif22, mapk3 and ppp4ca (Fig. 5, and quantification in legend). LOF in all of these genes led to reduced and disorganized tracts; however, the kctd13 LOF forebrain tract phenotype was mild, and kif22 LOF hindbrain tracts appeared normal. For each gene, normal phenotypes were observed in at least 75% of embryos examined after co-injection of cognate human mRNA (or fish mRNA for kctd13) with the antisense MO, further supporting conservation of function and specificity of the axon tract phenotypes caused by MO injection (Fig. 5). In addition to brain axon tract deficiencies, we demonstrated that pigmentation, indicative of neural crest lineages that include Table S2) at the one-to two-cell stage (supplementary material Table S3). Genes assayed (supplementary material Table S1) are indicated above each set of images. 'Control' embryos were injected with control MO (see Methods). Brain ventricles were injected with Texas Red dextran, and bright-field and fluorescence images superimposed. Images are representative of the phenotypes observed in at least 70% of embryos, over two to seven independent experiments, with 50-350 embryos assayed in total per gene (supplementary material Table S3 RESEARCH ARTICLE peripheral nerves (Schilling and Kimmel, 1994), was abnormal after LOF in a subset of homologs (supplementary material Fig. S3). Some of these also presented with axon tract abnormalities (coro1a, fam57ba, mapk3 and ppp4ca). Interestingly, for six genes (coro1a, fam57ba, kctd13, kif22, mapk3 and ppp4ca) for which LOF gave a movement or touch response phenotype, an axon tract and/or pigmentation defect was also apparent (Fig. 3), perhaps connecting these phenotypes to the abnormal behavior Haffter et al., 1996;Marmigere and Ernfors, 2007). For four genes (maz, mvp, tbx24 and ypel3), a touch response phenotype seen after LOF was not accompanied by axon tract or pigmentation aberrations, implicating abnormal muscle activity in the phenotype. However, although axon tracts might appear normal, synaptic transmission could be defective. These data show that multiple 16p11.2 homologs are necessary for normal axon tract development -suggesting deficits in the formation of neuronal precursors, guidance or fasciculation -and that axonal deficiencies could be linked to motor phenotypes. 
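The cross-phenotype comparison made in this paragraph (which genes combine a movement or touch-response defect with an axon-tract and/or pigmentation defect) amounts to a simple set operation over per-gene phenotype lists. The sketch below encodes only the genes named in the text above; it is not a full transcription of Fig. 3.

```python
# Sketch: find genes whose movement/touch-response defect co-occurs with an
# axon-tract and/or pigmentation defect. The phenotype sets below are a partial,
# illustrative transcription of genes named in the text, not the full Fig. 3 data.

phenotypes = {
    "coro1a":  {"movement", "axon_tracts", "pigmentation"},
    "fam57ba": {"movement", "axon_tracts", "pigmentation"},
    "kctd13":  {"movement", "axon_tracts"},
    "kif22":   {"movement", "axon_tracts"},
    "mapk3":   {"movement", "axon_tracts", "pigmentation"},
    "ppp4ca":  {"movement", "axon_tracts", "pigmentation"},
    "maz":     {"movement"},
    "mvp":     {"movement"},
    "tbx24":   {"movement"},
    "ypel3":   {"movement"},
}

movement_plus_wiring = sorted(
    gene for gene, p in phenotypes.items()
    if "movement" in p and ({"axon_tracts", "pigmentation"} & p)
)
movement_only = sorted(
    gene for gene, p in phenotypes.items()
    if "movement" in p and not ({"axon_tracts", "pigmentation"} & p)
)

print("movement defect with axon/pigment defect:   ", movement_plus_wiring)
print("movement defect without axon/pigment defect:", movement_only)
```

Running this reproduces the two groups discussed above: six genes in which motor defects co-occur with axonal or pigmentation abnormalities, and four in which they do not.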
Identification of aldoaa and kif22 as deletion dosage sensor genes A major goal of this study was to determine whether any 16p11.2 homologs had the properties of deletion dosage sensors, which could be associated with IDD, ASD and other phenotypes. We defined a deletion dosage sensor as a gene that gives a phenotype after a 50% decrease in expression, in accord with the simplest outcome of loss of one gene copy. Our initial assays for function of 16p11.2 homologs (Figs 2, 3) used MO concentrations that led to a clear phenotype; however, the associated decreases in RNA expression were not determined. In order to identify whether any of these genes are deletion dosage sensors, we assessed the lowest dose of MO that led to a phenotype, and quantified the amount of normal RNA remaining at this concentration using qPCR (Fig. 6). This approach was used for the 14 genes whose function was inhibited by splice site MOs, where abnormal splicing had been detected and the normal and abnormal transcript could be distinguished, using appropriate primers (Fig. 6B, Table 1; supplementary material Table S2). Owing to standard error intrinsic to the qPCR process, genes were designated putative deletion dosage sensors if a phenotype was observed when between 40% and 60% normal RNA remained, although a deletion dosage sensor could also be sensitive to smaller decreases in expression (>60% of RNA remaining). Most genes tested showed a LOF phenotype only when 25% or less normally spliced RNA remained (Fig. 6C). However, two genes, Table S3, and rescue conditions are shown in Fig. 4 and Table 3. a Genes assayed are indicated in the left column and gene identifiers are included in supplementary material Table S1. MOs targeting these genes are included in supplementary material Table S2. b Morphological analyses addressed head morphology (brain ventricle shape and eye formation), tail shape and length, and muscle segment shape (chevron vs U-shape). c Two types of movement were tested: spontaneous movement at 24 hpf and touch response at 48 hpf. The ino80e LOF embryos respond with one flip of the tail or not at all. mapk3 LOF embryos have a jerky response and the ypel3 LOF embryos move in small, jerky circles. mvp LOF embryos range from not responding to spinning in response to touch. Otherwise, touch response was weak, sluggish or not responsive. d Initial axon tracts form by 36 hpf and were assayed by immunostaining for acetylated tubulin. Axon tracts were not assayed in mvp LOF embryos owing to lack of rescue, in prrt2 LOF owing to lack of observable phenotype at these time points, and in tbx24 LOF owing to lack of head expression. e Pigmentation was observed at 48 hpf in LOF embryos and images are included in supplementary material Fig. S3. f Persistence of early phenotypic abnormalities was monitored up to 5 dpf. 1 The mvp MO used for the phenotype reported is indicated with matching superscript and listed first in Table 1 and supplementary material Table S2. Red boxes: abnormal phenotype in >70% embryos; speckled red boxes: abnormal phenotype in >70% of embryos, but the phenotype was mild. ND, not determined. RESEARCH ARTICLE aldoaa and kif22, were reproducibly associated with a phenotype when approximately 45% and 55% RNA remained (Fig. 6C), respectively, suggesting that these genes are deletion dosage sensors. Although RNA levels were initially quantified at 24 hpf, approximately 50% of normal levels were present also at 18 and 36 hpf (Fig. 
6D) (for aldoaa and kif22, respectively, 64% and 52% of normal levels were present at 18 hpf, and 67% and 41% of normal levels were present at 36 hpf ). Importantly, the decrease in RNA expression was mirrored at the protein level: western blot analysis showed a 56% and 65% loss of Aldoaa and Kif22 protein expression, respectively, after MO injection (Fig. 6E). The sensitivity of embryos to aldoaa and kif22 50% LOF led us to examine the phenotypes obtained in more detail. Thus, abnormal muscle segment formation (Fig. 2) was further characterized after phalloidin staining, which showed U-shaped muscle segments, as well as muscle fibers that were wavy and poorly aligned (Fig. 6F). Because the brains in both aldoaa and kif22 LOF embryos appeared narrow ( Fig. 2A), we investigated whether formation of neural progenitors was affected at 48 hpf, using a transgenic line with GFP driven by the promoter of NeuroD, a pan neuronal transcription factor (Obholzer et al., 2008;Ulitsky et al., 2011). After LOF in both aldoaa and kif22, NeuroD-promoter-driven expression of GFP was decreased in the eyes and optic tectum, whereas expression in the cranial neurons and pancreas seemed unaffected (Fig. 6G). These data indicate that most zebrafish homologs in the 16p11.2 cohort do not have the characteristics of deletion dosage sensors. However, the aldoaa and kif22 genes each showed a robust phenotype at 50% LOF, which might indicate the effects of genetic hemizygosity at these loci, and designate these genes as putative deletion dosage sensors. Tissue-specific shRNA expression indicates nervous system function for aldoaa and kif22 In order to address whether the effects of aldoaa and kif22 LOF were due directly to changes in expression within the brain, or secondarily, owing to effects on other tissues, shRNAs targeting these genes were expressed in the zebrafish brain. We used the central nervous system (CNS)-specific miR124 promoter (De Rienzo et al., 2011;Shkumatava et al., 2009) and expressed shRNAs from the miR30 backbone (see Methods) (Dong et al., 2009;De Rienzo et al., 2011;De Rienzo et al., 2012) (Fig. 7A). By testing four hairpins against either aldoaa or kif22, a targeting construct was identified for each gene. These constructs reduced normal RNA levels to 46% for aldoaa and 42% for kif22 when normalized to GFP in the whole embryo (reflecting total number of cells Mass of mRNA for rescue was determined based on a titration used for single cell embryo injections. In control and LOF conditions a balancing amount of membrane targeted GFP mRNA was co-injected to achieve the equivalent load in the rescue condition (Methods). Where more than one MO was tested per gene, the rescue of LOF experiment was performed using the first MO listed for a gene in supplementary material Percentage of rescued embryos relative to total embryos injected. f Two to four independent experiments were performed per rescue condition after the preferred titrated dosage was determined. g kctd13 and maz LOF phenotypes were rescued using zebrafish mRNA using the indicated masses. All other rescue mRNA masses refer to human mRNA. Disease Models & Mechanisms DMM Zebrafish 16p11.2 homologs RESEARCH ARTICLE expressing the shRNA) or 64% for aldoaa and 57% for kif22 when normalized to -actin in microdissected brain (reflecting total brain RNA) (Fig. 7B,D). For aldoaa, a phenotype very similar to that of the antisense MO was observed, such that touch response was highly defective (Fig. 7B) and the forebrain was narrow (Fig. 7C). 
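The dosage-sensor call described above reduces to two numbers per gene: the lowest MO dose that still gives a phenotype, and the fraction of normally spliced transcript remaining at that dose (qPCR, normalized to ef1α and expressed relative to control-MO-injected embryos). The sketch below shows one plausible way to go from raw CT values to that call using the common 2^(−ΔΔCT) approximation and the 40-60% window; the CT values are invented, the assumption of ~100% primer efficiency is ours, and the authors' exact normalization is described only at the level quoted in the Methods, so treat this as an illustration rather than their pipeline.

```python
# Sketch: relative quantification by the 2^(-ΔΔCT) method and a "deletion dosage
# sensor" call using the 40-60% normal-RNA window described in the text.
# CT values are invented and ~100% primer efficiency is assumed (not stated in
# the paper); this is an illustration, not the authors' analysis.

def percent_normal_rna(ct_gene_mo, ct_ref_mo, ct_gene_ctrl, ct_ref_ctrl):
    delta_mo = ct_gene_mo - ct_ref_mo        # ΔCT in MO-injected embryos
    delta_ctrl = ct_gene_ctrl - ct_ref_ctrl  # ΔCT in control-MO embryos
    return 100.0 * 2 ** -(delta_mo - delta_ctrl)

def dosage_sensor_call(percent_remaining, has_phenotype):
    in_window = 40.0 <= percent_remaining <= 60.0
    return has_phenotype and in_window

# Invented measurements at the lowest MO dose that still gives a phenotype.
examples = {
    "gene_A": dict(ct_gene_mo=24.9, ct_ref_mo=18.0, ct_gene_ctrl=23.8, ct_ref_ctrl=18.0),
    "gene_B": dict(ct_gene_mo=26.5, ct_ref_mo=18.0, ct_gene_ctrl=24.3, ct_ref_ctrl=18.0),
}

for gene, ct in examples.items():
    remaining = percent_normal_rna(**ct)
    call = dosage_sensor_call(remaining, has_phenotype=True)
    label = "putative deletion dosage sensor" if call else "not called a dosage sensor"
    print(f"{gene}: {remaining:.0f}% normal RNA remaining -> {label}")
```

With these invented numbers the first gene retains roughly half of its normal transcript and is flagged, while the second only shows a phenotype once expression has dropped to about a fifth of normal and is not, paralleling the distinction drawn above between aldoaa/kif22 and most of the other homologs.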
However, the tail and muscle segment phenotype was not observed, Fig. 4. Human orthologs rescue zebrafish LOF embryos. Single-cell embryos were injected with MO, either alone or together with human or fish mRNA, and imaged at 24 hpf after injecting Texas Red dextran into the brain ventricles. The restoration of a LOF phenotype to normal by co-injection of the human homolog with a zebrafish LOF indicates functional equivalence of the human and zebrafish gene (orthology). kctd13 (L 0 -L"') and maz (O 0 -O"') LOF phenotypes were rescued by zebrafish but not human RNA. (B 0 -T 0 ) Dorsal views and (B"-T") lateral views of LOF embryos. (B'-T') Dorsal views and (B"'-T"') lateral views of LOF embryos plus rescue mRNA. Human RNAs co-injected for rescue are indicated by uppercase letters, except for z.kctd13 and z.maz, which refer to RNA from the zebrafish genes. The rescue experiments shown in D, E and G are from the experiment shown in Fig. 2; in other cases, the images are taken from different experiments, with data consistent to that shown in Fig. 2. Images are representative of two to four independent experiments per gene, with 40-194 embryos assayed in total per gene. Rescue of the abnormal LOF phenotypes was achieved in ~50% or more of the embryos. Representative images are shown here and quantification is included in Table 3. F, forebrain ventricle; M, midbrain ventricle; H, hindbrain ventricle. RESEARCH ARTICLE in accord with nervous-system-specific expression of this shRNA. The kif22 RNAi phenotype was very similar to that seen with the antisense MO, including abnormal brain morphology and a bent tail (Fig. 7E). The persistent bent tail phenotype, even after nervoussystem-specific expression of the kif22 shRNA, probably reflects defective convergence and extension that could be modulated by kif22 activity in the spinal cord (De Rienzo et al., 2011). These shRNA data further confirm specificity of the aldoaa and kif22 phenotypes observed using MOs. Similar confirmation of phenotypes obtained from MO injection or genetic mutants was seen for the cdipt and tbx24 genes (supplementary material Fig. S2). In summary, the data demonstrate that the aldoaa and kif22 phenotypes were consistently seen after 50% LOF, and are due to activity of these genes in the brain. DISCUSSION This study has uncovered two previously unknown and fundamental aspects of genes within the 16p11.2 CNV. The first major finding is that the 16p11.2 set of genes is highly active and necessary for early development. This indicates that the set presents a large genetic target, helping to explain the penetrant association of 16p11.2 with multiple brain disorders and other phenotypes. The second major finding is that, among the 16p11.2 set, there are (at least) two single genes with deletion dosage sensor properties, which could link this CNV to ASD, IDD and other disorders. The finding that the majority of 16p11.2 homologs are required for normal embryonic nervous system and body development is consistent with the early onset of some of these disorders, which suggests that key genes underlying the disorder have developmental roles. Alternatively, key disorder genes might govern the maintenance of aspects of the brain, or lead to a postnatal change in brain function. 
The finding that LOF for a majority of these genes results in persistent phenotypes through 5 dpf could imply that these are important for maintenance and continued function; however, we did not distinguish whether different mechanisms underlie the early and later phenotypes. We further note that LOF phenotypes observed after extensive RNA knockdown will generally be more severe than those seen in hemizygous mutant fish or human patients. Thus, developmental phenotypes might be present after extensive knockdown, whereas, after partial LOF, later brain function might be altered, with no apparent change to development. By comparison with zebrafish genetic screens, in which approximately 10% of genes give an embryonic phenotype after mutation Schier et al., 1996), 95% of the genes tested here gave a LOF phenotype, almost all involving the brain, suggesting that the multitude of phenotypes associated with the 16p11.2 CNV reflects the activity of many genes. Although the MOs used for LOF assays might sensitize the embryo owing to their unusual nucleic acid backbone, the observed phenotypes were specific and, where tested, were similar to that of genetic mutants [as seen in other studies (e.g. De Rienzo et al., 2011)]. Accordingly, the LOF phenotypes for each gene that we examined formed a unique phenotypic signature: in some cases, abnormalities were seen across the entire spectrum of assays (for example, coro1a and fam57ba), whereas, for other genes, only a subset of phenotypes was observed. Overall, it is not clear whether the mammalian set of 16p11.2 genes is as active as the fish gene set; however, for 16p11.2 homolog activity that has been reported, mouse and fish LOF phenotypes are often similar. Mouse null knockouts have been reported for only eight 16p11.2 homologs (Coro1a,Doc2a,Kif22,Mapk3,Mvp,Ppp4c,Sez6l2 and Tbx6) (Chapman and Papaioannou, 1998;Foger et al., 2011;Miyazaki et al., 2006;Mossink et al., 2002;Ohsugi et al., 2008;Pages et al., 1999;Sakaguchi et al., 1999;Shui et al., 2007), with one conditional knock out (Prrt2) (Skarnes et al., 2011). All of the homozygous knockouts are associated with phenotypes, except for Sez6l2, which has other copies that are predicted to compensate (Miyazaki et al., 2006), and Mvp (Mossink et al., 2002). Hindbrain and forebrain tracts were affected in all LOF conditions, except kctd13, which only showed defects in the hindbrain, and kif22, which only showed defects in the forebrain. The rescues with cognate RNA led to rescue in 75-100% of embryos assayed. Human RNAs co-injected for rescue are indicated by uppercase letters, except for z.kctd13, which refers to RNA from the zebrafish gene. 'Control' embryos were injected with control MO (see Methods). ac, anterior commissures; sot, supra optic tract; poc, post optic commissure; tpoc, tract of the post optic commissure; tpc, tract of the posterior commissure; r2, r4, r6, rhombomeres 2, 4, and 6; asterisk, reduced or disorganized ac; white arrowhead, reduced or disorganized sot; white arrow, reduced tpc; open arrowhead, reduced or disorganized tpoc; dotted arrow, reduced poc. RESEARCH ARTICLE Both Ppp4c and Tbx6 homozygous knockout mice are embryonic lethal; Ppp4c heterozygotes showed growth retardation with decreased survival, whereas Tbx6 heterozygotes were viable and displayed no obvious phenotypes (Chapman and Papaioannou, 1998;Shui et al., 2007). 
By contrast, tbx24 mutant zebrafish are viable (Nikaido et al., 2002), whereas our ppp4ca knockdown fish still had an abnormal phenotype at 5 dpf, implying that they would not survive. Coro1a mutant mice have lower T cell counts, owing to defective migration (Foger et al., 2006) and increased apoptosis (Mueller et al., 2011), whereas Mapk3 mutant mice have a reduced number of thymocytes (Pages et al., 1999). We did not evaluate zebrafish immune response; however, coro1a LOF zebrafish showed highly abnormal axon tracts, consistent with a migration defect. Doc2a knockout mice exhibit defects in excitatory synaptic transmission and long-term potentiation (Sakaguchi et al., 1999), whereas knockdown of doc2a in zebrafish resulted in defective brain morphology, but apparently normal motor responses and axon tracts. Because the mice were studied at later stages, similar phenotypes could develop in older fish. Mapk3 mutant mice, in combination with Mapk1 deficiency, exhibit defective neurogenesis (Satoh et al., 2011) and, similarly, we observed defective axon tracts in mapk3 LOF zebrafish. Given the broad set of phenotypes that we observed, it seems likely that the extensive postnatal lethality of 16p11.2-region deletion mice is a result of the compound hemizygosity of multiple genes. Together, the data implicate the activity of many 16p11.2 genes in development and/or function of the brain and body. The second major finding from this zebrafish study, definition of the first dosage sensor genes in the 16p11.2 CNV, is groundbreaking because dosage sensor genes that are pivotal for association with mental health disorders have been identified in only a handful of CNVs. These include SHANK3 in 22q13 (Durand et al., 2007), RPA1 in 17q13.3 (Outwin et al., 2011), VIPR2 in 7q36.3 (Vacic et al., 2011) and MBD5 in 2q23.1 (Talkowski et al., 2011). We identified two genes in the 16p11.2 CNV, aldoaa and kif22, for which a phenotype is observed after reducing their expression level by ~50% in fish, thus showing characteristics of deletion dosage sensors. ALDOA is a glycolytic enzyme that catalyzes the conversion of fructose-1,6-bisphosphate to glyceraldehyde-3-phosphate and Fig. 6. Identification of deletion dosage sensor genes. (A)Strategy for identification of deletion dosage sensor genes. MOs designed against splice sites are titrated to find the lowest amount resulting in a phenotype, with at least 70% penetrance, and normal RNA remaining at this MO concentration is determined. A 'deletion dosage sensor' is defined as a gene for which a phenotype is observed when ~50% of the normal mRNA remains. (B)Strategy to quantify normal mRNA remaining in LOF embryos. An antisense MO is designed to an intron-exon boundary, and typically results in exon exclusion or intron inclusion (Table 2). qPCR primers are designed to detect the normally processed mRNA, where one primer in each set hybridizes to the normal but not the abnormally processed transcript. For, forward primer; Rev, reverse primer; Ex, exon; Intr, intron. (C)Percentage of normal mRNA remaining in 24 hpf LOF embryos. RNA levels were quantified by qPCR, normalized to ef1 and expressed relative to levels of experimental RNA in control-MO-injected embryos. LOF was performed at two MO concentrations, one that did not give a phenotype ('Low MO') and one that did ('High MO'). Genes assayed are indicated below the relevant histograms. 
(D)Quantification of normal aldoaa and kif22 mRNA after LOF in 18, 24 and 36 hpf embryos, at the same MO concentration used in panel C. qPCR was performed and RNA levels normalized to ef1 and expressed relative to levels of experimental RNA in control-MO-injected embryos. (E)Western blots of 24 hpf LOF embryos, at the same MO concentration used in panel C. Representative image of three experiments is shown. Protein was extracted from embryos injected with aldoaa or kif22 MOs. After LOF, 56% of Aldoaa (from head-dissected protein, thus Aldoaa-enriched; see Methods) and 65% of Kif22 (whole embryo) protein remains when normalized to control proteins (Eif4e and Gapdh, respectively), compared with control-MO-injected embryos. (F)Muscle segments in 24 hpf LOF embryos. Actin is stained with phalloidin, and muscle shape is indicated by white dotted lines. Over two experiments, chevron shape was abnormal in 0% of control-MO-injected embryos, 100% aldoaa LOF embryos and 100% kif22 LOF embryos (n10 for each condition). (G)GFP expression in the NeuroD:GFP line. 0% (n106) control embryos (injected with control MO), 94% (n97) aldoaa LOF embryos and 100% (n100) kif22 LOF embryos were affected, as observed over four independent experiments. Dotted arrow, retina; arrow, tectum; oval, pancreas. RESEARCH ARTICLE dihydroxyacetone phosphate. Several other functions and roles have been ascribed to ALDOA, including inhibiting phospholipase D2, binding to the cytoskeleton and RNase activity (Canete-Soler et al., 2005;Kim et al., 2002;Kusakabe et al., 1997). No homozygous null mutations have been identified in humans, indicating that ALDOA is essential (Esposito et al., 2004). Six cases of hemolytic anemia and myopathy have been associated with point mutations and reduced ALDOA activity (Beutler et al., 1973;Esposito et al., 2004;Kishi et al., 1987;Kreuder et al., 1996;Miwa et al., 1981;Yao et al., 2004). One case presented with mental retardation (Beutler et al., 1973), and another with microcephaly and language delay (Kreuder et al., 1996). Further connecting this gene with mental health disorders, expression of ALDOA is upregulated in the cortex of individuals with schizophrenia and depression (Beasley et al., 2006). The mitochondrial citric acid cycle, into which glycolytic end products feed, has been shown to be dysregulated in children with autism (Giulivi et al., 2010), pointing to glycolysis and energy production as possible ALDOA targets. ALDOA was identified as a binding partner for the ASD-linked protein SHANK3 by a protein interactome study (Sakai et al., 2011), as well as being identified in a study implicating postsynaptic signaling complexes in ASD (Kirov et al., 2012). The association of partial LOF in ALDOA with patient phenotypes, as well as these other considerations, suggest that ALDOA is a player in 16p11.2 pathologies. KIF22 is a microtubule-and DNA-binding molecular motor that is important for chromosome alignment (Santamaria et al., 2008) and compaction during anaphase (Ohsugi et al., 2008). Individuals with a point mutation in the motor domain of KIF22 suffer from the autosomal-dominant skeletal disorder spondyloepimetaphyseal dysplasia with joint laxity (Boyden et al., 2011;Min et al., 2011). No phenotype has been reported in Kif22 +/mice; however, ~50% of Kif22 -/mouse embryos do not survive past the morula stage (Ohsugi et al., 2008). 
KIF22 has not previously been implicated in brain function disorders, but our data suggest that this gene is required for the formation of neural progenitors. The fact that mammalian heterozygotes in Kif22 have not been associated with phenotypes suggests that Kif22 expression levels might be regulated after loss of one gene copy, or there might be greater redundancy among mammalian kinesins than among the zebrafish genes. For both kif22 and aldoaa, the stronger phenotypes seen after partial LOF in zebrafish relative to humans suggest that an additional gene(s) must synergize with ALDOA or KIF22 to convey ASD, IDD or other phenotypic risk in humans. We suggested that the zebrafish could be a useful tool to address the function of 16p11.2 homologs, without a need to assay for behaviors that are restricted to humans (Sive, 2011). This suggestion is made because the same genetic pathways are active in mammals and fish, and other indicators of pathway activity can be employed. Consistently, almost all zebrafish LOF phenotypes could be prevented by expression of the homologous human gene, supporting gene orthology and shared gene function at the molecular or cellular level. We note that several zebrafish phenotypes could be similar to those seen in individuals with ASD and/or IDD; such phenotypes include abnormal brain size, brain shape, axon tracts, motor readouts, and specification of retinal and tectal neural progenitors (Almgren et al., 2008;Amaral et al., 2008;Courchesne et al., 2007;Hashimoto et al., 1991;Marin-Padilla, 1975;Matson et al., 2011;Ritvo et al., 1986), as well as musculoskeletal defects that are seen in some individuals with ASD and/or IDD (Calhoun et al., 2011;Chen, 1982;Oslejskova et al., 2007;Shimojima et al., 2009). RESEARCH ARTICLE This work identifies the 16p11.2 CNV as an active genomic region and delineates two putative deletion dosage sensor genes in the region, with predictable connection to functional brain syndromes associated with the CNV. These two genes, in combination with additional 16p11.2 or other genes, might be haploinsufficient in ASD and related disorders, leading to abnormal brain function. Other dosage sensors in this interval might exist, perhaps as pairs of synergistically functioning genes. Future assays in the zebrafish will augment antisense MO approaches with RNAi and genetic mutants, determine whether duplication and deletion sensor genes are the same, and screen for synergistic deletion and duplication dosage sensor genes in the 16p11.2 gene set. These unbiased screening approaches are a powerful step in translational research focusing on CNVs that are associated with disorders arising from abnormal brain development and function. Identification of zebrafish 16p11.2 homologs Zebrafish homologs of human 16p11.2 genes were identified using UniGene, Ensembl and UCSC Genome browsers with alignment and family tree comparisons. Fish lines and maintenance Embryos were obtained from natural spawnings. Developmental stages are reported as hpf at 28°C. The NeuroD:GFP line was previously described (Obholzer et al., 2008;Ulitsky et al., 2011). Additional mutant lines were obtained from the Zebrafish International Resource Center (ZIRC). The tbx24 te314a/+ line (Nikaido et al., 2002;van Eeden et al., 1996) was incrossed and homozygotes identified phenotypically at 24 hpf. cDNA constructs Human or zebrafish cDNAs that were used for rescue experiments were cloned into pCS2+ (supplementary material Table S4). 
Zebrafish cDNAs used for in situ hybridization are also included in supplementary material Table S4. All human and some zebrafish clones were obtained from Open Biosystems. asphd1, c16orf53, doc2a, maz, sez6l2, taok2a and taok2b were cloned by PCR from 24-hpf zebrafish cDNA, using primers listed in supplementary material Table S2. We thank Dr Jeremy Green (Kings College, London) for membrane-targeted CAAX-eGFP. MO design and use MOs were designed by Gene Tools, LLC, to a splice donor or acceptor site, as close to the 5Ј end of the predicted primary RNA as possible. Where a splice site MO gave no phenotype, a translational start site MO was designed. The designed MO sequences are shown in supplementary material Table S2. In all cases, the top MO listed (in Table 1; supplementary material Table S2) for a gene was used in the phenotypic assays described elsewhere, unless otherwise noted. For all experiments, a control MO was injected at the same or greater mass amount. MO (1 nl) was injected into a single cell of a one-to two-cell embryo, using a range of concentrations to determine the lowest concentration at which a phenotype was observed. No more than 7.5 ng of a MO was injected. Unless otherwise stated, the 'control' condition refers to control-MO-injected embryos. The control MO sequence is 5Ј-CCTCTTACCTCAGTTACAATTTATA-3Ј. Criteria and methodology to assess MO specificity Specificity of MO-induced LOF phenotypes was determined by the following criteria, as is standard for the field (Bedell et al., 2011;Eisen and Smith, 2008). First, because initial MOs were designed to target splice junctions (described above), it is predicted that a change in RNA splicing would be observed by RT-PCR. These MOs would target zygotically expressed RNAs. Primers to detect knockdown are included in supplementary material Table S2, and RT-PCR methods are discussed below. Where a change in splicing was not detected, an additional splice site MO was designed. For genes for which splice site MOs changed splicing but did not result in an observable phenotype, a translation blocking MO was designed to target maternal transcripts, as well as zygotic. It is further predicted that protein-coding capacity would be altered. This was determined by gel purification of the RT-PCR products after control or test-gene MO injection and sequencing of the PCR product. Protein-coding capacity was determined using Sequencher and MacVector software. In the cases of putative deletion dosage sensor genes, change in expression resulting from MO injection was monitored at the protein level by western blot analysis (described below). Second, for phenotypes observed after injection of splice blocking MOs, it is predicted that normal RNA levels will decrease, and this was monitored by qPCR, as discussed in the specific Methods section. Correlation between phenotype and MO mass is also predicted, and was assayed in MO titration experiments (Fig. 6, and below). Third, MO specificity predicts that the LOF phenotype will be prevented ('rescued') after co-injection of the MO with the cognate human or zebrafish mRNA that lacks the MO-binding site but preserves protein coding capacity. The appropriate mass of RNA used in rescue experiments was based on both rescue of the LOF phenotype as well as the lack of an overexpression phenotype when the same RNA mass was co-injected with the control MO. Mass of RNA injected in rescue assays and the success of rescue is listed in Table 3. 
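The dose-finding step described in this section, titrating each MO to the lowest mass that gives a phenotype with no more than 7.5 ng injected, is another small piece of bookkeeping that can be written out. The titration counts below are invented and serve only to illustrate the rule.

```python
# Sketch: pick the lowest MO mass (ng) in a titration series that gives at least
# 70% of embryos with a phenotype, subject to the 7.5 ng cap mentioned in the
# Methods. The titration counts below are invented.

MAX_MO_NG = 7.5
PENETRANCE_CUTOFF = 0.70

# mass (ng) -> (affected, total) at that dose, for one hypothetical MO
titration = {
    1.0: (2, 48),
    2.5: (19, 52),
    5.0: (37, 50),
    7.5: (44, 47),
}

def lowest_effective_dose(titration):
    for mass in sorted(titration):
        if mass > MAX_MO_NG:
            continue
        affected, total = titration[mass]
        if affected / total >= PENETRANCE_CUTOFF:
            return mass
    return None

dose = lowest_effective_dose(titration)
print("lowest effective dose:", f"{dose} ng" if dose is not None else "none under the cap")
```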
GenBank Accession numbers and cDNA constructs used to synthesize RNA are shown in supplementary material Table S4. RNA synthesis is discussed below. The rescue titration experiments had a minimum of four conditions: control MO plus mGFP to serve as a balancer RNA; LOF MO plus mGFP RNA; control MO plus the human/fish RNA; and LOF MO plus the human/fish RNA. Fourth, for cases in which necrosis or a severe phenotype was observed, the p53 MO was co-injected to suppress off-target cell death (Robu et al., 2007) at a 1.5-fold greater mass amount than the mass of experimental or control MO, as indicated in Table 2. The p53 MO sequence is: 5Ј-GCGCCATTGCTTTGCAAG -AATTG-3Ј. Finally, where the mutant lines were available, for cdipt and tbx24 the MO-induced LOF phenotypes were compared, or, in the cases of aldoaa and kif22, the effects of shRNAs were examined. RESEARCH ARTICLE Phenotypic scoring procedures Embryos were scored live using bright-field imaging, or after fixation for axon tracts using scanning confocal imaging. Where there was any ambiguity, or a question of whether phenotypic rescue had been achieved, another lab member scored the embryos. For most genes, more than one of the authors independently assayed MO effects or ability to be rescued by RNA injection. Results were almost always concordant. We required ~70% of embryos in a condition to have an aberrant phenotype in at least two independent experiments, and for the phenotype to be rescued by RNA co-injection, for inclusion in further experiments. Morphological assays Brain morphology of LOF embryos was examined at 24 hpf by bright-field and fluorescence microscopy, after injection of the brain ventricles with Texas Red dextran (Gutzman and Sive, 2009). Embryos were scored for: presence of forebrain, midbrain and hindbrain hingepoints; brain ventricle size and volume; forebrain truncation; and eye morphology. Trunk and tail morphology, and the shape of muscle segments, were scored by bright-field microscopy or by staining actin filaments with phalloidin (described below). Phenotypes existing in greater or equal to 70% of embryos and also meeting specificity criteria were included in results unless otherwise stated. Movement assays Movement in LOF embryos was monitored at 24 and 48 hpf. In 24 hpf embryos, spontaneous contractions were observed. Spontaneous contractions have been described previously (Saint-Amant and Drapeau, 1998). At 24 hpf, the typical movement consists of sideto-side contractions that result in slow coils. Embryos were observed for several minutes because, by 24 hpf, the contractions are sporadic. Touch response assays were administered at 48 hpf. For this, a loop of thread was used to gently touch the embryos on both the head and the tail. The normal response of a tail stimulus involves the embryo briefly swimming (approximately the length of its body) and landing again on the bottom of the dish, whereas a stimulus to the head begins with full coiling of the embryo resulting in repositioning (C-start) (Issa et al., 2011;Saint-Amant and Drapeau, 1998). In situ hybridization In situ hybridization methods are described elsewhere (Wiellette and Sive, 2003). Probes used are described in supplementary material Table S4 and wild-type 24 hpf embryos that were fixed in 4% PFA were used to assay spatial expression. RT-PCR and qPCR RT-PCR was performed to monitor expression at developmental time points (supplementary material Fig. S1) and changes in splicing that resulted from MO targeting (Table 2). 
Total embryo RNA was extracted using Trizol (Invitrogen) followed by chloroform extraction, isopropanol precipitation and DNAase treatment or use of the RNeasy kit (Qiagen). cDNA synthesis was performed with Super Script III Reverse Transcriptase (Invitrogen) and oligodT or random hexamers. Primers for RT-PCR and qPCR are shown in supplementary material Table S2. To detect changes in splicing, primers were designed around targeted exons. RT-PCR was performed using Hot Start Taq Plus (Qiagen) and primers shown in supplementary material Table S2. Primers were designed to only recognize normal transcript (see Fig. 5 for primer design strategy). Knockdown was confirmed by sequencing PCR products, and MO effects on expression of predicted protein are included in Table 2. qPCR was performed using an ABI Prism 7900 (ABI). Fluorescence detection chemistry utilized SYBR Green dye master mix (Roche). The relative amount of product was calculated using CT and normalized to Ef1. Values are reported with standard deviation. Each assay was performed in at least two independent experiments. Each experiment contained at least 90 embryos per condition divided into three separate RNA preparations (biological replicates). Each RNA preparation was used for one reverse transcription reaction, which was then used in triplicate for each qPCR reaction (technical replicates). RNA injections RNA was synthesized using the Message Machine kit (Ambion) and injected as described previously (Gutzman et al., 2008). Western blot analysis Methods for western blot analysis have been described previously (Gutzman and Sive, 2010). Human anti-KIF22 antibody (Sigma K1390) used at 1:1000 in 3% BSA TBST and human anti-ALDOA antibody (Sigma WH0000226M1) used at 1:1000 in 5% milk TBST were detected with anti-mouse HRP secondary antibody (Sigma). Human anti-GAPDH antibody (Abcam ab22555) used at 1:3000 in 5% milk TBST and human anti-eIF4E antibody (Cell Signaling 9742S) used at 1:500 in 5% BSA TBST were detected using antirabbit HRP secondary antibody (Cell Signaling). Because the anti-Aldoa antibody is not specific for the protein product of aldoaa and is expected to cross-react with that of aldoab, the Aldoaa western blot was performed using heads only, because the aldoab gene is only expressed in the tail (not shown). RNAi methods Hairpins for aldoaa and kif22 were designed using Invitrogen Block-IT RNAi Designer software. A total of four hairpins per gene were designed and analyzed. Hairpin oligonucleotide pairs were purified by SDS-PAGE, and annealed by heating to 95°C and slow cooling to 10°C. Annealed oligonucleotides were subcloned into the miR30 backbone (Dong et al., 2009) of the I-SceI-miR124:GFP-miR30-pA plasmid, prepared by Dr Jennifer Gutzman, University of Wisconsin-Milwaukee, WI. This plasmid consists of the CNSspecific promoter miR124 (Shkumatava et al., 2009) driving a GFP reporter upstream of the miR30 backbone and an SV40 polyA RESEARCH ARTICLE addition site, with the expression cassette flanked by I-SceI restriction sites. Transgenesis was achieved by the meganuclease (I-SceI) method (Thermes et al., 2002), using fresh I-SceI for each transgenic preparation. Clinical issue Copy number variant regions (CNVs) are intervals of DNA ranging from 1000 bp to several megabases, in which one genomic copy is either duplicated or deleted, changing the number of gene copies in that interval. 
Because CNVs have been associated with many diseases, from cancer to autoimmune disease to neuropsychiatric disorders, understanding how they cause deleterious effects is important. Carriers of 16p11.2 CNVs present with a wide range of disorders, including intellectual disability disorder (IDD) and autism spectrum disorders (ASDs). The genes in the 16p11.2 CNV are probably integral to normal brain function. There are 25 genes in the central core interval, and it is hypothesized that dosage changes in one or more of these genes underlie the pathologies associated with the 16p11.2 CNV. However, the crucial genes in the 16p11.2 interval -and in many CNVs associated with other disorders -are unknown. Results This study used the zebrafish as a tool to study the activity of genes that are homologous to those in the human 16p11.2 interval, and to identify which might be most important for the association of the 16p11.2 CNV with human brain disorders. Of the 25 human genes in this interval, 22 homologs were identified in zebrafish, and 21 displayed embryonic and larval loss-of-function phenotypes, demonstrating that this set of genes is very important during development. In total, 20 genes were found to be necessary for proper brain size and shape, with subsets also affecting eye development, axon tract organization, movement behaviors and muscle formation. The authors also examined whether these phenotypes persisted when each gene produced only 50% of its product (equivalent to losing one copy of a gene). Two genes were sensitive to dosage: one encoding glycolytic enzyme aldolase A (Aldoaa) and the other microtubule motor kinesin family member 22 (Kif22). Thus, the function of these genes changes with copy number, which might explain the link between 16p11.2 CNVs and associated disorders. Implications and future directions These data show that the 16p11.2 CNV comprises a highly active set of genes that are important for the formation of the nervous system and probably also for its function. Most importantly, two genes were identified as having functions that were sensitive to dosage, indicating that these might be crucial in connecting this CNV to IDD, ASDs and other disorders. Future directions include experiments to understand the molecular pathways by which each gene works, and whether each works together with other genes in the 16p11.2 interval, as predicted by human genetic data. This information will help to define targeted assays in mammals, and possibly guide therapeutic directions. This study further shows that zebrafish could be used to identify conserved, dosage-sensitive genes in other CNVs that are implicated in other human disorders.
Analysis of Non-Structural Carbohydrate in Relation with Shoot Elongation of Rice under Complete Submergence : Regulation of non-structural carbohydrates (NSCs) are important for plants in response to submergence. In this study, the difference in non-structural carbohydrates in relation with shoot elongation between Sub1A and non -Sub1A rice genotypes was investigated. Two rice genotypes, namely Inpari30 ( Sub1A genotype) and IR72442 (non -Sub1A genotype), were submerged completely for 6 days and re-aerated by lowering water level up to stem base for 6 days of post submergence. In addition, non-submerged plants (control) was treated with water level up to stem base during the experiment. Photosynthesis rate decreased in both submerged Inpari30 and IR72442 genotypes 71% and 96% lower than their control, respectively. Submerged IR72442 declined Fv/Fm 15.6% lowest than its control and both control and submerged Inpari30. Investigation of the distribution of starch and soluble sugar content in plant organs suggested that shoot elongation of non-Sub1A genotype led to starch and sugar consumption that distributed faster to the new developed organ during submergence. In contrast, Sub1A genotype of Inpari30, which did not exhibit shoot elongation and showed slower NSCs distribution during submergence, performed better on post submergence by maintaining NSCs and distributing to the new developed organ faster than IR72442. These results suggest that Sub1A genotype managed elongation and NSCs during submergence more efficiently than non-Sub1A genotype. Introduction Non-structural carbohydrates (NSCs) are photosynthetic products that provide growth and metabolism substrates, which can be stored in a plant. During vegetative growth, the plant increases geometrically as plants invest more of the assimilated carbohydrate into new leaves [1]. However, plants also respond to environmental changes by regulating the translocation and partitioning of assimilated carbon, which will determine photosynthetic capacity [2]. When plants are completely submerged in water, O 2 diffusion is restricted, photosynthesis decreases due to reduced light intensity, lowering the internal O 2 content, and anaerobic respiration increases at the expense of the aerobic process [3]. Consequently, the carbohydrate levels and energy status in the shoot will drop to levels harmful to the plant [4,5]. Previously, we observed that photosynthetic rates decreased during submergence in rice cultivars tolerant and intolerant to submergence stress [6]. However, some rice plant elongates during submergence, which requires initial energy. High initial carbohydrate levels in the stem are essential to provide energy for rapid elongation [7] required for plant survival during submergence [3]. Later, it was demonstrated that survival under submergence is dependent on the ability to store non-structural carbohydrates (NSCs) and conserve energy through reduced underwater elongation [8]. Compared with sensitive parent lines, Sub1 introgression lines reduced less sugar and starch concentration after submergence [9,10]. Thus, tolerance genotype is not necessarily associated with the carbohydrate status before submergence, but rather with the ability to sustain energy levels throughout submergence [3]. The cultivars that maintain high NSC after submergence develop new leaves faster and accumulate greater biomass during recovery [9]. 
Sub1A, a major quantitative trait locus (QTL) responsible for submergence tolerance, has been widely studied in FR13A rice (Oryza sativa L.) and is considered to confer submergence tolerance in rice breeding [11][12][13]. It has been introgressed into the high-yielding rice variety Inpari30 that did not result in any significant trait differences compared to the parental variety (Ciherang) under normal conditions [14][15][16]. Other studies reported the effect of submergence on shoot elongation in 15 genotypes of Oryza sativa under 80 cm water depth for 7 days. Among the 15 cultivars, the non-Sub1A rice type, IR72442, exhibited the highest ratio of plant length on submersion compared to the control plant, indicating that this cultivar exploits the elongation mechanism on submergence [17]. Both Inpari30 and IR72442 are used in this experiment to demonstrate starch and sugar accumulation during submergence by representing Sub1A and non-Sub1A rice genotype, respectively. Non-structural carbohydrates (NSCs) in rice subjected to submergence have been investigated in association with growth parameters [7], elongation ability [8], different ages of rice seedlings [18], impeded metabolism [19], catabolism processes [13,20], and application time of nitrogen and phosphorus fertilizers [10]. However, the most critical period for the plant to consume starch and soluble sugar, the distribution of starch and soluble sugar throughout the plant, and their relation to elongation on submergence has not been clearly described between Sub1A and non-Sub1A genotype. In this study, we analyze the starch and soluble sugar behavior during submergence using Inpari30, which is characterized by the Sub1A gene, and IR72442, which is characterized by non-Sub1A gene. This study could help develop an effective agronomical approach that can improve plant tolerance to submergence in relation with the efficient use of non-structural carbohydrates. Materials and Methods The research was conducted from August to November 2019 at the Tropical Crop Science Laboratory, Kagoshima University, Japan. The experiment was arranged in two factors. The environmental condition factor consisted of a control and submergence treatment. The rice variety factor was randomized completely within the environmental factor. Two Oryza sativa L. genotypes, Inpari30 with Sub1A and IR72442 without Sub1A, were compared during six days of complete submergence followed by six days of reaeration. Comprehensive research flow is presented in Figure 1. The experiment was started by soaking rice seeds of Inpari30 and IR72442 in an incubator at 30 • C for three days. The germinated rice seeds were then sown in commercial soil (N/P/K = 0.9:2.3:1.1; pH 4.5-5.2) in a greenhouse. Ten-day-old seedlings were transplanted into hydroponic sponge (30 mm), which were then inserted into seedling trays inside experiment glasses (45 cm × 45 cm × 60 cm). Water was maintained at 4.5 cm from the container base, the same height as the seedling tray surface. Seedlings were grown at 27 • C and exposed to 12 h of light per day with an intensity of 300-350 µmol m −2 s −1 measured 20 cm above the tray surface. On 14 days after seeding (DAS), the plant has, in total, four leaves with the third leaf as the most fully expanded. The submergence treatment was applied at 14 DAS by increasing the water level of the transparent container box to 35 cm above the plant shoot base. The water level of the control was maintained at 1 cm from the glass base throughout the experiment. 
The submergence treatment ended after six days (20 DAS, observed as desubmergence); the water level was then maintained at 1 cm from the glass base for the six-day recovery period. After the recovery period (26 DAS), all plants were removed from the containers for post submergence observation. Observations of variables explained below were conducted before water level was increased (referred to as 'before submergence'), upon 6 days of submergence the plants were directly observed after the water was just removed from the experiment glasses (referred to as 'desubmergence'), and 6 days after desubmergence (referred to as 'post submergence'). Submergence started at 14 DAS by increasing water 35 cm above stem base for submerged plant after taking observations and samplings. After six days of complete submergence, water was removed from the glass containers up to the height of stem base. Observations and samplings were conducted directly after water was removed. Six days after removing the water, observations and sampling were conducted to evaluate plant performance at post submergence. * DAS: Days After Seeding. Photosystem II (PS II) Quantum Yield Fv/Fm was measured following the method used by [6]. After 2 h in the dark, the third fully developed leaf of the rice plant was clipped with chlorophyll fluorescence equipment (AquaPen-P AP-P 100, PSI, Czech Republic). The maximal quantum yield of PS II photochemistry, calculated as variable fluorescence (Fv) divided by maximum fluorescence (Fm), was obtained by emitting an actinic light through the quantum yield menu. Net Photosynthesis Rate Net photosynthesis (P n ) was measured using a portable photosynthesis analysis system (LI-6400; LI-COR, Lincoln, NE, USA) as mentioned by [6]. Before the measurement, plant samples were adapted to light exposure 100-200 µmol m −2 s −1 . The third leaf from each plant was collected and kept inside the chamber until a stable reading was recorded. The leaf was measured to the calculate gas exchange rate per observed area. During measurement, relative humidity was~50%, leaf temperature was 27 • C with an ambient CO 2 concentration~400 µmol mol −1 , airflow through the chamber was maintained at 500 µmol s −1 , and photosynthetic photon flux density was 1100 µmol m −2 s −1 . Shoot Elongation (cm) The length of each plant was measured from the base of the stem to the highest shoot tip using a ruler. Elongation was calculated as the difference of plant length on desubmergence with before submergence, and the difference of plant length on post submergence with desubmergence. Data are presented as the average from 12 plant samples in 3 replications. Starch and Sugar Analysis Forty plant samples was harvested from each time of observations to be separated into each leaf number and root. The samples were then dried at 80 • C using drying oven for 72 h. After that, the samples were milled using mortar and pestle to pass a 0.5 mm screen, then 100 mg was transferred to a glass test tube. Next, 0.2 mL of aqueous ethanol (80% v v −1 ) was added to the tube to wet the sample and aid dispersion. A vortex mixer was used to stir the tube contents, then 3 mL of distilled water was added. The tube was then incubated in a boiling water bath for six minutes and vigorously stirred after two, four, and six minutes. The solvent was then separated into another tube and the process was repeated. 
The accumulated solvent was used for soluble sugar analysis and the remaining dissolved sample was used for starch analysis using total starch assays kit (K-TSTA) from Megazyme (AOAC Method 996.11, AACC Method 76-13.01) [21,22]. Statistical Analysis Data were analyzed using a two-way analysis of variance. If significant differences were found, Tukey's test was performed at a probability of 0.05. T-test was performed a probability of 0.05 to compare between submerged Inpari30 and IR72442. Photosynthesis Rate and Fv/Fm of Chlorophyll Fluorescence Affected by Submergence The photosynthesis rate was 20.1 and 14.5 µmol CO 2 m −2 s −1 in Inpari30 and IR72442, respectively, and was the same between control and submerged plant at before submergence. Then, the submerged plant gradually decreased to 5.3 and 0.5 µmol CO 2 m −2 s −1 in Inpari30 and IR72442, respectively, at desubmergence, significantly lower by 71% and 96% than its control plant, respectively. At post submergence, photosynthesis rate was 18.0 and 16.8 µmol CO 2 m −2 s −1 in control and submerged Inpari30, respectively, 14.5 and 12.5 µmol CO 2 m −2 s −1 in control and submerged IR72442, respectively. No significant difference was found between submerged compared to control plants of the same genotype at post submergence ( Figure 2A). significant difference was found between submerged compared to control plants of the same genotype at post submergence ( Figure 2A). Fv/Fm value was 0.79 in Inpari30 and IR72442 regardless of control and submerged plant at before submergence. In desubmergence, Fv/Fm was 0.75 and 0.66 in submerged Inpari30 and IR72442, respectively, significantly 3.2% and 15.6% lower compared to the controls that were not submerged. In post submergence, Fv/Fm was 0.74 and 0.69 in submerged Inpari30 and IR72442, respectively, significantly 4.7% and 12.2% lower than the controls. The lowest Fv/Fm value was observed in submerged IR72442, which was four times lower than that of submerged Inpari30 ( Figure 2B). Dashes: S, Submerged) at before submergence, desubmergence, and post submergence. Different lowercase letters in the same column indicate a significant difference at p < 0.05 with Tukey test. Changes in Starch and Soluble Sugar Content in Relation to Elongation Environmental change occurred two times, from before submergence to desubmergence, which refers to 'submergence condition,' and from desubmergence to post submergence, which refers to 'recovery condition.' Here we compare the shoot elongation, change of shoot starch and sugar content to calculate the needs of starch and non-soluble sugar resulted from shoot elongation. The calculation revealed greater differences from the first period of before submergence to desubmergence than in the second period from desubmergence to post submergence (Figure 3). At submergence condition, submerged IR72442 exhibited a rate of shoot elongation 1.7 times higher than control IR72442 and more than 5.2 times higher than both control and submerged Inpari30 ( Figure 3A). Significant differences in shoot elongation rate were not observed between treatments at recovery condition ( Figure 3B). Comparison of the shoot starch content revealed that submerged IR72442 exhibited the smallest changes in these parameters, compared to control IR72442 and both control and submerged Inpari30 ( Figure 3C). No significant differences were observed between control and submerged Inpari30 ( Figure 3D). 
Submerged IR72442 exhibited the smallest changes in shoot sugar content among all treatments from before submergence to desubmergence ( Figure 3E,F). Fv/Fm value was 0.79 in Inpari30 and IR72442 regardless of control and submerged plant at before submergence. In desubmergence, Fv/Fm was 0.75 and 0.66 in submerged Inpari30 and IR72442, respectively, significantly 3.2% and 15.6% lower compared to the controls that were not submerged. In post submergence, Fv/Fm was 0.74 and 0.69 in submerged Inpari30 and IR72442, respectively, significantly 4.7% and 12.2% lower than the controls. The lowest Fv/Fm value was observed in submerged IR72442, which was four times lower than that of submerged Inpari30 ( Figure 2B). Changes in Starch and Soluble Sugar Content in Relation to Elongation Environmental change occurred two times, from before submergence to desubmergence, which refers to 'submergence condition', and from desubmergence to post submergence, which refers to 'recovery condition'. Here we compare the shoot elongation, change of shoot starch and sugar content to calculate the needs of starch and non-soluble sugar resulted from shoot elongation. The calculation revealed greater differences from the first period of before submergence to desubmergence than in the second period from desubmergence to post submergence (Figure 3). At submergence condition, submerged Sustainability 2021, 13, 670 6 of 11 IR72442 exhibited a rate of shoot elongation 1.7 times higher than control IR72442 and more than 5.2 times higher than both control and submerged Inpari30 ( Figure 3A). Significant differences in shoot elongation rate were not observed between treatments at recovery condition ( Figure 3B). Comparison of the shoot starch content revealed that submerged IR72442 exhibited the smallest changes in these parameters, compared to control IR72442 and both control and submerged Inpari30 ( Figure 3C). No significant differences were observed between control and submerged Inpari30 ( Figure 3D). Submerged IR72442 exhibited the smallest changes in shoot sugar content among all treatments from before submergence to desubmergence ( Figure 3E,F). Distribution of Starch and Soluble Sugar to the Plant Organs The third leaf was the most expanded leaf when the experiment started, comprised of high distribution of starch and sugar both on submerged Inpari30 and IR72442 ( Figure 4A,B). Inpari30 distributed lower starch and sugar to the new, fifth, leaf than IR72442 on desubmergence. However, Inpari30 distributed higher starch and sugar to the 5th leaf than IR72442 on post submergence. The distribution of sugar tends to be higher than starch in the 5th leaf. The distribution of starch and sugar to the 5th leaf leads to decreasing distribution from root, 2nd, 3rd, and 4th leaves. Distribution of Starch and Soluble Sugar to the Plant Organs The third leaf was the most expanded leaf when the experiment started, comprised of high distribution of starch and sugar both on submerged Inpari30 and IR72442 ( Figure 4A,B). Inpari30 distributed lower starch and sugar to the new, fifth, leaf than IR72442 on desubmergence. However, Inpari30 distributed higher starch and sugar to the 5th leaf than IR72442 on post submergence. The distribution of sugar tends to be higher than starch in the 5th leaf. The distribution of starch and sugar to the 5th leaf leads to decreasing distribution from root, 2nd, 3rd, and 4th leaves. Before submergence was applied to the plants, the proportion of starch and s content in both genotypes were the same. 
However, at desubmergence and submergence, Inpari30 had a higher proportion of starch content than IR72442 IR72442 had a higher proportion of sugar contents than Inpari30 ( Figure 5). Before submergence was applied to the plants, the proportion of starch and sugar content in both genotypes were the same. However, at desubmergence and post submergence, Inpari30 had a higher proportion of starch content than IR72442, and IR72442 had a higher proportion of sugar contents than Inpari30 ( Figure 5). ustainability 2021, 13, 670 Figure 5. Proportion of starch and sugar in shoot of submerged Inpari30 (Sub1 genotype) and IR72442 (non-Sub1A genotype) before submergence, on desubmergence, and post submergen (re-aeration). Different lowercase letters in the same parameter indicate a significant differen < 0.05 with T-test. Discussion Submergence changes the environmental conditions that a plant is expos leading to several adjustments. In this experiment, the photosynthetic rate decreas desubmergence in both the Sub1A genotype of Inpari30 and non-Sub1A genoty IR72442 (Figure 2A). Observation of underwater photosynthesis on FR13A (a S donor genotype) was not proven in Swarna-Sub1 (Swarna that carries Sub1) and th was declined equal to Swarna and IR42. This result suggested that the ability to ma underwater photosynthesis from the tolerant donor of Sub1A (FR13A) is not inheri other genotype with Sub1A [23]. Fv/Fm, a measurement of the efficiency of PSII, obs at desubmergence was lowest in submerged IR72442 ( Figure 2B). Chloro fluorescence (Fv/Fm) is an effective indicator of submergence tolerance of rice [17 decline in Fv/Fm observed at desubmergence likely reflects a reduced ability of P reduce the primary acceptor [13], which provides evidence for disorganization photosynthetic apparatus and is attributed to a decrease in light intensity and o level in floodwater [13,24]. This reduction is also indicative of photoinhibition dam response to environmental stress, resulting in a decline in the efficiency of solar e conversion during photosynthesis [25,26]. Photosynthetic activity is a major determining factor of sucrose availabili translocation [27]. Plant carbohydrates are comprised of NSC, such as starch and so Discussion Submergence changes the environmental conditions that a plant is exposed to, leading to several adjustments. In this experiment, the photosynthetic rate decreased on desubmergence in both the Sub1A genotype of Inpari30 and non-Sub1A genotype of IR72442 (Figure 2A). Observation of underwater photosynthesis on FR13A (a Sub1A donor genotype) was not proven in Swarna-Sub1 (Swarna that carries Sub1) and the rate was declined equal to Swarna and IR42. This result suggested that the ability to maintain underwater photosynthesis from the tolerant donor of Sub1A (FR13A) is not inherited to other genotype with Sub1A [23]. Fv/Fm, a measurement of the efficiency of PSII, observed at desubmergence was lowest in submerged IR72442 ( Figure 2B). Chlorophyll fluorescence (Fv/Fm) is an effective indicator of submergence tolerance of rice [17]. The decline in Fv/Fm observed at desubmergence likely reflects a reduced ability of PSII to reduce the primary acceptor [13], which provides evidence for disorganization of the photosynthetic apparatus and is attributed to a decrease in light intensity and oxygen level in floodwater [13,24]. 
This reduction is also indicative of photoinhibition damage in response to environmental stress, resulting in a decline in the efficiency of solar energy conversion during photosynthesis [25,26]. Photosynthetic activity is a major determining factor of sucrose availability for translocation [27]. Plant carbohydrates are comprised of NSC, such as starch and soluble sugars that play important roles in the metabolic process of plants. As photosynthesis rates decreased on desubmergence, we examined the changes of starch and sugar in relation with elongation ( Figure 3). Environmental change occurred two times, from before submergence to desubmergence, which refers to 'submergence condition,' and from desubmergence to post submergence, which refers to 'recovery condition.' Both changes resulted in changes of starch and sugar in response to elongation, mentioned by [28] as internal triggers that affects plant growth. Submerged IR72442 exhibited highest shoot elongation on submergence condition followed by smallest changes of starch and sugar content. Changes of starch and sugar on submerged IR72442 continues to occur in recovery period even if shoot elongation does not occur. However, submerged Inpari30 did not exhibit shoot elongation on submergence condition with no significant changes of starch and sugar content compared to the control plant. Change of starch and sugar content of submerged Inpari30 also was not significantly observed on recovery condition. This result suggests that the consumption of starch and soluble sugar in submerged IR72442 was a consequence of the rapid shoot elongation upon submergence. Shoot elongation is a vegetative response to escape from complete submergence [29]. Complete submergence reduces the rate of growth and carbohydrate concentration in shoot tissues [30]. Shoot elongation processes compete with maintenance processes for energy during submergence, resulting in low survival [7,13,31]. Additionally, decreases in the proportion of starch content during submergence might be due to low light intensity and lower chlorophyll content, resulting in reduced photosynthetic rates and subsequent depletion of starch [18]. However, shoot elongation during submergence may result in weak and droopy blades that are easily damaged by wind and water [4]. Distribution of starch and soluble sugar in specific plant organs reflects the translocation between organs before submergence, on desubmergence, and post submergence. Submerged IR72442 distributed starch and sugar content on the 5th leaf higher than submerged Inpari30 on desubmergence. However, the distribution changes on post submergence, whereas submerged Inpari30 distributed starch and sugar content on 5th leaf higher than submerged IR72442 (Figure 4). A proportion of sugar was higher than starch on desubmergence and post submergence of submerged IR72442 ( Figure 5). This result suggested that in order to elongate, submerged IR72442 transported a proportion of starch and sugar into the 5th leaf as the newly developed leaf. Transport of photo-assimilates depends on source supply and sink demand; roots and young leaves are major sinks during early developmental stages. Balanced development could be achieved if prioritized photo-assimilate translocation was established between sinks. When photosynthetic rates decline during submergence, older leaves provide the only source of photo-assimilates to the younger leaves [27]. 
Conclusions Generally, decreasing photosynthesis during submergence was observed in Sub1A and non-Sub1A genotype. Fv/Fm and NSC content decreased more in submerged IR72442 than in submerged Inpari30. Rapid shoot elongation was apparent during submergence following the consumption of starch and soluble sugar. Investigation of the distribution of starch and soluble sugar content in plant organs suggested that elongation of non-Sub1A genotype led to starch and sugar consumption that distributed faster to the new developed organ during submergence. In contrast, Sub1A genotype of Inpari30, which did not exhibit shoot elongation and showed slower NSCs distribution during submergence, performed better on post submergence by maintaining NSCs and distributing to the new developed organ faster. This study suggested that Sub1A genotype managed elongation and NSCs during submergence more efficiently than non-Sub1A genotype. Through this understanding, we can apply agronomical cultivation approaches for improving rice resistance tolerance to submergence in relation with the efficient use of non-structural carbohydrates. However, further research examining enzymatic schemes and gene expression would clearly distinguish the roles of starch and NSC behavior in plants subjected to submergence. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
2021-05-11T00:04:09.352Z
2021-01-12T00:00:00.000
{ "year": 2021, "sha1": "82f4f1e0fcfac9f3a1d72a72b9074b37113b8c20", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/13/2/670/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "47fe9978e2db1ba816679566b446118dda07a8d4", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
271215683
pes2o/s2orc
v3-fos-license
Decalogue for mastering robotic transanal minimally invasive surgery (rTAMIS) This manuscript offers a detailed description of our successful tips for mastering transanal robotic surgery. It covers various aspects, including patient positioning, management of abdominal pressures to maintain a stable pneumorectum, platform positioning, camera alignment, trocar positioning to minimize collisions, instruments used, and approaches to tumor resection. Introduction Since local excision began, the technique has evolved significantly, and so have its indications.In 1982, Dr. Klaus Buess introduced transanal endoscopic microsurgery (TEM), a minimally invasive surgical technique, through a groundbreaking paper and patented device known as TEM, which utilizes a rigid rectoscope [1].This device played a pivotal role in the widespread adoption and refinement of the technique.However, its complexity eventually led to its gradual replacement by simpler devices.The next major innovation was the TEO, developed by Karl Storz, which also relied on a rigid endoscope.Now, the GelPOINT Path® appears to be the future of transanal surgery (TAMIS).Unlike its predecessors, it is not a rigid endoscope but rather a single-port device, making it much easier to use.In summary, TEM, TEO, and TAMIS represent the most significant techniques and devices developed for transanal surgery.TAMIS currently holds the dominant position in this field. Advancements in robotic technology continue to evolve, with the imminent arrival of single-port devices like the da Vinci SP® system poised to revolutionize transanal surgery [2].However, as we eagerly anticipate these developments, we currently rely on adapting existing platforms such as the Xi system to navigate the constraints of narrow anatomical spaces for which they were not originally designed.Considering this necessity, this publication aims to share invaluable insights that have greatly enhanced our proficiency in mastering this approach. Transanal robotic surgery applications Robotic transanal minimally invasive surgery, also called rTAMIS, is used to treat various conditions affecting the lower rectum and anal canal.Below is an overview of its indications. Early-stage rectal cancer rTAMIS is a technique used to remove early-stage rectal tumors or polyps, especially those in the mid to lower rectum.Studies have shown that it can achieve comparable oncological outcomes to traditional open or laparoscopic approaches while offering the benefits of minimally invasive surgery [3,4]. Repair of dehiscent low rectal anastomosis Extrapolating from the experience with TAMIS, we consider that rTAMIS is an effective technique for repairing dehiscent colorectal anastomoses in the early stages [5]. Advanced rectal cancer While some surgeons adhere to the conventional approach to rectal cancer surgery supplemented by robotics, the field has undergone a revolution that challenges this traditional paradigm.Organ preservation strategies are successful in more than 50% of cases, and radical surgery is no longer the best option for everyone.This is where surgeons can still make a difference in rectal cancer treatment with local excision.The hot spot for surgeons is now in local excision [6]. 
Several studies have evaluated this choice: A study conducted by the Rome group [7], which randomized cT2N0 lower rectal tumors to radical surgery (total mesorectal excision, TME) or neoadjuvant chemoradiotherapy (NCRT) + local resection, found that after a 9-year followup, local recurrence rates were similar in both groups (6% vs 8%).These good results have been reproduced by other studies, such as the Dutch CARTS study [8], which achieved a 74% organ preservation rate.However, it is also mentioned that the effect of local resection after radiotherapy includes wound-related complications as well as up to 43% grade 3 chemoradiotherapy toxicity.Three other important studies have set the trend in this landscape, the French GRECCAR 2 [9], the American ACOSOG [10], and the Spanish TAU-TEM [11].Finally, English groups published the TREC study results in 2021 and achieved an 88% margin-free rate and low toxicity (15% grade 3) in < 3 cm, cT1-2 lower rectal tumors that had been treated with short-cycle RT and local resection 8-10 weeks later [12]. Tips Robotic surgery is revolutionizing the surgical landscape, yet mastering the technique of local excision remains challenging.Surgeons can benefit from lesser-known tips and strategies when implementing this approach. Tip 1: position Traditionally, the patient's positioning was determined by the tumor's location, ensuring the lesion remained in the lower portion of the surgical field.This practice was imperative when using rigid rectoscopes and proved beneficial with conventional TAMIS.The reasoning for this approach stemmed from the difficulties encountered in managing lesions positioned on the upper aspect of the surgical field.Operating on lesions located at the top of the surgical field is not only uncomfortable but also makes suturing the rectal defect afterward challenging. However, with the advent of robotic approaches, surgical procedures have become significantly more versatile.Robotic systems offer enhanced maneuverability, enabling surgeons to achieve successful outcomes with greater ease.This technological advancement has particularly revolutionized the suturing and closure of wounds, overcoming the previous limitations. Hence, the systematic jackknife position, set at 145° with open legs and a slight head-down inclination, stands out as our preferred orientation.This choice is not solely based on its feasibility but is also driven by its efficacy in sustaining a more stable pneumorectum.The abdominal compression achieved in this position serves to prevent the unintended diffusion of gas throughout the bowel. As important as the jackknife position is, maintaining relaxation is also necessary to avoid straining that collapses the rectal space. In patients undergoing femorofemoral bypass surgery, vigilance is crucial to address potential issues related to pubic compression, ensuring optimal leg perfusion following positioning (Fig. 1). Tip 2 Understanding the dynamics of intra-abdominal pressure is crucial in creating an optimal surgical field, as depicted in Fig. 2. Although not strictly necessary from an anesthetic standpoint, the insertion of a bladder catheter can be beneficial by enlarging the pelvic space and facilitating rectal distension. 
Tip 3 Let us delve deeper into the reasoning behind the choice of using arm 123 when da Vinci Xi® arms enter from the Space considerations • Assistant accessibility: Positioning arm 123 creates more space on the left side of the patient, facilitating better accessibility for the surgical assistant.Additionally, small adjustments with the clearance button will reduce collisions.It also facilitates good access to the airway for the anesthetic team.• Optimized platform stability: The utilization of arm 123 significantly enhances stability within the robotic platform throughout the entirety of the surgical procedure, eliminating the need for extreme arm rotations. In summary, choosing arm 123 when the da Vinci Xi® arms enter from the right side in upper abdomen configuration is a strategic decision based on considerations of space, cart stability, and procedural efficiency.It optimizes the working environment for the surgical and anaesthetic team, and simplifies the overall robot-assisted surgical process (Fig. 3a). Figure 3b shows the 234 configuration. Tip 4 As a result of inherent limitations of the reduced space, automatic targeting is not feasible.As previously emphasized, a profound comprehension of arm positioning becomes paramount.Consequently, a manual targeting process is unavoidable, underscoring the importance of meticulous optical alignment with the patient's spine. Tip 5 The GelPOINT Path® is best utilized with a triangular distribution of the trocars.At the apex, position the optic trocar, with the left and right working devices flanking it.The assistant port, dedicated to sutures and suction, is situated below.After completing the docking process, gently adjust the trocar positions outward towards the outer ring (Fig. 4). Tip 6 For the GelPoint Path® insertion, the anal area must be highly lubricated and softly dilated before the insertion.The port needs to be folded as shown in Fig. 5 and gently inserted.Finally, once inserted, it needs to be unfolded. Tip 7 The implementation of a staggered docking technique proves advantageous in reducing collisions between ports.Trocar placement is performed before the cap is connected to Generally, the optic port is positioned as the more exteriorized port in this setup in the upper position for posterior tumors and lower position for anterior ones (Fig. 6). Tip 8 The GelPOINT Path® rings must be seen as another port.Moving it will increase the range of vision.For example pulling it upwards will show the lower part of the rectum (Fig. 7). Tip 9 The initial surgical maneuver involves placing marks around the lesion.However, if a stenosis interferes with this process, the first step should be to create space by cutting down the stenosis.This will provide the necessary room to navigate and mark the perimeter of the lesion (Fig. 8). Tip 10 Barbed sutures are the preferred choice for closing the defect if needed. 
Instrumentation We consider the following instruments mandatory for the procedure: • GelPOINT Path® • Bipolar forceps On the other hand, along our learning curve, we discovered that the following instruments are not absolutely necessary: • Advanced energy devices: The rectal wall undergoes devascularization because of radiotherapy [15], reducing the risk of hemorrhage during surgery.Consequently, there is no requirement for advanced energy devices.• Lone Star: Simple fixation to the perianal skin suffices to stabilize the gel port.However, the Lone Star instrument does not improve neumorectum stability, and it may even interfere with the excision of distal tumors.Traction on distal tumors could potentially conceal them beneath the gel port.Ensuring clear visibility during procedures is crucial for accurate diagnosis and treatment. • Airseal system and GelPoint Path stabilization bag have become redundant with the adoption of the prone position (explained before) and the use of modern insufflators that deliver high airflow.While the Airseal system may retain some utility for smoke extraction, our experience suggests that, in general, it is not required.• Robotic cutting needle holder.Bipolar forceps may cut threads with energy. Conclusions Adapting existing robotic platforms for narrow anatomical spaces like the anal canal and rectum is challenging.These spaces were not originally designed for such platforms, leading to a high rate of conversions.However, this publication shares valuable insights that have improved our proficiency in mastering this approach.
2024-07-17T06:17:49.445Z
2024-07-16T00:00:00.000
{ "year": 2024, "sha1": "ae8fb03d77820d831fdf31b3242f4a729ad9e997", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "8cf4dab4c5abf0b0384520c9eb6676c04d8000ea", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
245142237
pes2o/s2orc
v3-fos-license
Perceived Social Support as a Predictor of Teacher Candidates' Smartphone Cyberloafing ABSTRACT Introduction As a result of the transition to university life, significant changes occur in the students' social environment. Some students move to different cities, while others live in the same town, but they change their social environment in both cases. In other words, they diversify their social support resources. Social support, which plays an important role in improving social integration, is defined as feedback towards a behavior, a thought, or a share. Social support is also defined as any support provided by the immediate social environment to those especially in difficulties and anxiety (Eker & Arkar, 1995). Shumaker and Brownell (1984) define social support as a resource exchange between at least two people who provide and receive support and mainly aim to increase the well-being of the receiver of support. Thus, there are different interactions and relationships between individuals in the process of social support (Zimet et al., 1988). While social support is accepted as realized helping behaviors, the belief that these behaviors may arise in distress is expressed as PSS (Özdemir, 2013). Perception levels regarding this support are related to individuals' trusting their support resources (family, spouse, friends, etc.) and believing that they will receive the necessary support when needed (Sarason & Sarason, 1982). In addition to the support that the individual gets by interacting. Yıldırım (1997) states that the individual's family, family environment, friends, opposite-sex friends, teachers, colleagues, neighbors, ideological, religious, or ethnic groups, and the society in which the individual lives constitute the social support resources of the individual. Hupcey (1998) further narrows these resources and states that the individual's family, spouse, children, and close friends constitute their social support resources. Zimet et al. (1988) limit social support resources to the individual's family, friends, and significant other. With the developments occurring within the dynamics of life, differences in individuals' social support resources and their access to these resources also happen. Cohen and Wills (1985) also cite social support resources as respect support, information support, social friendship, and instrumental support. Supporting respect is information that relates to the acceptance and sense of dignity of the individual. This source also refers to self-esteem and emphasizes that the individual is valued and accepted despite any difficulties or personal flaws.This support is also called emotional support, meaningful support, self-esteem support, and close support. Information support helps to understand, identify, and cope with problem situations. This type of support is also referred to as advice, assessment support, or cognitive guidance. Social friendship support is to spend time with other individuals in leisure and entertainment times. This dimension is also called messy support and belonging. Finally, instrumental support is the provision of necessary services, financial assistance, and resources. Moreover, it is also called aid, material, and concrete support. The internet provides comfortable access to information and manipulates social relationships, and it is a tool that makes social support always accessible by making individuals feel less alone and feel more comfortable (Leung, 2007). 
In particular, the internet provides individuals access to social support resources when they need it, and it may be instrumental support for them. For example, an individual who receives less social support than colleagues and superiors makes more cyberloafing in the workplace than an individual with a higher social support level (Reinecke, 2009). This may mean that the individual uses the internet as instrumental support when they cannot reach the support source. Leung and Lee (2005) found a positive relationship between participation in leisure activities on the internet and social support. The internet is an indispensable tool for people to engage in various activities such as socializing, entertainment, and information seeking, and smartphones play an essential role in providing internet access. Smartphones are technological tools that provide convenience in communication and many subjects such as education, awareness of the environment, health, entertainment, fast access to information, and interaction with others via social networks. Therefore, smartphones are communication tools related to individuals' perceptions of social support . All age groups use smartphones, but research has revealed that young people mostly use them. Students do not give up smartphones in classroom environments, mainly because they provide practical access to information. However, they are become busy both physically and mentally due to using their smartphones during the lesson (Dirik, 2016). As an extension of both internet access and the widespread use of smartphones, students started to exhibit new behaviors such as communicating with their friends, watching movies, listening to music through their smartphones during the lecture. Cyberloafing during the lectures is defined as the students' excessive and irrelevant use of the internet through the smartphone during the lectures. This is the in-class cyberloafing behavior that many students engage in, sometimes knowingly and sometimes unknowingly. The main cyberloafing behaviors during lecture are interactive cyberloafing, entertainment cyberloafing, and browsingrelated cyberloafing (Blau et al., 2006). If smartphones are used for educational purposes during the lecture, they provide students the information they want. In this case, these behaviors performed interactively or as browsing-related cyberloafing may give the students instrumental and informational support. However, cyberloafing during the lecture can lead to adverse psychological and social effects on students' learning processes (Aljomaa et al., 2016). Because entertainment and interactive cyberloafing can provide students with social support due to their social interaction, it may cause their distraction during the lecture (Ott et al., 2018;Ragan et al., 2014). As a result of technological advances and the overuse of smartphones, one of the issues that researchers have been working on extensively in recent times is the technological (say, smartphone and internet) addictions (Gökçearslan et al., 2018) and the effect of technology on student's social relations (Kalungu & Thinguri, 2017). In the study by Mete (2017), it was observed that PSS significantly predicted the internet addiction of university students. Tanrıkulu's (2019) study showed that the education faculty students' PSS levels significantly predicted the levels of social self-efficacy. Çivitci's (2015) study revealed that social support has a regulatory function in the relationship between self-esteem and constant anger. 
In Konan and Çelik's (2019) study, teacher candidates' social support perceptions significantly predicted their interaction anxiety in a negative way. In Zorlu-Yam and Tüzel-İşeri's study (2019), the levels of PSS by teacher candidates significantly predicted their social competence levels. A limited number of studies examine the relationship between perceptions of social support and smartphone cyberloafing (Konan & Çelik, 2019) and perceptions of social support and cyberloafing (Goekçearslan et al., 2018). Cyberloafing is one of the events that negatively affect the efficiency and productivity of learning and teaching activities in educational environments (Saritepeci, 2019). A negative relationship was found between in-lecture cyberloafing and academic performance. Moreover, many studies reveal the harmful effects of smartphone use during lectures (Rosen et al., 2011;Sana et al., 2013). Cyberloafing behavior in the class reduces students' active participation in learning activities (Heflin et al., 2017), consumes cognitive resources that can be used for classroom learning (Sana et al., 2013), and negatively affects students' lecture learning and academic achievement (Wu et al., 2018). The idea that determining the relationship between teacher candidates' cyberloafing behaviors during the lecture and their social support status would help understand student behavior and achievement inspired us to begin this study. In addition, the most crucial measure to be taken to prevent cyberloafing during the lecture is to inform and raise awareness of students. The most influential people who can do awareness-raising activities are teachers. Therefore, in this study, it is thought that working with teacher candidates who will touch the lives of a large part of society and be an effective source of social support when they are appointed to the profession will contribute more to the field. In addition, the most crucial measure to be taken to prevent cyberloafing during the lectures is to raise awareness of students. The most influential people who can do awareness-raising activities are teachers. For this reason, in this study, it would be more appropriate to work with teacher candidates who would touch the lives of a large part of the society and be a valuable source of social support when they were appointed to the profession and contribute more to the field. In this context, it can be said that PSS can affect the level of smartphone cyberloafing that teacher candidates would make during the lecture. It is also pointed out that social support harms young's negative behavior (Jackson & Warren, 2000). Therefore, teacher candidates' determination of the level of PSS and their perception of the source from which this support comes can guide our understanding of their behavior. Given the developments in today's mobile internet technology, it is expected that this study will provide clues on how to prevent this cyberloafing by revealing whether social support is among the reasons for using smartphones during lectures. In addition, it is thought that this study can give important ideas about whether smartphone cyberloafing, which is considered to have negative consequences for students, can be prevented with PSS. Additionally, research results show that the teacher candidates' gender (Arıkan & Özgür, 2019;Askew, 2012;Çok & Kutlu, 2018;Tanrıverdi & Karaca, 2018) and the grade they study in (Tanriverdi & Karaca, 2018;Yılmaz, 2017) cause differentiation in cyberloafing behaviors. 
It shows the importance of controlling these variables in determining the relationship between teacher candidates' PSS and cyberloafing. Therefore, in the present study, the teacher candidates' gender and grade were also included in the analysis as control variables. This study aimed to determine whether teacher candidates' PSS levels predict smartphone cyberloafing during the lecture. For this purpose, answers were sought to the following questions:  Is there any relationship among research variables (PSS, smartphone cyberloafing, teacher candidates' gender, and grade)?  Do teacher candidates' demographic characteristics predict smartphone cyberloafing during the lecture?  Do teacher candidates' PSS levels predict smartphone cyberloafing during the lecture? Research Model The relational survey model was used in this study. The relational survey model is used to determine the direction and degree of the relationship between two or more variables (Creswell, 2012). The relational survey model can determine relationships with statistical methods such as correlation and hierarchical linear regression. Therefore, in this study, the relationships between teacher candidates' demographic characteristics, social support, and smartphone cyberloafing levels in the research method were examined with this model. Research Sample The study population consisted of 1898 students studying at the education faculty in Elazig, located in the eastern part of Turkey, in the 2019-2020 academic year. Four hundred ninety-two students at 99% confidence level and 5% error level should be reached from the population. This study collected data from 497 students (99% confidence and 4.96% acceptable error level) selected from this population by the simple random sampling method. In simple random sampling, the researcher selects participants for the sample so that any individual has an equal probability of being selected from the population (Creswell, 2012). This study sample determination process formed a random coded student numbers Data Collection Tools The data was collected by the researchers with a data collection form. The Multidimensional Scale of PSS and Smartphone Cyberloafing Scale in Class were in this data collection form. The Multidimensional Scale of PSS: The scale developed by Zimet et al. (1988) has three subscales with 12 items, each addressing a different support source: family, friends, and significant other. In the present study, the internal consistency coefficients were calculated as .85 for the whole scale, .85 for the family support subscale, .89 for the friends support subscale, and .92 for the significant other subscale. Smartphone Cyberloafing Scale in Class (SPCSC): The six-point Likert scale was developed by Blau et al. (2006). The scale consists of three subscales with 16 items: cyberloafing related to browsing, interactive cyberloafing, and cyberloafing for entertainment. In the present study, the internal consistency coefficient of the whole scale was .92, and it was .88 for browsing-related cyberloafing, .86 for interactive cyberloafing, and .64 for entertainment-related cyberloafing. Ethical and Data Collection The ethical permission of the study was obtained from Fırat University Ethics Committee with the decision numbered 2020/11. Then, permission to apply the questionnaire was obtained from the dean of education faculty to apply the questionnaire forms. 
After receiving the necessary permissions, all participants were informed about the research purpose, and they voluntarily filled out the questionnaire. It took about 15 minutes to complete the questionnaire. Data Analysis SPSS 22 program was used in the analysis of the data. It was observed that the kurtosis and skewness coefficients of the subscales ranged between -1.25 and .98. Kurtosis and skewness coefficients in the range of ± 1.5 mean that the data meet the univariate normality condition (Tabachnick & Fidell, 2013). Then, by calculating the Mahalanobis distance, it was determined whether the multivariate normal distribution was satisfied or not. The Mahalanobis distance values and the chi-square values were compared (Can, 2017), and the dataset was found to satisfy the multivariate normal distribution (R² = 0.986; y = 0.9825x). On the other hand, Cook's distance was calculated and it was found that all values were below .05 and close to zero.These values also showed that multivariate normality was obtained (Seçer, 2015). Therefore, Pearson correlation analysis was conducted to determine the relationships among teacher candidates' demographic characteristics, PSS, and smartphone cyberloafing during the lecture. In correlations between variables, the Pearson correlation coefficient (r) is considered to be high between .70 and 1.00, moderate between .70 and .30, and low if less than .30 (Büyüköztürk, 2012). The hierarchical linear regression analysis was used to determine whether teacher candidates' demographic characteristics and PSS positively affect smartphone cyberloafing levels. The assumptions that normal distribution of data, no high-level correlation between independent variables, and no correlation between error terms were checked before regression analysis. Finally, suppose the tolerance value is greater than .10, and the VIF value is less than 10 in the regression analysis. In this case, there is no multicollinearity problem between the variables (see Table 2, Table 3, Table 4, and Table 5), and if the value of Durbin-Watson (dw) is around 2, it means that there is no autocorrelation (Can, 2017).In the hierarchical regression analysis, it was seen that the lowest tolerance value of the variables was .784, the highest VIF value was 1.275, and there was no multicollinearity problem. According to the analyzes, it was revealed that men received less family support, more interactive, entertainment, and smartphone cyberloafing in general than women. In addition, it was observed that as the grade increased, more significant other supports emerged that were more related to browsing, entertainment cyberloafing, and smartphone cyberloafing in general. The hierarchical linear regression analysis was performed to determine whether demographic characteristics and the PSS predicted the teacher candidates' cyberloafing during the lecture. The analysis results are presented in Table 2. Table 2 is examined, both models tested with hierarchical regression in two steps were significant as a whole. In the first step, demographic variables (gender and grade) and in the second step, PSS subscales (family, friend, and significant other) were included in the analysis. In the first step, the demographic variables included in the model together significantly (p < .05) predicted the browsing-related cyberloafing. 
When the significance of the regression coefficients of each variable was examined, it was seen that grade (β = .15; p < .01) variable significantly predicted browsing-related cyberloafing, but the predictive effect of the gender variable was not significant (β = .03; p > .05). Gender and grade variables together significantly explained about 2% (ΔR² = .02; p < .01) of browsing-related cyberloafing. In the second step of the analysis, the model variables obtained by including subscales of PSS (family, friend, and significant other) significantly predicted the browsing-related cyberloafing. When the regression coefficients of each variable in the model are examined, the predictive effects of grade (β = -.13; p < .01), family support (β = -.22; p < .01), and significant other support (β = .12; p < .01) variables on browsing-related cyberloafing were significant, but the predictive effects of gender (β = .01; p > .05) and friends support (β = -.05; p > .05) variables were not. PSS subscales added to the model at this step significantly explain approximately 7% of the model (ΔR² = .07; p < .01). All the independent variables of the research predicted 9% of browsingrelated cyberloafing (R² = .09). According to the t-test results, grade, family support, and significant other support subscales were meaningful predictors for browsing-related cyberloafing. Still, gender friends support subscales were not a significant predictor. The hierarchical linear regression analysis results related to predicting the teacher candidates' interactive cyberloafing by PSS and demographic characteristics were presented in Table 3. As seen in Table 3, gender and grade variables included in the model in the first step together significantly predicted the interactive cyberloafing model. However, when the significance of the regression coefficients of each variable was examined, gender (β = .15; p < .01) variable significantly predicted interactive cyberloafing, but the predictive effect of the grade variable was not significant(β = .08; p > .05). Gender and grade variables together explain approximately 3% (ΔR² = .03; p < .01) of interactive cyberloafing. Model variables obtained by including subscales of PSS in the analysis in the second step together significantly predicted interactive cyberloafing. However, when looking at the regression coefficients for each variable, gender (β = .12; p < .01), family support (β = -.15; p < .01), friends support (β = -.12; p < .05 ) and significant other support (β = .10; p < .05) variables significantly predicted interactive cyberloafing, but the predictive effect of grade variable was not significant (β = .07; p > .05). PSS subscales added to the model at this step significantly explain approximately 5% of the model (ΔR² = .05; p < .01). All the independent variables of the research predicted 8% of browsing-related cyberloafing (R² = .08). When the t-test results were examined, the gender and all three subscales was a meaningful predictor for entertainment cyberloafing. The hierarchical linear regression analysis results related to predicting the teacher candidates' entertainment cyberloafing by PSS and demographic characteristics were presented in Table 4. As seen in Table 4, gender and grade variables included in the first step model significantly predicted the interactive cyberloafing model. 
When the significance of the regression coefficients of each variable was examined, gender (β = .29; p < .01) and grade (β = .10; p < .05) variables were significant in predicting entertainment cyberloafing. Gender and grade variables together significantly explain approximately 10% (ΔR² = .10; p < .01) of entertainment cyberloafing. Model variables obtained by including subscales of PSS (family, friends, and significant other) in the analysis in the second step together significantly predicted entertainment cyberloafing. However, when looking at the regression coefficients for each variable, gender variable (β = .26; p < .01), family support (β = -.18; p < .01), and significant other support (β = .13; p < .01) variables significantly predicted entertainment cyberloafing, but the predictive effect of grade (β = .08; p > .05) and friends support (β = -.06; p > .05) was not significant. PSS subscales added to the model at this step significantly explain approximately 5% of the model (ΔR² = .05; p < .01). All the independent variables of the research predicted 15% of entertainment cyberloafing (R² = .15). When the t-test results were examined, gender, family support, and significant other support subscales were meaningful predictors for entertainment cyberloafing, but grade and friends support subscales were not. The hierarchical linear regression analysis results related to predicting the teacher candidates' smartphone cyberloafing during the lecture by PSS and demographic characteristics were shown in Table 5. As seen in Table 5, first step variables (gender and grade) together predicted the smartphone cyberloafing significantly. According to the significance of the regression coefficients of each variable, gender (β = .14; p < .01) and grade (β = .13; p < .01) variables were significant in predicting smartphone cyberloafing. Gender and grade variables together significantly explain approximately 4% (ΔR² = .04; p < .01) of smartphone cyberloafing. Model variables obtained by including subscales of PSS in the second step analysis significantly predicted the smartphone cyberloafing. According to the regression coefficients of each variable, gender (β = .10; p < .05), grade (β = .11; p < .05), family support (β = -.21; p < .01), and significant other support (β = .13; p < .01) variables significantly predicted the smartphone cyberloafing, but friends support (β = -.09; p > .05) was not significantly predicted. PSS subscales added to the model at the second step significantly explain approximately 7% of the model (ΔR² = .07; p < .01). All the independent variables of the research predicted 11% of smartphone cyberloafing (R² = .11). When the t-test results were examined, gender, grade, family support, and significant other support subscales were meaningful predictors for smartphone cyberloafing, but the friends support subscale was not. Discussion, Conclusion, and Recommendations Cyberloafing during the lecture negatively affects the academic success of the students. The teachers have to prevent this type of loaf, and teachers should support students about this wrong behavior by giving practical suggestions and exhibiting correct behaviors. For this reason, this study, which was conducted to determine how much PSS explains university students' use of smartphones during the lectures, aimed to examine teacher candidates' perspectives. 
According to this study's first finding, there was a negative and low relationship between the teacher candidates' PSS and smartphone cyberloafing during the lecture. This finding showed that as teacher candidates' PSS increased, their tendency to make cyberloafing during the class decreased. Studies have also shown that there is a negative relationship between PSS and cyberloafing (Goekçearslan et al., 2018), social media addiction (Bilgin & Taş, 2018), smartphone addiction (Konan & Çelik, 2019), Internet addiction (Esen & Guendoğdu, 2010;Oktan, 2015;Shaw & Gant, 2002;Tanrıverdi, 2012), and digital gambling addiction (Barut, 2019;Yavuz, 2018;Yıldırım, 2019). This finding of the current study is supported by the studies' results in the literature. According to the results obtained from the study, it was revealed that male teacher candidates received less family support and more interactive, entertainment, and smartphone cyberloafing than females. It can be said that men receive less family support due to the Turkish family structure. Because in the Turkish family structure, the man himself is the source of support (Yapıcı, 2010). The reason why male teacher candidates cyberloaf with smartphones can also be explained by the fact that men are more likely to be addicted to the Internet. This is because according to the meta-analysis study by Su et al. (2020), there are gender differences in certain behaviors/disorders of internet use worldwide, specifically, males show more behaviors related to internet use disorder and social media addiction than females do. Senel et al. (2019) also stated in his study that one of the important predictors of cyberloafing is gender and that men show more cyberloafing behavior, and as a result, it is supported by many studies. Dursun et al (2018) also emphasized that school grade should be considered as a determinant of cyberloafing status in educational studies dealing with cyberloafing behavior. In this study, it was found that as grade level increased, more significant other support emerged, more browsing-related, entertainment-related cyberloafing, and smartphone cyberloafing in general.It can be interpreted as that they tend to establish relationships with private individuals rather than family and friends. In this study, the reason why university students' smartphone cyberloafing levels tend to increase as college students' grades increase may be because teacher candidates prefer to use the internet to browse their lessons or have fun rather than use the internet to interact with others. In addition, it can be said that upper-grade students are more accustomed to university and classroom environments, and they are more self-confident than a lower grade. Senel et al. (2019) while internet use is an obstacle in front of the learning process with cyberloafing behavior; On the other hand, he stated that he could be a significant supporter. In the study conducted by Akgün (2020), it was revealed that as the grade level increased, the level of cyberloafing in the lessons increased. According to another result, the PSS is negatively and slightly related to interactive cyberloafing, but not to cyberloafing related to surfing and to cyberloafing related to entertainment. In Gökçearslan et al. (2018) study, it was found that social support has a significant effect on cyberloafing. Kim (2017) revealed that those with high loneliness tend to rely more on smartphone-mediated communication. 
This finding of the current study coincides with the definition of perceived social support as a perception arising from individuals' communication with each other. Positive interaction is an essential factor in feeling socially supported. Unlike the other types of cyberloafing, interactive cyberloafing during lectures is significantly and negatively related to PSS: students' need to interact with others decreases as their PSS increases. This finding shows that lecturers' interest in and support of teacher candidates is important for preventing smartphone cyberloafing during lectures. With adolescence, the influence of friends in young people's lives increases. Adolescents and teens prefer to share their problems with their friends rather than their families; parents still have an impact on their children, but friends' influence starts to grow beyond that of families (Gunuc & Dogan, 2013; Muus, 1980; Rosen, 1965). This study supports these claims. The results showed a negative and low relationship between browsing-related, interactive, and entertainment cyberloafing and support from family and friends. According to this finding, as the social support that teacher candidates receive from their families or friends increases, their tendency to engage in smartphone cyberloafing in class decreases, and vice versa. Students supported by their families and friends appear to avoid smartphone cyberloafing behaviors that would reduce their motivation and prevent them from understanding the subject matter. This can be interpreted as family support shielding teacher candidates from behavior unrelated to the lecture subject. Gunuc and Dogan (2013) found that adolescents who spend time with their mothers have a higher level of PSS and a lower level of internet addiction, and stated that the many activities adolescents carry out with their mothers increase their PSS levels. Dokmen (1994) specified that sharing problems and communicating within the family during adolescence is believed to positively influence adolescents' psychology. In the study of Pawlowska et al. (2018), digital game addiction was higher among adolescents from families with communication problems. In addition, Kwon et al. (2011) suggested that family relationships matter more than friend relationships in computer game addiction. Hupcey (2000) stated that individuals who felt bad before talking to their families felt happy and encouraged after talking to them or receiving their support. In Esen and Gündoğdu's (2010) study, adolescents' Internet addiction decreased as support from family and teachers increased. In the study by Yıldırım (2019), a negative relationship was found between the level of online gaming addiction and perceived social support from family, friends, and teachers. These results are consistent with the findings of the current study. It was further found that there was a positive and low relationship between significant other support and both entertainment-related and browsing-related cyberloafing, whereas there was no meaningful relationship between significant other support and interactive cyberloafing. These results show that teacher candidates who feel supported by a significant other are more likely to engage in browsing-related and entertainment-related cyberloafing.
In Büyükçolpan's (2019) study, there was a positive relationship between nomophobia and significant other support. When the significant other is a faculty member, it can be concluded that pre-service teachers tend to engage more in browsing for information when they need to learn something during class or when they feel that the teacher provides the necessary support. When teacher candidates are distracted, bored with the lesson, or do not feel the faculty member's support, they mostly engage in entertainment cyberloafing during the lecture. If, however, the significant other is a romantic partner, the teacher candidate may want to communicate with that partner and seek his or her support, and may be drawn into chatting with him or her during the lesson; this can cause more cyberloafing behavior during lectures. Who is regarded as the significant other can therefore change the effect of PSS on cyberloafing. According to another result, the subscales of the PSS together significantly predicted both overall cyberloafing and its three subscales. In Mete's (2017) study, the PSS variable significantly predicted university students' internet addiction. In Konan and Çelik's (2018) study, teacher candidates' social support perceptions negatively and meaningfully predicted their smartphone addiction. Gökçearslan et al. (2018) stated that social support has a small but significant effect on cyberloafing. This finding shows that social support is one of the variables explaining smartphone cyberloafing during lectures. While the family and significant other subscales of the PSS significantly predicted interactive, browsing-related, and entertainment-related cyberloafing, the friends subscale significantly predicted only interactive cyberloafing. Moreover, family and significant others were significant predictors of cyberloafing during lectures, but friends did not have a significant predictive effect. According to these findings, the most important influence for reducing teacher candidates' cyberloafing is the support provided by their families. While the level of cyberloafing during lectures decreases with the support of family and friends, it can increase with the support of significant others. In addition, interactive cyberloafing decreases significantly with perceived support from friends. This may be because teacher candidates share the classroom environment with most of their friends, who provide the necessary support during the lecture itself. Esen and Gündoğdu (2010) suggested that when students perceive their families' support as more important than others', they can protect themselves more easily from the harms of digitalization. Teacher candidates' belief that they will get support from their families when they need it may also lead to a significant decrease in undesirable behaviors such as smartphone surfing during lectures. Drouin and Landgraff (2012) stated that young people today conduct their romantic relationships through messaging and can meet their needs for attention and love with the texts and pictures they send each other. Büyükçolpan (2019) stated that dating relationships can affect university students' smartphone use and that smartphones are used to meet the daily communication needs of partners who are far from each other. This finding showed that teacher candidates use their smartphones during lectures to reach different sources of support for different purposes. Finally, PSS has also been described as satisfaction derived from social relationships (Kaya et al., 2015).
As understood from this definition, PSS can be said to be one of the important factors shaping an individual's behaviour. Teacher candidates' perception of faculty members as a source of social support, and the strong social relationships they establish with them, can positively affect (i.e., reduce) smartphone cyberloafing in the classroom environment. Haşimoğlu and Aslandoğan (2018) found that students who engage in cyberloafing in their classes feel supported by teachers, and that the self-esteem of students who do not feel teachers' support and are exposed to violence can be damaged. According to Dirik (2016), virtual media addictions (such as smartphone and social media addiction) increase among students whose self-confidence decreases, and this is reflected during lectures. Martinez et al. (2011) stated that students' problematic behaviors increase as teacher support decreases, and in Altunbaş's (2002) study, the support students perceived from teachers motivated and encouraged them. Gökçearslan et al. (2018) stated that poorly planned classes and problems related to campus life might also cause cyberloafing. As a result of this study, family and friend support were negatively related to cyberloafing during lectures, while significant other support was positively related. According to another result, social support perceived from the family significantly predicts smartphone cyberloafing during lectures and has a preventive effect. On the contrary, individuals who perceive social support from significant others have an increased tendency to cyberloaf in class. Perceived social support from friends also significantly predicts interactive cyberloafing. Parents' trust in teacher candidates, and the feeling that parents will help them when they are stressed, may lead them to focus more on lectures. Thus, individuals who limit their smartphone cyberloafing behavior can be expected to use their smartphones for educational purposes instead. This study showed that the social support teacher candidates perceive from their families can reduce their cyberloafing during lectures. The communication established within the family positively affects the psychology and development of the individual, and social support from families may reduce many destructive student behaviors, including cyberloafing during lectures. That is why awareness of family support is important for preventing smartphone cyberloafing in class. In this respect, it is undeniable that time spent with parents and family is vital in preventing smartphone cyberloafing, as with many addictions. In this context, families should engage in various activities with teacher candidates. With the transition to university, the influence of the family on the individual begins to be replaced by that of friends; however, the influence of families never completely disappears. For this reason, families should guide their children's smartphone use even when the children are away from them. In this case, prospective teachers will use their smartphones in a healthier and more appropriate way, and will be able to give the students they train in their professional lives practical suggestions about using many technological tools, including smartphones. Since teacher candidates who study in provinces far from their families are far from family oversight, it is possible that they will be adversely affected by some people and use their smartphones during lectures.
In this case, universities and national education directorates have various duties regarding teacher candidates' effective and correct use of smartphones. To this end, universities and national education directorates should organize seminars or informative meetings on how important social support is to students. Educational institutions should develop policies that involve families at every stage of education, and decisions on problematic issues should be taken with support from experts, institutions, or organizations. Various activities should also be organized to give teacher candidates competencies related to the effective use, and the prevention of misuse, of technologies such as smartphones and the internet. In this study, data were collected using scales, and it was found that the level of cyberloafing during lectures decreased with the support of family and friends while it increased with the support of a significant other. The significant other in the scale items could be the fiancé(e), romantic partner, neighbor, doctor, or teacher of the teacher candidate. In this context, knowing which person the teacher candidates have in mind as the significant other is important for interpreting the study results. Therefore, in order to strengthen the validity of this result, qualitative studies in which teacher candidates' opinions are obtained, and mixed studies in which qualitative and quantitative data are collected together, can be conducted. The study was conducted only with students of the education faculty of a single university; to increase the generalizability of the results, the opinions of students at other universities should be obtained.
Finite-element formulae for calculating the sectional forces of a Bernoulli-Euler beam on continuously viscoelastic foundation subjected to concentrated moving loads

In the finite element method, the conventional formulae for calculating the sectional forces, i.e., the bending moment and the shear force, at any cross-section of a Bernoulli-Euler beam under dynamic loads have shortcomings. This paper presents new finite-element formulae that overcome the shortcomings of the conventional ones for calculating the sectional forces at any cross-section of a Bernoulli-Euler beam on a continuously viscoelastic foundation subjected to concentrated moving loads. The proposed formulae degenerate easily into formulae for the sectional forces of a simply supported or continuous Bernoulli-Euler beam subjected to concentrated moving loads, and into formulae for the sectional forces of a Bernoulli-Euler beam on a Winkler foundation under static loads. Five numerical examples, including static and dynamic analyses, illustrate the application of the proposed formulae. The numerical results show that (1) compared with the conventional formulae, the proposed formulae improve the calculation accuracy of the sectional forces of the beam, and (2) one should use the proposed formulae, not the conventional formulae, to calculate the sectional forces at any cross-section of a Bernoulli-Euler beam.

Introduction

The dynamic analysis of a beam on an elastic foundation, or of a simply supported beam subjected to dynamic loads, is widespread in engineering. Here only some of the relevant literature is mentioned. Many researchers [5,8,9,11,12,15,16,20,22] applied analytical methods to investigate this dynamic problem. However, for most engineering problems one must rely on numerical methods, since analytical methods are usually not available. Among the numerical methods, the finite element method (FEM) is powerful because it allows the solution of complex engineering problems. Some researchers [10,13,14,18,19,23-26] applied the FEM to study the foregoing dynamic problem. It is well known that in the design of a beam on an elastic foundation or of a simply supported beam subjected to moving loads, the most important considerations are the vertical deflection and the bending stress, the latter being the bending moment divided by the section modulus of the beam. References [14,18,23,25,26] did not consider the bending moment of the beam. References [10,13,19,24] reported the bending moment of the beam, but they did not give the formula used to calculate it.
In a dynamic analysis using the FEM, if the following formulae (1)-(4) are used to calculate the sectional forces of the cross-section at any point in a Bernoulli-Euler beam element on a continuously viscoelastic foundation subjected to concentrated moving loads, errors may appear in the numerical results because of shortcomings of these formulae:

M(ξ, t) = −EI ∂²y(ξ, t)/∂ξ² = −EI N'' q^e for the right-hand side of the cross-section at any point, (1)
Q(ξ, t) = EI ∂³y(ξ, t)/∂ξ³ = EI N''' q^e for the right-hand side of the cross-section at any point, (2)
M(ξ, t) = EI ∂²y(ξ, t)/∂ξ² = EI N'' q^e for the left-hand side of the cross-section at any point, (3)
Q(ξ, t) = −EI ∂³y(ξ, t)/∂ξ³ = −EI N''' q^e for the left-hand side of the cross-section at any point, (4)

where M(ξ, t) and Q(ξ, t) denote the bending moment and the shear force of the cross-section at any point and time t, respectively, and the positive directions of M(ξ, t) and Q(ξ, t) for the right-hand side and the left-hand side of the cross-section are shown in Fig. 1(a) and (b), respectively; ξ denotes the local coordinate measured from the left node of the beam element; EI denotes the constant bending stiffness of the beam; N denotes the shape matrix of the beam element; q^e denotes the nodal displacement vector of the beam element; the superscript prime denotes differentiation with respect to ξ; and y(ξ) denotes the vertical deflection at any point in the beam element. y(ξ) can be expressed as

y(ξ) = N q^e. (5)

If the cubic Hermitian polynomials [3] are used as the shape functions of a beam element, the shape matrix N of the beam element can be written as

N = [N₁ N₂ N₃ N₄], (6)

with the standard cubic Hermitian shape functions

N₁ = 1 − 3(ξ/l)² + 2(ξ/l)³, N₂ = ξ(1 − ξ/l)², N₃ = 3(ξ/l)² − 2(ξ/l)³, N₄ = (ξ²/l)(ξ/l − 1),

where l denotes the length of the beam element. In this paper, the formulae (1)-(4) are referred to as the conventional ones for calculating the sectional forces of a Bernoulli-Euler beam under dynamic loads. The shortcomings of using the formulae (1)-(4) to calculate the sectional forces of a beam element on a continuously viscoelastic foundation subjected to concentrated moving loads are as follows. First, there are shortcomings in using the formulae (1)-(4) to calculate the sectional forces at the two nodes of a beam element. The formulae (1) and (2) for the sectional forces at the left node (i.e., ξ = 0) and the formulae (3) and (4) for the sectional forces at the right node (i.e., ξ = l) of a beam element only consider the sectional forces induced by the nodal displacements of the corresponding beam element, because the formulae (1) and (2) with ξ = 0 and the formulae (3) and (4) with ξ = l can be expressed as the product of the element stiffness matrix and the nodal displacements of the beam element, i.e., f_d^e = k_b q^e, in which f_d^e denotes the sectional forces of the beam element induced by its nodal displacements, and k_b denotes the element stiffness matrix of the beam element itself. The sectional forces of the cross-sections at the element nodes induced by the nodal accelerations and by the foundation spring and damping forces acting on the corresponding beam element are not considered. In addition, the fixed-end sectional forces, given by reference [17], at the two ends of a clamped-clamped (C-C) beam element induced by the moving loads acting on the beam element at time t are also not considered.
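As a concrete illustration of the conventional formulae (1)-(4), the following Python sketch evaluates the bending moment and shear force inside one element from its nodal displacement vector, using the second and third derivatives of the cubic Hermitian shape functions. The element length and nodal displacements are illustrative values, not data from the paper.

```python
import numpy as np

def hermite_d2(xi, l):
    """Second derivatives N'' of the cubic Hermitian shape functions at xi."""
    return np.array([
        -6.0 / l**2 + 12.0 * xi / l**3,
        -4.0 / l + 6.0 * xi / l**2,
        6.0 / l**2 - 12.0 * xi / l**3,
        -2.0 / l + 6.0 * xi / l**2,
    ])

def hermite_d3(l):
    """Third derivatives N''' (constant over a cubic element)."""
    return np.array([12.0 / l**3, 6.0 / l**2, -12.0 / l**3, 6.0 / l**2])

def conventional_forces(EI, l, q_e, xi):
    """Formulae (3)-(4): M = EI*N''*q_e, Q = -EI*N'''*q_e (left-hand side)."""
    M = EI * hermite_d2(xi, l) @ q_e
    Q = -EI * hermite_d3(l) @ q_e
    return M, Q

# Illustrative numbers only: q_e = [y_left, theta_left, y_right, theta_right].
EI, l = 2.0e7, 1.0
q_e = np.array([0.0, 0.0, -1.0e-3, -1.5e-3])
print(conventional_forces(EI, l, q_e, xi=0.5 * l))
```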
Then, there are shortcomings in using the formulae (3) and (4) to calculate the sectional forces at any point A, rather than at a node, of a beam element. The formulae (3) and (4) for the sectional forces of the left-hand side of the cross-section at any point A of a Bernoulli-Euler beam element at time t can be written as, respectively,

M_A(t) = M_l^e(t) + Q_l^e(t) ξ_A, (7)
Q_A(t) = Q_l^e(t), (8)

where M_l^e and Q_l^e denote the bending moment and shear force at the left node induced by the nodal displacements of the element (i.e., obtained from f_d^e = k_b q^e), and ξ_A denotes the distance from the left node to the point A. The formula (7) only considers the effects of the bending moment and the shear force at the left node of the beam element, which are induced by the nodal displacements of the corresponding element, on the bending moment of the left-hand side of the cross-section at the point A. In the formula (7), the effects of the fixed-end bending moment and shear force at the left end of the C-C beam element, induced by the moving loads acting on the beam element at time t, on the bending moment of the left-hand side of the cross-section at the point A are not considered, although they should be. In addition, the effects on this bending moment of the moving loads acting on the beam element between the left node and the point A, of the inertia force in the beam element between the left node and the point A, and of the foundation spring and damping forces acting on the beam element between the left node and the point A, are also not considered. Similarly, the formula (8) only considers the effect of the shear force at the left node of the beam element, which is induced by the nodal displacements of the corresponding element, on the shear force of the left-hand side of the cross-section at the point A. In the formula (8), the effect of the fixed-end shear force at the left end of the C-C beam element, induced by the moving loads acting on the beam element at time t, on the shear force of the left-hand side of the cross-section at the point A is not considered, although it should be. In addition, the effects on this shear force of the moving loads acting on the beam element between the left node and the point A, of the inertia force in the beam element between the left node and the point A, and of the foundation spring and damping forces acting on the beam element between the left node and the point A, are also not considered.

The purpose of this paper is to present new finite-element formulae for calculating the sectional forces at any cross-section of a Bernoulli-Euler beam on a continuously viscoelastic foundation subjected to concentrated moving loads. The new finite-element formulae overcome the foregoing shortcomings of the conventional formulae and improve the accuracy of the calculated sectional forces of the beam. Evidently, the proposed formulae degenerate easily into formulae for the sectional forces of a simply supported or continuous Bernoulli-Euler beam subjected to concentrated moving loads, and into formulae for the sectional forces of a Bernoulli-Euler beam on a Winkler foundation under static loads.

Fundamental assumptions

The following assumptions are made in establishing the formulae for calculating the sectional forces of a beam on a viscoelastic foundation subjected to concentrated moving loads: (1) only vertical dynamic loads are considered; (2) axial deformations and the damping of the beam are neglected;
(3) the beam is modelled as a uniform Bernoulli-Euler beam; (4) the viscoelastic foundation is modelled as closely spaced, independent, linear springs and viscous dampers; (5) the cubic Hermitian polynomials [3] are used as the shape functions of a beam element.

Equation of motion for a beam on continuously viscoelastic foundation subjected to concentrated moving loads

Consider a uniform elastic Bernoulli-Euler beam with constant bending stiffness EI resting on a continuously viscoelastic foundation with stiffness coefficient k_w and damping coefficient c_w, subjected to a number of concentrated moving loads, as shown in Fig. 2. The beam is divided into a number of finite elements of equal length l. The solid circles (•) denote the nodes of the beam elements. Since the axial deformations of the beam are neglected, each node of a beam element has two degrees of freedom, i.e., a vertical translation and a rotation about an axis normal to the plane of the paper. According to the energy principle, the equation of motion for the beam on a continuously viscoelastic foundation subjected to a number of concentrated moving loads at time t can be written as

M q̈ + C q̇ + K q = Σ_{i=1}^{n̄} N_i^T P_i, (9)

where M, C, and K of order n × n are the overall mass, damping and stiffness matrices, respectively; q̈, q̇, and q of order n × 1 are the acceleration, velocity and displacement vectors, respectively; the superscript T denotes the transpose; N_i^T is the transpose of the shape functions of the beam element, evaluated at the position of the i-th concentrated moving load P_i at time t; P_i is the magnitude of the i-th concentrated moving load; n is the total number of degrees of freedom of the beam; and n̄ is the total number of concentrated moving loads.

In Eq. (9), the overall mass matrix M can be obtained by assembling the element consistent mass matrices m. Since the damping of the beam itself is neglected, the overall damping matrix C only includes the effect of the viscous damping of the foundation; C can be obtained by assembling the foundation element damping matrices c_w induced by the viscous damping of the foundation supporting each beam element. The overall stiffness matrix K can be obtained by assembling the element stiffness matrices k_b of the beam elements themselves and the foundation element stiffness matrices k_w due to the elastic foundation supporting each beam element. The expressions for the element matrices m, c_w, k_b and k_w are listed in the Appendix.

In addition, N_i of order 1 × n in Eq. (9) can be written as

N_i = [0 ⋯ 0 N₁(ξ_i) N₂(ξ_i) N₃(ξ_i) N₄(ξ_i) 0 ⋯ 0], (10)

where ξ_i denotes the distance between the acting point of the i-th concentrated moving load P_i and the left node of the beam element on which the load P_i acts at time t, as shown in Fig. 2. It should be noted that N_i is a row matrix with zero entries except for the terms corresponding to the two nodes of the element on which the i-th concentrated moving load P_i acts. N_i is time dependent as the load P_i moves from one position to another within an element. As the load P_i moves to the next element, N_i shifts in position to the degrees of freedom of the element on which the load P_i is positioned.
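Forming the right-hand side of Eq. (9) at each time step amounts to locating the element currently under each load and scattering the shape functions, evaluated at the local coordinate ξ_i of Eq. (10), into that element's four degrees of freedom. A minimal Python sketch of this bookkeeping follows; the numerical values are illustrative.

```python
import numpy as np

def hermite(xi, l):
    """Cubic Hermitian shape functions N1..N4 at local coordinate xi."""
    s = xi / l
    return np.array([1 - 3*s**2 + 2*s**3,
                     xi * (1 - s)**2,
                     3*s**2 - 2*s**3,
                     (xi**2 / l) * (s - 1)])

def load_vector(x_load, P, l, n_elem):
    """Assemble N_i^T * P_i for one concentrated load at global position x_load."""
    n_dof = 2 * (n_elem + 1)            # two DOFs (deflection, rotation) per node
    f = np.zeros(n_dof)
    if not (0.0 <= x_load <= n_elem * l):
        return f                         # load is off the beam: zero vector
    e = min(int(x_load // l), n_elem - 1)   # element index under the load
    xi = x_load - e * l                     # local coordinate within that element
    f[2*e:2*e + 4] = P * hermite(xi, l)     # scatter into the element's 4 DOFs
    return f

# Illustrative: a 98 kN load 3.2 m along a beam of 10 elements of length 1 m.
print(load_vector(x_load=3.2, P=98000.0, l=1.0, n_elem=10))
```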
Formulae for calculating the sectional forces of a beam

By introducing the boundary conditions of the beam, Eq. (9) can be solved by the Wilson-θ method or similar methods [1] to obtain the generalized displacements, velocities and accelerations of each node of the Bernoulli-Euler beam at time t. The sectional forces at any cross-section of the beam at time t can then be obtained by the following procedure.

First, let us consider how to calculate the sectional forces at the cross-sections of the two nodes of a beam element. The sectional forces at the cross-sections of the two nodes of a typical beam element at time t, as shown in Fig. 3, can be expressed as

f^e = m q̈^e + k_b q^e + c_w q̇^e + k_w q^e + Σ_{i=1}^{h} f⁰_i, (11)

where f^e is the vector of sectional forces at the two nodes of the beam element, f^e = [Q_l^e M_l^e Q_r^e M_r^e]^T; Q_l^e and M_l^e are the shear force and bending moment at the left node of the beam element, respectively; Q_r^e and M_r^e are the shear force and bending moment at the right node of the beam element, respectively; the positive directions of Q_l^e, M_l^e, Q_r^e, and M_r^e are shown in Fig. 3; q̈^e, q̇^e, and q^e of order 4 × 1 are the nodal acceleration, velocity and displacement vectors of the beam element, respectively, obtained by solving Eq. (9); h is the total number of concentrated moving loads acting on the beam element at time t; and f⁰_i is the fixed-end force vector [17] at the two ends of the C-C beam element induced by the i-th concentrated moving load P_i acting on the beam element at time t. It should be noted that f⁰_i and the equivalent nodal force vector induced by the i-th concentrated moving load P_i acting on the C-C beam element at time t are equal in magnitude but opposite in direction. Thus, the expression for f⁰_i can be written as

f⁰_i = −P_i N^T(ξ_i), (12)

and f⁰_i becomes the zero vector if the i-th concentrated moving load P_i acts outside the beam element.

The first and second terms on the right-hand side of Eq. (11) denote the vectors of the sectional forces at the two nodes of the beam element induced by the nodal accelerations and the nodal displacements of the beam element, respectively. The third and fourth terms on the right-hand side denote the vectors of the sectional forces at the two nodes of the beam element induced by the continuously distributed damping force and spring force under the beam element, respectively. The fifth term on the right-hand side denotes the sum of the fixed-end force vectors at the two ends of the C-C beam element induced by the h concentrated moving loads acting on the beam element at time t.

Then, let us consider how to calculate the sectional forces at a point within a beam element rather than at a node. These sectional forces can be obtained from the equilibrium condition of the forces acting in the vertical direction and the equilibrium condition of the bending moments. For example, assume that a point A is located between two adjacent nodes of the beam, as shown in Fig. 4, and that there are h̄ concentrated moving loads between the left node of the beam element and the point A at time t. The shear force Q_A and bending moment M_A at the point A are then given by expressions (13a) and (14a), obtained from these equilibrium conditions for the beam segment between the left node and the point A, where ξ_A denotes the distance between the left node of the beam element and the point A. The positive directions of the shear force Q_A and bending moment M_A at the point A are shown in Fig. 4. It should be noted that Q_l^e and M_l^e in the formulae (13a) and (14a) have already been obtained by solving the formula (11). If m, c_w, and k_w are constant, the formulae (13a) and (14a) can be written in the simplified forms (13b) and (14b), respectively. It should be pointed out that the formula (11) only calculates the sectional forces at the two nodes of a beam element, whereas the formulae (13) and (14) calculate the sectional forces at any point in a beam element other than the two nodes.
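The following Python sketch assembles the nodal sectional forces of one element according to the five-term structure of formula (11). The element matrices are the standard consistent matrices for a cubic Hermitian element (the paper lists its own expressions in an Appendix not reproduced in this extraction), the output ordering and signs follow the assumed conventions of Fig. 3 and Eq. (12), and all numerical values are illustrative.

```python
import numpy as np

def hermite(xi, l):
    """Cubic Hermitian shape functions N1..N4 at local coordinate xi."""
    s = xi / l
    return np.array([1 - 3*s**2 + 2*s**3, xi*(1 - s)**2,
                     3*s**2 - 2*s**3, (xi**2/l)*(s - 1)])

def consistent_matrix(coef, l):
    """Consistent 4x4 matrix (coef*l/420)*[...]; with coef = mass per length it
    is the element mass matrix, and with coef = k_w or c_w it gives the
    foundation spring or damping matrix of the element."""
    return (coef * l / 420.0) * np.array([
        [156,    22*l,    54,   -13*l],
        [22*l,  4*l*l,   13*l, -3*l*l],
        [54,     13*l,   156,  -22*l],
        [-13*l, -3*l*l, -22*l,  4*l*l]])

def beam_stiffness(EI, l):
    """Standard Bernoulli-Euler beam element stiffness matrix k_b."""
    return (EI / l**3) * np.array([
        [12,    6*l,  -12,    6*l],
        [6*l, 4*l*l, -6*l,  2*l*l],
        [-12,  -6*l,   12,   -6*l],
        [6*l, 2*l*l, -6*l,  4*l*l]])

def element_forces(EI, l, m_bar, c_w, k_w, q, qd, qdd, loads):
    """Five-term sum of formula (11): inertia + beam stiffness + foundation
    damping + foundation spring + fixed-end forces of the moving loads.
    `loads` is a list of (P_i, xi_i) pairs acting on this element."""
    f = (consistent_matrix(m_bar, l) @ qdd + beam_stiffness(EI, l) @ q
         + consistent_matrix(c_w, l) @ qd + consistent_matrix(k_w, l) @ q)
    for P, xi in loads:
        f -= P * hermite(xi, l)   # f0_i = -P_i N^T(xi_i), Eq. (12)
    return f                       # [Q_l, M_l, Q_r, M_r] up to the paper's signs

print(element_forces(EI=2.0e7, l=1.0, m_bar=300.0, c_w=1.0e3, k_w=4.0e6,
                     q=np.array([0.0, 0.0, -1e-3, -1.5e-3]),
                     qd=np.zeros(4), qdd=np.zeros(4),
                     loads=[(98000.0, 0.4)]))
```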
By using the formulae (11), (13) and (14), one can obtain the sectional forces at any cross-section of a Bernoulli-Euler beam on a continuously viscoelastic foundation subjected to a number of concentrated moving loads at time t. Furthermore, for the "static" problem one has q̇^e = q̈^e = 0, and the formulae (11), (13) and (14) reduce to static counterparts, denoted (15), (16a) and (17a), respectively, in which the inertia and damping terms vanish. If k_w is constant, the formulae (16a) and (17a) can be written in the simplified forms (16b) and (17b), respectively. In addition, if k_w = 0 and c_w = 0 are used in the formulae (11), (13) and (14), the revised formulae can be used to calculate the sectional forces at any cross-section of a simply supported or continuous Bernoulli-Euler beam subjected to a number of concentrated moving loads.

Numerical examples

Five numerical examples, including static and dynamic analyses, are chosen to illustrate the application of the proposed formulae. In the finite element analyses, the equation of motion for the dynamic cases is solved by means of the Wilson-θ method with θ = 1.4.

Example 1. A simply supported beam under uniformly distributed static load

Figure 5 shows a simply supported beam under a uniformly distributed static load p = 9.8 × 10⁴ N/m. The parameters of the beam, taken from reference [4], are: length L = 0.2 m, cross-section area 2.0 × 10⁻⁴ m² (0.02 m deep × 0.01 m wide), and E = 9.8 × 10¹⁰ N/m². For the case of a simply supported beam under a uniformly distributed static load, the formula (11) for the sectional forces at the two nodes of a beam element can be revised as

f^e = k_b q^e + f⁰, (18)

where f⁰ denotes the fixed-end force vector at the two ends of the C-C beam element induced by the uniformly distributed static load, which can be written as

f⁰ = −p [l/2  l²/12  l/2  −l²/12]^T, (19)

and the formulae (13) and (14) for the sectional forces at a point A within a beam element can be revised accordingly as formulae (20) and (21).

Bending moment diagrams and shear force diagrams of the simply supported beam are plotted in Figs 6 and 7, respectively, including the closed-form solutions and the finite element solutions given by the proposed formulae with 1 element and by the conventional formulae [4] with 2 elements of equal length. It can be seen from Figs 6 and 7 that the finite element solutions given by the proposed formulae with 1 element are identical to the closed-form solutions; the same agreement, however, is not found between the finite element solutions given by the conventional formulae with 2 elements and the closed-form solutions. The reasons are as follows. The conventional formula used in reference [4] is

f^e = k_b q^e. (22)

Compared with the proposed formula (18), the conventional formula (22) neglects f⁰ in calculating the sectional forces at the two nodes of a beam element. In addition, compared with the proposed formulae (20) and (21), the conventional formula (22) does not consider (i) the sectional forces at the point A contributed by the fixed-end forces at the left end of the C-C beam element induced by the uniformly distributed load acting on the beam element, and (ii) the sectional forces at the point A induced by the uniformly distributed load acting on the beam element between the left node and the point A.
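For Example 1 the closed-form solutions are elementary, so the finite element results can be checked directly: for a simply supported beam of span L under a uniform load p, M(x) = p x (L − x)/2 and Q(x) = p(L/2 − x). A short Python sketch with the stated parameters:

```python
import numpy as np

p, L = 9.8e4, 0.2                 # uniform load (N/m) and span (m) from Example 1

x = np.linspace(0.0, L, 5)
M = p * x * (L - x) / 2.0         # closed-form bending moment (sagging positive)
Q = p * (L / 2.0 - x)             # closed-form shear force

print("x (m): ", x)
print("M (Nm):", M)               # midspan value p*L^2/8 = 490 N*m
print("Q (N): ", Q)               # end value p*L/2 = 9800 N
```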
Example 2. A beam with free ends on Winkler foundation under a concentrated static load

Consider a uniform Bernoulli-Euler beam with free ends resting on a Winkler foundation under a concentrated static load P = 10,000 N acting at the beam midpoint, as shown in Fig. 8. The parameters in this study are: E (beam elastic modulus) = 9,100 × 10⁶ N/m², I (moment of inertia of the beam) = 7.326 × 10⁻⁵ m⁴, L (beam length) = 18.05 m, and k_w (stiffness coefficient of the Winkler foundation) = 4.0 × 10⁶ N/m². These parameters are taken from reference [7].

Table 1 reports the ratios of the numerical solutions to the exact solutions of the sectional forces at the point A, with the numerical solutions given, respectively, by the proposed formulae (16b) and (17b) and by the conventional formula (22) adopted in reference [7], using the cubic Hermitian polynomials as the shape functions of a beam element. The distance between the point A and the beam midpoint is 0.5415 m, as shown in Fig. 8. The exact solutions can be obtained using the formulae presented in reference [21]; it should be noted that the exact solutions do not depend on the number of elements. It can be seen from Table 1 that the bending moment and the shear force given by the proposed formulae (16b) and (17b) are nearly exact if 20 elements are used over the beam length L. The error of the shear force is 3 per cent, and the error of the bending moment is less than 8 per cent, if only 10 elements are used. However, the same agreement is not found for the results given by the conventional formula (22) adopted in reference [7]; this is due to the shortcomings of the conventional formulae.

Example 3. A simply supported beam subjected to a concentrated moving load

Consider a uniform simply supported Bernoulli-Euler beam with a span L of 20 m subjected to a concentrated moving load P = 215.6 kN moving at constant speed v = 60 m/s from the left end to the right end of the beam. The beam parameters are: E = 2.9430 × 10¹⁰ N/m², I = 3.81 m⁴, and m = 34,088 kg/m. In the finite element analysis, the beam is modelled with 2, 6 and 10 elements of equal length, respectively, and a time interval of 0.005 s is adopted. The bending moments at the midpoint cross-section of the beam given by the revised proposed formula (11), i.e., after deleting the third and fourth terms on the right-hand side of formula (11), with 2, 6 and 10 elements are plotted in Fig. 9, along with the analytical solution given by Eq. (23), taken from Timoshenko et al. [22], with i = 1-2000. From Fig. 9, good agreement is achieved between the present solutions with 2, 6 and 10 elements and the analytical solution.

The bending moments at the midpoint cross-section of the beam given by the revised proposed formula (11) and by the conventional formula (1) with 2, 6 and 10 elements are plotted in Figs 10-12, respectively. It can be seen that the difference between the solution given by the revised proposed formula (11) and that given by the conventional formula (1) increases as the element length increases. The reason for this difference is that the fixed-end bending moment at the left end of the (N/2+1)-th C-C beam element, induced by the load acting on the (N/2+1)-th beam element, is not considered in the conventional formula, where N denotes the total number of beam elements.
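Eq. (23) is the classical modal-superposition series for a simply supported beam traversed by a constant moving load. The Python sketch below evaluates the midpoint bending moment from the textbook form of that series under the stated Example 3 parameters, assuming zero initial conditions; since the paper's exact Eq. (23) is not reproduced in this extraction, this is the standard series, not necessarily the paper's notation.

```python
import numpy as np

# Example 3 parameters from the text.
E, I, m, L = 2.9430e10, 3.81, 34088.0, 20.0
P, v = 215.6e3, 60.0
EI = E * I

def midpoint_moment(t, n_modes=2000):
    """Textbook modal series for M(L/2, t) while the load is on the span
    (0 <= v*t <= L); zero initial conditions assumed."""
    i = np.arange(1, n_modes + 1, dtype=float)
    w = (i * np.pi / L) ** 2 * np.sqrt(EI / m)   # natural circular frequencies
    Om = i * np.pi * v / L                       # "driving" frequencies of the load
    amp = (2.0 * P * EI / (m * L)) * (i * np.pi / L) ** 2 / (w**2 - Om**2)
    return np.sum(amp * np.sin(i * np.pi / 2.0)
                  * (np.sin(Om * t) - (Om / w) * np.sin(w * t)))

# Moment when the load reaches midspan (t = L / (2 v)); static value is PL/4.
print(f"M(L/2) at t = L/2v: {midpoint_moment(L / (2 * v)) / 1e3:.1f} kN*m")
```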
Example 4. A simply supported beam resting on Winkler foundation subjected to a stationary pulsating concentrated load

Let us consider a simply supported uniform Bernoulli-Euler beam resting on a Winkler foundation subjected to a stationary pulsating concentrated load P(t) = P sin ω_e t acting at a distance x₁ from the left end of the beam. The expressions for the exact vertical displacement y(x, t) and bending moment M(x, t) of the beam, taken from Timoshenko et al. [22], are given by Eqs (25) and (26). The time histories of the bending moment at the midpoint cross-section of the beam are shown in Fig. 13, where the solid, dotted and dashed lines denote, respectively, the analytical solution given by Eq. (26) with i = 1-2000, the finite element solution (with equal element length 1.0 m and time interval 0.01 s) given by the revised proposed formula (11), i.e., after deleting the third term on the right-hand side of formula (11), and the finite element solution given by the conventional formula (1). It is evident that the time histories of the finite element solution given by the revised proposed formula (11) are very close to those of the analytical solution given by Eq. (26). However, the difference between the time histories of the finite element solution given by the conventional formula (1) and those of the analytical solution given by Eq. (26) is large. This is due to the shortcomings mentioned in the Introduction.
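A sketch of the kind of modal series that Eqs (25)-(26) represent (they are not preserved in this extraction) is given below for a simply supported beam on a Winkler foundation under a stationary load P sin(ω_e t): the foundation simply raises each natural frequency, ω_i² = [EI(iπ/L)⁴ + k_w]/m, and zero initial conditions are assumed. The numerical parameters are illustrative, since Example 4's values are likewise not preserved in the text.

```python
import numpy as np

def midpoint_moment_pulsating(t, P, omega_e, x1, E, I, m, L, kw, n_modes=2000):
    """Modal series for M(L/2, t) of a simply supported beam on a Winkler
    foundation under a stationary load P*sin(omega_e*t) at x = x1,
    with zero initial conditions (transient term included)."""
    EI = E * I
    i = np.arange(1, n_modes + 1, dtype=float)
    k = i * np.pi / L
    w = np.sqrt((EI * k**4 + kw) / m)          # foundation stiffens every mode
    amp = (2.0 * P / (m * L)) * np.sin(k * x1) / (w**2 - omega_e**2)
    q = amp * (np.sin(omega_e * t) - (omega_e / w) * np.sin(w * t))
    # M = -EI * d2y/dx2, so each mode contributes EI * k^2 * q_i * sin(k x).
    return np.sum(EI * k**2 * q * np.sin(k * L / 2.0))

# Illustrative values only.
print(midpoint_moment_pulsating(t=0.5, P=9.8e4, omega_e=30.0, x1=5.0,
                                E=2.943e10, I=3.81, m=34088.0, L=20.0, kw=4.0e6))
```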
Example 5. A simply supported beam resting on viscoelastic foundation subjected to a concentrated moving load

A simply supported uniform Bernoulli-Euler beam resting on a continuously viscoelastic foundation subjected to a concentrated moving load P = 98,000 N travelling at constant speed from the left end to the right end of the beam is studied in this example. The damping coefficient of the foundation, c_w, is defined through a non-dimensional parameter [6] (the ratio between actual damping and critical damping) given by β = (c_w/2m)√(m/k_w); β = 0.1 is adopted in this example. The other parameters of the beam and the foundation are the same as those in Example 4. Initially the beam is at rest, with the moving load at the left end of the beam. In the present analysis, 1000 equal time steps are adopted for each speed of the moving load.

The maximum bending moments at the midpoint cross-section of the beam given by the proposed formula (11) using 100 elements of equal length, under the moving load at various speeds, are reported in Table 2, along with the analytical solutions given by Eq. (27), taken from Frýba [9]. It is observed from Table 2 that the present solution agrees well with the corresponding analytical one; the difference between the two is about 1 per cent. The maximum bending moments at the midpoint cross-section of the beam given by the proposed formula (11) with 40, 60, 80, 100, 120, 140, and 160 elements at constant speed v = 50 m/s are reported in Table 3, along with those given by the conventional formula (1) at the time when the maximum bending moment is reached by the proposed formula (11), i.e., when the moving load is acting at the beam midpoint. The two sets of results are also plotted in Fig. 14. From Table 3 and Fig. 14, the difference between the solution given by the proposed formula (11) and the analytical solution given by Eq. (27) is small: for example, it is 4.8 per cent even when only 40 elements (i.e., an element length of 2.5 m) are used for the beam. However, the difference between the solution given by the conventional formula (1) and the analytical solution given by Eq. (27) is large: for example, the differences are 68.1 and 9.8 per cent when 40 and 160 elements, respectively, are used for the beam. This is due to the shortcomings of the conventional formula mentioned in the Introduction.

It should be pointed out that Eq. (27) is based on the following assumption [9]: an infinite beam on a Winkler foundation is subjected to a constant load P moving from infinity to infinity at constant speed v. The differential equation of the vertical vibration of the beam given by reference [9] is written as

EI ∂⁴y(x, t)/∂x⁴ + m ∂²y(x, t)/∂t² + 2mω_b ∂y(x, t)/∂t + k_w y(x, t) = δ(x − vt)P, (28)

with s = λ(x − vt), β₁ = ω_b√(m/k_w), λ = [k_w/(4EI)]^(1/4), and δ(s) = δ(x)/λ, where ω_b denotes the circular frequency of damping of the beam, v is the moving speed of the load, and δ(x) is the Dirac delta function. The reasons why Eq. (27) can be used as the analytical solution of this example are as follows. On the one hand, if β₁ in Eq. (28) is replaced by β = (c_w/2m)√(m/k_w), the modified Eq. (28) is the differential equation of the vertical vibration of an infinite beam on a continuously viscoelastic foundation subjected to a constant load P moving from infinity to infinity at constant speed v. On the other hand, the displacements and sectional forces of a beam on an elastic foundation decline to zero very quickly as the distance from the point load increases [21]. Therefore, one may choose a beam of finite length to take the place of an infinite beam.
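The rapid-decay claim can be checked with the classical static solution for an infinite beam on a Winkler foundation (Hetenyi's result), in which all response quantities decay like e^(−λ|x|). The sketch below, using Example 2's foundation and beam data as illustrative inputs, estimates the distance at which the response falls below 1% of its peak, which motivates replacing the infinite beam by a sufficiently long finite one.

```python
import numpy as np

# Illustrative data (taken from Example 2 of the text).
E, I, kw = 9100e6, 7.326e-5, 4.0e6
EI = E * I

lam = (kw / (4.0 * EI)) ** 0.25          # characteristic parameter, 1/m
decay_99 = np.log(100.0) / lam           # distance where e^(-lam*x) = 0.01

# Classical infinite-beam profiles under a unit point load at x = 0:
x = np.linspace(0.0, decay_99, 5)
y = (lam / (2.0 * kw)) * np.exp(-lam * x) * (np.cos(lam * x) + np.sin(lam * x))
M = (1.0 / (4.0 * lam)) * np.exp(-lam * x) * (np.cos(lam * x) - np.sin(lam * x))

print(f"lambda = {lam:.3f} 1/m, 1% decay length = {decay_99:.2f} m")
print("y(x)/y(0):", np.round(y / y[0], 4))
```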
Concluding remarks

This paper has presented new finite-element formulae, overcoming the shortcomings of the conventional formulae, for calculating the sectional forces at any cross-section of a Bernoulli-Euler beam on a continuously viscoelastic foundation subjected to concentrated moving loads. The proposed formulae degenerate easily into formulae for the sectional forces of a simply supported or continuous Bernoulli-Euler beam subjected to concentrated moving loads, and into formulae for the sectional forces of a Bernoulli-Euler beam on a Winkler foundation under static loads. Five numerical examples, including static and dynamic analyses, were chosen to illustrate the application of the proposed formulae. The numerical results show that (i) compared with the conventional formulae, the proposed formulae improve the calculation accuracy of the sectional forces of the beam; (ii) to calculate the sectional forces at the cross-sections of the two nodes of a beam element, one should use the proposed formula (11), not the conventional formulae (1), (2) or (3), (4); and (iii) to calculate the sectional forces at a point within a beam element rather than at a node, one should use the proposed formulae (13) and (14), not the conventional formulae (1), (2) or (3), (4).

Fig. 1. Sectional forces diagram for the cross-section at any point in a Bernoulli-Euler beam element in the dynamic analysis.
Fig. 2. Mathematical model for a uniform elastic Bernoulli-Euler beam on continuously viscoelastic foundation subjected to a number of concentrated moving loads.
Fig. 5. A simply supported beam under uniformly distributed static load.
Fig. 6. Bending moment diagrams in a simply supported beam under uniformly distributed load.
Fig. 9. Time histories for bending moment at the midpoint cross-section of beam with the analytical solution and the solutions given by the proposed formula with 2, 6 and 10 elements.
Fig. 10. Time histories for bending moment at the midpoint cross-section of beam with the solutions given by the proposed formula and by the conventional formula with 2 elements.
Fig. 11. Time histories for bending moment at the midpoint cross-section of beam with the solutions given by the proposed formula and by the conventional formula with 6 elements.
Fig. 12. Time histories for bending moment at the midpoint cross-section of beam with the solutions given by the proposed formula and by the conventional formula with 10 elements.
Fig. 13. Time histories for bending moment at the midpoint cross-section of beam in Example 4.
Fig. 14. Maximum bending moments at the midpoint cross-section of beam given by the proposed formula and those given by the conventional formula with various element numbers, subjected to a moving load with constant speed v = 50 m/s.
Table 2. Present solutions (with 100 elements of equal length) and analytical solutions of maximum bending moments at the midpoint cross-section of beam, and the ratios of the solution given by the proposed formula to the corresponding analytical solution, subjected to a moving load with various speeds.
Table 3. Maximum bending moments at the midpoint cross-section of beam given by the proposed formula and those given by the conventional formula with various element numbers, subjected to a moving load with constant speed v = 50 m/s.
Near-Earth Observations by Spread Telescopes

We propose an all-sky survey from the International Space Station using four small wide-angle telescopes with polarization filters and CCD arrays, spread several meters apart. The video information will be processed by a real-time multiprocessor system on board the station. This experiment would make it possible to observe sunlit space debris and meteoroids of centimetre size, with estimates of their distances and velocities, at distances of up to 20 km from the station, and to investigate the interplanetary and interstellar medium by producing polarization sky maps and detecting weak-contrast features on them.

INTRODUCTION

Investigations of the sky background, which basically consists of zodiacal light, are difficult to conduct on the ground because of the significant contribution of light from other sources, and of the zodiacal light itself (Bernstein et al., 2002), scattered in the atmosphere. The translucent high-latitude cirruses found by IRAS were observed in the visual band by Cawson et al. (1986). Since the scattered light is noticeably polarized, such features are better searched for in polarization sky maps. A space experiment performing a polarization sky survey with wide-angle telescopes and CCD arrays would make it possible to investigate the distribution and properties of interplanetary dust, and the size, shape and orientation of the dust particles. Another problem related to prolonged polarization background mapping is the possibility of discovering and investigating supernova light echoes. Polarized spots with variability on a timescale of several years should be observable (Maslov, 2000) around the locations of supernovae that were observed several centuries ago. Observations of these spots would give information about the interstellar medium of our Galaxy. A search for these objects in nonpolarized light was conducted by Van den Bergh (1966) with photographic plates and yielded a negative result, but modern techniques and image processing methods allow progress on this question. Space surveillance radar stations regularly track several thousand spacecraft and spacecraft fragments with sizes of more than 20 cm. The smallest particles have been registered through collisions with film screens (the LDEF satellite) and with spacecraft surfaces. However, experimental data on the medium-sized (about 1 cm) particles most dangerous for spacecraft are practically absent.

EXPERIMENT DESCRIPTION

The International Space Station (ISS) was chosen for this experiment for three reasons:
• watching the near-Earth medium close to the ISS orbit allows the statistical parameters to be estimated, and bodies dangerous to the station to be watched for, without additional assumptions about their spatial distribution;
• the size of the ISS allows the telescopes to be spread out for triangulation;
• installation of the devices on a frequently visited station simplifies changes to the data processing program.

The three main goals of the experiment are:
• investigation of the distribution and properties of dust in the Solar System and the Galaxy;
• obtaining data about space debris and meteoroid particles of centimetre size near the ISS orbit;
• discovery and observation of asteroids, comets and variable stars.

The idea of the experiment is as follows. Four small telescopes with a field of view of ∼8° are installed on the ISS, pointing in the same direction and surveying the sky by means of the continuous rotation of the station at an angular velocity of about 4°/min.
The CCD images are processed by on-board computers in real time. Comparison of the coordinates of the light sources with a star catalogue allows the exact orientation of the telescopes to be determined, unknown sources to be found, and their positions to be measured. The difference between the positions measured by different telescopes allows the distance to the object to be determined. The reasons for using four telescopes are the following:
• the use of two or more telescopes, spread several meters apart, gives the possibility of determining the distance to a particle;
• registration of the track of a fast-moving object by two telescopes operating in opposite CCD regimes ("exposition" on the first and "reading" on the second, and vice versa) allows the angular velocity of the object to be determined;
• using different axis directions for the polarizing filters of the telescopes, we can measure the linear polarization of both point and extended sources;
• the failure of one telescope does not significantly degrade the quality of the information, so the experiment can continue;
• the measurement accuracy is better, and the probability of discovering space debris or a meteoroid, or of observing an unusual event (a short-duration burst, for example), increases;
• an extension of the observable sky area is possible.

The telescopes are installed in pairs on two platforms with a vertical (relative to the Earth) axis of rotation. The telescopes point in one direction at an angle of about 30-60° to the zenith. The distance between the platforms should be not less than 5 m. The platforms turn the telescopes toward the required sky region, one not illuminated by the Sun.

APPARATUS DESCRIPTION

The apparatus complex consists of two identical devices. The mass of each is not more than 40 kg and its size is 940 × 550 × 550 mm, which makes their transfer to and installation on the International Space Station possible. The telescopes are being developed at the Space Research Institute on the basis of the Star Sensor (Ziman, 1994), which is now working successfully on the geostationary communication satellite "Yamal 100". The basic parameters of the telescopes are shown in Table 1. The telescopes are able to work at angles down to 30° from the Sun and from the Earth's horizon. In addition to the video information, they supply the orientation parameters of each image, which simplifies further information processing. The information processing and saving device is a special on-board computer being developed at the Space Research Institute. It consists of:
• two processors of the Intel 486 type with a frequency of 66 MHz;
• energy-independent flash memory of not less than 8 Gbit;
• special modules based on programmable logic arrays for fast image processing.

One such device can process the information from two telescopes. If the second device is used for the other two telescopes, it serves as a reserve and also provides the possibility of debugging and comparing processing programs. The basic way to pass the information to Earth, and to edit the programs, is copying via the ISS server and removable storage media.
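As described above, the distance to a nearby object follows from the difference between its apparent positions as seen by the spread telescopes. A minimal Python sketch of this triangulation, with an illustrative 5 m baseline and the 1-arcminute parallax quoted later in the text, is shown below (small-angle approximation).

```python
import numpy as np

def distance_from_parallax(baseline_m, parallax_arcmin):
    """Small-angle triangulation: r = baseline / parallax (parallax in radians)."""
    parallax_rad = np.radians(parallax_arcmin / 60.0)
    return baseline_m / parallax_rad

# Illustrative: a 5 m baseline and a 1-arcminute position difference
# correspond to a range of roughly 17 km, consistent with the ~20 km
# detection range discussed in the text.
print(f"{distance_from_parallax(5.0, 1.0) / 1000.0:.1f} km")
```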
ALGORITHMS AND PRINCIPLES OF INFORMATION PROCESSING

The input information flux consists of the sky images made by the four telescopes each second. This flux fills the computer memory within several minutes, which is why information compression is necessary. It is best performed in such a way that complete astronomical data (such as maps, catalogues, light curves, etc.) are available at the output.

The sources in the images can be classified:
• by extension:
- point sources: stars and star-like objects (at 1′ resolution);
- tracks: straight-line traces made by moving objects;
- extended sources: sources with angular sizes from 1′ to several degrees;
- background: sources with angular sizes larger than the visible area;
• by time averaging:
- model: given initially, with parameter corrections during the experiment if necessary;
- momentary: present in just one image;
- current: present on the map obtained by adding the images of a single sky survey near the source;
- seasonal: present on the map obtained by adding the images over several months of work (until the data are passed to Earth).

The information on tracks and point sources will be saved in catalogues, and that on extended sources and background in 2′-resolution sky maps. Finally, we will obtain the following information:
• the last 200 sky images;
• a catalogue of momentary point sources;
• a catalogue of current point sources with variability data;
• the current sky map;
• a catalogue of tracks;
• the seasonal sky map;
• current maps of some sky regions;
• some sky images.

The apparatus model is given by the "dark image", the "flat field image" and the point spread function (PSF) of a point source, depending on the orbital declination at which the survey was made. The computer memory holds the background sky map and a star catalogue, which constitute the sky model for the given spectral region. Deviations from this model are recorded during the survey, which makes the search for new and variable sources easier. The model image is the sum of the background map and the point sources, taking the PSF into account. We suggest the following processing sequence:
• correction of the output sky images by the "dark image" and "flat field image";
• subtraction of the model image, with correction of the model point source brightnesses;
• photometric calibration by the model stars in the visible area;
• search for tracks and new point sources in the model-subtracted images, and their inclusion in the catalogues of tracks and momentary point sources;
• creation of a current sky map with a size of about 10° × 10° using the images with tracks and momentary point sources subtracted;
• search for weak tracks, stars and extended sources in the current map;
• inclusion of the brightness information for sky regions outside the current map in the seasonal map.

The seasonal map of the whole sky at 2′ resolution requires about 200-300 MBytes of memory for each telescope, if compression is not taken into account. The apparatus parameters and the sky model are corrected during the experiment and changed after the data are passed to Earth. Polarimetric and parallactic measurements are made on the basis of the maps and catalogues obtained by the different telescopes.
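The quoted 200-300 MByte figure for a whole-sky seasonal map can be checked with simple arithmetic: the celestial sphere covers about 41,253 square degrees, and a 2′ pixel subtends (2/60)² square degrees. The Python sketch below performs this check under the assumption of 4-8 bytes stored per pixel, which is not stated in the text.

```python
import numpy as np

SKY_SQ_DEG = 4.0 * np.pi * np.degrees(1.0) ** 2   # ~41253 square degrees
pixel_sq_deg = (2.0 / 60.0) ** 2                  # one 2-arcminute pixel

n_pixels = SKY_SQ_DEG / pixel_sq_deg              # ~3.7e7 pixels for the whole sky
for bytes_per_pixel in (4, 8):                    # assumed storage per pixel
    mbytes = n_pixels * bytes_per_pixel / 2**20
    print(f"{bytes_per_pixel} B/pixel: {mbytes:.0f} MBytes")
# -> roughly 140-280 MBytes, in line with the 200-300 MByte estimate.
```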
EXPECTED RESULTS

The accuracy of a single position measurement of an object relative to the stars is 1′. Since stationary and slowly moving objects are recorded about 100 times during one crossing of the visible area, the mean-square accuracy can reach 6″. The same accuracy can be reached for the measurement of a track position perpendicular to its direction, since this estimate is made from about 500 pixels. The sensitivity of the telescopes (at an S/N level equal to 1) will be about 10ᵐ for point objects. The accuracy of photometric measurements for bright star-like objects will be about 10 per cent. The magnitude of an object present in all images obtained during a 2-minute survey, and the magnitude of a track, can be estimated with an accuracy of 0.01-0.02ᵐ. The magnitude of a piece of space debris or a meteoroid with an albedo of about 0.1 and size D, flying at distance r with tangential velocity v_t, can be estimated using the formula of Bagrov and Vygon (1998), in which β, the angular size of one image pixel, is equal to about 3 × 10⁻⁴. According to this formula, a fragment 1 cm in size, flying at 20 km from the ISS with a velocity of 40 km/s, will be recorded by the telescopes as a 10ᵐ track, i.e., with an S/N ratio equal to 1. As the velocity or the distance decreases, the S/N ratio rises in inverse proportion to these parameters. If we increase the distance between the telescopes to 5-6 meters, then the parallax of a fragment at a distance of 20 km will reach 1′, and it will be possible to measure it with 10 per cent uncertainty. The angular velocity of slowly moving (from 0.001 to 1°/sec) fragments can be measured from the displacement of the object in different images. Having measured the angle between the tracks recorded by the CCD matrices of two telescopes (in the "exposition" and "reading" regimes), we can determine the angular velocity of fast-moving (from 0.1 to 8000°/sec) fragments. The velocity of a meteoroid equal to 40 km/s can be measured even from a distance of 300 meters! Thus, the apparatus will be able to find and measure the brightness, angular velocity and distance, and estimate the size, of all space debris and meteoroids larger than 1 cm flying at distances from 1 to 20 km. If the flux of such fragments corresponds to a collision hazard for the station of once per 10 years, then they will be detected by this apparatus complex several times per day. Polarimetric observations will be conducted by comparing object brightnesses in the four telescopes with different polaroid axes. For extended objects with sizes larger than 1°, it is possible to measure polarized light with an intensity equal to 10⁻⁴ of the background (Sholomitskii et al., 1999), using the large number of pixels. Sky mapping extended over several years would decrease this value by one more order of magnitude and make it possible to investigate the detailed features of the Galactic background and zodiacal light variations.

CONCLUSION

The experiment would make it possible to obtain:
• the distribution and scattering parameters of interplanetary and interstellar dust from prolonged regular polarimetric sky mapping;
• statistical estimates of the concentration, velocities and sizes of space debris and meteoroids near the International Space Station orbit;
• data about novae at early stages, before their registration by ground-based observatories (especially at low angular distances from the Sun), and statistical characteristics of bursting and variable stars.
2019-04-14T01:36:53.051Z
2003-01-08T00:00:00.000
{ "year": 2003, "sha1": "c0acb07322e8c38d974643b483f640456cc4d7c9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d609afc1d6018616f83534d2487f2a241b877bbc", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
114490977
pes2o/s2orc
v3-fos-license
A study of the partial acquisition technique to reduce the amount of SAR data Synthetic Aperture Radar (SAR) technology is capable of providing high-resolution image data of the Earth's surface from a moving vehicle. This produces large volumes of raw data. Much research has been proposed on compressed radar imaging, which can reduce the sampling rate of the analog-to-digital converter (ADC) on the receiver and eliminate the need for a matched filter on the radar receiver. Besides these advantages, there is a major problem: the approach produces a large measurement matrix, which leads to very intensive matrix calculations. In this paper, a new approach to the partial acquisition technique is studied, which reduces the amount of raw data using compressed sampling in both azimuth and range and reduces the computational load. The results show that the reconstruction of the SAR image using the partial acquisition model has good resolution, comparable to the conventional method (Range Doppler Algorithm). On a ship target, which represents a low level of sparsity, a good reconstructed image could be achieved from a smaller number of measurements. The method can speed up the computation time by a factor of 2.64 to 4.49 compared with a full acquisition matrix. Introduction One of the main challenges in obtaining high-resolution synthetic aperture radar (SAR) images is the acquisition mechanism of the backscatter signal, which uses a high-rate analog-to-digital converter (ADC) according to the Shannon/Nyquist theorem [1]. This makes the volume of SAR raw data larger. The conventional approach is not only complicated and expensive, but also places a heavy load on the onboard components. On the other hand, the capacity of onboard memory and downlink transmission is limited. To solve this problem, many techniques have been proposed to compress SAR data. The first scalar compression technique used was block adaptive quantization (BAQ). The BAQ technique aims to estimate the input signal statistics and adapt the quantizer according to those statistics, and has been adopted onboard satellites such as SIR-C [2], FFT-BAQ [3], ALOS PALSAR 2 [4], and Flexible Dynamic BAQ (FDBAQ) on Sentinel-1 [5]. Unlike the conventional compression methods above, the theory of compressive sensing (CS) [6][7][8] proposes a new approach, in which CS can recover certain signals from far fewer measurements/samples than the Nyquist sampling rate theory requires. A CS scheme for radar imaging systems was introduced by researchers [9,10], who state that a radar system with CS can eliminate the need for a matched filter on the radar receiver and reduce the sampling rate of the ADC on the receiver. Liu [11] proposed the use of random sampling for SAR signal transmission without any changes to the system hardware. All the above research requires that the radar signal be sparse and compressible. The sparse representation model of SAR signals states that the raw data can be represented as a sparse signal in a certain basis. Herman [12] proposed a sparse representation model in the form of a linear equation with an Alltop sequence. Wei [13] described the SAR signal by separating the sparse target and the acquisition matrix of the SAR signal. Another approach is the establishment of a linear model of the SAR raw data based on the Born approximation [14][15][16]. This paper proposes a model of partial SAR data acquisition that reduces the dimension of the acquisition matrix of the SAR signal based on the linear model of the SAR raw data, so that the processing load can be reduced. 
Linear model of recieved SAR signal Pulse radar systems using stop-go approach [16] where the radar antenna transmits chirp signal at time t and the position of the antenna x repeatedly on repetition interval. The chirp signal has a pulsed LFM radar transmitted waveform and can be written as follows: where is the amplitude of the transmitter signal and ( ) (( ⁄ ) ⁄ ) is a rectangular gate function with as the pulse duration time. The is the carrief frequency and LFM pulse chirp rate. When the signal hits an object, it will induce currents hence the object emits the scattered field which is the same signal, but weaker and time delayed. The scattered field ( ) is formed from the interaction between the target and the incident field ( ). Thus its value is the response target depends on the geometry and material properties of the target and of the shape. The equation of scattered signal according [14] can be written as follows: where ( ) | | is the distance between the antenna and the target and For radar imaging, the scattered field can be measured at the antenna and the reflectivity V(z) is a function that must be resolved. We assume the value of the coefficient ( ) is the coefficient value of the backscattered signal from sparse targets, where k is an index of sparse targets and and are the sampling number of slow time and fast time signal. The linear equation of the SAR signal (2) is formed by separating the components reflectivity and acquisition matrix Ψ SAR signal in the form of discrete [16] written as follows: , the new mathematical model of general SAR signal acquisition is interpreted as basis vector at the fast-time and slow-time and can be written as: A measured SAR echo S is obtained by using high rate analog digital converter (ADC) as required by the Nyquist theorem. The goal of SAR image reconstruction is to determine the target reflectivity from the measured SAR echo S. Partial acquisition model of SAR signal In this section proposed a new method to reduce the dimension of the matrix Ψ in formula (7) by dividing the matrix per block in order to reduce computational load. The matrix Ψ as shown in Figure 1(a) has a large size of ( ), where and are the maximum number of sampling of slow time and fast time signal. This causes the inverse solution of target reflectivity ( ( )) becomes complex. To find the target reflectivity , the number of equations is not required the number of rows ( ) of the matrix Ψ. For this reason, the dimension of the matrix Ψ can be reduced. So the computational load to resolve inverse problems can be reduced as well. The new matrix is formed by dividing the original into several blocks as shown in Figure 1(b), so that the dimension becomes smaller. (a) (b) Figure 1 (a) Fully acquisition matrix (b) partially by deviding in several blocks Low sampling method One important step in the algorithm CS is randomly low sampling on recieved radar signal (3) is required. A low sampling model in form of fewer random measurement is needed to reduce the SAR raw data. It represent incomplete matrix. The new incomplete radar signal is formulated as follows: (5) where is a randomly low sampling measurement matrix with size of M × N, and is noise matrix. The noise can be stochastic or deterministic. The number of measurements M must be at least greater than the number of K non-zero value, but significantly smaller than the total entries ( ). The undersampling ratio become ⁄ . 
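A toy sketch of the block partitioning of Figure 1(b) and the random low sampling of equation (5) is given below. A random dense matrix stands in for the true acquisition matrix Ψ, which in the paper is built from the chirp waveform and the imaging geometry, and the sizes, sparsity level and noise level are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_blocks(psi, n_blocks):
    """Divide the full acquisition matrix (rows = slow-time x fast-time samples)
    into n_blocks row blocks of equal size, as in Figure 1(b)."""
    rows = psi.shape[0] // n_blocks
    return [psi[i * rows:(i + 1) * rows, :] for i in range(n_blocks)]

def low_sample(block, echo_block, m):
    """Keep m randomly chosen rows of a block: the random measurement matrix selects
    a subset of transmitted pulses and range samples, giving the undersampling ratio m/N."""
    idx = rng.choice(block.shape[0], size=m, replace=False)
    return block[idx, :], echo_block[idx]

# toy sizes only; the real matrices are far larger
Na, Nr, K = 64, 64, 16                   # slow-time, fast-time, scene pixels (assumed)
psi = rng.standard_normal((Na * Nr, K))  # stand-in for the SAR acquisition matrix
sigma = np.zeros(K)
sigma[rng.choice(K, 3, replace=False)] = 1.0                       # sparse reflectivity
echo = psi @ sigma + 0.01 * rng.standard_normal(Na * Nr)           # measured echo S

blocks = split_blocks(psi, 4)
echoes = np.split(echo, 4)
measured = [low_sample(b, e, m=90) for b, e in zip(blocks, echoes)]  # e.g. M = 90 per block
```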
The low sampling of slow time (azimuth) signal is obtained by random arrangements of transmitted radar pulses [11,17] and the low sampling of fast time (range) signal is obtained by using lower rate ADC than received signal [15,18]. Reconstruction algorithm The scheme of the proposed partially SAR acquisition method states that a new partially matrix is obtained by deviding the acquitition matrix (7) in several N blocks in the same size. Each block produce a new matrix , which has diffirent value compared to other blocks. Thus, the linear equation of each block can be formulated as follows: where i is an index of each block. The reflectivity target should ideally have the same value. But because the value and of every blocks are different, the results of are solved using L1 algorithm [19] and obtained different magnitudes. The best reconstructed value from is obtained by comparing the PSNR value of each blocks and the highest PSNR is choosen as the final reconstructed reflectifity target. Figure 2 showed the proposed algorithm. Result and discussion This section describes the performance evaluation of the new model of partially SAR acquisition which are the basis for reconstructing the sparse target on CS-based SAR imaging. Experiments were conducted by evaluating the performance of the partially SAR acquisition and compared to fully SAR acquisition. Experiments were performed on the two input data SAR: an ideal target in the form of point target and ship target from Radarsat-1 as shown in Figure 3. Experiment on point targets Scheme as described in Figure 1(a) divides the acquisition matrix into several blocks each be 1/2, 1/3 1/4, 1/5 1/6 section. The low random sampling is performed on each block a number of measurements M = 90, with 10 samples in azimuth and 9 samples in the range direction. The result of the reconstruction of the experiment is to distinguish between the fully and partially SAR acquisition. In the first simulation, the full acquisition matrix is divided into 2 blocks of the same size. 3.719 seconds and 3.519 seconds. From the both calculation is obtained best reconstruction result of block 2 with PSNR 58.595 dB, RMSE 0,011 and a calculation time of 3.519 seconds. The next simulation were performed also at 1/3, 1/4, 1/5 and 1/6 of the full block. The PSNR and RMSE values of the reconstructed target point using the proposed method can be seen in Figure 4 and Table 1. In this experiment, the reconstruction results on partial SAR acquisition model with blocks 1/2 and 1/3 showed equivalent quality with full matrix acquisition. The reconstruction of the target point with 1/6 partial matrix acquisition of the full block (smallest building blocks of the simulation) have good image quality of best PSNR 49. 230 The image quality of the experimental results of the proposed method are still better than those from the conventional method RDA and has a PSNR value of 23.206 dB and RMSE 0.26. PSNR value is still above the threshold of acceptable PSNR is between 29-34 dB according to [20]. There are some facts that PSNR decreased or RMSE value increases with the division of the smaller blocks. The proposed method resulted in the target point reconstruction with a 1/6 block is still very good, well above the required PSNR and can reduce the computational load. The calculation time of point target using the partial acquisition matrix is faster, because the size of the partial acquisition matrix is getting smaller. The complexity of matrix multiplication becomes smaller. 
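Continuing the sketch above, each block can be solved independently with a simple L1 solver and the block with the highest PSNR kept, mirroring the selection rule of the algorithm in Figure 2. The ISTA routine below is a generic stand-in for the L1 solver cited as [19], and the reference reflectivity is assumed to be known, as it is in the simulations.

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=300):
    """Minimal iterative soft-thresholding solver for min ||A x - y||_2^2 + lam ||x||_1.
    L is the squared spectral norm of A; the step size 1/(2L) and threshold lam/(2L) match."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / (2 * L), 0.0)
    return x

def psnr(ref, est):
    """Peak signal-to-noise ratio in dB against a known reference scene."""
    mse = np.mean((ref - est) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(np.max(np.abs(ref)) ** 2 / mse)

def reconstruct_best(measured_blocks, reference):
    """Solve each (A_i, y_i) block independently and keep the reconstruction with the
    highest PSNR, following the selection rule of the proposed algorithm."""
    candidates = [ista(A_i, y_i) for A_i, y_i in measured_blocks]
    scores = [psnr(reference, c) for c in candidates]
    return candidates[int(np.argmax(scores))], max(scores)
```

With the toy variables from the previous sketch this would be called as `reconstruct_best(measured, sigma)`.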
Compared with the calculation of the full block, a calculation of 1/6 block is accelerated 2.32 times (see table 1). Experiment on ship target The next experiment is performed on the ship target. This target represents the real target that has a lower level of sparsity and more complex scene compared with the target point. As the previous section, the block dividing scheme as described in Figure 1(b) states that the full acquisition matrix is divided into several blocks equally of 1/2, 1/3 1/4 of the full block. Randomly low sampling of the radar signal is performed on each block with more number of measurements than point target as M = 1000 samples, with details of 20 samples in azimuth and 50 samples in the range direction. Results reconstruction targets calculated using an algorithm in Figure 2, which is looking for the best reconstruction results by comparing the value of PSNR. Figure 5 shows the results of reconstruction with partial acquisition matrix. The PSNR value of full block decreased from 65.047 dB to 34.104 dB and 32.075 dB at partial acquisition matrix with size 1/2 and 1/3 of full block. The PSNR value decreases or RMSE value increases with the division of the smaller blocks. The reconstruction result of 1/4 of full block indicates PSNR value of 24.847 dB. It shows a lower quality than conventional methods RDA. SAR image reconstruction error rate increases, if the dimension of the partial acquisition matrix gets smaller. The advantage of the partial acquisition model is that the image of the target vessel can be reconsructed using a partial matrix with 1/2, 1/3, 1/4 of full blok faster than the full matrix by a factor of 2.64 to 4.49 times. Conclusion The new method of partial acquisition techniques have been proposed, performed experiments and analyzed. The experimental results show the performance of the method is better than the conventional method of RDA. It can suppress the side lobe and improve the quality of SAR images from fewer numbers of measurement of SAR signals and can speed up the computation time by a factor of 2.64 to 4.49 times faster than with a full acquisition matrix. The fewer measurement numbers of received radar signals is conducted by low sampling of slow time signals (azimuth) by setting the transmitted 6 LISAT IOP Publishing IOP Conf. Series: Earth and Environmental Science 54 (2017) 012100 doi:10.1088/1755-1315/54/1/012100 radar pulses at random and by low sampling of fast time signals (range) below the Nyquist frequency. Partial acquisition model generates good quality SAR image almost equal with to the full acquisition model at few numbers of measurement to a target point. While on the target ship, this method provides good image quality above acceptable PSNR value.
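For completeness, the pulse-selection and sub-Nyquist range sampling summarized in the conclusion can be sketched as follows; the pulse count, pulse duration, sampling rate and reduction factor are assumed values for illustration, not those used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

def azimuth_pulse_selection(n_pulses, n_keep):
    """Randomly choose which transmitted pulses are kept (slow-time low sampling)."""
    return np.sort(rng.choice(n_pulses, size=n_keep, replace=False))

def range_sampling_times(pulse_duration, nyquist_rate, reduction):
    """Uniform fast-time sampling grid at a rate `reduction` times below the Nyquist rate."""
    fs = nyquist_rate / reduction
    return np.arange(0.0, pulse_duration, 1.0 / fs)

kept_pulses = azimuth_pulse_selection(n_pulses=256, n_keep=20)       # e.g. 20 azimuth samples
fast_times = range_sampling_times(pulse_duration=10e-6,              # assumed 10 us pulse
                                  nyquist_rate=100e6, reduction=4)   # assumed ADC rates
```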
2019-04-15T13:10:08.717Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "5dbce541237203db07bbddc93bdcfff1c75ce646", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/54/1/012100", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "bb8fd587a34a1519cdb24b7d5bf06145f279822c", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Engineering" ] }
270306971
pes2o/s2orc
v3-fos-license
Enhancing Value: The Impact of Environmental, Social, and Governance Disclosure on Indonesian Basic Materials Sector Companies Purpose - This study aims to determine the effect of the implementation of environmental, social and governance disclosure on firm value in the basic materials sector listed on the Indonesia Stock Exchange (IDX) during the 2018-2022 period. Design/methodology/approach - This research uses quantitative methods using secondary data that has been published on the company’s official website and the official IDX website. The population in this study are basic materials sector companies listed on the IDX in 2018-2022. The samples in this study were 11 companies for five years of observation so that 55 data samples were obtained. The sampling technique used purposive sampling technique. The data analysis techniques used in the study are classical assumption test analysis, multiple linear regression test, correlation test, determination test, t test and f test using IBM SPSS 26 software. Findings - The results showed that environmental disclosure has a positive influence on firm value, but social disclosure and governance disclosure do not show the same positive influence on firm value. Research limitations/implications - The limited data set may not fully encapsulate the diverse impacts of ESG disclosures on firm value, thus impacting the robustness and generalizability of the study’s conclusions. Introduction The development of the business world is growing along with the increasing market demand for the goods and services offered.This will certainly increase competition and between business people competing to achieve company goals (Xaviera & Rahman, 2023). The establishment of a company has a reason and purpose behind it.These goals can be divided into two: short-term goals and long-term goals.For short-term goals, a company aims to achieve maximum success by utilizing all of its resources.As for long-term goals, the company seeks to increase the value of its company and improve the welfare of its shareholders.Due to the ever-changing and challenging business competition, many companies focus on increasing their company value as much as possible in order to attract investors to invest in the company (Muslichah, 2020). Firm value is the investor's perception of the company's success rate which is often associated with supply and demand which affects price movements in the capital market (Melinda & Wardhani, 2020).It also implicitly shows success in increasing shareholder welfare, which is the main goal of the company.From an investor's point of view, an increase in company value makes the company more attractive as an investment option. Based on news quoted from Eddyelly (2023) said that there was a decrease in the value of basic materials sector companies in 2022, this was related to the decline in the Composite Stock Price Index (IHSG) in 2022 which only grew by 4.08% compared to 2021 which amounted to 10.08%.In addition, the basic materials sector index (IDX BASIC) fell 0.92% or 11.09 points to 1,191.77.This was due to the increase in average global commodity prices, which caused negative sentiment towards index issuers.The basic materials sector is very vulnerable to increases in raw materials which are the main raw material for its production and this price trend will have an impact on increasing the cost of selling prices of its constituent raw material products (Bisnis, 2022). 
The issue of sustainability has become a recurring global discussion in recent years, and companies are now faced with the challenge of improving their environmental, social and governance performance and transparency.The ESG concept, which focuses on environment, social and governance, has become widely recognized since the 193-nation meeting at the United Nations in 2015.Global awareness of issues such as climate change, human rights, social inequality, and corruption has increased since the early 2000s, making ESG increasingly important for companies (Fadhillah & Marsono, 2023).In Indonesia, ESG has received great attention from the government, companies, investors, and the public.Environmental, social and governance disclosures serve to increase public understanding of sustainable investing, which is growing rapidly.Public awareness of investments that focus on environmental, social and governance has increased in recent years (Kartika et al., 2023). Environmental, social and governance disclosures made by companies are one of the communication platforms to show investors the real form of the company's responsibility and potential in efforts to improve sustainability business performance.This is also reinforced by the increasing interest of investors in companies that disclose environmental, social and governance issues in their investment decisions, so that disclosures made by companies can have a positive impact on company value in the long term. But in fact, the disclosure of information carried out by companies related to the environment, social and governance has increased every year, but this is inversely proportional to the company's value which has decreased.Basically, if the disclosure made by the company has increased, the company value will also increase. The increasing focus on environmental, social, and governance (ESG) transparency globally and in Indonesia reveals a mismatch between the rise in ESG disclosures and the actual value investors place on companies, particularly in the basic materials sector.Despite more companies sharing their ESG practices, this does not consistently lead to higher firm values.This inconsistency poses questions about the real impact of ESG disclosures on enhancing a company's market value, especially in markets sensitive to environmental and market changes.There's a noticeable gap in research specifically targeting the effects of ESG disclosures on the valuation of companies within Indonesia's critical basic materials sector, underscoring the need for more comprehensive studies (Kartika et al., 2023). This research aims to analyze the connection between the level of environmental, social, and governance (ESG) information shared and the market value of companies in the basic materials sector of Indonesia, with a focus spanning from 2018 to 2022.It intends to assess how the evolving global and local awareness of ESG issues influences investor perceptions and, in turn, company valuations in this industry.The study will identify which ESG actions significantly influence the financial valuation of these firms.Additionally, it will explore both the obstacles and driving factors that affect the success of ESG disclosures within this sector and their impact on corporate value.Finally, the research will develop practical strategies for businesses within this sector to improve their market value through effective ESG disclosure, thereby benefiting their stakeholders. 
Based on the background above, the purpose of this study is to examine the effect of environmental, social and governance disclosure on firm value.So, the authors are interested in conducting research with the title "The Effect of Environmental, Social and Governance Disclosure Implementation on Firm Value (Case Study on Basic Materials Sector Companies Listed on the Indonesia Stock Exchange 2018-2022)". Literature Review Legitimacy Theory Legitimacy theory was popularized by Dwoling and Pfeffer in 1975, legitimacy theory is a theory that explains corporate policies in disclosing social and environmental information, which underlines the importance of companies recognizing and complying with the norms that apply in the society and environment in which they operate when carrying out their business activities (Wahdah & Jayanti, 2023). Legitimacy theory focuses on the relationship between companies and society.Companies are part of the value system in society, therefore corporate values create harmony between the prevailing norms in society.Businesses can begin to manage legitimacy by changing their activities according to the social perception of society.This can influence perceptions and evaluations of the company.When a company's actions are perceived to deviate from societal norms, its legitimacy is questioned, which can lead to a variety of negative outcomes, from reduced consumer confidence to more severe financial and regulatory repercussions.Therefore, companies are motivated to engage in practices that enhance their legitimacy (Melinda & Wardhani, 2020). In the context of legitimacy theory, these ESG disclosures serve as vital conduits through which companies demonstrate their adherence to societal standards and expectations.The act of disclosing such information can be seen as a strategic effort by companies to maintain or improve their social license to operate by aligning their practices with the evolving values and norms of society.This alignment is essential for sustaining corporate legitimacy and ensuring long-term success and acceptance in the market. Stakeholders Theory Stakeholder theory was developed by Edward Freeman in 1984.Stakeholder theory says that companies are not only entities that operate only for their own interests but companies must also provide benefits to their stakeholders (Ghozali, 2020). Positive relationships with stakeholders is not merely an ethical imperative but a strategic necessity.This involves engaging with stakeholders to understand their expectations and needs and making concerted efforts to address these in the company's operational decisions and strategies.Such an approach ensures the long-term sustainability of the company by fostering loyalty, trust, and support from those who are affected by or can significantly impact the company's operations.Companies need to actively maintain positive relationships with their stakeholders, by trying to accommodate the wants and needs of stakeholders, especially those who have a direct relationship with the resources used in the company's operations, such as labor, consumers, and shareholders.One method that can be used by companies to build and improve good relationships with stakeholders is through the provision of reports that provide added value, such as sustainability reports and annual reports (Kurniawan et al., 2018). 
The intersection of stakeholder theory with these aspects of ESG disclosure serves as a framework for businesses to not only align their practices with stakeholder expectations but also to promote a culture of transparency and responsibility.This approach not only enhances the company's credibility and trustworthiness but also ensures a sustainable business model that values the contributions and concerns of all stakeholders Environmental Disclosure Environmental Disclosure can be defined as a set of information that includes environmental management activities that occurred in the past, present, and future (Muslichah, 2020).Environmental disclosure is an effort taken by companies as a form of their commitment to the environment and surrounding communities, with the aim of overcoming potential negative impacts on the environment that may arise from company activities, both direct and indirect and to obtain investment opportunities in the future (Dewi et al., 2023).Companies are also expected to publicly disclose information about their activities that impact the environment.This disclosure is important because it helps increase the company's reputation and the company's value in the eyes of the wider public and investors (Firmansyah et al., 2021).This is in line with research conducted by Melinda & Wardhani (2020) which says that environmental disclosure has a positive effect on company value because it is expected to provide long-term benefits for stakeholders and increase company value.H1: Environmental Disclosure has a positive effect on firm value. Social Disclosure Social disclosure refers to the information that companies provide about their social and environmental activities, which is of interest to investors and can affect market value (Gutierrez, 2023).Social disclosure is very helpful for stakeholders and external parties of the company in monitoring the activities carried out by the company as well as providing relevant information in connection with its business activities in order to fulfill corporate social responsibility and will improve the company's reputation in the eyes of society (Sarikawi & Natalylova, 2022). Disclosure of social information has a positive value for stakeholders influencing stakeholders' perceptions of the sustainability of the company (Muslichah, 2020).Social responsibility disclosure can provide a competitive advantage for companies because it will increase the company's reputation in the eyes of the public (Sarikawi & Natalylova, 2022).This is in line with research conducted by disclosure of social information has a positive value for stakeholders influencing stakeholders' perceptions of the sustainability of the company (Muslichah, 2020).Social responsibility disclosure can provide a competitive advantage for companies because it will increase the company's reputation in the eyes of the public (Sarikawi & Natalylova, 2022).This is in line with research conducted by Muslichah (2020) which states that social disclosure has a positive effect on company value, because the disclosure made illustrates the company's legitimacy in the eyes of stakeholders and can significantly increase company value.which states that social disclosure has a positive effect on company value, because the disclosure made illustrates the company's legitimacy in the eyes of stakeholders and can significantly increase company value.H2: Social Disclosure has a positive effect on firm value. 
Governance Disclosure Governance disclosure refers to the extent to which a company publicly discloses information about its corporate governance practices (Previtali & Cerchiello, 2023).Governance Disclosure plays an important role in gaining stakeholders' trust.Governance frameworks encourage companies to look after the interests of their stakeholders as they understand that stakeholders contribute to the long-term success of the company.This is because pressure from stakeholders can make companies always act ethically in their business (Nugraheni et al., 2022).Disclosure of information regarding governance is an important aspect that investors pay great attention to in assessing and responding to the dynamics of company shares in the capital market.Investors often assume that good and efficient corporate governance practices have a significant impact on increasing public trust and corporate value.With clear and open disclosure of governance, companies show transparency and high responsibility in providing essential information, especially related to company management and operations, which in turn can be a positive acceptance factor and increase investment interest from investors.(Firmansyah et al., 2021).This is in line with research conducted by Aboud & Diab (2018) that states there is a significant positive influence between good corporate governance performance on company value, where companies that have higher governance performance tend to have higher company value.H3: Governance Disclosure has a positive effect on firm value.H4: Environmental, Social and Governance Disclosure simultaneously affect Firm Value. Firm Value The value of a company is a key factor that investors consider before investing.A company's share price is often considered vital data for the public, especially for those planning to invest, to assist in making investment decisions (Melinda & Wardhani, 2020).The value of a company in the eyes of investors is often measured by its stock price.A high stock price indicates a high company value.This becomes one of the important factors for investors when determining where they will invest their money.In other words, the success of a company can be measured by how effective it is in improving the welfare of its shareholders (Sarikawi & Natalylova, 2022). Increasing the value of the company is crucial, because it has a direct effect on the increase in share price and the welfare of shareholders.For a manager, the value of the company becomes an indicator of the effectiveness of his work.If the value of the company increases, this is interpreted as an increase in business performance.It also implicitly indicates success in improving shareholder welfare, which is the main objective of the company.From an investor's point of view, the increase in the value of the company makes the company more attractive as an investment option. 
Research Method This study uses quantitative data types with descriptive statistics.The data used is secondary data obtained from annual reports and sustainability reports of basic materials sector companies for the 2018-2022 period listed on the IDX and data processing using IBM SPSS version 26.The population in this study were basic materials sector companies listed on the IDX during the 2018-2022 period, the samples were selected using purposive sampling method.The criteria for selecting these samples, namely: Governance Disclosure Governance disclosure in this study which is disclosed in the sustainability report for the 2018-2020 period can be measured using a proportion ratio based on the GRI 102 guidelines which has 22 indicators (Firmansyah et al., 2021), can be calculated by the formula: G = Number Disclosed 22 Then, Governance disclosure disclosed in the 2021-2022 period sustainability report can be measured using a proportion ratio based on the GRI 2 guidelines which has 13 indicators (Firmansyah et al., 2021), can be calculated by the formula: G = Number Disclosed 13 Firm Value The measurement of firm value in this study is measured using the Tobin's Q formula (Melinda & Wardhani, 2020), namely: Total Assets Description: MVE: Closing share price x number of shares outstanding Results and Discussion Based on the results of data testing using the help of IBM SPSS version 26 software, the results of descriptive statistical tests are obtained in the following table: (2-tailed) to 0.200, this value is greater than the value of a = 0.05.So, it can be concluded that the test results above the data are normally distributed.Based on table 4, the three independent variables have a tolerance value> 0.10 and a VIF value < 10.So, it can be concluded that there is no multicollinearity. Source: Processed Data (2024) Based on Figure 1, it can be seen in the scatterplot graph that there is no clear pattern or it is said that the points spread randomly.So, it can be concluded that the data does not occur heteroscedasticity.Based on these calculations, it can be concluded that in this study there is no autocorrelation, either positive or negative, with the decision not being rejected, which indicates that this study is free from autocorrelation problems. Multiple Linear Regression Test The multiple linear regression equation is interpreted as follows: 1.The constant value of 0,491 indicates the coefficient of the dependent variable which is influenced by the independent variable.It shows that if the independent variable does not exist, the dependent variable will change.2. The environmental disclosure (X1) regression coefficient value of 1,887 indicates that the environmental disclosure variable has a positive influence on firm value.If the X1 variable increases by 1%, it will be followed by an increase in the dependent variable coefficient of 1,887 with the assumption that other variables are not examined in this study.3. The regression coefficient value of social disclosure (X2) of -0,302 indicates that the social disclosure variable has a negative influence on firm value.If the X3 variable increases by 1%, it will be followed by a decrease in the dependent variable coefficient of -0,302 with the assumption that other variables are not examined in this study.4. 
The governance disclosure (X3) regression coefficient value of -0,578 indicates that the governance disclosure variable has a negative influence on firm value.If the X3 variable increases by 1% it will be followed by a decrease in the dependent variable coefficient of -0,578 with the assumption that other variables are not examined in this study. Source: Processed Data (2023) Based on table 8, the results of the interpretation of the partial correlation test of environmental disclosure with firm value have a moderate level of relationship between the two variables.The positive correlation coefficient value indicates that any increase in environmental disclosure will include an increase in firm value. While the results of the interpretation of social disclosure and governance disclosure have a very low level of relationship.This shows that any increase in social disclosure and governance disclosure will be accompanied by a decrease in firm value.Based on table 9 and the results of the above calculations, it can be concluded that the analysis of the coefficient of determination of 49.5% means that 49.5% of firm value is influenced by the independent variables in this study, namely environmental disclosure, social disclosure and governance disclosure.While the other 50.50% is influenced by other variables outside the study.The Effect of Environmental Disclosure on Firm Value The results of partial hypothesis testing or simultaneous hypothesis testing that environmental disclosure positively impacts firm value in the examined sector, aligning with legitimacy theory's premise. Results of the t-test Legitimacy theory suggests that companies can secure their social license to operate by aligning their activities with societal expectations and norms.The positive correlation between environmental disclosure and firm value corroborates the idea that transparency regarding environmental initiatives can enhance a company's reputation, thereby increasing its attractiveness to investors and consumers who are increasingly concerned about sustainability.This is consistent with prior studies by Melinda & Wardhani (2020) and Aboud & Diab (2018) which also identified a positive relationship between environmental disclosure and firm value.From a stakeholder theory perspective, this reflects companies addressing the environmental concerns of their stakeholders, including customers, communities, and investors, thereby fostering goodwill and support that translates into increased firm value.Thus, environmental disclosure is a factor that can affect the increase and decrease in firm value. The Effect of Social Disclosure on Firm Value The results of partial hypothesis testing or simultaneous hypothesis testing that have been carried out, it can be seen that the social disclosure variable partially has no positive effect on firm value in basic materials sector companies listed on the Indonesia Stock Exchange for the period 2018-2022.This outcome challenges stakeholder theory to some extent, suggesting that not all stakeholder interests, particularly concerning social issues, are valued equally across industries or impact firm value directly.It may also reflect a gap between companies' social disclosure practices and what stakeholders consider meaningful or relevant.From the legitimacy theory perspective, this suggests that social practices and reporting, in this context, may not be critical components for maintaining or enhancing the companies' legitimacy in the eyes of their stakeholders. 
The results of this study are in line with research conducted by Firmansyah et al. (2021) and Sarikawi & Natalylova (2022) which say that social disclosure has no effect on firm value.This is also supported by the level of correlation between social disclosure and firm value which has a very low relationship, this means that companies that increase their social disclosure do not guarantee that firm value and investor interest will increase.Therefore, companies do not make social practices and reporting a top priority in order to increase company value.Thus, social disclosure is not a factor that can affect the increase and decrease in firm value. The Effect of Governance Disclosure on Firm Value The results of partial hypothesis testing or simultaneous hypothesis testing that have been carried out, it can be seen that the governance disclosure variable partially has no positive effect on firm value in basic materials sector companies listed on the Indonesia Stock Exchange for the period 2018-2022. According to stakeholder theory, this would suggest that governance issues might not be at the forefront of stakeholders' concerns for firms in the basic materials sector, or that such disclosures are not meeting stakeholder expectations in terms of transparency or relevance.In terms of legitimacy theory, this indicates that governance practices and their disclosure may not significantly contribute to the company's legitimacy or perceived value, possibly due to a lack of enforcement or perceived sincerity in these disclosures. The results of this study are in line with research conducted by Nurfauziah & Utami (2021) and Firmansyah et al. (2021) which say that governance disclosure has no effect on firm value.This is also supported by the level of correlation between governance disclosure and firm value which has a very low relationship, which means that companies that increase governance disclosure do not guarantee that firm value and investor interest will increase.Investors assume that governance disclosures made by companies are only voluntary and there are no sanctions if the company does not implement them.As a result, corporate governance disclosure is only limited to fulfilling administrative requirements but has not been reflected in the company's performance or activities.Thus, governance disclosure is not a factor that can affect the increase and decrease in firm value. Conclusion The research uncovers a positive correlation between environmental disclosure and firm value within this sector.This outcome implies that companies demonstrating transparency regarding their environmental initiatives showcase a dedication to ecological stewardship.This commitment not only improves their public image but also resonates positively with investors, thereby elevating their market valuation.Contrary to environmental disclosure, the findings reveal that practices of social and governance disclosure do not markedly influence the firm value within the examined sector.This indicates that increasing the level of transparency in social and governance affairs, in isolation, does not necessarily lead to heightened investor interest or an augmented company value, particularly within the specified timeframe and sector. 
A significant constraint of the study is its reliance on a limited sample size, consisting merely of 11 companies from the basic materials sector, constrained by the lack of widespread sustainability reporting within this group.This small sample size hampers the breadth and depth of the findings, limiting their application across the broader industry.The limited data set may not fully encapsulate the diverse impacts of ESG disclosures on firm value, thus impacting the robustness and generalizability of the study's conclusions. The study reinforces the importance of environmental disclosure while offering critical insights into the nuances of stakeholder theory and legitimacy theory as they apply to social and governance disclosures.These insights should guide future corporate practices and research, aiming for a deeper understanding of how different forms of disclosure influence firm value in varying contexts. Based on these reflections, future avenues of research should consider broadening the data set to include a larger number of companies, potentially across varied sectors, to enrich the reliability and applicability of the findings.Recommendation 1.For future research to explore other independent variables that may affect firm value, such as risk management policies or technological innovation.2. This research focuses on basic materials sector companies.Therefore, for future research, it is recommended to choose samples from other sectors listed on the Indonesia Stock Exchange.This aims to make comparisons regarding the value of companies in these sectors.3.For future research, it is recommended that researchers expand the sample size and also expand the research time period.Most likely, along with the latest regulations and business developments there will be an increase in the number of companies that prepare sustainability reports over time. Source: Processed Data (2024) Results of the f-test Tabel 11 Results of the f-test Source: Processed Data (2024)
2024-06-07T15:03:40.241Z
2024-05-01T00:00:00.000
{ "year": 2024, "sha1": "3c7884238c09f30de49151cd65a3bd8d35708287", "oa_license": null, "oa_url": "https://doi.org/10.28932/jam.v16i1.8140", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "d6e76e2dc8801ebf8f8dd668a54b79bcc526f003", "s2fieldsofstudy": [ "Environmental Science", "Business" ], "extfieldsofstudy": [] }
4593372
pes2o/s2orc
v3-fos-license
Cross ‐ cultural Adaptation of the Hip Disability and Osteoarthritis Score into Persian Language : Reassessment of Validity and Reliability Hip is a large weight bearing synovial joint having a great effect on gait kinematics and thus having a major impact on the quality of life (QOL). We routinely measure health related QOL with the SF‐36 questionnaire. Recently, there is a trend toward region specific health‐related QOL including Manchester‐Oxford foot questionnaire for foot problems, disabilities of the arm shoulder and hand for upper extremity problems, and Western Ontario and McMaster Universities arthritis index (WOMAC) for knee osteoarthritis. Introduction Hip is a large weight bearing synovial joint having a great effect on gait kinematics and thus having a major impact on the quality of life (QOL).We routinely measure health related QOL with the SF-36 questionnaire.Recently, there is a trend toward region specific health-related QOL including Manchester-Oxford foot questionnaire for foot problems, disabilities of the arm shoulder and hand for upper extremity problems, and Western Ontario and McMaster Universities arthritis index (WOMAC) for knee osteoarthritis. WOMAC is the most widely used, valid, reliable, and responsive patient-reported outcome measure for osteoarthritis of the knee and hip. [1,2]It is especially applicable in the elderly population, and the estimated normative values are different from middle aged. [3]e hip disability and osteoarthritis outcome score (HOOS) is a patient-based expansion of the WOMAC for hip osteoarthritis. [4]his scoring system is useful for young and active individuals as well as elderly patients with hip osteoarthritis. The HOOS is a 40-item patient reported disease-specific QOL measurement tool comprised of six domains that include all 24 items of WOMAC with some additional items.In the pain domain, it includes the five questions of WOMAC and five more added questions.In the function of daily living and stiffness domains, it only includes WOMAC items.There are also three new domains in the HOOS including sports and recreational activity (four items), QOL (four items), and symptoms (three items).Indeed, the last three domains are the extension of stiffness domain.Each item is scored between 0 and 4, and each subscale is calculated independently.Scores are transformed to a normalized zero to 100 scaling system. [5]e original HOOS scaling system has been validated, and its responsiveness has been confirmed in patients with hip osteoarthritis in relation to medical and surgical treatments and recently in arthroscopic hip surgeries. [6]9][10] There are many similarities in the Iranian life style with Europeans and Americans.However there still remains several differences in bathing, eating, and recreational activities throughout the life styles.The aim of this study was to translate and cross culturally adapt the Persian HOOS with further testing its reliability and validity among Persian speaking population. Methods The institutional review board approved this study, which was divided into two parts; first, we translated the HOOS measuring instrument to Persian language with respect to suitable cultural adaptation.Then assessment of validity and reliability was performed in hip osteoarthritis patients if they were signed the consent form. First translation Translation and cross-cultural adaptation process performed according to Guillemin et al. guidelines published in 1993 and 2000. 
[11,12]Three independent translations were produced by two orthopedic surgeons and an expert bilingual translator (Persian as the first language) that all of them were aware of the objective of the study. Back-translations Back-translation was done by three independent bilingual translators that one of them was England-native.All of the back-translators were blinded to objectives of the study. Committee review A committee consists of four orthopedic surgeons and an English language expert compared to source and these three final versions.They could correct errors in first translations and improve cross-cultural adaptation.In the committee, conceptual, experiential, and idiomatic equivalence were considered more important than semantic equivalence. Pretesting We used a probe technique for the checking of the face validity before final approval of translation.We handed translated version of HOOS to 20 patients in foot and ankle outpatient clinic.We checked whether all patients could understand each item.For this purpose, the answer to an item is sufficient.However for checking the probability of misunderstanding, a probe question was asked after each item answering: "what do you mean?" Psychometric assessment In the second part of the study, we assessed psychometric properties of the new Persian version of HOOS for 203 hip osteoarthritic patients in the orthopedic outpatient clinic of Ghaem hospital (Mashhad, Iran) seeking surgical treatment. The inclusion criteria for filling the questionnaire were all primary hip osteoarthritis patients according to the American College of Rheumatology in 1991 that were seeking a therapeutic surgical procedure aged more than 40 years old. [13]The exclusion criteria were the existence of any other complaint or even osteoarthritis sign or symptom in any other joint that is a major problem for the patient. [9]e eligible cases in the outpatient setting were given Persian version of HOOS and SF36. SF36 is a general health-related patient reported questionnaire that its feasibility and psychometric characteristic has been proved and has been translated and validated to the Persian language. [14]This valuable measurement mean is used routinely as a standard questionnaire for validation of other health related QOL measurements.It is composed of 36 items, 8 component (physical functioning, role physical, role emotional, social functioning, mental health, energy, general health, and pain) and two extra domains (physical component summary [PCS] and mental component summary [MCS]). [15,16] data analysis first, for determining the feasibility, the percentage of responses, floor and ceiling effect were assessed.Floor and ceiling effects happen when more than 15% of responses are the lowest and the highest score. [17] Reliability For assessing reproducibility and reliability of responses and reducing intra-and inter-observer errors we used test-retest reliability.For 20 patients, the HOOS questionnaire was filled 1 week later for the second time.Responses in each subscale were assessed using intra-class correlation coefficient (ICC) with 95% confidential interval [CI].We considered excellent reliability in each domain if ICC gets >0.8. [18]ternal consistency For assessing the correlation between items in each subscale and in between all subscales as a whole, we measured internal consistency using the Cronbach's alpha with 95% (95% CIs).We considered coefficients equal or >0.7 as good correlation. 
[19] Construct validity For assessing the construct validity of a translated version of a validated questionnaire, we used SF-36 as a previously validated questionnaire with similar goals.For this purpose, we could determine convergent validity between similar scales in these two questionnaires using nonparametric Spearman correlation coefficient.With a significance level of two-tailed P < 0.05 coefficient amounts that were >0.6 were considered strong convergent and amounts <0.4 were considered to have week meaningful correlation. [16] Statistical analysis We used previous Korean validation study for this questionnaire in hip osteoarthritis patients for sample size calculation.In the Korean version, they used 75 cases, 25 of them filled the retest forms.In the present study, we had 205 arthritic patients.Statistical analysis was performed using SPSS version 21.0 for Mac (SPSS, Chicago, IL, USA), and P < 0.05 was considered statistically significant.Internal consistency was measured using Cronbach's alpha coefficient with or without deleting each subscale.Cronbach's alpha coefficient of 0.7 and more was considered satisfactory.Reliability was also tested by measuring ICC between test and retest with 2-14 days interval.Test-retest reliability measures the robustness of the results in the repeated filling.The interval was selected to be long enough so that the patient could not recall the previous answers.Furthermore, the interval should not be too long in that a treatment changes the condition. We assessed content validity by measuring the strength and direction of the association between SF-36 and the Persian version of the HOOS using Spearman's correlation coefficient. [20,21] Translation for translation process, there were little changes needed on pure semantic translation because in all items there were enough transcultural equalities.In pretesting assessment of new version that was carried out on 20-clinic outpatient, all patients with at least primary educational skills answered all questions, and all answers to probe questions were acceptable (Please find the Persian version in the Appendix). For psychometric assessment of new version of HOOS, a total of 203 patients from the outpatient clinic were recruited.All patients had long-standing hip osteoarthritis seeking a surgical treatment option.Table 1 represents characteristics of patients included for psychometric assessment.In Table 2, function of sport shows 18% floor effect.The floor and ceiling effects were <3% in other subscales. We can see average HOOS, VAS, and SF-36 in Table 3. Construct validity is shown in Tables 4 and 5. Table 6 depicts internal consistency and test-retest reliability of new version of HOOS.Cronbach's alpha ranged from 0.81 to 0.95 showed good to excellent internal consistency between items of each subscale (P < 0.05) except for the QOL subscale that results were not meaningful (P = 0.1). Discussion The aim of this study was to translate and cross-cultural adapt the HOOS questionnaire to Persian language and to test the validity, reliability, and internal consistency of this new version in osteoarthritic hip patients.The new change was combining stiffness (just two items) with symptom subscale. 
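The internal-consistency and convergent-validity figures discussed here rest on two standard computations, sketched below for clarity. Item-level and scale-level scores are assumed to be available as NumPy arrays; the ICC reported for test-retest reliability would additionally require a two-way mixed-effects formulation that is not shown.

```python
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_patients x n_items) array of subscale item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

def convergent_validity(hoos_subscale, sf36_scale):
    """Spearman's rho between a HOOS subscale and an SF-36 scale; the study treats
    rho > 0.6 as strong and rho < 0.4 as weak convergence."""
    rho, p = spearmanr(hoos_subscale, sf36_scale)
    return rho, p
```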
Hip osteoarthritis is a common and rapidly ingrowing disorder in between Iranian population that is becoming older years ahead.For assessing QOL of these patients, we need to have a strong measuring means.As HOOS questionnaire is a strong and well-done measurement score regarding hip osteoarthritis and some other hip problems and until now is validated in several societies.Lee translated and culturally adapted it according to the same guideline in patients with hip osteoarthritis.Satoh in 2013 validated this questionnaire in Japanese patients with the same problem.] The results of validity showed a statistically meaningful correlation between HOOS questionnaire and SF36 (P < 0.05).Spearman's Rho correlation coefficient between HOOS as a whole and PCS domain was strong (r = 0.639), although this correlation with MCS is not good (r = 0.164, P = 0.04). We could see the best correlation between HOOS subscales and bodily pain of SF-36 (r = 0.63, P < 0.0001).PCS of SF-36 had acceptable meaningful correlation with all subscales of HOOS (P < 0.0001, r > 0.4).Surprisingly, the function of daily living correlation was more than the association between pain scale of HOOS and bodily pain (r = 0.59 vs. 0.53) of SF-36.This conflict was noted in Japanese translation first. [10]Other subscales of HOOS had a moderate correlation with PCS dimension of SF36, but week association was obtained between MCS dimension and HOOS subscales that were mentioned before.We concluded that translated HOOS questionnaire can better address physical aspects of QOL in our patients. Cronbach's alpha results show an acceptable internal consistency in all subscales of HOOS and when each item was deleted the consistency was not exceeded the total amount (0.98).The lowest internal consistency was seen in symptom domain and QOL, but still acceptable (0.806) we can see this lower internal consistency in QOL domain in Japanese version in comparison with other domains (0.78).In French version, internal consistency in Symptom domain was <0.7. [9]Assessing test-retest reliability using ICC showed good reproducibility with ICC >0.81.ICC for QOL is not acceptable compared with other studies in other languages.Maybe questions in this part are not a place of care and concern regarding society culture and overall level of activity (P = 0.1).Total ICC showed a mean amount of 0.97 that relieves our concern about reproducibility. This study has some limitations; the most prominent one is that some psychometric properties were not addressed in our study.We considered patients just before surgery without follow-up after that.Hence, we could not assess the responsiveness of this new version.The second limitation was that we included just hip osteoarthritis patients and not other common hip disorders.We recommend further investigation that addresses responsiveness of the HOOS and a separate study on other hip disorders such as pure labral injuries or congenital problems of the hip. Conclusions Persian version of the HOOS questionnaire is a valid and reliable assessment tool in patients with hip osteoarthritis. Financial support and sponsorship Nil. 
Table 5: Construct validity of Hip Disability and Osteoarthritis Outcome Score subscales (correlations with Short Form-36). HOOS=Hip Disability and Osteoarthritis Outcome Score, QOL=Quality of life, SF=Short form; *Correlation is significant at the 0.05 level (two-tailed); **Correlation is significant at the 0.01 level (two-tailed)
Table 3: Average functional score of patients with hip osteoarthritis (n=203). MCS=Mental component summary, HOOS=Hip Disability and Osteoarthritis Outcome Score, VAS=Visual analog scale, SF=Short form, SD=Standard deviation
Table 4: Construct validity expressed by Spearman's rho correlation coefficient between the Hip Disability and Osteoarthritis Outcome Score and subscales of the Short Form-36. MCS=Mental component summary, HOOS=Hip Disability and Osteoarthritis Outcome Score, SF=Short form; *Correlation is significant at the 0.05 level (two-tailed); **Correlation is significant at the 0.01 level (two-tailed)
Table 6: Internal consistency and test-retest reliability of the Persian version of the Hip Disability and Osteoarthritis Outcome Score. ICC=Intraclass correlation coefficient, CI=Confidence interval, QOL=Quality of life, HOOS=Hip Disability and Osteoarthritis Outcome Score
Does smoking status affect cost of hospitalization? Evidence from three main diseases associated with smoking in Iran

Background: Smoking is recognized as one of the main public health problems worldwide and accounts for a high financial burden on healthcare systems and on society as a whole. This study aimed to examine the effect of smoking status on the cost of hospitalization among patients with lung cancer (LC), chronic obstructive pulmonary disease (COPD) and ischemic heart disease (IHD) in Iran in 2014. Methods: A total of 1,271 patients (415 LC, 427 COPD and 429 IHD patients) were included in the study. Data on age, sex, insurance status, length of hospital stay and cost of hospitalization were extracted from the patients' medical records. The smoking status of the patients was obtained through a telephone survey. A generalized linear model (GLM) was used to compare the costs of hospitalization of current, former and never smokers. The analysis was done using Stata v.12. Results: The mean±SD cost of hospitalization per patient was 45.6±41.8 million IR for current smokers, 34.8±23 million IR for former smokers and 27.6±24.6 million IR for never smokers. The cost of hospitalization was 65% and 26% higher for current and former smokers, respectively, than for never smokers in the unadjusted model, and 35% and 24% higher in the adjusted model. Conclusion: The findings revealed that smoking consumes substantial hospital resources and imposes a high financial burden on the health system and on society. Therefore, efforts should focus on reducing the prevalence of smoking and the negative economic consequences of smoking.

Introduction
Smoking is the most important public health problem requiring attention globally (1,2). Lung cancer, ischemic heart disease and respiratory illness, among many other diseases, are closely associated with and caused by smoking. Each year, more than five million adults die from conditions related to smoking, and the annual death rate is expected to rise to about 8 million people by the year 2030, with more than 80% of these deaths occurring in low- and middle-income countries (3,4). The financial consequences of smoking and related diseases for health systems, as well as for societies as a whole, are considerably high (5)(6)(7). However, most evidence on the negative effect of smoking on utilization of health services, direct and indirect costs and work absenteeism comes from developed countries (7)(8)(9)(10)(11)(12)(13)(14)(15)(16). Evidence from developing countries concerning the negative effects of smoking is less well documented (2,4). Recent estimates indicate that the prevalence of smoking in Iran is 12.5% (23.4% for males and 1.4% for females) and that an individual on average smokes 13.7 cigarettes daily (17). A previous study reported that smoking accounted for 4,623 cancer deaths, 80,808 years of potential life lost and US$ 83,019,583 in lost productivity (4). To our knowledge, little information is available on the impact of smoking on healthcare cost, length of hospital stay and healthcare utilization related to smoking-associated conditions in Iran (4,15). However, reliable evidence is required to effectively reduce the prevalence of smoking and successfully implement smoking cessation interventions in the country.
This study aimed to address the information gap concerning the effect of smoking status on the cost of hospitalization for patients with lung cancer (LC), chronic obstructive pulmonary disease (COPD) and ischemic heart disease (IHD) in Iran in 2014.

Study Setting
Tehran is a province as well as the capital city of Iran. According to the 2011 census report, the total population of the city was around 9 million, and 16 million in the wider metropolitan area. Tehran is the largest city in Iran, the second largest city in Western Asia, and the third largest in the Middle East.

Sample Size and Data Collection
In the health system network of Iran, inpatient care is delivered through hospitals owned by different providers. Governmental hospitals are the main providers of inpatient services and account for 68% of total hospital beds. Private hospitals account for 12% of the total hospital beds, while the Social Security Organization (SSO) and Armed Forces hospitals account for 9% and 4%, respectively. Oil Company and NGO hospital beds account for the rest of the share (18). The provider's perspective was used to include the cost of hospitalization in the analysis. All patients aged 35 years and older who were discharged from the hospitals between 21 March 2014 and 22 March 2015 and were managed for LC, COPD or IHD related to smoking constituted the study population; the latent period between initial exposure to smoking and occurrence of cancer is believed to be about 20 years or more (19). The patients were identified using the International Classification of Diseases, 10th Revision (ICD-10) codes for LC (C33-C34), COPD (J40-J44) and IHD (I20-I25). The samples for the study were selected in two stages. First, 1,503 patients (501 with LC, 501 with IHD and 501 with COPD) admitted during the study period were selected based on the proportion of hospital beds of the different providers; that is, 73% of the patients were from governmental hospitals, 12% from private hospitals, 9% from social security hospitals and 4% from the Armed Forces hospitals. Data on sex, age, insurance status, type of disease, residence, length of hospital stay and hospitalization costs were retrieved from the patients' medical records using a self-constructed checklist. In the next stage, information about the smoking status of the selected patients was obtained through a telephone survey. Based on the responses of the interviewees, 1,271 patients (415 LC, 427 COPD and 429 IHD patients) were included in the analysis. Patients were classified as current smokers if they were smoking at least one cigarette per day at the time of hospitalization, never smokers if they had never smoked or had smoked fewer than 100 cigarettes in their lifetime, and former smokers if they had smoked regularly or occasionally in the past but not at the time of hospitalization.

Ethics
The Ethics Committee of the Deputy of Research of Tehran University of Medical Sciences reviewed and approved the study protocol (Code: IR.TUMS.REC.1394.659).

Statistical Analysis
Because the cost data were not normally distributed (Shapiro-Wilk test, p<0.001), the Kruskal-Wallis test was used to explore any significant difference in hospitalization cost among the groups. To account for the skewed distribution of the costs, a GLM with a gamma distribution and log link was used; this model has been established as a sound method for modelling healthcare costs (11,20,21).
The Modified Park Test confirmed the gamma distribution (p=0.45), and the Pregibon link test (p=0.24) and Pearson correlation test (p=0.58) confirmed the choice of the log link function. The cost ratio (CR) was used to report the association between the dependent and independent variables: a CR equal to one indicates no association, and a CR higher than one indicates a positive relationship. Age of less than 35 years was used as an offset variable for the study population aged 35 years and above. Costs were expressed in the original Iranian Rials (IR); during the study period, 34,500 Rials were equal to US$1. Associations were considered statistically significant at a p-value of less than 0.05, and all analyses were done using Stata software version 12.

Results
A total of 1,271 patients with a mean±SD age of 62.5±10.8 years (range 35 to 93 years) were included in the study. Of these, 415 (32.6%) were diagnosed with LC, 427 (33.6%) with COPD and 429 (33.8%) with IHD. Men comprised 67.6% (n=860) of all study patients; 70.8% of LC, 65% of IHD and 67.2% of COPD patients were male. The mean±SD LoS was 9.4±8.4 days for current smokers, 7.3±5.3 days for former smokers, and 6.02±5.05 days for never smokers. The prevalence of current smokers among the patients was 33.9%, while the prevalence of former and never smokers was 12% and 54.1%, respectively (Table 1). The mean±SD cost of hospitalization per patient was 45.6±41.8 million IR for current smokers, 34.8±23 million IR for former smokers and 27.6±24.6 million IR for never smokers. Furthermore, the average cost of hospitalization was 25.3±21 million IR for COPD, 39.1±37.2 million IR for IHD and 39.3±34.7 million IR for LC patients. The results of the GLM with gamma distribution and log link function for the costs of hospitalization are presented in Table 2. The adjusted gamma regression model revealed that smoking status, LoS, type of disease and type of hospital were associated with higher hospitalization cost. Compared with never smokers, current and former smokers had 35% and 24% higher costs of hospitalization, respectively (p<0.001). LoS was associated with increased cost of hospitalization: the cost for patients with a LoS between 3 and 5 days was 91% higher than for those with a LoS of less than 3 days. In addition, the cost of hospitalization was significantly associated with the type of hospital (p<0.001); the average cost of hospitalization in private and social security hospitals was 2.17 and 1.27 times higher, respectively, than for patients admitted to the Armed Forces hospitals.

Discussion
This study investigated the effect of smoking status on the cost of hospitalization from the provider's perspective in Iran. The total cost of hospitalization for the current and former smoker categories was 65% and 26% higher, respectively, than for never smokers in the unadjusted model, and 35% and 24% higher in the adjusted model. These findings are in line with other studies that have reported increased costs in current and former smokers compared with never smokers (2,11,14,15,22). A study in Germany reported a positive association between history of smoking and direct and indirect costs: the total annual costs were more than 20% and 35% higher for current and former smokers, respectively, compared with never smokers (11).
Similarly, another study found that costs for current and former smokers were significantly higher than for never smokers: the monthly cost of hospitalization was $400 higher for current smokers and $273 higher for former smokers than for never smokers (14). In our study, the difference in the cost of hospitalization between current and never smokers was 17.9 million IR, whereas the difference between former and never smokers was 7.2 million IR. Another factor affecting the cost of hospitalization was the LoS. Our findings revealed that the cost of hospitalization for patients with a LoS of more than 9 days was 5.2 times higher than for those with a LoS of less than 3 days. The average LoS for current and former smokers was 9.4±8.4 days and 7.3±5.3 days, respectively, while for never smokers it was 6.02±5.05 days. These findings are consistent with the reports of previous studies in Japan (15), the USA (10) and Iran (22). Moreover, Izumi et al. found that the hospitalization rate of current plus former smokers was 26% (in males) and 22% (in females) higher than in never smokers (15). Overall, the hospitalization rate for current and former smokers was 30% and 20% higher, respectively, than in never smokers. A study in the United States on medical services utilization by smokers during 1999 to 2004 indicated that current smokers were more likely to be hospitalized than never smokers; in addition, outpatient visits for current smokers were four times higher than for never smokers (9). Our study did not find any statistically significant relationship between age or sex and the cost of hospitalization, which is consistent with the findings of another study (23). However, there was a positive relationship between being male, being aged 35-64 years, and the cost of hospitalization. These findings can be explained by the higher prevalence of smoking among males than among females (42.8% vs. 15.3%) and among patients aged 35-64 years compared with those over the age of 64 years (36.7% vs. 30.2%). There are, however, some limitations to our study. The study was carried out in Tehran, so the generalizability of the findings is limited. In addition, the effect of disease severity on the cost of hospitalization was not adjusted for, because these data were not available in the patients' medical records. The self-reported smoking status of the patients through the telephone survey might have led to social desirability bias and underestimation of the prevalence of current and former smoking. Furthermore, the total costs of hospitalization might not reflect the actual hospital cost of smoking, as they did not include government subsidies to hospital services or all out-of-pocket payments by the patients. Indirect medical costs such as transportation and food costs, and other indirect costs associated with loss of productivity due to disability and mortality, were not included, as measuring these costs was beyond the scope of the study.

Conclusion
This study demonstrated that considerable costs of hospitalization are attributable to smoking in Iran. The financial consequences of smoking for the health system, especially for hospitals, and for society as a whole are substantial. However, further studies are needed to investigate the impact of smoking status on healthcare utilization, physicians' visits, pharmaceutical costs and LoS.
The current findings suggest the need for strong preventive measures for smoking and its negative economic consequences.
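To make the modelling strategy described in the Statistical Analysis section concrete, the sketch below fits a gamma GLM with a log link to simulated, right-skewed cost data and reports exponentiated coefficients as cost ratios (CR) relative to never smokers. It is written in Python with statsmodels rather than the Stata used in the study, and every variable name and value is invented for illustration only.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "smoker": rng.choice(["never", "former", "current"], size=n, p=[0.54, 0.12, 0.34]),
    "los": rng.integers(1, 20, size=n),  # length of stay in days
})
# Simulate skewed costs (in million IR) that rise with LoS and smoking status.
expected = 10 * np.exp(0.05 * df["los"]
                       + 0.2 * (df["smoker"] == "former")
                       + 0.3 * (df["smoker"] == "current"))
df["cost"] = rng.gamma(shape=2.0, scale=(expected / 2.0).to_numpy())

model = smf.glm(
    "cost ~ C(smoker, Treatment(reference='never')) + los",
    data=df,
    family=sm.families.Gamma(link=sm.families.links.Log()),
)
result = model.fit()

# Exponentiated coefficients are cost ratios: CR = 1 means no association,
# CR > 1 means higher expected cost relative to the reference category.
print(np.exp(result.params))

The gamma family with a log link is a common choice for cost data because it accommodates positive, right-skewed outcomes and yields multiplicative (ratio) effects, which is why the coefficients above are read as cost ratios.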
Cytogenomic Aberrations in Isolated Multicystic Dysplastic Kidney in Children Background: Multicystic dysplastic kidney (MCDK) is a common form of congenital kidney anomaly. The cause of MCDK is unknown. We investigated whether MCDK in children is linked to cytogenomic aberrations. Methods: We conducted Array Comparative Genomic Hybridization (aCGH) in 10 unrelated children with MCDK. The pattern of inheritance was determined by real-time PCR in patients and their biological parents. Results: Pathogenic aberrations were detected in three patients: a deletion at 7p14.3 with a size of 2.07 Mb housing 12 genes, including BBS9 and BMPER; a duplication at 16p13.11p12.3 with a size of 3.28 Mb that included more than 20 genes; and monosomy X for a female patient. The deletion at 7p14.3 was inherited from patient’s father, while the duplication at 16p13.11p12.3 was derived from patient’s mother. Conclusions: Up to 30% of patients with MCDK possess cytogenomic aberrations. BBS9 and BMPER variants have been reported to result in cystic kidney dysplasia, suggesting possible pathogenic function for the deletion at 7p14.3 in children with MCDK. The duplication at 16p13.11p12.3 was not reported previously to associate with MCDK. Both variations were inherited from parents, indicating hereditary contributions in MCDK. Thus, aCGH is an informative tool to unravel pathogenic mechanisms of MCDK. Introduction Multicystic dysplastic kidney (MCDK) is a form of congenital anomaly of the kidney and urinary tract (CAKUT). CAKUT are a major cause of kidney failure in children, accounting for 30 to 50% of cases of end-stage kidney disease (ESKD) [1]. Multiple lines of evidence, including discovery of causative genes and intrafamilial CAKUT segregation, support the contribution of genetic factors to CAKUT in humans. A recent large-scale study by Verbitsky et al. reported presence of a large size (≥100 kb) copy number variations (CNVs) in 4.1% of patients with diverse forms of CAKUT recruited in the United States, Europe and Brazil, demonstrating that genomic disorders represent a common etiology of CAKUT [2]. However, the gene variants causing a discrete form of CAKUT such as MCDK remain incompletely defined. MCDK consists of macroscopic cysts with absent working kidney tissue and arises in 1 in 1000 to 1 in 4300 live births [3,4]. MCDK is unilateral in most cases, but may affect both kidneys. Bilateral MCDK causes Potter syndrome (hypoplastic lungs, deformed limbs, widely separated eyes, broad nasal bridge, low set ears). Histological examination of MCDK tissue demonstrates presence of connective tissue and immature epithelium [3,4]. MCDK is believed to result from either abnormal inductive interactions between the ureteric bud and the metanephric mesenchyme during embryonic kidney development or from intrauterine fetal urinary tract obstruction [5,6]. Variants in genes critical for normal kidney development (so called renal developmental genes), including HNF1B, ACE, PAX2, REN, ROBO2, AGTR1, SALL1, AGT genes, have been linked to pediatric MCDK [7][8][9][10]. These observations indicate that genetic alterations may play a causative role in MCDK. In addition to single variants of discrete genes, large cytogenetic defects, including the number of copies of a particular gene, insertions, deletions, and duplications of large DNA segments, are associated with CAKUT [11]. CNVs are defined as any gain or loss of germline DNA. 
The characterization of DNA CNVs by Array Comparative Genomic Hybridization (aCGH) analysis has proven highly valuable in revealing pathogenic mechanisms of CAKUT [2,11]. However, the pathogenic roles of cytogenomic aberrations in children with MCDK have not been well investigated. Here, we report the results of a study using aCGH in ten pediatric patients with isolated MCDK.

Patients
The study was approved by the Tulane University School of Medicine (IRB#: 150438-4). Informed consent was obtained from the parents of the children and, where appropriate, assent was acquired from the children themselves. Ten unrelated patients with MCDK (mean age 8.5±1.1 years) were enrolled in the study after the clinical diagnosis of MCDK was established by renal ultrasonography (US). Blood was obtained from the MCDK patients and from 20 pooled age-, race- and sex-matched controls (6 black and 5 white males, 5 black and 4 white females). Buccal cells were obtained from the patients' biological parents. Children in the control group had US performed for kidney diseases other than CAKUT (e.g., minimal proteinuria and microscopic hematuria). Six MCDK patients were female and four were male. Kidney function was estimated from plasma creatinine with the Schwartz equation (height in cm x 0.413/plasma creatinine in mg/dL) [12].

DNA isolation
DNA was obtained from blood or from the buccal cells of the patients' biological parents as previously described [10].

Array Comparative Genomic Hybridization (aCGH) analysis
DNA was isolated and labelled using the Agilent-recommended protocol. CGH was performed on an Agilent microarray platform with 105k probes (Agilent Inc., Santa Clara, CA). The array image was acquired with an Agilent Array Scanner, and the microarray data were analyzed using the Cytogenomics software package from Agilent Inc. Interpretation of detected CNVs was conducted according to the ACMG standards and guidelines, 2013 revision [13], and the technical standards recommended by ACMG and ClinGen [14]. Information on the clinical significance of reported cases was extracted from ClinVar at www.ncbi.nlm.nih.gov/clinvar and DECIPHER at www.decipher.sanger.ac.uk. Association analysis of gene function and clinical features was based on information from Online Mendelian Inheritance in Man (OMIM at www.omim.org).

Real-time quantitative polymerase chain reaction (qPCR)
Primers for qPCR were designed to detect the relative copy number of targeted genes within the aberrations and to avoid common genomic variants encountered in the general population, as reported in the Database of Genomic Variants. The SYBR green assay (Thermo Fisher Scientific) was performed on BMPER (BMP Binding Endothelial Regulator) in the deleted region and XYLT1 (Xylosyltransferase 1) in the duplicated region, using the RPPH1 (Ribonuclease P RNA Component H1) and TERT (Telomerase Reverse Transcriptase) genes as references. The relative copy number was calculated using the ΔΔCt method, compared to an unaffected human DNA sample. Primers used for qPCR were as follows: BMPER-forward: ctgtggtttgcaagaggaag, BMPER-reverse: atgtcttctgggggcactc, XYLT1-forward: caacgagtccagccatcc, XYLT1-reverse: cagagcttccagagcctaaac, TERT-E3-forward: tcccacgacgtagtccat, TERT-E3-reverse: cagaggtcaggcagcatc, RNaseP-forward: ggagagtagtctgaattgggttatg, and RNaseP-reverse: ggagcttggaacagactcac.

Clinical findings
The mean age of the MCDK patients was 8.5±1.1 years and that of the children in the control group was 9.7±0.9 years (Tables 1 and 2). Four MCDK patients were male and six were female.
Eight children were African-American and two-Caucasian. All children manifested normal blood pressure and renal function. In all children, family history was negative for MCDK or other anomalies of the kidney and urinary tract. MCDK was isolated in nine of ten cases and was associated with Turner's syndrome in one of patient. MCDK was unilateral in all children with ratio of right vs. left MCDK 1:1 (Table 1). Contralateral kidney underwent proper compensatory hypertrophy in all instances. Mild hydronephrosis was identified in the contralateral kidney of one patient. US did not reveal any renal malformations in children from the control group. All biological parents had apparently normal phenotype and reported absence of known kidney disease or abnormalities of the urinary system. aCGH results Three diverse pathogenic aberrations (a deletion, a duplication, and a numerical abnormality) were detected in 3 of 10 patients (in 2 of 9 patients with isolated MCDK and in a single patient with known Turner syndrome). The results are summarized in Table 3. No pathogenic aberrations were detected from the other seven patients using the laboratory standard cutoff at the resolution of 300 kb. The deletion detected from subject 3 was located at 7p14.3 with a size of 2.07 Mb. This alteration results in deletion of 7 protein coding genes, including BBS9 (Bardet-Biedl Syndrome 9), BMPER, and RP9 (Retinitis Pigmentosa 9), (Figure 1 and 2A), as well as a pseudogene, RP9P (Retinitis Pigmentosa 9 Pseudogene), and three copies of non-coding gene of NPSR (Neuropeptide S Receptor)-AS1 (antisense RNA 1). The duplication detected from subject 41was located at 16p13.11p12.3 with a size of 3.28 Mb. There are more than 30 genes in this duplicated region, including 16 protein coding genes, 18 microRNA (miRNA) genes in three clusters, and two pseudogenes. (Figure 1 and 2B). The third aberration was monosomy X (Figure 1) from a female patient (subject 22). The data of the laboratory analysis are shown in Table 3. qPCR results The results from qPCR assay confirmed the findings from aCGH of the deletion and duplication. qPCR studies of the two families showed the deletion at 7p14.3 was inherited from patient's father (Fig 3A), while the duplication at 16p13.11p12.3 was derived from patient's mother (Fig 3B). The monosomy X is a known product of meiotic nondisjunction and no further confirmation study was carried. Discussion The current study identified novel cytogenomic aberrations in children with isolated MCDK. The overall prevalence of cytogenomic aberrations was about 33% (3 of 10 patients). The prevalence of cytogenomic aberrations in patients with isolated MCDK in this study was about 22% (2 of 9). These findings are in line with other reports showing that molecular diagnosis due to a copy-number disorder can be established in 4.1% to 14.5% of children with diverse forms of CAKUT [2,11]. Relative enrichment for CNVs in our analysis (22%) could be due to study of a single discrete form of CAKUT such as MCDK rather than diverse forms of CAKUT. Our present findings support the role of genetic factors in the pathogenesis of MCDK. Understanding the genetic architecture of MCDK as a discrete form of CAKUT has important implications for the development of preventive and therapeutic interventions that aim to mitigate the associated cardiovascular comorbidities and curtail progression of kidney disease. 
In addition, identification of CNVs helps to identify novel intracellular pathways that are implicated in the pathogenesis of MCDK and to provide molecular diagnosis, thus establishing the etiology of MCDK. The deletion at 7p14.3 results in the deletion of BBS9 and BMPER genes. Mutations in BBS9 cause Bardet-Biedl Syndrome (BBS, OMIM 209900), a rare autosomal-recessive ciliopathy distinguished by mental retardation, polydactyly, obesity, retinitis pigmentosa and CAKUT [15,16]. Renal manifestations of BBS include collecting duct microcysts rather than macrocysts observed in MCDK [15,16]. In the Databases of Decipher and ClinVar, 10 patients with 7p14.3 deletion were reported with deletions in size from 200Kb to 2100kb, and all these deletions have BBS9 and BMPER involved. Six of these patients showed clinical features of intellectual disability, autistic behavior and developmental delay. One patient also manifested unilateral renal hypoplasia. Our patient with the 7p14.3 deletion (subject 3 in Table 1 and 3) has clinical features of MCDK and autistic feature, similar to this case. It is worth to notice that the clinical features of these patients are different from that of BBS patients, suggesting that phenotype of 7p14.3 deletion may be due to more complex mechanism of multiple gene deletions rather than to loss of function of a single BBS9 gene. Animal studies demonstrate that mice deficient in a related gene, BBS4, (Bbs4 −/− ) exhibit renal glomerular macrocysts [17]. These findings underscore the importance of BBS spectrum genes in cystogenesis in mammals. Mutations in BMPER, which encodes the bone morphogenetic protein (BMP)-binding endothelial cell precursor-derived regulator, cause diaphanospondylodysostosis (DSD, OMIM 608022), a rare autosomal-recessive disease characterized by aberrant vertebral segmentation and a small chest. Renal findings in DSD include nephroblastomatosis with cystic kidneys [18,19]. Bmp4 is a member of the transforming growth factor β (TGF-β) family and is essential for normal kidney organogenesis. It inhibits ectopic outgrowth of the ureteric bud and promotes elongation of UB-derived ureter in mice [20]. In addition, recombinant BMP4 induces cell apoptosis during early stages of kidney formation [21]. Inhibition of the BMP signaling is critical for survival and proliferation of the nephron progenitor cells in the metanephric mesenchyme. Of interest, kidneys of newborn Bmp4+/− mice contain multicystic dysplastic areas [22]. In humans, BMP4 variants are associated with renal hypodysplasia (RHD), defined as reduced renal size and/or abnormal formation of the kidney tissue during renal organogenesis [23]. Collectively, disruption of BMPER/BMP signaling is linked to variable types of cystogenesis in both mice and humans. The duplication at 16p13.11p12.3 spans over the region for 16p13.11 recurrent microduplication locus and extends to its downstream region. More than 20 genes are located within the duplicated region, including 10 miRNA genes, and disease associated genes such as NDE1, MYH11, ABCC6, and XYLT1. Two pseudo genes, PKD1P1 and ABCC6P1, are located within the duplication. PKD1P1 is a pseudogene for PKD1 (Polycystic Kidney Disease 1). PKD1 mutations result in an autosomal-dominant polycystic kidney disease in humans. The PKD1P1 shares a 97.7% sequence identity with the genuine PKD1. PKD1P1 is expressed during the early stages of embryogenesis [24]. However, its function has not been determined yet. 
Recent reports indicate that pseudogenes might affect the expression of their parental genes by diverting miRNAs away from corresponding parental mRNA [25]. If this is the case for PDK1P1, an extra copy of PDK1P1 might have impacts on the expression of PKD1. ABCC6P1 is a pseudogene for ABCC6 (ATP Binding Cassette Subfamily C Member 6). Mutation of ABCC6 results in pseudoxanthoma elasticum, an inherited disease characterized by calcification of arteries and kidney tissue. Both genes are located in the duplicated region. ABCC6 is expressed in the kidney, suggesting its possible role in kidney development or function [24,26]. The duplication will add extra one copy of ABCC6P1 and possibly alter the expression level of this gene in the fetal kidney. In addition, there are about 18 copies of miRNA genes within the duplicated region. Increasing copy number of miRNA genes may have an impact on the regulation of gene expression as well. Therefore, the pathogenic role of ABCC6P1 duplication in MCDK cannot be excluded. Both the deletion at 7p14.3 and the duplication at 16p13.11p12.3 were inherited from each patient's phenotypically normal corresponding parents. These two cytogenomic aberrations have not been reported previously as CNVs from general population, suggesting that they may be specific to MCDK. Identification of CNVs in the minority of children with MCDK in this study is consistent with such general characteristics of CAKUT as incomplete penetrance and variable expressivity of disease. Additional mechanism explaining lack of identifiable CNVs in children with MCDK include epigenetic imprinting and unmarked single nucleotide variants (SNVs) in the aberration regions. Detailed analysis of genes located within identified aberrations will allow to better elucidate genetic mechanisms of MCDK. In this regard, next generation sequencing (NGS), including whole exome sequencing (WES), should improve discovery of novel causative genes in patients with MCDK and their families. In summary, current study describes two novel candidates of cytogenomic alterations as MCDK susceptibility loci. Unilateral renal defect was shown in 2 of 7 patients with 7p14.3 deletion (including one case described in this study), suggesting that it is a common clinical feature of 7p14.3 deletion. We report the novel finding of MCDK association with the 16p13.11 duplication. We explored the mechanisms for several genes located within the identified cytogenomic alterations as possible genetic drivers for MCDK in children. Therefore, these results provide significant insight into the genomic landscape of MCDK in humans.
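As a concrete illustration of two simple calculations referred to in the Methods, the bedside Schwartz estimate of kidney function and the ΔΔCt relative copy number used to confirm the aCGH calls, a minimal Python sketch follows. All input values are invented for illustration and are not patient data.

def schwartz_egfr(height_cm: float, creatinine_mg_dl: float) -> float:
    # Bedside Schwartz equation: eGFR (mL/min/1.73 m^2) = 0.413 x height / creatinine.
    return 0.413 * height_cm / creatinine_mg_dl

def relative_copy_number(ct_target_sample: float, ct_ref_sample: float,
                         ct_target_control: float, ct_ref_control: float) -> float:
    # Delta-delta-Ct: normalize the target gene to a reference gene, then
    # compare the patient sample with an unaffected two-copy control (2**-ddCt).
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

print(f"eGFR: {schwartz_egfr(130, 0.5):.1f} mL/min/1.73 m^2")
# A heterozygous deletion of the target is expected to give a ratio near 0.5,
# and a duplication a ratio near 1.5, relative to the two-copy control.
print(f"deletion-like ratio:    {relative_copy_number(26.0, 24.0, 25.0, 24.0):.2f}")
print(f"duplication-like ratio: {relative_copy_number(24.42, 24.0, 25.0, 24.0):.2f}")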
NATURAL OCCURRENCE OF Pasteuria nishizawae IN SOYBEAN AREAS OCORRÊNCIA NATURAL DE Pasteuria nishizawae EM ÁREAS DE SOJA : Soybean ( Glycine max (L.) Merrill) is a widely grown crop of economic prominence. Brazil is considered as the second largest worldwide producer and exporter. Soybean cyst nematode, Heterodera glycines Ichinohe, in one of the most serious threats for this crop, considered as its most destructive parasite. The first report of this disease in Brazil was recorded during the 1991/1992 harvest. Control of nematodes is more challenging if compared with other plant diseases control. Thus, there is a growing demand to search for alternative control practices that will not harm the environment nor the human being. Therefore, the highly specific bacteria Pasteuria spp. Metchnikoff, represents an auspicious biological control agent against nematodes. The biological control of the soybean cyst nematode with the bacterium Pasteuria nishizawae Sayre has proven an excellent choice and has been studied by different scientists. The objective of this work was to determine the natural occurrence of Pasteuria nishizawae in Brazilian soils. The experiment was performed under greenhouse conditions at the Research unit from Syngenta® in Uberlandia-MG, Brazil, with soil samples originated from soybean planted areas from the Brazilian states of Bahia, Goiás, Mato Grosso and Paraná. Fractions of 150 cm 3 of soil were withdrawn to be processed by the centrifugal flotation technique in sucrose solution. Aliquots of 1 mL from the obtained suspension were observed in Peters’ chamber with the aid of an inverted light microscope, in order to verify the absence or presence of bacterial endospores attached to the cuticle of the nematodes extracted from the soil samples. The frequency of Pasteuria nishizawae incidence was of 100% within the samples analyzed. INTRODUCTION Soybean is a crop of economic importance, being the most planted oilseed in the world. Brazil is the second largest producer and exporter of soybean in the world, with a planted area of approximately 26,4 to 27,3 million hectares and a production of about 80,08 to 82,99 million tons in 2012/2013 (CONAB, 2013). The soybean cysts nematode (SCN), Heterodera glycines, is one of the most severe plant disease problems of this crop, considered as the most destructive parasite of soybean plants (NOEL, 1992). In order to control these nematodes the primary common methods used are chemical control (nematicides), the use of resistant varieties and crop rotation. Chemical control by means of nematicide application has proven expensive as well as detrimental to the environment, to the human being, to the wild fauna and to the beneficial organism within the soil. The use of resistant varieties is a natural approach, highly recommended to control plant pests and diseases. However, in the case of this specific nematode, the availability of resistant varieties for the farmer is scarce, due to the great number of races or HG types of SCN occurring in Brazil (FERRAZ;FREITAS, 2004). In tropical and subtropical countries nematodes find ideal humidity and temperature conditions for reproduction and feeding. Thus, worsening the efficiency of control of these pathogens, that are very difficult to eradicate once they are established within an area (TORRES et al., 2008). Biological control is an alternative method to control plant parasites, as nematodes. Biological control of nematodes can be achieved by the use of appropriate fungi or bacteria. 
Predators and egg parasites are certainly the most studied organism and the best fitted as nematode's biological control agents (JATALA, 1986;NORDBRING-HERTZ et al., 2002). The first observation of Pasteuria in a plant parasite nematode, Pratylenchus pratensis de Man, was made by Thorne (1940) whom classified the organism as a Protozoa and named it as Dubosquia penetrans. However, studies performed by Mankau (1975), re-classified the organism as a prokaryote of the Genus Bacillus Cohn. The re-discovery of Pasteuria ramose by Sayre et al. (1977) and its morphological similarities with Bacillus penetrans Mankau suggested that these two bacteria had a common generic relation. In 1985, Sayre and Starr named the organism that parasite Meloidogyne incognita as Pasteuria penetrans (Thorne) Sayre & Starr, as well as Pasteuria nishizawae is specific to the soybean cysts nematode (Heterodera glycines), Pasteuria thornei Sayre & Starr is specific to the nematode of the root lesions according to Stirling & Wachtel (1980), while Pasteuria usgae is specific to the nematode Belonolaimus longicaudatus Rau according to Rau Giblin-Davis et al. (2003). Pasteuria is a gram-positive bacterium that forms an endospore. These bacteria were found as parasites of many economically important nematodes (SAYRE; STARR 1988). The Genus Pasteuria spp. was structured from 323 nematode species belonging to 116 Genera, including plant parasitic nematodes, entomopathogenic nematodes, predator nematodes and free living nematodes (CHEN;DICKSON 1998). The analysis of the 16S rRNA gene sequences is a sensible and well-established tool for the detection and phylogenetic analysis of bacteria (STAHL, 1997). The use of sequences of the gene 16S rRNA may lead to identification of species of Pasteuria and to evaluation of the diversity of these microorganisms within a population of nematodes' samples, thus being a tool of a great utility for the comprehension of the parasite-nematode interactions and the ecology of Pasteuria (EBERT et al., 1996;ANDERSON et al., 1999;ATIBALENTJA et al., 2000;BEKAL et al., 2001). For many years attempts to obtain an axenic culture of Pasteuria spp. resulted in failure (WILLIAMS et al., 1989;BISHOP;ELLAR, 1991). However, an advance towards the success to obtain an in vitro culture of P. penetrans was announced by Hewlett et al. (2002). A life cycle study of the bacterium that parasites the nematode Heterodera glycines was performed by means of germination of the endospore that infects J2's for production of a next generation endospores in adult females and cysts. Descriptions were based in microscopic examination of successive juvenile stages of H. glycines excised from soybean roots, unlike P. nishizawae, that were based exclusively on the examination of diseased cysts (SAYRE et al., 1991). According to Atibalentja et al. (2004), bacterium development, germination of the endospore and penetration of the germ tube inside the nematode begin soon after penetration of the J2's in the radicular system. Otherwise, the endospore does not germinate and consequently no infection by Pasteuria will arise. For this reason, observations based only on diseased cysts will result in an incomplete analysis of Pasteuria's life cycle, which would explain why germination of P. nishizawae is not observed (SAYRE et al., 1991). After the endospore's germination, primary cauliflower-like colonies are formed within the J3. Development of the bacterium was observed only in female adults but not in males. 
Later, the bacterial sporulation can be observed with the development of a structure similar to a chunk of grapes, which can be observed also in immature females of fourth stage and cysts. Finally, the presence of mature sporangia and endospores can be observed, varying in number (30.000 to 820.000 with mean value and standard deviation of 314.00 and 234.000, respectively) as a function of the size of the cyst or female nematode (ATIBALENTJA et. al., 2004). Until now, P. penetrans is the most studied species due to its potential as a biological control agent. However, technical hitches for mass production in vitro had made its commercial production problematic. In order to introduce bacteria in the soil environment for field experiments, dry roots' powder infected by the bacteria was used, as proposed by Stirling and Wachtel (1980). Thus, the objective of the present work was to verify the natural occurrence of Pasteuria nishizawae Sayre in soils under soybean cultivation. MATERIAL AND METHODS Soil samples with cysts coming from different established soybean growing areas from Brazil (Table 1), were processed by the centrifugal flotation in sucrose solution technique (Jenkins, 1964). Part of those ten processed soil samples were added, individually, to ceramic pots and preserved under greenhouse at Syngenta's experimental station in Uberlândia, MG -Brazil. These pots were planted with the susceptible soybean cultivar 'Lee 74E', in order to promote multiplication of Heterodera glycines, as shown in Figure 1. A soil aliquot of 150 cm 3 was obtained 28 days after sowing and then processed by the centrifugal flotation in sucrose solution technique, according Jenkins (1964). From the obtained suspension solution an aliquot of 1 mL was observed in Peters chamber under an inverted light microscope, verifying the presence or absence of endospores of the bacterium attached to the nematode's cuticle ( Figure 2). Subsequently, the percentage of nematodes showing bacteria attached to its body in relation to the total number of nematodes observed in the suspension was calculated. RESULTS AND DISCUSSION Within all ten samples the frequency of occurrence of the bacterium Pasteuria nishizawae was of 100%, that is to say, there were observed bacterial endospores attached to the cuticle of all the nematodes Heterodera glycines extracted from the soil 28 days after soybean sowing (Table 2), as showed in Figure 3. In Brazil, studies regarding Pasteuria spp. are still developing, especially when concerning to its employment to control species of Pratylenchus and Heterodera. The vast majority research about the subject is developed in the country focusing the root-knot nematode (Meloidogyne spp.). According to Tzortzakakis and Gowen (1994), a great number of endospores attached to the J2 nematodes, does not ensure reduction in the number of eggs produced, if a high variance of attachment and infectivity of endospores occurs. These remain viable in the soil by many years, are resistant to desiccation (STIRLING, 1984) and relatively resistant to high temperatures. Lordello (1966) reported for the first time in Brazil, the occurrence of Pasteuria spp. in females of Meloidogyne javanica, infecting plants of tomato coming from Vargina in Minas Gerais. Santos (1981), observed the bacterium in juveniles of the second stage of M. javanica extracted from soil collected from the rhizosphere of bean plants intensively infected by the nematode in Petrolina, Pernambuco. 
Subsequently, several studies on the parasitism of nematodes by Pasteuria spp. were carried out, on the whole confirming the activity of the bacterium against Meloidogyne species (PIMENTA; CARNEIRO, 2005). According to Souza and Campos (1996), Pasteuria spp. was observed in 29.69% of the samples collected from 128 locations and 28 plant species in Minas Gerais; similar results were also reported in other countries: South Africa (SPAULL, 1984), Australia (STIRLING; WHITE, 1982; BIRD; BRISBANE, 1988), USA (WALTER; KAPLAN, 1990; HEWLETT et al., 1994) and Spain (VERDEJO; LUCAS, 1992). There is no record of the occurrence of Pasteuria spp. in regions with an annual mean temperature under 10ºC, with the highest occurrence being observed under annual mean temperatures above 21ºC (CHEN; DICKSON, 1998).

CONCLUSIONS
Pasteuria nishizawae occurs naturally in Brazilian soils under soybean cultivation, specifically in the states of Bahia, Goiás, Mato Grosso and Paraná. The frequency of Pasteuria nishizawae occurrence within the analyzed soils was 100%.

ACKNOWLEDGMENTS
The author would like to thank Syngenta's experimental station staff at Uberlândia - MG, where this research was performed.
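Purely as an illustration of the incidence calculation described in Material and Methods (the share of extracted nematodes carrying attached endospores in each sample, and the overall frequency of positive samples), a short Python sketch follows; the sample labels and counts are invented and do not correspond to the samples analyzed in this study.

# Hypothetical counts of nematodes with and without attached endospores per soil sample.
samples = {
    "BA-01": {"with_endospores": 18, "total": 18},
    "GO-02": {"with_endospores": 25, "total": 25},
    "MT-03": {"with_endospores": 12, "total": 12},
}

positive_samples = 0
for name, counts in samples.items():
    share = 100.0 * counts["with_endospores"] / counts["total"]
    if counts["with_endospores"] > 0:
        positive_samples += 1
    print(f"{name}: {share:.1f}% of nematodes with attached endospores")

frequency = 100.0 * positive_samples / len(samples)
print(f"Frequency of occurrence across samples: {frequency:.0f}%")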
Potential of the Virion-Associated Peptidoglycan Hydrolase HydH5 and Its Derivative Fusion Proteins in Milk Biopreservation Bacteriophage lytic enzymes have recently attracted considerable interest as novel antimicrobials against Gram-positive bacteria. In this work, antimicrobial activity in milk of HydH5 [a virion-associated peptidoglycan hydrolase (VAPGH) encoded by the Staphylococcus aureus bacteriophage vB_SauS-phiIPLA88], and three different fusion proteins created between HydH5 and lysostaphin has been assessed. The lytic activity of the five proteins (HydH5, HydH5Lyso, HydH5SH3b, CHAPSH3b and lysostaphin) was confirmed using commercial whole extended shelf-life milk (ESL) in challenge assays with 104 CFU/mL of the strain S. aureus Sa9. HydH5, HydH5Lyso and HydH5SH3b (3.5 µM) kept the staphylococcal viable counts below the control cultures for 6 h at 37°C. The effect is apparent just 15 minutes after the addition of the lytic enzyme. Of note, lysostaphin and CHAPSH3b showed the highest staphylolytic protection as they were able to eradicate the initial staphylococcal challenge immediately or 15 min after addition, respectively, at lower concentration (1 µM) at 37°C. CHAPSH3b showed the same antistaphyloccal effect at room temperature (1.65 µM). No re-growth was observed for the remainder of the experiment (up to 6 h). CHAPSH3b activity (1.65 µM) was also assayed in raw (whole and skim) and pasteurized (whole and skim) milk. Pasteurization of milk clearly enhanced CHAPSH3b staphylolytic activity in both whole and skim milk at both temperatures. This effect was most dramatic at room temperature as this protein was able to reduce S. aureus viable counts to undetectable levels immediately after addition with no re-growth detected for the duration of the experiment (360 min). Furthermore, CHAPSH3b protein is known to be heat tolerant and retained some lytic activity after pasteurization treatment and after storage at 4°C for 3 days. These results might facilitate the use of the peptidoglycan hydrolase HydH5 and its derivative fusions, particularly CHAPSH3b, as biocontrol agents for controlling undesirable bacteria in dairy products. Introduction Staphylococcus aureus is a bacterial pathogen responsible for a wide range of human and animal infections, including food poisoning caused by the ingestion of enterotoxins produced in food by enterotoxigenic strains [1,2]. Staphylococcal enterotoxins are notoriously thermostable and maintain their stability even after the thermal treatments customarily utilized in the food industry. This represents a threat to consumers and makes necessary the control of staphylococcal contaminants to avoid the production of high risk levels of enterotoxins [3]. Humans and domestic animals are the primary reservoirs of S. aureus, as this microorganism colonizes mucous membranes and skin. Thus, food handlers and animals are usually the primary source of S. aureus contamination of food products of animal origin [4]. S. aureus is also an important etiological agent of mastitis in cattle, goats and sheep [5], with the mastitic udder being a source of contaminated milk and milk-derived dairy products, along with the dairy farm environment and processing facilities [6]. Although an important food safety concern, S. aureus mastitis is difficult to eradicate and constitutes a serious economic problem for dairy herd management [7]. 
Several antimicrobial treatments are available for clinical mastitis differing in the antimicrobial agent, route of application, duration, probability of cure or recurrence, and cost [8]. However, this problem remains unsolved in part due to the ability of S. aureus to invade and reside intracellularly [9], within mammary cells, thereby evading most antibiotics, but also because of the high frequency of antibiotic resistance among S. aureus strains [10,11]. Bacteriophage endolysins have been proposed as antimicrobials to control Gram positive bacteria due to their ability to degrade the bacterial cell wall resulting in lysis of the pathogen [12]. This bactericidal activity has been successfully used to control antibiotic-resistant pathogenic bacteria in animal models [13]. For instance, the pneumococcal lysin Cpl-1 protected a mouse model against pneumococcal bacteraemia and colonization by intravenous administration and topical nasal treatment, respectively [14]. More recently, staphylococcal lysins have also been used against staphylococcal infections in mouse models. This is the case for endolysins MV-L [15], LysGH14 [16] or the chimeric lysin ClyS [17] that protected mice against lethal doses of methicillin-resistant S. aureus (MRSA) by intraperitoneal injections. The effectiveness shown by the staphylococcal phage K endolysin, LysK, CHAP domain construct and the CHAP domain construct from the phage K tail-associated muralytic enzyme to eliminate S. aureus from the nares of challenged mice and rats, respectively, supports the potential use of phage lytic proteins' catalytic domains as antimicrobials [18,19]. Lysostaphin is a well characterized peptidoglycan hydrolase produced by Staphylococcus simulans biovar. staphylolyticus. Its lytic action against S. aureus relies mainly on its N-terminal domain with glycylglycine endopeptidase activity that cleaves the pentaglycine cross bridges present in staphylococcal peptidoglycan, while its C-terminal domain promotes its specific binding to staphylococcal peptidoglycan [20]. It was shown to protect mammary glands against S. aureus challenge in both mice [21] and cattle [22]. It has also been shown that antimicrobial synergy exists in vitro between some phage endolysins and antibiotics or antimicrobial peptides of bacterial origin against S. aureus [18,23,24]. In this regard, the in vitro synergy observed between phage lytic proteins and lysostaphin was recently expanded to include the in vivo protection of murine mammary glands from an S. aureus challenge [25]. In addition to mammary gland protection, phage endolysins might also serve to inhibit undesirable bacterial growth for food biocontrol purposes [24]. The staphylococcal phage vB_SauS-phiIPLA88 endolysin, LysH5, has been demonstrated to control S. aureus growth in milk. The purified protein was able to rapidly kill S. aureus growing in pasteurized milk with a 10 6 CFU/ml inoculum undetectable after 4 h of co-incubation with 1.6 mM LysH5 at 37uC [26]. In addition to endolysins, there is a largely untapped group of phage lytic proteins the virion-associated peptidoglycan hydrolases (VAPGHs) that are involved in local cell-wall degradation to facilitate the injection of phage DNA into the cell cytoplasm [27]. These PGHs have been reported to be encoded by phages infecting S. aureus [28,29] and other bacterial species [30][31][32][33]. 
Their antimicrobial activity was first postulated in 1940, with 'lysis from without' that takes place when a very high number of phages are adsorbed onto the host cell [34]. In this work, we have assessed the antimicrobial ability of HydH5 and its derivative fusion proteins in milk, in order to explore new biopreservation strategies to effectively inhibit S. aureus growth in dairy products. Microbiological and Physicochemical Analyses of Milk Microbiological and physicochemical analyses were performed in commercial cow's whole ESL milk whole and skim raw milk (the latter was centrifuged at 6,0006g for 20 min to remove fat) and whole and skim pasteurized milk supplied by a collaborating farm. Samples of milk (500 ml) were aseptically sampled. Serial dilutions of milk were made in quarter-strength Ringer solution (Oxoid, Basingstoke, Hampshire, UK) and plated in duplicate on the appropriate agar medium. Total bacterial counts were performed in the different types of milk by deep-plating appropriate dilutions on Plate Count Agar (32uC, 72 h). S. aureus counting was performed as indicated above. Total solids, fat and protein content were determined according to the International Dairy Federation [40][41][42]. Protein Purification Protein purification was performed as previously described [38]. Purity of each preparation was determined in 15% (w/v) SDS-PAGE gels. Electrophoresis was conducted in Tris-Glycine buffer at 20 mA for 1 h in the BioRad Mini-Protean gel apparatus. Protein was quantified by the Quick Start Bradford Protein Assay (BioRad, Hercules, CA). Quantification of lytic activity was performed by turbidity reduction assays against live S. aureus Sa9 cells prepared as previously described [43,44]. Challenge Tests in Milk HydH5, HydH5SH3b or HydH5Lyso proteins (3.5 mM), CHAPSH3b protein (1 mM) and lysostaphin (1 mM) were individually added to 2 ml of whole ESL milk inoculated with 10 4 CFU/ ml of S. aureus Sa9 and incubated at 37uC for 6 h. CHAPSH3b (1.65 mM) was also assayed at room temperature (RT) for the same period. The anti-staphylococcal activity of CHAPSH3b (1.65 mM) was further assayed in whole and skim raw milk and in whole and skim pasteurized milk inoculated with 10 3 CFU/ml at 37uC and RT for 2 h. Challenged milk without lytic protein additions were used as controls. Samples were taken at different times throughout the incubation period and survival of S. aureus Sa9 was determined by serial dilution plating onto Baird-Parker plates for ESL milk samples (37uC, 48 h) and ChromoID S. aureus plates (37uC, 24 h) for raw and pasteurized milk samples, respectively. ChromoID S. aureus is a selective and differential culture medium for Staphylococcus sp, in which different staphylococcal species are distinguished by the colour of colonies (green for S. aureus; pink in S. saprophyticus; purple in S. xylosus; white in S. epidermidis). This chromogenic medium inhibits other Gram positive bacteria, Gram negative bacteria and yeasts. CHAPSH3b Fusion Protein Stability in Milk To test the stability of CHAPSH3b in milk, 1.65 mM protein was added to 2 ml whole raw milk and kept at 4uC for 3 days. Samples (250 mL) were taken every day, challenged with 10 3 CFU/ml of S. aureus Sa9 and incubated at RT for 15 min. Staphylococcal viable counts in the presence and in the absence of the antimicrobial protein were determined by serial dilution plating onto ChromoID S. aureus plates. Results were expressed as the percentage of viable counts reduction compared to the untreated control. 
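The challenge and stability assays above report survival as viable counts obtained by serial dilution plating and express treatment effects as log or percentage reductions relative to the untreated control. The sketch below shows, with assumed plate counts and dilution factors that are not data from this study, how those quantities are typically derived.

import math

def cfu_per_ml(colonies: int, dilution_factor: float, plated_volume_ml: float = 0.1) -> float:
    # Back-calculate CFU/ml from a countable plate.
    return colonies / (plated_volume_ml * dilution_factor)

control = cfu_per_ml(colonies=89, dilution_factor=1e-4)  # untreated milk
treated = cfu_per_ml(colonies=46, dilution_factor=1e-2)  # milk plus lytic protein

log_reduction = math.log10(control) - math.log10(treated)
percent_reduction = 100.0 * (1 - treated / control)

print(f"control: {control:.2e} CFU/ml")
print(f"treated: {treated:.2e} CFU/ml")
print(f"log reduction: {log_reduction:.2f} log CFU/ml")
print(f"viable count reduction: {percent_reduction:.1f}%")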
CHAPSH3b Pasteurization Treatment Commercial whole ESL milk and whole raw milk containing CHAPSH3b (1.97 mM) were pasteurized (72uC, 15 s) in a thermo cycler (BioRad Laboratories, Hercules, CA, USA). Samples were cooled at RT for 15 min, further inoculated with 10 3 CFU/ml of S. aureus Sa9 and incubated for 0, 15, 30, 60 and 120 minutes at RT. Staphylococcal viable counts were determined as indicated above and results also expressed as the percentage reduction of viable counts. Statistical Analysis Statistical analysis was performed using the SPSS-PC +11.0 software (SPSS, Chicago, IL, USA). Staphylococcal CFU data were subjected to one-way ANOVA within each sampling time. Types of anti-staphylococcal protein (HydH5, HydSH3b, HydH5lyso and CHAPSH3b) were compared against the untreated control. Data of cold storage stability of protein CHAPSH3b were compared with one-way ANOVA and the LSD test was used for a comparison of means at a level of significance P,0.05. Microbiological and Physicochemical Characteristics of Milk Total viable bacterial counts were below the detection limit (,10 CFU/ml) in ESL milk, whereas about 7.08610 4 CFU/ml and 3.89610 1 CFU/ml were detected in raw and pasteurized milk, respectively. Viable counts were lower in both raw skim (1.90610 3 CFU/ml); and pasteurized skim milk (1.20610 1 CFU/ ml). S. aureus counts were only detected in whole raw milk (2.84-4.0610 1 CFU/ml) and they kept below 10 2 CFU/ml throughout 2 h of incubation. Results of gross composition are shown in Table 1. Mean values of total solids, fat and protein contents of whole commercial ESL milk, and raw (whole and skim) and pasteurized (whole and skim) milk were within the standards of commercial and farmhouse milk. HydH5 and its Derivative Fusion Proteins have Antimicrobial Activity against S. aureus Sa9 in Commercial ESL Milk The antimicrobial activity of HydH5, its derivative fusion proteins and lysostaphin was assessed in commercial whole ESL milk inoculated with 10 4 CFU/ml of S. aureus Sa9. The effect of the different proteins on S. aureus growth was first tested at 37uC. In the absence of antimicrobial proteins (control cultures) the staphylococcal strain grew from 10 4 to 6.5610 4 CFU/ml during the first hour of incubation with a more robust increase in CFU subsequently, achieving 8.9610 7 CFU/ml at the end of six hours (Fig. 1A). The addition of HydH5, HydH5SH3b and HydH5Lyso (3.5 mM) to the S. aureus inoculated milk resulted in an immediate effect on S. aureus viability, with the viable counts maintained below the time zero control counts (immediately after addition of the antimicrobials). At time 0, only the viable counts in HydH5Lyso treated cultures were significantly different (P,0.05) compared to the control cultures. From 15 min onwards, the inhibitory effect of each of the proteins on S. aureus viability was significant (P,0.01 at 15 min and P,0.001 thereafter). The greatest reduction in viable counts (about 2.3460.01 log CFU/ml) was detected at the end of the 6 h incubation period (Fig. 1A). These activities are, however, far from lysostaphin antistaphylococcal activity since 1 mM of this bacterial peptide resulted in an immediately kill of the S. aureus population and no viable counts were detected even at time 0. In addition, no-re-growth was observed afterwards (data not shown). Only the fusion protein CHAPSH3b showed an inhibitory effect on S. 
Only the fusion protein CHAPSH3b showed an inhibitory effect on S. aureus similar to that of lysostaphin, since 1 µM resulted in complete clearance of the pathogen 15 min after addition, without further re-growth throughout the assay period (6 h) (Fig. 1A). Likewise, viable counts became undetectable immediately after the addition of 1.65 µM CHAPSH3b (data not shown). At RT, the inhibitory effect of CHAPSH3b decreased slightly, with a higher protein concentration (1.65 µM) required to fully eliminate S. aureus within 15 min of addition (Fig. 1B), whereas continuous proliferation of the staphylococcal population occurred in the control cultures.

CHAPSH3b Fusion Protein is Effective in Raw Milk and Highly Effective in Pasteurized Milk
The effect of the CHAPSH3b protein was tested against the S. aureus Sa9 strain (10³ CFU/ml) in whole and skim raw milk at both 37 °C and RT. As shown in Figure 2A, the staphylococcal growth observed in the whole raw milk control cultures was immediately inhibited by the addition of 1.65 µM CHAPSH3b, as no viable counts were detected at 37 °C or RT and re-growth was prevented for about 30 min of incubation. Thereafter, S. aureus growth was observed at both temperatures, but CHAPSH3b treatment kept viable counts below the control counts throughout the 2 h experiment. CHAPSH3b showed higher growth inhibition at RT than at 37 °C (Fig. 2A). Significant differences between control and treated cultures were observed throughout the remaining 2 h incubation period at both RT (P < 0.001) and 37 °C (P < 0.001 at 30 min and P < 0.01 at 60 and 120 min of sampling time). At the end of the incubation period, the presence of the antimicrobial protein resulted in a reduction of 1.09 ± 0.12 and 0.7 ± 0.17 log CFU/ml at RT and 37 °C, respectively, compared with the control cultures. The level of indigenous S. aureus in raw milk was also monitored through the incubation period as an additional control. This population remained below 10² CFU/ml and was also sensitive to CHAPSH3b (data not shown). Similar staphylococcal growth kinetics were observed in skim raw milk in the presence of CHAPSH3b (Fig. 3A). As in whole milk (Fig. 2), re-growth also occurred after 30 min, and the antimicrobial protein exhibited higher inhibitory activity at RT. Differences in staphylococcal viable counts between VAPGH-treated and control samples were significant (P < 0.001) at both 37 °C and RT. The final reduction in staphylococcal CFU was similar at 37 °C (0.75 ± 0.23 log CFU/ml) and lower at RT (0.41 ± 0.09 log CFU/ml) than in whole raw milk (Fig. 3A). CHAPSH3b was more effective in reducing S. aureus Sa9 in pasteurized milk (whole and skim), as shown in Figures 2B and 3B. At RT, CHAPSH3b (1.65 µM) was able to reduce S. aureus viable counts to undetectable levels in whole and skim milk immediately after addition, and no re-growth was detected for 2 h thereafter. At 37 °C, the presence of CHAPSH3b prevented staphylococcal re-growth for over 30 min in whole milk and for more than 1 h in skim milk. At the end of the incubation period, the final staphylococcal population in whole and skim pasteurized milk was 0.75 ± 0.23 (P < 0.001) and 2.02 ± 0.21 log CFU/ml (P < 0.001) lower than the control, respectively, in the presence of CHAPSH3b.
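The per-timepoint comparisons reported above (one-way ANOVA of CFU data against the untreated control within each sampling time, as described in the Statistical Analysis section) can be sketched as follows. This is an illustration only: the replicate values are invented, SciPy is assumed to be available, and the authors' LSD post-hoc test is replaced here by simple pairwise Welch t-tests, so this does not reproduce their SPSS analysis.

```python
# Minimal sketch (illustrative): one-way ANOVA of log10 CFU/ml across treatments
# at a single sampling time, followed by pairwise comparisons against the control.
from scipy import stats

counts_log10 = {                      # invented triplicate values for one timepoint
    "control":   [7.9, 8.0, 7.8],
    "HydH5":     [5.7, 5.6, 5.8],
    "HydH5SH3b": [5.5, 5.6, 5.4],
    "HydH5Lyso": [5.4, 5.3, 5.5],
    "CHAPSH3b":  [1.0, 1.1, 1.0],
}

f_stat, p_value = stats.f_oneway(*counts_log10.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3g}")

for name, values in counts_log10.items():
    if name == "control":
        continue
    t, p = stats.ttest_ind(counts_log10["control"], values, equal_var=False)
    print(f"{name} vs control: p = {p:.3g}")
```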
In order to rule out the possibility of CHAPSH3b-resistant colonies confounding the data, ten S. aureus colonies were randomly selected from the selective agar plates used for viable count determination. Turbidity reduction assays of each colony performed in the presence of 1 µM CHAPSH3b indicated that all were sensitive to lysis by CHAPSH3b to the same extent as the inoculated strain (data not shown).

CHAPSH3b Remains Active after Storage at 4 °C in Milk and after Pasteurization Treatment
To assess the stability of CHAPSH3b in milk, challenge assays were performed after storage of the protein (1.65 µM) in refrigerated raw milk for 3 days. As shown in Figure 4A, the non-cold-stored protein reduced the initial staphylococcal population (10³ CFU/ml) by 94% after just 15 min at RT. The inhibitory activity of CHAPSH3b decreased significantly (P < 0.05) with cold storage time compared with the non-cold-stored protein, but the remaining activity was still able to kill 42%, 33% and 32% of the S. aureus population following storage in milk at 4 °C for one, two and three days, respectively. No significant differences in anti-staphylococcal activity after 2 and 3 days of cold storage were detected (P > 0.05). To test the stability of CHAPSH3b under high-temperature treatment, the protein (2 µM) was subjected to pasteurization (72 °C, 15 s) in both commercial whole ESL milk and whole raw milk and then challenged with S. aureus at RT. Pasteurization in ESL milk did not affect the inhibitory activity of CHAPSH3b, since no viable counts were detected in treated cultures throughout the incubation period (Fig. 4B). However, the heat treatment in raw milk clearly reduced CHAPSH3b activity, as only partial inhibition of S. aureus CFUs was observed in the first 15 min of treatment (Fig. 4B), compared with the undetectable CFUs observed in raw milk cultures spiked with a lower concentration (1.65 µM) of unpasteurized protein (Fig. 2A). Nevertheless, the reductions in viable counts between control and treated cultures were significant throughout the incubation period (P < 0.001 at times 0 and 60 min; P < 0.01 at times 15, 30 and 120 min).

Discussion
Current food safety depends on a combination of preventive, hygiene-based approaches focused on minimizing the microbial contamination of raw materials, mainly physical and chemical decontamination treatments aimed at removing microbial contamination from food products [45]. Due to the increasing consumer demand for natural, nutritious and fresh-tasting foods, the food industry is interested in replacing traditional preservation techniques (e.g. heat and chemical treatments) whenever possible, to avoid the risk of changes in sensory quality or the presence of unwanted chemical residues in foods [46]. Food preservation treatments based on natural antimicrobials such as bacteriocins, bacteriophages or phage-derived lytic enzymes could help to fight pathogenic and spoilage bacteria along the food chain and are not expected to cause the sensory changes or other undesirable effects of traditional treatments [24]. The application of bacteriocins in food safety has been widely studied over the last two decades [47], but food biopreservation based on phages and phage-derived lytic enzymes is a more recent avenue of research [48]. So far, phage-derived lysins have been mainly assayed in veterinary and human medical model approaches [12,49], and less attention has been paid to their potential role as food biopreservatives [50]. Nevertheless, some phage lytic enzymes have shown antibacterial activity in milk.
This is true for the S. aureus bacteriophage vB_SauS-phiIPLA88 endolysin LysH5, which completely inhibited S. aureus growth in commercial pasteurized milk after 4 h of treatment [26], and for the fusion proteins λSA2-E-Lyso-SH3b (the streptococcal λSA2 endolysin endopeptidase domain fused to the lysostaphin SH3b domain) and λSA2-E-LysK-SH3b (the streptococcal λSA2 endolysin endopeptidase domain fused to the staphylococcal phage K endolysin SH3b domain), which showed anti-staphylococcal activity in ultra-high-temperature (UHT) milk by reducing the bacterial load by 3 and 1 log CFU/ml, respectively, within 3 h of incubation [51]. Recently, it has also been reported that the Listeria bacteriophage endolysin LysZ5 was able to kill 4 log CFU/ml of L. monocytogenes within 3 h at 4 °C in soya milk [52]. The peptidoglycan hydrolase HydH5 encoded by the S. aureus phage vB_SauS-phiIPLA88 [37] and the fusion proteins between HydH5 and lysostaphin (CHAPSH3b, HydH5SH3b and HydH5Lyso) have all been shown to display staphylolytic activity in zymogram, plate lysis and turbidity reduction assays [38]. In this work, these constructs have been assessed as antimicrobial additives for preventing the growth of S. aureus in milk. HydH5, HydH5SH3b and HydH5Lyso showed staphylolytic activity in commercial whole ESL milk but were clearly less effective than lysostaphin and CHAPSH3b. Lysostaphin and CHAPSH3b (1 µM) were able to reduce the S. aureus load by 4 log CFU/ml immediately or 15 min after addition at 37 °C, respectively, whereas nearly four times as much (3.5 µM) of the other constructs was needed to obtain only a reduction of the staphylococcal counts, relative to the untreated cultures, throughout the incubation period. These findings are consistent with our previous results. In fact, the CHAP domain of HydH5, when fused to the lysostaphin SH3b domain, showed 4.8-fold higher activity compared with full-length HydH5 [38]. The high activity shown by CHAPSH3b in ESL milk prompted us to extend the assays to milk subjected to a broader range of treatments. Accordingly, CHAPSH3b activity was assessed in raw (whole and skim) and pasteurized (whole and skim) milk. CHAPSH3b is active in whole and skim raw milk at 37 °C and RT, as the protein was able to reduce 10³ CFU/ml to below the detection limit (<10 CFU/ml) for 30 min. The staphylolytic activity, however, was lower than in high-heat-treated milk, yielding less of a reduction in viable counts despite a higher concentration of enzyme (1.65 µM versus 1 µM). Of note, the indigenous S. aureus contamination of raw milk does not seem to have interfered with CHAPSH3b activity, because it was shown to be sensitive and hardly accounts for the total S. aureus population once Sa9 was added. Pasteurization of milk clearly enhanced CHAPSH3b staphylolytic activity in both whole and skim milk at both temperatures. Apparently, something in the raw milk is hampering CHAPSH3b activity. One possibility is heat-sensitive components such as immunoglobulin M and agglutinins, insofar as they have been reported to promote the formation of cell clumps [53] that would likely make it more difficult for the antibacterial protein to reach the staphylococcal cells sequestered inside the clumps. These components of raw milk have also previously been reported to hamper phage adsorption [54]. In contrast, CHAPSH3b activity does not seem to be affected by fat globules in milk, since similar kinetics of staphylococcal inhibition were observed in whole and skim milk, despite the fact that bacterial clumps have also been associated with fat globules [55].
Although the structural and chemical composition of food can negatively affect the ability of antimicrobials to reach the pathogen [56], the addition of CHAPSH3b to different types of milk yielded an immediate reduction in S. aureus viable counts, with the only exception being heat-treated protein in raw milk. This suggests a quick reaction to CHAPSH3b, which is reminiscent of previous data on the Listeria monocytogenes phage endolysins Ply118 and Ply500. The cell-binding domain of these proteins showed rapid and saturation-dependent binding to the L. monocytogenes cell surface within 15 s, with no further increase [57]. A high affinity for staphylococcal cells, especially MRSA, was also described for the endolysin LysGH15 [17]. The staphylococcal re-growth observed at later sampling times could be attributed to those cells that were sequestered and not exposed to CHAPSH3b at the beginning of the treatment. However, it should be noted that the number of bacteria attained by the end of the assay period (clearly below the critical threshold of 10⁵ CFU/ml for production of enterotoxin levels hazardous to consumers) does not present a high risk of enterotoxin contamination of milk [58]. In addition, no CHAPSH3b-resistant bacteria were isolated from lysin-treated milk. Therefore, insensitivity to CHAPSH3b appears to be a rare event under the experimental conditions tested. Other researchers also failed to detect resistance against endolysins used to control the growth of Gram-positive bacteria, such as Bacillus anthracis [59] and Streptococcus pneumoniae [14]. Of note, CHAPSH3b showed higher activity at RT than at 37 °C in raw and pasteurized milk. The lower growth rate of S. aureus at RT could account for the higher effectiveness of CHAPSH3b. The demonstrated 3-day longevity of CHAPSH3b in cold milk supports the notion of using CHAPSH3b as a potential staphylolytic agent to prevent S. aureus development during an unexpected breakdown in cold storage and thus enhance food safety.

Figure 4. Cold storage stability and pasteurization resistance of CHAPSH3b in milk. A) 1.65 µM CHAPSH3b was stored in raw milk at 4 °C for 3 days. Samples were taken every day, inoculated with 10³ CFU/ml of S. aureus Sa9 and incubated for 15 min at room temperature before plating. Non-cold-stored protein was used as the control. Cold storage stability was expressed as the percentage reduction of S. aureus Sa9 CFU/ml after CHAPSH3b addition. Values are the means ± standard deviations of two independent experiments. Bars having different letters are significantly different (P < 0.05). B) 1.97 µM CHAPSH3b was pasteurized at 72 °C for 15 s in raw milk (left) and commercial pasteurized milk (right). Samples were inoculated with 10³ CFU/ml and incubated for 0, 15, 30, 60 and 120 min at room temperature before plating. S. aureus-inoculated cultures without lytic protein addition were used as controls (dark grey bars). Light grey bars indicate S. aureus Sa9 + CHAPSH3b. Data from pasteurized samples with CHAPSH3b activity were expressed as log CFU/ml. Values are the means ± standard deviations of two independent experiments. Bars having asterisks are significantly different from the control (**P < 0.01; ***P < 0.001). doi:10.1371/journal.pone.0054828.g004

The ability of CHAPSH3b (1.65 µM) to kill up to 10³ CFU/ml in raw milk in the first 30 min of treatment at 37 °C, along with its proven activity against MRSA strains [38], points to CHAPSH3b as a potential candidate to control S. aureus infections in cows' mammary glands.
Previous reports have shown the effectiveness of chimeric phage lysins in killing mastitis-causing S. aureus in murine mammary glands [25]. The safety of endolysins has also been assessed, since experimental mice to which endolysin was administered did not exhibit adverse physiological effects [60]. The stability of CHAPSH3b after exposure to high temperature (72 °C, 15 s) and to cold storage in milk is of clear technical interest for the protection of dairy products, since this staphylolytic protein could be added to raw milk before thermal processing to control any potential contamination by S. aureus. CHAPSH3b thermostability is consistent with our previous results, which showed that HydH5 retained activity after heat treatment (5 min at 100 °C) [37]. Recently, Listeria bacteriophage peptidoglycan hydrolases also showed high thermostability, retaining up to 35% activity after 30 min of incubation at 90 °C [51]. By contrast, the lytic activity of some phage endolysins was destroyed by heat treatment [26,61]. However, none of these assays were performed in milk, work that is sorely needed. Regarding the stability of lytic proteins in cold storage, prior to this study the existing data had been obtained in aqueous solutions but not in milk. This is the case for CHAPK, which retained up to 70% of its lytic activity after being stored at 4 °C for one month [18]. By contrast, the remaining lytic activity of CHAPSH3b was about 33% after 3 days of storage at 4 °C in raw milk. As indicated above, the reduction of lytic activity in raw milk could be due to the presence of heat-sensitive components in raw milk that hamper the access of the lytic protein to its target on the cell wall of the bacterial host [56]. Increasing the concentration of the lytic protein might enable us to overcome this limited activity in raw milk. Indeed, 3.30 µM (100 µg/ml) CHAPSH3b was able to kill 10³ CFU/ml of S. aureus in raw milk, and no re-growth was observed within 2 h (data not shown). Our findings demonstrate the ability of HydH5-derived proteins to inhibit the development of S. aureus in milk, with CHAPSH3b being particularly effective. The high anti-staphylococcal activity of CHAPSH3b, along with its thermostability, might enable this protein to be applied directly to raw milk after milking. Overall, our results suggest that phage lytic proteins could be a valuable hurdle to prevent S. aureus growth in milk and, presumably, in other dairy products.
2017-07-07T12:44:34.299Z
2013-01-24T00:00:00.000
{ "year": 2013, "sha1": "2a2792e59f8b93bfb09bf82cc484119f631e4ee3", "oa_license": "CC0", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0054828&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2a2792e59f8b93bfb09bf82cc484119f631e4ee3", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
257459940
pes2o/s2orc
v3-fos-license
INVESTIGATION AND PREVENTION OF SPORTS RISK IN THE TEACHING OF TABLE TENNIS

ABSTRACT Introduction: Among college students who are not majoring in sports, table tennis is well accepted, highly popular, and has low requirements regarding equipment investment. Therefore, many students choose this sport, although there are also certain sports risks in the course of practicing it. Objective: Investigate the sporting risks of teaching table tennis and its preventive measures. Methods: Before each class exercise, the physical education teacher led the experimental class to rigorously complete the warm-up. In contrast, the control class maintained the basic program configuration without the warm-up phase. After 24 hours of practice, students in the experimental class and the control class were tested with the FMS. Results: The experimental class stability score increased from 1.58 points to 1.84 points, and the trunk rotation stability score increased from 1.68 points to 2.05 points. Conclusion: Warm-up activities before sports can further reduce sports risks in the table tennis teaching process by providing a better and safer higher education environment for students. Level of evidence II; Therapeutic studies - investigation of treatment outcomes.

INTRODUCTION
Table tennis has always enjoyed the title of national sport in China. At the same time, the sport is very popular, and mass participation is very high.¹ A large number of sports enthusiasts and students are willing to take part in table tennis.² The development of table tennis in China is very mature, and at this stage it has relatively advanced theoretical knowledge and training and teaching modes. In terms of infrastructure, the construction of table tennis venues is also well developed.³ Regular participation in table tennis can help improve physical condition, enhance physical fitness, cultivate willpower, and help students develop physically and mentally in many ways. With the continuous development of society, people's investment in sports has begun to increase significantly.⁴ Students actively pursue high-quality sports experiences and improve their competitive level through daily training. The opening of table tennis courses in colleges and universities is, however, often accompanied by many potential safety hazards.⁵ By analyzing the core causes of these potential safety hazards, the various sports risks in the table tennis teaching process can be effectively prevented. Creating a safe sports environment for students is conducive to the development of physical education courses in colleges and universities.⁶ In addition, professional table tennis courses can help students gain a deeper understanding of the sport, and by participating in college courses students can improve their table tennis skills.⁷
Research on sports risk in table tennis teaching
Table tennis is an almost indispensable option when colleges and universities set up elective sports courses. For non-sports majors, table tennis is well known and highly popular, and it does not require a high investment in equipment. Therefore, many students choose this course, but there are also certain sports risks during practice. In order to analyze the sports risk of table tennis teaching, this paper used interviews and a questionnaire survey: we communicated with the physical education teachers and students of the table tennis elective course to investigate the current teaching situation and sports risks, and then distributed questionnaires to students who had suffered sports injuries in class. The questionnaire survey was distributed and collected anonymously. A total of 50 questionnaires were distributed and 47 valid questionnaires were returned, a response rate of 94%. The study and all the participants were reviewed and approved by the Ethics Committee of Chengdu Technological University (No. CDTU2019SZ024). Excel software was used to organize the acquired data and draw the relevant analysis charts.

Experimental design of sports risk prevention
In order to study the preventive effect of standardized warm-up activities before class on the sports risks of table tennis teaching, two table tennis elective classes of college freshmen were selected as the research objects. After data collation, the basic information is shown in Table 1. The students in the experimental class were 164.22 ± 8.6538 cm tall, weighed 54.21 ± 7.4152 kg, were 20.09 ± 0.4059 years old, and had 2.08 ± 0.7173 years of training. The students in the control class were 166.51 ± 9.6321 cm tall, weighed 60.28 ± 10.3448 kg, were 20.19 ± 0.6130 years old, and had 1.99 ± 0.5108 years of training. The P values between the groups were all greater than 0.05, which reduces the interference of participant selection with the experimental results.

The experiment was conducted in a controlled way. Before the exercise in each class, the experimental class was led by the physical education teacher to complete the warm-up exercises carefully and strictly, and the teacher standardized the students' movements, effectively treating the warm-up as a key teaching activity so as to ensure its effectiveness. The control class maintained the original teaching arrangement: the teacher simply led the students through the relevant movements, demonstrating at the front while the students copied the teacher's actions to complete the preparatory activities, and the teacher did not correct the students' movements during this process. After completing these activities, the experimental class and the control class carried out exactly the same table tennis teaching course. After 24 hours of teaching, the students in both classes were tested with the FMS.
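The baseline comparability claim above (P > 0.05 between the two classes for height, weight, age and training years) can be checked directly from the reported means and standard deviations. The Python sketch below is illustrative only: the class sizes are not reported in this excerpt, so the group size used here is an assumption, and the resulting P values will therefore not reproduce the paper's.

```python
# Minimal sketch (illustrative): Welch t-tests on the reported summary statistics.
# ASSUMPTION: n = 30 students per class; the excerpt does not give the group sizes.
from scipy import stats

n_exp = n_ctl = 30  # assumed, not reported

baseline = {
    # variable: (mean_exp, sd_exp, mean_ctl, sd_ctl)
    "height_cm":      (164.22, 8.6538, 166.51, 9.6321),
    "weight_kg":      (54.21, 7.4152, 60.28, 10.3448),
    "age_years":      (20.09, 0.4059, 20.19, 0.6130),
    "training_years": (2.08, 0.7173, 1.99, 0.5108),
}

for name, (m1, s1, m2, s2) in baseline.items():
    t, p = stats.ttest_ind_from_stats(m1, s1, n_exp, m2, s2, n_ctl, equal_var=False)
    print(f"{name}: t = {t:.2f}, p = {p:.3f}")
```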
The FMS test reflects the fluency of athletes' movements and the flexibility of their bodies, which are generally regarded as criteria related to the risk of sports injury; this paper also uses this indicator. The sub-indicators comprised 7 tests: deep squat, hurdle stance, straight lunge, shoulder flexibility, active straight knee lift, trunk stability push-up, and trunk rotation stability. Three professionals scored each student, the average of their scores was taken as the student's final score, and the class average was then calculated.

Research results on sports risks in table tennis teaching
Figure 1 shows the analysis of table tennis injury sites in the investigation. As can be seen from the figure, 3 students had shoulder and neck injuries, accounting for 6.383%; 4 students had wrist injuries, accounting for 8.511%; 8 students had elbow injuries, accounting for 17.021%; 4 students had waist injuries, accounting for 8.511%; 8 students had knee joint injuries, accounting for 17.021%; and 17 students had ankle injuries, accounting for 36.170%. Figure 2 shows the analysis of the severity of sports injuries in table tennis teaching. As can be seen from the figure, 26 students had only slight discomfort, accounting for 55.319%; 20 students had minor injuries needing medical treatment, accounting for 42.553%; and one student was seriously injured, accounting for 2.128%. From the interviews with physical education teachers and students, we also know that most of the students do not have a good foundation in table tennis, so their exercise intensity is far lower than that of professional table tennis players and their sports injuries are correspondingly limited. Most of the students only had joint and muscle strains, sprains, and the like, which did not greatly affect their health. Some students, when they felt uncomfortable because of a sports injury, could improve their condition simply by stopping and walking around the field under the guidance of the physical education teacher, and they recovered after a period of rest. There were also some students with joint dislocations or strain injuries who needed to go to the hospital for bone-setting massage and other treatments; they could be cured with medical treatment for bruises and injuries, almost without sequelae. Only in a few cases did students suffer serious injuries, even fractures, which affected their health and quality of life during this period.

Effect of the prevention training program on reducing sports risk
In this section, we comprehensively analyzed the FMS test results of the two groups of students before and after the exercise program, as shown in Table 2 and Table 3. It can be seen from Table 2 that, before the experiment, there was little difference between the single-item scores of the students in the experimental class and those in the control class, which reduces the impact of the students' own baseline performance on the experimental results. From the overall numerical results, it can also be seen that, before the experiment, the students in the two elective classes had poor foundations and relatively low scores, especially for trunk stability, which could easily give rise to risks in the teaching process.
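The FMS scoring procedure just described (three raters per student, the rater mean as the student's item score, and the class mean per item) is simple to compute; the Python sketch below illustrates it with invented scores and item names paraphrased from the list above.

```python
# Minimal sketch (illustrative): aggregating FMS scores as described in the text.
from statistics import mean

FMS_ITEMS = ["deep_squat", "hurdle_stance", "straight_lunge", "shoulder_flexibility",
             "active_straight_knee_lift", "trunk_stability_pushup", "trunk_rotation_stability"]

# scores[student][item] = the three raters' scores (0-3 each); values are invented
scores = {
    "student_01": {item: [2, 2, 1] for item in FMS_ITEMS},
    "student_02": {item: [2, 3, 2] for item in FMS_ITEMS},
}

def student_item_score(rater_scores):
    """A student's final score on one item: the mean of the three raters' scores."""
    return mean(rater_scores)

def class_item_mean(all_scores, item):
    """The class value for one item: the mean of the students' final scores."""
    return mean(student_item_score(s[item]) for s in all_scores.values())

for item in FMS_ITEMS:
    print(f"{item}: class mean = {class_item_mean(scores, item):.2f}")
```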
Sports risks in table tennis teaching
The main sports risks in table tennis teaching can be attributed to the following causes. The first is the choice of teaching site: teaching table tennis on a floor with too much or too little friction is likely to cause students to slip and fall during exercise, and if the space of the sports ground is too narrow, the probability of sports risk also increases greatly. Secondly, students may lack sufficient warm-up before the start of exercise, which can cause various sports injuries. The main function of the warm-up is to loosen up the body so that the student's condition reaches the standard required for participating in sports; a missing or insufficient warm-up is likely to cause strains and sprains of soft tissues such as joints and ligaments. Excessive exercise intensity is another part of the reason for sports risk: in the teaching process, once students keep exercising at excessive intensity, they may also incur various sports risks due to exhaustion. Non-standard technical movements may cause various joint injuries; irregular technique leads to incorrect force-generating points, and once some soft tissues of the body are overloaded, serious tissue strain is likely to result. Lack of the necessary protective sports equipment is also part of the reason for sports risks: protective equipment gives students some protection, and without it, safety during sports is greatly reduced. As the students are still growing, their sense of self-protection is not yet mature, which is a main reason why students suffer various sports injuries. Finally, physical fitness varies from person to person; under the same exercise intensity, the probability of sports risk is higher for students with poor physical fitness.

Table tennis teaching risk prevention measures
By analyzing the causes of the various sports risks in table tennis teaching, some effective preventive measures can be derived. In view of these causes, teachers should attach great importance to them and improve the corresponding problems in teaching, which can effectively reduce the probability of the various sports risks in the learning of table tennis. First of all, colleges and universities should build professional table tennis teaching venues. All teaching venues should provide a safe sports environment for students and effectively reduce the various sports risks that occur during exercise. Secondly, teachers should lead professional warm-up activities when students play table tennis, letting students loosen up through the warm-up so that the flexibility and agility of the body reach the standard for participating in sports; during the warm-up, teachers should emphasize its importance. In the teaching process, teachers should guide students to choose an exercise intensity suited to their physical condition, which can effectively prevent college students from exercising in a state of fatigue caused by excessive intensity. Teachers should also standardize the students' technical movements, to avoid the soft tissue injuries and strains caused by incorrect force-generating points arising from non-standard technique.

CONCLUSION
There have been many problems in the elective course of physical education for non-physical-education majors in colleges and universities.
First of all, students do not pay enough attention psychologically. Many non-sports majors believe that sport is just a relaxing and entertaining activity and that the exercise outcome only affects the course grade; therefore, they do not take physical education seriously and remain completely relaxed, studying and training earnestly only when they happen to be interested. This kind of mentality has affected the teaching effect of the physical education curriculum and has also brought high sports risks. Therefore, this paper took the preparatory (warm-up) training stage as its key research object. The research results show that careful warm-up activities before sports can better reduce the sports risks in the table tennis teaching process, prevent joint injuries and muscle strains caused by over-excitement, and create a better and safer teaching environment for students.

Figure 1. Analysis of the sports injuries in table tennis teaching.
Table 1. Analysis of the characteristics of the two groups of subjects.
Figure 2. Analysis of sports injuries in table tennis teaching.
Table 2. Analysis of FMS test results of the two groups of students before sports training.
Table 3. Analysis of FMS test results of the two groups of students after sports training.
2023-03-12T15:14:52.119Z
2023-03-10T00:00:00.000
{ "year": 2023, "sha1": "8b4bf20302830405d2e0fa29fd0b290da8c6082f", "oa_license": null, "oa_url": "https://www.scielo.br/j/rbme/a/wsVDntSQBnRGVsjRn5KnDLf/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Dynamic", "pdf_hash": "21b51d592764091b3510c37efe07a32342ac7cad", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
9558722
pes2o/s2orc
v3-fos-license
Coliform accumulation in Amphibalanus amphitrite (Darwin, 1854) (Cirripedia) and its use as an organic pollution bioindicator in the estuarine area of Recife, Pernambuco, Brazil

Samples of water and of the barnacle Amphibalanus amphitrite were collected from Recife, Brazil, to assess whether this species accumulates total (TC) and thermotolerant coliforms (TTC) related to sewage pollution. Most Probable Number (MPN) values and the standard procedures for the examination of shellfish were used. Compared with the water samples, the highest coliform values came from the barnacles, with TC values ranging from < 3.0 × 10 to ≥ 2.4 × 10 MPN.g⁻¹ and TTC ranging from ≥ 2.4 × 10 to 2.9 × 10 MPN.g⁻¹. Barnacles accumulated the TC Ewingella americana and the TTC Escherichia coli, Enterobacter gergoviae, Enterobacter aerogenes, and Enterobacter sakazakii. The results provided an indication of the level of organic contamination at the sampling locations and suggest that this species could be a good organic pollution bioindicator.

Introduction
The biological effects of pollution are closely related to the increasing man-made changes to nature. A number of investigations have focused their attention on the ecological, physiological, or biochemical consequences of pollution, as marine organisms could potentially be used as pollution indicators (Patarnello et al., 1991). Benthic animals are useful for the monitoring of environmental quality due to their habitat and lifestyle. Attributes of benthic community structure (species composition, quantitative parameters, trophic groups, and species-indicators) may therefore reflect the quality of marine environments (Pearson and Rosenberg, 1978). Assessing patterns in benthic community structure has several advantages over other experimental methods for detecting anthropogenic disturbances, as the benthos can integrate conditions over time rather than just reflect the conditions at the time of sampling; thus, benthic animals are more useful in assessing local effects in monitoring programmes (Rivero et al., 2005). According to Zauke et al. (1988), the main goal of biomonitoring is to evaluate the potential of chemical bioaccumulation in ecological systems, to identify corresponding accumulation strategies, and to analyse biological effects using different scales of observation, ranging from intracellular sequestration of substances to effects at the population, community, or ecosystem level. Biological effects at the individual or population level (e.g., reduced reproduction) may lead to effects at the ecosystem level (e.g., due to changes in resource use and biological interactions such as predation or competition), eventually changing the structure and dynamics of marine systems. Gestel and Brummelen (1996) considered bioindicators to be organisms or groups of organisms that provide information on the environmental conditions of their habitat through their presence or absence or through ecosystem parameters. Rinderhagen et al. (2000) complemented this definition by affirming that these organisms should be benthic species, distributed over the whole region of interest in the littoral zone, to enable easy sampling. For Blackmore et al. (1998), these organisms should accumulate trace contaminants in their tissues, responding essentially to the fraction in the environment which is of direct ecotoxicological relevance, i.e.
the bioavailability of chemical forms.The required biological properties for using an animal as a bioindicator can be summarised as: sessility or very low mobility; widespread distribution and abundance in the area that will be monitored; sampling ease; and capacity of filtration and accumulation of pollutants (Barbaro et al., 1978, Westman, 1985). Among the marine and estuarine benthic organisms present in the littoral zone, barnacles constitute an important, characteristic, and successful group in the intertidal region of hard substrata throughout the world's oceans in terms of abundance and diversity (Newman and Ross, 1976).As they are sessile and relatively longliving, they reflect prolonged conditions of the place where they occur and, as bioindicators, they can be used to differentiate between general background levels and responses to increased bioavailable environmental supplies (Al-Thaqafi and White, 1991). Since barnacles do not tightly regulate the metal burdens in their bodies, they have been suggested as good indicators of metal pollution (Walker et al., 1975).They tolerate and selectively uptake a wide range of toxic metal pollution and accumulate high levels of metals in their body tissues, egg masses and shells in parallel concentrations to the metals in the seawater, which justifies their use as metal pollution bioindicators (Al-Thaqafi and White, 1991;Watson et al., 1995).Among the barnacles used as suitable biomonitors of trace metal pollutant elements, Amphibalanus amphitrite (Darwin, 1854) has been cited as a bioaccumulator of 5 trace metals: cadmium, chromium, copper, lead, zinc (Blackmore et al., 1998), andfluoride (Barbaro et al., 1978). Nevertheless, the focus of the newly developed monitoring programmes are the evaluation of coastal seawater quality and the assessment of the impacts of human sewage on the benthic invertebrate population, using the bioaccumulation of total and thermotolerant coliforms (Alonso et al., 1999;Chiroles-Rubalcaba et al., 2007).Coliforms include a heterogeneous group of lactose-fermentative bacteria belonging to the Enterobacteriaceae Family (APHA, 2000).Among them, Escherichia coli (Escherich, 1885) is generally considered a more reliable sanitary indicator of the quality of shellfish, and is also used to classify water (Hood et al., 1983;Silva et al., 2003). Estuaries are usually highly contaminated by organic and inorganic residues, which increase in concentration in the brackish waters due to the sedimentation of most residues, where there is severe oxygen depletion and increased inputs of untreated organic matter due to urbanization (Cross and Rebordinos, 2003).Furthermore, the composition and structural variation of benthic communities has been used to infer the prevalent environmental factors when creating models for these communities (Gorostiaga et al., 2004).In domestic sewage, the presence of thermotolerant coliforms indicates sewage pollution and contamination of benthic animals as their presence could mean that pathogenic bacteria and viruses might be concentrated in them (Waldichuk, 1989;Chigbu et al. 2005). Therefore, among the barnacles to date, only Amphibalanus amphitrite has been proposed as a potential biomonitor for sewage-derived nutrients in coastal marine ecosystems.The high percentage cover of the barnacle A. 
amphitrite in estuarine areas with waste input and little or no efficient circulation showed that this species is also tolerant of polluted environments; thus, it could be considered a good organic pollution indicator when occurring alone without other typical euryhaline species that are common in these estuarine areas (Lacombe and Monteiro, 1986;Calcagno et al., 1998;Junqueira et al., 2000;Breves-Ramos et al., 2005;Farrapeira, 2006;Farrapeira et al., 2009). Amphibalanus amphitrite is a warm temperate to tropical barnacle distributed throughout the world (Newman and Ross, 1976).It is distinctly euryhaline and eurythermic and is commonly found in the intertidal zone, settled on several substrates such as mangrove roots, rocky shores, and artificial substrates (Newman, 1967;Baker et al., 2004;Desai et al., 2006;Farrapeira, 2008).This species has an average lifespan of 22 months and a maximum of 5-6 years (Calcagno et al., 1998).In the estuarine area of Recife, Pernambuco, it was considered the most euryhaline marine component, even in the most polluted area when no other species were found, and always occupied a wide zone in the intertidal area (Farrapeira, 2006;Farrapeira et al., 2009).This study aimed to assess whether A. amphitrite accumulates total and thermotolerant coliforms in this estuarine area, and thus to demonstrate that it is a good bioindicator of sewage coliform pollution. Study area Recife (08° 04' 03" -08° 05' 06" S and 34° 52' 16"-34° 53' 58" W), located in the northeastern region of Brazil, in the state of Pernambuco, has a population of approximately 1.40 million people, and suffers with problems such as lack of basic urban services (water, sanitation, solid waste collection, etc.) and physical and social infrastructure (The World Bank, 2003).It was built on a coastal plain constituted of fluvial-marine sediments, where the mangroves were one of the most important deposition systems that provided substrata for urban development (Silva, 2004).The climate is tropical warm and wet, with an annual average air temperature of around 25 °C and water temperatures that range from 23.5 to 32.0 °C.Relative air humidity is of approximately 80% and the area has an average precipitation of 1763 mm, of which 80% falls during the period of April to July -the winter months -with rare rain events occurring during the months of September and February (summer) (Somerfield et al., 2003). The estuarine area comprises the rivers Capibaribe, Jiquiá, Tejipió, Jordão, Pina, and Beberibe, which flow towards the Atlantic Ocean through a single opening located in the Port of Recife.There are two important basins in this area.The Pina Basin is formed by the confluence of the southern branch of the Capibaribe River and the Jiquiá, Tejipió, Jordão, and Pina rivers.The Harbor Basin is formed by the confluence of the Pina Basin, and by the northern branch of the Capibaribe and Beberibe rivers, enclosed by the Recife and Olinda moles (Figure 1) (Farrapeira, 2006). Due to its proximity to the city, the estuarine region suffers from extreme environmental degradation, espe-cially mangrove degradation and water pollution from domestic organic sewage.Moreover, only 39% of the urban population has sewerage, and less than 50% is effectively treated; the rest of the sewage is discharged in natura into the rivers (Recife, 1995). Sampling and laboratorial analysis Sampling was designed to test the pollution gradient in two areas, in April and May 2007. 
Station 1, Pina Bridge (08° 05' 142" S and 34° 53' 419" W), potentially the most polluted site, is an artificial rocky substrate located in the Pina Basin. Station 2, at Marco Zero–Port of Recife (08° 03' 795" S and 34° 52' 248" W), an artificial rocky wall located in the Harbour Basin, represented the site of little or no impact due to the constant renewal of seawater (Figure 1). Environmental variables of the water (pH, salinity and temperature) were measured with a portable pH meter and a densimeter during low tide. The water samples from the estuarine areas were collected in sterile containers and processed within 12 hours of collection. Furthermore, unpublished salinity data collected throughout the year by the Oceanographic Department of the Federal University of Pernambuco were used. Living barnacles with similar heights and diameters were collected from both stations, to avoid possible distinct effects caused by differences in age. They were taken from the middle range of the intertidal zone during low tides. The samples were placed into clean plastic bags and stored at room temperature until further processing.
Results The environmental parameters of the Pina Station (08° 05' 142" S and 34° 53' 419" W) and Port Station (08° 03' 795" S and 34° 52' 248" W) are indicated in Table 1.The remarkably low salinity recorded in the first collection was a result of torrential rain which had occurred in the previous week.The pluviometric pattern and the presence of several rivers in the studied area produce great variation of salinity.According to the Oceanographic Department of the Federal University of Pernambuco, the Pina Basin has a minimal salinity of 0.35 at low tide and 6.14 at high tide during the rainy months, while in the summer period, the maximal values of salinities varies from 26.63 to 33.70 at low and high tides, respectively.At the Harbor Basin, the salinity varies from 0.77 at low tide and 18.98 at high tide during the rainy months, and from 33.72 to 36.09 at low and high tides in the summer period, respectively. The barnacles collected from Station 1 (Pina) measured 7.9 ± 1.8 mm in height and 11.5 ± 1.9 mm in diameter; their total weight in April was 141.23 g (including shells and opercula) and 6 g (soft tissue weight), and 190.23 and 7 g in May, respectively.The barnacles from Station 2 (Port) measured 8.7 ± 1.7 mm in height and 12.9 ± 2.6 mm in diameter; weight was of 135.79 g (complete animals) and 6 g (soft tissues) in April, and 201.26 and 10 g in May, respectively. Discussion The coliforms present in the water varied in value for the commonest bacteria (Escherichia coli and Klebsiella pneumoniae) that were found in both stations, but the highest values of TC and TTC were only recorded at the most polluted station.It is important to highlight that in addition to these species, the TTC Citrobacter amalonaticus also occurred in this locality, a species considered of fecal origin (Leclerc et al., 1981).When studying the coliforms present in Valencia, Spain, Alonso et al. (1999) found Escherichia coli, Klebsiella pneumoniae, and 12 other species, while Chiroles-Rubalcaba et al. (2007) found the same TTC coliforms and another four species when researching bacterial indications of fecal contamination in the Almendares River, Cuba.The specific variation, especially in TC species composition between the two stations sampled, may be explained based on their ecological specificity.The survival of coliforms in marine waters depends on salinity, predation, competition with autochthonous microorganisms, heavy metals and nutrients (Hood and Ness, 1982). Although several other disease-causing organisms might be present, the results found here are useful to recognise the high density of bacterial pathogens at the Pina Basin.The presence of the pneumonia-causing TTC Klebsiella pneumoniae, for instance, is cause for concern (Grisi et al., 1983), as well the TC Salmonella spp.(which causes typhoid fever and salmonellosis) and the TTC Citrobacter amalonaticus, which is frequently present in humans as normal intestinal inhabitants and can cause a range of diseases that include bacteremia/ sepsis and several other infectious diseases (Suwansrinon et al., 2005). The coliform count of this study showed that Pina Station was the most contaminated site, and this fact is cited in the literature.According to Recife (1995), high levels of TTC (> 3 × 10 4 MPN.100 ml -1 ) have been recorded in the Pina Basin, which shows the strong influence of domestic sewage at that locality.Similarly, Castro et al. 
(2006) described the Pina Basin as hypereutrophic and organically polluted by the constant influx of nutrients from the five rivers that enter the basin through urban areas with poor sanitation or none at all.Moreover, these rivers pick up a large range of pollutants, particularly high levels of nutrients from the domestic sewage derived from the Boa Viagem and Pina neighbourhoods and also the downtown area.The situation of sewerage is even more dramatic.The World Bank (2003) reported that overall coverage is around 36% and as low as 7% in the poorer areas.Sewage is discharged into open canals which flow into the river systems without any treatment whatsoever.Only an estimated 20% of total sewage is treated; as a result, the heavily contaminated river system affects water quality along the coast. According to Tommasi (1987), the domestic sewage composition of a large Brazilian city has a pH of approximately 6.8 and a total bacteria count that varies from 0.1 × 10 to 3 × 10 6 MPN.100 mL -1 .This author also estimated that sewage pollution per inhabitant is of 1 × 10 5 MPN.100 mL -1 (without treatment) and 5 × 10 3 MPN.100mL -1 (with primary treatment).Kolm and Absher (2008), when studying the bacterial density in waters of the Paranaguá estuarine complex (Paraná, Brazil), found a correlation between the area that receives waters from city sewers and the highest values of TC, E. coli, and seston in the water; the highest TC values were obtained from the polluted estuarine area (6.7 × 10 3 MPN.100mL -1 ).At the mouth of the Nervión River (Biscay Bay, Iberian Peninsula), which receives an enormous load of untreated sanitary sewage, Pagola-Carte and Saiz-Salinas ( 2001) found TC values that ranged from 1.0 × 10 to 2.2 × 10 4 MPN.100 mL -1 and TTC values between 0.1 × 10 and 2.7 × 10 3 MPN.100mL -1 .Jeng et al. (2005) found the highest TTC concentrations (1.6 × 10 2 MPN.100 mL -1 ) at the mouth of the Lake Pontchartrain canal, an estuary in Louisiana, Mississippi (USA). The maximum values observed by these authors were somewhat similar to those obtained from Pina Station in this study and are higher than the maximum allowed by Brazilian law regarding TTC counts in water from where the bivalve mollusks will be extracted and/or cultivated to be consumed by humans (1.4 × 10 MPN.100 mL -1 ) (CONAMA, 2005).The lower coliform values at Port Station, nevertheless, can be explained because of the proximity to the Atlantic Ocean and the tide movements, where seawater is continuously renovated.According to Tommasi (1987), seawater has a high ability to purify itself and to inactivate thermotolerant organisms.Additionally, bacteria could travel some distances (nearly 2.4 km from the discharge point) as a result of pump-ing, depending on wind direction/intensity and current (Jeng et al., 2005).Chigbu et al. (2005) noted that after a peak in TTC values the numbers decline rapidly in 0.3 to 13 days (an average of 6 days); their dynamics in coastal waters is a function of the bacterial load from inflowing streams and rivers, mass transport, and losses due to death and sedimentation. 
An interesting fact was finding the highest concentrations of coliforms inside the barnacles rather than in the water surface.Kolm and Absher (2008), when comparing the bacterial density in waters and oysters of the Paranaguá estuarine complex, Paraná-Brazil, always found lower TC values in the water than in the oysters during the period sampled.The same was observed by Burkhardt III and Calci (2000), who found TTC concentrations inside the oysters Crassostrea virginica (Gmelin, 1791) from an estuary of the Mexican Gulf as much as 4.4 times than that of the surrounding water.Similarly, Lucena et al. (1994) noted that bacterial concentrations of the black mussel Mytilus edulis d`Orbigny, 1846 from the Mediterranean Sea were far greater than bacterial levels in the water, which indicates bioaccumulation. It might be expected that coliforms also penetrate the soft tissues of the barnacles.Zauke et al. (1988) defines bioaccumulation as the net uptake of a substance by aquatic organisms through the interactive effect of bioconcentration (via non-dietary uptake routes).Paralleling an observation by Cerutti and Barbosa (1991) about mollusks, bacterial selection in barnacles may be related to factors such as the bacteria's adaptation to the marine environment, resistance to enzymatic degradation, and use of the host's gut content as a source of nutrients.Although there is a lot of literature on many aspects of barnacle biology, there is very little quantitative information on suspension feeding rates.Barnacles are sessile facultative active-passive suspension-feeders and the volume of seston filtered from the water is a function of current speed and the surface area of the extended basket of modified legs or cirri (Foster, 1978).Amphibalanus amphitrite is more microphagous in nature and is capable of feeding on phytoplankton and debris (Desai and Anil, 2004).Regarding the nutrition of this species' larvae, Gosselin and Qian (1997) observed that when bacteria concentrations were added, barnacle larvae did not grow and did not accumulate particulate material in their gut, which suggests that A. amphitrite larvae are unable to capture bacteria. Comparing coliform concentrations in other marine invertebrate organisms -mainly shellfish (oysters and mussels) -the coliform values inside the barnacles were astonishingly higher than for any other marine or estuarine invertebrates.Although there are no established TTC standards for shellfish, according to Hood et al. (1983) the National Shellfish Sanitation Program (USA) has suggested a standard of ≥ 2.3 × 10 MPN.g -1 of tissue.These authors found TTC levels of 8.6 × 10 MPN.g -1 in Crassostrea virginica oysters from the Apalachicola and Tampa bays, USA.In Brazil, the estimated TC and TTC values in Crassostrea rhizophorae (Guilding, 1828) oysters ranged from < 0.2 × 10 to > 1.6 × 10 3 MPN.g - and from < 0.2 × 10 to > 9.2 × 10 2 MPN.g -1 , respectively, in the Cocó River estuary, State of Ceará (Silva et al., 2003).Values ranged from < 0.2 × 10 to 9.2 × 10 3 MPN.g - for TC and from < 0.2 × 10 to 4.3 × 10 2 MPN.g -1 for TTC in samples taken from the Jaguaribe River estuary, in the same state (Vieira et al., 2007).Kolm and Absher (2008) found that in the oysters from the Paranaguá estuarine complex the TC values varied from 0.6 × 10 MPN.g -1 to 4.8 × 10 2 MPN.g -1 in polluted areas.In addition, coliform values in mussels were not similar to those in barnacles.Jorge et al. 
(2002), when studying Perna perna (Linnaeus, 1758) mussels from Niterói, Rio de Janeiro, found 0.3 × 10 MPN.g⁻¹ TC and 0.1 × 10 MPN.g⁻¹ TTC. Another interesting fact observed in this study was the bacterial composition within the barnacles. With the exception of Escherichia coli, which was also found in the water samples, all other coliform species were only sampled from barnacles, regardless of their occurrence in other rivers, as noted by Alonso et al. (1999) in Valencia, Spain. Escherichia coli and Enterobacter aerogenes are usually isolated from human feces, while the other thermotolerant coliforms (Enterobacter gergoviae and Enterobacter sakazakii) are probably of non-fecal origin (Leclerc et al., 1981). E. gergoviae has been isolated from a variety of clinical and environmental sources (Brenner et al., 1980), as has E. sakazakii; the latter has been linked to outbreaks of meningitis or enteritis, especially in infants (Kandhai et al., 2004). The TC Ewingella americana, also found solely inside the barnacles, is the only species of its genus in the family Enterobacteriaceae and was first described from clinical specimens (Grimont et al., 1983). This organism has low pathogenic potential and rarely causes infections in immunocompromised humans; peritonitis and bacteremia have been identified from various clinical specimens, although its reservoir niches have not been clarified (Pound et al., 2007). Observing this lack of correspondence between coliform numbers in water and animal samples, Escobar Nieves (1988) suggested that TC and TTC detected in the water do not represent a reliable measurement of oyster quality. To explain the occurrence of a specific coliform in an organism and not in the water, Pommepuy et al. (1996) proposed that enterobacteria are best preserved when the microorganisms are ingested by organisms, which may account for the great differences observed in bacterial counts. Chigbu et al. (2005) noted that TTC in surface waters decrease or disappear from the water column with time through death and sedimentation processes, and can concentrate in sediments at high densities. Likewise, Troussellier (1998) stated that water microbes directly discharged into receiving waters could remain in the water column and be transported for some distance prior to their die-off, precipitate to the bottom sediment, or remain suspended in the surface water overlying the bottom sediment. Therefore, the presence of the TC and TTC identified in the barnacles sampled means that at some moment in the past these bacteria were in the water and were carried into the estuarine area. The high concentration of coliforms in the body tissues of the barnacle Amphibalanus amphitrite in an estuarine area of Recife showed not only that this species is tolerant of polluted environments (Calcagno et al., 1998), but especially that it is a good coliform bioaccumulator; thus, it proved to be a good choice as a sentinel species for biomonitoring. According to Westman (1985), this occurs because such species are unusually tolerant of degraded conditions and, considering the literature on their history in estuarine areas contaminated with sewage, they can survive where other organisms cannot.

Figure 1. Map of the estuarine area of Recife, Pernambuco, Brazil, showing Station 1 at the Pina Basin and Station 2 at the Harbour Basin.
Sex Differences in Responses to Antidepressant Augmentations in Treatment-Resistant Depression Abstract Background Women are nearly twice as likely as men to suffer from major depressive disorder. Yet, there is a dearth of studies comparing the clinical outcomes of women and men with treatment-resistant depression (TRD) treated with similar augmentation strategies. We aimed to evaluate the effects of the augmentation strategies in women and men at the McGill University Health Center. Methods We reviewed health records of 76 patients (42 women, 34 men) with TRD, treated with augmentation strategies including antidepressants (AD) with mood stabilizers (AD+MS), antipsychotics (AD+AP), or in combination (AD+AP+MS). Clinical outcomes were determined by comparing changes on the 17-item Hamilton Depression Rating Scale (HAMD-17), Montgomery-Åsberg Depression Rating Scale (MADRS), Quick Inventory of Depressive Symptomatology (QIDS-C16), and Clinical Global Impression rating scale (CGI-S) at the beginning and after 3 months of an unchanged treatment. Changes in individual items of the HAMD-17 were also compared between the groups. Results Women and men improved from beginning to 3 months on all scales (P < .001, η p2 ≥ 0.68). There was also a significant sex × time interaction for all scales (P < .05, η p2 ≥ 0.06), reflecting a greater improvement in women compared with men. Specifically, women exhibited greater improvement in early (P = .03, η p2 = 0.08) and middle-of-the-night insomnia (P = .01, η p2 = 0.09) as well as psychomotor retardation (P < .001 η p2 = 0.16) and psychic (P = .02, η p2 = 0.07) and somatic anxiety (P = .01, η p2 = 0.10). Conclusions The combination of AD+AP/MS generates a significantly greater clinical response in women compared with men with TRD, supporting the existence of distinct pharmacological profiles between sexes in our sample. Moreover, they emphasize the benefit of augmentation strategies in women, underscoring the benefit of addressing symptoms such as insomnia and anxiety with AP and MS. Introduction Women are nearly twice as likely as men to suffer from major depressive disorder (MDD), and this sex difference is among the most robust of findings in psychopathology research (Weissman et al. 1996;Kessler et al. 2003;Wilhelm et al. 2008;Parker and Brotchie 2010;Salk et al. 2017). Despite the greater prevalence of depression among women, only a few studies have investigated the issue of sex differences and psychopharmacological response in MDD, particularly treatment-resistant depression (TRD) (LeGates et al. 2019; Rubinow and Schmidt 2019; Bartova et al. 2021). Antidepressants (AD) are the first-line treatment for MDD (NICE 2010;Bauer et al. 2015;Cleare et al. 2015;Kennedy et al. 2016), yet more than 30% of patients show an inadequate response to initial pharmacological treatments (Rush et al. 2006;Berlim and Turecki 2007). International guidelines and clinical studies suggest that MDD non-responding to 2 adequate trials with AD, also called TRD, should be treated with a combination of different classes of AD, or augmentation strategies with antipsychotics (AP) and/or lithium and valproic acid (mood stabilizers [MS]) as well as other treatment modalities (including brain stimulation techniques) (Lam et al. 2009;Ghabrash et al. 2016;Kennedy et al. 2016;Gobbi et al. 2018). Early evidence showed sex differences in the clinical outcome of augmentation strategies in MDD. 
For instance, T3 (L-triiodothyronine) was observed to be more effective in the augmentation of AD treatment in women than in men (Altshuler et al. 2001). Additional work in current and novel augmentation strategies may be useful in identifying personalized approaches to optimize treatment in both women and men (LeGates et al. 2019). To the best of our knowledge, clinical response rates to AD and a combination of augmentation strategies with either AP or MS has not been explored comprehensively between women and men. The present naturalistic study conducted at the specialized mood disorder clinic of McGill University primarily aimed to evaluate the use of pharmacological combinations of AD+AP, AD+MS, and AD+AP+MS in male compared with female TRD patients. The secondary objective is to investigate possible differences in sociodemographic, clinical, and treatment patterns between male and female TRD patients. METHODS This retrospective study was approved by the Institutional Review Board of McGill University (IRB no. 2020-6323) and was conducted from 2015 to 2020 in accordance with the Declaration of Helsinki and ICH Good Clinical Practice. Data were retrieved from a research database containing information systematically collected on patients followed at the Mood Disorders Clinic of the McGill University Health Center for ≥2 years (mean, 7.5 years). Written informed consent was not required because data were obtained by chart review. Diagnoses of MDD and comorbidities were confirmed by the Structured Clinical Interview for DSM-IV as well as thorough clinical interviews by experienced mood disorder specialists and research coordinators. Patients with a mixed episode or with a neurological/developmental disorder and/or a mood disorder secondary to a medical condition were excluded. The Maudsley Staging Method was used to establish the severity of the TRD patients (Fekadu et al. 2009). Some of the patients had been included in previous studies (Ghabrash et al. 2016;Nuñez et al. 2018). Patients Charts of 206 patients meeting DSM-IV criteria for a major depressive episode for ≥2 months were reviewed (American Psychiatric Association, 2000). A total 76 patients met the criteria for TRD by failing ≥2 pharmacological trials with different AD in mono or combination therapy at an adequate dose and for ≥3 weeks (Lam et al. 2009). All patients had at least a mild to severe major depressive episode, suggested by a score of ≥13 on the Hamilton-Rating Scale for Depression (HAMD-17) and a score of ≥20 on the Montgomery-Åsberg Depression Rating Scale (MADRS) based on cut-off values proposed by Zimmerman et al. (2013). Patients were treated with augmentation strategies, including ADs with MS (AD + MS), AP (AD + AP), or both (AD + AP + MS). Clinical Evaluation Chart analysis was performed by 2 authors (N.A.N. and G.G.) and evaluated at baseline, before the beginning (T0), and after at least 3 months of an unchanged pharmacological treatment (T3). At T0 and T3, patients were assessed on the following behavioral scales: HAMD-17 (Hamilton 1986), MADRS (Montgomery and Åsberg 1979), the Quick Inventory of Depressive Symptomatology ) (QIDS-C16), and the Clinical Global Impression-Severity of Illness ) (CGI-S). The response was defined as a ≥50% reduction from the pre-treatment in the HAMD-17 score. Remission was defined as a score <7 of the HAMD-17 at T3. 
Reliability and Inter-Rater Agreement for Psychometric Scales The internal consistency was previously assessed using Cronbach's alpha, and acceptable reliability was found for all scales (HAMD-17: α = 0.82; QIDS-C16: α = 0.77). Inter-rater reliability had also been assessed previously. Significance Statement Women are nearly twice as likely as men to suffer from major depressive disorder. Yet, there is a dearth of studies comparing the clinical outcomes of women and men with treatment-resistant depression treated with similar medication. We compared the improvement of women and men treated with similar combinations of medication (antidepressants with mood stabilizers and/or antipsychotics) at the McGill University Health Center. We found that the depressive symptoms of women and men improved significantly over 3 months. We also found that women improved more than men over this period. Specifically, the use of mood stabilizers and/or antipsychotics in women improved insomnia, anxiety, and psychomotor retardation more than in men. Our results support the existence of distinct pharmacological profiles between sexes. Moreover, they emphasize the benefit of augmentation strategies in women, underscoring the benefit of addressing symptoms such as insomnia and anxiety with antipsychotics and mood stabilizers. Statistical Analyses Group comparisons on patients' demographics were computed with Pearson's chi-square (χ²) test or Fisher's exact test (if n ≤ 5 in a subgroup). Changes in scales were analyzed using repeated-measures ANOVA with sex as a between-subject factor and time as a within-subject factor, followed by Tukey post-hoc analyses. Effect sizes are reported for t tests (Cohen's d) and ANOVA (partial eta-squared, ηp²). Small, medium, and large effect sizes were respectively 0.2, 0.5, and 0.8 for d, and 0.01, 0.06, and 0.14 for ηp² (Cohen 1968). Analyses were performed using R Statistical Software (R Core Team, 2020). Significance was set at P < .05. Data are presented as mean ± SD, except when otherwise specified. Demographics A total of 76 patients were included in the study (age: 47.71 ± 12.50 years; Table 1). Women and men showed a moderate level of resistance based on the Maudsley Staging Method (women: 9.92 ± 1.89, men: 9.47 ± 1.67). Women had previously tried an average of 5.2 (±3.2) medications and men an average of 4.4 (±2.0) medications. Pharmacotherapies are described in Tables 2 and 3. At T0, both women and men had moderate to severe depression. Further characteristics of this sample can be found in our previous work (Moderie et al. 2022). Response and Remission Response and remission rates did not differ significantly between women and men (P ≥ .12; Table 4). Of note, no suicide attempt or suicidal behavior occurred during the 3-month follow-up of the patients. Improvement in Individual Items of the HAMD-17 in Women vs Men In Table 5, we compared 3-month changes in individual items of the HAMD-17 in women and men. Women exhibited greater improvement in both early (P = .03, ηp² = 0.08) and middle-of-the-night insomnia (P = .01, ηp² = 0.09) as well as retardation (P < .001, ηp² = 0.16), psychic (P = .02, ηp² = 0.07), and somatic anxiety (P = .01, ηp² = 0.10). There was a marginal finding for a greater improvement in general somatic symptoms in women vs men (P = .07, ηp² = 0.04). No significant findings were seen in other clinical scales (P ≥ .15).
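As a concrete illustration of the outcome definitions and effect-size conventions described above (response as a ≥50% reduction in HAMD-17, remission as HAMD-17 < 7 at T3, Cohen's d thresholds of 0.2/0.5/0.8), here is a minimal sketch in Python rather than the R software the authors report using. The scores are invented, and the paired-samples form of Cohen's d (mean change divided by the SD of the change) is one common convention, not necessarily the exact formula used in the paper.

```python
import numpy as np

def response(hamd_t0, hamd_t3):
    """Response: >=50% reduction from the pre-treatment HAMD-17 score."""
    return (hamd_t0 - hamd_t3) / hamd_t0 >= 0.5

def remission(hamd_t3):
    """Remission: HAMD-17 score below 7 after 3 months of treatment."""
    return hamd_t3 < 7

def cohens_d_paired(pre, post):
    """Paired Cohen's d: mean pre-post change divided by SD of the change."""
    diff = np.asarray(pre, dtype=float) - np.asarray(post, dtype=float)
    return diff.mean() / diff.std(ddof=1)

# Hypothetical HAMD-17 scores at T0 and T3 for a handful of patients
t0 = [22, 19, 25, 18, 21]
t3 = [9, 6, 15, 5, 12]

for a, b in zip(t0, t3):
    print(f"T0={a:2d} T3={b:2d} response={response(a, b)} remission={remission(b)}")
print(f"Paired Cohen's d = {cohens_d_paired(t0, t3):.2f}")
```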
DISCUSSION This is the first study, to our knowledge, comparing the clinical trajectory of women and men with TRD treated with similar augmentation strategies (AD and/or MS). One of our main findings is the greater reduction of depressive symptoms in women compared with men (medium effect size) treated with an AD and augmented with an AP and/or MS. Yet, a recent analysis by the Group for the Study of Resistant Depression showed a trend towards a more frequent administration of add-on treatments in men than in women (Bartova et al. 2021). Our results emphasize the importance of augmentation strategies in women with TRD. The synergistic effect of AD+AP is well studied in unipolar depression (Dold and Kasper 2017), and preclinical studies have underscored that augmentation with an AP allows for targeting multiple receptors and neurotransmitters systems (Blier and Blondeau 2011). There is also evidence undermining the importance of augmentation strategies including MS for TRD patients (Blier and Blondeau 2011;Dold and Kasper 2017;Gobbi et al. 2018). Historically, women were prescribed more tranquilizing and hypnotic drugs than men, but recently, AP appears to replace the use of those medications (Seifert et al. 2021a,b). Augmentation of AD with evidence-based pharmacotherapies rather than tranquilizing and hypnotic drugs will benefit women with TRD (Kennedy et al. 2016;Seifert et al. 2021b). Sex differences for the prescription of AD and MS among patients with MDD are largely unavailable and remain to be clarified (Seifert et al. 2021b). Our study design did not allow us to systematically address this question, but no significant differences were noted in terms of augmentation strategy. Most patients were augmented with an AP, and quetiapine was the agent most prescribed in both men and women. It is also the medication with the best evidence, as emphasized in a recent Cochrane Review for the management of TRD (Davies et al. 2019). In our study, MS was used less often than AP, without any sex differences. Others have reported a less common administration of MS in women compared with men with MDD (Bartova et al. 2021), a difference likely driven by the contraindication of valproic acid and lithium in women of childbearing age due to their potential teratogenic effects (Gentile 2010;Dold et al. 2016;Munk-Olsen et al. 2018). The differential treatment outcomes observed between men and women could be explained by a myriad of factors. It has been suggested that women may be more likely to respond to selective serotonin reuptake inhibitors (SSRI) than a tricyclic AD, whereas men may be more likely to respond to tricyclic AD than an SSRI (Frank et al. 1988;Haykal and Akiskal 1999;Kornstein et al. 2000;Berlanga and Flores-Ramos 2006;Young et al. 2009). The occurrence of certain drug side effects (i.e., weight gain or sexual dysfunction) may also contribute to the differential AD efficacy and tolerability between sexes (Seifert et al. 2021a). Because most patients were treated with SSRIs in our study, this could contribute to explaining the greater improvement in specific symptoms in women compared with men. Nonetheless, there is no definite consensus on whether sex differences in AD efficacy actually exist (Keers and Aitchison 2010;Sramek et al. 2016;LeGates et al. 2019), and the National Institute for Health and Care Excellence explicitly states that little evidence supports prescribing AD according to sex (NICE 2010). 
Besides, the pharmacokinetics of augmenting agents might exhibit sex differences, with hypothesized differences in drug transporters (Benet et al. 1999), metabolizing enzymes (Harris et al. 1995; Cheung et al. 2006), and resulting plasma levels of medication (Ronfeld et al. 1997; Keers and Aitchison 2010). Pharmacodynamic properties of the augmenting agents may, furthermore, differ between men and women, with distinct effects on neurotransmitter synthesis (Keers and Aitchison 2010). More studies are needed to elucidate distinct effects of augmenting agents in relation to sex. The groups in our study were comparable in terms of age, duration of illness, number of past hospitalizations and medications, and comorbidities with substance-use disorders (SUD) and anxiety disorders. Unlike our findings, in patients with MDD (non-TRD), alcohol and drug abuse is more common in men than in women (Marcus et al. 2008). Although the sample size is limited, our results may indicate that the prevalence of SUD in women with TRD might be higher compared with women with MDD, in line with the increased risk for SUD among patients with TRD compared with other depressed patients (Brenner et al. 2019). Notably, most studies in TRD excluded individuals with SUD (Bennabi et al. 2015; De Carlo et al. 2016). Although the European Group for the Study of Resistant Depression did not identify SUD as a risk factor for TRD (Souery et al. 2007), sex differences were not investigated. Women with MDD (non-TRD) present comorbid anxiety disorders more frequently than men and are more likely to suffer from anxiety prior to the development of depression (Breslau et al. 1995; Yonkers et al. 1996; Howell et al. 2001; Marcus et al. 2005; Grigoriadis and Erlick Robinson 2007; Bukh et al. 2010). Some data even suggest that the increase in anxiety-depression comorbidity may explain the greater lifetime prevalence of depression in women (Breslau et al. 1995). Interestingly, the European Group for the Study of Resistant Depression found no difference in comorbid anxiety disorders when comparing women and men with TRD, which might suggest an attenuation of this sex difference in TRD (Bartova et al. 2021). Likewise, we found no differences in comorbid anxiety disorders in women compared with men. However, as shown in the Zurich Cohort Study, women also have higher rates of sub-threshold comorbid anxiety, which could contribute to treatment resistance in MDD (Angst and Merikangas 2001; Souery et al. 2007). Our data also align with the findings from the DEPRES I and II studies reporting a higher prevalence of insomnia and anxiety symptoms in women compared with men (Angst et al. 2002). Such evidence emphasizes the need to address anxiety and insomnia, particularly in women. AD, particularly SSRIs, may not sufficiently alleviate those symptoms in women (LeGates et al. 2019) and might contribute to the higher number of tranquilizers and hypnotics prescribed for women than for men (Boyd et al. 2015; Seifert et al. 2021b). In the current study, augmentation with AP and/or MS helped to significantly reduce (moderate effect size) both insomnia and anxiety in women more than in men.
Another important finding is the larger improvement noted in reported early and middle-of-the-night insomnia in women compared with men. Women with MDD generally report more insomnia symptoms than men (Silverstein 1999;Marcus et al. 2005). Insomnia is an established and modifiable risk factor for depression, the treatment of which offers the critical opportunity to prevent major depressive episodes (Plante 2021). The differential improvement in insomnia in women compared with men was accompanied by large-effect size difference in psychomotor retardation. Although no causality can be drawn, our results suggest that improving sleep with augmenting agents in women could decrease psychomotor retardation. While we found a distinct clinical improvement on the severity of depression according to the different pharmacotherapy strategies, we did not observe an overall difference in their response or remission rates. Such outcomes should be viewed, considering the long period required to achieve remission or euthymic states in depression (Goodwin et al. 2016). The observed low rates of remission indeed reflect the refractory nature of patients included in this study. However, no suicide or suicide attempts were reported during the study follow-up, underscoring that even if the pharmacological combinations did not lead to remission within 3 months, they may be significant in certain depressive domains such as preventing suicidal behaviors. There was no sex difference observed in the suicidality item of the HAMD17. As in multiple studies, the number of suicide attempts was higher in women than in men, which could reflect the higher completion rate in men (Kessler et al. 1993;Oquendo et al. 2001). Larger studies are needed to elucidate preferential pharmacotherapy to prevent suicide. The absence of suicide in our cohort can also be linked to follow-up in a tertiary/quaternary clinic with staff fully trained in suicidal prevention and with 24/7 access to psychiatrists and/or psychiatry emergency. Limitations Several limitations should be considered while interpreting these findings. First, the external validity may be limited by data derived from a university hospital mood-specialized center. Second, we did not match the sample of women and men patients according to single pharmacological agents or dosages as well as to depressive severity. Third, we did not control for the menstrual status/ phase of women, which can contribute to the severity of symptoms (Hartlage et al. 2004;Haley et al. 2013;Davari-Tanha et al. 2016;Salk et al. 2017) and AD response. Fourth, the non-blinded retrospective outcome assessments should be considered as well as the limitations of a naturalistic design study. Side effects and adverse events were not systematically documented. Nevertheless, the findings may reflect real-world interactions of clinically selected pharmacotherapies, as clinical treatment was individualized and adjusted to tolerability to favor patients' preference and positive clinical outcomes (Kennedy et al. 2016;Dold and Kasper 2017). The long follow-up of patients at the clinic also prevents the inclusion of undiagnosed bipolar patients in the sample of TRD (Perlis et al. 2011). CONCLUSION In our naturalist study in patients with TRD, augmentation strategies generate a significantly greater clinical improvement in women compared with men, supporting the existence of distinct pharmacological profiles between sexes. 
Moreover, they emphasize the benefit of augmentation strategies in women and highlight the benefit of addressing insomnia and anxiety with AP and MS in this specific population. Further studies linking specific medication and symptoms outcomes in larger sample sizes should provide more insight into these clinical questions to provide personalized management of care of patients suffering from depression. This study paves the way for the investigation of sex differences in TRD, and the data reported here can be used to determine needed sample size in larger trials.
The effect of total hysterectomy on sexual function and depression Background & Objectives: To investigate whether the operations of Type 1 hysterectomy and bilateral salpingo-oophorectomy performed for benign reasons have any effect on sexual life and levels of depression. Method: This is a multi-center, comparative, prospective study. Healthy, sexual active patients aged between 40 and 60 were included into the study. Data was collected with the technique of face-to-face meeting held three months before and after the operation by using the demographic data form developed by the researchers i.e. the Female Sexual Function Index (FSFI) and the Beck Depression Scale (BDS). Results: In the post-operative third month, there was an improvement in dysuria in terms of symptomatology (34% and 17%, P<0.001), while in FSFI (41.47±25.46 to 34.20±26.67, P<0.001) and BDS (12.87±11.19 to 14.27±10.95, P=0.015) there was a deterioration. For FSFI, 50-60 age range, extended family structure; and for BDS, educational status, not working and extended family structure were statistically important confounding factors for increased risk in the post-operative period. Conclusion: While hysterectomy and bilateral salpingo-oophorectomy performed for benign reasons brought about short-term improvement in urinary problems after the operation for sexually active and healthy women, they resulted in sexual dysfunction and increase in depression. The age, educational status, working condition and family structure is also important. INTRODUCTION The majority of hysterectomies are performed on benign reasons in order to increase quality of life; nevertheless, it can bring about some post-operative long-term problems such as sexual dysfunction, depression and especially, urinary incontinence. 1,2 During the operation of hysterectomy, particularly in the course of the ablation of cervix, the bilateral inferior hypogastric plexus, which enables the sympathetic and parasympathetic innervations of the sub pelvic region, can sustain injury. 3 In addition, depending on the lack of uterus among women after hysterectomy and the termination of the capacity of reproduction, the anxiety for no longer having any sex increases the risk of depression, having an impact on the thoughts, social life and partnering communication of women focusing too much on reproduction. 4 Male partners can also have sexual anxieties after such an operation. In some studies it has been reported that alleviation of sexual problems and anxieties of partners undergoing hysterectomy has a positive effect quality of life of the patients. 5,6 In some studies, it has been pointed out that such operations do not have any effect on the sexual functions of women 7 , while some claim positive effects 8,9 and some others have reported negative effects. 10,11 As a consequence, the majority of contemporary studies are retrospective and the short and long-term effects of hysterectomy on sexual function and depression are still not exactly known. 12 The aim of this research was to determine, by using the Female Sexual Function Index (FSFI) and the Beck Depression Scale (BDS), whether the operation of Total Abdominal Hysterectomy and Bilateral Salpingo-oophorectomy (TAH+BSO) performed on benign reasons among sexually healthy and active women aged between 40-60 has any effect on sexual life and levels of depression in the post-operative short period. METHODS This study was planned as a prospective and comparative, multi-center one. 
The patients included into the study were those sexually active and healthy patients, aged between 40-60, who underwent treatment between May 2013 and December 2013 in the Istanbul Gulhane Military Medical Academy, Haydarpasa Training Hospital, Obstetrics and Gynaecology Service, Suleymaniye Training and Research Hospital and Namik Kemal University Medical Faculty Hospital, These were patients with no diagnosis of malignancy, planning to have an operation of Type 1 hysterectomy and bilateral salpingo-oophorectomy and subsequently having this kind of operation. Our study was approved by the Ethics Committee of the Non-Invasive Clinical Trials of GATA Haydarpaşa Training Hospital. All patients were informed about the research; discussions were held about the issues they worry about the operation and the postoperative period. On benign reasons, all patients underwent the operation of Type 1 total abdominal hysterectomy and bilateral salpingo-oophorectomy. In the post-operative six. week, all patients were routinely called in for control, given a briefing on sexual issues, and were encouraged. Either in the pre-operative or post-operative briefing period, the sexual partner was also informed and his anxieties were thus dispelled. The patients who had been under severe depression before the operation or had been using anti-depressant medications or had sexual dysfunction, and those patients who had developed complications during the operation or in the post-operative period, whose partner had a severe illness or had died in the meantime, did not want to continue were excluded. Demographic features of the women participating in the research were determined through the "Patient Diagnosis Form" prepared by the researchers. Additionally, for the determination of the depression level of women, BDS and for the evaluation of their sexual functions, FSFI scales were used through face-to-face meeting held three months before and after the operation. FSFI were grouped as the following: 0-15 (severe), 16-25 (moderate), 26-35 (mild) and 36 and more (normal); and the Beck Depression Scale was grouped as the following: 0-10 (non), 11-17 (mild), 18-23 (moderate), 24 and more (severe). FSFI was also divided into sub-groups under the headings of satisfaction, lubrication, orgasm, arousal, sexual desire and pain. Statistical analysis: Data was evaluated by using SPSS for Windows 15.0 software (Statistical Package for the Social Sciences -SPSS Inc., Chicago, Illinois, USA). Descriptive statistical mean values were presented in terms of standard deviation, frequency and percentage. For statistical analytical categorical changes, the chi-square test was used; for continuous data, the student-t test was used; and for the comparison of dependent qualitative date, the McNemar test was used. Multivariable logistic regression was performed to assess the independence of the associations by adjusting for potential confounding factors. For the purpose of assessing the sexual function of women and determining their level of depression, the categories of age (40-45, 46-50 and 51-60), educational status (primary school or its absence, middle school, high school, associate degree or undergraduate), employment status, family type (nuclear family and extended family) and smoking habits were all used as a potential confounding factor for multivariable logistic regression models. For each potential confounder, we calculated adjusted odds ratios (ORs) and 95% CIs. 
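To show what this kind of calculation looks like in practice, here is a minimal sketch of a multivariable logistic regression in which the exponentiated coefficients are the adjusted ORs with 95% CIs. It uses Python with statsmodels rather than the SPSS software actually used in the study, the data are randomly generated, and the variable names (fsfi_abnormal, age_group, extended_family, working) are placeholders, not the study's actual coding.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: postoperative sexual dysfunction (1 = FSFI abnormal)
# together with candidate confounders; values are random, for illustration only.
rng = np.random.default_rng(0)
n = 150
df = pd.DataFrame({
    "fsfi_abnormal": rng.integers(0, 2, n),
    "age_group": rng.choice(["40-45", "46-50", "51-60"], n),
    "extended_family": rng.integers(0, 2, n),
    "working": rng.integers(0, 2, n),
})

# Multivariable logistic regression; exponentiated coefficients are adjusted ORs.
model = smf.logit("fsfi_abnormal ~ C(age_group) + extended_family + working",
                  data=df).fit(disp=False)
ors = np.exp(model.params)
ci = np.exp(model.conf_int())
summary = pd.DataFrame({"OR": ors, "2.5%": ci[0], "97.5%": ci[1]})
print(summary.round(2))
```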
Results were evaluated at a 95% confidence interval, with p<0.05 taken as the significance level and p<0.01 and p<0.001 as advanced significance levels. For all comparisons, two-sided p values were used. RESULTS One hundred fifty patients were included in the study. Of these, 89 patients (59.4%) were in the premenopausal period and 61 (40.6%) were in the postmenopausal period. The average age of the patients was 46.94±3.86 years; the average marriage duration was 22.40±6.94 years; and the average BMI was 26.87±4.14 kg/m². Table-I shows the general demographic features of the patients. In terms of the symptomatology in the pre-operative and post-operative periods, Table-II shows that there is a statistically significant decrease in urinary problems in the post-operative period (34% and 17%, p<0.001, respectively). As for a comparison in terms of FSFI, there is a statistical difference between the pre-operative and post-operative periods in the total group (41.47±25.46 to 34.20±26.67, p<0.001, respectively) and in the pre-menopausal sub-group (42.11±23.83 to 32.52±26.29, p<0.001, respectively), and this difference also continues in the sub-groups. In the menopausal group, there was a difference only in the sub-group of sexual desire (4.3±1.95 to 3.61±1.64, p=0.002, respectively). However, when female sexual function is grouped as normal or abnormal, there is a difference only in pre-menopausal patients (p=0.031). In terms of BDS, there was a difference in the total group (12.87±11.19 to 14.27±10.95, P=0.015, respectively) and in the pre-menopausal sub-group (P=0.028), but there was no difference in the menopausal group. Comparing the existence or absence of depression in the pre-operative and post-operative periods, no difference was observed in any category. Adjusted ORs and 95% CIs of each potential confounder were calculated for the Female Sexual Function Index (FSFI) and the Beck Depression Scale (BDS). When the adjusted odds ratios (ORs) of the potential confounding factors affecting the existence of female sexual dysfunction and depression were examined, it was educational status at the undergraduate level that was associated with less frequent sexual dysfunction in the pre-operative period [OR: 7.32 (95% CI: 1.09-49.10), p=0.040], and in the case of depression it was the 51-60 age range that was associated with a lower frequency. In the case of female sexual dysfunction in the post-operative period, extended family type was a significant factor. Table-IV shows the adjusted ORs for potential confounding factors that were statistically significant for an increased risk of FSFI and BDS. DISCUSSION Hysterectomy is among the most frequently performed major gynaecological surgical interventions. 1 Whether with oophorectomy or not, anxieties concerning sexual function in the post-hysterectomy period persist, and it is emphasized that each partner's quality of life can be affected after such an operation. However, the exact reason underlying the potential sexual dysfunction after hysterectomy has not been explained up until now. 13,14 It is thought that the neural support of the upper vagina is related to orgasm and lubrication and that many nerves in the pelvic region perform their function through a structure known as the uterovaginal plexus. However, the literature of the last two decades shows that the ablation or non-ablation of the cervix has no effect on sexual function 15,16 and that there is no difference between the techniques of total abdominal hysterectomy, subtotal hysterectomy and vaginal hysterectomy in terms of post-operative sexual activities and sexual problems.
17 There is also no evidence in relevant literature that supports the possible relation between vaginal length and sexual function. 18 While many researchers report that there is a measurable advance in the life style and sexual function after simple hysterectomy, 8,9,19,20 some others point out to negative results. 10,11 In the studies pointing out to positive results, the main reason has been shown as the decrease of dyspareunia 21 , disappearance of pregnancy anxiety, absence of vaginal bleeding and thus the existence of more time for relationship. 7 And in a few studies, it has been reported, that there is an improvement for the sexual functions of each partner after hysterectomy. 22 Some small-scale studies have indicated that the sexual well-being after hysterectomy depends upon the relationship between the partners before the operation and physical well-being. 15 And in a recent review, it has been reported that if hysterectomy is performed under appropriate indications and with an appropriate technique, it would not have any effect upon sexual functions and these claims do not have any scientific premise. 7 Nonetheless, many of the long-term effects of hysterectomy on sexual function are still unknown. As for our study, it has been observe that female sexual dysfunction increased after hysterectomy (Table-IV). About 59.3% of our patients were in the pre-menopausal group and 40.7% of them were in the menopausal group. In the pre-menopausal group, while there was an increase in FSFI scores not only in the total group (42.11±23.83 to 32.52±26.29, p<0.001) but also in each and every sub-group, in the menopausal group there was a significant difference (in the form of decrease) only in the sub-group of sexual desire (4.3±1.95 to 3.61±1.64, p=0.002) ( Table-II). With regard to a comparison in terms of the existence or absence of sexual dysfunction, there was not any difference in the total group and the menopausal group in the post-operative period. However, in the pre-menopausal sub-group, there was an increase in the post-operative period in terms of the existence of sexual dysfunction (30.3% to 47.2%, p=0.031) ( Table-II). In fact, these results show not only that sexual life in the early post-hysterectomy period is negatively influenced, but also that this effect is less in menopausal patients. When the pre-operative period and the post-operative period was compared with respect to symptomatology, it was observed that there was a difference only in terms of the urogenital system and unlike what the relevant literature states 2 , it was also observed that there was a statistically significant decrease in urinary problems (34% and 17%, p<0.001, respectively) ( Table-II). This can well be the consequence of the disappearance of the urogenital pressure problems brought about by benign conditions and the implementation of prophylactic antibiotic during the operation. For a woman, hysterectomy not only signals a loss of the capacity of reproduction, but also of sexual function. This is because uterus makes its contraction felt during orgasm. 23 With the ablation of ovaries, the sudden loss of sex hormones can further increase such anxieties and complaints of depression. The relevant literature points that situations like sexual dysfunctions and decrease in sexual desire after hysterectomy usually leads to a development of depression and that the most common psychiatric problem after hysterectomy is depression. 
24 It has also been reported that hysterectomy does not have an influence upon psychological well-being, but that a depressive condition in the pre-operative period increases the percentage of depression in the post-operative period. 25 Table-III shows that depression varies in terms of BDS between the pre-operative and post-operative periods (12.87±11.19 to 14.27±10.95, p=0.015, respectively), but that there is no difference in any category in terms of the comparison between the existence and absence of depression. In addition, while there was a difference in the pre-menopausal group in terms of BDS (P=0.028), there was no such difference in the menopausal group (p=0.197). This can be due to the sudden decrease in sex hormones with the ablation of the ovaries. Our study is not only a prospective study, but also one that includes sexually healthy patients with no disorder apart from gynaecological pathology, and it consists of isolated cases in which patients were excluded in case of any negative situation affecting sexuality in the short post-operative period. One of the major shortcomings of our study is the possibility of minimal differences originating from the surgical technique, since this study is a multi-center one, in addition to the limited number of cases. CONCLUSION While planning the operation of hysterectomy and bilateral salpingo-oophorectomy for benign reasons in sexually active and healthy women in the pre-menopausal and menopausal groups, potential symptomatology, female sexual dysfunction and depression should definitely be analyzed for their specific factors. While the operation of Type 1 hysterectomy and bilateral salpingo-oophorectomy brings about improvement for women in terms of urinary problems in the short post-operative period, it causes sexual dysfunction and depression. Before the operation, doctors and nurses must explicitly inform all patients and even their partners about the operation and its potential consequences.
Gastrodia elata Ameliorates High-Fructose Diet-Induced Lipid Metabolism and Endothelial Dysfunction Overconsumption of fructose results in dyslipidemia, hypertension, and impaired glucose tolerance, which have documented correlation with metabolic syndrome. Gastrodia elata, a widely used traditional herbal medicine, was reported with anti-inflammatory and antidiabetes activities. Thus, this study examined whether ethanol extract of Gastrodia elata Blume (EGB) attenuate lipid metabolism and endothelial dysfunction in a high-fructose (HF) diet animal model. Rats were fed the 65% HF diet with/without EGB 100 mg/kg/day for 8 weeks. Treatment with EGB significantly suppressed the increments of epididymal fat weight, blood pressure, plasma triglyceride, total cholesterol levels, and oral glucose tolerance, respectively. In addition, EGB markedly prevented increase of adipocyte size and hepatic accumulation of triglycerides. EGB ameliorated endothelial dysfunction by downregulation of endothelin-1 (ET-1) and adhesion molecules in the aorta. Moreover, EGB significantly recovered the impairment of vasorelaxation to acetylcholine and levels of endothelial nitric oxide synthase (eNOS) expression and induced markedly upregulation of phosphorylation AMP-activated protein kinase (AMPK)α in the liver, muscle, and fat. These results indicate that EGB ameliorates dyslipidemia, hypertension, and insulin resistance as well as impaired vascular endothelial function in HF diet rats. Taken together, EGB may be a beneficial therapeutic approach for metabolic syndrome. Introduction Metabolic syndrome, a worldwide issue, is characterized by insulin resistance, impaired glucose tolerance and/or hyperglycemia, high blood serum triglycerides, low concentration of high-density lipoprotein (HDL) cholesterol, high blood pressure, and central obesity. The association of 3 (or more) of these factors leads to an increased morbidity and mortality from several predominant diseases such as type 2 diabetes, cancer, and cardiovascular diseases including atherosclerosis, myocardial infarction, and stroke [1,2]. Fructose is an isomer of glucose with a hydroxyl group on carbon-4 reversed in position. It is promptly absorbed and rapidly metabolized by liver. Recent decades westernization of diets has resulted in significant increases in added fructose, enormous rised in fructose consumption typical daily [3]. The exposure of the liver to such enormous rising fructose consumption leads to rapid stimulation of lipogenesis and triglyceride accumulation, which in turn leads to reduced insulin sensitivity and hepatic insulin resistance/glucose intolerance [4]. Thus, high-fructose diet induces a well-characterised metabolic syndrome, generally resulting in hypertension, dyslipidaemia, and low level of HDL-cholesterol [5]. Recent studies suggest that high fructose intake may be an important risk factor for the development of fatty liver [6]. Rats are commonly used as a model to mimic human disease, including metabolic syndrome [7]. Similarly, emerging data suggest that experiment on fructosediet rats tends to produce some of the changes associated with metabolic syndrome, such as altered lipid metabolism, fatty liver, hypertension, obesity, and dyslipidemia [8]. Evidence-Based Complementary and Alternative Medicine Gastrodia elata Blume is a traditional herbal medicine in Korea, China, and Japan, which has been used for the treatment of headaches, hypertension, rheumatism, and cardiovascular diseases [9]. 
Several major physiological substances have been identified from Gastrodia elata Blume such as gastrodin, vanillyl alcohol, vanillin, glycoprotein, p-endoxybenzyl alcohol, and polysaccharides including alpha-D-glucan [10][11][12]. Our previous studies showed that Gastrodia elata Blume exhibits anti-inflammatory and antiatherosclerotic properties by inhibiting the expression of proinflammatory cytokines in vascular endothelial cells [13,14]. However, the effect of ethanol extract of Gastrodia elata Blume on high-fructose (HF) diet animal model has not been yet reported. Thus, the present study was designed to determine whether an ethanol extract of Gastrodia elata Blume (EGB) improves high-fructose diet-induced lipid metabolism and endothelial dysfunction. Preparation of Gastrodia elata Blume. The Gastrodia elata Blume was purchased from the Herbal Medicine Cooperative Association, Iksan, Jeonbuk Province, Korea, in May 2012. A voucher specimen (no. HBJ1041) was deposited in the herbarium of the Professional Graduate School of Oriental Medicine, Wonkwang University, Iksan, Jeonbuk, South Korea. The dried Gastrodia elata Blume (400 g) was extracted with 4 L of 95% ethanol at room temperature for 1 week. The extract was filtered through Whatman no. 3 filter paper (Whatman International Ltd., England) and concentrated using rotary evaporator. The resulting extract (12.741 g) was lyophilized by using a freeze drier and retained until required. Animal Experiments and Diet. All experimental procedures were carried out in accordance with the National Institute of Health Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Utilization Committee for Medical Science of Wonkwang University. Seven week-old male Sprague-Dawley (SD) rats were obtained from Samtako (Osan, Korea). Rats were kept in a room automatically maintained at a temperature (23 ± 2 ∘ C), humidity (50∼60%), and 12-h light/dark cycle throughout the experiments. After 1 week of acclimatization, animals were randomly divided into three groups ( = 10 per group). Control group (Cont.) was fed regular diet, high-fructose group (HF) was fed 65% fructose diet (Research Diet, USA), and the third group (HF + EGB) was fed with 65% fructose along with a single dose of 100 mg/kg/day of EGB orally for a period of 8 weeks. The regular diet was composed of 50% starch, 21% protein, 4% fat, and standard vitamins and mineral mix. The high-fructose diet was composed of 65% fructose, 20% protein, 10% fat, and standard vitamins and mineral mix. Blood and Tissue Sampling. At the end of the experiments, the aorta, liver, adipose tissue (epididymal fat pads), and muscle were separated and frozen until analysis after being rinsed with cold saline. The plasma was obtained from the coagulated blood by centrifugation at 3,000 rpm 15 min at 4 ∘ C. The separation of plasma was frozen at −80 ∘ C until analysis. 2.4. Measurements of Blood Pressure. Systolic blood pressure (SBP) was determined by using noninvasive tail-cuff plethysmography method and recorded with an automatic sphygmotonography (MK2000; Muromachi Kikai, Tokyo, Japan). The systolic blood pressure (SBP) was measured at week 1, week 3, and week 7, respectively. At least seven determinations were made in every session. Values were presented as the mean ± SEM of five measurements. Analysis of Plasma Lipids. The levels of triglyceride in plasma were measured by using commercial kits (ARKRAY, Inc., MINAMI-KU, KYOTO, Japan). 
The levels of highdensity lipoprotein (HDL)-cholesterol, total cholesterol, and LDL-cholesterol in plasma were measured by using HDL and LDL assay kit (E2HL-100, BioAssay Systems). Estimation of Blood Glucose and Oral Glucose Tolerance Test. The concentration of glucose in blood was measured which was obtained from tail vein using glucometer (Onetouch Ultra) and Test Strip (Life Scan Inc., CA, USA), respectively. The oral glucose tolerance test (OGTT) was performed 2 days apart at 7 weeks. For the OGTT, briefly, basal blood glucose concentrations were measured after 10∼12 h of overnight food privation; then the glucose solution (2 g/kg body weight) was immediately administered via oral gavage, and fourth more tail vein blood samples were taken at 30, 60, 90, and 120 min after glucose administration. Preparation of Carotid Artery and Measurement of Vascular Reactivity. The carotid arteries of the rats were rapidly and carefully isolated and placed into cold Kreb's solution of the following composition (mM): NaCl 118, KCl 4.7, MgSO 4 1.1, KH 2 PO 4 1.2, CaCl 1.5, NaHCO 3 25, glucose 10, and pH 7.4. The carotid arteries were removed to connective tissue and fat and cut into rings of approximately 3 mm in length. All dissecting procedures were carried out for caring to protect the endothelium from accidental damage. The carotid artery rings were suspended by means of two L-shaped stainlesssteel wires inserted into the lumen in a tissue bath containing Kreb's solution at 37 ∘ C and aerated with 95% O 2 and 5% CO 2 . The isometric forces of the rings were measured by using a Grass FT 03 force displacement transducer connected to a Model 7E polygraph recording system (Grass Technologies, Quincy, MA, USA). In the carotid artery rings of rats, a passive stretch of 1 g was determined to be optimal tension for maximal responsiveness to phenylephrine (10 −6 M). The preparations were allowed to equilibrate for approximately 1 h with an exchange of Kreb's solution every 10 min. The relaxant effects of acetylcholine (ACh, 10 −9 ∼ 10 −6 M) and sodium nitroprusside (SNP, 10 −10 ∼ 10 −5 M) were studied in carotid artery rings constricted submaximally with phenylephrine (10 −6 M). Values were expressed as mean ± SD ( = 10). * * < 0.01 versus Cont.; # < 0.05, ## < 0.01 versus HF. HF: high fructose; HF + EGB: high fructose diet with EGB; BW: body weight. Western Blot Analysis in the Rat Aorta, Liver, Muscle, and Fat. The aorta, liver muscle, and fat tissues homogenate were prepared in ice-cold buffer containing 250 mM sucrose, 1 mM EDTA, 0.1 mM phenylmethylsufonyl fluoride, and 20 mM potassium phosphate buffer (pH 7.6). The homogenates were then centrifuged at 8,000 rpm for 10 min at 4 ∘ C, and the supernatant was centrifuged at 13,000 rpm for 5 min at 4 ∘ C, and as a cytosolic fraction for the analysis of protein. The recovered proteins were separated by 10% SDS-polyacrylamide gel electrophoresis and electrophoresis transferred to nitrocellulose membranes. Membranes were blocked by 5% BSA powder in 0.05% Tween 20-Tris-bufferd saline (TBS-T) for 1 h. The antibodies against ICAM-1, VCAM-1, E-selectin, eNOS, ET-1 (in aorta), AMPK, and p-AMPK (in liver, muscle, and fat) were purchased from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA, USA). The nitrocellulose membranes were incubated overnight at 4 ∘ C with protein antibodies. 
The blots were washed several times with TBS-T and incubated with horseradish peroxidase-conjugated secondary antibody for 1 h, and then the immunoreactive bands were visualized by using enhanced chemiluminescence (Amersham, Buchinghamshire, UK). The bands were analyzed densitometrically by using a Chemi-doc image analyzer (Bio-Rad, Hercules, CA, USA). Histopathological Staining of Aorta, Epididymal Fat, and Liver. Aortic tissues were fixed in 10% (v/v) formalin in 0.01 M phosphate buffered saline (PBS) for 2 days with change of formalin solution every day to remove traces of blood from tissue. The tissue samples were dehydrated and embedded in paraffin, and then thin sections (6 m) of the aortic arch in each group were cut and stained with hematoxylin and eosin (H&E). Epididymal fat and liver tissues were fixed by immersion in 4% paraformaldehyde for 48 h at 4 ∘ C and incubated with 30% sucrose for 2 days. Each fat and liver was embedded in OCT compound (Tissue-Tek, Sakura Finetek, Torrance, CA, USA), frozen in liquid nitrogen, and stored at −80 ∘ C. Frozen sections were cut with a Shandon Cryotome SME (Thermo Electron Corporation, Pittsburg, PA, USA) and placed on poly-L-lysine-coated slide. Epididymal fat sections were stained with H&E. Liver sections were assessed by using Oil Red O staining. For quantitative histopathological comparisons, each section was determined by Axiovision 4 Imaging/Archiving software. Immunihistochemical Staining of Aortic Tissues. Paraffin sections for immunohistochemical staining were placed on poly-L-lysine-coated slide (Fisher scientific, Pittsburgh, PA, USA). Slides were immunostained by Invitrogen's HISOTO-STAIN-SP kits using the Labeled-(strept) Avidin-Biotin (LAB-SA) method. After antigen retrieval, slides were immersed in 3% hydrogen peroxide for 10 min at room temperature to block endogenous peroxidase activity and rinsed with PBS. After being rinsed, slides were incubated with 10% nonimmune goat serum for 10 min at room temperature and incubated with primary antibodies of ICAM-1, VCAM-1, and E-selectin (1:200; Santa Cruz, CA, USA) in humidified chambers overnight at 4 ∘ C. All slides were then incubated with biotinylated secondary antibody for 20 min at room temperature and then incubated with horseradish peroxidase-conjugated streptavidin for 20 min at room temperature. Peroxidase activity was visualized by 3,3 -Diaminobenzidine (DAB; Novex, CA) substrate-chromogen system, counterstaining with hematoxylin (Zymed, CA, USA). For quantitative analysis, the average score of 10∼20 randomly selected area was calculated by using NIH Image analysis software, Image J (NIH, Bethesda, MD, USA). Statistical Analysis. All the experiments were repeated at least three times. The results were expressed as a mean ±SD or mean ±SE. The data was analyzed using SIGMAPLOT 10.0 program. The Student's t-test was used to determine any significant differences. < 0.05 was considered as statistically significant. Characteristics of Experimental Animals. During the entire experimental period, all groups showed significant increase in body weight. There was no significant change in body weight after 8 weeks of fructose feeding in HF group. However, treatment of EGB group showed significant decrease in body weight (439.8 ± 26.5 versus 402.5 ± 22.1, < 0.05) (Table 1). Moreover, HF diet results in a significant increase in epididymal fat pads weight. The weight of epididymal fat pads was 60.8 ± 17.4% higher than that of the HF diet group compared with control group. 
However, treatment with EGB significantly reduced the epididymal fat pad weight (57.5 ± 7.3%) compared with the HF diet group (Table 1). Effect of EGB on Blood Pressure. At the beginning of the experimental feeding period, the levels of systolic blood pressure in all groups were approximately 95∼100 mmHg, as measured by the tail-cuff technique. After 4 weeks, the systolic blood pressure of the HF group was significantly higher than that of the control group (P < 0.01). However, systolic blood pressure in the EGB group was significantly lower than that of the HF group throughout the experimental period (136.71 ± 1.24 versus 116.4 ± 1.21, P < 0.01) (Figure 1(a)). Effect of EGB on Blood Glucose Level and Oral Glucose Tolerance Test. Plasma blood glucose levels were not statistically different in HF diet rats with chronic treatment of EGB (Table 1). The oral glucose tolerance test was carried out to check insulin resistance in high-fructose diet rats after 8 weeks. The results showed that the HF diet group maintained a significant increase in blood glucose levels at 30, 60, 90 (P < 0.01), and 120 min (P < 0.05), respectively. However, the plasma glucose levels with EGB treatment were significantly decreased at 30 and 90 min as compared with the HF diet group (P < 0.05) (Figure 1(b)). Effect of EGB on the Morphology of Aorta and Epididymal Fat Pads. EGB effectively decreased blood pressure and attenuated impairment of vasorelaxation. Thus, we examined histological changes by staining the thoracic aorta with H&E. Figure 3 shows that the thoracic aorta of the HF diet group revealed roughened endothelial layers and increased tunica intima-media thickness compared with the control group (+24.13%, P < 0.01). However, treatment with EGB maintained the smooth character of the intimal endothelial layers and significantly decreased tunica intima-media thickness in aortic sections (−16.10%, P < 0.01) (Figures 3(a) and 3(c)). Because EGB effectively reduced the epididymal fat pad weight, we prepared frozen sections of epididymal fat pads and stained them with H&E. Adipocyte hypertrophy was induced by the HF diet compared with the control group (+40.97%, P < 0.01). However, treatment with EGB significantly decreased the hypertrophy of adipocytes (−13.04%, P < 0.05) (Figures 3(b) and 3(d)). Effect of EGB on the Hepatic Lipids. To investigate fat accumulation in the liver in all experimental groups, we prepared frozen sections of liver and stained them with Oil Red O. Lipid droplets were detected in the HF diet group. However, with EGB treatment the number of lipid droplets significantly decreased compared with the HF diet group (Figure 4). Effect of EGB on the Expression Levels of Adhesion Molecules, eNOS, and ET-1 in Aorta. Protein expression levels of VCAM-1, ICAM-1, E-selectin, eNOS, and ET-1 in the aorta were determined by western blotting. Adhesion molecule (VCAM-1, ICAM-1, and E-selectin) and ET-1 protein levels were increased in the HF diet group compared with the control group. However, treatment with EGB significantly decreased these protein expression levels compared with the HF diet group. Moreover, we examined eNOS expression levels to evaluate vascular endothelial function. The eNOS protein levels decreased in the HF diet group compared with the control group. However, treatment with EGB increased eNOS expression levels compared with the HF diet group (Figure 5). Effect of EGB on the Expression Levels of AMPK in Liver, Muscle, and Fat Tissues.
Because EGB effectively suppressed the development of impaired glucose tolerance, dyslipidemia, fatty liver, and endothelial dysfunction, the expression of AMPK was examined in liver, muscle, and fat tissues. The expression of AMPK was significantly decreased in HF diet group. However, treatment of EGB group increased expression levels of protein in liver, muscle, and fat tissues (Figure 7). Discussion Herb, Acupuncture, and Natural Medicine (HAN), one of the most ancient and revered forms of healing, has been used to diagnose, treat, and prevent disease for over 3,000 years. HAN is now used worldwide as an effective means of overcoming disease. Gastrodia elata is a well-known traditional Korean medicinal herb specifically for promoting blood circulation to remove blood stasis. In the present study, we provided the evidence for the beneficial effect of Gastrodia elata on lipid metabolism and endothelial dysfunction in high fructoseinduced metabolic syndrome rat model. Fructose is a lipogenic component, its consumption promotes the development of atherogenic lipid profile and elevation of postprandial hypertriglycemia [15,16]. In addition, HF diet animals develop hypertriglyceridemia, obesity, impaired glucose tolerance, fatty liver, increased SBP, and vascular remodeling [17,18]. In the present study, HF diet clearly increased visceral epididymal fat pads weight resulting from the increases in triglyceride and LDL cholesterol. Treatment with EGB lowered epididymal fat pads weight, triglyceride, and LDL cholesterol levels, whereas it elevated HDL cholesterol levels which assist lipid metabolism. Thus, EGB improves lipid metabolism by the decrease of triglyceride and LDL cholesterol. Although increased epididymal fat pads, body weight was not different from control diet and HF diet group. We suppose that proper experimental periods should be longer than the present periods for 8 weeks to increase body weight. It is sure that EGB is effective in obesity in HF diet rats, since EGB significantly decreased HF dietinduced increase in body weight. In addition, disorder of lipid levels induced by HF diet was associated with aortic lesion. Histological analysis demonstrated that the endothelial layers were rougher in aortic sections of HF diet rats associated with a trend towards an increased development of atherosclerosis. Intima-media thickness of the thoracic aorta has been shown to correlate with prognosis and extend of coronary artery disease [19]. Treatment of EGB maintained smooth and soft intima endothelial layers and decreased intima-media thickness in aortic sections of HF diet rats. Dyslipidemia, impaired glucose tolerance, and fatty liver are major features associated with metabolic syndrome in HF diet rats [19,20]. Fructose induces impaired glucose tolerance via the elevation of plasma triglyceride levels. In addition, previous study demonstrated that an elevated fructose diet associated with impaired glucose tolerance and endothelial dysfunction precedes the development of hypertension [21]. Impaired glucose tolerance plays an important role in the development of such abnormalities as insulin resistance, type 2 diabetes, and dyslipidemia [22]. Similarly, HF diet induced impaired glucose tolerance and dyslipidemia, whereas treatment of EGB improved impaired glucose tolerance with the amelioration of dyslipidemia. In addition, EGB significantly suppressed the increasing adipocyte size and fatty liver. 
Thus, these results suggest that EGB may be useful for suppressing the development of atherosclerotic lesions and obesity and for ameliorating lipid metabolism in this metabolic syndrome model. Endothelial dysfunction plays an important role in hypertension, vascular inflammation, other cardiovascular diseases, and metabolic syndrome [23,24]. In this experimental model, the expression of ET-1 and of inducible adhesion molecules such as ICAM-1, VCAM-1, and E-selectin in the arterial wall represents a key event in the development of atherosclerosis. EGB ameliorated vascular inflammation by downregulating ET-1 as well as ICAM-1, VCAM-1, and E-selectin expression in the thoracic aorta. Several studies have shown that lowering blood pressure and improving endothelial function are related to an increase in eNOS activity, thereby increasing the production of NO, which acts as a strong vasodilator [25,26]. In the present study, EGB upregulated eNOS levels in the aorta and restored the HF diet-induced impairment of endothelium-dependent vasorelaxation, whereas endothelium-independent, vasodilator-induced vasorelaxation was not affected by EGB. These results suggest that the hypotensive effect of EGB is mediated by the endothelium-dependent NO/cGMP pathway. Histological study revealed that EGB suppressed vascular inflammation compatible with the processes of atherosclerosis. In fact, endothelial dysfunction was initially identified as impaired vasodilation in response to specific stimuli such as ACh or bradykinin; therefore, improvement of endothelial function is expected to help regulate lipid homeostasis [27]. Thus, the antihypertensive and anti-vascular-inflammatory effects of EGB contribute to its beneficial effects on endothelial function and lipid metabolism in metabolic syndrome. To clarify the mechanism by which EGB suppresses the development of visceral obesity, impaired glucose tolerance, dyslipidemia, and fatty liver, the study focused on the expression of AMP-activated protein kinase (AMPK). There is a strong correlation between a low activation state of AMPK and metabolic disorders associated with insulin resistance, fat deposition, and dyslipidemia [28][29][30]. AMPK is a key regulator of glucose and lipid metabolism. In the liver and muscle, activation of AMPK results in enhanced fatty acid oxidation and decreased production of glucose, cholesterol, and triglycerides [31]. Recently, Misra reported that AMPK appears to be a promising target for preventing and/or treating metabolic disorders [32]. The activation of the AMPK signaling pathway is also associated with eNOS regulation and alteration of the systemic endothelin pathway in fructose-fed animal models [25]. AMPK is required for adiponectin-, thrombin-, and histamine-induced eNOS phosphorylation and subsequent NO production in the endothelium [33]. Our study showed that EGB markedly increased not only phosphorylation of AMPK in liver, muscle, and fat, but also eNOS levels in the aorta. It can be hypothesized that EGB acts through a novel AMPK-mediated eNOS pathway, which in turn reverses HF diet-induced metabolic disorders. Conclusion These results suggest that EGB ameliorates lipid metabolism, impaired glucose tolerance, hypertension, and endothelial dysfunction in HF diet-induced metabolic syndrome, at least in part, via activation of AMPK and the eNOS/NO pathway. Therefore, Gastrodia elata Blume might be a beneficial therapeutic approach for metabolic syndrome.
2016-05-17T13:24:43.328Z
2014-02-26T00:00:00.000
{ "year": 2014, "sha1": "4bbdb07380e67595e5efa6b594b9310297d38a7d", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/ecam/2014/101624.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5aec3fe03bd172d84e2308cdd8813e2f97ad5ce3", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221902968
pes2o/s2orc
v3-fos-license
The Creativity of Primary School Students in Learning Music as Part of Cultural Art School Subject Primary-level education in Indonesia comprises elementary and Junior High School. In Junior High School in particular, music is taught as part of the Cultural Art school subject. The subject of this study is the creativity of Junior High School students in learning music as part of the broader Cultural Art school subject. The study therefore aims to find out how the creativity of Junior High School students is manifested in musical creativity within cultural art learning. The research method applied is qualitative interpretative, and the research location is Semarang city. Data collection techniques used in this research are interview and documentation study, and data triangulation is used as the data validity technique. The data analysis technique used is content analysis. The results show that the creativity of Junior High School students is manifested in cultural products in the form of group ensemble playing that differs and varies from one student group to another. I. INTRODUCTION This study focuses on teaching music as part of the Cultural Art Education school subject in Junior High School. Based on the curriculum, the Cultural Art subject consists of fine arts, music, dance, and drama lessons, all taught under the same subject called Cultural Art Education. The combination of four sub-materials at the Junior High School level does not appear to cause a problem, since the arts taught in public schools are generally not intended to achieve artistic mastery; rather, arts in Junior High School intra-curricular learning serve as a medium to support education in appreciation and creation. What is gained from the process of appreciation and creation is expected to carry over into the students' character education. Based on the current curriculum, the character traits that education aims to develop include sensitivity, intelligence, honesty, cooperation, tolerance, environmental friendliness, creativity, and nationalism, all of which are explained in detail in the Indonesian education curriculum. One of the music lesson activities often used to teach appreciation and creation as part of character education is ensemble music. An ensemble is a group music performance played on homogeneous or mixed musical instruments. Music itself is related to rhythm and melody; more comprehensively, music can be described as organized sound containing melody, rhythm, and harmony [1] [2]. According to Keller [3], ensemble performance is considerably beneficial for students, since it involves three main cognitive-motor skills related to the quality of real-time interpersonal coordination. The first is the anticipatory mechanism involved in planning a performer's own actions and predicting other ensemble members' actions. The second is the process of dividing attention between one's own actions and those of others while monitoring the overall, integrated ensemble output. The third is based on adaptive mechanisms that allow performers to react to variations in each other's action timing and other performance parameters.
Previous research related to music lessons in Junior High School includes a study entitled "Upaya Meningkatkan Aktivitas dan Hasil Belajar Apresiasi Musik Nusantara melalui Penggunaan Lagu Model pada Siswa Kelas VIII SMP Negeri 1 Pangkah Kabupaten Tegal" or "Attempt to Improve Activity and Learning Results of Nusantara Music Appreciation through the Use of Model Song for Students Grade 8 of SMP Negeri 1 Pangkah Kabupaten Tegal" by Herminingrum [4], which used Action Research as the research type. The results show that the use of a model song can enhance students' ability to appreciate Nusantara music, which is the teaching material to be appreciated. Similar research entitled "Pembelajaran Pianika dalam Bentuk Ansambel pada Siswa Kelas VII di SMP N 1 Kota Gorontalo" or "Learning the Pianica in Ensemble Form with Grade 7 Students at Junior High School 1 Kota Gorontalo" was conducted by Pakaya [5] using a qualitative method. Before the ensemble performance, each part had been trained and mastered by each student. The results showed that children were able to work together as a team and to appreciate others. Making music is also known as a creative activity. Based on [6], in the theory of creativity, expressive creativity occurs when one makes art without a bounded idea and/or without using particular rules, borders, or guidelines. At the next level, rules and/or physical laws are used to limit personal freedom of thought, while expressive spontaneity is still employed. The first and second stages of making art are kinds of general creativity bases. The third stage is inventive: the inventive stage involves the ability to develop and combine existing concepts, using previous design solutions to establish a new design or concept that did not exist before. The fourth stage is innovative: innovative creativity starts from an existing concept and leads to an out-of-the-box product, which itself must be new and not established before. The fifth stage is emergent: at the emergent level, the highest creativity in the arts is reached, involving art that rejects physical laws, principles, and all limitations, and whose products result in new ideas and/or inventions. Based on this background and these theories, this study aims to find out Junior High School students' creativity in musical creation within the Cultural Art subject at school, through in-depth analysis. II. METHODOLOGY The method implemented in this study is qualitative interpretative. As qualitative research, each piece of collected data was analyzed using supporting concepts or theories. Data related to the research location and research target, data collection technique, data validity technique, and data analysis technique are presented as follows: A. Research Location and Research Target The research was located in Semarang City and Regency. The research target is music learning in Junior High School, with teachers and students as the research subjects. The main problem discussed in this study is how Junior High School students exercise creativity in creative activities in the music subject as part of Cultural Art learning at school. B. Data Collection Technique The data collection techniques implemented in this study are observation, interview, and documentation study. The observation was done by directly observing the music learning and the way it is taught in the classroom.
Interviews were conducted with students outside the classroom about how the music is performed, including the way the students play the musical instruments. The documentation study was conducted by examining the music scores that were played and the recordings taken during the music performances. C. Data Analysis and Validity Technique Data validity was established through data triangulation, mainly by comparing the results of observation, interview, and documentation study. The data analysis technique used interactive and interpretative analysis [7] [8]. In general, interactive analysis employs data collection, data reduction, data presentation, and verification or conclusion drawing. Each piece of collected data is interpreted in line with the research problems, taking into account the emic and etic perspectives of the research. III. RESULT AND DISCUSSION The results and discussion of this study are presented together; in short, the research results are discussed directly. The results and discussion concern the problem under study, namely the creativity of Junior High School students in making music in the school Cultural Art subject. The creativity observed and analyzed here relates to musical accompaniment, and the accompaniment learned by students in this research is ensemble music. During the learning process, the students are given considerable freedom, since the teacher only emphasizes the importance of music practice. The main lesson teachers need to teach is about chords and accompaniment patterns. In terms of chords and accompaniment, each group of students tends to have its own ideas, so the same song can be played by several groups with varied results. Research conducted by [9] showed that, from a psychological perspective, playing in an ensemble as a group music activity can help learners as performers to improve their simultaneous creativity, since they have to carry out several activities at once. Among these creative activities are interpreting the music notation, improvising new musical material, adapting to unexpected playing conditions, and accommodating technical errors. Taylor and Getzels stated that music, for listeners or artists, at stage 1 of the theory of creativity functions as expressive creativity. Expressive creativity is exercised when people make art with boundless ideas and/or without using particular rules, limitations, or guidelines. This is what allows students to freely decide the chords and accompaniment they want to play. Some other students tend to follow the rules and lessons taught by the teachers, based on general knowledge of making music. This group of students is familiar with the general conception of music but needs more knowledge and capability to develop it. This corresponds to the second conception of Taylor and Getzels's theory, in which the music players rigidly apply rules and principles that limit private thought and creativity. However, expressive spontaneity is also sometimes shown by the students, and this spontaneous expression is closely bound to the rigid rules taught in theory; in other words, the students are limited by the knowledge and beliefs they held before. Another group has been able to develop a given piece of music into a new type of music.
This group can be categorized as a fully experienced group able to establish a different form of music. The originality and novelty are still related to the existing rules studied before; what the students understood and knew from their previous musical experience triggered new ideas for developing a different type of music. This kind of creative music-making corresponds to Taylor and Getzels's third stage, inventive creativity. The inventive level means the ability to combine existing concepts with previous design solutions to create and establish a new design. A few other groups were able to produce new patterns of musical accompaniment. This is good but can also have a negative side at the same time: people tend to avoid new things because they seem strange, and the uniqueness of a new musical product can keep listeners from feeling the aesthetic content of the art. However, no matter how new and strange a song is, it is still an innovative discovery. In Taylor and Getzels's theory, this belongs to the fourth stage, innovative creativity. Being innovative means thinking out of the box: one should be able to produce something that never existed before. Another group found in the research produced strange musical accompaniment, in which the song and the accompaniment work separately. This causes rejection from listeners, since it is considered to have no sense of aesthetics; the harmonization differs from the harmonization generally used in music, which usually has a mutual relationship between the vocal line and the accompaniment. This type of accompaniment, in Taylor and Getzels's theory, belongs to the emergent level, the fifth stage. It is the highest stage of creativity in the arts: it rejects physical laws, principles, and applicable limitations. IV. CONCLUSION Based on the research results, it is concluded that the main problem of this study is how Junior High School students exercise creativity in musical creation in the Cultural Art subject at school. The creativity concerns students playing ensemble music as song accompaniment. In the context of creativity in producing musical accompaniment, the results show that students tend to have their own idiom in accompanying the music; in other words, each group has its own way of singing and playing, different from one group to another.
2019-09-15T03:07:34.967Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "c32d59c55046833113a96150853d16ced30cf735", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.2991/iconarc-18.2019.30", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "a27f64f86774fc966046370f5825845c8b41a4f1", "s2fieldsofstudy": [ "Art", "Education" ], "extfieldsofstudy": [] }
266304698
pes2o/s2orc
v3-fos-license
History and prospects of development of eco-tourism in Kashkadarya region. This article discusses the history and prospects of ecological tourism development in Kashkadarya region. Based on archival data and the available scientific literature, the author analyzes the problem and studies specific aspects of the history and prospects of the development of ecological tourism in the districts of Kashkadarya region. Introduction In the years of independence, attention to the protection of nature has gained importance as the number of foreign nature lovers coming to our country grows. In countries where tourism is developed, competition is well developed, and a large flow of tourists allows industry facilities to operate at full capacity and compete on price. The development of tourism in our country, including in the regions, has not yet been fully implemented. The economic development model of Uzbekistan was evolutionary and took into account the socio-economic potential of the country, the history of the statehood of the people, national and religious values, and the demographic situation. The "Uzbek model", embodying the reforming capacity of the state, protecting the economy from political and ideological interference, ensuring the rule of law, phased reforms, and social protection of the population, which in the new market conditions experienced a serious psychological and material shock, became the basis for the development of society at the transitional stage. These principles are now reflected in the "Action Strategy for Five Priority Development Areas in 2017-2021" and in the "Strategy for the development of a new Uzbekistan for 2022-2026" [1]. The territory of Kashkadarya region is bordered mainly by the Zarafshan and Hisar mountain ranges to the northeast and southeast. Hills occupy the space between the mountains and the plains. A large part of the plain consists of the Karshi desert, bordered by the Sandiqli and Kyzylkum deserts in the west. The climate is continental: winter is relatively mild, while summer is long (155-160 days), hot, and dry. The average temperature in January ranges from 0.2 to 1.9 degrees, and in July from 28 to 29.5 degrees; the highest temperature is +45 degrees and the lowest is −20 degrees. Annual precipitation is 290-300 mm in the plains, 520-550 mm in the hills, and 550-650 mm in the mountains; it falls mainly in spring and winter, and the summer is hot and dry [2]. Materials and methods To date, a number of research works related to the country's tourism industry and its various branches have been carried out in different fields of the social sciences, including history, law, philosophy, and political science. It is important to analyze the root causes of the problems and to develop proposals and recommendations aimed at solving them. From the point of view of historiography, the analyzed literature reflecting research in this direction includes independent works by O. Khamidov, N. Shamuratova, A. Alimov, N. Tukhliyev, T. Abdullayeva, R. Hayitboyev, U. Matyakubov, and A. Nigmatov, as well as works of foreign authors such as F. David, Goodwin, L. Ceballos, K. Bishop, M. Green, A. Phillips, Y. Kujel, N. Morozova, M. Morozov, and S. N. Khamraeva.
A permanent snow cover forms in the mountains (2-6 months). The vegetation period is 290-300 days in the plains. The main river is the Kashkadarya; its tributaries are the Jinnidarya, Aksuv, Yakkabogdarya, Tankhozdarya, and Guzordaryo (together with the Big and Small Oradaryo). The rivers are fed by snow, rain, and glacier water, and river water is mainly used for irrigation. There are the Chimkurgan, Qamashi, and Pachkamar reservoirs, and the Faiziabad, Eskibog, Eskiankhor, Koson, Pakhtaabad, Karshi, and other canals. Six pumping stations and open and closed collector drainage networks were built during the development of the Karshi desert. The soil of the irrigated lands consists mainly of typical and light gray soils; there are more sandy soils along the Kitab-Shahrisabz stream. In the mountains, typical gray soils are distributed across the high-altitude regions. The natural flora consists of about 1,200 species of higher plants. There are 76,600 hectares of forests in the region, the main part of which is made up of spruce and saxaul groves. The hillsides are covered with various grasses and plants, and there are also shrubs. Mountain forests consist of juniper, almond, and pistachio groves. Namatak, zirk, chakanda, anzur onion, black cumin, and other spices grow in the mountains. More than 100 species of birds, 60 species of mammals, and 7 species of reptiles can be found in the region. The rivers and ponds are inhabited by sandfish, eels, carp, and flounder. The Hisar mountain-forest and Kitab state geological reserves are located in the province, and a complex of high-mountain observatories operates in Kitab District [3]. The province is located in the Kashkadarya basin and on the western edges of the Pamir-Aloy mountain range. The territory of the province rises from west to east, from 300-400 meters above sea level; the eastern part consists of the Kitab-Kamashi foothills, whose height ranges from 450-500 to 900-950 meters above sea level. Most of the mountains within the province occupy its northeastern part, and their highest points reach 4,000 meters. The area of Kashkadarya region is 28.6 thousand square kilometers, and the population is over 3,408,300 (2022); about three quarters of the population lives in villages [4]. The village of Varganza is one of the most ancient and distinctive villages of Kitab district. This remarkable place is located in the south-eastern part of the district, its area is 14,530 km, and the village is 26 km from the district center. It is located in a remote area, and today its population is more than 16,000 people. As for the history of the village, information about the area dates back to the 7th century. According to the sources, the inhabitants of the village moved from the Varganza district of ancient Bukhara and began to call the village by this name. In the middle of the 19th century, the village of Varganza was considered the most important and strategically significant area of the Kitab region. The villagers speak Uzbek and Tajik [5].
Pomegranates from the village of Varganza are known not only in our country but also in neighboring republics. Today, more than ten varieties of pomegranate, such as "Korashirin", "Bedona", "Achchidona", "Koradona", "Ulfi", "Kai", and "Koranordon", are grown in this mountain village; that is probably why the Varganza pomegranate is called the "king of fruits". The pomegranates of this village are distinguished from those of other regions by their excellent taste. Local residents connect the history of the creation of pomegranate orchards in the Varganza and Kyzilkoya areas with the name of Hazrat Bashir. Results According to the accounts, the scholar whose real name was Sultan Said Ahmad Ali, known as Hazrat Bashir, lived in the Kyzylkoya cave 6 km from the village of Varganza and created a large garden nearby. The main tree in the garden was a pomegranate sapling, from which he obtained a good harvest. Later, local residents also began to grow pomegranates, but it is said that Hazrat Bashir was the first person to grow this fruit in these areas. Almost all the houses in this village are carefully built in the historical-architectural style, with two floors. The wooden framework is very dense; the sinch walls, which are mainly made of poplar wood, are filled with raw guvala smaller than half a brick. A person who visits this village for the first time feels as if transported back to historical times. The village of Kol is the last settlement on the way to the peak of Hazrat Sultan; it has about 1.5 thousand inhabitants. One of the things that caught our attention in Kol and Gilon is that in both villages people still meticulously rebuild their houses in the traditional way. Highlanders say that sinch houses are resistant to the strong winds, rains, and earthquakes that occur frequently in the mountains. The village of Kuruksoy, located in the Kalkama area of Chirakchi district, is another vivid example of the development of rural tourism. The five-century-old maple tree in Kuruksoy, the mountain roads, and the wonderful nature will captivate anyone. The long-established market in the village of Kuruksoy includes a wool market and a flower market. The Kalkama mountains are different from other mountains: along with the majestic rocks, the soil layer is also thick. After starting to climb the mountain, the Kalkama Reservoir can be seen on the right side. More than 400 households belonging to the ancient Yuz clan live in the village of Kuruksoy. The village of Gilan is located in Shahrisabz district of Kashkadarya region; this magical corner lies 2,250 meters above sea level, in the lap of the sky-high Hisar mountains, surrounded by huge peaks. Today the population is more than 5,200 people, and Tajiks live there along with Uzbeks [6]. The people of Gilan are considered the longest-lived in the oasis region, and in the village one can meet many venerable fathers and mothers who are over a hundred years old. The village of Gilan is 85 km from the city of Shahrisabz, and the nature becomes more beautiful as one travels from the city into the mountains. After passing the shady village of Miraki, the Hisorak reservoir begins, into which the Aksuv river flows. Going up from the village of Hisorak, one reaches the villages of Kol and Gilan. The villagers are mainly engaged in horticulture, agriculture, and animal husbandry. Fruits such as apples, cherries, and pears grown in this ancient corner are distributed throughout all regions of our country [7].
Around the village of Gilan and along the road leading to it, one can see animals listed in the "Red Book" of the Republic of Uzbekistan, such as the Tien-Shan brown bear, the Turkestan lynx, and the Central Asian beaver; among birds, the eagle, the bald eagle, the black heron, the snow heron, the black stork, the osprey, and the snake-eating eagle, as well as partridges, quails, pigeons, blue crows, and sparrows. Animals such as the fox, the wolf, and the rock marten, reptiles such as turtles, cypresses, and water snakes, and lizards such as the Turkestan agama and the naked-eyed lizard can also be found in the countryside. After the village of Hisorak, the road runs along the Aksuv riverbed to the village of Munavvar, after which it turns right and follows the Gilan riverbed. The pleasant and beautiful nature of the village of Munavvar leaves no one indifferent. Because it is surrounded by high rocks, the summer months are cool and the winter temperature is mild. Both sides of the Gilan road consist of high, huge rocks. Fruits such as apples, pears, cherries, and apricots are grown in orchards along the riverbed. Some parts of the valley consist of wild hawthorn groves, almond groves, wheat fields, and wild vineyards included in the "Red Book" of the Republic of Uzbekistan. Above the riverbed lie Zarafshan and Saur juniper groves, maple groves, and irgay groves. In spring and summer, large tulips, white tulips, tubercle tulips, sunbul korak, norshirach, the porcelain flower of Uzbekistan, and more than 15 types of astragalus grow in the river valley. Discussion Suvtushar is a village located in the southeast of Shahrisabz district, in the bed of the Suvtushar river flowing down from the Hisar mountain ranges, at an altitude of 1,800 m above sea level. The population mainly belongs to the Uzbek nationality; it numbers about 1,100 people living in more than 200 households. The nature is pleasant and beautiful, summer is cool and moderate, and winter is cold and frosty. About 4 km east of the village of Suvtushar, the territory of the Hisar state reserve begins [8]. After the village of Hisorak, one follows the bed of the Suvtushar river, which joins the Aksuv river, to the village of Sayyod, and then turns right. The village of Sayyod is pleasant and has very beautiful nature. Because it is surrounded by high hills and rocks, the summer months are cool and the air temperature in winter is mild. On the right side of the Suvtushar village road there are hills, and on the left side are the arched rocks of the Hisar mountain range. The road from the village of Hisorak to the village of Suvtushar is uneven, stony, and gravelly; it is dusty in the summer months and muddy or covered with ice and snow in the winter months.
Five kilometers above the village of Suvtushar, in the territory of the Hisar state reserve, lies one of the largest waterfalls in Uzbekistan, the Suvtushar waterfall. In May-July, 5-6 cubic meters of water per second, and at other times 1-3 cubic meters per second, fall from a height of 84 meters. The spray formed in this way can be carried 250-300 meters, and 500-600 meters when there is a strong wind. The Suvtushar waterfall can be seen from the upper part of the village of Suvtushar, forming a striking landscape visible among the thick fir trees. Since this waterfall is located in the territory of the Hisar State Reserve, the entry of outsiders is prohibited according to the Law of the Republic of Uzbekistan "On Protected Natural Areas". The inhabitants of Tashkurgan village of Yakkabogh district were relocated in 1999, and now no one lives there. The world-famous Hisar state reserve lies in the village's territory, and this area is protected by the reserve. Twelve kilometers from Tashkurgan, in the Amir Temur cave and the Kal'ai Sheron gorge, 31 dinosaur tracks from the Jurassic period have been preserved to this day. The Amir Temur cave can be reached from the village of Tashkurgan by a two-way road. The Kal'ai Sheron gorge in this area is reminiscent of the beautiful natural landscapes of Europe and Asia on the slopes of the mountains. The flora and fauna here are also diverse and rare, and some plants are unique to this place. The air is clean and clear. It is one of the most favorable places for the development of ecotourism and extreme tourism. By attracting foreign and local tourists to this place, it is possible to descend into the gorge area and show a unique wonder related to the dinosaurs that lived here millions of years ago and died out: 31 dinosaur tracks are still preserved in the gorge. Another notable fact is that the fir trees in the forest are several centuries old; the largest of them have stood for 700-800 years. There are also the "Hokiz Burun" and "Baytal Tail" waterfalls in the gorge. The 14th-century "Langar ota" shrine and mosque is located in the village of Langar, 176 km from the city of Karshi. In the upper part of the village of Langar, Qamashi district, 45 km from the city of Shahrisabz, is the Maidanak astronomical observatory, where the largest telescope in Central Asia, with a one-and-a-half-meter aperture, is installed. The observatory was opened in 1970 and is today a fully equipped station with the most modern equipment. The observatory belongs to the Institute of Astronomy named after Mirzo Ulugbek and cooperates with foreign scientists in the study of astronomy. The observatory is located at an altitude of 2,650 meters on the Hisar massif. The unique geographical location, ideal climatic conditions, and powerful telescope allow the astronomical observatory to observe precisely the motion of variable stars and galaxies, to study celestial bodies, and to observe changes in quasars and supernovae [10]. Regarded as one of the best observatories in the Northern Hemisphere, it has yielded the discovery of more than 80 asteroids, four comets, and a new minor planet. The minor planet is called "Samarkand" and is included in the international catalog of minor planets; it was discovered by Uzbek astronomers in July 2010. The small planet revolves around the Sun between the orbits of Mars and Jupiter with a period of about four years.
A huge area of about 40 hectares has been allocated for the observatory. In the vastness of the mountains, the sky seems especially low and the stars very close. Here you can see the remains of sea animals and an ancient dormant volcano, and the clean air and wonderful scenery will enchant you. Conclusion At this point, it should be noted that today the global nature of the environmental crisis worries humanity. Environmental problems are becoming increasingly serious on a global scale: nature's power of self-recovery is decreasing, its resources are being depleted, and the environment is becoming polluted and poisoned. In this regard, it is very important to raise the ecological culture of the population, to foster a reasonable attitude toward the environment, and to preserve the blessings of nature for future generations. After all, environmental education is important in ensuring harmony between nature and society and in maintaining natural stability. The Sheron fortress is surrounded by steep rocks, the height of whose walls reaches 220-240 meters [9]; the lower part of the gorge is covered with thick forest.
2023-12-16T16:40:08.486Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "cfc85b51029e13f62f4ad451283f99e52c2a08c0", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2023/100/e3sconf_eeste2023_02030.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f62c6b08b8de30580fa2efcffcfc8e818181c494", "s2fieldsofstudy": [ "Environmental Science", "History" ], "extfieldsofstudy": [] }
164979663
pes2o/s2orc
v3-fos-license
Inhomogeneous distribution of radon in different types of tissue in the human body For the therapy of inflammatory diseases of the musculoskeletal system, such as rheumatoid arthritis or ankylosing spondylitis, radon (222Rn) is a treatment modality in use [1]. On the other hand, the radioactive decay of radon accounts for about half of the natural radiation exposure and is believed to be a major source of carcinogenesis [2]. The main dose is deposited in the lung, of which around 95% originates from the short-lived radon decay products [3]. In order to understand the therapeutic effects of radon and the associated risk, knowledge of the distribution and the deposited dose in the human body is of crucial interest. Introduction For the therapy of inflammatory diseases of the musculoskeletal system, such as rheumatoid arthritis or ankylosing spondylitis, radon (222Rn) is a treatment modality in use [1]. On the other hand, the radioactive decay of radon accounts for about half of the natural radiation exposure and is believed to be a major source of carcinogenesis [2]. The main dose is deposited in the lung, of which around 95% originates from the short-lived radon decay products [3]. In order to understand the therapeutic effects of radon and the associated risk, knowledge of the distribution and the deposited dose in the human body is of crucial interest. Solubility and distribution In order to investigate the distribution of radon in the body, its solubility in different types of tissue is important. For the experiments, samples were exposed in a specially designed radon exposure chamber with which conditions like those in radon galleries can be simulated [4]. After a typical exposure time of one hour, the samples were removed and the γ-emitting decay products 214Pb and 214Bi were measured via γ-spectroscopy. From the variation in time of the measured activities, the initial radon activity concentration in the sample can be determined. With additional knowledge of the radon activity concentration during exposure, the radon solubility in the sample can be calculated. Different types of samples were chosen for the experiments. Examples include pristine substances such as oleic acid or linoleic acid, which are the most abundant fatty acids in the human body, and isotonic saline solution. Tissue samples from dead pigs, such as fat, muscle, and bone, were also measured. Additionally, the activities inside the body of a voluntary test person after radon therapy were determined at different body parts by γ-spectroscopy. Lung dosimetry For the quantification of the dose deposited in the lung, a mechanical lung model was developed. The radon decay products, which are the major contributor to the dose, are deposited on a glass fiber filter in a small tube. The flow through the model is regulated by a pump and measured by a flow meter. The whole setup is placed in the radon exposure chamber, and after the experiments the filter is removed and the deposited activities are measured with the γ-detector. From these activities, the energy dose can be calculated [5]. Results and Outlook The solubility of radon in fatty acids is about 50 times higher than in isotonic saline solution. A similar behaviour is found for fatty tissue in comparison to muscle tissue. For other tissues, experiments are currently being performed. The experiments with a voluntary test person have just started but show promising results and will be continued. The measured doses in the mechanical lung model are on the order of µGy or lower.
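The back-calculation from a measured activity to the radon concentration at the end of exposure, and from there to a solubility (partition) coefficient, can be illustrated with a short calculation. The sketch below is only a minimal illustration of the idea and not the authors' analysis code: it assumes that the γ-emitting progeny in the sealed sample follow the decay of the dissolved radon, so the measured activity can be decay-corrected with the 222Rn half-life (about 3.82 d); the sample volume, measurement delay, and chamber concentration are made-up example values.

```python
import math

RN222_HALF_LIFE_H = 3.8235 * 24.0            # 222Rn half-life in hours
LAMBDA_RN = math.log(2) / RN222_HALF_LIFE_H  # decay constant (1/h)

def initial_activity(measured_bq: float, delay_h: float) -> float:
    """Decay-correct an activity measured 'delay_h' hours after the end of
    exposure back to the end of exposure, assuming the activity in the sample
    decays with the 222Rn decay constant."""
    return measured_bq * math.exp(LAMBDA_RN * delay_h)

def solubility_coefficient(sample_bq: float, sample_volume_l: float,
                           chamber_bq_per_l: float) -> float:
    """Ostwald-type solubility: activity concentration in the sample divided
    by the activity concentration in the chamber air during exposure."""
    return (sample_bq / sample_volume_l) / chamber_bq_per_l

# Example with made-up numbers: 12 Bq measured 2 h after removing a 50 mL
# sample from a chamber held at 600 Bq/L during exposure.
a0 = initial_activity(12.0, delay_h=2.0)
s = solubility_coefficient(a0, sample_volume_l=0.05, chamber_bq_per_l=600.0)
print(f"activity at end of exposure: {a0:.1f} Bq, solubility coefficient: {s:.2f}")
```

In practice the ingrowth and decay of the individual progeny (214Pb, 214Bi) would have to be modelled explicitly, but the ratio-of-concentrations definition of solubility stays the same.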
At present, different parameters such as the aerosol concentration are being varied. A fraction of the radon decay products will attach to aerosols and thus change the deposition mechanism and deposition site, depending on the size of the aerosol. In future experiments, a more anatomically correct lung model will be designed, and a better variation of the aerosol concentration and size distribution will be established. In our presentation, we want to present the methods and first results, which give promising indications for a better understanding of radon distribution and the deposited dose in the human body.
2019-05-26T14:15:57.120Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "8249bd95fee67f69d3eb6fb6b12fa6bf7cd31c75", "oa_license": "CCBY", "oa_url": "https://www.bio-conferences.org/articles/bioconf/pdf/2019/03/bioconf_heir2018_03001.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "38a79f7144ebc0f4216631324583b2418588a94e", "s2fieldsofstudy": [ "Environmental Science", "Medicine", "Physics" ], "extfieldsofstudy": [] }
9077173
pes2o/s2orc
v3-fos-license
Long-term prognosis in patients continuing taking antithrombotics after peptic ulcer bleeding AIM To investigate the long-term prognosis in peptic ulcer patients who continue taking antithrombotics after ulcer bleeding, and to determine the risk factors that influence the prognosis. METHODS All clinical data of peptic ulcer patients treated from January 1, 2009 to January 1, 2014 were retrospectively collected and analyzed. Patients were divided into either a continuing group, who continued taking antithrombotic drugs after ulcer bleeding, or a discontinuing group, who discontinued antithrombotic drugs. The primary outcome of follow-up in peptic ulcer bleeding patients was recurrent bleeding, and the secondary outcome was death or the occurrence of acute cardiovascular disease. The final date of follow-up was December 31, 2014. Basic demographic data, complications, and disease classifications were analyzed and compared by the t- or χ2-test. The number of patients that achieved each outcome was counted and analyzed statistically. A survival curve was drawn using the Kaplan-Meier method, and the difference was compared using the log-rank test. COX regression multivariate analysis was applied to analyze the risk factors for the prognosis of peptic ulcer patients. RESULTS A total of 167 patients were enrolled into this study. As for the baseline information, differences in age, smoking, alcohol abuse, and acute cardiovascular diseases were statistically significant between the continuing and discontinuing groups (70.8 ± 11.4 vs 62.4 ± 12.0, P < 0.001; 8 (8.2%) vs 15 (21.7%), P < 0.05; 65 (66.3%) vs 13 (18.8%), P < 0.001). At the end of the study, 18 patients had recurrent bleeding and three patients died or had acute cardiovascular disease in the continuing group, while four patients had recurrent bleeding and 15 patients died or had acute cardiovascular disease in the discontinuing group. The differences in these results were statistically significant (P = 0.022, P = 0.000). The Kaplan-Meier survival curve indicated that the incidence of recurrent bleeding was higher in the continuing group, and the risk of death and developing acute cardiovascular disease was higher in the discontinuing group (log-rank test, P = 0.000 for both). Furthermore, COX regression multivariate analysis revealed that the hazard ratio (HR) for recurrent bleeding was 2.986 (95%CI: 1.067-8.356, P = 0.015) in the continuing group, while the HR for death or acute cardiovascular disease was 5.216 (95%CI: 1.035-26.278, P = 0.028) in the discontinuing group. CONCLUSION After the occurrence of peptic ulcer bleeding, continuing antithrombotics increases the risk of recurrent bleeding events, while discontinuing antithrombotics increases the risk of death and developing cardiovascular disease. This suggests that clinicians should comprehensively weigh the use of antithrombotics after peptic ulcer bleeding. INTRODUCTION Peptic ulcer is a highly prevalent illness [1], and it greatly threatens human health owing to its high morbidity and severe complications [2][3][4][5]. Among these complications, peptic ulcer bleeding is one of the most common clinical problems [6]. In recent years, despite the application of proton pump inhibitors (PPIs) [7][8][9] and Helicobacter pylori eradication [10][11][12], the morbidity of peptic ulcer bleeding has not decreased [13], at least partially due to the use of antiplatelet agents, anticoagulants, and thrombin inhibitors.
These drugs have recently been used extensively for the treatment of thromboembolic disease [14][15][16]. It has also been estimated that the usage of these drugs is increasing worldwide as cardiovascular morbidity rises in the aged population [17,18], which in turn leads to a high incidence of peptic ulcer bleeding [19,20]. Aspirin is an antithrombotic drug that has been widely applied in view of its benefit in preventing cardiovascular disease [21,22]. However, according to the Medication Guide [23], patients with cardiovascular disease are recommended to immediately discontinue aspirin after successful endoscopic treatment of peptic ulcer bleeding, in order to prevent death or an acute event. In a randomized double-blind study, Sung et al [24] found that recurrent bleeding events due to the continued usage of aspirin severely influence the prognosis of patients. Therefore, there is a dilemma in the clinical usage of antithrombotic drugs. Furthermore, there are few studies on antithrombotic usage for treating peptic ulcer bleeding patients, and there is increasing concern about these patients. Hence, we collected the clinical data of patients with peptic ulcer bleeding treated at our hospital in the recent five years, aiming to investigate the effect of continued administration of antithrombotic drugs and to identify the risk factors for prognosis. Study objects Patients with peptic ulcer treated at Tongren Hospital affiliated to Shanghai Jiao Tong University from January 1, 2009 to January 1, 2014 were included in this study. The study ended on December 31, 2014. Upper gastrointestinal hemorrhage was defined as hemoptysis, hematochezia, melena, fainting, or dizziness with anemia. The following patients were excluded: patients with non-peptic ulcer bleeding, esophageal varices, vascular dysplasia, esophageal or gastric cancer, or ulcer perforation; patients with peptic ulcer bleeding whose endoscopic treatment was unsuccessful; patients who did not receive antithrombotic drugs after successful therapy; and patients who received PPIs to prevent damage from antithrombotic drugs. The enrollment process is shown in Figure 1. Finally, a total of 167 patients were enrolled in this study. Based on whether drug administration was continued or discontinued after the ulcer bleeding healed following endoscopic treatment, the patients were divided into either a continuing group (n = 98) or a discontinuing group (n = 69). The continuing group comprised the patients who continued taking the drugs after the bleeding healed, while the discontinuing group comprised the patients who discontinued taking the drugs after the bleeding healed. All patients in this study provided a signed informed consent form, and the study was approved by the hospital ethics committee. Study methods The clinical data of the patients, including demographic data and complications, were analyzed statistically. The time to end point was strictly calculated. The primary end point was recurrent bleeding events within 30 d, including hemoptysis, melaena, a hemoglobin drop of > 2 g/dL within 24 h, and unstable hemodynamics (systolic blood pressure ≤ 90 mmHg or heart rate ≥ 110 beats/min). Patients with one or more of the aforementioned characteristics were defined as having reached the primary end point. Secondary outcomes were death, acute cardiovascular disease, acute myocardial infarction, and ischemia or transient ischemia.
The number of patients with different outcomes was counted, and the differences were compared statistically. Survival times were collected to calculate the survival rate. Statistical analysis All data were analyzed using SPSS 19.0 software. Age, hemoglobin levels, and body mass index (BMI) measurements are expressed as mean ± SD. Categorical data are expressed as percentages. Measurement data following a normal distribution were compared by the t-test, and frequency data were compared by the χ2-test. The Kaplan-Meier method was applied to calculate the survival rate and draw the survival curve, and differences were compared using the log-rank test. The multivariable COX proportional regression model was applied to analyze the risk factors for prognosis in patients with peptic ulcer bleeding. P < 0.05 was considered statistically significant. Baseline data of patients with peptic ulcer bleeding After screening, a total of 167 patients with peptic ulcer bleeding were enrolled into this study. Among these patients, 98 continued receiving antithrombotic drugs and 69 discontinued. The average age of the patients who continued and discontinued receiving antithrombotic drugs was 70.8 ± 11.4 years and 62.4 ± 12.0 years, respectively (P = 0.000). The percentage of patients with a history of smoking or alcohol abuse was significantly higher in the continuing group than in the discontinuing group (P = 0.012). Furthermore, the rate of cardiovascular complications was significantly higher in the continuing group (P = 0.000; Table 1). Comparison of various outcomes between the two groups Recurrent bleeding occurred in 18 patients in the continuing group and in four patients in the discontinuing group. Death or acute cardiovascular disease occurred in three patients in the continuing group and in 15 patients in the discontinuing group. The differences in the rates of primary and secondary outcomes between the two groups were statistically significant (P = 0.018, P = 0.000; Table 2). Survival curve in the two groups Kaplan-Meier results indicated that bleeding occurred more frequently in patients in the continuing group, while the survival rate was significantly higher in patients in the discontinuing group (log-rank test, P = 0.022, P = 0.000; Figure 2). Risk factors for prognosis of patients The multivariable COX proportional regression model indicated that continuing intake of antithrombotic drugs increased the risk of recurrent bleeding events (95%CI: 1.067-8.356, OR = 2.986, P = 0.015), while discontinuing intake of antithrombotic drugs increased the risk of death or acute cardiovascular disease (95%CI: 1.035-26.278, HR = 5.216, P = 0.028). DISCUSSION Peptic ulcer is one of the most common clinical gastrointestinal tract diseases at present [6,25], and it is generally induced by damage to the gastric or duodenal mucosa. Gastric acid and protease play a crucial role in disease progression [26,27]. The aged population accounts for most of the cases, and ulcer bleeding, perforation, and pyloric obstruction are the most common complications [28][29][30][31]. Helicobacter pylori infection, excessive secretion of gastric acid, and excessive antithrombotic drug intake often trigger the occurrence of ulcer bleeding [32][33][34]. In recent years, peptic ulcer morbidity has increased only slowly owing to advances in medical technology [35]; however, the incidence of ulcer bleeding has been continuously increasing [36,37]. Cardiovascular disease is defined as ischemic or hemorrhagic disease occurring in any tissue due to atherosclerosis and increased blood viscosity [38]. In view of its high morbidity and mortality, more and more people, even healthy people, tend to take antithrombotics to prevent and reduce its risk [39]. However, excessive drug usage will instead increase the risk of bleeding in patients with ulcer bleeding. As the aged population has an increased need for prevention of acute cardiovascular disease, aspirin and other antithrombotics continue to be broadly used [40][41][42]. It has been reported that most patients with established cardiovascular disease ignore the risk of aspirin and continue to take aspirin or other antithrombotics for secondary prevention [43,44]. Moreover, the literature has shown that cardiovascular complications occur more frequently in patients who have discontinued antiplatelet drug therapy than in patients who continue this therapy [45]. Nevertheless, continuing aspirin or other antithrombotic drugs increases the risk of hemorrhagic complications in surgery [46,47]. Accordingly, clinicians find it difficult to balance the risk of cardiovascular disease against that of hemorrhagic complications. In addition, there are few studies on the prognosis of patients with peptic ulcer bleeding. Hence, the clinical data of patients with peptic ulcer bleeding treated at our hospital were collected and analyzed to discuss the prognosis of these patients, hoping to provide guidance for clinical practice. Age, smoking, and alcohol abuse influence the usage of antithrombotics in patients with peptic ulcer bleeding Our study revealed that patients were older in the continuing group than in the discontinuing group, indicating that aged patients need more antithrombotics, which is consistent with the current social situation. The number of patients with cardiovascular complications was higher in the continuing group than in the discontinuing group, in agreement with previous reports that patients with an established disease tend to continue taking drugs. In addition, there was a difference in smoking and alcohol abuse between the two groups, with more such patients in the discontinuing group; it is plausible that patients taking drugs tended to reduce smoking or alcohol consumption. Taking antithrombotics affects survival rate in patients with peptic ulcer bleeding Our study indicated that patients who continued to take the drugs had a higher risk of recurrent bleeding events. In contrast, patients in the discontinuing group had a higher risk of death or acute cardiovascular disease. This result is inconsistent with recent studies reporting no difference between the two groups. On one hand, our follow-up time did not have a limit, whereas patients with less than two months of follow-up were excluded in the study conducted by Kim et al [48]; furthermore, it has been widely accepted that the recurrent bleeding time is shorter than the normal bleeding time. On the other hand, the numbers of patients in these two studies are different. Consistent with our results, Sung et al [24] reported a higher incidence of recurrent bleeding events in patients continuing aspirin in a randomized double-blind study (a likelihood ratio of nearly 2). Through retrospective research, we obtained similar results (a likelihood ratio of nearly 3) [49], which further supports this view. This implies that more attention should be paid to patients who continue taking antithrombotics.
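To make the survival analysis described in the Statistical analysis section concrete, the following is a minimal, illustrative sketch of how a Kaplan-Meier comparison, a log-rank test, and a multivariable Cox model could be reproduced. It is not the authors' SPSS workflow: the per-patient table, the column names, and the toy follow-up data are invented for the example, and the open-source `lifelines` package is used in place of SPSS.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Toy data: follow-up time (days), recurrent-bleeding event (1 = yes),
# and whether the patient continued antithrombotics (1 = continuing group).
df = pd.DataFrame({
    "time_days":  [30, 120, 45, 400, 15, 365, 200, 90, 60, 300],
    "rebleeding": [1,  0,   1,  0,   1,  0,   0,   1,  0,  1],
    "continuing": [1,  1,   1,  0,   1,  0,   0,   1,  0,  0],
    "age":        [72, 68,  75, 61,  80, 59,  63,  70, 65, 58],
})

# Kaplan-Meier curve per group, compared with the log-rank test.
for label, grp in df.groupby("continuing"):
    km = KaplanMeierFitter()
    km.fit(grp["time_days"], event_observed=grp["rebleeding"],
           label=f"continuing={label}")
    print(label, "median time to rebleeding:", km.median_survival_time_)

cont, disc = df[df.continuing == 1], df[df.continuing == 0]
lr = logrank_test(cont["time_days"], disc["time_days"],
                  event_observed_A=cont["rebleeding"],
                  event_observed_B=disc["rebleeding"])
print("log-rank p =", lr.p_value)

# Multivariable Cox proportional hazards model for recurrent bleeding;
# all remaining columns (continuing, age) are used as covariates.
cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="rebleeding")
print(cph.summary[["exp(coef)", "p"]])  # hazard ratios and p-values
```

With the real data set, the same fit for the secondary end point (death or acute cardiovascular disease) would simply use a different event column.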
Limitations and prospects This study has some limitations. First, the limited number of patients may not sufficiently support our conclusions. Second, we did not distinguish between different antithrombotic drugs, although many studies have shown that single versus combined drug use can make an obvious difference. Finally, the exact time of bleeding was lacking, so we could not determine the precise time at which drugs were discontinued or continued. In conclusion, our results demonstrate that after the occurrence of peptic ulcer bleeding, continuing the intake of antithrombotic drugs increases the risk of recurrent bleeding events, while discontinuing the drugs increases the risk of death and acute cardiovascular events. This indicates that clinicians must carefully weigh the benefits and risks when using antithrombotics for treating patients with peptic ulcer bleeding. COMMENTS Background Peptic ulcer is one of the most common gastrointestinal diseases and is generally induced by damage to the gastric or duodenal mucosa. Gastric acid and protease play a crucial role in disease progression. The aged population accounts for most of the cases, and ulcer bleeding, perforation, and pyloric obstruction are the most common complications. Helicobacter pylori infection, excessive secretion of gastric acid, and excessive antithrombotic drug intake can trigger the occurrence of ulcer bleeding. In recent years, peptic ulcer morbidity has increased only slowly owing to advances in medical technology, but the incidence of ulcer bleeding has been increasing all the time. Cardiovascular disease is defined as ischemic or hemorrhagic disease occurring in any tissue due to atherosclerosis and increased blood viscosity. In view of its high morbidity and mortality, more and more people, even healthy people, tend to take antithrombotics to prevent and reduce its risk. However, excessive drug usage instead increases the bleeding risk in ulcer bleeding patients. Research frontiers As the aged population has an increased need for prevention of acute cardiovascular disease, aspirin and other antithrombotics are broadly used. It has been reported that most patients with established cardiovascular disease ignore the risk of aspirin and still insist on taking aspirin or other antithrombotics for secondary prevention. Moreover, the literature shows that cardiovascular complications occur more readily in patients who discontinue antiplatelet drug therapy than in those who continue it. Nevertheless, continuing aspirin or other antithrombotic drugs increases the risk of hemorrhagic complications in surgery. Accordingly, clinicians find it difficult to balance the risk of cardiovascular disease against that of hemorrhagic complications. Innovations and breakthroughs The authors investigated the prognosis and risk factors in peptic ulcer bleeding patients. Moreover, they showed two survival curves with disparate outcomes to demonstrate the difference in survival between patients continuing and discontinuing antithrombotics. These results suggest that clinicians must pay more attention to the usage of antithrombotic drugs. Applications This study demonstrates that, after the occurrence of peptic ulcer bleeding, continuing the drugs will increase the risk of recurrent bleeding events, while discontinuing the drugs will increase the risk of death and acute cardiovascular events, indicating that clinicians must weigh the risks and benefits when using antithrombotics to treat ulcer bleeding patients.
Peer-review

Patients with peptic ulcer bleeding were enrolled in this study, and their clinical information was analyzed statistically. The authors found that continuing antithrombotic drugs in patients with peptic ulcer bleeding increased the risk of recurrent bleeding events, while discontinuing the drugs increased the risk of death or cardiovascular events. The results indicate that clinicians should balance the usage of antithrombotics to reduce risk in patients with peptic ulcer bleeding.
2018-04-03T02:04:38.249Z
2017-01-28T00:00:00.000
{ "year": 2017, "sha1": "97d7ba8df1c99e638ef577a2f1beec96bb463efb", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3748/wjg.v23.i4.723", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "97d7ba8df1c99e638ef577a2f1beec96bb463efb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15921383
pes2o/s2orc
v3-fos-license
Character Values of the Sidelnikov-Lempel-Cohn-Eastman Sequences

Binary sequences with good autocorrelation properties and large linear complexity are useful in stream cipher cryptography. The Sidelnikov-Lempel-Cohn-Eastman (SLCE) sequences have nearly optimal autocorrelation. However, the problem of determining the linear complexity of the SLCE sequences is still open. Our approach is to exploit the fact that character values associated with the SLCE sequences can be expressed in terms of a certain type of Jacobi sum. By making use of known evaluations of Gauss and Jacobi sums in the "pure" and "small index" cases, we are able to obtain new insight into the linear complexity of the SLCE sequences.

Introduction

Let a = a_0 a_1 a_2 . . . be a sequence over a field F. We say that a is periodic if there is an integer v > 0 such that a_i = a_{v+i} for all integers i ≥ 0. If v is the smallest such integer, then we say that a has period v. Periodic sequences with certain properties are useful in stream cipher cryptography. A list of general design parameters for cryptographic sequences is given at the end of Section 5.1 in [17]. A good sequence has a long period and ideally should possess two statistical properties known as the balance property and the run property (Properties R-1 and R-2 from [17], respectively). Furthermore, sequences should possess good correlation properties. Individual sequences should have low-valued autocorrelation (Property R-3 from [17]), and sets of sequences should have low-valued cross-correlation. Sequences should also have large linear complexity (large linear span). We will not discuss the run property or the low-valued cross-correlation property in this paper.

It is important that the numbers of zeroes and ones in the first v elements of a binary sequence of period v differ by at most one [17]. This is the balance property.

It is possible to define autocorrelation for sequences with elements from various different fields (see [17]). But in this paper, we will discuss only autocorrelation of sequences defined over F_2. Thus, we assume that a is a sequence with elements in F_2. We define the autocorrelation function C_τ of a by

C_τ(a) = Σ_{i=0}^{v−1} (−1)^{a_{i+τ} + a_i},

where τ ∈ {0, ..., v − 1} and indices are taken modulo v. From a cryptographic standpoint, it is important that the maximum out-of-phase autocorrelation of the sequence be as small as possible.

Let ℓ be the smallest integer for which there exist c_1, . . . , c_ℓ ∈ F such that −a_i = c_1 a_{i−1} + · · · + c_ℓ a_{i−ℓ} for each i ≥ ℓ. In other words, let ℓ be the length of the smallest linear feedback shift register that can be used to generate the sequence a (see [17]). Then we say that ℓ is the linear complexity of a. Linear complexity is one of the most important design parameters for cryptographic sequences: using the Berlekamp-Massey algorithm, one can deduce the entire sequence from 2ℓ of its consecutive elements [17]. Ideally, the linear complexity of a sequence would be nearly as large as its period.

The polynomial c(x) = 1 + c_1 x + · · · + c_ℓ x^ℓ ∈ F[x] is called the characteristic polynomial of a. Let A(x) = a_0 + a_1 x + · · · + a_{v−1} x^{v−1}. It is well known (see for example [17] and [24]) that a has characteristic polynomial

c(x) = (x^v − 1) / gcd(x^v − 1, A(x))   (1.1)

and linear complexity

ℓ = v − deg(gcd(x^v − 1, A(x))).   (1.2)

As discussed in [24], the computation of gcd(x^v − 1, A(x)) is harder when the characteristic of F divides v than when it does not.
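To make (1.1) and (1.2) concrete, the following Python sketch (not from the paper) computes the linear complexity of a small binary sequence by representing polynomials over F_2 as bit masks and computing deg(gcd(x^v − 1, A(x))); the example sequence is chosen only for illustration.

```python
# Linear complexity of a binary sequence of period v over F_2,
# via l = v - deg(gcd(x^v - 1, A(x))).  Polynomials over F_2 are
# stored as Python ints: bit i is the coefficient of x^i.

def deg(f):
    return f.bit_length() - 1          # deg(0) = -1 by convention

def poly_mod(a, b):
    while a and deg(a) >= deg(b):
        a ^= b << (deg(a) - deg(b))    # subtract (XOR) a shifted copy of b
    return a

def poly_gcd(a, b):
    while b:
        a, b = b, poly_mod(a, b)
    return a

def linear_complexity(bits):
    v = len(bits)
    A = sum(bit << i for i, bit in enumerate(bits))   # A(x)
    M = (1 << v) | 1                                  # x^v - 1 = x^v + 1 over F_2
    if A == 0:
        return 0
    return v - deg(poly_gcd(M, A))

example = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]   # a small period-10 example sequence
print(linear_complexity(example))
```

Because the characteristic 2 divides the period in this toy example, x^v − 1 has repeated factors over F_2, which is exactly the difficulty noted above.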
For if the characteristic of F divides v, then one must not only find the common factors of x v − 1 and A(x) but also determine the multiplicity with which they divide gcd(x v − 1, A(x)). In this paper, we study a class of sequences defined over F 2 that were discovered by Sidelnikov [33] and rediscovered by Lempel, Cohn, and Eastman [26]. Following [24], we refer to these sequences as Sidelnikov-Lempel-Cohn-Eastman sequences (or SLCE sequences). As the authors of [24] remark, SLCE sequences are some of the best even length sequences: they have the same number of zeroes as they do ones, and they have nearly optimal autocorrelation properties [26]. In fact, since circulant Hadamard matrices seem not to exist [27], the autocorrelation properties of the SLCE sequences may in fact be optimal. We now define the SLCE sequences, and in so doing, we fix notation (for p, q, m, α, s, and S 2 (x)) that we use throughout the paper. Definition 1.1. Let p an odd prime, m a positive integer, and q = p m . Let α be a primitive element of the finite field F q . An SLCE sequence s = s 0 s 1 s 2 . . . of period q − 1 over F 2 is defined as follows: For Since the SLCE sequences have good autocorrelation and balance properties, it makes sense to study their linear complexity. Since these sequences are binary, it is natural to determine their linear complexity over F 2 . The study of the linear complexity of the SLCE sequences over F 2 began with [20] and was continued in [24] and [31]. However, this problem has turned out to be rather difficult. There are at least two reasons for this. For one thing, since q − 1 is always even, the characteristic of F 2 divides the periods of the sequences. But there is also another problem, which is discussed in the concluding section of [24]. Many well-known sequences correspond (in a sense) to reasonably well-behaved combinatorial objects such as difference sets, divisible difference sets, and partial difference sets (see [8] for difference sets and divisible difference sets, and see [28] for partial difference sets). As a result of this correspondence, explicit formulae have been found for the linear complexity of these sequences (see, for example, [14]). However, the SLCE sequences do not correspond to any of these types of combinatorial objects. Rather, they correspond to combinatorial objects called almost difference sets that are, in a sense, more general and about which much less is presently known (see [5] for background on almost difference sets). The authors of [20], [24], and [31] were able to obtain conditions under which certain polynomials divide gcd(x q−1 +1, S 2 (x)). In light of (1.1) and (1.2), such results provide some insight into the characteristic polynomials of the SLCE sequences over F 2 and yield upper bounds on the linear complexity of these sequences. The authors of [24] also computed gcd(S 2 (x), x q−1 + 1) in a number of cases using MAGMA. However, much still remains to be learned about the divisors of these polynomials. Indeed, the authors of [24] mentioned that it would be nice to obtain new divisibility results giving conditions under which certain polynomials divide gcd(S 2 (x), x q−1 + 1). We obtain more results of this type in this paper. The results from [20] and [24] are based on a representation of the elements of the SLCE sequences in terms of certain quadratic character values. 
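As an illustration of Definition 1.1, the sketch below uses the standard Sidelnikov construction over a prime field (q = p, m = 1), with the convention, assumed here, that s_i = 1 exactly when α^i + 1 is a nonzero quadratic non-residue and s_i = 0 otherwise, including when α^i + 1 = 0. For p = 11 and α = 2 it prints the sequence, checks the balance property, and lists the periodic autocorrelation values.

```python
# Sketch of the Sidelnikov / SLCE construction over a prime field F_p (m = 1).
# Assumed convention: s_i = 1 iff alpha^i + 1 is a nonzero quadratic
# non-residue mod p; in particular s_i = 0 when alpha^i + 1 == 0.

p = 11        # q = p^m with m = 1
alpha = 2     # 2 is a primitive root modulo 11
v = p - 1     # period q - 1

squares = {pow(x, 2, p) for x in range(1, p)}
s = []
x = 1
for i in range(v):
    y = (x + 1) % p
    s.append(1 if (y != 0 and y not in squares) else 0)
    x = (x * alpha) % p

print("sequence:", s)
print("ones:", sum(s), "zeros:", v - sum(s))      # balance property

# Periodic autocorrelation C_tau = sum_i (-1)^(s_{i+tau} + s_i)
for tau in range(v):
    c = sum((-1) ** (s[(i + tau) % v] + s[i]) for i in range(v))
    print("C_%d = %d" % (tau, c))
```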
Using this representation in conjunction with certain facts concerning the cyclotomic numbers of order 2, the authors of [20] and [24] were able to gain some insight into the characteristic polynomials of these sequences. Furthermore, the authors of [24] showed that under certain conditions, the problem of determining whether or not a certain polynomial divides gcd(x q−1 + 1, S 2 (x)) is equivalent to determining congruence classes of certain character sums known as Jacobsthal sums. The authors of [31] used known evaluations of cyclotomic numbers in certain special cases to obtain a number of new divisibility conditions. By contrast, the approach of this paper is based on an expression of character values associated with the SLCE sequences (in a manner to be specified later) in terms of certain Jacobi sums (see Theorem 3.1 below). In fact, the problem of determining whether certain polynomials divide gcd(x q−1 + 1, S 2 (x)) turns out to be equivalent to determining the congruence classes of these Jacobi sums modulo certain prime ideals in certain algebraic number fields. Jacobi sums are closely related to both cyclotomic numbers and Jacobsthal sums (see [7, Chapters 2 and 6]), so it is perhaps not surprising that the problem can be interpreted in these various different manners. Nonetheless, our method does have some virtues. At present, the Jacobsthal sum condition from [7] only applies when q ≡ 1 (mod 4), and calculation of the cyclotomic numbers of order t is quite complicated when t is large. Thus, our representation of the problem in terms of Jacobi sums provides a convenient means by which to harness the information from known evaluations of Gauss and Jacobi sums. Indeed, by making use of such evaluations, we are able to obtain divisibility conditions different than those from [20], [24], and [31] (see Theorems 4.1 and 4.2 below). We should also note that since the problem of determining the linear complexity of the SLCE sequences over F 2 is rather difficult, many authors have turned to the important work of calculating the linear complexity of these sequences over other fields. For instance, since the SLCE sequences are constructed using the finite field F q , several authors have studied the linear complexity of these sequences over F p (see [18], [19], [16], [4], [9], [23], [12], [3], and [11]; some of the papers in fact deal with closely related questions). The problem of determining the linear complexity of the SLCE sequences over non-prime fields has also been considered [10]. Preliminary Results We introduce some concepts and list some preliminary results that we use throughout the paper. Let G denote a finite Abelian group of exponent v * . The integral group ring Z[G] consists of all formal sums g∈G a g g, where a g ∈ Z and with addition and multiplication defined as follows: For any subset T ⊆ G, we identify T with the group ring sum of all the elements in T ; indeed, we refer to this sum as T. Notation 2.1. Let n be a positive integer. We write ζ n to denote a primitive, complex nth root of unity. Sometimes we write ζ to refer to a (not necessarily primitive) root of unity. A group character is a homomorphism χ : G → ζ v * . Such a homomorphism can be extended by linearity to a map from Z[G] to Z[ζ v * ]. For a discussion of the use of characters in the theory of difference sets, see [8]; for a discussion of characters over finite fields, see [22]. We also refer to the group ring element D ∈ Z[F * q ] as S D (α). We adopt the following convention. For an integer i ∈ {1, . 
. . , p − 1}, we refer to the corresponding element of F * p by italicizing i . The following result, due to Lempel, Cohn, and Eastman [26, proof of Theorem 5] plays a fundamental role in our work. We need several results concerning cyclotomic fields. First, we fix some notation. Notation 2.2. Let k be a positive odd divisor of q − 1, and let f denote the multiplicative order of 2 modulo k, so that f is the smallest positive integer for which k|2 f − 1. Let φ(k) denote the Euler phi-function, which is the number of positive integers less than k and relatively prime to k. Theorem 2.2. In the ring of integers Z[ζ k ] of the cyclotomic field Q(ζ k ), the prime ideal factorization of the ideal 2 is given by where P 1 , . . . , P φ(k)/f are distinct prime ideals, and for every i = 1, . . . , φ(k)/f, and γ / ∈ P, then there exists a unique (not necessarily primitive) kth root of unity ζ such that We note that for any quadratic field K, there exists a unique square-free integer n such that K = Q( √ n), see [2, p. 95] or [22, p. 188]. For the proof of the following result, see [2, p. 96] or [22, p. 189]. Theorem 2.4. Let n ≡ 1 (mod 4). Let K = Q( √ n) be a quadratic field. Then the ring O K of algebraic integers in K is given by The following result is a special case of Theorem 10.2.1 from [2, pp. 242-245]. Theorem 2.5. Let K = Q( √ n) be a quadratic field. If n ≡ 1 (mod 8), then the ideal 2 factors into a product of two prime ideals as Further, O K /P i is a finite field of order 2 for i = 1, 2. Theorem 2.6. Let ℓ be a prime. Then Q (−1) (ℓ−1)/2 ℓ is the unique quadratic field contained in the cyclotomic field Q(ζ ℓ ). We now turn our attention to character sums. We note that for every (not necessarily primitive) kth root of unity ζ, there exists a unique character χ : F * q → ζ k of order dividing k such that χ(α) = ζ [22, Chapter 8]. Notation 2.4. Let χ : F * q → ζ k denote the unique character mapping α to ζ k , and let ρ be the (unique) quadratic character on F * q . Note that χ has order k. Definition 2.3. Let χ be the unique character given above, and let φ be another nontrivial character of F * q . We define the Jacobi sum J(χ, φ) by We shall be particularly interested in the Jacobi sum We mention the following congruence (see [7,Theorem 2.18]). The following identity is also important for our work (see [7,Theorem 2.1.4]). It is well known that |J(χ, φ)| = √ q, but in general, the exact value of J(χ, φ) is not known (and, in particular, the exact value of the Jacobi sum K(χ) is not known). Such sums have been evaluated in certain special cases. For instance, evaluations are known for Jacobi sums over characters of small order. This information has already been used to obtain evaluations for cyclotomic numbers [7,Chapter 2] which were in turn used in [31] to obtain divisibility conditions for the SLCE sequences. So, we do not use these evaluations here. Another case in which evaluations are known is that of the pure Jacobi sums. A Jacobi sum is called pure if some positive integral power of it is real. Such sums were studied in [1] and [32]. Indeed, in light of (2.2), the results from [1] and [32] can be used to evaluate certain Jacobi sums of the type K(χ). The authors of [1] and [32] showed that if m is odd, then no Jacobi sum defined on F p m can be pure. They completely determined conditions under which Jacobi sums are pure when m = 2. It follows from Theorem 2.7 and Notation 2.2 that if q = p 2 , then our sum K(χ) is pure only when k is an odd divisor of p + 1. 
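As a small numerical sanity check of Definition 2.3, using the standard form J(χ, φ) = Σ_x χ(x)φ(1 − x) with χ(0) = φ(0) = 0 (assumed here), the sketch below builds multiplicative characters on F_13 from a primitive root and verifies that |J(χ, φ)| = √q when χ, φ, and χφ are all nontrivial; the parameters are chosen only for illustration.

```python
# Numeric check of the Jacobi sum J(chi, phi) = sum_x chi(x) phi(1 - x) over F_p.
import cmath

p = 13
g = 2                       # 2 is a primitive root modulo 13

# Discrete log table: dlog[g^j mod p] = j
dlog = {}
x = 1
for j in range(p - 1):
    dlog[x] = j
    x = (x * g) % p

def character(order):
    """Multiplicative character of the given order (dividing p - 1)."""
    zeta = cmath.exp(2j * cmath.pi / order)
    def chi(a):
        a %= p
        if a == 0:
            return 0
        return zeta ** (dlog[a] % order)   # value depends only on dlog mod order
    return chi

chi = character(3)          # cubic character
phi = character(2)          # quadratic character

J = sum(chi(x) * phi(1 - x) for x in range(p))
print(J, abs(J), p ** 0.5)  # |J| should numerically equal sqrt(p)
```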
In this case an explicit evaluation of K(χ) is given in [6, Theorem 2.14]. Theorem 2.8. Let m = 2, and let k be an odd divisor of p + 1. Then K(χ) = p. The evaluation in Theorem 2.8 is a special case of a more general result. To explain why, it is necessary to introduce another type of character sum. Definition 2.4. Let ǫ be a character on F q . We define the Gauss sum G(ǫ) by where tr is the field trace from F q to F p . The following identity relates Gauss and Jacobi sums (see [22, Theorem 2.1.3] or [7]). If χφ is not the trivial character, then In particular, since χ is a character of order greater than 2, we have Let s ≥ 1 be an integer, and let χ ′ := χ • N, where N is the field norm from F * q s to F * q . Then χ ′ is a character of F q s of order k, which is called a lifted character. Note that every character on F q s of order k can be obtained as a lifted character from a character of F q of order k. We mention the following important identity, which is known as the Hasse-Davenport Lifting Theorem (see [7,Theorem 11.5.2]). The problem of evaluating Gauss sums is just as hard as the problem of evaluating Jacobi sums. But explicit evaluations have been obtained in a number of special cases. The first of these evaluations is due to Gauss, who evaluated G(ρ) when q = p (i.e. when m = 1). His evaluation can be extended to a general (odd) prime power q = p m [7, Theorem 11.5.4] as A Gauss sum is called pure if some positive integral power of it is real. The following theorem completely classifies pure Gauss sums (see [7,Section 11.6]). We now assume that there exists a positive integer x such that p x ≡ −1 (mod k); indeed, we refer to the least such integer as t. Then, by Theorem 2.9, G(χ) is a pure Gauss sum. Since k is odd and p t + 1 is even, then k|p t + 1 ⇐⇒ 2k|p t + 1. Hence, since t is the smallest positive integer satisfying p t ≡ −1 (mod k), then t is also the smallest positive integer satisfying p t ≡ −1 (mod 2k). We note that χρ is a character of order lcm(2, k) = 2k. Thus, G(χρ) is a pure Gauss sum. Thus, in this case, we can use Theorem 2.9, (2.7), and (2.4) to evaluate the Jacobi sum K(χ). We note that by Theorem 2.9, m = 2ts for some positive integer s. Since the evaluation in (2.7) breaks into two cases, our evaluation also breaks into two cases. First, we assume that p ≡ 1 (mod 4). Then Let us consider the special case in which m = 2 and k|p + 1 (so that t = s = 1). Since p ≡ 1 (mod 4), it follows that (p t + 1)/2k is odd. Then by Theorem 2.8, the evaluation of K(χ) given above reduces to the evaluation K(χ) = p. Next, we assume that p ≡ 3 (mod 4). Then Again, let us consider the special case in which m = 2 and k|p + 1 (so that s = t = 1). Since p ≡ 3 (mod 4), it follows that (p t + 1)/2k is even. Then by Theorem 2.8, the evaluation of K(χ) given above reduces to the evaluation K(χ) = p. Corollary 2.1. Assume that there exist positive integers x such that p x ≡ −1 (mod k), and let t be the least such integer. Then there exists s ∈ N such that m = 2ts, and Finally, a third case in which there are known evaluations for Gauss and Jacobi sums is that of the small index Gauss and Jacobi sums. We will discuss the sums K(χ) in this context. Recall that Gal(Q(ζ k )) ∼ = (Z/kZ) * . Let σ p ∈ Gal(Q(ζ k )) be the automorphism mapping ζ k to ζ p k . 
Then, since the Froebenius map is an automorphism of F q fixing the elements of F p , we have that Thus, K(χ) is in the fixed field of σ p , and by the Fundamental Theorem of Galois Theory, this field has degree [(Z/kZ) * : p ] as an extension of Q. Since we know how to evaluate K(χ) when there exist positive integers x such that p x ≡ −1 (mod k), we can confine ourselves to the case in which there exist no such integers. Having made this assumption, we see that the quotient group (Z/kZ) * / p must contain the (non-identity) element −1 + p and so (by Lagrange's Theorem) must have even order. The small index assumption is the assumption that [(Z/kZ) * : p ] is a small positive integer. By making this assumption, we can infer that K(χ) lies in an algebraic number field of small degree, and can therefore use facts about such number fields to evaluate K(χ). Explicit evaluations have been obtained for Gauss sums in the index 2 and index 4 cases. It is sometimes possible to translate these Gauss sum evaluations into evaluations of K(χ). Thus, (Z/kZ) * contains at most 3 elements of order 2, and it follows easily from the Chinese Remainder Theorem that (since k is odd) either k = ℓ r 1 1 or k = ℓ r 1 1 ℓ r2 2 for some odd primes ℓ 1 and ℓ 2 , and some positive integers r 1 and r 2 . The following evaluation is due to Langevin [25]. We note that the congruence condition ℓ ≡ 3 (mod 4) is actually forced by the index 2 assumption, as Langevin demonstrates in his paper. Furthermore, the hypothesis in the evaluation below that ℓ > 3 is only necessary to obtain a nice expression for the Gauss sum in terms of the class number of a certain quadratic field. We have rephrased Langevin's result in the manner in which it was stated in [34]. Theorem 2.10. Let k = ℓ r , where ℓ > 3 is a prime congruent to 3 (mod 4) and r is a positive integer. We suppose that [(Z/kZ) * : p ] = 2 and m = φ(k)/2. Then is the class number of Q( √ −ℓ), and the integers a and b satisfy the three conditions a, b ≡ 0 (mod p), 4p h = a 2 + ℓb 2 , and a ≡ −2p Furthermore, these conditions are sufficient to determine a completely and to determine b up to sign. In the above formula, in place of the expression a+b , where a ′ , b ′ ∈ Z. Note that The integers a and b in the version from [34] (and from Theorem 2.10 above) are obtained by setting a = 2a ′ − b ′ and b = b ′ . As a result, we also have the condition (not stated explicitly in our version of Theorem 2.10) that a ≡ b (mod 2). Note also that [(Z/2kZ) * : p ] = 2. Xia and Yang have evaluated index 2 Gauss sums over characters of order 2ℓ r [34]. Their result breaks into two separate cases: one in which ℓ ≡ 3 (mod 8) and one in which ℓ ≡ 7 (mod 8). We only make use of the result for the case in which ℓ ≡ 7 (mod 8). Theorem 2.11. Let k = ℓ r , where ℓ > 3 is a prime congruent to 7 (mod 8) and r is a positive integer. We supppose that [(Z/2kZ) * : p ] = 2 and m = φ(k)/2. Let ǫ be a character on F q of order 2k. Then Let us make a slight modification to our earlier hypotheses. Assume s is a positive integer, and let m = φ(k)s/2. So, we are now considering a larger class of prime powers p m . Let us set e = φ(k)/2, so that m = es. Let ℓ ≡ 7 (mod 8). We consider two cases. Character Values We show that the problem of finding gcd(S 2 (x), x q−1 + 1) is equivalent to determining the equivalence class of K(χ) modulo a certain prime ideal. 
Several authors have previously made use of complex group characters to determine the linear complexity of various classes of sequences (see, for instance, [29] and [14]). Notation 3.1. Since F * 2 f is a cyclic group of order 2 f −1, it has a subgroup of order k. Hence, the polynomial x k + 1 = (1 + x)(1 + x + · · · + x k−1 ) splits completely over F 2 f . Let β ∈ F 2 f be an element of order k, so that β is a root of 1 + x + · · · + x k−1 . Let I β (x) be the minimal polynomial of β over F 2 . Notation 3.2. Let ζ denote the unique primitive kth root of unity congruent to φ(β) (mod P). Let χ denote the unique group character mapping α to ζ. Let S z (x) be the polynomial in Z[x] obtained by replacing each coefficient of S 2 (x) with its counterpart (0 or 1) from Z. We now prove the result mentioned at the beginning of this section. As we show in the next section, this result enables us to derive several new divisibility results for the SLCE sequences. We proceed by obtaining an expression for χ(D c ) in terms of K(χ). Theorem 3.1. We have Proof. The reasoning in the next two sentences is taken from [7, Theorem 2.14], where it serves a different purpose. Let γ ∈ F * q be fixed. An element x ∈ F * q satisfies the equation x(1 −x) = γ if and only if it satisfies the equation ( Hence, the number of solutions of the equation x(1 −x) = γ in F * q is 1 + ρ(1 −4 γ), where ρ denotes the (unique) quadratic character on F q . It follows that every element of F * q is represented either twice or zero times in the form x(1 − x), save for 4 −1 , which is represented once. This makes sense since there are q − 2 choices of x for which x(1 − x) ∈ F * q , and q − 2 is an odd number. Making use of Theorem 2.1, we see that So, we deduce that Note that, by (2.1), K(χ) ≡ 1 (mod 2), so that the value we have ascribed to χ(D c ) is indeed an element of Z [ζ k ] . The result now follows by equivalence (3.1). Proof. Let ν ∈ F * q be an element of order n, where n|k. Since, p t ≡ −1 (mod k), it follows that p t ≡ −1 (mod n). Thus, the equation p x ≡ −1 (mod n) has a positive integer solution x. Let t ′ be the smallest such solution. There exists unique integers y, r ≥ 0 such that t = yt ′ + r, r < t ′ . Furthermore, Since r < t ′ , the above equation is only possible if r = 0. Hence, t ′ |t. Now, by Theorem 2.9, there exists a positive integer s ′ such that m = 2t ′ s ′ , so that 2t ′ s ′ = 2ts = 2yt ′ s, and hence s ′ = ys. Consequently, we have s ≡ 0 (mod 2) =⇒ s ′ ≡ 0 (mod 2). So, it follows from Lemma 4.1 that the conditions guaranteeing that I β (x)|S 2 (x) are also sufficient to guarantee that I ν (x)|S 2 (x), where ν is any element of order dividing k. Thus, these conditions are sufficient to guarantee that 1 + x + · · · + x k−1 |S 2 (x). And, of course, they are also necessary. The result follows. We now give some examples to illustrate Theorem 4.1. We use Theorem 4.1 to interpret some of the numerical results from [24]. We now apply the evaluations of the Jacobi sums of index 2 given in Corollary 2.2 to deduce new divisibility conditions. Lemma 4.2. Let k = ℓ r , where ℓ is a prime congruent to 7 (mod 8) and r is a positive integer. We suppose that [Z/kZ : p ] = 2 and m = φ(k)s/2, where s is a The explicit conditions given in Theorem 2.10 are sufficient to determine a completely and to determine b up to sign. In order to determine the sign of b, one must use Stickleberger's congruence [15,Lemma 3.5]. 
However, we cannot guarantee that the sign of b will be same for Gauss/Jacobi sums corresponding to different characters of order k [7, Section 11.2]. But, if we assume that b ≡ 0 (mod 4), then the residue class mod 4 of a+b 2 is unaffected by the sign of b. We now give an example to illustrate Theorem 4.2. But −37 ≡ 3 (mod 4), and so 1 + x + · · · + x 22 |S 2 (x). We conclude with a few remarks regarding the applicability of Theorem 4.2. The fastest way to compute the class number of Q( √ −ℓ) is via an algorithm due to Shanks, which requires at most O(ℓ 1/4+ǫ ) operations, where ǫ is any positive number; see [13,Section 5.4]. The class number of Q( √ −ℓ) can be used to obtain divisibility results whenever p satisfies [(Z/ℓZ) : p ] = 2, and it follows by Dirichlet's Theorem on primes in an arithmetic progression that there are infinitely many primes p for which this is true. When the class number h = 1, there exists a probabilistic polynomial time algorithm, known as the modified Cornacchia algorithm, that can be used to find the integers a and b satisfying 4p h = 4p = a 2 + ℓb 2 ; see [13,Section 1.5.2]. In the general case, Hardy, Muskat, and Williams have given a deterministic algorithm that finds a and b (up to sign) in at most O((4p h ) 1/4 (log4p h ) 3 (loglog4p h )(logloglog(4p h ))) operations [21].
2016-02-18T17:41:32.000Z
2016-02-18T00:00:00.000
{ "year": 2016, "sha1": "2417e0a0595eebdbe7d090d76611b4e87ee6e2c1", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1602.05888", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e77f980b5d849e8fba2658137a8bf21150fc4133", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
126782103
pes2o/s2orc
v3-fos-license
Exploring the Linkages Between Ecosystems and Human Health The linkages between human health and ecosystems are complex, dynamic, and political. For millennia ecosystems have provided humans with essential services such as food, water, shelter and medicine. At the same time, they have mediated the transmission of many diseases and posed a number of health risks. The vitality of ecosystem services for human health and well-being is well captured by Bernard Abraham, President of Weskit-Chi Aboriginal Trappers Association, when he commented on the importance of forest ecosystems to Aboriginal people. He observed that many Aboriginal people consider the forest as: “their food bank, drugstore, meat market, bakery, fruit and vegetable stand, building material centre, beverage supply, and the habitat for all of the creator’s creatures.”1 Introduction The linkages between human health and ecosystems are complex, dynamic, and political. For millennia ecosystems have provided humans with essential services such as food, water, shelter and medicine. At the same time, they have mediated the transmission of many diseases and posed a number of health risks. The vitality of ecosystem services for human health and well-being is well captured by Bernard Abraham, President of Weskit-Chi Aboriginal Trappers Association, when he commented on the importance of forest ecosystems to Aboriginal people. He observed that many Aboriginal people consider the forest as: "their food bank, drugstore, meat market, bakery, fruit and vegetable stand, building material centre, beverage supply, and the habitat for all of the creator's creatures." 1 Many Indigenous people across the world consider the health of the "country" to be intricately linked to human health and community health and well-being. This sentiment is not only true for Indigenous people, but for society as a whole. In addition, the intricate links between ecosystems and human health is expressed through aspects of the Indigenous culture, including views and notions of holistic health and well-being, and ecosystem-based cultural rites, and overall close proximity to nature. The World Health Organization captures this notion of holistic well-being when it defines health as a state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity (World Health Organization 1948). However, it is the Ottawa Charter for Health Promotion that makes explicit the connection between human health and ecosystem health through its identification of "stable ecosystems" and the "sustainable use of natural resources" as essential components for health improvement (WHO 1986). Despite this close association between human health and ecosystem health, recent evidence from the Millennium Ecosystem Assessment (MA 2005) suggests that global ecosystems are failing in their ability to continue to provide the services that are essential for human health and well-being, because of increasing human pressure on ecosystems worldwide. The report indicates that, over the past half century, human activities have altered the natural ecosystem more rapidly and extensively than in any comparable time in history. This increased pressure has resulted in close to 60 per cent of the world's ecosystem services being degraded or used unsustainably (ibid). This trend will likely continue to the extent that human beings continue to depend on ecosystems for both the necessities of life such as food and water, as well as the luxuries of diamond and caviar. 
What is required is recognition of the conjoint nature of society and ecosystems and how each cannot be managed separately. Human beings are integral to ecosystems, and for sometime now, the environmental sector has made use of integrated approaches to ecosystem management that have tried to balance community, economic, and environmental needs. However, the question that still exists is to what extent are these balanced? In addition, what has eluded many concerned with understanding and responding to the underlying causes of human-induced ecosystem degradation is their political and illusive nature. How we apportion blame to the causal factors responsible for ecosystem degradation, and poor health, must be subjected to rigourous analytical interpretations. The factors that have commonly been identified as responsible for ecosystem degradation focus on rapid population growth, abject poverty and poor land use practices in the global South, and over consumerism in the North. However, as has been demonstrated throughout this book, such factors are the outcome of underlying problems that are not readily apparent, and hence are insufficient to explain the causal basis of environmental problems. The causal basis of ecosystem degradation must be examined from the basis of the structural inequalities surrounding the use of and control of ecosystem services, and how such inequalities drive various land use practices that degrade ecosystems. For example, it is important to examine how states, corporate giants, mining and logging companies, and Northern interests monopolize the environments of weaker actors (e.g. local farmers, peasants, global South), forcing them to till and live on marginal lands. In an attempt to eke out a living, displaced or marginalized communities encroach on fragile and protected lands to "illegally" access natural resources, sometimes making use of "inappropriate" land use practices (Bryant and Bailey 1997;Bryant 1998). Also, in an attempt to increase productivity and maximize the potential of marginal lands, weaker actors may resort to intensive production systems, making use of fertilizers, pesticides, and intensive farming practices which not only adversely impact ecosystems, but consequently human health. In such contexts, then, the identification of "inappropriate land use practices" as one driving factor of ecosystem degradation completely misses the underlying reason for the use of inappropriate land use practices, and any attempt by "experts" or professionals to develop interventions or policies to correct this will be ineffective and may not prevent/correct ecosystem degradation. The causal basis of human-induced ecosystem degradation is complex and not readily apparent to external scientific experts. It is important that any perceived causal factor be examined from an historical, socio-political, and economic perspective, and examined through the lens of unequal power relations surrounding human-environment relationships. In addition, ecosystem degradation should be linked to broader processes such as unfair trade agreements, global prioritization of environmental problems and the desire for the purest gems, chocolate, coffee and caviars have contributed to destroying vast ecosystems, especially in tropical and developing regions of the world. 
The unfair trade agreements between the global South and the North, the re-colonization of Africa by China, and the lax environmental regulatory environment in poorer regions have all resulted in a monopoly of Southern ecosystems by multinational companies, resulting in the use of unsustainable methods of resource extraction and ecosystem degradation. One other dimension of examining the causal explanations of environmental problems that is of interest to critical scholars is how environmental problems such as ecosystem degradation comes to be identified, defined, and labeled. How do we come to understand ecosystems as "extensively degraded", or describe other environmental problems as "global crisis"? Critical scholars argue that the identification and causal explanation of environmental problems is not value-free nor is it ever politically neutral. They argue that the framings of environmental problems and their causal explanations are shaped by the social and political contexts within which they emerge, and so are never partial, but can be located. Critical environmental scholars caution against accepting scientific environmental knowledge claims as "fact", "accurate" and a true representation of reality, without re-evaluating these claims within the socio-political and historical contexts within which they are framed (Bryant 1998;Forsyth 2003;Peet and Watts 1996). With the exception of phenomena such as climate change, although this has come under scrutiny recently, critical scholars are concerned that by representing many environmental problems as "global" in scope and "crisis" in nature, we gloss over the particularities of environmental problems in specific localities, and fail to pay particular attention to the varying experiences and coping abilities of different population groups. We also fail to capture the micro-politics and power struggles surrounding access to, and use of natural resources at varying scales, and how such struggles shape people's interactions and experiences with the biophysical environment. Also, given that most scientific environmental knowledge serves as the basis for policy formulation, there are concerns that environmental policies and interventions that are based on "unreconstructed" scientific knowledge could fail to uncover the "real" causes of ecosystem degradation and end up proposing wrong interventions that could further augment existing inequities surrounding the use and control of those resources (Forsyth 2003). For example, in many parts of Africa, some natural resource management policies are still based on colonial policies, without reevaluating these policies within the current challenges and needs of today's communities. Within the circles of public health, similar critical perspectives have been used to interrogate public health knowledge claims, the constructions and explanations of certain public health problems, and how subject positions are constructed and labelled. Just like critical environmental scholars, critical public health scholars seek to illustrate that the emergence and social patterning of specific public health problems, especially those associated directly and indirectly with the environment, lie in the unequal power relations surrounding the use of, and control of environmental resources and the uneven distribution of the associated costs (e.g. pollution) of environmental activities. 
The concern here is to broaden the causal basis of the emergence of specific ecosystem-mediated health problems beyond exposure to disease vectors and microbes, to incorporating people-environment power dynamics and how such dynamics result in the exposure of weaker actors to environmental health risks. Explaining health problems from the basis of exposure to microbes and pathogens alone is equivalent to blaming the victim, while relieving important social and political factors that constrain the individual from freely making the decisions to avoid risk in the first place. Similarly, the uneven distribution of the costs of ecological activities and the resulting social patterning of poor health should be examined from the context of how unequal power relations allow powerful actors to displace their environmental costs to weaker actors through such acts as dumping toxic waste in other communities. These communities also are those with limited coping capabilities and have little resources to mitigate the adverse effects of these environmental pollutants. The above concerns illustrate the complex dynamics surrounding peopleenvironment relationships, illustrating that such issues cannot be understood through uni-lateral analytical procedures, but instead must be contextualized to reflect the temporal and spatial dimensions of such phenomena. Ecosystem degradation, the causal explanations, and the associated health problems are equally complex and should not be explained simplistically and uni-dimensionally. After all, ecosystems and society are conjoined and the activities within each sphere must be seen as constituted by, and from the other. Such inter-dependencies (political, social, economic, ecological dimensions) must always constitute the core of ecosystem-society-health investigations. The theoretical frameworks used to analyze human-environment interactions from such political, ecological, and social perspective fall outside the purview of pure ecological and health sciences disciplines, instead residing more with transdisciplinary fields such as health geography, environmental sociology, medical anthropology, and other related fields. Similarly critical perspectives have been applied to environmental issues in the form of poststructuralist or critical political ecology and to public health field in the form of critical public health. However, the extent to which the two fields have been brought together to examine issues at the interface of public health and environmental conditions is very limited. It is the goal of this book to draw on a variety of critical theoretical perspectives from the social, natural and health sciences to develop a rigourous theoretical framework that will allow for a critical examination of problems at the interface of environment and health, or simply referred to as ecohealth concerns. Most of the ecohealth literature has not engaged with such theoretical perspectives and hence lacks critical theoretical rigour in its analyses of environment and health phenomena. This book draws on critical social theory, including political ecology, feminist theories, and postcolonial and poststructuralist perspectives to examine environment and health issues from a critical perspective. In so doing, a new analytical framework called critical ecohealth is developed through the fusion of two theoretical perspectives: critical political ecology and critical public health. 
Critical ecohealth locates ecohealth problems, their causal explanations, the proposed interventions within a broader analytical framework, examining how they are framed, and drawing attention to their socio-political, economic, and historical antecedents. Prior to examining these theoretical frameworks in subsequent chapters, it is important to review some of the common associations between human activities, ecosystem change and how this influences human health and well-being. Ecosystem Services and Human Health The Millennium Ecosystem Assessment report (MA 2005) describes an ecosystem as a dynamic complex of plant, animal, and micro-organism communities and the nonliving environment interacting as a functional unit. Ecosystems including, farmlands, water bodies, woodlands, rangelands, and forests, produce services that are essential for human health and well-being. These services are usually classified into four categories: provisioning services, regulating services, cultural services, and supporting services (MA 2005). Provisioning services refer to the benefits derived from ecosystems such as food, freshwater, fibre, shelter, medicine, and fuel. These basic necessities underlie the sustenance of many communities. Many developing country nationals rely on the natural environment for medicinal plants, wildlife and other non-timber forest products. The second category, regulating services, refer to ability of ecosystems to regulate climate, purify freshwater, and regulate pest and diseases. The regulation of ecosystem processes can modify ecosystems in ways that influence the proliferation and transmission of disease vectors, such as mosquitoes or snails. Cultural services are those non-material benefits obtained from ecosystems, and include aesthetic, spiritual, educational, and recreational qualities. Cultural services provide a wide spectrum of benefits, since different cultures associate and interact with the environment in myriad ways. For example, for some, the natural landscape serves as an ideal space for healing, meditation, recreation and the performance of cultural rites. The close bond between many Indigenous communities and other local people, and the biophysical environment endows them with unique knowledge systems about the structure and functioning of these ecosystems. This local knowledge is an essential complement to scientific understanding of the environment. Lastly, supporting services are those that contribute to ecosystem processes such as primary production, soil formation, and nutrient recycling. Compared to other services, the benefits of supporting services are indirect and occur over a longer-time frame. The growing need to meet societal demand for food, shelter, livelihood and profits has resulted in increased pressure on ecosystems, and compromising their ability to continue to provide ecosystem services at an optimal level. Land use activities such as deforestation, clearance of virgin lands for agriculture and human settlement, irrigation, dam construction, road building, mining, wetland modification, and urbanization, have been identified as some of the key modifiers of ecosystems around the world. The modification of various ecosystems has resulted in the emergence and spread of a number of infectious diseases, and modified the transmission of endemic diseases (Patz et al. 2000). For example, the clearance of forests for agricultural purposes can disrupt the structure and functioning of ecosystems and lead to the emergence of infectious diseases. 
In Central Africa, the outbreak of Ebola, which killed hundreds of people and thousands of apes was linked to human migration into forested ecosystems where people came into contact with new microbes and animal reservoirs (Leroy et al. 2004). In Malaysia, agricultural activities have been linked with the emergence of Nipah virus (Lam and Chua 2002), while increased risk to Lyme disease in the northeastern United States has been associated with forest fragmentation, biodiversity loss, followed by suburban housing development (Schmidt and Ostfeld 2001). Also, with the relative ease of travel and transportation of goods and services around the globe, it does not take long for infectious diseases to spread from one corner of the globe to the other, as seen with the recent case of Severe Acute Respiratory Syndrome (SARS) and Swine Flu. Infectious diseases continue to be of grave concern, both in the developed and developing world. Within the past few years alone, the world has seen the emergence of new infectious diseases such as SARS, H1N1, and HIV/AIDS. Not only did the emergence and quick spread of these diseases cause global pandemonium and drain the health budgets of many regions, but also has raised concerns about the state of global public health security and the readiness of public health authorities to respond and contain the spread of these pandemics in a quick and effective manner. Factors such as rural-urban migration, globalization, North-South migration, trade, and the fast pace of travel all contribute to making this task a challenging, yet important one. According to a recent report, not only are infectious diseases spreading faster geographically than any period in history, but they also seem to be emerging at a quicker pace than before (WHO 2007). The report indicates that since 1970, new diseases have been emerging at an unprecedented rate of one or more per year, resulting in about 40 new diseases that were unknown about a generation ago. Also, within a 5-year period leading up to 2007, the World Health Organization had confirmed over 1100 epidemics events worldwide. This trend does not seem to be abating any time soon and re-iterates the need for a comprehensive understanding of the mechanisms through which human-induced ecosystem change adversely impacts human health. It is important to understand the clear linkages between our activities, what drives these activities, how these activities transform the ecosystem into a disease environment or a health-promoting environment, and how we are differentially impacted by such transformations. Such understanding must be informed by the social, political, and historical contexts within which ecosystem change occurs, so as to allow for the development of socially and biophysically relevant interventions. The sections below explore the linkages between some land use activities and how they shape human health outcomes. Land Fragmentation and Health Activities such as deforestation, clearance of virgin lands for agricultural purposes and human settlement, and road construction for mining and logging are some activities that have led to increased fragmentation of many terrestrial ecosystems. These land use activities disturb ecosystem balance and pre-existing conditions that serve to modulate the emergence and interaction of disease pathogens. This disturbed equilibrium brings humans into contact with new pathogens that can infect humans, livestock or wildlife (Wolfe et al. 2000). 
The emergence and re-emergence of many infectious diseases such as Chagas disease, trypanosomiasis, leishmaniasis, and onchocerciasis have been associated with land use changes (Molyneux 1998). Habitat changes also favor the emergence of zoonotic diseases and many mosquito-borne diseases (Gubler 2002). Recently, there has been increasing concern about the public health threat posed by zoonotic diseases. Zoonotic pathogens, that is, pathogens that can be transmitted between wild or domesticated animals and humans, have been identified as the most significant cause of emerging infectious diseases (Taylor et al. 2001). Taylor and colleagues observed that out of 1415 species of infectious organisms identified as pathogenic to human beings, 61% are zoonotic pathogens. Emerging infectious diseases such as Severe Acute Respiratory Syndrome (SARS), avian influenza, West Nile virus, HIV/AIDS, Nipah virus, Ebola, and hantavirus pulmonary syndrome are all associated with zoonotic pathogens. In general, zoonotic diseases are usually severe, with high fatality rates, and have no readily available cure, treatment or vaccine. Because zoonotic pathogens complete part of their natural life cycle in animal hosts, any human-induced activity that disturbs the equilibrium of wildlife habitats, such as encroachment into forested areas, is likely to facilitate the transmission of zoonotic pathogens between humans, wildlife, domestic animals, and plants (Daszak et al. 2001). Land use activities such as tropical deforestation, and the processes leading to it, have also been associated with the emergence and proliferation of diseases such as malaria, especially in Africa, Asia, and Latin America (Coluzzi 1994; Tadei et al. 1998). The clearance of forested areas for agriculture, rangelands and settlement allows people to inhabit previously uninhabited spaces, thus exposing them to new disease pathogens (Kalliola and Flores Paitán 1998). The construction of forest roads and the creation of culverts and other dugouts leave depressions that collect rainwater and serve as breeding grounds for mosquitoes (Patz et al. 2004). Also, mercury is naturally embedded in the soils of most rainforests; hence, soil erosion following downpours washes mercury residue into rivers and other water bodies, contaminating them. Such scenarios have led to contaminated fish in places like the Amazon (Fostier et al. 2000). Another example of the health implications of deforestation is noteworthy. In the northeastern United States, partial deforestation, followed by subsequent land use changes and human settlement patterns, led to the emergence of Lyme disease (Glass et al. 1995). Lyme disease is a bacterial disease that is transmitted by the bite of a deer tick. Rodents are the major reservoir hosts for the bacteria, while deer serve as hosts for the tick vectors (Steere et al. 2004). Lyme disease was first named in 1977, although it had been discovered earlier. Incidence has been reported in North America, Asia, and Europe (ibid). Finally, in addition to logging, mining is an extractive activity that causes a number of health problems. In many regions in Africa, lax environmental regulations do not compel mining companies to take the necessary steps to ensure that their activities cause minimal harm to both human and ecosystem health. In tropical rainforests, the use of mercury to extract gold from riverbeds has contaminated fish in many rivers, rendering them toxic (Lebel et al. 1998).
Also, the land-degrading activities associated with mining have caused some communities to lose farmlands and livelihood options. Dugouts, culverts and mining pits create favourable breeding grounds for mosquitoes.

Water Resource Development and Health

Human interventions in watersheds, rivers, and lake systems take many forms, including irrigation, aquaculture, river damming and other watershed activities. Most of these activities interfere with the natural functioning of aquatic ecosystems, and may inhibit their ability to provide ecosystem services such as regulation of the hydrological cycle and filtration of freshwater. Some of these activities also alter watersheds in ways that create conducive environments for the proliferation and transmission of disease vectors such as snails and mosquitoes. Commonly identified diseases emerging from human-induced transformation of watersheds include malaria, dengue, Japanese encephalitis, schistosomiasis, onchocerciasis, and trypanosomiasis. Crop irrigation and the construction of dams are two land use activities that alter aquatic habitats and affect the proliferation, survival, and distribution of disease vectors. For example, irrigated rice fields provide good breeding grounds for mosquitoes, and have resulted in increased incidence and transmission of malaria in Africa and Japanese encephalitis in Asia (Keiser et al. 2005). Also, culverts, ditches, canals and ponds associated with irrigation provide ideal conditions for the proliferation of mosquito species such as Culex tarsalis. Culex tarsalis bites both humans and animals, and is therefore a major bridge vector for diseases that are constantly present in animal populations, such as St. Louis encephalitis in the western United States (Mahmood et al. 2004). Also, irrigation activities in the Nile Delta following the construction of the Aswan High Dam resulted in the proliferation of another mosquito species, Culex pipiens, which is associated with increased soil moisture levels. Culex pipiens is associated with the arthropod-borne disease Bancroftian filariasis, or elephantiasis, which mostly occurs in Africa and other tropical regions (Harb et al. 1993; Thompson et al. 1996). Microbial contamination of water as a result of inadequate sanitation and hygiene is still pertinent, especially in developing countries. A recent report from the World Health Organization estimates the burden of disease from inadequate water, sanitation and hygiene at 1.7 million deaths annually, with over 54 million healthy life years lost. Also, water-associated infectious diseases claim up to 3.2 million lives each year, approximately 6% of all deaths globally (Prüss-Üstün and Corvalán 2006). The contamination of drinking water sources is pertinent not only to the developing world, but also to the developed world. For example, intensive farming practices and poor food processing in industrialized countries can lead to the contamination of public water sources, as was seen in the Walkerton case in Canada. In 2000, Canada experienced its worst ever water contamination, when the small town of Walkerton, Ontario, had its public water supply contaminated with Escherichia coli (E. coli) bacteria from farm runoff. The incident resulted in the death of seven people, with as many as 2,300 falling sick. 2 Aquatic ecosystems serve as natural reservoirs for the cholera bacterium (Vibrio cholerae O1), where it remains dormant in phytoplankton and zooplankton (Colwell 1996).
Environmental conditions that cause algal blooms, such as climate-induced warming of waterways and eutrophication from agricultural and domestic nitrate and phosphate runoff, may increase the proliferation of zooplankton, leading to increased dissemination of cholera into human populations (Levins et al. 1994). Also, there is increasing evidence suggesting that the seasonality of cholera epidemics may be linked to the seasonality of algal blooms and the food chain in marine ecosystems (Colwell 1996). It has been suggested that monitoring algae and other microscopic marine organisms for Vibrio, especially using remote sensing satellites, may help establish an early warning system for detecting emergence of the pathogen (Levins et al. 1994). Water bodies that are contaminated by pesticides and other toxic chemicals can also pose serious health risks to people and adversely affect various organ systems. For example, exposure to low concentrations of chemicals such as PCBs, dioxins and DDT may interfere with normal hormone-mediated physiology, impair reproduction, or cause endocrine disruption (Prüss-Üstün and Corvalán 2006).

Urbanization and Health

On April 7th each year, the World Health Organization (WHO) celebrates World Health Day. It selects a key global health issue as the theme for the day and raises awareness of the problem globally, nationally and locally. For 2010, the theme for World Health Day was "Urbanization and Health". The WHO identifies urbanization as one of the biggest health challenges of the twenty-first century (World Health Report 2008). This is based on the realization that urbanization is proceeding faster than cities can build the infrastructure needed to accommodate the increasing numbers. In 2007, for the first time in history, the world's urban population surpassed 50%, with the projection that this number could exceed 70% by 2050 (UN-Habitat 2006). By 2030, it is expected that six out of every 10 people will be living in the city, and by 2050 this figure is expected to increase to seven out of every 10 people (ibid). Rapid and unplanned urbanization has numerous health implications, not just for the urban poor, but for all city dwellers. It is true that the urban poor will bear a disproportionate burden of urban health problems. However, the lack of social services, employment opportunities, education and other services engenders despair, violence, and increased vulnerability. These problems are usually not confined to urban slums, but permeate the suburbs and affect the entire society. It is therefore important that urbanization-related health concerns be viewed from a broad perspective, and that their solutions be incorporated into broader public policies. The public health challenges facing urban ecosystems extend beyond the health sector and must be addressed from an integrated and intersectoral perspective, with partnerships among all relevant sectors. In addition, it is important not to lose sight of the health conditions specific to urban slums. Currently, over 1 billion people, about one third of the urban population, live in slums, and this figure is expected to increase to 1.4 billion by 2020 (UN-Habitat 2006). Inequitable access to most social services, poor housing and sanitation, and inadequate water supply characterize the conditions in many urban slums. These conditions make urban slums fertile grounds for the proliferation and transmission of communicable diseases.
Common health problems of the urban poor include tuberculosis, HIV/AIDS, and chronic diseases such as diabetes, heart disease and mental disorders, as well as road traffic accidents and drug-related deaths. With such clear trends of increasing urbanization, what is required is perhaps to refocus efforts on preventive health and to improve living conditions in urban centres by investing in infrastructure for sanitation, water supply, and supportive housing. It is also important for public health authorities to prepare for the onslaught of complex urban health problems that could arise as this trend continues. Urban planning must also ensure that urban centres become welcoming and inclusive communities, with all the necessary amenities to cater to the wide spectrum of cultural diversity that migrates to urban areas. Without such readiness, urban health problems could be a time bomb waiting to explode with the onset of any pandemic. Finally, meeting the needs and wants of city dwellers takes a great toll on suburban and rural ecosystems, and leaves behind bigger ecological footprints. Similarly, the demands in the North for coffee, cocoa, burgers, quality furniture, and minerals take a toll on Southern peripheral ecosystems. The extraction and processing of resources such as timber and minerals fragment ecosystems and increase the opportunity for the emergence of new diseases. Aquaculture, shrimp farming, and deforestation for agriculture and ranching likewise destroy ecosystems. In most cases, the ecosystems drawn on to satisfy the needs of urban dwellers are not within the immediate vicinity but in remote, rural areas or in developing countries and tropical regions. In this case, the immediate and direct impacts of ecosystem destruction are displaced onto the inhabitants of these ecosystems, not the city dwellers. Due to their poor status and limited resources, these communities are unable to cope with or take adequate steps to mitigate the adverse impacts of ecosystem destruction on human health.

Modern Food Production Systems and Health

The increasing demand for livestock products, especially pigs and chickens, has led to the use of intensive, industrial, and landless production systems (Delgado et al. 1999). These intensive production systems, in association with ecological and other factors, have been linked to the emergence of diseases such as bovine spongiform encephalopathy (BSE), Severe Acute Respiratory Syndrome (SARS), Nipah virus, and avian influenza (World Health Report 2007). Modern production systems are characterized by activities such as increased livestock trade between regions, especially in poultry and wild animals (bushmeat), overcrowding and mixing of livestock breeds, and cohabitation of livestock and people, especially in rural communities (Graham et al. 2008). Such production practices create fertile grounds for interspecies host transfer of disease agents, resulting in the emergence of novel strains of diseases or human pathogens such as SARS and influenza. While modern production and processing systems have led to increased availability of food and livestock products, they have also increased pressures on ecosystems: fragmenting habitats, polluting environments and posing serious human health risks. For example, intensive production systems usually require large quantities of livestock feed and increase the pressure on cultivated ecosystems. They also make use of large quantities of fertilizers, pesticides and water to enhance productivity.
Intensive farming practices also generate large amounts of waste, which is sometimes not adequately disposed of. Waste is mostly flushed into waterways, polluting freshwater bodies, contaminating public water supplies, and affecting marine plants and animals. In addition, some intensive livestock management practices routinely use sub-therapeutic antibiotics, which have resulted in the occasional emergence of antibiotic-resistant strains of bacteria such as Salmonella, Campylobacter and E. coli (Garofalo et al. 2007). The recent outbreak of Nipah virus in Malaysia is a typical example of a disease that occurred as a result of animal husbandry in association with other factors. Nipah virus is an emerging viral pathogen that causes encephalitis, an inflammation of the brain. It is fatal in up to 75% of the people it infects (WHO 2007). Between 1998 and 1999, the first outbreak of Nipah virus was reported in Peninsular Malaysia, where 265 human cases, including 105 deaths, were reported (FAO/WHO 2002). The emergence of the virus is attributed to the interaction of various factors including an expanding human population, climate change, poor governance, illegal land clearing, forest fires and intensive animal husbandry (ibid). The path of contagion is traced back to the human cases coming into direct contact with sick or dying pigs or fresh pig products. These pigs became sick after coming into contact with a flock of bats that were infected with a previously unknown virus. The bats had migrated from neighbouring Indonesia, following an intense El Niño dry spell and forest fires in the region. In Malaysia, the bats came into contact with intensively and commercially raised pigs that were located near fruit orchards. The pigs acted as the intermediate hosts of the new virus and developed respiratory illnesses. It is believed that transmission among pigs occurred through the aerosol route, with transmission from pigs to humans taking place following human contact with the throat or nasal secretions of pigs. Nipah virus later occurred in Singapore, where it infected 11 people, resulting in one death. In Malaysia, the outbreak ended with the mass culling of more than 1 million pigs (WHO 2007). Recent findings suggest that the virus may have become more pathogenic for humans following the outbreaks in Malaysia and Singapore. This means that the virus can spread to humans without an intermediate host such as the pig, and that transmission from human to human can occur with casual contact. For example, evidence suggests that, in the most recent outbreaks in Bangladesh and India, the consumption of food such as fruits contaminated with the urine or saliva of fruit bats likely constituted the route of exposure for several new human infections. Human-to-human transmission can also occur through close contact with infected people's secretions and excretions. In Siliguri, India, transmission of Nipah virus was observed in a health-care setting, where close to 75% of the cases occurred among hospital staff and visitors (WHO Fact Sheet on Nipah Virus 2009).

Climate Change and Health

Leading up to the United Nations Summit on climate change in Copenhagen, there was considerable discussion and media coverage of the potential health effects of climate change. For example, the journal Lancet dedicated an entire series to climate change and health.
The increased attention and wide coverage of climate change generated both awareness and skepticism, leading some to question the accuracy of climate change data and to ask whether climate change has become one of those phenomena whose scientific explanations and claims are politically and self-interest driven. While this dialogue is ongoing, there are also discussions about the current and potential health implications of climate change. Climate change is expected to have both direct and indirect impacts on human health and well-being. Extreme weather events, sea level rise, and temperature changes are expected to adversely impact ecosystems and inhibit their ability to continue to provide the essential services needed for good health, including the provision of clean air, safe drinking water, adequate food supply, shelter, and medicinal plants. Ecosystems play a vital role in regulating climatic conditions through cooling and warming mechanisms, preserving the balance among species, and acting as sinks for greenhouse gases and other pollutants. Climate-induced changes will likely disrupt the ability of ecosystems to continue to fulfill these functions. For the most part, climate change is expected to increase the incidence and impacts of some of the world's leading killer diseases, such as malaria, diarrhoea, dengue, and malnutrition. These health problems, and the pathways leading to their occurrence, are highly sensitive to climatic conditions. For example, climate-induced change can affect the proliferation, distribution and transmission of disease vectors and can also influence the length of transmission seasons for vector-borne diseases. Extreme weather events such as floods and windstorms may contaminate fresh water supplies, facilitate the dispersal of microbes, and affect the breeding, survival, and abundance of disease vectors. Outbreaks of diseases such as cholera and leptospirosis have followed flooding in Central America (Wilson 2000). Heavy precipitation may pollute water sources with increased quantities of chemical and biological pollutants washed into rivers and from overloaded sewers and waste storage facilities. Temperature increases may also affect water quality by increasing the growth of microorganisms and decreasing dissolved oxygen (McMichael et al. 2006). Temperature-related impacts are varied. Rising temperatures may cause drought, increase demand for irrigation, and negatively affect crop production, leading to increased malnutrition, especially in developing countries (McMichael 1997). Changes in temperature and humidity may affect the breeding and survival of insect vectors such as mosquitoes. Recent studies suggest that climate change could increase the proliferation of the Aedes mosquito (the vector for dengue), exposing an additional 2 billion people to dengue transmission by the 2080s (Hales et al. 2002). Direct effects include sunstroke from heat waves, as well as skin cancer, cataracts and reduced efficiency of the immune system from increased ultraviolet exposure (McMichael et al. 2006). While climate change tends to be discussed from a global perspective, its health effects are regional, local, and population-specific, and are usually not evenly distributed. Some communities and population groups are particularly vulnerable. For example, the United Nations Intergovernmental Panel on Climate Change has identified Indigenous groups and coastal communities as two of the groups most vulnerable to climate change.
As discussed in the chapter on Indigenous health, the close affiliation between most Indigenous people and the natural environment, together with poor socio-economic conditions, predisposes them to severe impacts from climate change. Similarly, people living in coastal areas and floodplains are extremely vulnerable to extreme weather events, which can destroy infrastructure and displace entire communities. Temporary relocation of displaced people can lead to increased incidence of communicable diseases due to overcrowding, limited health services, lack of clean water and sanitary facilities, poor mental health and poor nutrition.

Wars, Conflicts and Health

The unfortunate circumstances of armed conflict and war adversely impact surrounding ecosystems and affect human health. The settings in which conflicts take place fragment ecosystems, disrupt ecosystem functioning and predispose people to new disease pathogens and new infectious diseases. In addition, the mass fleeing and displacement of people from their communities forces them to live in crowded spaces and under unhygienic conditions, which provide ideal conditions for the onset of infectious and communicable diseases. The limited health care services in refugee camps are usually not adequate to address the myriad health concerns presented, and sometimes these living conditions result in the outbreak of epidemics. Two noteworthy examples are the emergence of Marburg haemorrhagic fever in Angola, which affected over 200 people and killed over 90 of the victims (WHO 2007), and the cholera epidemic in the Democratic Republic of the Congo, which killed over 50,000 people. Marburg haemorrhagic fever, which is related to Ebola, occurred between 2004 and 2005, following a 27-year civil war in Angola, and is reported as the largest epidemic of the disease on record (WHO 2007). The disease proliferates in overcrowded areas and settings with inefficient health care services. The cholera epidemic in the Democratic Republic of the Congo, on the other hand, occurred following the Rwandan conflict in 1994. Between 500,000 and 800,000 people had fled to seek refuge in the neighbouring Congolese city of Goma when the epidemic struck. The epidemic, which is said to have resulted from a combination of cholera and Shigella dysentery, was very fatal, recording a high crude mortality rate of 20-35 per 10,000 per day (ibid).

Conclusion

This chapter illustrates the various ways in which human activities impact and, in turn, are impacted by ecosystem dynamics. Ecosystems provide services that are essential for life. These services are continuously under pressure given the growing demand for food and other societal needs. In an attempt to increase productivity, human activities transform ecosystems in ways that compromise their ability to continue to provide ecosystem services. They also transform ecosystems in ways that engender disease and adversely impact both ecosystem and human health. The causal pathways between human activities, their driving forces, and how they transform ecosystems to adversely impact health are complex and not amenable to linear analysis. In addition to biophysical processes, social, political, economic and cultural factors further confound these interactions.
Hence, given the increasing realization that over the next few decades the most important determinants of human health will be ecological factors, it is probably prudent for researchers to begin to unravel these intricate connections between society, health, and environment, and to ensure that power relations and social and political considerations are incorporated into people-environment-health analysis. Such understanding will help develop appropriate interventions that are socio-politically acceptable as well as biophysically relevant. The role of ecological factors as important determinants of human health is not new, but dates back to the 19th century. Interest in ecological factors was superseded by modern medical advances such as the discovery of microbes, viruses and DNA, and by an increasing focus on individual lifestyle factors. With issues such as climate change and the rapid emergence of new diseases mediated by ecological factors, there is, once again, growing interest in the use of ecological approaches to public health, with particular emphasis on ecosystem-human dynamics. For the past few decades there have been growing efforts to integrate health and environment concerns and to develop more ecological and holistic approaches to public health improvement and sustainable natural resources management. This trend has given rise to new approaches such as the ecosystem approach to human health, also known as the Ecohealth approach. Before discussing some of the key elements of this approach, it is important to trace the evolution of events leading to a renewed interest in ecological approaches to health, and in particular the ecosystem approach to human health (Forget and Lebel 2001).
Feshbach Resonances in an Erbium-Dysprosium Dipolar Mixture We report on the observation of heteronuclear magnetic Feshbach resonances in several isotope mixtures of the highly magnetic elements erbium and dysprosium. Among many narrow features, we identify two resonances with a width greater than one Gauss. We characterize one of these resonances, in a mixture of $^{168}$Er and $^{164}$Dy, in terms of loss rates and elastic cross section, and observe a temperature dependence of the on-resonance loss rate suggestive of a universal scaling associated with broad resonances. Our observations hold promise for the use of such a resonance for tuning the interspecies scattering properties in a dipolar mixture. We further compare the prevalence of narrow resonances in an $^{168}$Er-$^{164}$Dy mixture to the single-species case, and observe an increased density of resonances in the mixture. Ultracold quantum gases are a highly successful platform for physics research largely because it is possible to create simplified and controllable versions of condensed matter systems [1]. As the field has advanced, great progress has been made by reintroducing complexity in a carefully controlled manner. This complexity can manifest in the form of interparticle interactions [2][3][4], the species and statistics of the particle under study [5][6][7], or in the form of the potential landscape, control protocols and imaging techniques applied to the system [8,9]. In this work, we explore interspecies Feshbach resonances as a means of generating tunable interactions between two different species of complex dipolar atoms. Atoms with large magnetic dipole moments, such as the lanthanide series elements erbium and dysprosium, interact in a manner that is both long-range and anisotropic. This is in contrast to more commonly used atomic species, such as alkali and alkaline earth metals, which primarily interact in a short-range and isotropic way. The recent creation of degenerate Bose and Fermi gases of such atoms [10][11][12][13] has enabled the observation of a wealth of new phenomena including quantum-stabilized droplet states [14][15][16], roton quasi-particles [17], supersolid states [18][19][20], and a non-isotropic Fermi surface [21]. In a separate direction, degenerate mixtures of multiple atomic species have also provided diverse opportunities for the study of new physical phenomena. Examples include studies of polarons that arise when an impurity species interacts with a background gas [22][23][24][25][26][27], and the formation of heteronuclear molecules with large electric dipole moments [28][29][30][31]. We expect that combining dipolar interactions with heteronuclear mixtures will lead to a rich set of novel physical phenomena, the exploration of which has only recently begun. In particular, dipolar interactions are expected to have dramatic consequences for the miscibil-ity of binary condensates [32][33][34], and in turn on vortex lattices that arise in such systems [35]. Further, novel properties of polarons are predicted to emerge when either the background [36] or both background and impurity [37] particles experience dipolar interactions [38]. Dipolar heteronuclear mixtures have recently been demonstrated [39], but so far the interspecies scattering properties have not been explored, either experimentally or theoretically. 
In these complex dipolar species, scattering properties are dictated by both anisotropic long-range dipolar interactions, which can be tuned through a combination of system geometry and magnetic field angle, and by contact interactions, which can be tuned through the use of interspecies Feshbach resonances. While scattering models and experimental demonstrations exist for mixtures of single-and twovalence electron atoms (which lack strong dipolar interactions) [40,41], the scenario of two multi-valence electron atoms has yet to be considered, and represents a new frontier for our understanding of ultracold scattering. In many commonly used atomic systems, the strength, character, and location of magnetic Feshbach resonances can be predicted with high precision through coupled-channel calculations [3]. However, the complexity of the internal level structure and coupling mechanisms present in lanthanide atoms lead to significant challenges for the development of a microscopic theory with predictive power, and so necessitate an experimental survey to find resonances with favorable properties [42][43][44][45][46]. To this end, we searched for heteronuclear Feshbach resonances broad enough to provide a practical means for tuning the interspecies interaction in Bose-Bose and Bose-Fermi dipolar quantum mixtures. Using atomicloss spectroscopy to identify resonances, we perform surveys of fermionic 161 Dy and bosonic 164 Dy together with 166 Er, 168 Er, and 170 Er over a magnetic-field range from zero to several hundred Gauss (the exact range varies by isotope combination due to availability of favorable evaporation conditions). We also explored a Fermi-Fermi mixture of 167 Er and 161 Dy, but observed no broad resonances there. In Table I we summarize positions and widths of these features observed in our surveys. As an exemplary case, we present a more detailed characterization of the resonance near 13.5 G in the 168 Er-164 Dy Bose-Bose mixture, through measurements of interspecies thermalization and the dependence of atomic loss on temperature. In addition, our dipolar mixtures host a large number of narrow interspecies resonances. In previous experiments with single species, the density and spacing of these narrow resonances has been studied to reveal a pseudo-random distribution that can be modeled well using random matrices [43,45,46]. By performing high resolution scans over specific magnetic-field ranges, we find that the average density of interspecies resonances exceeds the combined density of intraspecies resonances, perhaps indicating the contribution of odd partial waves or molecular states with antisymmetric electron configurations for the interspecies case, which are not present in the scattering of identical bosons. Finally, in each Fermi-Bose mixture involving 161 Dy we observe a correlated loss feature between fermionic Dy and bosonic Er atoms. Strangely, the loss feature is present at the same magnetic-field value for all three bosonic erbium isotopes studied. Such behavior is inconsistent with a typical interspecies Feshbach resonance, where the magnetic field at which the resonance occurs is strongly dependent on the reduced mass of the atoms involved [47]. The mechanism behind this unusual feature is as of yet unknown and calls for further experimental and theoretical investigations. Our experimental sequence is similar to the one introduced in our previous works [39,48]. More details can be found in the supplemental material [49]. 
In brief, after cooling the desired isotope combination of erbium and dysprosium atoms in a dual-species magneto-optical trap (MOT), we load the atoms into a crossed optical dipole trap (ODT) created by 1064 nm laser light. Here we perform evaporative cooling down to the desired sample temperature. During the whole evaporation sequence, we apply a constant and homogeneous magnetic field (B_ev), pointing along the z-direction opposite to gravity. B_ev preserves the spin polarization in the lowest Zeeman sublevel of both species. We use different values of B_ev to optimize the evaporation efficiency depending on the isotope combination and on the range of the target magnetic field (B_FB) to be investigated. The final ODT has trap frequencies ω_{x,y,z} = 2π × (222, 24, 194) s^-1. We typically obtain mixtures with atom numbers ranging from 3 × 10^4 to 1 × 10^5 atoms for each species. The sample is in thermal equilibrium at about 500 nK, which corresponds to about twice the critical temperature for condensation. Typical densities are up to a few 10^12 cm^-3 for each species. After preparing the mixture, we linearly ramp the magnetic field from B_ev to B_FB in 5 ms, either in an increasing or decreasing manner. We hold the mixture for a time ranging between 5 ms and 400 ms depending on the experiment. At the end of the hold time, we release the atoms from the ODT in a 15 ms time-of-flight (TOF) expansion, after which we record an image of the atoms using a standard low-field absorption imaging technique [12,49]. Note that we adjust the relative amount of erbium and dysprosium in the final thermal mixture for the specific experiments by independently tuning the MOT loading time for each species between 0.5 s and 5 s.

In the isotope combinations and range of magnetic fields that we explore here, we observe two interspecies resonances with widths greater than 1 G (see Table I). We now turn to a more detailed characterization of a feature present in the 168Er-164Dy combination, for which atom loss is shown in Figure 1(a). We chose to focus on this feature because it is relatively isolated from the many narrow homonuclear and heteronuclear resonances typical of lanthanides. In this experiment, the starting mixture contains 6.2 × 10^4 erbium and 9.1 × 10^4 dysprosium atoms and is prepared by evaporation at B_ev = 10.9 G. In order to compensate for loss during magnetic-field ramps and slow drifts of the atom number, we normalize measurements performed with 200 ms hold times at B_FB to interleaved measurements with 10 ms hold time at the same field. We further performed independent trap-loss spectra in single-species operation to confirm the interspecies nature of the resonance. Moreover, such scans allow us to identify intraspecies resonances and exclude them from the fit (see empty symbols in Figure 1(a)). As shown in the inset for erbium, a high-resolution scan reveals a narrow region with less loss near the center of our broad loss feature, probably due to the influence of a second interspecies resonance. This structure is also visible on the dysprosium loss feature but is not shown in the inset for ease of reading. A Gaussian fit to the loss profiles, with the known narrow single-species resonances excluded, returns center values of 13.31(2) G and 13.33(4) G and full widths at half maximum of 1.95(5) G and 1.3(1) G for erbium and dysprosium, respectively.

FIG. 1. Empty symbols correspond to narrow single-species resonances, which we exclude from fits. Each point is an average over four experimental repetitions. For each magnetic field, the atom number recorded after 200 ms of hold time is normalized to that at a short hold time of 10 ms. The lines are the Gaussian fits to the data. The inset shows erbium loss measured in a different dataset with 5 mG resolution, and highlights the structure present at the center of the feature. The same structure is also visible for the dysprosium atoms in the mixture. (b) Interspecies elastic cross-section σ_ErDy measured across the Feshbach resonance using cross-species thermalization. Each value of σ_ErDy is extracted from thermalization data using a numerical model for thermalization that includes temporal variation in atom number and temperature; see main text and supplemental material [49].

The observed difference in the fitted widths of the two species can be explained by the imbalance in atom number: because this measurement was performed with fewer erbium atoms than dysprosium, the fractional loss of erbium is higher than that of dysprosium, leading to a greater saturation of loss and a broadening of the erbium loss feature. To gain insight into the effective strength and width of the resonance, we perform cross-species thermalization measurements across it (see Fig. 1(b)). Interspecies thermalization experiments are well-established techniques to extract effective thermalization cross sections, which in turn depend on the scattering length [50][51][52]. While inferring a precise value of the scattering length would require the development of a detailed and rigorous model that accurately captures the temperature dependence of the interspecies and anisotropic dipolar scattering [53], and would go beyond the scope of this work, we are able to determine a thermally averaged scattering cross-section from which we can estimate the width of the resonance. In this cross-thermalization experiment, we selectively heat dysprosium by means of a near-resonant 421 nm light pulse along the vertical direction. We confirmed that the light pulse has no direct measurable effect on erbium. The magnetic field is then jumped to the desired value B_FB and held for a variable amount of time, during which the temperature of erbium rises to equilibrate with dysprosium due to elastic collisions. We record the temperature of the two species along a direction orthogonal to the heating pulse, as the effects of center-of-mass motion are less prevalent there [54], and use a numerical model to extract a cross section from the rate of thermalization [49]. This simple model assumes an energy-independent cross section, an assumption which may break down near resonance, where unitarity limits on scattering may become significant. From these thermalization measurements, we can see a dramatic increase in the scattering cross section near resonance, as one would expect for an interspecies Feshbach resonance. Further, we observe a significant modification of the cross section associated with the resonance over a Gauss-scale range of magnetic fields, similar to the width we observe in loss measurements. For an isolated resonance and pure contact interactions, a common way to characterize the resonance width is the parameter ∆, given by the difference in magnetic field between the pole of the resonance, at which the thermalization rate is maximal, and the nearest zero-crossing in the thermalization rate, which would correspond to a lack of scattering [3]. In lanthanides, the presence of anisotropic dipolar interactions leads to a scattering cross section that does not completely vanish. In addition, multiple narrow and overlapping resonances may be present, which may influence the interpretation of such a width measurement. However, to get a rough estimate of the width of the resonance, we can consider the distance between the resonance pole and the apparent minimum in the thermalization rate at 17 G. This suggests a width of ∆ ≈ 3.5 G.

The dependence of the loss feature on the cloud temperature can provide additional information on the nature of the resonance. For broad resonances, a universal regime is expected to emerge near resonance where the scattering cross section and loss are dictated primarily by the atomic momentum, rather than the scattering length [55]. In this regime, the three-body loss parameter L3 follows a nearly universal form scaling as 1/T^2, where T is the temperature. Such scaling has been observed in broad resonances of several atomic species [55][56][57]. We observe a temperature dependence of the loss rate near resonance that is suggestive of such universal behavior. By varying the final depth of the ODT reached during evaporation, we tune the temperature of the atomic mixture. For each temperature, we measure atom loss on resonance at 13.4 G as a function of the hold time. We then use a numerical model to extract the rate of interspecies three-body loss, and L3 [49]. These loss coefficients are plotted as a function of temperature in Fig. 2, along with a fit to a 1/T^2 dependence, which provides a reasonable description of our data. The universal temperature dependence arises from a maximum value of L3 possible at a given temperature, L3,max = λ3,max/T^2. Factors associated with Efimov physics [49] can lead to a lower value for L3, but not a higher one [55,58,59]. From our fit of a 1/T^2 dependence to our data, we extract a value of λ3 = 1.0(2) × 10^-24 µK^2 cm^6 s^-1, which is compatible with the predicted bound of λ3,max = 2.4 × 10^-24 µK^2 cm^6 s^-1. A reduction in the peak loss rate with increasing temperature can also result from thermal broadening of the resonance, especially for very narrow resonances [45]. This is unlikely to be the dominant effect here, as for typical differential magnetic moments between entrance and closed channels in our lanthanide system [60], we would expect broadening on the scale of a few times 10 mG for temperatures near 1 µK, much narrower than the Gauss-scale width of our feature. Further, suppression of peak loss is typically accompanied by a commensurate broadening and shift of the loss feature on the scale of its width, which we do not observe (inset in Fig. 2).

FIG. 2. Three-body loss coefficient L3 extracted from on-resonance loss measurements at the resonance position for different temperatures (black circles), along with a fit to a 1/T^2 scaling (black line), as expected for universal three-body loss. The inset shows the resonance width, extracted as the FWHM of Gaussian fits to the trap-loss spectra, versus cloud temperature for a different dataset. Red circles and blue squares refer to erbium and dysprosium, respectively. The reported temperature comes from a TOF estimation.
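As a concrete illustration of the universal-scaling check described above, the following minimal Python sketch fits a set of extracted L3(T) values to the form λ3/T^2 and compares the fitted λ3 with the quoted bound λ3,max = 2.4 × 10^-24 µK^2 cm^6 s^-1. The numerical arrays are placeholders chosen only for illustration, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data: temperatures in uK and on-resonance L3 in units of
# 1e-24 cm^6/s (illustrative values, not the measured ones).
T_uK = np.array([0.4, 0.6, 0.9, 1.3])
L3 = np.array([6.0, 2.9, 1.25, 0.62])

def universal_form(T, lam3):
    """Expected near-universal scaling L3 = lam3 / T^2."""
    return lam3 / T**2

popt, pcov = curve_fit(universal_form, T_uK, L3, p0=[1.0])
lam3, dlam3 = popt[0], np.sqrt(pcov[0, 0])

print(f"lambda3 = {lam3:.2f} +/- {dlam3:.2f} (1e-24 uK^2 cm^6 / s)")
print("below the universal bound lambda3,max = 2.4:", lam3 < 2.4)
```

Working in units of 10^-24 cm^6/s keeps the fit numerically well conditioned; the extracted λ3 is then compared directly with the bound quoted in the text.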
In addition to the few relatively broad resonances, the lanthanides exhibit many narrow resonances, whose statistical properties have been investigated for single-species gases [43,45,46]. In this section we compare the abundance of interspecies resonances to single-species resonances by performing high-resolution trap-loss spectroscopy on the isotope combination 166Er-164Dy (see Fig. 3). Here, we investigate four different magnetic-field ranges, each 10 G wide, with a resolution 40 times higher than the one used for the exploratory surveys. To enable direct comparison with the previous works performed on single species [43,45], we use similar experimental conditions (isotope, atom number, temperature, and hold time). As expected, we observe many narrow homonuclear resonances [43,45]. In addition, we also identify many narrow heteronuclear resonances. To distinguish these two types of resonance, we first label features with a fractional loss above 30% as resonances. We then categorize these resonances as interspecies if erbium and dysprosium loss features occur simultaneously within a range of ±10 mG and with a loss amplitude ratio in the range 0.5-2. Features that do not meet both of these criteria are labelled either as homonuclear or ambiguous, based on comparison with separate scans performed with single species, either within this work or from previously published data [43,45]. The number of ambiguous features defines our confidence interval. In order to visualize the number of resonances, we construct the staircase function N(B), which describes the cumulative number of resonances from the start of a scan range up to a given magnetic field B_FB. Figure 4(a-d) shows N(B) for the four investigated magnetic-field ranges. The black lines represent heteronuclear Feshbach resonances, while the blue and the red lines represent the homonuclear 166Er and 164Dy resonances, respectively. The shaded regions represent our confidence interval defined by the total number of ambiguous Feshbach resonances. Our analysis results in a total number of heteronuclear resonances of N_ErDy(tot) = 339(16), counting all magnetic-field ranges, and a number of homonuclear resonances of N_Er(tot) = 116(16) and N_Dy(tot) = 144(16). Within our confidence intervals, we detect a total number of homonuclear resonances comparable with those of previous works [43,45]. The corresponding densities of resonances, given by the total number of resonances divided by the total range of magnetic fields scanned, are: ρ̄_ErDy = 8.5(4) G^-1, ρ̄_Er = 2.9(4) G^-1, and ρ̄_Dy = 3.6(4) G^-1. (The broad loss feature in Dy near 68.8 G was not observed in previous work [45], and may result from a technical source of loss in our experiment.) For our combined dataset, we find that the total number of heteronuclear resonances exceeds the combined number of homonuclear resonances for the two species: ρ̄_ErDy = α(ρ̄_Er + ρ̄_Dy), with α = 1.3(2). We would expect that the average density of heteronuclear resonances should be greater than the sum of the two homonuclear resonance densities. This is because each species contributes a set of internal states that can be coupled to, and the heteronuclear resonances are not subject to the same symmetrization requirements as the homonuclear resonances. In resonances involving distinguishable particles, both gerade and ungerade Born-Oppenheimer molecular potentials contribute, as well as both even and odd partial waves for the entrance channel. Our data are consistent with this expectation (α > 1). Note that we do observe a lower number of interspecies resonances in the range [50,60] G, perhaps as a result of the non-random distribution of resonances observed in the single-species case [43,45], or of the presence of broad homonuclear erbium resonances that could obscure the observation of interspecies resonances.
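The counting procedure described above lends itself to a simple script. The sketch below (hypothetical helper code, not the authors' analysis pipeline) labels a loss feature as interspecies when erbium and dysprosium features coincide within ±10 mG with a loss-amplitude ratio between 0.5 and 2, builds the cumulative staircase function N(B), and estimates a resonance density as the count divided by the scanned range.

```python
import numpy as np

def classify_interspecies(er_feats, dy_feats, db=0.010, rmin=0.5, rmax=2.0):
    """er_feats, dy_feats: lists of (B_center [G], fractional loss > 0.3).
    Returns field positions of features seen simultaneously in both species."""
    matched = []
    for b_er, a_er in er_feats:
        for b_dy, a_dy in dy_feats:
            if abs(b_er - b_dy) <= db and rmin <= a_er / a_dy <= rmax:
                matched.append(0.5 * (b_er + b_dy))
    return np.sort(matched)

def staircase(positions, B):
    """Cumulative number of resonances N(B) below each field value in B."""
    return np.searchsorted(np.sort(positions), B, side="right")

# Toy example with two erbium and two dysprosium features (G, fractional loss)
er = [(13.310, 0.62), (14.575, 0.41)]
dy = [(13.305, 0.48), (15.820, 0.55)]
inter = classify_interspecies(er, dy)
B_grid = np.linspace(13.0, 16.0, 7)
print(inter)                                   # only the coincident feature survives
print(staircase(inter, B_grid))                # cumulative count on the field grid
print(len(inter) / (B_grid[-1] - B_grid[0]))   # resonances per Gauss
```

In the toy example only the pair near 13.31 G satisfies both the field-coincidence and amplitude-ratio criteria, so the staircase rises by one at that field.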
Finally, we have also searched for broad (Gaussrange) resonances in Bose-Fermi mixtures consisting of fermionic 161 Dy combined with different bosonic isotopes of erbium -166 Er, 168 Er, and 170 Er, as well as Fermi-Fermi mixtures of 161 Dy and 167 Er. For these combinations, we perform only coarse scans and thus only resolve broad features. In mixtures involving the bosonic isotopes of erbium we observe a correlated loss feature between erbium and dysprosium near 161 G (see Fig. 5). This loss feature is not present at our level of measurement sensitivity with either species alone, or in the mixture with the fermionic 167 Er. Surprisingly, the loss feature is centered at the same magnetic field (to within our resolution of 0.1 G) for all bosonic isotopes of erbium. This is quite unexpected as the magnetic-field value of the resonance position is typically highly sensitive to the reduced mass of the atoms involved [47]. Several physical mechanisms could be consistent with such a feature. One possibility is that the resonance we observe is associated with a bound state of a shallow molecular potential [61]. Mechanisms to create such potentials have been proposed for species with dipolar interactions [62,63]. However, none are obviously applicable to magnetic atoms in the lowest energy entrance channel. Further, given the level of insensitivity to the mass of erbium, we would expect to see additional resonances of a shallow potential in the magnetic-field range over which we survey, which we do not. A second possibility is that the feature we observe is not a true interspecies resonance, but rather an intraspecies resonance in dysprosium whose loss rate is enhanced by the presence of bosonic erbium atoms. A similar effect was reported in a mixture of fermionic lithium and bosonic rubidium atoms [64]. Finally, it is possible that this feature is not a Feshbach resonance at all, but rather the result of spin-changing processes resulting from unintentional radio-frequency tones in the laboratory, or of an interspecies photoassociation resonance. We have ruled out the most likely culprits for the last effect by varying the relative detuning between our horizontal and vertical dipole traps and observing no change in the resonance position. We hope that our presentation of this mysterious feature may spur theoretical exploration of possible physical mechanisms. In conclusion, we have reported experimental observation of heteronuclear magnetic Feshbach resonances in several isotope mixtures of erbium and dysprosium. Among the Gauss-broad features identified in our surveys, we have characterized one in the combination 168 Er-164 Dy by means of cross-species thermalization measurement and temperature dependence analysis. We performed high-resolution trap-loss spectroscopy in the combination 166 Er-164 Dy to compare the average resonance density of the mixture with respect to the single-species case. In mixtures of fermionic 161 Dy and bosonic erbium atoms, we observed a correlated loss feature which appears to be insensitive on the erbium isotope used but absent in dysprosium alone. Our observations pave the way to realize tunable interactions in quantum degenerate mixtures of dipolar atoms, which will enable varied opportunities including studies of the miscibility of binary condensates, of vortex lattices, and of dipolar polarons [32][33][34][35][36][37]. We thank Jeremy Hutson, Matthew Frye, John Bohn, Arno Trautmann, and the Erbium and DyK teams in Innsbruck for insightful discussions. 
This work is financially supported through an ERC Consolidator Grant (RARE, No. 681432), a NFRI Grant (MI- * Correspondence and requests for materials should be addressed to Francesca.Ferlaino@uibk.ac.at. Supplemental Material Preparation The experimental sequence is similar to the one introduced in our previous works [39,48]. After cooling the erbium and dysprosium atoms into dual-species magnetooptical traps (MOTs) of the desired isotope combination, we load about 1 − 5 × 10 6 atoms of both erbium and dysprosium into a single-beam optical dipole trap created by 1064 nm laser light and horizontally propagating along the y-direction (hODT). We perform an initial stage of evaporative cooling of about 0.8 s. After that, a second trap beam, coming from the same laser source but detuned by 220 MHz, is shone along the vertical direction z (vODT) onto the atoms forming a crossed ODT where we continue the evaporation for an additional duration of about 4.3 s down to the desired sample temperature. During the whole evaporation sequence, a constant and homogeneous magnetic field (B ev ) pointing along the zdirection and opposite to gravity is applied. Different values of B ev are used depending on the isotope combination and on the range of the target field (B FB ) to be investigated. We typically end up with 3−10×10 4 atoms for each species, in thermal equilibrium at about 500 nK (about twice the critical temperature for condensation). Final trap frequencies are ω x,y,z = 2π×(222, 24, 194) s −1 . At this point, we linearly ramp the magnetic field from B ev to the target field B FB in 5 ms, either in an increasing or decreasing manner. We hold the mixture for a specific time ranging between 5 ms and 400 ms depending on the experiment. At the end of the hold time, we release the atoms from the ODT in a 15 ms time-offlight (TOF) expansion after which we record an image of the atoms using a standard low-field absorption imaging technique [12]. We use pulses of resonant light in the horizontal x − y plane at an angle of ∼ 45 • with respect to our weak trap axis y. 5 ms after the clouds being released from the ODT, we linearly ramp B FB to zero in 10 ms. At the same time, we activate a magnetic field of about 4 G pointing along the imaging direction. Note that we select the relative amount of erbium and dysprosium required in the final thermal cloud by the specific experiments by independent tuning of the respective MOT loading times between 0.5 s to 5 s. B ev and B FB are generated by the same pair of coils in Helmholtz configuration. We ramp them up linearly in the early stage of the evaporation sequence 200 ms after loading the atoms into the hODT. We checked that the response of the current flowing in the coils can follow time ramps on the millisecond time-scale. This translates in a effective change of the field at the sample position in the order of 10 ms to settle at the part per thousand level. FIG. S1. Sample temperature traces for erbium (filled circles) and dysprosium (hollow squares) after dysprosium is heated. Purple, green, and orange correspond to magnetic fields of 12 G, 13.5 G, and 17 G, respectively. Fit lines represent the results of the numerical integration of equation 3, which fits the temperature profile of erbium based on its initial value and the dysprosium temperatures. Different evaporation conditions cause the curves to have slightly different initial and final conditions (see main text). 
As an exemplary case, we study in more detail the resonance found in the 168 Er-164 Dy Bose-Bose mixture, near 13.5 G. To quantify reliably the value of the interspecies cross-section, we developed the following scheme for cross-species thermalization measurements [50][51][52]. To avoid heating of the sample by crossing Feshbach resonances, we evaporate the mixture at B ev close to resonance. Specifically, when measuring on the low(high)field side of the feature we evaporate at B evap = 10.8 G(16.4 G). Once the sample is prepared as previously described (here we use an unbalanced mixture with twice as much Dy as Er), we compress the trap by linearly increasing the hODT power by a factor of five and the vODT power by two in 500 ms to prevent any plain evaporation. The final trap frequencies in the compressed trap are ω x,y,z = 2π × (409, 26, 391) s −1 . Subsequently, we ramp the magnetic field in 5 ms to either 10 G or 16 G. Here, a pulse of near-resonant 421 nm light propagating along the magnetic field direction (z) is used to selectively heat dysprosium. We fix the duration of the pulse at 5.5 ms to roughly match the trap oscillation period along this direction and set the pulse intensity to give the desired temperature increase of the dysprosium cloud (up to 4 µK). We confirmed that the light pulse has no direct measurable effect on erbium. Finally with a quench fast compared to the shortest thermalization rate, the magnetic field is set to the desired value B FB and held for a variable amount of time, during which the temperature of erbium rises to equilibrate with dysprosium due to thermalizing collisions ( Figure S1). We note that in the temperature evolution of the clouds, the initial temperatures are slightly different. This behavior is mainly due to different evaporation conditions on the two sides of the resonance, the different strength in the quench to the final B FB , and the heating caused by the resonance itself. By comparing the two species' temperature, we ensure that these different conditions are consistent with general offsets on the single measurement thus not affecting the final estimation of the cross-section. To extract a scattering cross-section from our crossspecies thermalization data, we use a fit to a numerical model for the thermalization of two species. In principle, a simple exponential fit to the temperature difference between the two species could also be used, but does not account for changes in the atom number or average temperature of the sample that may arise from residual evaporation during the thermalization time. Our numerical model follows that of Ref. [50]. We treat the scattering cross section as independent of the energy of the colliding particles, an assumption that greatly simplifies the analysis, but inevitably breaks down near enough to resonance where unitarity considerations bound the scattering cross section. This assumption leads to a collision rate for each atom of species 1 with atoms of species 2 given by: where m 1 , m 2 , T 1 and T 2 are the masses and temperatures of species 1 and 2,ω = (ω x ω y ω z ) 1/3 characterizes the frequency of the trap, β 2 = m 2ω 2 2 /m 1ω 2 1 , and σ 12 is the effective interspecies cross section. We assume that the energy exchanged per collision is given by ∆E = ξk B (T 2 − T 1 ) where ξ = 4m 1 m 2 /(m 1 + m 2 ) 2 , and that the heat capacity of each atom is 3k B . 
This leads to a differential equation for the temperature of erbium: which we can numerically integrate using the instantaneous values for T Dy and N Dy , and from this extract the scattering cross section σ ErDy that yields a thermalization profile that best matches our data, as determined through a least-squares difference. Examples of three such fits, for 12 G, 13.5 G, and 17 G are shown in Fig. S1, and generally describe our thermalization data well. Temperature dependence of loss We quantify the temperature dependence of threebody loss in terms of the interspecies three-body loss coefficient. For a single species, the three-body loss coefficient L 3 can be defined by:Ṅ /N = −L 3 n 2 where N is the total number of atoms, and n 2 = d 3 r n 3 (r)/N represents the average squared density of scattering partners for an atom in the gas. n(r) is the local density of the gas. We define analogous quantities for our two-species mixture, containing particles denoted i and j. In this case, Here, L i,i,j 3 represents the loss rate due to collisions involving two atoms of species i and one of j. To arrive at simple expressions, we make several assumptions and approximations. First, we treat the mass, temperature, and polarizability of the two atomic species as equal, which is a reasonable approximation for erbium and dysprosium isotopes in our 1064 nm wavelength ODT [39]. This assumption implies equivalent spatial distributions for the two species, which we assume to be thermal in our three-dimensional harmonic trap. We next set L i,i,j 3 = L j,j,i 3 ≡ L i 3 near resonance, essentially assuming that the loss process is primarily determined by the two pairwise interactions between the minority participant and the two majority atoms. We find this assumption leads to a model consistent with our observed relative loss between the two species. With these simplifications in place, we define L i 3 using:Ṅ i /N i = −L i 3 n 2 i eff , where andω = (ω x ω y ω z ) 1/3 is the geometric mean of the trap oscillation frequencies. We extract the resonant value of L 3 by measuring remaining atom number versus hold time in mixtures prepared at different temperatures, with the magnetic field set near resonance at 13.4 G. We then fit the resulting data by numerically integrating Eq. 4. Because we observe significant single-species loss of erbium (the majority species), we treat the erbium atom number measured at each time-step as inputs to our fit, and extract the value of L 3 that best predicts the loss of dysprosium. Here, we assume that L i,i,j 3 = L j,j,i 3 ≡ L 3 . We bound the effects of single-species loss in dysprosium by repeating the same measurement and analysis protocol off resonance at 11.5 G and 16.5 G. The error bars in Fig. 2 of the main text include a contribution corresponding to the extracted L 3 in the off-resonant condition, which contain both the effects of single-species loss and the small effect of off-resonant interspecies loss. Also included are errors associated with the observed change in temperature during the loss measurement, and relating to the approximations made in estimating the density. In a regime where the scattering length a exceeds the thermal wavelength λ th = h/ √ 2πmk B T , and thermal broadening is small compared to the width of the loss feature, we expect roughly L 3 ∝ 1/T 2 , as has been observed in several experiments involving single atomic species [55][56][57]. 
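To make the thermalization fit concrete, the sketch below numerically integrates a two-temperature model of the kind described above; in practice it would be wrapped in a least-squares loop over σ_ErDy. Because the paper's exact collision-rate expression is not reproduced here, the sketch uses a simplified overlap density and mean relative speed for two thermal clouds in the same harmonic trap, and the Dy temperature trace, trap parameters and trial cross section are placeholders rather than measured values.

```python
import numpy as np
from scipy.integrate import solve_ivp

kB, u = 1.380649e-23, 1.66053907e-27
m_er, m_dy = 168 * u, 164 * u
wbar = 2 * np.pi * (409 * 26 * 391) ** (1 / 3)        # compressed-trap geometric mean (rad/s)
xi = 4 * m_er * m_dy / (m_er + m_dy) ** 2             # fraction of kB*(T_dy - T_er) exchanged per collision

def dT_er_dt(t, y, sigma, T_dy_of_t, N_dy):
    T_er, T_dy = y[0], T_dy_of_t(t)
    s = T_er / m_er + T_dy / m_dy                      # enters overlap density and relative speed
    n_ov = N_dy * (wbar**2 / (2 * np.pi * kB * s)) ** 1.5   # simplified overlap density seen by an Er atom
    v_rel = np.sqrt(8 * kB * s / np.pi)                # mean relative thermal speed
    gamma = sigma * n_ov * v_rel                       # collision rate per Er atom
    return [gamma * xi * (T_dy - T_er) / 3.0]          # heat capacity 3*kB per atom

# Placeholder Dy temperature trace (K) and trial cross section (m^2)
T_dy_trace = lambda t: 1.8e-6 - 0.6e-6 * np.exp(-t / 0.1)
sol = solve_ivp(dT_er_dt, (0.0, 0.4), [0.5e-6],
                args=(5e-17, T_dy_trace, 9e4), dense_output=True)
print(f"Er temperature after 400 ms: {sol.y[0, -1] * 1e6:.2f} uK")
```

In an actual analysis, σ would be varied to minimize the squared difference between the integrated erbium temperature and the measured erbium temperature trace, with T_dy_trace interpolated from the data.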
The simple 1/T^2 picture becomes somewhat complicated in the case of a binary mixture due to stronger Efimov effects, which lead to a temperature-dependent modulation of loss relative to the simple 1/T^2 prediction. In particular, the parameter s_0, which characterizes the strength of the three-body Efimov potential, is approximately 1.006 for identical bosons, but approximately 0.41 for our binary mixture [58,59]. The fractional importance of these temperature-dependent modifications scales as e^(-π s_0) [55], making them a minor correction for identical bosons, but a potentially important effect in mixtures. It is possible that such effects contribute to deviations of our data from a 1/T^2 form, but a true calculation would require knowledge of short-range inelastic processes in our system.
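The quoted values of s_0 translate directly into the size of the Efimov modulation through the e^(-π s_0) factor mentioned above; a short check makes the contrast between the single-species and mixture cases explicit.

```python
import math

# Suppression factor e^(-pi*s0) for the two s0 values quoted in the text.
for label, s0 in [("identical bosons", 1.006), ("Er-Dy mixture", 0.41)]:
    print(f"{label}: exp(-pi*s0) = {math.exp(-math.pi * s0):.3f}")
# -> roughly 0.04 for identical bosons (minor correction)
#    and roughly 0.28 for the mixture (potentially important)
```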
Laser-induced breakdown spectroscopy for rapid detection of corrosiveness in concrete

Identification of corrosiveness in concrete is necessary for the evaluation of building strength. In this study, identification of reinforced concrete corrosion was successfully conducted using the laser-induced breakdown spectroscopy (LIBS) method, with the sodium element serving as a fingerprint in the sample. A low-power neodymium-doped yttrium aluminum garnet (Nd:YAG) laser (1064 nm, 34 mJ, 7 ns) was used as the energy source. The laser beam was focused on the surface of concrete samples containing different concentrations of sodium chloride (0.1%-1%). The experiment was carried out under atmospheric pressure. The sodium emission line at Na I 589.59 nm was successfully detected, and a linear calibration curve of the sodium line in concrete containing different concentrations of sodium chloride (NaCl) was constructed. The linearity of the curve confirms that the sodium line can be used as a fingerprint for evaluating concrete strength.

Introduction

Reinforced concrete plays an important role as a basic component of modern infrastructure, such as buildings, bridges, and highways [1,2]. Identification of concrete corrosion is therefore very important, because corrosion can cause cracks in concrete that, under certain conditions, can lead to the collapse of buildings and endanger human life. It also increases the funds required for infrastructure repair [3]. In marine environments, the service life of concrete is reduced by direct contact with the high concentration of sodium chloride in sea water [4]. Corrosion caused by the penetration of sodium chloride through concrete cracks slowly reduces its strength; if this continues for a long time, the concrete will eventually fail [5][6][7]. For these reasons, the detection of concrete corrosion is an important concern. Several methods exist to detect concrete corrosion, such as the surface potential (SP) technique, embedded corrosion instruments (ECI) [8], optical fiber sensors [9], and ultrasonic tomography (UT) [10]. These methods have disadvantages, namely that the instruments have to be placed directly on the concrete, so the installation and equipment are relatively complex; some cannot be used on small or thin samples, and some are relatively expensive. Another, simpler way to detect corrosion of concrete is to identify the elements present in the concrete. Methods that can be used for this include X-ray fluorescence (XRF) [11] and inductively coupled plasma emission spectroscopy (ICP-ES) [12]. However, these methods are not very practical because they require complicated sample preparation, which lengthens the analysis. Laser-induced breakdown spectroscopy (LIBS), on the other hand, addresses these shortcomings. LIBS uses a pulsed laser to evaporate and excite a small mass of the sample surface, creating a plasma that contains characteristic emission lines for each component of the sample [13,14]. The method requires very simple or even no sample preparation, can be used for any kind of material under any ambient pressure, and provides rapid material analysis [14,15]. It can be applied to any kind of material, including metals [16] and non-metals [15,17]. In this study, the identification of concrete corrosion using the LIBS method is conducted, and concrete spectra containing sodium are presented.
Measurements of plasma stability and a calibration curve are also carried out to demonstrate the stability of our equipment.

Method

In this study, a pulsed Nd:YAG laser (New Wave Research, Polaris II, 20 Hz) with a wavelength of 1064 nm was used as the energy source. The laser beam was reflected by a silver mirror and then focused by a convex lens onto the sample, which was located inside the chamber. A luminous plasma was produced when the laser beam impinged on the sample surface. The induced plasma emission was detected by an optical multichannel analyzer (OMA, LamdaVission SA-100W-HPCB1024/C type) coupled through an optical fiber. The experimental set-up used in this study is shown in Fig. 1. The laser beam energy, repetition rate and pulse width were 34 mJ, 10 Hz, and 7 ns, respectively. The study was conducted under atmospheric pressure. The samples were concrete cement with added sodium chloride. Eight samples were used, containing different concentrations of sodium from sodium chloride (NaCl): 0.1%, 0.25%, 0.5%, 0.75%, and 1%.

Results and discussion

In this experiment, a laser pulse with a narrow pulse width (7 ns) was used, so the laser peak power is very high. The laser beam was focused by a convex lens, which further increases the power density at the sample. When the laser beam hits the sample surface, a small amount of the sample evaporates. Due to the high peak power of the laser, the evaporated sample then undergoes atomization into its constituent elements, followed by ionization and excitation. After excitation, the atoms undergo de-excitation by releasing their energy in the form of electromagnetic radiation at characteristic wavelengths. The resulting light emission is called laser plasma, and the color of the plasma produced depends on the constituent elements. The plasma emission was then detected by the OMA system, yielding the intensity and wavelength of the sample elements. The wavelength of each peak (the elements with relatively high intensity) was then matched with the NIST standard reference data to identify the elements. Figure 2(a) shows the emission spectrum of the concrete sample with (red line) and without (blue line) the addition of NaCl. In that spectrum, we can clearly see the spectral lines of Ca and Na in the concrete containing sodium (red line), whereas no Na line (Na I 589.59 nm) appears at all in the concrete without added sodium. Calcium is emitted from the concrete material itself, which contains Ca as a major element. The higher peaks are the ionic Ca lines at wavelengths of 393.37 nm and 396.85 nm, while the neutral sodium line lies at 589.59 nm. This result confirms that the present method can be employed to detect Na in concrete and thereby evaluate the concrete for corrosiveness. Figure 2(b) shows the emission spectra obtained from the concrete samples with different sodium concentrations. The red line shows the concrete sample with the highest sodium intensity (1% sodium), while the blue line shows the spectrum with the lowest sodium intensity (0.1% sodium). It was confirmed that the higher the concentration of sodium, the higher the sodium emission intensity, as displayed in Fig. 2(b). Sodium is one of the causes of corrosiveness in concrete, and it can clearly be seen using LIBS.
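A minimal sketch of the calibration analysis described in the text follows: a least-squares line through the Na I 589.59 nm peak intensity versus NaCl concentration, together with the R^2 value. The intensity values are placeholders for illustration, not the measured data.

```python
import numpy as np

conc = np.array([0.1, 0.25, 0.5, 0.75, 1.0])           # NaCl concentration (%)
I_na = np.array([120.0, 290.0, 610.0, 880.0, 1190.0])  # Na I 589.59 nm peak intensity (arb. units)

slope, intercept = np.polyfit(conc, I_na, 1)           # linear calibration I = a*c + b
pred = slope * conc + intercept
r2 = 1.0 - np.sum((I_na - pred) ** 2) / np.sum((I_na - I_na.mean()) ** 2)
print(f"I = {slope:.0f}*c + {intercept:.0f},  R^2 = {r2:.4f}")

# Reading an unknown sample's NaCl concentration off the calibration line:
c_unknown = (750.0 - intercept) / slope
print(f"estimated NaCl concentration: {c_unknown:.2f} %")
```

Once such a line is established, the sodium concentration of an unknown concrete sample can be estimated from its measured Na line intensity, which is the quantitative use of the sodium fingerprint discussed below.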
LIBS can therefore be used to detect corrosiveness in concrete. The spectra above also show that LIBS can detect both atomic and ionic lines of the components. In addition, we assessed plasma stability by calculating the ratio of the Ca and Na intensities over several shots. The Ca-to-Na intensity ratios for successive shots were 0.332, 0.332, 0.337, 0.353 and 0.336, respectively. The plasma stability curve is shown in Figure 3(a) (Figure 3: (a) plasma stability curve, (b) calibration curve). This indicates that LIBS records a relatively constant Ca-to-Na intensity ratio over several shots, confirming that the plasma is quite stable and has potential for quantitative analysis. Thus, quantitative analysis of concrete corrosion by means of sodium detection appears feasible. A typical calibration curve for the concrete samples containing various concentrations of sodium is shown in Figure 3(b). The curve shows the relation between the sodium concentration and the intensity of sodium detected at 589.59 nm. The detected sodium intensity and the sodium concentration in the sample exhibit a linear relationship, with a least-squares fit of R^2 = 0.9984.

Conclusion

Corrosion detection by identifying the sodium content in concrete using the LIBS method has been successfully performed. LIBS can be used for analysis with simple or even no sample preparation, requires relatively inexpensive equipment, and provides rapid sample analysis. Using the LIBS method, the calcium and sodium spectral lines in concrete can be clearly identified. The concrete sample with 1% sodium content shows the highest sodium intensity, while the sample with 0.1% sodium addition shows the lowest. To assess the stability of the method, we also calculated the plasma stability curve, which shows that LIBS detects the plasma with a relatively constant intensity ratio; the plasma emission is therefore quite stable, making it possible to use the method to detect the corrosiveness of reinforced concrete.