ReorientDiff: Diffusion Model based Reorientation for Object Manipulation
The ability to manipulate objects into desired configurations is a fundamental requirement for robots to complete various practical applications. While certain goals can be achieved by picking and placing the objects of interest directly, object reorientation is needed for precise placement in most tasks. In such scenarios, the object must be reoriented and re-positioned into intermediate poses that facilitate accurate placement at the target pose. To this end, we propose a reorientation planning method, ReorientDiff, that utilizes a diffusion model-based approach. The proposed method employs both visual inputs from the scene and goal-specific language prompts to plan intermediate reorientation poses. Specifically, the scene and language-task information are mapped into a joint scene-task representation feature space, which is subsequently leveraged to condition the diffusion model. The diffusion model samples intermediate poses based on the representation using classifier-free guidance and then uses gradients of learned feasibility-score models for implicit iterative pose refinement. The proposed method is evaluated on a set of YCB objects with a suction gripper, demonstrating a success rate of 95.2% in simulation. Overall, our study presents a promising approach to address the reorientation challenge in manipulation by learning a conditional distribution, which is an effective way to move towards more generalizable object manipulation. For more results, check out our website: https://utkarshmishra04.github.io/ReorientDiff.
I. INTRODUCTION
Rearranging objects into specific poses is a fundamental task. It is not only essential for everyday activities at home but also plays a critical role in industrial applications like packing and assembly lines. Performing such a task requires extracting object information from visual-sensor data and planning a pick-place sequence [2], [3]. While a single-step pick-place sequence is a viable solution, placing the object at a specific position and orientation is not always feasible. Reorientation is an effective strategy when successfully changing an object's pose allows its placement at the target pose [1]. Such a strategy ensures feasible intermediate transition poses in scenarios without common grasps between the current pose and an object's desired placement pose.
The problem of finding reorientation poses is traditionally approached via rejection sampling based on finding successful grasps between the current pose and an intermediate pose, and between the intermediate pose and the target pose. While previous classical approaches achieve this by using trajectory planners [4] to plan motion from the current pose to the desired pose via diverse candidate intermediate poses, such an exhaustive search is expensive in time and is limited by the choice of the number of intermediate pose options. Recently, there have been efforts to improve the reorientation process via a data-driven rejection sampling solution using learned models [1] that predict the feasibility score of an intermediate pose w.r.t. feasible grasps in the current and target pose. While their method improves the success rate and planning time, the algorithm requires processing a significantly large number of candidate random samples and specifying the target object's placement pose. The former limits scalability, and the latter challenges generalizability. Lately, with the advances in language-descriptor foundation models like CLIP [5], which projects images and texts to a common feature space, target object specifications can be directly correlated between visual information and suitable language commands, thus empowering human-robot interaction. This motivated us to explore grounding the problem statement of reorientation on language and hence embed semantic knowledge of the task with the spatial structure of the scene [6].
In this paper, we introduce ReorientDiff, a diffusion model-based generative method that restructures the reorientation pose generation pipeline as a conditional distribution learning problem. Such a method enables us to directly sample feasible reorientation poses without rejection sampling, thus improving scalability. Our contributions can be summarized as follows: Learning a distribution of intermediate poses: For a given pile of objects, a target object, and its target placement location, we formulate a conditional distribution of feasible intermediate poses. As compared to rejection sampling using a random prior, our approach aims at providing a learned prior to efficiently sample high-quality reorientation poses. Leveraging the multi-modality of diffusion models, this distribution encompasses all poses reachable from both the current pose and the target pose.
Flexibly sampling based on possible grasp poses: It is necessary to make sure that the grasp pose w.r.t. the object is constant during one pick-place transition. To achieve this, we flexibly sample intermediate poses from the learned distribution based on feasible grasp poses using classifier guidance via pre-trained success classifiers [7], [1]. Such models implicitly refine the sampled poses and operate individually for both transitions during reorientation. Hence, the learned distribution can be used for any possible grasp pose based on kino-dynamic feasibility directly at inference.
Representing target placement location via natural language: We leverage CLIP [5] to generate information embeddings from visual input and task descriptions in natural language. We further use these embeddings as conditions for learning the conditional distribution. While this has been explored in recent literature [6], we see this as a substantial improvement over the baseline.
In the proposed approach, we combine generic classifier-free conditional sampling [8] with classifier-guided sampling [9] to sample from diffusion models. To validate the performance of ReorientDiff, we consider reorientation of objects in the YCB dataset [10] that are feasible for suction grippers. For each selected object, we choose suitable locations on multiple shelf levels and target orientations.
II. RELATED WORK
Object manipulation and reorientation. Finding a grasp pose that is feasible for both the current and target location is a widely employed strategy for pick-and-place operations [11], [12], [13]. Such problems are usually solved in two steps: deciding an appropriate placement pose (within a region of interest) and searching for common grasps. In order to ensure feasible target placement, prior works have mostly relied on known object geometries [11], [12], vision-based object representations [2], [14], or segmentation and depth maps of the pre-specified target object [15], [16], [1]. These strategies have led to several object rearrangement methods [6], [17], [3]. Unlike most prior works that assume the availability of common grasps by default, in complex manipulation scenarios where no common grasps exist, reorientation becomes mandatory: the object needs to be reoriented to an intermediate pose and regrasped to place it at the target location. Such a scenario has been traditionally tackled via rejection sampling strategies and recently improved via regression-based methods. We also aim to develop a learning-based method.
Learning for object manipulation. While prior works have predominantly incorporated trajectory planners [4], they have employed learning strategies to decide the target object and its placement pose, as discussed in the previous subsection. Additionally, task descriptions in natural language have been very effective for generalized pick-place tasks in planar tabletop [6] and 3D [17] manipulation. Such language descriptions can be embedded into the learning pipeline via foundation models like CLIP [5], which encodes visual and language information into a common representation space. This has been further extended towards language-conditioned object rearrangement planning [18], [19] and supplying high-level instructions for long-horizon planning [20].
Recently, reorientation problems have been solved by planning to reorient objects using extrinsic supports [21], [22], which enables re-grasping the object in a desired way. The above methods are regression-based and limited to modeling only one solution pose. Such approaches cannot cater to the multiple possible solutions of the same problem. In such a case, rejection sampling is still beneficial and can be performed using learned feasibility prediction models [1]. We want to develop a pipeline that can still learn about all feasible poses without analyzing extensive random samples.
Generative models for object manipulation. For pick-and-place and reorientation tasks, there can be multiple feasible grasps and reorientation poses, respectively. Hence, generative models offer an option to learn them as conditional distributions. Prior works have explored VAEs for planning grasps [7] using visible point clouds of objects. In this direction, diffusion models have been shown to be advantageous for robotics [23], [24], [25], [26], [27]. Recent works have demonstrated multi-modal distribution learning using diffusion models for finding target poses [18], [28] and learning policies [23], [24], [25]. In addition to such properties, we also plan to leverage the flexible sampling and conditioning strategies offered by diffusion models to incorporate additional conditions at inference without re-training.
III. PRELIMINARY: DIFFUSION MODELS
Consider samples x_0 from an unknown data distribution q(x_0); diffusion models [29] learn to estimate this distribution with a parameterized model p_θ(x_0) using the given samples. The procedure consists of two steps: the forward and the reverse diffusion processes. The former continuously injects Gaussian noise into x_0 to create a Markov chain with latents x_{1:K} following transitions

q(x_k \mid x_{k-1}) = \mathcal{N}\big(x_k;\; \sqrt{1-\beta_k}\, x_{k-1},\; \beta_k I\big), \quad k = 1, \dots, K, \quad (1)

where {β_k} is the noising schedule. The reverse diffusion learns to denoise the data starting from x_K ∼ N(0, I) and following

p_\theta(x_{k-1} \mid x_k) = \mathcal{N}\big(x_{k-1};\; \mu_\theta(x_k, k),\; \beta_k I\big). \quad (2)

The parameterized model ε_θ(x_k, k) is called the score function, and it is trained to predict the injected perturbations under the noising schedule by the score-matching objective [31]

\arg\min_\theta \; \mathbb{E}_{k, x_0, \epsilon} \big\| \epsilon - \epsilon_\theta\big(\sqrt{\bar{\alpha}_k}\, x_0 + \sqrt{1-\bar{\alpha}_k}\, \epsilon,\; k\big) \big\|^2, \quad (3)

where ᾱ_k = ∏_{i=1}^{k} (1 − β_i) and ε ∼ N(0, I). In particular, such a score function represents the gradient of the learned probability distribution as

\epsilon_\theta(x_k, k) \propto \nabla_{x_k} \log p(x_k). \quad (4)

IV. REORIENTATION

Reorientation consists of solving two problems simultaneously: finding a pose that is reachable from the current pose and that, after the effect of gravity, results in a pose that makes placement at the target pose achievable (as shown in Figure 1). Once we have an estimate of the current and target pose, it is intuitive that there will be a set of poses that satisfy reorientability. However, only a small subset of such reorientable poses will be valid under the provided kino-dynamic constraints on grasp poses. Identifying a candidate sample from this subset by either brute-force sampling or optimization is computationally expensive and has to be done for every new scenario.
To circumvent the above challenges, we propose a generative modeling approach to sample from the subset of valid reorientation poses. More specifically, our method learns the distribution of all reorientable poses using a conditional diffusion model and uses classifiers to guide sampling towards valid poses directly during inference based on the provided grasp poses. Hence, we divide the problem into three segments: i) regression-based end-to-end learning for finding the target object and placement pose from the scene and task description (scenario), ii) learning the distribution of all reorientable poses for a given scenario once the object specifications are known, and iii) learning grasp feasibility classifiers for selecting only the valid reorientation poses.
To achieve this, we discuss our formulation for constructing the scene-task representation, calculating grasp poses from object poses, and learning grasp feasibility classifiers below. The diffusion model training and inference are discussed in the next section.
A. Constructing Generic Scene-Task Representations
A scene-task representation is a compact embedding of all available information present in the scene and specified by the user. We define a scene as the location and occupancy of the place from which a target object should be picked, and a task as the language prompt containing the descriptions for selecting the target object and deciding placement poses. A top-down RGB-D camera provides an image I ∈ R^{H×W×3} and a heightmap H ∈ R^{H×W×1} as the description of the pile. For learning the semantic and spatial embeddings [6], [17], we use the pre-trained CLIP foundation model and obtain semantic embeddings from the image I and language L. We sequence these embeddings with the spatial embeddings for target object segmentation to get a joint embedding sequence Φ as the generic scene-task representation, as shown in Figure 2(b). The embedding is further used to predict the target object and the final placement pose.
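The construction of Φ can be illustrated with a short sketch. The snippet below assumes the open-source `clip` package for the frozen text and image encoders and an illustrative `spatial_encoder` for the heightmap branch; the actual fusion architecture, embedding dimensions, and training heads used in ReorientDiff are not specified here and should be treated as assumptions.

# Minimal sketch: joint scene-task embedding from frozen CLIP + a learned
# spatial encoder (assumed to output CLIP-sized, e.g. 512-d, features).
import torch
import clip

device = "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
model.eval()

def scene_task_embedding(rgb_pil, heightmap, prompt, spatial_encoder):
    """Return a token sequence [image, text, spatial] used as Phi."""
    with torch.no_grad():
        img_emb = model.encode_image(preprocess(rgb_pil).unsqueeze(0).to(device))
        txt_emb = model.encode_text(clip.tokenize([prompt]).to(device))
    # The spatial encoder (e.g. a small CNN over the heightmap) is trained
    # end-to-end with the downstream segmentation/pose heads; assumed here.
    spatial_emb = spatial_encoder(heightmap.unsqueeze(0))
    return torch.stack([img_emb.float(), txt_emb.float(), spatial_emb], dim=1)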
B. Sampling Grasp Poses
We generate grasp poses by following the classical approach of converting the heightmap into a point cloud representation and eventually into a point-normal representation [1]. The predicted target object segmentation of the scene is then used to obtain the surface normals of the target object. After performing edge masking using the Laplacian of the surface normals, the remaining point-normals on the surface are feasible grasp poses. While we sample grasp poses η_1 for picking the object from the pile in the aforementioned manner, we assume that we have the mesh of the selected object for sampling grasp poses η_2 for placing the object at the predicted pose.
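A rough sketch of this pipeline is given below: heightmap to point cloud, surface normals from the heightmap gradient, a Laplacian edge mask, and candidate suction grasps restricted to the target object's segmentation. The pixel size and edge threshold are assumed example values, not the paper's parameters.

import numpy as np
from scipy import ndimage

def sample_grasps(heightmap, segmentation, pixel_size=0.002, edge_thresh=0.01):
    h, w = heightmap.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    points = np.stack([xs * pixel_size, ys * pixel_size, heightmap], axis=-1)

    # Surface normals from the heightmap gradient (top-down camera assumption).
    dz_dy, dz_dx = np.gradient(heightmap, pixel_size)
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(heightmap)])
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)

    # Mask out depth edges with a Laplacian filter; keep only the target object.
    edges = np.abs(ndimage.laplace(heightmap)) > edge_thresh
    valid = segmentation & ~edges

    return points[valid], normals[valid]   # candidate suction grasp point-normals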
C. Feasibility Score Models
Following prior works [7], [1], [19], a feasibility prediction model is important for early evaluation and rejection of unfavorable samples. Such a feasibility model predicts the probability that a given grasp pose will successfully grasp an object in some candidate pose for a specified scene representation. The phenomenon of grasp success evaluation in a dynamic reorientation pose, as addressed by [1], is particularly interesting for our setup. Modeling the dynamics of every object is non-trivial and adds to the complexity; hence the feasibility model implicitly accounts for the dynamics of the object after deactivating the grasp. For checking the feasibility, i.e., the probability of success (y), of sampled grasps for candidate reorientation poses q, we train two models (see the sketch following this list):
• For predicting the success of reorientation from the current pose in a pile to a candidate pose, given pick grasp poses (η_1) and the scene representation, denoted as M_1(y|η_1, q, Φ).
• For predicting the success of the post-grasp-deactivation pose from the candidate pose and placement grasp poses (η_2), denoted as M_2(y|η_2, q, Φ).
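The sketch below shows one plausible form of M_1 and M_2: small MLPs over the concatenated grasp pose, candidate reorientation pose, and scene-task embedding, with a sigmoid success head trained on binary labels. The exact architecture and input dimensions are assumptions, not the paper's specification.

import torch
import torch.nn as nn

class FeasibilityModel(nn.Module):
    def __init__(self, grasp_dim=7, pose_dim=7, phi_dim=512, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(grasp_dim + pose_dim + phi_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, grasp, pose, phi):
        logits = self.net(torch.cat([grasp, pose, phi], dim=-1))
        return torch.sigmoid(logits)       # P(success | grasp, pose, Phi)

# M1 scores pick grasps eta_1, M2 scores placement grasps eta_2; both are
# trained with binary cross-entropy on recorded success labels.
M1 = FeasibilityModel()
M2 = FeasibilityModel()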
V. REORIENTDIFF: DIFFUSION FOR REORIENTATION
We aim to generate intermediate reorientation poses for the target object which enable subsequent placement at the desired pose and are reachable from the current pose. We introduce a diffusion model-based approach to sample the most probable successful reorientation poses (q) conditioned on the scene representation prior (Φ), denoted as p(q|Φ), which already contains the spatial and semantic information about the scene and the task. The denoising process can be further flexibly conditioned by sampling from modified distributions of the form

p_h(q) \propto p(q \mid \Phi)\, h(q, \Phi), \quad (5)

where h(q, Φ) can represent several grasp success probability heuristics. By separating the grasp success from reorientation candidate sampling, the diffusion model trained for reorientation poses can be reused for a varied selection of picking (η_1) and placement grasp poses (η_2).
A. Classifier-free Conditional Pose Generation
Following the distribution defined in (5), we use classifier-free guidance [8] to sample high-likelihood reorientation poses for a particular scene-task representation. We train a score network [31], ε_θ(q_k, k, Φ) ∝ ∇_{q_k} log p(q_k|Φ), to denoise from q_K ∼ N(0, I) to possible reorientation poses q_0 through a K-step reverse diffusion denoising process. For each step, we calculate the guided noise estimate ε̂_k as

\hat{\epsilon}_k = (1 + w_c)\, \epsilon_\theta(q_k, k, \Phi) - w_c\, \epsilon_\theta(q_k, k). \quad (6)

The scalar w_c implicitly guides the reverse diffusion towards poses that best satisfy the scene-task representation. Further, we calculate the sample for the next, (k−1)-th, step using the DDIM [30] sampling strategy and ε̂_k as follows:

q_{k-1} = \sqrt{\bar{\alpha}_{k-1}} \left( \frac{q_k - \sqrt{1-\bar{\alpha}_k}\, \hat{\epsilon}_k}{\sqrt{\bar{\alpha}_k}} \right) + \sqrt{1-\bar{\alpha}_{k-1}}\, \hat{\epsilon}_k, \quad (7)

where ᾱ_k is as described in section III.
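One guided reverse step, combining the classifier-free estimate (6) with the DDIM update (7), can be sketched as follows. The convention that passing Φ = None yields the unconditional estimate (e.g. via condition dropout during training), and the value of w_c, are assumptions for illustration.

import torch

def ddim_step(eps_model, q_k, k, phi, alphas_bar, w_c=2.0):
    """One reverse DDIM step with classifier-free guidance; assumes 1 <= k < K."""
    # Eq. (6): classifier-free guided noise estimate.
    eps_cond = eps_model(q_k, k, phi)
    eps_uncond = eps_model(q_k, k, None)
    eps_hat = (1.0 + w_c) * eps_cond - w_c * eps_uncond

    # Eq. (7): deterministic DDIM update from step k to k-1.
    ab_k, ab_prev = alphas_bar[k], alphas_bar[k - 1]
    q0_pred = (q_k - (1.0 - ab_k).sqrt() * eps_hat) / ab_k.sqrt()
    return ab_prev.sqrt() * q0_pred + (1.0 - ab_prev).sqrt() * eps_hat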
B. Feasibility Guided Pose Refinement
We use the two feasibility-score prediction models (M_1 and M_2), which are pre-trained to predict grasp feasibility for (picking grasp, reorientation pose) pairs and (placement grasp, reorientation pose) pairs, respectively. In such a case, the scores can be converted into probability distributions for each heuristic, defined, for each i = 1, 2, as

h_i(q_k, \Phi) = M_i\big(y \mid \eta_i,\; \hat{q}_k^{\,0},\; \Phi\big).

Following the classifier-based guidance [9] formulation for the heuristics, the reverse diffusion can be formulated as

p_h(q_k \mid q_{k+1}, y, \Phi) \propto p_\theta(q_k \mid q_{k+1}, \Phi) \prod_{i=1,2} h_i(q_k, \Phi), \quad (8)

where q̂_k^0 is the sample proposed at diffusion step k and defined as

\hat{q}_k^{\,0} = \frac{q_k - \sqrt{1-\bar{\alpha}_k}\, \hat{\epsilon}_k}{\sqrt{\bar{\alpha}_k}}. \quad (9)

Considering first-order Taylor approximations of the heuristics and the standard reverse-process Gaussian N(μ_θ(q_k, k, Φ), β_k I) as described in section III, we get the new mean μ_{θ,h}(q_k, k, Φ) for the distribution p_h(q_k|q_{k+1}, y, Φ) in (8) as

\mu_{\theta,h}(q_k, k, \Phi) = \mu_\theta(q_k, k, \Phi) + \beta_k\, g_k, \qquad g_k = \nabla_{q_k} \sum_{i=1,2} \log h_i(q_k, \Phi). \quad (10)

In view of (2), we then obtain the modified score \tilde{\epsilon}_k = \hat{\epsilon}_k - \sqrt{1-\bar{\alpha}_k}\, g_k. We notice that injecting noise into g_k, as in stochastic DDIM, can slightly improve the performance. We calculate the final q_{k−1} using the refined ε_k in (7). A visual clarification of the forward and reverse diffusion is shown in Figure 2(a).
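A minimal sketch of this refinement, assuming the standard classifier-guidance scaling and differentiable feasibility models (the exact gradient scaling used in the paper is not reproduced here):

import torch

def guided_eps(eps_hat, q_k, k, phi, eta1, eta2, M1, M2, alphas_bar):
    """Fold the feasibility gradient g_k into the guided noise estimate."""
    q_k = q_k.detach().requires_grad_(True)
    ab_k = alphas_bar[k]
    q0_pred = (q_k - (1.0 - ab_k).sqrt() * eps_hat.detach()) / ab_k.sqrt()  # Eq. (9)
    log_h = (torch.log(M1(eta1, q0_pred, phi) + 1e-8)
             + torch.log(M2(eta2, q0_pred, phi) + 1e-8)).sum()
    g_k = torch.autograd.grad(log_h, q_k)[0]          # feasibility gradient
    return eps_hat - (1.0 - ab_k).sqrt() * g_k        # modified score estimate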
VI. RESULTS: SIMULATION
Based on the environment setup as discussed in section IV, we create datasets, train diffusion and feasibility score models and evaluate them in simulation.
A. Dataset Generation and Training
We use PyBullet [32] and an OMPL [33]-based motion planner to solve for collision-free paths between the current pose and a candidate reorientation pose, and from the reorientation pose to the ground-truth placement pose, for a diverse set of YCB objects and target locations. We sampled approximately 40000 candidate poses following Wada et al. [1]. The goal properties were converted into modular language instructions, and the success of pick and place for both steps was recorded. The scene and task properties were used to construct the joint visual-language embedding space, which was further used to train the feasibility score models using binary success labels. Eventually, we train a conditional diffusion model using only the successful reorientation poses. Such a diffusion model is reusable for a diverse set of grasp poses when combined with the feasibility score models.
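The conditional diffusion training described above follows the score-matching objective of section III. A minimal sketch of one training step on the successful reorientation poses is given below; the pose network, its conditioning on Φ, and the linear beta schedule are assumed placeholders rather than the exact configuration used in the paper.

import torch

K = 100
betas = torch.linspace(1e-4, 0.02, K)                 # assumed noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def training_step(eps_model, optimizer, q0, phi):
    """q0: batch of successful reorientation poses, phi: scene-task embedding."""
    k = torch.randint(0, K, (q0.shape[0],))
    noise = torch.randn_like(q0)
    ab = alphas_bar[k].view(-1, 1)
    q_k = ab.sqrt() * q0 + (1.0 - ab).sqrt() * noise  # forward noising, Eq. (1)
    loss = torch.nn.functional.mse_loss(eps_model(q_k, k, phi), noise)  # Eq. (3)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()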
B. Performance Evaluation: Scene-Task Representation
To evaluate the quality of the scene-task embedding network, we analyze the accuracy of the object selection and placement pose prediction along with the error in the predicted segmentation. We show a visual analysis in Figure 3, where the output segmentation and the predicted placement pose in the shelf are shown for three scenes and tasks. For accurate shelf-level estimation, we round each object's predicted height to the nearest shelf-level height, and similar post-processing is conducted for the object orientation. In our experiments, the object selection network was 100% accurate, and the number of wrongly classified pixels was about 1% of the complete image on average over 100 random samples. The average error in predicting the height of the target placement after post-processing is around 8 mm, and the mean error in the yaw angle of the predicted pose is 0.3 rad.
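The post-processing above amounts to snapping the predicted height and yaw to the nearest discrete option. A small sketch follows; the shelf heights and the allowed yaw set are assumed example values, not the paper's configuration.

import numpy as np

SHELF_HEIGHTS = np.array([0.10, 0.35, 0.60])             # assumed shelf levels (m)
ALLOWED_YAWS = np.deg2rad(np.array([0, 90, 180, 270]))   # assumed target yaws

def postprocess(pred_height, pred_yaw):
    height = SHELF_HEIGHTS[np.argmin(np.abs(SHELF_HEIGHTS - pred_height))]
    # Wrap angle differences before taking the nearest allowed yaw.
    diffs = np.angle(np.exp(1j * (ALLOWED_YAWS - pred_yaw)))
    yaw = ALLOWED_YAWS[np.argmin(np.abs(diffs))]
    return height, yaw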
C. Performance Evaluation: Diffusion with Guidance
The trained classifier-free conditional diffusion model and the feasibility score models are used to perform the reverse diffusion using classifier-free guidance with and without feasibility score guidance. Experiments comparing the performance of both methods are shown in Figure 4 for a set of YCB objects [10] and different scene-task scenarios, where only 40 candidate poses are sampled and the top 10 high-likelihood poses are selected. The comparison shows that while classifier-free guidance alone is good enough to sample high-likelihood reorientation poses, the primary purpose of the feasibility score gradients is to reduce the variance in the pose generation and ensure a high success probability. A numerical analysis of the overall success is shown and compared with the rejection sampling-based baseline [1] in Table I. The reorientation success percentage holds different relevance as compared to the baseline. The baseline does two-step reverse rejection sampling in which the reorientation search is conducted over candidates that are feasible for placement, so there might be a scenario with no solution. For ReorientDiff, the reorientation success measures the capability of the diffusion model to generalize to poses which ensure reorientability and scope for future placement. Higher reorientation success with lower placement success would indicate that the model is short-sighted and gives importance to a single-step success metric. From Table I, we ensure high reorientation success along with better placement success. The overall success is based on the accurate placement of the object from the reoriented pose, and it represents the successful completion of a task. The metric is measured by calculating the difference between the desired pose and the pose after final placement.
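The overall-success metric above compares the desired pose with the pose reached after final placement. A minimal sketch of one way to compute that difference as a translation error plus a rotation angle is given below; the tolerances are assumed, not the paper's exact thresholds.

import numpy as np
from scipy.spatial.transform import Rotation as R

def placement_error(T_desired, T_final):
    """Both poses as 4x4 homogeneous transforms; returns (meters, radians)."""
    t_err = np.linalg.norm(T_desired[:3, 3] - T_final[:3, 3])
    R_rel = R.from_matrix(T_desired[:3, :3].T @ T_final[:3, :3])
    ang_err = R_rel.magnitude()          # geodesic rotation angle
    return t_err, ang_err

def is_success(T_desired, T_final, t_tol=0.02, ang_tol=np.deg2rad(10)):
    t_err, ang_err = placement_error(T_desired, T_final)
    return t_err < t_tol and ang_err < ang_tol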
D. Performance Evaluation: K-Step Reverse Diffusion
Sampling from a trained diffusion model is flexible and can be achieved using different levels of discretization between x_K ∼ N(0, I) and meaningful reorientation poses. We perform the complete analysis for multiple values of the number of reverse denoising steps K, as shown in Table II.
ReorientDiff performs well with only 20 sampling steps.
Following our analysis of performance, we explored the time consumption for the overall planning of a successful reorientation pose from a given scene and corresponding task information. We provide the recorded timings for all of our ablations and the baseline in Table III. Our findings show that ReorientDiff leverages the fast sampling strategies of FastDPM [34] to offset the computationally heavy gradient calculations in the reverse denoising steps. Without the guidance from the feasibility-score models, classifier-free guidance requires even less time than the baseline, ReorientBot, as shown in Table III. Hence, from our visual and empirical analysis, ReorientDiff demonstrates that formulating the problem of reorientation as learning a conditional distribution is an efficient and scalable way to move towards more generalizable object manipulation.
VII. CONCLUSION
Diffusion models are powerful generative models capable of modeling (conditional) distributions. Our proposed method, ReorientDiff, exploits the capabilities of such models to predict reorientation poses conditioned on a compact scene-task representation embedding containing information about the target object and its placement location. Further, the samples are refined using learned feasibility-score models to reduce uncertainty and ensure the success of the planned intermediate poses. With only 10 candidate reorientation poses, we achieved an overall success rate of 95.2% across various objects. With the possible inclusion of point-cloud-based object representations [28], such a method can generalize to a more diverse set of objects.
Fig. 1: Reorientation for precise target placement. The above figure represents the phenomenon of reorientation in which an object from a cluttered pile has to be placed precisely in a shelf (target position shown). As the object cannot be directly placed at the target location, our proposed method, ReorientDiff, samples a reorientation pose using a learned conditional distribution by a diffusion model. Such a proposed reorientation pose acts as a transition for facilitating successful placement. We also consider and take advantage of the object dynamics, as introduced by Wada et al. [1], by which we ensure that un-grasping an object in an unstable pose will eventually allow the object to settle at some favourable pose.
Fig. 2: Method Overview. (a) Forward and reverse diffusion process. ReorientDiff uses a combination of classifier-free guidance and classifier-based implicit refinement to sample from the learned distribution of intermediate poses. It ensures high success feasibility with minimal variance by guiding the scene-task sampling using feasibility score gradients. (b) Conditioned score function. ReorientDiff learns the target distribution of feasible reorientation poses conditioned on the scene (pile of objects) and task (language prompt), jointly represented as Φ. We use the pre-trained frozen CLIP text and image embeddings to formulate a joint embedding, trained end-to-end to encode information about the placement pose, target object, and current pose. Further, the current and target poses are processed to obtain feasible grasps (η_1 and η_2), which are used to calculate the feasibility gradients g_k in (a). The joint embedding is used as a sequence to condition the transformer-based score network ε_θ(q_k, k, Φ) via cross-attention to obtain the classifier-free score estimate in (a).
Fig. 3: Visual Analysis of Scene-Task Network Performance. The scene-task network maps the visual image (row 2) of the pile (row 1) and the language inputs (bottom row) to a feature space, which is used to predict the placement location (row 4) and target object segmentation (row 3).
Fig. 4: Reverse Diffusion for Reorientation Pose Generation. The reverse sampling process at k = 20, 12, 4, 0 for K = 20 in four different scene-task scenarios comprising the Cracker Box, Mustard Bottle and Sugar Box in different target orientations is shown above. The scene is shown on the left side of every sub-figure and consists of the pile with the target object and the predicted placement location on the shelf. The language prompt defining each task is mentioned below each sub-figure. It consists of either an absolute (the object's name) or a relative (heaviest/lightest) reference to the object and details about the target placement.
TABLE I: Success evaluation of the proposed method as compared to the rejection sampling-based baseline ReorientBot. The ReorientDiff algorithm was tested on more than 100 different scene-task settings consisting of an equal distribution of the selected objects and all the orientations. A task is considered a success if it is completed at least once in 3 random seeds.
TABLE II: Success evaluation with different levels of discretization while sampling using ReorientDiff.
TABLE III: Computational analysis of the planning time for ReorientDiff (K = 20) with and without feasibility score guidance, along with the baseline.
The small GTPase, nucleolar GTP-binding protein 1 (NOG1), has a novel role in plant innate immunity
Plant defense responses at the stomata and apoplast are the most important early events during plant-bacteria interactions. The key components of the signaling underlying stomatal defense and nonhost resistance have not been fully characterized. Here we report that the newly identified small GTPase, Nucleolar GTP-binding protein 1 (NOG1), functions in plant immunity against bacterial pathogens. Virus-induced gene silencing of NOG1 compromised nonhost resistance in N. benthamiana and tomato. Comparative genomic analysis showed that two NOG1 copies are present in all known plant species: NOG1-1 and NOG1-2. Gene downregulation and overexpression studies of NOG1-1 and NOG1-2 in Arabidopsis revealed the novel function of these genes in nonhost resistance and stomatal defense against bacterial pathogens, respectively. Specifically, NOG1-2 regulates guard cell signaling in response to biotic and abiotic stimuli through jasmonic acid (JA)- and abscisic acid (ABA)-mediated pathways. The results here provide valuable information on the new functional role of the small GTPase NOG1 in guard cell signaling and early plant defense in response to bacterial pathogens.
Plant pathogens that are able to cause disease in a given plant species are considered host pathogens, while those that are unable to do so are nonhost pathogens. Nonhost resistance is a more widespread and durable plant defense mechanism that is achieved by a combination of preformed and inducible defenses 1,2. Preventing the entry of the pathogen into plant tissue is one of the key aspects of nonhost resistance, also known as stomatal innate immunity 3-6.
In contrast to many fungal pathogens that are able to penetrate the plant epidermis, bacterial pathogens rely on wounds or natural openings to enter the apoplast 7,8. One well-characterized means of entry is through the stomata, microscopic pores on the plant surface that allow gas exchange between plant tissues and the atmosphere. Stomatal opening and closure depend on the environmental and physiological conditions of the plant and are regulated by two guard cells that surround the pore 9. Pathogen-Associated Molecular Patterns (PAMPs) such as the flagellin-derived peptide flg22 and the bacterial lipopolysaccharide (LPS) can trigger stomatal closure 7. However, adapted plant bacterial pathogens are able to re-open stomata by means of virulence factors such as the phytotoxin coronatine (COR), a mimic of the active JA-Ile hormone 4,7. In the absence of COR, transcription factors related to JA signaling such as MYC2 interact with a repressor complex formed by Jasmonate-ZIM domain (JAZ) proteins to repress transcription of JA-responsive genes 10. In the presence of COR, JAZ proteins bind the F-box protein Coronatine insensitive 1 (COI1), a subunit of an E3 ubiquitin ligase complex, and are subjected to 26S proteasome-mediated degradation 11. Although JA-regulated genes play a critical role in the JA-mediated guard cell signaling pathway and stomatal immunity, it still remains unclear which genetic components are directly implicated in this sophisticated network that regulates stomatal defense against bacterial pathogens. In the present study, we identified two small G-proteins, Nucleolar GTP-binding protein 1-1 (NOG1-1) and 1-2 (NOG1-2), which play an important role in the regulation of nonhost resistance and stomatal defense against bacterial pathogens.
NOG1 is involved in nonhost resistance in Nicotiana benthamiana and tomato. A Tobacco rattle virus (TRV)-based virus-induced gene silencing (VIGS)-mediated fast-forward genetics approach was used in N. benthamiana to identify plant genes that play a role in nonhost resistance against bacterial pathogens 26. One of the cDNAs identified from this approach had homology to a functionally uncharacterized gene with a small GTPase domain, NOG1. Upon inoculation with the nonhost pathogen Pseudomonas syringae pv. tomato T1, bacterial multiplication was significantly increased (>4 logs) in the inoculated leaves when compared to the non-silenced control (TRV::00), which was asymptomatic (Fig. 1A).
To assess how broad the NOG1-mediated nonhost resistance was, NbNOG1-silenced N. benthamiana plants were further analyzed for their response to additional nonhost pathogens such as P. syringae pv. glycinea (a soybean pathogen) and Xanthomonas campestris pv. vesicatoria (a pepper pathogen). The down-regulation of NbNOG1 was confirmed in NbNOG1-silenced N. benthamiana plants (Fig. S1A). Both pathogens multiplied to significantly higher levels at 7 days post-inoculation (dpi) in NOG1-silenced plants compared to wild-type and non-silenced control plants (Fig. S1B,C). Inoculation with the host pathogen P. syringae pv. tabaci caused disease symptoms and bacterial multiplication in both NbNOG1-silenced plants and non-silenced controls with no significant difference at 5 dpi, although more bacteria were found in infected leaves at 2 dpi (Fig. 1B).
To determine whether downregulation of NOG1 impairs elicitation of the hypersensitive response (HR), a visual inspection of HR symptom development was performed in NbNOG1-silenced and control plants after infiltration with high inoculum of the nonhost pathogens P. syringae pv. tomato T1 and X. campestris pv. vesicatoria, or by transient co-expression of the resistance (R) genes Pto or Cf9 with their corresponding avirulence genes AvrPto or AvrCf9, respectively, or by transient expression of the PAMP elicitor INF1. HR symptoms were observed in the control plants but not in the NbNOG1-silenced plants at the time points tested (Fig. 1C), suggesting that NOG1 also plays a role in elicitation of the HR triggered by nonhost pathogens, gene-for-gene interactions and PAMPs.
To determine if NOG1 is involved in nonhost resistance in other plant species, we used N. benthamiana NOG1 to silence its orthologous gene in tomato (SlNOG1) by VIGS. SlNOG1-silenced tomato plants and non-silenced controls were inoculated with the tomato nonhost pathogen P. syringae pv. tabaci, which causes wildfire disease in tobacco. Similar to the findings in N. benthamiana, downregulation of SlNOG1 compromised nonhost disease resistance in tomato, resulting in disease symptoms and increased bacterial multiplication when compared to the control (Fig. 2A). Inoculation with the host pathogen P. syringae pv. tomato DC3000 caused slightly more severe disease symptoms accompanied by a higher bacterial titer in the SlNOG1-silenced plants than in control plants (Fig. 2B). Taken together, these results suggest that NOG1 is required for nonhost resistance against bacterial pathogens in N. benthamiana and tomato.
NOG1-1 and NOG1-2 are members of the small GTP-binding family protein OBG in Arabidopsis. NbNOG1 showed a high degree of similarity to proteins belonging to the small GTP-binding family protein OBG, such as yeast Nog1p (42.7%) and human GTP binding protein 4 (GTPBP4; 48.6%) (Fig. S2A and Table S1). Sequence homologs of NOG1 were identified in a wide range of plant species. Two copies of NbNOG1 or SlNOG1, with nucleotide identities of 99.1% and 97.5%, were identified in N. benthamiana and tomato, respectively. We identified two Arabidopsis genes, At1g50920 (NOG1-1) and At1g10300 (NOG1-2), as NbNOG1 homologs. Both genes are 79% identical at the nucleotide level and 76% similar at the amino acid level, suggesting selection for functional divergence and adaptation. Using the GTPase domain sequence of NOG1-1 and NOG1-2, a total of 10 orthologs were identified in Arabidopsis. Phylogenetic analysis revealed that NOG1-1 and NOG1-2 are highly similar to the small GTP-binding family proteins Obg, DRG, and ERG in Arabidopsis (Fig. S2B). Annotation of the NOG1-2 sequence in The Arabidopsis Information Resource (TAIR; www.arabidopsis.org) shows a 2,064 bp gene containing two exons and one intron that is predicted to encode a protein of 687 amino acids. However, results from reverse transcription-PCR (RT-PCR) of full-length NOG1-2 followed by cDNA synthesis and Sanger sequencing showed that no intron sequence was present, and that it encodes a truncated protein of 346 amino acids. This was further confirmed by western blot analysis (Figs S3A and S6C). A NOG1-2 protein of ~40 kDa was detected by a GTPBP4 antibody (N-terminal region) in Arabidopsis and, as expected, His-tag-fused NOG1-2 was ~50 kDa. The reason the TAIR annotation shows the presence of an intron in NOG1-2 is the presence of a stop codon at the predicted intron. To investigate whether the early termination occurs only in Col-0 or in other Arabidopsis ecotypes, NOG1-2 amino acid sequences were examined in 19 representative ecotypes. Interestingly, the truncated version of NOG1-2 is only present in Col-0, Ler-0, Rsch-4 and Wil-2 (Fig. S3B; Table S2).

Fig. 1: NbNOG1-silenced (TRV::NbNOG1) and non-silenced control (TRV::00) N. benthamiana plants were vacuum-infiltrated with the nonhost pathogen P. syringae pv. tomato T1 (pDSK-GFPuv) or the host pathogen P. syringae pv. tabaci (pDSK-GFPuv) to observe symptom development (left panels) or bacterial multiplication 3 days post-inoculation (dpi; right panels). An increase in GFP fluorescence associated with bacterial multiplication was observed in TRV::NbNOG1 plants but not in TRV::00. To monitor bacterial multiplication in TRV::NbNOG1 and TRV::00, N. benthamiana plants were vacuum-infiltrated with P. syringae pv. tomato T1 (A) and P. syringae pv. tabaci (B) and bacterial multiplication was quantified at various dpi as indicated. Bars represent means and standard deviations for three independent experiments. Asterisks above bars indicate a statistically significant difference between NbNOG1-silenced plants and the control (Student's t-test, P < 0.05). (C) HR was compared between NbNOG1-silenced and control N. benthamiana plants. Plants were syringe-infiltrated with P. syringae pv. tomato T1 or X. campestris pv. vesicatoria (1 × 10^6 CFU/ml), or with Agrobacterium strains for transient expression of Pto and AvrPto, or Cf-9 and Avr-9, or INF1. Agrobacterium strain GV2260 with empty vector (EV) was used as a control. HR was observed at different hours post inoculation (hpi). This experiment was repeated at least three times with similar results. Each experiment had five replications.
This early translational termination does not affect the GTPase domain. Furthermore, sequence alignment with NOG1-2 homologs of other eukaryotes suggested that the NOG1-2 start codon begins 87 bp downstream of the start codon annotated by TAIR (Table S1). According to the protein expression results, the 87-bp deletion does not affect translation (Fig. S3A). This 87-bp-deleted NOG1-2 was used for all experiments in this study. In contrast to NOG1-2, NOG1-1 sequences were highly similar among different ecotypes of Arabidopsis.
Arabidopsis lines expressing the β-glucuronidase (GUS) reporter gene under the control of the NOG1-1 or NOG1-2 promoter showed expression of GUS in guard cells and hydathodes, which are natural openings for the entry of bacterial pathogens (Fig. S4A,B). In addition, these lines showed distinct patterns of GUS expression of pNOG1-1-GUS and pNOG1-2-GUS in different tissues (Fig. S4). For example, NOG1-1 was expressed in most parts of the flower, while NOG1-2 expression was only found in the flower petal.

Fig. 2: The bacterial growth of both pathogens was significantly higher in SlNOG1-silenced plants than in TRV::00 plants. Bacterial growth was measured at 2 and 6 dpi. Bars represent means and standard deviations for three independent experiments. Asterisks represent a statistically significant difference between treatments for equivalent time points using Student's t-test (P < 0.05).
To verify the expression of NOG1-1 and NOG1-2 in vivo, the changes in GUS activity in the transgenic plants were determined following treatment with biotic and abiotic stimuli. As shown in Fig. 3A, both NOG1-1 and NOG1-2 expression were induced in response to ABA, PAMPs and bacterial pathogens (Fig. 3B). These results suggest that NOG1-1 and NOG1-2 are involved in defense responses to both biotic and abiotic stresses.

NOG1-1 is necessary for defense responses against bacterial pathogens. As described in Fig. 1, NbNOG1- and SlNOG1-silenced N. benthamiana and tomato plants, respectively, compromised nonhost resistance. The function of NOG1-1 and NOG1-2 in nonhost resistance was tested in Arabidopsis. Because nog1-1 T-DNA insertion mutants were not available, we generated RNA interference (RNAi) lines to downregulate NOG1-1 expression. Among 23 transgenic lines, two RNAi lines, RNAi2 and RNAi10, that showed ~50% downregulation of NOG1-1 were selected for further experiments (Fig. S5A). The expression of NOG1-2 was not altered in NOG1-1-RNAi plants (Fig. S6B). Similar to NbNOG1- and SlNOG1-silenced plants that showed stunted growth, NOG1-1-RNAi plants were slightly smaller than wild type (Fig. S5B).
In contrast to NOG1-1, a T-DNA insertion line for NOG1-2, SALK_043706, was identified and obtained from the Arabidopsis Biological Resource Center. The T-DNA insertion is located in the 3′UTR of the NOG1-2 gene, which presumably disrupts the polyadenylation signal and affects transcript stability (Fig. S6A). The qRT-PCR and western blot experiments showed that NOG1-2 transcripts and NOG1-2 protein were significantly reduced in the SALK_043706 line (Fig. S6B,C). SALK_043706 (nog1-2) was transformed with a construct containing the NOG1-2 native promoter and coding region but without the 3′UTR for a complementation experiment. NOG1-2 expression was slightly increased in the complemented line (NOG1-2-comp) but still comparable to the expression of NOG1-2 in Col-0 (Fig. S6B). In contrast to NOG1-1-RNAi plants, nog1-2 showed a wild-type phenotype. The number of stomata per leaf area was not different in NOG1-1-RNAi or nog1-2 plants when compared to Col-0. We generated a double-gene knockdown plant by transforming nog1-2 with an NOG1-1-RNAi construct. Two lines, nog1-2 NOG1-1-RNAiA and nog1-2 NOG1-1-RNAiB, which showed ~50% NOG1-1 downregulation, were selected for further experiments (Fig. S5C). In addition, NOG1-1 was overexpressed in Arabidopsis Col-0 (NOG1-1 OE).

Fig. 3: Arabidopsis wild-type (Col-0) plants were individually syringe-infiltrated with ABA (10 µM), Flg22 (20 µM), or LPS (100 ng), or flood-inoculated with the pathogens P. syringae pv. maculicola (Psm) and P. syringae pv. tabaci (Pstab) at 1 × 10^4 CFU/ml. RNA was isolated from tissue samples harvested at 0 hr, 6 hr, 12 hr and 24 hr, and qRT-PCR was performed. Bars indicate relative gene expression in comparison with the housekeeping gene Ubiquitin (UBQ5) and relative to the 0 hr time point, which was set to 1. Different letters above bars indicate a statistically significant difference within a treatment using two-way ANOVA (P < 0.01). Error bars represent the standard deviation of three biological replicates (three technical replicates for each biological replicate).
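The relative expression reported in Fig. 3 is normalized to UBQ5 and to the 0 hr sample. A minimal sketch of that calculation is shown below, assuming the standard 2^-ΔΔCt method (the exact formula is not stated in the text); the Ct values are placeholders.

import numpy as np

def relative_expression(ct_target, ct_ubq5, ct_target_0hr, ct_ubq5_0hr):
    d_ct = ct_target - ct_ubq5                 # normalize to UBQ5
    d_ct_0 = ct_target_0hr - ct_ubq5_0hr       # calibrator: 0 hr sample
    return 2.0 ** (-(d_ct - d_ct_0))           # fold change relative to 0 hr

# Example with placeholder Ct values (not measured data).
print(relative_expression(ct_target=24.1, ct_ubq5=18.0,
                          ct_target_0hr=26.3, ct_ubq5_0hr=18.2))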
The double-gene knockdown lines, along with Col-0, single-gene knockdown and overexpressor lines, were flood-inoculated 27 with Pstab (Fig. 4A) or Psm (Fig. 4B). NOG1-1-RNAi lines and the double-gene knockdown lines had ~10-fold increased bacterial growth when compared to Col-0 (Fig. 4A). The nog1-2 line did not support more growth of Pstab at 3 dpi even though a ~10-fold increase in bacterial growth was observed at 1 dpi when compared to Col-0 (Fig. 4A). Both nog1-2 and NOG1-1-RNAi lines showed slightly enhanced susceptibility to the host pathogen Psm by supporting higher bacterial growth (Fig. 4B). Double-gene knockdown lines showed an additive effect in comparison with single-gene knockdown lines for hyper-susceptibility to host pathogen inoculation. Strikingly, NOG1-1-OE lines exhibited fewer disease symptoms and harbored fewer bacteria compared to Col-0 (Fig. 4B).

NOG1-2 is involved in the regulation of stomatal closure in response to pathogens and abiotic stimuli. NOG1-1 and NOG1-2 were induced by ABA (Fig. 3), and therefore the role of these genes in stomatal defense was studied. Arabidopsis epidermal peels were prepared from wild-type Col-0, nog1-2, NOG1-1-RNAi2, and NOG1-2-comp plants and treated with either ABA, Flg22, the nonhost pathogen (Pstab), or the host pathogen (Psm). In response to ABA, Flg22, and Pstab, the NOG1-1-RNAi2 and NOG1-2-comp lines closed stomata similarly to Col-0, while nog1-2 stomata were not completely closed (Fig. 5A). Treatment with the host pathogen Psm caused stomata to remain open in all the lines tested, because this pathogen is known to produce COR, which can reopen stomata. These results were quantified by measuring the stomatal aperture (Fig. 5B). The aperture size of stomata in Col-0, NOG1-1-RNAi2, and NOG1-2-comp lines decreased by 50% to 80% upon treatments that close stomata, while the stomatal aperture in nog1-2, remarkably, was only reduced by 10% to 30% (Fig. 5B).
The observation that nog1-2 is defective in closing stomata during biotic stress suggested that nog1-2 could allow more pathogen entry. To test this hypothesis, epidermal peels of nog1-2, NOG1-1-RNAi2, and Col-0 were individually incubated with Psm and Pstab expressing GFPuv 28, respectively. Bacterial entry into nog1-2 and Col-0 plants was quantified at 1 hour post inoculation (hpi) and 3 hpi. The number of host bacterial cells (Psm) was greater in nog1-2 at 1 hpi but was not different from wild-type and NOG1-1-RNAi2 at 3 hpi, since the host pathogen was able to reopen stomata (Fig. 5C). The number of nonhost bacterial cells (Pstab) inside nog1-2 leaves was ~10-fold higher than in Col-0 and the NOG1-1-RNAi line at both 1 and 3 hpi (Fig. 5C). In contrast to nog1-2, NOG1-1-RNAi lines did not show any difference in entry of bacteria through stomata when compared to Col-0 (Fig. 5C). This agrees with the results shown in Fig. 5A, in which stomatal closure in NOG1-1-RNAi2 in response to ABA, flg22, and the nonhost bacterial pathogen (Pstab) was not altered even though NOG1-1 is highly expressed in guard cells (Fig. S4). It is possible that NOG1-1 has a role in stomatal aperture regulation and/or development, but the transcript reduction levels in the RNAi lines are not sufficient to observe defects in stomatal aperture regulation.
NOG1-2 has GTPase activity and positively regulates bacterial pathogen- and abiotic-mediated guard cell signalling. To examine the role of NOG1-2, the biochemical activity of recombinant, purified AtNOG1-2 was assessed in a hydrolysis and phosphate release assay (Fig. 6A, left panel, and Fig. S6D). The JAZ9 protein, which has been shown to play a role in stomatal closure 29 but has no known GTPase domain, was used as a negative control. Our results show that NOG1-2 has GTP-binding and GTPase activity. Furthermore, NOG1-2 was strongly expressed in guard cells of Arabidopsis transgenic plants expressing an AtNOG1-2-GFP fusion driven by the AtNOG1-2 promoter (Fig. 6A, right panel). NOG1-2 was localized to the nucleus in guard cells of Arabidopsis. In N. benthamiana, NbNOG1-GFP (35S::NOG1) was localized to the nuclei and the cytoplasmic membrane (Fig. 6A).
To further examine the involvement of NOG1-2 in the JA- and ABA-mediated signaling pathways, the sensitivity of nog1-2 to MeJA and ABA was tested. As reported earlier 30, several JAZ mutants (jaz9 was used in this study) showed sensitivity to MeJA because of functional compensation by other JAZs, while the coi1 mutant showed less sensitivity to MeJA (Fig. 6B). Interestingly, nog1-2 also showed reduced sensitivity to MeJA. It was also found that nog1-2 plants are more susceptible to drought stress and less responsive to ABA, suggesting that NOG1-2 is involved in the JA and ABA signaling pathways (Fig. 6C).
In order to dissect whether NOG1-2 is closely related to other genes involved in guard cell signaling, gene expression profiling was conducted in nog1-2 lines in response to ABA, coronatine (COR), and host and nonhost bacterial pathogens (Fig. S7). A total of 12 functionally characterized guard cell signaling genes, OST1, OST2, rbohD, MPK4, MPK9, MPK12, ABI1, SLAC1, RIN4, SLAH3, CPK4 and CPK6, were examined for their expression patterns upon exposure to both abiotic and biotic stimuli in nog1-2 lines. After ABA treatment, OST2 expression was significantly increased in Col-0 at both 12 and 24 hr, but the expression was decreased in nog1-2 at 24 hr. The expression of MPK4, MPK9, ABI1, and CPK6 was highly upregulated in Col-0 at 12 hr, while these genes were not notably induced in nog1-2. After treatment with COR, rbohD, MPK4, MPK12, and SLAC1 were rapidly induced in Col-0 at 12 hr, but not in nog1-2. MPK9 and RIN4 expression was notably decreased in

Fig. 5: To observe stomatal behavior, epidermal peels of Col-0, nog1-2, NOG1-1-RNAi2, and NOG1-2-complemented lines were treated with stomata-opening buffer (KCl-MES), ABA (10 µM or 50 µM), flg22 (20 µM), or Pstab and Psm at 1 × 10^4 CFU/ml. Microscopic images were taken 3 hr after inoculation. The aperture size of stomata was measured after 30 min for ABA, 1 hr for flg22 and LPS, and 3 hr for Pstab and Psm. Asterisks indicate a significant difference by Student's t-test (P < 0.05). Error bars indicate the standard error for counting 50 stomata per epidermal peel. Three samples were examined for each treatment, and the experiment was repeated at least three times with similar results. (C) Bacterial entry through stomata in nog1-2 and NOG1-1-RNAi2 lines. To quantify bacterial entry, detached Arabidopsis leaves from wild-type Col-0, nog1-2 and NOG1-1-RNAi2 were floated in bacterial suspensions (1 × 10^4 CFU/ml) of the nonhost pathogen (Pstab) or host pathogen (Psm). After 1 hpi and 3 hpi, leaves were surface-sterilized with 10% bleach, ground, serially diluted and plated on KB media (B). After 2 days, the number of bacterial colonies was counted. This experiment was repeated three times with similar results, with five replications in each experiment. Asterisks indicate a significant difference by Student's t-test (P < 0.05).
Transcriptome analysis reveals the regulation of NOG1-1 and NOG1-2 in plant innate immunity against bacterial pathogens. Transcriptome analysis was performed in Col-0, NOG1-1 RNAi, and nog1-2 lines without any treatment with biotic or abiotic stimuli using the Affymetrix GeneChip Arabidopsis Genome Array (Affymetrix). A total of 161 genes were identified as differentially expressed genes (DEGs) in the NOG1-1 RNAi and nog1-2 lines compared to Col-0 (Table S3). For nog1-2, only 14 DEGs were identified, nine upregulated and five downregulated. All of these genes are highly related to the signaling pathways of biotic and abiotic stress responses.

Fig. 6: Arrows represent nuclei in guard cells. One-week-old seedlings were observed for the localization of AtNOG1-2 under confocal laser microscopy. Scale bar is 10 µm. Atnog1-2 is less sensitive to JA than Col-0. (B) The Atnog1-2 line, compared to wild-type Col-0, is less sensitive to JA. Seeds of different Arabidopsis lines were grown on ½ MS medium plates with or without 30 and 50 µM MeJA, and 7 days later root lengths were measured. Three independent experiments were done, with at least 10 seedlings for each line. Bars represent means ± SD. Asterisks indicate a significant difference from Col-0 by Student's t-test (P < 0.05). (C) The mutation of AtNOG1-2 increases sensitivity to drought stress and ABA. Wild-type (Col-0) and nog1-2 plants were grown for four weeks (21 °C/14 hr day and 18 °C/10 hr night), then plants were dehydrated until drought symptoms appeared. After leaves were completely collapsed, plants were re-watered to revive them. nog1-2 seedlings are less sensitive to ABA. Seedlings of Col-0 and nog1-2 were grown on MS or MS with ABA (1 µM) for 2 weeks.
MAPMAN software was used to visualize the DEGs of NOG1-1 RNAi and nog1-2 to determine their putative roles in plant defense. Because of the very low number of DEGs in nog1-2, the DEGs for the NOG1-1 RNAi and nog1-2 lines were pooled for the analysis. The DEGs represented on the Arabidopsis microarray were classified into different functional groups using automated and manual annotation. The MAPMAN analysis identified that the common DEGs in NOG1-1 RNAi and nog1-2 were highly responsive to biotic and abiotic stresses (Fig. 7). Most of the down-regulated genes in both NOG1-1 RNAi and nog1-2 are involved in the signaling pathways for abiotic and biotic defense responses. The number of DEGs was significantly higher in NOG1-1 RNAi than in nog1-2.
Discussion
This study identified a small GTP-binding protein (GTPase), NOG1, as a novel player in plant immunity against bacterial pathogens. Two copies of this gene, NOG1-1 and NOG1-2, exist in plants and are required for nonhost resistance associated with apoplastic and stomatal defense. Stomatal closure in plants can be triggered by bacterial pathogens and PAMPs such as flg22 and LPS 4,5,7. The guard cell signaling pathway involved in PAMP- or pathogen-induced stomatal closure is still not fully understood. Only a few proteins, such as FLS2, COI1, MYC2 and MPK4, have been studied with respect to stomatal closure in response to phytobacterial pathogens 31. Also, Penetration 3 (PEN3) has been demonstrated to function in stomatal defense against fungal pathogens in Arabidopsis 32,33. The results reported here suggest that NOG1-2 may be an additional key regulator of stomatal closure in response to biotic and abiotic stimuli. Interestingly, NOG1-1 does not seem to play a major role in regulating stomatal closure but is involved in apoplastic defense against bacterial pathogens, indicating a possible interplay between NOG1-1 and NOG1-2 in plant innate immunity, such as regulation of stomatal opening and induction of plant defense responses.
Small GTPases have been studied extensively for their roles in cellular development and regulation of signal transduction in plants 13. More than 100 small GTPases are known from higher eukaryotes, which are generally classified into the Ras, Rho, Rab, Sar1/Arf and Ran families 34. Rho and Rab small GTPases have been widely studied for their roles in defense signaling against fungal and bacterial pathogens 35. NOG1-1 and NOG1-2 encode small GTPases that belong to the OBG family, whose function in plants has never been investigated. In mammals and yeast, the orthologs of NOG1 are GTPBP4 and Nog1p, which are essential for ribosome biogenesis and cell viability 23. Both GTPBP4 and Nog1p are known to be localized to the nucleus 24. GTPBP4 orthologs are highly conserved within their GTPase domains (Fig. S2A) and are found in many eukaryotes (http://www.genecards.org). Interestingly, NOG1/GTPBP4 orthologs are always present as a single copy in mammals, insects, and yeast, while two homologs are found in monocot and dicot plant species (http://www.phytozome.net). Only one copy of the NOG1 ortholog is present in two algae species (Chlamydomonas reinhardtii; XM_001698344 and Guillardia theta; XM_001698344), but two homologs are present in moss (Physcomitrella patens subsp. patens; XM_001698344 and XM_001761522). This finding suggests that higher plant species may need an additional copy of NOG1 for a plant-specific function, such as regulation of stomatal opening and early defense responses specific to plants.

Fig. 7: Four-week-old seedlings grown on half MS media were collected for RNA extraction. Three biological replicates were used for each of NOG1-1 RNAi and nog1-2 without any treatments. Color patterns from red (upregulation) to green (downregulation) indicate the change in gene expression.
The Arabidopsis genome has genes for 12 JAZ family proteins. It has been reported that single-gene mutations in the genes encoding JAZ2, JAZ5, JAZ7 or JAZ9 did not result in JA insensitivity as in coi1 mutants, suggesting functional redundancy among JAZ proteins in Arabidopsis 30. Furthermore, an Arabidopsis jaz1 jaz2 double mutant did not alter JA signaling 36. In Arabidopsis, several JAZs have been shown to interact with COI1 37 and repress the MYC2 transcription factor to regulate JA-mediated stomatal closure 10. COI1 functions as a receptor for JA and recruits JAZ proteins for ubiquitination and degradation via the 26S proteasome. It is uncertain whether the function of NOG1-2 in stomatal closure is associated with the JAZ/COI1-mediated JA signaling pathway. MYC2 is another key component of the JA signaling pathway 38. It has been reported that MYC2 interacts with all 12 JAZ proteins, further suggesting the redundant function of JAZs 39. MYC2 induces JA-responsive genes, and its activity is reduced by JAZ proteins. MYC2 has been shown to be phosphorylated by MPK6 in the regulation of seedling development and photomorphogenesis 40. Figure 7 shows that several genes involved in the MAPK signaling pathway are differentially expressed in NOG1-1 RNAi and nog1-2 lines. It will be interesting to determine if NOG1-2 can be phosphorylated by a kinase. There is evidence for the phosphorylation of small GTPases by kinases that enhances GTPase activity 41.
As shown in Fig. 6, nog1-2 plants are more susceptible to drought stress and less sensitive to ABA, indicating the involvement of NOG1-2 in the guard cell signaling pathway. The expression of OST2, MPK4, MPK9, ABI1, and CPK6, which are key players in guard cell ABA signal transduction, was significantly altered in the nog1-2 line after ABA treatment (Fig. S7). This finding suggests that NOG1-2 may be a key element upstream of guard cell-regulating and ABA-induced genes that interplays with a complex network of ABA signaling pathways. MPK4 is known to negatively regulate stomatal opening/closure in response to bacterial pathogens 42 . Our study also showed expression changes of MPK4 in response to PAMPs and bacterial pathogens in the nog1-2 line. It is known that MPK9 and MPK12 are highly expressed during ABA-induced and H2O2-induced stomatal closure 43 . These two genes are differentially expressed in the nog1-2 line compared to Col-0, suggesting that MPK9 and MPK12 function in the bacterial pathogen-induced guard cell signaling pathway.
In conclusion, we identified a novel role of NOG1 in plant innate immunity, and it will be important to further investigate the mechanism of the plant defense response mediated by NOG1. More interestingly, we identified the novel function of NOG1-2 in stomatal closure in response to biotic and abiotic stimuli. This warrants further investigation of the role of NOG1-2 in stomatal regulation through JA and ABA signaling. Nevertheless, the identification of NOG1 as one of the key regulators of stomatal aperture and plant innate immunity will become an important avenue to better understand plant responses to biotic and abiotic stresses.
Virus-induced gene silencing in N. benthamiana and tomato plants. The VIGS library used in this study for forward genetics screening was constructed using RNA from N. benthamiana plants treated with various biotic and abiotic stress-inducing elicitors. Agrobacterium tumefaciens GV2260 containing TRV1, TRV2::00 or TRV2::NOG1 was grown overnight on LB medium containing antibiotics (rifampicin, 25; kanamycin, 50) at 28 °C. Bacterial cells were harvested, resuspended in induction medium (10 mM MES, pH 5.5; 200 μM acetosyringone), and incubated at room temperature on an orbital shaker for 5 hr. Bacterial cultures containing TRV1 and TRV2 were mixed in equal ratios (OD600 = 1) and infiltrated into N. benthamiana or tomato leaves using a 1 ml needleless syringe 44 . The infiltrated plants were maintained in a greenhouse and used for studies 15 to 21 days post-infiltration. Table S6 lists all the primer information used in this study.
Hypersensitive response analysis. For nonhost pathogen-dependent HR, bacterial suspensions in MES buffer (10 mM MES, pH 6.5) were syringe-infiltrated into fully expanded N. benthamiana leaves to determine nonhost HR cell death. For R/Avr-dependent HR, leaves were infiltrated with a mixture of Agrobacterium tumefaciens expressing an Avr gene and its complementary Cf or Pto gene using a sterile needleless syringe. Pto and AvrPto, or Cf9 and AvrCf9, constructs were mixed at a 1:1 ratio before infiltration into N. benthamiana leaves. The agro-inoculated plants were maintained under standard growth conditions, and HR cell death in the inoculated area was examined and photographed.
Plant growth, pathogen inoculation, and bacterial growth assay. N. benthamiana and tomato plants were grown in a greenhouse. Silenced and control N. benthamiana plants were inoculated with appropriate bacterial pathogens. Bacterial strains were grown at 28 °C for 24 hr on KB medium containing antibiotics at the following concentrations (μg/ml): rifampicin, 50; kanamycin, 25; chloramphenicol, 25; and spectinomycin, 25. To prepare the bacterial inoculum, cultures were centrifuged at 5000 rpm for 10 min and the cells resuspended in water for bacterial growth assays using vacuum infiltration and spraying. The inoculated plants were then incubated in growth chambers at 90 to 100% relative humidity for the first 24 hr.
Arabidopsis thaliana T-DNA insertion lines SALK_043706 and SALK_072852, containing insertions in NOG1-2, were obtained from http://signal.salk.edu/cgi-bin/tdnaexpress. Wild-type Col-0 and T-DNA insertion lines were grown on 1/2 MS plates in a growth chamber at 21 °C with a 14-h photoperiod and a light intensity of about 100 μE m−2 s−1. Four-week-old plants were inoculated with appropriate host or nonhost bacterial pathogens, and bacterial growth was measured. For the bacterial growth assays in N. benthamiana and tomato, leaf samples from inoculated leaves were collected at specific time points after inoculation using a 0.5 cm leaf puncher. Leaf tissues were ground in sterile water, serially diluted and plated on KB plates supplemented with appropriate antibiotics. For the bacterial growth assays in Arabidopsis after flood-inoculation, inoculated leaves were surface-sterilized with 15% H2O2 for 3 min to eliminate epiphytic bacteria and then washed with sterile distilled water. The leaves were then homogenized in sterile distilled water, and serial dilutions were plated onto KB medium containing antibiotics. Bacterial growth was evaluated in three independent experiments.
Stomatal aperture measurements and bacterial entry assay. The stomatal aperture measurements were performed following the protocol available from the Melotto lab, University of California, Davis (http://melotto.ucdavis.edu/protocol_stomatal.htm) and a previous study 7 . Briefly, plants were conditioned to open their stomata by placing them under fluorescent light for at least 3 hr. Epidermal peels were then immediately floated on stomata-opening buffer (10 mM MES-Tris pH 6.1, 10 mM KCl) for 3 hr. At various time points, the epidermal peels were treated with ABA, flg22, LPS or bacterial pathogens. Epidermal peels were observed under a Nikon light microscope.
To determine bacterial entry via stomata, detached leaves from 2-week-old seedlings grown on ½-strength MS medium were floated on a bacterial suspension. After 1 hr or 3 hr of incubation, leaf surfaces were sterilized using 10% bleach (Clorox), and the leaves were then observed under a fluorescence microscope or plated on KB medium after serial dilution.
In vitro GTPase activity assay and phosphate release assay. The GTPase activity of NOG1-2 was evaluated using the EnzChek phosphate release assay kit (Thermo Fisher Scientific, NY). Phosphate (Pi) production was detected as a change in absorbance at 360 nm using a Spectramax M2 spectrophotometer (Molecular Devices, Sunnyvale, CA). The amount of Pi released was estimated from the corresponding values obtained with a standard curve. Data were plotted as nanomoles of Pi released min−1 mg−1 and fitted by nonlinear regression in SigmaPlot 11.0.
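Not part of the original methods, but to illustrate the final fitting step: a minimal sketch of how the phosphate-release rates could be fitted by nonlinear regression, assuming a Michaelis-Menten model (the paper does not state which model was used in SigmaPlot); the substrate concentrations and rates below are invented placeholders.

# Hypothetical sketch: fit phosphate-release rates (nmol Pi min^-1 mg^-1) to a
# Michaelis-Menten model by nonlinear regression, analogous to the SigmaPlot fit
# described in the text. All numerical values are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial velocity as a function of substrate (GTP) concentration."""
    return vmax * s / (km + s)

gtp_um = np.array([5, 10, 25, 50, 100, 250, 500], dtype=float)  # GTP, uM (assumed)
rate = np.array([0.8, 1.5, 2.9, 4.1, 5.2, 6.0, 6.3])            # nmol Pi min^-1 mg^-1 (assumed)

(vmax, km), _ = curve_fit(michaelis_menten, gtp_um, rate, p0=[6.0, 50.0])
print(f"Vmax ~ {vmax:.2f} nmol Pi min^-1 mg^-1, Km ~ {km:.1f} uM")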
Histochemical and fluorescent microscopy analyses.
To determine the expression patterns of NOG1-2 and NOG1-1, the promoters of NOG1-2 (1.2 kb) and NOG1-1 (0.9 kb) were fused to the GUS reporter gene. NOG1-1::GUS and NOG1-2::GUS transgenic seedlings were incubated with GUS staining solution at 37 °C. The staining solution was discarded, and chlorophyll was cleared by washing with 70% ethanol and keeping the leaves in ethanol for 72 hr. GUS activity was analyzed by bright-field transmitted light microscopy, and images were taken with a digital camera (Nikon). Confocal analysis of GFP expression was performed using a confocal microscope (Bio-Rad, CA).
Development of transgenic lines.
To complement the nog1-2 knockdown line, the full-length NOG1-2 coding region was cloned into pMDC162 under the control of the NOG1-2 native promoter. This construct was transformed into GV3101 and introduced into nog1-2 by Arabidopsis floral dip transformation. To knock down NOG1-1 in Col-0, a partial sequence of NOG1-1 (approximately 400 bp) was selected using the pssRNAit program (http://plantgrn.noble.org/pssRNAit/). This fragment was cloned into an RNAi vector (Invitrogen, NY) and transformed by Arabidopsis floral dip transformation. To generate the NOG1-1/NOG1-2 double knockdown line, the NOG1-1 RNAi construct was transformed into nog1-2. To examine the localization of NOG1-2, the full-length coding regions of both genes were cloned into either pMDC45 or pMDC83.
RNA extraction and quantitative real-time PCR. Total RNA was purified from Arabidopsis leaves infiltrated with water (mock control), the nonhost pathogen P. syringae pv. tabaci (Pstab), or the host pathogen P. syringae pv. maculicola (Psm). Total RNA was extracted using TRIzol (Invitrogen), and two treated or inoculated leaves were pooled to represent one biological replicate. Total RNA was treated with DNase I (Invitrogen), and 1 μg RNA was used to generate cDNA using Superscript III reverse transcriptase (Invitrogen) and oligo d(T)15-20 primers. The cDNA (1:20 dilution) was then used for real-time quantitative PCR using Power SYBR Green PCR master mix (Applied Biosystems, Foster City, CA, USA) with an ABI Prism 7900 HT sequence detection system (Applied Biosystems). Primers specific for AtUBQ5 were used to normalize small differences in template amounts. Average cycle threshold (CT) values calculated using Sequence Detection Systems (version 2.2.2; Applied Biosystems) from duplicate samples were used to determine the fold expression relative to controls. All primers used are shown in Table S4.
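For illustration only: the fold expression relative to controls can be derived from the normalized CT values, for example with the standard 2^-ddCt approach (the paper does not state the exact formula used); the gene and CT values below are placeholders, not data from this study.

# Hypothetical sketch of relative expression from qRT-PCR CT values using the
# 2^-ddCt method, normalizing the target gene to AtUBQ5; numbers are placeholders.
def fold_change(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    """Fold expression of a target gene in treated leaves relative to the mock control."""
    d_ct_treat = ct_target_treat - ct_ref_treat  # normalize to AtUBQ5 (treated)
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl     # normalize to AtUBQ5 (mock)
    dd_ct = d_ct_treat - d_ct_ctrl
    return 2 ** (-dd_ct)

# Example with made-up CT values (e.g. NOG1-2 after Pstab infiltration vs. mock):
print(fold_change(ct_target_treat=24.1, ct_ref_treat=20.0,
                  ct_target_ctrl=26.3, ct_ref_ctrl=20.1))  # ~4.3-fold induction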
Transcriptome analysis of nog1-1 and nog1-2 using Arabidopsis microarray. Arabidopsis seedlings were grown for seven days on ½ MS under controlled conditions with a 16 hr light, 8 hr dark cycle at 24 °C. Total RNA from three biological replicates of NOG1-1 RNAi, nog1-2, and Col-0 leaves was isolated, cleaned using the RNeasy MinElute Cleanup Kit (Qiagen, WN), and used for two-channel microarray analysis. RNA labelling and hybridization to Affymetrix ATH1 arrays were performed as described in the Affymetrix manual. Data normalization between chips was conducted using RMA (Robust Multichip Average) 45 . Gene selection based on the Associative T-test was performed using Matlab (MathWorks, Natick, MA) 46 . In this method, the background noise present between replicates and the technical noise arising during microarray experiments are measured by the residuals among a group of genes whose residuals are homoscedastic. Genes whose residuals between the compared sample pairs were significantly higher than the measured background noise level were considered differentially expressed. A selection threshold of 2-fold for up-regulated and 1.5-fold for down-regulated genes and a Bonferroni-corrected P value threshold of 2.19202E-06 were used for further analysis. The Bonferroni-corrected P value threshold was derived from 0.05/N, where N is the number of probe sets (22,810) on the chip.
Data Availability Statement. All the data presented in the manuscript will be made publicly available.
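Referring back to the gene-selection criteria described for the microarray analysis, the following is a small sketch (not the authors' Matlab code) of how the Bonferroni-corrected P threshold and the fold-change cut-offs could be applied; the data frame and column names are assumptions.

# Sketch of the microarray gene-selection step described in the text. Assumes a
# pandas DataFrame with per-probe fold change (mutant line vs. Col-0) and
# Associative T-test P values; column names are hypothetical.
import pandas as pd

N_PROBE_SETS = 22810
P_THRESHOLD = 0.05 / N_PROBE_SETS  # Bonferroni-corrected, ~2.19202e-06

def select_degs(df: pd.DataFrame) -> pd.DataFrame:
    """Keep probes passing the P threshold and the fold-change cut-offs."""
    significant = df["p_value"] < P_THRESHOLD
    up = df["fold_change"] >= 2.0          # at least 2-fold up-regulated
    down = df["fold_change"] <= 1.0 / 1.5  # at least 1.5-fold down-regulated
    return df[significant & (up | down)]

print(f"Bonferroni-corrected threshold: {P_THRESHOLD:.5e}")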
|
2023-02-17T15:24:12.508Z
|
2017-08-23T00:00:00.000
|
{
"year": 2017,
"sha1": "9a6001a1594c1a0aaaa6241ea9453a209d845d4a",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-08932-9.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "9a6001a1594c1a0aaaa6241ea9453a209d845d4a",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
}
|
115142346
|
pes2o/s2orc
|
v3-fos-license
|
Novel Resistive-Plate WELL sampling element for (S)DHCAL
Digital and Semi-Digital Hadronic Calorimeters ((S)DHCAL) have been suggested for future colliders as part of the particle-flow concept. Though studied mostly with RPC-based techniques, investigations have shown that MPGD-based sampling elements could outperform them. An attractive, industry-produced, robust, particle-tracking detector for large-area coverage, e.g. in (S)DHCAL, could be the novel single-stage Resistive Plate WELL (RPWELL). It is a single-sided THGEM coupled to the segmented readout electrode through a sheet of large bulk resistivity. We summarize here the preliminary test-beam results obtained with 6.5 mm thick (incl. electronics) 48 × 48 cm² RPWELL detectors. Two configurations are considered: a standalone RPWELL detector studied with 150 GeV muon and high-rate pion beams, and an RPWELL sampling element investigated within a small-(S)DHCAL prototype consisting of 7 resistive MICROMEGAS sampling elements followed by 5 RPWELL ones. The sampling elements were equipped with semi-digital readout electronics based on the MICROROC chip.
Introduction
The particle-flow [1] is the leading concept towards reaching the challenging targeted jet energy resolution in future collider experiments (σ_E/E = 30%/√E, corresponding to σ_E/E = 3% for 100 GeV jets). Particle-flow calorimeters [2,3] are key ingredients in the design of experiments optimized for this concept. Having very high granularity, they allow separating the energy deposited by the individual constituents of the jets and measuring the energy of each of them in the most adequate subsystem. Digital and Semi-Digital Hadronic Calorimeters ((S)DHCAL) are attractive tools to achieve very high granularity while using cost-effective readout solutions. A typical (S)DHCAL consists of alternating layers of absorbers and sampling elements. Hadronic showers are mostly formed in the absorber, whose material defines the total calorimeter depth. The resulting signals are measured by sampling pad-readout elements (typically of 1 cm²), which define the granularity. In (S)DHCAL, the measurement of the energy of individual particles relies on the approximately linear relation between the particle energy and the number of fired pads. Thus, the targeted jet-energy resolution calls for high detection efficiency at low average pad multiplicity. Detection elements based on the glass-RPC technology have so far been the most studied ones [4,5,6]. Depending on the operation voltage, they can yield an average pad multiplicity of 1.5-2 at 90-95% efficiency in 1 m² detectors [5,6]. Detection elements based on MICROMEGAS have demonstrated superior properties: 98% efficiency (at optimal operation voltage) with an average pad multiplicity close to unity in 1 m² detectors [7,8], also demonstrating a uniform response over the entire sensitive area. A detection efficiency of 95% at a similar average pad multiplicity was demonstrated with 16 × 16 cm² resistive-MICROMEGAS prototypes, introduced to reduce the probability of discharges induced by highly ionizing particles [9]. Elements based on THGEM structures, such as the RPWELL discussed below, offer a further alternative.
The Resistive Plate WELL
The Resistive Plate WELL (RPWELL) [11] follows a series of other resistive THick Gaseous Electron Multiplier (THGEM)-based sampling elements developed over the past years at the Weizmann Institute [12,13]. It is a robust, industrially mass-produced, single-stage particle-tracking gas-avalanche detector. With its discharge-free operation, also in harsh radiation fields, large dynamic range, close-to-unity MIP detection efficiency and ∼200 µm RMS resolution [14], it becomes an attractive new candidate for particle tracking over large-area coverage. As a few-millimeter-thin detector, it could become a candidate of choice as a sampling element for (S)DHCAL. The RPWELL (Fig. 1) is a single-sided THGEM electrode coupled to a segmented readout electrode through a thin sheet of material of large bulk resistivity (10⁸-10¹⁰ Ωcm). The latter has the role of quenching large-size avalanches and preventing discharge development. Past laboratory and accelerator studies have been performed with moderate-size prototypes, with 1 cm² square pads and SRS/APV25 readout electronics. They operated equally well in Ne- and Ar-based gas mixtures [15] and in intense hadronic beams. The figure of merit is a MIP detection efficiency ≥ 98% at ≤ 1.2 pad multiplicity (Fig. 2) [15].
With their application to (S)DHCAL in mind, techniques were developed for producing large-area (48 × 48 cm²), 4.5 mm thick (excluding electronics) detectors, incorporating 10¹⁰ Ωcm silicate-glass resistive plates (Fig. 3). Five such detectors were built and equipped with a pad anode (defining a circular-shaped active area) embedding ILC-(S)DHCAL MICROROC chips [16], resulting in a total thickness of 6.5 mm. The RPWELL detectors differed in their electrode quality, with thickness variations ranging from 5% (best) to 25% (worst), which significantly affected their stability and hence their performance.
Performance in a standalone mode
In August 2018, the first (S)DHCAL sampling-element prototype built (with 25% electrode-thickness variation) was investigated at CERN/SPS, in Ar(7%)CO₂, with muons and high-rate pions. Preliminary analysis results confirm that the performance of this prototype would be suitable for (S)DHCAL, since a ≥ 95% detection efficiency across most of the surface was achieved with a pad multiplicity of ∼1 in most events. The average pad multiplicity value was 1.7, due to a small number of events with tens of pads firing, probably indicating a discharge. Some efficiency variations, as well as the small number of discharges, are attributed to the large electrode-thickness variations (and thus gain variations).
Performance within a small-(S)DHCAL prototype
In November 2018, a small-(S)DHCAL prototype (Fig. 4) consisting of four 16 × 16 cm² bulk MICROMEGAS and three 48 × 48 cm² resistive MICROMEGAS sampling elements followed by five 48 × 48 cm² RPWELL ones was investigated at CERN/PS using a low-energy (2-6 GeV) pion beam. The 12 sampling elements were equipped with semi-digital readout electronics based on the MICROROC chip and read out with a single DAQ system. The RPWELLs with large thickness variations were excluded from some of the measurements. These were carried out with an 8-layer (S)DHCAL consisting of three 16 × 16 cm² and three 48 × 48 cm² resistive MICROMEGAS sampling elements followed by two 48 × 48 cm² RPWELL ones.
The hits associated with a single shower are grouped based on a time selection. A pion shower profile recorded in all the sampling elements of the 8-layer prototype and a beam profile in the 5 RPWELL detectors are shown in Figs. 5 and 6, respectively.
For each recorded shower, the shower origin was defined as the first layer in which at least three pads fired. This layer was denoted Layer 0 and served as a reference to define the depth (layer number) of the RPWELL sampling elements within the shower. The average number of RPWELL pads fired as a function of the shower depth is shown in Fig. 7 for different incoming-pion energies, with the lowest threshold level (0.8 fC) applied. The average number of pads fired as a function of the shower depth is shown in Fig. 8 for 4 GeV pions, at the three threshold levels. A preliminary evaluation of the total number of hits that would be recorded by a small-(S)DHCAL prototype consisting of only RPWELL-based sampling elements, as a function of the incoming pion energy, is shown in Fig. 9. For each pion energy, the total number of hits is estimated as the sum over shower depth of the average number of hits recorded at each depth. Some of the observed non-linearity could be attributed to leakage from the small-(S)DHCAL assembly.
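The following is a minimal sketch (not the collaboration's analysis code) of the bookkeeping described above: the shower origin is taken as the first layer with at least three fired pads, depths are counted from it, and the total-hit estimate is the sum over depth of the per-depth averages; the event representation is an assumption for illustration.

# Sketch of the shower-depth analysis described in the text; each event is
# assumed to be a list of fired-pad counts per layer (hypothetical format).
from collections import defaultdict

def shower_origin(pads_per_layer, min_pads=3):
    """Index of the first layer with at least `min_pads` fired pads (Layer 0), or None."""
    for layer, n_pads in enumerate(pads_per_layer):
        if n_pads >= min_pads:
            return layer
    return None

def average_hits_vs_depth(events):
    """Average number of fired pads as a function of depth behind the shower origin."""
    sums, counts = defaultdict(int), defaultdict(int)
    for pads_per_layer in events:
        origin = shower_origin(pads_per_layer)
        if origin is None:
            continue
        for depth, n_pads in enumerate(pads_per_layer[origin:]):
            sums[depth] += n_pads
            counts[depth] += 1
    return {d: sums[d] / counts[d] for d in sorted(sums)}

# Total-hit estimate for one beam energy: sum of the per-depth averages.
events = [[0, 1, 4, 9, 12, 7, 3, 1], [1, 5, 11, 14, 9, 4, 2, 0]]  # made-up pad counts
avg = average_hits_vs_depth(events)
print(avg, sum(avg.values()))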
Summary and discussion
First studies of RPWELL-based sampling elements for (S)DHCAL have been carried out in a standalone mode and within a small-(S)DHCAL prototype (also incorporating MICROMEGAS-based sampling elements). More stringent QA/QC tests will have to be applied in the future to ensure control of the WELL-electrode thickness to a level better than 5%. The 48 × 48 cm² RPWELL was operated under 150 GeV muon and high-rate pion beams. Apart from some instabilities attributed to the thickness variations, a detection efficiency greater than 95% and an average pad multiplicity close to unity were recorded. In the small-(S)DHCAL prototype, low-energy pion showers were recorded, and the response of the RPWELL is consistent with the shower depth. Based on the data collected, the estimation of the expected pion energy resolution in a full RPWELL-based (S)DHCAL is ongoing.
|
2019-04-11T06:16:18.000Z
|
2019-04-11T00:00:00.000
|
{
"year": 2019,
"sha1": "3a9ac0cedfd440b7a742537a8d8e07fd5ba6fc6d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1904.05545",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9f397bd1f475af337f263d050e768e12b5a57b0b",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
}
|
73064611
|
pes2o/s2orc
|
v3-fos-license
|
Incidence of glucose-6-phosphate dehydrogenase deficiency in anaemic patients attending General Hospital Kafanchan, Kaduna State, Nigeria
Zonal Headquarters N.W., Nigerian Institute for Trypanosomiasis Research, PMB 1147, Birnin Kebbi, Kebbi State, Nigeria. Chemical Pathology Department, Federal College of Veterinary and Medical Laboratory Technology, National Veterinary Research Institute (NVRI), Vom, Plateau State, Nigeria. Microbiology Department, Federal College of Veterinary and Medical Laboratory Technology, National Veterinary Research Institute (NVRI), Vom, Plateau State, Nigeria. Department of Chemical Pathology, School of Clinical Medicine, Igbinedion University, Okada, Nigeria.
INTRODUCTION
The glucose-6-phosphate dehydrogenase (G6PD) enzyme catalyzes the first step in the pentose phosphate pathway, leading to the production of antioxidants that protect cells against oxidative damage (Luzzatto et al., 2001). G6PD deficiency is the most common enzymatic erythrocyte disorder and is linked to the X-chromosome in humans (Elyassi and Rowshan, 2009; Valaes et al., 1998). A G6PD-deficient patient lacks the ability to protect red blood cells against oxidative stresses produced by the administration of certain drugs, metabolic conditions, infections and ingestion of some foods (Cappellini and Fiorelli, 2008; Glader, 2008). G6PD deficiency is believed to affect about 100 million people globally (Carter et al., 2002), and the rate of prevalence is higher among Africans and Asians (Abdulrazzaq et al., 1999). Reports showed that the G6PD A-allele, which contains two mutations, A376G and G202A, is the most common G6PD deficiency variant in Africa (Howes et al., 2013; Johnson et al., 2009), and the severity resulting from G-6-PD deficiency varies significantly between races, with more severe deficiency occurring in the Mediterranean population and the milder form in the African population (Owa and Osanyituyi, 1988). Several reports have been published on this genetic disorder in various geographic populations (Beutler, 1993). It has been reported in Greece (Stamatoyannopolous, 1971), Romania (McCurdy et al., 1972), Algeria (Nafa et al., 1994), the United States (Geskin et al., 2001), Saudi Arabia (Abdulrazzaq et al., 1999) and Nigeria (Abubakar et al., 2005).
In Nigeria, G6PD deficiency occurs in 24% of boys and 5% of girls (Ademowo and Falusi, 2002). It is also known to be a significant cause of anaemia in children, especially neonates (Sodeinde et al., 1995). Yoruba children had the highest prevalence (16.9%) of G6PD deficiency, followed by Igede children (10.5%) and children of Igbo (10.1%) and Tiv (5.0%) ethnicity. Igbo children had 0.38 times the odds of being G6PD deficient compared to Yoruba children. The odds for Igede and Tiv children were not significantly different from Yoruba children (Williams et al., 2013).
Haemolytic anaemia due to G-6-PD deficiency can be severe and life-threatening (Luzzatto and Testa, 1978). About 25% of adults throughout the country have the sickle cell trait, AS, while the Hb C trait is largely confined to the Yoruba people of southwestern Nigeria, in whom it occurs in about 6%. Other variant haemoglobins, including beta thalassemia, are rare, but alpha thalassemia occurs in 39% (32% with 3 alpha-globin genes; 7% with 2 alpha-globin genes) (Akinyanju, 1989). While screening of patients for G-6-PD deficiency is not a common practice in the health-care delivery services of most poor African countries, there is a need for regular screening of individuals, particularly malaria and anaemic patients, to establish their G-6-PD status. This would help avoid administering drugs that could further precipitate a haemolytic crisis in G-6-PD-deficient individuals.
Study subjects
This study was carried out on a total of 150 anaemic patients attending General Hospital Kafanchan, Kaduna State, North-Central Nigeria. The patients consisted of 50 sickle cell anaemia, 60 iron-deficiency anaemia and 40 malaria patients, while 50 apparently healthy individuals served as controls. Of the subjects, 113 were females and 87 were males, with ages ranging from 0 to 75 years.
Sample collection and analysis
Blood samples (5 ml) from each subject were collected by venepuncture from the antecubital vein of the forearm into dipotassium ethylenediaminetetraacetic acid (EDTA) containers. The collected samples were screened immediately for G-6-PD using the methaemoglobin reduction test (Brewer et al., 1962), and a serum ferritin radioimmunoassay was used to determine iron deficiency. Malaria test strips were used to confirm malarial infection in the study group, genotype screening was used to confirm the sickle cell (SS) status of the studied group not under blood transfusion, and the packed cell volume (PCV) of all sampled patients was determined using a microhaematocrit reader. The data were subjected to statistical analysis.
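As an illustration only (the invented counts below do not reproduce the study's tables), the incidence per group and a test for association between patient group and G-6-PD status might be computed as follows.

# Illustrative sketch: compute G-6-PD deficiency incidence per group and test for
# an association with a chi-square test. All counts are hypothetical placeholders.
from scipy.stats import chi2_contingency

groups = {
    "sickle cell anaemia": {"deficient": 9, "normal": 41},
    "iron-deficiency anaemia": {"deficient": 14, "normal": 46},
    "malaria": {"deficient": 0, "normal": 40},
    "control": {"deficient": 3, "normal": 47},
}

for name, counts in groups.items():
    total = counts["deficient"] + counts["normal"]
    print(f"{name}: {100 * counts['deficient'] / total:.1f}% deficient")

table = [[c["deficient"], c["normal"]] for c in groups.values()]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")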
RESULTS
The results obtained are presented in Tables 1 to 5. The values are classified on the basis of sex, marital status, PCV values, G-6-PD activity and genotype.
DISCUSSION
The overall incidence of G-6-PD deficiency in the total population sampled showed that iron-deficiency anaemic patients recorded the highest prevalence, while no deficiency was recorded among the malaria patients screened. NADPH, a required co-factor in many biosynthetic reactions, maintains glutathione in its reduced form (Obasa et al., 2011), which acts as a scavenger for free radicals and thus helps reduce oxidized haemoglobin to free haemoglobin; otherwise, oxidized haemoglobin will precipitate as Heinz bodies. While many other body cells have other mechanisms of generating NADPH, red blood cells rely completely on G6PD activity because it is their only source of NADPH to protect the cell against oxidative stress. Iron-deficiency anaemia is the most common form of anaemia. Iron deficiency causes approximately half of all anaemia cases worldwide, and affects women more often than men (Stoltzfus, 2001). Iron is a key part of red blood cells. Without iron, the blood cannot transport oxygen effectively. One means of losing iron is through bleeding, and its causes include heavy, long, or frequent menstrual periods in women; cancer in the esophagus, stomach, small bowel, or colon; esophageal varices, usually from cirrhosis; long-term use of aspirin, ibuprofen, or arthritis medicines, which can cause gastrointestinal bleeding; and peptic ulcer disease (Wikipedia). Consequently, the higher incidence of G6PD deficiency was recorded in females with iron deficiency, and anaemia will therefore be more severe in women than in men.
Although most cases of iron-deficiency anaemia are mild and rarely cause complications, the additional effect of G6PD deficiency might trigger severe anaemia, since iron can be converted to radicals that could result in oxidative damage to the erythrocyte membrane (Beutler, 1994), contributing to abnormal red blood cell breakdown.
Sickle cell anaemic patients recorded the highest prevalence; their sickle cell morphology already predetermines their dysfunctional capacity. With G-6-PD deficiency, however, there is additional stress on this group of patients, since the free radicals generated either by a parasitic infection or by the administration of offending drugs can destroy some of the circulating normal red blood cells.
The absence of G6PD deficiency in malaria patients is not surprising, as haemolysis affects mature red blood cells more readily, so there are fewer of them to host malaria parasites (Stephen et al., 1986). Moreover, malaria parasites cannot thrive in immature red blood cells; thus, when an infected RBC dies before the parasite is ready, the malaria parasite dies as well (Stocker et al., 1985), thereby reducing the chances of a disease state being established and typical symptoms manifesting. The study is also in agreement with the in vitro work of Cappellini and Fiorelli (2008), who reported that malaria parasites grow slowest in G6PD-deficient cells. However, since malaria parasites still sequester in the liver, affected persons could become very ill from haemolysis, and certain anti-malarial drugs are contraindicated in G6PD-deficient patients. The higher prevalence of G6PD deficiency in subjects aged 0 to 10 years is alarming, since G6PD deficiency predisposes neonates to neonatal jaundice and sensitivity to certain drugs. Also, untreated neonatal jaundice may carry a hidden risk of kernicterus (Kaplan and Hammerman, 2004). There is therefore a need to pay special attention to this age group, which falls under paediatric care.
The higher incidence recorded in male subjects is consistent with the established fact that, being X-linked, the G6PD deficiency allele confers a selective advantage (Allison, 1960), though genetic heterogeneity may result in varying degrees of haemolysis across individuals. However, the proportion of female subjects recorded in this study gives room for concern due to possible unfavourable lyonisation, where random inactivation of an X-chromosome in certain cells creates a population of G6PD-deficient red blood cells (Beutler, 1962, 1993; Beutler et al., 1962). Also, G-6-PD is known to generate reduced glutathione (GSH), a free radical scavenger; with G-6-PD deficiency, however, the ability to regenerate GSH from its oxidized form (GSSG) is lost (Beutler, 1994), thus worsening the anaemia. This observation also correlates with the higher number of iron-deficient anaemic patients having the lowest PCV values compared with sickle cell anaemic patients.
Conclusion
The incidence of G-6-PD deficiency was higher in the iron-deficient patients and absent in the malaria patients. Therefore, there is a need to screen anaemic patients as part of the overall health and welfare service to avoid further complications.
Table 1. G-6-PD incidence in the total population screened.
Table 2. Relationship between G-6-PD deficiency and marital status.
Table 3. The relationship between G-6-PD and sex.
Table 4. Relationship between G-6-PD deficiency and age of subjects.
|
2018-12-14T20:23:45.799Z
|
2015-02-28T00:00:00.000
|
{
"year": 2015,
"sha1": "bf3631221fbf61c9804a72e92e34b5c62c22513b",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/JPHE/article-full-text-pdf/A54E96850262.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "bf3631221fbf61c9804a72e92e34b5c62c22513b",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
246770301
|
pes2o/s2orc
|
v3-fos-license
|
The moderation of maternal parenting on the association of trauma, dissociation, and psychosis in depressive inpatients
ABSTRACT Background The effect of dissociation and parenting style on the relationship between psychological trauma and psychotic symptoms has not previously been investigated. Objective The aim of this study was to develop a moderated mediation model to assess whether the association between psychological trauma and psychotic symptoms is mediated by dissociation and moderated by parental maltreatment. Methods Inpatients with major depressive disorder (MDD) and bipolar depression (BP) were recruited. Self-reported and clinical rating scales were used to measure the level of dissociation, psychotic symptoms, history of psychological trauma and parental maltreatment. The PROCESS macro in SPSS was used to estimate path coefficients and adequacy of the moderated mediation model. High betrayal trauma (HBT), low betrayal trauma (LBT), paternal maltreatment, and maternal maltreatment were alternatively entered into the conceptual model to test the adequacy. Results A total of 91 patients (59 with MDD and 32 with BP) were recruited, with a mean age of 40.59 ± 7.5 years. After testing with different variables, the moderated mediation model showed that the association between LBT and psychotic symptoms was mediated by dissociation and moderated by maternal maltreatment. A higher level of maternal maltreatment enhanced the effect of LBT on dissociation. Conclusions Healthcare workers should be aware of the risk of developing psychotic symptoms among depressive patients with a history of LBT and maternal maltreatment.
Psychotic symptoms in depressive disorder
The impact of psychotic symptoms in patients with depressive disorder has been investigated. Depression with psychotic features has been associated with significant morbidity and mortality; however, it may be underdiagnosed and undertreated (Rothschild, 2013). The presence of psychotic symptoms among patients with depressive disorder may worsen disease progression. Patients with major depressive disorder (MDD) have been reported to have more severe depressive episodes when psychotic features are present than when they are absent (Forty et al., 2009). In addition, mood-congruent psychotic features in MDD have been positively associated with obsessive-compulsive traits and severity of depression (Tonna, De Panfilis, & Marchesi, 2012). Psychotic symptoms in patients with depression result in an increased burden of treatment, such as the use of combination therapy (antidepressants plus antipsychotics) or electroconvulsive therapy (Rothschild, 2013; Wijkstra et al., 2015). Investigating psychotic symptoms in depression may help to clarify the aetiology and lead to the development of treatment strategies.
Psychological trauma and dissociation in developing psychotic symptoms
The association between psychotic symptoms and psychological trauma has gained increasing research attention. A previous study recruiting patients with depressive disorder demonstrated that those with psychotic symptoms were significantly more likely to report a history of severe psychological trauma than those without psychotic symptoms (Holshausen, Bowie, & Harkness, 2016). Other studies have also demonstrated that patients with a history of psychosis had a high incidence of psychological trauma (Bebbington et al., 2004) and of physical and sexual abuse (Read & Argyle, 1999). Freeman and Fowler reported that childhood trauma may be associated with the development of negative schematic beliefs, leading to misinterpretation of normal stimuli and paranoia (Freeman & Fowler, 2009). Another comprehensive review indicated an association between psychological trauma and psychotic-like symptoms, which may involve temporal lobe dysfunction (Schiavone, McKinnon, & Lanius, 2018). However, the causality between trauma and psychotic symptoms remains controversial (Morgan & Fisher, 2007; Read, van Os, Morrison, & Ross, 2005). Hence, further path analysis or explorations of possible mediators can be helpful to clarify the mechanisms of interactions.
To explore the mechanisms of the interaction between trauma and psychotic symptoms, it is necessary to assess other factors associated with psychotic symptoms, such as dissociation. A large epidemiological study in the general population demonstrated that dissociation is related to many kinds of mental disorders, particularly psychotic experiences (Cernis, Evans, Ehlers, & Freeman, 2021). In patients with psychotic disorder, an experience-sampling study revealed that a state of dissociation could significantly predict the later occurrence of auditory hallucinations (Varese, Udachina, Myin-Germeys, Oorschot, & Bentall, 2011). Accordingly, further studies are warranted to explore the role of dissociation in the relationship between psychological trauma and psychotic symptoms.
The role of parenting style
Family-related factors may be associated with the formation of psychosis. A meta-analysis demonstrated that parental communication deviance, defined as vague, fragmented, and contradictory communication in the family, was a risk factor for the development of psychosis in genetically sensitive offspring (de Sousa, Varese, Sellwood, & Bentall, 2014). Therefore, parenting style may also play an important role in the development of psychosis. Previous studies have reported an association between parental rearing styles and the occurrence of psychotic disorder (Parker, Fairley, Greenwood, Jurd, & Silove, 1982;Willinger, Heiden, Meszaros, Formann, & Aschauer, 2002). Furthermore, positive symptoms (delusions or hallucinations) have been associated with a higher level of parental maltreatment, including rejection and overprotection/ control (Catalan et al., 2017;McCreadie, Williamson, Athawes, Connolly, & Tilak-Singh, 1994).
Parenting style has also been associated with psychological trauma. A previous study indicated that individuals with psychological trauma were less likely to have had ideal parenting during childhood (Catalan et al., 2017). Another study of patients with eating disorder reported that childhood trauma was more prevalent in patients exposed to the parenting style of affectionless control (Monteleone et al., 2020). Moreover, paternal warmth has been reported to attenuate the association between childhood trauma and alcohol-related problems (Shin et al., 2019). Due to the potential link between parenting style and psychological trauma or psychotic symptoms, investigations on the effect of parenting style on the association between psychological trauma and psychotic symptoms are needed.
Aim of the current study
Although the potential effects of psychological trauma, dissociation and parenting style on psychotic symptoms have been identified in previous studies, no studies have explored the interactions between them. In addition, although the association between psychological trauma and psychotic symptoms is well known, the roles of dissociation and parenting style in this association have yet to be explored. Furthermore, the effects of different types of psychological trauma on psychotic symptoms remain unclear. Given this gap in the knowledge, the aim of this study was to develop a moderated mediation model to estimate the effect of parenting style and dissociation on the association between psychological trauma and psychotic symptoms among patients with depressive disorder. Clinically non-psychotic patients were selected because the retrospective reporting of psychological trauma and parenting style may be confounded by reality distortion in patients with full-blown psychotic disorder. The hypothesis of the current study was that dissociation mediates the association between psychological trauma and psychotic symptoms, and that parenting style moderates the association between psychological trauma and dissociation.
Ethics
Data from the current study were derived from the 'Investigations on interpersonal adversity, psychiatric status, and socio-cognitive function (IAPS)' project. The IAPS project recruited inpatients with MDD, bipolar disorder with depressive episodes (BP), and schizophrenia. This project was approved by the Institutional Review Board of Kaohsiung Municipal Kai-Syuan Psychiatric Hospital (KSPH-2014-34 and KSPH-2017-04) and followed the protocols of the current revision of the Declaration of Helsinki. All participants signed written informed consent before entering the study.
Participants and procedures
The recruitment periods of the IAPS project were from 30 December, 2014 to 21 December 2016, and from 17 May, 2017 to 21 December, 2020 at Kaohsiung Municipal Kai-Syuan Psychiatric Hospital, Taiwan. Data of the patients with MDD and BP were used in this study. The inclusion criteria were: 1) new admission due to MDD or BP according to the DSM-5 diagnostic criteria; 2) age between 20 and 50 years; and 3) a native speaker of Mandarin Chinese. Patients were not recruited for the study if they: 1) exhibited any level of intellectual disability; 2) had organic syndromes; 3) could not follow the directions of the researchers to complete the study due to disturbances caused by their mental illness, such as a manic episode; and 4) had difficulty in expressing language.
The recruited inpatients with MDD and BP were assessed during the first week of admission. Both selfreport measures and clinical interviews were conducted to collect information on demographics, dissociation, traumatic experiences, maternal maltreatment, depression and psychotic symptoms.
Clinical interview scores
Positive and Negative Syndrome Scale (PANSS), Hamilton Depression Rating Scale (HAMD), and Clinician-Administered Dissociative States Scale (CADSS)
The PANSS was designed to measure the severity of psychotic symptoms, negative symptoms and general psychopathology (Kay, Fiszbein, & Opler, 1987). The PANSS is a clinical interview scale composed of 30 questions, with each question being scored using a Likert scale from one to seven. The Chinese-Mandarin version of the PANSS has been shown to have acceptable reliability and validity (Wu, Lan, Hu, Lee, & Liou, 2015). To develop the conceptual model, only positive symptoms of the PANSS (PANSS-P) with seven items were used in this study, indicating the presence of psychotic symptoms (e.g. hallucinations or delusions). A higher total PANSS-P score indicates more severe psychosis.
The HAMD was used to assess depression in this study (Williams, 1988). The HAMD is a clinical interview scale containing 17 items, with a total score from 0 to 52. A higher total HAMD score indicates more severe depression. In this study, the HAMD was used to compare the severity of depression, but it was not entered into the analysis of the conceptual model.
The CADSS was used to measure the state of dissociation in this study. The CADSS is a 27-item scale with 19 subject-rated items and 8 items scored by an observer. It is scored using a Likert scale from 0 to 4, and it has been shown to have good interrater reliability and construct validity (Bremner et al., 1998). In this study, total scores of the subject-rated CADSS (CADSS-S) were used to measure the severity of dissociation and develop the conceptual model.
The PANSS, HAMD, and CADSS were conducted by three well-trained psychiatrists, with an intra-class correlation coefficient of at least 0.9.
Brief Betrayal Trauma Questionnaire (BBTS)
The BBTS was used to measure experiences of traumatizing events in this study. The 12-item BBTS was developed to identify traumatic experiences during childhood (<18 years) and adulthood (>18 years), including witnessing a catastrophic event, traffic accidents, emotional maltreatment, physical and sexual assault, and natural disasters. The frequency of each experience is scored using a three-point Likert scale (never, 1-2 times, >2 times). The BBTS evaluates two levels of betrayal in psychological trauma: high betrayal trauma (HBT), defined as betrayal by someone to whom the victim was very close, and low betrayal trauma (LBT), defined as betrayal by someone with whom the victim was not familiar. In this study, the total score of 10 questions associated with HBT during childhood and adulthood was used to assess the severity of HBT. In addition, the total score of six questions regarding LBT during childhood and adulthood was used to assess the severity of LBT. The BBTS has been reported to have good reliability and validity (Goldberg & Freyd, 2006).
Measure of Parental Style (MOPS)
Parenting style was measured according to the level of parental maltreatment as estimated using the MOPS (Parker et al., 1997). The MOPS is a self-reported questionnaire including 30 questions (15 about fathers and 15 about mothers) to measure perceived parental maltreatment. A higher total MOPS score indicates a higher level of parental maltreatment. Maternal and paternal maltreatment were also assessed separately according to total MOPS score for the mother (MOPS-M) and MOPS score for the father (MOPS-F). In this study, MOPS-M and MOPS-F were used to test the conceptual model.
Statistical analysis
SPSS version 23.0 for Windows (SPSS Inc., Chicago, IL) was used for all analyses. Descriptive statistics were used to summarize the clinical characteristics. To estimate the differences in variables between the patients with MDD and BP, Pearson's χ2 test was used to compare categorical variables, and the independent t test was used for continuous variables. In addition, multiple linear regression was conducted to preliminarily estimate the association between psychotic symptoms and the independent associated factors. The preliminary model was established to initially estimate the association between 'trauma' and 'psychotic symptoms', mediated by 'dissociation' (Supplementary Figure 1). If the preliminary model was verified, the final model was developed to confirm our hypothesis. For the final model, we hypothesized that the association between 'trauma' and 'psychotic symptoms' would be mediated by 'dissociation', and that the magnitude of the indirect effect would be moderated by 'parental maltreatment' (Figure 1). In order to test the two models, several variables were analysed. HBT and LBT were alternatively tested to represent 'trauma'. Paternal maltreatment (MOPS-F) and maternal maltreatment (MOPS-M) were also used, respectively, as the moderator for 'parental maltreatment'.
To test the preliminary mediation model, the PROCESS macro version 3.4 developed by Hayes (Hayes, 2015, 2018) was used to test the mediation effect. In the PROCESS macro, Hayes's Model 4 was used to fit the mediation in the preliminary model, as shown in Supplementary Figure 1. To further verify the moderated indirect effect, the moderated mediation model was tested using the PROCESS macro based on the hypothesis. Hayes's Model 7 was used to fit the moderated mediation in the conceptual model (Figure 1). The PROCESS macro performs ordinary least squares regression to estimate the coefficients of the moderated mediation model. All of the quantitative variables were centralized (Hayes & Matthes, 2009), and the percentile bootstrap 95% confidence interval (CI) with 5000 bootstrap samples was estimated. The index of moderated mediation and its 95% CI calculated using the PROCESS macro were used to determine and quantify the statistical significance of the moderated mediation effect (Hayes, 2015). If the 95% CI did not include zero, the moderated mediation effect was considered statistically significant, indicating that the model had been successfully developed. Given a significant moderated mediation effect, conditional indirect effects of parental maltreatment on dissociation were evaluated at three different levels of parental maltreatment, corresponding to the mean plus one standard deviation (SD), the mean, and the mean minus one SD.
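As a rough, non-authoritative illustration of the moderated mediation logic of Hayes's Model 7 (the analyses in this study were run with the SPSS PROCESS macro, not the code below): mean-center the predictors, fit the mediator and outcome regressions, take the index of moderated mediation as the product of the interaction coefficient and the b path, and bootstrap its percentile CI. The data frame and column names are assumptions.

# Hypothetical re-implementation of the Model 7 logic: dissociation (CADSS-S) is
# regressed on LBT, maternal maltreatment (MOPS-M) and their interaction, and
# psychotic symptoms (PANSS-P) on LBT and dissociation; the index of moderated
# mediation is a3 * b, with a 5000-sample percentile bootstrap CI.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def index_of_moderated_mediation(df: pd.DataFrame) -> float:
    d = df.copy()
    for col in ["lbt", "mops_m", "cadss"]:
        d[col] = d[col] - d[col].mean()  # mean-center the quantitative predictors
    med = smf.ols("cadss ~ lbt * mops_m", data=d).fit()   # a paths (with interaction a3)
    out = smf.ols("panss_p ~ lbt + cadss", data=d).fit()  # b and direct (c') paths
    return med.params["lbt:mops_m"] * out.params["cadss"]

def bootstrap_ci(df: pd.DataFrame, n_boot=5000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    idx = np.arange(len(df))
    stats = [index_of_moderated_mediation(df.iloc[rng.choice(idx, len(idx), replace=True)])
             for _ in range(n_boot)]
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])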
Description of patient variables
A total of 91 patients (59 with MDD and 32 with BP) were recruited, with a mean age of 40.59 ± 7.5 years. Comparisons of the continuous and dichotomous variables are listed in Table 1.
Preliminary estimation of predictors and mediation model
The results of the multivariate linear regression are presented in Supplementary Table 1. Higher scores of psychotic symptoms (PANSS-P) were significantly associated with several factors, including higher scores of HBT (standardized coefficient β = 0.21; P = .046), LBT (β = 0.33; P = .002), paternal maltreatment (β = 0.37; P < .001), maternal maltreatment (β = 0.21; P = .047), dissociation (β = 0.47; P < .001), and depression (β = 0.61; P < .001). Regarding the mediation model, HBT and LBT were individually entered into the model to test the mediating effect. We found a statistically significant indirect effect of 0.09 based on the product of the path from HBT to dissociation (β = 1.29, p < .001) and the path from dissociation to psychotic symptoms (β = 0.07, p = .003). The direct effect was not significant, demonstrating a full mediating effect of dissociation on the association between HBT and psychotic symptoms (Supplementary Figure 2). Similarly, there was a statistically significant indirect effect of 0.13 based on the path from LBT to dissociation (β = 1.34, p < .001) and the path from dissociation to psychotic symptoms (β = 0.10, p < .001). The direct effect was also not significant (Supplementary Figure 3).
Tests for the moderated mediation model
After testing with trauma (HBT and LBT) and parental maltreatment (MOPS-F and MOPS-M), only one combination fit the conceptual model, indicating that the association between LBT and psychotic symptoms was mediated by dissociation, and that maternal maltreatment (MOPS-M) was the moderator. The remaining combinations did not fit the conceptual model, and the details of the least squares regression analyses are presented in Supplementary Tables 2-4.
The results of the ordinary least squares regression analysis in the developed model are summarized in Table 2 and visualized in Figure 2 with path estimates and significance. LBT was positively associated with the severity of dissociation (a pathway; β = 0.94, p = .014). The severity of dissociation was also positively correlated with psychotic symptoms (b pathway; β = 0.1, p < .001). The index of moderated mediation was 0.005 with a 95% CI of 0.001 to 0.0012, demonstrating a significant positive moderation effect. Taken together, these findings indicated a positive indirect effect of LBT on psychotic symptoms through the positive mediating effect of dissociation, with a higher level of maternal maltreatment enhancing the effect of LBT on dissociation. In order to better understand the moderating effect of maternal maltreatment, the bootstrap indirect effects of LBT through dissociation were estimated at three different levels of maternal maltreatment (mean + SD, mean, mean − SD). The 95% bootstrap CIs at the two higher values of maternal maltreatment did not contain zero, while the CI at the lower value did (Table 3, Figure 3). In summary, the moderation effect of maternal maltreatment was confirmed: the mediating effect of dissociation on the relationship between LBT and psychotic symptoms was stronger at higher levels of maternal maltreatment, indicating its exacerbating effect.
Main findings of the current study
From the results of the preliminary model, both LBT and HBT fit the mediation model, with a full mediating effect of dissociation on the association between HBT or LBT and psychotic symptoms. In the final model, we found that a higher level of LBT was associated with more severe psychotic symptoms, and that this association was fully mediated by a higher level of dissociation. Moreover, a higher level of maternal maltreatment moderated the effect of LBT on dissociation. In other words, the results showed the key role of dissociation in the association between LBT and psychotic symptoms, and that maternal maltreatment had a harmful effect on depressive patients with LBT by strengthening the level of dissociation.
Psychoticism among patients with depressive disorder
Since the psychosis continuum was proposed in 1994 (Claridge, 1994) and elaborated in recent years (Linscott & van Os, 2013), research on psychosis has focused not only on patients with psychotic disorders but also on subclinical individuals with psychotic-like experiences. However, a specific group of individuals that has been ignored is those with disturbed psychiatric function who have not yet reached a profound decline in reality testing, such as patients with mood disorder with psychotic features. A previous meta-analysis demonstrated that both patients with schizophrenia and patients with BP had deficits in cognitive functions, although the impairments among patients with BP were less severe than in schizophrenia (Nieto & Castellanos, 2011). This implies that patients with mood disorder may have less distorted reality or cognitive decline than those with full-blown psychotic disorder. On the other hand, studies assessing delusions multidimensionally have also found that dimensions of delusional experience, especially distress and preoccupation, are helpful in distinguishing psychotic patients from community samples (Sisti et al., 2012), indicating divergences in reality testing between psychotic patients and nonpsychotic samples. Therefore, the 'psychotic symptoms' presented in our study may differ aetiologically from psychotic symptoms in full-blown psychotic disorder, such as schizophrenia.
Mediating effect of dissociation
In addition to the findings of an association between dissociation and psychotic symptoms, we further identified the mediating effect of dissociation on the association between LBT and psychotic symptoms. Previous studies have also reported the mediating effect of dissociation on the relationship between psychological trauma and psychotic symptoms among psychotic patients (Cole, Newman-Taylor, & Kennedy, 2016;Sun et al., 2018), and we further confirmed this mediating effect among patients with MDD and BP. Individuals with psychological trauma may have a diminished sense of self and consequent impairments in reality testing, leading to the formation of psychotic symptoms (Allen, Coyne, & Console, 1997;Kilcommons & Morrison, 2005). Therefore, the mediating effects of dissociation may potentially be explained by the aforementioned mechanism.
The clinical implication of the mediating effect is that dissociation may play an important role in the association between psychological trauma and psychotic symptoms. Depressive disorder with psychotic features is challenging for clinicians and healthcare workers because of its poor prognosis, morbidity and mortality (Rothschild, 2013). Treating such patients requires greater effort from clinicians, such as combination therapy (e.g. antidepressants plus antipsychotics) (Wijkstra et al., 2015). Timely assessment of and intervention for dissociation in patients with depressive disorder and a traumatic history may therefore be beneficial in preventing later psychotic symptoms. In other words, underlying dissociation may remain undetected in patients with affective disorder if clinicians do not assess dissociation systematically. With early detection of dissociation among patients with depressive disorder, they can be treated at an earlier stage. However, a further prospective study is needed to verify this finding.
On the other hand, the potential symptomatic overlap between dissociation and psychotic symptoms should be noted. A previous study demonstrated a phenomenological overlap between dissociation and psychotic symptoms among patients with schizophrenia (Vogel, Braungardt, Grabe, Schneider, & Klauer, 2013). However, another study recruiting nonpsychotic samples suggested that dissociation and psychotic-like symptoms are statistically distinct phenomena (Humpston et al., 2016). Regarding our study, the variance inflation factor in the linear regression between dissociation and psychotic symptoms was estimated at 1.0, indicating no evidence of collinearity (Sheather, 2009). Similarly, further investigation is needed to explore the phenomenological relationship between dissociation and psychotic symptoms.
Moderating effect of maternal maltreatment and the effect of LBT on psychotic symptoms
In this study, we confirmed the key role of parenting dysfunction in the mediating effect of dissociation on the relationship between LBT and psychotic symptoms. Parental dysfunction has also been reported to have an impact on dissociation. A previous study found that the level of dissociation was positively associated with parental dysfunction in patients with schizophrenia (Schroeder, Langeland, Fisher, Huber, & Schafer, 2016). Another study reported that emotional neglect by parents was related to later symptoms of dissociation (Wright, Crawford, & Del Castillo, 2009). In addition, disrupted attachment, involving childhood trauma and parental maltreatment, may play an important role in the later development of dissociation (Draijer & Langeland, 1999; Lyons-Ruth, Dutra, Schuder, & Bianchi, 2006). Disorganized attachment has been reported to mediate the association between child sexual abuse and dissociation (Hebert, Langevin, & Charest, 2020), and it may also influence the association between trauma and dissociation. In the current study, only maternal maltreatment had a significant moderating effect, not paternal maltreatment. This may be explained by the difference in parenting style between fathers and mothers. Several studies have suggested that Chinese mothers take the main parenting role and are more supportive and responsive than fathers (Shek, 2006, 2008). A meta-analysis demonstrated that perceived maternal parenting attributes were more positive than perceived paternal parenting attributes among Chinese adolescents (Dou, Shek, & Kwok, 2020). Due to the importance of the maternal parenting style, maternal maltreatment may cause children serious harm, for which support from fathers may not be able to compensate. However, further investigations are needed to explore the effects of differences in parenting style on psychopathology to clarify this issue.
The effects of psychological trauma on psychotic disorder have been investigated previously (Bebbington et al., 2004; Read & Argyle, 1999). In the current study, we further examined the effect of different types of trauma. In 1996, Freyd proposed that HBT could be defined as psychological trauma perpetrated by someone whom the victim knows or trusts (Freyd, 1996). Attachment theory (Bowlby, 1999) describes the human tendency to form and maintain relationships with others, and because HBT involves attachment-based relationships, it is qualitatively different from LBT (Bernstein & Freyd, 2014). HBT has also been associated with borderline personality organization (Yalch & Levendosky, 2014). Haahr et al. demonstrated that HBT delayed help-seeking behaviours among patients with a first episode of psychosis, resulting in poorer premorbid adjustment and a longer duration of untreated psychosis (Haahr et al., 2018). Thus, HBT is more psychologically hazardous than LBT (Bernstein & Freyd, 2014). However, after examining both kinds of trauma (HBT and LBT), only LBT fit the moderated mediation model. We hypothesize that this may be due to the relatively stronger psychological impact of HBT (Bernstein & Freyd, 2014). According to the results of the ordinary least squares regression analysis, the effect of HBT (β = 1.056; p = .001) on dissociation was higher than that of LBT (β = 0.938; p = .014). Therefore, maternal maltreatment may not be strong enough to moderate the association between HBT and dissociation. In addition, the lack of a significant moderating effect of maternal maltreatment on the association between HBT and dissociation may be confounded by a potential overlap between HBT and maternal maltreatment. By contrast, maternal maltreatment may be able to significantly moderate the association between LBT and dissociation because of the relatively lower psychological impact of LBT on developing psychotic symptoms. Further comparative and conceptual studies of HBT and LBT are warranted to clarify this hypothesis.
Strengths and limitations
To the best of our knowledge, this is the first study to investigate the moderating effect of maternal maltreatment on the associations among psychological trauma, dissociation, and psychotic symptoms in patients with MDD and BP. Moreover, the state of psychotic symptoms, depression and dissociation was assessed prospectively by board-certified psychiatrists, which is also a strength of this study. Nevertheless, several limitations need to be addressed. First, the limited number of cases may limit the interpretation of the results. Second, the level of support from parents was not measured; further studies are therefore needed to explore whether or not parental support can attenuate the effect of psychological trauma. Third, psychological trauma was self-reported, and it is possible that recall bias may have confounded the results. Fourth, the data of the current study were collected at a single time point, so causality cannot be determined. Finally, we did not control for the treatment of the participants, which was personalized according to the clinical judgement of their clinicians rather than being standardized.
Conclusion
In the current study, we developed a moderated mediation model, which demonstrated the association between LBT and psychotic symptoms with a mediating effect of dissociation and moderating effect of maternal maltreatment. Our findings indicated that dissociation fully mediated the association between LBT trauma and psychotic symptoms. Furthermore, an increased level of maternal maltreatment may exacerbate the effect of LBT on dissociation, resulting in the enhancement of the full mediation model to develop psychotic symptoms. Clinicians should be aware of depressive patients with a history of LBT and maternal maltreatment due to the risk of developing dissociation and psychotic symptoms. According to these findings, we hypothesize that interventions to prevent maternal maltreatment or enhance parenting skills may protect depressive patients with LBT from developing dissociation and psychotic symptoms. Further studies with a well-controlled design and larger prospective cohort are warranted to verify this hypothesis and extend the applicability and generalizability of the current study.
|
2022-02-12T16:25:26.994Z
|
2022-02-10T00:00:00.000
|
{
"year": 2022,
"sha1": "1a0f0fc747a6ccd17872199b4871b6a604e01da0",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/20008198.2021.2024974?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1f58c7b622a3727ca155208bf9f4558bcb2914fd",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
119218597
|
pes2o/s2orc
|
v3-fos-license
|
Investigation of potential fluctuating intra-unit cell magnetic order in cuprates by muon spin relaxation
We report low temperature muon spin relaxation (µSR) measurements of the high-transition-temperature (Tc) cuprate superconductors Bi2+xSr2-xCaCu2O8+δ and YBa2Cu3O6.57, aimed at detecting the mysterious intra-unit cell (IUC) magnetic order that has been observed by spin polarized neutron scattering in the pseudogap phase of four different cuprate families. A lack of confirmation by local magnetic probe methods has raised the possibility that the magnetic order fluctuates slowly enough to appear static on the time scale of neutron scattering, but too fast to affect µSR or nuclear magnetic resonance (NMR) signals. The IUC magnetic order has been linked to a theoretical model for the cuprates, which predicts a long-range ordered phase of electron-current loop order that terminates at a quantum critical point (QCP). Our study suggests that lowering the temperature to T ~ 25 mK and moving far below the purported QCP does not cause enough of a slowing down of fluctuations for the IUC magnetic order to become detectable on the time scale of µSR. Our measurements place narrow limits on the fluctuation rate of this unidentified magnetic order.
An enduring and central open question concerning cuprate superconductors is the nature of the mysterious pseudogap regime above T c . Achieving an understanding of the pseudogap (PG) has long been viewed as key to understanding high-T c superconductivity. A clue to the origin of the PG has come from spin-polarized neutron diffraction studies that have detected the onset of an unusual three-dimensional (3-D), long-range IUC magnetic order at a temperature concomitant with the PG onset temperature T * in YBa 2 Cu 3 O 6+x (Y123), HgBa 2 CuO 4+δ (Hg1201) and Bi 2 Sr 2 CaCu 2 O 8+δ (Bi2212). [1][2][3][4][5][6][7] This finding provides evidence for a change in symmetry at T * associated with the onset of a novel type of order, which is supported by other kinds of measurements that indicate that the PG is related to a true phase transition. [9][10][11][12] The magnetic order observed by polarized neutron diffraction is described by staggered out-of-plane magnetic moments that diminish in magnitude from the underdoped to optimally-doped regime. 7,8 A similar mysterious magnetic order is also observed in x = 0.085 La 2−x Sr x CuO 4 (LSCO), 13 although it is short-range, two-dimensional, and onsets at a temperature far below T * . The latter is also the case in underdoped YBa 2 Cu 3 O 6.45 , suggesting a potential competition with Cu spin density wave order at low doping.
The magnetic structure and the hole-doping dependence of the onset temperature of the IUC magnetic order are somewhat compatible with a model derived from a three-band Hubbard model, which attributes the PG to a time-reversal symmetry breaking phase consisting of a pattern of circulating electron currents that preserve translational symmetry. 14 With increased hole doping the transition temperature of the circulating-current (CC) ordered phase is reduced towards zero, terminating at a QCP within the superconducting phase near or above optimal doping. Yet zero-field (ZF) µSR experiments have found no evidence for such a magnetically ordered phase. [15][16][17][18] While it has been suggested that charge screening of the positively charged muon (µ + ) causes severe underdoping of its local environment, resulting in the loss of CC order over a distance of several lattice constants, 19 such severe perturbation of the local environment is inconsistent with µ + -Knight shift measurements that show a linear scaling with the bulk magnetic susceptibility. 20 Moreover, non-perturbative NMR and nuclear quadupole resonance (NQR) experiments also find no evidence of IUC magnetic order. [22][23][24][25] It has been argued from calculations in a multi-orbital Hubbard model and for parameters relevant to cuprate superconductors, that the CC phase proposed in Ref. 14 or variations of it are unlikely to be stabilized as the ground state. 26 A staggered ordering of Ising-like oxygen orbital magnetic moments has been offered as an alternative explanation of the IUC magnetic order. 27 Since the original CC phase proposal, the model has been extended to include quantum critical fluctuations of the CC order parameter. 28,29 The extended model attributes the anomalous normal-state properties of cuprates to a funnel-shaped quantum critical region in the T -versus-p phase diagram that extends to temperatures well above the QCP at p = p c , T = 0. In the quantum-critical region the CC order spatially and temporally fluctuates between four possible ground-state configurations characterized by different directions of the CC order parameter. Local disorder is argued to couple to the CC order, leading to four distinct domains consist-ing of one of the four possible CC order configurations. The fluctuation rate between the different CC order configurations has been estimated to be slow enough to appear static on the time scale of neutron scattering, but too fast to cause relaxation of µSR or NMR spectra. 30 One exception to the null local-probe results is a ZF-µSR study of a large YBa 2 Cu 3 O 6.6 single crystal in which the unusual 3-D IUC magnetic order has been detected by polarized neutron scattering. 17 Static magnetic order with an onset temperature and local magnetic field consistent with the neutron findings was observed, but only in ∼ 3 % of the sample. This raises the possibility of fluctuating IUC magnetic order (that is not necessarily CC order) being locally pinned in a static configuration by disorder. The impurity/disorder type must be fairly specific though, since it has been shown that Zn substitution of Cu in YBa 2 Cu 3 O 6.6 does not affect the magneticonset temperature, but does reduce the magnetic Bragg scattering intensity. 4 In other words, the Zn impurity apparently reduces the volume of the sample containing the IUC magnetic order.
Here we investigate whether there is fluctuating IUC magnetic order that slows down enough near T = 0, where thermal fluctuations vanish, to become detectable by ZF-µSR. If the mysterious magnetic order is associated with a QCP, then near T = 0 we expect quantum fluctuations to dominate close to p c , but in the absence of significant disorder to have a diminishing effect as the hole concentration is lowered. The neutron experiments on Y123 and Hg1201 suggest p c ∼ 0.19, and previous ZF-µSR measurements on Y-doped Bi2212, pure LSCO, and Zndoped LSCO, extending down to 40 mK show a vanishing of low-frequency spin fluctuations above this critical doping. 31 However, a similar ZF-µSR study down to such low temperatures has not been performed on the other cuprates in which IUC magnetic order has been detected by neutrons. An exception are ZF-µSR measurements on a p ∼ 0.167 Bi2212 powdered sample, which indicate the onset of spin fluctuations below T ∼ 5 K, but no spin freezing down to 40 mK. 31 ZF-µSR measurements with the initital muon spin polarization P(0) parallel to theĉ-axis were performed on underdoped (p = 0.094, T c = 58 K) and optimallydoped (p = 0.16, T c = 90 K) Bi2212 single crystals, and single crystals of underdoped (p = 0.11, T c = 62.5 K) YBa 2 Cu 3 O 6.57 . The samples were prepared as described elsewhere. 32,33 Spectra were collected down to as low as T = 24 mK using a dilution refrigerator on the M15 surface muon beam line at the TRIUMF subatomic physics laboratory in Vancouver, Canada. The single crystals were mounted on a silver (Ag) sample holder, covering a 8 mm × 5 mm area. A scintillation detector placed downstream was used to reject muons that missed the sample. A fraction (≤ 40 %) of the incoming muons stopped in the uncovered portion of the Ag sample holder, and a fraction (∼ 20 %) of the muons stopped in the copper (Cu) heat shields of the dilution refrigerator. Since the nuclear dipole fields in Ag are negligible, there is no apprecia- ble time or temperature dependence to the background component from the sample holder. While the relaxation rate of the ZF-µSR signal from Cu does have a temperature dependence caused by muon diffusion, 34 the Cu shields are at constant temperature. We also performed longitudinal-field (LF) µSR measurements on p = 0.11 Y123 single crystals at a fixed temperature far below T c using a helium-gas flow cryostat and low-background sample holder, for the purpose of determining whether the internal magnetic fields are static or dynamic. In this setup there is no Cu component and the background contribution to the LF-µSR signal is less than 20 %. The ZF-µSR asymmetry spectra were fit to the sum of sample and backgrounds terms as follows where a s and G s (t) [a b and G b (t)] are the amplitude and ZF relaxation function for the sample (background) contribution. The background term originating from muons stopping outside of the sample was assumed to be independent of temperature and approximately described by the following relaxation function where G KT z (∆ b , t) is a static Gaussian Kubo-Toyabe function. In particular, where γ µ is the muon gyromagnetic ratio and ∆ b /γ µ is the width of the Gaussian distribution in field sensed by the implanted muon ensemble. The sample contribution was assumed to be the product of two relaxation functions which assumes that muons stopping in the sample sense the vector sum of static nuclear dipolar fields and fields of some other origin that generate a weak exponential relaxation rate λ. 
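For reference, the standard ZF-µSR forms consistent with the functions described above, written in the usual notation (the exact expressions used in the fits may differ in detail), are

A(t) = a_s \, G_s(t) + a_b \, G_b(t), \qquad G_b(t) \approx G_z^{\mathrm{KT}}(\Delta_b, t),

where the static Gaussian Kubo-Toyabe function is

G_z^{\mathrm{KT}}(\Delta, t) = \tfrac{1}{3} + \tfrac{2}{3}\,(1 - \Delta^2 t^2)\, e^{-\Delta^2 t^2/2},

and the sample contribution is the product

G_s(t) = G_z^{\mathrm{KT}}(\Delta_s, t)\, e^{-\lambda t}.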
An exception is Bi2212 at p = 0.094, where the ZF-µSR asymmetry spectra below T = 1 K were better described by a modified relaxation function. This function assumes an enhanced exponential relaxation rate λ+η for a fraction f of the muons, which experience additional fields in some parts of the sample. In contrast to the relaxation rates ∆ s and ∆ b , the exponential relaxation rates λ and η were allowed to vary with temperature in the fitting of the ZF-µSR spectra. In addition, f was assumed to be independent of temperature. Figures 1 and 2 show representative ZF-µSR asymmetry spectra for the Y123 and Bi2212 samples. The fits in Figs. 1 and 2 show that a small 0.4 × 3 % = 1.2 % contribution to the total signal cannot be ruled out. However, it is worth mentioning that no such minority phase was previously observed in low-background measurements of the p = 0.11 Y123 sample above T = 2.3 K. 17 Figure 3 shows the temperature dependence of the exponential relaxation rate λ for all three samples, along with λ + η for p = 0.094 Bi2212 below T = 1 K. While there is an increase in the relaxation rate for the p = 0.094 Bi2212 sample below T = 1 K, this is most likely due to low-energy spin fluctuations in the CuO 2 planes, as spin freezing is observed in Y-doped Bi2212 below p ∼ 0.10. 31 The lack of any increase of λ at low temperatures for the p = 0.11 Y123 and the p = 0.16 Bi2212 samples rules out the onset of quasi-static magnetism below T = 5 K. However, these ZF-µSR results do not rule out the possibility that even at these low temperatures and at a hole doping far below p c , the IUC magnetic order fluctuates too fast to be detectable on the time scale of ZF-µSR. Assuming the local magnetic field due to IUC magnetic order is 141 G (as estimated in Ref. 17), the ZF-µSR results for p = 0.11 Y123 and p = 0.16 Bi2212 imply a lower limit of 1.9×10 6 Hz for the fluctuation rate. This is far below the upper limit of 10 11 Hz imposed by the energy resolution of the polarized neutron experiments.
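A form consistent with this description, with a fraction f of the muons relaxing at the enhanced rate λ + η and the remainder at λ, would be

G_s(t) = G_z^{\mathrm{KT}}(\Delta_s, t)\,\bigl[(1 - f)\, e^{-\lambda t} + f\, e^{-(\lambda+\eta) t}\bigr];

this is written here only to make the description concrete, and the exact expression used in the fits may differ.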
Our LF-µSR measurements in a different experimental setup greatly increase the lower limit of the fluctuation rate. Figure 4(a) shows LF-µSR spectra recorded for p = 0.11 Y123 well below T c . Below T c weak applied fields are completely or partially screened from the bulk, and hence external fields well in excess of the lower critical field H c1 were applied. A longitudinal field of B LF = 0.5 kOe completely decouples the muon spin from the nuclear dipoles of the background and the internal magnetic fields of the sample. If the muons sense a rapidly fluctuating nearly Gaussian distribution of field, the ZF-µSR signal will decay with a pure exponential relaxation G s (t) = exp(−λt). In this case the dependence of the dynamic relaxation rate on the LF is given by the Redfield equation 35 where ∆/γ µ is the width of the field distribution and ν is the local-field fluctuation frequency. In the previous µSR study of a large single crystal of YBa 2 Cu 3 O 6.6 in which static magnetic order was detected in 3 % of the sample, 17 the mean local field detected was ∼ 141 G -which was shown to be in good agreement with the magnitude and direction of the ordered moment determined by polarized neutron diffraction. Figure 4(b) shows a simulation of the dependence of λ LF on ν for a LF of B LF = 500 G and different values of the local-field fluctuation amplitude ∆/γ µ . The values ∆/γ µ = 141 G and 24 G assume the polarized neutron measurements of p = 0.11 Y123 (Ref. 2) detect IUC magnetic order within the CuO 2 planes in 3 % and 100 % of the sample, respectively. Also shown is the upper limit λ LF ≤ 0.01 µs −1 inferred from the corresponding LF-µSR spectrum in Fig. 4(a) under the assumption that fluctuating magnetism occurs throughout the sample. The simulation of λ LF for ∆/γ µ = 141 G exceeds the upper limit of the relaxation rate observed in p = 0.11 Y123 below ν ∼ 3×10 10 Hz. On the other hand, the sim- ulation for ∆/γ µ = 24 G only rules out a fluctuation rate below ν ∼ 10 9 Hz. If the IUC magnetic order is due to loop currents flowing out of the CuO 2 plane through the apical oxygen as proposed in Ref. 36, ∆/γ µ = 22 G and the lower limit of the fluctuation rate is slightly reduced. Regardless, the combined LF-µSR results and the polarized neutron measurements place narrow limits of 10 9 to 10 11 Hz on the fluctuation rate of the IUC magnetic order.
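For convenience, the Redfield expression referred to above is commonly written as

\lambda_{LF} = \frac{2 \Delta^2 \nu}{\nu^2 + \gamma_\mu^2 B_{LF}^2},

where ∆/γ_µ is the width of the local-field distribution, ν the local-field fluctuation rate, and γ_µ B_LF the muon Larmor angular frequency in the applied longitudinal field; this is the standard form and is quoted here only for reference.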
If the IUC magnetic order is associated with fluctuations between different orientations of a CC-ordered state in finite size domains, rather than spatially-uniform long range magnetic order, quantum fluctuations will not diminish away from the QCP. 30 The lowest quantum fluctuation frequency between the distinct CC configurations is estimated to be less than 10 10 Hz, a scenario not completely ruled out by our LF-µSR results. As for other possible origins of the IUC magnetic order, while our estimated lower limit of the fluctuation frequency assumes fluctuating magnetic order throughout the sample volume, the current measurements do not rule out the possibility that there is slower fluctuating IUC magnetic order contained in a small volume fraction.
|
2016-10-18T19:05:30.000Z
|
2016-06-15T00:00:00.000
|
{
"year": 2016,
"sha1": "131f4eb4353d9b1797a7fd9e7a263e9a573e4f07",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1606.04865",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "131f4eb4353d9b1797a7fd9e7a263e9a573e4f07",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
238989495
|
pes2o/s2orc
|
v3-fos-license
|
Hepatitis A Outbreak in a Facility for the Disabled, Gyeonggi Province, Korea: An Epidemiological Investigation
Objectives: The number of cases of hepatitis A virus (HAV) infections has sharply increased in Korea, especially among young adults. In this study, an HAV outbreak in a facility for disabled people was investigated, and we found epidemiological differences both between 2 different generations and between generally abled and disabled groups. Methods: We analyzed the incubation period and attack rate of an HAV outbreak and investigated the prevalence of HAV antibodies among the staff and residents of a facility for the disabled. We performed a retrospective cohort study during the HAV outbreak, which lasted from February 8 to 25, 2019, including examinations of HAV antibody tests and post-exposure HAV vaccination for the staff or residents of the facility. Results: There were 9 confirmed cases in 2 staff members and 7 residents. Among 53 people (30 staff and 23 residents), except for the 9 confirmed cases and 1 staff member with a known history of HAV infection, HAV seroprevalence was seen in 16.7% of the staff under 40 years of age and 95.2% of those over 40 years of age, while the corresponding rates in the residents were 0.0% and 58.8%, respectively. Conclusions: This result implies that it is necessary to prioritize HAV vaccination for vulnerable groups and workers of residential care facilities.
In Korea, HAV infection is managed as a grade 2 communicable disease. Although HAV infection is preventable by vaccination, the number of cases of HAV infection has sharply increased recently in Korea [11][12][13]. Additionally, the prevalence of HAV infection in young adults accounts for a large proportion of the HAV infection burden in Korea [3,10,12,[14][15][16]. This might be because general hygienic conditions have improved over the years; therefore, young adults have not been provided the opportunity to acquire natural immunity against HAV infection [10,[17][18][19].
In this study, we investigated an HAV outbreak in a facility for disabled individuals, wherein we found epidemiological differences both between 2 different generations (aged under and over 40 years) and between generally abled and disabled groups. These results can be used as evidence to establish public health strategies, especially for prioritizing vulnerable groups to prevent HAV infection in Korea.
Identification of the Outbreak
On February 18, 2019, the index case (a resident of facility A) of an HAV infection outbreak in facility A was reported to the Paju Health Center. The facility's regular physician treated the index case's fever and fatigue, and transferred him to a local medical facility. General laboratory tests were carried out for the index case, including an immunoglobulin M (IgM) anti-HAV test, and the individual was diagnosed with HAV infection on February 18, 2019. After 3 days, 1 more case was confirmed and 2 more were suspected by the Paju Health Center. On February 22, 2019, as multiple cases were either confirmed or suspected in the same place and at the same time, the situation was judged to be an outbreak. Therefore, an epidemiological investigation was initiated to determine the size and source of infection and prevent further transmission. Since HAV infection can be transmitted by contaminated food or feces, it is crucial to rapidly find and block the source of infection [1,9].
Case Definition
We defined a case as any individual who resided in or worked at facility A and whose laboratory test results were positive for the anti-HAV IgM test, regardless of their symptoms, from December 12, 2018 (i.e., the day before the maximum incubation period for HAV infection-50 days), to the day of symptom onset of the primary case (i.e., February 9, 2019). The risk population was defined as all staff and residents belonging to facility A during the period in which cases were confirmed and suspected. According to the Waterborne and Foodborne Infectious Diseases Management Guidelines of the Korea Disease Control and Prevention Agency, the rest of the residents and staff in facility A were defined as exposed (i.e., people who reside in the same place, regularly eat together, or share a toilet during the infectious period).
Study Design and Response Measures
A retrospective cohort study was conducted, including all residents and staff. Considering that the exposed individuals were strictly limited to facility A and the size of this population did not change during the investigation period, a retrospective cohort study design was appropriate for this case. After identification of the outbreak and as per the instructions of the infectious disease investigator of Gyeonggi Province, data were collected on a history of common food consumption, genotype analysis of HAV from the cases, and water use (cooking, drinking, and living water). After collecting the above data, the following was carried out: an assessment of contact history and intensity between the residents and staff, HAV antibody testing for all exposed individuals, vaccination for people who were identified as being susceptible to HAV infection, isolation and medical treatment of people who showed HAV infection-related symptoms, and vaccination for people who did not belong to facility A.
Data Collection and Analysis
We obtained demographic and medical information on the at-risk population through face-to-face interviews. Due to the medical conditions of the residents, most of them were unable to converse; the investigator therefore collected general information mainly from the staff. We used R version 4.0.2 (https:// www.r-project.org/) for the statistical analyses.
Ethics Statement
The Institutional Review Board of Seoul National University Bundang Hospital exempted this study from review because the data in this study did not include any personal or identifiable information (X-2103-673-904).
Outbreak Investigations
From the onset of the index case's symptoms on February 8, 2019, the outbreak lasted for 17 days and resulted in 9 cases of HAV infection ( Figure 1). Not every case immediately underwent laboratory testing for HAV because 5 cases identified earlier visited different hospitals individually. After the outbreak was identified, on February 22, 2019, the infectious disease investigator of Gyeonggi Province ordered a screening test for all members of facility A, which identified 4 more cases. Exposure to the pathogen was assumed to have occurred between January 1 and 31, 2019, considering that the incubation period of HAV is known to be 15 days to 50 days, with an average of 28 days, and the peak of the epidemic curve was located on February 15, 2019. The age of the at-risk population varied from 25 years to 77 years, while that of the cases varied from 32 years to 51 years ( Table 1). The overall attack rate in facility A was 14.3%. The attack rates were not significantly different according to sex. There were no confirmed cases in the age groups under 30 years and over 60 years. The 31-year to 40-year age group showed the highest attack rate (43.8%). The attack rates in the staff and residents were 6.1% and 23.3%, respectively.
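The attack rates reported above follow directly from the case counts (2 of 33 staff and 7 of 30 residents). A brief R sketch, consistent with the software version noted earlier, reproduces the arithmetic; the Fisher exact test shown at the end is illustrative only and is not necessarily the test used in the original analysis.

# Case counts reported in this investigation
staff     <- c(cases = 2, total = 33)
residents <- c(cases = 7, total = 30)

# Attack rates
round(100 * staff["cases"]     / staff["total"], 1)      # 6.1%
round(100 * residents["cases"] / residents["total"], 1)  # 23.3%
round(100 * (2 + 7) / (33 + 30), 1)                      # overall 14.3%

# Illustrative comparison of staff vs. resident attack rates
tab <- matrix(c(2, 33 - 2, 7, 30 - 7), nrow = 2, byrow = TRUE,
              dimnames = list(c("staff", "residents"), c("case", "non-case")))
fisher.test(tab)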
To identify the impact of disability and its related characteristics on HAV infection, we calculated the attack rate according to disability classification and related characteristics, including level of communication ability and requirement for assistance when eating (Table 2). Regarding the disability classification, residents with physical disability had the highest attack rate (33.3%). Likewise, those who had difficulties communicating fluently had a higher attack rate (33.3%) than other groups defined by communication ability. Regarding the requirement for assistance while eating, residents who needed total assistance showed the highest HAV infection rate (50.0%).
To rule out person-to-person transmission, the contact history between each confirmed case was investigated. However, no epidemiological association between them was identified. It was found that all confirmed cases were located on the first floor of facility A; however, this was not a meaningful discovery because the second floor was mostly composed of rooms for staff (office), couples, or for common use.
Laboratory Tests
The laboratory test results of the samples are described in Table 3. On February 23 and February 25, 2019, all staff and residents were tested for HAV antibodies, which resulted in 3 additional IgM-positive cases. Among the 7 residents identified as HAV cases, stool polymerase chain reaction tests were also positive for 6 cases. HAV 1A was suspected to be the causative pathogen of the outbreak. As the same HAV genotype was found in the groundwater, the source of infection was suspected-but not concluded-to be groundwater due to the unclear temporal order between the outbreak and the contamination of the groundwater. Table 4 shows the seroprevalence of HAV antibodies. Among the commuting staff, 95.2% of individuals (n=21) over 40 years old were positive for immunoglobulin G (IgG) antibodies against HAV; however, only 2 of 12 staff members under the age of 40 (16.7%) had anti-HAV IgG antibodies. Among the residents, 10 of the 17 people aged over 40 years (58.8%) were positive for IgG antibodies, and none of them had IgM antibodies. IgM antibody positivity was found in 2 staff members and 5 residents under 40 years of age, as well as in 2 residents aged over 40 years.
We also conducted laboratory tests on the environment of facility A, including samples collected from kitchens, living rooms, and other common places, as well as water. We found that the 15 samples collected from inside facility A were negative for the pathogen; however, HAV 1A was found in the groundwater that was used for bathing, washing, and cleaning.
Environmental Investigation
The inside of the cooking room was well divided into the inspection area, washing area, and pre-treatment area. Refrigeration and freezing temperatures were properly maintained, and no ingredients were past the expiration date. There were 3 cooks, all of whom had health certificates and were wearing sanitary clothes and hats while cooking, indicative of good health compliance. The staff and residents were served the same meals and ate in a shared dining facility. To rule out the possibility of transmission via food, the investigator also examined the menu served during the incubation period (50 days before and 14 days after symptom onset of the index case). The investigator then confirmed that raw food or salted seafood had never been served during the investigation period.
Tap water was used for cooking, while groundwater was used for kimchi, washing, cleaning, bathing, and other purposes until January 20, 2019, and was not used thereafter. The investigator strongly suspected that the kimchi made using groundwater was a potent source of infection; however, it had already been exhausted. For drinking water, a general water purifier located at the restaurant entrance on the first floor was used, and water purification was ensured by regular maintenance of the water purifier. Cups were used in a communal manner, wherein unused cups were stored in an ultraviolet sterilizer and used cups were collected in a tray above the sterilizer, and could be divided into before and after use.
The period of exposure was estimated, by considering the maximum incubation period of HAV infection, as extending from December 21, 2018 to February 28, 2019, the day when the symptoms of the index case began. This coincides with the time when groundwater was used, and a total of 9 patients were identified within 17 days from the first patient based on the onset of symptoms; therefore, the outbreak was suspected to be a single exposure related to groundwater use.
DISCUSSION
In this study, 9 individuals out of 33 staff members and 30 residents at a facility for the disabled were diagnosed with HAV 1A infection in February 2019. According to the laboratory test results, groundwater was suspected to be the source of the outbreak, but this hypothesis was not confirmed due to a lack of information that clarified the temporal order of groundwater contamination and the HAV outbreak. Considering that the incubation period of HAV infection extends up to 50 days [1,[7][8][9], the epidemic curve of this outbreak (Figure 1) showed a point source outbreak pattern. The overall attack rate at facility A was 14.3% [10,[17][18][19]. Considering that previous studies have explained the vulnerability of younger adults by lowered opportunity of exposure to naturally circulating pathogens due to improved sanitary and hygiene systems, the difference in the attack rate between staff and residents (6.1 and 23.3%, respectively) implies that residents' relatively low opportunities for external or social activities resulted in higher susceptibility [17][18][19]. Furthermore, the difference between the staff and residents was especially pronounced when stratified by age, as shown in Table 4, which presents the differences in anti-HAV seroprevalence between the groups aged under and over 40 years. Intergenerational differences in anti-HAV seroprevalence have been discussed in several previous studies, but the magnitude of the differences found herein reinforces the importance of vaccination for vulnerable populations. The anti-HAV seroprevalence of the residents was lower than that of both the staff members and the general population [3,17,20,21]. Since people with physical and mental disabilities mostly cannot explain their conditions by themselves, further HAV outbreaks can occur in other residential facilities due to delayed recognition and identification [22]. Therefore, it would be helpful to immunize those working and living at facilities for the disabled.
The results for the attack rate of residents by classification of disability, level of communication ability, and necessity of assistance when eating reflects the possibility of person-to-person transmission. The reason for this is that physically disabled people generally require more direct contact with the staff. Furthermore, the necessity of assistance when eating showed consistent results, as the residents who required total assistance showed the highest attack rate (50.0%). This result aligns with those of a previous study analyzing an HAV outbreak at a residential facility for disabled people in 2011, which found that the attack rate of the residents was significantly higher than that of the teachers or staff [23]. This result indicates a higher susceptibility of residents given an evenly distributed intensity of exposure in the high-risk population.
Even with the immediate response by the public health authorities, there are several limitations of this investigation and study. First, since regular checks for groundwater do not include HAV testing in Korea, it was difficult to clarify the temporal order of the incident, even though regular groundwater tests were never missed during the investigation period. The investigator examined the latest results of groundwater testing for facility A; however, it was irrelevant, as the results did not include HAV. Second, considering that HAV is transmitted mainly via the fecal-oral route, the most suspected source of infection, kimchi, could not be tested. At the time of investigation, the suspected kimchi had already been consumed by the staff and residents; therefore, the investigator could not request a laboratory test for the kimchi. Lastly, since most of the confirmed cases were disabled and unable to communicate fluently, it was difficult to track person-to-person transmission. Although the investigation was conducted with the hypothesis that there was an even level of exposure to pathogens in the risk population since the daily life of the residents was confined to facility A, an evaluation of person-to-person transmission could have enabled more accurate research results.
Despite these limitations, this study emphasizes the necessity of establishing infectious disease management policies, such as infection prevention management rules, systematic HAV antibody testing, and prioritization of vaccinations for staff and residents of facilities. People living in groups should be aware of the risk of exposure to HAV and should be proactively vaccinated.
|
2021-10-16T06:16:34.987Z
|
2021-09-01T00:00:00.000
|
{
"year": 2021,
"sha1": "d11300b1524bd668b931a9cbdb77f36f5460bf9e",
"oa_license": "CCBYNC",
"oa_url": "https://www.jpmph.org/upload/pdf/jpmph-21-349.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "93f11f952ac5deb04ad292fc0d545b378b2346f6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
139237428
|
pes2o/s2orc
|
v3-fos-license
|
Study on the Filament Yarns Spreading Techniques and Assessment Methods of the Electronic Fiberglass Fabric
The filament yarns spreading techniques for electronic fiberglass fabric have been developed in recent years to meet the requirements of the electronic industry. Copper clad laminate (CCL) requires that the warp and weft yarns of the fabric be spread apart and flattened. Filament yarns spreading improves the penetration performance of the resin, as well as the peeling strength of CCL and the drilling performance of the printed circuit board (PCB). This paper describes the filament yarns spreading techniques for electronic fiberglass fabric from several aspects, such as methods and functions, together with methods for assessing their effects.
Introduction
Electronic fiberglass fabric, which is woven from fiberglass yarns, is the basic material of copper clad laminate (CCL), and CCL in turn is the basic material of the printed circuit board (PCB). Electronic products are trending toward being thinner, lighter, shorter and smaller; therefore, CCL needs to develop toward high density, thin multilayer construction, high Tg, and lead-free and halogen-free formulations. The requirements on the heat resistance and surface smoothness of the PCB keep increasing, which demands that the electronic fiberglass fabric be thinner and more uniform, with faster resin wettability and higher dimensional stability.
To keep pace with the development trend of the downstream industry, the filament yarns spreading techniques for electronic fiberglass fabric were developed, while the performance of the treatment agent for the fabric was improved at the same time. The technique mainly refers to flattening the yarns during the production of electronic fiberglass fabric. When the fabric is soaked with resin, there is a greater contact area between them, which improves the penetration rate of the resin into the fabric and gives a better combination of fabric and resin. With filament yarns spreading, the resin penetrates the electronic fiberglass fabric faster. The treatment agent also makes the chemical bond between the glass fiber and the resin more stable, so that the heat resistance of the PCB improves. Meanwhile, the surface of the electronic fiberglass fabric becomes smoother and the gap between the warp and weft yarns becomes smaller. The fibers and resin in the PCB are distributed relatively uniformly, which improves the processability, surface smoothness, dimensional stability and moisture resistance of the PCB, and makes the PCB more reliable [1].
Technological process
Electronic fiberglass fabric processed with the filament yarns spreading technique is called spreading electronic fiberglass fabric; the technique is a physical treatment applied to ordinary woven electronic fiberglass fabric. Usually high pressure spunlacing is used, which makes the warp and weft yarns of the fabric spread out and lie flat. The protrusion at the overlap of the warp and weft yarns is obviously reduced, and the gap between the warp and weft yarns shrinks significantly. This improves the smoothness and dimensional stability of the fabric, as well as the permeability and surface peeling resistance of the resin. Thus, it is possible to avoid defects such as uneven hole wall smoothness and inconsistent quality of the conducting holes that are caused by drilling through gaps between the warp and weft yarns of the electronic fiberglass fabric when drilling the PCB [2].
The basic technological process of electronic fiberglass fabric includes three steps. The first step is expanding with an expanding wheel after weaving on an air-jet loom; at the same time, the fabric is rapidly heated and extruded by continuous rollers. The second step is re-coiling and degreasing in a high temperature furnace for several hours. The third step is spreading, usually by high pressure water: the spreading machine generates high pressure water jets that mainly break up the weft yarns. As a result, the gap between the warp and weft yarns becomes smaller, the air permeability decreases, the electronic fiberglass fabric becomes softer, and the penetration performance becomes better and more uniform. After that, roller setting and an appropriate treatment agent are applied [3].
Methods of spreading
Generally, the warp yarns are under high tension during the processing of electronic fiberglass fabric, so tremendous resistance and friction must be overcome whichever method is chosen. The spreading effect on the warp yarns is therefore poorer than on the weft yarns. As a result, the surface roughness and smoothness of the half-cured sheet are not good, and the quality of the CCL struggles to meet the requirements of technologies such as ALIVH (arbitrary layer in the via hole technology) and B2it (embedded bump solder joint interconnect technology) for PCB. A half-cured sheet made from electronic fiberglass fabric with a poor spreading effect cannot meet the requirements of high-grade CCL because of poor resin penetration and poor drilling performance; the fiberglass fabric tends to show holes between monofilaments, and its resistance to ion migration is poor [4].
Therefore, the spreading effect of warp yarn is the key to measure the spreading methods of electronic fiberglass fabric. At present, the common methods of spreading include the following ways:
Rollers method
In the rollers method, the electronic fiberglass fabric is wrapped around porous rollers by tensioning rollers. Plum rollers concentric with the porous rollers are driven by a variable frequency motor and immersed in a tank full of water. The water wave vibration produced by the plum rollers is transferred to the electronic fiberglass fabric through the porous rollers, so that the fibers of the warp and weft yarns can spread out and the spreading effect is achieved.
The spreading effect of this method is proportional to the vibration intensity of the wave, and the vibration intensity is proportional to the rotation rate and to the cube of the amplitude, as written compactly below. Therefore, the vibration intensity can be enhanced by increasing the rotation rate, increasing the number of petals on the plum rollers, or making the petals bigger, thereby increasing the spreading effect. However, the fiber strength of the electronic fiberglass fabric must be considered when increasing the rotation rate and the number or size of the petals. Spreading should not cause hairiness or weft bias, which means that the rotation rate and plum rollers should be selected so that hairiness and weft bias remain within the allowed range.
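Written compactly, the stated proportionality is

I \propto n A^{3},

where I denotes the vibration intensity, n the rotation rate of the plum rollers and A the wave amplitude; the symbols are introduced here only for convenience.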
High pressure spunlacing method
In the spunlacing method, high pressure water (soft water) is sprayed onto both sides of the electronic fiberglass fabric through a high-pressure pump and nozzles on a spreading machine consisting of several guide rollers, so that the fibers of the fabric are evenly dispersed. The spreading machine forms a soft water circulation system through the circulating water pump, a two-stage filtration system, the replenishment tank and the water tank. The circulating water pump is driven by a variable frequency motor to ensure the dynamic balance of the circulating soft water.
The effect of this method depends on the outlet pressure of the high-pressure pump, the nozzle type and the distance from the nozzle to the fabric surface. The process parameters are usually selected differently for different types of electronic fiberglass fabric. Generally, when the electronic fiberglass fabric is thin, the pump pressure is kept low, the nozzle flow is smaller, and the nozzle should be adjusted farther from the fabric surface to ensure a full spreading effect and to avoid leaving water marks on the fabric. With an appropriate choice of nozzle type, spray height and spray pressure, the high pressure spunlacing method gives a better spreading effect and a higher qualified rate of downstream products than the rollers method.
HDI method
Shanghai Honghe Electronic Material Co., Ltd. developed an efficient spreading method for HDI substrates (high density interconnect multilayer boards), which can spread the yarns while keeping the fiber tension as low as possible; the width of the electronic fiberglass fabric reaches its maximum and the uniformity is better. In this method, the desized fabric is processed by a water jet with a pressure of 0.49 MPa-4.9 MPa (5 kg/cm2-50 kg/cm2), a diameter of 0.1 mm-0.3 mm, a water temperature of 25℃-85℃ and a vibration frequency of 25 Hz-50 Hz. The fabric is then formed by pressure rollers at 0.98 MPa-9.8 MPa (10 kg/cm2-100 kg/cm2) and finally soaked with a silane coupling agent [5].
This method can reduce damage to the fabric appearance during spreading to the lowest level by matching the supporting belt speed with the tension control. Moreover, this method has an obvious spreading effect, especially for thin and ultra-thin fabrics. The HDI method makes the warp and weft yarns of the electronic fiberglass fabric more dispersed and the fabric structure more uniform and smooth, and the gaps around the interweaving points become smaller or even disappear. The interweave of the fabric becomes more dispersed, the permeability decreases, the penetration performance of the resin improves, the width of the warp and weft yarns increases, the fabric thickness decreases, and the thickness uniformity increases. The resulting CCL has better uniformity, a smaller difference in thermal expansion coefficient, better dimensional stability, and better performance in downstream processing.
Other methods
There are many factors affecting the spreading effect of the electronic fiberglass fabric, and many spreading techniques deserve attention and reference in production practice: 3.4.1. Low twist and non-twist electronic fiberglass yarn. Twisting of the electronic fiberglass yarn is beneficial to weaving, since it gives the yarn better strand integrity, better abrasion resistance and less hairiness. Yet twisting is not beneficial to the spreading of the electronic fiberglass fabric. From the point of view of spreading, the most recommended way is undoubtedly for the electronic fiberglass yarn to have low twist or even no twist, because such fibers are easier to spread into a flat shape in the later processing stages. Therefore, low twist and non-twist electronic fiberglass yarn must become the mainstream development trend. However, this places a high requirement on the wetting agent used in the spinning stage, and at present only a few enterprises in Japan can produce it.
Low tension in weaving preparation process.
Because the warp yarns of the electronic fiberglass fabric are under tension and their spreading effect is inferior to that of the weft yarns, it is very important to improve the spreading effect of the warp yarns in the preparation process before weaving. Low tension in warping and shaft winding is beneficial to loosening the yarn, but it is important to ensure that the yarn does not shake under low tension and that warping still runs stably. Therefore, it is necessary to consider how to control the tension piecewise when setting the tension. To facilitate warp spreading, the tension during warping and shaft winding should be as low as possible, provided that the quality of the woven fabric and the uniformity of the yarn tension are guaranteed. At the same time, the yarns should not vibrate during warping and shaft winding.
Expanding in weaving process.
The effect of warp yarn spreading could be improved by expanding the electronic fiberglass fabric in the weaving process. The electronic fiberglass fabric will be expanded by a couple of temples (special thread rollers) on both sides of air-jet loom, which could achieve warp yarns spreading effect in weaving. Temples set in the winding end could also play a role in preventing wrinkle in weaving thin cloth and extreme thin cloth.
Healthing of electronic fiberglass fabric.
The warp and weft yarns of the electronic fiberglass fabric consist of hundreds of monofilaments with diameters of several microns, and the surface of each monofilament is coated with a starch-based sizing agent. After weaving is completed, the fabric can be kept in a high humidity environment for a certain period of time so that the starch swells with the absorbed moisture; at Nittobo this process for electronic fiberglass fabric is called the "healthing" process. After this conditioning, the electronic fiberglass fabric is treated by thermal desizing, and the warp and weft yarns become looser, which is more conducive to spreading the fibers.
Functions of spreading
Compared with the ordinary electronic fiberglass fabric, the spreading electronic fiberglass fabric has advantages in the following aspects:
Lower air permeability and better uniformity
The direct effect of spreading is that the gap of warp and weft yarns becomes narrow, single yarn width becomes wide, interweave becomes smooth and permeability becomes small and uniform.
The leaching property of resin improves obviously
The warp and weft yarn of the electronic fiberglass fabric becomes more flat after spreading, and the contact area with the resin increases, so that the penetration rate of the resin is faster and the soaking time is shortened.
Better affinity with resin
After spreading, the surface area of the warp and weft yarn of the electronic fiberglass fabric is bigger, the penetration property of the resin becomes better. Also the binding and affinity of the fabric and resin becomes better, which could be reflected from the change of the resin content.
Heat resistance and heat moisture reliability improves apparently
After spreading, the adhesion and affinity of the electronic fiberglass fabric and the resin improves, and the combination of the glass fiber and the resin is more compact, which could effectively improve the thermal stress of the plate and improve the heat resistance reliability of the board.
Better drilling performance on PCB
After spreading, the distribution of warp and weft yarns tend to be uniform and the space is reduced, which makes the distribution of yarns in the plate more uniform. It is beneficial to drilling performance on the plate, and the hole inner wall becomes smoother and the hole position becomes more accurate.
Better surface smoothness
The surface of the electronic fiberglass fabric becomes smoother through the spreading process, and the surface of the sheet made from it also becomes smoother. This can be reflected by the change in the surface friction and tensile strength test values of the fabric.
Better dimensional stability of the plate
The thermal expansion molding shrinkage degree of formed sheet will be reduced after spreading process because of twist loss of the warp and weft yarns, which is equivalent to a certain degree of stress release. It could be reflected from the CTE size of sheet, or the size change of X, Y direction after etching of sheet. It is beneficial to the low pressure forming of the sheet because of its small size change.
Better resistance to ion migration of PCB
As electronic products become lighter, thinner, shorter and smaller, and as PCBs adopt smaller hole spacing, smaller interlayer spacing and denser wiring, the requirement on the sheet's resistance to ion migration becomes higher. The resin penetrates the spread and flattened electronic fiberglass fabric better, and the interface between the fabric and the resin is tighter, so the sheet is less prone to absorbing moisture. Better moisture resistance helps to improve the resistance to ion migration.
Assessment methods
The evaluation methods of the spreading effect of electronic fiberglass fabric are very limited, such as the following methods:
Air permeability method
The air permeability of the fabric, measured with a fabric permeability tester, is the primary index for evaluating the spreading effect of electronic fiberglass fabric. For example, the air permeability of 7628 electronic fiberglass fabric without spreading is generally 6-8 ml/(cm2·s), and the air permeability of the same fabric after rollers spreading is less than 3 ml/(cm2·s) [6]. The permeability test is inexpensive and convenient to operate, but it is an indirect characterization of the spreading effect: it only reflects the permeability of the whole specimen and cannot express the microscopic details of the spreading.
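A simple comparison of such readings can be sketched in R as follows; the replicate values are hypothetical and only bracket the ranges quoted above for 7628 fabric, so the snippet illustrates the kind of comparison involved rather than an actual measurement protocol.

# Illustrative comparison of air permeability readings, in ml/(cm2*s),
# before and after spreading (hypothetical replicate values).
before <- c(6.4, 7.1, 7.8, 6.9, 7.5)   # unspread fabric
after  <- c(2.4, 2.8, 2.1, 2.6, 2.9)   # after rollers spreading

mean(before); mean(after)
t.test(before, after)   # a simple two-sample comparison of the mean permeability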
Direct observation method
The sample could be directly observed by magnifying glass or microscope, with measuring the width of warp and weft yarn and the area of interweaving area of warp and weft yarn, also with calculating the uniformity of yarn (especially warp yarn) width. The spreading effect could be compared between different spreading fabrics by measuring the data before and after spreading.
Penetration performance method
The wetting rate is calculated according to the wetting time of the resin, and the effect of the spreading of different electronic fiberglass fabric could be compared.
Fabric cross section observation method
The filament distribution in the fabric cross section can be observed and analyzed. For example, the weft yarns of ordinary spreading fabric are mostly stacked in 3 layers, while those of HDI spreading fabric are mostly in 2 layers, and its warp yarn layers are fewer than those of the ordinary spreading electronic fiberglass fabric.
Surface roughness method
This characterization method measures the surface height differences of the electronic fiberglass fabric. The three-dimensional shape of the sample surface and the surface height differences can be measured by a scanning laser microscope, as shown in Figure 1. After correcting the height difference data as required, statistical data can be collected. Monofilaments far from the fabric surface will interfere with the experimental data, so the exclusion of monofilament interference and the correction of the height difference data are carried out by selecting certain local image regions, as shown in Figure 2. Comparing the surface roughness data before and after spreading of different electronic fiberglass fabrics can thus serve as a characterization method for the spreading effect.
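A minimal sketch of how a roughness statistic could be computed from such height data is given below in R; the 64 x 64 height matrix is simulated and purely illustrative, and the arithmetic mean deviation (Sa) is used only as one common roughness index, not as the exact quantity reported by the instrument.

# Minimal sketch: roughness statistic from laser-microscope height data.
set.seed(1)
heights <- matrix(rnorm(64 * 64, mean = 0, sd = 5), nrow = 64)  # simulated heights, micrometres

# Arithmetic mean deviation of the surface (Sa)
Sa <- mean(abs(heights - mean(heights)))
Sa

# Restricting the calculation to a local region, e.g. to exclude stray
# monofilaments, mirrors the local-region correction described above.
region <- heights[16:48, 16:48]
mean(abs(region - mean(region)))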
Concluding Remarks
There are already several methods for spreading the filament yarns of electronic fiberglass fabric. However, the assessment methods are still not direct and effective enough, and a lot of work still needs to be done, such as developing better characterization methods, measuring the interface of the composites, and building a testing database of surface roughness.
FcircSEC: An R Package for Full Length circRNA Sequence Extraction and Classification
Circular RNAs (circRNAs) are formed by joining the 3′ and 5′ ends of RNA molecules. Identification of circRNAs is an important part of circRNA research. The circRNA prediction methods can predict the circRNAs with start and end positions in the chromosome but cannot identify the full-length circRNA sequences. We present an R package FcircSEC (Full Length circRNA Sequence Extraction and Classification) to extract the full-length circRNA sequences based on gene annotation and the output of any circRNA prediction tools whose output has a chromosome, start and end positions, and a strand for each circRNA. To validate FcircSEC, we have used three databases, circbase, circRNAdb, and plantcircbase. With information such as the chromosome and strand of each circRNA as the input, the identified sequences by FcircSEC are consistent with the databases. The novelty of FcircSEC is that it can take the output of state-of-the-art circRNA prediction tools as input and is applicable for human and other species. We also classify the circRNAs as exonic, intronic, and others. The R package FcircSEC is freely available.
Introduction
Circular RNAs (circRNAs) are formed by joining a downstream 3′ splice donor site and an upstream 5′ splice acceptor site in the primary transcript [1]. In most cases, circRNAs originate from exons close to the 5′ end of a protein coding gene and may consist of one or more exons. Furthermore, multiple circRNAs can be produced from a single gene. circRNAs are generated through several distinct mechanisms that rely on complementary sequences within flanking introns [2][3][4], exon skipping [4,5], and exon-containing lariat precursors [6]. circRNAs were first discovered approximately 40 years ago and thought to be an RNA splicing error [7]. Until 2013, researchers did not pay much attention to this area, but after the publication of [8], circRNA research turned into a prominent field of scientific research. A significant number of circRNAs has been identified through high-throughput RNA sequencing and bioinformatics analysis [9,10]. In recent years, many types of circRNAs have been identified and found to be stable and abundant [2]. One of the important properties of circRNAs is their tissue-specific expression. Several studies conclude that circRNAs are substantially enriched in brain tissues and that their expression levels are dynamic during brain development in human and mouse brain tissues [11][12][13]. circRNAs show differential expression between primary ovarian tumors and metastatic tumors in ovarian carcinoma [14]. Some circRNAs also interact with RNA-binding proteins (RBPs) [15], although very little enrichment in binding sites of RBPs is found for circRNA sequences compared with those of the corresponding linear mRNAs. The studies [8,16,17] reveal that circRNAs can bind to a few RNA-binding proteins (RBPs), such as Argonaute and MBL. circRNAs are conserved across different species and act as a microRNA (miRNA) sponge, while miRNAs have oncogenic or tumor suppressor properties [18]. Although the function of most circRNAs is unknown, some known functions of circRNAs include acting as miRNA sponges [8,19,20], protein translation templates [21][22][23][24], and regulators of gene expression [25][26][27][28]. Different studies suggest that circRNAs are important biomarkers for different cancers [29][30][31] and autoimmune diseases [32,33], and a potential noninvasive diagnostic for atherosclerosis [34], central nervous system disorders [35], degenerative diseases [17], and cancers [10,36].
Identification of circRNAs is a crucial step for circRNA research. A number of methods are available for the identification of circRNAs, such as CIRI [37], circRNA_finder [38], DCC [39], find_circ [40], segemehl [41], CIRCexplorer [3], MapSplice [42], and UROBORUS [43]. Each of these methods can predict circRNAs and their position in the chromosome, but these methods cannot provide the full-length circRNA sequence. To infer/predict the function of the circRNAs, differential expression analysis and network analysis are very common, and full-length circRNA sequences are required. CIRI-full [44] can extract the full-length circRNA sequences from the output of CIRI; however, the method fails when read lengths in a sample are unequal and does not accept the annotation file in gff format. FUCHS [45] does not provide a full-length circRNA sequence directly; only when using its output with additional software is it possible to obtain the full-length circRNA sequences. Besides, FUCHS is tested for the output of DCC only. Recently, a software tool circtools [46] has been published as a one-stop software solution for circRNA research, which also uses the FUCHS module. Another method, CircPrimer [47], can extract the full-length circRNA sequences although its main function is to design primers. CircPrimer cannot extract circRNA sequences other than for humans. The output of CIRI, find_circ, circRNA_finder, DCC, and segemehl gives three types of circRNAs (exonic, intronic, and intergenic). CIRCexplorer and MapSplice give two types (exonic and intronic) while UROBORUS gives only one type (exonic) of circRNAs in their output. Again, FUCHS, circtools, and CircPrimer cannot provide circRNA classification. A number of papers [48][49][50] classified their circRNAs into these five types: exonic, intronic, intergenic, sense overlapping, and antisense. Another paper [51] classified circRNAs as exonic, intronic, intergenic, bidirectional/intragenic, and antisense. Our view is that circRNA classification is not yet settled. The existence of exonic and intronic circRNAs is supported by numerous biological experiments, but other types are rarely validated by PCR experiments. Therefore, we have classified the circRNAs as exonic, intronic, and others.
There are four available tools for extracting full-length circRNA sequences: CIRI-full, FUCHS, circtools, and CircPrimer. CIRI-full utilizes both BSJ (back-splice junction) and RO (reverse overlap) features to obtain full-length circRNA sequences. CIRI-full uses the output of CIRI, and RNA-seq data is needed to reconstruct the full-length sequence. The main limitation of CIRI-full is that it is not applicable if the sequencing read lengths are not equal for all reads in the RNA-seq data. Besides, CIRI-full does not accept the annotation file in gff format. FUCHS is developed to fully characterize candidate circRNA sequences utilizing all RNA-seq information from long reads (>150 bp). It is tested for the output of DCC only and is not applicable for short reads. Besides, FUCHS cannot provide a full-length circRNA sequence directly. circtools is designed for RBP enrichment screenings and circRNA primer design, as well as circRNA sequence reconstruction. For circRNA sequence reconstruction, circtools utilizes the FUCHS module. The main function of CircPrimer is to design primers for circRNAs. Additionally, it can extract full-length circRNA sequences. It depends on the annotation information and is useful for human circRNAs only.
In this paper, we present an R package, FcircSEC, to directly extract the full-length circRNA sequences and to classify the circRNAs utilizing the output of circRNA prediction methods and the gene annotation information. We have followed an approach similar to that of CircPrimer in extracting circRNA sequences. Like CircPrimer, FcircSEC first selects the best transcript from the annotation file, then removes the introns from the part of the transcript within the circRNA boundary, and finally combines all the exon sequences into a circRNA sequence. But our best transcript selection strategy (described in Materials and Methods) is different from CircPrimer. Moreover, while CircPrimer is applicable to human circRNAs only, FcircSEC is useful for human and other species. FcircSEC only needs the output of the circRNA prediction tool along with the reference genome and the annotation file. The main advantage of FcircSEC is that it can use the output of many state-of-the-art circRNA prediction tools for extracting the actual sequence (with information on chromosome, circRNA start and end position, and strand). As there are no tools for obtaining full-length circRNAs for users of circRNA prediction tools other than CIRI and DCC, FcircSEC can be a good choice for them.
Materials and Methods
In our R package FcircSEC, from the gene annotation information of the reference genome, we extracted all transcripts and got the number of exons with their start and end positions for each transcript. Then, we selected the best transcript using the output of circRNA prediction methods. Finally, we extracted the full-length circRNA sequences from the selected best transcript. To check the validity of our package, we used human circRNAs from two popular databases, circbase (http://circbase.org/) and circRNAdb (http://202.195.183.4:8000/circrnadb/circRNADb.php), and plant circRNAs from the plantcircbase (http://ibi.zju.edu.cn/plantcircbase/) database. The circRNA sequences obtained by FcircSEC were consistent with the databases.
The package needs three input files: (1) the four types of information (chromosome name, start position, end position, and strand of the circRNAs) from the output of circRNA prediction tools, (2) the reference genome, and (3) the annotation file corresponding to the reference genome. Inputs (2) and (3) can be downloaded from UCSC, NCBI, or any other databases, and input (1) can be obtained from circRNA prediction tools like CIRI, find_circ, circRNA_finder, DCC, CIRCexplorer, segemehl, MapSplice, and UROBORUS, whose outputs have the abovementioned four types of information. The genome versions used in our analysis for different species are given in Table 1. The flowchart of FcircSEC is provided in Figure 1.
Figure 1: Workflow of the FcircSEC package. From the gene annotation file, a nine-column transcript data file is generated for all transcripts with the number of exons and the start and end of each exon. Using the transcript data and the output of the circRNA prediction tool, the circRNA classification is done. Using the circRNA classification information and the reference genome, the full-length circRNA sequences are extracted.
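As an illustration of how input (1) might be prepared, the following R sketch reads a tab-separated prediction output and keeps the four required fields; the file layout, column order, and the function name are assumptions and would have to be adapted to the output format of the particular prediction tool.

# Minimal sketch (assumed file layout): load a circRNA prediction output and
# keep only the four fields needed downstream.
read_circ_predictions <- function(path) {
  pred <- read.table(path, header = FALSE, sep = "\t", stringsAsFactors = FALSE)
  # Columns 1-4 are assumed to hold chromosome, start, end and strand;
  # reorder or rename as needed for the tool that produced the file.
  data.frame(chrom  = pred[[1]],
             start  = as.integer(pred[[2]]),
             end    = as.integer(pred[[3]]),
             strand = pred[[4]],
             stringsAsFactors = FALSE)
}
# Example usage with a hypothetical file name:
# circ <- read_circ_predictions("circRNA_predictions.txt")
# The reference genome (FASTA) and the annotation file (gff/gtf) are the other two inputs.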
2.2. Extraction of Transcript Information from the Gene Annotation File
In this step, the input was the gene annotation file of the reference genome. The annotation file has nine columns: seqname, source, feature, start, end, score, strand, frame, and attribute. To extract the transcript information from the annotation data, the following steps were followed:
Step 1: from the attribute column of the annotation file, extract the transcript name and the gene name
Step 2: for each unique transcript, count the number of exons and obtain the start position and end position of each exon
Step 3: subtract 1 from the start position of the exons
Step 4: make a 9-column text file with transcript name (ID), chromosome, transcript strand, transcript start, transcript end, number of exons in each transcript, start positions of exons, end positions of exons, and gene name
2.3. Selection of the Best Transcript
The inputs of this step were the transcript data obtained from the previous Section 2.2 and the four columns (chromosome, start position, end position, and strand of circRNAs) from the output of circRNA prediction methods. In the best transcript selection, we followed two strategies. We selected the transcripts whose coordinates (an interval from transcript start to end) contained the circRNA boundary. If there were several such transcripts, we selected all of them. For all possible transcripts, we checked whether the circRNA start and end positions exactly matched the start of the first exon and the end of the last exon, respectively. If yes, we selected as best transcript the one having the longest splice sequence (sequence of all combined exons). If not, we selected the transcripts having the maximum number of exons and then selected the one having the maximum length.
Let T be all transcripts extracted from the gene annotation file and O be the output from the circRNA prediction tool. For the ith circRNA of O, all possible transcripts T possible were selected containing the circRNA boundary (e.g., transcripts 1 and 2 for circRNA 1 in Figure 3(a)). Then, the best transcript was selected using Case 1 and Case 2.
Case 1. For any transcript from T possible, if the start position of the first exon and the end position of the last exon exactly match the circRNA boundary (e.g., transcript 1 in Figure 3(a)), select that transcript. If more than one such transcript is selected, repeat the following steps until a single transcript remains: (1) select the transcript having the longest splice sequence; (2) select the first one.
Case 2. If no transcript satisfies Case 1 (e.g., the situation in Figure 3(b)), select the transcript having the maximum number of exons within the boundary (e.g., transcript 1 in Figure 3(b)). If more than one such transcript remains, (1) select the transcript having the maximum transcript length and (2) select the first one.
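To make the selection procedure concrete, the following R sketch applies the two cases to a single circRNA. The data frame layout (one row per transcript, with exon starts and ends stored as comma-separated strings) mirrors the nine-column table of Section 2.2, but the function and column names are illustrative rather than the package's actual internals, and the coordinate-offset convention of Step 3 is glossed over.

# Illustrative sketch of best-transcript selection for one circRNA.
# 'transcripts' columns assumed: id, chrom, strand, tx_start, tx_end,
# n_exons, exon_starts, exon_ends (comma-separated), gene.
select_best_transcript <- function(transcripts, circ_chrom, circ_start, circ_end) {
  # Candidate transcripts: same chromosome and coordinates containing the circRNA boundary
  cand <- transcripts[transcripts$chrom == circ_chrom &
                        transcripts$tx_start <= circ_start &
                        transcripts$tx_end >= circ_end, ]
  if (nrow(cand) == 0) return(NULL)  # no candidate -> the circRNA will be of "other" type

  exon_info <- function(row) {
    starts <- as.integer(strsplit(row$exon_starts, ",")[[1]])
    ends   <- as.integer(strsplit(row$exon_ends, ",")[[1]])
    keep   <- starts >= circ_start & ends <= circ_end  # exons inside the boundary
    list(starts = starts[keep], ends = ends[keep],
         splice_len = sum(ends[keep] - starts[keep]))
  }
  info <- lapply(seq_len(nrow(cand)), function(i) exon_info(cand[i, ]))

  # Case 1: first exon start and last exon end exactly match the circRNA boundary
  exact <- vapply(info, function(x) length(x$starts) > 0 &&
                    min(x$starts) == circ_start && max(x$ends) == circ_end,
                  logical(1))
  if (any(exact)) {
    idx   <- which(exact)
    score <- vapply(info[idx], function(x) as.numeric(x$splice_len), numeric(1))  # longest splice sequence
  } else {
    # Case 2: most exons within the boundary, ties broken by transcript length
    n_in  <- vapply(info, function(x) length(x$starts), integer(1))
    idx   <- which(n_in == max(n_in))
    score <- as.numeric(cand$tx_end[idx] - cand$tx_start[idx])
  }
  best <- idx[which.max(score)]  # remaining ties: which.max() keeps the first one
  cand[best, , drop = FALSE]
}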
2.4. Circular RNA Classification and Sequence Extraction
The inputs of this step were the best transcript obtained from the previous Section 2.3, the four columns of the outputs of the circRNA prediction tools, and the reference genome. For any circRNA, if no best transcript is available, the corresponding circRNA was declared as "other" type. In the best transcript, if there was no exon within the circRNA boundary and an intron is contained within the circRNA boundary, we defined that circRNA as intronic. When there were some exons in the best transcript within the circRNA boundary, and the first and the last exon contained the start and end positions of the circRNA, respectively, we defined that circRNA as exonic, while the circRNA which was neither exonic nor intronic was declared as "other" type.
Let O be the output from the circRNA prediction tool and T best be the best transcript for the ith circRNA. Two indicator variables were defined: start = 1 if the start position of the first exon of T best within the circRNA boundary exactly matches the circRNA start position (otherwise start = 0), and end = 1 if the end position of the last exon within the boundary exactly matches the circRNA end position (otherwise end = 0). For the ith circRNA from O, the circRNA classification and sequence extraction were done using one of the following cases:
Case 1. If start = 1 and end = 1 (Figure 4(a)), the circRNA is exonic, and the sequence is composed of the exons from T best within the circRNA boundary (Figure 4(a)).
Case 2.
If there are no exons and only one intron in T best within the circRNA boundary, the circRNA is intronic. The sequence is composed of one intron from T best (Figure 4(b)).
Case 3. If case 1 and case 2 are not satisfied, the circRNA is other type. The sequence is composed of a genomic sequence from start to end of the circRNA (Figure 4(c)).
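Continuing the sketch above, the three cases can be written as a small decision function. Here best is the transcript row returned by the selection step (or NULL), genome is assumed to be a named list of chromosome sequences stored as character strings, and the names are again illustrative rather than the package's actual API.

# Illustrative sketch of circRNA classification and sequence assembly.
classify_and_extract <- function(best, genome, circ_chrom, circ_start, circ_end) {
  genomic <- substr(genome[[circ_chrom]], circ_start, circ_end)
  if (is.null(best)) return(list(type = "other", seq = genomic))

  starts <- as.integer(strsplit(best$exon_starts, ",")[[1]])
  ends   <- as.integer(strsplit(best$exon_ends, ",")[[1]])
  keep   <- starts >= circ_start & ends <= circ_end

  # Case 1: exonic -- concatenate all exons of the best transcript within the boundary
  if (any(keep) && min(starts[keep]) == circ_start && max(ends[keep]) == circ_end) {
    exon_seqs <- mapply(function(s, e) substr(genome[[circ_chrom]], s, e),
                        starts[keep], ends[keep])
    return(list(type = "exonic", seq = paste(exon_seqs, collapse = "")))
  }

  # Case 2: intronic -- no exon inside the boundary, exactly one intron lies within it
  intron_start <- ends[-length(ends)] + 1
  intron_end   <- starts[-1] - 1
  inside <- which(intron_start >= circ_start & intron_end <= circ_end)
  if (!any(keep) && length(inside) == 1) {
    return(list(type = "intronic",
                seq = substr(genome[[circ_chrom]], intron_start[inside], intron_end[inside])))
  }

  # Case 3: neither exonic nor intronic -- genomic sequence from circRNA start to end
  list(type = "other", seq = genomic)
}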
Extraction of Transcript Data and Full-Length circRNA Sequences
We have extracted the full-length circRNA sequences for the circRNAs downloaded from three databases, circbase, circRNAdb, and plantcircbase. For circbase and circRNAdb, only the human circRNAs have been used, and for plantcircbase, plant circRNAs have been used.
We have extracted the transcript data from the gene annotation file. Using the transcript data and the output of the circRNA prediction tools, we have created the circRNA classification file, which contains the circRNA classification and all the information required for obtaining the full-length circRNA sequences. Using the start and end positions of circRNAs obtained from the circRNA prediction tool, we have extracted the genomic sequence from the reference genome. Finally, using the circRNA classification information and the reference genome, we have obtained the full-length circRNA sequences; the extracted transcript data, the circRNA classification files, and the full-length circRNA sequences are provided in Supplementary Tables S1-S13, Tables S14-S28, and Tables S29-S43, respectively. The supplementary Tables S14-S28 (circRNA classification tables) have a total of 15 columns, and these columns represent, respectively, (1) circRNA ID, (2) chromosome, (3) circRNA start position, (4) circRNA end position, (5) circRNA strand, (6) circRNA length, (7) circRNA type, (8) number of exons, (9) exon sizes, (10) exon offsets (start of each exon), (11) best transcript, (12) transcript strand, (13) transcript start, (14) transcript end, and (15) host gene.
Distribution of the circRNAs.
In circbase, there are a total of 92375 human circRNAs; of the circRNA sequences extracted by FcircSEC, 93.39% are exonic, 0.75% are intronic, and 5.86% are others, while in circRNAdb, out of 32914 circRNAs, 99.32% are exonic, 0.02% are intronic, and 0.66% are others. Among the 67 experimentally validated circRNAs from plantcircbase, 62.69% are exonic and 37.31% are others, but no intronic circRNA is found. The classes of circRNAs for all other species are provided in Table 2. The distribution of the number of exons for the full-length exonic circRNAs is given in Figure 5, from which we can observe that the median number of exons for most species is between 2 and 4.
Matched Sequences between Databases and FcircSEC.
Since FcircSEC requires the chromosome name, start and end positions, and strand of each circRNA as input, we have taken this information for each circRNA from the databases and extracted the full-length circRNA sequences using FcircSEC. Then, we have compared the sequences extracted by FcircSEC with those provided in the databases. During analysis, a sequence is matched if the whole sequence extracted by FcircSEC and the one provided in the database are identical (100%) and unmatched otherwise. We have calculated the proportion of matched sequences between FcircSEC and the three databases circbase, circRNAdb, and plantcircbase. Table 3 lists the proportion of matched sequences.
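Because the matching criterion is exact string identity, the comparison itself is straightforward; a minimal R sketch, assuming two named character vectors of sequences keyed by circRNA ID, is shown below.

# Illustrative sketch: proportion of FcircSEC sequences identical to the database sequences.
matched_proportion <- function(fcircsec_seqs, db_seqs) {
  ids <- intersect(names(fcircsec_seqs), names(db_seqs))
  matched <- toupper(fcircsec_seqs[ids]) == toupper(db_seqs[ids])  # 100% identity only
  mean(matched)
}
# For circbase, a call such as matched_proportion(fcircsec_seqs, circbase_seqs)
# would return a value around 0.951, corresponding to the 95.1% reported in Table 3.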
In circbase and circRNAdb, there are a total of 92375 and 32914 full-length human circRNA sequences, respectively. We have extracted these circRNA sequences by FcircSEC and compared them with those of the databases. From Table 3, we can see that 95.1% and 98.9% of the sequences extracted by FcircSEC exactly match circbase and circRNAdb, respectively. In plantcircbase, there are 67 (out of 95143) experimentally validated full-length circRNA sequences. We have extracted these 67 circRNA sequences by FcircSEC and found that all are exactly matched with the database. We have also extracted the full-length sequences for the remaining 95076 circRNAs (available in Supplementary Tables S31-S43).
There are mainly four available tools for extracting full-length circRNA sequences: CIRI-full, FUCHS, circtools, and CircPrimer. Different methods depend on different prediction tools; for example, CIRI-full is dependent on CIRI and FUCHS is dependent on DCC, while CircPrimer and FcircSEC are not dependent on any prediction tool. Some methods need RNA-seq data while others do not. As a result, the performance of these methods is not directly comparable. Therefore, we have compared FcircSEC with the alternative methods in terms of application, limitations, etc. in Table 4.
From Table 4, we can see that CIRI-full takes the output of CIRI only as input, and RNA-seq data is needed to get the full-length sequence. It is not applicable if the lengths of all the reads in the RNA-seq data are not equal or if the annotation file is in gff format. Only the users of CIRI can use this tool for getting the full-length sequence. FUCHS and circtools take the output of DCC as input, and RNA-seq data is also needed to reconstruct the sequence. Neither tool is applicable for short reads, and neither can provide the full-length sequence directly. For both tools, other software is needed to reconstruct the sequence. Both tools are applicable for the users of DCC only. CircPrimer, although developed for designing primers, can extract the full-length sequences. But it is applicable for human circRNAs only. FcircSEC can take the output of the state-of-the-art circRNA prediction tools as input. As RNA-seq data is not needed, there is no restriction on sequencing read lengths when using FcircSEC. It can take the annotation file in either gff or gtf format and is useful for human and other species. It can directly provide the full-length sequences. It can also classify circRNAs into three types (exonic, intronic, and others) while other methods cannot. The only limitation of FcircSEC is that it does not provide any information on splice sites within the circRNA sequence. In summary, we can say that FcircSEC has advantages over the existing methods.
Discussion
There are several circRNA prediction tools, but only for two tools, CIRI and DCC, are there existing methods (CIRI-full and FUCHS, respectively) for getting the full-length sequences. For the users of other circRNA prediction tools (except CIRI and DCC), there are no existing tools for getting the full-length sequences. Although our method depends on the gene annotation information only, it will be a useful tool for users who are interested in using circRNA prediction tools other than CIRI and DCC.
CIRI-full and FUCHS can take the output of CIRI and DCC, respectively, as input, and hence, CIRI-full and FUCHS are applicable for the users of CIRI and DCC, respectively. circtools is also useful for DCC users as it uses the FUCHS module for circRNA sequence reconstruction. CircPrimer is applicable for human circRNAs only. Our method FcircSEC depends on the output of circRNA prediction tools, annotation information, and the reference genome. FcircSEC can take the output of state-of-the-art circRNA prediction tools as input and, therefore, is applicable for almost all users of circRNA prediction tools.
Our method can extract the full-length circRNA sequence using the output of the existing circRNA prediction tools. We assume that the results of the existing circRNA prediction tools are correct, and we have not applied any filtering steps to detect false positives. Further, within the circRNA boundary, we match the start of the first exon and the end of the last exon of the best transcript with the circRNA start and end positions. We assume that the circRNA contains all the intermediate exons, and we combine all the exons as a full-length circRNA. That is, we have not skipped any exons. This strategy is also used in CIRCexplorer.
FcircSEC does not investigate the presence of splice sites within the circRNA sequence. For exonic circRNA, it combines all exons within the circRNA boundary to construct the full-length sequence. For the intronic and other types, it assumes that circRNAs are not spliced. By searching the databases circbase and circRNAdb, we have found that in almost all cases, the circRNA combines all exons. Besides, RNA-seq data is needed to examine the presence of a splice site within circRNAs. This is beyond the scope of the current work, as FcircSEC is based on annotation information and does not take sequencing reads into account. This is a limitation of FcircSEC, and we will try to overcome it in the next version of the package. Overall, full-length sequence extraction is crucial in circRNA research. After predicting the candidate circRNAs, all the downstream analyses depend on the circRNA sequences. Therefore, FcircSEC can play an important role in identifying important circRNA biomarkers by extracting full-length circRNA sequences.
Conclusions
A number of methods are available in the literature for predicting the circRNA sequences. But only a limited number of methods are available for extracting full-length circRNA sequences. In this paper, we have developed an R package FcircSEC for extracting full-length circRNA sequences using the output of most of the popular circRNA prediction tools. The results of FcircSEC are consistent with the published circRNA databases and provide additional information that is not available in the public databases. Moreover, for users of circRNA prediction tools other than CIRI and DCC, for which no full-length circRNA sequence extraction method exists, FcircSEC can be a good choice. The R package FcircSEC is freely available at http://hpcc.siat.ac.cn/FcircSEC/Home.html.
Disclosure
The funders had no role in the design of the study; in the collection, analyses, or interpretation of the data; in the writing of the manuscript; or in the decision to publish the results.
The Nationalization of Culture
Although nationalism is an example of a cultural force which in many cases has overruled other, traditional identities and loyalties in 19th and 20th century society, the study of nationalism has not been focused very much on the cultural praxis of national identity formation and sharing. As a result, the ideology and politics of nationalism are far better understood than the creation of Hungarianness and Swedishness. This paper discusses some approaches in the national-culture building of everyday life, using mainly Swedish examples. The focus is also on national culture as a battle arena, where different interest groups use arguments about national unity or heritage in hegemonic struggles. Different types of "nationalization processes" are discussed, as for example ways in which certain cultural domains come to be defined as national, how national space is transformed into cultural space, or the way in which every new generation not only is nationalized into a given heritage but also creates its own version of a common, national frame of reference.
Docent, fil.dr. Orvar Lofgren, Department of European Ethnology, Lund University, Finngatan 8, S-22362 Lund.
Revisiting the national project
Nationalism is of special interest to that branch of anthropology within which most of the following essays were produced: European ethnology, a discipline born in the nineteenth century as a child of nationalism and Herder's Volksgeist. European ethnology and folklore developed with the more or less explicit goal of salvaging and assembling "national" folk cultures. This strongly ideologically charged project also included ideas about folk mentalities or national character.
Later generations of ethnologists faced the task of critically deconstructing these pioneer attempts at creating a national folk heritage, and it is only after such a purge that it has become possible to return to the question of national identity and culture with new theoretical perspectives.
This collection of papers was born out of this recent ethnological interest in new perspectives on the making and remaking of national cultures. The starting point was a collaboration between researchers in Sweden and Hungary. In Budapest Tamas Hofer had, together with a group of colleagues, analysed the construction of a Hungarian national identity and the crucial role of folk culture in this process; in Stockholm Ake Daun and Billy Ehn, among others, had been studying Swedish mentalities and changing self-representations, especially in the light of the recent waves of immigration to Sweden (cf. Daun & Ehn 1988). In Lund a group including Jonas Frykman and myself had worked on a project concerning class formation and culture-building in nineteenth- and twentieth-century Sweden, where one of our main tasks was to scrutinize ideas about a modern and homogeneous Swedish national culture and to look at the extent to which cliches and notions of national homogeneity concealed a cultural differentiation based upon factors like class, gender and generations (see Lofgren 1986).
These various approaches to deconstructing and reconstructing national culture-building had led all of us towards an interest both in the ways in which national rhetoric had been used as an argument in hegemonic conflicts between competing interests and classes in Hungarian and Swedish society during the last century (Do some Hungarians/Swedes claim to be more Hungarian/Swedish than others?), but also in the question of how, behind this ideological facade of national unity, an actual nationalization of shared cultural understandings and knowledge had been established. To what extent, for example, do Swedes or Hungarians of today share a common frame of reference compared with the situation fifty or a hundred years ago?
It became evident that the cultural politics of nation building and the process of nationalizing culture are best studied within a comparative framework, in order not to be blocked by the occupational disease always threatening scholars looking at their native culture: what we in Sweden call "home-blindness".
This collection of essays is the first result of a joint discussion of Hungarian and Swedish research into the making and remaking of national cultures. 1 Ake Daun's contribution, "Studying national culture by means of quantitative methods" mainly deals with the methodological problems of studying contemporary culture on a national level combining qualitative and quantitative approaches. Drawing from an ongoing research project he discusses various strategies for locating basic themes and personality traits on a national level, trying to avoid the pitfalls of earlier grandiose speculations about "national character". Jonas Frykman's paper "Social mobility and national character" looks at ideas about what is seen as "typically Swedish" and relates them to the culture-building of Swedish intellectuals in the making of the modern welfare state. It is their style of life and outlook on the world that has often been expressed in terms of "national character". He analyses the social and cultural conditions under which such images of culture and personality are produced -a society with a high degree of mobility.
His analysis of a national setting, where progressive intellectuals have dominated the discourse on national culture and "Swedishness", can be compared to Peter Niedermüller's paper on class and national culture in Hungary, "Symbols and reality in national culture: The Hungarian case". Here the cultural battle over who represents the true Hungarian identity has been carried out within a very different social structure. He discusses the various paths developed in attempts to construct a Hungarian identity and heritage through uses of folk culture and the competing interests involved in these processes.
A neglected field of study is the strong modern connection between sport and nationalism. In his paper, "National feeling in sport", Billy Ehn discusses the national rhetorics of sport and the ways in which they express national sentiments and loyalties, using material mainly from Swedish sport journalism. Katalin Sinkó's paper "Arpad versus Saint Istvan: Competing interests in the figurative representation of Hungarian history" looks at processes of confrontation and negotiation between competing national heroes, symbolizing two different sets of ideas about Hungary and Hungarianness, which have been used by different groups for different purposes over the centuries.
Lena Johannesson discusses a different genre of figurative representations in her paper "Anti-heroic heroes in more or less heroic media". She looks at the ways in which Swedish anti-heroes have been portrayed in 20th century media and the ways in which these national images are comments on "Swedish" virtues and vices.
Food seems to have a magic position in the maintenance of a national identity among expatriates, who long to feel the tastes of the old country. Lists of what is "typically Swedish" often include food items. In her paper, "From peasant dish to national symbol: An early deliberate example", Eszter Kisban traces a very marked Hungarian example of the making of a national dish and the ways in which this Hungarian symbol has been used in cultural politics as well as in the tourist industry marketing of Hungarian culture.
New dialogues
The scope of the papers indicates the new kind of interdisciplinary dialogues developed in the field of study of national culture and identity. For a long time this kind of dialogue was poorly developed. Although there were some attempts at cross-disciplinary exchanges, a fairly conventional division of labour existed, in which historians concentrated on nationalism as a political and ideological phenomenon, whereas anthropologists mainly worked within the conceptual framework of ethnicity, mostly with an emphasis on synchronic perspectives. This traditional division is, however, slowly disintegrating, as historians become more interested in nations as cultural formations and anthropologists have begun to interest themselves in the cultural politics of nation-building. 2 Up till a few years ago research on national identity was to a great extent focused on the ideology and politics of nationalism, often within a framework of exposing nationalism as a type of false consciousness. There were so many myths of national culture, so much ideological rhetoric waiting to be scrutinized and exposed. (A fairly typical example of this genre is Ernest Gellner's book "Nations and Nationalism" from 1983.) This was a necessary phase of research which now enables us to look in a more detached way at nationalism as a cultural phenomenon and as a historical process. (See, for example, the much more nuanced approach found in Benedict Anderson's influential discussion of the origins and spread of nationalism from 1983.) In spite of the expanding literature we still live with an underdeveloped and ambiguous analytical framework, as Philip Schlesinger has pointed out in his critical survey of current research (1987); concepts like national identity, culture, mentality or heritage are still vaguely defined.
Being national?
When entering the Nordic museum, a nineteenth-century child of Swedish nationalism, you first encounter the imposing statue of the Swedish king Gustav Vasa, often seen as the sixteenth-century founder of the Swedish nation state. Under his stern gaze is a carved motto directed to the visitor: "Be Ye Swedish!" (Warer Swenske!) This early twentieth-century version of a royal command may illustrate the first analytical problem, that of working with concepts which cannot easily be moved around in history. An adjective like "national" or "Swedish" has totally different connotations for different epochs and different social groups. The twentieth-century message of the importance of being Swedish would have made very little sense to Gustav Vasa's peasant subjects. Swedishness is a quality which can hardly be used transhistorically, at least not without a discussion of how this elusive trait is defined or redefined in different historical settings.
In the same way we have an extensive debate on the concept of nationalism. Should it be reserved for the ideological and political movements from the late eighteenth century onwards, as a product of the intellectual climate of the American and French revolutions? Is it possible or meaningful to talk about nationalism in medieval England or sixteenth-century Sweden? It seems to me reasonable to make an analytical distinction between the concepts of patriotism and nationalism in this comparative context, as representing two different cultural paradigms in nation-building. The wider concept of patriotism is based upon the love of God, King and Country by subjects of the state, whereas the idea of nationalism is based upon ideas about a "Volksgemeinschaft", a shared history and culture, a common destiny, an idea of equality and fellowship, which means that nationalism contains political dynamite and can thus be used both to mask class interests and to fight them.
In the following I will concentrate on the period of the nineteenth and twentieth centuries: the grand centuries of nationalist ideology and nation-states as opposed to the earlier era of the absolute monarchies. I will mainly focus on the problem of the making and constant remaking of national identity and culture, as an arena of contestation between different interests.
Do-it-yourself nationalism?
"The National Flag, the National Anthem and the National Emblem are the three symbols through which an independent country proclaims its identity and sovereignty and, as such, they command instantaneous respect and loyalty. In themselves they reflect the entire background, thought and culture of a nation" (after Firth 1973: 314).
Figure caption: A middle-class family on the beach with the Swedish flag hoisted on the top of the tent, photo from the 1920s. Private flags were then still a rather exotic sight, or as a rural answer to a questionnaire on the use of this national symbol put it in 1940: "In my childhood towards the end of the nineteenth century the Swedish flag was an almost unknown concept. It was a good bit later into the next century that I started to see the blue and yellow flag among upper-class types and in vicarages ... Hardly anybody thought of the flag as a symbol of the nation but rather as something tied to royalty, militarism and well-to-do people. The result was that the flag did not become popular among the less well-to-do and it still isn't, because the tradition is tenacious" (quoted after Björnstad 1976: 48). On the whole the national flag became a popular symbol rather late in Sweden. Brave attempts were made to create a national holiday, called "The day of the Swedish flag", from 1916, but this national celebration has remained a rather empty, official ritual with none of the popular fervour of the fourth or the fourteenth of July. Interestingly enough, while official flag-waving is rather limited (and often joked about) in the Scandinavian countries, the private use of national flags is today more common here than anywhere else in the world. Flagpoles are found everywhere, next to summer houses, in caravan camps as well as in suburban gardens, and the flag is hoisted on all kinds of occasions, from family birthdays to Midsummer parties.
This quote from a pamphlet published by the Indian Government in the 1960s illustrates the ways in which a common symbolic language of nationhood is taken for granted today.
The interesting paradox in the emergence of nationalism is that it is an international ideology which is imported for national ends. Looking back at the pioneer era of Western national culture-building we may view this ideology of nationalism as a gigantic do-it-yourself kit. Gradually a set of ideas is developed as to what elements make up a proper nation, the ingredients which are needed to turn state formations into national cultures with a shared symbolic capital. The experiences and strategies of creating national languages, heritages and symbolic estates, etc. are circulated among intellectual activists in different corners of the world and the eventual result is a kind of check-list: every nation should have not only a common language, a common past and destiny, but also a national folk culture, a national character or mentality, national values, perhaps even some national tastes and a national landscape (often enshrined in the form of national parks), a gallery of national myths and heroes (and villains), a set of symbols, including flag and anthem, sacred texts and images, etc. This national inventory is produced mainly during the nineteenth century, but elaborated during the twentieth. The process in which national projects are made transnational, and recycled or remade in different settings and at different times is still with us, as new nations continue to be born within the same basic nineteenth-century paradigm. It is thus an irony that the liberating force of nationalism in developing countries can be seen in a way as the ultimate victory of colonial hegemony, as the nation-building is often carried out along truly Western lines.
The late-comers to this process of nation-building also have to live with the ironic comments of the pioneers. For the latter their own national identity has had time to be transformed from an ideological construction to a given, natural fact, and in their ridiculing of late-comers' attempts to create national symbols (mainly in the Third World) the "old" nations fail to see the parallels to their own past. Ernest Gellner has touched on this problem which is sometimes boiled down to the derogative maxim "I am a patriot, he is a nationalist and they are tribalists" (Gellner 1983: 87).
Constructing national identity
Gellner's quote underlines the fact that some national ideologies have been naturalized so early that they are rarely questioned today. Norbert Elias has pointed at the same problem in his comparison of French and German self-representations: "The questions 'What is really French? What is really English?' have long since ceased to be a matter of much discussion for the French and English. But for centuries 'What is really German?' had not been laid to rest" (Elias 1939/1978). If there is a certain chameleonic vagueness about the concept of nationalism, it is still usually contained within the field of meanings denoting ideology, doctrine or political movement. The use of the concept of national identity is, however, more ambivalent, and it is probably in the development of this concept that ethnicity theory can make its most fruitful contribution, namely in the focus on identity as a dynamic process of construction and reproduction over time, in direct relation or opposition to specific other groups and interests: it is this dynamic and dialectical approach to identity management that is important here (cf. Schlesinger 1987).
During the last decade ethnicity studies have stressed the ways in which ethnic boundaries may change over time, how ethnic markers and symbols are created and communicated and how different criteria of identity can be selected in different situations. (There is, of course, the risk that this focus on the strategic aspects of ethnicity management overstates the fluidity, malleability and manipulatory aspects of ethnic identity.) National identity can thus be seen as a specific form of collective identity. Like ethnic identity, it can be both latent and manifest: activated in special situations, confrontations or settings, dormant in others.
Figure caption: Towards the end of the nineteenth century the northern province of Dalecarlia came to be seen as the typical Swedish peasant heritage. Urban intellectuals made pilgrimages to this rather atypical piece of Sweden, where peasants still wore folk costumes, lived in large villages and maintained colourful rituals. "Dalecarlia with its solid people, its cottages, its old traditions, which still survive up here ... where everything speaks Swedish as in no other region, and nowhere else does one feel so happy and proud of being Swedish as there", exclaimed one of the visitors in 1899 (quoted in Rosander 1987: 315). The reason that Dalecarlia was chosen as the cradle of Sweden was not only the picturesque peasant life still surviving in the region but also because Dalecarlian culture fitted the middle-class mythology of "the old peasant society". There was no large rural proletariat to disturb the image of a happy village Gemeinschaft, and here one found the stereotypes of a freedom-loving, individualistic, and principled peasantry, embodying honesty, honour and love of traditions, living a simple life in close contact with nature. In short, the Dalecarlians represented the kind of cultural ancestors the new progressive middle-class intelligentsia wanted to have. It is therefore no coincidence that the first building brought to the new open air museum Skansen in Stockholm (opened in 1891) was taken from Dalecarlia. Outside the cottage, museum guides pose in the Dalecarlian dresses, which were later developed into something of a national folk costume for the urban middle class (Photo: Nordiska museet).
In what ways are national identities different from ethnic ones, and not only a specific variation on the ethnicity theme? It is evident that a force like nationalism often uses ethnicity as a basis for constructing national cultures, but it can also be argued (in some cases) that an ethnic identity can be a by-product of nation-building. National identity can also be superimposed on traditional ethnic cleavages, turning Finns and Swedes into fellow countrymen in Finland, or producing true Americans out of a mosaic of immigrants. We need to devote more attention to the ways in which national identity in a gradual process comes to transcend and subordinate other loyalties, be they regional, ethnic, or based upon class, gender or religion. How is it that national identity often works so well as an inclusive symbol?
Unlike ethnic identities, national ones are always directly linked to problems of state formation and state discourse. They are produced and reproduced within a very special institutional framework, which sets them apart from other types of identity constructs. Benedict Anderson has discussed national identity in terms of "imagined communities" of national fellowship. His by now almost classic definition of the nation runs: "It is an imagined political community and imagined as both inherently limited and sovereign.
It is imagined because the members of even the smallest nation will never know most of their fellow-members, meet them or even hear of them, yet in the minds of each lives the image of their communion" (Anderson 1983: 15).
In a discussion of Anderson's thesis Michael Harbsmeier has argued that his use of the anthropologist Victor Turner's communitas concept is too broad; it does not help us to understand the very specific nature of "the national community", as opposed to the communitas of religious groups or empires. He develops Anderson's framework by arguing that national identity is, unlike many other forms of social identity, totally dependent upon the imagined or real approval of this identity as a national otherness by others, i.e. other nations (Harbsmeier 1986: 52).
The fact that national identity is always defined as a contrast or a complement to other nations is illustrated by the nineteenth-century Scandinavian national movements. Norwegian nationalism was born, not in Norway, but among Norwegian students and intellectuals in Copenhagen towards the end of the eighteenth century. The Norwegian national identity came to be profiled against the centuries of Danish rule and the enforced union with Sweden from 1814. It is no coincidence that it was the historical period of up to 1300, before the union with Denmark, that came into focus in the creation of a Norwegian cultural heritage: Norwegians were above all Vikings (cf. Østerud 1987). In the Finnish national movement, folklore became even more important. The search for a Finnish folk literature and the emphasis on Finnish as a national language was a counter to the former Swedish domination and the new Russian rule after 1809. This construction of a national Finnish folk culture was a task mainly carried out by the Swedish-speaking intellectual elite, who in this process had to become even more Finnish than the peasantry itself (cf. Hanko 1980).
In nineteenth-century Denmark the construction of a national heritage and a national identity was above all profiled against the arch-enemy in the south, Germany, while Swedish nationalism of this era really lacked an arch-enemy or rather the threat of a dominating neighbour, as the traditional fear of Russian intervention had diminished. Against this background it is hardly surprising that the cult of Scandinavianism became a Swedish speciality, or even a kind of substitute nationalism. The national anthem talks about the "mountainous North" and the national folk museum was named the "Nordic Museum".
Without analysing this national culture building as a contrasting project we cannot explain the different strategic uses made, for example, of folk culture in the nineteenth-century Scandinavian context. It is no coincidence that the authentic Norwegian peasant was to be found in the remote mountain valleys of Telemark and his Swedish counterpart in Dalecarlia, or that true Finnish folk culture survived in the forests of Karelia. 3 For Hungary Tamas Hofer has analysed a similar process of stereotyping (see Hofer n.d.). The Hungarian peasant of the plains was created as a national contrast to the Austrian mountain peasant. Hofer has also discussed the ways in which a national peasant folk culture was used by different groups for hegemonic ends at different points in Hungarian history - for example, the elaborate use of folk culture as national symbolism during the Stalinist era of the 1950s. This was the great period for "state folklorism" in Eastern Europe, when smiling factory girls paraded in peasant costumes and the image of the "traditional folk" was used in appeals for national unity by the new rulers. Today, as Eszter Kisban points out in her paper, the tourist industry is one of the chief marketing agencies for such stereotypes of national folklore.
The anthropologist Michael Herzfeld has explored the cultural politics of folklore in his studies of the remaking of a Greek national identity after the end of Turkish rule in the nineteenth century, a process in which the Greek cultural heritage had to be purified of all Eastern elements and appear in a manner which conformed to European stereotypes of the true, classical Greek nation (see Herzfeld 1987).
Even the American immigrant nation developed a search for its own "folk culture" at the beginning of the twentieth century, when collectors and scholars roamed the Appalachian mountains in search of an "Elizabethan culture" whose bearers spoke like Shakespeare and plaited baskets while singing medieval ballads. This traditional culture had to be salvaged and reproduced in order to stem the disintegrating forces both from the modern world and the new waves of proletarian immigrants (Whisnant 1983).
Examples like these illustrate the ways in which folk culture becomes nationalized (and also sacralized). A correct, authorized and timeless version of folk life is produced through the processes of selection, categorization, relocation and "freezing". One of the most interesting parts of this process is what is left out, (more or less unconsciously) disregarded or ignored as not being worthy of entering the showcases of the new national museums or the pages of the folklore heritage publications.
It is, however, important not to reduce these processes of the nationalization of folk culture to one of just "inventing traditions". Here we have a much more complex pattern of accommodation, reorganization and recycling, in which different interest groups have different claims at stake. (Cf. the discussion of the ways in which Swedish and Hungarian intellectuals used the folk culture as a strategy of cultural politics in the contributions below by Niedermüller, Sinkó, and Frykman.) If national peasants were produced in contrast to competing national images of other nations, the same process of profiling is found in the creation of national stereotypes: the typical Swede or Hungarian is usually profiled (consciously or unconsciously) against a counterpart and it is interesting to note that the stereotype tends to change with the object of comparison.
In relation to the happy-go-lucky nations of the Mediterranean, Swedes define themselves as grey and boring, obsessed with order, punctuality and the control of emotions, characterized by a total lack of spontaneity and esprit de vie. If the comparison is made in relation to Finns or Russians, other qualities are stressed, because these Northern neighbours are often stereotyped as even greyer and more boring: they even make the Swedes look a little bohemian. On the whole there is an interesting metaphor of North and South in national self-representation: one's own identity is contrasted with those who are more Southern and easy-going (but less dependable) and those who are Northerners and less easy-going than one's fellow countrymen. There seems to be a tendency in many settings to produce an image which is based upon an idea of the golden mean. "We English are not as warm and hot-tempered as the French or the Spaniards, but more dependable and efficient; on the other hand, we are not as rigid or controlled as the Germans or the Scandinavians." 4 Ideas about emotional control or lack of it seem very central in these kinds of stereotypes, where north and south often stand for the cultural opposition of cold and warm. Another striking feature of these stereotypes is their gender bias. Although das Vaterland is usually symbolized by a national mother - Britannia, Marianne, Mother Denmark and Mother Svea (of Sweden) - the typical Swede, Dane or German is usually a man.
But national stereotypes also reflect changing geopolitical conditions, as for example in the altered ways in which Hungarians have viewed the Austrians, from the period of Habsburgian dominance to the contemporary situation, or the manners in which Danes have defined Swedes over the last century (and vice versa). There is always an element of underdog-topdog argumentation in the ways national pride or national identity are expressed in relation to neighbourhood nations, be they defined as Big Brothers or Little Sisters. 5 To conclude, one may argue that the construction of national identity is a task which calls for internal and external communication.
In order to create a symbolic community, identity markers have to be created within the national arena in order to achieve a sense of belonging and loyalty to the national project, but this identity also has to be marketed to the outside world as a national otherness. Such projects of self-presentation and self-definition can be analysed in many cultural arenas during the nineteenth and twentieth centuries.
(An example of the latter is the big world exhibitions from 1851 and onwards, where nations have peddled their self-images; cf. the discussion in Benedict 1983, Rydell 1984 and Smeds 1983.)
National culture
National identity and national culture are often used as interchangeable concepts. Here I would like to argue for the need to keep them apart, reserving the concept of national culture for that kind of collective sharing which exists on a national level or within a national cultural space. Rather little research in this field has studied what is actually shared on a national level and how it is shared.
It is quite clear that communication is a crucial problem here: how are these imagined communities shaped and held together over time, how is the social and political space of the nation also transformed into a cultural space: a common culture? This sharing is done in different ways and on different levels.
Let us think about the various ingredients which may be contained in the vague concept of national culture. First of all, I think we have to distinguish between "The National Culture" and an everyday national sharing of memories, symbols and knowledge. "The National Culture" which the French historian Maurice Agulhon (1987) has also termed "The national school culture" (or la Grande Culture) is a normative cultural capital: What Every Frenchman Should Know. This is the kind of knowledge which is dished out in school, carrying the authorized seal of the official public culture. The making of this kind of normative cultural heritage is an interesting study in itself. The boundaries between ideas about what every Swede ought to know and what all Swedes actually share tend, however, to become rather blurred.
An interesting example of this confusion of a descriptive and normative approach to national culture is found in the recent study Cultural Literacy: What Every American Needs to Know (Hirsch 1987). Hirsch starts out by trying to delineate what actually is shared on a national level, using the USA as his case: "Suppose we think of American public culture as existing in three segments. At one end is our civil religion, which is laden with definitive value traditions. Here we have absolute commitments to freedom, patriotism, equality, self-government, and so on. At the other end of the spectrum is the vocabulary of our national discourse, by no means empty of content but nonetheless value-neutral in the sense that it is used to support all the conflicting values that arise in public discourse ... Between these two extremes lies the vast middle domain of culture proper. Here are the concrete politics, customs, technologies, and legends that define and determine our current attitudes and actions and our institutions. Here we find constant change, growth, conflict. This realm determines the texture of our national life" (Hirsch 1987: 102).
Hirsch's categorization can be questioned, but his aim is to look at the domain of vocabulary, or rather what he terms the cultural literacy of a given nation: "the whole system of widely shared information and associations" (p. 103), the kind of cultural competence needed to be able to take part in public discourse. Where he goes wrong is in his insistence that this national cultural capital belongs to a general mainstream culture which stands above class interests and power relations. The problem of hegemony and contestation is brushed away and his ultimate aim, thus, becomes rather futile, namely a list of 4,500 dates, places, people, events, books, phrases and sayings that make up the American common culture.
This attempt at standardization mirrors a given social position, reflecting the perspective of a middle-class, middle-aged WASP. The whole project again illustrates the difficulty of separating normative and descriptive approaches to what constitutes a national culture or shared knowledge.
Let me illustrate this dilemma further by quoting a couple of less ambitious attempts at defining national sharing. First T. S. Eliot's classical list of English institutions: "Derby-day, the Henley regatta, Cowes, August the 12th, a cup final, the greyhound races, the Fortuna game, the dart board, Wensleydale cheese, cabbage boiled in cloves, pickled beetroots, churches in nineteenth-century Gothic and Elgar's music" (Eliot 1949: 30).

The patterns of national sharing are also demonstrated in images and visual cliches which became saturated with symbolic meaning. This process of cultural condensation is very marked in the development of national sceneries. One of the best Swedish examples of this is the view of the little red cottage in the meadow at the edge of the lake, a landscape reproduced on scores of postcards and travel brochures. This image evokes a range of associations and connotations, which may produce profound homesickness or ironic comments - reactions which are hard for the outsider to grasp.
Here is a Swedish version from 1985: "To be Swedish is to have experienced the Swedish summer in all its glory, it is Christmas morning, it is the high school graduation. It is to have been dressed up for the last day of school and to have seen the sun set over the edge of the forest, it is to have lit the Advent candles and to have read Elsa Beskow and seen the king. It is to have walked across a barrack square and to have stood by a grave" (Nordstedt 1985).
Both these examples are insiders' lists of cultural traits, made for other insiders. They are lists of key symbols or key events which probably have a rich field of cultural connotations and evoke shared memories of similar situations. They both claim to have captured the essence or spirit of Englishness or Swedishness, but they reflect one version of or perspective on what constitutes the typical or essential in the national culture. This is England and Sweden described through the cultural lenses of two (male) intellectuals.
If we ask other persons to make up lists like these, we will get a wide range of variations with some common focus, but above all there is a tendency for people to pick very visible national traits: public rituals, family feasts, favourite dishes, key symbols and images. It is the "Sunday Best" version of the national culture which is often described, and it is interesting to reflect upon how such symbolic compressions of national culture are created and changed over time. You will hardly get the same list in 1920 as in 1988. Eliot's use of Elgar can be taken as one example of this gradual selection. In 1972 another fellow countryman states that "Elgar is loved by the English people as one of the greatest English composers and also for his unique expression of the deep intangible feelings of England" (quoted after Crump 1986: 164).
But as Jeremy Crump has shown in his analysis of the reception of Elgar, his music gradually became defined as typically English through being performed on ceremonial occasions and also by being put to patriotic use during the First World War.
The selection of items for such "Top Ten" lists of national symbols will often include small details or seemingly trivial elements, which are symbolic representations or distillations of central ideas or patterns of behaviour. They have, as Billy Ehn has put it, "a high specific cultural weight". He points out that images of Swedishness can be evoked in memories of the tastes and smells emanating from the traditional midsummer meal of pickled herring, new potatoes and cold aquavit: "a phenomenon which mirrors a whole cultural universe, images of summer, festivity, pleasure and nationhood" (Ehn 1983: 14).
The impact of such events depends not only on their being very visible rituals, but also on their sensual or emotional quality. The common national memories and understandings are sometimes more strongly articulated in non-verbal forms, in shared smells, sounds, tastes and visions. Raymond Williams has coined the concept structure of feelings for such elusive cultural phenomena, which cannot be described in terms of ideology or worldview (Williams 1977). In this sense, some feelings are more national than others, i.e. they have a stronger symbolic charge.
I would, however, argue that the most important aspects of this national sharing are anchored in the trivialities of everyday life, in the ways in which we can talk about Swedish routines and habits. These traits are so obvious to us that we do not even consider them as typically Swedish. They are easier for an outsider to observe. Concepts like Swedishness and Englishness, for example, imply that there is a certain cultural praxis as well as style that is contained within the national boundaries.
It is interesting to think about what people actually mean when they talk about a person behaving in a "very Swedish" way or looking "very British". People often find it difficult to actually verbalize these traits: they will say vaguely that there is something very Swedish about the way he carries his body, eats his meal, expresses certain feelings or laughs at a joke. Intangible traits like this make up one elusive part of a national cultural capital, or rather - to continue with Bourdieu's terminology - a national habitus or a set of dispositions. When people talk about Swedishness, they talk about this kind of imponderabilia, rather than about "cultural heritage" or la Grande Culture. Swedishness then denotes not so much what people talk about but their way of talking: the styles in which a problem is addressed, an argument is carried on or a conflict resolved (or suppressed).
To conclude: a concept like national culture is in acute need of deconstruction: what kinds of knowledge or shared understandings is this national capital made up of, which parts of this capital are highly visible, which forms are less articulated or tangible? Are we talking about what all Swedes know or what they ought to know? It seems important to distinguish between, on the one hand, the symbolic capital that is defined as national and patriotic and, on the other hand, the knowledge and experiences which happen to be contained within national boundaries: the inside jokes, associations, references and memories which Swedes understand and Norwegians don't. In short, how can we categorize these different forms of sharing into registers or levels of a "national culture"?
How wide is nation-wide?
The problem of sharing raises questions of communication and the creation of national arenas of interaction. The making of a nation is thus a problem very much linked to the project of integration and standardization. Language is a good example of this. One of the early aims of nationalists was to create a national language, often in settings where the spoken or written word did not respect national boundaries. For nineteenth-century Norwegian nationalists the creation of a truly Norwegian standard language meant that old influences from written Danish had to be contested, but also that the border between Norway and Sweden had to be made into a linguistic boundary as well, in spite of the fact that people on both sides of that border shared a common dialect. The task of the linguists was to create a standard Norwegian language and the job of the school system was to make sure all Norwegians learned to speak it (cf. Østerud 1986: 13). All over Europe we can study the same process, which also led to the creation of specific academic disciplines and school subjects, like "Swedish", "Danish" or "English". (See the discussion of the Scandinavian case in Teleman 1986 and, for Britain, Colls & Dodd 1986.) If language became an important medium for national cohesion and belonging (in most, but far from all nations), the nationalization of culture was very much linked to the creation of a public sphere by the rising bourgeoisie, who created new arenas and media of debate and information. We need to study the ways in which this kind of public discourse was turned into a national discourse. Benedict Anderson has argued for the importance of what he calls "print capitalism" in producing a national community. He focuses on the role of the new media of newspapers in the late eighteenth and nineteenth centuries and their role in supplying intellectuals with a forum for national exchanges. In Sweden it is evident that the creation of a multitude of local newspapers had this cohesive effect, in spite of the fact that there was no "national" paper in the nineteenth century (although there were some magazines). There was a constant borrowing and recycling of material between papers and a debate which made the local doctor or bureaucrat out in the province feel that he was taking part in a national discourse and had a knowledge of what was happening in the national capital.
Another new mass medium was the national school book. In Sweden the standard reader for the elementary school (Folkskolans läsebok) was used in all Swedish schools from 1868 up to around 1900. Several generations of Swedes, thus, grew up reading the same texts and looking at the same pictures (Furuland 1987).
Media like these not only created national communities of communication but also produced gaps or communicative barriers between, for example, Swedes and Danes. Cultural sharing in a sense became less regional and more national but also less international during the nineteenth century. The Swedish elite talked and read more Swedish and less French and Latin, while the peasants were drawn into a national framework of thought and action.
During the twentieth century the mass media have often been seen as the symbol (or scapegoat) of the internationalization of national cultures, but even in the age of satellite television and rock videos I would argue for a more differentiated analysis of this phenomenon. The new media of our century, like radio and television, have played a crucial role in a further nationalization of culture. Many of the nineteenth-century media still remained class-based media, and a truly national public discourse was not created until the twentieth century.
In two other studies (Löfgren 1989 and n.d.) I have looked at this kind of mass-mediation of national culture: first in the ways in which a "national nature" is created in nineteenth-century Sweden - a set of sceneries which most Swedes learn to recognize as "typically Swedish", views packed with national symbolism. This process of framing and condensing national messages in a piece of nature cannot be understood without reference to the mass-production of landscape sceneries, from oleographs to picture postcards and travel brochures, and this production of national images was also helped by the proliferation of texts and songs about Swedish nature.
The other example looks at the very crucial role of radio broadcasting in establishing a national sharing. I have tentatively argued that the period of national broadcasting (and later television) with a one-channel system between about 1930 and 1970 has had an enormous integrating effect in Swedish culture and everyday life. These were decades when (almost) all Swedes listened to the same radio programmes or later viewed the same TV shows.
In the late 1920s and the 30s national broadcasting gave Swedes a common focus, common topics of conversation and frames of references. A new kind of imagined community was developed as Swedes all over the country listened in to the same media event, be it the Sunday service, a sports transmission or a popular cabaret. New national personalities were created and even the weather was nationalized in the magic chanting of temperatures and winds from meteorological stations all over the country. National broadcasting also created a national rhythm of listening. People flocked to the morning gymnastics, waited eagerly for the gramophone hour, gathered for the evening news and went to bed with the national anthem, which ended each broadcasting day. The radio created new national traditions, such as the New Year's Eve celebrations. At midnight a mighty community of listeners stood to attention as the church bells from all Swedish cathedrals rang in a new Swedish year.
But even today, with a much more pluralistic media world, we must look at the ways in which international influences are nationalized into a local context as they cross the border. Dallas, Disney and Dynasty have different meanings and play different roles in different national settings. Sweden is, for example, often presented as the most Americanized country in Europe, but this Americanization has been carried out in an extremely Swedish manner. For a visitor from the USA it is often hard to recognize this American influence in the Swedish way of life: there is what Robert Redfield once termed an interesting process of parochialization going on in Stockholm as well as in Budapest. Ulf Hannerz has developed the concept of creolization for this local transformation of cultural flows within the world system in a discussion of American culture (Hannerz 1987).
A good example of the effect of national cultural barriers is found in the international world of advertising, where it is often demonstrated that an American or French advertisement cannot simply be transplanted into a Swedish magazine - it needs to be reworked by a local agency.
In the same way, consumer culture may also be both an internationalizing and a nationalizing force. One of the really strong cohesive national forces in the United States is to be found in consumer patterns and messages (cf. for example Roland Marchand's study Advertising the American Dream, 1985). Consumption in the USA is in a way very American, with brands, styles and habits which keep the 50 states together, but also create barriers to the outside world. These barriers are often demonstrated in the popular jokes about American tourist complaints about the lack of American ways (especially foodways) in foreign countries. The establishment of a number of national chains of shops, motels, restaurants and other commercial institutions has created a standardized pattern which makes the Californian feel at home in both Idaho and South Carolina. (When the waitress approaches him in such distant territories asking what kind of salad dressing he would prefer with his meal, he instantly knows that there are three choices: French, Blue cheese or Thousand Islands.) To conclude: we need to develop a study of nationalizing media, agents, institutions and arenas. How is the nation established as a nationwide cultural space, as a horizon or communicative community, and how is the boundary towards other nations maintained? Such an analysis must focus on everything from schools and national (military) service to TV commercials and fashions, and it must examine the way regional or subcultural worlds are made national and the way international messages are creolized.
The disintegrating nation
Another perspective on this communicative process is found in the discourse on the disintegrating national culture, a discourse which is at least as old as nationalism itself. Nations have always been seen as falling apart, but the forces (or threats) of disintegration tend to vary through time.

In 1909 a prize competition for a Swedish national monument was launched. A private donator had written to the king and pointed out that Sweden still lacked such a manifestation, which could demonstrate the Swedish people's gratitude for its country, state and culture and also create a feeling of national unity. Of the 36 contributions, the only one remembered today is Sven Boberg's "Sleep in peace", with Mother Sweden snoring on the throne, flanked by the two heroes King Gustavus Adolphus and Charles XII, who are squeezed into their boots. The artist suggested that his statue should be positioned in the entrance to the Houses of Parliament, in order to make sure that no one could get in or out. A national monument was never erected in Sweden. 1909 was certainly not the right moment, as the nation witnessed the biggest general strike in European history, and time never again became ripe for this kind of national rhetoric.
One constant threat has been defined as regionalism, but this concept covers a wide range of relations, which may fluctuate in interesting ways from nation to nation and from time to time. France is a good example of highly varied regional movements, changing not only in focus and intensity but also in their political profile during the last two centuries.
Many forms of regionalism may function not as a potential threat to national break-up but rather as a kind of tension which may keep the national project alive and vital. In the Scandinavian countries regionalism has often functioned as a stable and more integrating than threatening element in the national landscape. In some ways the province or region has had the role of providing a micro-level model for patriotism. By learning to love your home region - one part of the national whole - you prepared yourself for national feelings on a higher level; this was the general idea in school education at the beginning of the twentieth century.
At that time in Sweden socialism was often defined as a major threat to national unity, later to be replaced by internationalism or Americanization. We find similar transformations in other nations, depending on the political climate.
This genre of popular debate is perhaps better analysed as a form of cultural contestation, in which different interest groups accuse other groups (or ideologies) of threatening the national ideal. Why do some Swedes at certain times define themselves as more Swedish or better nationalists than others? Why is it that this kind of discourse is more marked in certain historical periods? It has, for example, sometimes been argued that Swedes are not very chauvinistic, because national slogans or patriotic appeals are less common here than, for example, in the United States or in Romania. But national arguments or national feelings are mainly activated in situations of uncertainty or anxiety. The incessant talk about American morals and values in the United States does not necessarily mean that Americans are more patriotic (or chauvinistic), but rather that the national identity has to be constantly reaffirmed because it is a somewhat fragile construction. The ethnic mix and fluidity call for a constant remaking of America.
In the Sweden of the sixties and seventies flag-waving and patriotic rhetoric were definitely out, at least in intellectual circles, but this was a period of national stability. In the political turbulence of the twenties and thirties national rhetoric was a tool of political struggle between the left and the right. The conservatives argued that the social democrats were unpatriotic and out to destroy both traditions and the national heritage. Unlike their counterparts in France and Britain, the Swedish social democrats were, however, very successful in projecting an image of themselves as working "in the best interests of the whole nation". In a way they wrested the national argument from the hands of the conservatives and made it a part of their National Welfare programme. One symbolic manifestation of this change was the introduction of the national flag into the May Day demonstrations during the 1930s.

In 1935 another national event led to a competition for a monument celebrating what was seen as the first meeting of the Swedish parliament in 1435. This time the object was a statue of the peasant rebel leader Engelbrekt from that turbulent period of Swedish history, who was used as a national symbol by both the right and the left. Many social democrats chose to see Engelbrekt as a symbol of early democratic and egalitarian strivings, thereby underlining the political parallels between the 1430s and the 1930s. The sculptor Bror Hjorth's contribution expressed this version of Engelbrekt, who was depicted as a popular leader and forceful agitator, but the official committee found his version too revolutionary and chose a milder and more conventional image of Engelbrekt. (Cf. the discussion in Johannesson 1985.)
This was a period when the concept of citizenship became central in the national rhetoric about the making of a modern nation, populated by modern individuals who had been freed from traditional collective loyalties in order to be nationalized as citizens of the new Modern Sweden. The constant references to the many rights and obligations of citizenship - a status which only the nation can give its people - were very typical of this period of nation-building.

The caption to this cartoon from 1905 runs: A traitor to his country. The policeman: What the hell is wrong with you, sir? - I am sorry, my good constable, but I just wasn't able to stand up any longer when they sang the anthem for the 82nd time. The decades around 1900 were a period of intensive production (and singing) of patriotic songs in Sweden, and community singing had another peak period during the Second World War (and even more so in occupied Denmark, cf. Karlsson 1988: 155ff). Benedict Anderson has pointed out the strong emotional charge in this kind of national ritual: "No matter how banal the words and mediocre the tunes, there is in this singing an experience of simultaneity. At precisely such moments, people wholly unknown to each other utter the same verses to the same melody. The image: unisonance. Singing the Marseillaise, Waltzing Matilda, and Indonesia Raya provide occasions for unisonality, for the echoed physical realization of the imagined community" (Anderson 1983: 132). - A more recent example of this is the key role of patriotic collective singing in the 1988 demonstrations for national revival in the Baltic states. Many of the national rituals, like hoisting the flag, visiting a national shrine or breaking out in song, appeal more to emotions and gut reactions than to intellectual reasoning. Even the most ardent anti-nationalist may find himself fighting a lump in the throat at such occasions.
Although the social democratic utopia, usually called The People's Home, was very much part of a project of modernity with eyes directed forward rather than to the past, there was also an attempt to redefine the national heritage. In the 1930s Swedish democracy was still a young institution and in shaping a new national history, great emphasis was placed upon the democratic traditions of Sweden (and above all the Swedish peasantry). The ethnologists joined in this redefinition. The traditional villages could now be described as the cradles of democracy, as "the moulds in which the Swedish folk mentality had been shaped, the setting in which our people has gained its basic social instincts" (after Johansson 1987: 7). New combinations of national heroes and villains were also produced.
In the 1930s we can thus analyse how a new national heritage is constructed with new symbols of common ancestry and identity, and the same type of analysis could be carried out for the end of the nineteenth century, when conservatives and liberals fought over the true national values and genuine heritage. (Cf. also Patrick Wright's discussion (1985) of the political struggle over definitions of the national heritage between Labour and Conservatives in post-war England.) The discourse on national disintegration often misses the fact that national culture is constantly redefined. Every new generation produces its own national sharing and frames of reference, selecting items from the symbolic estate of earlier generations. It is usually not the nation that is falling apart but rather an older version of the national ideal. When indignant protests are made about Swedish schoolchildren who (supposedly) call the national anthem the "ice hockey song" because they only hear it at international matches, people forget that only a few generations of Swedes have ever learnt to sing it.
This constant redefinition of a national symbolic and cultural capital can be analysed by trying to trace what kind of sharing has united different Swedes (say, a clergyman, a farm woman and an industrial worker) in 1880, in 1930 or today. I would maintain that the sharing is greater today than in the past, but different. Maurice Agulhon has, for example, argued that France today is more culturally homogenous than during the nineteenth century, but that the national, symbolic capital (i.e. the patriotic school book culture) has diminished (Agulhon 1987).
In the same way national rhetoric tends to change. Arguments or language of earlier periods may sound bombastic, chauvinistic or even racist to our modern ears, but we have at the same time developed new forms of rhetoric about the superiority of our own country, which we do not think of as chauvinistic. In his paper on sports and nationalism below, Billy Ehn points out that nationalistic arguments and rhetoric which in other settings or arenas would sound bombastic flourish in the sport pages.
National culture as rhetoric and practice

During the last two centuries nationalism has evolved as a strong source of cultural and social identity, and so far we have little evidence that it is dying, although it may often be dormant. The symbolic community of the nation still produces strong feelings and strong commitments as well as gut reactions of love, hate, pride and aggression. Flag-waving or flag-burning is still, in most settings, no laughing matter.
In this paper, I have argued for a historical anthropology of national cultures, focusing on some of the processes which develop, reproduce and change national identity and culture. This is a field of study which calls not only for a historical but also a comparative approach. Elusive phenomena like Swedishness or Hungarianness are best studied in contrast.
The comparative study of the ways in which nations are turned into cultural formations may benefit from separating three levels. First of all, there exists what we could call an international cultural grammar of nationhood, with a thesaurus of general ideas about the cultural ingredients needed to form a nation, like the check-list I mentioned earlier. This includes a symbolic estate (flag, anthem, national landscape, sacred texts, etc.), ideas about a national heritage (a national history and literature, a national folk culture, etc.), as well as notions of national character, values and tastes. This international grammar may also contain specific ideas about the institutional framework. During the nineteenth century it was not only a concept of national folk culture that was circulated between (mainly) European nations, but also guidelines for the proper establishment of institutions like national folk museums and archives, to name one example.
The international thesaurus is transformed into a specific national lexicon, local forms of cultural expression, which tend to vary from nation to nation. In this field we can observe how national rhetoric and symbols may be located in different arenas, emphasized in different historical periods or social situations. The third term, dialect vocabulary, focuses on the internal divisions within the nation: conflict groups and interests using national arguments and rhetoric, sometimes also creating different styles of national discourse: accusing each other of "vulgar nationalism", "unpatriotic behaviour" or just representing the wrong type of Swedishness. The definition of the Swedish folk heritage of the late nineteenth-century bourgeoisie differed a great deal from that of the social democrats of the 1930s.
Whereas the concept of nationalism is relatively clearly defined as a political ideology, national culture is a term which often contains a mixture of normative and descriptive elements. I have argued for a focus on the everyday level of cultural sharing, which happens to be contained by national borders: the shared understandings and frames of references of Swedes or Hungarians.
In the study of the ways in which culture is nationalized we thus have to distinguish between two processes. One is concerned with the ways in which cultural elements are turned into symbols or national rhetoric - declared to symbolize the essence of the nation or its inhabitants or stated as norms about proper national behaviour and virtues; the other has to do with how cultural flows are contained, organized and transformed within the national borders - how national space becomes cultural space. This also calls for an analysis of the ways in which different cultural domains are nationalized, from landscape to sport, or perhaps even de-nationalized at later stages, as in the case with national symbols which lose their power or meaning.
In looking at national culture as process it is important to avoid a narrative structure based upon an evolutionary or devolutionary perspective, in which nations are born, come of age or fade away, to name a few common life cycle metaphors in studies of nationalism. Modern nationalism is a cultural paradigm, but all nations do not go through identical processes of making and re-making. Take the question of timing: when are certain national strategies, claims or rhetorics legitimate and successful or just futile or even comical? 6 The erection of a national monument in Budapest in 1896 created a national rallying point, whereas in Sweden in 1909 the same plans proved to be a total flop. Nationalism may often be a dormant cultural force, activated only situationally and selectively. National identity is not always an overriding loyalty and there are social groups which may combine a very international and cosmopolitan identity with a sense of national belonging.
In 1882 the Frenchman Joseph Ernest Renan gave his classic definition of a nation as having to be something more than a mere customs union; a true nation must have a soul, he added in the style of contemporary speech, and continued: «L'existence d'une nation est (pardonnez-moi cette métaphore) un plébiscite de tous les jours, comme l'existence de l'individu est une affirmation perpétuelle de vie» (quoted after Østergård 1988: 29).
It is this problem of how the nation is reaffirmed by its national subjects in "daily referendums" that perhaps is the least developed theme in studies of national culture-building.
The national project cannot survive as a mere ideological construction; it must exist as a cultural praxis in everyday life. Being Swedish is a kind of experience which is activated in watching the Olympics on TV, in hoisting the flag for a family reunion, in making ironic comments about the Swedish national character (and feeling hurt when non-Swedes make similar remarks), in memories of holiday trips to national sights, or in feelings of being out of place on the wrong side of the national border and securely at home on the inside, in the sharing of national frames of references, from jokes to images.
We need to devote a lot more attention to how this kind of national sharing is produced and reproduced in everyday life, asking how deep, how long and how wide it is - at given times and in different social settings, and how it varies from generation to generation. A study of this process, thus, calls for an analysis, not so much of rhetoric but of practice, of the lived national experience.

Notes

A version of this paper was presented at the 12th International Congress of Anthropological and Ethnological Sciences in Zagreb, 24-31 July 1988, in the session on History and Anthropology, and I am grateful for the stimulating comments put forward at this session. Special thanks also to Alan Crozier for his help with the translation and his constructive remarks.

1. The first workshop on "National culture as process" in Budapest, 1-3 May 1989, also included papers by Tamas Hofer and two Hungarian sociologists, Gyorgy Csepeli and Judit Lendvay, as well as contributions by the Swedish historian Bo Ohngren and the ethnologist Anders Lundin.

2. The interest in cultural perspectives on nation-building among historians is expressed in works like Weber 1976, Hobsbawm & Ranger 1983, Braudel 1986 and Agulhon 1987 (see also the excellent overview in Østergård 1988), whereas recent examples of anthropologists dealing with the cultural politics of nationalism are found in studies by, for example, Herzfeld 1987 and Kapferer 1988.

3. See the discussion on the nationalization of Dalecarlia in Rosander 1986 and the similar Norwegian processes in Berggreen 1989 and the general discussion in Oinas 1978.

4. The metaphor of a north-south dichotomy in national stereotypes was developed by Tomas Gerholm in a colloquium on national mentalities at Lund University in 1985.

5. The changing Hungarian cultural construction of national identity and the stereotyping of other nations were discussed at the seminar in two contributions by Gyorgy Csepeli (n.d.) and Judit Lendvay (n.d.). For a discussion of the changing stereotypes of Danes and Swedes over the last century, see the discussion in Löfgren 1986 and Linde-Laursen (n.d.). A general discussion of national stereotypes is found in the Dutch anthropological journal Focaal. Tijdschrift voor Antropologie, April 1986, which presents material from a colloquy on national character.

6. See the discussion of the timing of the claim to nationhood in Smith 1986: 8ff. and Gellner 1983.
|
2019-05-08T13:29:37.417Z
|
2017-01-01T00:00:00.000
|
{
"year": 2017,
"sha1": "2d3171dae3b6f7318eeb36ee0df2b3fe5717184b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.16995/ee.1224",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "6e6435f68b3df44c83b141406b2824cef6021907",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
}
|
234484196
|
pes2o/s2orc
|
v3-fos-license
|
First‐in‐human trial assessing the pharmacokinetic‐pharmacodynamic profile of a novel recombinant human chorionic gonadotropin in healthy women and men of reproductive age
Abstract The purpose of this first‐in‐human trial was to examine the safety, pharmacokinetics (PK), and pharmacodynamics (PD) of a novel recombinant human chorionic gonadotropin (rhCG; FE 999302, choriogonadotropin beta) to support its clinical development for various therapeutic indications. The single and multiple dose PK of choriogonadotropin beta (CG beta) were evaluated in women and the single dose PK and PD of CG beta were compared to those of CG alfa in men. CG beta was safe and well‐tolerated in all 84 healthy subjects. In women, the area under the curve (AUC) and the peak serum concentration (Cmax) increased approximately dose proportionally following single and multiple doses of CG beta. The apparent clearance (CL/F) was ~ 0.5 L/h, the mean terminal half‐life (t½) ~ 45 h and the apparent distribution volume (Vz/F) ~ 30 L. After single administration in men, the mean AUC was 1.5‐fold greater for CG beta than for CG alfa. Mean Cmax and Vz/F were comparable for the 2 preparations. In accordance with the differences in AUC, the CL/F was lower for CG beta (CL/F 0.5 vs. 0.8 L/h), explained by a longer t½ (47 vs. 32 h). Serum testosterone levels induced by a single dose rhCG reflected the PK profiles with a slight delay, resulting in 59% higher AUC for CG beta. The PK parameters for CG beta were comparable in men and in women. In conclusion, the PK differs between the two rhCG preparations, causing higher exposure and a higher PD response for CG beta, which may require relatively lower therapeutic doses.
INTRODUCTION
Human chorionic gonadotropin (hCG) is a glycoprotein hormone that is produced by the pituitary in small amounts in nonpregnant women, men, and menopausal women, whereas large amounts are produced by the placenta of pregnant women. 1,2 During early pregnancy, hCG is first expressed by the blastocyst before implantation and is increasingly produced after implantation by the syncytiotrophoblast. During the first trimester of pregnancy, hCG is produced in increasing amounts up to the 10th week of gestation and then decreases gradually. 3 Because intact hCG is cleared by the kidneys, hCG may be isolated from the urine of women and used for the manufacturing of therapeutic preparations. 4,5 The hCG consists of a 92 amino acid single α-subunit, which is common to all the pituitary glycoprotein hormones, and a specific β-subunit of 145 amino acids. Each subunit is post-translationally modified by the addition of complex carbohydrate moieties. The alpha subunit contains 2 N-linked glycosylation sites at amino acids 52 and 78 and the beta subunit contains 2 N-linked glycosylation sites at amino acids 13 and 30 and 4 O-linked glycosylation sites at amino acids 121, 127, 132, and 138. 6,7 The hCG and luteinizing hormone (LH) show similar molecular structures and interact with the same LH/chorionic gonadotropin (CG) receptor. 8 As a result of this similarity to LH, hCG is used pharmacologically in a number of clinical indications. In women, hCG is used to induce final follicular maturation following controlled ovarian stimulation or to induce ovulation in anovulatory women. 9 In men with hypogonadotropic hypogonadism, hCG is given to induce and maintain spermatogenesis.
To date, there is only one approved recombinant hCG (rhCG) preparation (CG alfa), which is expressed by a Chinese Hamster Ovary (CHO) cell line and whose pharmacokinetics (PK) are similar to those of urinary hCG. 10,11 CG beta (FE 999302) is a novel rhCG that has been produced by a human cell line (PER.C6). The amino acid sequences of the α- and β-chains of CG beta are identical to those of endogenous hCG and CG alfa. Glycosylation of both natural and recombinant hCG is highly complex and may contain a wide range of structures. 12 The glycosylation of rhCG reflects the range of glycosyl-transferases present in the host cell line and is known to differ between rhCG products produced by different cell lines. 13 PER.C6 and CHO cell lines are both used for production of recombinant follicle-stimulating hormone (rFSH), with follitropin alfa expressed by a CHO cell line and follitropin delta expressed by the PER.C6 cell line. Investigations show that the preparations of rFSH from the PER.C6 human cell line and a CHO cell line display important differences in PK and pharmacodynamic (PD) properties. 14 These differences include consistently higher exposure, longer time to peak serum concentration (C max ), and longer terminal half-life (t ½ ) of follitropin delta after a single administration, and longer t ½ at steady-state after repeated administrations, compared with follitropin alfa. A significantly lower clearance of follitropin delta compared with that of follitropin alfa was also observed. Based on these differences, which can be attributed to the glycosylation profile, it may be anticipated that the PK and PD properties of rhCG expressed by a human cell line and by a CHO cell line will also be dissimilar.
To examine the safety, PK, and PD of CG beta, the first-in-human trial comprised three parts conducted sequentially and included healthy women using oral contraceptives and healthy men downregulated with a GnRH agonist. The single and multiple dose PK of CG beta were first evaluated in healthy women and then the single dose PK and PD were compared to those of CG alfa in healthy men.
WHAT QUESTION DID THIS STUDY ADDRESS?
A new recombinant hCG (rhCG; choriogonadotropin [CG] beta) produced by a human-derived cell line (PER.C6) is currently in clinical development. The amino acid sequences of the α- and β-chains are identical to the natural sequences and also to those of rhCG expressed by a Chinese Hamster Ovary (CHO) cell line (CG alfa), but the glycosylation provided by the PER.C6 and CHO cells is different. In this trial, the pharmacokinetics (PK) of choriogonadotropin beta were assessed in women and men, and the PKs and pharmacodynamics (PDs) were compared in men to those of CG alfa.
WHAT DOES THIS STUDY ADD TO OUR KNOWLEDGE?
It is concluded that the PK of the two rhCG preparations are different, due to a slower clearance of CG beta resulting in a higher PD response.
HOW MIGHT THIS CHANGE CLINICAL PHARMACOLOGY OR TRANSLATIONAL SCIENCE?
Further development of CG beta may require lower doses of this potent hCG compared to current therapeutic hCG preparations.
The goal of this research was to establish the PK and PD of CG beta in women and men over a broad dose-range in order to allow further development of CG beta for any potential therapeutic indication.
Participants
This first-in-human trial of CG beta included 84 women and men. Eligible participants were women 18-40 years of age or men 18-50 years of age with a body mass index (BMI) of 18-29 kg/m 2 . All participants were healthy according to medical history, physical examination (including gynecological examination in women), a 12-lead electrocardiogram (ECG), and clinical laboratory profiles of blood and urine. Written informed consent was obtained from all subjects prior to inclusion in the trial, which was conducted in accordance with the Declaration of Helsinki and International Council for Harmonization-Good Clinical Practice. The trial was approved by the Ethical Committee of the Bavarian Chamber of Physicians, Germany.
Study design
This trial was composed of 3 parts, including only women in parts 1 and 2 and men in part 3. All women were required to have used combined oral contraceptive (ethinylestradiol content ≥0.015 mg) or combined contraceptive vaginal ring for at least three cycles prior to trial inclusion. All women were switched to Yasmin (Bayer) contraceptive tablets 14 days prior to CG beta administration and this contraceptive was taken daily throughout the study period. Men were downregulated with a depot GnRH agonist (triptorelin, Decapeptyl, Ferring Pharmaceuticals) to suppress endogenous hormone production.
Part 1
The first part of the trial had a double-blind, placebo-controlled and randomized single ascending dose design and included 35 women. Divided into 5 cohorts (5 active treatment and 2 placebo in each cohort), 25 women were dosed with a single dose of CG beta and 10 women were dosed with placebo. The dose levels of CG beta were 4, 16, 64, 128, and 256 μg. All doses were administered as single subcutaneous injections in the abdomen. Blood samples for measurement of serum hCG concentrations were obtained immediately before administration of CG beta or placebo and at 2, 5, 8, 10, 12, 14, 16, 24, 36, 48, 72, 96, 120, 144, 168, 216, and 264 h after administration.
Part 2
The second part of the trial had a double-blind, placebo-controlled, and randomized multiple ascending dose design and included 16 women. Divided into 2 cohorts (6 active treatment and 2 placebo in each cohort), 12 women were dosed daily with CG beta and 4 women were dosed with placebo. The daily CG beta dose levels were 8 and 16 μg administered as single subcutaneous injections in the abdomen for 10 consecutive days. Blood samples for measurement of serum hCG concentrations were obtained immediately before administration of each CG beta or placebo dose, and then at 2, 5, 8, 10, 12, 14, 16, 24, 36, 48, 72, 96, 120, 144, 168, 216, and 264 h after the last dose.
Part 3
The last part of the trial had an open randomized 2-way crossover design comparing the PK and the testosterone release after administration of CG beta and CG alfa (Ovitrelle; Merck Serono) in 33 downregulated men. All men received 3 doses of 3.75 mg Decapeptyl in order to downregulate the pituitary-gonadal axis and suppress testosterone to less than or equal to 1 ng/ml and LH to less than or equal to 2.5 IU/ml at the time of rhCG administration. Two doses of Decapeptyl were administered prior to the first drug administration on days −28 and −10 and a third dose was given on day 12 after the first dose of rhCG. A single dose of 125 µg rhCG of each preparation was administered s.c. in a crossover design with the 2 treatment periods ~ 3 weeks apart. Blood samples for measurement of serum hCG concentrations were obtained immediately before drug administration, and then at 4, 8, 12, 14, 16, 24,
Safety and tolerability
Safety and tolerability were assessed by monitoring of adverse events (AEs), injection site reactions, clinical laboratory assessments (clinical chemistry, hematology, and urinalysis), physical examination, vital signs (blood pressure, pulse, and body temperature), 12-lead ECG and, in women, transvaginal ultrasonography. The summarized AEs are those reported during the treatment phase (i.e., from administration of rhCG until the last assessment 11 days after the last dose).
Bioanalytical methods
Serum rhCG levels were measured using a sandwich immunoassay, comprising a monoclonal mouse anti-hCG beta 2 as capture antibody and a ruthenium-labeled monoclonal mouse anti-hCG Holo C3 as detection antibody with electrochemiluminescence (ECL) detection (Meso Scale Discovery system). The analytical standard used was CG beta for quantification of CG beta, and CG alfa for quantification of CG alfa, and the analytical range was 0.100-12.0 ng/ml serum (up to 240 ng/ml with extended dilution). Interassay precision was less than or equal to 9%, and intra-assay precision was less than or equal to 6%, in the main method validation, as calculated using analysis of variance (ANOVA). In order to ensure equivalent exposure CG alfa was quantitated by amino acid analysis independent of the label values.
Analysis of serum concentrations of testosterone was performed by means of a validated liquid chromatography tandem mass spectrometry method.
The analyses for antibodies against rhCG were performed using a validated bioanalytical method. The method validations were designed to follow the principles stated in Shankar et al. and the regulatory guideline. 15,16 The method was a semihomogenous bridging assay using ECL as the detection method. The trial samples were analyzed using a tiered approach. All samples (study samples and controls) were analyzed as duplicates and the mean signal was used for determination of results.
Statistical analyses
The PK, PD, and safety were summarized using descriptive statistics.
Women receiving a single dose of 4 µg CG beta had serum hCG concentrations below the limit of quantification in 4 of the 5 women. Therefore, it was not possible to calculate any meaningful PK variables for this dose group. In contrast, all serum hCG concentrations from other women and all men were included in the PK calculations.
The PK and PD parameters were calculated by noncompartmental analysis using the software Phoenix WinNonlin (Pharsight Corporation). The PD parameters were calculated for baseline-corrected data, assuming a constant background testosterone concentration after downregulation. The relation between body weight (BW) and exposure was investigated for area under the curve (AUC) and C max by fitting the function k/BW^c to data using linear regression after log-transformation. Dose-adjusted exposure data from all subjects in all three parts were used for this investigation. If the exponent c is different from 0, then the exposure is related to BW. If c = 1, the exposure is inversely proportional to BW. Analysis of dose proportionality for AUC and C max was based on the single dose groups 16 to 256 µg CG beta. The slope (beta) was estimated from the model log(parameter) = ln(alpha) + beta * ln(dose). A slope of 1 corresponds to dose proportionality.
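Both regressions above reduce to ordinary least squares on the log scale. The following is a minimal illustrative sketch in Python (not the trial's actual Phoenix WinNonlin or SAS code, and using hypothetical exposure values) of how the body-weight exponent c and the dose-proportionality slope beta can be estimated:

import numpy as np
from scipy import stats

# Body-weight model: exposure = k / BW**c, so log(exposure) = log(k) - c*log(BW)
bw = np.array([55.0, 62.0, 70.0, 81.0, 90.0])                   # hypothetical body weights (kg)
auc_dose_norm = np.array([210.0, 190.0, 160.0, 135.0, 120.0])   # hypothetical dose-normalized AUC values
slope, intercept, r, p, se = stats.linregress(np.log(bw), np.log(auc_dose_norm))
c_exponent = -slope      # c = 1 would mean exposure inversely proportional to body weight
k = np.exp(intercept)
print(f"BW exponent c = {c_exponent:.2f}, k = {k:.0f}, p = {p:.4f}")

# Dose proportionality: log(parameter) = ln(alpha) + beta * ln(dose); beta = 1 means dose proportional
dose = np.array([16.0, 64.0, 128.0, 256.0])   # single-dose groups (µg)
cmax = np.array([0.3, 1.6, 3.5, 7.7])         # hypothetical geometric mean Cmax values (ng/ml)
beta, ln_alpha, r2, p2, se2 = stats.linregress(np.log(dose), np.log(cmax))
print(f"Dose-proportionality slope beta = {beta:.2f}")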
Comparison of PK and PD parameters between CG beta and CG alfa in part three of the trial was performed using ANOVA on log-transformed parameters, including factors for drug, period, and subject. Estimated ratios and 90% confidence intervals (CIs) were derived from the model by back-transformation of log-transformed differences. The statistical analyses were performed using the software SAS.
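The back-transformation step can be illustrated with a simplified paired analysis in Python (the actual trial analysis in SAS additionally adjusted for period and subject effects; the numbers below are hypothetical). On the log scale, the within-subject treatment difference is estimated together with its 90% CI and then exponentiated to give a geometric mean ratio:

import numpy as np
from scipy import stats

auc_cg_beta = np.array([310.0, 290.0, 420.0, 350.0, 380.0, 330.0])   # hypothetical AUCs, CG beta
auc_cg_alfa = np.array([205.0, 200.0, 260.0, 240.0, 255.0, 210.0])   # hypothetical AUCs, CG alfa (same subjects)

diff_log = np.log(auc_cg_beta) - np.log(auc_cg_alfa)   # within-subject differences on the log scale
mean_diff = diff_log.mean()
sem = diff_log.std(ddof=1) / np.sqrt(len(diff_log))
t_crit = stats.t.ppf(0.95, df=len(diff_log) - 1)        # two-sided 90% confidence interval

gmr = np.exp(mean_diff)                                 # back-transformed geometric mean ratio
ci_low = np.exp(mean_diff - t_crit * sem)
ci_high = np.exp(mean_diff + t_crit * sem)
print(f"GMR = {gmr:.2f}, 90% CI = ({ci_low:.2f}, {ci_high:.2f})")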
RESULTS
Thirty-five healthy women aged between 18 and 40 years were randomized and dosed for the single dose PK investigation (part 1 of the trial), 7 in each treatment group. The mean BW was 65.9 kg with a range from 50.7 to 90.6 kg, and mean BMI was 23.9 kg/m 2 with a range from 19.1 to 28.9 kg/m 2 . Overall, the treatment groups were similar with respect to demographic parameters.
For the repeated dose PK investigation (part 2 of the trial), 16 women between 19 and 40 years of age were randomized and dosed, 8 in each treatment group. The mean BW was 64.7 kg with a range from 55.0 to 81.4 kg, and mean BMI was 23.4 kg/m 2 with a range from 19.7 to 27.2 kg/m 2 . Overall, the treatment groups were similar with respect to demographic and baseline characteristics.
Thirty-three healthy men between 18 and 50 years of age were included in the single dose PK and PD investigations (part 3 of the trial). The mean body weight was 82.6 kg with a range from 59.3 to 96.0 kg, and mean BMI was 25.3 kg/m 2 with a range from 19.9 to 29.0 kg/m 2 .
Part 1: Single dose PK in women
The mean serum concentrations of CG beta after single dosing are shown in Figure 1.
After administration of CG beta, serum concentrations increased until reaching the maximal concentration at 24 h (median) with a range of 2 to 48 h. The geometric mean C max ranged from 0.3 to 7.7 ng/ml after single doses of 16, 64, 128, and 256 µg CG beta. Subsequently, the concentrations declined with a geometric mean t ½ across the 4 evaluable doses of 45 h (percent coefficient of variation [CV%]: 18%). The concentrations were approximately back to baseline level 11 days after the administration of CG beta. The geometric mean values of apparent total clearance and apparent volume of distribution were estimated to 0.48 L/h (CV%: 30%) and 31 L (CV%: 31%), respectively, across the evaluable doses.
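As a rough consistency check (using the standard noncompartmental relationship t½ = ln(2) × (Vz/F)/(CL/F), which is not spelled out in the report itself), 0.693 × 31 L / 0.48 L/h ≈ 45 h, in line with the reported terminal half-life.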
Part 2: Multiple dose PK in women
The mean serum concentrations of CG beta after multiple dosing are shown in Figure 1.
Following the daily administration of CG beta over 10 days, the trough concentration increased and reached steady-state after 6-7 days in the 8 µg group and after 7-8 days in the 16 µg group. The median time for reaching maximal serum CG beta concentrations after the last CG beta dose was 10 h (range 5-16 h) after multiple dosing. The geometric mean C max values were 0.69 ng/ml (CV%: 32%) in the 8 µg dose group and 1.9 ng/ml (CV%: 21%) in the 16 µg dose group. Two subjects in the 8 µg group showed substantially lower exposure compared to the rest of the subjects in this dose cohort. The geometric mean t ½ across the 2 doses was 42 h (CV%: 15). The geometric mean values for the apparent total clearance and apparent volume of distribution of CG beta were 0.45 L/h (CV%: 40%) and 27 L (CV%: 50%), respectively.
Part 3: Single dose PK in men
The mean serum concentrations of CG beta and CG alfa after single dosing are shown in Figure 1.
The average time taken for the mean hCG concentration to reach C max after a single injection of 125 µg CG beta compared to a single injection of 125 µg CG alfa was around 24 h for both compounds. The geometric mean serum C max were also comparable, being 2.59 ng/ml (CV%: 40%) after CG beta administration and 2.59 ng/ml (CV%: 73%) after CG alfa administration. However, in spite of similar C max values, exposure as determined by AUC t was substantially different, with the geometric mean AUC t for CG beta being 50% (90% CI = 1.36-1.65) greater compared to that for CG alfa. This difference was also reflected in the geometric mean half-life, which was 47 h after a single injection of 125 µg CG beta and 32 h after a single injection of 125 µg CG alfa, and the geometric mean apparent total clearance, which was 0.50 L/h (CV%: 31%) after CG beta administration and 0.75 L/h (CV%: 42%) after CG alfa administration. The geometric mean apparent distribution volumes (V z /F) were 34 L (CV%: 37%) and 35 L (CV%: 46%), respectively, after CG beta and CG alfa administration.

Figure 1: Time course of mean serum concentrations after single and multiple s.c. administrations of CG beta and CG alfa to women and men in parts 1 to 3 of the trial. Individual serum concentrations are shown with dots and the arithmetic mean with solid lines. Standard deviation is shown with shaded areas. The upper plot to the left shows the serum concentrations after single administration of CG beta to women in part 1 of the trial. The upper plot to the right shows the serum concentrations after multiple administration of CG beta to women for 10 days in part 2 of the trial. The lower plot to the left shows the serum concentrations after single administration of CG beta and CG alfa to men in part 3 of the trial. The lower plot to the right shows the serum concentrations on a logarithmic scale after single administration of CG beta and CG alfa to men in part 3 of the trial. CG, chorionic gonadotropin; hCG, human chorionic gonadotropin.
Comparison of PK results after a single dose to women and men
Mean serum CG beta concentrations after single s.c. injection of 128 µg CG beta to women and 125 µg CG beta to men are shown in Figure 2.
After single dose administration of 128 µg CG beta to 5 women and 125 µg CG beta to 33 men the PK profiles and PK parameters for CG beta were comparable.
Relationship between body weight and CG beta exposure
The association between BW and exposure in women and men is shown in Figure 2. Regardless of gender, both AUC and C max decreased with increasing body weight. The power exponent for BW was 0.85 (95% CI = 0.36-1.35, p = 0.0009) for AUC and 1.12 (95% CI = 0.61-1.63, p < 0.0001) for C max , indicating that both AUC and C max declined approximately proportionally to the inverse of the BW.
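To put the fitted exponents in concrete terms (an illustrative calculation, not reported in the trial), with c ≈ 0.85 for AUC a 90 kg subject would be expected to show roughly (90/60)^0.85 ≈ 1.4-fold lower dose-normalized AUC than a 60 kg subject, and with c ≈ 1.12 for C max the corresponding difference would be about 1.6-fold.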
Part 3: Single dose PD in men
The mean baseline corrected serum testosterone concentrations after single dosing of CG beta and CG alfa are shown in Figure 3.
The median time for reaching baseline corrected maximal testosterone concentration was 96 h (range 48-168 h) after a single s.c. injection of 125 µg CG beta and 72 h (range 48-120 h) after a single s.c. injection of 125 µg CG alfa, with geometric mean testosterone plasma C max concentrations of 7.1 ng/ml (CV%: 30%) and 6.7 ng/ml (CV%: 32%), respectively. In accordance with the concentration profiles and exposure of CG beta and CG alfa, the testosterone AUC t was 1.6-fold (90% CI = 1.50-1.68) greater after administration with CG beta than with CG alfa.

Figure 2: Exposure of CG beta in women and men. Upper plot: Time course of mean serum concentrations after single s.c. administration of 128 µg CG beta to women and 125 µg CG beta to men in part 1 and part 3 of the trial. Individual serum concentrations are shown with dots and the arithmetic mean with solid lines. Standard deviation is shown with shaded areas. Lower plots: Body weight influence on exposure by means of AUC and C max. AUC and C max values are dose normalized to 125 µg CG beta. Solid line represents fitted regression curve. AUC = 10335/BW^0.85 (p = 0.0009) and C max = 349/BW^1.12 (p < 0.0001). AUC, area under the curve; BW, body weight; CG, chorionic gonadotropin; C max, peak plasma concentration; hCG, human chorionic gonadotropin.
A summary of the estimated PK parameters of CG beta in women and the estimated PK and PD parameters of CG beta and CG alfa in men is shown in Table 1 and Table 2, respectively.
Safety
CG beta was well-tolerated in both women and men after single or multiple s.c. injections. No severe or serious AEs occurred, no AE led to discontinuation of the trial, and none of the subjects developed antibodies against CG beta.
In Part 1, 21 AEs were reported by 12 women (48%) on active treatment and 7 AEs were reported by 5 women (50%) on placebo. There were no apparent dose-related trends in AE frequency. In part 2, there were 12 AEs in 6 women (100%) in the 8 µg group, 35 AEs in 5 women (83%) in the 16 µg group, and 12 AEs in 4 women (100%) in the placebo group. The most frequently reported AEs in women on active treatment were nausea, headache, and uterine spotting. In part 3, the frequency of AEs in downregulated men was comparable in the 2 treatments (i.e., 22 AEs occurred in 15 men [45%] after CG beta treatment, and 28 AEs occurred in 18 men [55%] after CG alfa treatment), without any apparent difference between the 2 groups. The most frequently reported AEs in men were hot flush and headache. An overview of AEs reasonably possibly related to treatment is provided in the Supplementary Tables S1-S3.
In part 3, two downregulated men experienced transient increases in alanine aminotransferase and aspartate aminotransferase in the second treatment period; in one subject, the liver enzyme increases occurred after administration of CG alfa, in the other subject they occurred prior to and after administration of CG beta. There were no other clinically significant findings or apparent dose-related trends in physical examination, vital signs, ECG, transvaginal ultrasounds, or safety laboratory data after either single or repeated administrations in women and men.
DISCUSSION
The three-part design of this first-in-human trial of CG beta provides information on the safety of CG beta in healthy subjects, single and multiple dose PK in women, comparative single dose PK in men, and allows a comparison of the PK of CG beta between genders. Single ascending doses up to 256 µg were safe and well-tolerated in women and the increases in serum CG beta levels were approximately proportional with dose. Serum hCG concentrations were too low to calculate meaningful PK parameters after a single dose of 4 µg CG beta, but in the other dose groups from 16 to 256 µg the AUC and C max of CG beta increased in an approximately dose proportional manner. The PK parameters t ½ , apparent clearance (CL/F), and V z /F were all similar across the dose range. The half-life of CG beta was longer (45 vs. 29 h), the CL/F (0.5 vs. 0.7 L/h) was lower and the V z /F (31 vs. 29 L) was comparable, when compared to available literature data of CG alfa. 10,11 The difference in elimination rate between CG beta and CG alfa may be explained by the higher degree of sialylation of the CG beta molecule including mono-, di-, tri-, as well as tetra-sialylation structures. 17 Following multiple daily dosing of CG beta in women, serum hCG levels accumulated and reached approximate steady-state levels after 6-8 days. The estimates of t ½ , CL/F, and V z /F for CG beta after daily repeated administration were similar to the estimates obtained after a single dose of CG beta (42 vs. 45 h, 0.45 vs. 0.48 L/h, and 27 vs. 31 L, respectively). The median time of maximum plasma concentration (T max ) was naturally shorter after multiple administration (8-11 h) compared to single dose administration (16-24 h), as concentrations remaining from previous doses were declining exponentially. Thus, the shift in T max was mainly caused by the slow elimination of CG beta in combination with the relatively short dosing interval of 24 h.
In part 3 of the trial, the rhCG dose administered to downregulated men was 125 µg for both preparations. This choice of dose was based on previous experience with urinary hCG and published data for CG alfa. 10,18 A dose of 125 µg rhCG is high enough to give reliable comparative PK data and also induces sufficient testosterone production for comparative analysis (125 µg of CG alfa is approximately equivalent to 2500 IU as determined in the rat bioassay). 18 Administration of 125 µg CG beta and CG alfa to men resulted in considerably higher exposure (1.5-fold) to CG beta compared with CG alfa. In line with this, the estimated apparent clearance was lower and the half-life longer for CG beta when compared to CG alfa but the apparent volume of distribution after administration was similar between compounds. Despite the difference in exposure, the C max for serum hCG concentration increased to similar levels indicating that the absorption rate is very similar, whereas the elimination is slower for CG beta as supported by the lower CL/F and in accord with the higher exposure. The PK data of CG alfa in part 3 are in good agreement with those previously published for CG alfa. 8,14 After single dose administration of 125 µg rhCG to men, the production of testosterone was higher (1.6-fold) following CG beta injection than after CG alfa injection. Maximum serum testosterone production was reached at 3 days after injection for both compounds but thereafter serum testosterone declined at a slower rate in the CG beta group than in the CG alfa group. Because the half-life of endogenous testosterone is relatively short, the slower testosterone decline reflects the longer half-life of CG beta. [19][20][21] Thus, the higher exposure and lower apparent clearance of CG beta when compared to CG alfa, resulted in sustained higher testosterone levels after CG beta administration.
The association between BW and exposure in both women and men indicated that regardless of gender, both AUC and C max decreased with increasing body weight. Other studies have shown similar associations for urinary and other recombinant hCG preparations. [22][23][24] Comparing the PK properties of CG beta after a single s.c. administration of 125 µg in men and 128 µg in women revealed very similar PK profiles without any apparent gender-specific characteristics. The slightly higher exposure in women is ascribed to their lower BW rather than to the marginally higher dose. The PK parameters were similar regardless of gender. The differences between rhCG expressed in a human cell line and in a CHO cell line were assessed in men only. However, because the PK differences of gonadotropins between men and women are known to be limited, it is to be expected that the differences between CG beta and CG alfa observed in men can also be expected in women. 25 The safety profile of CG beta in this trial was reassuring, with rather few AEs, all of which were of mild or moderate intensity. The most frequently reported AE in women was headache, and in men was hot flush, the latter most likely related to their testosterone-deficient status. Overall, the drug was well-tolerated and its potential immunogenicity seems low and in line with that reported for rFSH produced by the same cell line. 26 In conclusion, CG beta has been shown to be safe and well-tolerated both in women and men. The PK-PD profile of CG beta is different from CG alfa, and because the amino acid sequences are identical, it can be inferred that the glycosylation differences are responsible for the lower clearance of CG beta in comparison to CG alfa. Due to PK and PD differences, the potential therapeutic dose of CG beta is likely to be lower, in both women and men, than that for other hCG preparations.
|
2021-05-14T06:16:53.708Z
|
2021-05-13T00:00:00.000
|
{
"year": 2021,
"sha1": "8d871e26007aa6da640078a1f93b7a5273e9b5e3",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/cts.13037",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "37c6def06e71c1fc90759a2409c892d6abb8f806",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
17827645
|
pes2o/s2orc
|
v3-fos-license
|
Dorsal Compressive Atlantoaxial Bands and the Craniocervical Junction Syndrome: Association with Clinical Signs and Syringomyelia in Mature Cavalier King Charles Spaniels
Background Dorsal compressive lesions at the atlantoaxial junction (ie, AA bands) occur in dogs with Chiari‐like malformations (CMs), but their clinical relevance is unclear. Objective Investigate the influence of AA bands on clinical status and syringomyelia (SM) in mature cavalier King Charles spaniels (CKCS). Animals Thirty‐six CKCS, 5–12 years of age, including 20 dogs with neuropathic pain. Methods Dogs were examined and assigned a neurologic grade. Magnetic resonance imaging (MRI) of the craniocervical junction was performed with the craniocervical junction extended and flexed (ie, normal standing position). Imaging studies were assessed for the presence of an AA band, CM, SM or some combination of these findings. Band and SM severity were quantified using an objective compression index and ordinal grading scale, respectively. Results Of 36 CKCS imaged, 34 had CM. Atlantoaxial bands were present in 31 dogs and were more prominent in extended than flexed positions. Syringomyelia was found in 26 dogs, 23 of which also had AA bands. Bands were associated with both the presence (P = .0031) and severity (P = .008) of clinical signs and SM (P = .0147, P = .0311, respectively). Higher compression indices were associated with more severe SM (P = .0137). Conclusions Prevalence of AA bands in older CKCS is high. Positioning of dogs in extension during MRI enhances the sensitivity of the study for detecting this important abnormality. There were significant associations among AA bands, clinical signs, and SM in dogs with CM; additional work is needed to understand whether or not this relationship is causal.
Dorsal compressive lesions have been described at the atlantoaxial (AA) and atlantooccipital junctions in cavalier King Charles spaniels (CKCS), causing varying degrees of attenuation of the subarachnoid space (SAS) and the spinal cord. [1][2][3][4][5] These lesions are visible surgically as focal areas of whitened and thickened soft tissue dorsal to the AA and atlantooccipital junctions. 1,[6][7][8][9] Histopathologically, atlantooccipital dorsal constrictive lesions are composed of areas of lymphoplasmacytic inflammation and fibrosis, with areas of mineralization, osseous metaplasia, or both. 2,7 The presence of inflammatory cells suggests that these are not static lesions, but rather sites of ongoing inflammation. Atlantoaxial dorsal constrictive lesions are not as well described histopathologically as their more cranial counterparts, although they appear to involve the dorsal interarcuate ligament when visualized and resected at surgery. 1,4,5 These lesions have been described using various terms, including AA dorsal compressive or constrictive lesions, and dural fibrous bands. 1,[3][4][5][6] In this report, they will be referred to as AA bands.
Atlantoaxial bands can be diagnosed preoperatively as areas of focal SAS compression on T2-weighted magnetic resonance imaging (MRI) images; dilatation of the SAS also may be seen immediately caudal or cranial to the band site or in both locations on imaging and at surgery (Fig 1). 1,3,8 These AA bands are present in 38% of small and toy breed dogs. 3 Of these, CKCS represent 1 of the most commonly affected breeds, with a prevalence of craniocervical junction anomalies of 42% in a group of symptomatic and asymptomatic CKCS. 1 They occur most commonly in conjunction with Chiari-like malformations (CMs), although they may occur alone or with other craniocervical junction anomalies, such as dorsal angulation of the dens. 1,3,6,10 In humans, dorsal AA compressive bands have been found to play an important role in the development of clinical signs and syringomyelia (SM), particularly in persons also diagnosed with Chiari malformations. 7,11 In veterinary medicine, AA bands are suspected of causing neuropathic pain, similar to other craniocervical junction anomalies, including head, craniocervical junc-tion and cervical hyperesthesia, generalized dysesthesia and allodynia. Clinical signs related to cervical myelopathy are also described. 3,4,6,8 In addition, improvement in clinical signs can result from surgical band excision in dogs. 4,5 However, the relationship between these bands and both clinical status and SM is not fully understood. In a group of 64 CKCS screened for craniocervical junction disorders, no relationship was found between degree of compression caused by AA bands and the presence or severity of clinical signs or SM. 1 A separate screening study described an objective method of assessing degree of dorsal compression caused by AA bands (ie, dorsal compression index), but did not evaluate the clinical relevance of this measurement. 3 Our study aims to expand upon the current understanding of the problem by evaluating the relationships among AA bands, clinical signs, CM and SM in CKCS > 5 years of age.
Inclusion Criteria
A group of 36 dogs was prospectively recruited from various sources, including a group of CKCS evaluated previously in a separate study, 1 CKCS clubs, online breed-associated groups, and CKCS presented to the Cornell University and North Carolina State University veterinary teaching hospitals as patients. Inclusion criteria were as follows: >5 years of age, normal CBC and serum biochemistry panel results (within 7 days of imaging) and absence of physical examination findings contraindicating anesthesia, such as heart murmur grade >4 of 6, or evidence of clinically apparent cardiac disease (eg, coughing, tachypnea).
Clinical and Magnetic Resonance Imaging Assessment
Dogs were assessed for pain, dysesthesia, and neurologic dysfunction by neurologic examination performed by 1 of the investigators (SCG or NJO) in addition to owner questionnaires assessing clinical signs seen at home. In the latter, owners were asked if their dogs had a history of scratching; rubbing their head, neck or shoulders on objects; episodes of crying out after play; decreased interaction with littermates or housemates; or, evidence of neck or head pain at home (eg, limited movement of the neck, blepharospasm, head-shy behavior). The area scratched, the frequency of scratching, factors precipitating its occurrence (eg, excitement, play, changes in environmental temperature or barometric pressure, neck leads, contact with the skin, or hair on the neck), and response to medications, surgery, or both also were evaluated, where applicable. Lastly, owners were asked if they had noted any changes in their dog's gait or ability to climb stairs. Information acquired from the questionnaires was then confirmed and supplemented at an in-person interview at the time of imaging.
This information was used to assign a neurologic grade between 0 and 5. 1 Dogs were anesthetized with fentanyl (premedication), propofol (induction), and either sevoflurane or isoflurane (maintenance). They were positioned in sternal recumbency for MRI, first using padding to achieve a craniocervical junction posture approximating a normal standing position 1 and then with their craniocervical junction extended and their neck flat on the table, in a more typical posture used for MRI scanning of the cervical spine and brain. Head angles in flexion and extension were measured using a previously described method. 12 Acquired MRI sequences included T1- and T2-weighted sagittal images and T2-weighted transverse images of the craniocervical junction. These were uploaded into OsiriX Medical Imaging Software (open source software, www.osirix-viewer.com) and evaluated (by SCG) for the following: presence of a CM (ie, cerebellar indentation and herniation through the foramen magnum and loss of cerebrospinal fluid at the craniocervical junction); presence of SM; and presence of dorsal compression of the SAS, spinal cord, or both at the level of the first and second cervical vertebrae (ie, an AA band). In dogs with SM, an AA band, or both, a severity grade was assigned (Table 1). This grade was assigned separately from images of the cervical spine in both the flexed and extended positions. A compression index also was generated to provide an objective assessment of severity of compression secondary to AA band formation. This index was determined using T2-weighted sagittal images with the craniocervical junction in an extended position, using a method described previously. 3 Specifically, the distance was measured between the dorsal-most aspect of the dorsal SAS at the AA junction and the ventral-most point of band-related compression. This distance was then divided by the height of the nearest normal spinal cord. Extended images were used to assess the full extent of compression that might occur in a physiologic range of motion; positioning in flexion decreased the apparent degree of compression in most cases (Fig 2).
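Expressed as a formula (symbols introduced here only for clarity; this restates the verbal definition above):

$$\text{compression index} = \frac{d_{\mathrm{band}}}{h_{\mathrm{cord}}} \times 100\%,$$

where $d_{\mathrm{band}}$ is the distance from the dorsal-most aspect of the dorsal SAS at the AA junction to the ventral-most point of band-related compression, and $h_{\mathrm{cord}}$ is the height of the nearest normal spinal cord.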
Statistical Analysis
Data were analyzed using SAS software. b Data analyzed included the presence of a CM (Y or N); clinical signs (Y or N); severity of neurologic signs graded from 0 to 5; presence of SM (Y or N); severity of SM graded from 0 to 3; presence of an AA band (Y or N); severity of AA band compression graded from 0 to 3; and compression index. Contingency tables were constructed to investigate the relationship between the presence of an AA band and the presence of neurologic signs and of SM. Ordinal factors, such as neurologic grade, SM grade, and compression grade, also were examined by construction of contingency tables. Significance of relationships between pairs of ordinal variables was established using chi-square tests. Spearman correlation coefficients were used to investigate associations between disease severity and continuous measurements (eg, compression index). Wilcoxon nonparametric tests were used to compare continuous measurements across 2 groups (ie, presence of SM), and Kruskal-Wallis tests were used for comparisons across more than 2 groups (eg, severity of neurologic signs). A logistic regression model was developed to examine the relationship among the presence of a CM, AA bands and the presence of clinical signs or SM. To control for the increased chance of false positives caused by multiple comparisons, the alpha level was decreased from 0.10 to 0.035.
Patient Characteristics
Thirty-six CKCS ranging in age from 5 to 12 years (mean, 8.8; median, 9) were evaluated; 15/36 (41.7%) were male. Twenty of the dogs (56%) had varying degrees of scratching, neck pain, and dysesthesia ( Table 2). Six dogs had neck pain alone but no other sensory or neurologic signs. Twelve dogs (33%) were being treated for their clinical signs at the time of imaging. These were treated with pain medications (gabapentin, pregabalin, or tramadol), omeprazole, prednisone, or some combination of these drugs. All but 1of these were being treated for neuropathic pain; the remaining dog was being treated for osteoarthritis. A neurologic grade of 5 (ie, ataxia and tetraparesis) was recorded in 1 dog, in which severe postoperative scar tissue and displacement of an implant appeared to cause compression of the SAS and spinal cord at both the atlantooccipital and AA junctions. This dog was excluded from further analysis because of the secondary (ie, postoperative) nature of its compression. Two additional dogs had a history of foramen magnum decompression surgery. In these, although scar tissue was not outwardly apparent, its presence could not be excluded. Consequently, to eliminate the risk of these dogs introducing bias to the study, they were excluded from statistical analyses investigating the relationship among AA bands, clinical signs, and SM.
Imaging Findings
Mean head angles were 142 and 196 degrees in flexion and extension, respectively, and positioning differed by an average of 57 degrees between flexion and extension. On MRI analysis, 33 dogs had a CM present (94%). Thirty-one dogs (88.6%) showed AA band compression on T2-weighted sagittal and transverse images, with severity ranging from minimal deformation of the dorsal SAS at the level of the AA junction to complete elimination of the SAS coupled with dorsolateral compression of the underlying spinal cord (Fig 3, Table 3). Twenty-nine of these had a concurrent CM; of the remaining 3 dogs, 2 had signs of neuropathic pain.
When comparing each individual dog's extended and flexed sequences, dorsal compression of the SAS and spinal cord at the AA junction was less prominent in flexed positions than in extension (Fig 2). In extension, the degree of compression ranged from a dorsal compression index of 0-46.7%, with the majority ranging between 20 and 30% (mean, 20.6; median, 20).
Relationship Between Atlantoaxial Bands and Syringomyelia
Syringomyelia was present in 26 (74.3%) dogs overall, 23 of which also had AA bands. The cranial-most extent of SM was located either at the level of the first cervical vertebra (C1; 11 dogs, 42.3%), C2 (10 dogs, 38.5%), or C3 (5 dogs, 19.2%). All grades of SM severity were seen: grade 1, 28%; grade 2, 47%; and grade 3, 14%. In evaluating the relationship between the presence of AA bands and SM, AA bands were significantly associated with the presence (P = .029) and severity (ie, grade; P = .046) of SM. Objective measurements of compression severity (ie, compression index) were associated with both SM presence (P = .039) and severity (ie, grade; P = .0458; Fig 3). In a logistic regression model evaluating the relationship between the presence of a CM and AA bands as independent variables and the presence of SM as the dependent variable, the presence of AA bands was significantly predictive of the presence of SM (P = .007). In contrast, the presence of a CM was not significant as a predictor of the presence of SM (P = .055), although the number of dogs included in the study may not have been sufficient to identify this relationship.
Relationship Between Atlantoaxial Bands and Clinical Signs
In evaluating clinical signs, the presence of an AA band was found to be associated with their presence (P = .0024) but not their severity (P = .132). In evaluating the relationship between compression indexes and clinical signs, a higher index was associated with the presence (P = .028), but not the severity (P = .095) of clinical signs. In a logistic regression model evaluating the relationship between the presence of CM and AA bands as independent variables, and the presence of clinical signs as the dependent variable, only the presence of AA bands was found to be significantly predictive of the presence of clinical signs (P = .008).
Discussion
Atlantoaxial bands were found in a larger proportion of CKCS in this study (83.8%) than has been reported previously in this breed (42%) 1 or in small and toy breeds in general (38%). 3 This finding may be explained by the maturity of dogs in this study, which specifically examined dogs >5 years of age. Although previous study populations were predominantly young, with median ages of 1.8 3 and 2.5 1 years, the median age of dogs in our study was 9 years. The higher prevalence of AA bands in older CKCS populations may reflect progression of the disease over time, as described for SM, 13,14 but longitudinal studies are needed to confirm this hypothesis. Excessive vertebral movement at the AA junction has been proposed as a contributing factor for dorsal compressive band formation in dogs. 3,4 Although this relationship is not yet confirmed, if present, such an effect could lead to cumulative compression over time.
Compression was, in general, more pronounced in extended (ie, straight) positions, compared to each dog's flexed views. In fact, in a small number of cases, AA bands that were noticeable in extension were not apparent in flexed views. Thus, imaging in flexion alone could underestimate the prevalence of AA bands, and may have played a role in their apparently lower prevalence in previous studies. In contrast, other craniocervical junction anomalies, such as cerebellar herniation, become more pronounced in flexed views. 12 For this reason, it may be optimal to image the canine craniocervical junction in both flexion and extension to obtain maximal diagnostic information.
An unconfirmed association has been suspected between AA bands and neuropathic pain signs consistent with cervical myelopathies. 1,7,8,10 In our study, AA bands appeared to be independently associated with the presence of clinical signs, which primarily manifested as neuropathic pain. In addition, despite the high prevalence of CM, AA bands had a stronger relationship with the presence of clinical signs than did CM. When severity of signs was considered, it was not found to be associated with the severity of band compression, which ranged from minimal indentation to complete obliteration of the SAS, with concurrent underlying spinal cord compression. Neuropathic pain associated with this condition may result either from primary neural compression or as a consequence of SM. The latter, present in our study in 92.3% of dogs with AA bands, previously has been identified as an important factor in determining the presence of neurologic signs in dogs with craniocervical junction anomalies. 1,15,16 Atlantoaxial bands cause variable degrees of attenuation of the SAS at the AA junction, as can be seen on MRI studies and at surgery. 1,3,8,10 Compression of the SAS throughout the neuraxis, in turn, has been suspected of playing a role in SM formation by influencing local cerebrospinal fluid flow dynamics, as described in numerous hydrodynamic models. 15,[17][18][19][20] For this reason, AA bands have been suspected of playing a role in SM formation since they were first identified. 1,8,10,15 Our study confirms the existence of an association between AA bands and SM. Specifically, the presence of an AA band was associated with the presence of SM, regardless of the presence of CM. In addition, more severe compression was positively associated with both the presence and severity of SM. The pathophysiology underlying SM associated with AA bands remains to be elucidated, however, and must be considered in light of other craniocervical junction anomalies commonly found in this area, such as CM.

Fig 3: T2-weighted sagittal MR images of the craniocervical junction in cavalier King Charles spaniels demonstrating variability in atlantoaxial band severity. In panel A, focal compression (arrow) of the dorsal subarachnoid space (SAS) is present (grade 1). In panel B, the SAS is eliminated, and ventral displacement of the underlying spinal cord is seen (arrow; grade 3), along with atlanto-occipital overlapping, dilation of the SAS cranial to the dural band (X), and syringomyelia (*).
Several factors have been proposed to play a role in the presence of SM associated with CM, including differences in parenchyma 21,22 and caudal fossa 1,23 sizes, changes in cerebrospinal fluid flow characteristics, 17,24 cerebellar pulsation, 25 and abnormal jugular foramina size leading to venous congestion. 26 Atlantoaxial bands may play a role in the development of SM, and if so, they would be expected to alter local cerebrospinal fluid flow dynamics in a manner similar to that described in the spinal thecal sac constriction model (ie, as observed in spinal ligation studies). In this model, focal iatrogenic constriction of the SAS results in SM formation, both cranially and caudally to the point of obstruction. 19 The fluid pulse pressure theory, generated in response to this model, theorizes that the accumulation of extracellular fluid within the spinal cord parenchyma (ie, SM formation) results from pressure differentials cranial and caudal to the focal point of obstruction. Thus, as a result of systolic pulse pressure waves located within the dorsal SAS caudal to the obstruction, relatively lower pressures are thought to occur within the underlying spinal cord parenchyma. It is hypothesized that these lower pressures encourage movement of extracellular fluid into the caudal spinal cord parenchyma. Conversely, it is thought that during valsalva maneuvers, lower pressures exist within the spinal cord parenchyma cranial to the point of obstruction (compared with the SAS), leading to cranial SM formation. 19 In our study, the cranial-most extent of SM was found overlying the first cervical vertebra in the majority of cases, and extending caudal to the AA band. The presence of SM both cranial and caudal to this focal compression suggests that dynamics similar to those observed in spinal ligations studies could be at play in AA band-related SM, or simply could be a reflection of syrinx progression associated with age. Studies evaluating cerebrospinal fluid flow within this area are needed to better understand the processes influencing the location and extent of syrinx formation.
Conclusions
Our study confirms the high prevalence of AA bands in older CKCS and demonstrates that positioning of dogs in extension during MRI enhances the sensitivity of the study for detecting this important abnormality. There was a significant association among AA bands, clinical signs and SM in dogs with CM but additional work is needed to understand whether this relationship is causal or not.
|
2016-08-09T08:50:54.084Z
|
2015-05-01T00:00:00.000
|
{
"year": 2015,
"sha1": "893ecb876f26cb094cb3b65c6b2fe1623e2af841",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jvim.12604",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "893ecb876f26cb094cb3b65c6b2fe1623e2af841",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
231698964
|
pes2o/s2orc
|
v3-fos-license
|
An Optimal Reduction of TV-Denoising to Adaptive Online Learning
We consider the problem of estimating a function from $n$ noisy samples whose discrete Total Variation (TV) is bounded by $C_n$. We reveal a deep connection to the seemingly disparate problem of Strongly Adaptive online learning (Daniely et al, 2015) and provide an $O(n \log n)$ time algorithm that attains the near minimax optimal rate of $\tilde O (n^{1/3}C_n^{2/3})$ under squared error loss. The resulting algorithm runs online and optimally adapts to the unknown smoothness parameter $C_n$. This leads to a new and more versatile alternative to wavelets-based methods for (1) adaptively estimating TV bounded functions; (2) online forecasting of TV bounded trends in time series.
Introduction
Total variation (TV) denoising (Rudin et al., 1992) is a classical algorithm originating in the signal processing community, which removes noise from a noisy signal y by solving the following regularized optimization problem: $$\min_{f} \; \|f - y\|_2^2 + \lambda\, \mathrm{TV}(f),$$
where TV(·) denotes the total variation functional, which is equivalent to $\int |f'(x)|\,dx$ for weakly differentiable functions. In discrete time, TV denoising is known as "fused lasso" in the statistics literature (Tibshirani et al., 2005; Hoefling, 2010), which solves $$\min_{\theta \in \mathbb{R}^n} \; \|y - \theta\|_2^2 + \lambda \sum_{i=2}^{n} |\theta_i - \theta_{i-1}|, \tag{1}$$ where $\theta_i$ is the element at index i of the vector θ. Unlike their L2-counterpart, the TV regularization functional is designed to promote sparsity in the number of change points, hence inducing a "piecewise constant" structure in the solution.
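As a concrete illustration, the following minimal sketch solves the fused-lasso problem above for a fixed λ using the generic convex-optimization package cvxpy; the solver choice, variable names, and the example signal are illustrative and not part of the original text, and conventions for scaling the quadratic term (and hence λ) vary across references.

```python
# Minimal fused-lasso / discrete TV-denoising sketch for a fixed lambda,
# using cvxpy as a generic convex solver. Illustrative only; the paper's
# point is that a well-designed estimator should avoid tuning lambda.
import numpy as np
import cvxpy as cp

def fused_lasso(y, lam):
    """Solve min_theta ||y - theta||_2^2 + lam * sum_i |theta_i - theta_{i-1}|."""
    theta = cp.Variable(len(y))
    tv = cp.norm1(theta[1:] - theta[:-1])          # discrete total variation
    cp.Problem(cp.Minimize(cp.sum_squares(y - theta) + lam * tv)).solve()
    return theta.value

# Toy example: a noisy piecewise-constant signal
rng = np.random.default_rng(0)
truth = np.repeat([0.0, 2.0, -1.0], 50)
y = truth + 0.3 * rng.standard_normal(truth.size)
theta_hat = fused_lasso(y, lam=2.0)
```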
Over the three decades since the advent of TV denoising, it has seen many influential applications. Algorithms that use TV-regularization has been deployed in every cellphone, digital camera and medical imaging devices. More recently, TV denoising is recognized as a pivotal component in generating the first image of a super massive black hole (Akiyama et al., 2019). Moreover, the idea of TV regularization has inspired a myriad of extensions to other tasks such as image debluring, super-resolution, inpainting, compression, rendering, stylization (we refer readers to a recent book (Chambolle et al., 2010) and the references therein) as well as other tasks beyond the context of images such as change-point detection, semisupervised learning and graph partitioning.
In this paper, we focus on the non-parametric statistical estimation problem behind TV-denoising, which aims to estimate a function f : [0, 1] → R using observations of the form $y_i = f(x_i) + \epsilon_i$, where the $\epsilon_i$ are iid N(0, σ²) and the function f belongs to some fixed non-parametric function class F. The exogenous variables $x_i$ belong to some subset X of R. The above setup is a widely adopted one in the non-parametric regression literature (Tsybakov, 2008). In this work, we take F to be the Total Variation class $\mathcal{F}(C_n) = \{f \mid \mathrm{TV}(f) \le C_n\}$ or its discrete counterpart $\{\theta \in \mathbb{R}^n \mid \sum_{i=2}^{n} |\theta_i - \theta_{i-1}| \le C_n\}$. We are interested in finding algorithms that generate estimates $\hat{y}_t$, t ∈ [n], such that the total squared error $R_n := \sum_{t=1}^{n} (\hat{y}_t - f(x_t))^2$ is minimized. Throughout this paper, when we refer to rate, we mean the growth rate of R_n as a function of n and C_n. The family F(C_n) we consider here features a rich class of functions that exhibit spatially heterogeneous smoothness behavior. These functions can be very smoothly varying in certain regions of space, while in other regions they can exhibit fast variations (see, e.g., Fig. 5) or abrupt changes that may even be discontinuous. A good estimator should be able to detect such local fluctuations (which can be short lived) and adjust the amount of "smoothing" to apply according to the level of smoothness of the functions in each local neighborhood. Such estimators are referred to as locally adaptive estimators by Donoho (Donoho et al., 1998).
We are interested in algorithms that achieve the minimax optimal rate for estimating functions in F(C_n), defined as $R_n^*(C_n) := \inf_{\hat{y}} \sup_{f \in \mathcal{F}(C_n)} \mathbb{E}[R_n(\hat{y}, f)]$, which is known to be $\Theta(n^{1/3} C_n^{2/3})$ (Donoho et al., 1990; Mammen, 1991).
There is a body of work in Strongly Adaptive online learning that focuses on designing online algorithms such that its regret in any local time window is controlled (Daniely et al., 2015). Hence the notion of local adaptivity is built into such algorithms. This makes the problem of estimating TV bounded functions, a natural candidate to be amenable to techniques from Strongly Adaptive online learning. However, it is not clear that whether using Strongly Adaptive algorithms can lead to minimax optimal estimation rates. By formalizing the intuition above, we answer it affirmatively in this work.
We reserve the phrase adaptive estimation to describe the act of estimating TV bounded functions such that R_n of the estimator/algorithm can be bounded by a function of n and C_n without any prior knowledge of C_n. An adaptively optimal estimator $\hat{y}$ is able to estimate an arbitrary function f with an error $R_n(\hat{y}, f) = \tilde{O}\big( \inf_{C_n : f \in \mathcal{F}(C_n)} R_n^*(C_n) \big)$.
A TV bounded function will be referred to as a Bounded Variation (BV) function henceforth for brevity. The notation $\tilde{O}(\cdot)$ hides poly-logarithmic factors of n.
It is well known that all linear estimators that output a linear transformation of the observations attain a suboptimal Ω( √ nC n ) rate (Donoho et al., 1990). This covers a large family of algorithms including the popular methods based on smoothing kernels, splines and local polynomials, as well as methods such as online gradient descent (see a recent discussion from Baby and Wang, 2019). Wavelet smoothing (Donoho et al., 1998) is known to attain the near minimax optimal rate ofÕ(n 1/3 C 2/3 n ) for R n without any prior information about C n . Recently the same rate is shown to be achievable for the online forecasting setting by adding a wavelets-based adaptive restarting schedule to OGD (Baby and Wang, 2019).
In this paper, we provide an alternative to wavelet smoothing by a novel reduction to a strongly adaptive regret minimization problem from the online learning literature. We show that the resulting algorithm achieves the same adaptive optimal rate of Õ(n^{1/3} C_n^{2/3}). The algorithm is more versatile than wavelet smoothing for three reasons: 1. Our algorithm is based on aggregating experts that perform local predictions. The experts we use perform online averaging. However, one may use more advanced algorithms such as kernel/spline smoothing, polynomial regression or even deep learning approaches as experts that can potentially lead to better performance in practice. Hence our algorithm is highly configurable.
2. Our algorithm accepts a learning rate parameter that can be set without prior knowledge of C n to obtain the near optimal rate ofÕ(n 1/3 C 2/3 n ) (see Theorem 5). However, this learning rate can also be tuned using heuristics that can lead to better practical performance (see Section 5).
3. It can also handle a more challenging setting where the data are streamed sequentially in an online fashion.
To the best of our knowledge, we are the first to formalize the connection between strongly adaptive online learning and the problem of local-adaptivity in nonparametric regression. By establishing this new perspective, we hope to encourage further collaboration between these two communities.
Problem Setup
Though we are primarily motivated to solve the offline/batch estimation problem, our starting point is to consider a significant generalization of the batch problem as shown in Fig. 1. Any adaptively optimal algorithm to this online game immediately implies adaptive optimality in the batch/offline setting. For example, to solve the batch problem, adversary can be thought of as revealing the indices isotonically, i.e i t = t. However, note that in the online game, adversary can even query the same index multiple times. The term "forecasting strategy" in step 1 of Fig. 1, is used to mean an algorithm that makes a prediction at current time point only based on the historical data.
Solving the online problem has an added advantage that the resulting algorithm can be applied to various instances of time series forecasting like financial markets, spread of contagious disease etc.
Though this constraint is considered to be mild and natural, we note that standard non-parametric regression algorithms do not make this assumption.
Notes on novelty and contributions
To the best of our knowledge, in non-parametric regression literature, only wavelet smoothing 1 (Donoho et al., 1998) is able to provably attain a near optimalÕ(n 1/3 C 2/3 n ) rate for estimating BV functions in batch setting without knowing the value of C n . There are model-selection techniques based on information-criterion, which often either incurs significant practical overhead or comes with no optimal rate guarantees (We will review these approaches in Section 1.3).
The contributions of this work are mainly theoretical. Our primary result is a novel reduction from the problem of estimating BV functions to Strongly Adaptive online learning (Daniely et al., 2015). This reduction approach results in the development of a new O(n log n) time algorithm that is: 1) minimax optimal (modulo log factors); 2) adaptive to C_n; and 3) able to tackle both online and offline estimation problems, thereby providing new insights. To elaborate slightly, this is facilitated by a few fundamentally different viewpoints than those adopted in the wavelet literature. In particular, we exhibit a specific partitioning of a TV bounded function into consecutive chunks that incur low total variation, such that the total number of chunks is O(n^{1/3} C_n^{2/3}). Then, by designing a strongly adaptive online learner, we ensure an Õ(1) cumulative squared error in each chunk of that partition. This immediately implies an estimation error rate of Õ(n^{1/3} C_n^{2/3}) when summed across all chunks. (Footnote: though (Baby and Wang, 2019) proposes a minimax policy for forecasting TV bounded sequences online, they heavily rely on the adaptive minimaxity of wavelet smoothing.) To the best of our knowledge, this is the first time a connection between strongly adaptive online learning and estimating BV functions has been exploited in the literature.
Experimental results (see Section 5) indicate that our algorithm can outperform wavelet smoothing in terms of its cumulative squared error incurred in practice. We demonstrate that the proposed algorithm can be used without any hyper-parameter tuning and incurs very low computational overhead in comparison to model selection based approaches for the fused lasso problem (see Eq. (1)).
Before closing this section, we remind the reader that this work shouldn't be viewed only as providing yet another solution to a classical problem but rather one that provides a fundamentally new set of tools that adds new insight to this decades-old problem that might have a profound impact in many extensions of the basic setting we consider and other downstream tasks such as estimating higher-dimensional BV functions, fused lasso on graphs, image deblurring, trend filtering and so on.
Related Work
As noted before, the theoretical analysis of estimating BV functions is well studied in the rich literature of nonparametric regression. Apart from wavelet smoothing (Donoho et al., 1990;Donoho and Johnstone, 1994a,b;Donoho et al., 1998), many algorithms such as Trend Filtering (Kim et al., 2009;Tibshirani, 2014;Wang et al., 2016;Sadhanala et al., 2016b;Guntuboyina et al., 2017) and locally adaptive regression splines (Mammen and van de Geer, 1997) can be used for estimation. However, one drawback of these algorithms is that they require the TV of ground truth C n as an input to the algorithm to guarantee minimax optimal rates. For example, the solution to fused lasso (Eq. (1)) is minimax optimal only when one chooses the hyper-parameter λ optimally. It is shown in (Wang et al., 2016) that optimal choice of λ depends on the variational budget C n which may be unknown beforehand.
Theoretically one may tune the choice of C_n (or λ) as a hyper-parameter using criteria like AIC, BIC, Stein Unbiased Risk Estimate (SURE)-based approaches, or use the techniques presented in (Birge and Massart, 2001). However, such model selection based schemes often have statistical or computational overheads that make them impractical. The most relevant is the effective degree of freedom (dof) approach (see Eq. (8) and Eq. (9) in (Tibshirani and Taylor, 2012)). It requires solving fused lasso with many λ (computational overhead). The estimate of dof is unstable in some regimes (statistical overhead). Generally, these methods may work well in practice but often do not come with theoretical guarantees of adaptive optimality. Moreover, we are not aware of any such model-selection technique that can solve the online version of the problem.
There is also a body of work that focuses on the computational aspects of solving problem (1) and its higher-dimensional extensions (see (Chambolle and Lions, 1997; Barbero and Sra, 2011), and the excellent survey therein). This is complementary to our focus, which is to minimize the error against the (unobserved) ground truth. Computationally, the dynamic programming of (Johnson, 2013) has a worst-case O(n) time-complexity, but only for a fixed λ. Our algorithm runs in O(n log n) time while avoiding the choice of the λ parameter altogether.
The closest to us is perhaps (Baby and Wang, 2019) which indeed has motivated this work. They consider an online protocol similar to Fig. 1 with the adversary constrained to reveal the indices i t isotonically (i.e i t = t) and propose an adaptive restart scheme based on wavelets. However such techniques are not useful to compete against a more powerful adversary which can query indices in any arbitrary manner -for example when the exogenous variables x ∈ X are sampled iid from a distribution and revealed online. Further, their proof critically relies on adaptive minimaxity of wavelets. We aim to build a radically new algorithm that is agnostic to the results from wavelet smoothing literature.
A strongly adaptive online learner (Daniely et al., 2015;Adamskiy et al., 2016), incurs low static regret in any interval. This is accomplished by maintaining a pool of sleeping experts that are static regret minimizing algorithms which are awake only in some specific duration. Then an aggregation strategy to hedge over the experts is used to guarantee low regret in any interval. This work was preceded by the notion of weakly adaptive regret in (Hazan and Seshadhri, 2007). To the best of our knowledge, the efficient reduction of TV-denoising to strongly-adaptive online learning is new to this paper. We defer further discussions on related work to Appendix A.
Preliminaries
In this section, we briefly review the elements from online learning literature that are crucial to the development of our algorithm.
Geometric Cover
The Geometric Cover (GC) proposed in (Daniely et al., 2015) is a collection I of intervals of N defined below. In what follows, [a, b] denotes the set of natural numbers lying between a and b, both inclusive. The cover is $$\mathcal{I} = \bigcup_{k \in \mathbb{N} \cup \{0\}} \mathcal{I}_k, \quad \text{where } \mathcal{I}_k = \{[i \cdot 2^k,\ (i+1) \cdot 2^k - 1] : i \in \mathbb{N}\}.$$ Define AWAKE(t) := {I ∈ I : t ∈ I}. By the construction of the Geometric Cover I, it holds that $$|\mathrm{AWAKE}(t)| = \lfloor \log t \rfloor + 1. \tag{2}$$ Let's denote $\mathcal{I}|_J := \{I \in \mathcal{I} : I \subseteq J\}$ for an interval J ⊆ N. The GC has a very nice property recorded in the following Proposition: any interval I can be partitioned into two finite sequences of disjoint consecutive intervals $(I_{-k}, \ldots, I_0) \subseteq \mathcal{I}|_I$ and $(I_1, \ldots, I_p) \subseteq \mathcal{I}|_I$ whose lengths first grow and then decay geometrically (we refer to Daniely et al., 2015, for the precise statement).
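To make the construction concrete, here is a small sketch that materializes the cover restricted to a horizon n (keeping only intervals fully contained in [1, n], as in I|_[n]) and computes AWAKE(t); the indexing conventions in the sketch are illustrative.

```python
# Sketch of the Geometric Cover of Daniely et al. (2015), restricted to [1, n].
# I_k contains the intervals [i * 2^k, (i + 1) * 2^k - 1]; AWAKE(t) collects
# the intervals of the cover that contain t.
def geometric_cover(n):
    cover, k = [], 0
    while 2 ** k <= n:
        i = 1
        while (i + 1) * 2 ** k - 1 <= n:        # keep only intervals inside [1, n]
            cover.append((i * 2 ** k, (i + 1) * 2 ** k - 1))
            i += 1
        k += 1
    return cover

def awake(t, cover):
    return [(a, b) for (a, b) in cover if a <= t <= b]

cover = geometric_cover(16)
print(len(cover))        # the restricted cover has O(n log n) intervals in total
print(awake(10, cover))  # only O(log t) of them are awake at any time t
```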
Sleeping Experts and Specialist Aggregation Algorithm (SAA)
In the problem of learning from expert advice with outcome space O and action space A, there are K experts who provide a list of actions $a_{t,:} = [a_{t,1}, \ldots, a_{t,K}] \in A^K$ at time t = 1, ..., n. The learner is supposed to take an action a_t ∈ A based on the expert advice (which could be a_{t,k} for some k ∈ [K] or any other point in A) before the outcome o_t ∈ O is revealed by an adversary. The player then incurs a loss given by ℓ(a_t, o_t), where ℓ is a loss function.
In the most basic setting, A, O are discrete sets, can be described by a table, and we assign one constant expert to each a ∈ A, then this becomes an online version of Von Neumann's linear matrix game. More generally, A can be a convex set, describing parameters of a classifier, o ∈ O could denote a feature-label pair in which case the loss could be a square loss or logistic loss that measures the performance of each classifier.
Our result leverages a variant of the learning from expert advice problem which assumes an arbitrary subset of K experts might be sleeping at time t and the learner needs to compete against an expert only during its awake duration. The learner chooses a distribution w t over the awake experts and plays a weighted average over the actions of those awake experts. It then incurs a surrogate-loss called "MixLoss" which is a measure of how good the distribution w t is. (See Figure 2 for details.) This setting is different from the classical prediction with experts advice problem in two aspects: 1) The adversary is endowed with more power of selecting an awake expert set in addition to the actual outcome o t at each round. 2) Instead of the loss (a t , o t ), the learner is incurred a surrogate loss on the distribution chosen by the learner at time t.
Consider the protocol of learning with sleeping experts shown in Fig. 2. Assume an expert pool of size K.

Figure 2 (interaction protocol with sleeping experts; the expert pool size is K). For t = 1, . . . , n:
1. Adversary picks a subset A_t ⊂ [K] of awake experts.
2. Learner chooses a distribution w_t over A_t.
3. Adversary reveals the losses of all awake experts, ℓ_{t,k} for k ∈ A_t.
The Specialist Aggregation Algorithm (SAA) proceeds as follows. Initialize u_{1,k} = 1/|S| for all k in an index set S used to index the expert pool. For t = 1, . . . , n:
1. Adversary reveals A_t ⊆ S.
2. Play the weighted average action with respect to the distribution w_t obtained by restricting and renormalizing u_t over the awake set A_t.
3. Broadcast the weights w_{t,k}.
4. Receive losses ℓ_{t,k} for all k ∈ A_t.
5. Update the weights u_{t+1,k} using the losses and the indicator function 1{·}, where ℓ_{t,k} := L(a_{t,k}, o_t) and a_{t,k} is the action taken by expert k at time t.
Note that ℓ_{t,j} = MixLoss(e_j), where e_j selects j with probability 1. The regret measures the performance of the learner against any fixed expert in terms of the MixLoss in the sub-sequence where she is awake.
A MixLoss regret bound is useful because it implies a regret bound on any exp-concave loss for learners playing the weighted average action $a_t = \sum_{k \in A_t} w_{t,k}\, a_{t,k}$. To see this, let L′(a, o) be η-exp-concave in its first argument a ∈ A. By the definition of exp-concavity it follows that if SAA is run with losses L(a, o) = ηL′(a, o), then the learner's cumulative L′ loss exceeds that of any fixed expert over its awake rounds by at most the MixLoss regret scaled by 1/η, where a_{t,k} is the action taken by expert k at time t. We refer to Chapter 3 of (Cesa-Bianchi and Lugosi, 2006) and (Adamskiy et al., 2016) for further details on SAA.
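To illustrate the mechanics, the sketch below runs one standard specialist-style aggregation with exponentially weighted updates on squared loss: asleep experts keep their weights, while awake experts are reweighted and their total mass is preserved. This is an illustrative instance of the idea only; the exact MixLoss bookkeeping and update rule of SAA as used here may differ in detail.

```python
# Illustrative sleeping-experts (specialist-style) aggregation with
# exponential weights on squared loss. Asleep experts keep their weight;
# awake experts are reweighted by exp(-eta * loss) and rescaled so that the
# total mass of the awake set is unchanged. A standard variant, not
# necessarily the exact update of the paper's SAA listing.
import numpy as np

def sleeping_experts(predictions, awake_sets, outcomes, eta):
    """predictions: (n, K) array of expert predictions (entries for asleep
    experts are ignored); awake_sets: length-n list of index arrays;
    outcomes: length-n array of observations y_t."""
    n, K = predictions.shape
    u = np.full(K, 1.0 / K)
    out = np.empty(n)
    for t in range(n):
        A = np.asarray(awake_sets[t])
        w = u[A] / u[A].sum()                     # distribution over awake experts
        out[t] = w @ predictions[t, A]            # weighted-average action
        losses = (predictions[t, A] - outcomes[t]) ** 2
        new_mass = u[A] * np.exp(-eta * losses)
        u[A] = new_mass * (u[A].sum() / new_mass.sum())   # preserve awake mass
    return out
```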
Main Results
In this section, we present our algorithm and its performance guarantees.
Algorithm
As noted in Section 1, our goal is to explore the possibility that a Strongly Adaptive online learner can lead to minimax optimal estimation rate. Consequently the algorithm that we present is a fairly standard Strongly Adaptive online learner that can guarantee logarithmic regret in any interval.
Our algorithm ALIGATOR (Aggregation of onLIne averaGes using A geomeTric cOveR), defined in Fig. 4, can be used to tackle both online and batch estimation problems. The policy is based on learning with sleeping experts, where the expert pool is defined as follows.
$E := \{A_I : I \in \mathcal{I}|_{[n]}\}$, where $\mathcal{I}$ is as defined in Section 2.1 and A_I is an algorithm that performs online averaging in interval I. Let A_I(t) denote the prediction of the expert A_I at time t, if I ∈ AWAKE(t).
Due to relation (2), we have |E|≤ n log n. Our policy basically performs SAA over E.
The precise definition of A_I(t) used in our algorithm is given in Fig. 4. This particular choice of experts is motivated by the fact that performing online averaging leads to logarithmic static regret under quadratic losses. As shown later, this property, when combined with the SAA scheme, leads to logarithmic regret in any interval of [n].
Figure 4 (the ALIGATOR algorithm): for t = 1 to n, (a) the adversary reveals an arbitrary x_{i_t} ∈ X; the remaining steps follow the SAA scheme over the expert pool E.
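Putting the pieces together, the following compact sketch is an ALIGATOR-style forecaster in the spirit of Fig. 4: one online-averaging expert per geometric-cover interval, aggregated with a specialist-style exponential-weights update on squared loss. It is illustrative only; the exact prediction, update, and learning-rate choices of the paper's listing may differ, and the naive AWAKE lookup here costs O(n) per round rather than the O(log n) of an indexed implementation.

```python
# Illustrative ALIGATOR-style forecaster (isotonic order i_t = t for simplicity):
# one online-averaging expert per geometric-cover interval, combined with a
# specialist-style exponential-weights update on squared loss. A simplified
# sketch of the paper's Fig. 4, not a faithful reproduction.
import numpy as np

def geometric_cover(n):
    cover, k = [], 0
    while 2 ** k <= n:
        i = 1
        while (i + 1) * 2 ** k - 1 <= n:
            cover.append((i * 2 ** k, (i + 1) * 2 ** k - 1))
            i += 1
        k += 1
    return cover

def aligator(y, eta=1.0):
    n = len(y)
    cover = geometric_cover(n)
    u = {I: 1.0 for I in cover}                  # aggregation weights
    s = {I: 0.0 for I in cover}                  # running sums of each expert
    c = {I: 0 for I in cover}                    # sample counts of each expert
    preds = np.empty(n)
    for t in range(1, n + 1):
        awake = [I for I in cover if I[0] <= t <= I[1]]   # naive O(n) lookup
        p = np.array([s[I] / c[I] if c[I] else 0.0 for I in awake])
        w = np.array([u[I] for I in awake])
        w = w / w.sum()
        preds[t - 1] = w @ p                     # aggregated prediction for round t
        losses = (p - y[t - 1]) ** 2             # squared loss of each awake expert
        mass = np.array([u[I] for I in awake])
        new_mass = mass * np.exp(-eta * losses)
        new_mass *= mass.sum() / new_mass.sum()  # preserve total awake mass
        for I, m in zip(awake, new_mass):
            u[I] = m
        for I in awake:                          # each awake expert updates its average
            s[I] += y[t - 1]
            c[I] += 1
    return preds

# Toy run on a noisy piecewise-constant trend
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(64), np.ones(64)])
y_obs = f + 0.2 * rng.standard_normal(f.size)
y_hat = aligator(y_obs, eta=1.0)
```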
Performance Guarantees
Theorem 5. Consider the online game in Fig. 1. Then, with probability at least 1 − δ, the total squared error of ALIGATOR satisfies $R_n = \tilde{O}(n^{1/3} C_n^{2/3})$,
where $\tilde{O}(\cdot)$ hides the dependency on the constants B, σ and poly-logarithmic factors of n and δ.
Proof Sketch. We first show that ALIGATOR suffers logarithmic regret against any expert in the pool E during its awake period. Then we exhibit a particular partition of the underlying TV bounded function such that the number of chunks in the partition is O(n^{1/3} C_n^{2/3}) (Lemma 14 in Appendix B). Following this, we cover each chunk with at most log n experts and show that each expert in the cover suffers an Õ(1) estimation error. The Theorem then follows by summing the estimation error across all chunks of the partition. In summary, the delicate interplay between Strongly Adaptive regret bounds and properties of the partition we exhibit leads to the adaptively minimax optimal estimation rate for ALIGATOR. We emphasize that the existence of such partitions is a highly non-trivial matter.
Remark 6. We note that under the above setting, ALIGATOR is minimax optimal in n and C n , and adaptive to unknown C n .
Remark 7. If the noise level σ is unknown, it can be robustly estimated from the wavelet coefficients of the observed data by a Median Absolute Deviation estimator (Johnstone, 2017). This is facilitated by the sparsity of wavelet coefficients of BV functions .
Remark 8. In the offline problem where we have access to all observations ahead of time, the choice of η = 1/(8ν 2 ) whereν = max{|y 1 |, . . . , |y n |} results in the same near optimal rate for R n as in Theorem 5. This is due to the fact that B + σ log(2n/δ) is nothing but a high probability bound on each |y t |. Hence we don't require the prior knowledge of B and σ for the offline problem.
Remark 9. The authors of (Donoho et al., 1998) use the error metric given by the L2 function norm on a compact interval (see also Tibshirani, 2014). When x_{i_t} = t/n, ALIGATOR guarantees that the empirical norm $\frac{1}{n}\sum_{t=1}^{n}(\hat{y}_t - f(t/n))^2$ decays at the rate of $\tilde{O}(n^{-2/3} C_n^{2/3})$. For the TV class, it can be shown that the empirical norm and the function norm are close enough such that the estimation rates do not change (see Section 15.5 of (Johnstone, 2017)).
Remark 10. Note that conditioned on the past observations, the prediction of ALIGATOR is deterministic in each round. So in the online setting, we can compete with an adversary who chooses the underlying ground truth in an adaptive manner based on the learner's past moves. With such an adaptive adversary, it becomes important to reveal the set of covariates X ahead of time. Otherwise there exists a strategy for the adversary to choose the covariates x it that can enforce a linear growth in the cumulative squared error. We refer the readers to (Kotłowski et al., 2016) for more details about such adversarial strategy.
Proposition 11. The overall run-time of ALIGATOR is O(n log n).
Proof. On each round |AWAKE(t)| is O(log n) by (2). So we only need to aggregate and update the weights of O(log n) experts per round which can be done in O(log n) time.
Extensions
Motivated by a practical perspective, we discuss two direct extensions to ALIGATOR below. These extensions highlight the versatility of ALIGATOR in adapting to each application.
Hedged ALIGATOR. In our theoretical results, we found that choosing learning rate η conservatively according to Theorem 5 or Remark 8 ensures the minimax rates. In practice, however, one could use larger learning rates to adapt to the structure of every input sequence.
We propose to use a hedged ALIGATOR scheme that aggregates the predictions of ALIGATOR instantiated with different learning rates. In particular, we run different instances of ALIGATOR in parallel, where an instance corresponds to a learning rate in the exponential grid [η, 2η, . . . , max{η, log 2 n}], which has a size of $O(\log((B^2 + \sigma^2)\log n))$. Here η is chosen as in Theorem 5 or Remark 8. Then we aggregate each of these instances by the Exponential Weighted Averages (EWA) algorithm (Cesa-Bianchi and Lugosi, 2006). The learning rate of this outer EWA layer is set according to the theoretical value. By exp-concavity of squared error losses, this strategy helps to match the performance of the best ALIGATOR instance. Since the theoretical choice of learning rate is included in the exponential grid, the strategy can also guarantee the optimal minimax rate. We emphasize that Hedged ALIGATOR is adaptive to C_n and requires no hyper-parameter tuning.
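As a rough sketch of the outer hedging layer, the snippet below combines the per-round predictions of several inner forecasters (e.g., ALIGATOR instances run with different learning rates) using exponential weighted averages on squared loss; the outer learning rate shown is a placeholder rather than the theoretically prescribed value.

```python
# Illustrative outer EWA layer for hedging over several inner forecasters
# (for instance, ALIGATOR instances run with learning rates on an
# exponential grid). The outer learning rate here is a placeholder.
import numpy as np

def hedge(instance_preds, y, outer_eta=0.5):
    """instance_preds: (n, M) array, column j = online predictions of instance j."""
    n, M = instance_preds.shape
    w = np.full(M, 1.0 / M)
    combined = np.empty(n)
    for t in range(n):
        combined[t] = w @ instance_preds[t]        # aggregated prediction
        losses = (instance_preds[t] - y[t]) ** 2   # squared loss of each instance
        w = w * np.exp(-outer_eta * losses)
        w /= w.sum()
    return combined
```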
ALIGATOR with polynomial regression experts. This extension is motivated by the problem of identifying trends in time series. Though in Section 3.1 we use online averaging as experts, in practice one can consider using other algorithms. For example, if the trends in a time series are piecewise-linear, then experts based on online averaging can lead to poor practical performance because the TV budget C n of piecewise linear signals can be very large. To alleviate this, in this extension, we propose to use Online Polynomial Regression as experts where a polynomial of a fixed degree d is fitted to the data with time points as its exogenous variables. This is similar to the idea adopted in (Baby and Wang, 2020) where they construct a policy that performs restarted online polynomial regression where the restart schedule is adaptively chosen via wavelet based methods. They show that such a scheme can guarantee estimation rates that grow with (a scaled) L1 norm of higher order differences of the underlying trend which can be much smaller than its TV budget C n . This extension can be viewed as a variant to the scheme in (Baby and Wang, 2020) where the "hard" restarts are replaced by "soft restarts" via maintaining distributions over the sleeping experts.
Experimental Results
For empirical evaluation, we consider online and offline versions of the problem separately.
Description of policies. We begin with a description of each algorithm whose error curve is plotted in the figures.
ALIGATOR (hedged): This is the extension described in Section 4.

ALIGATOR (heuristics): For this heuristic strategy, we divide the loss of each expert by 2(σ² + σ²/m), where m is the number of samples whose running average is computed by the expert. This loss is proportional to the notion of (squared) z-score used in hypothesis testing. Intuitively, a lower (squared) z-score corresponds to a better expert. The multiplier 2 in the previous expression is found to provide good performance across all signals we consider.

Figure 6: Cumulative squared error rate of various algorithms in the offline setting and online setting. ALIGATOR achieves the optimal Õ(n^{1/3}) rate while performing better than wavelet based methods. In particular, in the offline setting, it achieves a performance closer to that of dof based fused lasso while only incurring a cheap Õ(n) run-time overhead.

Figure 7: A demo on forecasting COVID cases based on real world data (daily COVID cases in Florida; the ground truth, Holt ES, and hedged ALIGATOR are plotted). We display the two-week forecasts of hedged ALIGATOR and Holt ES, starting from the time points identified by the dotted lines. Both algorithms are trained on 2 months of data prior to each dotted line. We see that hedged ALIGATOR detects changes in trends more quickly than Holt ES. Further, hedged ALIGATOR attains a 20% reduction in the average RMSE from that of Holt ES (see Section 5).
arrows: This is the policy presented in (Baby and Wang, 2019), which runs online averaging with an adaptive restart rule based on wavelet denoising results.
wavelets: This is the universal soft thresholding estimator from (Donoho et al., 1998) based on Haar wavelets which is known to be minimax optimal for estimating BV functions.
oracle fused lasso: This estimator is obtained by solving (1) with the hyper-parameter tuned by assuming access to an oracle that can compute the mean squared error w.r.t. the actual ground truth. The exact ranges used in the hyper-parameter grid search are described in Appendix C. Note that the oracle fused lasso estimator is purely hypothetical, since no such oracle exists in reality, and is ultimately impractical; it is used here only to facilitate meaningful comparisons.
fused lasso (dof): In this experiment, we maintain a list of λ values for the fused lasso problem (Eq. (1)). We then compute Stein's Unbiased Risk Estimate of the expected squared error incurred by each λ, by estimating its degrees of freedom (dof) (Tibshirani and Taylor, 2012), and select the λ with minimum estimated error.
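A minimal sketch of this dof-based selection is shown below, assuming Python and a generic 1-D fused-lasso / total-variation solver passed in as `tv_solver` (the solver itself is not part of the sketch). The dof estimate used here is the number of constant pieces of the fit, following Tibshirani and Taylor (2012), and the risk estimate is the standard SURE expression for Gaussian noise; the function names are illustrative assumptions.

```python
import numpy as np

def dof_fused_lasso(theta_hat, tol=1e-8):
    """Degrees-of-freedom estimate: number of constant pieces of the fit."""
    jumps = np.abs(np.diff(theta_hat)) > tol
    return int(jumps.sum()) + 1

def select_lambda_by_sure(y, sigma2, lambdas, tv_solver):
    """Pick lambda minimizing Stein's Unbiased Risk Estimate.

    tv_solver(y, lam) is assumed to return the fused lasso fit of Eq. (1);
    any 1-D total-variation denoiser can be plugged in here.
    """
    n = len(y)
    best_lam, best_risk, best_fit = None, np.inf, None
    for lam in lambdas:
        theta_hat = tv_solver(y, lam)
        dof = dof_fused_lasso(theta_hat)
        # SURE for Gaussian noise: RSS + 2*sigma^2*dof - n*sigma^2
        risk = np.sum((y - theta_hat) ** 2) + 2.0 * sigma2 * dof - n * sigma2
        if risk < best_risk:
            best_lam, best_risk, best_fit = lam, risk, theta_hat
    return best_lam, best_fit
```

Note the computational cost this paragraph alludes to: the solver must be run once for every candidate λ in the list.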
Experiments on synthetic data. For the ground truth signal, we use the Doppler function of (Donoho and Johnstone, 1994a), whose waveform is depicted in Fig. 5. The observed data are generated by adding iid noise to the ground truth. In the offline setting, we have access to all observations ahead of time. So we run Arrows and both versions of ALIGATOR twice on the same data, once in isotonic order (i.e., i_t = t in Fig. 1) and once in reverse isotonic order, and average the predictions to obtain estimates of the ground truth. In the online setting, such a forward-backward averaging is not performed. This process of generating the noisy data and computing estimates is repeated for 5 trials and the average cumulative error is plotted. As we can see from Fig. 6 (a), the ALIGATOR versions attain the Õ(n^{1/3}) rate and incur much lower error than wavelet smoothing. Further, the performance of the hedged and heuristics versions of ALIGATOR is in the vicinity of that of the hypothetical fused lasso estimator, while the policies arrows and wavelets miss this benchmark by a large margin. Even though the dof based fused lasso comes very close to the oracle counterpart, we emphasize that this strategy is not known to provide theoretical guarantees on its rate and carries a heavy computational burden, since it requires solving the fused lasso (Eq. 1) for many different values of λ.
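The following sketch shows the data-generation and forward-backward averaging just described, assuming Python. The Doppler function follows the usual Donoho-Johnstone definition, and `run_forecaster` stands for any online policy run under the protocol of Fig. 1 with i_t = t; that callback and the variable names are assumptions made for illustration.

```python
import numpy as np

def doppler(n):
    """Doppler test function of Donoho and Johnstone (1994)."""
    x = np.arange(1, n + 1) / n
    return np.sqrt(x * (1 - x)) * np.sin(2 * np.pi * 1.05 / (x + 0.05))

def forward_backward_estimate(y, run_forecaster):
    """Offline estimate: run the online policy once in isotonic order and once
    in reverse isotonic order, then average the two prediction sequences.

    run_forecaster(y) is assumed to return one-step predictions of len(y).
    """
    fwd = np.asarray(run_forecaster(y))
    bwd = np.asarray(run_forecaster(y[::-1]))[::-1]
    return 0.5 * (fwd + bwd)

# Five trials of noisy observations around the Doppler ground truth.
rng = np.random.default_rng(0)
n, sigma = 1024, 0.35
truth = doppler(n)
trials = [truth + sigma * rng.standard_normal(n) for _ in range(5)]
```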
For the online version of the problem, we consider the policy Arrows as the benchmark. This policy has been established to be minimax optimal for online forecasting of TV bounded sequences in (Baby and Wang, 2019). We see from Fig. 6 (b) that all policies attain an Õ(n^{1/3}) rate, while the ALIGATOR variants enjoy lower cumulative errors.
Experiments on real data. Next we consider the task of forecasting COVID cases using the extension of ALIGATOR with polynomial regression experts as in Section 4. The data are obtained from the CDC website (cdc).
We address a very relevant problem: given access to historical data, forecast the evolution of COVID cases for the next 2 weeks. We compare the performance of hedged ALIGATOR and Holt Exponential Smoothing (Holt ES) on this problem, where the latter is a common algorithm used in time series forecasting to detect underlying trends. For ALIGATOR, we use Online Linear Regression as experts, where a polynomial of degree one is fitted to the data with time points as its exogenous variables. For each time point t in [Apr 20, Sep 27], we train both hedged ALIGATOR and Holt ES on a training window of the past 2 months. We then compute a 2 week forecast for both algorithms. For ALIGATOR this is achieved by linearly extrapolating the predictions of the experts awake at time t and aggregating them. Following this, we compute the Root Mean Squared Error (RMSE) over the interval [t, t + 14) for both algorithms. These RMSE values are then averaged across all t in [Apr 20, Sep 27].
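A minimal sketch of the forecasting step is given below, assuming Python. Each awake expert is assumed to expose an `extrapolate(horizon)` method (as in the polynomial-expert sketch of Section 4) and the aggregation uses the current weights restricted to the awake experts; both the interface and the function names are illustrative assumptions rather than the exact procedure used in the experiments.

```python
import numpy as np

def forecast_horizon(awake_experts, weights, horizon=14):
    """Aggregate multi-step forecasts from the experts awake at the current time.

    Each expert is assumed to expose extrapolate(horizon) returning `horizon`
    point forecasts (e.g. by extending its fitted line); `weights` are the
    current aggregation weights restricted to the awake experts.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    paths = np.array([e.extrapolate(horizon) for e in awake_experts])
    return (w[:, None] * paths).sum(axis=0)

def rmse(forecast, actual):
    """Root mean squared error over the forecast window [t, t + 14)."""
    forecast, actual = np.asarray(forecast), np.asarray(actual)
    return float(np.sqrt(np.mean((forecast - actual) ** 2)))
```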
We choose data from the state of Florida, USA, as an illustrative example. We obtained an average RMSE of 1330.12 for hedged ALIGATOR and 1671.77 for Holt ES. Thus hedged ALIGATOR attains a 20% reduction in forecast error relative to Holt ES. A qualitative comparison of the forecasts is illustrated in Fig. 7. As we can see, the time series is non-stationary and has a varying degree of smoothness. ALIGATOR is able to adapt to the local changes quickly, while Holt ES fails to do so despite having a more sophisticated training phase. Similar experimental results for some of the other states are reported in Appendix C.
The training step of hedged ALIGATOR involves learning the weights of all experts through the online interaction protocol shown in Fig. 1 with i_t = t. Remarkably, no hyper-parameter tuning is required by ALIGATOR for its training phase. The slowest learning rate used in the grid for hedged ALIGATOR is computed as follows. First we calculate the maximum loss incurred by each expert for a one step ahead forecast during its awake duration. Then we take the maximum of this quantity across all experts in the pool; call this quantity β. The slowest learning rate in the grid is then set to 1/(2β). The learning rate of the outer EWA layer is set to the same value. This is justifiable because the quantity 4(B + σ√(log(2n/δ))) in the denominator of the learning rate in Theorem 5 is a high probability bound on the loss incurred by any expert for a one step ahead forecast.
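The slowest-rate computation is a one-liner; the sketch below spells it out, assuming Python and assuming that the per-expert maximum one-step-ahead losses have already been collected during the online interaction (the function name and input format are illustrative assumptions).

```python
import numpy as np

def slowest_learning_rate(max_losses):
    """Slowest grid learning rate 1/(2*beta), where beta is the largest
    one-step-ahead loss incurred by any expert while it was awake.

    max_losses[i] is assumed to hold the maximum loss of expert i,
    collected during the online interaction of Fig. 1.
    """
    beta = float(np.max(max_losses))
    return 1.0 / (2.0 * beta)
```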
We defer further experimental results to Appendix C.

An important caveat for practitioners. Though ALIGATOR is able to detect non-stationary trends in the COVID data efficiently, we do not advocate using ALIGATOR as is for pandemic forecasting, which is a substantially more complex problem that requires input from domain experts.
However, ALIGATOR could have a role in this problem, and other online forecasting tasks. Estimating (and removing) trend is an important first step in many time series methods (e.g., Box-Jenkins method). Most trend estimation methods only apply to offline problems (e.g., Hodrick-Prescott filter or L1 Trend Filter) (Kim et al., 2009), while Holt ES is a common method used for online trend estimation. For instance, Holt ES is being used as a subroutine for trend estimation in a state-of-the-art forecasting method (Jin et al., 2021) for COVID cases that CDC is currently using. We expect that using ALIGATOR instead in such models that use Holt ES will lead to more accurate forecasting, but that is beyond the scope of this paper.
Concluding Discussion
In this work, we presented a novel reduction from estimating BV functions to Strongly Adaptive online learning. The reduction gives rise to a new algorithm, ALIGATOR, that attains the near minimax optimal rate of Õ(n^{1/3} C_n^{2/3}) in O(n log n) run-time. The results parallel wavelet smoothing in terms of optimal adaptivity to the unknown variational budget C_n. However, our algorithm is more versatile than wavelets in terms of its configurability and practical performance. Further, for offline estimation, the ALIGATOR variants achieve a performance closer (than wavelets) to an oracle fused lasso while incurring only an Õ(n) run-time with no hyper-parameter tuning. This is in contrast to degrees-of-freedom based approaches for tuning the fused lasso hyper-parameter, which require significantly more computational overhead and are not known to provide guarantees on their rate.
A More on Related Work
For any forecasting strategy whose output ŷ_t at time t depends only on past observations, the expected dynamic regret coincides with the expected cumulative estimation error. Hence any algorithm that minimizes the dynamic regret against the sequence f(x_{i_1}), . . . , f(x_{i_n}), with ℓ_t(x) = (x − y_t)² being the loss at time t, can potentially be applied to solve our problem. However, as noted in (Baby and Wang, 2019), a wide array of techniques such as (Zinkevich, 2003; Hall and Willett, 2013; Besbes et al., 2015; Chen et al., 2018b; Jadbabaie et al., 2015; Yang et al., 2016; Zhang et al., 2018a,b; Chen et al., 2018a; Yuan and Lamperski, 2019) are unable to achieve the optimal rate. We note, however, that many of these algorithms support general convex/strongly-convex losses. The existence of a strategy with Õ(n^{1/3} C_n^{2/3}) rate for R_n, even in the online setting of Fig. 1 (which is more general than the offline problem), is implied by the results of (Rakhlin and Sridharan, 2014) on online non-parametric regression over Besov spaces via a non-constructive argument. (Kotłowski et al., 2016) studies the problem of forecasting isotonic sequences; however, those techniques do not extend to forecasting the much richer family of TV bounded sequences.
We acknowledge that univariate TV-denoising is a simple and classical problem setting, and there have been a number of studies on TV-denoising in multiple dimensions and on graphs, and on higher order TV functionals, that establish the optimal rates in those settings (Tibshirani, 2014; Wang et al., 2016; Hutter and Rigollet, 2016; Sadhanala et al., 2016a, 2017; Li et al., 2018). The problem of adaptivity to C_n is generally open in those settings, except for highly special cases where the optimal tuning parameter happens to be independent of C_n (see e.g., (Hutter and Rigollet, 2016)). Generalizing the techniques developed in this paper to these settings is possible but beyond the scope of this paper. That said, as (Padilla et al., 2017) establishes, an adaptive univariate fused lasso is already able to handle signal processing tasks on graphs with great generality by simply taking the depth-first-search order as a chain.
Using a specialist aggregation scheme to incur low adaptive regret was explored in (Adamskiy et al., 2016). However, the experts they use are the same as those of (Hazan and Seshadhri, 2007). Consequently, their techniques are not directly applicable in our setting, where the exogenous variables are queried in an arbitrary manner.
There are image denoising algorithms based on deep neural networks such as (Zhang et al., 2017). However, this body of work is complementary to our focus on establishing the connection between denoising and strongly adaptive online learning.
B Proofs of Technical Results
For the sake of clarity, we present a sequence of lemmas and sketch how to chain them to reach the main result in Section B.1. This is followed by proof of all lemmas in Section B.2 and finally the proof of Theorem 5 in Section B.3.
B.1 Proof strategy for Theorem 5
We first show that ALIGATOR suffers logarithmic regret against any expert in the pool E during its awake period. Then we exhibit a particular partition of the underlying TV bounded function such that the number of chunks in the partition is O(n^{1/3} C_n^{2/3}). Following this, we cover each chunk with at most log n experts and show that each expert in the cover suffers an Õ(1) estimation error. The theorem then follows by summing the estimation error across all chunks.
First, we show that ALIGATOR is competitive against any expert in the pool E.
Lemma 12. For any interval I ∈ I|_{[n]} such that T(I) is non-empty, the predictions ŷ_t made by ALIGATOR satisfy the following bound with probability at least 1 − δ.
Corollary 13. Let S = {P_1, . . . , P_M} be an arbitrary ordered set of consecutive intervals in [n]. For each i ∈ [M], let U_i be the set containing the elements of the GC that cover the interval P_i according to Proposition 1. Denote λ := (log(n log n) R_σ + 2 R_σ² log(2n log n/δ)) / (3 − e). Then the ALIGATOR forecasts ŷ_t satisfy the corresponding bound on Σ_{t=1}^{n} (ŷ_t − θ_t)² with probability at least 1 − δ.
The minimum across all partitions in the corollary above hints at the novel ability of ALIGATOR to incur potentially very low estimation errors.
Next, we proceed to exhibit a partition of the set of exogenous variables queried by the adversary that will eventually lead to the minimax rate of Õ(n^{1/3} C_n^{2/3}). The existence of such partitions is a non-trivial matter.
Number of partitions
The next lemma controls the estimation error incurred by an expert during its awake period.
To prove Theorem 5, our strategy is to apply Corollary 13 to the partition in Lemma 14. By the construction of the GC, each chunk in the partition can be covered using at most log n intervals. Now consider the estimation error incurred by an expert corresponding to one such interval. Due to statements 1 and 2 in Lemma 14, the V(·, ·)²|T(I)| term of the error bound in Lemma 15 can be shown to be O(1). When summed across all intervals that cover a chunk, the total estimation error within a chunk becomes Õ(1). Now appealing to statement 3 of Lemma 14, we get a total error of Õ(n^{1/3} C_n^{2/3}) when the error is summed across all chunks in the partition.
Some notation. In the analysis that follows, we will use the following filtration.
Lemma 18. For any j ∈ [n], the following identity holds.

Proof. The displayed chain of equalities holds, where line (a) is due to the independence of ε_j from the past. Since (A_I(j) + θ_j − 2y_j)² ≤ 16(B + σ)² under the event V, the claimed bound follows.

Lemma 19. For any interval I ∈ I, the stated bound holds with probability at least 1 − δ.

Proof. Condition on the event V that |ε_t| ≤ σ√(2 log(4n/δ)) for all t ∈ [n], which happens with probability at least 1 − δ/2 by Lemma 16. By Lemma 18, {Z_j}_{j ∈ T(I)} is a martingale difference sequence and |Z_j| ≤ 16(B + σ)² = R_σ. Note that once we condition on the filtration F_j, there is no randomness remaining in the terms (A_I(j) − θ_j)² and (ŷ_j − θ_j)². Using Lemma 17 and taking η = 1/R_σ, we obtain the bound with probability at least 1 − δ/(4n log n) for a fixed expert A_I. Taking a union bound across all O(n log n) experts in E extends this to any expert A_I. By similar arguments applied to the martingale difference sequence (ŷ_j − θ_j)² − (y_j − ŷ_j)² + (y_j − θ_j)², an analogous bound can be shown for any interval I ∈ I|_{[n]}. Taking a union bound across the previous two bad events and multiplying by the probability of the noise boundedness event V leads to the lemma.
Lemma 12. For any interval I ∈ I|_{[n]} such that T(I) is non-empty, the predictions ŷ_t made by ALIGATOR satisfy the following bound with probability at least 1 − δ.
Number of partitions
Proof. We provide a constructive proof. Consider the following scheme for partitioning S: at step i, update pings ← pings + p(i) and TV ← TV + |θ^{(i)} − θ^{(i−1)}|. Statements 1 and 2 of the lemma trivially follow from this strategy. Next, we provide an upper bound on the number of bins M spawned by the above scheme. Let [x_1, x_{r_1}], [x_{r_1+1}, x_{r_2}], . . . , [x_{r_{M−1}+1}, x_{r_M}] be the partition of S discovered by the above scheme.
Proof. Let q(t) = Σ_{s=1}^{t−1} 1{i_s ∈ I} and assume q(t) > 0. Fix a particular expert A_I and a time t. Since y_t ∼ N(θ_t, σ²), the Gaussian tail inequality gives the displayed deviation bound. Applying a union bound across all time points and all experts implies that, for any expert A_I and t ∈ T(I) with q(t) > 0, the deviation is at most σ√(2 log(2n³ log n/δ)) with probability at least 1 − δ. Now adding and subtracting θ_t inside the |·| on the LHS and using |a − b| ≥ |a| − |b| yields the next display. Hence the bound in (5) holds with probability at least 1 − δ, where in (a) we used the relation (a + b)² ≤ 2a² + 2b². Further we have the bound in (6). Combining (5) and (6) completes the proof.
B.3 Proof of the main result: Theorem 5
Proof. Throughout the proof we carry forward all notation used in Lemmas 14 and 15. We will apply Corollary 13 to the partition in Lemma 14. Take a specific bin [x_i, x_j] ∈ P with j < m. Consider the set of indices F = {k_i, k_i + 1, . . . , k_j} of consecutive natural numbers between k_i and k_j. By Proposition 1, F can be covered using elements in I|_{[n]}; let this cover be U. For any I ∈ U, we have
(a) ≤ 2 V(x_i, x_j)² |T(F)| + 2σ² log(2n³ log n/δ) log(|T(I)|)
(b) ≤ 2B² + 2σ² log(2n³ log n/δ) log(n),
with probability at least 1 − δ.
Step (a) is due to Lemma 15 and (b) is due to statement 1 of Lemma 14.
Using Lemma 12 and a union bound over the bad events in Lemmas 12 and 15 yields a bound of [2B² + 2σ² log(2n³ log n/δ)] log(n)/(3 − e) + λ with probability at least 1 − 2δ, where λ is as defined in Corollary 13. Due to the property of exponentially decaying lengths stipulated by Proposition 1, there are at most 2 log|F| ≤ 2 log n intervals in U, which bounds the total error within the bin. A similar bound can be obtained for the last bin [x_{r_{M−1}+1}, x_m] in P. There are two cases to consider. In case 1, we consider the scenario when V(x_{r_{M−1}+1}, x_m) obeys relation 1 of Lemma 14; then the analysis is identical to the one presented above. In case 2, we consider the scenario when V(x_{r_{M−1}+1}, x_{m−1}) obeys relation 2 of Lemma 14 while V(x_{r_{M−1}+1}, x_m) does not. Then the error incurred within the interior [x_{r_{M−1}+1}, x_{m−1}] can be bounded as before. To bound the error at the last point, we only need to bound the error of an expert that performs mean estimation of iid Gaussians. It is well known that the cumulative squared error for this problem is at most σ² log(n/δ) with probability at least 1 − δ.
C Excluded details in Experimental section
Waveforms. The waveforms shown in Figs. 8 and 9 are borrowed from (Donoho and Johnstone, 1994a). Note that both functions exhibit spatially inhomogeneous smoothness behaviour.

Figure 11: Histogram of residuals for various algorithms when run on the Doppler function with noise level σ = 0.35. Note that these are residuals w.r.t. the ground truth. ALIGATOR incurs lower bias than wavelets. The bias incurred by dof fused lasso is roughly comparable to ALIGATOR, while the former is more compute intensive.

Figure 13: Histogram of residuals for various algorithms when run on the Heavisine function with noise level σ = 0.35. Note that these are residuals w.r.t. the ground truth. ALIGATOR incurs lower bias than wavelets. The bias incurred by dof fused lasso is roughly comparable to ALIGATOR, while the former is more compute intensive.

Figure 14: Hyper-parameter search for learning rate in ALIGATOR (heuristics).
Hyper-parameter search. Initially, we used a grid search on an exponential grid to find that the optimal λ across all experiments falls within the range [0.125, 8]. We then used a finer grid [0.125, 0.25, 0.5, 0.75, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6, …] to search for the final hyper-parameter value. For ALIGATOR (heuristics), we searched across different noise levels to find the best learning rate, scaling the loss as Loss/(para · (σ² + σ²/m)). As Fig. 14 shows, para = 2 is found to provide good results across all signals we consider.
Padding for wavelets. For the "wavelets" estimator in Fig. 6, when the data length is not a power of 2, we used the reflect padding mode of (Lee et al., 2019), though the results are similar for other padding schemes.
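For concreteness, the sketch below shows how the Haar soft-thresholding estimator can be evaluated with reflect padding using PyWavelets (the library of Lee et al., 2019). The universal threshold σ√(2 log n) matches the estimator of (Donoho et al., 1998); the function name is an illustrative assumption and the exact preprocessing used for Fig. 6 may differ.

```python
import numpy as np
import pywt

def haar_soft_threshold(y, sigma, mode="reflect"):
    """Universal soft-thresholding estimate with Haar wavelets.

    `mode` controls the signal extension used when len(y) is not a power of
    two; "reflect" padding was used here, other modes behave similarly.
    """
    coeffs = pywt.wavedec(y, "haar", mode=mode)
    thresh = sigma * np.sqrt(2.0 * np.log(len(y)))          # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, "haar", mode=mode)[: len(y)]
```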
Experiments on Real Data. We follow the experimental setup described in Section 5. A qualitative comparison of the forecasts for the state of New Mexico, USA is illustrated in Fig. 15. The average RMSE of ALIGATOR and Holt ES for all states in the USA is reported in Table 1.

Figure 15 (Daily COVID cases in New Mexico): A demo on forecasting COVID cases based on real world data. We display the two week forecasts of hedged ALIGATOR and Holt ES, starting from the time points identified by the dotted lines. Both algorithms are trained on 2 months of data prior to each dotted line. We see that hedged ALIGATOR detects changes in trends more quickly than Holt ES. Further, hedged ALIGATOR attains a 12% reduction in the average RMSE from that of Holt ES (see Table 1).

Table 1: Average RMSE across all states in the USA. The experimental setup and computation of error metrics are as described in Section 5. The % improvement column is computed as follows: let x_1 and x_2 be the RMSE of ALIGATOR and Holt ES respectively; then % improvement = (x_2 − x_1)/max{x_1, x_2}.
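As a quick check of this definition, the snippet below reproduces the roughly 20% improvement reported for Florida in Section 5 from the two RMSE values quoted there (the function name is an illustrative assumption).

```python
def pct_improvement(x1, x2):
    """% improvement of ALIGATOR (RMSE x1) over Holt ES (RMSE x2)."""
    return 100.0 * (x2 - x1) / max(x1, x2)

print(round(pct_improvement(1330.12, 1671.77)))  # prints 20
```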
Environmental factors preceding illness onset differ in phenotypes of the juvenile idiopathic inflammatory myopathies
Abstract Objective. To assess whether certain environmental factors temporally associated with the onset of juvenile idiopathic inflammatory myopathies (JIIMs) differ between phenotypes. Methods. Physicians completed questionnaires regarding documented infections, medications, immunizations and an open-ended question about other noted exposures within 6 months before illness onset for 285 patients with probable or definite JIIM. Medical records were reviewed for 81% of the patients. Phenotypes were defined by standard clinical and laboratory measures. Results. Sixty per cent of JIIM patients had a reported exposure within 6 months before illness onset. Most patients (62%) had one recorded exposure, 26% had two and 12% had three to five exposures. Patients older than the median age at diagnosis, those with a longer delay to diagnosis and those with anti-signal recognition particle autoantibodies had a higher frequency of documented exposures [odds ratios (ORs) 95% CI 3.4, 31]. Infections were the most common exposure and represented 44% of the total number of reported exposures. Non-infectious exposures included medications (18%), immunizations (11%), stressful life events (11%) and unusual sun exposure (7%). Exposures varied by age at diagnosis, race, disease course and the presence of certain myositis autoantibodies. Conclusion. The JIIMs may be related to multiple exposures and these appear to vary among phenotypes.
Introduction
The juvenile idiopathic inflammatory myopathies (JIIMs) are a heterogeneous group of acquired systemic autoimmune diseases characterized by symmetric proximal weakness, the presence of characteristic rashes and other systemic features. While the aetiology of these disorders remains unknown, many lines of evidence suggest that they result from the interaction of multiple genetic risk factors and environmental exposures [1].
The JIIMs, like other autoimmune disorders, appear to be comprised of a number of clinical and serological phenotypes, each of which defines more homogeneous subsets of patients in terms of demographic features, the presence of certain myositis-associated autoantibodies, immunogenetics and outcomes [2,3]. For example, patients with anti-p155 autoantibodies form a phenotype characterized by the frequent presence of cutaneous involvement and characteristic photosensitive rashes of JDM and the HLA-DQA1*0301 allele, whereas patients with anti-synthetase autoantibodies frequently have moderate to severe weakness, arthritis, RP, mechanic's hands, fevers, interstitial lung disease and HLA DRB1*0301 [3][4][5]. Clinical features of illness also appear to differ by age, gender, race and even disease course phenotypes [6][7][8]. Such homogeneous phenotypes might share unique combinations of environmental and genetic risk factors that result in a discrete disorder [9].
Environmental risk factors in JIIMs are not as well understood, and most efforts have focused on the potential role of infections in their aetiologies. Studies of cohorts of patients with JDM indicate that respiratory and gastrointestinal infections may be temporally associated with the onset of JIIM [16,17]. Prior studies of other autoimmune diseases suggest differences in environmental risk factors in different phenotypes [9], but the relationship between environmental risk factors and phenotypes has not been examined in the JIIMs [16,17].
We, therefore, undertook this study to examine whether environmental factors that are temporally associated with the clinical onset of JIIM differ in selected phenotypes, focusing on a large, well-characterized population with data on both infectious and non-infectious exposures.
Patients
Four hundred and twenty-three patients with probable or definite JDM or juvenile PM (JPM) [18] were enrolled into the NIH Clinical Center or Food and Drug Administration's investigational review board-approved natural history protocols from September 1994 until July 2008; subjects' written consent/assent was obtained according to the Declaration of Helsinki. The study was approved by the NIDDK/NIAMS Institutional Review Board. Enrolled patients provided a blood sample for autoantibody testing and the treating physician completed a questionnaire that included clinical, demographic and laboratory data. For 285 of these patients, questions about factors temporally associated with illness onset were also completed, which is the basis of the present study. Informed consent/parent assent was consistent with the Declaration of Helsinki. Phenotypes were defined by age of illness onset, clinical features, disease course, race or autoantibodies. Disease course was classified as monocyclic if the patient achieved remission without evidence of active disease, based on clinical examination and laboratory testing, within 2 years of diagnosis; as polycyclic if the patient had recurrence of active disease after a definite remission; as chronic continuous if disease activity persisted for >2 years; and as undefined if follow-up was <2 years from the time of diagnosis [8]. Clinical, demographic and autoantibody characteristics of the study population are described in Table 1. Only the autoantibody phenotypes defined as anti-aminoacyl-tRNA synthetase, anti-signal recognition particle (anti-SRP), anti-Mi2, anti-p155, anti-MJ, anti-U1 RNP and autoantibody negative were included in the analyses of environmental factors.
The physician questionnaire contained three questions about environmental exposures that had previously been suggested as possibly associated with the onset of JDM [16,17,19,20]. These included whether the patient had any documented infections, received any immunizations or took any medications (including vitamins, minerals, herbal preparations and dietary supplements) within 6 months before illness onset. The questionnaire also included an additional open-ended question about other environmental exposures within 6 months before illness onset relating to other possible triggers of disease, asking respondents to specify these exposures and when they occurred. Stressful life events were categorized as major vs minor and as network, family, academic or unknown type, based on the Adolescent Perceived Event Scale of Compas et al. [21] (personal communication: B. Compas, Vanderbilt University). Illness onset was defined as the month and year when the first symptom related to myositis developed. A paediatric rheumatologist (L.G.R. or G.M.) reviewed available medical records for 81% of the patients in order to confirm the reported exposures, as well as the diagnostic and clinical material contained in the questionnaire. Patient sera were tested for myositis autoantibodies by validated methods [22,23]. For anti-p155/140 and anti-MJ autoantibodies, serum samples were screened by immunoprecipitation (IP), and this was confirmed by IP blotting [5,24]. Sera were considered positive if they blotted the antigen in immunoprecipitates prepared using reference serum (direct) or if reference serum blotted the antigen in immunoprecipitates prepared using patient serum (reverse). Since some IP-positive sera do not react by immunoblotting, reverse IP blotting was used for most sera [5].

Notes to Table 1: One hundred and twenty-one patients were tested by IP immunoblotting. (a) Other myositis autoantibodies, which were not examined in the environmental exposure analysis, included: anti-Ro (n = 15), anti-PM/Scl (n = 5), anti-Sm (n = 3), anti-La (n = 2), anti-U5 RNP (n = 1), anti-U3 RNP (n = 1), anti-Ku (n = 1) and anti-Th (n = 1). Some patients have more than one myositis autoantibody. One hundred and one patients were not tested for myositis autoantibodies.
Case-only analyses were conducted to describe the frequency of exposures overall and in relation to patient phenotype. Statistical analysis was performed using SigmaStat version 3.1 (Systat Software, Inc., Chicago, IL, USA), including χ² and Fisher's exact tests to determine differences in the proportion of patients with different environmental exposures. Odds ratios (ORs) and 95% CIs were calculated using GraphPad InStat version 3.06 (GraphPad Software, San Diego, CA, USA). P-values were adjusted for multiple testing using Holm's procedure [25], using SAS System for Windows, version 9.1.3 and SAS Enterprise Guide Version 4.1 (SAS Institute, Cary, NC, USA).
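For readers who want to reproduce the kind of computation described here, a minimal Python sketch is shown below: an odds ratio with a Woolf (logit) 95% CI from a 2×2 table, and Holm's step-down adjustment of a set of P-values. The study itself used SigmaStat, GraphPad InStat and SAS, so the code below is only an illustrative equivalent; the Woolf CI method and function names are assumptions, and the exact software routines may differ.

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf/logit method) for a 2x2 table:
    a = exposed cases, b = exposed controls, c = unexposed cases,
    d = unexposed controls; assumes all cells are non-zero."""
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = np.exp(np.log(or_) - z * se_log)
    hi = np.exp(np.log(or_) + z * se_log)
    return or_, (lo, hi)

def holm_adjust(pvals):
    """Holm's step-down adjustment of a list of raw P-values."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    adj = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * p[idx])
        adj[idx] = min(1.0, running_max)   # enforce monotonicity and cap at 1
    return adj
```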
Frequency of documented exposures
Sixty per cent of JIIM patients had one or more reported exposures within 6 months before illness onset ( Table 2). The total number of reported exposures was more frequent in white patients than in other racial groups (P = 0.008; OR 2.1; 95% CI 1.2, 3.5; Table 2). Although most patients (62%) had only one reported exposure, 26% had two exposures, and 12% had three to five recorded exposures in the 6 months before illness onset ( Table 2). Of the 64 patients with more than one exposure, 50% had a combination of infection and medication, and 27% had a combination of infection and immunization. The combination of infection and immunization was more frequent in patients from non-white racial groups (P < 0.0001; OR 22.0; 95% CI 5.9, 81.7).
Patients who were >7.5 years of age at diagnosis (the median age at diagnosis) more often had three to five reported exposures compared with younger patients (P = 0.027; OR 3.7; 95% CI 1.3, 10.6; Table 2), and patients with anti-SRP autoantibody more often had three to five exposures compared with patients without a myositis autoantibody (P = 0.027; OR 31.0; 95% CI 1.9, 507). Patients with a longer delay to diagnosis (>4 months, the median delay) were more likely to have three to five exposures than patients with a shorter delay (P = 0.034; OR 3.4; 95% CI 1.2, 9.8). There were no other significant differences in the frequency or number of noted exposures between clinical phenotypes [JDM vs juvenile PM (JPM)], nor by gender, disease course, delay to diagnosis or between other autoantibody phenotypes (data not shown).
Types of exposures
Infections were the most common type of exposure identified within 6 months before diagnosis, consisting of 45% of the total number of reported exposures, followed by medications (18%) and immunizations (11%) (Table 2). Patients ≤7.5 years of age at onset, those with ≤4 months delay to diagnosis, those with a polycyclic illness course and those who were myositis autoantibody negative were more likely to have an infection in the 6 months before diagnosis than older patients, those with greater delay to diagnosis, those with a monocyclic or chronic continuous illness course or those who had anti-p155 or anti-SRP autoantibodies (ORs 1.8–4.3, Table 2 and data not shown). There were no other differences between documented types of exposure between clinical or other autoantibody phenotypes, nor by gender, disease course, race or delay to diagnosis.
From the open-ended exposure question, stressful life events constituted 11% of the reported exposures. Patients >7.5 years of age reportedly experienced a stressful life event more frequently in the 6 months before diagnosis than younger patients (P = 0.003; OR 3.5; 95% CI 1.5, 7.9). Unusual sun exposures comprised 7% of the total exposures in the 6 months before diagnosis and occurred exclusively in patients with JDM, not JPM. Unusual sun exposures included those resulting in sunburn, as well as receiving more sun than usual or travel to a sunnier location. An unusual chemical exposure was recorded 3% of the time and included application of pesticides inside or around the home, painting the home, use of formaldehyde to clean the child's bed and application of a hair perming chemical. Seven (2%) of the reported exposures involved unusual animal contact within 6 months of illness onset, including a dog or cat scratch, exotic bird bite or multiple flea or mosquito bites. Four (2%) exposures involved weight-training exercise or physical trauma, and two (0.6%) exposures involved dietary supplement usage before illness onset, including creatine monohydrate and Echinacea. These less-frequent exposures were present exclusively in patients with JDM, except that weight-training exercise was also noted in one JPM patient. Weight training, physical trauma and dietary supplements were seen exclusively in patients >7.5 years of age at diagnosis and in patients with a greater delay to diagnosis.
Drug exposures
Patients ≤7.5 years of age were more likely to have one drug exposure (P = 0.004; OR 15.4; 95% CI 1.8, 135), whereas those >7.5 years of age were more likely to have two drug exposures (P = 0.008; OR 18.9; 95% CI 1.0, 358) in the 6 months before illness onset (Table 4). Of interest, >25% of the medication usage documented included drugs that were potentially photosensitizing or myopathic [26–31]. There were no other differences noted in medication usage between clinical or autoantibody phenotypes, nor were there any differences by gender, race, disease course or delay to diagnosis.
Immunizations
There was no difference in the proportion of patients who received an immunization in the 6 months before illness onset, or in the number of immunizations received, between clinical or autoantibody phenotypes, nor by age, gender, race, disease course or delay to diagnosis. Patients with a polycyclic illness course were more likely than patients with a monocyclic illness course to have received an immunization or to have received a measles-mumps-rubella (MMR) vaccine in the 6 months before illness onset (21 vs 6%; P = 0.023; OR 4.1; 95% CI 1.2, 13.9 and 50 vs 6%; P = 0.035; OR 17.0; 95% CI 1.3, 223, respectively). Given the time period under study, it was not surprising that patients >7.5 years of age at diagnosis were more likely to have received a hepatitis B vaccine than younger patients (47 vs 12%; P = 0.002; OR 6.2; 95% CI 2.0, 19.7), whereas patients ≤7.5 years of age were more likely to have received a diphtheria-(pertussis)-tetanus vaccine (22 vs 6%; P = 0.033; OR 8.2; 95% CI 0.98, 69.8).
Stressful life events
Nine per cent (n = 26) of patients had at least one stressful life event in the 6 months before illness onset, with 72% of these being major stressors and the remainder being minor stressors. The majority of these patients (65%) had one stressor, but 31% had two recorded stressors and 4% had three stressors. The categorization of stressors included network (50%), family (25%), academic (19%) and unknown types (6%). Patients >7.5 years of age had a stressful life event more frequently than younger children (P = 0.003; OR 3.5; 95% CI 1.5, 7.9). There were no differences in the proportion of patients with a reported stressor or in the number or type of stressor in 6 months before illness onset between clinical or autoantibody phenotype, nor by gender, race, disease course or delay to diagnosis.
Discussion
The availability of a large, well-characterized population enabled us to examine the relationship between environmental exposures before illness onset and phenotypes in JIIM. We confirmed a number of exposures that had also been seen in prior studies of JDM, particularly the temporal association of respiratory infections preceding illness onset [16,17]. We identified for the first time that a number of other non-infectious exposures occurred within 6 months of the first signs of illness, including medications, many of which are potentially myopathic or photosensitizing, immunizations, stressful life events and sun exposure. The main novel findings of this study were differences in some exposures by age at diagnosis, delay to diagnosis, race, disease course and autoantibody phenotypes. For example, children younger than the median age at the time of diagnosis had a higher frequency of documented infections, whereas older children had a higher frequency of stressful life events in the months before illness onset. Patients without a myositis autoantibody had a higher frequency of infections in the 6 months before illness onset than was seen in patients with anti-p155 or anti-SRP autoantibodies, whereas patients with anti-SRP autoantibodies had a greater number of documented exposures than patients without a myositis autoantibody. These findings suggest that environmental exposures may differ by phenotype, and that they could be useful in understanding pathogeneses [1]. We found that an infectious illness, particularly a respiratory infection, frequently occurs within several months before juvenile myositis onset, supporting the findings of other studies of exposures temporally associated with the onset of JDM. In one study, a prospective registry of patients within 6 months of illness onset in which data were based on a parent environmental interview and medical record review, respiratory infections were identified within 3 months of illness onset in 57% of patients [16]. The other, a retrospective cohort with review of medical records by infectious disease specialists, identified infections within 3 months before the first symptoms of JDM in 33–50% of patients, and respiratory infections accounted for 80% of the infections [17]. The lack of control comparator groups in all of these studies, however, does not enable one to conclude that these exposures differ from a healthy population, nor that they are associated with the onset of illness. While infections, particularly upper respiratory infections, are reported frequently in school age children [32], a prospective matched cohort of new-onset JDM patients reported a higher frequency of antecedent illness in the JDM patients compared with friend controls from the same geographical region [19]. We identified for the first time that a number of other non-infectious exposures also occurred within 6 months of the first signs of illness, including medications, many of which are potentially myopathic or photosensitizing, immunizations, stressful life events and sun exposure. Pachman et al. [16] noted medication use in >60% of patients, including medications for symptoms of early illness or antibiotics to treat associated infections. A listing of medications taken by patients in the present study and in others includes similar medications (Table 4), and we noted that many of the medications could be potentially myopathic or phototoxic [26,27,29,30]. Drug-induced myositis has been well described with a number of different medications, including D-penicillamine, lipid-lowering agents, L-tryptophan and IFN-α [33,34]. Myopathic or phototoxic drugs, however, could lead to the first symptoms of myositis. Other environmental factors reported here, including ultraviolet light exposure, emotional stress and heavy weight lifting, have been reported as possible risk factors for adult DM or PM in case-controlled studies [35–38].

Notes to Table 4: Conventions as per Table 2. Bold values represent P < 0.05 after Holm's adjustment for multiple comparisons (using family-wise error rates of 5%). (a) Based on the total number of drug exposures, because some patients had >1 documented drug exposure. Potentially myopathic drugs included penicillin (n = 2) and ranitidine (n = 1) [28–31]. Potentially photosensitizing drugs included loratadine (n = 1), diphenhydramine (n = 1) and sertraline (n = 1) [26,27]. Drugs classified as potentially both myopathic and photosensitizing included ibuprofen (n = 5), trimethoprim/sulphamethoxazole (n = 3), isoniazid (n = 2) and erythromycin-sulfisoxazole (n = 1). Drugs not known to be either myopathic or photosensitizing included amoxicillin (n = 4), cefaclor (n = 3), pseudoephedrine (n = 2), cetirizine (n = 1), albuterol inhaler (n = 1), flecainide (n = 1), brompheniramine maleate (n = 1), nedocromil (n = 1), oxybutynin (n = 1), permethrin (n = 1), pyrethrin (n = 1), cantharidin (n = 1), cefadroxil (n = 1), acetaminophen (n = 1), streptomycin (n = 1), erythromycin (n = 1) and nystatin (n = 1). Drugs whose classification is unknown included unknown antibiotic (n = 10), birth control (n = 3) and anaesthetic (n = 1). *P = 0.004; OR 15.4; 95% CI 1.8, 135. †P = 0.008; OR 18.9; 95% CI 1.0, 358.
Almost 40% of the patients in this study had two or more reported exposures within 6 months before illness onset, rather than a single documented exposure. This is consistent with the concept that, just as systemic autoimmune diseases are polygenic [39], they might also be polyenvironmental, meaning that patients may have more than one exposure before developing the disease. These exposures may also be dependent on gene-gene, environment-environment and gene-environment interactions. In diseases such as cancer, multiple infectious and non-infectious environmental factors have been associated with specific malignancies, and these environmental exposures have been shown to affect the development of disease in different ways, including altering mutagenesis, promotion and direct carcinogenesis [40]. Synergistic interactions between some of these environmental factors, including viral and non-infectious exposures, have also been seen in certain malignancies [41,42]. It is possible, though, that there was a confounding between exposures, such as an infectious illness and the use of antibiotics, as noted by Pachman et al. [16]. Our data suggest that further investigation of the interaction between environmental exposures may be useful.
It is important to emphasize that the temporal association of environmental exposures with illness onset does not imply causality. For example, certain exposures, such as trauma or weight training, could have occurred after the onset of illness as a consequence of the first unrecognized symptoms of disease, such as fatigue or muscle weakness. Rather, exposures with temporal relationships to disease onset, as were seen in this hypothesis-generating study, constitute a first step for determining which factors may trigger the onset of illness and warrant further investigation. Additional support for a relationship between these exposures and disease pathogenesis could be provided by dechallenge data, which did not exist in this cohort-based study, from laboratory investigations and from case-controlled epidemiological studies [43]. A case-control study by Pachman et al. [19] did not find any significant differences in pesticide use, psychological stress or exposure to animals in 80 JDM patients within 6 months of illness onset compared with 63 age-matched geographically similar healthy controls with similar school or daycare experiences, nor was parvovirus found to be an aetiological factor in recent-onset JDM patients compared with age-, gender- and race-matched controls [44]. However, both of those studies may not have been adequately powered to detect differences between the cases and controls. Also, the extent of matching of controls may have obscured differences with JDM patients. For example, in the parvovirus study [44], the controls were age, race and gender matched to patients, but they were not geographically matched, whereas in the study of Pachman et al. [19], the healthy controls, frequently age-matched classmates and neighbours, may have been geographically overmatched, but they were not gender or race matched. An appropriately powered prospective case-controlled study is needed to confirm the observations from this and other previous reports.
There are a number of potential limitations in this study. A primary limitation is the absence of a control group. Thus, the frequencies of exposures observed in juvenile myositis patients overall may not differ from healthy control populations and these exposures may not be associated with the onset of illness. In addition, there could be under- or over-reporting of potential exposures, including a selection bias in the patients who had the environmental component of the questionnaire completed. We also found more exposures, including infectious illnesses, in white patients compared with patients in other racial groups. This could potentially be the result of differences in access to health care, resulting in better documentation of such exposures. The somewhat arbitrary period of 6 months before the onset of illness for identification of environmental factors might not be relevant to the initiation of myositis for all exposures. Certain exposures could require a longer period to induce their effects, as has been reported in malignancies, silicosis and other disorders, while for other exposures a shorter time frame might be more relevant [37,38]. Also, exposures other than infections, drugs and vaccines were reported in an open-ended manner, and patients were not required to be directly interviewed to obtain information about environmental exposures. We attempted to overcome these possible biases by conducting a formal review of most of the medical records of the study subjects. However, the medical records might also have selection bias by reporting only some of the significant environmental exposures. Certain exposures, such as exposures in the home and use of certain chemicals, are likely not captured uniformly in the medical record by the treating physician. Nonetheless, the fact that our data on infections before illness onset are similar to those of other large cohorts suggests that the quality of the data and reporting are reliable [16,17]. Finally, while some of the ORs in our study are large, the CIs may be wide and estimates could be inflated due to relatively small numbers of patients in some groups.
In summary, we have identified a number of environmental exposures, including infectious and non-infectious agents that occurred within 6 months before illness onset, varied by phenotype and may be important in the pathogenesis of JIIM. These findings suggest that a search for a single environmental factor that causes or triggers a single disease as currently defined, such as JIIM, may be unproductive, as patients could have several environmental exposures and these could vary with the disease phenotype that develops. These exposures require confirmation in case-controlled studies to identify whether they are associated with illness onset and whether they play any role in aetiology, yet they suggest focused areas of further research to better understand the environmental factors associated with the onset of JIIM phenotypes and their possible interrelationships with genetic risk factors.
Rheumatology key messages
- Environmental exposures before the onset of juvenile myositis include infections, medications, vaccinations, sun exposure and stressful life events.
- Exposures vary by disease phenotype, defined by age of illness onset, race and autoantibody status.
Sugar-sweetened beverages increases the risk of hypertension among children and adolescence: a systematic review and dose–response meta-analysis
In the current systematic review and meta-analysis, we summarized the studies that evaluated the effects of sugar-sweetened beverages (SSBs) intake on blood pressure among children and adolescents. In a systematic search from PubMed, Scopus, Embase and Cochrane electronic databases up to 20 April 2020, the observational studies that evaluated the association between sugar-sweetened beverages intake and hypertension, systolic or diastolic blood pressure (SBP, DBP) were retrieved. A total of 14 studies with 93873 participants were included in the current meta-analysis. High SSB consumption was associated with 1.67 mmHg increase in SBP in children and adolescents (WMD: 1.67; CI 1.021–2.321; P < 0.001). The difference in DBP was not significant (WMD: 0.313; CI −0.131– 0.757; P = 0.108). High SSB consumers were 1.36 times more likely to develop hypertension compared with low SSB consumers (OR: 1.365; CI 1.145–1.626; P = 0.001). In dose–response meta-analysis, no departure from linearity was observed between SSB intake and change in SBP (P-nonlinearity = 0.707) or DBP (P-nonlinearity = 0.180). According to our finding, high SSB consumption increases SBP and hypertension in children and adolescents.
Background
The increasing prevalence of obesity and weight gain in the pediatric population, as a major health problem, is associated with insulin resistance, hypertension, atherogenic dyslipidemia, and a pro-inflammatory state [1]. Hypertension, as one of the major components of metabolic syndrome, is also associated with obesity and has an increasing prevalence in youth [2]. Hypertension in children and adolescents is defined as average systolic blood pressure (SBP) and/or diastolic blood pressure (DBP) greater than the 95th percentile for gender, age, and height on ≥ 3 occasions, while prehypertension is defined as average SBP or DBP levels greater than the 90th percentile but less than the 95th percentile [3]. Hypertension and elevated blood pressure among children are associated with cardiovascular risk factors and obesity as well. Although major final outcomes of CVD, such as death and cardiovascular disability, do not occur in hypertensive children, they face an increased risk of intermediate markers of target organ damage, such as left ventricular hypertrophy, retinal vascular changes, thickening of the carotid vessel wall, and even subtle cognitive changes [4]. It is widely recognized that blood pressure
Journal of Translational Medicine *Correspondence: abbasalizad_m@yahoo.com 1 Drug Applied Research Center, Tabriz University of Medical Sciences, Tabriz, Iran Full list of author information is available at the end of the article Page 2 of 18 Farhangi et al. J Transl Med (2020) 18:344 levels are influenced by genetic as well as by environmental factors [5,6]. In this regard, more than 90 different genetic polymorphisms have been identified to be associated with high blood pressure [7]. For example, a recent study reported that polymorphism of aldosterone synthase gene is linked with the development of hypertension through increasing the aldosterone level and aldosterone/renin ratio [5]. On the other hand, among environmental parameters, obesity, smoking, alcohol consumption, diet, and physical inactivity likely play a major role in development of hypertension [6]. The role of sugars in developing cardio-metabolic disorders and hypertension in children has been actively investigated. However, recently the role of sugar-sweetened beverages (SSBs) in developing hypertension particularly in children and adolescents is highlighted [8][9][10][11][12]. SSBs, as a liquid form of carbonated or noncarbonated energy beverages, are the principle source of added sugar in diets [13]. For instance, a cross-sectional study from China showed that SSBs provide 10-15% of total calorie intake of school students [9]. Another study in Taiwan indicated that adolescents are also one of the major groups who consume a high amount of SSBs [13]. The US Nutrition Examination Survey showed that approximately 64% of the pediatric and adolescents aged 2-19 years have daily SSB consumption contributing to 8.4% of the daily energy intake [14]. In Iran, the average SSBs intake among children and adolescents was 38.5 ± 75.0 g per day with the mean daily SSB intake of 98 ml in boys and 70 ml in girls [15].
In Australia, an average of 217 mL of SSB per day is consumed by youth, contributing to 5.5% of their total energy intake [16]. In Mexico, SSB intake, as one of the main sources of added sugar, contributes to 8.3% of the total energy intake among children and adolescents [17]. Therefore, SSBs contain excessive amounts of energy in the form of simple sugar. All of these figures exceed the recommended intake of free sugars proposed by the World Health Organization, namely less than 5% of total energy intake [18]. Increased sympathetic nervous system activity [19], a significant increase in blood pressure due to the potential antinatriuretic effect of fructose on salt metabolism [9] and increased serum uric acid due to fructose metabolism [20–22] are several suggested mechanisms for the association between SSB intake and hypertension among children and adolescents. Although numerous studies have confirmed the role of high SSB consumption in developing hypertension in youth [9,13,23–25], there are several inconsistent reports showing no significant association between SSB intake and blood pressure [13,26,27]. Moreover, childhood and adolescence are critical periods for the acquisition of healthy behaviors; therefore, the study of several indices and their co-occurrence at these ages should be a priority. In the current systematic review and meta-analysis, we aimed to summarize the studies that evaluated the association between SSB intake and blood pressure among children and adolescents in two-class and dose–response meta-analyses.
Materials and methods
The current study was conducted according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [28]. The completed checklist has been provided in the Additional file 1 (Additional file 1: Table S1); moreover, the abstract was written according to the 12-item PRISMA extension checklist [29].
Data sources
A systematic search of the PubMed, Scopus, Embase and Cochrane electronic databases was performed to find studies that evaluated the association between sugar-sweetened beverage intake and hypertension, up to 1 April 2020. No language or time restrictions were applied. Moreover, hand-searching of the reference lists of all relevant papers, previous reviews and meta-analyses was performed to cover all relevant publications. The search strategy was created using a combination of MeSH (Medical Subject Headings) terms from the PubMed database and free-text words.
Search strategy
For the search, we used MeSH (Medical Subject Heading) and non-MeSH keywords as listed in the Additional file 1 (Additional file 1: Table S2). The retrieved records were imported into EndNote software (version X8, for Windows, Thomson Reuters, Philadelphia, PA, USA). The search strategy was adapted for each electronic database.
Study selection
In the current systematic review and meta-analysis, observational studies with a cross-sectional, case-control or cohort design evaluating the association between sugar-sweetened beverage (SSB) intake and hypertension (HTN), systolic blood pressure (SBP) or diastolic blood pressure (DBP) were eligible for inclusion. Since there is no official definition for SSBs, they were defined as any type of the above-mentioned drinks. Initially, retrieved citations were merged and duplicates were eliminated to facilitate the review process. Accordingly, the titles and abstracts of all articles were evaluated independently by 2 reviewers (MAF, LN). Full texts of relevant articles were retrieved if they met the eligibility criteria, and were then re-evaluated. Any disagreements were discussed and resolved by consensus.
Risk of bias and quality assessment
The quality of the cross-sectional studies was assessed with the Agency for Healthcare Research and Quality (AHRQ) checklist [30]. There were no quality criteria for inclusion of studies in the current meta-analysis. Items were scored "1" if the answer was "YES" and "0" if the answer was "NO" or "UNCLEAR". The final quality assessment scores were as follows: low quality = 0–3; moderate quality = 4–7; high quality ≥ 8. The details of the studies' quality assessment are presented in Additional file 1: Table S3.
Data synthesis and analysis Two class meta-analysis of the comparison of SBP and DBP between SSB categories
The comparison of SBP and DBP between the highest and lowest categories of SSB intake was performed by measuring unstandardized mean differences as the effect size, calculated as the pooled estimate of the weighted mean difference (WMD) with 95% confidence interval (CI), using fixed-effects or random-effects models according to the level of heterogeneity. When mean values were missing and the median and range were provided, we used the method of Hozo et al. [31], considering the median as the best estimate of the mean for sample sizes above 25 and deriving the SD from the range as described therein. When the SD of the mean difference was not available from the studies, we calculated it as SD change = square root [(SD baseline)² + (SD final)² − (2 × R × SD baseline × SD final)] [32], SD = IQR/1.35 (symmetrical data distribution) and SD = SEM × sqrt(n), where n is the number of participants, IQR is the interquartile range and SEM is the standard error of the mean. When the number of individuals in each category of SSB was not provided in the manuscript, we assumed that an equal number of participants was enrolled in each group. When the odds of hypertension in SSB consumers versus non-consumers were provided, ORs and 95% CIs were used to estimate the combined effects. Subgroup analysis was also performed to identify possible sources of heterogeneity according to the study setting, SSB dose, baseline values of SBP or DBP, design, health status, sample size, region, quality score of the study, gender and study design. The dose of SSB intake was converted to grams of intake per day according to the Food and Agriculture Organization (FAO) guidelines for converting units, denominators and expressions [33].
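The SD imputation rules used above are standard in meta-analysis; the minimal Python sketch below spells them out. It is illustrative only: the range-based rule is shown in its simplest range/4 form rather than the full Hozo et al. decision rule, and the correlation R in the change-score formula is an assumed value that the original studies do not report.

```python
import numpy as np

def sd_from_range(minimum, maximum, n):
    """Simplified range-based SD estimate (range/4), in the spirit of
    Hozo et al.; the full rule varies the divisor with sample size n."""
    return (maximum - minimum) / 4.0

def sd_change(sd_baseline, sd_final, r=0.5):
    """SD of change scores from baseline and final SDs, assuming correlation r."""
    return np.sqrt(sd_baseline**2 + sd_final**2 - 2 * r * sd_baseline * sd_final)

def sd_from_iqr(iqr):
    """Approximate SD for symmetric data: IQR / 1.35."""
    return iqr / 1.35

def sd_from_sem(sem, n):
    """SD = SEM * sqrt(n)."""
    return sem * np.sqrt(n)
```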
Cochran's Q test and the I² statistic were used to identify between-study heterogeneity: I² < 25%, no heterogeneity; I² = 25-50%, moderate heterogeneity; I² > 50%, large heterogeneity [34]. Heterogeneity was considered significant if either the Q statistic had a P value < 0.1 or I² > 50%. Sensitivity analysis, excluding one study at a time, was applied to test the influence of each individual study on the overall pooled estimates and heterogeneity [35]. Begg's funnel plots were assessed to evaluate publication bias, followed by Egger's regression asymmetry test and Begg's adjusted rank correlation test for formal statistical assessment of funnel plot asymmetry. The data were analyzed using STATA version 13 (STATA Corp, College Station, TX, USA), and P values less than 0.05 were considered statistically significant.
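For readers who want to see how these heterogeneity statistics relate to each other, the following sketch computes Cochran's Q and I² from study-level effects and variances using inverse-variance weights; it is illustrative only and not the STATA routine used in the paper.

```python
import numpy as np

def cochran_q_and_i2(effects, variances):
    """Cochran's Q and I^2 for study effect sizes (e.g. mean differences)
    with their variances, using inverse-variance (fixed-effect) weights."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(weights * effects) / np.sum(weights)  # fixed-effect estimate
    q = np.sum(weights * (effects - pooled) ** 2)          # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0  # I^2 as a percentage
    return pooled, q, i2
```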
Dose-response meta-analysis of the association between SSB dose and change in SBP or DBP
For the dose-response meta-analysis, eligible studies had to report the mean (SD) of a continuous outcome (e.g. SBP, DBP) for at least three categories of SSB intake. The median dose in each SSB category was also identified. If medians were not reported in the manuscript, approximate medians were estimated using the midpoint of the lower and upper limits of each category. If the highest category was open-ended, its SSB dose was calculated by assuming that the interval was the same as that of the closest category. The lowest category of SSB intake was considered the reference dose for each study. Potential non-linear associations were examined by fractional polynomial modelling (polynomials) to explore the non-linear effects of SSB dosage (g/d) using the study-specific parameters [36].
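The dose-assignment rules just described can be written as a small helper; the function and the example categories below are hypothetical and only illustrate the midpoint and open-ended-interval conventions.

```python
def category_dose(lower, upper, closest_width=None):
    """Representative SSB dose (g/d) for one intake category.
    The midpoint of the category is used; for an open-ended top category
    (upper is None) the interval is assumed to be as wide as the closest
    (adjacent) category."""
    if upper is None:
        assert closest_width is not None, "width of the closest category is needed"
        upper = lower + closest_width
    return (lower + upper) / 2.0

# Example: categories 0-100, 100-250 and ">250" g/d
doses = [category_dose(0, 100),                        # 50.0  (reference dose)
         category_dose(100, 250),                      # 175.0
         category_dose(250, None, closest_width=150)]  # 325.0
```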
Flow of studies
Our search strategy identified 1661 potentially relevant articles. After removing duplicates and excluding records based on title and abstract screening, 857 manuscripts remained for full-text screening. In total, 671 manuscripts were excluded because of an irrelevant subject, inappropriate design, being reviews (including meta-analyses and systematic reviews), conference or seminar papers, irrelevant age groups, or not evaluating the association between the studied parameters. A final number of 14 manuscripts were included in the current meta-analysis (Fig. 1).
Findings from the dose-response meta-analysis of the association between SSB dose and blood pressure
The details of the dose-response meta-analysis are shown in Table 2, and the results for SBP and DBP are presented in Figs. 5 and 6, respectively. According to the results of the dose-response meta-analysis, no evidence of departure from linearity was observed for the association between SSB dose and mean change in SBP (P-nonlinearity = 0.707) or DBP (P-nonlinearity = 0.180).
Publication bias
The funnel plots are presented in Additional file 1: Figure
Discussion
According to our findings, high SSB intake among children and adolescents was associated with higher SBP and higher odds of hypertension. Moreover, no evidence of departure from linearity was observed in the dose-response meta-analysis of change in SBP or DBP according to SSB dosage. A total of 14 studies with 93,873 participants were included in the current meta-analysis. SSBs such as sugar sodas and juices are one of the main sources of excess sugar consumption, containing 22 to 39 g of sugar per serving [40,41]. The American Academy of Pediatrics (AAP) has recommended that young children refrain from SSB intake because of its potential adverse effects on obesity and related disorders [42]. According to the latest update of the clinical practice guideline issued by the AAP, the prevalence of pediatric prehypertension and hypertension has increased to 14.8% and 16.3%, respectively [2]. In our work, high SSB intake was associated with increased systolic blood pressure and higher odds of hypertension; numerous trials have also evaluated the effects of reduced SSB intake on blood pressure. In the study by Chen L et al., a reduction in SSB intake of 1 serving/day over 18 months was associated with a 1.8 and 1.1 mmHg reduction in SBP and DBP, respectively [43]. Chiu S et al. also reported reduced systolic blood pressure after replacing sugar-sweetened sodas with milk in young male adolescents [44]. Accumulating evidence has linked SSB consumption during childhood to unhealthy weight gain, which is itself associated with a risk of health outcomes such as type 2 diabetes, metabolic syndrome, cardiovascular diseases and other obesity-related disorders in later life [45]. Therefore, SSB intake should be limited in children and adolescents to reduce obesity-related chronic disease risk.
Using subgroup analyses, we could successfully identify possible sources of heterogeneity: setting, region, gender and study quality were significant sources of heterogeneity for SBP, whereas SSB dosage, baseline DBP values, study quality, gender and study design were possible sources of heterogeneity across studies for DBP. Although the effect of high SSB intake on DBP was not significant overall, in the subgroup analyses the results were significant for studies performed in apparently healthy and Asian populations, in school settings, with high baseline DBP values and with large sample sizes. Thus, potential sources of bias were detected with the help of subgroup analyses. School is considered one of the best environments for children's psychological, physical and social development [46]. Since children spend so much of their day in the school setting, the school food environment can contribute to reversing the trend towards childhood obesity [47]. Research has shown that children consume nearly 35-47% of their daily dietary intake while at school, where they are exposed to less healthful food and beverages such as SSBs and energy-dense foods (pizza, french fries, chips and candies) [48]. Improving the school food environment by decreasing the availability of SSBs and other less healthful nutritional practices can therefore be considered a strategy to reduce obesity and its related complications in children and adolescents [47]. Numerous school-based studies have implemented effective strategies to combat children's health problems [49][50][51]. The WHO recommends that reduction of SSB intake among children should be implemented initially in schools by developing rules about consuming soft drinks in schools, removing vending machines selling soft drinks from school premises, providing safe drinking water fountains in schools and other locations where children gather, and promoting healthy dietary behavior in classrooms [52]. Moreover, children with higher baseline DBP values showed a stronger association between SSB intake and DBP; this finding suggests that the adverse effect of high SSB intake may increase with increasing baseline blood pressure. In our research, the association between the mean difference in SBP or DBP and SSB dosage was not non-linear. Therefore, the increase in SBP or DBP does not appear to be a dose-dependent event after SSB consumption; this finding is also similar to a previous report [54]. The role of the FFQ as a self-reported data collection tool for estimating serving sizes might be a source of bias, mostly because of differences in FFQ structure and items and in serving definitions across studies. Also, in different studies the outcome was adjusted for widely heterogeneous confounders, which may have affected the accuracy of the dose-response estimates [53]. In the present meta-analysis, we found that SSB consumption is associated with elevated SBP and DBP among apparently healthy subjects. However, it should be taken into account that most studies included healthy participants and only one study was performed among diabetic subjects. Therefore, the observed results may not reflect the true relationship with regard to the subjects' health status. Since previous studies have shown that SSB intake is positively associated with diabetes and other health outcomes [55], these data support the benefits of lower SSB intake.
Region was another important factor affecting the SSB-DBP association. Our meta-analysis found that in the studies performed in Asia there was a potent effect of high SSB intake on DBP, while this association was not significant for the studies performed in the USA/Oceania. Interestingly, this finding was also similar in the SBP subgrouping. This is possibly due to the fact that most of the studies were from Asia, and this high number of studies gives greater power to the Asian studies; also, in a previous report of global, regional and national consumption of sugar-sweetened beverages in 187 countries, SSB intake among Asian countries was lower than in European and American countries, and these findings were strongly dependent on the age, country and sex of the participants [56]; therefore, the role of these confounders in explaining the association between SSB intake and burden of disease should be considered. On the other hand, cultural differences in lifestyle and sociodemographic factors play an important role in dietary intakes, especially of sugar, and this has been proposed as an explanation for the disparities in disease risk among ethnically diverse populations [57,58]. Cultural factors, by influencing food preferences and choices, may contribute to diet quality and subsequently to health inequalities [57]. On the other hand, according to the latest data, the prevalence of childhood obesity, which coincides with the highest prevalence of hypertension and other metabolic disorders, is among the highest in the world in Latin America [59]. However, only one study from Latin American countries was included in our meta-analysis and, as a result, we lack information on the relationship between SSB intake and hypertension among children and adolescents in this geographical region. Several potential mechanisms may describe how SSB consumption could increase the risk of hypertension. Hyperuricemia, induced by a higher fructose load from sugar-sweetened beverages, may lead to acute endothelial dysfunction and chronic Na retention and consequently predispose individuals to hypertension [60,61]. In this regard, findings from a human study showed a significant increase in blood pressure after acute administration of fructose, while this effect was not seen with glucose [62]. Therefore, it has been hypothesized that the fructose in SSBs is responsible for their association with elevated blood pressure. Heredity appears to play a major role in the development of metabolic abnormalities such as hypertension, especially in childhood, and reports have estimated the heritability of childhood hypertension at 50 percent [63]. However, among the studies included in our meta-analysis, only one citation [9] had included participants without a history of hypertension. On the other hand, none of the included studies adjusted for family history; thus, our findings in the present meta-analysis should be interpreted with caution. Additionally, SSB consumption has been shown to be part of an overall unhealthy dietary pattern and is correlated with unfavorable socioeconomic status [64]. There is limited research directly comparing the effect of SSB intake with that of other foods with regard to cardio-metabolic risk factors such as elevated blood pressure [65]. For example, Amini et al. reported that a Western dietary pattern, which contains a high amount of SSBs, is associated with greater odds of having increased blood pressure [65].
Besides, the Dietary Approaches to Stop Hypertension (DASH) diet, which emphasizes higher consumption of vegetables, fruits, nuts, legumes, fish, chicken, whole grains and low-fat dairy products, and lower consumption of SSBs and red meat, has been shown to be negatively associated with hypertension in adults and children [66].
Recently, accumulating evidence has linked the maternal diet during pregnancy and breastfeeding to the food and taste preferences of children [67]. The fetus experiences the tastes and smells of the maternal diet through the amniotic fluid during pregnancy, and afterwards through breast milk [68]. Thus, maternal intake during pregnancy could program the child's taste preferences towards SSBs, and health care providers should pay particular attention to educating women in this area.
The association between high SSB intake and higher odds of hypertension among children and adolescents was another main finding of the present research. A large number of studies have shown that blood pressure in childhood predicts future hypertension in adulthood [69,70]. Hence, early interventions are warranted.
Strength and limitations
The current systematic review and meta-analysis evaluated, for the first time, the dose-response association between sugar-sweetened beverage intake and hypertension in children and adolescents. Given the growing prevalence of hypertension in this population, this study has clinical and social implications for developing preventive strategies against high SSB consumption in children and adolescents. However, several limitations of the current meta-analysis should also be mentioned. First, the use of different kinds of FFQ for extracting SSB intake is a potential source of bias, because this information is self-reported and the questionnaires have different structures and definitions across studies. Second, different kinds of SSBs were examined in these studies and subgrouping according to SSB type was not possible. Moreover, different studies reported SSB intake in different units, and these conversions might introduce error in estimating the accurate dosage of SSB consumption. Additionally, the studies adjusted for different confounders, which might have affected the results.
Conclusion
The current meta-analysis revealed, for the first time, that high SSB consumption is associated with increased SBP and higher odds of hypertension among children and adolescents. Although further large prospective studies and well-designed intervention studies are recommended to confirm the observed relationships, the results of the present study support recommendations to decrease the consumption of SSBs to prevent and control hypertension and its complications. Developing strategic programs to reduce SSB consumption, particularly in school settings, is suggested to reduce the disease burden in this population.
Elliptic Algebra U_{q,p}(g^) and Quantum Z-algebras
A new definition of the elliptic algebra U_{q,p}(g^) associated with an untwisted affine Lie algebra g^ is given as a topological algebra over the ring of formal power series in p. We also introduce a quantum dynamical analogue of Lepowsky-Wilson's Z-algebras. The Z-algebra governs the irreducibility of the infinite dimensional U_{q,p}(g^)-modules. Some level-1 examples indicate a direct connection of the irreducible U_{q,p}(g^)-modules to those of the W-algebras associated with the coset g^ \oplus g^ \supset (g^)_{diag} with level (r-g-1,1) (g: the dual Coxeter number), which includes Fateev-Lukyanov's WB_l-algebra.
A^{(1)}_l, B^{(1)}_l, D^{(1)}_l, E^{(1)}_6, E^{(1)}_8. We show that at least for A^{(1)}_l and D^{(1)}_l the level-1 elliptic currents e_j(z) and f_j(z) coincide with the screening currents of the deformed W-algebras obtained in [26][27][28]. We also show that the irreducible representations of U_{q,p}(ĝ) are naturally decomposed into a direct sum of the irreducible W-algebras of the coset type for ĝ = A^{(1)}_l, B^{(1)}_l, D^{(1)}_l. This suggests in particular the existence of a deformation of Fateev-Lukyanov's WB_l-algebra [31] as the commutant of the screening operators provided by the level-1 elliptic currents e_j(z) and f_j(z) of U_{q,p}(B^{(1)}_l). It is also worth mentioning that the coset type W-algebras describe the critical behavior of the face type elliptic solvable lattice models [32,33]. Correspondingly, U_{q,p}(ĝ) provides an algebraic framework to formulate the lattice model itself in the spirit of Jimbo and Miwa [34]. This has been established for sl_N in [1,2,7,10,35,36] by constructing the L-operator and introducing the Hopf algebroid structure. In order to construct the L-operator of U_{q,p}(ĝ) and also to get a realization of a generating function of the deformation of the W-algebras, it is crucial to introduce new types of elliptic bosons, which we call the fundamental weight type A^j_m and the orthonormal basis type E^{±j}_m, distinguishing them from the usual ones α_{j,m} (α^∨_{j,m}) corresponding to the simple (co-)roots and appearing as generators of U_{q,p}(ĝ). An idea of such bosons has already appeared in [26][27][28]. We give an explicit construction of them for ĝ = A^{(1)}_n, B^{(1)}_n, C^{(1)}_n, D^{(1)}_n. As a check we calculate the commutation relations among the E^{±j}_m as well as among the elliptic currents k^{±j}(z), the generating functions of the E^{±j}_m, and show that they have a universal form. See Theorems 5.3 and 5.7. This paper is organized as follows. In section 2, we define the elliptic algebra U_{q,p}(ĝ) as a topological algebra generated by the elliptic Drinfeld generators. This is a new definition of U_{q,p}(ĝ) given independently of U_q(ĝ), unlike the previous one in Appendix A in [2]. In section 3, we define a quantum dynamical analogue Z_V of Lepowsky and Wilson's Z-algebra associated with the level-k U_{q,p}(ĝ)-module V and its universal counterpart Z_k. The irreducibility of the level-k highest weight representation of U_{q,p}(ĝ) is shown to be governed by the Z_k-module. In section 4, we give a simple realization of Z_k in terms of the quantum (non-dynamical) Z-algebra associated with the level-k U_q(ĝ)-module and define a standard representation of U_{q,p}(ĝ). We provide some level-1 examples of the standard representations and discuss their relation to the deformation of the W-algebras. In section 5, we give a construction of the new elliptic bosons of the fundamental weight type and the orthonormal basis type and derive various commutation relations.
Definition
Let ĝ = X^{(1)}_l be an untwisted affine Lie algebra associated with the generalized Cartan matrix A = (a_{ij}), i, j ∈ {0} ∪ I, I = {1, ..., l}. We denote by B = (b_{ij}), b_{ij} = d_i a_{ij}, the symmetrization of A. We take d_i = 1 (i ∈ I) for the simply laced cases, d_i = 1 (1 ≤ i ≤ l − 1), d_l = 1/2 for B^{(1)}_l, and d_i = 1 (1 ≤ i ≤ l − 1), d_l = 2 for C^{(1)}_l. Let q = e^{ℏ} ∈ C[[ℏ]] and set q_i = q^{d_i}. Let p be an indeterminate.
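Restating the conventions above in display form (a reconstruction of the garbled notation in the extracted text; the symbol \hbar is assumed from context):

```latex
\begin{aligned}
  % symmetrization of the generalized Cartan matrix
  B &= (b_{ij}), \qquad b_{ij} = d_i\, a_{ij},\\
  % normalization of the d_i
  d_i &= 1 \ (i \in I) \quad \text{(simply laced cases)},\\
  d_i &= 1 \ (1 \le i \le l-1),\ d_l = \tfrac12 \quad \text{for } B^{(1)}_l,\\
  d_i &= 1 \ (1 \le i \le l-1),\ d_l = 2 \quad \text{for } C^{(1)}_l,\\
  % deformation parameters
  q &= e^{\hbar} \in \mathbb{C}[[\hbar]], \qquad q_i = q^{d_i}.
\end{aligned}
```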
We set h̄^* = ⊕_{i∈I} C Λ_i, h^* = h̄^* ⊕ C Λ_0, Q = ⊕_{i∈I} Z α_i and P = ⊕_{i∈I} Z Λ_i. Let N = l + 1 for X_l = A_l, N = l for B_l, C_l, D_l, N = 7 for E_6, N = 8 for E_7, E_8, N = 3 for G_2 and N = 4 for F_4, and consider the orthonormal basis {ξ_j (1 ≤ j ≤ N)} in R^N with the inner product (ξ_j, ξ_k) = δ_{j,k}. For A_l we also set ξ̄_j. We define ǫ_j = ξ̄_j for A_l and ǫ_j = ξ_j for the other X_l. The simple roots α_j and the fundamental weights can be expressed as linear sums of the ǫ_j [37,38]. We follow Kac's conventions.
We define h_{ǫ_j} ∈ h̄ (j ∈ I) by ⟨ǫ_i, h_{ǫ_j}⟩ = (ǫ_i, ǫ_j), and h_α ∈ h̄ for α = Σ_j c_j ǫ_j, c_j ∈ C, by h_α = Σ_j c_j h_{ǫ_j}. We regard h̄ ⊕ h̄^* as the Heisenberg algebra; in particular, we have [h_j, α_k] = a_{jk}. We also set h_j = h_{Λ_j}.
In order to treat the dynamical shifts in the face type elliptic algebra systematically, we introduce another Heisenberg algebra generated by P α and Q β (α, β ∈h * ) satisfying the commutation We also set Here α ∨ j = 2α j /(α j , α j ). For the abelian group R Q = N j=1 ZQ α j , we denote by C[R Q ] the group algebra over C of R Q . We denote by e α the element of C[R Q ] corresponding to α ∈ R Q . These e α satisfy e α e β = e α+β and (e α ) −1 = e −α . In particular, e 0 = 1 is the identity element. Now let us set H = h ⊕ Ph = j C(P ǫ j + h ǫ j ) + j CP ǫ j + Cc and denote its dual space by H * = h * ⊕Qh. We define the paring by (2.1), < Q α , P β >= (α, β) and < Q α , h β >=< Q α , c >=< Q α , d >= 0 =< α, P β >=< δ, P β >=< Λ 0 , P β > . We define F = M H * to be the field of meromorphic functions on H * . We regard a function of P + h = j a j (P ǫ j + h ǫ j ), P = j b j P ǫ j and c, f = f (P + h, P, c), as an element in F by f (µ) = f (< µ, P + h >, < µ, P >, < µ, c >) for We use the following notations.
Definition 2.1. The elliptic algebra U q,p ( g) is a topological algebra over F [[p]] generated by M H * , e j,m , f j,m , α ∨ j,n , K ± j , (j ∈ I, m ∈ Z, n ∈ Z =0 ), d and the central element c. We assume K ± j are invertible and set Note that ψ ± j (z) are formal Laurent series in z, whose coefficients are well defined in the p-adic topology. We call e j (z), f j (z), ψ ± j (z) the elliptic currents. The defining relations are as follows. For g(P ), g(P + h) ∈ M H * , where p * = pq −2c and δ(z) = n∈Z z n . We also denote by U ′ q,p ( g) the subalgebra obtained by removing d.
We treat the relations (2.12), (2.15)-(2.21) as formal Laurent series in z, w and z j 's. In each term of (2.17)-(2.21), the expansion direction of the structure function given by a ratio of infinite products is chosen according to the order of the accompanied product of the elliptic currents.
For example, in the l.h.s of (2.17), should be expanded in z 1 /z 2 . In each term in (2.20), the coefficient function is expanded in z σ(k) /z σ(m) (m < k), w/z σ(m) (m ≤ s) and z σ(m) /w (m ≥ s + 1). All the coefficients in z j 's are well defined in the p-adic topology.
Let U q ( g) be the quantum affine algebra associated with g in the Drinfeld realization [3]. See Appendix A. U q,p ( g) is a natural face type ( i.e. dynamical) elliptic deformation of U q ( g) in the following sense.
Here the smash product ♯ is defined as follows.
Proof. At p = 0, the relations for α ∨ j,m , e j (z), f j (z) (2.12)-(2.21) coincide with those for a ∨ i,m , x + j (z), x − j (z) (A.6)-(A.11) of U q ( g). Therefore from (2.6)-(2.10), one has the isomorphism 2.2 H-algebra U q,p ( g) Let A be a complex associative algebra, H be a finite dimensional commutative subalgebra of A, and M H * be the field of meromorphic functions on H * the dual space of H.
Definition 2.3 (H-algebra).
An H-algebra is an associative algebra A with 1, which is bigraded over H * , A = α,β∈H * A αβ , and equipped with two algebra embeddings µ l , µ r : M H * → A 00 (the left and right moment maps), such that
Dynamical Representations
Let us consider a vector space V over F, which is H-diagonalizable, i.e.
Let us define the H-algebra D H,V of the C-linear operators on V by Definition 2.5. We define a dynamical representation of U q,p ( g) on V to be an H-algebra homomorphism π : U q,p ( g) → D H,V . By the action π of U q,p ( g) we regard V as a U q,p ( g)-module.
Definition 2.6. For k ∈ C, we say that a U q,p ( g)-module has level k if c act as the scalar k on it.
Remark. For the level-0 representations, Definition 2.5 is essentially the same as in [8], by identifying P and P + h with λ and λ − γh, respectively. This definition is valid also for the non-zero level cases [10].
For ω ∈ C, we set and we call V ω the space of elements homogeneous of degree ω. We also say that X ∈ D H,V is and denote by (D H,V ) ω the space of all endomorphisms homogeneous of degree ω.
We define the category C k in the analogous way to the classical affine Lie algebra case [18].
Definition 2.10. For k ∈ C, C k is the full subcategory of the category of U q,p ( g)-modules consisting of those modules V such that
The Dynamical Quantum Z-Algebras
In this section we introduce a quantum and dynamical analogue Z k of Lepowsky-Wilson's Zalgebra associated with the level-k U q,p ( g)-modules and define a category D k of the Z k -modules.
Each representation of Z k in D k turns out to be a dynamical analogue of the quantum Z-algebra derived by Jing [23] from the level-k representation in the U q ( g) counterpart of D k . See sec.4.1.
We also provide the Serre relations (3.23) which are not written in [23] explicitly.
The Heisenberg algebra U q,p (H)
Let U q,p (H) be the subalgebra of U q,p ( g) generated by α ∨ i,n (i ∈ I, n ∈ Z =0 ) and c. It is convenient to introduce the simple root type generators α j,m and α ′ j,m defined by Then we have the induced U q,p (H)-module
The dynamical quantum Z-algebra Z V
Let k ∈ C × and (V, π) ∈ C k . We call πU q,p (H) ⊂ (D H,V ) 00 the level-k Heisenberg algebra. We define the following vertex operators in ( These satisfy the following relations. for j ∈ I and call them the dynamical quantum Z operators associated with (V, π) ∈ C k .
Note that due to the truncation property of the grading of V ∈ C k w.r.t d, Z ± j (z; V) are well defined i.e. the coefficients Z ± j,n (V) of Z ± j (z; V) = n∈Z Z ± j,n (V)z −n in z are well defined elements in (D H,V ) n for all n ∈ Z. For the sake of simplicity of the presentation, we often drop π to denote the elements in D H,V .
From the defining relations of U q,p ( g), we obtain the following relations of the dynamical quantum Z operators.
This vanishes due to (3.4) and where p * = pq −2k . Similarly, [α i,m , Z − j (z; V)] = 0 follows from (3.5) and The case m < 0 can be proved in a similar way.
Definition 3.4. For k ∈ C × and (V, π) ∈ C k , we call the H-subalgebra of D H,V generated by , M H * and d the dynamical quantum Z-algebra Z V associated with (V, π).
The universal algebra Z k
Using the relations in Theorem 3.3, we define the universal dynamical quantum Z-algebra as follows.
Definition 3.5. Let Z ± i,m (i ∈ I, m ∈ Z) be abstract symbols. We set Z ± i (z) = m∈Z Z ± i,m z −m . We define the universal dynamical quantum Z-algebra Z k to be a topological algebra over F[[q 2k ]] generated by Z ± i,m , K ± i (i ∈ I, m ∈ Z), d, M H * subject to the relations obtained by replacing We treat the relations as formal Laurent series in z, w and z j 's in a similar way to those of U q,p ( g) in Sec.2.1. The defining relations are well-defined in the q 2k -adic topology.
Proposition 3.6. Z k is an H-algebra with the same µ l , µ r as in U q,p ( g).
Note that for (V, π) ∈ C k we extend π to the map π : Definition 3.7. For k ∈ C × , we denote by D k the full subcategory of the category of Z k -modules consisting of those modules (W, σ) such that (i) W has level k.
(iii) For every ω ∈ C, there exists n 0 ∈ N such that for all n > n 0 , W ω+n = 0.
The functor Λ
We define a reverse functor Λ : D k → C k as follows. Let (W, σ) ∈ D k be a Z k -module. We define U q,p (H)-module Ind W by requiring α i,m · W = 0 and Let F α,k be the level-k Fock module defined in Sec.3.1. We have a natural isomorphism F α,k ⊗ C [18]. We thus identify the U q,p (H)-module Ind W with F α,k ⊗ C W, with the action π of U q,p (H) These are well-defined elements of D H,Ind W [[z, z −1 ]]. By a similar argument to the proof of Theorem 3.3 one can show that e ′ j (z) and f ′ j (z) satisfy the defining relations of U q,p ( g) with c = k. We hence extend π : U q,p (H) → D H,Ind W to π : U q,p ( g) → D H,Ind W as an H-algebra homomorphism by π(e j (z)) = e ′ j (z), π(f j (z)) = f ′ j (z), By construction, the latter map is uniquely determined.
We thus reach the following definition.
Definition 3.10. We define a functor Λ : We obtain the following theorem analogously to the case of the affine Lie algebras [18].
Theorem 3.11. For k ∈ C × , the two categories C k and D k are equivalent by the functors Ω : C k → D k and Λ : D k → C k . In particular, the level-k U q,p ( g)-module Ind W = F α,k ⊗ C W ∈ C k is irreducible if and only if W ∈ D k is an irreducible Z k -module. 4 The Induced U q,p ( g)-Modules In this section we give a simple realization of the dynamical quantum Z-algebra Z k in terms of the quantum Z-algebra Z k associated with U q ( g) and construct the level-k induced U q,p ( g)modules. We also give some examples of the level-1 irreducible representations.
4.1
The quantum Z-algebra Z k associated with U q ( g) One can apply the arguments similar to those in Secs.3.1-3.3 to the quantum affine algebra U q ( g) in the Drinfeld realization and define the corresponding quantum Z-algebras Z V associated with the level-k U q ( g)-module V [23] and the universal one Z k . See Appendix A. We also denote by C k and D k the U q ( g) counterparts of the categories C k and D k .
Comparing the defining relations of Z k with those of Z k , we obtain the following isomorphism.
Proposition 4.1. We have the isomorphism as an H-algebra by where Z ± j,m denotes the generators in Z k (Definition A.3).
Theorem 4.2. For (W,σ) ∈ D k and generic µ ∈ h * , there is a dynamical representation σ of where P d denotes a C-linear operator on 1 ⊗ e Qμ C[R Q ] such that
Examples of the irreducible representations
We here give some examples of the level-1 irreducible induced representations of U_{q,p}(ĝ) for the simply laced types and for B^{(1)}_l.
The simply laced case :
Let C[Q] be the group algebra of the root lattice Q = ⊕ i Zα i with the central extension: Let us consider the fundamental weight Λ a of g with 0 ≤ a ≤ l for A Then for generic µ ∈ h * , we have from Theorem 4.2 a level-1 irreducible Z 1 ( g) module W H,Q (Λ a , µ) := (F ⊗ C W (Λ a )) ⊗ e Qμ C[R Q ] with the action given by Then from Proposition 4.4 we obtain: Theorem 4.6. A level-1 irreducible highest weight representations of U q,p ( g) is given by V(Λ a + µ, µ) := Ind W H,Q (Λ a , µ) with the highest weight (Λ a + µ, µ): The highest weight vector is 1 1 ⊗ eΛ a ⊗ e Qμ . The derivation operator d is realized as where r, r * ∈ C × , and A j m are the fundamental weight type elliptic bosons given in Sec.5.1.
Theorem 4.7. [26,27] For p = q 2r and p * = pq −2 = q 2r * , i.e. r * = r − 1, the deformed Here E +j m denotes the orthonormal basis type elliptic boson given in (5.3), and : : denotes the normal ordering of the enclosed expression such that the operators E ±j m for m < 0 are to be placed to the left of the operators E ±j m for m > 0. In addition, the level-1 elliptic currents e j (w) and f j (w) of U q,p (A (1) l ) obtained from Proposition 3.9, (4.1) and (4.2) are the screening currents of the deformed W (A l )-algebra, i.e. they commute with T n (z) up to a total difference.
See also [2,7,29]. A similar statement is valid also for the deformed W (D l ) [28] and U q,p (D (1) l ). We also expect that for r ∈ Z >0 satisfying r > g + 1 and for a level-(r − g − 1) dominant integral weight µ, the space F γ,κ (Λ a , µ) becomes completely degenerate with respect to the action of the corresponding deformed W (g)-algebra [26][27][28], although the E 6,7,8 -type deformed W algebras have not yet been constructed explicitly. In order to get the irreducible module one should make the BRST-resolution in terms of the BRST-charge constructed from the half currents of U q,p ( g).
An explicit demonstration for the A^{(1)}_1 case has been discussed in [35]. Remark. In Theorem 4.7, we assumed p = q^{2r} in order to make a connection to the deformed W(A_l)-algebra. The same relation arises naturally when one considers the finite dimensional representations of the universal elliptic dynamical R matrices [5,40].
The B^{(1)}_l case
We follow the work [41] and its quantum analogues [42,43], with a slight modification in the Ramond sector according to [44]. Let e^{α_i} (i ∈ I) be the generators of the group algebra C[Q] with the following central extension.
As before we regard h i (i ∈ I) as an operator such that We also need the Neveu-Schwartz (N S) fermion {Ψ n |n ∈ Z + 1 2 } and the Ramond (R) fermion {Ψ n |n ∈ Z} satisfying the following anti-commutation relations.
and their submodules F N S,R even (reps. F N S,R odd ) generated by the even (reps. odd) number of Ψ −m 's. One should note that for the R fermion Ψ 2 0 = N and {Ψ m , Ψ 0 } = 0 for m = 0. So we have two degenerate vacuum states 1 and Ψ 0 1. We hence consider the extended space and realize the R-fermions by Note that { Ψ m , Ψ n } = δ m+n,0 N (q m + q −m ). We set The action of Ψ m on F N S is given by where u ∈ F N S , whereas Ψ m acts on F R as Let us define the fermion fields Ψ N S (z) and Ψ R (z) by One can derive the following operator product expansions.
It is also easy to calculate the characters of these modules: rr * is the central charge of the WB l algebra by Fateev and Lukyanov [31], and the derivation operator d is realized as where r, r * ∈ C × , and A j m are the fundamental weight type elliptic bosons of the type B l given in Sec.5.1, Ψ m denotes Ψ m on F N S and Ψ m on F R . We obtain: λ∈max(Λa) mod Q 0 +Cδ ch F λ,γ,κ (Λa,µ) coincides with the character of the Verma modules of the WB l -algebra with the highest weight h = 1 2rr * |r(μ + κ +ρ) − r * (Λ a +μ + γ + κ +ρ)| 2 and the central charge c W with r, r * = r − 1 ∈ C being generic. Conjecture 4.8. There exists a deformation of the WB l -algebra such that i) its generating functions commute with the level-1 elliptic currents e j (z) and f j (z) of U q,p (B (1) l ) modulo a total difference, i.e. e j (z) and f j (z) at c = 1 are the screening currents of the deformation of the WB l -algebra, ii) for generic r and µ ∈ h * , F λ,ξ,κ (Λ + µ, µ) is an irreducible module of the deformation of the WB l -algebra.
Remark. All the algebras W (g) appearing in sec.4.2.1 and WB l in this subsection are the Walgebras associated with the coset X (1) − 1, 1). In particular, the WB l is different from the one obtained from the quantum Hamiltonian reduction of the affine Lie algebra B (1) l . The W -algebras associated with such coset describe the critical behavior of the face type solvable lattice models introduced by Jimbo, Miwa and Okado [33].
Elliptic Bosons of Various Types
In this section we introduce elliptic bosons of the fundamental weight type A^j_m and the orthonormal basis type E^{±j}_m for U_{q,p}(ĝ), ĝ = A^{(1)}_l. The level-1 bosons A^j_m and E^{±j}_m are used to realize the derivation operator d and the generating function of the deformed W(A_l)-algebra, respectively, in Sec. 4.2.
Let α_{i,m} be the elliptic bosons of the simple root type as in Sec. 2. We define the fundamental weight type bosons A^j_m by the relation (5.1). Note that, using the matrix B(m) = ([b_{ij} m])_{1≤i,j≤l}, we have [28] A^j_m = Σ_{k=1}^{l} (B(m)^{−1})_{kj} α_{k,m}.
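In display form, the relation just quoted reads as follows (a reconstruction from the extracted text; [x] is presumably the usual q-integer symbol):

```latex
B(m) \;=\; \bigl([\,b_{ij}\,m\,]\bigr)_{1 \le i,j \le l},
\qquad
A^{j}_{m} \;=\; \sum_{k=1}^{l} \bigl(B(m)^{-1}\bigr)_{kj}\,\alpha_{k,m}.
```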
Solving (5.1) we obtain the following.
For D (1) l , Here l , D l .
l . In addition, we have Remark. The level-1 case i.e. c = 1, the A (1) l type relation was given in [26,27] and the D (1) l type was essentially given in [28], where parameters q and t should be identified with our p * 1 2 = p (1) l cases are different from those given in [28]. At least the formulas for B
Education, Smoking and CRP Genetics in Relation to C-Reactive Protein Concentrations in Black South Africans
Because elevated circulating C-reactive protein (CRP) and low socio-economic status (SES), have both been implicated in cardiovascular disease development, we investigated whether SES factors associate with and interact with CRP polymorphisms in relation to the phenotype. Included in the study were 1569 black South Africans for whom CRP concentrations, 12 CRP single nucleotide polymorphisms (SNPs), cardiovascular health markers, and SES factors were known. None of the investigated SES aspects was found to associate with CRP concentrations when measured individually; however, in adjusted analyses, attaining twelve or more years of formal education resulted in a hypothetically predicted 18.9% lower CRP concentration. We also present the first evidence that active smokers with a C-allele at rs3093068 are at an increased risk of presenting with elevated CRP concentrations. Apart from education level, most SES factors on their own are not associated with the elevated CRP phenotype observed in black South Africans. However, these factors may collectively with other environmental, genetic, and behavioral aspects such as smoking, contribute to the elevated inflammation levels observed in this population. The gene-smoking status interaction in relation to inflammation observed here is of interest and if replicated could be used in at-risk individuals to serve as an additional motivation to quit.
Introduction
Non-communicable diseases (NCDs) accounted for 71.3% of global deaths between 2005 and 2015 [1]. Of these NCDs, cardiovascular disease (CVD) took the highest toll in developing nations such as South Africa [2]. Several CVDs share an inflammatory origin, which is influenced by numerous factors, including anthropometry, level of physical activity, and the genetic background of an individual [3]. One such marker of inflammation, which has been determined to predict future CVD risk, is the cytokine, C-reactive protein (CRP). Elevated levels of this protein, i.e., >3 mg/L, are predictive of future CVD [4,5]. CRP is generally elevated in black individuals, coinciding with notably stronger inflammatory responses as well as higher CVD risk than in other ethnicities [6][7][8][9]. Other factors besides
Materials and Methods
This cross-sectional, observational study was nested within the South African arm of the Prospective Urban and Rural Epidemiology (PURE) study, with details of the sampling strategy described by Pisa et al. [16]. In total, 2010 apparently healthy adults (>30 years), from both rural and urban communities, were included at baseline in 2005. Individuals with a measured fever (tympanic temperature > 38.0 °C) were excluded. Further exclusion criteria were known acute overt pre-existing diseases and being pregnant or lactating at the time of sampling.
Biochemical Measurements
Fasting blood samples were collected by registered nurses. High-sensitivity CRP concentrations were measured on a Sequential Multiple Analyzer Computer (SMAC), using a particle-enhanced immunoturbidometric assay (Konelab™ autoanalyzer, Thermo Fisher Scientific Oy, Vantaa, Finland). Quantitative determination of high-density lipoprotein cholesterol (HDL-c), triglycerides and total cholesterol in the sera of participants were done on a Konelab™ 20i autoanalyzer (Thermo Fisher Scientific). Low-density lipoprotein concentrations (LDL-c) were calculated using the Friedewald equation for those with triglycerides below 400 mg/dL. Nurses trained in voluntary counseling and human immune deficiency virus (HIV) testing performed HIV tests in accordance with prevailing governmental and WHO guidelines. Pre-test counseling was provided in group format, after which signed informed consent was obtained individually. Those testing positive for HIV on a rapid First Response HIV1-2.O card test (Transnational Technologies Inc. PMC Medical, Nani Daman, India), were retested using a card test developed by Pareeshak (BHAT Bio-tech, Bangalore, India) to ensure diagnostic accuracy. All participants, irrespective of HIV status, received individual post-test counseling. Whole EDTA blood was used for measuring glycated hemoglobin (HbA1c) from fasting participants, with a D-10 Hemoglobin testing system (Bio-Rad Laboratories, Hercules, CA, USA).
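To make the lipid calculation explicit, a minimal sketch of the Friedewald computation described above is shown below; it is illustrative only (mg/dL units are assumed to match the stated 400 mg/dL triglyceride cut-off; for mmol/L inputs the divisor 5 becomes 2.2):

```python
def ldl_friedewald(total_chol, hdl_c, triglycerides):
    """LDL cholesterol by the Friedewald equation (values in mg/dL).
    Returns None when triglycerides are 400 mg/dL or higher, where the
    equation is not applied (as in the study's rule)."""
    if triglycerides >= 400:
        return None
    return total_chol - hdl_c - triglycerides / 5.0
```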
Anthropometric and Physiological Measurements and Lifestyle Questionnaires
The participant's body weight was measured in minimal clothing with arms hanging freely at the side. Weight was measured in duplicate, with the mean recorded. Height was measured in duplicate with a stadiometer, with the head in the Frankfort plane in a fully erect state while the participant inhaled. The mean was then calculated and recorded in meters. Body mass index (BMI) was calculated using the standard formula and reported as kg/m 2 . Waist circumference (WC) and hip circumference were measured using unstretchable metal tape in accordance with the recommendations of the International Society for the Advancement of Kinanthropometry. An Omron automatic digital blood pressure monitor (Omron HEM-757, Kyoto, Japan) was used to measure the right brachial artery blood pressure in the sitting position. Participants did not smoke, exercise, or eat 30 min beforehand, and had to be rested and calm for five minutes before measurement.
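As a small worked example of the anthropometric derivation mentioned above (a sketch, not the study's data pipeline):

```python
def bmi(weight_kg, height_m):
    """Body mass index from the mean weight (kg) and mean height (m),
    reported as kg/m^2."""
    return weight_kg / height_m ** 2

# e.g. 70 kg at 1.65 m gives a BMI of about 25.7 kg/m^2
```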
Volunteers responded to an interviewer-administered questionnaire in their language of choice, in which various socio-demographic variables (age, gender, medical history (stroke and diabetes incidence), tobacco use, alcohol usage, and SES factors (i.e., roof type, access to electricity, primary cooking fuel, primary heat source, water source, and education)) were collected. Water sources were grouped into sourced water (i.e., from wells, rivers, or boreholes) or municipal water sources. Even though we did not have access to a standardized SES-index, we overcame this by focusing on individual factors that constitute an individual's living environment, to identify factors for which mitigation efforts could be instituted in an attempt to lower CRP concentrations. Food portion books were specifically designed and standardized for the South African PURE-North West population.
Validated, interviewer-based quantitative food frequency questionnaires or qFFQs [20] were completed to determine dietary intakes. The data obtained from qFFQs were entered into the Foodfinder3 program (Medical Research Council, Tygerberg, South Africa, 2007) and sent to the Medical Research Council of South Africa for nutrient analyses.
Genetic Analyses
Polymorphic sites and novel SNPs within the CRP gene were identified by sequencing in 30 randomly selected DNA samples and an in silico search. These variants were scored (varying from 0-1) by the Assay Design Tool to determine a viable customized genotyping array to be analyzed using the Illumina ® VeraCode GoldenGate assay technology on a BeadXpress ® platform (Illumina ® Inc., San Diego, CA, USA) for genotyping the selected SNPs. Ultimately only 12 CRP SNP clusters passed the quality control (QC) measures for making genotype calls by having a GenCall score >0.5 and a call rate ≥0.9 and are reported on here (see Table S1 for SNP details). The BeadXpress ® analysis was performed by the National Health Laboratory Service (NHLS) at the University of the Witwatersrand, Johannesburg.
Statistical Analyses
A total of 1569 individuals, for whom we had both CRP concentrations and all the genetic information regarding the SNPs investigated in the CRP gene, were included in our analyses. Statistical analyses were conducted using R [21]. Continuous variables were inspected for normality using histograms and measures of skewness. Variables with a skewed distribution were natural log-transformed and reported as medians and interquartile ranges. Based on global recommendations for CRP cut-off values, data subsets were created, i.e., ≤3 mg/L and >3 mg/L [5]. The compareGroups library was used to construct bivariate tables comparing our constructed cohorts, using non-parametric methods for both continuous and categorical data. Spearman correlations were computed to test for associations between continuous variables, while median values and interquartile ranges were reported for each categorical variable. Significance testing was conducted using the independent two-group Mann-Whitney U test or the Kruskal-Wallis one-way ANOVA by ranks test. A backward stepwise linear regression was conducted using the stepAIC function within the MASS library. Models were evaluated based on the Akaike information criterion (AIC). The final variables retained were evaluated for collinearity. Association analyses for SNP × environment interactions were then performed using the SNPassoc library, including the covariates obtained from the linear regression model with the lowest AIC value. This was done for each SNP in combination with each demographic and SES factor. Where applicable, p-values were adjusted using the method suggested by Bonferroni.
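A condensed sketch of this analysis flow is given below. The original analyses were run in R (compareGroups, MASS::stepAIC, SNPassoc); the Python translation, the column names, the file name and the use of statsmodels are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pure_nw_crp.csv")               # hypothetical file name
df["lnCRP"] = np.log(df["CRP"])                    # natural-log transform of skewed CRP
df["crp_elevated"] = (df["CRP"] > 3).astype(int)   # global cut-off of 3 mg/L

def backward_aic(data, response, predictors):
    """Backward stepwise selection on AIC, mimicking MASS::stepAIC."""
    current = list(predictors)
    best = smf.ols(f"{response} ~ " + " + ".join(current), data=data).fit()
    while len(current) > 1:
        trials = []
        for p in current:
            reduced = [x for x in current if x != p]
            fit = smf.ols(f"{response} ~ " + " + ".join(reduced), data=data).fit()
            trials.append((fit.aic, fit, reduced))
        aic, fit, reduced = min(trials, key=lambda t: t[0])
        if aic >= best.aic:
            break                                   # no further drop improves AIC
        best, current = fit, reduced
    return best, current

# hypothetical candidate predictors
model, kept = backward_aic(df, "lnCRP",
                           ["age", "waist_cm", "hdl_c", "hba1c",
                            "heart_rate", "education_12y", "smoker"])
```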
Ethics Statement
The authors and study coordinators complied with all ethical standards. The PURE-SA (North West province) study was approved by the Health Research Ethics Committee of the Faculty of Health Sciences, North-West University (NWU), in accordance with the ethical principles outlined by the Declaration of Helsinki with approval numbers 04M10, for the larger study, and NWU-00004-17-S1 for our affiliated study. Goodwill permission was granted by household heads and community leaders (mayors and traditional leaders), as well as the Department of Health of South Africa. Signed informed consent was given by each participant after being apprised of the aims of the study. Sufficient time for reflection was given, and subjects could withdraw at any time, or withhold whatever information they were not willing to share, without reprisal.
Demographics and Anthropometrics of the Study Population and Their CVD-Risk Factors Stratified to at-Risk CRP Phenotypes
Women were more likely to present with elevated CRP concentrations (median unadjusted value of 3.58 mg/L). Post-menopausal women (self-reported with amenorrhea), had higher median CRP concentrations (4.31 [1.72; 11.9] mg/L) than men (2.42 [0.72; 7.87] mg/L) and pre-menopausal women (3.05 [0.82; 9.00] mg/L; p < 0.0001). Individuals with elevated CRP concentrations were physically larger than those with normal CRP, as indicated by higher BMI and other anthropometric markers, even though similar daily dietary intakes were noted. Post-menopausal women also had significantly larger WC (median: 82.4 cm) than pre-menopausal women (median: 79.0 cm) and men (median: 74.2 cm). After adjusting for WC, which differed between the genders (p < 0.0001), the difference in CRP concentrations observed between men and women, as well as pre-and post-menopausal women, disappeared. Those with elevated CRP were also significantly older, although age was only weakly, but significantly, associated with CRP (ρ = 0.12). Median CRP concentrations were similar irrespective of HIV status, tobacco and alcohol use. Smokers had a lower median WC (74 cm) as opposed to grouped individuals who had never smoked or were former smokers (81.4 cm, p < 10 −12 ).
Median CRP concentrations were similar (p > 0.05) in rural and urban participants (Table 1), with similar proportions of individuals being classified as having normal or elevated CRP concentrations observed in these two areas. Factors pertaining to SES differed between the two localities (data not shown). Rural participants were more likely to be married and have lower education levels than urbanites, thereby pointing toward a lower SES level for rural dwellers. Ruralists were also more likely to access public water systems such as communal wells, to use wood as a primary heating and cooking fuel source, and have roofs constructed of corrugated iron sheeting with no insulation. Next, we stratified factors pertaining to SES according to CRP risk values (Table 2). Except for marital status, similar distributions and median CRP concentrations were observed for all investigated SES factors. Individuals presenting with normal CRP concentrations were more likely to identify as never being married; however, when adjusting for age and WC, similar CRP concentrations were observed across all marital status categories. Smokers had significantly lower formal educational attainment than non-smokers (data not shown). Individuals with elevated CRP concentrations presented with significantly poorer markers of CVD risk than those with normal CRP concentrations ( Table 3). Cases of elevated CRP were prone to co-present with increased blood pressure, increased heart rate and a poorer lipid profile. Median glycated hemoglobin concentrations were also increased in individuals with elevated CRP concentrations. To describe CRP concentrations and the interactions of modulators thereof on a physiological scale, natural log-transformed CRP (lnCRP) concentrations were modeled using a stepwise, backward linear regression approach. Eight statistically significant predictors were identified from the measured variables, including clinical, demographic and socio-economic factors (Tables 3 and 4). The model presented accounted for 14.3% of the variation observed in CRP concentrations of our black population. A 22.0% predicted reduction in CRP concentration was observed in response to an increase of 1 mmol/L in HDL-c. All SES elements investigated in this study failed to predict CRP concentrations, except for whether an individual had attained 12 or more years of formal education, which resulted in a predicted reduction of 18.9% in CRP concentrations.
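The percentage interpretations above follow from exponentiating coefficients of the ln(CRP) model; the short sketch below back-calculates the implied coefficients from the percentages quoted in the text (it does not use the actual Table 4 estimates).

```python
import math

def percent_change(beta, delta=1.0):
    """Predicted % change in CRP for a delta-unit change in a predictor
    when the outcome is modeled on the natural-log scale."""
    return (math.exp(beta * delta) - 1.0) * 100.0

beta_hdl = math.log(1 - 0.220)   # about -0.248 per 1 mmol/L HDL-c
beta_edu = math.log(1 - 0.189)   # about -0.209 for >=12 years of education
print(percent_change(beta_hdl))  # about -22.0
print(percent_change(beta_edu))  # about -18.9
```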
Effects of SES Factors on Association between Different CRP Genotypes and CRP Concentrations
The odds of presenting with elevated CRP concentrations were independently investigated for each demographic or SES component included in this study in combination with each of the twelve CRP genotypes. The only significant interaction observed in our population was that of smoking status in individuals of differing rs3093068 genotypes. Individuals indicating that they were former smokers were included in our association analysis as abstainers to enable sufficient statistical power. Smokers had a lower median WC (74 cm) than grouped individuals who had never smoked or were former smokers (81.4 cm, p < 10⁻¹²). In contrast, current smokers presented with a higher median daily dietary intake (7306 kJ) than current non-smokers (7037 kJ). The odds of presenting with elevated CRP concentrations were 71% higher for smokers homozygous for the minor allele (C/C) than for non-smokers (Figure 1). Individuals with the wild-type had similar odds of presenting with elevated CRP concentrations, irrespective of their smoking status.
Figure 1. Interaction between tobacco smoke and rs3093068 in the Prospective Urban and Rural Epidemiology study-North West arm. The minor allele is associated with increased CRP concentrations, which are further increased in smokers. Men were more likely to be current smokers (59.5% vs. 47.6%; p < 0.0001). Smokers homozygous for the minor allele had a 71% increased risk of presenting with elevated CRP concentrations. Abbreviations: CRP, C-reactive protein; C, cytosine; CI, 95% confidence interval; G, guanine.
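For readers unfamiliar with how such interaction effects are reported, the sketch below shows how an odds ratio of roughly 1.71 (a 71% increase) and its 95% CI would be computed from a 2×2 table; the counts are made up for illustration and are not the study's data.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and 95% CI from a 2x2 table:
    a/b = elevated/normal CRP in the exposed group (e.g. C/C smokers),
    c/d = elevated/normal CRP in the reference group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # standard error of ln(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

print(odds_ratio(41, 24, 250, 250))   # illustrative counts -> OR of about 1.71
```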
Discussion
Little evidence exists on whether, and to what extent, individual SES factors that constitute an individual's immediate living environment affect their inflammatory status. In this study, we failed to find sufficient evidence that the investigated SES elements acted individually as an impetus for elevated CRP concentrations, the exception being that lower CRP concentrations were predicted in adjusted analyses for individuals completing at least 12 years of formal education. Our evidence, however, highlights that the inflammatory phenotype observed in black populations is the result of a combination of various factors, including, but not limited to, the combined effects of genetics and individual lifestyle choices such as smoking. Moreover, our results indicated that black participants with CRP concentrations above 3 mg/L have a higher prevalence of CVD risk factors.
Several epidemiological studies exclude individuals with CRP concentrations above 10 mg/L, which is seen as the clinical cut-off point for acute infections. However, [22] reported that certain individuals, especially obese women, had repeatedly presented with CRP concentrations above 10 mg/L without any indication of acute infection. In our study, all individuals examined had normal body temperatures, reducing the likelihood of acute infection as a cause of excessively elevated CRP concentrations in the 363 (23.1%) individuals presenting with CRP concentrations above 10 mg/L. Nienaber-Rousseau et al. (unpublished) proved statistically that excluding participants within our population with CRP concentrations higher than 10 mg/L leads to the exclusion of certain CRP genotypes, which results in a biased representation of the actual drivers of increased CRP concentrations observed in black African populations. Furthermore, we included these individuals as excluding them would have decreased the statistical power when stratifying within the different SES components and different genotypes. We also included individuals who were seropositive for HIV, as median CRP values were similar regardless of HIV status. Infection rates are also higher among individuals with low SES, which could result in the introduction of bias should HIV-positive individuals have been excluded [23].
Elevated CRP concentrations were regularly observed in the women included in our study. Study [24] reported that black women were more likely to have CRP concentrations above 3 mg/L and that elevated CRP was more frequently observed in post-menopausal women, although it was strongly correlated with abdominal obesity. Likewise, gender as well as pre-menopausal and post-menopausal differences dissipated when we corrected for WC in our study, implicating WC as a major contributing factor to the development of an elevated CRP phenotype. Anthropometric markers such as waist and hip circumferences, weight and BMI had significant positive correlations with CRP concentration ( Table 2, ρ = 0.27, 0.21, 0.22 and 0.24, respectively). Various other reports record the influence of adiposity on the inflammatory state of the individual [25,26]. The association between BMI and CRP, irrespective of ethnicity, was reported in another study [27], and elevated CRP concentrations, as well as increased CVD risk, are often the result of increased adiposity [26].
Using CRP as a prognostic marker for future CVD risk appears to be independent of ethnic or geographical factors [6]. Factors pertaining to CVD risk were observed as being elevated in individuals harboring elevated CRP concentrations in our sample. Similar to our findings, a multi-ethnic study reports increased resting heart rate to be associated with increased concentrations of inflammatory markers, including CRP [28]. Inflammation markers, and especially CRP, are also linked to vascular stiffness, atherosclerosis and the development of end-organ damage, characteristics of a long-term hypertensive state combined with hyperlipidemia [29]. African Americans are also reported to be more likely to exhibit elevated HbA1c concentrations, with CRP highly correlated with HbA1c levels [30]. Excessive weight, hyperlipoproteinemia, and decreased insulin sensitivity are traits associated with the metabolic syndrome or MetS [31]. Combined with the elevated inflammation levels, MetS was, therefore, prominent in the group of volunteers studied and even more so in post-menopausal women, regardless of their SES.
SES factors differed between urban and rural participants; however, CRP concentrations were similar regardless of where individuals resided. The lack of any impact exerted on CRP concentrations by SES elements (Table 2) further strengthens our observation that individual SES components are not the main causative effect of elevated CRP concentrations in this population. The detected similarity in CRP concentrations between different levels of urbanization with varying markers of SES is in contrast to observations made in an Asian population, where city dwellers had higher CRP concentrations [32]. The years following the fall of apartheid in South Africa were marked by unprecedented rates of urbanization, which improved economic activity and increased rural-to-urban migrations [16]. Furthermore, improved access to basic utilities resulted from governmental efforts, even in the rural areas included in this study [33]. It may, therefore, be argued that the definition of what constitutes a rural area differed between our two studies, which may have resulted in this discrepancy. Of all the included SES factors, only education was determined to be an influencer of CRP concentration, and only when controlling for other confounding variables.
Although some of the values in Table 4 suggest substantial changes in CRP for a single unit change in a specific variable, the interpretation should consider the physiological implications of such alterations. As reported in previous studies [34] similar to our investigation, age-dependent increases in CRP were associated with elevated adiposity due to changes in hormonal balance. Substantial reductions in CRP were predicted with a 1 mmol/L change in HDL-c; however, eliciting this response may prove difficult in a resource-poor environment. These covariates, however, do predict possible routes of intervention, whereby proper nutrition (focusing on weight management, treatment of hyperlipidemia, and glycemic control), as well as increased physical activity (to improve resting heart rate) and increasing education levels, can reduce inflammation in populations [35][36][37]. Completing 12 or more years of formal education was associated with reduced CRP concentrations (Table 1, unadjusted), although this reduction was found to be non-significant. In our multivariate model, completing secondary school or tertiary education corresponded to a significant 18.9% reduction in predicted CRP concentration. The authors of [13] estimate that 87.9% of the CRP variation attributed to education level could be primarily explained by the higher number of smokers, the lower dietary quality and the reduced levels of exercise in lower educated individuals. Similarly, it was reported for our cohort that education levels were associated with lower BMIs in both men and women [16].
Various other studies have also failed to find differences in the CRP concentrations of smokers versus non-smokers, although smoking is known to affect CVD risk [38,39]. Smokers in our study had lower WC, with higher daily dietary intakes than non-smokers. Previously, African American smokers were reported to have lower levels of weight gain than white Americans [40]. However, nicotine does increase energy expenditure [40], which may have resulted in the smaller WC observed in active tobacco users in our study. To our knowledge, we present the first indication that smoking status results in increased CRP concentrations in individuals harboring the minor allele of rs3093068, of which the major allele is associated with increased CRP concentrations [19]. Smokers with the minor allele had odds of presenting with elevated CRP concentrations statistically similar to those with the wild-type, negating the CRP-lowering effects of the minor allele.
Conclusions
Our main findings suggest that CRP concentrations in black South Africans are not associated with individual SES factors. Even though the SES factors included are not primarily responsible for the elevated CRP concentrations observed, improving the general SES of individuals commonly results in better health outcomes. Therefore, there should be collective efforts to improve the general socio-economic standing of the people of the Republic of South Africa. Health promotion efforts should focus on reducing the individual symptoms that constitute MetS, especially among individuals with lower education levels. Here we also presented the first evidence that smoking status increases CRP concentrations in individuals who are homozygous for the minor allele of rs3093068, although more evidence is needed from other ethnicities. Our data were also cross-sectional in nature and therefore cannot capture whether future CRP concentrations will be moderated by improvements in these SES factors. Future studies measuring SES factors should, consequently, also include questions regarding the period for which the individual had access to improved standards of living.
Supplementary Materials: The following are available online at http://www.mdpi.com/1660-4601/17/18/6646/s1, Table S1: CRP SNPs, their minor allele frequencies. Funding: Data used in the presented work were collected as part of the North West Province, South African arm of the Prospective Urban and Rural Epidemiological (PURE) study. No external funding sources were utilized for this retrospective analysis.
|
2020-09-17T13:06:16.488Z
|
2020-09-01T00:00:00.000
|
{
"year": 2020,
"sha1": "8ce8fc3fd35a6dd5cfc9cc03082b45f0e594c963",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/17/18/6646/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "e4695c1ae9d9c1cf26c3781f11a4c5a360613c79",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
235637330
|
pes2o/s2orc
|
v3-fos-license
|
Towards a Privacy Respecting Image-based User Profiling Component
This paper explores the development of a framework for content-aware user profiling, studying how image producers and consumers can be better understood and consequently better served through services such as matchmaking and friend recommendations. User interests and similarities are extracted and analyzed on the edge, employing state-of-the-art CNN models over user images both for classification and for building latent user representations from personal media content. A private-by-design approach is employed through the development and deployment of models on-device, avoiding the need for communicating personal data to a central service. Experimental results show that user profiling can provide accurate ranking of the users’ interests and meaningful user associations through profile similarity.
I. INTRODUCTION
The challenge of effectively understanding user interests and traits through their online behaviour becomes ever more apparent as users spend a growing amount of time online, e.g. consuming multimedia content, searching for information related to a given interest or otherwise socially interacting. With mobile internet usage at an all-time high, information and content stored on a user's device can ultimately contribute to building very effective and precise representations of the user, their preferences and behavioural patterns. Understanding the particularities of a user can significantly boost the appeal and effectiveness of applications, allowing the optimisation and personalisation of their service. For instance, in the context of the HELIOS H2020 project (https://helios-h2020.eu/), we are building a user profiling framework to provide content-aware matching between users. User profiling is, however, a multi-faceted problem: the issues one must address include the choice of information to be utilized, the decision of whether to use existing standards or to create new ones, and the privacy issues related to storing and processing personal data.
Mobile user profiling can be roughly categorized as follows: i) explicit profile extraction, where users are profiled utilising explicitly defined mobile data, e.g. demographics, website clicks, mobile purchases and in-app behaviours [1], and ii) implicit profile learning, where methods employ collaborative filtering, latent factor models, network embedding and deep learning to build a user representation [2]. The effectiveness of methods that fall into the explicit profile extraction category depends highly on the accurate collection of comprehensive user-related information and thus often suffers in terms of consistency and scalability. On the other hand, implicit profiling often suffers from information scarcity (especially true for methods that depend on abundant peers' information to group users [3] or demand domain knowledge to regularize the employed models and avoid overfitting, e.g. matrix/tensor factorization methods that model profiles as latent factors and learn user profiles through optimization within a large parameter space [4]). More recent works have applied deep neural networks to learn the network embeddings of users for many end-to-end deep learning tasks, such as deep learning based recommender systems [5], [6].
In this work, we focus on utilizing image content present in the user's device (e.g. stored, liked, exchanged images) for building user profiles. For this to work effectively we need to be able to capture the semantics of the images instead of simply what is depicted, and to this end we present two distinct methods. The first depends on using explicitly defined concepts that are deemed relevant to the user profiling task and training a hierarchical classifier to recognize them. The second method, on the other hand, attempts to learn an appropriate user representation from user similarity data that we infer from the autotags expansion pack of the YFCC100m dataset [7]. Using Deep Metric Learning (DML) our model learns a user embedding that relates the distance of the embeddings to the similarity of the users.
While appropriately defining the nature of user profiles is essential, there is also a need for proper datasets to train new deep learning models. The challenge arises due to lack of publicly available, large, high quality datasets of user images annotated with relevant information to accommodate the task of user profiling. Thus, we build two datasets for the training and evaluation purposes of this work. The first one, described in section IV-A0a and [8], contains images from Pinterest categorized in predefined interest classes as well as 12 randomly selected Pinterest users, whose pinboards have been manually labeled to fit the aforementioned interest classes, for testing purposes. The second one, described in section IV-A0b, is based on a subset of the YFCC100m dataset [7] with images grouped per user and each user labelled by a feature vector calculated using the auto tags of their images.
We also want to safeguard the privacy of the user data and thus require that the proposed models can run on the users' mobile devices without communication to external servers. This design decision limits the memory and computation resources available to our models and requires the right tradeoff between performance and efficiency. To summarize, the contributions of this work include the following:
• the development of a hierarchical classification scheme for user profiling;
• a novel formulation of a DML model to calculate pairwise user similarities;
• the construction of a publicly available dataset to be used for user profiling or similar tasks;
• a study on the feasibility of developing such models in a privacy respecting manner on mobile devices.
II. RELATED WORK
A. Profiling with Predefined Categories
Using predefined categories to construct user profiles is the most straightforward way to proceed, offering the benefits of simplicity and interpretability. A common approach is based on detecting common objects and recognizing scenes in users' image collections, as is done in [9], [10].
Compared to object and scene categories, interest categories, are more directly related to the users and thus constitute a promising alternative. In our work, we opted for this route and the specifics of our interest-based model will be fleshed out in section III-A. The approach followed in [11], is similar to the classification method we present in that they trained a model based on interest categories with data scraped from Pinterest. However, they did not make their dataset publicly available and the technique they used to collect the data depended on the pinboards being annotated to the appropriate category, a label that is no longer available from Pinterest. The categories were also not fine-tuned to better represent the task, the classification was flat without hierarchical elements and there was no effort to make the models appropriate for mobile use.
B. Profiling with Mined Representations
While profiling with predefined categories has the benefits of being more straightforward and interpretable, the extracted information is inevitably limited by the fixed nature of the categories. To address this limitation, methods that learn the appropriate user representation from data have been proposed [12]- [14]. In particular, DML techniques have been employed in [14] to train a user similarity network, as we also do in section III-B, but using a different formulation.
In the remainder of this section we will provide a brief introduction to DML. Metric Learning is the field that focuses on learning a distance function to measure the similarity between data samples. Recent works combine Metric Learning with Deep Learning, learning a distance function through the training of a Deep Neural Network (DNN). The main objective of such works is to approximate an embedding function that maps samples into a feature space where relevant samples are closer to each other than to irrelevant ones. A DNN can approximate such an embedding function through a training scheme that penalizes the violation of the samples' ordering.
Any deep learning architecture can be selected and adapted based on the underlying problem for the implementation of the DNN. Several DML setups have been proposed in the literature for the training of the DNN [15]- [18].
Considerable effort has also been invested over the years into a critical step of the DML process: organizing the data samples in the form required by the loss function and composing a representative training set. We follow the semi-hard negative mining scheme [16], where, along with the hard negatives [19], we also consider those negative samples that are further away from the anchor than a given positive sample but still closer than the anchor-positive distance plus the margin, thus offering a softer transition between positive and negative samples and significantly boosting the overall performance.
III. METHODOLOGY
This section presents the proposed framework for contentbased user profiling. First we describe the profiling through the extraction of user interests from predefined categories introducing the concept of interest categories. Next we focus on building a method able to capture more semantic nuances turning away from predefined concepts and instead exploiting user similarity data to learn a latent user representation.
A. Profiling based on Predefined User Interests
Our design strategy starts by trying to classify the users' images in interest categories and interpret the interest distribution of each user's images as their profile. To achieve this we first need to define the interest categories. Since there are not any suitable publicly available datasets, we created one from scratch, utilizing the online platform Pinterest, as it is a popular social network where users post content that reflects their interests.
Inspired by the Pinterest topics (now rebranded as ideas) we selected the 15 interest categories shown in Table I and also further split four of the most popular ones into subcategories, displayed in Table II. We define two levels of detail in order to provide better flexibility and accuracy, given that there is a natural hierarchy in the interest categories and the images of a compound category tend to share a lot of similarities, making it often difficult to accurately identify the subcategories. For example, clothes and bags are two subcategories of fashion, but an image of a woman holding a bag can be misleading regarding what constitutes the implied interest. Furthermore, the flexibility of being able to compute both coarse and fine profiles is important in a mobile setting where memory and computation resources are valuable. This hierarchical scheme is implemented by first training a coarse classifier on the 15 categories of Table I and then training another four local classifiers according to Table II. A thresholding step is also introduced to account for the fact that not all the available images of a user are expected to be exploitable for profiling. It is quite possible that some will not convey any useful information about the interests of a user, the most obvious example being photos that were taken by accident, as well as blurry and distorted photos. To increase the system's robustness against such noisy pictures we make use of a filtering mechanism that rejects the images that our model classifies as not informative enough. A non-informative image in our case would correspond to an image that the model cannot clearly classify to some interest(s). From an information theory perspective, the uninformative images would be those that maximize the entropy [20] of the output probability distribution, which occurs when the latter approximates the uniform distribution. In such cases, the output probability distribution does not present a clear peak and the model does not confidently predict a category. Thus, when the probability distribution is found to be entirely below a predefined threshold, the model should disregard the image. Fig. 1 and 2 show the inference process for a coarse and fine profile, respectively. Let us assume that a user has N images I_i, i = 1, ..., N. To calculate the user's coarse profile each image is passed to the coarse classifier, c_coarse, to produce the per-image coarse category distributions d_coarse^i.
Next, the filtering procedure is activated to disregard images whose distribution lies below the threshold t, and the final coarse user profile is calculated as the average of the distributions of the remaining images. That is, let I_t = { i : max(d_coarse^i) > t } be the set of informative images; then the user's coarse profile at threshold t is p_coarse = (1/|I_t|) Σ_{i ∈ I_t} d_coarse^i. To calculate the fine distribution of the image I_i, that is d_fine^i, we can simply replace the entry at which each compound category appears in d_coarse^i with the output of the corresponding local classifier multiplied by the coarse probability of the compound category. As an optimization, to avoid running all four local classifiers for every input image, we can run a local classifier only if the coarse value of the corresponding compound category is above the defined threshold t. In all other cases, we can assume that the output of the local classifier is a non-informative uniform distribution. The final fine profile, p_fine, is then calculated as the average fine distribution over all images, similarly to the case of the coarse categories.
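As an illustration only (not the authors' code), the following minimal Python/numpy sketch mirrors the thresholded coarse and fine profile computation described above; the threshold value, fallback to a uniform profile, data layouts and function names are assumptions made for the example.

    import numpy as np

    def coarse_profile(coarse_dists, t=0.2):
        # coarse_dists: (N, 15) array, one softmax output per image.
        coarse_dists = np.asarray(coarse_dists, dtype=float)
        informative = coarse_dists.max(axis=1) > t      # I_t: not entirely below t
        kept = coarse_dists[informative]
        if kept.size == 0:                              # assumed fallback: uniform profile
            return np.full(coarse_dists.shape[1], 1.0 / coarse_dists.shape[1])
        return kept.mean(axis=0)                        # p_coarse

    def fine_distribution(d_coarse, local_outputs, n_sub, t=0.2):
        # d_coarse: coarse distribution of one image.
        # local_outputs: dict {compound category index: local softmax output}.
        # n_sub: dict {compound category index: number of subcategories}.
        parts = []
        for idx, p in enumerate(d_coarse):
            if idx in n_sub:
                if p > t:                               # run local classifier only above t
                    local = np.asarray(local_outputs[idx], dtype=float)
                else:                                   # otherwise assume a uniform output
                    local = np.full(n_sub[idx], 1.0 / n_sub[idx])
                parts.append(p * local)
            else:
                parts.append(np.array([p]))
        return np.concatenate(parts)                    # d_fine for this image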
B. Profiling through User Similarity and DML
While the proposed hierarchical classification scheme described in section III-A has the attractive property of building profiles that correspond to meaningful human concepts related to predefined interest categories, it also carries the shortcomings of limited semantic variability and scalability. This is to be expected as it is often the case that model expressiveness is at odds with model interpretability. In this section we propose a method that is in principle able to capture significantly more semantic nuances. To do that we employ user similarity data to construct a latent user representation that retains as much of the original similarity structure as possible. Learning from similarity data closely resonates with the field of DML and as such we propose the use of the triplet loss during training. Triplet loss requires batches of triplets that consist of an anchor user (reference user) along with a similar and a dissimilar user. This way the model learns to build user representations that reflect the original user similarity structure and can effectively be interpreted for our purposes as user profiles. Because the user representations are not trained on manually specified concepts, the model has the capacity to discover through the available data the most suitable way to represent the users.
Fig. 3. DML model architecture. We first extract image features with a pretrained CNN and average them to form the input user representation. The resulting vector is passed to a fully connected network that learns the appropriate embedding to a user space where similar users are close and dissimilar ones are further apart.
To evaluate this method we created a dataset, described in section IV-A0b, based on YFCC100m [7] and its autotags expansion, that includes for each user a collection of images and a 1570-d vector that represents the distribution of autotags in their images. We then define the similarity S_ij between users i and j to be the cosine similarity of their autotag distribution vectors. Because these vectors correspond to probability distributions they are non-negative, and as such it holds that 0 ≤ S_ij ≤ 1 and S_ij = S_ji. DML is focused on learning a distance function to measure, in our case, the similarity between users by approximating an embedding function that maps users into a feature space where similar users are close, while dissimilar ones are further apart.
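For illustration, the supervision signal can be computed directly from the autotag vectors; this is an assumed helper, not taken from the paper.

    import numpy as np

    def autotag_similarity(u, v):
        # S_ij: cosine similarity of two users' 1570-d autotag distribution vectors.
        u = np.asarray(u, dtype=float)
        v = np.asarray(v, dtype=float)
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        return float(u @ v) / denom if denom > 0 else 0.0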
Let us assume that we have a user x, that we will refer to as the anchor user, and two other users x+ and x− that are similar and dissimilar to the anchor, respectively. Then we will call the pair (x, x+) a positive sample, the pair (x, x−) a negative sample, and the triplet (x, x+, x−) will be the input to our model during training. To train our model we have to choose a strategy to mine these triplets as well as an appropriate loss function.
An appropriate loss function would take high values when x− is closer to x than x+ is, and low values when the opposite happens, with a reasonable margin separating the two. A formulation that reflects this is L(x, x+, x−) = max(0, D(x, x+) − D(x, x−) + m), where D is the distance function we approximate with our model and m > 0 is a margin parameter to ensure a sufficiently large difference between the anchor-positive and anchor-negative distances. This is known as the triplet loss.
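A minimal TensorFlow sketch of this loss, assuming the anchor-positive and anchor-negative distances have already been computed for a batch (illustrative, not the authors' implementation):

    import tensorflow as tf

    def triplet_loss(d_pos, d_neg, margin=1.0):
        # Hinge-style triplet loss on batched anchor-positive and anchor-negative
        # distances: penalises triplets where D(x, x+) + margin exceeds D(x, x-).
        return tf.reduce_mean(tf.maximum(d_pos - d_neg + margin, 0.0))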
Having settled on the triplet loss function, we consider an appropriate sampling strategy to generate the triplets at each iteration of the training process. We note that training on all the possible O(n^3) triplets would be infeasible. The ideal triplets are those that do not trivially satisfy the loss constraint, but rather violate it and thus provide valuable feedback to the training process. We specifically chose to use semi-hard triplets, which are the triplets (x, x+, x−) that produce D(x, x+) < D(x, x−) < D(x, x+) + m. That is, samples that satisfy D(x, x+) < D(x, x−), as desired, but not within the appropriate margin m (chosen equal to 1 in our implementation). The triplets are created online and so it is also important to ensure that each batch has enough semi-hard examples. For this reason, we use a large batch size of 512 and create the triplets by first selecting all the (x, x+) pairs within the batch and then, for each such anchor-positive pair, we select a negative sample x− such that the semi-hard rule is satisfied. Our labels, however, only assign a similarity score between 0 and 1 for each user pair and thus a threshold needs to be defined to translate these scores to positive and negative examples. For our implementation, user pairs with a similarity score above 0.8 are marked as positive and those with similarity below 0.4 as negative.
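The batch-wise mining step could look roughly as follows; this is a simplified numpy sketch under the stated thresholds (0.8/0.4) and margin, with a naive triple loop written for clarity rather than efficiency, and all names are illustrative.

    import numpy as np

    def mine_semi_hard_triplets(embeddings, sim, pos_thr=0.8, neg_thr=0.4, margin=1.0):
        # embeddings: (B, d) user embeddings for the current batch.
        # sim: (B, B) precomputed autotag cosine similarities used as supervision.
        emb = np.asarray(embeddings, dtype=float)
        sim = np.asarray(sim, dtype=float)
        dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        triplets = []
        for a in range(len(emb)):
            for p in range(len(emb)):
                if a == p or sim[a, p] < pos_thr:       # (a, p) must be a positive pair
                    continue
                for n in range(len(emb)):
                    if sim[a, n] > neg_thr:             # (a, n) must be a negative pair
                        continue
                    # semi-hard rule: D(a, p) < D(a, n) < D(a, p) + margin
                    if dists[a, p] < dists[a, n] < dists[a, p] + margin:
                        triplets.append((a, p, n))
                        break                           # one negative per anchor-positive pair
        return triplets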
Last, we specify the architecture of the model, shown in Fig. 3, that will approximate the distance function D between the users. The primary input to the model is each user's images, from which we extract features with a pretrained CNN and minimize the computational overhead by calculating the mean of the produced vectors. This first part of the network is not specifically trained for the task due to the lack of available large scale data. The trainable part is the fully connected network that follows and consists of three linear layers with ReLU activations in between. The output of this network is the user embeddings, which are used at inference as the user profiles, and the distance between the embeddings is defined by computing their Euclidean distance.
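A sketch of this architecture in Keras is given below; the paper does not state the hidden layer widths or the embedding dimension, so those values (and the 1280-d MobileNetV2 pooled feature size) are assumptions made for illustration.

    import tensorflow as tf

    def user_representation(image_features):
        # Mean of the per-image feature vectors produced by a frozen, pretrained CNN.
        return tf.reduce_mean(image_features, axis=0)

    def build_embedding_head(feature_dim=1280, embedding_dim=128):
        # Trainable part: three fully connected layers with ReLU activations in
        # between, mapping the averaged features to the user embedding space.
        return tf.keras.Sequential([
            tf.keras.layers.Dense(512, activation="relu", input_shape=(feature_dim,)),
            tf.keras.layers.Dense(256, activation="relu"),
            tf.keras.layers.Dense(embedding_dim),
        ])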
IV. EVALUATION SETUP
A. Datasets
First we briefly discuss the datasets that we created for training and testing our models; we have also made PID2020 (IV-A0a) publicly available following an anonymization process [8].
a) Pinterest Interest Dataset 2020 (PID2020): This dataset was used for training and testing the hierarchical classification method of section III-A. To build the training part, for each of the categories shown at Tables I and II we queried Pinterest with several relevant terms. For the testing part we had to resort to manual labeling of user profiles. So, for 12 Pinterest users we manually classified their Pinboards in one of the defined categories or if they did not correspond to any of the categories we left them unlabeled. The user profile was calculated by assigning to each image the category of the Pinboard it belonged to and constructing the category distribution. We note that while unlabeled images were left out during the construction of the ground truth profile, they were included in the test set as noise that we consider realistic to be present in real cases; it is the model's job to filter them out with the thresholding mechanism previously described.
b) YFCC100m Autotag Expansion: While user tags vary widely and do not always offer any meaningful semantic content, machine tags are predictable and easier to reason with. For this reason we utilized the autotag expansion of the YFCC100m to define user profiles as the autotag distribution of the user's images. Approximately 170,000 users and a total of 5 million images are included, taken from the MediaEval2016 Benchmark [21], a subset of the YFCC100m dataset. This dataset was used for training the DML model described in section III-B.
B. Backbone Network
The backbone network is the CNN responsible for the majority of the memory and computation load. Because we are interested in deploying the proposed models to mobile devices, we chose to experiment with MobileNetV2 [22]. We also compare its performance with EfficientNet-B3 [23], a state-of-the-art network. Both networks are pre-trained on ImageNet [24] and fine-tuned end-to-end for the hierarchical classification task of section III-A. The DML model, described in section III-B, is not trained end-to-end, but rather only the features of the images are extracted from the backbone CNN and we then train the fully connected layer that we defined on top of the features. The described architecture is deliberately one of the simplest possible, but at the same time efficient, in order to be able to run in a wide range of mobile devices. To port the trained networks to mobile devices, we convert them to TFLite [25] objects and also experiment with quantizing their weights. Table III shows the top-1 accuracy of the coarse and the local classifiers as measured on the left-out validation set after a 0.8/0.2 training/validation split of the PID dataset (IV-A0a). As expected, the best performing model is the one with the EfficientNet-B3 backbone, but MobileNetV2 follows closely. While, the quantized TFLite model lags pretty significantly behind in some categories, the overall classification accuracy seems to be reasonably close.
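The TFLite export mentioned above can be done with the standard converter API; the following is a minimal sketch, where the function name and the choice of default post-training quantization are ours, for illustration.

    import tensorflow as tf

    def export_tflite(keras_model, path, quantize=True):
        # Convert a trained Keras model to a TFLite flatbuffer for on-device use;
        # optional post-training quantization shrinks the model at a small accuracy cost.
        converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
        if quantize:
            converter.optimizations = [tf.lite.Optimize.DEFAULT]
        tflite_bytes = converter.convert()
        with open(path, "wb") as f:
            f.write(tflite_bytes)
        return path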
V. EXPERIMENT RESULTS
However, classification only matters for our purposes as a means to construct user profiles, and as such we test our classifiers on producing coarse and fine profiles for 12 manually labeled users. Since we are mainly interested in the ranking ability of our models, that is, whether they are able to pick out a user's order of preference rather than the absolute values, we chose evaluation metrics typically used for ranking tasks. We report the Area Under Curve (AUC), the Mean Average Precision (MAP) and the Normalized Discounted Cumulative Gain (NDCG) for the coarse and fine profiles in Tables IV and V, respectively. The results seem promising; however, the sample size of 12 users is very small, which is reflected in the high standard deviations. Further investigation is needed, but finding appropriate data for this task is very challenging.
Assessing the performance of the DML model is also not straightforward, but we devised an evaluation scheme as follows. From the left-out validation set we collected the 1000 most common user tags and created a bag-of-words representation for each user. This time we did not use the machine generated tags, but rather the user defined tags that accompanied the images, in order to eliminate the effects of the autotagging process from our results. Based on these tags we calculate the Jaccard similarity between two users as the ratio of the size of the intersection to the size of the union of the tags of a user pair. We are interested in measuring the Jaccard similarity between each user and the k-th most similar user according to our model, calculated using the Euclidean distance between the respective user embeddings. We expect a decreasing trend for the Jaccard similarity as k increases, since the users become more dissimilar, and this is also reflected in their tags. Our hypothesis is indeed validated in Fig. 4.
Fig. 4. Evaluation of the DML model. We observe a correlation between the predicted similarity and the Jaccard similarity of the users.
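For illustration, the tag-based part of this evaluation could be computed as follows; the data structures are assumed and this is not the paper's code.

    def jaccard(tags_a, tags_b):
        # Jaccard similarity of two users' tag sets: |intersection| / |union|.
        a, b = set(tags_a), set(tags_b)
        return len(a & b) / len(a | b) if (a | b) else 0.0

    def mean_jaccard_at_k(user_tags, neighbours, k):
        # neighbours[u]: user ids ranked by increasing embedding distance from u.
        scores = [jaccard(user_tags[u], user_tags[ranked[k - 1]])
                  for u, ranked in neighbours.items() if len(ranked) >= k]
        return sum(scores) / len(scores) if scores else 0.0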
VI. CONCLUSIONS AND FUTURE WORK
This paper explored the problem of developing content aware user profiling in a mobile setting. Our first approach relied on predefined interest categories, leading to the design of a deep learning model capable of inferring both coarse and fine grained user profiles based on users' photo collections. The model was trained with a newly collected dataset based on Pinterest image and user data achieving scores of 93.6, 88.5 and 92.5 mean AUC, MAP and NDCG respectively using a small, efficient TFLite model suitable for mobile deployment. Furthermore, the content aware user profiles created in this way are interpretable as they correspond to meaningful concepts, an important feature when aiming to be transparent with the users about how the underlying algorithms work.
To address the disadvantage of relying only on predefined categories, we created an additional model with lower interpretability, but with the potential to capture more concepts with higher semantic coverage. This model was based on DML and approximates a function that can map a user's photo collection to an embedding space where similar users are closer and dissimilar users are further apart. The model was trained on a subset of the YFCC100m dataset annotated with autotags, while it was also demonstrated that there is correlation between the closeness of the users' embeddings and the similarity of the users based on the tags they provided to their photos.
It is important to note that care has been taken to design models that are capable of being deployed in a mobile environment, that is they can be compressed in a small binary package and require modest computational resources. A method being able to run on mobile resources ensures data privacy and eliminates restrictions regarding user data management since all processing is happening locally on personal devices. However, should the framework be used in a social networking setting, additional measures should be taken in order to ensure that there are no sensitive data leaked through profile exchanges. All in all, the models developed in this paper provide a solid foundation for the incorporation of content aware features to image-based user modelling.
As future work, with a view to preserving the privacy of user profiles during the matching process, a promising avenue to explore is to encrypt and transmit the profiles and then calculate the matching score on the encrypted data using homomorphic encryption techniques [26].
Another extension worth considering is whether the deployed models can be dynamically updated on the user device. However, this is challenging as it is not straightforward to define what the training target would be and it would also cause different users to calculate their profiles with different models, which raises questions about how their profiles would then be compared.
|
2021-06-26T13:17:17.537Z
|
2021-06-28T00:00:00.000
|
{
"year": 2021,
"sha1": "9bb90444af2d1cba1a7307c9ab6baa11b6b0e9b2",
"oa_license": "CCBY",
"oa_url": "https://zenodo.org/record/5070237/files/CBMI_2020_paper_38_camera_ready.pdf",
"oa_status": "GREEN",
"pdf_src": "IEEE",
"pdf_hash": "9bb90444af2d1cba1a7307c9ab6baa11b6b0e9b2",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
13573245
|
pes2o/s2orc
|
v3-fos-license
|
Entoloma subgenus Leptonia in boreal-temperate Eurasia: towards a phylogenetic species concept
This study reveals the concordance, or lack thereof, between morphological and phylogenetic species concepts within Entoloma subg. Leptonia in boreal-temperate Eurasia, combining a critical morphological examination with a multigene phylogeny based on nrITS, nrLSU and mtSSU sequences. A total of 16 taxa was investigated. Emended concepts of subg. Leptonia and sect. Leptonia as well as the new sect. Dichroi are presented. Two species (Entoloma percoelestinum and E. sublaevisporum) and one variety (E. tjallingiorum var. laricinum) are described as new to science. On the basis of the morphological and phylogenetical evidence E. alnetorum is reduced to a variety of E. tjallingiorum, and E. venustum is considered a variety of E. callichroum. Accordingly, the new combinations E. tjallingiorum var. alnetorum and E. callichroum var. venustum are proposed. Entoloma lepidissimum var. pauciangulatum is now treated as a synonym of E. chytrophilum. Neotypes for E. dichroum, E. euchroum and E. lampropus are designated.
Moreover, molecular phylogenetic studies have demonstrated the paraphyly of the Entolomataceae. Continued phylogenetic studies, based on both morphological characters and molecular markers (He et al. 2013, Morgado et al. 2013, Vila et al. 2013), reveal more insight into the interrelation between morphological and phylogenetic species concepts, as well as into the evolution of the Entolomataceae, and will in future result in a more natural classification. Also subg. Leptonia in the sense of Noordeloos (2004) is polyphyletic. Sect. Leptonia of the subgenus belongs to the Nolanea-Claudopus clade, and Cyanula and Griseorubida to the Inocephalus-Cyanula clade (Co-David et al. 2009). Based on these data Cyanula recently has been raised to the subgenus level (Noordeloos & Gates 2012).
This paper is an attempt to clarify the phylogeny and species concept of a morphologically distinct group within the genus Entoloma, viz. subg. Leptonia sect. Leptonia in the classification of Noordeloos (2004). Despite the fact that most species are rare, some of them were described as far back as in the 19th century. The protologues of these species were short and incomplete, and the type specimens were not preserved. The morphological variability of Leptonia species appears to be very high and depending also on the age of basidiomata and weather conditions. Due to these factors, a large number of misunderstandings and incorrect identifications are found in the literature. This paper aims to describe morphological variability of each phylogenetic species resulting in emended descriptions and an identification tool, as well as to reconstruct an infrageneric classification of subg. Leptonia.
Taxa sampling
To clarify the taxonomic status of 16 taxa of Entoloma subg. Leptonia as well as their position within the genus, 98 specimens of this subgenus or previously considered as belonging to this subgenus were selected for morphological study and molecular sampling (Table 1). ITS1-5.8S-ITS2 sequences were obtained for all of them. Type material, if possible, was included in the analysis. LSU sequences were obtained for 1-3 collections from each taxon. Species for which DNA extraction from type specimens appeared to be impossible or unsuccessful and where no additional reliable collections were available (E. austriacum, E. cedretorum, E. insidiosum, E. juniperinum, E. klofacianum, E. lidbergii, E. syringicolor and E. wynnei) are not considered in the present work. The outgroup choice and taxa sampling to determine the position of studied species in the system were primarily based on the recent global study on the phylogeny of the Entolomataceae (Co-David et al. 2009). Therefore, the representatives of the main subgenera of the crown Entoloma clade -Nolanea and Claudopus (Nolanea-Claudopus clade), Inocephalus and Cyanula (Inocephalus-Cyanula clade), as well as Entoloma, Pouzarella, Alboleptonia, and Trichopilus were included in the phylogenetic analyses (Table 2). Two species of subg. Entoloma (Prunuloides clade, which occupies basal position towards the groups treated above (Co-David et al. 2009)), were selected as outgroup in all analyses. A total of 114 specimens was included in the work. Most of the sequences were obtained from the present study. Additional 8 nrITS1-5.8S-ITS2, 19 nrLSU and 19 mtSSU sequences were retrieved from the Genbank: with the acronym GQ -Co- David et al. 2009; GU - JQ -He et al. 2013;JX -Vila et al. 2013. The geographic origin of the collections includes Europe, the Caucasus and extratropical Asia from the Urals to the Russian Far East.
Morphological analyses
The study was based both on recently collected material and collections kept in European and Asian herbaria (KR, L, LE, M, PRM, UPS, VLA, WU, ZT). The specimens were collected, documented and preserved using standard methods. Macroscopic descriptions are based on the study of the fresh material as well as on analysis of the photos. The dried material was examined using standard microscopic techniques. Spores, basidia and cystidia were observed in squash preparations of small parts of the lamellae in 5 % KOH or 1 % Congo Red in concentrated NH 4 OH. The pileipellis was examined in a preparation of the radial section of the pileus in 5 % KOH. Microscopic measurements and drawings were made with AxioImager A1 microscopes. Basidiospore dimensions are based on observing 20 spores, cystidia and basidia dimensions on observing at least 10 structures per collection. Basidia were measured without sterigmata, and the spores without hilum. Spore length to width ratios are reported as Q. The collected material is deposited in the Naturalis Biodiversity Center, section Botany (L), in the Mycological Herbarium of the Komarov Botanical Institute (LE) and in the collection of J. Vila (JVG) and S. Català (SGC). The holotype of E. sublaevisporum is deposited in LIP (Lille, France).
DNA extraction, amplification and sequencing
DNA was extracted from herbarium material using a CTAB extraction buffer technique, with consecutive addition of a chloroform-isoamyl alcohol mixture, then an isopropyl alcohol-3M sodium acetate solution for precipitation, 70 % ethanol for washing and finally water for dissolution. Alternatively, DNA was extracted using the AxyPrep Multisource Genomic DNA Miniprep Kit (Axygen Biosciences).
Alignments and phylogenetic analysis
The sequences were aligned with MAFFT web tool (http://align. bmr.kyushu-u.ac.jp/mafft/online/server/) with Q-INS-I strategy and default settings for other options. The final alignment was corrected manually using MEGA version 5 (Tamura et al. 2011).
Phylogenetic reconstructions were performed with maximum likelihood (ML), maximum parsimony (MP) and Bayesian (BA) analyses. Representatives of the basal Entoloma clade (Co- David et al. 2009), E. turbidum and E. nitidum, were selected as outgroup for all analyses.
MP analysis was performed using PAUP*4.0.b10 (Swofford 2002). One hundred heuristic searches were conducted by stepwise addition with random sequence addition and tree bisectionreconnection (TBR) branch-swapping algorithm. One tree was held at each step during stepwise addition. All characters were treated as unordered and of equal weight. Parsimony bootstrap values were calculated from 1 000 replicates. Only clades with a support ≥ 50 % were retained. Gaps were treated as missing data.
The ML analysis was run on the RAxML web server (http://phylobench.vital-it.ch/raxml-bb/index.php), which implements the search protocol of Stamatakis et al. (2008), under a GTR+G model with one hundred rapid bootstrap replicates.
Bayesian analysis was performed using MrBayes 3.1 (Ronquist & Huelsenbeck 2003) for two independent runs, each with 2 000 000 generations with sampling every 100 generations, with GTR+G model and four chains. Posterior probability (PP) value ≥ 0.95 are considered significant.
Species were delineated on the basis of the phylogenetic species concept, referring to the examples from fungi in Taylor et al. (2000). Monophyletic clades are recognized as phylogenetic species when they are concordantly supported by the majority of the received phylogenetic trees. Additionally, ITS sequence differences were taken into account. Genetic distances between ITS sequences were estimated using PAUP*4.0.b10 (Swofford 2002). We consider a p-distance greater than 3 % to be the criterion used to recognize new species, following Petersen et al. (2008) and Hughes et al. (2009). This approach was based on the data for within-species variation and heterozygosity from the Great Smoky Mountains National Park in the United States. According to these data, approximately 2-3 % sequence divergence usually represents different species for Basidiomycotina. Morphological criteria were also taken into account. In cases where the morphological differences between separate monophyletic clades are not evident or the p-distance is less than 3 %, we prefer to consider these as varieties within a phylogenetic species.
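For illustration only, the uncorrected p-distance criterion described above can be expressed as a short Python sketch; the study itself computed distances in PAUP*, and the gap and ambiguity handling here is simplified.

    def p_distance(seq_a, seq_b):
        # Uncorrected pairwise distance between two aligned sequences:
        # proportion of differing positions, skipping gaps and ambiguous bases.
        diffs = compared = 0
        for a, b in zip(seq_a.upper(), seq_b.upper()):
            if a in "-N?" or b in "-N?":
                continue
            compared += 1
            diffs += int(a != b)
        return diffs / compared if compared else 0.0

    def same_phylogenetic_species(seq_a, seq_b, threshold=0.03):
        # The 3 % ITS cut-off used in this study to separate species from varieties.
        return p_distance(seq_a, seq_b) <= threshold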
Analysis of the nrITS1-5.8S-ITS2 dataset
The full alignment contained 122 ITS sequences with 1 047 characters. We first excluded from the alignment two large ambiguously aligned regions in ITS1. The first region (282 bp) corresponded to a presumptive insertion characteristic for the Entoloma dichroum group. In addition, E. eugenei from this group had another insertion of 51 bp that was also excluded from the full analysis.
First of all, the analyses show a well-supported Leptonia clade (1.0/100 - BA/MP - hereinafter), separated from both the Nolanea-Claudopus and the Inocephalus-Cyanula clades (Fig. 1). As was shown by Co-David et al. (2009) and confirmed by the present work, subg. Leptonia in the traditional sense (Noordeloos 1992, 2004) is not monophyletic and must be considered without sections Cyanula and Griseorubida. Section Cyanula recently has been raised to the subgenus level (Noordeloos & Gates 2012). Representatives of sect. Griseorubida (E. indutoides) are nested within the Inocephalus subclade, and the exact taxonomic position of the section must be clarified. Therefore, the available data allowed treating subg. Leptonia in the strict sense, corresponding to sect. Leptonia in the sense of Noordeloos (1992, 2004) and Largent (1994). here as a new species due to genetical evidence and spore morphology (viz. nodulose form) combined with the difference in colour between the pileus and stipe (see discussion in the taxonomic part).
Although sect. Dichroi has few species, it, however, forms a clearly distinct group. Morphologically it includes species with very pronounced, sharply angled spores such as E. dichroum and E. eugenei. Entoloma allochroum, of uncertain phylogenetic position, is the only other species with similar spores. Section Dichroi is also distinguished genetically by the presence of large insertions in the ITS1 region (282 bp). ITS region in E. dichroum is rather variable, but the p-distance between the specimens studied does not exceed 2 %. Morphological intraspecific variability in E. dichroum is also pronounced (as the variation in colour of the pileus) ( Fig. 17a -c). Entoloma dichroum was described by Persoon in 1801. As the holotype does not exist and the type locality is unknown we selected a neotype from the available material. We studied 8 collections previously identified as E. dichroum, and only 4 of them correspond to the current concept of this species. We selected therefore LE227472 (Zhiguli, Russia) as it fits best the protologue.
The E. callichroum-clade consists of two branches: one comprising only the holotype of E. callichroum, and the other including the holotype of E. venustum and 7 specimens which are conspecific with it. As the difference between the holotypes is low (p-distance 1.8 % base-pair difference) and morphological characters (basidiospore size, presence and amount of cheilocystidia, the intensity of the bluish tinge in the lamellae; see discussion in the taxonomic part) vary within the range of genetically identical specimens of E. venustum, so that some specimens are indistinguishable from E. callichroum, we propose to treat E. venustum as a variety of E. callichroum.
Two clades corresponding to the morphological concept of E. coelestinum were recovered in all analyses. Specimens previously identified as E. coelestinum form a well-supported clade, which, however, consists of two sister clades that can be distinguished morphologically. One collection characterized by more pronounced angled, not nodulose spores and a polished stipe fits well with the protologue of the type, and the current concept of E. coelestinum (Noordeloos 2004). The larger clade (5.3 % divergence with previous) is represented by specimens characterized by vaguely angled, almost nodulose spores and a longitudinally fibrillose-striate stipe. A new species, E. percoelestinum, is proposed for this taxon below. The E. allochroum clade involves almost identical collections, including the holotype, and only the Caucasian one slightly stands out among them. The E. lepidissimum clade is also highly supported, homogeneous, and includes the holotype specimen.
The position of E. callichroum, E. coelestinum, E. lepidissimum and E. allochroum within subg. Leptonia clade is uncertain. In the BA, ML and MP analyses they nest in the section Leptonia clade with low support. Morphologically these species occupy an intermediate position between sect. Leptonia and sect. Dichroi. The first three species have basidiospores with moderately developed angles, but E. allochroum has sharply angled basidiospores. It can be concluded that the degree of development of angles in basidiospores for members of the Entolomataceae generally is a phylogenetically informative feature, but with some restrictions. Similarly shaped basidiospores have developed independently in the sister clades. A wider sampling of species assigned to subg. Leptonia from other geographic regions will be necessary to resolve the phylogeny within subg. Leptonia.
Analysis of the nrLSU and mtSSU datasets
To accommodate subg. Leptonia in an understanding of the Entolomataceae tree produced by Co-David et al. (2009), analy-ses of the LSU, mtSSU and combined LSU-mtSSU datasets were performed on almost all taxa of Leptonia examined in this study. The PCR with MS1 and MS2 primers was unsuccessful for E. callichroum var. callichroum and for E. eugenei. The analyses were performed without these species.
Analysis of the nrLSU dataset
The dataset contained 57 LSU sequences with 775 characters (gaps included), of which 162 were parsimony-informative. The 100 equally most parsimonious trees (MPTs) were saved (length = 854, CI = 0.3255, HI = 0.6745, RI = 0.4043, RC = 0.1316). There were some topological differences and contradictions among the MPTs, the strict consensus tree from the MP analysis, the best tree from the ML analysis, and the 50 % majority rule consensus tree from the BA. Each of these trees contained a high number of clades with low support.
Overall, the LSU trees are less informative than the nrITS and mtSSU ones. Topological relations among the species of Leptonia and other subgenera of Entoloma did not receive sufficient support because of the low divergence between the LSU sequences. We therefore do not present any LSU trees in this work, as they are ambiguous.
Analysis of the mtSSU dataset
The dataset contained 54 mtSSU sequences that represent the main subgenera of the Entoloma s.l. taxa comprising the main branch of the Entolomataceae, as well as representatives of almost all species of Leptonia considered in the present work. The analysed dataset included 658 characters (gaps included), of which 175 were parsimony-informative. The 100 most parsimonious trees were saved. The analyses confirm that subg. Leptonia in the sense of Noordeloos (2004) is polyphyletic and that sections Leptonia, Cyanula and Griseorubidum do not form a monophyletic clade (Fig. 3). Cyanula, containing most known species of former subg. Leptonia, recently has been raised to the subgenus level (Noordeloos & Gates 2012). It is characterized by the presence of brilliant granules, absence of clamp-connections in all tissues and a fibrillose to polished stipe. We suggest that Griseorubidum must also be excluded from subg. Leptonia based on phylogenetic affinity and morphological resemblance to Inocephalus species, like the presence of brilliant granules and well differentiated cheilocystidia. Section Leptonia (1.0/89) clearly stands out within the clade represented by subg. Leptonia, and it is characterized morphologically by the presence of clamp-connections, absence of brilliant granules and a more or less fibrillose to squamulose stipe.
The analyses of the mtSSU dataset confirm the separation of the new species E. percoelestinum and E. sublaevisporum and the variety E. tjallingiorum var. laricinum.
Analysis of the combined mtSSU-LSU dataset
The dataset contained 52 sequences as in the previous analyses. The analysed dataset included 1 433 characters (gaps included), of which 334 were parsimony-informative. In the MP analysis, 93 most parsimonious trees were recovered (length = 1313, CI = 0.4501, HI = 0.6113, RI = 0.6204, RC = 0.2793). The ML, MP and BA analyses revealed nearly identical topologies, although the MP bootstrap values were rather low.
The topology revealed by the analysis of the combined data is similar to the results of the analysis of the mtSSU data (Noordeloos & Gates 2012).
Key to the species of subgenus Leptonia in boreal and temperate Eurasia
For convenience in identification, the following morphospecies have been added to the key, on account of the evidence in Noordeloos (2004).
Descriptions of the species
Below we provide descriptions of the species included in the analysis (in the key indicated with the number that corresponds with the descriptive text of this paper). Most of them are rarely collected, and exact and complete information was difficult to find. Therefore the descriptions below contain new data on ecology, substratum, colour variation, spore shape, and presence / absence of cheilocystidia. One section, two species and one variety are described as new to science.
Notes - This species was originally described from the island of La Palma (Islas Canarias, Spain), where it was found on chips of Pinus bark in a pot with Cymbidium in a garden. After that it was collected several times in different types of natural habitats in Europe, the Caucasus and Western Siberia. These records fit the original description well (Wölfel & Noordeloos 2001, Noordeloos 2004), and molecular data confirm the conspecificity of the new records with the holotype. Entoloma chytrophilum can be recognized by the bright blue colour of the fruitbodies, the plano-convex shape of the pileus and the lignicolous habit on coniferous wood. From the other blue coloured species (E. coelestinum, E. dichroum, E. lepidissimum) it differs first of all by the rather large, thin-walled and nodulose spores. Entoloma lepidissimum var. pauciangulatum must be considered a synonym of E. chytrophilum due to morphological (almost nodulose spores and blue colour of both pileus and stipe (Gminder & Enderle 1996)) and phylogenetic evidence. Despite a later publication, the epithet 'chytrophilum' has priority over 'pauciangulatum' at the rank of species, because the name 'pauciangulatum' has priority only in the rank of variety in which it was published (Art. 11.2).
Notes - Entoloma euchroum is a very distinctive species distributed all over Europe and reported also from Siberia. Usually it is easy to recognize due to its entirely blue-violaceous basidiomes, lignicolous habitat and sweet, flower-like smell. Sometimes it can be rather variable in colour, depending on the growing conditions, but violaceous blue tinges are always present in all parts of the basidioma, especially in the lamellae. It can be distinguished from the other taxa that can possess more or less violaceous-blue lamellae (E. callichroum var. venustum, E. lepidissimum) by the presence of cheilocystidia with a thick-walled upper part, often filled with violaceous brown content. Cheilocystidia are absent in E. callichroum var. venustum, and rare and intermixed with basidia in E. lepidissimum. Also the squamulose stem and lignicolous habitat help to distinguish E. euchroum.
Habitat -On dead wood of coniferous trees or on soil in forests and open places, including grasslands.
Known distribution - Northern, Western and Eastern Europe, Caucasus, Western Siberia, Russian Far East. Additional specimens examined. Austria, Frankenberg, Ried, Hinterzeining, 22 Sept 1994, F. Sucti (WU 13198, as E. dichroum); Rastenfeld, NW Dobra, on wood, 9 Sept. 2009, A. Hausknecht (WU 24148, as E. dichroum).
Notes - Entoloma lampropus is a rather confusing species. It was described by Fries as a species with "pileo subcarnoso convexo cinereo-griseo fibrilloso, lamellis albidis denticuloadnatis, stipite nitido coeruleo fistuloso" (Fries 1815). In the sanctioning work it was characterized by "pileo demum umbilicate fibrilloso griseo, lamellis adnatis albido-griseis, stipite fistuloso coeruleo" (Fries 1821). The brevity of the description led to various interpretations of the species. In the concept of Lange (1937, pl. 76C), Bresadola (1929, pl. 570-1) and Hesler (1967) it represents a species with a smooth stipe without clamp connections, and is considered a member of subg. Cyanula. The present work follows the interpretation of Kühner & Romagnesi (1953), which was based on a key phrase in Fries's description, "primo obtutu simillimus priori (Agaricus placidus)", stressing the resemblance to E. placidum. This interpretation was adopted by Noordeloos (1982a, 1992). The misapplied interpretation of Agaricus lampropus mentioned above was described as a new species, Rhodophyllus sodalis Kühner & Romagn., and later on accepted as E. sodale Kühner & Romagn. ex Noordel. Noordeloos (1982a) published a modern description of E. lampropus, but nevertheless, due to the limited number of records, the species continued to be insufficiently known and a lot of misidentifications were still encountered in several herbaria during our study. The present study expands the concept of E. lampropus with more morphological, ecological and molecular data, and its phylogenetic position is now known and fixed with a neotype.
Entoloma lampropus belongs to the group of species characterized by the many-angled nodulose spores and the presence of 'cheilocystidia' in the form of hyphal elements, often septate, arising from the hymenophoral trama. The predominantly grey-brown pileus sometimes has a slight lilac tinge near the margin, and the stipe is blue and longitudinally fibrillose, without squamules (contrary to E. tjallingiorum). It grows both terrestrially and on rotten wood (mainly coniferous).
Entoloma placidum (Fr.: Fr.) Noordel., Persoonia 11, 2: 150. 1981. - Fig. 9c
Pileus 10-30 mm broad, conical to convex with straight margin and slightly depressed or umbonate centre, not hygrophanous, not translucently striate, entirely radially fibrillose to minutely squamulose, especially in the centre, with small dark grey-brown squamules on a greyish background. Lamellae moderately distant, adnate or emarginate, with small decurrent tooth, whitish to cream becoming pink, with concolorous edge. Stipe 25-65 × 1-3 mm, cylindrical or slightly broadened towards the base, distinctly longitudinally fibrillose-striate with whitish or pale blue fibrils on a deep blue background, pruinose in the upper part, base with white tomentum. Context whitish, blue beneath the stipe surface. Smell and taste farinaceous. Spores 8.0-11.0(-11.5) × 6.0-7.0(-7.5) μm, Q = 1.2-1.6, heterodiametrical, with 6-8 blunt angles in side view. Basidia 27.0-33.0 × 8.8-11.5 μm, 4-spored, narrowly clavate, clamped. Lamellae edge fertile or heterogeneous. Scattered cystidia-like elements sometimes present in the edge of the lamellae as vacuolised basidioles or septate terminal cells of hyphae of the trama. Pileipellis a trichoderm in the centre, a plagiotrichoderm towards the margin, composed of cylindrical to slightly inflated hyphae 10-25 μm wide, with intracellular pigment and abundant clamp-connections.
Notes - Entoloma placidum is very similar to E. lampropus due to the grey-brown pileus, blue longitudinally fibrillose stipe without squamules, and more or less nodulose spores. True cheilocystidia are absent in both species, but in some specimens cystidia-like elements can be observed as vacuolised basidioles (Noordeloos 1982a, b) or septate terminal endings of the hyphae of the trama (Vila & Caballero 2007). Entoloma placidum can be recognized by the farinaceous smell, distinctly nodulose spores, and habitat on the wood of deciduous trees, especially Fagus. For a long time it was known as a species growing exclusively on beech wood. Recently some records were made on other deciduous trees (Corylus avellana, Betula pendula) (Vila & Caballero 2007). Molecular data confirm that they all belong to E. placidum. Entoloma lampropus grows on conifers or on soil. Entoloma tjallingiorum is also very similar but differs by the distinctly squamulose stipe and well differentiated cheilocystidia.
Entoloma sublaevisporum Vila, Noordel. & O.V. Morozova - Fig. 9d, 11, 12
Etymology. From Latin 'laevus' (smooth), referring to the very weakly angled, subnodulose to almost smooth spores.
Diagnosis. The species is characterized by the grey-brown pileus, bluish finely longitudinally striate stipe combined with the many-angled, nodulose spores.
Pileus up to 20 mm broad, flattened or slightly convex, with a shallow central depression; grey to pale grey-brown, with darker centre, without violaceous tinges or only a hint in central depression; not hygrophanous, not translucently striate, with fine to heavy fibrils, especially in the apex, where it is subsquamulose; margin straight to revolute, protruding above the lamellae. Lamellae moderately distant (L = 15-20) with abundant lamellulae (1 : 3 to 1 : 5), adnate to emarginate, thin, slightly ventricose or not; whitish, turning pale pinkish when spores mature; edge of the same colour, entire or somewhat irregular. Stipe central, up to 40 × 2 mm, cylindrical, straight or slightly curved; dark blue to bluish grey; surface smooth or with weak fibrils, finely longitudinally striate; pruinose at apex and with white basal tomentum.
Cheilocystidia were found only in the specimen from Austria. The p-distance between E. chytrophilum and E. sublaevisporum is 7.4 %.
Known distribution -Western and Eastern Europe, Western Siberia.
Notes -According to the phylogenetic analysis the new species, Entoloma sublaevisporum, is close to E. chytrophilum. Morphologically this similarity is confirmed by the spore morphology.

Notes -Entoloma tjallingiorum belongs to the group of species characterized by the many-angled, almost nodulose spores and septate terminal elements of the hymenophoral trama that protrude through the hymenium ('cheilocystidia'). Among the species with a greyish brown pileus and blue stipe it stands out by the stouter basidiocarps and distinctly squamulose stipe.
In an earlier paper (Noordeloos 1982a, 1992) some collections with a bluish tinge in the lamellae and pigmented cheilocystidia were also assigned to E. tjallingiorum. Molecular data show that these specimens are discolored forms of E. euchroum. The current concept of E. tjallingiorum therefore excludes forms with blue tinges in the lamellae.
Habitat -On dead wood of Alnus incana in deciduous forests in May -June, rarely also in July or August.
Known distribution -Western and Eastern Europe, Western Siberia.

Notes -Entoloma alnetorum has been described as a species very similar to E. tjallingiorum due to the thin-walled, almost nodulose spores, however it is distinguished by the pale ochraceous pileus and the vernal appearance in Alnus forests (Monthoux & Röllin 1988). Phylogenetic analysis shows that all specimens possessing these features are grouped together. At the same time the difference between them and typical E. tjallingiorum is very small (p-distance between holotypes is 1.3 %). Some specimens (LE227507, LE227584, LE234285) with the typical habit of E. tjallingiorum occur in the E. alnetorum-clade, differing by only 0.2 %. For this reason we decided to consider E. alnetorum as a variety of E. tjallingiorum. It is noteworthy that in E. dichroum also specimens with a pale pileus can be encountered (JVG 1070821-4). They can be separated from the species of the tjallingiorum-group by the characteristic spores with sharp angles.

c. var. laricinum O.V. Morozova, Noordel., Vila & E.S. Popov - Fig. 13f, g, 16

Etymology. The name refers to the substrate on which it has been found (Larix cajanderi).

Pileus 5-35 mm broad, conical or hemispherical, hardly expanding with age, with involute then straight margin, not hygrophanous, not translucently striate, dark violaceous-blue, purplish brown, or very pale, greyish with lilac tinge, entirely granular-fibrillose, becoming squamulose with violaceous-blue squamules on pale background. Lamellae adnate-emarginate with decurrent tooth, white then pinkish, with entire concolorous edge. Stipe 30-80 × 2-5 mm, clavate or cylindrical with slightly swollen base, deep blue, different from colour of pileus, longitudinally fibrillose, with dark blue squamules on the paler background, base with white tomentum. Flesh white, dark blue under the surface. Smell indistinct, taste unpleasant. Spores 9.2-11.5 × 6.4-7.7 μm, Q = 1.3-1.7, heterodiametrical, with 5-7 pronounced angles.

Notes -Entoloma dichroum together with E. eugenei forms a separate clade genetically characterized by the large insertion in the ITS1-region. Morphologically, E. dichroum can be recognized by the bright blue squamulose stipe and spores with 5-7 sharp angles. The pileus colour, however, varies considerably among the studied collections, from bright blue to violaceous-blue, violaceous-brown and pale brown. The ITS-sequences vary slightly, however this variability (p-distance 1.4-2 % base-pair difference) might well be acceptable within a species. More material would possibly allow for the distinction of varieties. Entoloma allochroum, another species with sharply-angled spores, possesses a lilaceous or violaceous, less squamulose, more longitudinally fibrillose stipe. Entoloma dichroum differs from the closely related E. eugenei mainly by the slender collybioid habit, the heterogeneous lamellae edge, and slightly smaller and less pronouncedly angled spores.

Notes -Entoloma eugenei is morphologically very close to E. dichroum. The main morphological difference is in its tricholomatoid habit, the sterile lamellae edge, and slightly larger and more pronouncedly angled spores. Genetically it differs from E. dichroum among other things in one rather large (about 40 base-pair) insertion in the ITS1-region. The significant divergence between these two species (p-distance 9.8 % base-pair difference) could be explained by geographical reasons - the natural isolated habitat of E. eugenei in the Southern Far East and Japan with unique climatic conditions (Noordeloos & Morozova 2010), while E. dichroum is known from Europe.
INCERTAE SEDIS
9. Entoloma allochroum Noordel., Persoonia 11, 4: 463. 1982. - Fig. 17e

Notes -Entoloma allochroum is an easily recognizable species due to the presence of lilaceous or violaceous colours both in the pileus and, especially, in the stipe, the white lamellae, as well as the rather thick-walled and pronouncedly angled spores. Due to the sharply-angled spores E. allochroum is similar to E. dichroum and E. eugenei, however the molecular evidence does not allow the placing of this species in sect. Dichroi (Fig. 2).
Habitat -On soil in grasslands and in wet deciduous forest. Known distribution -Western Europe, Western Siberia, Russian Far East.
Notes -The phylogenetic analysis shows that E. venustum is very close to E. callichroum (p-distance 1.8 % base-pair difference) and, therefore, could be considered its variety. Both species are morphologically distinct by the pinkish violaceous pileus, more or less lilaceous-blue tinges in the lamellae, the steel blue or violaceous-blue stipe, and the size of the spores. The description of E. venustum as a new species was based on the bright colour of the basidiomata and on the presence of well-developed cheilocystidia, which, however, do not form a sterile gill edge and are often hardly distinguishable from basidioles (Wölfel & Hampe 2011). These characters can vary significantly within the range of genetically (nrITS) identical specimens. A more reliable feature for delimitation of these two taxa is the spore form. Spores of E. venustum are narrower and possess more pronounced angles. Also the presence of a number of extremely long (up to 16 μm) germinating (?) spores has been reported from the holotype and other specimens (Table 3).

Notes -Entoloma coelestinum is distinguished by the tiny, very dark blue to black basidiocarps with a conical, hardly expanded pileus combined with the small spores. In the course of the phylogenetic analysis specimens previously identified as E. coelestinum ended up in a well-supported clade, which, however, consists itself of two sister clades that can be distinguished morphologically. The larger clade is characterized by almost nodulose spores and a longitudinally fibrillose-striate stipe. It includes blue-coloured basidiomes and entirely black ones (Fig. 22e). The other clade consists of one collection characterized by more pronouncedly angled, not nodulose spores and a polished stipe. This collection fits well with the protologue and the current concept of E. coelestinum (Noordeloos 2004). Considering these morphological differences, and the significant p-distance between these clades (5.3 % base-pair difference), it was decided to describe the first clade as the new species, E. percoelestinum, below. Unfortunately we were unable to designate a neotype for E. coelestinum since the limited material studied is not from the original geographic area. More material from Europe, especially from Sweden, is needed to do so.

Pileus 5-12 mm broad, conical or hemispherical with umbo, not hygrophanous, not translucently striate, with straight margin, radially fibrillose, squamulose at centre, uniformly dark blue, blackish blue or black. Lamellae moderately distant, adnate-emarginate, ventricose, white, becoming pinkish, with entire concolorous edge. Stipe 20-40 × 1-2 mm, cylindrical, longitudinally fibrillose-striate or almost smooth, concolorous with pileus, white-tomentose at base. Context thin, concolorous with the surface. Smell indistinct or fungoid, taste not reported. Spores 6.5-8.5(-9.0) × 5.0-6.5 μm, Q = 1.3-1.5(-1.7), heterodiametrical, with 7-9 blunt angles in side-view, almost nodulose. Basidia 27.9-37.0(-45.4) × 8.1-9.6(-13.7) μm, 4-spored, narrowly clavate to subcylindrical, clamped. Lamellae edge fertile. Cheilocystidia absent. Pileipellis a plagiotrichoderm of cylindrical to slightly inflated hyphae 10-20 μm wide with blue intracellular pigment. Clamp-connections present.
Known distribution -Western and Eastern Europe, Western Siberia.

Table 3 (fragment): spore dimensions, Q values, spore angles, cheilocystidia and lamellae-edge characters of E. venustum and E. callichroum var. venustum (LE254312).

Notes -In the boreal-temperate Eurasia several species with small blue or blackish blue basidiomata are recognized. Entoloma percoelestinum differs from E. coelestinum by the almost nodulose spores and longitudinally fibrillose-striate stipe, from E. chytrophilum by the smaller spores and conical, hardly expanding pileus, and from E. lepidissimum by the smaller spores and lack of the blue tinge in young lamellae. Entoloma klofacianum is characterized by the isodiametrical spores. The North American Leptonia subcoelestina is also close but differs by the larger spores and by the pileipellis, which lacks clamps and is composed of submoniliform cells.
Notes -Entoloma lepidissimum is recognized by the dark blue basidiomata with bluish lamellae. Microscopically the scattered cheilocystidia can also be distinctive. Despite the fact that the blue tinge of the lamellae was not mentioned in the protologue, all studied specimens are characterized by bluish lamellae. Molecular data support their identity with the holotype. The similar species E. coelestinum is distinguished by the white lamellae, smaller spores and more conical pileus. Entoloma chytrophilum possesses white lamellae, nodulose spores and a more applanate pileus.

Acknowledgements - lenko, V. Malysheva, A. Kiyashko, T. Svetasheva, L. Marina, O. Desyatova, O. Shiryaeva (Kirillova), A. Fedosova, E. Ilyukhin, E. Lukashina, S. Lukashin, E. Zvyagina, I. Ukhanova, and S. Arslanov. We express our sincere thanks to all of them. We are also grateful to the anonymous reviewers of the manuscript for their valuable and constructive comments. This work was supported in part by the Russian Foundation for Basic Research (project N 12-04-33018 mol-a-ved and N 13-04-00838 a).
Smoothed marginal distribution constraints for language modeling
We present an algorithm for re-estimating parameters of backoff n-gram language models so as to preserve given marginal distributions, along the lines of well-known Kneser-Ney (1995) smoothing. Unlike Kneser-Ney, our approach is designed to be applied to any given smoothed backoff model, including models that have already been heavily pruned. As a result, the algorithm avoids issues observed when pruning Kneser-Ney models (Siivola et al., 2007; Chelba et al., 2010), while retaining the benefits of such marginal distribution constraints. We present experimental results for heavily pruned backoff n-gram models, and demonstrate perplexity and word error rate reductions when used with various baseline smoothing methods. An open-source version of the algorithm has been released as part of the OpenGrm ngram library. 1
Introduction
Smoothed n-gram language models are the de facto standard statistical models of language for a wide range of natural language applications, including speech recognition and machine translation. Such models are trained on large text corpora, by counting the frequency of n-gram collocations, then normalizing and smoothing (regularizing) the resulting multinomial distributions. Standard techniques store the observed n-grams and derive probabilities of unobserved n-grams via their longest observed suffix and "backoff" costs associated with the prefix histories of the unobserved suffixes. Hence the size of the model grows with the number of observed n-grams, which is very large for typical training corpora.

1 www.opengrm.org

Natural language applications, however, are commonly used in scenarios requiring relatively small footprint models. For example, applications running on mobile devices or in low latency streaming scenarios may be required to limit the complexity of models and algorithms to achieve the desired operating profile. As a result, statistical language models - an important component of many such applications - are often trained on very large corpora, then modified to fit within some pre-specified size bound. One method to achieve significant space reduction is through randomized data structures, such as Bloom (Talbot and Osborne, 2007) or Bloomier (Talbot and Brants, 2008) filters. These data structures permit efficient querying for specific n-grams in a model that has been stored in a fraction of the space required to store the full, exact model, though with some probability of false positives. Another common approach - which we pursue in this paper - is model pruning, whereby some number of the n-grams are removed from explicit storage in the model, so that their probability must be assigned via backoff smoothing. One simple pruning method is count thresholding, i.e., discarding n-grams that occur less than k times in the corpus. Beyond count thresholding, the most widely used pruning methods (Seymore and Rosenfeld, 1996; Stolcke, 1998) employ greedy algorithms to reduce the number of stored n-grams by comparing the stored probabilities to those that would be assigned via the backoff smoothing mechanism, and removing those with the least impact according to some criterion.
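As a rough illustration of the two pruning strategies just mentioned, the following Python sketch shows count thresholding and a crude greedy criterion. It is not the SRILM/Stolcke (1998) algorithm; the callables stored_logprob and backoff_logprob, the impact proxy, and the data layout are assumptions made for the example.

```python
# Illustrative-only sketches of the pruning ideas described above.
# `counts` maps n-gram tuples to corpus counts; `stored_logprob(ng)` and
# `backoff_logprob(ng)` are hypothetical callables returning the explicitly
# stored log-probability of an n-gram and the log-probability it would be
# assigned through backoff if it were removed from the model.

def count_threshold_prune(counts, k):
    """Keep only n-grams observed at least k times."""
    return {ng: c for ng, c in counts.items() if c >= k}

def greedy_prune(ngrams, stored_logprob, backoff_logprob, threshold):
    """Drop n-grams whose removal changes their probability the least.

    This uses |stored - backoff| as a crude impact proxy; real relative-entropy
    pruning (Stolcke, 1998) additionally weights the change by history marginals.
    """
    kept = []
    for ng in ngrams:
        impact = abs(stored_logprob(ng) - backoff_logprob(ng))
        if impact >= threshold:
            kept.append(ng)
    return kept
```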
While these greedy pruning methods are highly effective for models estimated with most common smoothing approaches, they have been shown to be far less effective with Kneser-Ney trained language models (Siivola et al., 2007; Chelba et al., 2010), leading to severe degradation in model quality relative to other standard smoothing methods. Thus, while Kneser-Ney may be the preferred smoothing method for large, unpruned models - where it can achieve real improvements over other smoothing methods - when relatively sparse, pruned models are required, it has severely diminished utility.

Table 1: Perplexity and number of n-grams (×1000) of full and heavily pruned 4-gram models, for backoff and interpolated variants of several smoothing methods: absolute discounting (Ney et al., 1994), Ristad (1995), Katz (1987), and Kneser-Ney (Kneser and Ney, 1995; Chen and Goodman, 1998).

Table 1 presents a slightly reformatted version of Table 3 from Chelba et al. (2010). In their experiments (see Table 1 caption for specifics on training/test setup), they trained 4-gram Broadcast News language models using a variety of both backoff and interpolated smoothing methods and measured perplexity before and after Stolcke (1998) relative entropy based pruning. With this amount of training data, the perplexity of all of the smoothing methods other than Kneser-Ney degrades from around 120 with the full model to around 200 with the heavily pruned model. Kneser-Ney smoothed models have lower perplexity with the full model than the other methods by about 5 points, but degrade with pruning to far higher perplexity, between 270-285.
The cause of this degradation is Kneser-Ney's unique method for estimating smoothed language models, which will be presented in more detail in Section 3. Briefly, the smoothing method reestimates lower-order n-gram parameters in order to avoid over-estimating the likelihood of n-grams that already have ample probability mass allocated as part of higher-order n-grams. This is done via a marginal distribution constraint which requires the expected frequency of the lower-order n-grams to match their observed frequency in the training data, much as is commonly done for maximum entropy model training. Goodman (2001) proved that, under certain assumptions, such constraints can only improve language models. Lower-order n-gram parameters resulting from Kneser-Ney are not relative frequency estimates, as with other smoothing methods; rather they are parameters estimated specifically for use within the larger smoothed model.
There are (at least) a couple of reasons why such parameters do not play well with model pruning. First, the pruning methods commonly use lower order n-gram probabilities to derive an estimate of state marginals, and, since these parameters are no longer smoothed relative frequency estimates, they do not serve that purpose well. For this reason, the widely-used SRILM toolkit recently provided switches to modify their pruning algorithm to use another model for state marginal estimates (Stolcke et al., 2011). Second, and perhaps more importantly, the marginal constraints that were applied prior to pruning will not in general be consistent with the much smaller pruned model. For example, if a bigram parameter is modified due to the presence of some set of trigrams, and then some or all of those trigrams are pruned from the model, the overall expected frequency of that bigram is unlikely to equal its observed frequency anymore. As a result, the resulting model degrades dramatically with pruning.
In this paper, we present an algorithm that imposes marginal distribution constraints of the sort used in Kneser-Ney modeling on arbitrary smoothed backoff n-gram language models. Our approach makes use of the same sort of derivation as the original Kneser-Ney modeling, but, among other differences, relies on smoothed estimates of the empirical relative frequency rather than the unsmoothed observed frequency. The algorithm can be applied after the smoothed model has been pruned, hence avoiding the pitfalls associated with Kneser-Ney modeling. Furthermore, while Kneser-Ney is conventionally defined as a variant of absolute discounting, our method can be applied to models smoothed with any backoff smoothing, including mixtures of models, widely used for domain adaptation.
We next establish formal preliminaries and our smoothed marginal distribution constraints method.
Preliminaries
N-gram language models are typically presented mathematically in terms of words w, the strings (histories) h that precede them, and the suffixes of the histories (backoffs) h' that are used in the smoothing recursion. Let V be a vocabulary (alphabet), and V* the set of strings of zero or more symbols drawn from V. Let V^k denote the set of strings w ∈ V* of length k, i.e., |w| = k. We will use variables u, v, w, x, y, z ∈ V to denote single symbols from the vocabulary; h, g ∈ V* to denote history sequences preceding the specific word; and h', g' ∈ V* the respective backoff histories of h and g as typically defined (see below). For a string w = w_1 ... w_|w| we can calculate the smoothed conditional probability of each word w_i in the sequence given the k words that preceded it, depending on the order of the Markov model. Let h_i^k = w_{i−k} ... w_{i−1} be the previous k words in the sequence. Then the smoothed model is defined recursively as follows:

P(w_i | h_i^k) = P̄(w_i | h_i^k)                   if c(h_i^k w_i) > 0
               = α(h_i^k) P(w_i | h_i^{k−1})       otherwise

where c(h_i^k w_i) is the count of the n-gram sequence w_{i−k} ... w_i in the training corpus; P̄ is a regularized probability estimate that provides some probability mass for unobserved n-grams; and α(h_i^k) is a factor that ensures normalization. Note that for h = h_i^k, the typically defined backoff history is h' = h_i^{k−1}, i.e., the longest suffix of h that is not h itself. When we use h' and g' (for notational convenience) in future equations, it is this definition that we are using.
There are many ways to estimate P̄, including absolute discounting (Ney et al., 1994), Katz (1987) and Witten and Bell (1991). Interpolated models are special cases of this form, where P̄ is determined using model mixing, and the α parameter is exactly the mixing factor value for the lower order model. N-gram language models allow for a sparse representation, so that only a subset of the possible n-grams must be explicitly stored. Probabilities for the rest of the n-grams are calculated through the "otherwise" semantics in the equation above. For an n-gram language model G, we will say that an n-gram hw ∈ G if it is explicitly represented in the model; otherwise hw ∉ G. In the standard n-gram formulation above, the assumption is that if c(h_i^k w_i) > 0 then the n-gram has a parameter; yet with pruning, we remove many observed n-grams from the model, hence this is no longer the appropriate criterion. We reformulate the standard equation as follows:

P(w_i | h_i^k) = β(h_i^k w_i)                      if h_i^k w_i ∈ G
               = α(h_i^k) P(w_i | h_i^{k−1})        otherwise          (1)

We assume that, if hw ∈ G, then all prefixes and suffixes of hw are also in G. Figure 1 presents a schema of an automaton representation of an n-gram model, of the sort used in the OpenGrm library (Roark et al., 2012). States represent histories h, and the words w, whose probabilities are conditioned on h, label the arcs, leading to the history state for the subsequent word. State labels are provided in Figure 1 as a convenience, to show the (implicit) history encoded by the state, e.g., 'xyz' indicates that the state represents a history with the previous three symbols being x, y and z. Failure arcs, labeled with a φ in Figure 1, encode an "otherwise" semantics and have as destination the origin state's backoff history. Many higher order states will back off to the same lower order state, specifically those that share the same suffix.
Note that, in general, the recursive definition of backoff may require the traversal of several backoff arcs before emitting a word, e.g., the highest order states in Figure 1 needing to traverse a couple of φ arcs to reach state 'z'. We can define the backoff cost between a state h_i^k and any of its suffix states as follows. Let α(h, h) = 1 and, for m ≥ 1,

α(h_i^k, h_i^{k−m}) = α(h_i^k) α(h_i^{k−1}, h_i^{k−m})

If h_i^k w ∉ G then the probability of that n-gram will be defined in terms of backoff to the longest suffix h_i^{k−m} such that h_i^{k−m} w ∈ G. Let h_wG denote the longest suffix of h such that h_wG w ∈ G. Note that this is not necessarily a proper suffix, since h_wG could be h itself or it could be the empty history ε. Then

P(w | h) = α(h, h_wG) β(h_wG w)          (2)

which is equivalent to equation 1.
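The sparse lookup implied by equations (1) and (2) can be sketched in a few lines of Python. This is an illustration only, not the OpenGrm implementation; the dictionary-based storage, class name, and example numbers are assumptions. The lookup walks suffixes of the history until it reaches the longest suffix stored in G, accumulating the backoff weights α along the way.

```python
# A minimal sketch (not the OpenGrm implementation) of the sparse backoff
# lookup described above: probabilities of explicitly stored n-grams come
# from beta, and all other n-grams back off to their longest stored suffix,
# accumulating alpha weights along the way.

class BackoffNGramModel:
    def __init__(self, beta, alpha):
        # beta: dict mapping (history_tuple, word) -> stored parameter beta(hw)
        # alpha: dict mapping history_tuple -> backoff weight alpha(h)
        self.beta = beta
        self.alpha = alpha

    def prob(self, history, word):
        """P(word | history) via equation (2): alpha(h, h_wG) * beta(h_wG w)."""
        h = tuple(history)
        cost = 1.0
        while True:
            if (h, word) in self.beta:          # h_wG found: longest suffix with hw in G
                return cost * self.beta[(h, word)]
            if not h:                           # empty history: unigrams are never pruned
                raise KeyError(f"unigram {word!r} missing from model")
            cost *= self.alpha.get(h, 1.0)      # accumulate alpha(h, h') along the chain
            h = h[1:]                           # back off to the longest proper suffix


# Tiny usage example with made-up numbers (for illustration only).
beta = {((), "z"): 0.1, (("y",), "z"): 0.4}
alpha = {("x", "y"): 0.6, ("y",): 0.5}
m = BackoffNGramModel(beta, alpha)
print(m.prob(("x", "y"), "z"))   # 0.6 * 0.4 = 0.24 (backs off once)
```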
Marginal distribution constraints
Marginal distribution constraints attempt to match the expected frequency of an n-gram with its observed frequency. In other words, if we use the model to randomly generate a very large corpus, the n-grams should occur with the same relative frequency in both the generated and original (training) corpus. Standard smoothing methods overgenerate lower-order n-grams. Using standard n-gram notation (where g' is the backoff history for g), this constraint is stated in Kneser and Ney (1995) as

P̂(w | g') = Σ_{g: g backs off to g'} P(w | g, g') P̂(g | g')          (3)

where P̂ is the empirical relative frequency estimate. Taking this approach, certain base smoothing methods end up with very nice, easy to calculate solutions based on counts. Absolute discounting (Ney et al., 1994) in particular, using the above approach, leads to the well-known Kneser-Ney smoothing approach (Kneser and Ney, 1995; Chen and Goodman, 1998). We will follow this same approach, with a couple of changes. First, we will make use of regularized estimates of relative frequency P̄ rather than the raw relative frequency P̂. Second, rather than just looking at observed histories h that back off to h', we will look at all histories (observed or not) of the length of the longest history in the model. For notational simplicity, suppose we have an n+1-gram model, hence the longest history in the model is of length n. Assume the length of the particular backoff history |h'| = k. Let V^{n−k}_{h'} be the set of strings h ∈ V^n with h' as a suffix. Then we can restate the marginal distribution constraint in equation 3 as

P̄(w | h') = Σ_{h ∈ V^{n−k}_{h'}} P(w | h, h') P(h | h')          (4)

Next we solve for the β(h'w) parameters used in equation 1. Note that h' is a suffix of any h ∈ V^{n−k}_{h'}, so conditioning probabilities on h and h' is the same as conditioning on just h. Each of the following derivation steps simply relies on the chain rule or definition of conditional probability, as well as pulling terms out of the summation:

P̄(w | h') = Σ_{h ∈ V^{n−k}_{h'}} P(w | h) P(h) / Σ_{h ∈ V^{n−k}_{h'}} P(h)          (5)
Then, multiplying both sides by the normalizing denominator on the right-hand side and using equation 2 to substitute α(h, h_wG) β(h_wG w) for P(w | h):

P̄(w | h') Σ_{h ∈ V^{n−k}_{h'}} P(h) = Σ_{h ∈ V^{n−k}_{h'}} α(h, h_wG) β(h_wG w) P(h)          (6)

Note that we are only interested in h'w ∈ G, hence there are two disjoint subsets of histories h ∈ V^{n−k}_{h'} that are being summed over: those such that h_wG = h' and those such that |h_wG| > |h'|. We next separate these sums in the next step of the derivation:

P̄(w | h') Σ_{h ∈ V^{n−k}_{h'}} P(h) = Σ_{h: |h_wG| > |h'|} α(h, h_wG) β(h_wG w) P(h) + Σ_{h: h_wG = h'} α(h, h') β(h'w) P(h)          (7)

Finally, we solve for β(h'w) in the second sum on the right-hand side of equation 7, yielding the formula in equation 8:

β(h'w) = [ P̄(w | h') Σ_{h ∈ V^{n−k}_{h'}} P(h) − Σ_{h: |h_wG| > |h'|} α(h, h_wG) β(h_wG w) P(h) ] / Σ_{h: h_wG = h'} α(h, h') P(h)          (8)

Note that this equation is the correlate of equation (6) in Kneser and Ney (1995), modulo the two differences noted earlier: use of the smoothed probability P̄ rather than the raw relative frequency; and summing over all history substrings in V^{n−k}_{h'} rather than just those with count greater than zero, which is also a change due to smoothing. Keep in mind, P̄ is the target expected frequency from a given smoothed model. Kneser-Ney models are not useful input models, since their P̄ n-gram parameters are not relative frequency estimates. This means that we cannot simply 'repair' pruned Kneser-Ney models, but must use other smoothing methods where the smoothed values are based on relative frequency estimation.
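A minimal sketch of how equation 8 might be evaluated for a single backed-off n-gram h'w is given below, assuming the steady-state probabilities P(h), the smoothed target P̄(w | h'), the accumulated backoff costs, and the already re-estimated higher-order β values are available. All function and variable names are hypothetical, and the returned value is unnormalized; the final β values are proportional to it.

```python
# Illustrative sketch (not the paper's implementation) of evaluating Eq. 8 for
# one lower-order n-gram h'w.  Assumes: `state_prob[h]` holds the steady-state
# probability P(h) of each length-n history h with h' as a suffix, `p_bar` is
# the smoothed target P_bar(w | h'), `alpha_cost(h, g)` is the accumulated
# backoff cost between h and its suffix g, `longest_suffix_in_G(h, w)` returns
# h_wG, and `beta[(g, w)]` holds already re-estimated higher-order parameters.

EPS = 0.001  # floor used when the numerator would otherwise be non-positive

def reestimate_beta(h_prime, w, p_bar, state_prob, beta,
                    alpha_cost, longest_suffix_in_G):
    numerator = 0.0
    denominator = 0.0
    for h, p_h in state_prob.items():          # all h in V^{n-k}_{h'}
        numerator += p_bar * p_h               # P_bar(w|h') * sum_h P(h)
        g = longest_suffix_in_G(h, w)          # h_wG
        if g == h_prime:
            denominator += alpha_cost(h, h_prime) * p_h
        else:                                   # |h_wG| > |h'|: already re-estimated
            numerator -= alpha_cost(h, g) * beta[(g, w)] * p_h
    numerator = max(numerator, EPS)             # guard against over-generation
    return numerator / denominator              # unnormalized beta(h'w)
```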
There are, in addition, two other important differences in our approach from that in Kneser and Ney (1995), which would remain as differences even if our target expected frequency were the unsmoothed relative frequency P̂ instead of the smoothed estimate P̄. First, the sum in the numerator is over histories of length n, the highest order in the n-gram model, whereas in the Kneser-Ney approach the sum is over histories that immediately back off to h', i.e., from the next highest order in the n-gram model. Thus the unigram distribution is with respect to the bigram model, the bigram model is with respect to the trigram model, and so forth. In our optimization, we sum instead over all possible history sequences of length n. Second, an early assumption made in Kneser and Ney (1995) is that the denominator term in their equation (6) (our Eq. 8) is constant across all words for a given history, which is clearly false. We do not make this assumption. Of course, the probabilities must be normalized, hence the final values of β(h'w) will be proportional to the values in Eq. 8.
We briefly note that, like Kneser-Ney, if the baseline smoothing method is consistent, then the amount of smoothing in the limit will go to zero and our resulting model will also be consistent.
The smoothed relative frequency estimate P̄ and the higher order β values on the right-hand side of Eq. 8 are given values (from the input smoothed model and previous stages in the algorithm, respectively), implying an algorithm that estimates the highest orders of the model first. In addition, steady state history probabilities P(h) must be calculated. We turn to the estimation algorithm next.
Model constraint algorithm
Our algorithm takes a smoothed backoff n-gram language model in an automaton format (see Figure 1) and returns a smoothed backoff n-gram language model with the same topology. For all n-grams in the model that are suffixes of other n-grams in the model - i.e., that are backed off to - we calculate the weight provided by equation 8 and assign it (after normalization) to the appropriate n-gram arc in the automaton. There are several important considerations for this algorithm, which we address in this section. First, we must provide a probability for every state in the model. Second, we must memoize summed values that are used repeatedly. Finally, we must iterate the calculation of certain values that depend on the n-gram weights being re-estimated.
Steady state probability calculation
The steady state probability P(h) is taken to be the probability of observing h after a long word sequence, i.e., the state's relative frequency in a long sequence of randomly-generated sentences from the model:

P(h) = lim_{m→∞} (1/m) Σ_{i=1}^{m} P̃(state h after i words)          (9)

where P̃ is the corpus probability derived as follows. The smoothed n-gram probability model P(w | h) is naturally extended to a sentence s = w_0 ... w_l, where w_0 = <s> and w_l = </s> are the sentence initial and final words, by P(s) = ∏_{i=1}^{l} P(w_i | h_i^n). The corpus probability of s_1 ... s_r is taken as

P̃(s_1 ... s_r) = λ^{r−1} (1 − λ) ∏_{j=1}^{r} P(s_j)

where λ parameterizes the corpus length distribution.² Assuming the n-gram language model automaton G has a single final state </s> into which all </s> arcs enter, adding a λ-weighted arc from the </s> state to the initial state and having a final weight 1 − λ in order to leave the automaton at the </s> state will model this corpus distribution. According to Eq. 9, P(h) is then the stationary distribution of the finite irreducible Markov chain defined by this altered automaton. There are many methods for computing such a stationary distribution; we use the well-known power method (Stewart, 1999).

² P̃ models words in a corpus rather than a single sentence since Equation 9 tends to zero as m → ∞ otherwise. In Markov chain terms, the corpus distribution is made irreducible to allow a non-trivial stationary distribution.
One difficulty remains to be resolved. The backoff arcs have a special interpretation in the automaton: they are traversed only if a word fails to match at the higher order. These failure arcs must be properly handled before applying standard stationary distribution calculations. A simple approach would be, for each word w and state h such that hw ∉ G but h'w ∈ G, to add a w arc from state h to the destination of the h'w arc with weight α(h, h')β(h'w) and then remove all failure arcs (see Figure 2a). This however results in an automaton with |V| arcs leaving every state, which is unwieldy with larger vocabularies and n-gram orders. Instead, for each word w and state h such that hw ∈ G, add a w arc from state h to the destination of the h'w arc with weight −α(h, h')β(h'w) and then replace all failure labels with ε labels (see Figure 2b). In this case, the added negatively-weighted arcs compensate for the excess probability mass allowed by the epsilon arcs.³ The number of added arcs is no more than found in the original model.
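As a concrete illustration of the stationary-distribution step, here is a small power-method sketch in Python. It operates on a dense row-stochastic matrix purely for readability; a real implementation would work on the sparse ε-expanded automaton with the compensating negative arcs described above, and the matrix values below are made up.

```python
import numpy as np

# Illustrative power-method sketch for the steady-state probabilities P(h).
# `transition` is assumed to be a row-stochastic matrix over the model's
# states after the failure arcs have been handled and the </s> -> initial
# lambda-arc has been added.

def stationary_distribution(transition, tol=1e-10, max_iters=10_000):
    n = transition.shape[0]
    pi = np.full(n, 1.0 / n)                 # start from the uniform distribution
    for _ in range(max_iters):
        nxt = pi @ transition                # one power-method step
        if np.abs(nxt - pi).sum() < tol:     # L1 convergence check
            return nxt
        pi = nxt
    return pi

# Tiny 3-state example (made-up numbers) just to show the call.
T = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
print(stationary_distribution(T))
```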
Accumulation of higher order values
We are summing over all possible histories of length n in equation 8, and the steady state probability calculation outlined in the previous section includes the probability mass for histories h ∉ G. The probability mass of states not in G ends up being allocated to the state representing their longest suffix that is explicitly in G. That is the state that would be active when these histories are encountered. Hence, once we have calculated the steady state probabilities for each state in the smoothed model, we only need to sum over states explicitly in the model.
As stated earlier, the use of β(h_wG w) in the numerator of equation 8 for h_wG that are longer than h' implies that the longer n-grams must be re-estimated first. Thus we process each history length in descending order, finishing with the unigram state. Since we assume that, for every n-gram hw ∈ G, every prefix and suffix is also in G, we know that if h'w ∉ G then there is no history h such that h' is a suffix of h and hw ∈ G. This allows us to recursively accumulate the α(h, h') P(h) terms in the denominator of Eq. 8.
For every n-gram, we can accumulate the values required to calculate the three terms in equation 8, and pass them along to calculate lower order n-gram values. Note, however, that a naive implementation of an algorithm to assign these values is O(|V|^n). This is due to the fact that the denominator factor must be accumulated for all higher order states that do not have the given n-gram. Hence, for every state h directly backing off to h' (order |V|), and for every n-gram arc leaving state h' (order |V|), some value must be accumulated. This can be seen particularly clearly at the unigram state, which has an arc for every unigram (the size of the vocabulary): for every bigram state (also on the order of the vocabulary), in the naive algorithm we must look for every possible arc. Since there are O(|V|^{n−2}) lower order histories in the model in the worst case, we have overall complexity O(|V|^n). However, we know that the number of stored n-grams is very sparse relative to the possible number of n-grams, so the typical case complexity is far lower. Importantly, the denominator is calculated by first assuming that all higher order states back off to the current n-gram, then subtracting out the mass associated with those that are already observed at the higher order. In such a way, we need only perform work for higher order n-grams hw that are explicitly in the model. This optimization achieves orders-of-magnitude speedups, so that models take seconds to process.
Because smoothing is not necessarily constrained across n-gram orders, it is possible that higher-order n-grams could be smoothed less than lower order n-grams, so that the numerator of equation 8 can be less than zero, which is not valid. A value less than zero means that the higher order n-grams will already produce the n-gram more frequently than its smoothed expected frequency. We set a minimum value ε for the numerator, and any n-gram numerator value less than ε is replaced with ε (for the current study, ε = 0.001). We find this to be relatively infrequent, about 1% of n-grams for most models.
Iteration
Recall that the P̄ and β terms on the right-hand side of equation 8 are given and do not change. But there are two other terms in the equation that change as we update the n-gram parameters. The α(h, h') backoff weights in the denominator ensure normalization at the higher order states, and change as the n-gram parameters at the current state are modified. Further, the steady state probabilities will change as the model changes. Hence, at each state, we must iterate the calculation of the denominator term: first adjust n-gram weights and normalize; then recalculate backoff weights at higher order states and iterate. Since this only involves the denominator term, each n-gram weight can be updated by multiplying by the ratio of the old term and the new term. After the entire model has been re-estimated, the steady state probability calculation presented in Section 4.1 is run again and model estimation happens again. As we shall see in the experimental results, this typically converges after just a few iterations. At this time, we have no convergence proofs for either of these iterative components to the algorithm, but expect that something can be said about this, which will be a priority in future work.
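The overall control flow described in this section can be summarized in a short skeleton. Everything below is hypothetical scaffolding rather than the released implementation: the callables for steady-state computation, per-n-gram re-estimation, and backoff re-normalization are assumed to be supplied by the caller.

```python
# Skeleton (hypothetical callables, not the released OpenGrm code) of the outer
# loop described above: recompute steady-state probabilities, then re-estimate
# backed-off n-grams one order at a time, highest orders first, re-normalizing
# backoff weights after each order, and repeat for a few outer iterations.

def apply_marginal_constraints(ngrams_by_order, compute_steady_state,
                               reestimate_one, renormalize_order, outer_iters=4):
    # ngrams_by_order: dict mapping n-gram order -> list of backed-off n-grams.
    for _ in range(outer_iters):
        state_prob = compute_steady_state()              # Section 4.1, power method
        for order in sorted(ngrams_by_order, reverse=True):
            for ngram in ngrams_by_order[order]:
                reestimate_one(ngram, state_prob)         # Eq. 8 (with the epsilon floor)
            renormalize_order(order)                      # update alphas at higher orders
```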
Experimental results
All results presented here are for English Broadcast News. We received scripts for replicating the Chelba et al. (2010) results from the authors, and we report statistics on our replication of their paper's results in Table 2. The scripts are distributed in such a way that the user supplies the data from LDC98T31 (1996 CSR HUB4 Language Model corpus) and the script breaks the collection into training and testing sets, normalizes the text, and trains and prunes the language models using the SRILM toolkit (Stolcke et al., 2011). Presumably due to minor differences in text normalization, resulting in very slightly fewer n-grams in all conditions, we achieve negligibly lower perplexities (one or two tenths of a point) in all conditions, as can be seen when comparing with Table 1. All of the same trends result, thus that paper's result is successfully replicated here. Note that we ran our Kneser-Ney pruning (noted with a † in the table), using the new -prune-history-lm switch in SRILM -created in response to the Chelba et al. (2010) paper -which allows the use of another model to calculate the state marginals for pruning. This fixes part of the problem -perplexity does not degrade as much as the Kneser-Ney pruned model in Table 1 -but, as argued earlier in this paper, this is not the sole reason for the degradation and the perplexity remains extremely inflated.
We follow Chelba et al. (2010) in training and test set definition, vocabulary size, and parameters for reporting perplexity. Note that unigrams in the models are never pruned, hence all models assign probabilities over an identical vocabulary and perplexity is comparable across models. For all results reported here, we use the SRILM toolkit for baseline model training and pruning, then convert from the resulting ARPA format model to an OpenFst format (Allauzen et al., 2007), as used in the OpenGrm n-gram library (Roark et al., 2012). We then apply the marginal distribution constraints, and convert the result back to ARPA format for perplexity evaluation with the SRILM toolkit. All models are subjected to full normalization sanity checks, as with typical model functions in the OpenGrm library.
Recall that our algorithm assumes that, for every n-gram in the model, all prefix and suffix n-grams are also in the model. For pruned models, the SRILM toolkit does not impose such a requirement, hence explicit arcs are added to the model during conversion, with probability equal to what they would receive in the original model. The resulting model is equivalent, but with a small number of additional arcs in the explicit representation (around 1% for the most heavily pruned models).

Table 3: Perplexity reductions achieved with marginal distribution constraints (MDC) on the heavily pruned models from Chelba et al. (2010), and a mixture model. WFST n-gram counts are slightly higher than ARPA format in Table 2 due to adding prefix and suffix n-grams.

Table 3 presents perplexity results for models that result from applying our marginal distribution constraints to the four heavily pruned models from Table 2. In all four cases, we get perplexity reductions of around 10 points. We present the number of n-grams represented explicitly in the WFST, which is a slight increase from those presented in Table 2 due to the reintroduction of prefix and suffix n-grams.
In addition to the four models reported in Chelba et al. (2010), we produced a mixture model by interpolating (with equal weight) smoothed n-gram probabilities from the full (unpruned) absolute discounting, Witten-Bell and Katz models, which share the same set of n-grams. After renormalizing and pruning to approximately the same size as the other models, we get gains with this model commensurate with those for the other models. Figure 3 demonstrates the importance of iterating the steady state history calculation. All of the methods achieve perplexity reductions with subsequent iterations. Katz and absolute discounting achieve very little reduction in the first iteration, but catch back up in the second iteration.
The other iterative part of the algorithm, discussed in Section 4.3, is the denominator of equation 8, which changes due to adjustments in the backoff weights required by the revised n-gram probabilities. If we do not iteratively update the backoff weights when reestimating the weights, the 'Pruned+MDC' perplexities in Table 3 increase by between 0.2-0.4 points. Hence, iterating the steady state probability calculation is quite important, as illustrated by Figure 3; iterating the denominator calculation much less so, at least for these models. We noted in Section 3 that a key difference between our approach and Kneser and Ney (1995) is that their approach treated the denominator as a constant. If we do this, the 'Pruned+MDC' perplexities increase by between 4.5-5.6 points, i.e., about half of the perplexity reduction is lost for each method. Thus, while iteration of denominator calculation may not be critical, it should not be treated as a constant.
We now look at the impacts on system performance we can achieve with these new models 4 , and whether the perplexity differences that we observe translate to real error rate reductions.
For automatic speech recognition experiments, we used as test set the 1997 Hub4 evaluation set consisting of 32,689 words. The acoustic model is a tied-state triphone GMM-based HMM whose input features are 9-frame stacked 13-dimensional PLP-cepstral coefficients projected down to 39 dimensions using LDA. The model was trained on the 1996 and 1997 Hub4 acoustic model training sets (about 150 hours of data) using semi-tied covariance modeling and CMLLR-based speaker adaptive training and 4 iterations of boosted MMI.
We used a multi-pass decoding strategy: two quick passes for adaptation supervision, CMLLR and MLLR estimation; then a slower full decoding pass running about 3 times slower than real time. Table 4 presents recognition results for the heavily pruned models that we have been considering, both for first pass decoding and rescoring of the resulting lattices using failure transitions rather than epsilon backoff approximations.

Table 4: Recognition results for the heavily pruned models from Chelba et al. (2010), and a mixture model. Kneser-Ney results are shown for: a) original pruning; and b) with the -prune-history-lm switch.
The perplexity reductions that were achieved for these models do translate to real word error rate reductions at both stages of between 0.5 and 0.9 percent absolute. All of these gains are statistically significant at p < 0.0001 using the stratified shuffling test (Yeh, 2000). For pruned Kneser-Ney models, fixing the state marginals with the -prune-history-lm switch reduces the WER versus the original pruned model, but no reductions were achieved vs. baseline methods. Table 5 presents perplexity and WER results for less heavily pruned models, where the pruning thresholds were set to yield approximately 1.5 million n-grams (4 times more than the previous models); and another set at around 5 million n-grams, as well as the full, unpruned models. While the robust gains we've observed up to now persist with the 1.5M n-gram models (WER reductions significant, Witten-Bell at p < 0.02, others at p < 0.0001), the larger models yield diminishing gains, with no real WER improvements. Performance of Witten-Bell models with the marginal distribution constraints degrades badly for the larger models, indicating that this method of regularization, unmodified by aggressive pruning, does not provide a well-suited distribution for this sort of optimization. We speculate that this is due to under-regularization, having noted some floating point precision issues when allowing the backoff recalculation to run indefinitely.
Summary and Future Directions
The presented method reestimates lower order n-gram model parameters for a given smoothed backoff model, achieving perplexity and WER reductions for many smoothed models. There remain a number of open questions to investigate in the future. Recall that the numerator in Eq. 8 can be less than zero, meaning that no parameterization would lead to a model with the target frequency of the lower order n-gram, presumably due to over- or under-regularization. We anticipate a pre-constraint on the baseline smoothing method that would recognize this problem and adjust the smoothing to ensure that a solution does exist. Additionally, it is clear that different regularization methods yield different behaviors, notably that large, relatively lightly pruned Witten-Bell models yield poor results. We will look to identify the issues with such models and provide general guidelines for prepping models prior to processing. Finally, we would like to perform extensive controlled experimentation to examine the relative contribution of the various aspects of our approach.
Acknowledgments
Thanks to Ciprian Chelba and colleagues for the scripts to replicate their results. This work was supported in part by a Google Faculty Research Award and NSF grant #IIS-0964102. Any opinions, findings, conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NSF.

Table 5: Perplexity (PPL) and both first pass (FP) and rescoring (RS) WER reductions for less heavily pruned models using marginal distribution constraints (MDC).
Glaucomatous Visual Field Defect Severity and the Prevalence of Motor Vehicle Collisions in Japanese: A Hospital/Clinic-Based Cross-Sectional Study
Purpose. This study examined the association between the severity of visual field defects and the prevalence of motor vehicle collisions (MVCs) in subjects with primary open-angle glaucoma (POAG). Methods. This is a cross-sectional study. Japanese patients between 40 and 85 years of age who held a driver's licence were screened for eligibility. Participants answered a questionnaire about MVCs experienced during the previous 5 years. Subjects with POAG were classified as having a mild, moderate, or severe visual field defect. We evaluated associations between the severity of POAG and the prevalence of MVCs by logistic regression models. Results. The prevalence of MVCs was significantly associated with the severity of POAG categorized by worse eye MD (control: 30/187 = 16.0%; mild POAG: 17/92 = 18.5%; moderate POAG: 14/60 = 23.3%; severe POAG: 14/47 = 29.8%; P = 0.025, Cochran-Armitage trend test). Compared to the control group, the adjusted OR for MVC prevalence in subjects with mild, moderate, or severe POAG in the worse eye was 1.07 (95% CI: 0.55 to 2.10), 1.44 (95% CI: 0.68 to 3.08), and 2.28 (95% CI: 1.07 to 4.88), respectively. Conclusions. There is a significant association between the severity of glaucoma categorized by the worse eye MD and the prevalence of MVCs.
Introduction
Motor vehicle collisions (MVCs) are among the most common serious public health concerns in the world: it is estimated that each year 1 to 2 million people die in MVCs and another 50 million are injured, costing the global community about US$518 billion [1].
Glaucoma is the second leading cause of blindness in the world, affecting approximately 5 million adults globally and damaging both peripheral and central vision [2]. Age is a significant risk factor for primary open-angle glaucoma (POAG) [3]. As the elderly population and, concomitantly, the number of elderly drivers continue to grow in both developed and developing countries, ever more elderly drivers with glaucomatous visual field defects are on the road. Glaucoma patients have been reported to have problems in everyday vision-related activities such as search [4] and face recognition [5]. Smith et al. reported that, in comparison to control patients, glaucoma patients have difficulty finding objects in photographs of everyday scenes [4]. Since driving is a highly vision-dependent task, individuals with glaucoma may have a higher risk of being involved in MVCs.
Several studies have investigated associations between glaucoma and MVCs [6][7][8][9][10][11][12][13]. Haymes et al. studied normal, healthy control subjects and 48 subjects with glaucoma and concluded that individuals with glaucoma were more than 6 times more likely to be involved in MVCs [13]. However, the associations between the severity of glaucomatous visual field defects and the prevalence of MVCs have not been clarified, so we examined these associations in subjects with POAG.
Subjects and Methods
This study's procedures conformed to the tenets of the Declaration of Helsinki and to national (Japanese) and institutional (Keio University School of Medicine) regulations. The study was approved by the Ethics Committee of the Keio University School of Medicine (number 2010293). All study subjects gave informed, written consent prior to being enrolled.
Study Design and Subject
Enrolment. Japanese patients between 40 and 85 years of age who visited Keio University Hospital (Tokyo, Japan), Iidabashi Eye Clinic (Tokyo, Japan), or Tanabe Eye Clinic (Yamanashi, Japan) between May 1, 2011, and November 30, 2011, were screened for eligibility for this cross-sectional study. Patients with POAG and control subjects were screened at these institutions' glaucoma clinics and general outpatient clinics, respectively.
Evaluation of Subjects with Glaucoma.
Patients with glaucoma were screened for eligibility using a battery of ophthalmic examinations, including slit-lamp biomicroscopy, funduscopy, gonioscopy, intraocular pressure measurements by Goldmann applanation tonometry, and visual field examination with a Humphrey visual field analyser (HFA) and the 24-2 Swedish Interactive Threshold Algorithm Standard Strategy (Carl Zeiss Meditec, Dublin, CA). The findings were analyzed by S. T. and K. Y., who subspecialize in glaucoma. The reliability of the findings was confirmed to be high, with less than a 20% fixation loss rate and less than a 15% false-positive rate [14].
POAG was diagnosed when three findings were present: (1) glaucomatous optic cupping, represented by notch formation, generalized cup enlargement, a senile sclerotic or myopic disc, or nerve-fibre layer defects; (2) glaucomatous visual field defects, defined according to Anderson and Patella's criteria (a cluster of 3 or more points in the pattern deviation plot within a single hemifield (superior or inferior) with a value < 5%, one of which must have a value < 1%; [15]); and (3) an open angle observed on gonioscopy.
Evaluation of Control Subjects.
Most of these patients were seen for an annual eye check-up or for outer adnexal disease. Control subjects were evaluated by ophthalmic examination, including best corrected visual acuity (BCVA) measurements, autorefractometry, slit-lamp biomicroscopy, funduscopy, and intraocular pressure measurements using Goldmann applanation tonometry or a noncontact tonometer. The findings were analysed by S. Tanabe and K. Yuki. Control subjects had to be free of ocular fundus disease that might affect visual function and to have a decimal BCVA in both eyes of 0.7 or more.
Exclusion Criteria.
Subjects were excluded if they had an ophthalmologic disease other than POAG that could potentially compromise visual acuity or contribute to visual field loss, such as secondary glaucoma or age-related macular degeneration. Subjects were also excluded if they had a decimal BCVA of less than 0.7, if they did not have a driver's license or drove 1 kilometre or less per week, or if they had a mental disorder that prevented them from understanding the questionnaire. Of the 943 consecutive subjects screened, 557 were excluded. The reasons for excluding subjects were as follows (the numbers in parentheses indicate the number of subjects excluded): being younger than 40 (53), being older than 85 (56), refusal to participate (10), dementia (2), low visual acuity (26), secondary glaucoma (61), primary angle-closure glaucoma (15), postretinal-detachment (20), diabetic retinopathy (36), bullous keratopathy (2), age-related macular degeneration (5), other ocular diseases (7), never having a driver's license (175), driving less than 1 kilometer per week (89).
Evaluation of Motor Vehicle Collisions.
All study participants answered the following questionnaire in Japanese (translated):

(1) Do you have a driver's license? (Yes/No/Previously)
(2) How long have you driven/did you drive a car? ( years)
(3) How many kilometres per week do you normally drive? ( km)
(4) Have you been involved in one or more traffic accidents in the past five years, including a single-car or minor accident, in which you were driving the car? (Yes/No)
(5) How many traffic accidents have you been involved in during the past five years? ( )

Demographic information recorded for all subjects included age, sex, height, weight, alcohol intake (yes/no), smoking history (yes/no/previous), current and previous illnesses (e.g., systemic hypertension, diabetes mellitus, depression, and brain infarction), and medical history, including oral medications such as sleeping aids, antihypertensive drugs, or tranquilizers.
Integrated Binocular Visual Field.
A binocular integrated visual field (IVF) was calculated for each patient by merging a patient's monocular HFA VFs, using the "best sensitivity" method, where the IVF total deviation (TD) at each point was calculated using the maximum TD (least negative) value from each of the two overlapping points, as if the subject was viewing the field binocularly [16]. The IVF MD was calculated as the mean of 52 TD values across the visual field. We were unable to obtain IVF data for 8 POAG subjects.
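A minimal sketch of the "best sensitivity" merge is shown below; it is not the authors' code, and it simply takes, point by point, the less negative of the two monocular TD values and averages the 52 merged values to obtain the IVF MD, as described above.

```python
import numpy as np

# Minimal sketch (not the authors' code) of the "best sensitivity" merge:
# the integrated visual field takes, at each of the 52 test points, the less
# negative total-deviation (TD) value of the two eyes, and the IVF MD is the
# mean of those merged values.  Input arrays are assumed to hold the 52
# monocular TD values in corresponding spatial order.

def integrated_visual_field(td_right, td_left):
    td_right = np.asarray(td_right, dtype=float)
    td_left = np.asarray(td_left, dtype=float)
    ivf_td = np.maximum(td_right, td_left)   # best (least negative) TD per point
    ivf_md = ivf_td.mean()                   # mean over the 52 points
    return ivf_td, ivf_md

# Example with made-up TD values (dB) for the two eyes:
right = np.full(52, -2.0)
left = np.full(52, -8.0)
_, md = integrated_visual_field(right, left)
print(round(md, 1))   # -2.0
```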
Grading Glaucoma Severity.
For this study, we defined mild POAG as a visual field defect corresponding to a mean deviation (MD) of −6 dB or better, moderate POAG as an MD between −6 and −12 dB, and severe POAG as an MD of −12 dB or worse [17]. For each patient, we determined POAG severity for the worse eye (the more negative MD), the better eye (the more positive MD), and the IVF.
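The MD-based grading can be expressed as a small helper function; the function name is hypothetical and the cut-offs are the ones stated above (−6 dB and −12 dB).

```python
# Simple helper mirroring the severity grading described above:
# mild if MD >= -6 dB, moderate if -12 dB < MD < -6 dB, severe if MD <= -12 dB.

def grade_poag_severity(md_db):
    if md_db >= -6.0:
        return "mild"
    if md_db > -12.0:
        return "moderate"
    return "severe"

print(grade_poag_severity(-4.2))    # mild
print(grade_poag_severity(-9.0))    # moderate
print(grade_poag_severity(-15.5))   # severe
```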
Adjusting for Age.
In our previous report, the results could have been biased by significant age differences between the control group and the three POAG groups [12]. In this study, the average ages of the groups were compared by ANOVA at the end of each month. When significant age differences were found between the groups, we matched the ages by changing the screening criteria in the youngest group from 40-85 years of age to 45-85 years of age. This adjustment was necessary only in the mild glaucoma group and was made only in the months of September and November.
Statistical Analysis.
Descriptive statistics were calculated for the demographic, medical, and visual-function variables. The homogeneity of distribution between the control and POAG groups was examined by ANOVA, Kruskal-Wallis test, chi-square test, or Fisher's exact test, depending on the variables. The association between POAG severity and the prevalence of MVCs was evaluated with the Cochran-Armitage trend test. Adjusted ORs and 95% CIs for the prevalence of MVCs were estimated with logistic regression models to examine the effects of the following confounding factors on unadjusted results, by the forced-entry method: glaucoma severity (control, mild, moderate, and severe POAG groups), age, sex, the presence of diabetes mellitus, the proportion of alcohol drinkers in the group, the BCVA in the better eye, and the distance driven each week.
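For illustration, an adjusted-odds-ratio analysis of this kind could be run roughly as follows; this is a hedged sketch, not the authors' SPSS analysis, and the data-frame column names, dummy coding, and use of statsmodels are assumptions made for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame `df` with one row per subject: mvc (0/1),
# severity ("control"/"mild"/"moderate"/"severe"), age, sex, diabetes,
# alcohol, bcva_better (logMAR), km_per_week.

def adjusted_odds_ratios(df):
    # Dummy-code severity with the control group as the reference category.
    sev = pd.get_dummies(df["severity"], prefix="sev", dtype=float)
    sev = sev.drop(columns=["sev_control"])
    X = pd.concat([sev, df[["age", "sex", "diabetes", "alcohol",
                            "bcva_better", "km_per_week"]]], axis=1)
    X = sm.add_constant(X.astype(float))
    res = sm.Logit(df["mvc"].astype(float), X).fit(disp=False)
    ors = np.exp(res.params)            # adjusted odds ratios
    ci = np.exp(res.conf_int())         # 95% confidence intervals
    return pd.concat([ors.rename("OR"),
                      ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1)
```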
The accident rate, which represents the number of MVCs per 10,000 km driven, was calculated as the number of MVCs (question 5) divided by the total distance driven over the five-year period (the average distance driven per week (question 3) × 52 weeks/year × 5 years), multiplied by 10,000.
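The accident-rate formula translates directly into code; the sketch below assumes per-subject inputs taken from questions 3 and 5 of the questionnaire.

```python
# Direct transcription of the accident-rate formula above (per 10,000 km driven).

def accident_rate(num_mvcs, km_per_week, years=5, weeks_per_year=52):
    total_km = km_per_week * weeks_per_year * years
    return num_mvcs / total_km * 10_000

print(accident_rate(num_mvcs=1, km_per_week=100))   # 1 MVC over 26,000 km -> ~0.38
```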
Associations between the number of MVCs and POAG severity, and between the accident rate and POAG severity, were evaluated by Jonckheere-Terpstra tests.
A P value less than 0.05 was considered statistically significant. Decimal visual acuity was converted to LogMAR visual acuity for analysis. All data were analysed with IBM SPSS statistics software version 21.0 (IBM Japan, Tokyo, Japan).
Results
We enrolled 199 consecutive POAG patients and 187 consecutive control subjects in this study. The POAG patients were divided into three groups according to the severity of POAG in the worse eye, better eye, and IVF. All participants were of Asian ethnicity. The subjects' demographic characteristics are shown grouped by worse eye MD in Table 1, by better eye MD in Table 2, and by IVF MD in Table 3. No significant differences were observed in age, sex, prevalence of diabetes mellitus, or the number of comorbid illnesses among controls and POAG groups when categorized by the MD in the worse eye, better eye, or IVF. When grouped by MD in the worse eye, there were significant differences in BCVA in the worse eye between the control and POAG groups.
The prevalence of MVCs did not differ significantly between the control group and the three POAG groups combined (control subjects: 30/187 (16.0%); POAG subjects: 45/199 (22.6%); P = 0.12). However, there was a statistically significant association between the prevalence of MVCs and POAG severity in the worse eye (P = 0.025, Cochran-Armitage trend test, Table 4). We did not observe a significant association between the prevalence of MVCs and POAG severity in the better eye (P = 0.12) or IVF (P = 0.27; Cochran-Armitage trend test, Table 4).
Adjusted ORs and 95% CIs were estimated with logistic regression models. Compared to the control group, the adjusted OR for the prevalence of MVCs was significantly elevated in subjects with severe POAG categorized by severity in the worse eye (Table 5). However, no significant association was observed in subjects with mild, moderate, or severe POAG categorized by POAG severity in the better eye or IVF (Table 5). The mean number of MVCs per group in the past five years was 0.19 ± 0.48 (interquartile range: 0) in the control group, 0.30 ± 0.77 (interquartile range: 0) in the mild glaucoma group, 0.28 ± 0.56 (interquartile range: 0) in the moderate glaucoma group, and 0.36 ± 0.61 (interquartile range: 1) in the severe glaucoma group, when subjects were grouped by POAG severity in the worse eye. This trend is statistically significant (P = 0.03, Jonckheere-Terpstra test). When subjects were grouped by POAG severity according to the MD in the better eye or the IVF, the number of MVCs was 0.19 ± 0.48 (interquartile range: 0) in the control group, 0.31 ± 0.69 (interquartile range: 0) and 0.35 ± 0.70 (interquartile range: 0) in the mild glaucoma group, 0.29 ± 0.55 (interquartile range: 0.5) and 0.15 ± 0.37 (interquartile range: 0) in the moderate glaucoma group, and 0.33 ± 0.65 (interquartile range: 0.5) and 0.33 ± 0.82 (interquartile range: 0) in the severe glaucoma group (P = 0.08 and P = 0.11, resp., Jonckheere-Terpstra test).
The accident rates are shown in Figures 1 to 3. When subjects were grouped by POAG severity in the worse eye, the accident rate (the number of MVCs per 10,000 km driven) was 0.1 ± 0.5 in the control group, 0.3 ± 0.1 in the mild glaucoma group, 0.8 ± 0.5 in the moderate glaucoma group, and 2.1 ± 8.0 in the severe glaucoma group (mean ± standard deviation). The accident rate in the severe glaucoma group was significantly higher than that in the control group (P < 0.001, ANOVA with Tukey post hoc test). However, no significant differences were observed among the four groups (ANOVA; P = 0.06 and P = 0.05, resp.) when grouped by POAG severity in the better eye or IVF. The trend was statistically significant (P = 0.03, Figure 1, Jonckheere-Terpstra test) when POAG severity was categorized by MD for the worse eye. After multivariable adjustment for glaucoma severity in the worse eye, younger age and a greater distance driven per week were associated with MVCs, whereas the remaining covariates were not. Younger age and a greater distance driven per week were also associated with MVCs after multivariable adjustment for glaucoma severity categorized by the MD of the better eye or IVF.

Figure 3: The association between the number of MVCs per 10,000 km driven and glaucoma severity, grouped by IVF. No significant association was observed between the accident rate and glaucoma severity, grouped by IVF (P = 0.12, Jonckheere-Terpstra test). MVC (motor vehicle collision); IVF (integrated visual field). Error bar: standard error.
Discussion
In this study, we showed that the severity of glaucomatous visual field defects is significantly associated with the prevalence of MVCs. The adjusted OR for MVCs was 2.5 times higher for subjects with severe POAG in the worse eye than for control subjects.
Our results are compatible with findings from our former study [12]. Our previous report compared the prevalence of self-reported MVCs among normal, healthy control subjects (n = 121) and those with mild (n = 50), moderate (n = 51), or severe POAG (n = 20). In that study, we found that subjects with a severe visual field defect in the worse eye were more likely to be involved in MVCs (OR 9.9 (95% CI, 2.1-47.8), control as reference). In a well-designed, nested case-control study in patients with glaucoma, McGwin Jr. et al. compared the severity of visual field defects between subjects who had or had not been involved in MVCs, using Advanced Glaucoma Intervention Study (AGIS) scores, and concluded that patients with moderate or severe visual field defects (AGIS score 12-20) in the worse eye had a significantly increased risk for MVCs (OR 3.6 (95% CI 1.4-9.4) and OR 4.4 (95% CI 1.6-12.4), resp.) [8]. These studies are compatible with our findings that the MD in the worse eye is associated with MVCs.
Our findings are also compatible with findings from another study on the association between driving performance and visual field defects: Haymes et al. reported that the MD in the worse eye is correlated with real-world driving performance [18]. Multivariable analysis showed that patients with glaucoma and an MD < −4 dB in the worse eye were more than 4 times more likely to have the driving instructor intervene during real-world driving compared to those with better visual fields and that a poor MD was the predominant cause of failure to see and yield to a pedestrian [18]. It is reasonable to speculate that poor driving performance may lead to MVCs. These results support our finding that a more severe visual field defect in the worse eye is associated with more MVCs.
The association between the accident rate, defined as the number of MVCs per 10,000 km driven, and glaucoma severity is especially interesting. It has been suggested that a worse glaucomatous visual field is associated with driving cessation and limitations [19]. Therefore, the accident rate is one of the most accurate indicators for the possibility of being involved in MVCs. We found that the accident rate was 14 times higher for subjects with a severe visual field defect in the worse eye than for control subjects; this difference is statistically significant. This result suggests that it is important for ophthalmologists to notify patients with severe glaucoma that if they drive, they may have a higher risk for MVCs.
We found that younger age was associated with MVCs, even after adjusting for distance driven. It has been reported that the relationship between age and MVCs forms a U-shaped curve [20,21], with middle-aged subjects (40-50 years old) being the safest drivers. The reason for this discrepancy between our study and previous studies is unclear. However, we excluded subjects with low visual acuity, age-related macular degeneration, and other ocular diseases associated with older age, and these visual impairments are risk factors for MVCs [11,22,23]. The higher risk of MVCs found for younger subjects in this study may reflect the exclusion of older subjects with age-related visual impairments other than glaucoma.
We also found that greater driving distances were associated with MVCs. Theoretically, subjects who drive greater distances are exposed to a higher risk of being involved in MVCs than those who drive fewer kilometres. Our results suggest that, among drivers with glaucoma, younger drivers with a severe visual field defect in the worse eye who drive long distances may have the highest risk of MVCs.
In this study, we did not find significant associations between the prevalence of MVCs and POAG severity when severity was categorized by the MD of the better eye or the IVF MD. Crabb et al. monitored patients' eye movements during driving simulations and reported that deterioration in the superior peripheral area of the binocular IVF could affect driving performance [24]. Glen et al. reported that the response rate for detecting hazards in a series of real-life driving films fell significantly when viewing films with a superior visual field defect, compared with an inferior visual field defect (P < 0.001) [25]. It has also been reported that specific VF regions are important for different tasks and affect hand-eye coordination [26], postural stability [27], risk of falling [13], and risk of fractures [28]. Further studies should be performed to investigate relationships between pointwise VF sensitivities and MVC history. Another possible explanation is that the number of participants classified as severe is simply too small to detect a significant result.
Strength of Our Study.
To the best of our knowledge, we are the first group to report an association between glaucoma severity and MVCs in Japan, and indeed in Asia. We have previously reported that severe glaucoma is associated with MVCs in Japan; however, that study was a single-center study with a relatively small number of MVCs (12). In contrast, the total number of MVCs analysed in the current study is 75, which allowed us to generate more robust results. Another important distinction from the previous study is that we clearly defined MVCs in the questionnaire used in the current study. Furthermore, we asked for a history of MVCs over the past 10 years in the previous study, whereas in the current study we asked only about the past 5 years; recall bias may therefore be reduced. Finally, in the current study we identified novel factors associated with MVCs, including long-distance driving and younger age, associations that were not detected in our previous study.
Study Limitations.
The greatest limitation of our study is the use of self-reported MVCs, which may carry a recall bias, as a main variable [29]. However, Marottoli et al. reported that, compared with state-reported MVCs, self-reported MVCs provide sufficient information to assess crashes [30]. A reluctance to provide information may also have affected the self-reported data; subjects with glaucoma who were followed up for a long time by the same doctor may have hesitated to provide a full history of MVCs and thus biased the result.
Another limitation is that we did not evaluate whether subjects were at fault in the MVCs they reported, which would be an important piece of information for this study. However, fault can be difficult to define in a car accident, especially in this type of self-reported questionnaire, and people are likely to answer that they were not responsible for MVCs. Therefore, in this study, we did not ask who was at fault in the MVCs.
The present study has some other limitations. Ophthalmic data were obtained after the MVCs had occurred, typically after an interval of up to 5 years, and the subjects' visual field defects may have worsened during that time period. This may have reduced the accuracy of our results. A causal association between MVCs and visual field defects was not confirmed in this cross-sectional study. A normal visual field was confirmed in control subjects by a fundoscopic evaluation of the optic disc, not by visual field testing. MVCs were not precisely defined in the questionnaire. These results may be partly dependent on the imbalance of group sizes when stratifying according to worse eye, better eye, or IVF MD. Therefore, we cannot conclude that better eye MD and IVF MD are not associated with MVCs in this study. Increasing the sample size in the severe glaucoma group categorized by better eye MD or IVF MD may give a different result.
Our study shows a significant association between the severity of glaucomatous visual field defects in the worse eye and MVCs. The association between MVCs and glaucoma severity is worthy of further investigation.
Conflict of Interests
The authors have no conflict of interests to declare.
The assessment of status of Sibam River and Air Hitam River Pekanbaru city Riau Province using pollution index
The quality of the Siak River is gradually deteriorating with the rapid socio-economic development in its tributary watersheds. The Sibam and Air Hitam Rivers are Siak tributaries in the Siak tributary sub-region of Pekanbaru City. The quality of the Sibam and Air Hitam tributaries in the Siak Watershed of Pekanbaru City has been studied to determine the level of pollution relative to water quality standards. Water quality measurements were carried out from January to August 2022 at the upstream, middle, and downstream reaches of each river.
Introduction
The Sibam River and Air Hitam River belong to the Siak Sub-Watersheds (DAS) (Nadia et al., 2016). These rivers function as primary drainage channels in the western part of Pekanbaru City and empty into the Siak River. The Sibam and Air Hitam watersheds are peat swamp areas. However, the development of Pekanbaru City has caused the Sibam and Air Hitam watersheds to change their functions into oil palm plantations, residential areas, and warehousing industries. Peatland degradation and conversion to plantations significantly affect river water quality, altering dissolved organic carbon fluxes and oxygen levels (Abrams et al., 2015). In addition, increased activity in the Sibam and Air Hitam watersheds puts pressure on the two rivers, causing them to silt up and their water quality to decline; as part of the Siak sub-watershed, the Sibam and Air Hitam Rivers therefore contribute to the pollution-related decline in the water quality of the Siak River. According to Indonesia's environmental and forestry statistics 2019, the Siak River is in a mildly to heavily polluted condition (Ministry of Environment and Forestry of Indonesia, 2019).
According to the Indonesian Public Works Department (IPWD) (2005), 60% of Siak River pollution comes from domestic waste. Research shows that the water quality of the Siak River has been polluted by household waste, especially from its tributaries in the Pekanbaru City area (Yuliati et al., 2018; Rixen et al., 2010). This waste causes the water quality of the Siak River to continue to decline. Pollution that enters the river affects river species and reduces river function (Gurjar and Vinod, 2019).
Moreover, pollutants will worsen water quality, making it unsafe for humans, and the Siak River, a source of fresh drinking water, can be disrupted. Accurately measuring changes in river water quality is very important for formulating environmental protection and river pollution control policies. For this purpose, a water quality assessment method is needed to determine river water quality status. The status of a river can be determined using the pollution index, a simple and straightforward method for assessing water pollution. This approach yields a value indicating the relative pollution level against the water quality standard (Suriadikusumah et al., 2020). A pollution index has been applied to evaluate water quality in Indonesian rivers such as the Ciliwung, Jakarta (Sabila et al., 2017), the Citarum (Marselina et al., 2022) and the Cimanuk (Nurrohman et al., 2019) (Bandung). However, the quality status of the Siak tributaries is rarely reported. For this reason, it is necessary to determine the quality status of these tributaries as the basis for controlling pollution of the Siak River.
Location and time of research
The study rivers are located in Pekanbaru City, the capital of Riau Province (Figure 1). This research was carried out from January to August 2022 on the Sibam River and Air Hitam River. Each river has three research stations: station 1 is in the upstream reach, station 2 in the middle reach, and station 3 in the downstream reach where the river joins the Siak River. The coordinates of the research stations for the two rivers are presented in Table 1.
Sample collection
Samples were analyzed for the following water quality parameters: temperature, total suspended solids (TSS), dissolved oxygen (DO), pH, biochemical oxygen demand (BOD5), chemical oxygen demand (COD), nitrate, nitrite, ammonia, and total phosphate, together with river hydrodynamics, namely velocity and water discharge. Velocity was measured using a Flowatch FL-03. Discharge was calculated by multiplying the current velocity by the cross-sectional area of the river (Sosrodarsono, 1993).
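As a small worked illustration of the velocity-area discharge calculation (the numbers below are invented, not measured values):

```python
def discharge(velocity_m_s: float, cross_section_m2: float) -> float:
    """River discharge Q = v * A, in m^3/s."""
    return velocity_m_s * cross_section_m2

# Invented numbers: a 0.4 m/s current through a 5.4 m^2 cross-section.
print(discharge(0.4, 5.4))  # 2.16 m^3/s
```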
A Van Dorn water sampler was used to take one liter of water at a depth of 0-20 cm. The water sample was put into a polyethylene bottle and stored in a cool box at 4 °C.
The procedure for preserving river water samples was carried out according to the Indonesian National Standard (INS) 2008. Temperature (mercury thermometer) and pH were measured in situ using a portable pH meter (Hanna HI 98107). Dissolved oxygen was measured with the Winkler method.

The pollution index for water designation (j) is calculated as PIj = √{[(Ci/Lij)M² + (Ci/Lij)R²]/2}, where PIj is the pollution index for designation (j), a function of Ci/Lij; Ci is the concentration of water quality parameter (i) obtained from the analysis of water samples at a sampling location on the river; Lij is the concentration of parameter (i) listed in the quality standard for designation (j); and (Ci/Lij)M and (Ci/Lij)R are the maximum and average values of Ci/Lij over all parameters. Ambiguity arises when two Ci/Lij values lie near the reference value of 1.0 (for example, C1/L1j = 0.9 and C2/L2j = 1.1) or when large differences exist (for example, C3/L3j = 5.0 and C4/L4j = 10.0); in such cases the level of water body damage is difficult to determine. To overcome this, any Ci/Lij greater than 1.0 is replaced by a new value, (Ci/Lij)new = 1.0 + P·log(Ci/Lij), with the constant P set to 5. The pollution index value is then categorized into four water quality classes: meets the quality standard if 0 ≤ PIj ≤ 1.0; lightly polluted if 1.0 < PIj ≤ 5.0; moderately polluted if 5.0 < PIj ≤ 10; and heavily polluted if PIj > 10.
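A compact sketch of the pollution-index calculation as reconstructed above is given below. The parameter values are invented, P = 5 is treated as an assumption, and the special handling of DO and pH via Cim is omitted for brevity.

```python
import math

def ci_over_lij(ci: float, lij: float, p: float = 5.0) -> float:
    """Normalized concentration ratio; ratios above 1.0 are compressed
    with 1 + P*log10(Ci/Lij) so one extreme parameter does not dominate."""
    ratio = ci / lij
    return ratio if ratio <= 1.0 else 1.0 + p * math.log10(ratio)

def pollution_index(concentrations: dict, standards: dict) -> float:
    """PI_j = sqrt((M^2 + R^2) / 2), with M and R the max and mean of Ci/Lij."""
    ratios = [ci_over_lij(concentrations[k], standards[k]) for k in concentrations]
    m = max(ratios)
    r = sum(ratios) / len(ratios)
    return math.sqrt((m ** 2 + r ** 2) / 2.0)

def status(pi: float) -> str:
    if pi <= 1.0:
        return "meets quality standard"
    if pi <= 5.0:
        return "lightly polluted"
    if pi <= 10.0:
        return "moderately polluted"
    return "heavily polluted"

# Invented example: BOD and nitrite above Class III limits, nitrate well below.
ci = {"BOD": 12.0, "nitrite": 0.3, "nitrate": 0.4, "TSS": 30.0}
lij = {"BOD": 6.0, "nitrite": 0.06, "nitrate": 20.0, "TSS": 100.0}
pi = pollution_index(ci, lij)
print(f"PI = {pi:.2f} ({status(pi)})")
```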
Water Quality
The water quality characteristics of the Sibam and Air Hitam Rivers are shown in Table 2 and Table 3. The temperature in the Sibam River ranges from 28 to 30 ºC and is slightly higher in the Air Hitam River (28-31 ºC); overall, the temperatures in the two rivers are relatively similar. The TSS values for the Sibam and Air Hitam Rivers range from 4 to 6 mg/l and from 14 to 58 mg/l, respectively; as Table 2 and Table 3 show, TSS levels in the Sibam River are lower than in the Air Hitam River. The highest TSS value in both rivers is at station (St) 3, where the rivers join the Siak River.
Dissolved oxygen in the Sibam and Air Hitam Rivers fluctuates from upstream to downstream. The dissolved oxygen (DO) in the Sibam River ranged from 4.12 to 5.52 mg/l, with the highest value at St 1 during the second sampling (5.56 mg/l) (Table 2 and Table 3). The highest oxygen concentration in the Air Hitam River was observed at St 2 (4.8 mg/l). The oxygen concentration in each river still meets the Class III quality standard of 3 mg/l according to RGRI 22 of 2021.
The pH values in the Air Hitam and Sibam Rivers are approximately 5, indicating that both rivers are acidic. There was no difference in pH between the Sibam and Air Hitam Rivers across the observation and sampling stations. According to RGRI 22 of 2021, river water pH meets the standard if it lies between 6 and 9, except for rivers affected by peat swamps. Therefore, the pH of the Sibam and Air Hitam Rivers is categorized as fulfilling the quality standard even though the value is below 6.
In the Sibam River, BOD levels are lower than in the Air Hitam River. BOD concentrations in the Sibam River were relatively uniform (3-4 mg/l), while those in the Air Hitam River ranged from 9 to 18 mg/l. The high BOD at all observation stations in the Air Hitam River exceeds the Class III quality standard of 6 mg/l according to RGRI 22 of 2021.
Table 2 shows that COD concentrations in the Sibam River in samplings 1 to 3 were 13.3-21.5, 10.00-20.8 and 10.80-18.5 mg/l, respectively. The COD in sampling 1 was higher than in samplings 2 and 3. There is a tendency for COD to increase towards the lower reaches of the river, especially at St 3 (the confluence with the Siak River). The range of COD observed in the Air Hitam River is shown in Table 3. Nitrate levels fluctuate in the Air Hitam River, whereas in the Sibam River they tend to be similar along the river course. Nitrate ranges from 0.28 to 0.60 mg/l in the Air Hitam River (Table 3) and from 0.07 to 0.11 mg/l in the Sibam River. Nitrate levels in the Air Hitam River were higher than in the Sibam River; however, both rivers met the Class III quality standard (20 mg/l).
The results of nitrite measurements in the Sibam River during the study are shown in Table 2. The average nitrite content ranges from 0.09 to 0.10 mg/l and is relatively uniform along the river, although nitrite at St 2 (0.10 mg/l) tends to be slightly higher than at St 1 and St 3 (0.09 mg/l). The nitrite content in the Air Hitam River is higher, ranging from 0.24 to 0.52 mg/l, which exceeds the Class III quality standard of 0.06 mg/l (RGRI 22/2021). The highest nitrite in the Air Hitam River was observed at St 1 (0.52 mg/l).
Ammonia measured during the study in the Sibam and Air Hitam Rivers is presented in Table 2 and Table 3, with ranges of 0.06-0.09 mg/l and 0.14-0.37 mg/l, respectively. The ammonia content in the Air Hitam River is higher, especially at St 1, and already exceeds the quality standard of 0.05 mg/l.
The TP content of the Sibam River increased slightly downstream (Table 2), whereas in the Air Hitam River the TP concentration fluctuates along the river's course (Table 3).
The average depth of the Sibam River ranges from 0.36 to 0.92 m. The greatest depth is at St 3 (0.92 m), while the shallowest is at St 2 (0.36 m). The Sibam River discharge fluctuates in the range 0.35-2.17 m³/s, with the highest discharge at St 3 (2.17 m³/s) and the lowest at St 2 (0.35 m³/s).
Water quality status
In samplings 1 and 2, the PI values in the Sibam River range from 2.50 to 2.95, while in the third sampling they vary from 2.50 to 3.15, with an average of 2.79, indicating that the river's quality status is lightly polluted (Table 4). The PI values for the Air Hitam River at all research stations are presented in Table 5 and range from 1.70 to 3.57. Although these values are higher than those of the Sibam River, the Air Hitam River is still classified as lightly polluted.
Discussion
Human activities significantly influence the quality of river waters in general. Pollution levels in the Sibam and Air Hitam Rivers increase towards the lower reaches due to increasing human activities such as settlements, industry, and agriculture. According to Kotti et al. (2005), river water pollution increases from upstream to downstream because of human activities along the river banks. Residential activities and oil palm plantations in the watersheds have decreased the water quality of the Sibam and Air Hitam Rivers. The decline in river water quality is indicated by nitrite and ammonia (Sibam and Air Hitam Rivers) and BOD (Air Hitam River) not meeting quality standards. This is consistent with Abbott (2018), who stated that agricultural activities and urbanization on river banks increase nitrogen levels in the water. The main agricultural activity in the Sibam and Air Hitam watersheds is oil palm cultivation, for which peat swamps in the watersheds have been converted. In addition, fertilizer residues are carried by rain runoff into the river, increasing the nitrogen concentration in river water (Comte et al., 2012). Runoff can also transport eroded soil particles from the surrounding areas into water bodies. Suspended particles contribute significantly to water turbidity, which impairs photosynthesis, degrades light penetration, alters oxygen levels, and reduces the food supply for aquatic organisms (Bilotta and Brazier, 2008). Moreover, sediment can plug streams, reducing their water-holding capacity, and can cover spawning beds, killing fish populations (Kemp et al., 2011). Anthropogenic activities on the river banks have also increased the BOD content in the Sibam and Air Hitam Rivers. A higher BOD content usually indicates the presence of organic pollutants from untreated domestic and industrial wastes (Saifullah et al., 2016). Surface water has a BOD tolerance limit of 5 mg/l for aquatic life, and BOD reflects the amount of organic matter in water bodies (Loucif et al., 2020).
The calculated pollution indexes of the Sibam and Air Hitam Rivers describe the quality status of both rivers as lightly polluted. However, the pollution index value of the Air Hitam River is higher than that of the Sibam River, indicating that the quality of the Air Hitam River is worse than that of the Sibam River.
Figure 1. Study location on the Sibam River and Air Hitam River, Pekanbaru City.

Data analysis
Analysis of the water quality status of the Sibam River and Air Hitam River uses the pollution index as set out in the Decree of the Minister of the Environment of Indonesia (MEI) Number 115 of 2003. The Class III water quality standard of the Regulation of the Government of the Republic of Indonesia (RGRI) Number 22/2021, concerning the implementation of environmental protection and management, was used because these rivers are designated for freshwater fish cultivation. The PIj calculation follows these steps: select the parameters for which a lower value indicates better water quality; use quality standard parameters whose limits are single values rather than ranges; evaluate Ci/Lij for every parameter at every sampling location; and, where needed, determine the maximum or theoretical value Cim (for DO, Cim is the saturated DO value) and compute the ratio from it, substituting this calculated Ci/Lij for the measured ratio. The symbols PIj, Ci, Lij, M (maximum of Ci/Lij) and R (average of Ci/Lij) are as defined in the pollution index formulation above. The resulting PIj values indicate the level of water pollution relative to the RGRI 22/2021 Class III standard.

Table 1. Position of the three stations on the Sibam and Air Hitam Rivers.

Table 2. Characteristics of the water quality of the Sibam River.

Table 3. Characteristics of the water quality of the Air Hitam River.

Table 5. Pollution Index (PI) of the Air Hitam River.
Post–Covid-19 Cholangiopathy—A New Indication for Liver Transplantation: A Case Report
Liver injury is one of the nonpulmonary manifestations described in coronavirus disease 2019 (COVID-19). Post–COVID-19 cholangiopathy is a special entity of liver injury that has been suggested as a variant of secondary sclerosing cholangitis in critically ill patients (SSC-CIP). In the general population, the outcome of SSC-CIP has been reported to be poor without orthotopic liver transplantation (OLT). However, the role of OLT for post–COVID-19 cholangiopathy is unknown. We present a case report of a 47-year-old man who recovered from acute respiratory distress syndrome from COVID-19 and subsequently developed end-stage liver disease from post–COVID-19 cholangiopathy. The patient underwent OLT and is doing well with normal liver tests for 7 months. To our knowledge, this is the first case report of a patient who underwent successful liver transplantation for post–COVID-19 cholangiopathy.
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was first identified in December 2019 in Wuhan City, China, and was declared a global pandemic by the World Health Organization on March 11, 2020. The disease is termed coronavirus disease 2019 (COVID-19). COVID-19 is typically characterized by fever, fatigue, dry cough, anosmia, and headache that may evolve to respiratory failure [1]. Liver test abnormalities have been identified as one of a growing spectrum of nonpulmonary manifestations described in COVID-19 that may be potentially attributable to hepatic expression of the main viral entry receptor of the RNA virus, angiotensin-converting enzyme II (ACE2) [2,3].
The incidence of abnormal liver tests in patients with COVID-19 ranges from 14% to 76%, and most abnormalities involve aspartate aminotransferase (AST) and alanine aminotransferase (ALT). The aminotransferases are mildly elevated on admission in most cases (less than 2 times the upper limit of normal), and total bilirubin levels can modestly increase early in the disease process. Although most liver damage in COVID-19 infection is of the hepatocellular type, severe cholestasis can also occur, and 12% of patients showed total bilirubin levels elevated to more than 3 times the upper limit of normal [4,5]. A recent study revealed an association between abnormal liver tests and severe COVID-19, including intensive care unit (ICU) admission, mechanical ventilation, and death [6].
Secondary sclerosing cholangitis in critically ill patients (SSC-CIP) has been recognized as a novel entity in patients after COVID-19 infection [7] and recently named post−COVID-19 cholangiopathy [8]. It is characterized by marked cholestasis associated with ongoing jaundice that persists long after pulmonary and renal recovery. We report a case of a patient who developed post−COVID-19 cholangiopathy requiring a liver transplant. To our knowledge, this is the first case report of a patient developing post−COVID-19 cholangiopathy who underwent successful liver transplantation. This case is reported to provide reference and guidance for possible long-term complications after the COVID-19 pandemic, including liver-related consequences, with liver transplantation as a viable treatment option.
CASE PRESENTATION
Clinical Presentation and Diagnostic Tests
A 47-year-old man with class 3 (severe) obesity, body mass index of 51, obstructive sleep apnea, hypertension, and hyperlipidemia with no history of liver disease presented to an outside facility with respiratory symptoms of dyspnea, cough, and fever. Chest x-ray showed diffuse patchy airspace opacities compatible with multifocal pneumonia, and he was subsequently found to be positive for SARS-CoV-2 infection. The patient was treated with hydroxychloroquine (Plaquenil), azithromycin, and high-dose vitamin C. Laboratory findings were remarkable for elevated AST of 79 U/L and ALT 52 U/L, with a total bilirubin of 0.3 mg/dL at the time of presentation. The patient experienced rapid clinical decline that included acute respiratory distress syndrome and acute kidney injury. He required prolonged mechanical ventilation (29 days) and continuous venovenous hemofiltration. Although his pulmonary function subsequently improved and he was weaned off mechanical ventilation, his acute kidney injury persisted and required regular hemodialysis.
At day 58 from his initial presentation, his pertinent laboratory blood tests included AST of 384 U/L, ALT of 175 U/L, alkaline phosphatase of 1644 U/L, and total bilirubin of 19.0 mg/dL. Initial abdominal ultrasound showed severe fatty liver and innumerable gallstones throughout the gallbladder without biliary dilation or gallbladder wall thickening. Computed tomography (CT) of the abdomen and pelvis without contrast showed normal liver size and contour without focal hepatic lesions or evidence of biliary ductal dilatation. At day 73, his follow-up blood tests were AST of 236 U/L, ALT of 121 U/L, alkaline phosphatase 970 U/L, and serum total bilirubin of 10.9 mg/dL. A liver biopsy demonstrated mechanical bile duct obstruction presumably related to sepsis, with drug-induced liver injury less likely. Endoscopic retrograde cholangiopancreatography (ERCP) with sphincterotomy was performed and a small pigment stone retrieved. Noteworthy is the finding of diffuse intrahepatic biliary strictures or cholangiopathy (Fig 1).
On day 81 from his initial presentation, the patient was hospitalized for hypotension during hemodialysis. Pertinent laboratory tests were ALT of 130 U/L, AST of 491 U/L, alkaline phosphatase of 2730 U/L, and marked hyperbilirubinemia with total bilirubin of 19 mg/dL. Abdominal ultrasound showed cholelithiasis without evidence of acute cholecystitis, and magnetic resonance cholangiopancreatography demonstrated mild intrahepatic biliary ductal dilatation with multifocal strictures or beading without extrahepatic biliary dilatation. ERCP confirmed findings of secondary sclerosing cholangitis: short segments of strictures and dilatations of the intrahepatic ducts, with no pathologic findings in the common hepatic and common bile ducts.
Preoperative Treatment Planning and Management
The patient was evaluated and placed on the list for liver transplantation with a Model for End-Stage Liver Disease score of 37. Although the patient met criteria for a combined liver and kidney transplantation, the multidisciplinary liver and kidney transplantation team pursued a single-organ liver transplantation as a life-saving treatment. Waitlisting for renal transplantation was deferred because of comorbidities that included morbid obesity and post−COVID-19 pulmonary dysfunction: a restrictive ventilatory defect with low expiratory volume, reduced diffusion capacity, and evidence of a postinflammatory state with mild interstitial edema. Therefore, staged renal transplantation is planned once the patient has recovered from his severe illness.
Orthotopic Liver Transplantation and Explant Histopathology
On day 108 from his initial presentation, the patient underwent an orthotopic liver transplantation (OLT) with a whole hepatic allograft from a deceased donor. OLT was performed with intraoperative renal replacement therapy and total peripheral and mesenteric venovenous bypass. A staged choledochocholedochostomy was performed the following day [9]. Noteworthy are findings on the native liver: 4000 g in weight with histologic findings of severe sclerosing cholangitis with hepatic abscesses (Fig. 2-6). There was no histologic evidence of IgG4 or other causes of secondary sclerosing cholangitis.
Postoperative Care
The induction immunosuppression regimen consisted of basiliximab (Simulect) on post-OLT days 0 and 4, a methylprednisolone (Solu-Medrol) taper, and everolimus (Zortress), followed by maintenance therapy with tacrolimus and everolimus. His hepatic allograft function normalized within 8 days after OLT, and the patient was weaned off mechanical ventilation on post-OLT day 13. The patient continued his recuperation in the acute rehabilitation unit beginning on post-OLT day 46 and was subsequently discharged home on post-OLT day 55.
At 7 months after OLT, the patient's hepatic allograft function remained normal: ALT of 27 U/L, AST of 28 U/L, alkaline phosphatase of 123 U/L, and total bilirubin 0.2 mg/dL. He did not experience an episode of acute cellular-or antibody-mediated rejection during the post-OLT period. Regarding his renal function, he continues to require regular hemodialysis. After a successful weight loss of 50 kg in our weight loss management program, the patient is currently undergoing a comprehensive evaluation for the renal transplantation wait list.
DISCUSSION
SSC-CIP is a recently recognized form of cholestatic liver disease occurring in patients without a history of hepatobiliary disease, after receiving treatment in the ICU in different settings, including cardiothoracic surgery, infection, trauma, and burns [10,11].
The pathophysiology of SSC-CIP is not completely understood. The presence of critical illness and its treatment in an ICU seem to play an important role in the development of the disease, but the main pathogenic mechanism seems to be a combination of bile duct ischemia, changes in bile composition, and biliary infection [12,13]. The diagnosis is made by ERCP or magnetic resonance cholangiopancreatography revealing diffuse strictures and dilatations of the intrahepatic bile ducts with filling defects caused by the presence of biliary casts [13,14].
Among the various pathophysiologic events associated with critical illness, biliary ischemia seems to play a major role in the cause of SSC-CIP. Whereas the hepatocytes receive a dual blood supply from the portal vein and the hepatic artery, the biliary epithelium receives its blood supply solely from the peribiliary plexus. Thus, the cholangiocytes are more susceptible to ischemia than the hepatocytes. When ischemia of the biliary epithelium takes place, blood supply to the bile ducts is reduced, resulting in necrosis and sloughing of the biliary epithelium and bile cast formation [12,15]. Deltenre et al [16] demonstrated that the degree of ischemic cholangiopathy was inversely proportional to the caliber of the supplying occluded artery.
Toxic bile may also play a role in the pathogenesis of SSC-CIP. Destruction of the protective mechanisms for the cholangiocytes, that is, secretion of phospholipids from the hepatocytes and biliary secretion of bicarbonate, will cause their lipid membrane to be susceptible to the detergent properties of the hydrophobic bile acids [12]. Biliary secretion of bicarbonate, via the transporter ion exchanger 2, forms a protective alkaline bicarbonate film on the apical cholangiocyte membrane as part of a defense strategy [17]. A disturbance in the fine balance between bile acids and protective mechanisms can lead to damage to the biliary epithelium leading to sclerosing cholangitis. A heightened inflammatory response through the release of proinflammatory cytokines will also add to the development of toxic bile and contribute to cholangiocyte necrosis [12].
Multimodal treatments for critical illness have been associated with the development of SSC-CIP. Prolonged hypotension and vasopressor administration are common in patients with SSC-CIP. All the patients experienced an episode of severe hemodynamic instability with a decrease in mean arterial blood pressure <65 mm Hg for at least 60 minutes and often longer [12,18]. Vasopressor administration is also very common before the development of SSC-CIP. Epinephrine, norepinephrine, dopamine, and dobutamine all increase systemic blood pressure but do not have the same effect on hepato-splanchnic blood flow. Dopamine has a positive effect on liver perfusion [19], contrary to epinephrine and norepinephrine, which are thought to have a negative effect on splanchnic blood flow [20].
Mechanical ventilation with high positive end-expiratory pressures >10 cm H 2 O also has been shown to contribute to microcirculatory ischemia within the hepato-splanchnic vascular plexus [21]. Additionally, excessive use of prone positioning of mechanically ventilated patients has been linked to the development of SSC-CIP [22]. SSC-CIP has a mortality of up to 50% in patients during an ICU admission. Adverse prognostic factors include associated renal failure, a high Model for End-Stage Liver Disease score, and rapid deterioration to liver cirrhosis [23]. In another study of SSC-CIP patients, 60% survived the ICU, 40% developed stable biliary cirrhosis, and 20% required a liver transplant [24]. Without liver transplant, the median survival in such patients is 12 to 44 months [25].
Post−COVID-19 cholangiopathy has been recently described [7,8] and refers to SSC-CIP in patients who recovered from severe COVID-19 infection. All the patients described in the case reports had no pre-existing liver disease, and all had a prolonged hospitalization because of acute hypoxemic respiratory failure requiring mechanical ventilation and additional complications from COVID-19. All the patients developed marked cholestasis with associated jaundice that persisted long after pulmonary and renal recovery. None of the imaging studies was suggestive of cirrhosis.
Patients with severe COVID-19 infection have several predisposing conditions for SSC-CIP, such as hypotension and the administration of vasopressors. COVID-19−associated coagulopathy carries a high risk of arterial and venous thromboembolism [26,27]. Mechanical ventilation with positive end-expiratory pressure for prolonged periods, owing to the challenges of weaning [28], and prone positioning for up to 16 hours per day are relatively common in such patients [29]. Increased proinflammatory cytokines, with the syndrome of uncontrolled immune activation leading to cytokine storm [30], contribute to the development of toxic bile and then cholangiocyte necrosis [12].
The histologic picture of patients with post−COVID-19 cholangiopathy seems to differ from the histologic findings seen in patients with SSC-CIP of other causes [8]. All the biopsy samples exhibited extensive degenerative cholangiocyte injury with extreme cholangiocyte cytoplasmic vacuolization and regenerative change not previously described for SSC-CIP. The microvascular features of hepatic artery endothelial swelling, portal vein phlebitis, and sinusoidal obstruction syndrome are also unique, as is the intrahepatic microangiopathy affecting all 3 microvascular compartments, as noted in autopsy findings in patients succumbing to COVID-19 [31]. These histologic changes suggest direct hepatic injury from COVID-19 in patients with underlying SSC-CIP.
The patient we describe in this report had similar findings, including destruction of the biliary epithelium characterized by vacuolar degeneration with cytoarchitectural disarray, anisonucleosis, and even cholangiocyte necrosis. The extensive biliary injury was associated with marked cholestasis, ductular reaction, and ductulocentric fibrosis. Furthermore, our patient demonstrated obliterative portal venopathy and microarteriopathy characterized by endothelial cell swelling with obliteration of the arterial lumen. Many of these features have been previously described in post−COVID-19 cholangiopathy [8]. Therefore, post−COVID-19 cholangiopathy seems to be a variant of SSC-CIP and further investigation is needed in regard to the pathogenicity of COVID-19 in the biliary epithelium. A major concern for patients with post−COVID-19 cholangiopathy is that it may lead to progressive liver injury with the potential need for liver transplantation, as seen in our patient [8].
To our knowledge, this is the first report of a patient requiring liver transplantation because of fulminant post−COVID-19 cholangiopathy. Given the increased number of COVID-19 infections in intensive care medical management, it is important to develop a practical approach to screening and evaluation of patients who are likely to develop post−COVID-19 cholangiopathy, particularly those who will progress to a fulminant course and require an expedited OLT.
Source to sink reconstruction of a Holocene Fjord‐infill: Depositional patterns, suspended sediment yields, wind‐induced circulation patterns and trapping efficiency for Lake Strynevatnet, inner Nordfjord, Norway
This paper reconstructs the sedimentation volumes and patterns, suspended sediment yields, wind‐induced circulation patterns and sediment trapping efficiency of Lake Strynevatnet, western Norway as an integrated source to sink system. The lake became deglaciated ca 11 ky cal bp, with glacio‐isostatic uplift isolating the basin from the nearby fjord (Nordfjord) ca 9.2 ky cal bp. Based on geophysical data collected in 2010, the upper 15–20 m of Holocene sediment accumulation in the lake was mapped. A sediment body in the centre of the lake indicates a depositional mechanism dominated by suspension sedimentation. The source of this sediment is associated with the adjacent glaciated catchments westward of the lake. Three seismic units were identified based on seismic facies generating an evolutionary model utilizing three depositional units (U1, U2 and U3), in which unit U2 represents the Storegga tsunami event. Unit U3 is further divided into three subunits; U3a, U3b and U3c based on their spatial continuity and subtle downlapping and onlapping relationships. The degree to which wind conditions could have affected the lake depositional patterns were studied utilizing an open‐source coupled hydrodynamic and sediment transport model. The results show that fluvial discharge alone is incapable of generating a circulation pattern in the lake currents. Suspended sediment concentrations in the lake are highest for strong winds. Modelled sediment accumulation on the lake floor shows that mild or absent winds lead to a proximal to distal sediment thickness trend, while strong winds result in uniform sediment thickness. Based on this it is argued that the thickness trends of seismic subunits U3a‐c are related to a variable palaeowind climate. As such, seismic data of lake infills, in combination with numerical modelling, may provide valuable palaeoclimatic information on wind patterns.
| INTRODUCTION
Understanding sedimentation patterns in fjords and fjord-lake systems is important for palaeoenvironmental and palaeoclimate reconstructions. Lakes in glaciated areas provide an ideal setting to study their response to Late Holocene climate changes and glacial fluctuations. The focus of this paper is on Lake Strynevatnet, a 220 m deep lake characterized by high sediment trapping efficiency. By characterizing the sedimentary infill geometry, the aim is to increase our understanding of the interaction between sedimentation rates, sediment dispersal patterns and glacier activity over the past ca 8150 years. The sediment dispersal patterns in Lake Strynevatnet are likely driven by wind-induced circulation patterns in the absence of other main driving forces for sediment dispersal, such as large fluvial systems and slope failures. There has been little research on the effects of palaeowind conditions on Holocene lake sedimentation.
Reconstructions of lake sedimentation using seismic data allow a detailed view of the shifts in sediment thickness and volume, sedimentary patterns and sediment yield with changing glacial conditions. The objective of this paper is therefore to reconstruct the long-term suspended sediment budget in Lake Strynevatnet based on geophysical measurements and to gain insight into the effects of temporal deglaciation of the Jostedalsbreen ice cap on the infill pattern. In this contribution, the focus is only on suspended sediments, as these usually travel furthest in a lacustrine system. Bedload from the rivers that drain the glaciers is typically coarse-grained and limited to the delta foresets, which only very locally develop in Lake Strynevatnet (Beylich and Laute, 2015).
Research questions that are addressed here include: (a) How much suspended sediment did the lake basin catchment produce during the Holocene? (b) How did the infill change throughout deglaciation of the Jostedalsbreen ice cap? (c) Does the infill pattern indicate changes in dispersal modes? (d) How do the long-term average suspended sediment flux values compare to contemporaneous measurements?

2 | RESEARCH AREA
| Physiography
Lake Strynevatnet is a freshwater lake located at the head of the 110 km long Nordfjord fjord (Figure 1). The physiography of Lake Strynevatnet is characterized by a typical fjord-like geometry that includes up to 1,000 m high steep valley walls and a U-shaped basin floor. With a total area of 21.6 km², its maximum depth is approximately 220 m, its median width is 2 km and it reaches 13.7 km in length. A shallow lake (Nerfloen; less than 10 m deep) connects the western effluent of Lake Strynevatnet to Nordfjord, continuing into a 10 km long and 500-1,000 m wide valley that is characterized by a meandering fluvial system. The catchment of Lake Strynevatnet is 440 km² in extent, of which 16% to 18% is presently glaciated (Vasskog et al., 2012), including four wide valley systems that jointly represent 370 km² of the entire catchment. While two of these valleys (Erdalen and Sunndalen) are presently connected to the Jostedalsbreen ice sheet, the other two are mostly affected by fringing glaciers (Hjelledalen and Glomsdalen) (Figure 1). Sunndalen and Hjelledalen merge 3.5 km upstream of their effluent. Of these valley systems, sediment and water discharges of Erdalen were monitored between 2004 and 2015 (Figure 2; Beylich et al., 2009, 2017; Beylich and Laute, 2012, 2015). The contemporary water discharge from Erdalen is low (on average 3.5 m³/s), while peak flow can reach up to 30 m³/s (Beylich et al., 2017). The Erdalen catchment area encompasses 79.5 km² and consists of a series of small basins and connecting braid plains descending from the Jostedalsbreen ice sheet. The Erdalen River builds a coarse-grained delta in the southernmost part of Lake Strynevatnet (Beylich and Laute, 2015). Between 2004 and 2015, daily discharge-weighted suspended sediment concentrations were calculated from hourly readings of optical turbidity at four stationary hydrometric stations in Erdalen using turbidity sensors (Global Water), in combination with frequent direct water sampling. From these measurements, a mean annual suspended sediment yield was calculated for the Erdalen drainage basin (16.4 t/km²/year). Contemporary suspended sediment transport accounts for almost two-thirds of the total fluvial transport and, accordingly, plays an important role within the sedimentary budget of the Erdalen drainage basin as well as for the suspended sediment supply to Lake Strynevatnet (Beylich et al., 2017).
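The quoted yield figure follows from integrating discharge-weighted concentrations through the year and normalizing by catchment area. The sketch below uses a synthetic daily series and the 79.5 km² Erdalen area; it is an illustration of the bookkeeping, not a reconstruction of the actual monitoring data.

```python
import numpy as np

SECONDS_PER_DAY = 86_400

def annual_suspended_yield(q_m3_s: np.ndarray, ssc_mg_l: np.ndarray, area_km2: float) -> float:
    """Suspended sediment yield in t/km^2/year from daily mean discharge and SSC.

    1 mg/l equals 1 g/m^3, so Q [m^3/s] * SSC [g/m^3] is a flux in g/s.
    """
    daily_load_g = q_m3_s * ssc_mg_l * SECONDS_PER_DAY
    annual_load_t = daily_load_g.sum() / 1e6
    return annual_load_t / area_km2

# Synthetic year: low winter flows, a glacier-melt peak in summer.
days = np.arange(365)
q = 1.0 + 6.0 * np.exp(-((days - 200) / 40.0) ** 2)     # discharge, m^3/s
ssc = 2.0 + 20.0 * np.exp(-((days - 200) / 35.0) ** 2)  # concentration, mg/l
print(f"{annual_suspended_yield(q, ssc, 79.5):.1f} t/km2/yr")
```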
| Deglaciation history and sediment storage
Nordfjord is one of the major fjord systems in Western Norway in which a detailed glacial and deglacial history has been well-documented (Aarseth et al., 1989;Aarseth, 1997;Nesje et al., 2000;Hjelstuen et al., 2009;Lyså et al., 2010). At the head of Nordfjord, a series of lakes developed after the area deglaciated as a result of glacio-isostatic uplift. Lake Strynevatnet deglaciated ca 11000 cal yr bp during the Preboreal interval (Rye et al., 1997;Hjelstuen et al., 2009;Nesje, 2009;Lyså et al., 2010). It became a freshwater lacustrine environment ca 9200 cal yr bp as glacio-isostatic uplift raised the basin and disconnected it from the fjord environment (Vasskog et al., 2012). The Jostedalsbreen ice cap melted between 7300 and 6100 cal yr bp (Nesje et al., 2000;Nesje, 2009), which was followed by regrowth ca 4000 cal yr bp. This latter interval, which is referred to as the Neoglacial event, eventually led to a glacial maximum during the Little Ice Age ca 1750 ad (Nesje, 2009). The glacier history is reflected in the infill pattern of a series of upstream valley basins in Erdalen, with a total sediment storage capacity of ca 50 × 10 6 m 3 . A number of studies have constrained the sediment budgets of Nordfjord, with estimates pointing to ca 25 km 3 of sediments stored within the entire Nordfjord Basin (Hjelstuen et al., 2009). Previous geophysical studies, accompanied by shallow piston cores in Lake Strynevatnet and in the nearby Nerfloen, Lovatnet and Oldevatnet lakes identify Holocene climate oscillations (Waldmann et al., 2007). Other investigations record the 8150 cal yr bp Storegga tsunami event in the shallow basin fill (Vasskog et al., 2013).
| Parametric echosounder survey
Figure 2. Picture of the Erdalen valley taken from Lake Strynevatnet. The U-shaped valley and steep walls of the lake are well defined.

In summer 2010 a 2-week geophysical campaign was conducted to collect echosounder data in lakes Strynevatnet and Nerfloen using a parametric echosounder system (PES) developed by Innomar Technologies GmbH, which is a portable alternative to chirp and boomer sourced geophysical systems. A total of ca 100 km of high-quality geophysical data was collected in Lake Strynevatnet using the PES mounted to a 4.5 m inflatable boat powered by a small outboard motor. Navigation and positioning were provided by GPS. Data collection was constrained by wave heights <0.2 m. The PES uses non-linear sound pulses that are transmitted in the water body and reach the lake floor (Wunderlich and Muller, 2003). The transducer transmits two slightly different simultaneous frequencies, which interact in the water column. A new secondary frequency, ranging between 4 and 15 kHz, is generated from the different frequencies of the primary transmitted waves, and this is low enough to penetrate the subsurface sediments. However, the primary-frequency signals (about 100 kHz) can be used to determine the water depth. The parametric approach results in a narrow secondary sound beam (±1.8°) with a footprint of about 13 m at 220 m water depth, and a smaller footprint at shallower depths. On average, the number of pulses per second ranges between 10 and 15, depending on the water depth and the recording window of the receiver. Four or five pulses per ping were used to increase the signal-to-noise ratio. The reflected signals are used to calculate an echo print showing the sub-bottom structures along the sailed track. The obtained two-way travel time can be converted into depth based on an assumed sound speed in unconsolidated, fine-grained sediments of 1,500 m/s (which is in the range of values used in similar studies by Mullins et al., 1991; Van Rensbergen et al., 1998; Chapron et al., 2007; Fanetti et al., 2008; Waldmann et al., 2010; Cukur et al., 2015; Schneider von Deimling et al., 2016), in the absence of local velocity measurements. The echo strength depends on the reflection coefficient, the attenuation of the signal and the roughness of the lake floor. The penetration depth is mainly controlled by sediment parameters (e.g. roughness and attenuation), sub-bottom profiling properties (e.g. source level), secondary frequency and by environmental conditions (e.g. noise originating from the vessel engines).
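The travel-time-to-depth conversion is the standard d = v·t/2; a minimal sketch using the 1,500 m/s velocity assumed above (the example travel time is invented):

```python
def twt_to_depth(twt_ms: float, velocity_m_s: float = 1500.0) -> float:
    """Convert two-way travel time (ms) to depth or thickness (m) for a constant velocity."""
    return velocity_m_s * (twt_ms / 1000.0) / 2.0

# A reflector 4 ms below the lake-floor return corresponds to ~3 m of sediment.
print(twt_to_depth(4.0))  # 3.0
```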
| Geophysical data treatment and interpretation
The geophysical data were interpreted using the dedicated PES software ISE (version 2.9.5 from Innomar Technologie GmbH). The GPS data were visualized and analysed using Fugawi Global Navigator software version 4.5. The interpreted sub-bottom and lake floor reflectors were exported to Surfer Version 8 (developed by Golden Software) and interpolated into surfaces on a 100 m 2 grid using triangulation with linear interpretations (anisotropic ratio of 1 and an angle of 320°). These surfaces were used to determine bathymetry and thickness and to calculate sediment volumes.
| Delft3D model
Data was processed using Delft3D (version 4.02), which is an open-source software package developed by Deltares designed to simulate three dimensional hydrodynamics, sediment fluxes and morphologic changes in fluvial, lacustrine and marine settings (Lesser et al., 2004;Hillen et al., 2014;van der Vegt et al., 2016).
| RESULTS
The geophysical data resulted in three long E-W profiles (following the lake main axis) and 13 transverse profiles perpendicularly crossing the lake long axis (Figure 3).
| Bathymetry
In the absence of available bathymetric data for Lake Strynevatnet, the geophysical profiles were used to construct a bathymetric map (Figure 3). The map shows steep valley sidewalls (typically between 15° and 30°) and a flat lake floor with a maximum depth of ca 220 m.
| Seismic stratigraphy and facies definition
The lake sidewalls were mostly bare of sediment, most likely due to their steep angles. Based on seismic facies interpretation, three main seismic units are identified below the lake floor (U1-U3; Figures 4 and 5). The units are described as follows:
• Seismic unit U1 consists of faint and slightly undulating non-continuous parallel to sub-parallel reflectors. Occasionally, some local inclined internal reflectors characterize the unit (e.g. in profile X81; Figure 5). The unit base is most often unidentified in the seismic profiles due to local strong attenuation of the signal by the overlying sediments. Nevertheless, when occasionally visualized, it is recognized as a clear undulating reflector, especially in the north-west area of the lake. The thickness of unit U1 can reach up to 5 m. The upper boundary of unit U1 is typically continuous and less undulating than the lower boundary.
• Seismic unit U2 consists of a seismically transparent unit up to 4 m thick, with clear upper and lower reflectors. Unit U2 is imaged consistently throughout the basin and its thickness varies between 1.5 and 3.5 m. The upper boundary of unit U2 is commonly very well expressed as a strong, continuous and sub-horizontal reflector.
• Seismic unit U3 consists of thin, parallel to sub-parallel sub-horizontal reflectors with a thickness ranging between 2 and 7 m. The continuity and amplitude of the reflectors vary within unit U3. Strong reflectors are commonly very continuous, while thin reflectors with lower amplitude are less continuous (Figures 4 and 5). Overall, this unit thins towards the north-west. Unit U3 can be further divided into three subunits, U3a, U3b and U3c, based on their spatial continuity and subtle downlapping and onlapping internal relationships (Figures 4 and 5). While units U3a and U3c are relatively uniform in thickness, the thickness of unit U3b decreases towards the north-west.
| Thickness trends of stratigraphic units
Based on the geophysical interpretation of key reflectors, isopach maps have been constructed utilizing the base and top of unit U1 and the tops of units U2, U3a, U3b and U3c (when visible). It should be taken into consideration that the estimates do not include thicknesses of less than 0.3 m due to resolution constraints. In addition, there are uncertainties associated with estimating the thicknesses of individual units (e.g. seismic interpretation and velocity uncertainties) as well as gridding (e.g. interpolation uncertainties). Combining these caveats, it is estimated that the resulting calculations carry a ±20% error. The thicknesses of the seismic units were calculated based on the interpreted surfaces (Figure 6). The sediment thickness of seismic units U1 and U2 reaches up to 8 m, thickening towards the sediment main source areas of Erdalen and Hjelledalen. The sediment thickness of seismic unit U3 ranges between 2 and 7 m and thins westwards, away from the main source areas. The thickest sediment package of unit U3 is identified in the deepest part of Lake Strynevatnet. However, the thickness trends of subunits U3a, U3b and U3c are markedly different and vary across the lake. Unit U3a is about 1.5 m thick and generally uniform across the lake, except for thinning near Erdalen and Hjelledalen. Subunit U3b, however, shows a distinct increase in thickness towards the Erdalen source area, ranging from 0 to 3 m. Subunit U3c has a relatively uniform thickness, ranging between 2 and 3 m from south-east to north-west. There is little variation in thickness perpendicular to the valley walls.
| Sediment volumes
The isopach maps have allowed the preserved sediment volumes to be calculated for the individual units and subunits. The total minimum volume of sediments mapped in Lake Strynevatnet equals 0.0537 km³. The sum of units U1 and U2 represents 55% of this volume (0.0293 km³), while unit U3 represents the remaining 45% (0.0244 km³). Of the latter, subunits U3a, U3b and U3c represent 21%, 31% and 48%, respectively. The volume of unit U2 likely represents reworked material from the underlying unit U1, complemented with sediments that were brought to Lake Strynevatnet from Nordfjord by the Storegga tsunami.
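The percentages quoted above follow directly from the mapped volumes; a short arithmetic check in Python, using only the values stated in the text, is sketched below.

total_km3 = 0.0537   # total minimum mapped sediment volume (km^3)
u1_u2_km3 = 0.0293   # combined volume of units U1 and U2 (km^3)
u3_km3 = 0.0244      # volume of unit U3 (km^3)

print(f"U1+U2 share of total: {u1_u2_km3 / total_km3:.0%}")  # ~55%
print(f"U3 share of total:    {u3_km3 / total_km3:.0%}")     # ~45%
# Subunit shares of U3 reported in the text: U3a 21%, U3b 31%, U3c 48%.
for name, share in (("U3a", 0.21), ("U3b", 0.31), ("U3c", 0.48)):
    print(f"{name}: {share * u3_km3:.4f} km^3")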
STRATIGRAPHY INTERPRETATION
A major age anchor used to constrain the sediment yield calculations in this study is based on a sedimentary unit related to the Storegga tsunami event (Bondevik et al., 1997; Haflidason et al., 2005). The specific character of the transparent double reflector in this study closely matches the reflector assigned to the Storegga event (Figure 4; Vasskog et al., 2013). This bed was dated to 8150 cal yr bp by Vasskog et al. (2013). A second well-constrained age is the onset of marine deposition following the deglaciation of Lake Strynevatnet, calculated to ca 11000 cal yr bp (Waldmann et al., 2007; Vasskog et al., 2012). Based on this chronology the following interpretation is proposed for the lake architecture:
• Unit U1 was likely deposited after the basin was fully deglaciated ca 11000 cal yr bp, supporting the findings of Waldmann et al. (2007). During this period, glacier tongues retreated from the Strynevatnet Basin to the surrounding valleys (Rye et al., 1997; Nesje, 2009). Unit U1 is especially well observed in profile X76 (Figure 5), where the undulating lower boundary of the unit reflects the palaeomorphology of the basin floor with sub-horizontal internal reflectors. In some locations, such as in the mouth of Glomsdalen, the internal reflectors dip ca 0.9° lakewards, suggesting local glacio-deltaic deposits.
• Considering the internal seismic properties of unit U2, it is interpreted as an event bed that can be traced across the entire basin. Previous studies correlate this unit with the debris that accumulated following the Storegga tsunami event of 8150 cal yr bp (Vasskog et al., 2013, Figure 4). It is therefore suggested that the undulating lower boundary of this unit corresponds to an erosional surface.
Figure 5. Density plots showing the raw data (upper figures) and the interpretations (lower figures) of two cross-sections, X81 and X76, using the Parametric Echosounder (see Figure 3 for locations).
According to Vasskog et al. (2012, 2013), sediment overlying the Storegga event bed was deposited under full lacustrine conditions. In unit U3 this is reflected by the consistently parallel nature of the reflectors. Unlike the inclined reflectors characteristic of deposition by slope failure and associated turbidity currents reported from similar nearby lacustrine systems (Vasskog et al., 2011), the parallel reflectors reveal a draping stacking pattern that is traditionally interpreted as indicating settling-dominated depositional processes.
LAKE CIRCULATION
Discussed below are the possible mechanisms behind the changing sedimentation patterns identified in subunits U3a-U3c, investigated by modelling current and sediment transport patterns in the lake. This approach applies a coupled hydrodynamic model (Delft3D) to evaluate the effects of river discharge and wind on dispersal patterns of the lake sediments.
| Analyses of thickness variability based on hydrodynamic modelling
Hydrodynamic modelling is a technique that contributes to a better understanding of the palaeocurrents for a given set of boundary conditions. The seismic data have been used to construct the lake bathymetry. The environmental conditions will have changed during the Holocene, as the catchment area shifted from fully glaciated to non-glaciated conditions and back to glaciated. These shifts will have affected the fluvial discharge (water and sediment load) and the local weather system. Wind is an important driver of lake circulation (Krist and Schaetzl, 2001; Nutz et al., 2015; Nutz et al., 2016; Schuster and Nutz, 2018). Given a source of suspended sediment (River Erdalen and the nearby streams), the circulation pattern in the lake determines the mode and fashion in which the suspended sediments settle. Here, hydrodynamic modelling is used to explain the observed depositional patterns in Lake Strynevatnet as well as the possible causes of variations in relative thickness. It is not possible to reproduce the absolute sediment thicknesses due to a lack of core data from the deep basin.
In addition to discharge, wind is a potential driver of lake circulation (Krist and Schaetzl, 2001;Nutz et al., 2015;Nutz et al., 2016;Schuster and Nutz, 2018). Because Lake Strynevatnet is bordered by steep valley walls (>1,000 m), local winds at the lake surface are expected to originate either from the southeast or from the west-north-west (Figure 3). Hydrodynamic modelling is then used to test the impact of lake currents due to variabilities in wind strengths and directions. It can be assumed that the wind strength may have varied between full glacial and non-glacial intervals. This is especially true with katabatic winds during glacial periods, which may have been distinctly different from non-glacial conditions. Katabatic winds consisting of cold air would have run down Hjelledalen, Sunndalen and Erdalen as the valley confined winds would have been funnelled and amplified across the surface of Lake Strynevatnet. Furthermore, the western winds may also have changed in intensity during the Late Holocene. It is hypothesized that changes in discharge and/or wind will affect the currents in lakes Strynevatnet and Nerfloen. Complex currents will affect the sediment concentration patterns and the water mixing rate, thereby potentially affecting the sedimentation pattern at the lake floor. To test this hypothesis, a full 3D coupled hydrodynamic and sediment transport model was applied to simulate the water and sediment flow under different generic wind and discharge conditions.
| Delft3D model setup
The absence of clearly identified past input conditions inhibits the construction of a valid model. Boundary input values were assumed and model applications were approached in a qualitative, rather than quantitative fashion (main parameters used are summarized in Tables 1 and 2). As such, three generic scenarios with varying wind forces were designed to test hypotheses. The selected palaeowind conditions represent a realistic range in potential wind forces and directions. By adding suspended sediments to the fluvial input boundary, it is possible to describe the lacustrine sediment dynamics, mixing of the water column and sediment trapping efficiency for all three wind scenarios. In addition, by defining two grain-size fractions with different settling velocities it becomes possible to evaluate if the grain sizes of the suspended sediments play a role in the sedimentation patterns of the lake. The generic Delft3D model setup includes the lake bathymetry ( Figure 3) and a simplified setup for river inflow that includes the total discharge of all four streams entering the lake from the south-east.
The effect of discharge on the lake circulation was initially assessed by imposing a water discharge of 60 m³/s (Scenario 1a) and 600 m³/s (Scenario 1b) without wind, to test whether discharge alone induced circulation patterns. The imposed discharge values represent a duplication of the present-day peak discharge (60 m³/s) and a more extreme value, potentially representing deglacial conditions (600 m³/s). Subsequently, two wind scenarios (Scenarios 2 and 3, see Table 2) were defined with a discharge of 60 m³/s and a dominant wind direction approximately parallel to the lake axis (Figure 7).
Winds were set to originate from either the east-north-east representing katabatic winds from Jostedalsbreen ice cap, or from the west to north-west reflecting Atlantic winds funnelling through Nordfjord and adjacent fjords. Hence, it was possible to create two artificial daily wind time series using a simple stochastic approach with identical directions but with different magnitudes. The high-wind scenario is characterized by a fivefold increase in wind speed compared to the low-wind scenario.
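A minimal Python sketch of such a synthetic daily wind forcing is given below: both scenarios share the same directions, and the high-wind series is simply five times stronger, as described above. The lognormal magnitude distribution, the mean speed and the two axis-parallel directions are illustrative assumptions, not the values used in the actual model runs.

import numpy as np

rng = np.random.default_rng(0)
n_days = 365

# Directions roughly parallel to the lake axis (degrees); shared by both scenarios.
directions = rng.choice([100.0, 290.0], size=n_days)
# Daily wind speeds (m/s): a stochastic low-wind series and a fivefold stronger high-wind series.
low_wind = rng.lognormal(mean=np.log(3.0), sigma=0.5, size=n_days)
high_wind = 5.0 * low_wind

print(f"low-wind mean speed:  {low_wind.mean():.1f} m/s")
print(f"high-wind mean speed: {high_wind.mean():.1f} m/s")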
| Scenario 1
With no winds (Scenario 1), only minor currents were generated in Lake Strynevatnet near the inflow and outflow areas of the fluvial systems (Figure 8A). A similar interpretation can be obtained from the cross-sections. Here, generated currents are minor at all modelled water depths with an overall direction towards the lake outlet (Figure 8B,C). To evaluate the sensitivity of the model to fluvial discharge, the imposed fluvial discharge was increased by a factor of 10 (600 m³/s). However, the results indicate that the increased discharge by itself did not produce a circulation pattern in the lake. The main reason for this is that the discharge at the outflow point has a similar capacity to the discharge at the inflow point, preventing the evolution of complex circulation patterns. A poorly mixed water column was probably generated, with sediment concentrations for both clay and silt increasing with water depth (Figure 8A,J). Therefore, the simulations show that fluvial discharge alone is insufficient to create lacustrine circulation patterns.
| Scenario 2
The mild wind conditions of Scenario 2 resulted in a moving water mass, local downwelling and upwelling and contrasting flow directions for shallow and deep sections of the lake (Figure 8D through F). For example, after 24 h of east-south-east wind (100° at 4.0 m/s) at day 197 of the simulation, the modelled average flow velocities are well below 0.05 m/s (Figure 8E). A clear 3D circulation pattern emerges with vertical (Figure 8E) and lateral currents (Figure 8F). Two different flow regimes occur at depth: (a) an upper current towards the west-north-west (similar to the modelled wind direction at that time), and (b) a return current towards the south-east below water depths of approximately 50 m. Downwelling occurs along the southern shores and upwelling along the northern shore at the location of the cross-section. The upwelling and downwelling, as well as the return flow, change direction if the wind direction alternates to a western wind. As a result of the complex flow pattern during the low to moderate wind events of Scenario 2, the water mass is fairly well mixed and concentrations vary little with depth (Figure 8J). However, during periods of low wind activity, the degree of water and sediment mixing decreases.
| Scenario 3
Scenario 3 overall shows a similar behaviour of the water mass under the strong wind conditions (wind from the west at 20 m/s) computed for scenario 2, yet the flow velocities are significantly higher ( Figure 8G through I). The downwelling is also more pronounced with much higher overall flow velocities ranging from 0.3 m/s at the surface to 0.1-0.2 m/s at depth. Furthermore, the boundary between the shallow and return flows occurs at approximately 25-30 m water depth. Near Lake Nerfloen, where the water depth is much less, a more local and complex flow pattern evolves ( Figure 7G). The sediment concentrations ( Figure 8J) are higher than for Scenario 2 and constant with depth, suggesting a well-mixed water body. It is noteworthy that in Scenario 3 the modelled silt concentration exceeds the clay concentration in Lake Strynevatnet, while the reverse is observed in Scenarios 1 and 2.
| Simulated time-series analyses
Time-series analyses of the simulated scenarios show that the lake water body reacts quickly to a shift in wind conditions. Within 3-5 h (depending on the magnitude of the shift), a new circulation pattern is established.
At the Nerfloen outflow point, the sediment concentrations for both the clay and silt sediment classes increase over time for all scenarios (Figure 9). An equilibrium concentration will likely evolve over time as a function of the inflow concentration, the average residence time of the water in Lake Strynevatnet and the sediment settling velocity. The modelling scenarios were not designed to quantify this equilibrium concentration, yet Figure 9 shows that after 365 days, this concentration is not yet reached. Furthermore, the modelling results show that Lake Strynevatnet is not a perfect sediment trap. Suspended sediment is able to escape from the lake while being transported towards Nordfjord. This has an implication for the Holocene sediment yield reconstruction.
The time-series of silt concentration for Scenario 3 (Figure 9) does not follow the same trend as for clay sediment concentration, but it shows strong increases and decreases over short time intervals. Further analyses of the simulation data revealed that there were 12 wind events in Scenario 3 that led to strong flow velocities in the shallow part of Lake Nerfloen, which gave rise to minor erosion of the lake floor (Figure 10). Erosion ranged between <0.1 and 6 mm per event, depending on the wind conditions and on the location in the lake. Erosion occurred for storms originating from both western and eastern directions.
Figure 7. Synthetic wind climate as used for hydrodynamic modelling to understand the current patterns in Lake Strynevatnet under different wind conditions. The rose diagram (inset) shows the probability for the wind direction, which is similar for the low-wind and high-wind scenarios.
As only silt is defined as lake floor sediment (Table 1), erosion will only affect the silt concentration in the water column. The re-suspended silt is drawn into Lake Strynevatnet by the circulation patterns originating from both western and eastern winds. The effective mixing of the water body is rapidly reflected in the silt concentration across Lake Strynevatnet (Figure 10). Within a day, the silt concentrations in the lake centre and near the Erdalen effluent respond to the erosion event in Lake Nerfloen. The silt concentration curve in the lake centre is quite smooth as mixing of the water body is highest at this location. Near the Erdalen effluent, the silt concentration curve is more volatile due to changes in the wind direction with time that affects the local mixing rate of fluvial derived and lake waters.
The simulated sediment accumulation rates for Scenarios 1-3 are plotted in Figure 11. A clear proximal to distal trend in sediment accumulation rate arises for Scenarios 1 and 2, where the rates are highest near the Erdalen effluent. The absence of wind-driven lake circulation patterns for Scenario 1 leads to higher proximal deposition rates and lower distal deposition rates. This trend in deposition rate also occurs in Scenario 2, but compared to Scenario 1, more sediment is deposited in the distal (western) area of Lake Strynevatnet. Wind-induced circulation patterns in the lake affect the deposition pattern, but not to the extent that the proximal to distal trend is overprinted. For Scenario 3 it is evident that the many wind events that occur during the year have fully overprinted the proximal to distal trend, resulting in more homogeneous sediment accumulation rates across the lake.
| Estimated trapping efficiency
The absolute trapping efficiency of Lake Strynevatnet for suspended sediment is unknown. The numerical simulations do, however, provide a first estimate of the trapping efficiency for the three scenarios as the supplied and preserved sediment volume can be directly compared. The simulated sediment trapping efficiencies for Scenarios 1-3 over the final 2 months of the simulated year are respectively 0.82, 0.76 and 0.68. The trapping efficiency for Scenarios 1-3 for clay only are respectively 0.72, 0.65 and 0.61 and for silt 0.93, 0.87 and 0.74. The numbers show that the trapping efficiency is higher for silt than for clay sediments and that the trapping efficiency decreases with the strength of the wind-induced currents. Over time, it is likely that the simulated trapping efficiency decreases further as Figure 9 shows that the sediment concentrations at the outflow point of Lake Strynevatnet are still rising towards the end of the simulation.
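The trapping efficiencies quoted above compare the sediment mass retained in the lake with the mass supplied to it; a minimal Python sketch of that bookkeeping is given below. The function only encodes the definition, and the example masses are hypothetical.

def trapping_efficiency(supplied_mass, exported_mass):
    """Fraction of the supplied suspended sediment mass retained in the lake."""
    return (supplied_mass - exported_mass) / supplied_mass

# Hypothetical masses (kg) over an averaging window, chosen only to illustrate the calculation.
supplied = 1.00e6
exported = 0.32e6
print(f"trapping efficiency: {trapping_efficiency(supplied, exported):.2f}")  # 0.68, similar to Scenario 3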
YIELD FOR UNIT U3
The average suspended sediment yield can be calculated based on the reconstructed sediment volumes interpreted from the geophysical data. In order to do that, we need to assume a dry-bulk sediment density and a sediment trapping efficiency. Vasskog et al. (2012) reported a sediment dry-bulk density in Lake Nerfloen, the shallow extension of Lake Strynevatnet, ranging between 400 and 700 kg/m³ for sediments overlying the Storegga event bed (see fig. 3 in Vasskog et al., 2012). In the absence of direct dry-bulk sediment density measurements in Lake Strynevatnet, this study uses the average measured dry-bulk sediment density of 517 kg/m³, based on the values reported by Vasskog et al. (2012). This average value is in agreement with the dry-bulk density estimates based on granulometric measurements of fine silt to clay (Verstraeten and Poesen, 2001) and falls within the range of dry bed sediment densities for Drammensfjord (250-700 kg/m³; Smittenberg et al., 2005). In addition, it is necessary to correct the mapped sediment volumes of unit U3 for the trapping efficiency of the suspended sediment in Lake Strynevatnet. The estimated trapping efficiency of 0.7 is slightly lower than the value calculated for Scenario 2, which represents clay and silt combined, because suspended sediment concentrations at the outflow point of Lake Strynevatnet are still rising (Figure 9). In order to calculate the sediment yield for unit U3, a catchment area of 440 km² is used, representing all catchments of Lake Strynevatnet (Figure 1). Based on the above assumptions regarding dry-bulk density and trapping efficiency, it is estimated that since 8150 cal yr bp, 5.0 t/km²/year of sediment has been supplied to Lake Strynevatnet.
Figure 10. Time series of modelled silt concentrations for Scenario 3 at three locations in Lake Strynevatnet. Blue and orange bars indicate the timing and magnitude of erosion events in Lake Nerfloen, as well as the wind direction associated with the erosion event. The lake depocentre location refers to the red star in Figure 8A.
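The yield figure quoted above follows from the mapped volume of unit U3, the assumed dry-bulk density, the trapping-efficiency correction, the catchment area and the elapsed time; the short Python check below reproduces that arithmetic using the values given in the text.

volume_km3 = 0.0244        # mapped volume of unit U3 (km^3)
dry_bulk_density = 517.0   # assumed dry-bulk sediment density (kg/m^3)
trapping_efficiency = 0.7  # assumed fraction of supplied sediment retained in the lake
catchment_km2 = 440.0      # total catchment area of Lake Strynevatnet (km^2)
years = 8150.0             # time elapsed since the Storegga event

deposited_tonnes = volume_km3 * 1e9 * dry_bulk_density / 1000.0  # mass preserved in the lake (t)
supplied_tonnes = deposited_tonnes / trapping_efficiency          # correct for sediment escaping the lake
print(f"yield: {supplied_tonnes / catchment_km2 / years:.1f} t/km^2/year")  # ~5.0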
The contemporary suspended sediment yield values for Erdalen valley (16.4 t/km²/year; Beylich et al., 2017) appear to be much higher than the overall value of the past 8150 years. This might suggest that the current glacier retreat results in a higher sediment yield than that obtained over the past 8150 years, which includes a glacier-free period. Compared to other partly glaciated drainage basin systems in Norway and in other cold climate environments worldwide, the contemporary suspended sediment yield measured for Erdalen is rather low (Beylich et al., 2017).
| DISCUSSION
This study indicates the value of combining detailed mapping of lake infill sediments and a thorough understanding of the associated hydrodynamic processes in the lake. The variability in hydrodynamic processes can be better understood by utilizing numerical process models, assuming there is a basic bathymetric map available. The seismic lake infill characterization serves as quality control of the numerical process models. The ongoing challenge is how to operate these numerical models in the absence of any hydrodynamic monitoring data to validate and calibrate model parameters. There are still many lakes in the world where we lack a basic bathymetric map and have no insight into the sediment infill, nor are there any hydrodynamic observations or monitoring data. Global climate models can provide us with some general climate data on precipitation, temperature and wind which may be enough to improve our understanding of lake hydrodynamics and associated morphodynamics. Yet in order to assess the results of the hydrodynamic models there needs to be an understanding of the lake bathymetry and sediment infill.
Specifically, for lakes that are under threat of future climate change (including the effects of lake-level variations, and variations in sediment and water discharge), in combination with an overall increasing human population and associated dependency on lake resources (fresh water, fish farming, irrigation, etc.), a combination of numerical modelling and geophysical characterization (which are both low-cost) will provide valuable insight into lake hydrodynamics that will benefit future management of lake waters.
The geophysical data presented in this paper show that unit U3 consists of three subunits with clearly differentiated stratigraphic signatures. The Delft3D simulation data provide a qualitative explanation for such observed trends related to wind patterns and wind-induced currents. Sediment distribution observations for subunits U3a-U3c suggest a pattern of alternating strong and weak wind conditions. Vasskog et al. (2012) discussed the presence of distinct glacial and non-glacial conditions during the past 8000 years in the catchment area of Lake Strynevatnet. In the absence of a core and absolute age dating of the three subunits, it is only possible to speculate whether the formation of subunits U3a-U3c is related to the variable Holocene glacial conditions of Jostedalsbreen and related Holocene regional climate variability (Karlén and Kuylenstierna, 1996). Palaeoclimate reconstructions typically address past temperature and precipitation patterns yet provide very little information on palaeowind conditions. This study shows that changes in wind patterns (either originating from the Atlantic side or katabatic in origin) will clearly affect lake depositional patterns, which provides an as yet untapped potential stratigraphic archive that can contribute to an improved understanding of palaeoclimate (Krist and Schaetzl, 2001; Nutz et al., 2015; Nutz et al., 2016; Schuster and Nutz, 2018).
The numerical model study only focused on simulating wind-induced currents, the associated spatial and temporal sediment concentrations and the resulting sediment accumulation thickness, ignoring potential complicating factors such as mass failures along the steep lake valley walls, hyperpycnal events or density driven currents, temperature induced lake circulation or the effects of surface ice on lake circulation. The model returned qualitative results and allowed the wind to be identified as an important force that clearly affects the depositional patterns in the lake. It also shows that winds from both directions have a similar effect on the lake circulation. In addition, it has become evident that erosion resulting from wind-induced currents must have affected the shallow Lake Nerfloen confirming the findings of Vasskog et al. (2012).
| CONCLUSIONS
• The present study combines geophysical data and numerical model data to increase our understanding of palaeocurrent patterns in Lake Strynevatnet.
• Three seismic units and their ages are identified in Lake Strynevatnet. A chronological framework is obtained from other studies: (a) unit U1: marine and lacustrine deposits (11000-8150 cal yr bp), (b) unit U2: Storegga-tsunami deposits (8150 cal yr bp), and (c) unit U3: lacustrine deposits (8150 cal yr bp to present). The latter unit can be subdivided into three based on differences in geometry and seismic facies.
• Preserved sediment volumes have been reconstructed for each of the three units and subunits based on the seismic data. The total minimum volume of sediments imaged and mapped in the Strynevatnet Basin is 0.0537 km³. The sum of units U1 and U2 represents 55% of the preserved sediment volume (0.0293 km³) while unit U3 represents 45% of the total sediment volume (0.0244 km³). Of that latter volume, subunit U3a represents 21%, U3b represents 31% and U3c represents 48%.
• Based on the hydrodynamic modelling results of wind-induced circulation patterns, it is possible to conclude that wind magnitude, not wind direction, affects the sediment dispersal system on the lake floor. Weak winds lead to a typical trend in sedimentation rate from high near the river effluent to low in settings that are more distal. Strong winds are more effective at mixing the upper part of the water column, which results in a homogeneous sedimentation rate across the lake.
• Based on the comparison between the sedimentation pattern in the lake and the hydrodynamic simulation presented here, it is proposed that the glaciation history of Jostedalsbreen has had an impact on the strength of the katabatic winds during the past 8150 years.
• Model results show that the trapping efficiency in Lake Strynevatnet for suspended sediments decreases with modelled wind-induced current velocities in the lake (from 0.82 when the wind is absent to 0.68 for strong winds). Furthermore, trapping efficiency for silt is higher than for clays due to the difference in settling velocities.
• The calculated yield for fine (suspended) sediment is 5.0 t/km²/year (post 8150 cal yr bp). Measured present-day values are 16.4 t/km²/year.
• Changes in wind patterns (either originating from the Atlantic side or katabatic in origin) will affect lake depositional patterns, which provide an as yet untapped potential to improve our understanding of palaeoclimates in stratigraphic studies.
Orientational dynamics in supercooled glycerol computed from MD simulations: self and cross contributions
The orientational dynamics of supercooled glycerol using molecular dynamics simulations for temperatures ranging from 323 K to 253 K, is probed through correlation functions of first and second ranks of Legendre polynomials, pertaining respectively to dielectric spectroscopy (DS) and depolarized dynamic light scattering (DDLS). The self, cross, and total correlation functions are compared with relevant experimental data. The computations reveal the low sensitivity of DDLS to cross-correlations, in agreement with what is found in experimental work, and strengthen the idea of directly comparing DS and DDLS data to evaluate the effect of cross-correlations in polar liquids. The analysis of the net static cross-correlations and their spatial decomposition shows that, although cross-correlations extend over nanometric distances, their net magnitude originates, in the case of glycerol, from the first shell of neighbouring molecules. Accessing the angular dependence of the static correlation allows us to get a microscopic understanding of why the rank-1 correlation function is more sensitive to cross-correlation than its rank-2 counterpart.
Dielectric spectroscopy (DS) is a powerful tool for studying the dynamics of polar supercooled liquids [1-3]. The outcome of the measurement, the complex dielectric permittivity ϵ(ω), contains a wealth of information regarding the collective orientational motion of the permanent dipoles of the constitutive molecules, and more precisely on the relaxation processes at work in the liquid under scrutiny [4,5]. The broad range of available frequencies (10⁻⁵-10¹³ Hz) over a wide range of temperatures makes it possible to follow the slowing down of the structural α relaxation upon cooling close to the glass transition temperature, as well as the emergence of secondary relaxation processes such as JG processes, believed to be an intrinsic characteristic of glassy dynamics [6], or the excess wing, recently associated with dynamical facilitation [7]. DS can also be used to characterize the cooperative nature of the α relaxation [8], to determine density scaling of the relaxation time [9,10], or to study physical aging of out-of-equilibrium liquids [11,12].
The complex dielectric permittivity obtained from DS measurements can be linked to the time-dependent, equilibrium, field-free total dipole moment correlation function C₁(t) through a Fourier-Laplace transform [13-15] involving kB, Boltzmann's constant, ρ, the liquid density, T, its temperature, and ϵ∞, the permittivity at optical frequencies. The dipole correlation function of rank ℓ is defined in terms of N, the number of dipoles in the cavity considered, Pℓ, the Legendre polynomial of rank ℓ, and ϑi,j(t0, t0+t), the angle between molecule i at time t0 and molecule j at t0+t. In DS, this angle is measured between dipole moments and the technique is sensitive to the rank ℓ = 1. The corresponding static value involves gK, the Kirkwood correlation factor, which can either be > 1, in which case the dipole-dipole correlations are overall positive, or < 1, meaning that anti-alignments dominate. The dynamics is also expected to be affected by cross-correlations because there is a priori no reason that the timescales and shapes of the self and cross-correlation functions coincide exactly. A striking example of the dynamical consequences of intermolecular correlations is the behavior of mono-alcohols, which display another relaxation process at low frequencies, called the Debye peak, related to the formation of supramolecular H-bonded structures consisting of chains (gK > 1) or rings (gK < 1) [16]. A recent theory from Déjardin et al. [17] showed that the liquid dynamics can be strongly affected by positive cross-correlations.
* Corresponding author: marceau.henot@cea.fr
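The display equations referred to in this paragraph are not reproduced above; the LaTeX sketch below restates the standard forms implied by the surrounding definitions (the rank-ℓ dipole correlation function and the Kirkwood factor as the static rank-1 value for normalized dipoles). The prefactors of the original permittivity relation are not reproduced here.

% Standard definitions consistent with the text; prefactors of the permittivity relation omitted.
\begin{align}
  C_\ell(t) &= \frac{1}{N}\sum_{i,j}\Big\langle P_\ell\big[\cos\vartheta_{i,j}(t_0,\,t_0+t)\big]\Big\rangle ,\\
  C_1(0) &= 1 + \frac{1}{N}\sum_{i\neq j}\big\langle \hat{\mu}_i\cdot\hat{\mu}_j \big\rangle = g_K .
\end{align}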
Recently, results from DS were compared to other techniques less sensitive to cross-correlations. The fluorescence response of a local probe diluted in a mono-alcohol was shown to be insensitive to the Debye relaxation of the liquid, allowing this relaxation to be disentangled from the other relaxation processes [18]. Another technique that has proven useful in that regard is depolarized dynamic light scattering (DDLS), which probes molecular orientations through the anisotropy of the polarizability. The relevant correlation function is given by eq. 2 with ℓ = 2. It follows that the technique does not distinguish between parallel and antiparallel alignments. There is strong experimental evidence that DDLS is insensitive to cross-correlations. For example, Gabriel et al. [19] showed that in mono-alcohols DDLS displays an α peak but no Debye peak. In addition, in a non-associating liquid, Pabst et al. [20] showed that progressively diluting the system in a non-polar solvent leads the DS spectra to look more and more like the DDLS spectra. All of this illustrates the importance of cross-correlation effects in DS, which can significantly broaden the α peak. Moreover, while the shape of the α peak in DS spectra is system dependent, it was shown in DDLS to follow a generic line shape of slope −1/2 on the high-frequency flank [21]. There is still debate, however, on whether this generic response reflects the true structural relaxation better than the dielectric one [22].
When dealing with physical processes taking place at the nanometric scale, molecular dynamics (MD) simulation is an attractive method that can give access to microscopic observables that are otherwise hard, or impossible, to obtain experimentally. This method is, however, limited to high temperatures or simplified systems, due to its computational cost. To study the generic behavior of liquid glass-formers, model systems can be thought of as being made of polydisperse beads interacting through a Lennard-Jones potential. This helped give information on the spatio-temporal nature of relaxations [7,23]. Another approach, more suitable for direct comparison with experiments, is to rely on a more precise modelling of specific molecules, taking into account their dipolar nature and electrostatic interactions. This gives access to their dielectric response [24-28]. Recently, MD simulations on a model dipolar system showed that, while the orientational ℓ = 1 correlation function of weakly polar systems is dominated by the self response, strongly polar liquids are much affected by cross-correlations [29].
The wide variety of organic liquids available has led to the choice of some systems, considered as models or representatives. Glycerol, by its apparent simplicity and its low tendency to crystallize, has long been the subject of extensive studies by various techniques including dielectric spectroscopy [2,11,30-32], neutron spectroscopy [33], nuclear magnetic resonance (NMR) [34], DDLS [35,36] and MD simulations [37-44]. Its dynamics is, however, not particularly simple. As a tri-alcohol, it is subject to H-bonds but does not display a Debye peak that would result from linear supramolecular chains. Shear mechanical spectroscopy has shown the existence of a low-frequency mode that is believed to result from the hydrogen-bonded network formed between molecules [45].
In this article, we report an MD study of the orientational dynamics of glycerol, over a large temperature range (from 253 to 323 K) reaching the moderately supercooled regime, simulated from a model already widely used in the literature [37-40,44] over durations of up to 7 µs. We first compute the self response of the dipolar moment for ranks ℓ = 1 and 2, from which we deduce the loss function χ′′ℓ(f) for frequencies down to 200 kHz. We then analyze the cross-correlations and exploit the possibility offered by MD to decompose this part of the response as a function of the relative distance and orientation of the dipoles. We compute the total loss function for both ranks as well as the part resulting from cross-correlations alone, which allowed us to verify that cross-correlations play a major role in the ℓ = 1 response while being almost negligible for ℓ = 2. We compare these data to experimental DS and DDLS spectra and obtain a similar temperature dependence for the relaxation time and the slope of the high-frequency flank of the α peak. We discuss how the differences in the spectra associated with different ranks can be related to the underlying molecular relaxation mechanisms. Moreover, we show that, for glycerol, the net cross-correlation originates only from the first shell of neighbouring molecules. This is the case for both ℓ = 1 and 2, although their different sensitivity to orientational correlations ultimately leads to significant differences in the importance of contributions coming from cross-correlations.
II. METHODS
The molecular dynamics (MD) simulations were performed using OpenMM [46] on an Nvidia RTX A5000 GPU. Glycerol has been modeled using the reparameterized AMBER force field previously employed in the literature [37-42,44], whose parameters are given in the suppl. mat. Atoms belonging to the same molecule interact through harmonic potentials for bond lengths and angles and a periodic potential for bond torsions. Non-bonded atoms interact through a Lennard-Jones potential with a 1 nm cutoff and a Coulomb interaction computed using a Particle Mesh Ewald (PME) algorithm (1 nm cutoff and 0.0005 error tolerance). The simulation does not account for electronic polarizability. Each atom carries a constant partial charge originally derived by Chelli et al. [37] from quantum mechanical calculations. Later, Blieck et al. [39] noticed that this parameterization led, in the temperature range 333-413 K, to a dynamics 10 times faster than measured experimentally by neutron spectroscopy. They slowed down the dynamics by the right amount by reducing the hydroxyl group atomic charges by 5%. They also checked that the simulation reproduced fairly well the static structure factor measured by neutron scattering [47]. This corresponds to a mean dipole moment of ⟨µ⟩ = 3.2 D, which is higher than the µexp = 2.68 D value measured in a non-polar solvent [48]. This can be seen as a way to compensate for the absence of electronic polarizability, which leads, in the real system, through the reaction field, to an effective dipole moment greater than µexp [4]. The same parameters were later used by Egorov et al. [40] (who slightly corrected the charges to ensure molecular neutrality, and made all bonds flexible) to study glycerol-water mixtures, and more recently by Becher et al. [44] to reproduce NMR spectra in the 300-540 K range. The parameters used in this work were almost identical, with only small modifications intended to reduce the computational cost: the lengths of bonds involving hydrogen were fixed (as in refs. [37,39]) and the hydrogen atom mass was increased by 40%, allowing an integration time step of 4 fs to be used. The simulations were carried out on a system of N = 2160 molecules (30,240 atoms) in a cubic cell of side length a ≈ 65 Å with periodic boundary conditions (PBCs), in the NPT ensemble at eight different temperatures T (from 323 to 253 K) and at pressure P = 1 bar, using a Monte Carlo barostat and a Nosé-Hoover thermostat. In order to study the effect of the system size, a simulation at T = 323 K was performed on a system consisting of only N = 540 molecules (a ≈ 41 Å). Random initial states were generated using Packmol [49], equilibrated at 323 K and progressively cooled down to 253 K in 10 K steps, waiting at each step an equilibration time corresponding to 98 to 200 τα, reaching 7 µs (see details in the suppl. mat.). At each temperature, simulation runs lasted more than 180 τα for T ≥ 263 K and 67 τα at 253 K (corresponding to 4 µs). For all simulation runs, the dipole of each molecule µi(t) (i ∈ [1, N]) and its position ri(t) were determined from the barycentres of the positive (q+ at r+) and negative (q− = −q+ at r−) charges, the dipole being given by µi = q+(r+ − r−).
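A minimal Python sketch of the per-molecule dipole computation described above is given below. The atom coordinates and partial charges are placeholder values (not the glycerol force-field parameters), and the charge-weighted sum reduces to the barycentre form q+(r+ − r−) because each molecule is overall neutral.

import numpy as np

def molecular_dipole(positions, charges):
    """Dipole vector of one molecule: sum_a q_a * r_a (equals q+ (r+ - r-) for a neutral molecule)."""
    return (charges[:, None] * positions).sum(axis=0)

# Placeholder three-site "molecule" purely for illustration; coordinates in nm, charges in e.
positions = np.array([[0.0, 0.0, 0.0], [0.10, 0.0, 0.0], [0.0, 0.10, 0.0]])
charges = np.array([-0.4, 0.2, 0.2])  # sums to zero, as required for the barycentre form
print(molecular_dipole(positions, charges))  # dipole vector in e*nm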
III.1. Self correlation
The self dipole correlation function C_ℓ^self(t) is obtained by restricting the correlation function to the i = j terms. It characterizes the molecular relaxation through rotational movement of the permanent dipole. This function is shown in fig. 1a at each temperature, for ranks ℓ = 1 and 2. Three regimes can be observed: at short times (t < 100 fs) a small decorrelation occurs and a boson peak is visible at t ≈ 70 fs. At long times, there is a complete, non-exponential decorrelation (C_ℓ^self(t) reaches 0) corresponding to the α relaxation. At intermediate times the correlation is high but slowly decreasing. This regime is almost nonexistent at 323 K but extends over two decades at T = 253 K. While the global shape is the same for ℓ = 1 and 2, the short-time decorrelation appears more intense for ℓ = 2. This is simply due to the quicker decrease of P2(cos ϑ) compared to P1(cos ϑ) for ϑ ≪ 1. The mean self relaxation time is obtained from τ_ℓ^self = ∫₀^∞ C_ℓ^self(t) dt and is shown as a function of 1/T in blue in fig. 6a. The relaxation times are shorter for ℓ = 2 (empty markers) than for ℓ = 1 (solid markers) and they both display a super-activated behaviour.
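The display equations for the self part are not reproduced above; the LaTeX sketch below gives the standard forms consistent with the surrounding text: the i = j restriction of the rank-ℓ correlation function and the time-integral definition of the mean relaxation time.

% Standard forms consistent with the text; the original display equations are not reproduced verbatim.
\begin{align}
  C_\ell^{\mathrm{self}}(t) &= \frac{1}{N}\sum_{i=1}^{N}\Big\langle P_\ell\big[\cos\vartheta_{i,i}(t_0,\,t_0+t)\big]\Big\rangle ,\\
  \tau_\ell^{\mathrm{self}} &= \int_0^{\infty} C_\ell^{\mathrm{self}}(t)\,dt .
\end{align}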
The self loss function was obtained, following eq. 1, by applying the fluctuation-dissipation theorem [50], where TF denotes the Fourier transform, computed using the fftlog algorithm adapted to log-spaced data [51]. The fact that the correlation function was averaged over long times (≈ 100τα) leads to a fairly low amount of noise in the spectra, shown in fig. 1b. They were all rescaled by superimposing their microscopic peaks at 10¹³ Hz. The frequency at which the maximum of the α peak is reached was found to correspond (within the uncertainty) to 1/(2πτ_ℓ^self). On the low-frequency side, the spectra follow a power law with slope 1, as expected. On the high-frequency flank of each spectrum, there is a power-law regime over one to two decades in frequency with a slope −β_ℓ^self, interrupted by the fast process [3]. The corresponding values of β_ℓ^self are shown in blue in fig. 6b. For ℓ = 1 (solid markers), the slope increases with temperature (ranging from 0.36 at 253 K to 0.46 at 323 K) while for ℓ = 2 (empty markers), it is temperature independent and systematically smaller (≈ 0.27). These low β values are associated with the non-exponential nature of the relaxation process.
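For illustration, the Python sketch below evaluates the classical fluctuation-dissipation relation numerically, χ''(ω) ∝ ω ∫ C(t) cos(ωt) dt, on a toy stretched-exponential correlation function. The prefactor is omitted and a simple trapezoidal integration replaces the fftlog approach used in the actual analysis; the toy parameters are not glycerol data.

import numpy as np

def loss_function(t, corr, freqs):
    """chi''(f), up to a prefactor: omega * integral of C(t) * cos(omega t) dt."""
    omega = 2.0 * np.pi * np.asarray(freqs)
    return np.array([w * np.trapz(corr * np.cos(w * t), t) for w in omega])

# Toy stretched-exponential correlation function (beta = 0.5), purely illustrative.
t = np.logspace(-4, 3, 4000)      # time, arbitrary units
corr = np.exp(-np.sqrt(t / 1.0))
freqs = np.logspace(-3, 1, 40)
chi2 = loss_function(t, corr, freqs)
print(f"alpha-peak position: f ~ {freqs[np.argmax(chi2)]:.3g} (arb. units)")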
III.2. Static cross-correlation
As stated in the introduction, experimental methods such as DS and DDLS are sensitive not only to the self correlation function, but rather to a total correlation made of the self part and of a cross-correlation part. We thus need access to the correlation function associated with the cross-correlations, i.e. the i ≠ j terms. However, one has to be careful with the application of this definition directly to the MD simulation box due to the effect of PBCs on the treatment of electrostatic interactions. With the PME method used here, our simulation box can be seen as wrapped in tinfoil, or embedded in an infinite medium in which the macroscopic electric field is null [26,28,52]. This effect is responsible for a long-range dipole correlation of significant amplitude, which is an artifact of the simulation and cannot be suppressed or diminished by increasing the simulation box size (see fig. S1 of suppl. mat.). This artificial cross-correlation is maximum on average for pairs of molecules separated by a distance of the order of the box size a. A way to get around this difficulty is to use a simulation box large enough to decouple the real correlation (occurring at relatively small distances) from the artifact [26,28]. We decompose the cross-correlation function C_ℓ^cross(t) into contributions per unit distance, denoted c_ℓ(r, t), depending on the distance r = ∥r_j − r_i∥ between the reference molecule i and all molecules within [r, r + dr]. This quantity, computed for slices of 0.5 Å and averaged over i and t0, is plotted in fig. 2a for the static case (t = 0) for ℓ = 1 and 2. The dipole density n(r) is plotted in black in fig. 2b alongside the parabola in red that would be obtained for a homogeneous system of the same average density. The difference between these two curves shows a series of maxima that correspond to the first, second and third neighbour peaks.
For ℓ = 1, it appears that the cross-correlation contribution c1(r) (in blue) is maximum for the first neighbours and reaches zero before the second layer of neighbours. Fig. 2c represents, as a function of r, the total contribution ∫₀^r c(r′) dr′ integrated within a sphere of radius r. We see that the cross-correlation reaches a plateau at r ≈ 7 Å while the cross-correlation coming from PBCs starts to be perceptible for r > 20 Å (see suppl. mat.). It is also interesting to study the mean level of static cross-correlation Γ(r) = c(r)/n(r), plotted in fig. 2d, which shows that the cross-correlation per dipole is positive, is a strictly decreasing function of the distance and is only of the order of 5-15% on average for the nearest neighbours. From these data, we can deduce the Kirkwood factor g_K = 1 + ∫₀^r_lim c(r′) dr′ = 1.70 ± 0.02 at 273 K, with r_lim = 7.5 Å. This is smaller than the 2.6 ± 0.2 value reported in the literature and deduced from static permittivity measurements [36] but it is compatible with previous numerical results on glycerol for T > 250 K [43]. It is also interesting to note that our value of µ²g_K (which is the quantity accessible from the experiments, see eq. 1) matches exactly the experimental value and displays the same temperature dependence (cf. fig. S2 in suppl. mat.). In other words, the simulation gives the expected value of ϵ(0) over the whole temperature range. However, as the dipolar moment of glycerol has been measured with reasonable accuracy in a non-polar solvent [48], it is likely that the MD simulation underestimates the value of g_K. For ℓ = 2, the cross-correlation (in orange) also appears to be due to the first layer of neighbours but is much less intense than for ℓ = 1 and its integrated value saturates at a slightly higher radius. With r_lim = 9.5 Å, the quantity analogous to the Kirkwood factor for ℓ = 2 would be only 1.11 ± 0.01.
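A Python sketch of the distance decomposition used above is given below: pair terms P_ℓ(cos ϑ_ij) are accumulated in distance bins and normalized per reference dipole. Minimum-image handling of the periodic box and the time average are omitted, and the random configuration is purely illustrative.

import numpy as np

def cross_correlation_by_distance(positions, dipoles, r_max=20.0, dr=0.5, ell=1):
    """Static c_ell(r): per-distance-slice sum of P_ell(cos theta_ij) over i != j, per reference dipole."""
    unit = dipoles / np.linalg.norm(dipoles, axis=1, keepdims=True)
    n = len(unit)
    bins = np.zeros(int(r_max / dr))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = np.linalg.norm(positions[j] - positions[i])
            if r >= r_max:
                continue
            c = float(unit[i] @ unit[j])
            p = c if ell == 1 else 0.5 * (3.0 * c * c - 1.0)  # P_1 or P_2 of cos(theta_ij)
            bins[int(r / dr)] += p
    return bins / n

# Toy random configuration (positions in Angstrom), purely illustrative.
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 30.0, size=(200, 3))
dip = rng.normal(size=(200, 3))
print(cross_correlation_by_distance(pos, dip)[:5])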
The dipolar interaction is anisotropic in nature and it makes sense, rather than averaging the cross-correlation over all dipoles situated at a given distance, to distinguish the contribution as a function of the relative orientation. Spherical coordinates (r, θ, ϕ) can be defined with respect to the reference dipole i, where θ is the angle between µi and r = rj − ri. By symmetry, the contribution should not depend on the azimuthal angle ϕ. The cross-correlation contribution c_ℓ(r, θ) per unit distance and solid angle is shown in the static case in fig. 3a (ℓ = 1) and b (ℓ = 2) as a function of r and θ. For ℓ = 1, similarly to a recent observation in water [28], the spatial distribution of cross-correlation appears strikingly different from the θ-averaged curves shown previously. The correlation is positive in an angular sector situated above and below the reference dipole (|cos θ| > cos θ_lim) and mostly negative on its sides. It is null on lines of constant |cos θ| = cos θ_lim, shown in grey, with θ_lim = 52°. The cross-correlation contributions summed over these two angular sectors are shown in fig. 3c (top). These positive and negative contributions extend well beyond what is visible when considering their sum and, while decreasing, are far from negligible at r = 16 Å. However, these contributions cancel each other for r > 7.5 Å. This angular dependence of the cross-correlation is what is expected when considering the interaction energy between two electrostatic dipoles as a function of their relative orientation [53], and drawings of the most favorable dipole orientations are shown in fig. 3a. The net contribution of cross-correlation comes from the first shell of neighbouring molecules. Close molecules situated above and below are strongly positively correlated (even more above than below) with an alignment rate reaching 50%. This correlation is favored by the dipole-dipole interaction, although the top-bottom asymmetry illustrates that at such a close distance, it is not the only interaction playing a role. On the sides, molecules belonging to the first shell but further than 5.2 Å (corresponding to the first neighbour peak) are on average anti-aligned (again as favored by the dipole-dipole interaction), although not enough to compensate for the positive correlation. Finally, side molecules closer than 5.2 Å are positively aligned. This means that there exist other effects (that may be related to constraints on molecular conformation or to the presence of H-bonds) able to compensate for the a priori unfavorable situation of having barycentres of same-sign charges facing each other on average. For ℓ = 2 also, a complex spatial dependence of the cross-correlation is visible in fig. 3b. As this quantity is not sensitive to the correlation sign, it shows a very different behavior than ℓ = 1. Some oscillations, which correlate well with the density inhomogeneity, are visible. Those also appear for ℓ = 1 but dominate here. After the first layer of neighbours, the contributions from the two angular sectors are in anti-phase and cancel each other on average. Similarly to the ℓ = 1 case, the net contribution comes from the first neighbour shell, but it is interesting to observe that its different sensitivity to orientational correlations makes this quantity significantly less sensitive to cross-correlations.
III.3. Global dipole dynamics
For the reason mentioned above, related to the effect of PBCs, the cross-correlation function C_ℓ^cross(t) is computed in the following from eq. 6 by considering only the molecules located within a sphere of radius r_lim rather than the whole simulation box. The resulting normalized cross-correlation function is shown in green in fig. 4a (ℓ = 1) and fig. 4b (ℓ = 2); there is only a 15% increase on average between the self and total mean relaxation times for ℓ = 1 and a 25% increase for ℓ = 2. This means that the Kivelson and Madden relationship [54] (τ_tot = g_K τ_self) does not seem to be verified in glycerol, contrary to other systems such as water [55].
In the same way as for the self part, a loss function χ′′(f) can be computed from the cross part and for the total correlation. The results are plotted for 273 K in fig. 4c (ℓ = 1) and d (ℓ = 2). It is clearly visible that the amplitude of the cross-correlation is of the same order as the self part for ℓ = 1, while it is much smaller for ℓ = 2. On each spectrum, the high-frequency flank of the α peak has a power-law slope −β such that β^self appears constant, close to 0.3. This is already visible on the spectra of fig. 5a but it is even clearer in fig. 5b, where the spectra are shown as a function of the dimensionless frequency ωτ_1^self. The collapse of the high-frequency side is much better for the total spectra with ℓ = 2 (although the α peak slightly broadens upon cooling) than for ℓ = 1 (self or total).
III.4. Comparison with DS and DDLS experiments
The spectra χ′′_1^tot(f) of fig. 5a can be compared to experimental glycerol dielectric spectra. The general shape corresponds well to the measurements of Lunkenheimer & Loidl [2], with a clear α peak showing no excess wing in this range of temperature and a small boson peak around 10¹² Hz. The temperature dependence of the α relaxation time is compared in fig. 6a, with DS data as grey solid circles and the corresponding MD data τ_1^tot in red. They show the same trend and could be fitted with a VFT law with close parameters (up to a vertical prefactor). However, the MD α relaxation time is systematically shorter than its experimental DS counterpart by a factor 2 to 3. It is important to note that the absolute value of the relaxation time is affected in the simulation by the choice of µ, which was optimized by Blieck et al. [39] on neutron scattering data for temperatures higher than 333 K (it was already visible that at the lowest temperature simulated by the authors, 313 K, the relaxation time was underestimated). It seems that this is also the case for other studies using the same parameters [44]. The temperature evolution of the slope β_1^tot is also comparable in MD and in experiments (for which it comes from a Cole-Davidson fit to the spectra), as shown by grey solid circles in fig. 6b, but the slopes are systematically underestimated by the simulation.
Contrary to DS, which probes the reorientational dynamics of the molecules by following the permanent dipolar moment, DDLS does it by following the anisotropy of the polarisability tensor. Glycerol molecules are believed to rotate as a rigid entity [44] and we can reasonably assume that both techniques probe the same dynamics (although giving access to different ranks ℓ). It is also worth noting that results from DDLS experiments can also be affected by a scattering mechanism called dipole-induced-dipole, related to the fluctuation of the internal field. Cummins et al. [56] have argued that this effect may be neglected in the experimental spectra of supercooled liquids. Here we follow these authors. The spectra χ′′_2^tot(f) from the simulation should therefore be comparable with the Fourier transform of the DDLS correlation function reported by Gabriel et al. [36]. The authors concentrated mainly on frequencies below a few MHz and thus on temperatures lower than 260 K (although they also performed a measurement at 323 K). Here also, the MD spectra show a good qualitative agreement with experiments. The DDLS mean relaxation time is shorter than in DS by an amount that appears similar in the experiments and in MD. The inequality β_2^tot < β_1^tot is also verified for both. In the DDLS experiments (see grey empty circles in fig. 6d), given the uncertainty, the slope β_2^tot does not appear to depend much on the temperature, and this is also the case in the simulation (empty red circles). However, here again, the slopes are underestimated in the simulation.
The differences between the simulation and the real system must be weighed against the relative simplicity of the modeling. Indeed, the force field parameters that control intra- and inter-molecular interactions were not adjusted specifically for glycerol but were designed to be applicable to the widest possible range of organic compounds. The only parameter that was adjusted specifically to glycerol was µ. Moreover, the partial charges in the molecule are considered as fixed point charges at the center of each atom, the electronic polarizability is not taken into account and H-bonds are mimicked only by electrostatic interactions of these fixed charges, which can limit their strength [57]. Nonetheless, it is already impressive that a classical model can reproduce well some aspects of the real system. With these limitations in mind, MD simulations can be used, as demonstrated recently by Becher et al. [44], to obtain information on microscopic observables or on the relative effect of external parameters such as the temperature. For ℓ = 2, and in contrast to the ℓ = 1 case, the effect of cross-correlation is very weak and the total loss function can reasonably be assimilated to the self loss function alone (see fig. 4d). This is in complete agreement with the observations of Gabriel et al. [36] and previous work by the same team [19,20] on the ability of DDLS to give access to the self orientational dynamics. This is also in agreement with the recent MD observations on a model system by Koperwas et al. [29], who compared the self and total correlation functions for ℓ = 2 at long times (t ≥ τα).
In fig. 6, the experimental data from refs. [31,36] for DS and refs. [35,36] for DDLS are shown in grey.
The ratio of the self relaxation times τ_1^self/τ_2^self, shown in fig. 6c, ranges from 1.4 to 2.0 and increases with temperature. This quantity is affected by the details of the molecular relaxation mechanism, with two limiting cases leading to values of 3 for isotropic rotational diffusion and 1 for discrete jumps of random amplitude [5]. However, a given value of this ratio cannot be associated with a typical orientational jump amplitude. For example, in a mean field model, tuning the interaction parameter allows this ratio to be changed continuously [58]. In the present case, it is more likely that its value reflects a dynamics governed by rare relaxation events, which are less averaged for ℓ = 2 than for ℓ = 1 [59]. In this case, its decrease upon cooling could be associated with rarer and rarer events. The slope β < 1 can also be linked with a dynamics governed by relatively rare events, spread over long time scales, that do not average enough to produce an exponential relaxation (β = 1) [59]. In this context, as C2 cancels out for smaller angles than C1, it is less averaged over relaxation events, leading to β2 < β1, consistently with our observations.
In the DDLS works mentioned above [19,20,36], the authors suggested using these measurements as a proxy to approach the DS self response, which is not accessible experimentally. This means admitting that the response is reasonably independent of the rank ℓ, but it also requires a way to normalize the DDLS data so that they can be quantitatively compared to DS spectra. With a well-chosen, temperature-independent scaling factor, it is possible to get a perfect overlap of the DS and DDLS spectra on the excess wing flank at low temperature (T < 200 K). From this, it was shown that the DS signal could be well fitted by the sum of the DDLS signal and a cross-correlation term (described by a stretched exponential of fixed stretching parameter), with the relative weight of these two terms perfectly matching the Kirkwood correlation factor at all temperatures [36]. This demonstrates the usefulness and relevance of the direct comparison between DDLS and DS spectra.
In our numerical work, we were not able to reach the range of low temperatures in which the excess wing appears. This prevented us from using the method described above to determine from scratch a scaling factor between the ℓ = 1 and ℓ = 2 spectra. We instead had to rely on the experimental determination by Gabriel et al. [36]. Indeed, while the DS and DDLS spectra collapse on the excess wing for T < 200 K, they cross at a finite frequency f_cross for higher temperatures. We determined a (T independent) scaling factor by making sure that the ℓ = 1 and ℓ = 2 total spectra cross at the same f_cross/f_α as in the experiments for T = 323 K and T = 263 K (see vertical dashed lines in fig. 5c). The result is shown in fig. 5c and it appears that, for all temperatures, the α peak of the total ℓ = 2 and the self ℓ = 1 spectra match fairly well. The difference in slopes β is small enough that the discrepancy remains low (< 30 %) below the frequency at which the fast process starts to be perceptible. This allows us to understand the success of the experimental approach consisting of a direct comparison between DS and DDLS data [19,20,36].
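A minimal sketch of this normalization step, under stated assumptions, is given below: for two loss spectra sampled on a common frequency grid, the factor applied to the ℓ = 2 spectrum is simply the ratio of the two spectra evaluated at the target crossing frequency f_cross = (f_cross/f_α)·f_α. The synthetic spectra and numbers are placeholders, not the simulated glycerol data.

```python
import numpy as np

def scale_factor_for_crossing(f, chi1, chi2, f_alpha, target_ratio):
    # Factor a such that a * chi2 equals chi1 at f_cross = target_ratio * f_alpha,
    # i.e. the rescaled ell = 2 spectrum crosses the ell = 1 spectrum there.
    f_cross = target_ratio * f_alpha
    return np.interp(f_cross, f, chi1) / np.interp(f_cross, f, chi2)

# Illustrative Debye-like spectra with different amplitudes and peak frequencies.
f = np.logspace(6, 11, 500)                       # Hz
chi1 = 1.0 * (f / 1e9) / (1.0 + (f / 1e9) ** 2)   # stand-in for the ell = 1 loss
chi2 = 0.1 * (f / 2e9) / (1.0 + (f / 2e9) ** 2)   # stand-in for the ell = 2 loss
a = scale_factor_for_crossing(f, chi1, chi2, f_alpha=1e9, target_ratio=30.0)
print(f"scaling factor applied to the ell = 2 spectrum: {a:.3f}")
```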
IV. CONCLUSION AND PERSPECTIVES.
In this work, we studied, using MD simulations, the orientational dynamics of glycerol, from which we extracted the self correlation function and the associated loss function for different ranks ℓ of the Legendre polynomial. For ℓ = 1 and 2, we studied the spatial dependence of the cross-correlations and showed that they play a significant role in the ℓ = 1 response while being almost negligible for ℓ = 2. In accordance with recent experimental observations based on a comparison between DDLS and DS spectra, we showed that, although these techniques give access to different ranks ℓ of the correlation function (and consequently do not lead to exactly the same spectra, in particular regarding the slope β), the scaling factor that corresponds to a merging of the excess wing of DS and DDLS spectra at low temperatures leads to a fairly good merging of the self part of the ℓ = 1 and the total ℓ = 2 spectra. This strengthens the idea that useful information can arise from a direct comparison between DDLS and DS measurements.
Moreover, we took advantage of the possibility given by MD simulations to access molecular observables to discuss the microscopic origin of the cross-correlation observed in DS. We found that the net cross-correlation originates from the first shell of neighbouring molecules, which tend to align positively independently of their orientation. Investigating in more detail the molecular origin of the cross-correlations, and their link with molecular conformation and H-bonds, will be the subject of future work.
Table S4. Lennard-Jones parameters. The potential is of the form V_LJ = 4ε[(σ/r)^12 − (σ/r)^6]. The input parameter of the Amber .frcmod file is the half atom-atom distance at which the potential reaches its minimum (R_m = 2^(1/6) σ).
Figure 1. (a) Dipole self correlation functions for the Legendre polynomial of rank ℓ = 1 (top) and ℓ = 2 (bottom) at different temperatures T ranging from 323 K to 253 K in 10 K steps. (b) Dielectric loss function corresponding to the self part of the correlation functions for ℓ = 1 and 2. The black dashed lines correspond to a power law fit of slope −β_ℓ^self on the high-frequency wing.
Figure 2. Distance dependence of the dipole static cross-correlation function (ℓ = 1 and 2) at T = 273 K. (a) Contribution c per unit distance to the cross-correlation of all the molecules situated at r. (b) Number of molecules per unit distance at r. The red curve corresponds to a homogeneous medium of the same density. The vertical grey lines on all the plots correspond to the first, second and third neighbour peaks. (c) Contribution to the cross-correlation of all the molecules within a sphere of radius r. (d) Mean level of cross-correlation Γ (∈ [−1, 1]) for the molecules at r.
Figure 3. Relative orientation and distance dependence of the static dipole cross-correlation functions at T = 273 K. (a) Contribution c per unit distance and solid angle to the cross-correlation (ℓ = 1) of all the molecules situated at r and at angle θ. Red zones are positively correlated while blue zones are anti-correlated, as illustrated by the grey dipole drawings. (b) Same plot as a. for ℓ = 2. (c) Same quantity as in fig. 2a, for ℓ = 1 (top) and ℓ = 2 (bottom), but with distinguished contributions from the red sector (|cos θ| > 0.62, see grey lines on a, b) and from the complementary blue sector. Drawings close to the vertical axes illustrate the physical meaning of the correlation sign for ℓ = 1 (right) and ℓ = 2 (left).
The correlation functions are shown in fig. 4a (ℓ = 1) and 4b (ℓ = 2) at T = 273 K. The self part is shown in blue and the total correlation function C_ℓ^tot(t) = C_ℓ^self(t) + C_ℓ^cross(t) is shown in red. The cross part does not display a short time decorrelation, but for ℓ = 1 a small amplitude peak is visible at short time (t < 1 ps) (see inset of fig. 4a). The mean cross τ_ℓ^cross and total τ_ℓ^tot relaxation times, computed by integrating the corresponding normalized correlation functions, are shown for all temperatures in green and red in fig. 6a. They follow τ_ℓ^self < τ_ℓ^tot < τ_ℓ^cross. The high-frequency slopes of the corresponding loss functions follow β_ℓ^self < β_ℓ^cross < 1. The slope of the total loss function β_ℓ^tot results from both previous slopes as well as the strength of the cross-correlation and takes an intermediate value. The total spectra for all temperatures are shown in fig. 5a and the values of the slopes β are shown in fig. 6b as a function of T: β_1^tot is increasing with T.
Figure 4. Correlation and loss functions at T = 273 K. (a, b) Normalized self, cross and total correlation functions for ℓ = 1 (a) and 2 (b). The inset of (a) is a zoom at short time with a linear scale. (c, d) Corresponding loss functions for ℓ = 1 (c) and 2 (d). The dashed lines in (a) define the slopes β_ℓ^cross.
Figure 5. Orientational loss function for all temperatures (same color scale as in fig. 1) studied as a function of f (a) and ωτ_1^self (b). The total loss function is shown for ℓ = 1 (top, left scale in a) and ℓ = 2 (bottom, right scale in b). The self part for ℓ = 1 is shown in (a) only for the extreme temperatures and for all temperatures in (b) (middle). (c) For several temperatures, ℓ = 2 total spectra (orange) plotted with a scale factor (chosen to correspond to the one determined experimentally by Gabriel et al. [36]) alongside the ℓ = 1 self (blue) and total (red) spectra. The vertical dashed lines show where the crossing between DS and DDLS data occurs on experimental data.
Figure 6. (a) Mean relaxation time obtained by integrating the self (blue squares), cross (green triangles) and total (red circles) dipole correlation function for ℓ = 1 (solid markers) and ℓ = 2 (empty markers). Markers are linked to improve readability. Relaxation times measured experimentally by DDLS (empty circles) from refs. [35,36] and DS (solid circles) from refs. [2,31,36] are shown in grey. (b, d) Power law exponent β of the high-frequency wing of the loss function for ℓ = 1 (b) and ℓ = 2 (d). The color code is the same as for (a). (c) Temperature dependence of the ratio of self mean relaxation times for ℓ = 1 and ℓ = 2. In b. and d., experimental data from refs. [31,36] for DS and refs. [35,36] for DDLS are shown in grey.
Magnetoimpedance and Stress-Impedance Effects in Amorphous CoFeSiB Ribbons at Elevated Temperatures
The temperature dependencies of magnetoimpedance (MI) and stress impedance (SI) were analyzed both in the as-quenched soft magnetic Co68.5Fe4Si15B12.5 ribbons and after their heat treatment at 425 K for 8 h. It was found that MI shows weak changes under the influence of mechanical stresses in the temperature range of 295–325 K and SI does not exceed 10%. At higher temperatures, the MI changes significantly under the influence of mechanical stresses, and SI variations reach 30%. Changes in the magnetoelastic properties for the different temperatures were taken into consideration for the discussion of the observed MI and SI responses. The solutions for the problem of thermal stability of the magnetic sensors working on the principles of MI or SI were discussed taking into account the joint contributions of the temperature and the applied mechanical stresses.
Introduction
There are different sensing technologies based on the coupling of the magnetic and electric/elastic properties of soft ferromagnets [1,2]. The magnetoelastic resonance of amorphous ribbons was proven to be capable of ensuring precise measurements of the viscosity of technologically important fluids, such as lubricant oils [3], or of the properties of biological samples [4]. High-frequency electrical properties of amorphous soft magnetic alloys are strongly sensitive to various external effects causing a change of the magnetic permeability [5]. In particular, the magnetoimpedance (MI) [6][7][8] and the stress-impedance (SI) [9,10] effects, consisting of a change of the total electric impedance of a ferromagnetic conductor under the influence of the external magnetic field and deformations, respectively, are well studied phenomena in amorphous and nanocrystalline wires, composite wires [11], ribbons and thin films. In some cases, they were investigated under the application of torsional stress [12].
The MI and SI are very promising for the creation of highly-sensitive detectors of various external physical parameters [13][14][15][16] that can be appropriate for different kinds of applications including biology and medicine [17][18][19]. Therefore, despite a rather long history of MI and SI effect investigation, the fundamentals related to these phenomena and the search for new MI and SI materials are still under the special attention of researchers.
MI sensors for many applications require enhanced thermal stability in the working temperature range. Therefore, it is necessary to investigate the temperature dependence of MI responses and their temperature stability [20,21]. It should be noted that MI sensitive elements very often consist of different kinds of materials [16,17], having different electrical conductivity values and different thermal expansion coefficients. Therefore, a change in the temperature can result in the appearance or modification of the distribution of mechanical stresses in the MI element and change the output signal [22]. For example, it was found that the temperature change in the MI of the elastically deformed Co-based amorphous ribbon can reach 3%/K, while in the absence of deformation the temperature changes do not exceed the value of 0.5%/K [23]. Thus, taking into account only the contribution of the temperature is not sufficient for the development of thermostable MI sensors with a wide range of functional temperatures. In this case, the investigation of the influence of both the temperature and mechanical stresses in the formation of the MI responses is necessary.
From a fundamental point of view, these investigations allow one to study the temperature changes in the magnetoelastic properties of the amorphous soft magnetic alloys. This is important because the magnetic anisotropy of the amorphous soft magnetic alloys has a mainly magnetoelastic nature [3,24]. For example, the investigation of the temperature dependence of the impedance of the elastically deformed Co-based ribbons [23] and wires [25] showed that the magnetostriction sign can change and the compensation magnetostriction temperature can be determined.
In this work, the temperature dependencies of MI and SI effect observed in Co-based amorphous ribbons were studied in a view of the MI sensors' thermal stability increase that was discussed for wide ranges of alternating current frequencies.
Samples
The amorphous ribbons with a thickness of 24 µm and a width of 710 µm (nominal composition Co 68.5 Fe 4 Si 15 B 12.5 ) were prepared using a rapid quenching technique onto the surface of a Cu wheel.
Co-rich amorphous wires and ribbons are well known due to their excellent magnetoimpedance properties related to the extra magnetic softness closely connected to low magnetoelastic anisotropy. Co 68.5 Fe 4 Si 15 B 12.5 composition amorphous ribbons are very convenient materials as they have quite a high Curie point [26] of about 630 K, allowing the temperature dependence of the magnetoimpedance investigation in the practically important range of technological temperatures. In addition, this particular composition has such a technological advantage as the possibility of high-level surface properties' control. The idea to use an amorphous ribbon-based GMI (giant magnetoimpedance effect) biosensor for both magnetic label and label-free detection was proposed long ago and it is currently under active development [27,28]. The quality of the surface of sensitive elements is crucial for biosensing purposes [29].
Magnetic hysteresis loops were obtained by the induction method in a longitudinal magnetic field (applied along the long side of the rectangular elongated sample) with a frequency of 1 kHz. The magnetic field amplitude was as high as 1.5 kA/m. The saturation magnetization (M S ) at room temperature was as high as M S = 560 kA/m, the coercive force H C ≈ 50 A/m and the Curie temperature T C ≈ 630 K. The ribbons of 30 mm length were used for the magnetoimpedance and stress-impedance investigation. The samples were studied both in the as-quenched state (S-AQ) and after the heat treatment (S-HT). The thermal treatment was carried out at the temperature of 425 K for 8 h.
The Impedance Measurements
The impedance was measured using a homemade automatic setup. It allowed us to investigate the simultaneous contributions of the magnetic field, mechanical stresses, and temperature to the impedance of ferromagnetic conductors with different geometries, including the geometry of amorphous ribbons. The Agilent 4294A impedance analyzer is the main part of the setup (Figure 1). The Impedance Probe 42941A (Keysight Technologies, Santa Rosa, CA, USA) is used to connect the analyzer with the measuring cell. The possibility to compensate the contribution of the self-impedance of the measuring cell is an important part of the measurements and was always used for system calibration. In addition, the measuring system included a thermocouple connected with a millivoltmeter (Figure 2). It should be noted that the thermocouple was situated in close proximity to, but not in direct contact with, the ribbon surface, in order to exclude a distortion of the measuring results. We made calibration tables for the whole temperature range, which allowed the sample's temperature to be determined from the directly measured flow temperature.
The external magnetic field was created by the pair of Helmholtz coils. They were connected to a power supply, ensuring a maximum magnetic field value of ±12.5 kA/m. Three pairs of orthogonal magnetic field coils connected to three independent stabilized power supplies were used for the careful compensation of geomagnetic and effective laboratory fields (Figure 1). The sample was heated by the air stream (or argon gas). The maximum possible temperature was as high as 775 K. The measuring cell was mounted on the air duct as shown in Figure 2. The base of the measuring cell was made of a heat-resistant dielectric material. The sample was attached to the contacts as shown in Figure 2. The contacts were silver plated aiming to avoid oxidation during heating. One of the contacts was fixed on the base rigidly. The second contact was mobile, because it has a swivel connection with the base of the cell. First, this provided a free change in the length of the sample with temperature. Secondly, this construction allowed the application of the force to the sample for creating external tensile stresses. An SMA (SubMiniature version A) connector (Tyco Electronics Ltd., Schaffhausen, Switzerland) was used for the electric connection with the contacts. The Impedance Probe 42941A was connected to this jack.
A Kevlar thread was attached to the movable contact of the measuring cell in order to create tensile stresses in the sample. Another end of the thread was connected to the stacked load as shown in Figure 2.
The typical Young's modulus, E, for the Co-based amorphous alloys is about 200 GPa [30,31]. According to Hooke's law, it can be determined that the maximum elongation of the sample is approximately 1 × 10 −4 m at σ max = 690 MPa (corresponding to the maximum value of the mechanical stresses in this study, see Section 2.3). The distance between the movable and fixed contacts was as high as a = 25 mm. In turn, the ratio of the horizontal and vertical movements of the moving contact (along the line of the force action), ∆x and ∆z, respectively, can be determined using Equation (1), where l = 50 mm is the distance from the axis of rotation of the movable contact to its contact area (Figure 2). Using Equation (1), it is easy to calculate that the horizontal movement of the movable contact exceeds the vertical one by more than three orders of magnitude even at σ max . Therefore, the bending of the sample can be neglected with the selected method of stretching. The whole setup was controlled by a homemade program that allowed setting the AC frequency range, using the algorithms for changing the magnetic field or temperature, and automatic data collection.
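A quick numerical check of these estimates is sketched below. The elongation follows from Hooke's law, ∆L = σa/E; the vertical displacement of the movable contact is estimated here with a small-rotation approximation, ∆z ≈ ∆x²/(2l), which is an assumption made for this sketch since the paper's Equation (1) is not reproduced above.

```python
# Hooke's-law elongation and lever geometry for the measuring cell (illustrative check).
E = 200e9          # Young's modulus, Pa
sigma_max = 690e6  # maximum tensile stress, Pa
a = 25e-3          # distance between the contacts (stressed sample length), m
l = 50e-3          # lever arm of the movable contact, m

dx = sigma_max * a / E     # horizontal movement ~ sample elongation
dz = dx**2 / (2.0 * l)     # vertical movement, small-rotation approximation (assumption)
print(f"dx = {dx:.2e} m, dz = {dz:.2e} m, dx/dz = {dx / dz:.0f}")
# dx is about 1e-4 m and dx/dz is about 1e3, i.e. three orders of magnitude,
# consistent with the statement that sample bending can be neglected.
```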
Experiment Conditions
The impedance variations were obtained for the frequency range of the alternating current, f, from 0.1 to 100 MHz with an effective current intensity of 1 mA. The external magnetic field, H, was oriented along the long side of the ribbon. Its maximum intensity, H max , was as high as 12 kA/m. The tensile stresses, σ, were created by the force acting along the long side of the ribbon. The maximum tensile stress value was 690 MPa. The impedance was measured in the temperature range of 295-405 K. The magnetoimpedance effect ratio was calculated as follows:

MI = 100% × [Z(H) − Z(H max )]/Z(H max ), (2)

where Z(H) and Z(H max ) are the impedance moduli in the magnetic fields H and H max , respectively. The stress-impedance effect value was determined by the equation:

SI = 100% × [Z(σ) − Z(σ = 0)]/Z(σ = 0), (3)

where Z(σ) and Z(σ = 0) are the impedance moduli at certain tensile stresses σ and σ = 0 MPa, respectively.
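A minimal sketch of these two ratios, applied to hypothetical impedance readings, is given below; the arrays are placeholders rather than measured data.

```python
import numpy as np

def mi_ratio(z_h: np.ndarray, z_hmax: float) -> np.ndarray:
    # Equation (2): magnetoimpedance ratio in percent, referenced to Z(Hmax).
    return 100.0 * (z_h - z_hmax) / z_hmax

def si_ratio(z_sigma: np.ndarray, z_sigma0: float) -> np.ndarray:
    # Equation (3): stress-impedance ratio in percent, referenced to Z(sigma = 0).
    return 100.0 * (z_sigma - z_sigma0) / z_sigma0

# Hypothetical impedance moduli (ohms) measured at several fields / stresses.
z_vs_field = np.array([4.5, 4.2, 3.1, 1.9, 1.2])   # Z(H), last value taken at Hmax
z_vs_stress = np.array([1.20, 1.25, 1.32, 1.30])   # Z(sigma), first value at sigma = 0

print("MI(H), %:", np.round(mi_ratio(z_vs_field, z_hmax=z_vs_field[-1]), 1))
print("SI(sigma), %:", np.round(si_ratio(z_vs_stress, z_sigma0=z_vs_stress[0]), 1))
```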
Results
Magnetic hysteresis loops were measured by the induction method in a longitudinal magnetic field with a frequency of 1 kHz. The magnetic field amplitude for these measurements was ±1.5 kA/m. In the as-quenched state, the investigated amorphous ribbons can be described as soft ferromagnets with longitudinal effective anisotropy and a low coercivity of about 50 A/m. The heat treatment of the ribbons leads to a slight increase in the anisotropy field and coercive force (Figure 3). The remnant magnetization, in contrast, slightly decreases after the heat treatment, indicating the existence of some non-uniform stress relaxation processes. It might be due to the difference in the stress relaxation peculiarities of the surface and the volume parts of the ribbon.
Figure 4 shows the dependencies of the maximum magnetoimpedance ratio MI max on the alternating current frequency value. The value of MI max corresponds to the maximum of the MI(H) dependence calculated using Equation (2) (see, for example, Figure 5). It can be seen that the MI max (f) curves of the S-AQ sample have maxima at f ≈ 8 MHz for all mechanical stresses (Figure 4, filled symbols). An increase in mechanical stresses in the range of 0-460 MPa causes a noticeable increase in MI max . Thus, the increase in MI max was close to 30% at a frequency of 8 MHz and it reached the maximum value of 350%. However, a further increase in σ led to a slight decrease in MI max .
MI of the Co 68.5 Fe 4 Si 15 B 12.5 Ribbons at T = 295 K before and after Heat Treatment
For T = 295 K, the mechanical stress application caused strong changes in the MI(H) dependencies of the S-AQ amorphous ribbons, without any significant change of MImax value ( Figure 5a). Thus, when σ = 0 MPa, the MI(H) curve had a weakly pronounced ascending part. This part became much more pronounced with the increase in the mechanical stresses. The field strength, Hp, which was necessary to achieve MImax, was increased. As σ was increasing, MI(H = 0) decreased and approached the zero value (Figure 5a, insert). The MI sensitivity to the magnetic field in the range of 0 to Hp increased with the mechanical stresses increase from 0 to 575 MPa from 0.4%/(A/m) to 2%/(A/m), but decreased slightly with the further increase in σ.
MI(H = 0) of the S-HT sample did not change very much under the application of the mechanical stresses (the change is less than 8%). Hp also varied insignificantly. However, the ascending part of the MI(H) curve was increased (Figure 5b). The MI sensitivity with respect to the external magnetic field in the range of 0-Hp increased slightly from 0.5 to 0.6%/(A/m) with a mechanical stresses value increase.
In addition, it can be mentioned that the difference between the MImax values of the ribbons in the as-quenched state and after the heat treatment (for all the values of the applied mechanical stresses) becomes insignificant for the frequencies of the alternating current above 40 MHz.
The MI value of the ribbons becomes smaller after the heat treatment (Figure 4, empty symbols). The MI max decreased by more than 100% in the alternating current frequency range of 1-10 MHz. The maxima of the MI max (f) dependencies were observed at the frequency of about 10 MHz. The increase in mechanical stress leads to an increase in MI max , but it did not exceed 20%.
The thermal reversibility features of the MI of the S-HT amorphous ribbons were also investigated. The change in the MI measured at room temperature after heating up to 405 K did not exceed ±6%, relative to the value measured before such heating.
MI and SI of the Heat-Treated Co 68.5 Fe 4 Si 15 B 12.5 Ribbons in the Temperature Range from 295 to 405 K
In the temperature range from 295 to 325 K, the character of the effect of the mechanical stresses on the MI(H) dependencies of the S-HT type samples did not change (Figures 5b and 6a). It is important to note that the ascending parts of the MI(H) curves obtained in the temperature range of 295 to 325 K and the mechanical stresses of 0 to 230 MPa practically coincide with each other. However, the MI sensitivity with respect to the external magnetic field in the range of 0-H p remained almost constant (Figure 7a). It is also worth mentioning that with the higher temperatures, the situation was different (Figure 7b). The MI(H) dependencies undergo significant change under the application of mechanical stresses when T > 325 K (Figure 6b). Thus, with the σ increase, in the beginning the ascending part of the MI(H) curve becomes less and less pronounced, and then the ascending tendency completely disappears. In other words, the H p decreases down to the value of zero.
The features of the stress-impedance dependencies SI(σ) calculated using Equation (3) are also different in the temperature ranges from 295 to 325 K and from 325 to 405 K (Figure 8a). In the temperature range from 295 to 325 K, the change in the impedance under the application of the mechanical stresses did not exceed 10% over the whole alternating current frequency range. When the temperature increases above 345 K, the stress-impedance value increases and exceeds 30% at alternating current frequencies above 40 MHz (Figure 8b).
The mechanical stresses σ p which were necessary in order to achieve the maximum of the SI value decreased with an increase in the temperature. For example, at f = 10 MHz, σ p decreased from 460 to 230 MPa with an increase in the temperature from 365 to 405 K (Figure 8a). Note that under mechanical stresses close to σ p , the ascending part of the MI(H) curves disappeared completely (Figure 6b).
Discussion
The heat treatment of the Co 68.5 Fe 4 Si 15 B 12.5 amorphous ribbons at the temperature of 425 K led to a noticeable decrease in the magnitude of the magnetoimpedance effect. However, the magnetic field sensitivity of the MI significantly increased at σ = 0 MPa. Good thermal reversibility of the MI was also achieved in the temperature range from 295 to 405 K with no structural transition in the ribbons, and their state was kept amorphous despite some stress relaxation.
Moreover, the MI sensitivity with respect to the magnetic field of the heat-treated ribbons varied very little in the temperature range from 295 to 325 K under the influence of the mechanical stresses (Section 3.1., Figure 7a). We mentioned in the Introduction that in the composite materials the temperature change results in appearance of mechanical stresses in the MI element due to the difference in thermal expansion coefficients of the MI sensor materials. It affects the thermal stability of the MI sensor characteristics. Therefore, the results obtained in the present study can be useful for practical applications. In particular, the temperature range from 295 to 325 K, including normal human body temperature, can be sufficient for the biomedical applications of the materials with such a temperature interval of thermal stability [17,19].
It was reported previously that for the amorphous alloys of similar compositions, heat treatments at temperatures above 375 K cause structural relaxation, affecting the magnetoelastic properties [32,33]. We suppose that the change in the effect of mechanical stresses on the MI of the ribbons after heat treatment (Figure 4) is associated with a change in their magnetostriction.
The impedance modulus of a ferromagnetic planar conductor of thickness d can be represented using the following equation [5,34]:

Z = R DC (k/2) [(sinh k + sin k) + j(sinh k − sin k)]/(cosh k − cos k), (4)

where R DC is the DC resistance; k = d/δ; δ = (ρ/πfµ 0 µ t ) 1/2 is the thickness of the skin layer; f is the frequency of the alternating current; ρ is the electrical resistivity; µ 0 is the magnetic constant; µ t is the effective transverse (relative to the direction of the alternating current) magnetic permeability. Thus, the temperature changes in Z, and therefore in MI (see Equation (2)), will be determined by the temperature changes in the magnetic and electrical properties. Note that the temperature changes in ρ and R DC of soft magnetic alloys are insignificant in comparison with the temperature changes in µ t [20,35].
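The sketch below evaluates |Z|/R_DC from Equation (4) over the measured frequency range, using the skin depth δ = (ρ/πfµ0µt)^1/2; the resistivity and permeability values are illustrative assumptions, not parameters reported for these ribbons.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic constant, H/m

def impedance_ratio(f, d, rho, mu_t):
    # |Z| / R_DC for a planar conductor of thickness d (Equation (4)).
    delta = np.sqrt(rho / (np.pi * f * MU0 * mu_t))   # skin depth, m
    k = d / delta
    z = (k / 2.0) * ((np.sinh(k) + np.sin(k)) + 1j * (np.sinh(k) - np.sin(k))) / (np.cosh(k) - np.cos(k))
    return np.abs(z)

# Illustrative parameters: 24 um thick ribbon, assumed resistivity and transverse permeability.
f = np.logspace(5, 8, 7)    # 0.1 to 100 MHz
ratios = impedance_ratio(f, d=24e-6, rho=1.3e-6, mu_t=2000.0)
for fi, r in zip(f, ratios):
    print(f"f = {fi:9.2e} Hz  |Z|/R_DC = {r:6.2f}")
```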
Assuming that the magnetization vector and the anisotropy axis lie in the plane of the ribbon, we can write the equation for the free energy functional (Equation (5)) [36], where K is the constant of the effective anisotropy; λ s is the saturation magnetostriction constant; h is the AC field; α is the angle between the anisotropy axis of the ribbon and the transverse direction; θ is the angle between the axis of anisotropy and magnetization (Figure 9a). Using the standard procedure described, for example, in [23], one can obtain the expression for the transverse magnetic permeability (Equation (6)). Thus, the temperature changes in the transverse magnetic permeability will be determined by the temperature changes in the magnetization, effective anisotropy and magnetostriction.
Let us evaluate the influence of the temperature changes in magnetization and effective anisotropy on the MI for the case of S-HT ribbons. Considering the MI(H) dependencies at σ = 0 (Figures 5b and 6), we can see that the field H p practically does not change with the temperature change. In this case, H p ≈ H K [37], where H K is the effective anisotropy field. Solving the equation ∂W/∂θ = 0, we can express H K through K, M s , λ s and σ. For σ = 0, we obtain that H K ~ K/M s . Thus, taking into account the weak temperature change in H p , we can conclude that the temperature changes in the magnetization and effective anisotropy do not significantly affect the MI. Most likely, not only the K/M s ratio, but also the values of M s and K change slightly, since the studied temperatures are much lower than T C .
The equilibrium magnetization orientation θ necessary for µ t calculating can be determined from the conditions ∂W/∂θ = 0 and ∂ 2 W/∂θ 2 > 0. For an arbitrary value of α, the solution of this problem is possible only by numerical methods [23]. However, for the purposes of our analysis, it suffices to take into account that under the action of the mechanical stresses, the angle θ will decrease in the case of the negative magnetostriction, that is, the magnetization will approach the transverse direction, and in the case of a positive one, vice versa [23]. It follows from Equation (6) that this will affect the dependencies µ t (H) and consequently, the dependencies MI(H) (see Equations (4) and (2)). In the first case, the field H p ≈ H K and the ascending part on the MI(H) dependency will increase, and in the second case, they will decrease [38,39]. It also follows from Equation (6) that the greater the magnetostriction, the more pronounced these changes will be.
Let us turn to the S-AQ magnetoimpedance dependencies obtained at room temperature (Figure 5a). When σ = 0 MPa, MI(H) has a slightly pronounced ascending part, which indicates an existence of a predominantly longitudinal effective magnetic anisotropy [38,39]. The increase in the ascending part of the MI(H) and its maximum shift toward the high fields with increasing tensile stresses indicate the negative value of the effective magnetostriction coefficient, as shown above.
The MI(H) curves of the S-HT amorphous ribbons contain the well defined ascending part at σ = 0 MPa. They change very little in the temperature range from 295 to 325 K with increasing tensile stresses (Figures 5b and 6a). This is probably due to the almost zero magnetostriction value. However, one can see significant changes in the magnetoimpedance dependencies under the action of mechanical stresses at T > 325 K (Figure 6b). The ascending part becomes less pronounced with an increase in σ. It disappears at a certain value of mechanical stresses, σ p . In turn, the field H p decreases with the increasing of the mechanical stress and it becomes equal to zero at σ ≈ σ p . Such changes under the action of the tensile mechanical stresses indicate positive magnetostriction. Note that states for which the ascending part disappears in the MI(H) curve (Figure 6b) correspond to the predominantly longitudinal orientation of the magnetization (even at H = 0) [38,39]. We also noted (Section 3.2) that σ p decreases with a temperature increase, which is presumably due to an increase in magnetostriction.
The magnetostriction values for the S-AQ and S-HT samples were determined from the increment of the field H p caused by the change in σ, under the assumption that the H p field is close to the effective magnetic anisotropy field [37]. The dependence of the effective magnetostriction coefficient on the mechanical stress value was also taken into account, which can be expressed as follows [40,41]:

λ s = λ s0 − βσ, (7)

where λ s0 is the magnetostriction value in the absence of mechanical stresses and β is a coefficient usually taking a value in the range of (1 ÷ 6) × 10 −10 MPa −1 . As can be seen, the λ s0 of the S-AQ amorphous ribbons at room temperature is negative and is approximately equal to −0.4 × 10 −7 (Figure 9b, filled symbol). Close magnetostriction values for ribbons with similar compositions were obtained by other authors [26,37,40,42]. The magnetostriction coefficient for the S-HT amorphous ribbons is positive over the entire studied temperature range. It increases with a temperature increase (Figure 9b, empty symbols). In the temperature range from 295 to 325 K, the value of λ s0 is very small, and it does not exceed 0.3 × 10 −7 .
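A small sketch of this estimation procedure is given below. It assumes the commonly used relation ∆H_p ≈ 3λ_s ∆σ/(µ0 M_s) between the shift of the anisotropy field and an applied tensile stress, together with Equation (7); the exact formula choice and the numbers are illustrative assumptions, not the expressions or data of this work.

```python
MU0 = 4e-7 * 3.141592653589793  # magnetic constant, H/m

def lambda_from_hp_shift(delta_hp: float, delta_sigma: float, m_s: float) -> float:
    # Effective magnetostriction from the stress-induced shift of the anisotropy field,
    # assuming delta_Hp ~ 3 * lambda_s * delta_sigma / (mu0 * Ms).
    # delta_hp and m_s in A/m, delta_sigma in Pa.
    return MU0 * m_s * delta_hp / (3.0 * delta_sigma)

def lambda_vs_stress(lambda_s0: float, beta: float, sigma: float) -> float:
    # Equation (7): stress dependence of the effective magnetostriction coefficient.
    # beta in 1/MPa (as quoted in the text), sigma in MPa.
    return lambda_s0 - beta * sigma

# Hypothetical numbers: a 50 A/m shift of Hp for a 500 MPa stress increase, Ms = 560 kA/m.
lam0 = lambda_from_hp_shift(delta_hp=50.0, delta_sigma=500e6, m_s=560e3)
print(f"estimated lambda_s0 ~ {lam0:.1e}")
print(f"lambda_s at 690 MPa: {lambda_vs_stress(lam0, beta=3e-10, sigma=690.0):.1e}")
```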
The near-zero value of magnetostriction around 295 K allows us to suggest that this temperature is the temperature of the magnetostriction compensation for the Co 68.5 Fe 4 Si 15 B 12.5 heat-treated amorphous ribbons. The presence of the compensation temperature is a characteristic feature for the amorphous CoFeSiB alloys. It is explained by the competition of single-ion and two-ion interactions [43,44]. Even a small content of Fe atoms in an amorphous Co-based alloy makes a significant contribution to the competition of single-ion and two-ion interactions [44].
Considering these results, we can conclude that it is important to achieve near-zero magnetostriction values for the MI element in a wider temperature range if the goal is to expand the temperature ranges with a high thermostability of the MI sensors. In addition, materials of the MI sensor with a thermal expansion coefficient close to that of the MI element should be used. Note that to some extent the magnetostriction of the amorphous alloys and its temperature dependence can be controlled by heat treatment and by varying their compositions [40,41,43,44].
On the other hand, for complex composite materials like multilayered structures, it is possible to select a material of the substrate with a desired temperature expansion coefficient. In this case, the mechanical stresses arising in the MI element could compensate the temperature changes and control the MI. Obviously, the thermal expansion of the substrate should be less than that of the MI element in the case of positive magnetostriction. In the case of negative magnetostriction, the ratio should be the opposite. However, this method requires the careful control of the experimental and fabrication conditions.
Conclusions
The magnetostriction of the Co 68.5 Fe 4 Si 15 B 12.5 amorphous ribbons changes its value from −0.4 × 10 −7 to almost zero after low temperature relaxation heat treatment at 425 K for 8 h. The low positive values of the magnetostriction in the heat-treated ribbons are maintained in the temperature range from 295 to 325 K, and cause small changes in the magnetoimpedance under the influence of temperature and mechanical stresses, as well as the low stress-impedance effect. The increase in the magnetostriction with the temperature leads to the increase in the sensitivity of the magnetoimpedance to mechanical stresses and a sufficiently large stress-impedance effect (above 30%) at the temperatures above 325 K.
It is shown that the combined influence of the temperature and the mechanical stresses should be taken into account when solving the issues of increasing the MI sensors' thermal stability. This is because the MI sensitive element, even in the case of a supposedly uniform material, can be composed of parts with different temperature expansion coefficients. Therefore, temperature changes lead to an increase in the mechanical stresses in the MI element, affecting the thermal stability of its characteristics.
Immunohistochemical detection of caspase 3 and proliferating cell nuclear antigen in the intestines of dogs naturally infected with parvovirus
Department of Pathology, Faculty of Veterinary Medicine, Burdur Mehmet Akif Ersoy University, Burdur, Turkiye.
Canine parvovirus (CPV) causes a contagious and fatal viral disease in dogs characterized by hemorrhagic enteritis. Apoptosis is a programmed cell death and one of the primary markers of this process is caspase 3. Proliferating cell nuclear antigen (PCNA) is also associated with important vital cellular processes. This study was conducted to examine the expressions of caspase 3 and PCNA in the intestinal samples of dogs naturally infected with CPV using immunohistochemical methods. Gut tissues from a total of 30 dogs with parvoviral enteritis and five control dogs were evaluated for caspase 3 and PCNA expressions. Increased immunoactivities of caspase 3 and PCNA were observed in epithelial, crypt and inflammatory cells in the CPV-infected dogs. Increased expressions of both markers were related to the severity of disease. These results demonstrated the important roles of caspase 3 and PCNA in CPV pathogenesis. These markers may be useful for early diagnosis, estimation of the severity or future treatment strategies of this important disease.
Introduction
Canine parvovirus (CPV) infection is an important viral disease of dogs. Its usual clinical form is parvoviral enteritis; but, it also manifests itself as a parvoviral myocarditis or mixed form. Young dogs, aged between 6 and 20 weeks, are the most susceptible ones to parvoviral enteritis. Clinically infected dogs become anorectic and lethargic and may vomit and develop diarrhea, with transient pyrexia occurring commonly. 1 Apoptosis is an evolutionarily conserved process of programmed cell death (PCD) or a highly regulated cell suicide mechanism. 2 Cells dying through PCD often undergo distinct morphological changes known as apoptosis and cleave their deoxyribonucleic acid (DNA) into small fragments. The caspase family of cellular proteases initiates and executes apoptotic cell death. Caspase 3, a pivotal effector caspase, is an essential protease of the apoptotic process. 3 Proliferating cell nuclear antigen (PCNA) is an intranuclear 36.00-kD non-histone protein and one of the central molecules responsible for decisions regarding life and death of the cell. 4 The PCNA immunostaining characteristics allow the identification of cells in different phases of the cycle. The expression of PCNA increases during the G1-phase, peaks at the S-phase and declines during G2/M-phases of the cell cycle. 5 This protein has also an essential role in nucleic acid metabolism as a component of DNA replication and repair mechanisms. An increase in PCNA expression levels may be induced by growth factors or as a result of DNA damage in the absence of cell cycling. 6 The CPV infection is an important and fatal disease characterized by vomiting, hemorrhagic enteritis and intestinal findings. However, the pathogenetic pathways of the disease are not completely understood. Therefore, this study was conducted to examine the immunohistochemical expressions of caspase 3 and PCNA in the intestines of dogs naturally infected with parvovirus.
Materials and Methods
In this study, 30 intestinal samples from dogs with positive parvovirus rapid tests or suspicious diagnoses were collected from the archive of the Department of Pathology, Faculty of Veterinary Medicine, Burdur Mehmet Akif Ersoy University, Burdur, Turkiye. The dogs examined in this study were aged 2 to 5 months and were of both sexes and different breeds. Necropsy notes were evaluated and dogs with enteric parasitic infections were excluded from the study. Intestinal tissues of five puppies of similar ages died due to traffic or other accidents were used as controls. Ethical approval was not required for this retrospective study.
Four serial sections were taken from the paraffin blocks of intestinal samples of 30 infected dogs and five control dogs. One of the sections was stained with Hematoxylin and Eosin (H&E). Intestinal samples taken from duodenum, jejunum, ileum, cecum, colon and rectum, especially the ileocecal region, of dogs with suspected parvoviral enteritis were selected for this study.
To determine the percentage of immunostained cells for each marker, 100 cells were counted in 10 fields on each section using a 40× objective for all groups. The percentage of immunostained cells in each sample was calculated and statistical analysis was performed.
The SPSS Software (version 20.0; IBM Corp., Armonk, USA) was used for analysis of the immunohistochemistry results. Groups were compared by ANOVA, and the variables were assessed by Duncan's post-hoc test. Values of p < 0.05 were considered statistically significant.
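As a rough illustration of this kind of group comparison, the sketch below runs a one-way ANOVA on hypothetical positive-cell percentages for mild, moderate and severe cases (Duncan's post-hoc test is not part of SciPy, so only the ANOVA step is shown; the numbers are invented placeholders, not data from this study).

```python
from scipy.stats import f_oneway

# Hypothetical percentages of caspase 3 positive cells per dog, grouped by disease severity.
mild = [12.0, 15.0, 14.0]
moderate = [22.0, 25.0, 21.0, 27.0, 24.0]
severe = [35.0, 38.0, 33.0, 40.0, 37.0, 36.0]

f_stat, p_value = f_oneway(mild, moderate, severe)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("group means differ significantly (p < 0.05); a post-hoc test would follow")
```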
Results
According to necropsy notes, none of the puppies were vaccinated for parvoviral enteritis. The puppies had inappetence, depression, bloody diarrhea and vomiting. They died 3 -4 days after the initial symptoms. According to the clinical symptoms obtained from necropsy notes, dogs were classified based on the disease severity. In this study, 3 mild, 10 moderate and 13 severe cases were evaluated. No sex predisposition was observed.
Examination of the archived notes revealed marked dehydration and weakness as common findings and the lesions were primarily localized in the small intestine (24 out of 30 cases). The lesions were segmental or widespread and irregularly distributed, with frequent findings of intestinal hemorrhage and fibrinous exudate. Malodorous and watery or hemorrhagic contents were the common findings and hemorrhages were marked and typically localized especially in the ileocecal valve (Fig. 1). Erosion and ulcers were also observed in the gut mucosa of severely infected dogs. Intestinal walls were swollen, edematous and hemorrhagic in severe cases. In severely infected dogs, Peyer's patches were edematous and hemorrhagic and sometimes evident from the serosal or mucosal aspects. Different amounts of fluid were accumulated in the abdominal cavity of 19 cases. Severe hyperemia in the mesenteric vessels and enlargement and hemorrhage were commonly observed in mesenteric lymph nodes. Histopathological examination of the intestinal sections revealed that all the small and large intestinal sections were infected with the disease; but, the most marked lesions were localized in the ileocecal junction. Serosal edema, desquamation of the villi, erosion, ulcers and hemorrhages of the mucosa were common. The ulcerous areas showed inflammatory cell infiltrations especially composed of neutrophils and a small number of lymphoid cells. Atrophies of the Peyer's patches or total necrosis were diagnosed in severe cases ( Figs. 2A and 2B). Desquamation of the villi and fusion of cryptic epithelial cells or villi were observed in 15 cases. Regenerative crypt cells were characteristic findings in all cases. Although segmental lesions were detected in 17 cases, lesions were localized in the entire intestine, from the duodenum to rectum, in severely infected dogs. Secondary bacterial colonies were noticed in 13 dogs.
A total of 26 cases were positive for parvovirus; for that reason, the four cases that remained negative were excluded from the caspase 3 and PCNA immunohistochemistry in this study. The CPV immunopositive cases revealed a positive immunoreaction localized especially in cryptic epithelial and inflammatory cells. The parvovirus-positive immunoreaction was primarily observed in histologically lesioned areas (Fig. 2C). There was no positive reaction in the antibody-omitted negative control sections or in the control dogs' intestinal sections.
The PCNA expression was observed in the control group as well; but, it was prominent in the parvovirus-infected gut samples. The PCNA immunolabeling was primarily detected in the nuclei of proliferative cells. The most prominent reaction was observed in regenerative crypt cells and epithelia of the intestinal villi (Figs. 2D and 2E). The PCNA expression was also detected in relatively normal epithelial cells near the lesioned areas. In addition to epithelial cells, some interstitial cells also expressed PCNA. According to the anamnesis and necropsy notes, in dogs that suffered for a longer time before death, numerous regenerated cells showed a marked PCNA expression compared to those in dogs that died within a short time after initial symptoms. Expression scores increased with the severity of the disease (Table 1). The negative control sections showed no PCNA expression.
Marked caspase 3 expression was observed in different cells in lesioned areas. Both epithelial and mesenchymal cells such as crypt cells and epithelial cells of the villi, muscle cells and some peripheral nerve cells showed positive reaction.
In addition to lesioned areas, cells near the lesions also expressed caspase 3. Some regenerative cells and abnormal cells also exhibited marked positive immunoreaction (Fig. 2F). Marked expressions were observed in severely affected puppies (Table 1). There was no reaction in the negative control sections; but, a slight expression was observed in the intestinal samples of control dogs. The most common expression for each marker was noticed in epithelial cells. Statistical analysis results of the positive cell percentages are shown in Figure 3.
Discussion
The CPV infection is a common and fatal disease that most commonly affects puppies. Infected dogs develop acute gastroenteritis and leukopenia. 8 In this study, 26 of 30 cases were positive for CPV infection and the ages of the dogs ranged from 2 to 5 months. Our clinical findings were similar to those of previous studies and classical knowledge. 1 Parvoviruses may infect cells at any phase of the cell cycle, but replication depends on cellular mechanisms that are functional only during nucleoprotein synthesis prior to mitosis. Hence, the effects of parvoviral infection are primarily manifested in tissues with a high mitotic rate. Parvovirus replication in dogs has primarily been detected in lymphoid tissues and gastrointestinal tract epithelial cells. 1 Therefore, gut samples were used for this study. The histopathological findings of our study were in agreement with classical knowledge: edema, desquamation, erosion, ulcers and hemorrhages in the gut, as well as severe atrophy or total necrosis of the Peyer's patches, were frequently observed.
Recent studies have demonstrated that CPV induces apoptosis in cell cultures. 9 However, the role of apoptosis in the pathogenesis of CPV infection in the intestine is unknown. Since caspase 3 is a key protease executing apoptosis, we investigated whether caspase 3 is expressed in the intestinal sections of dogs naturally infected with parvovirus. Caspase 3 was indeed strongly expressed in some cells of the intestines. This result indicates that the apoptosis of intestinal cells appears to be mediated by caspase-dependent pathways and, in some cells, by caspase-independent pathways. In fact, an earlier study has suggested that other signaling pathways can induce apoptosis independently of the caspase cascade. 10 These findings indicate that the necrosis in the intestines may be directly or indirectly affected by caspase 3 activation.
Destruction of virus-infected cells through the induction of apoptosis is an important host defense mechanism that may serve to limit virus replication and spread within host tissues. 11 We demonstrated here that parvovirus induces apoptosis in the intestinal cells primarily by inducing the caspase 3 pathway. We also found that both parvovirus and caspase 3 are expressed in crypt and epithelial cells of the villi. These results indicate a specific role for caspase 3 in parvoviral enteritis in dog intestinal cells.
In CPV infection, regeneration of the crypt epithelium and partial or complete restoration of the mucosal architecture occur if undamaged stem cells persist in most of the affected crypts and the animal survives the acute phase of parvoviral enteritis. After the acute period, infected dogs either succumb or begin to recover. 1 The identification of PCNA as a processivity factor for replicative DNA polymerases has placed it at the heart of the replisome. However, an earlier study revealed additional roles for this protein in coordinating the complex network of interactions at the replication fork. 12 In this study, marked PCNA activity was observed in regenerated crypt cells and epithelial cells of the villi. Immunohistochemical measurement of cell proliferative activity has been widely used to assess the biological behavior of tumors, and PCNA has been found useful for the diagnosis and evaluation of prognosis in patients suffering from a variety of malignant tumors. 13 However, there is limited information regarding such measurements in viral infections, and there have been no reports regarding PCNA in canine parvoviral enteritis to date. In the present study, increased PCNA activity was observed, although it was not sufficient for complete healing. Therefore, the degree of PCNA activity may be related to the survival and recovery of parvovirus-infected dogs.
In this study, increased expression of PCNA and caspase 3 was observed in numerous intestinal cells of dogs with naturally acquired parvoviral enteritis, indicating that both PCNA and caspase 3 have important roles and may be useful for determining disease prognosis. Although PCNA expression was observed in regenerative intestinal cells, caspase 3 expression was observed in cells near the lesions. Since the positive reaction was most commonly observed in crypt cells, these cells are believed to have an important role in parvoviral enteritis. A major limitation of this study was the absence of hematological examination and laboratory results for the puppies.
In conclusion, the present study showed that CPV induces apoptosis in the intestinal cells and, moreover, that epithelial regeneration occurred in dogs that survived for a longer time. Our results suggest the presence of an intrinsic balance between apoptosis and cell proliferation in the intestinal cells of dogs with parvoviral enteritis.
The Influence of Tax Service Quality and Tax Rate on Taxpayer Compliance among SMEs in Indonesia
SMEs are a form of community business that supports the country's economy; during the Covid-19 pandemic, SMEs were the businesses most affected. This study aims to determine the level of compliance of SMEs taxpayers in terms of tax rates, quality of tax service, and the taxpayer self-assessment system. The population in this study were SMEs registered at KPP Pratama Cirebon 2. The data analysis method used is descriptive statistics with structural equation model (SEM) analysis. The research results show that there is an effect of the tax rate on the self-assessment system and of the quality of tax service on the self-assessment system; there is a relationship between the tax rate and the quality of tax service; there is no effect of the self-assessment system on taxpayer compliance, no effect of the tax rate on taxpayer compliance, and no influence of tax service quality on taxpayer compliance. The hope is that
Introduction
WHO officially declared Covid-19 a global pandemic on March 19, 2020. Covid-19 is an infectious disease caused by the SARS-CoV-2 virus and is estimated to have entered Indonesia in March 2020 (Nurcahyono et al., 2021). Covid-19 affects the entire structure of human life: social, economic, cultural and so on. To deal with the pandemic, the government designed policies to stop the spread of Covid-19, for example calls to wear masks, studying and working from home, Large-Scale Social Restrictions (Pembatasan Sosial Berskala Besar), vaccination and social distancing. Social distancing is an effort to stop the spread of Covid-19; however, its implementation causes a decrease in economic activity and in the productivity of business actors, resulting in a decrease in tax revenues. Taxes are the most significant state income compared to, for example, the oil and gas and non-oil and gas sectors (Listiyowati et al., 2021). Revenue from the tax sector in 2021 reached IDR 1,277.5 trillion, or 103.9% of the target. Taxes have supported 70% of the APBN in Indonesia over the last few years and are the primary source of income through which the community contributes to the country's economic development (Mir'atusholihah et al., 2014). The increase in revenue demonstrates public awareness of taxes, but in percentage terms it is still below the target set by the government.
State revenue in 2017 was 1,654.70 (in trillion IDR); in 2018, 1,928.10; in 2019, 1,955.10; in 2020, 1,698.60; and in 2021, 1,742.70. Tax revenue in 2017 was 1,343.50; in 2018, 1,518.80; in 2019, 1,546.10; in 2020, 1,404.50; and in 2021, 1,444.50. Based on these data, tax revenue contributes the largest share of state revenue. Tax revenue has increased yearly, but in 2020 it experienced a decline because the Covid-19 pandemic entered Indonesia. Likewise, 2021 still showed the impact of Covid-19 and remained lower than 2019, although it increased compared to 2020 (Anyaduba & Oboh, 2019).
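To make the relative contribution explicit, the share of tax revenue in total state revenue can be computed directly from the figures quoted above. The short sketch below does this in Python; the numbers are taken verbatim from the text and assumed to be in trillion IDR.

```python
# Tax revenue as a share of total state revenue, using the figures quoted above
# (values in trillion IDR, as reported in the text).
state_revenue = {2017: 1654.70, 2018: 1928.10, 2019: 1955.10, 2020: 1698.60, 2021: 1742.70}
tax_revenue   = {2017: 1343.50, 2018: 1518.80, 2019: 1546.10, 2020: 1404.50, 2021: 1444.50}

for year in sorted(state_revenue):
    share = 100.0 * tax_revenue[year] / state_revenue[year]
    print(f"{year}: tax share of state revenue = {share:.1f}%")
```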
In Indonesia, three types of tax collection apply, namely the self-assessment system, the official assessment system, and the withholding assessment system. Before a tax law is drafted, the process always pays attention to issues of theory and principles that are universal and unique, especially those related to the fairness of collection (Sa'diyah & Hariyono, 2022; Sari, 2022). Unlike retribution, in which individuals immediately receive something in return for their payments, tax collection does not provide direct counter-performance, so a particular review is needed to explain to the public why the state has the authority and justification to collect taxes and why people are obliged to pay them (Christanty et al., 2023).
With the rapid socio-economic development resulting from national development, globalization and reform in various fields, and after evaluating the implementation of tax laws so far, especially Law Number 7 of 1983 as last amended by Law Number 10 of 1994, it was deemed necessary to make several substantive changes to improve the law's function and role in supporting national development policies, especially in the economic sector (Farhan et al., 2019). The changes to the income tax law still adhere to universally adopted principles of taxation, namely fairness, ease and efficiency of administration, and productivity of state revenues, and they still maintain the self-assessment system. The improvements mainly concern the system and procedures for paying taxes in the current year, so as not to disrupt taxpayers' liquidity in running their businesses (Parwati et al., 2021). Therefore, the direction and objectives of improving the income tax law are 1) to further increase the fairness of tax imposition, 2) to provide greater convenience to taxpayers, and 3) to support government policy to increase direct investment in Indonesia, both foreign and domestic, in specific business fields and specific priority areas.
Since 1984 Indonesia has used a self-assessment system, having previously used the official assessment system. The self-assessment system is a tax collection system that gives taxpayers the authority to determine the amount of tax owed. The taxpayer is active, calculating, depositing and reporting the tax owed themselves, while the Fiscus does not interfere and only supervises (Pratiwi et al., 2022). The self-assessment system's success depends on the level of taxpayer compliance, especially SMEs taxpayer compliance. The SMEs sector contributed 61.97% of total national GDP, equivalent to IDR 8,500 trillion, in 2020, and the sector also absorbs many workers, namely 97% of the business world's absorption capacity in 2020. The Covid-19 pandemic also affected SMEs through decreased demand, product marketing, access to raw materials and low levels of human resources (Listiyowati et al., 2021). This led the government to issue a stimulus policy so SMEs could survive. One of the Government's policies to support SMEs taxpayers affected by Covid-19 is the incentive of government-borne final income tax (DTP), based on PMK No. 44/PMK.03/2020, under which the Indonesian Ministry of Finance fully covers the taxes of SMEs affected by Covid-19. This policy applied from April 2020 to September 2020 (Pratiwi et al., 2022). Then, through the PEN (National Economic Recovery) program, the Minister of Finance explained that the SMEs income tax incentives would continue into 2021, together with the policy of reducing the SMEs tariff from 1% to 0.5% for SMEs with a turnover below Rp 4.8 billion. This policy was implemented to increase and maintain SMEs taxpayer compliance in paying taxes (Sholehah & Ramayanti, 2022). Taxpayer compliance has been conceptualized from several points of view. Brown and Mazur (2003) argue that tax compliance is a difficult concept, both theoretically and empirically, and consider three compliance perspectives, namely payment, filing and reporting. Kirchler and Wahl (2010) emphasize that the challenges of taxpayer compliance research can be divided into two categories: conceptualization problems and unclear terminology.
One way to increase taxpayer compliance is to maintain the quality of fiscus services. Taxpayer compliance cannot be separated from the government's role in responding to taxpayers' desire for accessible information and payment services, so that public compliance in paying taxes increases (Tarmidi & Novitasari, 2022). Apart from satisfaction with services, public trust in the government and the legal system will encourage taxpayers' willingness to pay taxes, provided the funds collected from taxes are distributed evenly to finance all state needs and management. This is usually evidenced by increasing economic growth and public infrastructure that supports the mobility of citizens' lives, so that economic, social, cultural and security activities can run smoothly to improve the welfare of citizens in general (Permata & Zahroh, 2022).
The tax rate also influences the level of taxpayer compliance. In connection with this, the Government set a tariff of 1% (one per cent) by issuing Government Regulation No. 46 of 2013 concerning income tax on income from businesses received or obtained by taxpayers with a certain gross turnover (Lenggono, 2019; Nurcahyono & Kristiana, 2019). The purpose of this regulation is to provide convenience to taxpayers, especially those in the SMEs sector, to educate the public about orderly administration, and to contribute to development through taxes. The expected final goal is to increase taxpayer compliance.
Several studies have examined factors that influence tax obligations, whose effects can be strengthened through mediating variables such as tax knowledge (Hawa & Dongoran, 2022), tax socialization (Safitri & Silalahi, 2020), Machiavellian ethics (Trisnawati et al., 2017) and taxpayer satisfaction (Schoeman et al., 2022), each of which has a significant impact on increasing taxpayer awareness. The latest research by Sholehah and Ramayanti (2022) states that the more intensively or regularly tax socialization is carried out, the greater the compliance of SMEs taxpayers in fulfilling their tax obligations, and that tax sanctions, if applied intensively, will also increase SMEs taxpayer compliance. In contrast, according to Listiyowati et al. (2021), tax socialization does not influence SMEs taxpayer compliance, tax authorities' services have no influence on SMEs taxpayer compliance, and the self-assessment system does influence SMEs taxpayer compliance. This research differs from previous research in that the authors added the self-assessment system as a mediating variable, which is expected to complement previous research. From the perspective of empirical research, the function of the self-assessment system as an intermediary factor must be optimized from the psychological side of taxpayers, which is rare and difficult to optimize (Mooij & Liu, 2021; Sandra & Anwar, 2018).
Extensive research has been conducted on the factors influencing tax compliance using various methodologies, including experimental research, surveys, regression modelling and analytical studies. However, the results often prove uncertain or mixed, which shows that further research is still needed, especially involving the self-assessment system, whose implementation by taxpayers still needs to be improved (Schoeman et al., 2022). This research aims to obtain empirical evidence regarding SMEs taxpayer compliance with the self-assessment system functioning as a mediating variable. The self-assessment system as an intermediary variable can become psychological capital in building the mental awareness of taxpayers, provided the quality of tax services is increased and tax rates are perceived as fair. This has managerial implications for increasing taxpayer compliance, and the theoretical implication is empirical evidence of whether the self-assessment system is an effective mediating variable in improving taxpayer behaviour.
Hypothesis Development: The influence of the tax rate on SMEs taxpayer compliance
The economic deterrence approach is a concept developed in theories that attempt to explain criminal behaviour. Building on the theory of criminal behaviour and directing it to tax compliance behaviour, Allingham and Sandmo (1972) developed utility theory and assumed that a rational individual considers the possibility of being audited and the penalties associated with fraudulent behaviour. Therefore, a person will weigh the possibility of an uncertain outcome and its consequences (Hasan et al., 2020; Zeeshan Hamid, 2012). Changes in VAT rates can influence the decision of SMEs to register as taxpayers, so SMEs need to understand tax compliance. Tax compliance, in a broad and operational sense, begins when someone is required to register as a taxpayer; registered taxpayers must also complete and submit their tax returns accurately and on time and then pay the applicable tax obligations in full and on time (Schoeman et al., 2022). SMEs tax rates follow Government Regulation No. 23 of 2018, namely 0.5% of revenue receipts where gross turnover in a year is under 4.8 billion. Simplification of the former 1% tax rate is expected to encourage SMEs taxpayers to report their tax obligations, so the tax rate influences SMEs taxpayer compliance. This is supported by previous research (Ariyanto & Nuswantara, 2020; Mir'atusholihah et al., 2014).

H1: There is an influence of the tax rate on SMEs taxpayer compliance
The influence of tax service quality on SMES taxpayer compliance
The issue of service quality is an essential indicator of the success of any business organization in today's competitive environment. Service quality is an effort to meet customer needs and desires and to accurately match customer expectations. According to Susuawu et al. (2020), the quality of service provided by tax authorities in emerging and developing countries is even more critical due to the poor level of tax revenue performance (Amoh & Ali-Nakyea, 2019).
Fiscus services are a way for tax officers to help take care of everything that taxpayers need. Good or poor service quality gives an impression to taxpayers, which affects their decisions regarding fulfilling subsequent tax obligations. In the service quality dimensions proposed by Parasuraman et al. (1985), responsiveness refers to the tax authority's agility in responding to taxpayers' questions and needs; reliability refers to the tax authority's ability to provide excellent service to taxpayers dependably and accurately; assurance involves taxpayers' trust and confidence that the tax authorities will treat them faithfully; empathy is activated when tax authorities make taxpayers' needs a priority and empathize with them; and tangibility refers to the physical aspects of the services provided by tax authorities, such as facilities, tools and machines. Previous research shows that fiscus service quality influences taxpayer compliance (Fuadi & Mangoting, 2013; Ifada et al., 2023; Puspanita et al., 2021).
H2: There is an influence of tax service quality on taxpayer compliance
The mediating role of the self-assessment system on tax rates and SMEs taxpayer compliance

Self-assessment shifts the task of calculating and reporting taxes to taxpayers. In this scheme, taxpayers complete their SPT with a self-assessment letter and proof of payment to the tax authority (Anyaduba & Oboh, 2019). Jacobs (2013) emphasizes that voluntary tax compliance is best achieved through self-assessment. Taxpayer compliance under the self-assessment system is usually written into each country's tax laws: taxpayers calculate their tax obligations, submit to the tax authority the evidence on which they calculated their tax liability, file the return by the legal due date, and pay their tax obligations (Jacobs, 2013). SMEs tax rates are regulated by PP Number 23 of 2018, which simplifies the tax rate from 1% to 0.5%. With simpler SMEs tax rates, coupled with the implementation of the self-assessment system in which taxpayers are trusted to calculate, remit and report their tax obligations in accordance with Article 12 paragraph (1) of the KUP Law, it is hoped that SMEs taxpayer compliance will increase.
H3: The self-assessment system mediates the effect of the tax rate on SMEs taxpayer compliance
The mediating role of the self-assessment system on tax service quality and SMEs taxpayer compliance

Fiscus services are how tax officers help, manage and prepare all the needs of a taxpayer (Pratiwi et al., 2022). The fiscus service thus has a responsibility to assist taxpayers in managing and preparing everything they need. Khaerunnisa et al. (2016) measure taxpayer compliance with three indicators, namely that taxpayers understand and try to understand all tax law provisions, fill out tax forms completely and clearly, and calculate the amount of tax owed correctly and pay taxes on time. The quality of the tax authorities can be essential in increasing SMEs taxpayer compliance. Tax authorities' services are services provided to taxpayers to help them fulfil and carry out their tax obligations (Puspanita et al., 2021). The quality of tax service can be assessed from the perception of SMEs taxpayers by comparing the services they receive with the services they desire. The better the tax authorities' services, the greater the compliance of SMEs taxpayers (Dewi & Susanto, 2021). With good tax service and the implementation of a self-assessment system, it is hoped that SMEs taxpayer compliance can increase.

H4: The self-assessment system mediates the effect of tax service quality on SMEs taxpayer compliance
Method
This research uses primary data originating from questionnaires distributed to SMEs taxpayers in Cirebon Regency. The population in this study was 114,923 SMEs registered with KPP Cirebon II. The sample was drawn with a probability sampling method; 149 respondents were obtained using an accidental sampling technique based on the respondents encountered. The data analysis method used is descriptive statistics together with Structural Equation Model (SEM) analysis in the AMOS 26 application, a multivariate technique that combines factor analysis with regression (correlation) analysis and tests the relationships between variables in the model, both between indicators and constructs and between constructs.
The stages of SEM analysis with AMOS are: (1) describing the research framework in a flow diagram (path diagram); AMOS has developed conventions for drawing such flowcharts, so these are used, and the flow diagram is then converted into structural and measurement model equations. (2) Measuring the feasibility of the indicators used in the research with a GFI value < 0.80. (3) Checking for identification problems by looking at the estimation results; SEM analysis can only be carried out if the model identification results show that the model falls into the over-identified category, which is determined by looking at the df value of the model created. (4) Testing the measurement model, which evaluates the strength of the regression path from a construct to the observed variables or indicators; in other words, the researcher wants to confirm whether the observed variables used can confirm a factor or construct. This analysis technique is also called Confirmatory Factor Analysis (CFA). Because the measurement model is related to a factor, the analysis carried out is the same as factor analysis, except that the author starts by first determining (a priori) the observed variables that are seen as indicators of a factor, based on previous research.
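For readers who want to reproduce the general workflow outside AMOS, the hypothesised model can be sketched in lavaan-style syntax. The Python snippet below, using the semopy package, is only a rough analogue of the authors' AMOS analysis: the latent constructs follow the paths hypothesised in the text, but the indicator names (tr1, sq1, and so on) and the data file are placeholders, not the study's instruments or data.

```python
# Rough analogue of the structural model described above, sketched with the
# Python `semopy` package rather than AMOS (which the authors used).
import pandas as pd
from semopy import Model

# Measurement part: each latent construct measured by hypothetical indicators.
# Structural part: direct effects plus the self-assessment system as mediator.
model_desc = """
TaxRate =~ tr1 + tr2 + tr3
ServiceQuality =~ sq1 + sq2 + sq3
SelfAssessment =~ sa1 + sa2 + sa3
Compliance =~ cq1 + cq2 + cq3
SelfAssessment ~ TaxRate + ServiceQuality
Compliance ~ SelfAssessment + TaxRate + ServiceQuality
"""

df = pd.read_csv("questionnaire_scores.csv")  # hypothetical survey data file
sem = Model(model_desc)
sem.fit(df)
print(sem.inspect())  # path estimates, standard errors and p-values
```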
Result and Discussion
This research uses SMEs taxpayer respondents who are registered at KPP Cirebon II. The data obtained from respondents consisted of 83 men and 66 women. In terms of age, respondents aged 20-30 years dominated with 82 questionnaires, while 67 respondents were aged 31 years and over. In terms of education, high school/vocational school graduates dominated with 77 respondents, followed by 34 respondents with diplomas, 7 with bachelor's degrees, one with a master's degree and 30 with other backgrounds. The validity test results in Table 1 show that the SEM calculations yield estimated values of > 0.05, which indicates that the statement indicators used are valid. Table 1 also shows that the overall CR results are > 0.07, so all research indicators are declared reliable and the analysis can proceed to the next test.
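For context, composite reliability (CR) and average variance extracted (AVE) are typically computed from standardised factor loadings as in the sketch below. The loadings shown are placeholders, not the values estimated in this study, and the thresholds noted in the comments are the conventional ones rather than the cut-offs quoted above.

```python
# Sketch: construct reliability (CR) and average variance extracted (AVE)
# from standardised factor loadings for one construct.
def composite_reliability(loadings):
    sum_l = sum(loadings)
    error = sum(1 - l**2 for l in loadings)
    return sum_l**2 / (sum_l**2 + error)

def average_variance_extracted(loadings):
    return sum(l**2 for l in loadings) / len(loadings)

loadings = [0.72, 0.68, 0.81, 0.75]  # hypothetical standardised loadings

print(f"CR  = {composite_reliability(loadings):.3f}")      # conventionally >= 0.7
print(f"AVE = {average_variance_extracted(loadings):.3f}")  # conventionally >= 0.5
```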
Discussion
Hypothesis H1, which states that the tax rate affects taxpayer compliance, is supported. This can be seen from the probability value of 0.010 < 0.05, a significant value. Thus, there is an influence of the tax rate on taxpayer compliance. This is consistent with the utility theory developed by Allingham and Sandmo (1972), which assumes that taxpayers are rational individuals who weigh the risk that tax levels pose to the value of the taxes they pay. It also follows the research of Schoeman et al. (2022), which suggests that changes in tax rates (in direction and magnitude) may affect SMEs' decisions to register or cancel registration as taxpayers. Based on utility theory, it is anticipated that a lower tax rate and a smaller burden will decrease tax avoidance. Most research findings state that more participants will be willing to register as taxpayers if there is a reduction in tax rates, and even more if the reduction is significant. The same point was made by Khasanah et al. (2021), who stated that tax administration, especially the setting of rates, will be a consideration for those obliged to pay taxes.
Hypothesis H2, which states that tax service quality influences taxpayer compliance, is supported. This can be seen from the probability value of 0.000 < 0.05, a significant value. Thus, there is an influence of the quality of tax service on taxpayer compliance, and these results show that the quality of tax authorities' services positively affects taxpayer compliance. To improve the quality of tax authorities' services, the government should implement: 1) responsiveness, where tax officials provide fast service, are always willing to support taxpayers with their problems and respond quickly to taxpayer complaints; 2) reliability, where tax officials are consistent and dependable in delivering taxpayer services, their work is accurate, transactions are timely and taxpayers are informed about new tax laws and services; 3) certainty (assurance), building trust and confidence in taxpayers and making them feel safe when making tax transactions; and 4) empathy, prioritizing the interests of taxpayers, giving them full attention and behaving politely and cheerfully when dealing with them (Kristianingrum et al., 2022; Noviyani & Muid, 2019; Prihanto, 2020; Susuawu et al., 2020).
Hypothesis H3 in this research is that the self-assessment system mediates the effect of the tax rate on taxpayer compliance. The results show that the self-assessment system cannot mediate the relationship between the tax rate and taxpayer compliance. A justification for the absence of mediation between the self-assessment system and the tax rate on taxpayer compliance is that many SMEs taxpayers still do not understand the self-assessment system, so socialization and literacy efforts are needed for SMEs. The self-assessment system transfers the task of independently calculating and reporting taxes to taxpayers, and the government is trying to introduce taxpayers to filling in the SPT by submitting a self-assessment letter and proof of payment to the tax authority (Anyaduba & Oboh, 2019). According to Fuadi and Mangoting (2013), voluntary taxpayer compliance is best achieved through independent taxpayer assessment. For this purpose, Jacobs (2013) reports that there are three aspects of tax assessment and determination of tax obligations that need to be considered: (1) tax withholding, (2) government assessment, and (3) the self-assessment scheme. Taxpayer compliance under the self-assessment system is usually written into each country's tax laws: taxpayers calculate their tax obligations, submit to the tax authority the evidence on which they calculated their tax liability, file the return by the legal due date, and pay their tax obligations (Jacobs, 2013).
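The mediation hypotheses (H3 and H4) involve testing an indirect effect, for example tax rate to self-assessment system to compliance. The article does not report which mediation test was applied; the sketch below shows one common option, the Sobel test, with placeholder coefficients and standard errors rather than the study's estimates.

```python
# Illustrative Sobel test for an indirect (mediated) effect. The path
# coefficients and standard errors below are placeholders, not study results.
import math
from scipy.stats import norm

def sobel_test(a, se_a, b, se_b):
    """a: path X -> M, b: path M -> Y, with their standard errors."""
    indirect = a * b
    se_ind = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    z = indirect / se_ind
    p = 2 * (1 - norm.cdf(abs(z)))
    return indirect, z, p

indirect, z, p = sobel_test(a=0.31, se_a=0.09, b=0.12, se_b=0.11)
print(f"indirect effect = {indirect:.3f}, z = {z:.2f}, p = {p:.3f}")
```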
Hypothesis H4 in this research is that the self-assessment system mediates the influence of fiscus service quality on taxpayer compliance. The results show that the self-assessment system cannot mediate the effect of the quality of fiscus services on taxpayer compliance. A justification for the absence of mediation between the self-assessment system and the quality of fiscus services with respect to SMEs taxpayer compliance is the finding in the field that SMEs taxpayers want to be helped and guided through calculation, payment and tax reporting (Kurniawati et al., 2021; Sa'diyah & Hariyono, 2022). By establishing good-quality tax services, SMEs gain the awareness to carry out the self-assessment system and meet their tax obligations. This is a challenge for the government in responding to taxpayers' desire for accessible information and payment services, so that people's compliance in paying taxes independently increases. The government is also obliged to build public trust in how the self-assessment system can provide convenience, for instance through easy-to-use taxpayer filing features. The government should likewise provide evidence of increased economic growth and public infrastructure that supports the mobility of citizens' lives, so that economic, social, cultural and security activities can run smoothly (Permata & Zahroh, 2022); this is expected to trigger satisfaction with the distribution of tax funds and to affect taxpayers' awareness in carrying out tax reporting independently. This is in line with Sholehah and Ramayanti (2022), who state that if the socialization of self-assessed taxes is carried out more intensively it will have a significant impact on the compliance of SMEs taxpayers in fulfilling their tax obligations, and that law enforcement in the form of tax sanctions, if carried out intensively, will also increase SMEs taxpayer compliance.
Conclusion and Recommendation
The research shows the positive influence of the tax rate on the self-assessment system and the positive influence of the quality of fiscus services on taxpayer compliance. The research also found that the self-assessment system does not mediate the effects of the tax rate or the quality of the tax authorities' services on SMEs taxpayer compliance. Tax rates and services become meaningless when taxpayers do not trust tax institutions, especially SMEs taxpayers who are still unclear about the self-assessment system; hence, they need guidance from tax institutions. The Government must pay attention to this image of trust and justice if it expects the public to show high levels of taxpayer compliance. The theoretical implication is that the tax rate and tax services are important factors in increasing taxpayer compliance. Taxpayers ultimately behave in line with the theory of criminal behaviour applied to tax compliance by Allingham and Sandmo (1972), who put forward utility theory and assume that rational individuals consider the possibility of being investigated and punished for fraudulent behaviour. So, when filling out the SPT independently, a person will consider the possibility of uncertain results and their consequences, which is perceived to be riskier (Hamid, 2013). For further research, it is necessary to explore the role of tax justice and tax trust as mediating factors. This follows utility theory, which takes into account the risk factors and changes experienced by taxpayers. When the tax rate moves higher and fiscus services decline, taxpayers will avoid them, including the self-assessment system; they want to avoid self-assessed charges.
In general, this research provides an accurate view that the condition of society is not yet concerned with the self-assessment system, so the Government must work hard in providing intensive tax outreach and assistance as well as evaluating tax allocations that have real value and benefits for the wider community, especially SMEs.
Table 1. Validity and Reliability Results
The enrichment of magical thinking through practices among Reiki self-healers 1
Prologue: in the field
A whispering voice with tone of solemn affirmation broke the silence: 'you have now received your first Reiki tuning.' A group of eight people were sitting still, their chairs in a line, holding their hands in the Gassho position. 2 The owner of the voice, the teacher of the Reiki course, started to speak again.'Spirits of all kinds were filling the air as I did the tuning. 3It was almost like a rush hour in the city of Helsinki, so crowded with spirits was the space.' She told us that some of these spirits stepped out and took their positions behind the people participants.'These spirits might be your personal guides on the Reiki-path, or they might have some other personally significant meaning to you.' Everybody received their own spiritual guides; some got two of them.The spirits represented well-known mythological figures such as Merlin, or characters from different religious traditions, such as the Archangel Michael.Some were just 'ordinary folk' , such as the spirits of an old man and a little girl.Some spirits were more like an attribute as pure lightness, or just vague beings; one was simply 'a green creature' . 4 The Reiki Master had started the 45-minute long ritual by beating on a big shamanic drum.She walked around and stopped behind each of us by turn.
1 This study was funded by the Signe and Ane Gyllenberg Foundation. I thank Ilkka Pyysiäinen, Jani Närhi, Mikko Heimola and Pyry Hannila for their helpful comments about this article.
2 In the Gassho position palms are together and fingers are pointing upwards. The position is used when praying or when greeting in Japan and in various Buddhist traditions and among western yoga practices, where it is better known by the name of Namaste.
3 The word 'tuning' is my own translation from the Finnish word viritys, which means something like 'not in balance' (as with a musical instrument) and needing to be brought into balance. In English Reiki literature and in scholarly studies the word 'attunement' is used. See, for example, Macpherson 2008.
4 Field diary Reiki 1/2010.
Everyone was asked to sit comfortably and to keep their eyes closed or halfclosed.Listening to the drumming and feeling the resonance of the drum roll in the body created a sense of a magical space, partitioning off the big gymnasium, in which the course was taking place.When the drumbeat stopped, the sound of harmonic music arose from the flat-top in the background.
The tuning itself looked like a mime in which the Reiki Master was shaping the air and trying to untie some invisible knots.At times, she pushed and pulled the air upwards and downwards, then twirled the air above, in front of, and behind the participants, one after another.Twice she touched the participants.First she took the hands of a person, opened them so that they made the shape of a bowl, whirled the air over it, blew into the hands and then closed the hands back into the Gassho position.It looked as if she had transferred some transparent essence into the hands of the person.In the final part of the tuning the Reiki Master touched each of the participants by putting her hands on their shoulders.Some reported afterwards that her hands felt icecold; others felt them to be hot.These two acts of concrete touch helped the participants to recall and outline their impressions.The reminiscences of the others might also help to recall one's own impressions, although this recollection could also be an act of mimicry.For example, a vision of one big eye was reported by several of the participants.Also visions of colours and pictures were reported, as well as the hearing of sounds and feeling bodily sensations.
After everybody had verbalised her, or his, experience of the initiation, the teacher showed us the first Reiki symbol: Cho-Ku-Rei.According to the preceding lessons, it was necessary to tune the channels before the Reiki symbols could work as vehicles of healing.Basically, the Reiki healing energy is said to be ready to operate under two conditions.For one thing, the channels have to be opened up by the Reiki Master, and for another, a definite symbol is needed to activate the Reiki energy through the tuned channels.Cho-Ku-Rei is the symbol used when healing oneself or others by means of physical contact.The symbol was enacted first by drawing it on paper, and then in the air.In healing practice, the symbol is drawn on the healer's palms.The other symbol was given the day after the tuning.It was called Sei Hei Ki, and it is designed for use when healing emotional states or attitudes. 5We were told that it could be too much for somebody to get two tunings in one day.We were also reminded to drink plenty of water during the weekend, as it is believed to help the energies to flow.
5 In the next stage of Reiki three more symbols are given. With the aid of these symbols the Reiki-trainer is said to be able to heal regardless of time and space.
The lessons before the tuning prepared the trainees for the magical act. Questions such as where does Reiki energy come from? what kind of energy is it? and what are the benefits of practising Reiki? were on the agenda. We were told that Reiki came to the western world via Hawaii. A Hawaiian-born Japanese woman, Hawayo Takata (1900-80), learned the philosophy and practice of Reiki in Japan (1935-7) under the guidance of Chujiro Hayashi, who was one of the first students of Usui Mikao (1865-1926), the founder of the Reiki symbols. 6 The importance of lineage is emphasised among Reiki Masters (teachers) and practitioners. At the head of the lineage is the founder of Reiki, Mikao Usui; in the Western lineage then comes Chujiro Hayashi, followed by Hawayo Takata. Finnish Reiki Masters usually have at least five names in the lineage before them.
6 There are different interpretations of the historical origins of Reiki. Some practitioners espouse more esoteric interpretations, others focus on the techniques (hand positions, symbols, tuning the channels) which Usui Mikao passed on to Hawayo Takata. Some followers of Mikao Usui have regenerated Reiki on the basis of their own experiences, others have respectfully devoted themselves to Usui Mikao's work by continuing the tradition according to his original instructions. But all the different branches of Reiki represent one of two major traditions: the traditional Japanese Reiki and Western Reiki. The difference between these traditions is said to lie in the exercise of intuition and meditation, which are more typical for Japanese Reiki. In Japan, there are nowadays six Reiki lineages independent of Usui's Reiki Healing Society. In this article the detailed background knowledge of the history of Reiki and the different lineages of teachings are not relevant to the analysis.
The first symbol to be learned, Cho-Ku-Rei.
The symbol used when healing the mind, Sei Hei Ki.
The word 'reiki' comprises two elements, based on two words whose meanings are united. Rei means spiritual and ki means life force. 7 Reiki is par excellence a healing method based on the belief that there is a universal healing energy which can be channelled through the hands. It is a common belief among Reiki practitioners that the energy in question is actually the purest loving energy in the whole cosmos. People worked with it thousands of years ago, for example in ancient Egypt, and then it was forgotten. 8 According to this belief, such energy is 'dormant' within us, and has to be reactivated. Once activated through initiation, it is in use for the rest of one's life.
The Reiki teacher 9 of this weekend course was pedagogically very skilful; a talented speaker, supportive and inclusive. Her attitude was positive and she performed in a lively manner. She spoke in dialect. Her appearance was folksy and accessible. She was not normative, but rather seemed flexible in relation to the different interpretations concerning Reiki. She rather encouraged us to use intuition when practising Reiki. 10 She told us stories which were easy to remember either due to one exceptional element or first-hand experience. For example, she told us the story of a lawnmower and a woman who called her one day and asked: 'is it really true that you can give Reiki to anything?' 'Yes', the Reiki teacher answered. After a while the phone rang again, and the previous questioner said that she had just used Reiki on the lawnmower, 'and it works'. Many similar narratives were told. They all included concrete problems which were then solved by means of Reiki energy. 'It works' seems to be the best evidence of the authenticity of pure Reiki healing energy. She also told us about very touching, traumatic events that had happened to her. These stories were also easy to remember as they affected everyone present. The unspoken message beyond these sad stories was 'if she can survive happily through such difficult ordeals, then Reiki must be a very powerful tool'. The Reiki teacher also told us some examples of when she herself had 'certainly been in the wrong mood and had not been able to even think about Reiki'. In that way she emphasised that Reiki might not work if one is not in a well-balanced state of mind. She counselled us to keep Reiki on our mind also after the course. 'The challenge I am facing every day is how to integrate these subtle, positive practices into everyday living with children, and home life.'

7 The original meaning of the word 'reiki' has also become a matter of speculation.
8 Such narratives of 'a lost paradise' or 'a golden age long ago' are well-known in many religious traditions.
9 Reiki teachers have gone through first, second and third stages of Reiki initiations before they are able to teach others.
10 The concept of intuition is very popular in everyday speech and among spiritual healing practices. Usually intuition means, in these contexts, a kind of pure and genuine knowledge within us. The source of intuitive knowledge is either in one's own religious tradition, or it might be channelled through some entity from outer space (for example God, the Holy Ghost, a cosmic entity, a Reiki spirit). The easiest way to use intuition is to just trust in thoughts that come to mind in relaxed circumstances. (Field diary 1/2010.)
Reiki
The spread of Reiki has been rapid in the Western world since 1980, when Hawayo Takata died. She kept the Reiki symbols in secret and passed on the teachings of Reiki only as an oral tradition. Takata initiated 22 Reiki Masters, who in turn have initiated several new Reiki Masters. The first Finnish Reiki Master started her courses as early as 1985. It is estimated that there are tens of thousands of trainers in Finland, and millions all over the world. 11 Nowadays the symbols and detailed instructions of the hand positions, along with illustrative pictures, are available to all via many Reiki handbooks and web pages. 12 Some books give a step-by-step description of how the initiation into Reiki mastery is done. Still, the tunings of the Reiki channels have to be made by the Reiki Master. It cannot be self-made.
Training in traditional Reiki has three degrees (levels).No special background or credentials are needed to receive training, but the preceding levels have to be performed before proceeding to the next level.The prices of weekend courses vary between 200 and 500 euros in Finland.Personal guidance for becoming a Master costs 1,000-2,000 euros a year.It includes several personal meetings with a Reiki Master, and becoming a Master can take some years.
In this article, Reiki is an example of a spiritually based healing context, which offers an entry into the magical thinking through the ritual initiation.There are several practices like Reiki in the field of new spirituality.Their backgrounds are situated in a variety of religious traditions, although many religious ideas in the field are based on assimilation of ideas and practices familiar in Eastern religious traditions. 13 Why is Reiki so particularly famous in the field?It would seem that Reiki is very flexible and easily integrated to other practices.Several persons I have met in the field who have practised or are practising Reiki have usually mixed it with elements from other healing practices. 14One factor, furthermore, which explains the popularity of Reiki has to do with healing.Healing, as well as illness and sickness, involving pain and relief from pain, are universal experiences felt by everyone.Complementary and alternative ways of healing are as popular among ordinary folk now as they have been throughout the history of medicine.Even medical nursing staff participate in Reiki courses in their leisure time.One reason for the popularity might also be that Reiki courses are open to everybody.Everybody can learn to heal.After initiation, participants are promised, and believed, to be rewarded for the ability to heal themselves and those near to them with the help of cosmic energy.
The study
In this article I study what I call the enrichment of magical thinking among Reiki self-healers.By the term 'enrichment' , I refer to an observable thickness (or density) of spontaneous reasoning going along lines of magical trains of thought.This includes, for example, assumptions of agency and magical contagion.
At first my research interests focused on intuitive presumptions beyond expectations of healing in general. When in the field, I soon realised that intuitive thinking prevailed over reasoning and behaviour. Actually, it looked like the intention was to stimulate one's mind towards intuitive thinking, and vice versa, to relieve the mind from analytical and reflective thinking. New questions followed: What were the features that encourage intuitive and magical thinking during Reiki courses? How did the routine performance of magical thoughts happen? What was the impetus to continue practising magical thinking? My theoretical frame of reference goes back to studies about the architecture of mind, and especially those concerning the duality of mind, known as two minds theory, or dual-process theory. My approach is based on the latest experimental findings in cognitive and social psychology concerning both intuitive, magical thinking and reflective, analytical thinking 15 (Chaiken & Trope 1999; Epstein 1994; Epstein & Pacini 1999; Evans 1989, 2008, 2010; Evans & Over 1996; Hammond 1996; Kahneman 2011; Lieberman 2003; Nisbett et al. 2001; Reber 1993; Sloman 1996; Stanovich 1999). In the study I mix qualitative data and analysis with a quantitative frame of reference. My study belongs to the field of the cognitive science of religion (henceforth CSR), which is a relatively new multidisciplinary research programme. 16

The analytical frame of reference includes three stages, or steps. The first stage is the 'entry into the magical world', the second is the 'acquisition of magical skills', and the third is the 'development of magical expertise'. This three-step model is actually analogous with the general characteristics of skill acquisition (see Fitts 1954, Fitts & Posner 1967, Anderson 2000), which is the model used in the study of the cognitive structure of expert performance. The study of cognitive learning structures and acquisition of skills is often focused on technical skills and the automation of skills until the competence in question is acquired (as for example in learning to drive a car). The steps of the skill-learning model are: 1) the cognitive stage, 2) the associative stage and 3) the autonomous stage.

13 Overview studies concerning the new spirituality (or New Age) include contributions by, for example, Wouter J. Hanegraaff (1996), Paul Heelas (1996), Meredith McGuire (1998), Linda Woodhead and Paul Heelas (2000) and Steven Sutcliffe (2003).
14 Besides observation of Reiki courses, my ethnographic data includes notes and material from mind-body-spirit festivals in Finland (Minä Olen -messut 2001, 2009, 2010, 2011; Hengen ja tiedon messut 2008, 2009, 2011).
The famous '10,000 Hour Rule' is a research finding in the field. According to this rule, it takes approximately 10,000 hours of deliberate practice to master a skill (Ericsson 1996, Ericsson et al. 2006). The study, conducted in Berlin in 1993, ranked groups of students at the Berlin Academy of Music by excellence and then correlated achievement with hours of practice. The researchers discovered that the best had put in about 10,000 hours of practice, the good 8,000 and the average 4,000 hours. In later research this rule was applied to other disciplines (sports, the arts, science) and similar results were found (Ericsson et al. 1993, 2006). The question of expertise might be relevant also in the context of the study concerning the enrichment of magical thinking. The development of expertise and virtuosity might not be characteristic only of gifted musicians, chess masters or top-level athletes, which are the groups most studied. Would anybody who practises anything deliberately (for 10,000 hours) become a master of the skill?
In the context of dual-process theory and the recent findings concerning magical thinking, I am looking for the enrichment of magical thinking and development in expertise in the context of Reiki case.Before that, I will give a brief overview of the theoretical frame of reference of this study.
Two minds
Two fundamentally different ways of thinking-intuitive and reflective-have been a subject of interest and speculation throughout the history of science (e.g.Spinoza, Leibniz, Locke, Schopenhauer, Freud, James).The rapid advances in studies focusing on the mind17 support the propositions of these early scholars.Over the past 20 years, there has been an accumulation of a considerable body of empirical evidence for dual processing in learning, reasoning, decision-making, and social cognition (Evans & Frankish 2009, Evans 2010).While the emphasis and details differ somewhat between theorists, there is a broad consensus that the two processes (minds) might include the set of features itemised here.
Intuitive thinking evolved early; it is fast, spontaneous, automatic, concrete, heuristic, holistic, contextualised, mainly unconscious, and it is biased by personal emotions, experiences, beliefs, associations and generalisations.Analytical thinking, by contrast, evolved relatively recently and is slow, controlled, reflective, logical and abstract.By means of systematic processing, more or less, humans can correct and transpose reasoning based on intuitive presumptions.At times, the intuitive and the reflective minds might also compete with each other.The latest findings indicate that the unconscious, intuitive mind is more in charge of our so-called conscious thinking than we have hitherto believed.(Wegner 2002, Evans & Frankish 2009, Evans 2010.)The processing of the intuitive mind is unobserved.For example, stereotypical presumptions might direct our reasoning and behaviour in ways we consciously do not accept.
The study of religions and the intuitive mind
Intuitive thinking has been the primary target of scholars in the field of CSR since the beginning of this recent inter-disciplinary research, as actual religiosity is by and large intuitive. 18 Many CSR scholars have touched on the question of the basic duality of the mind in pursuance of other contributions to the research programme. 19 Ilkka Pyysiäinen, particularly, has discussed in detail the distinction between what is intuitive and what explicit in religious thought in general and specifically in relation to counter-intuitive concepts and agency (2004a, 2004b, 2005, 2009). He clarified the wide field of dual-process studies by listing eleven different dichotomies characterising the two systems of reasoning (Pyysiäinen 2004b). 20 Pyysiäinen has emphasised that the cognitive functions supported by 'the A-system' (intuitive thinking) are, for example, intuition, fantasy, creativity, imagination, visual recognition, and associative memory (Pyysiäinen 2004b: 135, see also Sloman 1996). These functions are often observable in religious contexts. 21

18 Todd Tremlin (2006: 172-82) has been involved in discussions within CSR concerning how these two minds are differentiated from each other in religious thought. Dan Sperber's (1997, see also Mercier & Sperber 2009) contributions deal with cognitive architecture from the view of an evolutionary and massively modularist framework.
19 For example: the minimal counter-intuitiveness hypothesis (MCI) of Pascal Boyer (Boyer 1994, 2001; Boyer & Ramble 2001; Barrett & Nyhof 2001), the notion of the hypersensitive agent detection device (HADD; Guthrie 1993, Barrett 2000), the intuitive theism hypothesis of Deborah Kelemen (Kelemen 1999a, 1999b, 1999c, 2004; Kelemen & DiYanni 2005), the hazard precaution model of ritual behaviour by Boyer and Pierre Lienard (2006a, 2006b), E. Thomas Lawson's and Robert N. McCauley's ritual form hypothesis (1990), the modes of religiosity theory of Harvey Whitehouse (2004) and the notion of afterlife belief put forward by Jesse M. Bering (2002).
20 There have been several kinds of concepts about the intuitive and the analytical mind. For example, automatic vs. controlled, heuristic vs. analytical, reflexive vs. reflective, associative vs. rule-based, implicit vs. explicit etc. (Pyysiäinen 2004b).
21 Furthermore, Finnish scholars like Jani Närhi have contributed to the research on intuitive thinking in connection with paradise representations, while Elisa Järnefelt is revising her dissertation concerning creationist thinking (Närhi 2008, 2009).
Close to the dual-process model is the observation that natural intuitions tend to overwrite theological doctrines and drive behaviour.CSR scholar Justin Barrett has termed the theory concerning the difference between idealized theological doctrines and the beliefs people actually have, as 'theological correctness' .Barrett argues that there seem to be two parallel God concepts.The basic concept is used in real-time, fast processing of information, while the learned, more complex concepts are used when theological doctrines are explicated (Barrett 1999, Barrett & Keil 1996).What we think we believe in, and what we spontaneously assume when there is no time or space for rational thinking, are fundamentally based on two different types of reasoning.Later Barrett (2004) argues, on the basis of diverse studies from CSR and cognitive psychology, that religious belief is natural.It is intuitively satisfying because it is cognitively easy to accept in the frame of cognitive constraints (Barrett 2004: 17).
Mood and cognition
Besides cognitive constraints in the processes of intuitive and reflective thinking, theories of mood and cognition from social psychology are relevant in the context of this study. Experimental studies of 'positive affect' (PA) and 'faith in intuition' (FI) have predicted superstitious beliefs and sympathetic magic (King et al. 2007, Hicks et al. 2010) and, thus, are relevant in this study. So too are studies concerning subjective rationality (first identified by William James in 1893), also called the feeling of meaning, which pertains to a feeling about an event or experience that one has found to feel 'right'. This feeling of rightness is responsible for our perception that experiences make sense (Mangan 2000). How does mood direct cognition, and what kind of mood is needed to activate the process of enrichment of magical thinking? According to CSR scholar Pascal Boyer (1996: 626), 'enrichment arises from the broad initial, intuitive principles, together with the expectations that they trigger, towards the complex theoretical structures' (such as, for example, religious doctrines).
Magical thinking
The terms 'magic' and 'magical thinking' have a wide range of meanings in the earlier history of the study of religion. Also, many interesting hypotheses have been presented in the long tradition of scholarly discussions concerning magic. In what follows I will highlight just three names from this history, as the definitions of these scholars have proved still to be applicable a hundred years later. Two of them are so-called Victorian anthropologists, Edward B. Tylor (1871) and James Frazer (1911), who were the first to define 'the universal laws of magic'. They were both looking for patterns of thought underlying magical actions. Tylor pointed to the importance of analogical reasoning and he, in accordance with the evolutionist framework, claimed that primitive people replace cause and effect with associations of ideas based on similarity, contagion and contiguity. Frazer elaborated Tylor's ideas on magic into the famous typology of sympathetic magic. The law of contagion or contact is based on the principle that two things once in contact will retain a connection regardless of time or space, whereas the law of similarity is based on the principle that like attracts like. For example, the similarity between plant parts and body parts indicated their efficacy in treating diseases in those body parts.

The French anthropologist and sociologist Marcel Mauss (1950) emphasises that the concept of mana is also an important element of magical thinking. He pointed out that the essence (mana) is unitary, and thus remains in every part taken from the whole (pars pro toto). The concept of mana, according to Mauss, connoted the driving force, or essence, that travels along the lines determined by sympathy (Mauss 1972: 117).

In the early twentieth century, magical thinking was situated at a lower grade in the hierarchy of the evolution of human mental processing. Magical thinking was seen as fundamentally different from the Western style of thought. It was also believed among Western scholars that there is a major difference between magic and religion, and that the primitives were still too immature to be able to practise Christian monotheist religious thinking. The tenacious belief that there is no need for magical thinking as education increases the level of knowledge is still alive. The latest evidence, nevertheless, supports the argument that humans, regardless of education or secularisation, are apt to think in magical ways, as the two minds theory predicts (Wegner 2002, Evans 2010). In the late twentieth century Carol J. Nemeroff, Paul Rozin and colleagues found that Frazer's, Tylor's and Mauss's principles of magic seem to work in the thinking of educated, Western adults (Rozin et al. 1986; Rozin & Nemeroff 1990; Nemeroff 1995; Nemeroff & Rozin 1994, 2000). The findings are very interesting and open up our understanding concerning magical modes of thinking, also in non-religious contexts. Of particular interest were studies concerning different kinds of disgust. Their findings indicate, for example, that people conflate germs with evil, as reflected in refusing to wear a sweater said to have belonged to Hitler or some serial killer, even if it was sterilized. Emotions seemed to override rational thinking.

The law of contagion holds that physical contact between the source and the target results in the transfer of some effect or quality (essence) from the source to the target. Qualities may be physical, mental, or moral in nature, and negative or positive in valence.22 'The most relevant feature of a source - in the mind of the perceiver or practitioner - is what is believed to be transmitted. Both properties and modes of transmission may be metaphorical.' (Nemeroff & Rozin 2000: 4.) According to Rozin and Nemeroff, beliefs about negative contagion are more general than positive ones (Rozin & Nemeroff 1990: 208). And further, negative beliefs are stronger in situations of conflict. Contact with a host of negative things (e.g. unknown strangers, malicious others, their possessions or bodily residues, death and physical corruption of any kind) is felt to be physically dangerous and/or morally debasing to the person. Contamination and pollution are the terms used when an essence and its effect are negative in valence. There are also scientifically validated instances of contagion, for example, in germs and the transmission of illnesses.

Still, magical contagion is far broader in terms of what may be transmitted and how (Nemeroff & Rozin 2000: 4). 'In the broader concept transmissible properties include physical or moral properties and may be harmful or beneficial. Thus goodness and evil are as transmissible as influenza.' (Nemeroff 1995: 147.) Contact with a smaller set of positive things, such as loved ones or personifications of goodness or holiness (for example, the Virgin Mary), or their possessions or residues, can be felt to enhance or elevate the self (Nemeroff & Rozin 2000: 7).

Nemeroff, Rozin and their colleagues state that magic is an intuitive, and possibly universal, aspect of human thinking, ranging from spontaneous, vague, 'as if' feelings all the way to explicit, culturally taught beliefs. Magical thinking involves the sympathetic principles of similarity and contagion, and the notion of an imperceptible force (essence) that drives, carries, or provides the mechanism for effects (Nemeroff & Rozin 2000).

What is magical thinking in terms of intuitive processing? Magical thinking is based mainly on the processes of the intuitive mind, although it also represents inferences typical of reflective thinking. For example, different kinds of explanation models and 'folk-theories' concerning the unseen world, or spiritual beings and their aims and wills, are popular 'analytical' concepts in the new spirituality.23 In the religious context intuitive, magical thinking is highly ranked because it is cognitively effortless due to the nature of the concepts. Intuitive decisions are not, however, unconscious decisions; they are rather based on feeling instead of reflection (Evans 2010: 166). 'When we "go with our gut" we choose to do what feels right. Reflective thought is a slow, cognitively expensive, and tiring process. Intuition and feeling are fast and easy bases for decision-making. The feeling that we do things because we consciously intend to do them has been shown to be a powerful illusion. We can and do confabulate explanations for our own behaviour, giving ourselves and others false introspective reports of the reasons for our actions.' (Evans 2010: 169.) As intuitive thinking is, among other things, strongly context-bound, spiritually-based alternative therapies or religious environments are the most useful contexts in which to study the delivery of magical meaning-making. That is, magical thinking is best observed through spontaneous speech, which is shared in the positive atmosphere created during the gatherings of those who are interested in holistic healing.
In the following analysis I will observe the data from Reiki training courses by focusing on magical thinking and the contextual cues which support and enrich it. The data of the case study have been collected in the field by way of participant observation, which includes tape-recordings.
Three stages of analysis
The analysis of the enrichment of magical thinking among Reiki practitioners is presented in the frame of three stages or steps. As explained above, the stages progress as follows: the first stage is the entry into the magical world, the second involves the acquisition of magical skills, and the third the development of magical expertise. This framework draws an analogy with the general characteristics of the skill acquisition model, which describes a three-stage process of becoming expert in some skill.

This analytical frame of reference, in the context of dual-process theory and the new findings concerning classical definitions of magical thinking, makes the process of enrichment of magical thinking observable. Through the analysis I try to understand what the contextual cues supporting magical, intuitive thinking are, how the magical thinking is routinised, and what the impetus to continue the magical practices is.

23 Scientific theories have to argue their claims and prove them according to the criteria of the research community. Lay persons' 'theories' are usually not challenged by counter-criticism.
The first stage: the entry into the magical world
In the data, participating in Reiki training and the act of accepting the initiation through the tuning ritual is an entry into the magical world of thinking. In the initiation ritual and during the healing practices everybody gets the subjective experience of the working of Reiki energy. The Reiki Master reinforces the view that after the initiation, besides the ability to heal, the participants are connected to the unseen world via Reiki energy, and that this gives them an ability to listen to their own inner, intuitive voice (or the voices of different spiritual beings). Supposedly, these people, by virtue of participating in the Reiki course, are apt to think intuitively. They come to the course on their own initiative, and they are curious to learn more about Reiki. Some of them have had positive experiences of Reiki treatment, and some have heard about Reiki from close relatives or friends. The educated guess here could be that very sceptical thinkers do not take part in weekend Reiki courses.24

The source of magical contagion is universal Reiki energy. It is transferred during the initiatory ritual from its cosmic source to the Reiki trainee with the help of the Reiki Master. Actually, according to Reiki doctrine, the external spirit does not overtake the body as in spirit possession. Humans are rather 'the channels' through which spiritual power can operate.

On the whole, the atmosphere in the course was very positive. Many had positively loaded expectations of the Reiki course beforehand. During the jointly shared time, the group became closer, and positive affects strengthened among participants by means of social influence.25 Positive affects (PAs), faith in intuition (FI) and the feeling of meaning supported the enrichment of intuitive, magical thinking, as the experimental studies predicted. Narratives told by the teacher (Reiki Master), evidence heard ('it works') and the approving attitude of the teacher created the positive atmosphere felt in the field.

Although many elements typical of new spiritual movements (e.g. soft lights, candles, the scent of incense) were absent, as the course took place in the gymnasium of an ordinary school, the use of drumming created the feeling of a sacred space. A shaman drum does not feature in original Reiki practices, but several Reiki Masters have integrated their Reiki practice with items or practices from other spiritual traditions (Melton et al. 1990, Macpherson 2008).

According to the expertise model, subjects develop a declarative encoding (knowledge about facts and things) of a skill during the first stage. That is, they commit to memory a set of facts relevant to the skill. Learners typically rehearse these facts as they first perform the skill (Anderson 2000: 281; Fitts & Posner 1967: 11-15). Practising the new skills, step by step, first by drawing the symbol on paper and in the air, then learning the hand positions, and finally practising the healing in group healing sessions, where each in turn lay down on the floor, the others keeping their hands on certain places on the body, is concrete and practical and includes 'ordinary' acts. At the same time, with these bodily practices, the meaning of the practices is associated with the flow of the universal healing energy, Reiki. At this stage, participants come to understand what Reiki skill is composed of. According to the model first described by Fitts (1954), attention is significant for the acquisition of skill at this point in the process.

At the entry stage not many normative rules were explicated by the teacher. On the contrary, every question was answered with approval and positivity. For example, the question of the source of Reiki energy compared with Christian beliefs presented no problem at all: 'The loving energy covers everything and the source is the same, it just has different names.' The social influence of the group affects our own views, and 'the main reason people believe things is because other people believe them as well, especially when those other people are members of the same social group' (Evans 2010: 150).

The second stage: acquisition of magical skills

The second stage, involving the acquisition of skills, might also be seen as an analogy to the classic ritual theory of Arnold van Gennep (1909/1960), as this stage of practising the skills is a liminal state (marge in Gennep's vocabulary). The initiation is received, but the future is 'unknown'. In this data the second stage is critical in the sense that probably those not impressed enough, or those facing contradictions with religious or moral beliefs acquired earlier, might decide not to continue practising.26

Participants have been advised by the Reiki Master to rehearse the new skills once a day for at least three weeks at home. Repeated practice leads to automation, as the connections among the various elements required for successful performance are strengthened. Through the rehearsal of the Reiki healing technique, the trainees accustom themselves to the magical ways of thinking, as they were encouraged to do during the course.
The third stage: the development of magical expertise
In the third stage, development from novice to master begins. Those trainees who have arrived at this stage participate in the Reiki II course, the next level. They have practised the Reiki technique and they are interested in learning more about Reiki. The difference between the participants in the Reiki I and II courses is in behaviour. These are not novices anymore, and they show it by speaking openly about their intuitions (about everything). Explanations based on magical contagion are usual. Narratives of miracles made with the help of Reiki are now told by participants, not only by the teacher as in the first course. Bursts of creativity, fantasy, intuition, and imagination fill the air as participants express their thoughts and feelings freely and spontaneously. Some of them have had tattoos of Reiki symbols made on their skin. Some tell how they drew Reiki symbols on paper and put them in their bras or in clothing, under the pillow, in their wallet, or some other personally significant place. The participants also reveal how they have decorated their homes with symbols. Several creative ways of using Reiki symbols are compared. The meaning and usage of Reiki symbols have been extended to cover protection, success in love affairs or prosperity. People laugh merrily at each other's ideas for using Reiki in new ways. After the new tunings (opening up new channels for new symbols), participants tell detailed narratives of fantasy travel. Obviously, the atmosphere feeds the imagination of participants. The role of the teacher is smaller than in the 'novice course'.
It appears that magical causal reasoning has automatised rather quickly among the participants.
Along with magical thinking, normative rules based on intuitive and magical biases are strengthened. This is reflected in the rituals that practitioners have created. Rituals aiming at protection are more usual than in the first stage. Also, cleansing rituals become more important as more complex doctrines are adopted through the reinforcement of magical thinking, due to implicit learning in the new social group. Studies show that we have a fundamental tendency to form ourselves into in-groups and out-groups (Evans 2010: 155).
Conclusions and discussion
In this article, I have observed the enrichment of magical thinking in the context of spiritually-based healing as practised on Reiki courses.27 At first, I introduced a piece of ethnography, namely the magical act of initiation. During the act the essence of universal Reiki energy is believed to transfer from the cosmic source via the Reiki Master into the student.
The magical ritual seems to follow the line of magical thinking outlined in the studies concerning magical contagion by Nemeroff, Rozin and colleagues.
According to the latest studies presented above, magical thinking is included in the processes of the intuitive mind in many respects. Classical theories of sympathetic magic have proven to be a part of our natural way of thinking. Experimental studies have shown that the law of contagion seems to operate also in modern people's minds, mostly on an unconscious level. Magical thinking is easy, it feels right, and it gives a feeling of meaning and control over one's life. In the studied context, positivity, heuristic creativity and a joyful atmosphere were strongly present, in contrast to the above-mentioned studies, which focused mostly on feelings of food disgust and magical thinking about illness virulence, or were observed in the contexts of stressful and uncertain events.

It seems obvious that, in a spiritually-based healing course such as the Reiki training illustrated here, triggers that activate intuitive, magical thinking are strongly present. The triggers are contextual cues. As the intuitive mind is contextually bound, heuristic processes are easily activated. Context drives cognition, in this case, towards intuitive and magical thinking. Contextual cues support intuitive thinking and biases beyond magical thinking. The atmosphere and social influence are probably the most important factors in creating a context that supports intuitive thinking, as the studies in social psychology have stated. The positive atmosphere needed in a healing context arises on the basis of many elements which are cognitively easy to adopt. These include, for example, emotionally touching cues, personal experiences and narratives which support magical meaning-making. Also, the proclaimed intention in the field is to feed the 'intuition' of participants. The concept of intuition in everyday speech and in spiritual contexts is comprehended as one's inner voice, which is believed to be a source of true knowledge.

We can assume that a spiritually-based healing practice, like Reiki, is one possible entry into the world of magical thinking. The enrichment of magical thinking was clearly observable during the timeline of one weekend course, and especially when compared with the second course, when participants had already practised Reiki techniques and the automation process was going on. The analytical framework of three stages exemplifies the process of enrichment during the Reiki course. In the first stage (entry), those who are interested in, or curious about, holistic healing acquaint themselves with Reiki. Then, in the stage of the acquisition of skills, participants deliberately practise their new skills. Those who are willing to continue practising take part in the next level course, which is the third stage: the development of expertise. From the second course (and the third stage in the analysis) onward, development in expertise begins.

The impetus to continue the practices might originate in feelings of joy or other positive feelings, including the experience of learning a healing method. Positive feelings both precede and follow when things are cognitively easy to learn. Everybody can and does learn the skills. Also, 'the 10,000 hours rule' is very fascinating to consider in this context, as a path to religious expertise.

Then a new question occurs: what kind of expert is the magical virtuoso? The characteristics of a magical expert would probably include creativity, fantasy, intuition, imagination and associative, magical thinking. One of the master's aims is to release the mind from (too) reflective thinking, and to trust more in intuition. What then is the role of the reflective mind, if practices like Reiki can be felt to enhance or elevate the self? As Gerald Clore and colleagues have put it: 'Although positive mood sometimes leads to better performance in creative tasks, some tests find that a happy mood also leads sometimes to more responses and more errors. Because positive affect may signal success in a task, it may lead to an early exit from any particular stage of the process. This may result in impulsiveness and a tendency to go with whatever responses come to mind, including novel ones. However, creativity also includes relational, holistic, integrative thinking.' (Clore et al. 2000: 46.) The hypothesis presented here, that some kind of mixture of both implicit and explicit learning might play a significant role in explaining why participants of spiritually-based healing practices or similar still continue practices within 'the world of magical thinking', is worth further study.

In this context, Justin Barrett's concept of 'theological correctness' could be seen as analogous with 'medical correctness', as conceptions and beliefs concerning illness and healing depart from those based on analytical, scientific thinking. In the field of spiritually-based healing there are plenty of concepts, practices and beliefs which have little resemblance to official medical practice. On the contrary, intuitive, magical thinking tends to override medical doctrines and drive behaviour. Supposedly, medical sciences do not support holistic ways of thinking, and, when people get ill or sick, the search for meaning behind one's illness arises. Concepts based on intuitive and magical thinking make meanings which feel right and are both cognitively easy and intuitively satisfying. Furthermore, positivity affects health and gives a feeling of control over one's sickness and health.

When observing magical thinking, two views occur. On the one hand, innate cognitive constraints determine what kind of information we are dealing with, and how we are dealing with it. On the other hand, contextual cues direct cognition. In this article I focused on contextual cues directing the cognition towards magical thinking, and showed that in the Reiki context, at least, the cues have a supportive role both in the process of enrichment of magical thinking and in creating the experience of a spiritually guided healing process.
Sources
Field diary from Reiki courses 1/2010, 2/2010. Data is in trust of the author.
Field diary and material from mind-body-spirit festivals. Data is in trust of the author.
Transliterated tape-recordings from Reiki courses. Data is in trust of the author.
, K. Anders Ericsson and his colleagues divided students into three

15 Cognitive and social psychology have both studied the same subject, unaware of each other until 2006, when Jonathan St. B. T. Evans and Keith Frankish organised the first major conference 'In Two Minds: Dual-Process Theories of Reasoning and Rationality' and brought together scholars from different disciplines to discuss the contributions to dual-process theory.

16 The history and the development of CSR is nicely presented in, for example, Aku Visala's award-winning study Religion Explained? A Philosophical Appraisal of the Cognitive Science of Religion (2009, published by Ashgate in 2011).
Hepatocellular Carcinomas with Granulomatous Inflammation In Tumor Stroma: Clinicopathologic Characteristics
Objective: To determine the frequency of granulomatous inflammation within hepatocellular carcinoma (HCC) and its clinicopathologic associations. Material and Method: Fifty-eight HCCs (51 explants, 3 lobectomies, and 4 segmentectomies) were reviewed. Results: Five (8.6%) cases (F/M=1/4, mean age: 63.6) were identified with granulomas. 1/5 had a history of neoadjuvant therapy. 4/5 patients presented with early stage (pT1/2). All were well-differentiated (Grade 1-2/4). The mean number of tumor foci was 3.6, with a median size of 2.2 cm. All of them had advanced fibrosis. No difference was identified from cases without granulomas (n=53) in terms of prognosis and the aforementioned parameters (p > 0.05). Granulomas were mainly concentrated in peripheral parts of the tumors. One case with nodule-in-nodule formation had granulomas lined along the border of the inner nodule. In 2 cases, granulomas were identified in steatohepatitic areas, while another had clear cell change. Only 1 had necrotizing granulomas, none with acid resistant bacilli. Two cases revealed concomitant granulomas in the adjacent liver parenchyma in addition to the tumor stroma. Except for one with a history of tuberculosis, none of the cases had a granulomatous disease. Conclusion: This is the largest case series of HCCs with granulomas by far. Our data revealed neither clinicopathologic and prognostic difference nor definite etiology related to granulomas. Yet, the association with steatotic and clear tumor cells suggests the role of cytoplasmic content, while the distribution of granulomas points to a host immune response.
INTRODUCTION
Granulomatous inflammation is a unique type of chronic inflammatory response (1). Although it does not indicate a definite etiology, its detection narrows the differential diagnosis list, leading to effective treatment. Granulomas may be associated with infectious [e.g. tuberculosis (Tbc)] or noninfectious diseases (e.g. sarcoidosis, Crohn's disease) and local irritants (e.g. necrotic material) (1). Tumor-related granulomas remain a rare etiology on this differential diagnosis list.

Malignancy-related granulomatous inflammation was observed as early as 1911 and, over time, it has been described mainly in 3 locations: in tumor-draining lymph nodes (LNs), distant organs, or within tumor stroma (2-5). 'Sarcoid-like reaction' is a commonly used term for all these 3 forms, but is mainly preferred to define a systemic inflammatory response resembling sarcoidosis in both clinical and pathologic aspects.

Hodgkin lymphoma and dysgerminoma/seminoma (in about 15% of cases) constitute well-known examples of malignancies characterized by granulomas in tumor stroma (6). They are followed by some rare types of malignancies, mainly carcinomas but also a few sarcomas (7,8). Hepatocellular carcinoma (HCC) is one of these rare carcinomas, represented by only 4 case reports in the last 40 years (9-12) when patients with comorbidities such as sarcoidosis and Tbc are excluded (13-15). There is virtually no systematic analysis regarding the frequency and clinicopathologic characteristics of HCC cases harbouring granulomatous inflammation.
This study aimed to determine the frequency of granulomatous inflammation in HCCs ('granulomatous cases'), as well as to define the clinicopathologic features of these cases. We also aimed to determine the etiology and prognostic impact of this reaction by comparing them with HCCs without granulomas ('non granulomatous cases').
Histopathological and Histochemical Analysis
All Hematoxylin-eosin slides were reviewed by a single observer. Whenever needed for conflicting parameters, two pathologists decided together.
All the relevant parameters required to determine pT (AJCC 8th ed.) were noted in addition to tumor size, histologic grade (based on the Edmondson and Steiner grading system), histologic subtype, and tumor necrosis of the five largest foci (16,17). Granulomas and their characteristics were reviewed: distribution throughout the tumor (in the center or at the periphery), presence of necrosis, accompanying inflammatory cells (lymphocytes and eosinophils), and Langhans-type giant cells. Intratumoral inflammation (apart from granulomatous inflammation) was screened at 10x and arbitrarily scored as none, minimal (barely perceptible), or moderate/dense (easily perceptible).
Slides of background liver (57 of 58) and regional LNs of dissected cases (16 of 58) were also examined regarding fibrotic stage and granulomatous inflammation.
Ziehl-Neelsen staining was performed in each tumor block with stromal granulomas.
Evaluation of Clinical Parameters
Information on the patients' gender, age, etiology of chronic liver disease, history of neoadjuvant therapy, and follow-up was obtained through pathology databases, patients' charts and the national database of death certificates. The patients who died within the first 30 days of the postoperative period (9 patients, all in the non-granulomatous group) were excluded from the survival analysis.
Clinicopathological variables were compared according to the presence of granulomas. Since the granulomatous cases' group was low in number, continuous variables were compared with the Mann-Whitney U Test, and proportions of categorical variables were compared with Fisher's Exact Test. Phi-coefficient and Cramer-V tests were used to assess the strength of association.
Clinical outcomes were recorded and analyzed by Kaplan-Meier curves, and the differences in clinicopathological features and overall survival between groups compared by log-rank analysis.
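As a purely illustrative sketch (not part of the original study), the group comparisons and survival analysis described above could be coded in R roughly as follows; the data frame `hcc` and all column names are hypothetical.

```r
library(survival)

# Hypothetical cohort table: one row per patient, with a logical 'granuloma' flag
hcc <- read.csv("hcc_cohort.csv")

# Continuous variable compared between groups with the Mann-Whitney U test
# (the two-sample Wilcoxon rank-sum test in R)
wilcox.test(tumor_size_cm ~ granuloma, data = hcc)

# Proportions of a categorical variable compared with Fisher's exact test
fisher.test(table(hcc$granuloma, hcc$microvascular_invasion))

# Overall survival: Kaplan-Meier estimates per group and a log-rank comparison
fit <- survfit(Surv(followup_months, died) ~ granuloma, data = hcc)
plot(fit, xlab = "Months", ylab = "Overall survival")
survdiff(Surv(followup_months, died) ~ granuloma, data = hcc)
```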
Ethical Aspects
The study was conducted in full accordance with local GCP guideline and current legislations, while the permission was obtained from the institutional ethics committee (Date: 7.17.2019, Approval number: 583) for the use of patient data for publication purposes.
Clinicopathologic Features of the Study Cohort
The patients were four males and one female [F/M=0.25, vs 0.2 in cases without (w/o) granulomas]. The mean age was 63.6 years (vs. 57.2 in HCC w/o granulomas) ( Table I).
Of the 5 cases with granuloma, 2 had Hepatitis B, 1 had Hepatitis C, 1 had non-alcoholic steatohepatitis, and 1 had multiple factors (Hepatitis B and alcohol) leading to cirrhosis. The non-granulomatous group had similar etiological distribution, as viral hepatitis was the main cause of chronic liver disease. In contrast to granulomatous ones, approximately one-tenth (7.6 %) of the non-granulomatous cases were devoid of advanced fibrosis. The proportion of patients with neoadjuvant therapy was roughly similar between the two groups (20% in granulomatous and 28% in non-granulomatous cases). All the patients in study cohort were organ confined (Table I).
Mean numbers of tumor foci were 3.6 and 2.6 in granulomatous and non-granulomatous cases, respectively. Mean and median tumor sizes of granulomatous cases were smaller (3.9 cm vs. 4.75 cm and 2.2 cm vs. 3.2 cm in non-granulomatous cases), although the difference did not reach statistical significance (Table I).
Totally 131 tumor nodules were investigated in 58 cases, while 17 of them were noted in 5 granulomatous cases. Four of 5 cases were multifocal, with 1 of 4 (Case 1), 2 of 5 (Case 3), 3 of 5 (Case 4) and 2 of 2 (Case 5) tumor foci with granulomas. Collectively, granulomas were detected in 9 tumor foci (Table II). No statistically significant difference was found between the groups regarding these documented features (Table I).
No drug history was identified except for anti-hypertensive and anti-diabetic medications. Only 1 of 5 patient was treated for Tbc 7 years before (Case 1), with sequel changes at the apex of the lung. Of note, this was a nonnecrotizing case (Table II).
Histopathologic Details of Granulomatous Inflammation
Granulomas were localized mainly in circumferential regions of tumor stroma (n=6), within ~2 mm (approximately 10x objective diameter) from the tumor/nontumor interface, even very rarely in touch with tumor pseudocapsule (Case 2). In 2 foci with steatohepatitis-like features, granulomas were concentrated specifically in these areas, instead of tumor periphery (Case 1 and 4, Figure 1A) while intermingled with clear cells in two foci (Case 3, Figure 1B). One focus with nodule-in-nodule formation had multiple granulomas located on the fibrotic pseudocapsule surrounding the inner nodule (Case 5). This focus also had granulomas at the peripheral part of the outer nodule.
Regarding the histologic subtypes of the 17 foci in 5 granulomatous cases, two foci (Case 1 and the dominant focus of Case 3) were of the steatohepatitic (SH) subtype (more than 5% of the tumor represents SH features), while another (one focus of Case 4) was characterised by focal (less than 5%) SH features. Macrovascular invasion was not identified in granulomatous cases, in contrast to one-tenth of non-granulomatous ones (5 of 53, 9.4%). Microvascular invasion was detected in 40% and 32% of cases in the granulomatous and non-granulomatous groups, respectively.
Except for one case, all granulomatous cases presented with early stage (pT1/2) in comparison with 74% of HCC cases w/o granulomas.
Only 1 case revealed necrotizing granulomas with palisading histiocytes (Case 5, Figure 3). Langhans-type giant cells were common and identified in 6 of 9 foci.

The moderate density of lymphocytic infiltration was intermingled with histiocytes in all granulomas. No eosinophils were identified. Granulomas did not contain any tumor cells, either.

Two tumor foci revealed rare granulomas in the adjacent liver parenchyma (Case 3 and 4). Lymph nodes were dissected in only one patient (Case 1), and these did not reveal any granulomas.

Clinical Course

Follow-up and overall survival times (min-max: 1-60 months) were available in all cases. Nine patients died perioperatively.

Of the remaining 49 patients, 44 were non-granulomatous cases. Since only 4 patients died in this group, and the data of 9% (4/44) do not allow Kaplan-Meier analysis, median survival could not be calculated. Cases without granulomas had 1-yr, 3-yr and 5-yr survival rates of 89.4%. One of them was alive with multiple intraabdominal recurrences at the 29th month.

Among cases with granulomas (n = 5), 2 died of disease. Median survival was 33.6 months. One of them was alive with a humerus metastasis at the 18th month.

The overall survival was not found to be different between the groups (p = 0.12) (Figure 4).

DISCUSSION

Among the 58 HCCs included in this study, which to our knowledge represents the first cohort to date analyzed for this purpose, 5 cases (8.6%) had granulomas. Granulomatous inflammation in HCC appears to be a very rare histologic finding in the literature, reported in middle-aged patients with viral hepatitis and/or cirrhosis (Table III) (9-12). There are also 2 more cases not included in the table since there is no detailed information about the patients' characteristics. One of these is an HCC case with intratumoral granulomas written by Neville et al. (18), and the other is a rare case diagnosed as hepatocellular neoplasm of uncertain malignant potential (19).

Granulomas accompanying malignancy can be located within tumor stroma in a randomly dispersed fashion. They can also show a tendency towards the peripheral regions of the neoplastic lesion, such as beneath the tumor pseudocapsule in capsulated lesions (but still within the tumor) or at the edge of tumor stroma, creating a border between the neoplasia and the surrounding parenchyma (7,20-22). The latter was the predominant pattern detected in this cohort (6 of 9 foci). Bässler and Birke reported lymphocyte-poor (naked) and compact 'sarcoid-like' granulomas relatively in circumferential regions (21). Since lymphocytes were accompanying the histiocytes in all granulomas, no sarcoid-like granulomas were detected in this study and it was not possible to comment on such a distribution difference.

In addition to tumor stroma, granulomas may also be detected in the nonneoplastic parenchymal part of the tumorous organ (22). This phenomenon was seen in only 2 of 58 cases (Case 3 and 4).

The granulomas were not fairly uniform. There were predominantly non-necrotizing ones (7 foci, 4 cases) and less frequently necrotizing ones (2 foci, 1 case), some with palisading histiocytes (1 focus), similar to the infrequent reporting of necrotizing granulomas in the literature (23). All of them had a mononuclear infiltrate, as reported in the literature (21). However, none of them had eosinophils, unlike Kojima et al.'s findings (24).

When it comes to the underlying mechanism of granuloma formation, a T cell-dependent reaction to degrade tumor particles is the recognized pathogenesis, although the exact antigens in each type of tumor are not known (25). In our opinion, their propensity to locate circumferentially as well as the conspicuous alignment around the inner nodule of one case (Case 5) are histologic features supportive of the host response to tumor antigens.

Soluble tumor antigens also induce a granulomatous response wherever they drain (i.e., regional LNs, liver) (3,25). This cohort revealed only one example of granulomatous lymphadenitis (in a case w/o granulomas in his tumor) and 4 cases with rare granulomas in the nonneoplastic liver (2 cases w/o granulomas and 2 cases with granulomas in their tumor - Case 3 and 4).

Drugs are also reported as a causative factor of granulomas in hepatocellular lesions. Ichikawa et al. recorded chemoembolic lipiodol droplets, while others reported the role of oral contraceptives (OC) (11,27). None of the granulomatous cases had neoadjuvant therapy or an OC history in this study.

Bieze et al. reported 5 hepatocellular adenomas (HCA) with granulomas in the tumor stroma and/or background liver tissue. Since 4 of them were inflammatory type HCA, they pointed to the impact of prolonged chronic inflammatory stress (27). Unlike this report, the inflammatory infiltrate score was minimal for 4 of 5 cases, except for Case 4 with moderate/dense infiltration, with no statistically significant difference between the groups (p>0.05).

Specific to necrotizing granulomas, two additional mechanisms are also discussed. Bässler and Birke interpreted necrosis as a hypersensitivity reaction triggered by persistent antigen expression due to their patient's recurrence history of breast carcinoma (21). And since nonviable tumor cells were detected in necrotizing granulomas, Coyne pointed out the role of necrotic tumor cells (23). However, the one necrotizing case (Case 5) in this series had neither a previous history of malignancy nor necrotic tumor cells in the granulomas.

The cytoplasmic content of tumor cells is another suspected reason for the granulomatous reaction. The high glycogen content seen in seminomas, and in clear cell and papillary renal cell carcinomas, is stated as a trigger of this reaction. The striking accumulation of granulomas in close relation to steatohepatitic tumor cells appears to support this mechanism.

It is worth noting that since granulomatous inflammation has a wide range of etiology, its presence in a malignant case raises the question of the exclusion of other causes in order to argue the aforementioned pathogenesis. The differential diagnosis can be extremely difficult, especially in cases with a systemic granulomatous response known as 'tumor-related sarcoid-like reaction' (7,28). In this cohort, none of the patients had a history of any other granulomatous disease, not only before the surgery but also along the follow-up period, except that one granulomatous case had a history of Tbc, which had been effectively treated 7 years earlier (Case 1). In Case 1, the small, noncaseating structure of the granuloma and, additionally and more importantly, the presence of accompanying giant cells in its immediate vicinity and the concentration of this histiocytic reaction solely in the steatotic area of the tumor were the features most suggestive that it was not associated with Tbc (Figure 1A). The hilar lymph nodes of the explant were also devoid of any granulomas, which minimizes the possibility of a systemic/infection-related inflammatory response. Besides, the patient had no clinical findings for Tbc.

The prognostic impact of the granulomas remains an unanswered question that has recently generated remarkable interest among cancer researchers, especially with the introduction of immune checkpoint inhibitors (ICIs) (22). Despite some previous conflicting reports, there is progressively increasing literature indicating its role in local tumor regression as well as metastasis prevention (21,29). Recently, sarcoidosis-like granulomatous reactions (SLR) - similar to sarcoidosis in aspects of both histology and clinical manifestation - have been described as a side effect of ICIs. There is evidence that these drugs show their effect as an anti-tumor agent through this reaction (30). They inactivate proteins (synthesized by immune cells or tumor cells) that inhibit antitumor T cell activity (31). In several malignancies, comparison of patients with and without SLR supports an association between this reaction and a better clinical response, as well as better overall survival (32). Our findings were open to interpretation either way in terms of the prognostic effect of granulomas. Although no overall survival difference was identified in this study, the granulomatous cases had worse survival rates. On the other hand, they were associated with better prognostic factors. Higher grade (3-4/4), macrovascular invasion, and relatively larger tumor size were identified in the non-granulomatous group.

In summary, hepatocellular carcinomas with granulomatous inflammation are not a very rare finding (8.6%), discovered in smaller tumors with lower grades (Grade 1-2). Granulomas are usually of the non-necrotizing type. Their peripheral location within the tumor stroma and their relation to clear cells and steatohepatitic tumor areas are remarkable. Clinicopathologic features and prognosis appear to be similar to hepatocellular carcinomas without granulomas, although definitive interpretation is not possible due to the limited numbers and follow-up time. Since the number of patients was low, these non-significant results may not represent a true indifference.

This study suggests the interference of the host immune system against tumor antigens and the role of cytoplasmic content. However, further studies are needed to elucidate the underlying mechanisms of this reaction type and to establish the clinical impact.
Stroke risk factors and treatment variables in rural and urban Austria: An analysis of the Austrian Stroke Unit Registry
Background and objectives Differences in stroke risk factors and treatment variables between rural and urban regions in Austria were analyzed retrospectively as European data on this topic are scarce. Research design and methods We performed statistical analysis using group comparisons and time series analysis of data of the Austrian Stroke Unit Registry between 2005 and 2016. 87411 patients were divided into three groups (rural, intermediate, urban) according to the degree of urbanisation classification of the European Commission/Eurostat. Results Patients in the rural group were significantly younger, more often female, had a lower pre-stroke disability, and were more frequently transported by an emergency physician. Vascular risk factors were significantly higher in urban patients, leading to a higher rate of microangiopathic etiology. Onset-to-door (ODT) and Onset-to-treatment times were significantly higher in the rural group, but ODTs decreased over time. Door-to-needle times and time to first vascular imaging were significantly lower in the rural group. Intravenous thrombolysis and rehabilitation rates were lower in urban patients. Discussion and implications Contrary to previous literature predominantly from outside of Europe, vascular risk factors were higher in Austrian urban patients. Further, rural patients had higher intravenous thrombolysis and rehabilitation rates maybe because of lower pre-stroke disability. ODTs in rural patients were generally higher, but they decreased over time, which might be a consequence of better education of the public in noticing early stroke signs, better transportation and education of emergency medical personnel, better advance notification to the receiving hospital and implementation of Stroke Units in rural areas.
Introduction

Ischemic stroke is responsible for a significant portion of disease burden and deaths, but outcome and incidence rates vary significantly between countries, as well as urban and rural regions [1]. Disparities between urban and rural regions in stroke care are increasing, which makes this topic increasingly important from a public health perspective [2].
There is a lack of sufficient data on stroke in rural areas, especially in Europe. Moreover, European evidence is largely of small scale [3][4][5][6]. A possible reason might lie in the fact that epidemiological data from Europe is confounded by a selection bias as most studies are published on data collected from only a few countries and mostly of urban populations [7]. Data from other world regions hint to suboptimal care in rural regions [8]. This might be explained by the level of education in recognition of stroke symptoms by the population, paramedics training, and transit times to hospitals [9] or the fact that patients in rural regions were less likely treated in Stroke units and to receive quick brain and vascular imaging, as well as consultations from neurologists and therapists and rehabilitation [10,11]. In addition, there seem to be differences in stroke risk factors between rural and urban populations [12].
Given the lack of data about differences in risk factors, management, and outcome of stroke in European rural and urban regions, we analyzed the respective data from the Austrian stroke registry.
Methods
The Austrian Stroke Unit Registry prospectively collects data on standard characteristics, management, and outcome of stroke patients admitted to one of the currently 38 Austrian Stroke units. It is financed by the Federal Ministry of Health and is centrally administered by the Gesundheit Ö sterreich GmbH. Stroke-relevant data is documented since 2003 in an anonymized fashion and scientific analyses have to be approved and supervised by an expert committee. Data entry, data protection, administration, and scientific analysis are regulated by the Federal Law on Quality in Health, the Federal Law on Gesundheit Ö sterreich GmbH § 15a, and the Stroke Unit Registry Act.
This study analyzed registry data from 2005 to 2016 due to the low number of established Stroke units in Austria between 2003 and 2005. At the time of analysis there were n = 144 419 data sets available in the registry, of which n = 103 810 corresponded to ischemic strokes. Excluding those where no geographic information was available, n = 87 411 cases were finally included. Using the postal code of each patient, we categorized all data sets into 3 groups according to the degree of urbanisation (DEGURBA) classification of the European Commission/Eurostat. This classification is based on the population size, density and contiguity of local administrative units level 2/municipalities (LAU2); for a medical study using the same classification, see [13]. In a second step, these LAU2 are classified as follows: densely populated area (alternate name: cities or large urban area), with at least 50% of the population living in high-density clusters; intermediate density area (alternate name: towns and suburbs or small urban area), with less than 50% of the population living in rural grid cells and less than 50% living in a high-density cluster; and thinly populated area (alternate name: rural area), with more than 50% of the population living in rural grid cells (Fig 1). Applying this methodology to our data, the 3 groups consisted of the following numbers of data sets: urban n = 28 640, intermediate n = 21 505, and rural n = 37 266. Of these cases, n = 11 057, n = 7 118, and n = 11 122 in the 3 respective groups (urban, intermediate, rural) were reached for a follow-up telephone interview 3 months after the incident. The interviews were performed with either the patient and/or the care giver, or the treating physician.
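For illustration only, the grouping step could be implemented in R roughly as sketched below; the lookup table `postalcode_degurba.csv` (postal code to DEGURBA code) and all column names are hypothetical and not part of the registry.

```r
# Hypothetical correspondence table: postal_code -> DEGURBA code
# (1 = densely populated, 2 = intermediate, 3 = thinly populated)
degurba_lookup <- read.csv("postalcode_degurba.csv")

# Attach the DEGURBA code to each registry record via the patient's postal code
registry <- merge(registry, degurba_lookup, by = "postal_code", all.x = TRUE)

# Label the three analysis groups and drop records without geographic information
registry$region <- factor(registry$degurba_code, levels = c(1, 2, 3),
                          labels = c("urban", "intermediate", "rural"))
registry <- registry[!is.na(registry$region), ]
table(registry$region)
```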
The following variables of the Austrian Stroke Registry were included in the analysis: age, gender, risk factors (arterial hypertension, diabetes, previous stroke, heart attack, hypercholesterolemia, atrial fibrillation, other cardiac diseases, peripheral artery occlusive disease, smoking, regular alcohol consumption, acute alcohol intoxication), several other pre-hospital variables, such as the modified Rankin scale (mRS) before the event, mode of transportation (ambulance with/without emergency physician or other), as well as the onset-to-door time (ODT); variables at and during admission to the Stroke unit, such as National Institutes of Health Stroke Scale (NIH SS) and mRS, etiology (e.g. microangiopathic, macroangiopathic, cardioembolic, others, unkown), door-to-needle time (DNT), onset-to-treatment time (OTT), the time to first cerebral and vascular imaging, the intravenous (iv) thrombolysis rate, and the rate of interventional endovascular treatment. Finally, outcome variables, NIH and mRS at discharge from the Stroke unit, mortality rates, and referral rates to rehabilitation, were analyzed. Analyzed data from the follow-up interview include mRS and rehabilitation rates. For all time points patients with mRS 0 and 1 were summed as well as patients with mRS 2-5.
The statistical analyses were performed with the software package R. Comparisons between groups included χ2-tests and Kruskal-Wallis tests. We corrected for multiple comparisons using the Bonferroni method. The level of significance was set at p < .001. Furthermore, the time series of ODTs were modeled by an autoregressive model. We applied model selection with AIC to determine the appropriate order of the autoregressive time-series models. Finally, we assessed the clinical relevance of our statistically significant findings by looking at the relative difference of parameters between the urban and the rural group.
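A minimal, purely illustrative sketch of this analysis pipeline in R is given below; the object and variable names are hypothetical, not taken from the registry.

```r
# Categorical risk factor vs. region: chi-squared test on the contingency table
chisq.test(table(registry$region, registry$hypertension))

# Continuous variable (e.g. onset-to-door time) across the three groups: Kruskal-Wallis test
kruskal.test(odt_minutes ~ region, data = registry)

# Bonferroni correction applied to the vector of p-values from all such comparisons
p_adjusted <- p.adjust(p_values, method = "bonferroni")

# Autoregressive model of an aggregated ODT series for one group,
# with the model order selected by AIC (the default behaviour of ar())
odt_rural_ts <- ts(monthly_median_odt_rural, frequency = 12, start = c(2005, 1))
ar_fit <- ar(odt_rural_ts, aic = TRUE, order.max = 5)
ar_fit$order  # order chosen by AIC
```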
Results
A detailed overview of all results is delineated in Tables 1-3. In summary, there was a significant difference in age and gender (each p<0.001), with higher age in the urban group compared to the other groups (Table 1). Further, a significant difference in all risk factors between patient groups was found (p<0.001), except for atrial fibrillation, other cardiac diseases and acute alcohol intoxication (Table 1). Etiology of stroke differed significantly between groups, with a higher rate of microangiopathic strokes in urban regions (p<0.001) (Table 2). ODT and OTT were significantly higher in the rural group (p<0.001), whereas DNT was significantly higher in the urban group (p<0.001) compared to the respective other groups. Times to first cerebral (p = 0.009) and vascular imaging (p<0.001) were significantly higher in the urban group (Table 2). Autoregressive time-series models revealed that for the ODTs of the rural and urban groups a first-order autoregressive model is preferred (Fig 2). Treatment variables differed significantly between groups, with a higher iv thrombolysis rate and referral rate to rehabilitation in the rural group (both p<0.001), whereas the rate of endovascular treatment did not differ significantly (p = 1.000).
An analysis of severity of stroke and disability variables before, during and after Stroke unit treatment (Table 3) revealed a lower disability of rural patients (measured through the mRS) before the event and 3-months after the event (both p<0.001), but not at admission or discharge from the Stroke unit (both p = 1.000). Median NIH SS reached 4 points in all groups at admission and 2 in all groups at discharge.
Relative differences between the rural and urban groups are listed in Tables 1-3.
Discussion
In this study we analyzed data from the Austrian Stroke Registry in order to contribute data from a high-income European country on differences between rural and urban populations in stroke risk factors and treatment variables, because until now the literature has been dominated by studies on non-European and low-income countries, and these findings cannot be easily translated to high-income countries [14]. Our analysis revealed that urban patients in Austria show a different risk profile compared to those living in intermediate or rural areas. In detail, they had higher rates of arterial hypertension, diabetes, hypercholesterolemia, smoking, alcohol consumption, peripheral artery occlusive disease, and prior heart attacks and strokes. All these variables showed a relative difference between groups of about 20% or more, except for arterial hypertension, where the relative difference was small. Congruently, pre-stroke disability and microangiopathic stroke etiology were lower in rural patients compared to the other groups, although the relative difference in pre-stroke disability was rather small. Interestingly, this is contrary to previous data revealing higher rates of hyperlipidemia and prior stroke in rural patients in China [12] and a higher Body-Mass-Index, a more sedentary lifestyle, and higher cholesterol levels in a Swedish sample of rural inhabitants [15]. We think that there are several possible explanations for this: First, health systems and several socioeconomic variables, such as income and education, differ between Austria and developing countries, especially in rural populations, which might lead to a healthier lifestyle and a higher focus on preventive medical measures in rural Austria. Second, the Swedish sample was of smaller size and not focused on stroke patients, which might explain the differing results. Finally, we observed differences in some variables, such as age, gender, and NIH scores at admission and discharge, between groups, but they were low in absolute values and relative differences, implying low clinical significance. Nevertheless, the overall consistent pattern between risk factors and etiology in the groups makes us confident that these differences are indeed clinically relevant.
Another interesting aspect of our analyses was that ODTs differed significantly between the groups at the beginning, but not at the end, of the observation period due to a decreasing trend. Overall, the relative difference in ODT was high. One could hypothesize that this trend follows the evolution in patient transport due to technical development and advances in helicopter availability in recent years [16]. Furthermore, decreasing ODTs might be a consequence of better education of the general public in noticing early stroke signs, education of emergency medical service personnel, and advance notification to the receiving hospital [17], or of the implementation of Stroke units in rural Austrian regions in recent years [18]. An inverse relation was found for DNT and the times to first cerebral and vascular imaging in our study, with higher times in urban than in rural patients. While the relative difference for DNT and the time to cerebral imaging was small, it implied high clinical significance for the time to first vascular imaging. This might be explained by differences in hospital sizes, with shorter within-hospital distances between the emergency room, imaging facilities and the Stroke unit in rural areas.
Furthermore, we found a statistically and clinically significant difference in iv thrombolysis rates, with more rural patients receiving this treatment. In a Canadian sample, no such difference could be found [11], and actually most evidence hints at poorer health care in rural hospitals [19-22]. The combination of higher iv thrombolysis rates and higher OTTs in the group of rural inhabitants is indeed an interesting result that cannot be easily explained. In fact, differing stroke severity cannot have influenced these results, as our analyses of disability ratings at admission and discharge from the Stroke unit show a similar degree of stroke severity. A possible explanation might be the fact that rural inhabitants had a lower pre-stroke disability, leading to a lower rate of contraindications for iv thrombolysis. Lower pre-stroke disability could, in turn, be a consequence of lifestyle differences and consequently lower cardiovascular risk factors and fewer prior heart attacks/strokes in rural patients. In contrast, the rate of endovascular treatment was not significantly different between groups. Although this analysis is based on a far lower number of data points, it is nevertheless in line with recent evidence from Austria [23].
The limitations of our study arise from the data analyzed: First, the Austrian Stroke Unit Registry only includes patients that are treated at Stroke units and not those who are admitted to hospitals without a Stroke unit, who are admitted to a general ward for clinical or capacity reasons, or who do not consult a doctor at all. On the one hand, one might speculate that such cases are more prevalent in rural than in urban regions and that the registry data are therefore biased. On the other hand, the number of stroke patients treated in Stroke units in Austria has increased constantly and exceeded 60% in 2013 [24]. Moreover, the relevant Austrian treatment recommendations emphasize Stroke unit treatment as the standard of care and limit iv thrombolysis to the Stroke unit setting. We believe that risk factors for strokes treated outside of Stroke units should not differ significantly from those of our Stroke unit patient sample. Finally, treatment decision-making (e.g. inclusion and exclusion criteria for iv thrombolysis) should not differ significantly between urban and rural regions of Austria. In light of these considerations, we think that the Austrian Stroke Registry data are highly suited to be the basis of our analyses [25]. Second, the data available in the registry are regulated by law, and therefore other possibly relevant variables not covered by the registry, such as ethnicity, socioeconomic status, or physical activity, were not available for analysis and could not be accessed due to anonymization. Third, our study is not population based in an epidemiological sense. Fourth, the division of data into groups was based on the postal code of patients. Even though this methodology has been applied before [13], it cannot be excluded that some patients might have been treated in a hospital in an area other than their hometown/city, but this should apply to all groups to a similar extent. Furthermore, each Stroke unit in Austria has a defined catchment area, which makes it most probable that patients are treated in the Stroke unit nearest to their postal code of residence.
We think that our results contribute significantly to the existing literature on differences in stroke risk factors and treatment variables between rural and urban regions, as prior data on European countries, especially high-income ones, were limited. European countries have different geographical characteristics compared to other high-income countries in North America and Australia, and huge differences in health systems exist when compared to low-income Asian or African countries. Therefore, implications for Europe cannot be drawn from data collected on those continents. Conversely, we believe the results of our study based on Austrian data could be applicable to other countries with a similar geography, health care system, and socioeconomic status of inhabitants, such as Germany, Switzerland, and other Western European countries.
Our results favor the implementation of preventive measures targeting cardiovascular risk factors in order to promote stroke-related public health, especially in urban regions of Europe. Even though investments in the development of transportation have already led to significant improvements in the stroke management of rural patients in the past, a further reduction of ODTs in order to reach the urban 'benchmark' is necessary. Moreover, we call for further studies in other European countries in order to assess the comparability of our results. Finally, it will be interesting to see how recent developments in the techniques and accessibility of endovascular treatment will influence treatment and outcome variables in urban and rural regions of Austria.
Plasma Brain Natriuretic Peptide Levels in Children with Chronic Kidney Disease and Renal Transplant Recipients: A Single Center Study
Pediatric chronic kidney disease (CKD) patients, as well as kidney transplant patients, are at an increased risk of developing cardiovascular disease. BNP measurement, as a biomarker of cardiovascular risk, has been recommended for this high-risk population. Plasma BNP levels were measured in 56 CKD children who were in the pre-dialysis stage, on hemodialysis (HD), or renal transplant recipients (RTRs), and in 76 sex- and age-matched healthy controls. BNP levels were also investigated in HD children before and after the completion of their HD session. BNP levels in the total CKD population, in pre-dialysis stage patients, and in those on HD were significantly higher compared to the respective controls. HD children had higher BNP levels compared to CKD patients in the pre-dialysis stage. Moreover, post-HD BNP concentration was slightly higher than pre-HD, with the difference being marginally statistically significant. BNP was inversely correlated with eGFR and positively correlated with creatinine, cystatin C and parathormone, and negatively with albumin and 25-hydroxyvitamin D. An inverse correlation between BNP concentration and the E/A ratio on pulse-wave Doppler echocardiography was also observed. In conclusion, CKD pediatric patients, mainly those undergoing HD, have high plasma BNP levels which do not decrease after the HD session. This is indicative of a greater risk for future cardiovascular disease.
Introduction
An estimated eleven to thirteen percent of the world's population suffers from chronic kidney disease (CKD) [1]. In European countries, the prevalence of pediatric CKD ranges from 11-12 per million of the age-related population (pmarp) for CKD stages 3-5 to eight pmarp for stages 4-5, with a male to female ratio of about 1.3 to 2 [2]. Pediatric CKD is associated with increased morbidity and mortality; in children with CKD stage 5, mortality is 30 times higher than in their healthy peers, while a significant mortality rate is also observed in renal transplant recipients (RTRs) [2].
The study was carried out at the Department of Nephrology of the "P. & A. Kyriakou" Children's Hospital. The study was approved by the hospital's Ethics Committee, and written informed consent was received from all participants' parents, or from the individuals themselves if older than 18 years, before enrollment in the study.
Study Population
The study population consisted of 56 children and adolescents, aged 2.8 to 20 years old, with CKD stages 2-5 and 76 sex- and age-matched healthy controls. The most frequent causes of CKD were congenital anomalies of the kidney and urinary tract (CAKUT) (n = 36, 64.3%), followed by hereditary nephropathies (n = 7, 12.5%) and glomerulonephropathies (n = 7, 12.5%). Other renal diseases were observed in four children (7.1%), whereas in two patients (3.6%) the cause of CKD was uncertain. At the time of evaluation, 24 out of 56 CKD patients were in the pre-dialysis stage, 14 were under chronic HD and 18 were RTRs. A total of 12 out of 14 HD patients were under conventional hemodialysis thrice weekly, and the remaining two patients were dialyzed four times per week. Every session lasted approximately 4.5 h. Gambro AK 2005 and Nikkiso DBB-05 machines were used for dialysis. Dialyzers were chosen according to body surface area (BSA), and low-flux membranes were used. The RTR patients were under prednisolone and mycophenolate mofetil in combination with cyclosporine or tacrolimus. Thirty-three patients (60%) of the total CKD population were under antihypertensive therapy at the time of evaluation. Inclusion criteria consisted of the initiation of HD sessions and performance of renal transplantation at least 2 and 6 months before the time of evaluation, respectively. Patients in the pre-dialysis stage with CKD stages 1 and 2, patients on peritoneal dialysis (PD), as well as patients with congenital heart diseases, pulmonary disorders, liver dysfunction or current infections, were excluded from the study.
CKD Definition and Staging Classification
The criteria recommended by the Kidney Disease Outcomes Quality Initiative (K/DOQI) were used for the definition of CKD, and the estimated GFR (eGFR) according to the K/DOQI CKD classification was used for CKD staging [14]. In all patients, including RTRs, eGFR was calculated by the Schwartz formula [eGFR = k × (height in cm)/serum Cr (mg/dL)] in mL/min/1.73 m² [15].
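As an illustration only (not part of the original study protocol), the Schwartz calculation above can be written as a one-line function; the constant k = 0.413 used here is the updated "bedside" Schwartz coefficient and is an assumption, as the study does not state which value of k it applied:

```python
# Hedged sketch of the Schwartz eGFR formula quoted above; k and the example
# values are illustrative placeholders, not data from this study.
def schwartz_egfr(height_cm: float, serum_cr_mg_dl: float, k: float = 0.413) -> float:
    """eGFR (mL/min/1.73 m^2) = k * height (cm) / serum creatinine (mg/dL)."""
    return k * height_cm / serum_cr_mg_dl

print(round(schwartz_egfr(140, 1.2), 1))  # ~48.2 for a hypothetical patient
```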
Patient Evaluation
In all participants (patients and controls), a complete physical examination was carried out and a morning fasting blood sample was obtained. In CKD patients, a detailed history of renal disease was recorded, including age of onset, disease etiology, medication use, and duration of HD and transplantation.
Anthropometric Measurements
Baseline demographic data, including age (years), body weight (kg), height (cm), body mass index (body weight in kg per height in m²) and body surface area (m²), were collected. The body weight (BW) and height (Ht) were measured to the nearest 0.1 kg with an electronic scale (SECA) and to the nearest 0.1 cm with a wall stadiometer (Hyssna), respectively, with the subjects lightly dressed and barefoot. The standard deviation scores (z-scores) of BW, Ht and body mass index (BMI) were also calculated using a standardized age- and sex-specific calculator. In HD patients, the "dry weight" was determined clinically. Blood pressure (BP) was measured three times with a cuff size suitable for the arm, with the child in a sitting position, using an electronic automatic oscillometric device (Dynamap). An average value of three measurements of systolic (SBP) and diastolic BP (DBP) equal to or above the 95th percentile, according to chart percentiles for age, sex and height, was defined as hypertension [16]. Standard deviation scores of SBP and DBP were also calculated with the use of a standardized calculator based on age, sex and height. In HD children, body weight and blood pressure were measured twice, before and after the completion of HD.
Cardiac Evaluation
A cardiac evaluation was performed in all patients at the time of enrollment into the study. HD patients were evaluated after the end of the HD session. M-mode, 2D echocardiography and pulse-wave Doppler were carried out by a senior pediatric cardiologist. M-mode and 2D calculations were carried out with 3.5 and 5.5 MHz probes suitable for the children's age, in a lying position and after rest, using a Siemens Acuson Sequoia ultrasound machine. The left ventricular diameter at end diastole (LVEDD in cm) and at end systole (LVESD in cm), posterior wall thickness in diastole (PWT in cm), inter-ventricular septum thickness at end diastole (IVS in cm), left atrial size (LA in cm) and ejection fraction (EF%) were measured. An EF ≤ 55% was used to define systolic dysfunction [17]. Measurements of IVS, PWT and LVEDD were used to calculate left ventricular mass (LVM in g) according to the Devereux formula: LVM (g) = 0.8 × 1.04 × [(IVS + LVEDD + PWT)³ − LVEDD³] + 0.6 g [18]. LVM was indexed to height and expressed in g/m^2.7 (LVMI) in order to minimize the effect of age, sex, and obesity [19]. LVMI values equal to or above the 95th percentile of sex- and age-specific reference intervals for healthy children were used to diagnose left ventricular hypertrophy (LVH) [20]. Relative wall thickness (RWT) was also calculated by the equation RWT = (2 × PWT)/LVEDD. RWT ≥ 0.42 was indicative of concentric hypertrophy and RWT < 0.42 of eccentric hypertrophy [21]. Pulse-wave Doppler echocardiography was used to measure the mitral valve early diastolic flow velocity (E in m/sec), the late atrial filling velocity (A in m/sec) and the deceleration time of the E wave (DT in msec). The ratio between E and A (E/A) was calculated, and a value of E/A < 1 was regarded as grade 1 diastolic dysfunction (abnormal relaxation) [17]. Additionally, age- and sex-related standard deviation scores (z-scores) of echocardiographic parameters were calculated with the use of the Boston Children's Hospital z-score system (https://zscore.chboston.org/ accessed on 23 November 2021) and the Canadian Society of Echocardiography calculator (http://csecho.ca/mdmath/ accessed on 23 November 2021).
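Purely as an illustration of the derivations quoted above (Devereux LVM, height-indexed LVMI, and RWT), a short sketch is given below; the input values are hypothetical and not taken from the study:

```python
# Hedged sketch of the echocardiographic calculations described above.
def devereux_lvm(ivs_cm: float, lvedd_cm: float, pwt_cm: float) -> float:
    """LVM (g) = 0.8 * 1.04 * [(IVS + LVEDD + PWT)^3 - LVEDD^3] + 0.6."""
    return 0.8 * 1.04 * ((ivs_cm + lvedd_cm + pwt_cm) ** 3 - lvedd_cm ** 3) + 0.6

def lvmi(lvm_g: float, height_m: float) -> float:
    """LVM indexed to height^2.7, in g/m^2.7."""
    return lvm_g / height_m ** 2.7

def rwt(pwt_cm: float, lvedd_cm: float) -> float:
    """Relative wall thickness; >= 0.42 suggests concentric geometry."""
    return 2 * pwt_cm / lvedd_cm

lvm = devereux_lvm(ivs_cm=0.8, lvedd_cm=4.2, pwt_cm=0.8)
print(round(lvm, 1), round(lvmi(lvm, 1.45), 1), round(rwt(0.8, 4.2), 2))
# -> 101.3 g, 37.1 g/m^2.7, 0.38
```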
Laboratory Data
All participants had venous blood samples collected in the morning (8-9 a.m.), after an overnight fast. Routine laboratory investigations included serum urea, creatinine, cystatin C, uric acid, albumin, electrolytes, aspartate aminotransferase (AST), alanine aminotransferase (ALT), gamma-glutamyltransferase (γGT), lipid profile, a complete blood count, plasma intact PTH (iPTH) and serum total 25-hydroxyvitamin D (25(OH)D). In HD children, the markers of renal function and the electrolytes were measured twice, before and after dialysis. Estimated GFR was also calculated pre- and post-HD. Anemia was defined as hemoglobin levels below the lower normal limits for age and sex.
BNP Measurement
BNP was measured in plasma samples obtained after centrifugation (1600× g for 10 min at 4 °C) of 2 mL whole blood collected in K2EDTA (ethylene diamine tetra-acetic acid) tubes, which were immediately placed and transported on ice. The obtained plasma was transferred to Eppendorf tubes and stored at −70 °C until the time of the analysis. Plasma BNP fragment (BI-20852W) was measured by ELISA according to the manufacturer's instructions. The kit was supplied by Biomedica Gruppe (A-1210 Wien, Divischgasse 4, Wien, Austria). The intra- and inter-assay coefficients of variation of the kit were 6% and 8%, respectively, and the detection limit was 171 pmol/L (0.171 pmol/mL). In HD patients, the BNP levels were measured twice, before and immediately after the end of the HD session.
Statistical Analysis
For data analysis, the IBM SPSS Statistics v25 software (International Business Machines Corporation, Armonk, NY, USA) was used. The main examined parameter, BNP, was tested for normality of distribution by visual assessment (histogram of frequencies) as well as by the Kolmogorov-Smirnov test and was found to be non-normally distributed. Consequently, all data were treated accordingly, and non-parametric tests were implemented. Scale variables are presented as medians and interquartile ranges (IQR). Categorical variables are presented as actual numbers and selected group percentages. Correlation between scale variables was assessed using the Spearman test. Correlation between categorical variables was assessed using Fisher's exact test for four-fold tables and the chi-squared test for variables with more than two categories. Distribution of scale variables across groups of categorical variables was assessed using the Mann-Whitney U test for variables with two categories and the independent samples median test for variables with more than two categories. For paired measurements before and after intervention, the Wilcoxon test was implemented. Where applicable, two-tailed tests were used. For the further assessment of scale variables as confounding factors, the z-score was also calculated and examined. Finally, the independence of the correlation between BNP and other parameters was tested using multiple linear regression analysis. A p value of <0.05 was considered statistically significant.
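For readers who want to reproduce this kind of non-parametric workflow outside SPSS, a minimal sketch using SciPy is shown below; the variable names and the simulated data are placeholders, not the study data:

```python
# Illustrative non-parametric workflow (Kolmogorov-Smirnov, Spearman, Mann-Whitney U,
# Wilcoxon), mirroring the analysis described above. All data here are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bnp = rng.lognormal(mean=1.0, sigma=0.5, size=56)           # skewed, BNP-like values
egfr = 120 - 15 * np.log(bnp) + rng.normal(0, 10, size=56)  # toy inverse relationship
group = rng.choice(["pre-dialysis", "HD", "RTR"], size=56)

ks_stat, ks_p = stats.kstest(bnp, "norm", args=(bnp.mean(), bnp.std(ddof=1)))
rho, p_rho = stats.spearmanr(bnp, egfr)
u_stat, p_u = stats.mannwhitneyu(bnp[group == "HD"], bnp[group == "pre-dialysis"])

pre = bnp[group == "HD"]                                    # toy paired pre/post values
post = pre * rng.normal(1.1, 0.05, size=pre.size)
w_stat, p_w = stats.wilcoxon(pre, post)

print(f"KS p={ks_p:.3f}, Spearman rho={rho:.2f} (p={p_rho:.3g}), "
      f"Mann-Whitney p={p_u:.3g}, Wilcoxon p={p_w:.3g}")
```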
Population Description
Our study population consisted of 56 CKD patients, 34 males (61%) and 22 females (39%), aged 2.8 to 20 years old (median age: 11.6 y, IQR: 8.4, 14.6 years) with CKD stages 2-5, of whom 24 (43%) were in the pre-dialysis stage, 14 (25%) underwent chronic HD and 18 (32%) were RTRs. There was no significant difference in median age between the three patient subgroups (p = 0.362). In the total population, the median duration of CKD was 9.5 years (IQR: 5.3, 12.3 years). In the subgroup of HD patients, the median time on HD treatment was 0.9 years (IQR: 0.4, 1.8 years), while in the RTR subgroup, the median time from renal transplantation was 3.8 years (IQR: 0.7, 9.3 years). Thirty-three patients (59%) were under antihypertensive medications, while pre-hypertension and hypertension were observed in 14 (25%) and 7 (12.5%) patients, respectively. Low total 25(OH)D (<20 ng/mL) and increased intact PTH levels (>55 pg/mL) were found in 23 (41%) and 39 (69.6%) patients, respectively. Anemia was established in 29 patients (52%). The descriptive characteristics of the total CKD population and the patient subgroups are presented in Table 1. Regarding the cardiac parameters, increased RWT indicative of concentric hypertrophy was observed in six patients (three on HD, one in the pre-dialysis stage, two RTRs) and of eccentric hypertrophy in one HD patient. Diastolic dysfunction (E/A ratio < 1) and systolic dysfunction (EF ≤ 55%) were found in three and two patients, respectively. A total of four children, all on HD, had a thickened IVS. Increased LVEDD, PWT, and DT were found in one, two and two patients, respectively. Only one HD patient had LVH (increased LVMI). The values of the cardiac parameters are shown in Table 2.
BNP Levels in Patient Subgroups vs. Controls
The median plasma BNP concentration was significantly higher in the total CKD patient group compared to the sex- and age-matched healthy control group (n = 76, p < 0.001), while the BMI z-score did not differ significantly between the two groups (p = 0.407). Similarly, median BNP was significantly higher in the subgroup of CKD patients in the pre-dialysis stage (p = 0.030), as well as in the HD subgroup before and after dialysis, compared to the corresponding control individuals (p < 0.001 and p < 0.001, respectively). However, no significant difference was found between the RTR subgroup and the corresponding controls (p = 0.273). The above findings can be viewed in Table 3.
Correlations between BNP Levels and Patient Subgroups
Among the different patients' groups, the HD group before dialysis had significantly higher plasma BNP levels compared to the CKD group in the pre-dialysis stage (p = 0.001), as well as to the transplantation group (p = 0.001). Within the group of HD patients, median post-dialysis BNP levels were borderline higher than pre-dialysis levels (3.75 pmol/mL, 2.78-5.17 pmol/mL vs. 3.24 pmol/mL, 2.00-4.75 pmol/mL, p = 0.048, Wilcoxon test).
The median age was similar between the three subgroups of CKD patients (p = 0.362) and therefore should not be considered a possible confounding factor. In contrast, the median BMI was significantly lower in HD patients and higher in transplanted patients, and could therefore be a confounding factor. BNP levels varied with BMI, although this variation does not seem to be the only reason for the significant difference in BNP levels between the three subgroups of CKD patients.
In a multiple linear regression model assessing BNP as the dependent variable and including as independent variables those that demonstrated unifactorial statistical significance, neither of the two parameters, BMI or patient subgroup, retained statistical significance. This can be attributed to the small number of patients and their relatively uneven distribution across subgroups. Thus, the role of BMI as a confounding factor could not be conclusively examined.
In contrast, gender, pubertal stage, duration of CKD and medication use (prednisolone, other immunosuppressive and antihypertensive drugs) were not associated with median plasma BNP levels (p > 0.05). A significant inverse correlation was found between plasma BNP levels and eGFR (Figure 2, p < 0.001) and a positive one with CKD stage, with the highest levels found in CKD stage 5 (p = 0.005).
Moreover, a significant positive correlation exists between plasma BNP and creatinine (p < 0.001), cystatin C (p < 0.001), as well as with parathyroid hormone levels (p < 0.010). BNP was also significantly inversely correlated with 25-hydroxyvitamin D levels (p = 0.038), while it had a borderline negative correlation with albumin levels (p = 0.05). These correlations are shown in Figure 3. As far as cardiac and hemodynamic parameters are concerned, only the E/A ratio was significantly inversely correlated with BNP (R² linear 0.104, p = 0.034, Spearman's correlation) (Figure 4). There was no statistically significant correlation between BNP levels and EF (p = 0.182), LVM (p = 0.092), LVMI (p = 0.950), SBP (p = 0.955), DBP (p = 0.230) or HR (p = 0.113). Finally, no significant correlation was found between BNP levels and CRP (p = 0.860), presence of dyslipidemia (p = 0.376) or anemia (p = 0.431).
Multiple linear regression analysis was carried out, including BNP as the dependent variable and all parameters which in univariate analyses demonstrated a statistically significant correlation with BNP (age, eGFR, BMI z-score, E/A ratio, creatinine, cystatin C, albumin, iPTH and total 25(OH)D). The strongest independent predictor of BNP levels was cystatin C, followed by creatinine, albumin and total 25(OH)D (Table 4). All other variables lost their significance. Interestingly, eGFR per se was not highlighted as an independent prognostic factor for BNP based on the above model. This could potentially be due to the fact that parameters such as cystatin C and creatinine can be indirect indicators of CKD and therefore of various levels of eGFR. Hence, multiple linear regression was run again, this time including as independent variables eGFR, age, BMI z-score, albumin, PTH, 25-hydroxyvitamin D and the E/A ratio, as these were considered clinically relevant parameters. In that model, eGFR was indeed demonstrated to be an independent prognostic factor. In fact, it was the only factor that maintained a correlation (coefficient: −0.029, p = 0.03). However, it has to be mentioned that this model was characterized by low accuracy (24%), which means that there is a large proportion of the variability of BNP that the aforementioned models cannot predict. Potential reasons for this are the relatively small number of patients and the lack of values at the extremes.
Discussion
In the current study, the overall CKD and RTR patients had significantly higher median plasma BNP levels compared to the healthy sex- and age-matched controls. Comparing the three subgroups, dialyzed, non-dialyzed and RTRs, with their respective controls, only the first two subgroups had BNP levels significantly higher than their controls. Patients undergoing HD had higher BNP concentrations than patients in the pre-dialysis stage. BNP levels and CKD stage displayed a significant correlation, with CKD stage 5 patients having the highest levels. Pre-dialysis BNP levels in HD patients were marginally significantly lower than post-dialysis ones. BNP concentration was significantly positively correlated with serum creatinine, cystatin C, and parathyroid hormone levels and negatively with eGFR, albumin and 25-hydroxyvitamin D levels. Concerning cardiac parameters, only the E/A ratio had a significant inverse correlation with plasma BNP levels. In multiple linear regression analysis, iPTH and the E/A ratio lost their significance.
Similar to our results, Hedving et al. showed significantly higher BNP levels in pediatric CKD patients undergoing HD and in the pre-dialysis stage compared to healthy controls, whereas BNP levels of RTRs did not differ from those of healthy controls [22]. The levels of NT-proBNP have also been found to be higher in children on HD and on peritoneal dialysis (PD) than in controls, with HD patients having higher levels than PD ones. In the same study, no significant difference in NT-proBNP concentration was observed between CKD children in the pre-dialysis stage and healthy controls [23]. Increased pre-dialysis NT-proBNP concentrations in CKD children on HD compared to controls have also been reported by others [13].
Regarding the pre- and post-dialysis BNP levels, we found a marginally significant increase in median BNP levels (p = 0.048) immediately after the completion of the low-flux membrane dialysis session. Other studies, in both HD adults and pediatric patients, have shown conflicting results: a decrease, no change or an increase in BNP or NT-proBNP levels has been reported [13,[23][24][25]. Increased BNP levels 30 min before the HD session and a variable decrease or increase in the levels 30 min post-HD were observed in 33 asymptomatic children with CKD stage 5, with the change not being significant. The authors also reported that pre- and post-dialysis BNP levels were independent predictors of adverse outcome [24]. Similarly, a recent study showed that after a single HD session there were no significant changes in plasma NT-proBNP values. According to the authors, this may be due to the fact that dialysis sessions are often shorter than expected, and as a result patients fail to achieve normal blood volume [23]. In contrast, a significant reduction in BNP levels measured immediately after a high-flux HD session was found in 30 CKD children. This reduction was interpreted as heart unloading and peptide removal by filtration [25]. Similarly, higher NT-proBNP levels before the initiation of HD compared to those 30 min after the end of the HD session, using low-flux membrane dialyzers, have been reported in CKD children. In addition, HD patients with LV dysfunction had significantly higher NT-proBNP levels, both before and after HD, compared to those without LV dysfunction [13]. Finally, studies in adult CKD patients undergoing HD have also shown a decrease, no change or an increase in BNP or NT-proBNP levels after the end of the HD session [26][27][28].
A variety of factors have been suggested to explain the contradictory results of these studies, including the type of NP measured (BNP or NT-proBNP), the type of dialyzer used in HD (low-flux or high-flux membranes), and the number of hours spent undergoing dialysis each week [29,30]. BNP has a smaller molecular weight and a shorter half-life compared to NT-proBNP [31]. High-flux membranes display a higher ultrafiltration rate than low-flux ones, resulting in a significant reduction in BNP concentration after high-flux membrane dialysis and a smaller one after low-flux membrane dialysis [29,32]. Regarding NT-proBNP, a significant decrease after high-flux membrane HD and an increase after low-flux membrane HD have been reported [29,32]. Finally, the time of blood collection in relation to the HD session, as well as the measurement of the peptides in blood serum or plasma in different studies, may also affect BNP and NT-proBNP levels. In general, the potentially confounding effect of HD sessions on blood BNP or NT-proBNP levels needs to be taken into consideration [30].
Our study population showed an inverse correlation between BNP levels and both age and BMI. In contrast, BNP levels showed no association with either gender or pubertal stage. In general, age, gender, pubertal status and BMI have been reported to modify plasma BNP levels [7]. Infants and children have lower BNP levels than adults [33]. BNP concentrations remain steady after the first month of life, without any significant change up to 10-12 years of age [8,33]. During adolescence, BNP levels increase significantly [7,33]. In children, BNP levels show no gender-related differences up to the beginning of adolescence [7,8]. As adolescence progresses, BNP levels gradually increase, with girls having higher levels than boys. This may be due to the direct positive effect of female steroid hormones and the negative effect of male sex hormones on BNP production by cardiomyocytes [8]. This sex difference persists into adulthood [7]. In contrast, another study in healthy children showed no gender- or age-related differences in NT-proBNP levels [34]. Finally, a negative association of BNP levels with BMI has been observed in most studies, both in healthy subjects and in patients with CV diseases [7,35]. In contrast, no correlation between BMI and NT-proBNP levels in HD patients has been reported [36].
The differing results of studies on normal BNP levels may be due to methodological differences, since BNP values are method-dependent [11]. Furthermore, the different pathophysiological stimuli and cardiovascular hemodynamics, both in healthy individuals and in patients with heart failure (HF), can be responsible for the wide fluctuation of plasma BNP concentrations [8].
A significant inverse correlation was found between BNP levels and eGFR, and a positive one between BNP levels and CKD stage, serum creatinine and cystatin C levels. An increase in BNP or NT-proBNP levels with decreasing eGFR and increasing CKD stage has been reported in other studies [10,12,36,37]. In a study by Rinat et al., eGFR was an independent factor influencing BNP and NT-proBNP values [12]. Similarly, decreased eGFR was independently associated with elevated NT-proBNP levels in RTR children and young adults [38]. A significant positive correlation between BNP (or NT-proBNP) levels and renal function parameters such as creatinine and cystatin C has also been reported [36,39].
We did not observe any correlation between BNP levels and anemia. Similarly, in a study in HD children, NT-proBNP levels were not correlated with anemia [36]. In contrast, a significant negative correlation between BNP or NT-proBNP and anemia has been found in several studies [12,22]. Moreover, Hedvig et al. showed that a BNP concentration >100 pg/mL had a high predictive value for the incidence of anemia [22].
In the current study, multiple linear regression analysis showed that low albumin and 25(OH)D levels were independent predictors of high BNP levels. In contrast, a positive correlation between BNP and iPTH levels was observed only in unifactorial analysis. A negative or no correlation of BNP (or NT-proBNP) with 25(OH)D levels has been reported by others [40,41]. An inverse correlation between BNP or NT-proBNP and albumin levels has been reported in patients with heart failure and poor long-term prognosis [42]. Finally, both higher BNP levels in CKD patients with high iPTH than in those with low iPTH, and no relation between the above parameters, have been reported [22,36,37,41].
Data concerning the relationship between BNP and indices of functional and morphological cardiac abnormalities are also conflicting. A significant association between BNP levels and heart geometry has been reported in pediatric CKD patients, with elevated BNP levels being a predictor of abnormal heart geometry [12,22]. Notably, studies support that both BNP and NT-proBNP could serve as surrogate markers of myocardial stress in pediatric patients with CKD stages 3-4, despite the fact that both peptide levels are affected by GFR [12,22].
In the present study, no correlation was observed between BNP levels and LVMI in any patient group. Similarly, no association between NT-proBNP and LVMI or volume overload has been observed by others in CKD children in the pre-dialysis stage [43]. In contrast, other studies have found a positive correlation between BNP or NT-proBNP levels and LVMI in CKD children as well as in RTRs [22,23,38,44]. Our study showed that the E/A ratio, a marker of diastolic function, had a significant inverse correlation with BNP. The significance was not maintained in the multiple linear regression analysis assessing confounding factors. Many studies report a correlation between BNP and NT-proBNP levels and indices of diastolic dysfunction in CKD children [12,13,23,44]. Moreover, routine measurement of BNP in CKD children on PD, in order to evaluate the risk of functional and morphological cardiac abnormalities, has been recommended [44].
In a recent study on pediatric patients undergoing HD, echocardiographic parameters were assessed before and after the HD session. Pre- and post-HD BNP levels were positively correlated with different echocardiographic variables mirroring LV diastolic function before and after the HD session, respectively. A significant reduction in LV and LA diameters, as well as in the transmitral E velocity and the E/A ratio, after the HD session was reported [25]. A positive correlation between pre-HD BNP levels and DT has also been reported [24].
Concerning other cardiac and hemodynamic parameters, our study showed no significant associations between BNP levels and EF, SBP or DBP. Again, the literature remains controversial on this matter. A negative correlation between EF, a marker of systolic function, and BNP or NT-proBNP has been reported by others [13,24]. In contrast, in a recent study on HD children, EF was not affected by HD and its values were not correlated with pre- and post-HD BNP levels [25]. Finally, although there are data supporting a positive correlation between BNP (or NT-proBNP) levels and systolic and/or diastolic blood pressure in CKD children, other studies showed, in accordance with our results, that NT-proBNP levels did not correlate with SBP or DBP [12,13,23,37,38].
A previous study from our team, which used the same sample pool as the present study and examined urotensin II (UII) levels, another predictive marker of CVD in CKD patients, showed that CKD children in the pre-dialysis stage and RTRs had significantly higher levels than healthy subjects. Moreover, whereas UII levels in HD children did not differ significantly from those of healthy controls before HD, they increased significantly at the end of the HD session [45].
In conclusion, CKD children and adolescents undergoing HD and those in the pre-dialysis stage have increased plasma BNP levels, whereas RTRs have BNP levels similar to those of age- and sex-matched healthy controls. HD patients (CKD stage 5) have the highest BNP levels, which do not decrease after the end of the HD session. Renal function markers, such as creatinine and cystatin C, are independent predictors of high BNP levels. BNP was also inversely correlated with the E/A ratio, a marker of diastolic dysfunction. The above findings indicate that pediatric CKD patients are at an increased risk for cardiovascular diseases.
Limitations of the Study
The main limitations of the present study include the relatively small number of CKD children, especially those on HD, and the lack of repeated measurements of the BNP levels. Additional limitations include the assessment of dry weight in HD children, which was based only on clinical findings, and the cardiac evaluation of the HD children, which took place only once, after the end of the HD session.
Informed Consent Statement: Written consent was received from all participants and their parents, after being informed of the purpose of the study.
Characterization of Cephalotyre (Ras) Cheese Supplemented with Turmeric Powder
Cephalotyre (Ras) cheese supplemented with turmeric (Curcuma longa L.) powder was investigated for color, chemical, rheological and sensorial characteristics during ripening, in an attempt to produce a functional dairy product with the therapeutic effects of turmeric. In addition, the identification and quantification of turmeric phenolic compounds and the antioxidant activity were investigated for the tested samples. The phenolic compound profiles of the different turmeric extracts showed that the prepared turmeric had the highest content of phenolic compounds such as gallic acid, catechin, syringic acid, ellagic acid, coumaric acid, vanillin, ferulic acid, naringenin, and cinnamic acid. The addition of the prepared turmeric powder in Ras cheese processing increased the dry matter content gradually as the turmeric level increased. Also, the color factor increased as the turmeric level increased in comparison with the control cheese. The addition of turmeric powder to Ras cheese increased its hardness, while the other texture characteristics were close to those of the control cheese. The flavor and appearance of turmeric Ras cheese were slightly higher than those of the control cheese, except at the highest turmeric level. It could be concluded that the addition of prepared turmeric (Curcuma longa L.) powder to Cephalotyre (Ras) cheese as a natural coloring agent improved the chemical, physical, rheological and sensorial characteristics during ripening at the low level of 0.25%, without any apparent defects during the ripening period.
Introduction
Ras cheese is the most popular hard cheese produced in Egypt and is similar to the Greek "Cephalotyre" cheese (Abou-Donia, 2002). Ras cheese is usually made from cows' milk or a mixture of cows' and buffaloes' milk, and it is ripened for at least three months at 12-15 °C and about 80% relative humidity (El-Sayed et al., 1993). Ras cheese is produced in small industrial units located in the Delta region (Phelan et al., 1993). The popularity of Ras cheese is mainly due to its unique taste and aroma (El-Kholy, 2015). Flavor is the most significant attribute for the consumer, but color and appearance create the first impression and greatly influence the acceptability of Ras cheese. Annatto has been used for over two centuries as a food color, especially in cheese, and its various forms are now used in a wide range of food products. The food color annatto is obtained from the outer layer of the seeds of the tropical tree Bixa orellana L. Turmeric (Curcuma longa L.) is a plant distributed throughout tropical and subtropical regions of the world. It is widely cultivated in Asian countries, mainly in China and India. Turmeric contains 69.4% carbohydrates, 6.3% protein, 5.1% fat, 5.8% essential oils, and 3-6% curcuminoids (Amalraj et al., 2017). Turmeric is an essential spice all over the world, with a distinguished history of human use, particularly among Eastern peoples (Ravindran et al., 2007). Apart from its use as a spice, it is used as a traditional medicine in Asian countries such as India, Bangladesh and Pakistan because of its beneficial properties (Chattopadhyay et al., 2004). It is called turmeric (Kurkom in Egypt, Zarchooveh in Iran) and has been in continuous use for its flavoring and medicinal properties (Govindarajan, 1980). The coloring principle of turmeric is called curcumin, which has a yellow color and is the essential component of this plant (Ammon et al., 1992).
Turmeric is highly regarded as a universal panacea in herbal medicine, with a wide spectrum of pharmacological activities. Traditional medicine currently recommends its powder for gastrointestinal diseases, especially for biliary and hepatic disorders, diabetic wounds, rheumatism, inflammation, sinusitis, anorexia, coryza and cough (Ammon et al., 1992). These medicinal properties have led turmeric to be considered a spice with multifunctional medicinal properties.
Therefore, the aim of the present work was to examine the effect of turmeric powder on the color, chemical, rheological and sensorial characteristics of Cephalotyre (Ras) cheese during ripening. Also, the identification and quantification of turmeric phenolic compounds and the antioxidant activity were investigated.
Materials and methods
Raw materials and chemicals: Turmeric and curry were obtained from the herbal market, Cairo, Egypt. Fresh cow's and buffalo's milk were obtained from the herd at the Faculty of Agriculture, Cairo University, Egypt. Microbial rennet powder (RENIPLUS) extracted from Mucor miehei was purchased from Gaglio Star, Spain. The cheese starter culture used in cheese manufacture, a mixture of Lactococcus lactis subsp. lactis and Lactococcus lactis subsp. cremoris, was obtained from the Egyptian Microbial Culture Collection (MIRCEN), Ain Shams University, Egypt. Gallic acid, 2,2-diphenyl-1-picrylhydrazyl (DPPH) and Folin-Ciocalteau phenol reagent were purchased from Sigma Aldrich, Germany. Other chemicals used were of analytical grade.
Methods
Preparation of turmeric extract: Turmeric extract was prepared according to Tanvir et al. (2017) as follows: dried turmeric samples were cleaned and ground into a fine powder with a laboratory mill. The finely powdered dried turmeric ("prepared", 10%) was extracted using methanol, in comparison with both commercial turmeric powder ("ready") and curry powder ("mixture"). Extracts were stored at 5 °C for 24 h, then filtered through two layers of cheesecloth, followed by centrifugation at 5000 rpm for 15 min at 5 °C. The supernatants of the extracts were individually concentrated to dryness by rotary evaporation under reduced pressure.
Total phenolic content (TPC) of turmeric: The TPC of the turmeric extracts was determined colorimetrically using the Folin-Ciocalteau reagent, in accordance with the modified method described by Lafka et al. (2007). TPC was calculated from a calibration curve prepared with gallic acid as a standard and expressed as mg gallic acid equivalents (GAE)/ml.
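As a purely illustrative aside, the calibration-curve step described above amounts to a simple linear fit; the standard concentrations and absorbances in the sketch below are made-up placeholders, not values from this study:

```python
# Hedged sketch: converting Folin-Ciocalteau absorbances into gallic acid equivalents
# via a linear calibration curve. All numbers are illustrative.
import numpy as np

std_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])  # gallic acid standards, mg/L
std_abs = np.array([0.02, 0.11, 0.21, 0.40, 0.79])    # example absorbances

slope, intercept = np.polyfit(std_conc, std_abs, deg=1)  # A = slope*C + intercept

def tpc_gae(sample_abs: float) -> float:
    """Total phenolic content of an extract, as mg GAE/L, from its absorbance."""
    return (sample_abs - intercept) / slope

print(round(tpc_gae(0.35), 1))  # mg GAE/L for a hypothetical extract
```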
Antioxidant activity of turmeric:
The antioxidant activity of the turmeric extracts was evaluated using the stable 2,2-diphenyl-1-picryl-hydrazyl (DPPH) radical scavenging method according to Matthus et al. (2002). The radical scavenging activities of the tested samples, expressed as a percentage inhibition of DPPH, were calculated according to the following formula: % inhibition = [(A − A0)/A] × 100, where A is the absorbance at 515 nm of the control sample and A0 is the final absorbance of the test sample at 515 nm.
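A one-line sketch of this calculation is given below for illustration; the absorbance values are hypothetical:

```python
# Hedged sketch of the DPPH percentage-inhibition calculation quoted above.
def dpph_inhibition(a_control: float, a_sample: float) -> float:
    """Percentage inhibition of DPPH: [(A - A0) / A] * 100."""
    return (a_control - a_sample) / a_control * 100.0

print(round(dpph_inhibition(a_control=0.92, a_sample=0.31), 1))  # ~66.3 %
```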
Identification and quantification of turmeric phenolic compounds: The phenolic compounds of the methanolic turmeric extracts were identified and quantified by HPLC using an Agilent 1260 series instrument. The separation was carried out on a C18 column (4.6 mm x 250 mm i.d., 5 μm). The mobile phase consisted of water (A) and 0.02% trifluoroacetic acid in acetonitrile (B) at a flow rate of 1 ml/min. The mobile phase was programmed consecutively in a linear gradient as follows: 0 min (80% A); 0-5 min (80% A); 5-8 min (40% A); 8-12 min (50% A); 12-14 min (80% A) and 14-16 min (80% A). The multi-wavelength detector was monitored at 280 nm. The injection volume was 10 μl for each of the sample solutions. The column temperature was maintained at 35 °C. The phenolic compounds of each sample were identified by comparing their relative retention times with those of the standard mixture chromatogram. The concentration of each individual phenolic compound was calculated from the peak area measurements and then converted to μg of phenolic compound per gram of turmeric.
Ras cheese manufacture: Ras cheese was made by the conventional method as described by Hofi et al. (1970).
Ras cheese was produced using a mixture of cows' and buffaloes' milk; the milk mixture was divided into 4 portions. The first portion was colored with food-grade annatto color and served as the control (C); the other three portions were supplemented with prepared turmeric powder at levels of 0.25, 0.5, and 1.0 g/100 g of milk, and served as T1, T2, and T3, respectively. Samples of fresh Ras cheese and of cheese after 1, 2, and 3 months of ripening were taken for analysis.
Ras cheese chemical analysis: Ras cheese samples were chemically analyzed according to AOAC (2000) for total solids, fat, total protein, ash and soluble nitrogen contents. Acidity, expressed as lactic acid, was determined by titration with N/9 NaOH, according to Ling (1963).
Ras cheese color measurements:
The color of the Ras cheese samples was measured using a Hunter colorimeter model D2s A-2 (Hunter Assoc. Lab Inc., VA, USA), following the instructions of the user manual, as described by Hunter (1975).
Textural profile analysis (TPA) of Ras cheese: TPA was performed using a Universal Testing Machine (Cometech, B type, Taiwan) with a 25-mm-diameter Perspex conical-shaped probe, and the generated plot of force (N) versus time (s) was recorded. TPA parameters were determined according to the definitions given by the International Dairy Federation (IDF, 1991); from the resulting force-time curve, textural attributes such as hardness, chewiness, cohesiveness, gumminess and springiness were calculated.
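For readers unfamiliar with how these attributes are usually derived from a two-cycle force-time curve, a hedged sketch based on the standard texture-profile definitions (not necessarily the authors' exact computation) is shown below; all input values are placeholders:

```python
# Illustrative TPA derivation: hardness is the first-compression peak force,
# cohesiveness the ratio of work done in the second vs. first compression,
# gumminess = hardness * cohesiveness, chewiness = gumminess * springiness.
def tpa_attributes(hardness_n: float, area1: float, area2: float,
                   springiness_mm: float) -> dict:
    cohesiveness = area2 / area1
    gumminess = hardness_n * cohesiveness          # N
    chewiness = gumminess * springiness_mm         # N.mm
    return {"hardness_N": hardness_n, "cohesiveness": round(cohesiveness, 2),
            "springiness_mm": springiness_mm, "gumminess_N": round(gumminess, 2),
            "chewiness_Nmm": round(chewiness, 2)}

print(tpa_attributes(hardness_n=25.0, area1=110.0, area2=72.0, springiness_mm=6.5))
```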
Organoleptic evaluation of Ras cheese:
The sensorial properties of the experimental cheese samples, when fresh and after 1, 2, and 3 months of ripening at 12 ± 2 °C, were evaluated according to Pappas et al. (1996). The cheese was assessed by 20 panelists from the staff of the Dairy Science Department, National Research Centre, with a maximum of 50 points for flavor, 40 points for body and texture, and 10 points for the cheese's appearance.
Statistical analysis:
The average values of the results were analyzed with the SAS software (SAS, 1999) using the ANOVA procedure for analysis of variance. The results are expressed as mean ± standard error, and the differences between means were tested for significance using Duncan's multiple range test at p ≤ 0.05.
Total phenolic content of turmeric extracts
Plant phenolics are important constituents that contribute to functional quality, color and flavor, and have significant roles both as singlet oxygen quenchers and free radical scavengers, helping to minimize molecular damage (Tanvir et al., 2015). The TPC of the prepared turmeric (4.74 mg GAE/100 g) was higher than that of the commercial turmeric (ready), while the curry had the highest TPC value (Fig. 1). This could be because the turmeric powder was prepared in the laboratory without any heat treatment, which affects the phenolic compound content compared to the commercial form of turmeric, while the highest TPC of curry is mainly due to it being a mixture of different herbs containing turmeric. These findings were lower than those mentioned by Qader et al. (2011), who indicated that the TPC in turmeric ranged from 6.15% to 16.07% in ethanolic extracts, but higher than those mentioned by Wojdyło et al. (2007), who indicated that the TPC in turmeric was 1.72 mg GAE/100 g. However, the polyphenol content of turmeric as a spice varies depending on genotypic and environmental differences (climate, location, temperature, fertility, disease and pest exposure), the choice of plant parts tested, the time of sampling, and the determination methods (Kim and Lee, 2004; Shan et al., 2005).
Identification of turmeric phenolic compounds
The phenolic compound profiles of the different turmeric extracts showed that the prepared turmeric had the highest content of phenolic compounds such as gallic acid, catechin, syringic acid, ellagic acid, coumaric acid, vanillin, ferulic acid, naringenin, and cinnamic acid, in comparison with the other turmeric extracts (Table 1). This is consistent with the TPC content and antioxidant activity of the turmeric extracts shown in Fig. 1, especially for the prepared turmeric sample. Notably, ferulic acid has been shown to inhibit the photo-peroxidation of linoleic acid (Wang, 2003).
Antioxidant activity of turmeric extracts
The health benefits of phenolics are primarily derived from their antioxidant potentials because the radicals produced after hydrogen or electron donation are resonance stabilized and thus relatively stable.To counter the potential hazards of oxidative damage, the dietary consumption of antioxidant phenolics including phenolic acids and flavonoids may be regarded as the first line of defense against highly reactive toxicants (Denre, 2014).
The antioxidant activity of the prepared turmeric was slightly higher than that of the commercial turmeric (ready) (Fig. 1), which could be due to the phenolic content (Table 1) of the turmeric samples. Also, curry had the highest antioxidant activity among the turmeric samples, as shown in Fig. 1, which might be due to the TPC contributed by herbs other than turmeric. However, turmeric contains 2-9% curcuminoids (curcumin is the most abundant curcuminoid in turmeric) as well as its derivatives (bisdemethoxycurcumin and demethoxycurcumin), which have been shown to be powerful scavengers of oxygen free radicals (Anand et al., 2008; Priyadarsini, 2014). Kim and Lee (2004), Chattopadhyay et al. (2004), and Shan et al. (2005) mentioned that many phenolic compounds are considered nonenzymatic antioxidants with radical-scavenging power, such as catechins, flavonols (kaempferol), and phenolic acids (caffeic and coumaric acids). Hence, some authors (Katsube et al., 2004; Katalinic et al., 2006) have demonstrated a linear correlation between the content of total phenolic compounds and their antioxidant capacity. Also, Wojdyło et al. (2007) reported that Polish spices were rich in phenolic constituents and demonstrated good antioxidant activity measured by different methods.
Chemical characterization of Ras cheese
The addition of the prepared turmeric powder in Ras cheese processing increased the dry matter (DM) gradually as the turmeric level increased, as shown in Table 2, without significant differences (p ≤ 0.05). This could be due to the chemical composition of turmeric, which contains 69.4% carbohydrates, 6.3% protein, 5.1% fat, 3.5% minerals, and 13.1% moisture (Amalraj et al., 2017; Nasri et al., 2014). These findings are in line with those reported by Al-Obaidi (2019). It could also be noted that the DM content of all Ras cheese treatments increased throughout the ripening period, mainly due to the loss of moisture during ripening (Fahmy, 2003). Table 2 also shows that the protein and water-soluble nitrogen (WSN) contents, the latter serving as a ripening index for cheese as it reflects the extent of proteolysis, increased as the turmeric level increased (Sousa et al., 2001). WSN in cheese is primarily formed by coagulating enzymes, plasmin or cell-wall envelope proteases at the early stage of proteolysis. It is well recognized that protein breakdown is an essential factor for flavor and texture changes during the ripening period (Youssef et al., 2019; El-Sayed et al., 2020). It could also be noticed that the protein and WSN contents of all Ras cheese treatments increased as the ripening period progressed, mainly due to the loss of moisture during ripening (Fahmy, 2003).
The fat content of the turmeric Ras cheese samples was lower than that of the control cheese, and their fat level decreased as the turmeric level increased (Table 2). It could also be noticed from Table 2 that the ash content of the turmeric Ras cheese treatments was higher than that of the control cheese, which can be attributed mainly to the mineral content of the turmeric powder (Nasri et al., 2014; Amalraj et al., 2017).
As shown in Fig. 2, the acidity of the control Ras cheese was close to that of the cheese with the lowest level of turmeric powder (0.25%, T1), while the acidity of the T2 and T3 Ras cheeses decreased as the turmeric level increased. Similar results were noted by Sousa et al. (2001), who reported that herbal extracts with high phenolic contents in cheese samples prevented the increase in pH during storage. It could also be noticed that the acidity of all Ras cheese samples increased as the ripening period progressed, which could be due to the production of acidic compounds as a result of the fermentation of residual lactose and the degradation of intermediate components of protein and fat (Sousa et al., 2001; El-Hofi et al., 2010).
Color attributes of Ras cheese
Table 3 shows that the color factor (b), which refers to the yellow-blue axis, increased as the turmeric level increased in comparison with the control cheese, mainly due to the yellow color of turmeric. Turmeric contains 3-4% curcumin, which is responsible for its yellow color (Nasri et al., 2014).
Also, the yellow color of the cheese with turmeric decreased the (L) value and increased the (a) value (Table 3).
Rheological properties of Ras cheese
The addition of turmeric powder to Ras cheese increased its hardness compared to the control cheese. Other texture characteristics, including springiness, cohesiveness, gumminess, and chewiness, of the turmeric Ras cheeses (T1 and T2) were close to those of the control cheese, while the highest level of turmeric decreased these textural attributes in comparison with the control cheese.
Sensorial characteristics of Ras cheese
The flavor (Fig. 3a) and appearance (Fig. 3b) scores of the turmeric Ras cheeses were slightly higher than those of the control cheese, while the highest level of turmeric decreased both flavor and appearance attributes. Similar findings were also mentioned by Al-Obaidi (2019), who reported that the highest level of turmeric powder had a negative effect on cheese flavor. Also, the addition of turmeric improved the texture of the Ras cheese treatments, except at the highest turmeric level (Fig. 3c). Hence, the sensory results show that overall acceptability increased as the turmeric level increased up to 0.5% (Fig. 3d). However, Hosny et al. (2011) reported that Karish cheese with turmeric had the highest flavor score, without any change during cold storage, compared to control cheese.
Conclusion
It could be concluded that the addition of prepared turmeric (Curcuma longa L.) powder to Cephalotyre (Ras) cheese as a natural coloring agent improved the chemical, physical, rheological and sensorial characteristics during ripening at the low level of 0.25%. Further studies will be carried out for the nutritional evaluation of the resulting Cephalotyre (Ras) cheese as a functional product, especially for cardiovascular protection.
Table 2. Chemical characterization of Ras cheese with turmeric (C. longa) powder during ripening. Values are means of replicates ± standard error; means with different superscript letters within a row or column differ significantly at p ≤ 0.05.
Table 3. Color attributes of Ras cheese with turmeric (C. longa) powder.
Table 4. Texture profile analysis of Ras cheese with turmeric (C. longa) powder. N, newton; mm, millimeter; N.mm, newton millimeter. Values are means of replicates ± standard error; means with different superscript letters within a row or column differ significantly at p ≤ 0.05.
A novel method of grading gastric intestinal metaplasia based on the combination of subtype and distribution
Studies have shown the value of subtypes and distribution of gastric intestinal metaplasia (GIM) for prediction of gastric cancer. We aim to combine GIM subtypes and distribution to form a new scoring system for GIM. This was a cross-sectional study. No GIM, type I, II, and III GIM of gastric antrum and corpus scored 0–3 points respectively. Then the severity of the whole stomach was calculated in two ways: 1. The gastric antrum and corpus scores were added together, with a score ranging from 0 to 6, which was named “Subtype Distribution Score of Gastric Intestinal Metaplasia (SDSGIM)”. 2. Direct classification according to a table corresponding to that of OLGIM. We compared the SDSGIM among benign lesions, dysplasia, and cancer and drew a receiver operating characteristic (ROC) curve to determine the optimal cut-off value. According to the cut-off value and the classification from the table, the predictive abilities of these two methods were calculated. 227 patients were included. For SDSGIM, the benign lesion group was significantly different from the dysplasia or cancer group. The area under the ROC curve was 0.889 ± 0.023. The optimal cut-off value was 3. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of SDSGIM for malignancy were 89.5%, 78.0%, 74.6%, 91.2% and 82.8%. And those for the second classification method were 84.2%, 82.6%, 77.7%, 87.9%, and 83.3% respectively. This study is the first to combine GIM subtypes with their distribution, forming a novel scoring system, which showed high prediction accuracy for malignant lesions.
Background
Development of gastric adenocarcinoma is a complex multistep process described as Correa's cascade [1], which includes an important pre-malignant stage of gastric mucosal intestinal metaplasia (GIM). The annual incidence of gastric cancer has been reported to be 0.25% in patients diagnosed with GIM [2]. However, there is currently no specific treatment for GIM, so it is very important to identify high-risk patients who will progress to gastric cancer and monitor them regularly [3,4].
Several high gastric cancer risk factors have been identified, including physiological and life-style components. The physiological factors, such as bile reflux, older age, male sex, and body mass index (BMI) were shown to predispose to development of GIM and gastric malignancies. Some of these factors are interconnected. For instance, higher BMI and abdominal fat deposits promote bile reflux by increasing intra-abdominal pressure, thus,
increasing probability of intestinal malignancies [5]. The most influential life-style risk factors, smoking and drinking [6], were indicated to aggravate chronic gastritis. Infection with H. pylori, a prolonged adverse exposure factor, is known to increase the severity and distribution of precancerous lesions [7].
The OLGIM (Operative Link on Gastric Intestinal Metaplasia) staging system was also developed [8] to comprehensively assess and score the severity of GIM. According to the proportion of intestinal metaplasia cells under the microscope, the GIM severity of the gastric antrum (at least 3 biopsies) and gastric body (at least 2 biopsies) is divided into four levels (none, mild, moderate, and severe (or marked)); the severity of GIM of the antrum and gastric body is then integrated and the GIM is stratified according to the OLGIM grading table (Additional file 1: Table S1, Figure S1). However, some study guidelines didn't recommend using OLGIM in clinical practice [3,4] due to the subjectivity of pathological grading and the requirement of multiple biopsies. With the development of chromoendoscopy and digital chromoendoscopy (DC), new diagnostic choices were introduced. However, there are some limitations of this method. Chromoendoscopy requires the application of dyeing agents, which is a complicated and time-consuming procedure. Therefore, it has been gradually replaced by electronic dyeing endoscopes. In 2016, Pimentel-Nunes Pedro et al. [9] proposed the concept of endoscopic grading of gastric intestinal metaplasia (EGGIM). The method is mainly based on using the narrow-band imaging (NBI) system to score the extent of GIM in the whole stomach. The method is effective and helps to identify high-risk patients. Although this method has strong clinical practicability and allows observation of microvascularization, it also has the disadvantages of being subjective and time-consuming. The method relies on high-resolution DC images and highly trained and experienced physicians. Therefore, a new simple and effective risk stratification scheme is needed to identify high-risk patients for further regular follow-up and careful examination (Additional file 2).
Several studies have shown that patients with GIM in the gastric corpus have higher chances of developing gastric cancer compared to patients with GIM of the gastric antrum [10,11]. However, controversial data were published about the predictive ability of GIM subtyping for dysplasia/cancer [12][13][14]. Several study guidelines didn't recommend using it [3,4] in the past. But in recent years more and more studies [15,16], including a meta-analysis [17], have shown that type II and III intestinal metaplasia may indeed indicate a higher risk of gastric cancer (subtypes of GIM include types I, II, and III; from type I to III the chance of malignant transformation increases). Both GIM subtypes and distribution in the antrum and body of the stomach have a certain predictive value on GIM progression to gastric cancer, although no attempts have been made to combine these two factors and form a new GIM risk stratification system. Therefore, we aim to combine GIM subtypes and distribution to form a new scoring system and evaluate its ability to predict the risk of gastric dysplasia/cancer.
Patients and data collection
This cross-sectional study was carried out at Zhongda Hospital of Southeast University and was performed in accordance with the Declaration of Helsinki. All patients signed an informed consent before endoscopy. Patients who were hospitalized in our Gastroenterology Department from June 2019 to October 2020 and underwent NBI and white-light endoscopy performed by 3 physicians (RH.S, W.X and YD.F) were consecutively included. Among them, patients in the malignant group (dysplasia/cancer) were included until October 2020, and patients in the benign group were included until 2020. The following patients were excluded: 1. Patients with diffuse gastric cancer; 2. Patients with GIM, but without specific subtype; 3. Patients with unclear final diagnosis; 4. Patients who suffered from any other malignant lesions except gastric mucosa tumors. Basic clinical data such as sex, age, smoking, alcohol, H. pylori status, and bile reflux were collected.
Test for H. pylori
The presence of H. pylori infection was determined using the 13C-urea breath test or the rapid urease test of gastric biopsy tissue. We considered the presence of H. pylori confirmed when either of these two tests was positive.
Procedure of endoscopy
GIF-H260 (Olympus) was used in this study for endoscopic examinations. All endoscopic examinations were conducted by three independent physicians (RH.S, W.X, YD.F), who scanned the lesser curvature of the antrum (including angle) and the lesser curvature of the corpus using NBI endoscopy after a white light examination. At least one biopsy sample was then taken from the location most likely to have GIM in each of these two areas. A random biopsy was taken from each of the two areas when the physicians did not find any suspicious lesion. If there were other suspicious lesions, additional biopsies were performed. The most severe subtype of GIM in the same area was chosen for analysis.
Histological assessment
All biopsies were separately fixed in buffered 10% formalin and embedded in paraffin. Paraffin-embedded samples were sliced and stained using hematoxylin and eosin (H&E), alcian blue, and the periodic acid-Schiff reaction, using the modified Giemsa method. Histological findings were recorded according to the updated Sydney System [18]. GIM subtype was then classified into complete and incomplete types. Another staining method, using high iron diamine (HID)-alcian blue for sulphomucin identification, was performed to distinguish type II from type III GIM among the specimens with incomplete GIM.
Scoring scheme
No GIM, type I, II, and III GIM were scored 0-3 points, respectively. The most severe subtypes of GIM of the gastric antrum (including gastric angle) and the gastric body were scored separately. The GIM severity of the whole stomach was calculated using two approaches. First, the gastric antrum and gastric body scores were added together, with a final score ranging from 0 to 6. This scoring system was named "Subtype Distribution Score of Gastric Intestinal Metaplasia (SDSGIM)". Second, direct classification was performed according to Table 1, which was similar to the method of OLGIM classification. The gold standard was the final pathological diagnosis, according to which the patients were divided into benign lesions and malignant lesions (dysplasia/cancer).
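To make the first approach concrete, the sketch below implements the SDSGIM sum and the dichotomization at a cut-off in Python. It is an illustrative reading of the scheme described above, not the authors' code: the function and variable names are ours, and the Table 1 lookup of the second approach is omitted because the table's contents are not reproduced here.

```python
# Illustrative sketch of the SDSGIM scoring described above (not the authors' code).
# GIM subtype scores: no GIM = 0, type I = 1, type II = 2, type III = 3.
SUBTYPE_SCORE = {"none": 0, "I": 1, "II": 2, "III": 3}

def sdsgim(antrum_subtype: str, corpus_subtype: str) -> int:
    """Sum of the most severe subtype scores of the antrum (incl. angle) and corpus, 0-6."""
    return SUBTYPE_SCORE[antrum_subtype] + SUBTYPE_SCORE[corpus_subtype]

def risk_group(score: int, cutoff: int = 3) -> str:
    """Dichotomize into high/low risk; the cut-off of 3 is the value reported in the Results."""
    return "high" if score >= cutoff else "low"

# Example: type II GIM in the antrum and type III GIM in the corpus.
score = sdsgim("II", "III")       # 2 + 3 = 5
print(score, risk_group(score))   # -> 5 high
```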
Statistical analysis
SPSS version 18.0 (IBM Corp, Armonk, NY, USA) was used to analyze the data. Continuous variables were summarized as mean ± standard deviation and compared using Student's t test or the Mann-Whitney U test. Categorical data were summarized using frequency tables. Univariate and multivariate analyses were carried out to compare the impact of gender, age, H. pylori infection, smoking and drinking status, bile reflux, and SDSGIM scoring between benign lesions and malignant lesions (dysplasia/cancer). A receiver operating characteristic (ROC) curve was used to assess the diagnostic accuracy of SDSGIM scoring for dysplasia/cancer and to determine the optimal cut-off value. According to the optimal cut-off value and the classification data from Table 1, the patients were classified as high risk (≥ cut-off value of SDSGIM; stage III/IV in Table 1) and low risk (< cut-off value of SDSGIM; stage 0-II in Table 1) for malignancy. Using the pathological results as the gold standard, the sensitivity, specificity, likelihood ratios, and accuracy of these two classification methods for dysplasia/cancer were calculated. Furthermore, for patients with dysplasia/cancer, further comparisons were made between patients with multiple lesions and a single lesion using SDSGIM. The analysis was extended to evaluate the predictive value of SDSGIM for multiple malignant lesions. A P value < 0.05 was considered statistically significant.
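For reference, the diagnostic-accuracy measures named above follow directly from the 2x2 table against the pathological gold standard; the short Python sketch below shows the standard formulas. It is illustrative only: the counts are placeholders (not the study data), and SPSS, not this code, was used in the study.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 diagnostic-test metrics (gold standard = pathology)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Placeholder counts only, chosen for illustration:
print(diagnostic_metrics(tp=85, fp=29, fn=10, tn=103))
```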
Results
The study included 227 patients with an average age of 60.0 ± 10.9. The average age of patients with benign lesions, dysplasia, and cancer was found to increase along with the severity of diagnosis. There were 132 patients with benign lesions, 46 with dysplasia, and 49 with gastric cancer. All malignant lesions were pathologically diagnosed after ESD. Of the 227 patients, 100 were H. pylori positive (44.1%) and the average number of biopsies per person was 2.6 ± 0.7. For GIM, a total of 64 patients tested negative, 34 patients had GIM only in the gastric antrum, of which most (30/34) were in the benign lesion group. We found that 129 patients had GIM in the gastric corpus, and most of these patients (87/129) were with malignant lesions. Analysis of GIM subtypes of the whole stomach indicated that the majority of patients with type I GIM had benign lesions (23/28), while majority of patients with GIM subtype III had malignant lesions (57/72). The conditions of smoking, drinking, and bile reflux in each group were also shown in Table 2.
For SDSGIM, the benign lesion group (mean ± SD: 1.4 ± 1.6) was significantly different from the dysplasia (4.1 ± 1.5) (P < 0.001) and cancer groups (4.5 ± 1.4) (P < 0.001), while there was no statistical difference between the dysplasia and cancer groups (P = 0.206) (Table 2; Fig. 1). In order to exclude the influence of confounding factors, after dividing the lesions into benign and malignant groups (dysplasia/cancer), we further carried out a multivariate analysis. The results showed that the difference in SDSGIM between the two groups was still statistically significant (P < 0.001) (Table 3). The receiver operating characteristic (ROC) curve of the SDSGIM lesion scoring data was drawn as shown in Fig. 2. The area under the ROC curve (AUC) was 0.889 ± 0.023 (95% CI 0.843-0.934), showing that the method had a good ability to distinguish benign from malignant lesions. According to the ROC curve data, the best cut-off value was estimated to be 3. Therefore, two groups were constructed using SDSGIM ≥ 3 for the dysplasia/cancer group and SDSGIM < 3 for the benign lesion group. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of SDSGIM scoring for malignancy were 89.5%, 78.0%, 74.6%, 91.2%, and 82.8%, respectively.
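The paper reports the optimal cut-off read off the ROC curve but does not state the selection criterion; one common choice is Youden's J statistic (sensitivity + specificity - 1), sketched below on toy data. The criterion, the function, and the data are assumptions made for illustration only.

```python
def best_cutoff(scores, labels, candidates=range(0, 7)):
    """Return the candidate cut-off (for a 0-6 score) that maximizes Youden's J."""
    def youden_j(c):
        tp = sum(1 for s, y in zip(scores, labels) if s >= c and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < c and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < c and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= c and y == 0)
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        return sens + spec - 1.0
    return max(candidates, key=youden_j)

# Toy SDSGIM scores with labels 1 = dysplasia/cancer, 0 = benign (not the study data):
print(best_cutoff([0, 1, 2, 3, 4, 5, 6, 5, 2, 1], [0, 0, 0, 1, 1, 1, 1, 1, 0, 0]))  # -> 3
```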
Discussion
Hierarchical systems for GIM grading, including OLGIM and EGGIM, have several disadvantages including requirement of multi-point biopsies, poor objectivity, time-consuming protocol, and the necessity for highly trained specialists in endoscopy. Multiple studies have confirmed the value of GIM subtyping for predicting gastric cancer [15,16]. A recent meta-analysis [17] showed that compared to complete GIM, incomplete GIM was associated with a 3.3-fold (95% CI, 1.96-5.64) increased risk of gastric cancer and 1.7-fold increased risk of progression to dysplasia. Compared with the severity of GIM that was used in the OLGIM system, subtype scoring is more commonly utilized in clinical pathology reports. In addition to subtypes, the distribution of intestinal metaplasia in gastric mucosa can also predict higher risk of progression to gastric cancer. GIM often originates in the lesser curvature of the gastric antrum and gradually spreads [19]. Compared with GIM detected solely in the gastric antrum, GIM in the stomach body may indicate a higher risk of malignant transformation [17]. The OLGIM system recommended five biopsies to grade the GIM. However, Meng Wang et al. [20] showed that biopsies from the lesser curvature of the gastric corpus, the angle, and the lesser curvature of the antrum could also accurately reflect the severity of GIM of whole stomach with fewer biopsy tissues. These results are similar to those found by Akiko Saka et al. [21]. Based on the above conclusions, we scanned the lesser curvature mucosa of the gastric antrum and corpus to evaluate the distribution and subtypes of GIM, establishing a novel GIM scoring system to provide a new method of GIM risk stratification.
The analyses of the 227 cases were consistent with previous studies. Gastric corpus GIM and type III GIM appeared more frequently in malignant lesions (dysplasia/cancer). After scoring the antrum and corpus separately, the two methods of GIM grading for malignant lesions, SDSGIM (cut-off of 3) and the OLGIM-like risk stratification (Table 1; stage ≥ III), both showed good prediction accuracy. In addition to intestinal subtype and GIM distribution, some factors such as smoking and drinking status, older age, gender, bile reflux and H. pylori infection should also be considered. To exclude the confounding interference of these factors, a multivariate analysis was also performed. The analysis demonstrated a significant difference between benign and malignant lesions scored using the SDSGIM method (P < 0.001). Furthermore, we explored the predictive value of the SDSGIM scoring method for multiple malignant lesions. However, SDSGIM did not demonstrate predictive ability for malignancies with multiple lesions. Gisela Brito-Gonçalves et al. [22] reported a trend for patients with extensive GIM (EGGIM > 4) to have a higher risk of multiple lesions; the trend was not found significant using multivariate analysis. The small sample size and the contingency of biopsy may be associated with negative predictive results and/or the absence of statistical significance. Consequently, this conclusion requires further investigation. This study also had some limitations. A poor choice of biopsy sites may affect the scoring results, although the identification of GIM using NBI endoscopy [9] can help improve the stability of biopsy results. This is a single-center, small-sample study, which decreases the power of the findings. Moreover, the study subjects were only inpatients, which limits representativeness. Accordingly, our data warrant further prospective large-scale multicenter studies, which should include outpatient sources. Besides, some important factors, including BMI and family history of cancer, were not included in the analysis due to incomplete data, although some other confounding factors were included and analyzed using the multivariate method. Also, this study could not compare the results of the novel SDSGIM with OLGIM for lack of related data (OLGIM results), as it was a retrospective study. Finally, the rationality of the OLGIM-like risk stratification method (Table 1) requires further verification.
SUSTAINABLE DEVELOPMENT OF CROP-LIVESTOCK FARMS IN AFRICA
Crop-livestock farms across Africa are highly variable due to differences in agroecological and socioeconomic factors, the latter shaping the demand and supply of livestock products. Crop-livestock farms in Africa in the 21st century are very different from most mixed farms elsewhere in the world. African crop-livestock farms are smaller in size, have fewer livestock, lower productivity and less dependency on imported feed than farms in most countries of Europe, the Americas and the intensive agricultural systems of Asia. This paper discusses the role African crop-livestock farms have in the broader socio-agricultural economy, and how these are likely to change as they adapt to pressures brought on by the intensification of food systems. This intensification implies increasing land productivity (more food per hectare), often leading to more livestock heads per farm, producing fertilized feeds in croplands and importing feed supplements from the market. This discussion includes (1) the links between crop yields, soil fertility and crop-livestock integration, (2) the increasing demand for livestock products and the land resources required to meet this demand, and (3) the opportunities to integrate broader societal goals into the development of crop-livestock farms. There is ample room for development of crop-livestock farms in Africa, and keeping integration as part of the development will help prevent many of the mistakes and environmental problems related to the intensification of livestock production observed elsewhere in the world. This development can integrate biodiversity, climate change adaptation and mitigation into the current goals of increasing productivity and food security. The inclusion of broader goals could help farmers access the level of finance required to implement changes.
THE SOIL FERTILITY QUEST
Low crop yields (below 1 t·ha−1) and low fertilizer use are still a widespread reality in Africa. Over the past 40 years, there have been numerous attempts and heavy investments in research and development to increase fertilizer use to reduce the human impacts of food shortages [1]. Development agencies have been promoting the use of animal manure to improve soil fertility, under the premise that this would be a good motivation for efficient use of an otherwise neglected resource. However, studies have shown that recycling of manures alone has limited potential to increase crop productivity to the levels required in the continent to achieve food self-sufficiency [2]. Research has shown, though, that there is room to improve the composting process in order to conserve more nutrients in manure [3]. In many cases accumulation of manure is achieved where livestock are confined, and this resource is often poorly managed, leading to nutrient losses, especially of nitrogen. Techniques for more effective capture, storage and use of livestock manure are urgently required to increase the returns from this manure and from the purchase of off-farm nutrients. Creating specific incentives that mitigate climate change (by reducing emissions of the greenhouse gases N2O and CH4) may help improve manure management systems in rural areas [4]. A recent focus of development has shifted toward the promotion of bio-digesters, which lead to a net loss of carbon due to CH4 leakage from the mixed farm [5]. The volumes of manure currently managed are only sizable in commercial dairy farms and represent an important concern as major sources of pollution in urban and peri-urban crop-livestock systems [6].
Nutrient balances in many African countries are negative because of the nutrient-mining nature of small-scale agriculture with low fertilizer use. One of the main reasons for the low yields in Africa is related to this low and spatially variable input use; only a few countries with subsidized programs, such as Malawi, exceed 50 kg·ha−1 of N [7]. Increasing agriculture and livestock production in Africa will undoubtedly increase the rate of on-farm nutrient losses if the current trends continue, and therefore planning for intensification must look to avoid the past mistakes made in agricultural systems of mid- and high-income countries. For example, the intensification of livestock production in the European Union has been associated with biodiversity loss due to landscape fragmentation, the deterioration of soil, air and water quality, and increased GHG emissions [8]. These effects are caused by high livestock densities and widespread fertilizer use, which lead to increased ammonia emissions, N deposition and nutrient losses to waterbodies. Similar trends are observed in China, where the increasing demand for animal protein has led to the expansion of large-scale landless livestock operations in addition to small backyard mixed farms [9]. Both systems have poor manure management practices, and this has led to the pollution of major rivers in China. Although the scale is different, in Africa the intensification of livestock production not only accelerates nutrient turnover but is also a conduit for the import of external nutrients, since many African mixed farmers purchase small amounts of forage, cereal milling byproducts and mineral supplements as soon as livestock production becomes commercial. In view of the risks associated with intensification, future crop-livestock farms must be integrated, aiming to close nutrient cycles by (1) efficiently using the biomass produced and recycling all livestock waste, (2) ensuring that stocking rates do not exceed the productive capacity of the land, and (3) avoiding the import of excessive amounts of feeds that create farm and regional economic dependencies, which should be feasible due to their small scale. Avoiding these imports is also important as farms and regions that are heavily reliant on imported feed are more exposed to market fluctuations and supply chain vulnerabilities.
The intensification of crop-livestock farms has to integrate the biomass and nutrient flows between cropping and livestock activities (shown schematically in Fig. 1) to lead to sustainable outcomes. The evolution of this intensification process, motivated by economic drivers, determines changes in the scale of production and the integration of biomass and nutrient flows due to changes in the management of livestock and their feeding systems. At low levels of intensification ( Fig. 1A), livestock are fed mostly in grasslands, and the flows of biomass and nutrients to croplands are limited. Increasing livestock numbers require increased feed supplies from croplands ( Fig. 1B), with opportunities for tighter crop-livestock integration through the recycling of manure. Further intensification often leads to more specialization and a shift in the feed supply from grasslands to croplands, and the subsequent need to manage soil fertility with external inputs (Fig. 1C). The most intensive form of crop-livestock farm relies heavily on external inputs and shows less integration of biomass and nutrient flows at farm level, and results in on-farm and sometimes regional nutrient surpluses (Fig. 1D).
Fig. 1
Schematic representation of the intensification of crop-livestock systems indicating the sources of feeds and sources of soil fertility along a hypothetical intensification gradient (concept adapted from Fernandez-Rivera and Schlecht [10] ). Four examples show the magnitude of the biomass and nutrient flows. (A) Low levels of intensification and of integration, most feed is provided by grasslands, and the fertility of croplands relies on fallows; (B) more intensive livestock production creates stronger feed biomass and manure flows; (C) higher stocking densities require the more specialized production of feeds on croplands and an increase in external inputs, opportunities for integration are high; and (D) at high levels of intensification, external inputs are more important for maintaining high levels of production and exports in detriment to the integration of biomass and nutrient flows.
Different stages of this intensification process and different degrees of integration between crop and livestock activities can be found across the globe and at different points in time. The last stage of intensification needs to be avoided though, due to the high costs of externalities, highlighting why policies are needed to help protect broader production and environmental goals. The intensive system is often more efficient to produce food, but without effective policy, it creates problems with surpluses in farm and regional nutrient budgets.
DEMAND FOR LIVESTOCK PRODUCTS
As the lifestyles of Africans become more affluent with the rise of the middle class [11], increased demand for milk and meat will require intensification of livestock production in order to improve self-sufficiency. This intensification of crop-livestock farms, though, will create additional pressures on natural ecosystems, such as land use change, loss of forest cover and soil degradation, all of which will cause a reduction in the carbon sink [12]. Low-emission development strategies could help increase production while concurrently contributing to climate change mitigation. This development should aim to preserve soil health and rebuild soil fertility, thus removing carbon from the atmosphere and aiding in climate change mitigation. Also, this development must include human nutritional and social goals, so that the greening of crop-livestock farms benefits the communities and facilitates access to finance for poor farmers. Africa has vast areas of land that could be put into production or restored for future production of wood, crops and livestock [13], so investments now could create green jobs within the economy [14]. This is an opportunity for an alternative model of development, one that avoids the environmental burden associated with high livestock densities and high input use. Once soil, water and air are overloaded with nutrients and fine particulate matter (PM2.5) from intensive agricultural activities, mistakes are hard to rectify, and solutions are difficult to implement, expensive and often resisted by the public [15].
Crop-livestock farms make a critical economic contribution to nutrition and income diversification [16] and in many countries livestock provides draft power for crop production. Livestock in addition can help sustain household consumption in farming systems exposed to recurrent drought [17] , and therefore keeping a healthy and well-fed livestock population generates multiple benefits. However, this is not the reality in many of the small mixed farms in Africa, where a combination of high livestock densities for a given agroecology, overgrazing and climate variability lead to recurrent livestock mortality, morbidity and low overall productivity [18] . Climate change is an important stressor for livestock production, more so in the tropics where changes in temperature and increased frequency of heatwaves cause heat stress decreasing feed intake which in turn affects milk production [19] . There are however positive developments in this field, with emergency and development organizations helping communities in East Africa to increase resilience by diversifying livestock production, reducing the impacts of climate risks through landscape management, lowering stocking rates and accessing livestock insurance using a science-based approach [20] . Reducing the risk of production in crop-livestock farms is needed and more research on the potential for the commercialization of insurance could help improve land productivity and increase the opportunities for farmers to invest in sustainable management.
INTEGRATING BROADER GOALS TO CROP-LIVESTOCK SYSTEMS
Given the challenges faced by African crop-livestock farmers, it seems timely to integrate future development with more ambitious goals including protecting soil health, biodiversity, contributing to climate change adaptation and mitigation in addition to delivering crop and livestock products for food security and incomes. The current evidence indicates that the development of crop-livestock farms in Africa is likely to show more and not less integration of crops and livestock across landscapes in order to increase resilience to climate change and as a diversification strategy. An analysis of farms across a wide rainfall gradient (500-1200 mm) throughout East Africa showed that most farmers wanted to have more livestock, while those living in the dry areas wanted to have larger areas under cropping giving a clear indication of mixed farms as the common aspiration [16] . However, there are biological and economic limits to the expansion of mixed farming and therefore there is a role for policy to anticipate and to guide the intensification processes so that these meet multiple goals, including the delivery of environmental outcomes. In light of the expected expansion of mixed farms, new government policies are urgently required, particularly as current government policies across sub-Saharan Africa rarely include clear recommendations for manure management [21] . A good step forward would be to anticipate livestock densities and nutrient loads to the environment that are safe for people and nature.
There are many examples of beneficial crop-livestock intensification with varying degrees of integration across the African continent (Table 1). In the dry agropastoral environments of East, West and Southern Africa, crop-livestock integration mostly occurs at a landscape scale, providing multiple opportunities and challenges. Integration of cropping into the agropastoral systems of Borana in southern Ethiopia is an example of climate change adaptation [22], with the limits to further integration imposed by land degradation due to overgrazing. The combination of crop and livestock farming improves food security and incomes in West Africa because small ruminants contribute significantly to reducing the seasonality of cash flows [23]. In northern Zimbabwe, integration allows management of landscape-scale nutrient flows that make crop production possible in otherwise infertile lands, while crop residues are critical to sustain livestock through the dry season [24]. Engaging in dairy in southern Mali was perceived as an opportunity to increase farm profits with greater crop production through the recycling of increasing volumes of manure [25]. The intensification of crop-livestock farms in Mali must include not only pulses, but also the genetic improvement of local breeds, which currently have low productivity. Urban and peri-urban farming systems in West Africa supply an important share of the crop and animal products to these burgeoning cities. In Niamey, Niger, very small farms produce vegetables, cereals and large and small ruminants, which contribute importantly to household income and nutrition but also to the concentration of biomass and nutrients that creates major sources of environmental pollution [6]. Further development of urban and peri-urban farms across Africa requires sound policies and guidelines for manure management that also incorporate targets to limit the risk of zoonoses [18].
Mixed intensive farming is widespread in the highlands of Kenya, where small farm sizes force farmers to cultivate most of their land and to feed livestock with feeds collected from various places due to the limited availability of grazing land. A study showed that the biomass produced in these mixed farms and the nutrients available from manure to recycle on-farm are insufficient to produce acceptable crop yields, and that improvements in manure management can only have small positive effects [26] . Analyzing biomass flows in and around the largest montane forest of Kenya, another study indicated that the shortages of feeds on dairy farms are frequently offset through forest grazing, which reduces the capacity of the forest to store carbon and deliver other critical ecosystem services [12] .
Because livestock productivity in Africa is often low (e.g., dairy cows produce less than 1500 l of milk per lactation), many African countries are net importers of animal-derived products such as milk powder and cheese. Changing this reality will require fast adoption of technology (e.g., fodder conservation to regulate seasonal feed fluctuations and milk processing technologies to reduce losses), which needs to be adapted to the African context to minimize the reliance on foreign industry for key inputs and knowledge. The transformation of mixed farms should also rely more on green energy (solar, hydro and wind) to reduce the dependency on fossil fuels, with farming practices adapted to harness local plant and animal diversity.
Crop-livestock farms could benefit from the climate change mitigation schemes since there are many initiatives that are designed to offset carbon emissions, contributing to food security and halting land degradation such as the UNEP Decade of Ecosystem Restoration (https://www.decadeonrestoration.org).
Restoring degraded lands will certainly help reduce the pressure on existing croplands and will be an investment in future food self-sufficiency for the African youth. Productive crop-livestock farms can generate employment, especially if a processing industry develops around villages, towns and cities. Keeping livestock and cultivating crops will also be critical to climate change adaptation since all the evidence indicates that farm and landscape diversity helps farmers stabilize income [16] and recover from shocks [17] .
There is ample room for development of crop-livestock farms in Africa, and keeping integration as part of the development will help prevent the environmental problems observed elsewhere in the world. This development can integrate more goals than just increasing productivity, and the inclusion of broader goals could help farmers access the level of finance required to implement changes. The most important next steps will be for African farmers and their decision makers to decide what development route best fits their societal goals and world vision. Given the challenge presented by climate change, strategies that rely on the exploitation of African plant and animal diversity and green energy for production and processing are the most promising. In particular, the more widespread cultivation of specific African grasses such as brachiaria (Brachiaria humidicola) and Napier grass (Pennisetum purpureum) can help deliver on these multiple goals. Brachiaria produces strong biological nitrification inhibitors in its rooting system [27], which is why this grass can be used to produce forage and to control nitrogen losses and NOx emissions. Napier grass is another productive native African grass that can be cultivated in steep terrain and, thanks to its excellent nutritional properties and suitability for fodder conservation, can be used to manage feed shortages during the dry season, contributing to reducing the carbon footprint of intensive dairy farms [28]. In any option, smallholders need to have access to knowledge and the required assets and inputs to manage their land in a way that is, in the long term, economically and environmentally sustainable.
Non-relativistic bound states in a moving thermal bath
We study the propagation of non-relativistic bound states moving at constant velocity across a homogeneous thermal bath and we develop the effective field theory which is relevant in various dynamical regimes. We consider values of the velocity of the bound state ranging from moderate to highly relativistic and temperatures at all relevant scales smaller than the mass of the particles that form the bound state. In particular, we consider two distinct temperature regimes, corresponding to temperatures smaller or higher than the typical momentum transfer in the bound state. For temperatures smaller or of the order of the typical momentum transfer, we restrict our analysis to the simplest system, a hydrogen-like atom. We build the effective theory for this system first considering moderate values of the velocity and then the relativistic case. For large values of the velocity of the bound state, the separation of scales is such that the corresponding effective theory resembles the soft collinear effective theory (SCET). For temperatures larger than the typical momentum transfer we also consider muonic hydrogen propagating in a plasma which contains photons and massless electrons and positrons, so that the system resembles very much heavy quarkonium in a thermal medium of deconfined quarks and gluons. We study the behavior of the real and imaginary part of the static two-body potential, for various velocities of the bound state, in the hard thermal loop approximation. We find that Landau damping ceases to be the relevant mechanism for dissociation from a certain "critical" velocity on in favor of screening. Our results are relevant for understanding how the properties of heavy quarkonia states produced in the initial fusion of partons in the relativistic collision of heavy ions are affected by the presence of an equilibrated quark-gluon plasma.
I. INTRODUCTION
When matter is immersed in a thermal medium many of its properties change. In principle, no strictly stationary bound state exists, because interactions with the particles of the medium lead to a finite lifetime for all states (including the ground state). This is equivalent to a broadening of the energy levels, i.e. an imaginary part of the energy eigenvalues, which depends on the density and on the temperature of the medium.
Of particular interest is the case in which the bound state moves with respect to the thermal medium. The first experimental investigations and theoretical developments of this system were done in condensed matter physics [1]. From the analysis of atoms moving across a plasma, it has been shown that a number of phenomena may take place. First of all the Debye screening of the Coulomb potential depends on the relative velocity between the bound state and the plasma. Moreover, the propagation of a bound state through the medium produces a fluctuation of the induced potential which leads to a density trail. Finally, the moving particle loses energy and is eventually stopped by the plasma.
A renewed interest in the properties of bound states moving in a thermal medium arose in recent years due to the advent of high energy heavy-ion colliders. In particular, one is interested in understanding whether some modifications of the properties of heavy quarkonia (HQ) states produced in the early stage of the heavy-ion collision can be a signature of the presence of a deconfined plasma of quarks and gluons. In their pioneering work [2], Matsui and Satz showed that the Debye screening of the color interaction between two static heavy quarks may lead to the dissociation of HQ in a thermal medium. This effect should be experimentally detectable by the suppression of the corresponding yields. The suppression of HQ states means that the yield of HQ observed in heavy-ion collisions is smaller than the yield of HQ one would obtain multiplying HQ production rates in p-p collisions by the number of nucleons participating in the collision and taking into account the normal nuclear absorption; see e.g. [3] for a brief review. The first study of moving HQ was then performed in [4], in which the dependence of the Debye mass on the velocity of propagation of the heavy quarks with respect to the quark-gluon plasma (QGP) was determined. Subsequent analyses have confirmed the effect and studied the formation of wakes in the QGP [5][6][7][8][9][10].
One may wonder whether the drift of bound states is important, as is the case for heavy flavors. Indeed, measurements of heavy flavor production in PHENIX [11] via single electron measurements result in a large v_2, which suggests that there is significant damping of heavy quarks while they travel across the fireball. Therefore, in heavy-ion collisions the thermal bath expansion may drift the heavy quarks in a phenomenon similar to advection in normal fluids. This picture has also received support from microscopic calculations of heavy quark diffusion in the quark-gluon plasma [12]. However, we expect that the drag of a heavy quarkonium is less important than that of a heavy quark. This is because an isolated heavy quark has a net color charge, while a heavy quarkonium at distances larger than its radius is colorless. Hence, in general, we expect that the HQ states produced in the early times of the collision will not be comoving with the thermal medium, and, therefore, our calculations will be relevant for them. On the other hand, HQ states produced through recombination are expected to roughly comove with the thermal bath. This is because both heavy quarks have been drifted by the QGP before recombining into a HQ.
Suppression of the J/Ψ was first observed at the CERN SPS [13]. However, in contrast with the naive Debye screening scenario, further experimental investigation of the J/Ψ yields at PHENIX [11], led to the observation of a strong suppression at forward rapidity rather than at mid-rapidity. Recently, there have been efforts in studying this problem with the use of non-relativistic effective field theories (EFTs) [14][15][16]. The EFT techniques are very useful for problems that have different energy scales, as is the case of HQ in a thermal medium. Using these techniques it has been shown that, at least in perturbation theory, the dissociation of bound states is due to the appearance of an imaginary part in the potential [14,15,17].
In the present paper we study how a moving thermal bath affects the properties of bound states. One of the points is to assess whether the results obtained in a static medium are modified when considering the relative motion between the bound state and the thermal medium. We consider the simplest systems, hydrogen-like atoms moving at a constant velocity, v, across a homogeneous thermal medium. We study two different cases, the first one corresponds to temperatures smaller or of the order of the typical momentum transfer, the second one corresponds to temperatures larger than the typical momentum transfer. We always assume that the temperature is much smaller than the mass of the particles forming the bound state.
In the first case we restrict ourselves to the hydrogen atom. We consider separately temperatures smaller than the typical momentum transfer and temperatures of the order of the typical momentum transfer. In both temperature ranges we provide the matching procedure and evaluate the energy shifts and decay widths for the stationary states of the system. We build the effective theory for this system first considering moderate values of the velocity and then the relativistic case. We show that for large values of the velocity, a new separation of scale occurs and a different EFT must be constructed for dealing with bound states. In this case the separation of scale is such that the corresponding EFT resembles some aspects of the soft collinear effective theory (SCET) [18].
In the second case, namely, for temperatures larger than the typical momentum transfer, we also consider muonic hydrogen. Since the mass of the particles forming the bound state is much larger than the mass of the particles in the thermal bath, this system resembles very much a heavy quarkonium in a thermal medium of deconfined quarks and gluons. As a consequence, this part of the present work is of direct relevance for understanding how the properties of heavy quarkonia states produced in the initial fusion of partons in the relativistic collision of heavy ions are affected by the presence of an equilibrated quark-gluon plasma. We study the behavior of the real and imaginary part of the two-body potential for various values of the velocity of the bound state with respect to the thermal bath employing the hard thermal loop (HTL) approximation. Regarding the real part of the potential we reproduce known results, and extend them to higher speeds. The imaginary part of the potential is calculated for the first time. We demonstrate that screening overtakes Landau damping as the dominant mechanism for dissociation at a certain critical velocity. This paper is organized as follows. In Section II we introduce some general remarks about the propagation of particles in a thermal bath. In Section III and Section IV we study the hydrogen atom moving at moderate velocities and ultrarelativistic velocities with respect to the medium respectively. In Section V we study the real and imaginary part of the static potential of muonic hydrogen in a moving thermal bath for temperatures larger than the typical momentum transfer. The results of this last section are directly applicable to the HQ case. Finally, we present our conclusions in Section VI.
II. GENERAL FRAMEWORK
In our study we shall employ a reference frame in which the bound state is at rest and the thermal medium moves with a velocity v. We choose this frame because it facilitates the application of non-relativistic effective field theories, in particular, Non-Relativistic QED [19] (NRQED) and potential NRQED [20] (pNRQED) 1 . These EFTs are extremely convenient to handle the three different scales of non-relativistic systems at vanishing temperature [21,22], and have already proved useful to analyze these systems in a static thermal bath, in which additional scales occur [14,23]. They allow one to organize the calculations in such a way that only one scale is taken into account at each step, which, together with the use of dimensional regularization, makes computations much easier.
We shall assume that the plasma (or black-body radiation) is in thermal equilibrium at a temperature T. Since we are considering the reference frame in which the plasma is moving with a velocity v, the particle distribution functions are given by

$f(k) = \dfrac{1}{e^{\beta_\mu k^\mu} \pm 1}$ ,   (1)

where the plus (minus) sign refers to fermions (bosons). In the reference frame where the thermal bath is at rest $\beta_\mu k^\mu = k_0/T$, while in a frame where the plasma moves with a velocity v we have

$\beta_\mu k^\mu = \dfrac{\gamma\,(k_0 - \mathbf{v}\cdot\mathbf{k})}{T}$ ,   (2)

where $\gamma = 1/\sqrt{1-v^2}$ is the Lorentz factor. This frame has been successfully used in the past, for example in [24]. Studying a bound state in a moving thermal bath is akin to studying a bound state in non-equilibrium field theory [25]; in that case the Bose-Einstein or Fermi-Dirac distribution functions are substituted by a general distribution, which in our case will be the boosted Bose-Einstein or Fermi-Dirac distribution functions reported in Eq. (1).
The vector v brings in the problem a number of complications. First of all, it breaks rotational invariance. Second, when v gets close to 1 new scales are induced, which has serious implications for the use of EFTs. For instance, if we are working with an EFT in which we have integrated out all scales larger than µ, we can no longer argue that if µ ≫ T the Lagrangian of this EFT is not affected by the temperature. This is because the Boltzmann suppression is not only controlled by T , but rather is a non-trivial function of T and v. In order to illustrate this point, let us analyze the distribution functions in Eq. (1) in more detail.
We begin with a thermal bath consisting of massless particles. Taking into account that in non-equilibrium field theory the collective behavior always enters through on-shell particles or antiparticles, we have (in the case of particles) that

$\beta_\mu k^\mu = \dfrac{\gamma\, k\,(1 - v\cos\theta)}{T}$ ,   (3)

where k = |k| and θ is the angle between k and v. The distribution functions in Eq. (1) can now be written as

$f(k) = \dfrac{1}{e^{k/T_{\rm eff}(\theta,v)} \pm 1}$ ,   (4)

where we have defined the effective temperature

$T_{\rm eff}(\theta,v) = \dfrac{T}{\gamma\,(1 - v\cos\theta)}$ .   (5)

Intuitively, the dependence of the effective temperature on v and θ can be understood as a Doppler effect. Indeed Eq. (5) is analogous to the change in the frequency of light caused by the relative motion of the source and the observer. Consider a particle in a thermal bath of radiation moving with velocity v. From the point of view of the particle it will see in the forward direction blueshifted radiation and in the backward direction redshifted radiation. This corresponds, respectively, to an effective temperature which is higher in the forward direction than in the backward direction. Therefore, the effective temperature corresponds to the temperature of the radiation as measured by the moving observer and by a minor abuse of language we shall talk about blueshifted and redshifted temperatures. Notice that analogously to the relativistic Doppler effect, the effective temperature in the transverse direction, i.e. in the direction corresponding to θ = π/2, is redshifted.
For v ≪ 1, one has that T_eff(θ, v) ∼ T for any value of θ and one single scale T controls the Boltzmann factor in Eq. (4). However, for v close to 1, the values of T_eff(θ, v) strongly depend on θ, which gives rise to an interesting case for an EFT analysis. In order to proceed further, it is convenient to use light-cone coordinates. We choose v in the z direction and define

$k^{\pm} = k^0 \pm k^z$ .   (6)

Then, we have that

$\beta_\mu k^\mu = \dfrac{k^+}{2T_+} + \dfrac{k^-}{2T_-}$ ,   (7)

where

$T_{\pm} = \sqrt{\dfrac{1 \pm v}{1 \mp v}}\; T$ .   (8)

Therefore, in light-cone coordinates, it becomes explicit that the distribution function actually depends on two scales, T_+ and T_-. For any value of v it is clear that T_+ ≥ T ≥ T_-; moreover, T_+ corresponds to the highest temperature measurable by the observer, while T_- corresponds to the lowest temperature measurable by the observer.
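As a numerical illustration of this scale separation, the short sketch below evaluates the effective temperature of Eq. (5) and the light-cone temperatures of Eq. (8), as reconstructed above, for a few velocities. It is only meant to display the anisotropy and the growth of T_+/T_- as v → 1, not to reproduce any result of the paper.

```python
import math

def t_eff(T, v, theta):
    """Doppler-shifted effective temperature T_eff = T / (gamma * (1 - v*cos(theta)))."""
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    return T / (gamma * (1.0 - v * math.cos(theta)))

def t_plus_minus(T, v):
    """Light-cone temperatures T_+- = sqrt((1 +- v)/(1 -+ v)) * T; note T_+ * T_- = T**2."""
    return T * math.sqrt((1 + v) / (1 - v)), T * math.sqrt((1 - v) / (1 + v))

T = 1.0  # bath temperature in its rest frame (arbitrary units)
for v in (0.1, 0.5, 0.9, 0.99):
    Tp, Tm = t_plus_minus(T, v)
    # Forward (theta = 0) and backward (theta = pi) effective temperatures equal T_+ and T_-.
    print(f"v={v}: T_eff(0)={t_eff(T, v, 0.0):.3f}  T_eff(pi)={t_eff(T, v, math.pi):.3f}  "
          f"T+={Tp:.3f}  T-={Tm:.3f}")
```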
For small values of v, T_+ ≃ T_- and the shift in the temperature in the forward and backward directions is negligible. In this case no further separation of scales is needed and one has a single temperature scale, T, as mentioned before. For v ≃ 1, we have T_+ ≫ T_-, namely, two well separated temperature scales, which must be properly taken into account in our EFTs. Note that configurations with light-cone momenta such that k^+ ≫ T_+ or k^- ≫ T_- are exponentially suppressed. Then we can separate the remaining configurations into two regions (in light-cone momenta):
• A collinear region, corresponding to k^+ ∼ T_+ and k^- ≲ T_-.
• An ultrasoft region, corresponding to k^+ ≪ T_+ and k^- ≲ T_-.
The existence of these two regions has to be taken into account in the matching procedure between different EFTs. In this paper we shall analyze two different situations in which v is close to 1, the case m e ≫ T + ∼ 1/r ≫ T − ≫ E and the case T + ∼ m e ≫ 1/r ≫ T − ≫ E.
We would like to remark that although in Eq. (3) we have assumed for simplicity that the particles of the thermal bath are massless, our approach applies to a plasma that consists of both massless and massive particles. Note that our discussion, from Eq. (6) on, holds independently of what the dispersion relation of the particles in the thermal bath is. If some particles in the plasma have mass M ≫ T, we know that they are exponentially suppressed in the thermal bath, and this must be true in any reference frame. We can easily verify it by substituting $k^- = (M^2 + k_\perp^2)/k^+$ in Eq. (7), and by noticing that its minimum is attained at $\beta_\mu k^\mu = M/T$, which confirms that for M ≫ T thermal effects due to these particles can indeed be neglected.
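The check mentioned in the last sentence can be written out in two lines. The derivation below is a minimal sketch assuming the light-cone definitions reconstructed in Eqs. (6)-(8), together with $T_+ T_- = T^2$ and $k_\perp = 0$ at the very end.

```latex
% Minimizing the boosted exponent for a massive on-shell particle,
% with k^- = (M^2 + k_perp^2)/k^+ and T_+ T_- = T^2:
\begin{align*}
  \beta_\mu k^\mu &= \frac{k^+}{2T_+} + \frac{M^2 + k_\perp^2}{2\,T_-\,k^+}, \\
  \partial_{k^+}\big(\beta_\mu k^\mu\big) = 0
    &\;\Longrightarrow\; k^+_{\min} = \sqrt{\frac{T_+}{T_-}\,\big(M^2 + k_\perp^2\big)}, \\
  \beta_\mu k^\mu \big|_{\min}
    &= \frac{\sqrt{M^2 + k_\perp^2}}{\sqrt{T_+ T_-}}
     = \frac{\sqrt{M^2 + k_\perp^2}}{T}
     \;\ge\; \frac{M}{T},
\end{align*}
% so for M >> T the Boltzmann suppression exp(-M/T) survives in any frame.
```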
III. HYDROGEN ATOM AT MODERATE VELOCITIES
In the present section we shall assume that the velocity is moderate, say v ≲ 0.5, so that T_+ ≃ T ≃ T_-, and separately study the cases T ≪ 1/r and T ∼ 1/r (r is the size of the bound state, and hence 1/r is of the order of the typical momentum transfer). For simplicity, we also assume that the proton is infinitely heavy.
As we have explained in the previous section, in a thermal bath at a temperature T ≪ M , particles with mass M are exponentially suppressed independently of the value of v. In particular, if M ∼ m e indicates the mass of electrons and positrons in the plasma, then these particles are irrelevant in our analysis. In this range of temperature the hard thermal loop effects will not appear and the doubling of degrees of freedom plays no role [14]. This implies that only the transverse photons are sensitive to thermal effects and, hence, the leading order interaction, namely, the Coulomb potential, will not be modified.
A. The T ≪ 1/r case

As a starting point we consider the pNRQED Lagrangian at vanishing temperature (we use the form given in Eq. (6) of [21]). We will be able to evaluate the corrections to the binding energy E_n and to the decay width Γ_n due to the thermal bath up to the order m_e α^5. There are two different diagrams that contribute at this order. The first one is the tad-pole diagram; in the corresponding expression the solid lines represent the atom propagator and the wavy line corresponds to the photon propagator. In the integral we set the number of dimensions D = 4, because the integral is convergent, and k^μ corresponds to the loop momentum. Notice that the contribution of this diagram is independent of v, because the loop integral has no indices and no external momentum enters in it. This is in fact true for any tad-pole diagram of this kind. Therefore one can read the result from the v = 0 case [14]. Next we consider the "rainbow" diagram, in whose expression p^i is the momentum operator of the electron, |r⟩ symbolizes an eigenstate of the Coulomb Hamiltonian of energy E_r, and I^{ij}(q) is defined by the corresponding thermal integral. Since the only two independent tensors are δ^{ij} and v^i v^j, one can use a decomposition of I^{ij} in terms of two scalar coefficients, A and B. Upon substituting Eq. (11) in the expressions above, we find integrals whose computation is done in Appendix A, yielding the imaginary and real parts of A and the imaginary and real parts of B. The expressions above can be computed numerically for any value of the parameters. The thermal corrections to the energy and the decay width (for arbitrary angular momentum) follow from these results and in general will depend on the relative velocity v. Analytical expressions for T ≫ E and for T ≪ E, where E ∼ E_n is the binding energy scale, are derived below.
The T ≫ E case
For T ≫ E the leading contribution to the integrals in Eqs. (16), (17) and Eqs. (18), (19) can be analytically determined and upon substituting the corresponding expressions in I ij we find that for the real part of I ij and for the imaginary part of I ij .
It is interesting to observe that, in this limit, the terms non-local in q (Bethe-log type) coincide with those obtained for vanishing velocity. Hence, all dependence on v is encoded in an anisotropic potential and kinetic term. This result will also serve as a cross-check of the 1/r ∼ T computation that will be carried out in the next section.
In order to obtain the energy shift and the decay width from Eqs. (22) and (23) we consider separately the S-wave states and states with non-vanishing angular momentum. We display below slightly more general results, which hold for an ion of charge Z as well.
In the S-wave states the expected value of any tensor operator ⟨n|O_ij|n⟩ ∝ δ_ij; therefore terms proportional to P^p_ij can be ignored (because P^p_ij is traceless) and we find that where φ_n(0) is the wave function at the origin. The corresponding change in the width turns out to be given by

States with non-vanishing angular momentum are more difficult to deal with. It is convenient to decompose I_ij by the tensors δ_ij and v_i v_j/v^2 instead of P^s_ij and P^p_ij. For the real and imaginary parts of I_ij we find respectively and where To determine the energy shifts and the decay widths we fix v in the z-direction, and make use of the following identities and where Y_lm are the spherical harmonics. Note that |n⟩ is used as a short-hand notation for |nlm⟩, where n is the principal quantum number, l the orbital angular momentum and m its z-component. With these expressions we obtain the general forms of the shifts of the energy levels and the corresponding widths are given by where ⟨lm l'm'|l''m''⟩ are the Clebsch-Gordan coefficients (normalization and sign conventions are as in [26]). It is interesting to observe that the decay widths (25) and (32) decrease as the velocity increases.
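The m-dependence that drives the anisotropy of the shifts for non-vanishing angular momentum can be illustrated with a short numerical check. The sketch below is illustrative only; it does not reproduce the paper's identities or its Clebsch-Gordan algebra, but simply evaluates matrix elements of cos^2θ, the kind of angular average that appears once v is fixed along the z-direction.

```python
# Illustrative check of the angular averages that appear when v is fixed along z:
# <lm| cos^2(theta) |lm> depends on m, which is what makes the shifts anisotropic
# for states with non-vanishing angular momentum. (Not the paper's identities.)
import numpy as np
from scipy.integrate import quad
from scipy.special import sph_harm

def cos2_expectation(l, m):
    """Integrate |Y_lm|^2 cos^2(theta) over the sphere (|Y_lm|^2 is phi-independent)."""
    integrand = lambda th: np.abs(sph_harm(m, l, 0.0, th))**2 * np.cos(th)**2 * np.sin(th)
    val, _ = quad(integrand, 0.0, np.pi)
    return 2.0 * np.pi * val

for l, m in [(1, 0), (1, 1), (2, 0), (2, 1), (2, 2)]:
    print(f"l={l}, m={m}:  <cos^2 theta> = {cos2_expectation(l, m):.4f}")
# For l = 1 the exact values are 3/5 (m = 0) and 1/5 (|m| = 1).
```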
The T ≪ E case
For temperatures T ≪ E the coefficients A and B simplify and upon replacing their expressions in Eq. (12) we obtain that When this expression is used in the evaluation of the energy shift in Eq. (20) and of the decay width in Eq. (21), we need to calculate for x = s, p. Note that only the term proportional to P^s_ij contributes because P^p_ij is traceless. Thus, the contribution from the rainbow diagram in the limit T ≪ E for states with vanishing angular momentum is given by and is independent of the velocity v. Hence, the dominant contribution of the rainbow diagram above cancels the contribution of the tad-pole diagram, which we have seen to be independent of v as well. Then the thermal corrections in this case are very suppressed, of the order O(α^3 T^4/E^3), like in the case of the thermal bath at rest.
B. The T ∼ 1/r case

In the temperature regime T ∼ 1/r, the temperature is high enough so that its effects must be taken into account already in the matching between NRQED and pNRQED, namely it affects the potential. Therefore, several diagrams are modified by the presence of the temperature. However, like in the calculations for the thermal bath at rest, there are only four diagrams that give a relevant contribution. All other diagrams give contributions that either vanish or can be shown to cancel out by local field redefinitions. We schematically analyze the relevant diagrams below. Then, as a cross check, we compare the calculations for T ∼ 1/r in the limit of low temperature with the results that we derived in the previous section for E ∼ T in the limit of high temperature, and we find agreement.
The four diagrams that must be taken into account in the matching procedure between NRQED and pNRQED are the following:

• The tad-pole diagram, which comes from the D^2/(2m_e) term in the NRQED Lagrangian and gives the contribution This diagram is quite similar to the diagram we evaluated in the previous section, but now the solid line corresponds to the electron field instead of the hydrogen atom. As already discussed, this diagram is independent of the velocity v.
• The rainbow diagram is given by where the solid line corresponds to the electron field and the wavy line corresponds to the photon. In order to have a consistent EFT we have to expand and therefore the contribution of the rainbow diagram can be written as where and For the time being we do not evaluate these integrals; as we shall clarify soon we only need to evaluate T ij .
• The thermal correction of the Coulomb potential corresponds to the diagram where the tensor T ij is the same as in Eq. (40) and p and p ′ are the momenta of the incoming and outgoing electrons, respectively. The solid thick line here corresponds to the ion propagator and the solid thin line is the electron propagator.
• The last diagram to consider is the relativistic tad-pole, which is the same as the previous tad-pole diagram, but now the vertex comes from the D^4/(8m_e^3) term in the NRQED Lagrangian. The contribution of this diagram is given by where the tensor R_ij is defined in Eq. (41). Notice that the term on the right-hand side of Eq. (39) cancels the corresponding contribution of this diagram. Therefore, the sum of the rainbow diagram and of the relativistic tad-pole diagram is independent of R_ij.
• The remaining non-vanishing diagrams give contributions analogous to the ones of Eqs. (36), (37) and (42) of [14]. Their net effect can be shown to be zero by local field redefinitions, like in the case of the thermal bath at rest.

Now, let us consider how these diagrams combine. In particular, we would like to obtain the pNRQED Lagrangian that matches all these terms. By inspection we find that the pNRQED Lagrangian is given by which, as already noted, does not depend on R_ij. We evaluate T_ij in a similar way as was done for I_ij in the T ∼ E case. The result in the MS-bar subtraction scheme is where we have decomposed T_ij in terms of δ_ij and v_i v_j/v^2, and ρ(v) is defined in Eq. (28). Upon making a local field redefinition to remove the term with a time derivative in (44), one can identify the corrections to the potential, in a similar way as was done in Eq. (45) of [14]. The final form of the thermal correction to the pNRQED Lagrangian reads where V_c(r) is the Coulomb potential at vanishing temperature. With simple modifications this Lagrangian can be put in a form so that we have an atom field instead of an electron and a nucleus field (see [14] for more details).

Now that we have computed the corrections to the pNRQED Lagrangian for the case T ∼ 1/r, we also need to compute the contribution from the ultrasoft scale with this Lagrangian. These contributions can be computed from the tad-pole and the rainbow diagrams as in Eqs. (9) and (10), where the Bose-Einstein distribution function can be expanded because we are now in the case T ≫ E, and therefore Upon substituting the expansion above in Eq. (9) we find that the contribution of the tad-pole diagram vanishes in dimensional regularization, because it has no scales (it is independent of the external momentum). The rainbow diagram gives a contribution similar to the one in Eq. (10), with the replacement I_ij → J_ij, where and In order to be consistent with (45), the MS-bar scheme has also been used here to remove the UV divergences. The thermal corrections to the energy levels and to the decay width coming from the soft and the ultrasoft scales in the case T ∼ 1/r are respectively given by and Note that the µ dependence in the correction to the Darwin term in (50) is canceled out by the µ dependence of J_ij(q) in Eq. (49). Note also that, upon substituting (48) in (51), the expression for the decay width reduces to that of (32). Furthermore, the thermal corrections to the binding energy above in the limit of low temperature coincide with (24) and (31). Therefore, the limit of low temperature in the case T ∼ 1/r agrees with the limit of high temperature in the T ∼ E case.
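The expansion of the Bose-Einstein distribution invoked above for T ≫ E is standard and can be checked symbolically. The sketch below assumes the usual form n_B(ω) = 1/(e^{ω/T} − 1) (the expanded expression itself is omitted in the text above) and reproduces the first few terms.

```python
# Symbolic check of the T >> omega expansion of the Bose-Einstein distribution.
import sympy as sp

w, T = sp.symbols('w T', positive=True)
nB = 1 / (sp.exp(w / T) - 1)
print(sp.series(nB, w, 0, 4))
# -> T/w - 1/2 + w/(12*T) - w**3/(720*T**3) + O(w**4)
```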
IV. HYDROGEN ATOM AT RELATIVISTIC VELOCITIES
When the bound state moves at high speed with respect to the thermal bath one has to take into account that the effective temperature measured by the bound state in the forward direction is blueshifted and that the effective temperature in the backward direction is redshifted. In particular one has that T + ≫ T − , and therefore T + and T − , which we have defined in Eq. (8), are two well separated energy scales that must be properly taken into account in the EFT. In particular it is possible that T + and T − are in two distinct energy ranges. In this case the analysis of the system differs considerably with respect to the case of the thermal bath at rest. We shall study two different situations of this sort: the first one corresponds to the case T + ∼ 1/r ≫ T − ≫ E and the second one to the case T + ∼ m e ≫ 1/r ≫ T − ≫ E. Recall that we are assuming that T ≪ m e and hence, as we have already stressed in Section II, we can neglect positrons and electrons in the thermal bath.
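To give a feel for how fast the two scales separate, the sketch below evaluates the effective temperatures assuming the standard Doppler-shifted definitions T_± = T√((1 ± v)/(1 ∓ v)); the precise definition is the paper's Eq. (8), which is not reproduced here.

```python
# Illustration of the scale separation between the blueshifted and redshifted
# effective temperatures (standard Doppler factors assumed; see Eq. (8) of the text).
import numpy as np

T = 1.0
for v in (0.5, 0.9, 0.99, 0.999):
    Tp = T * np.sqrt((1 + v) / (1 - v))
    Tm = T * np.sqrt((1 - v) / (1 + v))
    print(f"v = {v:5.3f}:  T+ = {Tp:8.3f} T   T- = {Tm:6.4f} T   T+/T- = {Tp / Tm:9.1f}")
```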
Since T_+ ≪ m_e we can use NRQED at vanishing temperature as the starting point, but we would like to integrate out also the 1/r scale in order to construct the pNRQED for this situation. As in the soft collinear effective theory, it is convenient to split the photon field A_µ into two different components: a collinear one A^col_µ that takes into account photons with k_+ ∼ T_+ and k_- ∼ T_-, and an ultrasoft one A^us_µ that takes into account photons with k_+ ∼ k_- ∼ T_- (or smaller). Notice that both types of photons have virtualities λ that fulfill (1/r)^2 ≫ λ; this means that neither of these two types of photons has to be integrated out in the matching between NRQED and pNRQED. We carry out the matching below using an electron field and a nucleus field in pNRQED (rather than an atom field), as was done in [21].
Matching between NRQED and pNRQED for collinear photons
In the matching procedure between NRQED and pNRQED we have to determine the effective vertex between non-relativistic electrons and collinear photons. In pNRQED the interaction of non-relativistic electrons with collinear photons cannot be given by the minimal coupling diagram shown in Fig. 1 for kinematical reasons, as we argue next. Let us call p the momentum of the incoming electron and k the momentum of the incoming collinear photon. The non-relativistic electron in pNRQED must have p^0 − p^2/(2m_e) ≪ 1/r, because m_e/r is precisely the typical virtuality of the electrons that have been integrated out in the matching between NRQED and pNRQED. However, if the photon is collinear, namely k^0 ∼ 1/r, then the virtuality of the outgoing electron is of order m_e/r, in contradiction with the fact that electrons with such a virtuality do not appear in pNRQED. Therefore, the interaction between electrons and collinear photons in pNRQED is given at leading order by 4-point processes such as the one presented in Fig. 2. The matching procedure is outlined in Eq. (52): the NRQED diagrams on the left-hand side have to match the pNRQED diagram on the right-hand side. This part of the pNRQED Lagrangian involving collinear photons is given at the required order in Appendix C.1. The part of the pNRQED Lagrangian involving ultrasoft photons only is the same as in the case with the thermal bath at rest.
Computation using pNRQED
Since we have determined the pNRQED Lagrangian, we can now calculate the contribution of thermal collinear photons to the self-energy of the hydrogen atom [from the part of the Lagrangian reported in Eq. (C1)]. We shall use the Coulomb gauge and therefore only spatial components contribute. Moreover, the condition ∇ · A = 0 for collinear photons means that A_3(x) ≪ A_⊥(x) and we only need the first and second terms of Eq. (C1). The contribution of collinear photons to the self-energy of the hydrogen atom is given by where the zig-zag line corresponds to a collinear photon and the solid line represents the hydrogen atom. As already noticed in the discussion of Eq. (9), the tad-pole diagram is not sensitive to the relative motion between the bound state and the thermal bath, because no external momenta enter into the loop. The corresponding shift to the energy levels is given by As we shall see soon, the contribution coming from thermal collinear photons is the dominant one; however, it does not depend on the quantum numbers of the state and therefore it cannot be seen in the emission spectra. There are two one-loop contributions of ultrasoft photons to the self-energy of the hydrogen atom. The tad-pole contribution is given by and we find that in dimensional regularization this integral vanishes. The second contribution is due to the rainbow diagram where There are three different integration regions that contribute to this integral (see Appendix B):

• the region with k_+, k_- ∼ T_-,

• the region with k_+, k_- ∼ q, and

• the region with k_+ ∼ q and k_- ∼ q(T_-/T_+).
Note that k + /T + ≪ 1 in all the regions. Evaluating the contributions of each region (see Appendix B) and putting them together we find that where P s ij and P p ij are defined in Eq. (13) and and Note that in the v → 1 limit, the coefficients a and b are equal to the coefficients A and B reported, respectively, in Eqs. (16), (17) and Eqs. (18), (19) in the limit q ≪ T − . Therefore, in the same limit, we have that K ij → I ij . The thermal corrections to the energy and decay widths due to the ultrasoft photons can be written as For S-wave states we obtain the following expressions for the energy shifts: and decay widths For states with non-vanishing angular momentum l we find that and The total thermal width is given by the ultrasoft contribution reported in Eq. (66), for S-wave states, or in Eq. (68) for states with non-vanishing angular momentum. In order to obtain the total thermal energy shift the collinear contribution given in Eq. (54) must be added to the ultrasoft contributions given in Eq. (65) for S-wave states, or in Eq. (67) for states with non-vanishing angular momentum. Note that the latter turns out to be totally independent of the velocity. Note that the decay widths (66) and (68) are decreasing functions of the velocity, like in the moderate velocity case. Furthermore, the results above agree with those of Sec. III A in the v → 1 limit.
We shall now consider a highly relativistic hydrogen atom immersed in a thermal bath at a temperature T ∼ 1/r. We shall assume that the relative velocity between the hydrogen atom and the thermal bath is such that the temperature in the forward direction is blueshifted to the electron mass, that is T_+ ∼ m_e, while in the backward direction the effective temperature is redshifted to 1/r ≫ T_- ≫ E. The effective temperatures T_+ and T_- are now very well separated scales; therefore this situation is especially suitable for the use of EFT.
In the construction of the effective theory we start with QED at vanishing temperature, because m e ≫ T . However, the existence of collinear photons must be taken into account in the matching between QED and NRQED. On this aspect the matching procedure is akin to the one in SCET. We shall schematically describe this matching procedure below. Regarding collinear photons, they have a virtuality of order (1/r) 2 , and they must be integrated out when matching from NRQED to pNRQED. Finally, the contributions of ultrasoft photons are calculated in pNRQED. In this case pNRQED does not include collinear photons, which already have been integrated out. The interaction with ultrasoft photons is exactly the same as in the previous case. Their contribution is given by exactly the same diagram as in Eq. (56) and therefore one obtains the energy shifts reported in Eq. (65), for S-wave states, and in Eq. (67), for states with non-vanishing angular momentum, and the widths are given by the same expressions reported in Eq. (66), for S-wave states, and in Eq. (68), for states with non-vanishing angular momentum.
Matching between QED and NRQED for collinear photons
In QED, when a non-relativistic electron absorbs a collinear photon, it turns into a relativistic electron. This means that the NRQED Lagrangian cannot have this kind of 3-body interaction (a similar argument was used in Section IV A 1). Hence, in NRQED the interaction with non-relativistic electrons has to be a 4-body interaction.
In this case, there is the additional complication that on the QED side we have bispinors, while in NRQED we have only spinors. This can be solved using the non-relativistic projector. The matching equation takes the form where Z is the wave function renormalization of NRQED, which depends quadratically on the momentum. The result for this matching is given in Appendix C.2.
To match NRQED with pNRQED we have to integrate out collinear photons. The contribution of collinear photons to the self-energy is given by where we have used the Lagrangian of Eq. (C6) in the Coulomb gauge. Note that in this gauge thermal effects are only due to the spatial components of A_µ and A_3 ≪ A_⊥, and we only need the terms proportional to c_1, c_2, c_11, and c_13 reported in Appendix C.2. This diagram was already evaluated in the case of the thermal bath at rest. Since tad-pole diagrams are unaffected by the motion of the thermal bath, the result remains the same, and the only effect is a constant energy shift in the pNRQED Lagrangian, which amounts to the following shift of the effective mass of the electron:

Regarding heavy quarks, the net effect of collinear gluons is a very tiny shift of the heavy quark mass by an amount of δm_Q ∼ α_s T^2/m_Q, which is irrelevant for the stability analysis of heavy quarkonia. Much more important for the stability analysis is how the Coulomb potential changes at high temperatures, and this will be studied in the following section.
V. THE STATIC POTENTIAL OF MUONIC HYDROGEN IN THE RANGE T ≫ 1/r
In muonic hydrogen the proton is orbited by a muon and the bound state consists of two heavy particles. Since the muon is about 207 times heavier than the electron, muonic hydrogen is much more compact than standard hydrogen and the energy levels of the system have about 207 times the energy of standard hydrogen. Muonic hydrogen is investigated in order to have high precision measurements of the proton properties [27], mainly by Lamb shift measurements [28]. The study of muonic atoms is also important for muon-catalyzed fusion processes [29], which are under experimental investigation at RIKEN [30] and Star Scientific [31].
The study of muonic hydrogen in a thermal bath with m µ ≫ T ≫ m e is akin to the study of HQ states in the quark-gluon plasma with m Q ≫ T ≫ Λ QCD ≫ m q , where m Q is the mass of the heavy quark and m q (q = u, d, s) is the mass of light quarks [23]. The reason is that in both cases the temperature is on the one hand much smaller than the masses of the particles that form the bound state and on the other hand much larger than the mass of the particles in the thermal bath. There are thermally excited electrons and positrons in the QED plasma and thermally excited light quarks in the QGP which can modify the Coulomb interaction between the two heavy particles. Actually, we shall assume that the temperature is high enough so that we can neglect the masses of the light particles of the plasma.
Apart from the modification of the static Coulomb potential, the propagation of a particle in the medium produces a fluctuation of the induced potential which leads to a variation in the density of the plasma. If the plasma behaves as a liquid the moving bound state can produce a wake. These effects were first analyzed in condensed matter physics (see e.g. [1]) and then studied in the context of heavy-ion collisions [4][5][6][7][8][9][10] and in strongly coupled N = 4 supersymmetric Yang-Mills plasmas [32][33][34][35].
In the present section we study the modifications to the leading-order potential between two heavy sources in relative motion with respect to the thermal bath at a velocity v. We evaluate the potential in the HTL approximation assuming that the temperature of the plasma is T ≫ 1/r. The real part of the potential is screened by massless particles loops, both in QED and QCD (the only difference between QED and QCD in our results is, apart from trivial color factors, the value of the Debye mass m D ). The real part of the potential between a quark and an antiquark moving in a thermal bath was first computed in the HTL approximation in [4] and then more recently in [8]. Recent perturbative calculations [17] at vanishing velocity have pointed out the importance of the imaginary part of the potential. So far its effect has not been taken into account in a moving thermal bath.
In the Coulomb gauge the potential is obtained by the Fourier transform of the longitudinal photon propagator, for k 0 ≪ |k|, where ∆ R (k) and ∆ A (k) are respectively the retarded and the advanced propagators and ∆ S (k) is the symmetric propagator. For a bound state comoving with the thermal bath, it is enough to compute the retarded self-energy in the rest frame of the thermal bath and then using and one can determine the potential. In the expression above, f (k 0 , T ) is the distribution function of the longitudinal photons in the thermal bath. However, the last relation does not hold for a bound state moving through a thermal bath [25], and must be substituted by the following one: where u µ = γ(1, v) is the 4-velocity. Thus, in order to determine the propagator one has to evaluate the self-energies Π R (k, u) and Π S (k, u). The retarded self-energy Π R (k, u) was computed in [4] and here we only show the result in the reference frame where the bound state is at rest 2 , θ is the angle between k and v, and Regarding the symmetric self-energy of the longitudinal photons Π S (k, u), the computation is similar to the one done for the retarded self-energy in [4]. Consider the full symmetric self-energy tensor Π s µν . It obeys the Ward identity and is symmetric, Then, it must have the following structure: where Π 1 and Π 2 are two scalars and is the component of u µ orthogonal to k µ . Since Π s µν is a tensor, we can determine the values of Π 1 and Π 2 in any reference frame, and it is convenient to consider the comoving frame, i.e. the frame in which the thermal bath is at rest. It is useful to define the tensor where is the component of k µ orthogonal to u µ , and k 2 ⊥ = k 2 − (k · u) 2 . It is clear that P µν u ν = P µν k ν = 0, and therefore P µν projects four-vectors in the direction orthogonal to u µ and k µ . By means of Eqs. (82) and (84) it is easy to show that P µν u ν ⊥ = P µν k ν ⊥ = 0, as well. Then we have that and in the comoving frame one has that the only nonvanishing components of P µν are and then in this frame which is precisely the transverse component of the photon (gluon) self-energy Π S T in the Coulomb gauge. This quantity has been computed for vanishing velocity in [25] and we find that The scalar quantity u µ u ν Π s µν has a simple interpretation in the comoving frame, where it turns out to be given by Π s 00 . Then we have that and we can now compute the symmetric self-energy in the frame where the muonic hydrogen is at rest and the thermal bath is moving with a velocity v. In this frame we have that We have all the necessary quantities to construct ∆ S using Eq. (75) and then the propagator in Eq. (72). When the limit v → 0 is taken, we obtain for ∆ S the same result as in refs. [14,15,17].
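The defining properties of the projector used in this decomposition can be verified numerically. The sketch below uses the explicit form P^{µν} = g^{µν} − u^µ u^ν − k_⊥^µ k_⊥^ν / k_⊥^2, which is one standard choice consistent with the stated properties (the paper's own definition sits in the omitted equation), and checks that it annihilates both u and k and reduces to the spatial transverse projector in the comoving frame.

```python
# Numerical sanity check of the transverse projector used in the self-energy
# decomposition (explicit form assumed here; see the lead-in text).
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])            # metric (+,-,-,-)
def dot(a, b): return a @ g @ b

v = 0.6
gamma = 1.0 / np.sqrt(1.0 - v**2)
u = gamma * np.array([1.0, 0.0, 0.0, v])         # bound-state 4-velocity, u.u = 1
k = np.array([0.3, 0.7, -0.2, 1.1])              # generic photon 4-momentum

k_perp = k - dot(k, u) * u                       # component of k orthogonal to u
k_perp2 = dot(k_perp, k_perp)                    # = k^2 - (k.u)^2
P = g - np.outer(u, u) - np.outer(k_perp, k_perp) / k_perp2
print("P.u = 0:", np.allclose(P @ g @ u, 0.0), "  P.k = 0:", np.allclose(P @ g @ k, 0.0))

# Comoving frame (u at rest): only spatial components survive, and they reduce
# to minus the transverse spatial projector delta_ij - khat_i khat_j.
u0 = np.array([1.0, 0.0, 0.0, 0.0])
k0 = np.array([0.4, 0.3, -0.5, 0.8])
kp0 = k0 - dot(k0, u0) * u0
P0 = g - np.outer(u0, u0) - np.outer(kp0, kp0) / dot(kp0, kp0)
khat = k0[1:] / np.linalg.norm(k0[1:])
print("comoving reduction:", np.allclose(P0[1:, 1:], -(np.eye(3) - np.outer(khat, khat))))
```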
In the previous discussion we have not distinguished between the case of moderate velocities and the case of relativistic velocities, as we did in the hydrogen atom calculations. This is motivated by the fact that the results found in Section IV are identical to the ones that can be deduced by taking the v → 1 limit of the results of Section III. However, it is interesting to sketch how the computation of ∆ 11 would be carried out in light-cone coordinates for v ∼ 1. One would start with the NRQED Lagrangian that includes the interaction with collinear photons of Eq. (C6). Since the virtuality of the collinear photons is of order T 2 , and T ≫ 1/r, they can be integrated out before evaluating the potential. At leading order, this gives rise to an energy shift similar to the one reported in Eq. (71). In the light sector of the NRQED Lagrangian there are also interactions between soft photons and collinear electrons and integrating out collinear electrons one obtains the HTL Lagrangian. If the scale T − is much smaller than 1/r one should consider its effects in the ultrasoft photons. However, these photons can only give subleading contributions by means of loop corrections. From now on we consider that the distinction between moderate and relativistic velocities is not essential and will not be done.
From the Fourier transform of the ∆ 11 propagator we have determined the real and imaginary parts of the potential in the HTL approximation. The potential is anisotropic, and in Fig. 3 we display the plots of the real (upper panels) and imaginary (lower panels) part of the potential for v = 0, v = 0.55, and v = 0.99 respectively. We consider two directions: the first is along the direction of movement of the thermal bath (right panels) and the second one is along the direction orthogonal to the thermal bath (left panels). We plot only positive values of r because the potential is symmetric for r → −r. We normalize the real part of the potential to αm D , which describes the typical strength of the potential in the v = 0 case. The imaginary part of the potential is normalized to αT . With these normalizations the displayed shapes hold both for muonic hydrogen and for heavy quarkonium.
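For reference, the v = 0 limit that sets these normalizations can be evaluated directly. The sketch below uses the standard HTL expressions from the literature for a static pair, Re V(r) = −α e^{−m_D r}/r (up to an r-independent constant) and Im V(r) = −α T φ(m_D r) with φ(x) = 2∫_0^∞ dz z/(z^2+1)^2 [1 − sin(zx)/(zx)]; these formulas are not taken from the omitted equations above and serve only as a cross-check of the quoted normalizations.

```python
# Evaluate the standard v = 0 HTL potential used as the reference normalization.
import numpy as np
from scipy.integrate import quad

def phi(x):
    """phi(x) = 2*int_0^inf dz z/(z^2+1)^2 [1 - sin(zx)/(zx)]; phi(0)=0, phi(inf)=1."""
    integrand = lambda z: 2.0 * z / (z**2 + 1.0)**2 * (1.0 - np.sinc(z * x / np.pi))
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val

alpha, mD, T = 1.0 / 137.0, 1.0, 1.0
for r in (0.1, 0.5, 1.0, 2.0, 5.0):
    re_v = -alpha * np.exp(-mD * r) / r          # screened (Yukawa) real part
    im_v = -alpha * T * phi(mD * r)              # Landau-damping imaginary part
    print(f"r*mD = {r:4.1f}:  Re V/(alpha*mD) = {re_v / (alpha * mD):8.3f}   "
          f"Im V/(alpha*T) = {im_v / (alpha * T):7.4f}")
```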
Regarding the real part, we observe that for v ∼ 0.55 it is very similar (in fact, the curves overlap to a large extent) to the real part in the v = 0 case, being Debye screened at a distance of order m D and not very asymmetric. For v ∼ 1, however, although the real part of the potential remains Debye screened at roughly the same distance, it develops a rather large anisotropy. Indeed, an oscillation is observed in the direction of motion, which leads to the formation of a wake in the plasma.
Concerning the imaginary part of the potential for v ∼ 0.55, it is not very asymmetric and remains very similar to the v = 0 case. It monotonically increases and keeps the same pattern until v ∼ 0.9. From that velocity on the imaginary part decreases and the anisotropy grows (see the v = 0.99 curve). In the direction parallel to v one has that there is a mild increase with respect to the v = 0 case for r ≃ 4m D , and an oscillatory behavior at larger r is also displayed. In the direction orthogonal to v there is an enhancement of the potential at short range and a decrease at large distances. Note that the imaginary part of the potential vanishes in the origin for any value of v and in any direction.
In Fig. 4 we plot the contour lines for the real and imaginary parts of the two-body potentials between two heavy particles with opposite charges, for various values of the velocity. We focus on the short distance regime (the normalization for the real part is slightly different from the one in Fig. 3 because we have inverted the sign in order to match the normalization of [4]), because the distances relevant for dissociation are in the range r m_D ≲ 1 [14]. Various plots of the contour lines of the real part of the two-body potential were also shown in [4] (see also [8]) and it can be seen that we obtained exactly the same results. The contour lines for the imaginary part of the two-body potential are reported for the first time here, and we observe that an important anisotropy exists even at short distances.
Let us next estimate the dissociation temperature in a way similar to refs. [14,23]. If we assume that the typical momentum transfer k is larger than the velocity-dependent screening mass m_D^2(v, θ) ∼ |Π_R(k, u)|, we obtain that at k ∼ e finite for v close to 1. This means that the typical k for which the real and imaginary parts of the potential have the same size is smaller than the screening mass. Since a screened potential only supports bound states of a typical k larger than m_D(v, θ), we conclude that at relativistic velocities, unlike the case of moderate velocities, the dissociation occurs due to screening (i.e. at the scale T d ∼ m e e), as originally proposed by Matsui and Satz [2], rather than due to Landau damping [14,15,17].

This can also be qualitatively understood from our plots. For v large and increasing, we see from Fig. 3 that the real part of the potential increases whereas the imaginary part decreases. Therefore, from some v on, the real part of the potential dominates over the imaginary part and one can find the approximate wave functions of the system by solving a standard Schrödinger equation with a real potential. The decay width may then be calculated in perturbation theory by sandwiching the imaginary part of the potential between those wave functions. The wave functions of bound states go to a vanishing value at the distance where the real part of the potential becomes flat. From Fig. 4 it is also clear that the real part of the potential at short distances becomes steeper at increasing v. On the one hand this implies that no bound state exists from a certain velocity on. On the other hand it implies that when bound states still exist their wave functions are increasingly localized close to the origin. Since the imaginary part of the potential goes to zero at the origin, it follows that the decay width of such states is also going to zero at increasing v.

We can estimate the critical velocity v_c at which screening overtakes Landau damping as the dominant mechanism for dissociation by equating eT to e^{2/3} T (1 − v^2)^{1/2} above. We obtain v_c ∼ (1 − a e^{2/3})^{1/2}, where a is a numerical factor of order one. A quantitative study of all these issues may be carried out by numerically solving the Schrödinger equation with the full (complex) potential, for instance along the lines of [36][37][38]. This is, however, beyond the scope of this paper.
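As a rough numerical illustration of the estimate just quoted, the sketch below evaluates v_c ∼ (1 − a e^{2/3})^{1/2} taking the undetermined order-one factor a = 1 and the QED value e = √(4πα); both choices are purely illustrative.

```python
# Rough numerical illustration of the critical-velocity estimate quoted above.
import numpy as np

alpha = 1.0 / 137.0
e = np.sqrt(4.0 * np.pi * alpha)    # QED coupling, illustrative
a = 1.0                             # undetermined order-one factor, set to 1
v_c = np.sqrt(1.0 - a * e**(2.0 / 3.0))
print(f"e = {e:.3f},  e^(2/3) = {e**(2.0 / 3.0):.3f},  v_c ~ {v_c:.2f}")
```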
Regarding the real part of the potential, we find it interesting to compare the results reported in Fig. 3 with the recent results obtained for super Yang-Mills theory using AdS/CFT [34]. For this theory it was stated that the potential could be approximated with a Yukawa potential and the dependence on the velocity encoded in a screening length that depends on v and θ as follows: where h(v, θ) is a function that is almost constant for any v and θ. The expression above does not give a good approximation of the potential in the HTL approximation. In particular the Debye screening for v = 0.99 is strongly dependent on the angle θ. If we try to fit the exponential behavior of the longitudinal and transverse directions, we find a screening length which is about a factor 2 larger in the longitudinal direction with respect to the transverse direction. This can also be inferred from the fact that the scale of m 2 D (v, θ) must be given by |Π R (k, u)|. We obtain in the case v → 1 Be aware that the θ above is the angle between the velocity and the momentum transfer, whereas the θ in (91) is the angle between the velocity and the relative position. In any case, the expression (92) shows that at ultrarelativistic velocities a strong anisotropy for real space potential is expected, as confirmed by our figures and discussed above.
Regarding the oscillatory part of the potential one might wonder whether it is due to the weak coupling approximation. However, as shown in [10] for the potential produced by a single charge, in the HTL resummation approach the oscillations of the potential are larger than in the HTL approximation. Moreover, one would naively expect that with increasing coupling the wakes should be larger than in the weak coupling approximation.
VI. CONCLUSIONS
The EFT description of bound states in a thermal medium has several interesting aspects. When the bound state moves with a moderate speed with respect to the medium the resulting EFT is quite similar to the one developed for the bound state at rest. We have taken into account the suitable modifications in Section III, for the hydrogen atom in the cases T ≪ 1/r and T ∼ 1/r. However, when the speed is close to 1, one has to consider two well separated scales, T_+ and T_-, defined in Eq. (8), and in the corresponding EFT one has collinear as well as soft degrees of freedom. The effective temperatures T_+ and T_- can be in two different energy ranges and in Section IV we have considered two specific cases: the first one corresponds to T_+ ∼ 1/r ≫ T_- ≫ E and the second one corresponds to T_+ ∼ m_e ≫ 1/r ≫ T_- ≫ E. Note that in this case large logarithms of T_-/T_+ appear in the calculation. The factorized results displayed in Appendix B may be useful for a resummation of these large logarithms. It is reassuring that our results for moderate velocities are able to reproduce the ones obtained for the v ∼ 1 case. For all the cases above we observe that the thermal decay width monotonically decreases with the velocity. This means that the faster the bound state moves across the thermal bath the more stable it becomes.
Finally, in Section V we have considered the case T ≫ 1/r allowing for light fermion pairs in the thermal bath. In atomic physics this state could be the muonic hydrogen in a thermal bath of electrons and positrons, while in heavy-ion collisions it may represent heavy quarkonia in the quark-gluon plasma at very high temperatures. We have determined how the imaginary and real component of the two-body potential are modified for nonvanishing velocities of the bound state with respect to the medium. Regarding the real part of the potential we have reproduced known results, and extended them to higher speeds. The imaginary part has been calculated for the first time. Its behavior is similar to the one determined for the thermal bath at rest for moderate velocities, but it tends to zero at velocities close to 1. This implies that Landau damping [14,15,17] is not the relevant mechanism for dissociation of bound states from a certain critical velocity v c on, which has been estimated in the previous section. Screening, as originally proposed by Matsui and Satz [2], becomes then the relevant mechanism. Our results for the thermal decay width disagree with the qualitative estimate of ref. [39], and with the more quantitative one of ref. [40]. We believe that the main reason for the discrepancy is due to the fact that the velocity dependence of the interaction is not properly taken into account in those works. Note that our results for the imaginary part depend crucially on the use of the correct non-equilibrium expression in Eq. (75), which leads to Eq. (90).
In the present paper we have paved the way for a more detailed study of the propagation of bound states in a thermal medium. We have assumed that the medium is a weakly coupled plasma, moving homogeneously and at a constant temperature, therefore our study needs a number of refinements to be realistically applied to HQ states in heavy-ion collisions. In that case, one should consider the expansion and cooling of the thermal medium, as well as possible anisotropies [41][42][43][44]. In any case, we expect that the qualitative features we observe, namely that the decay width decreases with increasing velocity, and hence that Landau damping ceases to be the relevant mechanism for dissociation at a certain critical velocity, will remain true.
and for the real part ℜA(q) = 1 8π Using Eq. (15) and the above equations we obtain that the imaginary and real part of the coefficient B are respectively given by and ℜB(q) = 1 8π As a cross-check, in the v = 0 limit we find that the relation B = A D−1 is fulfilled, and combining this with the identity one recovers the results reported in [14].
Appendix B: Computation of the contribution from ultrasoft photons in section IV
In this appendix we compute the matrix elements of K ij defined in Eq. (56) in the various integration regions identified in Section IV.
The k+, k− ∼ T− region
The quantities a and b defined in Eq. (58) can be computed from K_ij as follows and for k_+, k_- ∼ T_- we find that The first term in the square brackets vanishes by symmetry considerations and we find that and

2. The case with k_+, k_- ∼ q

In this region we have that and it is useful to calculate separately the imaginary part of the integrals with the first and the second terms in the brackets. The reason is that the computation of the first term in dimensional regularization is technically difficult, while the second one is quite straightforward. We first compute the imaginary part of the term linear in T_- with a cut-off to separate the region k_+ ∼ q from the region with k_- ∼ q(T_-/T_+). Thus we consider a cut-off Λ such that q ≫ Λ ≫ q(T_-/T_+) and we obtain and Then, the remaining terms are computed in dimensional regularization. Summing all the terms we obtain that

3. The case with k_+ ∼ q, k_- ∼ q(T_-/T_+)

In this region we have that For simplicity, we compute the imaginary part using a cut-off (as we did in the previous subsection) and the real part using dimensional regularization. We obtain that the real and imaginary parts of the trace of K_ij are, respectively, given by where P stands for principal value. Moreover, we find that
|
2011-07-25T17:11:54.000Z
|
2011-05-06T00:00:00.000
|
{
"year": 2011,
"sha1": "f5e21f6e4e0d2b4d9a0ba9888988aef5fe77317b",
"oa_license": "CC0",
"oa_url": "http://diposit.ub.edu/dspace/bitstream/2445/131766/1/605164.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "40e56d093360c744605a57dd6e25112f55150997",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
248581262
|
pes2o/s2orc
|
v3-fos-license
|
The Discrepancy between Actual Performance and Self-Awareness among Adolescents with Executive Function Deficits
Adolescents with executive function deficits (EFD) struggle to perform complex daily activities and have difficulty being self-aware of their performance. This study aimed to compare actual performance with self-awareness of performance among adolescents with EFD before and after a metacognitive intervention. The participants consisted of 41 adolescents aged 10 to 14 years, previously diagnosed with EFD. All performed the Children’s Cooking Task (CCT), and completed the Behavioral Rating Inventory Executive Function—Self-Report (BRIEF-SR) and the Self-Awareness of Performance Questionnaire. Significant positive differences were found in the time duration and the total number of errors from the CCT and three BRIEF-SR subscale scores before and after the intervention. No significant differences were found in self-awareness of performance. After a cognitive intervention, adolescents with EFD improved their performance of a learned skill, but their self-awareness of their performance remained unchanged. These results may imply that EFD inhibits self-awareness development, and that self-awareness may not depend on task performance, but, rather, is influenced by other external factors. The article reports the secondary analysis from the results of the Functional Individualized Therapy for Teenagers with Executive Deficits (FITTED) intervention on human participants.
Introduction
Self-awareness is a complex, higher-level cognitive function that reflects a person's ability to self-monitor, recognize and correct errors during a task [1]. It may also influence their ability to select appropriate task strategies [2]. Self-awareness develops gradually during childhood, beginning with the awareness of concrete aspects of behavior or physical characteristics, and graduating into more abstract concepts [3]. Reports have shown that children's self-awareness increases with age, consistent with developing cognitive and linguistic skills [4]. Children show increased performance awareness from a young age and can better identify positive and negative aspects of their actual performance [5]. However, understanding the consequences of cognitive limitations in recognizing a problem when it occurs, or predicting a future problem, only develops in adolescence and young adulthood [6].
Adolescents with executive function deficits (EFD) differ from their peers with typical development (TD) in their struggle to complete daily life tasks [7][8][9], especially when environmental changes require them to adjust their thinking and actions [10]. Adolescents with EFD are characterized by disorganization, forgetfulness, the inability to multitask proficiently, and limitations in their ability to self-regulate behavior insightfully [11]. In turn, adolescents with EFD cope with limitations in daily functioning at home (e.g., day-to-day organizing, planning, and shifting focus), at school (e.g., learning ability and prioritizing responsibilities), and in social environments (e.g., understanding social situations and making friends) [12,13]. The literature has indicated that adolescents with EFD also have self-awareness deficits [14,15]. A lack of self-awareness regarding their differences with TD peers may negatively affect this group of adolescents [7,15]. Having no other coherent explanation for their functional difficulties, they may develop misattributions and negative self-beliefs [16].
Typically, self-awareness of impairment is evaluated by comparing a participant's performance on neuropsychological tests and self-rating of cognitive performance [17,18]. Common impairment awareness tools include the Questionnaire of Executive Functioning [17], the Behavior Rating Inventory Executive Function (BRIEF), and its corresponding self-report, the BRIEF-SR [2,11].
Assessing self-awareness of performance is more complicated because it requires an evaluation of the discrepancy between actual and estimated performance during a specific activity [2]. Thus, its assessment cannot rely on self-reports alone. To the best of our knowledge, few studies have assessed self-awareness of performance among typical adolescents [2,19], let alone adolescents with EFD.
This study is a secondary analysis of data from a larger study on the effectiveness of a unique occupational therapy intervention, the Functional Individualized Therapy for Teenagers with Executive Deficits (FITTED), created to improve executive function in adolescents with EFD profiles. The FITTED is an 8-week, metacognitive, occupation-based program that aims to assist adolescents with EFD in improving performance and satisfaction with everyday life goals. An expanded explanation of that study can be found in [8].
This study compares the actual cooking task [20,21] performance with performance self-awareness before and after the task, prior to and following the FITTED intervention. We expected to find significant differences between pre- and postintervention scores in (a) actual performance of the cooking task, (b) self-awareness of the cooking task performance, and (c) self-awareness of EFD impairments.
Participants
Study participants were recruited through community advertisements aimed at young adolescents with and without difficulty in daily functioning. We excluded participants with known psychiatric, emotional, or autism spectrum disorders; physical disabilities; or neurological diseases. This study presents a secondary analysis that includes 41 young adolescents (10-14 years) with EFD profiles who participated in the FITTED intervention [7]. Participants were characterized as having an EFD profile if their parent-reported scores were above the normal range (65 or higher) on the BRIEF behavioral regulation index (BRI) or metacognition index (MI).
Procedure
The participating institution's Ethics Committee approved this study (253/13), and all the adolescents and their parents signed informed consent prior to participation. In the primary study, those adolescents who met the inclusion criteria for the FITTED intervention were invited to individual sessions to complete the cooking task, which an expert occupational therapist administered and scored. Figure 1 presents the study design. The participants completed the BRIEF-SR and performed the Children's Cooking Task (CCT) assessment pre-and postintervention. All participants completed the Self-Awareness of Performance Questionnaire (SAP-Q) before and after the CCT and FITTED intervention.
Instruments
BRIEF-SR
The BRIEF-SR [22] is a valid and reliable self-report instrument to assess executive function in 11-to 18-year-olds. Its 80 questions correlate to the BRIEF parent version in its four MI and four BRI subdomains. Adding the MI and BRI scores creates an overall global executive composite (GEC) score. Clinically significant t scores (M = 50 and SD = 10) are those that are 65 and above. The test-retest reliability of the BRI and MI were 0.84 and 0.87, respectively, and the internal reliability in the standardized sample was α = 0.80-0.98. The internal reliability of this study's entire scale was α = 0.95.
CCT
The CCT is a performance-based evaluation [20,21] developed to assess executive function and multitasking abilities. It has high internal consistency (α = 0.81), moderate test-retest reliability for the total number of errors (0.65), and moderate concurrent validity with the BRIEF. It has been validated in Hebrew [7].
In the CCT, each participant is asked to follow two easy recipes: chocolate cake and fruit cocktail. Ingredients, utensils, and six recipes are laid on a table with an instruction sheet that shows the name of the dish, an ingredients list with illustrations, and numbered preparation steps with illustrations. Tasks are timed (min), and scores are classified into two error levels: descriptive and neuropsychological (to assess executive function and multitasking abilities). According to the CCT manual [23], these levels determine the number of errors by error type (descriptive), without reference to how or why they occurred; total errors (neuropsychological) allow a description of the reasons why each error occurred to be added.
SAP-Q
This clinician-administered questionnaire is based on an instrument to assess the general awareness of performance [24,25], and is modified specifically for cooking performance tasks [26,27]. Before the task performance, the clinician asks participants three questions, which they rate from 1 (high estimation) to 5 (low estimation). These questions relate to performance ("How do you think you will do on the cooking task?"), expected difficulty ("Do you think you will have difficulty performing the cooking task?"), and estimated time ("How long do you think it will take you to perform the cooking task?"). After the task, participants are asked three more questions addressing the estimation of performance ("How do you think you did on the cooking task?"), satisfaction ("Are you satisfied with the way you performed the cooking task?"), and accuracy ("How accurately do you think you performed the cooking task?").
Data Analyses
The data were processed using SPSS 26. The sample was not normally distributed, so nonparametric tests were used. For the CCT and BRIEF-SR, Mann-Whitney tests were conducted to examine pre- and postintervention differences. Differences in the SAP-Q between the pre- and postintervention phases were analyzed using the Wilcoxon test for two related samples. Cohen's d [28] was calculated for effect size, where 0.10 was considered a small effect, 0.30 a medium effect, and 0.50 a large effect.
Assessing self-awareness of performance requires an evaluation of the discrepancy between actual and estimated performance during a specific activity. Thus, new variables were calculated from the estimations made before and after the CCT assessment: the gap between the time estimations given before and after the cooking task, and the gaps between each of these time estimations and the actual time duration of the task.
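As an illustration of how the discrepancy variables and the nonparametric comparisons described above could be computed, the following is a minimal sketch with synthetic data; all variable names and numbers are invented for the example and are not the study's data.

```python
# Minimal sketch with synthetic data (not the study's data): gap variables and
# the nonparametric pre/post comparisons named in the Data Analyses section.
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

rng = np.random.default_rng(0)
n = 41
est_before = rng.integers(10, 40, n).astype(float)   # estimated CCT duration before the task (min)
est_after = rng.integers(10, 40, n).astype(float)    # estimated CCT duration after the task (min)
actual = rng.integers(15, 45, n).astype(float)       # actual CCT duration (min)

# Discrepancy ("gap") variables between estimated and actual performance
gap_before_after = est_before - est_after
gap_before_actual = est_before - actual
gap_after_actual = est_after - actual

# Paired pre/post comparison of a SAP-Q item (Wilcoxon test for related samples)
sapq_pre = rng.integers(1, 6, n)
sapq_post = rng.integers(1, 6, n)
stat, p = wilcoxon(sapq_pre, sapq_post)
print(f"Wilcoxon: W = {stat:.1f}, p = {p:.3f}")

# Pre/post comparison of CCT total errors (Mann-Whitney, as reported in the study)
errors_pre = rng.poisson(20, n)
errors_post = rng.poisson(12, n)
u_stat, p = mannwhitneyu(errors_pre, errors_post)
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {p:.3f}")

# Cohen's d as a simple effect-size measure
d = (errors_pre.mean() - errors_post.mean()) / np.sqrt(
    (errors_pre.std(ddof=1)**2 + errors_post.std(ddof=1)**2) / 2.0)
print(f"Cohen's d = {d:.2f}")
```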
Pre-and Postintervention CCT Assessment Scores
As shown in Table 1, significant differences were found between the pre-and postintervention scores in the actual CCT performance, including a decreased time duration (Z = −4.30; p < 0.001) and a reduction in the total number of performance errors (Z = −4.93; p < 0.001). Table 2 shows the significant differences found between the pre-and postintervention scores, regarding self-awareness of impairment, as measured by the BRIEF-SR GEC (Z = −2.29; p = 0.20) and MI (Z = −2.81; p = 0.005). No significant differences were found in the BRI (Z = −1.42; p = 0.15). However, significant differences were found in aspects of the BRI clinical scales, specifically, emotional control (Z = −2.31; p = 0.02) and monitor (Z = −2.06; p = 0.04). From the MI indices, significant differences were found in planning (Z = −2.40; p = 0.002), organization of materials (Z = −2.38; p = 0.02), and task completion (Z = −3.37; p = 0.001).
Pre-and Postintervention SAP-Q Scores
Although significant differences were found in the CCT, which assesses actual performance, and in the BRIEF-SR questionnaire (Table 3), only two SAP-Q items presented significant differences between pre- and postintervention: estimation of performance (Z = −2.127; p = 0.03) and time estimation (Z = −2.00; p = 0.04). Moreover, no significant differences were found in the gap variables: the time estimation gap before and after the cooking task (Z = −0.28; p = 0.77), the gap between the time estimation before the task and the actual time duration (Z = −1.33; p = 0.18), and the gap between the time estimation after the task and the actual time duration (Z = −1.52; p = 0.13).
Discussion
This study emphasizes the discrepancy between actual performance and performance self-awareness among adolescents with EFD before and after a metacognitive intervention. As expected, significant differences were found in the cooking task assessment, indicating that the adolescents improved their performance greatly after completing the FITTED intervention. The participants reduced their time duration, total number of errors, and error types. Previous research on children with acquired brain injuries and severe dysexecutive syndrome [6] supports the improved task performance in our study.
Previous studies have not reported differences in self-awareness of EFD through the BRIEF-SR questionnaire [29,30]. However, in this study, the BRIEF-SR scores showed significant differences pre-and postintervention in five scales: emotional control and monitor from the BRI, and plan, organization of materials, and task completion from the MI. These differences could mean that the adolescents' self-awareness of their EFD did indeed change.
Features of the FITTED intervention supported the improvements in the CCT and in some BRIEF-SR scales. The FITTED intervention incorporates self-monitoring techniques with structured experience to assist adolescents in rediscovering themselves and redefining their knowledge of their strengths and weaknesses [8]. Such techniques may improve the adolescents' ability to inhibit, self-regulate, and then respond to and channel self-directed executive actions. After the intervention, the participants paid more attention to the recipes, collected information more efficiently, and inhibited actions and reactions before performing the steps. They also adhered to the task sequence, added fewer unnecessary actions, succeeded in estimating amounts, and needed less assistance, as expressed by the decreased number of questions they asked [7].
As such, we expected, but did not find, a significant improvement in performance self-awareness and not just in actual performance. This lack of change prompts the following questions: Why did the intervention not affect the adolescents' self-awareness of performance? Are they unaware of their ability to perform the task better? Do other components inhibit their ability to "see" and report their improvement?
Three potential explanations are suggested for these unexpected results. First, adolescents with EFD are described as having impaired performance in complex daily living activities, requiring more ongoing assistance from adults, needing substantially more time to complete tasks, and engaging in far more dangerous activities than their TD peers [7][8][9]. Those difficulties may cause adolescents to become more distanced from the feedback they receive. Their difficulty in executing inhibition, using memory efficiently, exercising mental flexibility, and exhibiting self-control may delay the development of self-awareness. These characteristics could lead them to pay little attention to feedback from the environment and, thus, to fail to integrate and update the self-knowledge necessary to develop self-awareness. Their neurological monitoring system, such as feedback, feedforward, and a comparative mechanism, may be damaged or impaired due to neurodevelopmental disorders or other health conditions that cause unawareness [31].
Second, adolescents are in a challenging period of development that includes comparing themselves to others while developing self-identity [32]. The combination of adolescence and living with EFD may affect their ability to cope, progress, and become more self-aware [33]. This may lead to various forms of unawareness, resulting from psychologically motivated denial [31]. This denial is a coping mechanism that people create as protection from a painful reality or from recognizing distressing aspects of themselves in the face of adversity [34]. Denial can prevent people from acquiring effective coping skills and developing realistic goals [35,36]. Adolescents' choices to deny their skills and challenges seem understandable and may serve as a protective strategy from personal failure [37,38].
A third explanation could be that adolescents with EFD profiles experience years of struggle, particularly in filling the gap between the external and their own environments [12]. Adults tend to misunderstand EFD performance issues among adolescents and refer to their externalizing behavior as lazy, lacking motivation, or willful misbehavior [39]. Those adolescents may receive harmful feedback, which may influence their self-awareness [31]. According to Toglia and Kirk [40], subjective cognitive abilities are based mainly on subjective feelings of effort and failure. These beliefs may impede their ability to develop healthy and adaptive self-awareness.
Conclusions
Actual performance and self-awareness of executive function impairment improved following the metacognitive intervention, but self-awareness of performance did not. Self-awareness of performance is not an automatically developed process; rather, it is a skill that requires nurturing and development [6]. Clinically, there is a need to consider self-awareness in the evaluation process. If viewed as a significant therapy goal, self-awareness can strengthen the ability of adolescents with EFD to self-monitor, recognize, and correct errors during a task, and select appropriate task strategies. Additionally, improving awareness of specific task performance may take longer than improving the actual performance. Thus, there is a need to train, practice, and build many experiences for children with EFD to help them develop increased self-awareness.
Theoretically, this study provides additional evidence highlighting this population's complexity. We found improved performance and achievement in daily function goals, but the adolescents' self-awareness of their performance stayed the same. These adolescents need continued follow-up, even after completing the treatment process. It may be assumed that their awareness is not always task-dependent, and more components are involved.
This study leaves unanswered questions and underscores the need for further research. We tested self-awareness using questions before and after performing a cooking performance task. It is crucial to examine the findings of other performance tasks related to adolescent daily functioning, such as writing, play, and social participation activities, to understand whether self-awareness of performance is task-dependent and consistent, even when performance has improved. Further, we analyzed self-awareness of performance in only one way. It is necessary to assess self-awareness of performance using different tools to verify the reliability of the self-awareness questionnaire. Moreover, other well-known factors that contribute to EFD, such as depression and anxiety, may not have been taken into account in the current study.
Follow-up studies should examine factors such as adolescent self-awareness over time, changes with age in adolescence as a variable, and results with and without therapeutic intervention, as well as referring to mental and emotional components that relate to adolescence, with tools such as the Behavior Assessment System for Children (BASC) [41]. Additional components related to the adolescent's environment, such as parental attitudes, educational frameworks, the adolescent's developmental and medical history, and emotional elements that may affect self-awareness, should be examined.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The datasets generated and/or analyzed during the current study are not publicly available due to ethical restrictions, but are available from the corresponding author on reasonable request.
Semiconductor-based electron flying qubits: review on recent progress accelerated by numerical modelling
The progress of charge manipulation in semiconductor-based nanoscale devices has opened up a novel route to realise a flying qubit with a single electron. In the present review, we introduce the concept of these electron flying qubits, discuss their most promising realisations and show how numerical simulations can be applied to accelerate experimental development cycles. Addressing the technological challenges of flying qubits that are currently faced by academia and quantum enterprises, we underline the relevance of interdisciplinary cooperation to move the emerging quantum industry forward. The review consists of two main sections: Pathways towards the electron flying qubit: We address three routes of single-electron transport in GaAs-based devices focusing on surface acoustic waves, hot-electron emission from quantum dot pumps and Levitons. For each approach, we discuss the latest experimental results and point out how numerical simulations facilitate engineering the electron flying qubit. Numerical modelling of quantum devices: We review the full stack of numerical simulations needed for fabrication of the flying qubits. Choosing appropriate models, we explain examples of basic quantum mechanical simulations in detail. We discuss applications of the open-source (KWANT) and commercial (nextnano) platforms for modelling the flying qubits. The discussion points out the great relevance of software tools for designing quantum devices tailored for efficient operation.
Introduction
Flying qubits are originally intended to serve as a communication link within a quantum computer [1] and represent a vital part of global road-maps towards secure data transmission -the so-called quantum internet [2]. Recently, in-flight manipulations of photon-number states (so-called Fock states) have shown that the flying qubit architecture can also be applied as a stand-alone quantum processing unit [3]. Owing to this progress on photonic quantum-computation approaches [4,5], flying qubits are typically associated with photons [6]. Employing so-called "time multiplexing" architectures, photonic quantum computing can in principle scale up to millions of qubits. The strongly probabilistic nature of photonic two-qubit gates renders the far-reaching photonic coherence however a double-edged sword making the realisation of photonic quantum computing challenging. Though, a flying-qubit architecture can also be built on the basis of other quantum systems such as the electron [7]. The charge of an electron causes Coulomb interaction with its electro-magnetic environment, which exposes its quantum properties to decoherence but enables well-controlled single-particle manipulations and multi-qubit coupling.
Historically, flying qubits stem from the research field of quantum optics which addresses -in opposition to wave optics -the granular nature of light. The need to control the ultimate grain of light -a single photon -led to the advent of the first single-photon source in 1974 [8] and over several decades, various promising approaches have been developed [9]. Prominent examples are deterministic single-photon sources based on quantum dots (QD) [10,11]. Since these photon emitters allow for efficient coupling to a nanophotonic cavity [12,13] and provide a high degree of indistinguishability [14,15] and brightness [16][17][18], they turned out to be highly suitable sources for quantum communication [6]. Yet, the extraction efficiencies of quantum dot based single-photon sources are not at the level needed to perform photonic quantum computing. Nevertheless, the demand for photonic components such as on-demand single-photon sources, quantum photonic processors and single-photon detectors fostered the emergence of start-ups such as Quandela, 1 QuiX, 2 or SingleQuantum, 3 to name a few. Photonic quantum computing is currently pursued by the start-up PsiQuantum 4 which uses heralded photons. Here, parametric down-conversion is used to create from a non-deterministic single-photon source a pair of photons serving as signal and idler. Measuring the idler photon with an additional single-photon detector ensures that indeed a single photon (signal) has been generated, however at the expense of additional hardware.
In order to implement photonic quantum computation, two-qubit gates are a necessity. As photons do not interact, operating two-qubit gates on physical qubits is very difficult. It is possible in theory to use non-linear effects such as Kerr effects, but these effects are so small that it is not possible in practice, at least in known media [5]. Another route is to use photon detectors and the associated projective measurement as a way to entangle photons. While photon detectors can indeed produce entanglement, they do so in a probabilistic way, i.e. depending on the result of the measurement, the remaining state will be entangled or not. However, such probabilistic gates cannot be used on a large quantum computer as the probability for the correct circuit to be applied will decay exponentially with the number of two-qubit gates. Protocols have been proposed (such as the Knill-Laflamme-Milburn protocol) [4] trying to mitigate this issue and to enhance the probability for the correct gate to be applied. This approach comes, however, along with significant cost since many (n ≫ 1) ancilla photonic qubits are required to guarantee that the correct gate is applied (with an error probability of the order of 1/n) [5]. From this point of view, the development of non-photonic approaches represents a promising and competitive avenue. For the sake of completeness, let us also point out a different approach to photonic quantum computing based on so-called squeezed states. Here the qubits are generated as a superposition of multiple photons in a light pulse. This approach is pursued by the start-up Xanadu. 5 It has conceptual similarity to the Leviton single-electron transport which will be presented below in Sect. 2.3.
As for photons, electron quantum optics started by probing the discrete nature of the electrons. The granularity of electrons was evidenced by the quantum fluctuations of the current [19] -the so-called shot noise -similar to photon noise [20,21]. The first experiments mimicking textbook experiments of quantum optics, but with electrons, date back to 1999 with electronic Hanbury Brown and Twiss experiments [22][23][24]. These achievements were succeeded by the realisation of the first electronic Mach-Zehnder interferometer [25], a quantum device that nicely reveals the wave nature of the electron and which is a key tool for qubit manipulation. All of these pioneering experiments that fostered the idea of electronic quantum control for computing applications have however been performed applying a DC current -meaning a continuous stream of billions of electrons. Only in 2007 did the invention of the first single-electron sources [26] open up the possibility of performing electron-quantum-optics experiments at the single-particle level. This achievement triggered tremendous progress on electron quantum optics over the last two decades bringing various single-electron sources [26][27][28][29][30] that now reach emission efficiencies larger than 99% [30][31][32][33][34] -a value far superior to the latest single-photon sources [11,16,18,35,36]. Besides emission, single-electron detection has also significantly progressed. Whenever a single electron is captured on a sufficiently long timescale -of the order of a microsecond or more -detection efficiencies well above 99% are also achieved [31,32]. The efficient control on the single-particle level highlights the large potential of exploiting single flying electrons for quantum applications.
From measurements on charge qubits in stationary double quantum dots [37][38][39][40], the coherence time of electron flying qubits is expected to be of the order of a few nanoseconds. Decoherence is, thus, the major obstacle for quantum implementations with single flying electrons. The young research field of electron quantum optics is, however, constantly producing new findings bringing the coherence properties of flying electrons closer to macroscopic scales [41][42][43][44]. From the quantum-computation perspective, the central figure of merit is the number of operations that can be performed within the coherence time. At present, this number is about 1000 for leading approaches such as superconducting qubits, trapped ions, or spin qubits [45][46][47][48][49]. In order to achieve such operation fidelity with electron flying qubits, a time control at the picosecond scale is required. Such ultrafast in-flight manipulation is presently being pursued in the FET-Open project UltraFastNano [50].
The EU project UltraFastNano 6 aims to advance ultrafast operations in nanoelectronic devices to demonstrate the first electron flying qubit. The concept is similar to that of a photonic quantum computer but, instead of photons, electrons are used as carrier of quantum information. The project is a concrete example where the academic and the industrial sector are joining forces to develop and benchmark the tools that are required by the value chain of emerging quantum industry. Often, the initial market of tools for sufficiently advanced technologies is too small to be pursued by larger corporations, opening a niche-market opportunity for small and medium-sized enterprises (SME). This demand is satisfied by the industrial partner nextnano GmbH 7 -an SME based in Munich (Germany) with 12 employees and 300 customers in more than 35 countries -which develops a software tool calculating the quantum mechanical properties of semiconductor nanodevices. The synergy of academic and industrial partners on numeric simulations and experimental implementations within UltraFastNano fosters progress on electron flying qubits opening a novel branch of quantum industry.
In this article, we review three promising experimental routes towards electron-flyingqubit implementations and discuss the potential of numerical simulations to speed up experimental development cycles towards quantum-computing applications. The reviewed experimental pathways differ mainly in the way the electron qubit is transported. In particular, we address single-electron transport by means of a surface-acoustic wave, emission from a quantum-dot pump and Levitons. For each transport approach, we point out the specific aspects where numerical simulations are key to unveil efficient routes for followup implementations. Identifying these target aspects of cutting-edge electron-quantumoptics experiments, we finally present generic numerical simulations providing insights that are decisive for the development stages towards electron flying qubits. Confronting numerical simulations with latest experimental results, we point out the capability of the numerical simulations to guide experimental implementations faster to success.
Pathways towards the electron flying qubit
Unlike a classical bit, where the states 0 and 1 basically correspond to the charging state of a capacitor, the electron flying qubit is defined via the presence of a single electron in two paths of transportation. The quantum state of the thus-defined flying qubit can be depicted on a Bloch sphere as shown in Fig. 1a. The north and south pole of the sphere represent the classical states of the electron being in one of the two paths of transportation (0 and 1). The probability to end up in one of these states is represented via the angular coordinates of the sphere, θ and φ, that make up the quantum state |ψ⟩ = cos(θ/2)|0⟩ + e^{iφ} sin(θ/2)|1⟩. The thus-defined quantum state of the flying electron is fully controllable in an electronic Mach-Zehnder interferometry setup as sketched in Fig. 1b. The quantum interferometer hosts two regions where the paths of transportation approach each other and couple via a narrow potential barrier (see horizontal lines). According to the potential landscape within this coupling region (see Fig. 1c), the flying qubit state undergoes periodic oscillations (angle θ) caused by coherent tunneling. In between the two tunnel-coupled regions, the flying qubit picks up a quantum phase φ that is tunable via the potential along the two paths and the enclosed magnetic flux. Employing a surface-gate-defined quantum interferometer realised in a GaAs-based heterostructure (see Fig. 1d), basic quantum operations of such an electronic flying qubit have already been successfully demonstrated (see Fig. 1e) with a continuous current of electrons [42]. (Fig. 1d shows the surface gates (golden) defining the potential landscape and thus the transport paths in the two-dimensional electron gas (2DEG) located below the surface; the white crossed boxes indicate Ohmic contacts enabling electrical connection to the 2DEG. Fig. 1e shows the measured quantum oscillations as a function of a perpendicular magnetic field B and the side-gate voltage V_G; data adapted from Ref. [42].) The big challenge ahead is to perform such quantum state control on the level of single electrons and to couple several of the so-obtained electron flying qubits to generate a set of non-local entangled quantum states.
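To make this parametrisation concrete, the short numpy sketch below builds the Bloch-sphere state and evaluates the output probabilities of an idealised Mach-Zehnder interferometer made of two 50:50 tunnel couplers enclosing a tunable phase φ. It is an illustrative single-particle model only; the beam-splitter matrix and the values of θ and φ are generic textbook choices, not parameters extracted from the device of Ref. [42].

```python
import numpy as np

# Illustrative single-qubit model of the electron flying qubit:
# |0> and |1> label the two transport paths of the interferometer.

def bloch_state(theta, phi):
    """Qubit state |psi> = cos(theta/2)|0> + exp(i*phi) sin(theta/2)|1>."""
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

# 50:50 tunnel coupler (beam splitter) and a relative phase between the paths.
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def phase_shift(phi):
    return np.diag([1.0, np.exp(1j * phi)])

def mach_zehnder_probabilities(phi):
    """Detection probabilities in the two output paths for an electron injected in path |0>."""
    psi_in = np.array([1.0, 0.0])
    psi_out = BS @ phase_shift(phi) @ BS @ psi_in
    return np.abs(psi_out) ** 2

for phi in np.linspace(0, 2 * np.pi, 5):
    p0, p1 = mach_zehnder_probabilities(phi)
    print(f"phi = {phi:5.2f}  P(path 0) = {p0:.3f}  P(path 1) = {p1:.3f}")
```

For an electron injected in path |0⟩, this toy model yields the fringe pattern P(path 1) = cos²(φ/2), i.e. the type of quantum oscillation measured in Fig. 1e.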
In semiconductor devices, a single electron is manipulable in a surface-gate-defined nanoscale structure such as a quantum dot or a waveguide. The majority of such implementations are performed within the two-dimensional electron gas (2DEG) formed near the surface of a GaAs-based heterostructure [7], which typically has coherence lengths of several tens of micrometers [41,42,51,52]. Applying a set of negative voltages on these gates, one can shape the potential landscape in the 2DEG and thus form and control the nanoscale devices. So far, there are two methods to guide the electrons along the desired paths: electrostatic waveguides [42,53] or -at high magnetic field -quantum-Hall edge states [25,54]. The 2DEG quality has been continuously improved [55,56] and combined with clever device design [41], which allowed the phase coherence length to be pushed up to several hundreds of micrometers in recent experiments [43].
The availability of quantum dots serving as highly efficient single-electron sources and receivers led to the development of single-electron-transport techniques based on surface acoustic waves [28,29,57] and voltage-modulation pumping [58,59]. Fostered by technological progress and a growing understanding of the underlying physical mechanisms [60], these approaches now achieve transfer efficiencies well above 99% [31][32][33][34]. The quantum-dot-based transport systems represent the electronic counterpart to the deterministic single-photon source we have mentioned earlier. Besides that, different avenues have been explored such as the so-called Leviton that is a well-protected collective single-electron excitation generated by an ultrafast Lorentzian voltage pulse [30,61,62]. Like the aforementioned photonic squeezed state, which is a special kind of coherent laser pulse, a Leviton represents a special form of a classical voltage pulse. The progress in these experimental routes -surface acoustic waves, electron pumps and Levitons -opened up the way to realise a flying-qubit platform with electrons instead of photons. In the following sections, we outline these three conceptually different approaches towards the electron flying qubit in more detail and discuss how numerical simulations have played a key role in interpreting the experimental results and guiding nanodevice design to the next generation.
Electron qubits surfing on a sound wave
In III-V semiconductor compounds, such as the presently discussed GaAs-based devices, sound is accompanied by an electric potential wave due to their piezoelectric properties allowing charge displacement [63]. At first glance, acousto-electric transport seems a rather brute approach to move an electron qubit. A surface acoustic wave (SAW) has proven itself, however, as an efficient and well-controllable transport medium. Figure 2a shows a schematic of the single-electron transport approach. A SAW is typically generated with an interdigital transducer (IDT) -a device that is well established in modern consumer electronics products [64]. Applying a finite, resonant input signal on the IDT, a SAW is emitted which then travels relatively slowly with a characteristic speed of about 3 μm/ns towards a surface-gate-defined nanoscale device [65]. The circuit is constructed on the basis of fully depleted transport channels whose ends are equipped with QDs (see Fig. 2b) serving as highly efficient single-electron source and receiver. Each QD is equipped with an adjacent quantum point contact (QPC) allowing its charge occupation to be traced. After loading a single electron at the source QD via a sequence of voltage variations on the corresponding surface gates, a SAW is emitted. The SAW train typically has a duration of tens of nanoseconds and a wavelength of 1 μm. When arriving at the depleted transport channel, the potential modulation of the SAW forms a train of moving QDs propagating through the surface-gate-defined rail. After loading an electron at the source QD, this SAW train allows the electron to be shuttled through the quantum rail to a distant receiver QD [28,29].
The robustness of a SAW train enables acousto-electric transfer of a single electron in a nanoscale circuit approaching macroscopic dimensions. An experimental investigation of a 22-μm-long SAW-driven single-electron circuit consisting of two tunnel-coupled channels -see single-electron circuit in Fig. 2a -achieved single-shot transfer efficiencies larger than 99% [31]. Here, the exact sending position within the SAW train is controlled via the delay of a picosecond-scale voltage-pulse trigger applied on the source QD. Adjusting the potential landscape in the tunnel-coupled wire (TCW), it is possible to partition the electron wave function via directional coupling at will between the two transport channels. Figure 2c shows an exemplary measurement of the single-shot transfer probability P as a function of the potential detuning in the TCW. The partitioning data shows a consistently high transfer efficiency despite the detuning of the TCW potential. The shape [31] of the partitioning data bears important information on the time-evolution of the flying quantum state that is of central importance for quantum applications.
To draw the right conclusions from the single-shot partitioning data, numerical simulations are essential. In the presently discussed example, time-dependent simulations of the electron's propagation through the TCW region, revealed charge excitation -as schematically shown via the potential landscapes shown in Fig. 2d -due to insufficient SAW confinement. In these calculations, the stationary potential is calculated with the nextnano software based on the true sample geometry and the electric properties of the heterostructure. Superposing this potential landscape with the dynamic modulation of the SAW, one can prepare an electron in its ground state and simulate its propagation through the device as shown in Fig. 2e. Setting a SAW-modulation as present in the experiment (A ≈ 17 meV), the simulation shows a picture that is in good agreement with the experimental observation: As the electron enters the TCW, insufficient confinement within the moving, acousto-electric QD provokes charge excitation that prevents the appearance of tunnel oscillations. Instead, the probability to find the electron spreads according to the excitation spectrum. Unlike the experiment, numerical simulations allow to examine the effect of various device parameters in a systematic and fast way. For the presently discussed example, the time-dependent simulations particularly showed up the importance of the acousto-electric in-flight confinement. Augmenting the SAW amplitude by a factor of three (A ≈ 45 meV), the time-dependent simulations predict preservation of quantum confinement at TCW transit resulting in tunnel oscillations [31]. Since an increase of the acousto-electric power of this scale is technically feasible, the time-dependent quantum simulation points out a central, easily addressable aspect in the realisation of electron flying qubits transported by sound.
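The essence of such a time-dependent calculation can be illustrated with a strongly simplified, one-dimensional split-operator (split-step Fourier) propagation of an electron riding a moving SAW potential. The sketch below uses dimensionless units and invented parameter values; it is only a toy stand-in for the full 3D, nextnano-based simulations of Ref. [31], but it shows the basic numerical building blocks (FFT-based kinetic step, time-dependent potential step).

```python
import numpy as np

# Toy 1D split-operator propagation (hbar = m* = 1, dimensionless units) of an
# electron trapped in one minimum of a moving SAW potential superposed on a weak
# static landscape. All parameter values are illustrative, not device values.

nx, dx, dt, nsteps = 1024, 0.1, 0.002, 2000
x = (np.arange(nx) - nx / 2) * dx
k = 2 * np.pi * np.fft.fftfreq(nx, d=dx)

A_saw, lam_saw, v_saw = 5.0, 10.0, 3.0          # SAW amplitude, wavelength, speed
V_static = 0.005 * x**2                          # weak static landscape (illustrative)

def potential(t):
    """Static landscape plus the SAW modulation moving at speed v_saw."""
    return V_static - A_saw * np.cos(2 * np.pi * (x - v_saw * t) / lam_saw)

# Initial state: Gaussian packet placed in the SAW minimum at x = 0.
psi = np.exp(-x**2)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

kin_half = np.exp(-1j * (k**2 / 2) * dt / 2)     # half kinetic step in k-space

for step in range(nsteps):
    t = step * dt
    psi = np.fft.ifft(kin_half * np.fft.fft(psi))
    psi *= np.exp(-1j * potential(t + dt / 2) * dt)
    psi = np.fft.ifft(kin_half * np.fft.fft(psi))

print("norm          :", np.sum(np.abs(psi)**2) * dx)
print("mean position :", np.sum(x * np.abs(psi)**2) * dx,
      "(SAW minimum at", v_saw * nsteps * dt, ")")
```

Replacing the toy potential by a realistic, device-derived landscape and scanning the SAW amplitude A then amounts to swapping the potential function.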
Following the numerically guided pathway of augmented acousto-electric amplitude, we anticipate that coherent in-flight manipulations of flying charge qubits on technically relevant length scales will soon be achieved in single-shot experiments. For a flying qubit employing the electron's charge, first observations of tunnel-related probability oscillations have already been reported from experimental studies on SAW-driven transport of a continuous stream of single electrons [66,67]. In agreement with the prediction of the aforementioned time-dependent simulations, the threshold of the SAW amplitude to significantly confine an electron in a single acousto-electric minimum was recently determined as A = (24 ± 3) meV in flight-time measurements [68]. For electron flying qubits defined by spin, an increased SAW amplitude has already helped to demonstrate coherent transport of an entangled electron pair over a 6 μm distance [69]. The coherent acousto-electric transfer of a single electron between remote quantum dots marks a new route to link quantum information in semiconductor qubit circuits where numerical simulations will certainly play a central role to identify key aspects and speed up experimental cycles.
Hot-ballistic electrons
In contrast to the aforementioned acousto-electric transport approach, a lithographically defined, yet highly tunable, quantum dot can also be employed to emit a single electron at high energy. These hot-electron sources for the controlled emission of single and multiple particles are of high interest from the perspective of higher-temperature operation and isolation from the environment. In these devices, electrons can be emitted at an energy ∼ 100 meV above the Fermi energy, hence the cooling of the Fermi sea at millikelvin temperatures may not be necessary. Besides, hot electrons can be transmitted through a depleted channel, eliminating undesirable interactions with the Fermi sea. For the controlled emission of single and multiple particles, hot-electron sources are driven by strong potential modulation determining the timing of electron emission via slow, stochastic tunneling through a barrier. This process has the potential advantage of high purity, meaning that the energy and time window into which the particles (or wave packets) are emitted fluctuates little between successive emissions. On the other hand, due to the large phase space available, the inelastic scattering rate during propagation can be high, leading to a short decoherence time.
This is in contrast to the electrons confined in SAW potential or the Levitons, for which the electrostatic confinement or the limitation in available states (by the filled Fermi sea), respectively, protect the states from scattering processes. The nature of inelastic processes for hot electrons injected in GaAs/AlGaAs heterostructures has been investigated in Ref. [70][71][72][73][74][75][76]. At zero or small magnetic fields, the dominant scattering process is electron-electron interactions by the Fermi sea. For low-energy electrons (a few tens of meV above the Fermi energy), the electron-electron interactions continue to be the dominant process at higher magnetic fields applied perpendicularly to the plane of two-dimensional electron gas. For high-energy electrons (∼100 meV above the Fermi energy), the magnetic confinement to the channel edge limits the spatial overlap with the Fermi sea and consequently suppresses electron-electron interactions. Instead, the emission of longitudinal optical phonons [77] becomes the dominant process. Generally, the optical-phonon emission rate at high magnetic fields tends to be smaller than the electron-electron scattering rate at low magnetic fields, and therefore the ballistic transport length tends to be larger at higher magnetic fields. The suppression of backscattering processes due to chiral transport also contributes to a longer transport length at higher fields.
The technology to use hot electrons for electron quantum optics experiments is relatively new and not much information has been gathered regarding their suitability for applications in quantum information processing. In order to explore the potential of these hot-electron sources for the preparation of flying qubits, it is important to gain very precise knowledge of the relevant properties of the injected particles. These are (i) the time and energy interval into which particles are emitted and (ii) the purity of the wave packets, namely the precision of the time and energy interval in which the particle is detected in every driving cycle. At the same time, it is crucial to analyse (iii) how these properties are affected during the propagation of wave packets along depleted channels and to minimise a possible deterioration of the signal. These aspects hence need to be tested and control over them has to be obtained.
The detection of hot-ballistic electrons emitted by a single-electron source was made using the scheme shown in Fig. 3 [72]. The energy distribution of hot electrons was obtained from the transmitted current through a detector barrier. In addition to the main distribution around the emission energy, replicas of the distribution with discrete energy steps of ∼ 36 meV were experimentally observed. They were attributed to the emission of longitudinal-optical phonons. Further studies on phonon interactions [73,74] led to a method to suppress phonon emission probabilities by softening the edge potential [75,78]. This technique was used to extend the phonon scattering length to as much as ∼1 mm. Using the long ballistic length, time-of-flight measurements were performed to extract the electron drift velocity, ranging from 30 to 130 μm/ns [79].
The time-of-flight measurements used a time-gating technique to measure the electron arrival-time distribution [80]. This was later developed into a tomographic measurement of the quasi-probability distribution in the energy-time phase space by controlling the ramp speed of the gate voltage [81][82][83]. This measurement revealed an energy-time correlation of the distribution imprinted by the ramp speed of the source energy state during the emission process (see Fig. 4) [82]. The projection of this distribution onto the time or energy axis gives the arrival-time or energy distribution of the emitted states. The purity of the observed distribution was only 0.04, and therefore the observed state is likely to be a mixed state. We note that the time and energy resolutions of the experiment in Ref. [82] were estimated to be σ_t ≈ 0.3 ps and σ_E ≈ 0.8 meV, giving σ_t σ_E ≈ 0.36 ℏ, implying that this method is capable of resolving the minimum uncertainty limit (ℏ/2). Therefore the observed low purity is not due to poor measurement resolutions, but is likely due to noise in the electron emission process from the source. In this set of experiments, the smallest arrival-time distribution observed was σ_t,min ≈ 5 ps, and the smallest product of time and energy widths was ∼ 30 times larger than the minimum uncertainty limit (taking into account the energy-time correlation). A method has been proposed in Ref. [84] to emit each electron into Gaussian-shaped minimum uncertainty states. Another important experimental technique is the full counting statistics of the electron number partitioned by a beam splitter (a tunnel barrier). This has been demonstrated using noise measurements [85] or a trap coupled to a single-charge detector [32,86].

(Figure 3 caption [79]: the red and blue arrows show two paths taken by the electrons emitted by a quantum-dot pump at the left; the difference in arrival times from the two paths is used to estimate the drift velocity, and the two-dimensional electron gas in the paths is depleted by a surface gate. (b) Schematic of Landau levels near the edge where the two-dimensional electron gas is depleted: "usual" edge states forming near the Fermi energy are indicated by black circles, the high-energy states where hot electrons travel by a red circle. (c) Energy diagram of hot-electron emission from an electron pump (blue-coloured potential on the left) and its detection by an energy-dependent barrier (red-coloured potential on the right). (d) Determination of the electron emission energy E_1 by the measurement of the transmitted current as a function of the detector barrier height [72]; f is the operating frequency of the electron pump. Figure 4 caption: hot-electron quasiprobability distribution; Wigner quasiprobability distribution (a map to visualise the particle's state in phase space by translating the wave function) plotted on the energy (E)-time (t) phase space using the time-dependent barrier described in Ref. [82]; E_0 and t_0 are arbitrarily chosen central values of the electron emission energy and arrival time; the plot shows that the quasi-probability distribution has a correlation between energy and time, implying that the emission energy is lifted as the electron leaves the source.)
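The quoted resolution figures can be cross-checked directly from the numbers cited above:

```python
from scipy.constants import hbar, e

# Time-energy resolution product quoted above: sigma_t = 0.3 ps, sigma_E = 0.8 meV.
sigma_t = 0.3e-12          # s
sigma_E = 0.8e-3 * e       # J
print("sigma_t * sigma_E in units of hbar:", sigma_t * sigma_E / hbar)  # ~0.36
print("minimum uncertainty limit (hbar/2):", 0.5)
```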
The time scale of electron emission is directly reflected in the width of the emitted wave packets. In quantum optics experiments with electrons, this time scale is important for the visibility of interference effects. In order to obtain these insights, the times at which single or multiple particles are emitted from the hot-electron sources can be studied analytically or numerically, taking into account the time-dependent modulation of the single-particle energy levels, as well as of the shape of the tunneling barrier between quantum dot and conductor [80]. Importantly, theoretical studies have recently shown that the Coulomb interaction between electrons on the dynamically driven quantum dot has a strong impact on the energies at which the particles are emitted. Most crucially, this impact on the electron energies also directly influences the time scale on which the emission of the different particles takes place [87]. This theory furthermore predicts that the separation of time scales becomes particularly relevant for energy-dependent barriers [88]. Different schemes of how to read out these different relevant emission time scales using side-coupled detector dots [89] or nonadiabatic pumping schemes [90] have been suggested. This last scheme in particular also addresses relaxation times due to phonons during the emission process.
Based on the work described above, one can expect that realisations of Mach-Zehnder experiments will become possible with these types of sources, as suggested in Ref. [91,92], similar to previous proposals for single-and two-particle interferometers for minimalexcitation single-particle sources [93][94][95]. Ref. [92] studied phase-averaging effects, which are particularly important for the temporally-short, high-energy single-electron wave packets. As a result it becomes necessary to tune asymmetric interference arm lengths and delay time, which could be achieved by tuning the drift velocity. These analytical studies [91][92][93][94][95] assume an emission of pure states and ideal beam splitters, which are over-simplified compared to a realistic experiment. Hence, in order to improve the device characteristics, more realistic numerical modelling of these aspects could be a helpful complement. While the electron coherence of hot electrons is yet to be demonstrated, the short length of electron wave packet in time domain and the ability to control their emission timing with a picosecond resolution can be useful in ultrafast electronics applications. In-situ voltage sampling under cryogenic environment has been demonstrated with a bandwidth potentially exceeding 100 GHz [96]. This technique was used to determine the precise gate voltage ramp profile for quantum tomography measurements [82].
Leviton qubits flying over the Fermi sea
A conceptually different approach to realise an electron flying qubit is to generate a single-electron wave packet directly from the Fermi sea. This approach seems counterintuitive as a perturbation of the Fermi sea excites both electrons and holes and does in principle not allow the generation of a pure single-electron wave packet. L.S. Levitov and co-workers came up with an original idea to form a collective electron excitation flying over the Fermi sea without leaving a hole [97][98][99]. It has been shown that a voltage pulse of Lorentzian shape, V(t) = (ℏ/e) · 2τ/(t² + τ²) with half-width τ, generates such a clean excitation provided it fulfils the quantization condition (e/h) ∫ V(t) dt = n with integer n (n = 1 for a single Leviton), where e is the elementary charge and h is Planck's constant. A Lorentzian pulse fulfilling this quantization condition is shown in Fig. 5a. Figure 5b shows the corresponding excitation spectrum -meaning the occupation of states above and below the Fermi energy. The calculation shows a distribution that is characteristic for Leviton excitation. The collective wave function is only occupying the states right above the Fermi level (zero energy) forming a pure electronic excitation that is robust against relaxation. It took almost 20 years until the theoretical concept of a Leviton was demonstrated in experiment [30]. The reason for this long delay was mainly related to the difficulty in generating clean and sufficiently short voltage pulses of Lorentzian shape that are injected directly via an Ohmic contact of the quantum device. Compared to the aforementioned quantum-dot-based sources, the Leviton approach brings the advantage that nanolithography techniques are not required to define single-electron emitters. At last, progress in microwave engineering has bridged this gap and allowed one to verify this original concept. The experiment demonstrated minimization of shot noise due to the absence of holes via Leviton formation and Hong-Ou-Mandel type experiments with a very high degree of indistinguishability. To study the wave function of such a flying charge excitation [99][100][101][102][103][104][105][106], quantum tomography protocols have been developed allowing a measurement of the Wigner distribution function [61,62,107,108]. In addition, time-resolved experiments have shown propagation of Leviton-like flying charges over distances of more than 80 μm without measurable dispersion [53]. Owing to the occupation of states right above the Fermi sea, Levitons are expected to have extremely good coherence properties [109][110][111][112] compared to other injection schemes [113][114][115] making them highly promising candidates for electron-flying-qubit implementations.
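The quantization condition can be checked numerically for the Lorentzian pulse written above; the SciPy sketch below integrates the pulse in the dimensionless variable u = t/τ. The chosen pulse width τ = 10 ps is an arbitrary illustrative value.

```python
import numpy as np
from scipy.integrate import quad
from scipy.constants import e, hbar, h

# Check the Leviton quantization condition (e/h) * Int V(t) dt = 1 for the
# Lorentzian pulse V(t) = (hbar/e) * 2*tau / (t**2 + tau**2).

tau = 10e-12  # pulse half-width: 10 ps (illustrative)

def V(t):
    return (hbar / e) * 2.0 * tau / (t**2 + tau**2)

# Integrate in the dimensionless variable u = t / tau to keep quad well-behaved.
integrand = lambda u: (e / h) * V(u * tau) * tau
n_quanta, err = quad(integrand, -np.inf, np.inf)

print(f"(e/h) * Int V(t) dt = {n_quanta:.6f}  (single Leviton -> 1)")
print(f"peak voltage V(0) = {V(0.0) * 1e6:.1f} uV")
```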
The next important step to benchmark these benefits is the implementation of a quantum interferometer with Levitons. Since Levitons are simply injected via voltage pulses on an Ohmic contact, the geometry of such a single-qubit device is very similar to that of early experiments [42]. Figure 5c shows an SEM image of a possible implementation with schematic indications of the interferometer paths. The propagation velocity of the injected Leviton pulse is expected to be on the order of 100 μm/ns [53]. Since the dynamics of such propagating pulses within a quantum interferometer have not been investigated yet, it is important to have experimental control of the pulse width to fit the flying charge excitation within the tunnel-coupling regions (≈ 2 μm), which requires pulses with a full-width-at-half-maximum smaller than 20 ps. The generation of such pulses with cutting-edge microwave synthesis approaches is possible but at the technical limit. A promising alternative route allowing pulses with widths of 1 ps or smaller is the optoelectronic generation via ultrafast photo switches [116][117][118][119][120]. Besides a proper Leviton source, it is of utmost importance to design a quantum interferometer structure allowing for qubit manipulation with maximum efficiency. For this purpose, numerical simulations serve as a useful tool to model the evolution of quantum states along the interferometer structure. In order to deduce the coherence length of a certain implementation it is necessary to measure the strength of the quantum oscillations for devices with successively increasing island length. The knowledge of these aspects of a single electron flying qubit made up by Levitons will be decisive for the applicability in quantum-computing implementations.
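The pulse-width requirement follows directly from the numbers quoted above, as the quick check below illustrates:

```python
# A Leviton travelling at ~100 um/ns must fit into the ~2 um tunnel-coupling region.
v_drift = 100e-6 / 1e-9       # propagation velocity in m/s (100 um/ns)
fwhm_t = 20e-12               # targeted full-width-at-half-maximum in s
print("spatial extent of the pulse:", v_drift * fwhm_t * 1e6, "um")   # -> 2.0 um
```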
One route to realise flying qubits based on Levitons employs electronic waveguides defined by surface gates in the 2DEG of a GaAs/GaAlAs heterostructure [42,53]. An alternative platform is transport along quantum Hall edge channels [25,41]. These channels form when a 2DEG is placed in a very large magnetic field. In this regime the bulk becomes insulating and the only quantum channels carrying the current occur on the sample edge. It is applicable in the aforementioned GaAs framework or lightly doped graphene. The implementation of quantum point contacts (QPCs) enables the realisation of an electronic beam-splitter. By combining two QPCs, an electronic Mach-Zehnder interferometer is realised [25,121] allowing full qubit manipulation on the Bloch sphere. Being chiral, the propagation along quantum edge channels offers a very long mean free path and coherent transport [43]. In this regime, all the electron quantum optics tools are realizable such as Hanbury-Brown-Twiss [122] and Hong-Ou-Mandel interferometry [54]. Single-electron sources based on Levitons have also been implemented, particularly in graphene [123,124]. Compared to quantum wires based on electronic waveguides at low magnetic field, the advantage is a nearly perfect free propagation of electrons, thanks to the chiral nature of the edge channels (electrons cannot go back after scattering on impurities). The drawback is the use of magnetic fields of a few tesla and the chiral nature of the propagation, which makes coupling of more than two electron flying qubits challenging.
Single-shot detection represents a major task to realise an electron flying qubit with Levitons. For the aforementioned investigation of quantum oscillations, statistical measurements are sufficient. In order to control single Leviton qubits individually, it will, however, be necessary to detect the presence of each flying electron via capacitive coupling to an ultra-sensitive quantum detector. One possible implementation of such a quantum detector is a spin qubit that is operated in a regime where it is extremely sensitive to charge fluctuations [125]. At present, this type of detector is capable of sensing a few electrons and enables a quantum non-demolition measurement [126]. The quantum detector is able to record the presence of a passing flying electron without perturbing its quantum state, which can in turn be reused after detection for further quantum manipulations. This aspect is an important advantage over single-photon detectors where the photon disappears after detection. Another possible implementation that has been put forward recently guides the flying electron through a meander structure which is capacitively coupled to two large metal electrodes. The passage of the flying electron beneath the two surface electrodes generates an oscillating voltage signal. This detector is expected to have a sub-electron sensitivity [127] and, when properly integrated into a quantum circuit, can also be adapted for quantum non-demolition measurements.
Another aspect of major importance is the scalability of surface-gate-defined quantum-interferometer devices. Figure 6 shows an SEM image of a prototype multi-qubit implementation hosting four quantum interferometers. Simultaneous operation of the electron flying qubits is accomplished via an extended bridge cross-connecting the island gates of each device. To implement a two-qubit gate in such a setup, the Coulomb interaction of flying electrons is exploited. Let us consider the case where two Levitons are simultaneously sent through a pair of neighboring quantum interferometers. By adjusting the potential barrier of a Coulomb-coupling gate (C) -as shown in Fig. 6 -the flying electrons are exposed to their respective Coulomb potential which introduces a quantum phase φ causing entanglement. The phase induced by each of the two electrons is proportional to the coupling constant and the interaction time, hence to the gate length. The Coulomb-coupling strength can be adjusted by changing the gate voltage on the electrostatic gates defining the phase-exchange window. The coupling region can be as short as 1 μm for the case of ballistic electrons [128,129] and much shorter for the case of SAW-transported electrons since the propagation speed is 100 times smaller. If a π phase shift is induced, the probability of detecting an electron in the output port |0⟩ or |1⟩ is inverted, hence realising a controlled-phase gate C_φ. Combining this experimental setup with two interferometers one can even go one step further and test Bell's inequalities as proposed in references [7,130]. In this case, all of the four beam splitters are used and a π phase shift is induced with the Coulomb coupler enabling the formation of a maximally entangled Bell state. The scalability of electron-flying-qubit implementations is very similar to that of photonic circuits where multiple Mach-Zehnder interferometers are connected in parallel and series [131].
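At the gate level, the effect of such a Coulomb coupler can be illustrated with a few lines of linear algebra. The sketch below models each flying qubit's first tunnel coupler as a Hadamard and the Coulomb interaction as a controlled-phase gate C_φ; a π phase turns the initially separable two-electron state into a maximally entangled (Bell-type) state, diagnosed here via its Schmidt coefficients. The gate matrices are textbook definitions, not calibrated device parameters.

```python
import numpy as np

# Gate-level illustration of the Coulomb-coupler two-qubit gate: a controlled-phase
# C_phi applied between two flying qubits, each prepared in a superposition by its
# first tunnel coupler (modelled as a Hadamard).

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def c_phase(phi):
    """Controlled-phase gate: the |11> component acquires the Coulomb-induced phase phi."""
    return np.diag([1, 1, 1, np.exp(1j * phi)])

def schmidt_coefficients(state):
    """Singular values of the 2x2 amplitude matrix; two equal nonzero values => maximally entangled."""
    return np.linalg.svd(state.reshape(2, 2), compute_uv=False)

psi0 = np.kron([1.0, 0.0], [1.0, 0.0])              # both electrons enter in path |0>
psi = np.kron(H, H) @ psi0                          # first tunnel couplers: superpositions
for phi in (0.0, np.pi):
    out = c_phase(phi) @ psi
    print(f"phi = {phi:.2f}  Schmidt coefficients = {np.round(schmidt_coefficients(out), 3)}")
```

For φ = 0 the Schmidt coefficients are (1, 0), i.e. a product state; for φ = π they become (0.707, 0.707), the signature of a maximally entangled pair.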
The central challenge to build a quantum computer is to scale up a qubit system. For the latest technological stage, millions of physical qubits would be required [132]. This scalability problem is inherent to any qubit that needs to be addressed individually via an external parameter such as a gate voltage or a laser. Important issues to be solved on the way to build a universal quantum computer are presently the improvements of the fidelity of the qubits as well as their connectivity [133].
Electron flying qubits using Levitons could allow one to implement an original architecture to build a universal quantum computer as schematized in Fig. 7. Although the architecture of Fig. 7 is a theorist view at this stage, it has very appealing features, in particular the fact that it is structurally different from the mainstream approach that uses localized qubits. Indeed in the mainstream approach, the hardware corresponds directly to each qubit: for instance for spin qubits, one needs a certain number of electrostatic gates per qubit to confine the electron, address it with microwaves and eventually measure its state. It follows that the hardware footprint is proportional to the number of qubits. In contrast, in this 'synchrotron-like' quantum computer, the flying qubits are stored in a loop and fast quantum routers are used to bring them to single-qubit gates, two-qubit gates, delay lines or measuring apparatus [134]. Hence, the hardware footprint can in principle be extremely small: a few quantum routers (one per type of gates or measurement) are sufficient to control an arbitrarily large number of qubits. The Leviton qubits are created on demand and one only needs a loop, which is large enough to hold Levitons while they go around it.
The second, perhaps more important, advantage of this architecture is the connectivity of the two-qubit gates: using the delay line shown in the schematic, one could move the qubits so that any pair of flying qubits could be put next to each other and, hence, one could apply two-qubit gates between any pair of qubits. (Figure 7 caption: architecture for a universal quantum computer using Levitons; the quantum switches (in brown) send the qubits to the various quantum gates; single-qubit Hadamard rotations (H), two-qubit controlled-phase gates (C-φ) and measurements along z (M_Z) are implemented during the flight.) This is again in contrast to the mainstream approach where, each qubit being localized, it can only interact with a few other qubits, usually its nearest neighbours. Such a dramatic increase in connectivity could have deep consequences to reduce the overhead of quantum error correction and fault-tolerant operations.
Another advantage of the flying qubit architecture for quantum computing is that qubits can easily be recycled: old qubits can be expelled from the loop and fresh ones incorporated while ancilla ones can be used to calibrate or test the various parts of the circuit in order to isolate and retune sections that are not performing correctly. This flexibility could again be very instrumental in quantum error correction in order to get rid of rare lethal errors. Indeed, in quantum error correction, not all errors are equal; some, even if rare, are lethal to the calculation [135]. In this respect, a long-term advantage of the flying qubit architecture is the possibility to correct these rare errors. Altogether a functional flying qubit technology could make quantum error correction affordable, bringing the millions of qubits which are required to build a fault tolerant quantum computer down to tens of thousands. Alternatively, the electron-flying-qubit approach could be used to complement other approaches by providing a 'quantum bus' that implements the missing long-range coupling between distant localised qubits. Experimentally, we are still in the early stage of the development of such an electron-flying-qubit platform. Yet, it is very interesting and appealing to see that it leads to a conceptually very distinct object from the localized qubit approach. This means in return that there is a lot of room for a new architecture to be invented to bypass the intrinsic limitations of the ones that are pursued so far.
To end this section on the experimental progress on electron flying qubits realised in semiconductor devices, we would also like to point out promising approaches to manipulating single electrons on other unique platforms. Alternative to the here-described semiconductor devices, single electrons can be confined on the surface of liquid helium [136][137][138][139] or of solid rare-gas crystals made from atoms such as neon, argon or krypton [140]. These systems provide a two-dimensional electron system with ultra-high mobility and strong Coulomb interaction. Similar to SAW-driven single-electron transport discussed in Sect. 2.1, electrons on the surface of liquid helium can be transported with very high precision through coupling to an evanescent piezoelectric SAW [139]. Besides that, electrons can be attracted to the surface of a solid crystal made from rare-gas atoms in vacuum. For the case of a solid neon substrate, a single electron has been trapped with electrostatic gates and coupled to a superconducting microwave resonator [140]. This allowed the observation of coherent coupling of motional electron states to a single microwave photon with coherence properties similar to state-of-the-art charge qubits [141].
Numerical modelling of quantum devices
Numerical simulations play an important role in the development of quantum computing architectures and the flying qubit platform is no exception. Achieving a full stack of numerical tools to compute and predict the various properties of the devices is key to certify that the devices behave as they are supposed to and allows one to eventually optimise their behaviour. Figure 8 shows a typical stack that is being developed for flying qubit architectures. At the bottom are the device simulations that incorporate the material modelling as well as the geometry of the device. These are usually performed at the self-consistent electrostatic quantum level, i.e. the electrostatic problem is solved simultaneously with the quantum problem associated with the active part of the device (typically the region around the GaAs/AlGaAs interface in the devices discussed in this article). The self-consistent potential can be used by quantum solvers to calculate the quantum transport properties of the device, e.g. the conductance or the current noise or other observables. Those properties can be directly compared to DC experimental measurements to obtain a direct feedback on the quality of the modelling and its calibration. The proprietary nextnano [142] platform or the open-source KWANT software [143] are complementary tools that can be used for this stage.
Once the static properties are well understood, one can proceed to simulate the propagation of the electron flying qubits, including voltage pulses and the associated Levitons, in real time. The TKWANT extension [144] of KWANT provides the necessary environment for such simulations (e.g., to study the role of Coulomb repulsion at the time-dependent mean field level). The next level is a proper treatment of many-body effects aiming to account for e.g. interactions between different Levitons or various relaxation and dephasing mechanisms (such as one electron decaying into two electrons and one hole). We note that there are no general purpose simulation approaches that can handle this problem in a "blackbox" way. At the top of the stack are "pure" quantum computer simulators where the actual underlying physics has been hidden and one simulates only the effective dynamics of the computational degrees of freedom (potentially with some extra noise or dissipation terms to account for the actual limitations of the devices). As indicated by the arrows on the schematic, the different parts of the stack provide parameters to calibrate the other levels. As one goes up the stack, one usually must give up some microscopic details in order for the computations to remain affordable. Therefore, the calibrations must be done with care for the errors not to accumulate. Below, we focus at first on the static simulation part of the stack with a special emphasis on the calibration of the simulations with respect to the experiments and on the modelling of real nanodevices. At the end of this section, we briefly address time-resolved and many-body simulations.
Static quantum mechanical simulations
Tuning a single qubit into optimal operation is so far a tedious task. An attempt to find such conditions by trying various setups at random is time- and resource-consuming. In order to move more easily beyond experimental proof of concept (also known as Technology Readiness Level (TRL) 3), it is thus crucial to be able to predict the viability of a certain sample design prior to its physical realisation. As outlined above, precise potential calculations combined with dynamic quantum mechanical simulations play a key role in this regard, enabling validation of electron-flying-qubit technology in the lab (TRL 4). Being able to predict the reliability of a certain sample geometry paves the way to implementing and setting up electron flying qubits in a reproducible manner -enabling validation of the technology in a demonstrative or even commercial setting (TRL 5 and 6).
Since the basic elements of the electron flying qubits (interconnects, TCWs and interferometers) exploit single-particle physics to a great extent, they require high-quality quantum mechanical simulations for one electron in complicated electrostatic potentials. The necessary information is obtained from a numerical solution of the stationary Schrödinger equation, see Eq. (4) below. The precise solution can be found by using a platform such as the nextnano software. Its advantages include the possibility to adapt the numerical procedure for different materials, various geometries of the nanoconductors and shapes of the gate-induced potentials.
Let us review the basics of the static quantum mechanical simulations, some features of the nextnano tools, and provide examples of how these tools can be used for calibration of the experiments and engineering the nanodevices.
Basic equations and methods of static single-particle simulations
Basic targets of the static quantum mechanical simulations include the study of the shape of electron wave functions and the energy-dependent transmission through one nanounit and through the entire circuit [145][146][147]. The transverse profile (i.e. along the direction perpendicular to the propagation direction) allows one to judge whether the quantum wires are close to the desired setup and to control, e.g., the absence (presence) of tunnelling between two isolated (coupled) wires. Ballistic transmission of the electron through the circuit is even more important. When various units are connected, there are always spatial inhomogeneities which can result in reflection of the propagating electron. The reflection hinders the flying qubits from their normal operation and must be minimised as much as possible. To this end, one can numerically find an energy corresponding to transparency windows for a realistic circuit and work further in the vicinity of this special energy range. The quantum mechanical system can be modeled at different levels of approximation that range from a semi-classical description to an effective mass approximation to a multiband k · p model. Considering conduction-band electrons within the single-band approximation, the envelope wave functions, ψ_n, are solutions of the stationary Schrödinger equation

Ĥ ψ_n(x) = E_n ψ_n(x),    (4)

where Ĥ is the Hamiltonian operator of the closed quantum system, E_n are the energy levels defining the energy spectrum of the system, n are quantum numbers marking different single-electron quantum states, and x = (x, y, z) is the space coordinate. The Hamiltonian operator is the sum of a kinetic energy operator ε(p̂) and a potential energy V(x),

Ĥ = ε(p̂) + V(x),    (5)

where the electron momentum operator is defined in the standard way as p̂ = -iℏ∇. Here, ε(p) is the dispersion relation describing the momentum dependence of the electron energy which accounts for all effects governed by the crystalline lattice, and V(x) is the inhomogeneous potential in which the electron propagates. V(x) contains the electrostatic potential and conduction-band offsets at material interfaces. For example, in the simple case of a homogeneous isotropic material where the electrons move almost freely, one can use the effective mass approximation which yields ε(p) = (p_x² + p_y² + p_z²)/(2m*), where m* is the effective mass of the electron in the material.
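As an illustration of Eq. (4) in the effective-mass approximation, the sketch below solves the 1D Schrödinger equation by finite differences for a triangular confinement potential, roughly mimicking the quantum well that forms along the growth direction of a GaAs heterostructure. The effective mass, electric field and box size are generic textbook values, not a calibrated model of the devices discussed here.

```python
import numpy as np
from scipy.constants import hbar, m_e, e
from scipy.linalg import eigh_tridiagonal

# Minimal 1D finite-difference solution of the effective-mass Schrödinger equation
# (Eq. (4)) for an illustrative triangular well along the growth direction z.

m_eff = 0.067 * m_e          # GaAs conduction-band effective mass
F = 5e6                      # confining electric field in V/m (illustrative)
L = 60e-9                    # simulation box (m)
n = 2000
z = np.linspace(0, L, n)
dz = z[1] - z[0]

V = e * F * z                # triangular potential energy in J

# Discretised Hamiltonian: -(hbar^2 / 2 m*) d^2/dz^2 + V(z) on a uniform grid.
t = hbar**2 / (2 * m_eff * dz**2)
diag = 2 * t + V
offdiag = -t * np.ones(n - 1)

energies, states = eigh_tridiagonal(diag, offdiag, select="i", select_range=(0, 2))

for i, E in enumerate(energies):
    print(f"E_{i} = {E / e * 1e3:.1f} meV")
```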
The potential φ(x) describes the electrostatics within the system and is the solution of the Poisson equation

∇ · [ε(x) ∇φ(x)] = −ρ(x),    (6)

where ε(x) is the material-dependent permittivity, and ρ(x) is the charge density throughout the system. This charge density is given by

ρ(x) = e [p(x) − n(x) + N_D⁺(x) − N_A⁻(x)] + ρ_fixed(x),

where n(x) and p(x) are the electron and hole densities, N_D⁺(x) and N_A⁻(x) are the ionized donor and acceptor concentrations, respectively, e is the (positive) elementary charge, and ρ_fixed(x) contains immobile space or surface charges.
Here, the electron density n(x) explicitly depends on the energy levels E_n and envelope wave functions ψ_n from Schrödinger's equation, Eqs. (4), (5). For a finite system at equilibrium, the electron density is given by the Fermi-Dirac-weighted sum over the eigenstates,

n(x) = Σ_n |ψ_n(x)|² / {1 + exp[(E_n − E_F)/(k_B T)]},

where E_F is the Fermi level (or chemical potential), T is the temperature, and k_B is the Boltzmann constant. Thus, the electrostatic potential φ(x) depends on the energy levels E_n and wave functions ψ_n, but also enters the Schrödinger equation, Eqs. (4), (5), as part of the potential energy operator V(x). This shows that the Schrödinger equation and the Poisson equation (Eq. (6)) are coupled and need to be solved self-consistently. The self-consistently obtained spectrum and wave functions can be used further to calculate quantities which explain and describe quantum transport through various nanodevices connected to external leads. Two such quantities are (i) the partial local density of states (pLDoS), n_i(x, y, E), and (ii) the energy-dependent transmission, T_ij(E). The pLDoS is the probability to find, at a given point in space, the propagating electron which has a given energy E and has been injected from a given lead i. We note in passing that the local density of states is the sum of the pLDoS over all leads. Hence, the coordinate dependence of the pLDoS illustrates how the electron with a given energy propagates through the device [148]. The energy-dependent transmission, T_ij(E), is determined by the probability for the electron which is injected into lead i to reach lead j.
The pLDoS and the transmission from one lead to another can be found by using the retarded Green's function, Ĝ^R(E) = ([E + iα] 1 − Ĥ)⁻¹, where E is the electron energy and α → 0⁺ is a mathematical regularizer which reflects the retardation of the physical response (see Ref. [148] and Ref. [149] for details). In the space-coordinate representation, the coordinate-dependent Green's function can be expressed via the wave functions and the spectrum (the so-called spectral and Lehmann representations). Hence, the solution of the Schrödinger equation provides the input needed for the theoretical study of transmission.
The transmission provides valuable information on quantum interference occurring in the TCW or the Aharonov-Bohm (AB) interferometer. In the original setup, the AB interferometer involves the magnetic field, B = curl A, which can be included in the study as a shift of the momentum operator by the vector potential, p̂ → p̂ − eA. In practice, a magnetic field variation is too slow on the time scales needed for qubit operation, so electrostatic manipulation of the gates is much more practical. Nevertheless, for optimisation of the design, experiments and simulations in the presence of a magnetic field are still useful.
Schrödinger's equation can only be solved analytically for some specially chosen potentials, whereas, in the general case, spectra and wave functions can only be found numerically. The numerical solvers are applied after discretization, which means that the continuous space is reduced to points on a grid and derivatives are substituted by differences. Here, the grid spacing is an important parameter which controls the accuracy of the numerically obtained answers. In our qubit devices, layer structures and dopant distributions create a triangular shaped quantum well along the substrate growth direction. In this quantum well, quantum confinement effects cause the electrons to form a two-dimensional electron gas which is modulated in the two directions perpendicular to the substrate growth direction in accordance with the influence of the gate geometries. For such 3D devices, where thousands of eigenstates have to be taken into account, efficient solvers for the Poisson and Schrödinger equations, such as preconditioned conjugate gradient for Poisson and Arnoldi iteration for Schrödinger, are mandatory in order to overcome the huge computational costs. Moreover, achieving self-consistency between the Poisson and the Schrödinger equation is not easy and requires the use of special techniques such as predictor-corrector methods [150] in order to robustly obtain solutions. In strongly nonlinear regimes, such as the quantum Hall regime, other techniques such as that of Ref. [151] might be needed.
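To make the discretization step concrete, the following minimal sketch sets up a finite-difference Hamiltonian for a single electron in one dimension and diagonalises it. It assumes an effective-mass Hamiltonian, a triangular test potential and illustrative GaAs parameters; it is only a toy analogue of what a Schrödinger solver does internally, not the nextnano implementation, and in a real device this step would be iterated self-consistently with the Poisson equation.

```python
# Minimal 1D finite-difference Schrödinger solver (illustrative sketch only).
# Effective-mass Hamiltonian on a uniform grid; the grid spacing h is the
# accuracy-controlling parameter discussed in the text.
import numpy as np
from scipy.linalg import eigh_tridiagonal

hbar = 1.054571817e-34          # J s
m0 = 9.1093837015e-31           # kg
m_eff = 0.067 * m0              # GaAs effective mass
e = 1.602176634e-19             # C

L = 50e-9                       # simulation box (m)
N = 2000                        # number of grid points
x = np.linspace(0.0, L, N)
h = x[1] - x[0]

# Triangular test potential mimicking the quantum well at a heterointerface
# (the field strength is an assumed illustrative value).
F = 5e6                         # V/m
V = e * F * x                   # potential energy in J

# Discretized H = -(hbar^2 / 2m*) d^2/dx^2 + V(x) -> symmetric tridiagonal matrix
t = hbar**2 / (2.0 * m_eff * h**2)
diag = 2.0 * t + V
offdiag = -t * np.ones(N - 1)

energies, wavefuncs = eigh_tridiagonal(diag, offdiag)
print("lowest subband energies (meV):", energies[:5] / e * 1e3)
```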
As we have already mentioned, the simulation of quantum transport, and thus obtaining the pLDoS and the transmission, requires the use of Green's function techniques [152], which are computationally extremely expensive in the most general case. Fortunately, the ballistic limit of quantum transport suffices for an accurate description of flying qubits. This allows the so-called Contact Block Reduction (CBR) method [149,153] to be used here in order to reduce the computational cost to the point that even large three-dimensional devices of arbitrary shape and with an arbitrary number of contacts can easily be modelled.
The nextnano software and its applications for engineering flying qubits
Starting from the year 2000, the nextnano software was developed at the Walter Schottky Institute of the Technische Universität München. Later, it resulted in the spin-off company nextnano GmbH. The software, which is now further developed by this company, is a user-oriented platform meant for modelling various semiconductor-based nanodevices, cf. Refs. [142,154,155], including optoelectronic elements and qubits.
The main focus is on the simulation of the quantum mechanical properties of such devices. nextnano's core product, the nextnano++ software [142], is a 3D Schrödinger-Poisson-Current/CBR solver for nanotransistors, LEDs, laser diodes, photodetectors, quantum dots, nanowires, solar cells and qubits. The second product, nextnano.NEGF [152], is a quantum transport solver targeting quantum cascade lasers and resonant tunneling diodes. The nextnano software (including its early versions) has been successfully used to optimise the design of semiconductor-based (charge and spin) qubits [156][157][158][159][160][161]. Below, we focus on several important applications of this software for engineering the electron flying qubits.
Appropriate models for quantitative simulations: Quantum devices made from GaAs semiconductor heterostructures can easily be engineered by proper design of the gate geometry. To ensure the best performance of the electronic device, that is, to find the most suitable gate geometry, it is crucial to know the exact electrostatic potential landscape generated by the electrostatic gates. This requires taking into account material parameters such as the 2DEG density and mobility, dopant concentrations, induced surface charges, etc.
Traditionally, the workflow to determine the optimum gate geometries for a given heterostructure has been an iterative process between device fabrication in clean room facilities and low-temperature characterisations. This is immensely time-consuming and resource-demanding. The ideal workflow is presented in Fig. 9, where the iterative process takes place mainly at the modelling stage, before the device fabrication.
To find an accurate model for quantitative simulations, Chatzikyriakou et al. [162] developed a model using the nextnano software and benchmarked it against experimentally measured QPCs with a wide range of geometries. They assumed a layer of surface charges and a spatially uniform doping concentration, both having a frozen ionization state due to the very low temperatures at which the experimental measurements are taken [31,163]. First, 1D simulations of GaAs/AlGaAs heterostructures (Fig. 10a), with a Schottky gate on top, are employed in order to deduce the doping concentration such that the simulation reproduces experimentally measured characteristics of these heterostructures that exist on the chip that hosts the QPCs. These structures are covered by metal electrodes that are very large compared to the QPC gates (>500 nm in each Cartesian direction) and that are finally connected to the QPC gates. Then, removing the gate from the simulated heterostructure, the surface charges are adjusted so that the 2DEG electron density is equal to that taken from Hall measurements on the same wafer, at T = 4.2 K (frozen surface states). To simulate the region where electron transport takes place, 3D simulations are carried out with the exact gate geometry of the quantum device. These gate geometries are directly imported from the computer-aided design (CAD) layouts (standard files in GDS format) using the open-source Python package nextnanopy [164]. Figure 10 shows a typical example of electron depletion in the 2DEG when applying a gate voltage to the electrostatic gates that face each other (panels (e) and (f) of Fig. 10 show the electron density n_2DEG along the constriction for three values of V_G and below the surface gate as well as at the middle of the constriction; the simulated QPC pinch-off occurs when the 2DEG is completely depleted). When a small negative voltage is applied, the electron density under the gates is first depleted, forming only a narrow 1D constriction in the center of the two gates. Reducing the voltage further, the 1D channel is completely depleted and the transport channel is pinched off. The simulation shows a remarkable agreement with the experimental pinch-off value (V_G = −1.8 V).
In the same spirit, one can use such simulations to calculate the potential variations seen by the electrons within the 2DEG. An example of a complex quantum device with a tunnel-coupled wire from Takada et al. [31] is shown in Fig. 11. Using the exact gate geometries and voltages from the experiment, the electrostatic simulations reveal the variation of the potential along the path which an electron would follow before entering the tunnel-coupled region (black line). In these experiments, the single electron is excited to higher energy states, which was attributed to the abruptness of the potential landscape at this location. This undesired excitation could be mitigated by optimising the device geometries thanks to quantitative modelling.

Figure 11: Electrostatic potential landscape. (a) Scanning electron micrograph of the entrance of a tunnel-coupled region (adapted from Ref. [31]). (b) Electrostatic potential induced at the 2DEG from 3D simulations using realistic gate geometries (faint black layout) and typical voltage values applied to the electrostatic gates. Equipotential lines are shown as continuous lines. Vertical cuts (blue and red) show the double-well potential before and within the tunnel-coupled region. The black line represents the path which follows the minimum in the potential landscape.
The shape of the electrostatic potential in the 2DEG plane is input to further calculations of the energy-dependent transmission function, which corresponds to the probability that an electron is reflected or transmitted along the different paths in the flying qubit structure. One-dimensional cuts through this potential in the uncoupled wires (blue) and within (red) the tunnel-coupled regions are shown in Fig. 11b. One can see parabolically shaped double-well confinement potentials, cf. Fig. 1c. Such potentials and the interplay between symmetric and anti-symmetric states with respect to the direction perpendicular to the propagation direction have been analyzed numerically in Ref. [145], where detailed features of the transport measurements such as in-phase and anti-phase oscillations of the two output currents as well as a smooth phase shift when sweeping a side gate have been reproduced. By injecting an electron into the upper rail |0⟩, the wave function will evolve into a superposition of symmetric and anti-symmetric states. While travelling through the interaction region, the wave function of the electron will then pick up a phase and will evolve into a superposition of |0⟩ and |1⟩ [7], as shown in Fig. 12 by a simulation example.
Simulations of pLDoS and transmission through nanodevices: Eq. (1) describes how the ideal electron flying qubit is expected to operate: The electron is injected in either the upper or lower incoming channel (see Fig. 1b) and propagates without reflection through the quantum device, where the electron state is rotated in the Hilbert space. This rotation can be illustrated with the help of the Bloch sphere, Fig. 1a. Angles θ and φ are generated by the tunneling regions and interferometers, respectively. The output state is a coherent superposition of the states |0⟩ and |1⟩. It is controlled by the tunneling barriers and either by the magnetic field or by the asymmetric bias of the interferometer. Since using the magnetic field is technologically inconvenient, we focus in this section on the asymmetrically biased interferometers.

Figure 12: nextnano simulations of the electron partial local density of states in the tunnel-coupled wire (TCW: panels (a-e)) and the TCW - Aharonov-Bohm interferometer - TCW nanodevice (TCW-AB-TCW: panels (f-j)). Both devices are connected to four terminals (marked by white numbers). The background shows the potential landscape defined by the voltage on the surface gates, cf. Fig. 13(a,d). The electron with a given energy (E = 9.2 meV for TCW and E = 7.5 meV for TCW-AB-TCW) is always injected into the upper incoming channel from the 1st lead, the |0⟩ state. The states at the output leads are indicated at the top of each panel and explained in the main text. Panels (a-e): the pLDoS in the TCW for increasing tunneling-barrier voltage (described by V_T). Panels (f-j): the pLDoS in the TCW-AB-TCW for increasing voltage on a side gate of the bottom path (described by V_g).
There are two points which have fundamental importance for engineering the electron flying qubits. Namely, one needs an experimental setup where, on one hand, the reflection is reduced to a minimum and, on the other hand, the sensitivity of the electron state to the respective gate voltages is high. Let us explain how using the nextnano software helps to find such a setup.
To this end, nextnano enables the calculation of the pLDoS and the transmission of the nanodevices using the CBR method [149,153]. In the following, we demonstrate such calculations via two cases that are the central building blocks of the electron-flying-qubit architecture. Firstly (see left columns of Fig. 12 and Fig. 13), we address electron propagation through a tunnel-coupled wire (TCW). We remind readers that here, at y = 0, a narrow potential barrier is present between the two transport channels that allows for coherent tunneling of the electron. Secondly (see right columns of Fig. 12 and Fig. 13), we address the composite quantum interferometer (TCW-AB-TCW, cf. Fig. 1b,d), where two TCWs embrace an AB island, enabling full control of the quantum state via magnetic and geometric phase manipulations. The potential profile of both models is shown in Fig. 13a,d. For such a study of a nanoscale device, material properties have to be specified in the nextnano input file such that the potential energies are properly set. We have used GaAs for the device and leads, and adjusted the potential energy in different regions representing high insulating barriers (red lines), tunneling barriers (light blue lines), and gates (green regions). The potential energy at the gates can be tuned by applying gate voltages, and the strengths of the barriers are given in Fig. 13 and its caption.
Several examples of the pLDoS are shown in Fig. 12 which have been generated using the nextnano software. In these simulations, the electron has been injected into lead no. 1 with a given energy that is E = 9.2 meV for TCW and E = 7.5 meV for the quantum interferometer (TCW-AB-TCW). Slices of the pLDoS are shown at these energies.
Let us first discuss the TCW (left column of Fig. 12) for different voltages (V_T) applied on the tunnel-barrier gate. The TCW is able to change only the angle θ; see Eq. (1). Thus, the output state can be written as |ψ⟩_TCW ∝ cos(θ/2)|0⟩ + sin(θ/2)|1⟩. When the tunneling barrier is absent (Fig. 12a), one observes at the output an equal superposition of |0⟩ and |1⟩, which can occur at either θ = π/2 or θ = 3π/2, both corresponding to points on the equator of the Bloch sphere (Fig. 1). Since a small increase of V_T drives the output state to |0⟩ (Fig. 12b), corresponding to the north pole of the Bloch sphere, we conclude that in the barrier-less setup θ = 3π/2. With further increasing V_T, the output state becomes |1⟩, i.e. the south pole of the Bloch sphere (Fig. 12c, θ = π), and returns to the equator (Fig. 12d, θ = π/2). The latter point is the equator point opposite to that of the barrier-less setup. When the tunneling barrier becomes high (Fig. 12e), one enters the regime of two fully decoupled transport channels with output state |0⟩. Clearly, these coherent tunnel oscillations of the electron wave function manifest themselves in the quantum oscillations of the transmission via a TCW (see Fig. 13c).
Secondly, let us focus on electron propagation through the quantum interferometer (Fig. 12f-j). Increasing the potential on a side gate of the lower transport channel (V_g) modifies the geometric phase of the electron's quantum state and changes the second angle φ in Eq. (1). This, in turn, causes coherent oscillations between the output terminals 3 and 4, i.e. oscillations between the output states |0⟩ and |1⟩, and results in quantum oscillations of the TCW-AB-TCW transmission (see Fig. 13f). The tunnel regions in the example of Figs. 12f-j are the same and each of them changes the angle θ by π/2. This is apparent from Fig. 12f, where V_g = 0 and φ = 0: the output state after successive rotation in the two TCW regions is |1⟩, i.e. the total change of θ in the TCWs is π. Therefore, each individual connection changes θ by π/2. This allows one to approximate the output state as |ψ⟩_TCW-AB-TCW ∝ (e^{iφ} − 1)|0⟩ + (e^{iφ} + 1)|1⟩ [7]. Similar to the analysis of the pLDoS in the TCW, we can now trace rotations of the electron state with increasing V_g. The two output states shown in Fig. 12g,i correspond to two opposite points on the equator of the Bloch sphere, with φ = π/2 and φ = 3π/2. The south pole of the Bloch sphere is reached in Fig. 12j despite a substantial blockage of the lower transport path by the strong gate potential of the side gate.
To conclude the discussion of the pLDoS, we note that a detailed analysis of the flying qubit geometry in relation with experiments has also been performed using a combination of the theoretical approach and the KWANT software, see Ref. [145].
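For readers who want to reproduce this kind of ballistic-transmission calculation outside of nextnano, the snippet below is a minimal, generic KWANT example: a straight two-lead tight-binding wire whose lead-to-lead transmission is obtained from the scattering matrix. The geometry and all parameters are illustrative placeholders, not the flying-qubit device studied in Ref. [145].

```python
# Minimal KWANT example of ballistic transmission between two leads
# (a plain straight wire, not the flying-qubit geometry of Ref. [145]);
# all numbers are illustrative tight-binding units.
import kwant

lat = kwant.lattice.square(a=1, norbs=1)
syst = kwant.Builder()

W, Lx = 5, 30                                 # wire width and length (sites)
for x in range(Lx):
    for y in range(W):
        syst[lat(x, y)] = 4.0                 # on-site term (2D lattice, t = 1)
syst[lat.neighbors()] = -1.0                  # nearest-neighbour hopping

lead = kwant.Builder(kwant.TranslationalSymmetry((-1, 0)))
for y in range(W):
    lead[lat(0, y)] = 4.0
lead[lat.neighbors()] = -1.0

syst.attach_lead(lead)                        # left lead
syst.attach_lead(lead.reversed())             # right lead
fsyst = syst.finalized()

smatrix = kwant.smatrix(fsyst, energy=0.5)    # scattering matrix at fixed energy
print("T(left -> right) =", smatrix.transmission(1, 0))
```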
Having discussed electron propagation at a qualitative level via the pLDoS, let us now investigate electron propagation in a more quantitative way employing the energy-dependent transmission, T_ij(E), from the first lead to the output leads no. 3 and no. 4. First, we study the dependence on the injection energy (Fig. 13b,e). The energy of the incoming electron is counted from the potential energy of the lead no. 1. All device parameters are fixed at this stage. Transmission is zero if the electron energy is smaller than the energy of propagating states of the device, that is below 6.2 meV for the chosen parameters. Above this threshold, transmission starts to grow. However, it is first accompanied by substantial reflection of the electron to the input leads no. 1 and no. 2, which is described by T_11 and T_12 (not shown). This is apparent in the central panels of Fig. 13, up to an energy of about 7 meV, where the total transmission, T_total = T_13 + T_14 (magenta lines), is smaller than the ideal value, T_total < T_ideal = 1. In a second step, we have identified the energy at which T_total ≈ 1 (i.e. reflection is minimised) but T_13, T_14 ≠ 0, 1. In this regime, we expect strong sensitivity of the electron state |ψ⟩, Eq. (1), to the gates and the barriers of the quantum device. Two examples of these energies are shown by red dashed vertical lines and are studied at the second stage of the simulations. Fixing these two energies for the TCW and the quantum interferometer, we have studied the dependence of the transmission on the potential of the tunnel barrier, V_T (Fig. 13c), and the side gate, V_g (Fig. 13f). The goal of this stage is twofold: we identify regions of the parameters where T_total (magenta lines) is close to the ideal value of 1 and, simultaneously, the rotation of the electron states is pronounced. The latter condition is fulfilled at the crossover between the regimes T_13 < T_14 and T_13 > T_14. The vicinity of the crossover can be chosen as an operation range of the qubit (or of the qubit element), provided the reflection is almost absent. In order to find optimal parameters which provide both ideal transmission and a range for manipulation of the quantum phase, simulations such as the case discussed here are helpful to identify experimentally relevant voltage ranges. These simulations indicate that sufficient control is obtained in TCW and AB regions with lengths ≥ 500 nm and further allow one to identify optimal operation voltages. Let us discuss these aspects in more detail for the TCW and the quantum interferometer.
TCW: A simple phenomenological scattering theory predicts that T_13 ≈ cos²(δφ_TCW/2) and T_14 ≈ sin²(δφ_TCW/2), with δφ_TCW ≡ δk L_TCW, where δk = k_1 − k_2 is the difference of the wave vectors of the two lowest-energy modes which support the transmission, and L_TCW is the effective spatial scale of the region where the tunneling takes place [7]. The quantum phase δφ_TCW is expected to grow with increasing length of the tunneling barrier, L_tun, which is equal to 300 nm in the example of the left column of Fig. 13. We distinguish L_TCW and L_tun since the former depends on the shape of the electrostatic potential inside the device. Therefore, one may expect L_TCW ≠ L_tun. This inequality has been confirmed by a comparison of the scattering theory (green and cyan dots in Fig. 13c) with the outcome of the true 2D simulations (solid lines in the same figure). L_TCW is the only adjustable parameter of this comparison. The value of δk has been found by using the dispersion relation of almost free electrons propagating in a semiconductor, E_{1,2} = (ħ k_{1,2})²/(2m*). The energy levels E_{1,2} have been obtained from 1D simulations of the spectrum at the 1D transverse cross section in the center of the device (dashed line in Fig. 13a). Interestingly, the ratio L_TCW/L_tun is almost insensitive to the transverse size of the device. An excellent agreement between the 2D simulations and the scattering theory suggests that the latter can be used as a compact model of the TCW in simulations of more complicated circuits. Such a simplification will allow one to minimize the computer resources needed for the simulations. The inset in Fig. 13c shows that the range of V_T where the quantum oscillations occur shrinks as L_tun, and correspondingly the space for the quantum interference, is made smaller. Since T_total is very close to 1 (no reflection) in the entire range 0 < V_T < 1 eV, such an idealized qubit would operate properly in the vicinity of any crossover point where T_13 ≈ T_14, e.g. V_T ∼ 0.1 eV or V_T ∼ 0.33 eV.
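The compact scattering model above is simple enough to evaluate directly. The sketch below computes T_13 and T_14 from δφ_TCW = δk·L_TCW, with δk obtained from the stated almost-free dispersion; the mode energies and the effective coupling length are illustrative placeholders, not the fitted values behind Fig. 13.

```python
# Compact scattering model of the TCW: T13 ~ cos^2(dphi/2), T14 ~ sin^2(dphi/2),
# with dphi = dk * L_TCW. E1, E2 and L_tcw below are illustrative placeholders.
import numpy as np

hbar = 1.054571817e-34
m_eff = 0.067 * 9.1093837015e-31
e = 1.602176634e-19

E1, E2 = 6.5e-3 * e, 7.1e-3 * e           # kinetic energies of the two lowest modes (J)
k1 = np.sqrt(2 * m_eff * E1) / hbar       # almost-free dispersion E = (hbar k)^2 / 2m*
k2 = np.sqrt(2 * m_eff * E2) / hbar
dk = k1 - k2

L_tcw = 400e-9                            # effective coupling length (m), assumed
dphi = dk * L_tcw
T13 = np.cos(dphi / 2) ** 2
T14 = np.sin(dphi / 2) ** 2
print(f"dphi = {dphi:.3f} rad, T13 = {T13:.3f}, T14 = {T14:.3f}, total = {T13 + T14:.3f}")
```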
Interferometer: When the electron modes propagate through the upper and lower arms of the electrostatic version of the Aharonov-Bohm interferometer, they acquire a relative phase which governs the quantum interference. If the interferometer is connected directly to the leads, the transmission through the device can be estimated as T_AB = cos²(e δV τ_AB / 2ħ) [165]. Here, τ_AB is the flight time of the electron through the unit. In the ballistic case, it is the ratio of the interferometer length to the electron velocity, τ_AB = L_AB/v. We have introduced the relative total potential, δV = V_u − V_l (integrated over the upper, V_u, or lower, V_l, arm), which the electron feels inside the interferometer. The interference oscillations are more pronounced in longer devices; compare the results presented in Fig. 13f and its inset. To avoid complexity, we do not discuss here a semi-phenomenological analytical calculation of the transmission for the composite TCW-AB-TCW device and do not compare the scattering theory with the 2D simulations. Similar to the TCW device, it is useful to tune V_g to the vicinity of the crossover point where T_13 ≈ T_14, i.e. either V_g ∼ 0.1 eV or V_g ∼ 0.5 eV in the example of Fig. 13f. Note, however, that T_total < 1 in both cases. Clearly, preference should be given to the regime with smaller reflection, i.e. the second crossover point in the simulated example.
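Similarly, the phenomenological interferometer formula can be evaluated to get a feeling for the voltage scale of the oscillations; the interferometer length and electron velocity below are assumed round numbers, not the parameters behind Fig. 13f.

```python
# Estimate of the electrostatic AB transmission, T_AB = cos^2(e*dV*tau/(2*hbar)),
# with the ballistic flight time tau = L_AB / v. All numbers are illustrative.
import numpy as np

hbar = 1.054571817e-34
e = 1.602176634e-19

L_ab = 1e-6                        # interferometer length (m), assumed
v = 1e5                            # electron velocity (m/s), assumed
tau = L_ab / v                     # ballistic flight time (s)

dV = np.linspace(0.0, 1e-3, 500)   # relative arm potential (V)
T_ab = np.cos(e * dV * tau / (2 * hbar)) ** 2

# The oscillation period in dV shrinks as the device (and hence tau) gets longer,
# consistent with the more pronounced oscillations seen in longer interferometers.
period = 2 * np.pi * hbar / (e * tau)
print(f"oscillation period in dV: {period * 1e6:.1f} microvolts")
```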
The above predictions of numerical simulations serve as important input for experimental realisations of the flying qubit. Integrating the simulation additionally into a feedback loop of the workflow (Fig. 9) would make it possible to find optimised device geometries tailored to the different approaches such as Levitons or SAW. To conclude this section, we would like to mention that the applications of the nextnano software can be very broad, since it can straightforwardly be adapted for modelling devices made from other semiconductor materials (e.g., the industrially highly relevant SiGe), for including effects of the magnetic field on the interferometer, or for mimicking dephasing and decoherence with the help of artificially connected leads, to name just a few.
Low energy time-resolved and many-body simulations
In the previous section, we have focused on the simulation of the static properties of the devices. Solving the corresponding quantum-electrostatic problem allows one to understand how macroscopic parameters, usually the geometry of the electrostatic gates that are set to typical values of order ≤ 1 eV, influence the active quantum part of the device where the relevant energies are in the meV range. Once this is understood, the next step is to simulate actual time-resolved experiments that involve sub-meV physics (typical times in the 1-100 ps range). The goal is to understand the propagation of pulses, the coupling between different pulses (at the origin of the two-flying-qubit gates), the renormalisation of the velocity due to Coulomb interactions [53], and other effects such as different decoherence and relaxation channels. These theoretical developments are very much on-going research for which no standard approaches have yet emerged. As an in-depth discussion of these aspects goes beyond the scope of the present review, we refer to Ref. [7] for pointers to the literature, to Ref. [166] for an introduction to the non-interacting formalism, and to Ref. [144] for illustrative time-dependent simulations of the propagation of voltage pulses, in particular in the quantum Hall regime [167].
These methods have not been included into the nextnano software, however, TKWANT, the time-dependent extension of the KWANT software is able to provide an appropriate platform which is complementary to the nextnano one. Both KWANT and TKWANT software packages are distributed under the BSD license which imposes minimal restrictions on the use and distribution of this software. Consequently, algorithms developed by the KWANT team could be incorporated into commercial software packages targeting specific quantum industry applications, such as electron-flying-qubit devices. The principal developers of KWANT are from CEA Grenoble and TU Delft.
Note that there are no general purpose simulation approaches that can handle many-body problems in an exact and systematic way, except in very particular cases. Most approaches rely on some approximation scheme whose validity must be checked a posteriori. A promising route followed by some of us to design a systematic method with a controlled accuracy uses calculations of high-order processes (i.e. processes where electrons interact strongly) made possible by the use of a machine learning approach to evaluate the corresponding high-dimensional integrals [168].
Conclusion and outlook
The realisation of flying qubits with single electrons opens a novel, viable route of quantum technology with considerable potential for quantum-computation applications. In this review we introduced the novel electron-flying-qubit approach and discussed three equally promising transport techniques - surface acoustic waves (SAW), hot-electron emission from quantum-dot pumps and Levitons - which are rapidly advancing. Owing to similarities between the different approaches - such as emission from a gate-defined quantum dot in SAW transport and the electron pump - we suspect that progress in one field will also drive the others. Based on the latest progress and relevant simulation cases, we showed that numerical modelling of quantum devices is decisive to speed up experimental deployment cycles towards the first implementation of an electron flying qubit. We anticipate that automatised optimisation of the device design via numerical modelling will enable nanofabrication tailored for efficient quantum operations.
In order to make the electron flying qubit competitive with cutting-edge approaches in the field of quantum computation, it is of central importance to develop ultrafast real-time control of quantum operations. An appealing approach to implement such in-flight quantum operations is to use ultrafast voltage pulses in the picosecond range and below. On-chip optoelectronic conversion of a femtosecond laser pulse is so far the most promising technique to generate electrical pulses on the picosecond scale [116][117][118]. Combined with recent conversion-efficiency improvements of these optoelectronic devices [119,120,169,170], such real-time control is within reach and is currently pursued in the UltraFastNano project. Using these techniques, single-electron wave packets with a temporal width of 1 ps can be generated. The miniaturisation of quantum interferometers enabled in this way will allow the implementation of hundreds of quantum operations within the coherence time. Furthermore, ultrafast gate control will provide a possibility to resolve quantum states in real time. Rather than measuring the coherent oscillations of the electron qubits by varying the strength of the tunnel coupling [42], one can simply control the tunnel barrier in a time-resolved manner. This makes it possible to keep the electrostatic confinement potential of the entire device constant and to vary only the tunnel barrier on the time scale needed for the quantum operation.
The progress in the field strongly depends on the availability of tools for the reliable modelling of the quantum devices. The simulations must possess enough predictive power to suggest the most suitable device geometry prior to the fabrication of the device in a clean room. Iterations and tests of the devices are costly and time consuming and should be reduced to the strict minimum with the help of high-precision professional simulations. Adding more and more qubits into quantum circuits will drastically increase the experimental parameter space for device tuning. Therefore, automatic tuning of all the gate voltages using concepts from artificial intelligence and machine learning would have to be implemented in platforms for the theoretical modelling. We anticipate that the synergy of semiconductor quantum technology with cutting-edge numerical simulations paves the way for electron-flying-qubit implementations, fostering the industrial applicability of quantum computation.
Assessment of autoregressive integrated moving average (ARIMA), generalized linear autoregressive moving average (GLARMA), and random forest (RF) time series regression models for predicting influenza A virus frequency in swine in Ontario, Canada
Influenza A virus commonly circulating in swine (IAV-S) is characterized by large genetic and antigenic diversity and, thus, improvements in different aspects of IAV-S surveillance are needed to achieve desirable goals of surveillance such as to establish the capacity to forecast with the greatest accuracy the number of influenza cases likely to arise. Advancements in modeling approaches provide the opportunity to use different models for surveillance. However, in order to make improvements in surveillance, it is necessary to assess the predictive ability of such models. This study compares the sensitivity and predictive accuracy of the autoregressive integrated moving average (ARIMA) model, the generalized linear autoregressive moving average (GLARMA) model, and the random forest (RF) model with respect to the frequency of influenza A virus (IAV) in Ontario swine. Diagnostic data on IAV submissions in Ontario swine between 2007 and 2015 were obtained from the Animal Health Laboratory (University of Guelph, Guelph, ON, Canada). Each modeling approach was examined for predictive accuracy, evaluated by the root mean square error, the normalized root mean square error, and the model’s ability to anticipate increases and decreases in disease frequency. Likewise, we verified the magnitude of improvement offered by the ARIMA, GLARMA and RF models over a seasonal-naïve method. Using the diagnostic submissions, the occurrence of seasonality and the long-term trend in IAV infections were also investigated. The RF model had the smallest root mean square error in the prospective analysis and tended to predict increases in the number of diagnostic submissions and positive virological submissions at weekly and monthly intervals with a higher degree of sensitivity than the ARIMA and GLARMA models. The number of weekly positive virological submissions is significantly higher in the fall calendar season compared to the summer calendar season. Positive counts at weekly and monthly intervals demonstrated a significant increasing trend. Overall, this study shows that the RF model offers enhanced prediction ability over the ARIMA and GLARMA time series models for predicting the frequency of IAV infections in diagnostic submissions.
Introduction
Influenza A virus (IAV) circulates in swine populations worldwide and has recently been characterized by the continuous emergence of novel viral recombinants and variants in some regions [1][2][3]. Coupled with the complex demographics of swine populations and their high birth and replacement rate, such viral diversity could result in an increased incidence of influenza infection and present a challenge for the development of infection and disease control strategies in animal populations. This could also cause some concerns from the public health perspective, similar to those caused by the spill-over infections from swine to people observed in 2012 in the US [4]. Thus, the development of new surveillance methods for IAV has its merits from multiple perspectives. Among different goals of surveillance, an important objective is the establishment of the capacity to forecast with the greatest accuracy the number of influenza cases likely to arise. Such an objective could be accomplished on the basis of statistical data-driven models, and is important whether the infection occurs as a major epidemic of a single strain, during the endemic state characterized by the continuous circulation of existing strains, or under the limited emergence of novel strains. Such an approach to forecasting could represent the basis for planning resource allocation by both animal and public health authorities. Of course, a reliably high forecasting accuracy would be key.
Diagnostic submissions for IAV from swine populations in Ontario, Canada, repeatedly peak in January and April [5]. For diseases that show recurrent seasonal patterns or occur in cyclic patterns, time series models are the most widely used statistical models by health researchers for forecasting [6]. Time series forecasting is commonly performed using autoregressive integrated moving average (ARIMA) models [6] that can accommodate both trend and seasonal variations. ARIMA models are typically selected by maximizing some measure of predictive accuracy. However, a drawback of ARIMA models is that they assume a Gaussian distribution of the response. Given count data, a Box-Cox transformation of counts using either a logarithmic or power transformation may yield approximately Gaussian-distributed data. Nevertheless, Gaussian modeling with transformed data may result in an inaccurate predictive distribution.
Another approach developed by Davis et al. [7] uses generalized linear autoregressive moving average (GLARMA) models. These models accommodate time series of counts that are assumed to follow a Poisson distribution. Recently, Dunsmuir et al. [8] developed an automated algorithm that provides for model identification within a given class of models and an assessment of model adequacy in regression modeling of count time series that follow Poisson, negative binomial or binomial distributions.
An additional, alternative approach for modeling count data is to use random forest (RF) models as developed by Breiman [9]. RF models offer a rule-based methodological approach that recursively partitions data, creating regression trees. RF models have been successfully applied in many fields, including public health studies. Studies by Cootes et al. [10] and Kane et al. [11], among others, suggest this modeling approach provides computational efficiency and high predictive accuracy.
Despite the fact that ARIMA, GLARMA, and RF models have been used in several studies, these approaches have never been applied to time series data for IAV surveillance or any other pathogens in swine populations. Furthermore, recent studies highlight the fact that swine have the highest rate of emergence of new viral infectious agents and, therefore, enhanced surveillance and comprehensive assessment are needed [12]. Thus, the objective of this study is to compare the performance of ARIMA, GLARMA, and RF models with respect to predicting the frequency of IAV in diagnostic submissions from swine populations in Ontario. Our main interest was to identify a model that would predict increases in the number of diagnostic submissions and positive virological submissions with a high degree of sensitivity, using data based on diagnostic submissions to the Animal Health Laboratory (AHL; University of Guelph, Guelph, Ontario, Canada). We were also interested in investigating the occurrence of seasonality and the long-term trend of IAV infections by applying time series and recursive partitioning modeling approaches to the same data from the AHL.
Data processing
Our data set contains the test-level records from porcine submissions that were voluntarily supplied from Ontario swine farms between May 2007 and December 2015 to the largest animal health diagnostic laboratory (AHL) in Ontario. We processed the data and extracted relevant information for the analysis, as shown in Fig 1. Each diagnostic submission contained one or more samples, and all samples were tested for IAV for research, monitoring, or diagnostic purposes. The test results for research and monitoring were excluded from the analysis based on the rationale that they might not represent actual clinical influenza disease in a herd. The diagnostic test results indicated if the samples were tested using serological and/or virological methods. We disregarded the serological findings since a serological diagnosis is based on the detection of antibodies, and could be a consequence of vaccination or historical exposure at an unknown point in time. Virological tests were from procedures such as real-time reverse transcription polymerase chain reaction (rtPCR), immunohistochemistry, or virus isolation. These techniques were applied to either different samples or the same sample within a submission. Test results interpreted as "inconclusive", "suspicious", or "weak positive" were considered negative. When a categorical result (positive/negative) was not declared for quantitative real-time rtPCR test results, we used a cycle threshold (Ct) to declare a test result. Ct is the (amplification) cycle number when fluorescence increases above the background level. The reagents employed were manufactured by Life Technologies, and we used their recommended Ct of 36 as a positive cut-off value; that is, Ct ≤ 36 was used to declare a positive test. Any test that indicated a positive result for influenza A virus was considered a positive individual virological test, and a submission with at least one positive individual virological test was considered a positive submission. Test results reported descriptively were excluded from the analysis. The number of daily submissions, and the number of daily positive diagnostic submissions at the herd level, were aggregated into monthly and weekly intervals with variables corresponding to the date of the beginning of the week or month, resulting in four individual historical datasets. A week was considered to run from Monday to Sunday and each study year included 52 weeks. The 53rd week for years 2007 and 2012 was omitted to ensure the same number of weeks in each study year and enable simple conversion of the outcome measures into comparable time series. The time series of the number of diagnostic submissions and positive virological submissions at weekly and monthly intervals were analyzed individually.
Data
Each historical data set contains information on the outcome measures and variables that were included in the analysis. The outcome measures of interest were the time series of: (i) the number of diagnostic submissions per week, (ii) the number of diagnostic submissions per month, (iii) the number of positive virological submissions per week, and (iv) the number of positive virological submissions per month. A detailed description of the explanatory variables used is given under each model description.
Statistical methods
A time series decomposition was performed on the four historical time series. Using the results from the decomposition, ARIMA, GLARMA and RF models were built to assess and predict the frequency of IAV in the swine population in Ontario. The predictive accuracy of each modeling approach was evaluated via the root mean square error (RMSE) and the normalized root mean square error (NRMSE). We also assessed the models' ability to anticipate increases and decreases in the number of diagnostic submissions and positive virological submissions at weekly and monthly intervals.
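As a point of reference for the error measures used throughout, the snippet below implements RMSE and NRMSE. The study was carried out in R; Python is used here purely for illustration, and since the exact NRMSE definition is given only in the supplementary material, normalisation by the mean of the observed series is an assumption made here.

```python
# Illustrative RMSE / NRMSE helpers (normalisation by the observed mean is an
# assumption; the paper's exact definition is in its supplementary material).
import numpy as np

def rmse(observed, predicted):
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.sqrt(np.mean((observed - predicted) ** 2))

def nrmse(observed, predicted):
    return rmse(observed, predicted) / np.mean(np.asarray(observed, dtype=float))

obs = [2, 3, 1, 4, 2, 5]
pred = [2.4, 2.6, 1.5, 3.2, 2.1, 4.4]
print(round(rmse(obs, pred), 3), round(nrmse(obs, pred), 3))
```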
We implemented a seasonal-naïve method based on weekly/monthly averages over past years for each historical time series and it was used as a benchmark comparison for the ARIMA, GLARMA and RF models. The seasonal-naïve method was assessed in the same way as the other models.
The statistical analyses were performed using R version 3.3.1 [13] with the significance level set at P < 0.05. An enhanced description of each methodology is contained in the Supplementary Material (S1 File). The computation methods for the RMSE and NRMSE are also elaborated upon in the Supplementary Material (S1 File).
Time series decomposition. Before applying the time series techniques, the four outcome measures were investigated for temporal autocorrelation in the residuals using the Durbin-Watson test. Under this test, the null hypothesis is that the residuals are serially uncorrelated, and this is tested against the alternative hypothesis that they follow a first-order autoregressive process. The value of the test statistic observed suggested a pattern of positive serial correlation for each series (d < 2). This was also confirmed by examining the time series graphically. Applying a filtering procedure, the time series were decomposed into trend, season, and remainder components using the STLPLUS function [14]. The procedure is based on a local regression smoother. The seasonal smoothing parameter, n_s, was set to 19 lags based on the seasonal-diagnostic plots. The value of the trend window, n_t, was calculated considering the frequency of the time series and the seasonal smoothing parameter and assessed with trend-diagnostic plots [14]. For the weekly time series, n_t.week was set to 87 lags; for the monthly time series, n_t.month was set to 21 lags. The robust STLPLUS estimation procedure was used based on the seasonal-diagnostic and trend-diagnostic plots. For the robust procedure, the number of inner iterations used was 2 and the number of outer iterations used was 5, which provided convergence of the procedure.
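A rough Python analogue of this decomposition step (the study used the R stlplus function) is sketched below with statsmodels' STL on a simulated weekly series; the smoothing settings are illustrative and are not a one-to-one mapping of the n_s = 19 and n_t values quoted above.

```python
# STL-type trend/season/remainder decomposition of a simulated weekly count
# series; Python stand-in for the R stlplus workflow described in the text.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(0)
weeks = pd.date_range("2007-05-07", periods=463, freq="W-MON")
counts = pd.Series(
    2 + 1.5 * np.sin(2 * np.pi * np.arange(463) / 52)   # seasonal pattern
      + 0.002 * np.arange(463)                            # slow upward trend
      + rng.poisson(1.0, 463),                            # count-like noise
    index=weeks,
)

decomp = STL(counts, period=52, robust=True).fit()
print(decomp.trend.head())
print(decomp.seasonal.head())
```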
ARIMA. We first considered the representation of the observed time series via an ARIMA model. Identifying and fitting an ARIMA model can be quite complex and time consuming as it can have a large number of parameters. Therefore, the model estimation procedure was performed using the stepwise automatic algorithm with the AUTO.ARIMA function in R. The best of all possible models was selected according to Akaike's Information Criterion (AIC) [15]. A Box-Cox transformation was used to help satisfy the ARIMA assumptions.
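The order-selection idea can be illustrated with a small AIC-driven search, sketched below with statsmodels as a Python stand-in for the stepwise AUTO.ARIMA algorithm; the plain grid search and the log1p transform are simplifications of the procedure described above, run on simulated counts.

```python
# AIC-based ARIMA order selection (plain grid search, not the Hyndman-Khandakar
# stepwise algorithm behind auto.arima); data are simulated stand-in counts.
import itertools
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
y = np.log1p(rng.poisson(3, size=200).astype(float))  # crude variance-stabilising transform

best_aic, best_order, best_fit = np.inf, None, None
for p, d, q in itertools.product(range(3), range(2), range(3)):
    try:
        fit = ARIMA(y, order=(p, d, q)).fit()
    except Exception:
        continue                                        # skip non-converging orders
    if fit.aic < best_aic:
        best_aic, best_order, best_fit = fit.aic, (p, d, q), fit

print("selected order:", best_order, "AIC:", round(best_aic, 1))
print(best_fit.forecast(steps=4))                       # four-step-ahead point forecasts
```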
GLARMA. Because the time series of the number of diagnostic submissions and positive virological submissions per week and per month consist of counts, it is natural to model them using GLARMA models. The GLARMA modeling process was performed on the four historical count time series using the R package GLARMA [8]. The explanatory variables used were the linear trend and the season. The trend represents an increase by one unit over the entire study period and was centered at the mid-point of the study period. The season effect was introduced as a categorical variable with either four levels (winter, spring, summer, and autumn, with summer used as the reference level) for the weekly historical count time series or 12 levels (12 months of the year, with August used as the reference level) for the monthly historical count time series. The ARMA components were selected based on the estimated autocorrelation and partial autocorrelation functions using the residuals from the generalized linear model regression. The best model was selected based on the Wald test, the likelihood ratio test, and the AIC; here these measures were always in agreement. The response distribution for each time series was selected depending on the estimated value of the shape parameter: if the parameter was significant, a negative binomial distribution was used; otherwise, a Poisson distribution was used. The validity of the assumed distribution was examined via the probability integral transformation.
Random forests. Finally, random forest regression [9] was used to analyze the four time series. Regression was performed with the R randomForest package [16]. Explanatory variables included were the linear trend and the season (as for the GLARMA model), as well as the count time series up to five lags. The importance of each variable was calculated. The performance of RF-based regression was evaluated and optimized for the smallest error estimate via 10-fold cross-validation (CV) and the "out-of-bag" (OOB) error.
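The feature set described above (centred linear trend, season, and counts lagged up to five steps) can be assembled as follows; this is a scikit-learn stand-in for the R randomForest workflow, run on simulated counts purely for illustration.

```python
# Random-forest regression with trend, season and lag-1..5 features
# (scikit-learn analogue of the randomForest setup; simulated data).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 463
counts = pd.Series(rng.poisson(3, size=n).astype(float))

features = pd.DataFrame({
    "trend": np.arange(n) - n // 2,            # centred linear trend
    "season": (np.arange(n) // 13) % 4,        # crude 4-level calendar season
})
for lag in range(1, 6):                        # lagged counts up to 5 weeks
    features[f"lag{lag}"] = counts.shift(lag)

data = pd.concat([counts.rename("y"), features], axis=1).dropna()
rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
rf.fit(data.drop(columns="y"), data["y"])

print("OOB R^2:", round(rf.oob_score_, 3))
print(dict(zip(data.drop(columns="y").columns, rf.feature_importances_.round(3))))
```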
Retrospective analysis
A retrospective analysis was performed on each historical time series for the period from May 2007 to December 2015. In this approach, the ARIMA, GLARMA, and RF models were built to assess the effect of seasonality and the long-term trend of IAV infections on the number of diagnostic submissions and positive virological submissions at weekly and monthly intervals.
Simulated prospective analysis
A simulated prospective analysis was performed on each historical time series to compare the performance of the ARIMA, GLARMA and RF models. The simulations started by training a model on the first 44 weeks (or months) of data. The process proceeded by iteratively adding a successive week (or month), retraining the model using the updated data, and predicting the number of submissions or positive submissions, excluding the training period. This process is known as "forecast evaluation with a rolling origin" [17]. The ARIMA, GLARMA and RF models in the simulated prospective analysis were built in a similar fashion to the retrospective analysis. The predictions and residuals were examined graphically to verify the adequacy of different aspects of the model. We also implemented leave-one-season-out cross-validation, LOSO, where each season was successively "left out" from the training period and used for validation.
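The rolling-origin procedure itself is easy to sketch: train on the first 44 observations, forecast the next one, append it to the training window, and repeat. The placeholder forecaster below is a seasonal-naïve rule standing in for the ARIMA, GLARMA and RF models of the study, and the series is simulated.

```python
# Sketch of "forecast evaluation with a rolling origin" with a seasonal-naive
# placeholder forecaster; in the study this loop retrains ARIMA/GLARMA/RF.
import numpy as np

rng = np.random.default_rng(3)
series = rng.poisson(3, size=120).astype(float)
period = 52
start = 44

predictions, actuals = [], []
for t in range(start, len(series)):
    train = series[:t]
    # placeholder forecaster: value one seasonal period ago (or the last value)
    forecast = train[-period] if len(train) >= period else train[-1]
    predictions.append(forecast)
    actuals.append(series[t])

predictions, actuals = np.array(predictions), np.array(actuals)
rmse = np.sqrt(np.mean((actuals - predictions) ** 2))
print("rolling-origin RMSE:", round(rmse, 3))
```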
To investigate the ability of each model to predict increases and decreases in the number of diagnostic submissions and positive virological submissions, confusion matrices were constructed where predicted increases and decreases were classified into correctly (and incorrectly) identified actual increases and decreases. The accuracy for each modeling approach was calculated to determine the proportion of the total number of predictions that were correct. Sensitivity, the proportion of correctly identified increases, was also computed.
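The direction-of-change evaluation reduces to comparing the signs of the predicted and actual week-to-week differences; a minimal sketch (ties counted as non-increases, illustrative numbers) is given below.

```python
# Accuracy and sensitivity for predicted vs actual week-to-week increases.
# Sensitivity = proportion of actual increases that were predicted as increases.
import numpy as np

actual = np.array([2, 3, 5, 4, 4, 6, 5, 7])
predicted = np.array([2, 4, 4, 5, 3, 6, 6, 6])

actual_up = np.diff(actual) > 0
pred_up = np.diff(predicted) > 0

accuracy = np.mean(actual_up == pred_up)
sensitivity = np.mean(pred_up[actual_up]) if actual_up.any() else np.nan
print(f"accuracy = {accuracy:.2f}, sensitivity = {sensitivity:.2f}")
```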
Retrospective ARIMA, GLARMA, and RF
Overall, 1414 unique submissions from swine herds were submitted to the AHL. Of the 1304 (92.2%) diagnostic submissions, 1100 (84.4%) were tested with virological procedures (Fig 1). Of these, 1095 (99.6%) submissions, including 312 (28.5%) positive submissions, were aggregated based on the submission date to obtain the number of diagnostic submissions and positive virological submissions per week and month. In total, 463 weekly and 104 monthly observations were converted into the time series used for analysis. Total weekly diagnostic submissions ranged from 0 to 11, with the weekly average increasing from about 2 over the study period. The four count time series are shown in Fig 2. The series seem to exhibit seasonal fluctuations. The trend in the weekly and monthly diagnostic submissions is apparent from visual inspection and seems to behave in a cyclic manner but with some tendency to upward drift. The positive counts at weekly and monthly intervals show a slow increasing trend.
The ARIMA analysis indicated the presence of a trend in the time series of the number of monthly diagnostic submissions, and the number of weekly and monthly positive virological submissions. First differencing reduced the effect of the trend. The coefficient estimates of the retrospective ARIMA components are provided in S1 Table, and the corresponding error measures in Table 1 and S2 Table. The RMSE and NRMSE in the retrospective analysis vary among the models from 0.966 to 4.429 (Table 1) and 0.111 to 0.210 (S2 Table), respectively, where the large values relate to the monthly submission prediction.
In the GLARMA model retrospective analysis, the likelihood ratio test and Wald tests indicate that the GLARMA model provides a better fit than the generalized linear model. Based on the estimated value of the shape parameter, the monthly count time series were modeled with a Poisson GLARMA, and the weekly count time series with a negative binomial GLARMA. The diagnostic plots of the probability integral transformation indicate that the models with the chosen serial correlations are adequate. The results from the analyses reveal a significant upward trend in the number of weekly and monthly positive virological submissions in the study period (P < 0.01, P < 0.01, respectively); however, this upward trend is not significant (P = 0.22 and P = 0.12, respectively) in the number of diagnostic submissions at weekly and monthly intervals. Relative to the baseline summer season, the winter, spring, and fall season regression terms were all found to be highly significant (P = 0.003, P = 0.0004, P = 0.001, respectively), indicating that season has a significant impact on the number of weekly diagnostic submissions. Only the fall season was significant (P = 0.0419) for the number of weekly positive virological submissions. Furthermore, relative to the August baseline, late fall months, early winter months, spring months and June were found to have a significant impact on the number of monthly diagnostic submissions (P < 0.05). Fall and early winter months and May were found significant (P < 0.05) when modeling the number of monthly positive virological submissions. With the retrospective regression RF model, we examined the relative influence of the explanatory variables on the count time series and present the results in Table 2. The importance values for season were found to be among the highest for the monthly and weekly submission counts, but not for the monthly and weekly positive submissions. Trend was found to have the largest importance value for the monthly and weekly positive submission counts. This may suggest that (as was found under the GLARMA model) season affects the monthly and weekly count time series of diagnostic submissions, and the upward trend affects the monthly and weekly count time series of positive diagnostic submissions.
Simulated prospective ARIMA, GLARMA, and RF
The predictive accuracy of the simulated prospective time series models is shown in Table 1 and S2 Table. Overall, the predicted and actual counts are very close. The RMSE ranges from 1.018 to 5.1694, with the smallest value for each of the four time series corresponding to the RF model (Table 1 and S2 Table). Fig 4 illustrates predictions with the simulated prospective models for the last three years. The plots show that high and low counts are not well predicted. Fig 5 contains the corresponding results from the LOSO cross-validation. Overall, the results are similar to those reported in Table 1 and S2 Table, and the magnitude of the difference between the two validation processes is very small (Table 1, S2 and S4 Tables). The simulated prospective model validation results are provided in Table 3 and S5-S15 Tables. Overall, the accuracy of correctly identifying increases and decreases in the number of diagnostic submissions, and positive virological submissions at weekly and monthly intervals, is over 50% (Table 3 and S5-S7 Tables). However, overall the RF model outperformed the GLARMA and ARIMA models. The proportion of increases and decreases correctly identified by the RF model ranged from 61% to 68% (see Supplementary Material, S5-S7 Tables). Furthermore, the RF tends to predict the actual increases with a higher degree of sensitivity than the GLARMA and ARIMA models, ranging from 0.6 to 0.69. The prospective validation results for the seasonal-naïve method are presented in S8-S11 Tables. This method poorly identified the actual increases in the counts, resulting in low accuracy and sensitivity (particularly for predicting increases in the number of positive virological submissions at weekly and monthly intervals). The simulated prospective model results based on the LOSO cross-validation are given in S12-S15 Tables. The accuracy with regard to correctly identifying increases and decreases in the number of diagnostic submissions and positive virological submissions at weekly and monthly intervals ranges from 45% to 73%, with the highest and lowest accuracy for each of the four time series corresponding to the RF and ARIMA models, respectively. More specifically, accuracies were 56-73%, 48-59% and 45-62% for the RF, GLARMA and ARIMA models, respectively. Additionally, the proportions of correctly identified increases were 62-88%, 56-72% and 0-55% for the ARIMA, RF and GLARMA models, respectively. Fig 6 shows the normal quantile-quantile (QQ) plots of the residuals, plotting the predicted quantiles against the theoretical quantiles. The points for weekly diagnostic submissions, monthly diagnostic submissions, and monthly positive virological submissions seem to fall on a straight line, indicating that the residuals are normally distributed. The QQ plots for the weekly positive time series are roughly linear from -1 to 1 (about 68% of the data), and then the points curve off in the extremities. This suggests the residuals have more extreme values in either the right or left tails than would be expected if they came from a normal distribution. These values correspond to the poor prediction of the sudden increases or decreases observed.
Discussion
The literature on different alternatives to analyze time series of IAV or other pathogens in swine populations for surveillance purposes is sparse. To the best of our knowledge, this is the first paper to focus on different approaches for forecasting IAV frequency in swine and to compare the outputs of such approaches to a suitable reference for the purpose of calculating sensitivity and predictability. Based on the AHL swine diagnostic IAV data, we have shown that the prospective RF model outperforms the prospective time series models. A similar conclusion was reached by Kane et al. (2014), who considered the RF and ARIMA models for the prediction of avian influenza outbreaks.
The retrospective and prospective analyses conducted herein highlighted different aspects of modeling performance. Evidence of seasonality could be detected in four time series to various extents. GLARMA models based on the number of submissions per week and per month suggested the existence of seasonal differences among individual months (compared to August) and calendar seasons. Similarly, the RF models based on the latter two time series also indicated season as the most, or second most, important term in the model, respectively. This agrees with a general consensus among experts that respiratory diseases have peaks in the first or second periods of the year [18,19]; it is also consistent with previous results [5]. Evidence for seasonality based on the number of positive virological submissions was less consistent. While several months had statistically higher positive submissions than the month of August for monthly positive submissions, no season other than the fall could be identified as statistically significantly different to summer for weekly positive submissions. In addition, the season effect had among the lowest importance scores among all variables included in RF models. Therefore, strong conclusions about the existence of seasonality based on the number of positive virological submissions cannot be made based on these data alone. However, an increasing trend in the frequency of positive submissions based on monthly and weekly data was detected regardless of the modeling approach considered. This could be a consequence of: (i) improved laboratory methods, or (ii) more effective sampling strategies applied in the field, or (iii) an increasing trend in IAV frequency, or (iv) all, or some, of the above combined.
Prospective ARIMA, GLARMA and RF models were evaluated with respect to their predictive abilities. Results were compared with those obtained with the seasonal-naïve method. Each of the models had problems predicting sudden increases and decreases in the number of diagnostic and positive virological submissions. Such difficulties with forecasting "shocks" are commonly found in economics [20] and outbreak investigations [11,21], among others. An overall comparison of the three time series models indicated that RF models outperformed the ARIMA and GLARMA. The RMSE (or NRMSE) for the prospective forecast of the RF models was the lowest of the three methods. Furthermore, RF models were found to have a tendency to predict increases in counts with a higher degree of sensitivity than ARIMA and GLARMA models. This could be due to several reasons. For instance, in the prospective forecasts the explanatory variables for the GLARMA models remained the same when the model was retrained from one iteration to the next, as well as across the four different time series. However, the parameters of ARIMA and the explanatory variables for RF changed. In fact, within the RF algorithm predictors were sampled at each node of a tree when the model was retrained from one iteration to the next. These adaptive advantages might in part explain the lower RMSE (or NRMSE) of RF over ARIMA and GLARMA, and the lower RMSE (or NRMSE) of ARIMA over GLARMA, in prospective forecasts. Another reason might be that a Box transformation of counts in the ARIMA models did not approximate a Gaussian distribution very well, leading to poor predictive performance. We note that the RF model does not rely on any distributional assumptions and that, although it seems to have higher sensitivity than the other two methods, this model appears to be poorer at predicting decreases in submissions. A possible explanation for not detecting increases as well as decreases could be the existence of factors associated with the variation of the counts that were not included in the analyses.
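The simulated prospective forecasting discussed above can be thought of as a rolling, one-step-ahead loop in which the model is refit on all data observed so far before each new prediction. The sketch below outlines such a loop with a random forest; the lag structure, feature set and hyperparameters are illustrative assumptions, not the specification used in this study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rolling_forecast(counts, seasons, n_lags=4, train_size=100):
    """One-step-ahead forecasts, refitting the RF at every iteration on all data seen so far.

    `counts` and `seasons` must have the same length; `seasons` is a numeric season code."""
    counts = np.asarray(counts, float)
    preds, actuals = [], []
    for t in range(train_size, len(counts) - 1):
        # Build a lagged design matrix from observations available up to time t
        X, y = [], []
        for i in range(n_lags, t + 1):
            X.append(np.r_[counts[i - n_lags:i], seasons[i]])
            y.append(counts[i])
        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
        # Predict the next, still unseen observation
        x_next = np.r_[counts[t + 1 - n_lags:t + 1], seasons[t + 1]]
        preds.append(model.predict([x_next])[0])
        actuals.append(counts[t + 1])
    return np.array(actuals), np.array(preds)
```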
The three models tested outperformed the seasonal-naïve method. In particular, the predictive values found under the RF model were more accurate and detected the actual increases with a much higher degree of sensitivity than the naïve method. This could be explained by the fact that the seasonal-naïve method ignored all predictor information.
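For comparison, the seasonal-naïve method simply repeats the count observed one full season earlier, which is why it cannot exploit any predictor information. A minimal sketch, assuming a season length of 52 for weekly and 12 for monthly data:

```python
import numpy as np

def seasonal_naive(counts, season_length):
    """Forecast each observation as the value observed one full season earlier.

    The first `season_length` observations cannot be forecast this way."""
    counts = np.asarray(counts, float)
    forecasts = counts[:-season_length]       # value from the same period last season
    aligned_actuals = counts[season_length:]  # observations those forecasts refer to
    return forecasts, aligned_actuals

# e.g. weekly data: the forecast for week t is the count observed in week t - 52
forecasts, actuals = seasonal_naive(np.arange(120.0), season_length=52)
```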
The ARIMA, GLARMA and RF models' predictive abilities were assessed under the LOSO cross-validation. The RF model was more accurate in predicting the number of diagnostic submissions, and the ARIMA model was more accurate in identifying the actual increases in the positive counts. We note that the three models performed better on the weekly than on the monthly time series. The reason could be that the weekly data had more observations.
In this study, the predictive accuracy of ARIMA, GLARMA and RF models was assessed under different cross-validation approaches. All of the methods yielded qualitatively similar conclusions. However, the application of the validation techniques to time series forecasting was found not to be straightforward due to the inherent serial correlation and non-stationary nature of the data. Therefore, the LOSO cross-validation was used for all three models and the 10-fold cross-validation was used for RF models as part of Breiman's RF algorithm [9]. It should be noted that both cross-validation techniques are based on partitioning data into training and test sets to estimate the expected prediction error. It is feasible to apply both techniques to RF models, and in both approaches RF models tended to predict increases with a higher degree of sensitivity than ARIMA and GLARMA models. On the other hand, GLARMA models under the LOSO cross-validation failed to predict increases in the number of monthly positive virological submissions. Failure of this cross-validation technique in a time series context was also demonstrated in a study by Moreno-Torres et al. [22], among others. Furthermore, it was found to be difficult to apply 10-fold cross-validation to GLARMA models because of the need to select appropriate lags for the ARMA components. That is, the algorithm for a GLARMA model is designed in such a way that the use of the 10-fold cross-validation requires the manual specification of the degree of serial dependence for each training set. Moreover, the misspecification of lag structure leads to identifiability issues and lack of convergence of the likelihood optimization algorithm. These were found to result in a very time-consuming validation procedure. There was also a problem with the application of the 10-fold cross-validation for ARIMA models on the weekly time series. The validation process was computationally expensive because ARIMA models are fully iterative and have computationally intensive fitting.
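A LOSO scheme of the kind discussed here can be written as an ordinary grouped cross-validation in which each calendar season forms one fold. The sketch below shows the idea for an RF model; the grouping labels and model settings are illustrative assumptions rather than the exact configuration used in this study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def loso_cv(X, y, season_labels):
    """Leave-one-season-out cross-validation.

    Each calendar season (e.g. 'winter-2014') is held out in turn, the model is
    refit on the remaining seasons, and the held-out season is predicted."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    season_labels = np.asarray(season_labels)
    preds = np.empty_like(y)
    for season in np.unique(season_labels):
        test = season_labels == season
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X[~test], y[~test])
        preds[test] = model.predict(X[test])
    return preds
```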
Considering the above, it appears that overall the RF is the most accurate model among the ones tested. Based on these findings, and on the fact that detecting increases in disease frequency (sensitivity) is important for veterinary authorities, public health planners and policy makers [23,24], we conclude that the RF models could potentially be used for predicting weekly and monthly counts of IAV submissions, under the conditions that were considered in this analysis. Certainly among the three considered approaches, RF models appear to be the most suitable choice for ongoing reporting systems, and they appear particularly suited for predicting increases in disease frequency.
One limitation of this study was the surveillance nature of the data. The rate of swine submissions is likely associated with numerous and varied reasons affecting voluntary participation. The development of innovative strategies to promote participation in a surveillance program would be beneficial for the swine industry and, given the pandemic potential of novel influenza virus infections, for human populations as well. Another limitation was that our analyses did not include a number of variables that might be associated with IAV frequency. Thus, the inclusion of environmental factors (e.g., temperature, humidity) in surveillance models for IAV in swine populations might be another area for further investigation.
Conclusion
The results from the simulated prospective analysis suggested the RF approach tends to predict increases in the number of diagnostic and positive virological submissions at weekly and monthly intervals with a higher degree of sensitivity than ARIMA and GLARMA models. The predictive performance of each prospective modeling approach was evaluated with the RMSE and NRMSE, which were found to be smallest for the RF model. Overall, the RF modeling approach offers enhanced prediction ability over ARIMA and GLARMA time series models for the diagnostic data under the conditions considered in the analysis of this study. The retrospective ARIMA, GLARMA, and RF models indicate that the fall months and January have the most significant impact on the (increasing) number of weekly and monthly diagnostic and positive virological submissions for IAV infections in Ontario swine populations. A significant linear increasing trend was found for the positive counts at both weekly and monthly intervals. Future research should explore formulations of time series with other factors that could influence the frequency of IAV in swine populations.
Supporting information
S1 Table. Results of the fitted retrospective autoregressive integrated moving average time series model. (PDF)
S2 Table. Predictive accuracy via the normalized root mean square error (NRMSE). Predictive accuracy was evaluated for the autoregressive integrated moving average (ARIMA), generalized linear autoregressive moving average (GLARMA), and random forest (RF) time series models. (PDF)
S3 Table. Predictive accuracy for the seasonal-naïve method. Predictive accuracy was evaluated via the root mean square error (RMSE) and the normalized root mean square error (NRMSE). (PDF)
S4 Table. Predictive accuracy using leave-one-season-out cross-validation. Predictive accuracy was evaluated via the root mean square error (RMSE) and the normalized root mean square error (NRMSE). (PDF)
S5 Table. Confusion matrix for predicted monthly diagnostic submissions. Counts were predicted with the prospective autoregressive integrated moving average (ARIMA), generalized linear autoregressive moving average (GLARMA), and random forest (RF) time series models. (PDF)
S6 Table. Confusion matrix for predicted weekly diagnostic submissions. Counts were predicted with the prospective autoregressive integrated moving average (ARIMA), generalized linear autoregressive moving average (GLARMA), and random forest (RF) time series models. (PDF)
Perspective highlights on biodegradable polymeric nanosystems for targeted therapy of solid tumors
Introduction: Polymeric nanoparticles (NPs) formulated using biodegradable polymers offer great potential for development of de novo drug delivery systems (DDSs) capable of delivering a wide range of bioactive agents. They can be engineered as advanced multifunctional nanosystems (NSs) for simultaneous imaging and therapy, known as theranostics or diapeutics. Methods: A brief perspective is provided on the biomedical importance and applications of biodegradable polymeric NSs through reviewing the recently published literature. Results: Biodegradable polymeric NPs present unique characteristics, including: nanoscaled structures, high encapsulation capacity, biocompatibility with non-thrombogenic and non-immunogenic properties, and a controlled-/sustained-release profile for lipophilic and hydrophilic drugs. Once administered in vivo, all classes of biodegradable polymers (i.e., synthetic, semi-synthetic, and natural polymers) are subjected to enzymatic degradation and hence transformed into byproducts that can be simply eliminated from the human body. Natural and semi-synthetic polymers have been shown to be highly stable, much safer, and offer a non-/less-toxic means for specific delivery of cargo drugs in comparison with synthetic polymers. Despite being biocompatible and enzymatically degradable, there are some drawbacks associated with these polymers, such as batch-to-batch variation, high production cost, structural complexity, lower bioadhesive potential, uncontrolled rate of hydration, and possibility of microbial spoilage. These pitfalls have underscored the importance of synthetic counterparts despite their relative toxicity. Conclusion: Taken together, to minimize the inadvertent effects of these polymers and to engineer much safer NSs, it is necessary to devise biopolymers with desirable chemical and biochemical modification(s) and polyelectrolyte complex formation to improve their drug delivery capacity in vivo.
Introduction
Biodegradable polymeric nanoparticles (NPs) and nanosystems (NSs) are deemed to be very efficient drug delivery systems (DDSs) that are considerably safer than non-biodegradable polymers and lipids used for gene/drug delivery. [1][2][3][4] The biodegradable polymers are also bioactive and hence can be used as polymer therapeutics, [5][6][7][8] which can also be exploited for targeted delivery of a wide range of small and large molecules (e.g., human growth hormone, 9 insulin, 10 anti-tumor agents, 11 contraceptives, 12 vaccines, 13 anticancer drugs, 14 and antibiotics 15 ) in a controlled, sustained or pulsatile manner. 16 It should be highlighted that the liberation of encapsulated/incorporated drugs from these polymers can be carefully controlled, and the drug concentration in the target site is maintained within the therapeutic window. 17 Biodegradable polymers are considered ideal biomaterials for the development of controlled-/sustained-release DDSs as well as therapeutic devices such as degradable implants, impermanent prostheses, and degradable 3D scaffolds for tissue engineering. To develop effective therapeutic devices, one needs to use the most compatible biodegradable polymers depending on the endpoint biological uses, based on their specific physicochemical, biomechanical and enzymatic/hydrolytic degradation properties. 18 In fact, significant efforts, time and resources are required to engineer biomaterials with unique properties towards development of sophisticated biotherapeutics. Further, the biomaterials currently implemented in various clinical settings need to be revisited to address all the issues associated with application of biopolymers in vivo. Most of these issues are in close association with the physicochemical interaction of the applied biopolymers with the target tissue/cells. Of these, for instance, inadvertent immunologic reactions can dramatically limit their uses, while such a drawback can be beneficial when the endpoint objective is the activation of the immune system by vaccination/immunization. Besides, compelling evidence on the long-term safety of these materials seems to be necessary. In this review, we highlight the importance of biodegradable polymers and the key issues that have crucially contributed to their limited/slow evolution towards applications in biomedical/pharmaceutical fields.
It should be stated that a polymer with a C-C backbone can resist the degradation, while heteroatom-containing polymers can show some degrees of biodegradability depending on the structural properties of the polymer. Hence, inclusion of degradable chemical linkages (e.g., ester, amide and anhydride) can improve the degradation processes. Technically, several intrinsic physicochemical and formulation properties (e.g., structural chemistry, molecular weight, hydrophilicity/hydrophobicity, water absorption, surface charge, type and morphology of formulation, surface modification, and degradation and erosion mechanism) of degradable polymers can affect their compatibility and interaction with biological settings. Fig. 1 illustrates the chemical structures of some selected important biodegradable polymers used in various biomedical applications.
Natural biopolymers
The natural biodegradable polymers, known as biopolymers, encompass various classes of counterparts such as polysaccharides (e.g., starch, cellulose, chitin, chitosan) and naturally existing proteins (e.g., collagen, laminin and fibronectin). Nowadays, much research has been directed towards the use of natural polymers, 19 which are particularly suitable for medical and pharmaceutical applications due to their biocompatibility and biodegradability. 20 Fig. 1 shows some commonly used natural biodegradable polymers. 17,21 Blending natural polymers with synthetic polymers (e.g., poly(vinyl alcohol), poly(ethylene oxide), poly(vinyl pyrrolidone)) makes it possible to fabricate bioartificial/biosynthetic polymers as a new class of advanced materials with improved mechanical properties and biocompatibility in comparison with those of the single components. This class of biomaterials can be tailored to adequately mimic human tissue components, and is applicable in cell-based transplantation, tissue engineering and gene therapy. 22 For example, chemically modified hyaluronic acid and gelatin have been investigated for delivering mesenchymal stem cells to repair osteochondral defects in a rabbit model. After 12 weeks, the defects were completely repaired with cartilage. 23
Synthetic degradable polymers
There exist various synthetic biodegradable polymers such as poly(hydroxybutyrate), polyanhydride copolymers, poly(orthoester)s, polyphosphazenes, poly(amidoester)s, poly(cyano acrylate)s and PLGA. 14,24,25 PLGA is a widely used polymer that has been approved by the United States Food and Drug Administration (FDA) for various therapeutic/diagnostic applications. This class of polymers offers enormous potential as drug carriers, in large part due to their biodegradability, biocompatibility, and the possibility for development of sustained-/controlled-/pulsatile-release and targeted delivery. 14,24,26 Copolymers with a higher glycolic acid (GA) content degrade at a faster rate in comparison with PLGA 85:15, due to the higher hydrophilic GA content of the copolymer. 25,[27][28][29] The attractive features of PLGA-based NPs/NSs (e.g., small size, high structural integrity, stability, ease of fabrication, tunable properties, sustained-/controlled-release capability, and surface functionalization characteristics) make them versatile therapeutic delivery vehicles. However, there exist some drawbacks for the PLGA-based NPs in terms of physicochemical and biological properties that limit their applications in pharmaceutical/biomedical fields. These pitfalls include (a) poor loading efficiency for hydrophobic drugs, (b) high burst release, (c) uptake by the reticuloendothelial system (RES), (d) poor stability in water, (e) difficulties in producing particles below 100 nm in diameter, (f) short circulation time in the body, (g) aggregation, and (h) manufacturing scale-up issues. To resolve such constraints, the main focus is now on the development of hybrid PLGA NPs. [30][31][32] Technically, PLGA NPs can be formulated by emulsification-diffusion, solvent emulsion-evaporation, interfacial deposition, or nanoprecipitation methods. 33 However, the scale-up of PLGA NP formulation by means of these methods appears to be costly.
Biodegradable synthetic polymers with three-dimensional scaffolds are widely used in tissue engineering. An ultrasonically blended suspension of cellulose nanofibers (CNFs) with PLGA has been fabricated, and the obtained scaffolds appeared to possess suitable mechanical strength and biocompatibility for the cultivation of NIH 3T3 cells, which have been used in tissue engineering. 34 For example, a PLGA-coated beta-tricalcium phosphate (β-TCP) scaffold loaded with vascular endothelial growth factor was synthesized, and cell proliferation and attachment were investigated. It was found that the scaffold with sustained- and/or localized-release of VEGF could be favorable for bone regeneration in vitro. 35
Semisynthetic degradable polymers
The morphological and chemical modifications of natural polymers produce semisynthetic polymers that are better suited for processing and production of materials with the potential of mineralization and conversion to biomass. 36 Chitosan is a semisynthetic, natural-based polymer that is primarily obtained from chitin, which is the second most abundant polysaccharide in nature. 37 Chitosan is obtained by deacetylation of chitin under alkaline conditions, mostly from the shell waste of shrimps, crabs, krills and lobsters. 21 Chitosan is soluble in 0.1 N acetic acid and is a positively-charged linear polymer, which can be formulated as homogenous NPs through simple mixing of the polymer with negatively-charged drugs or nucleic acids. Chitosan NPs have shown great advantages, in large part due to their non-immunogenicity and the possibility of introducing larger genes into host cells in comparison with viral vectors. 20,[38][39][40][41] To improve the applicability of chitosan and its various derivatives (e.g., carboxylated, thiolated and acylated structures) for pharmaceutical/biomedical applications, they have so far been decorated with various functional groups such as polyelectrolyte/polyionic complexes. 19 A polyelectrolyte complex of chitosan and gelatin hydrogels prepared at pH 6.5 could be optimized for three-dimensional bioprinting at room temperature. 42 Also, chitosan modified with diacetate and triacetate has been used as a novel matrix for sustained release of doxorubicin (DOX). NPs loaded with DOX indicated high encapsulation efficiency, a sustained-release pattern and enhanced cellular accumulation. Further, chitosan-based NPs could improve the oral bioavailability of DOX. 43
Targeted therapy of solid tumors
Development and progression of solid tumors are in close relation with the Warburg effect and aberrant glucose metabolism through glycolysis, resulting in excess production of acidic byproducts whose efflux can deregulate the pH of the tumor microenvironment (TME). In solid tumors, cancerous cells upregulate the expression of the glucose transporter (GLUT-1) and some key enzymes and transporters (MCT-1, NHE-1, CA IX and the H+ pump V-ATPase) to fulfill their high energy requirement. This deviant phenomenon provokes the efflux of protons into the extracellular fluid (ECF), acidifying the ECF (pH ~6.6), while the pH of cancer cells holds at about 7.4. 44 These anomalous phenomena form a permissive milieu in favor of cancer cell survival, proliferation and invasion. The cocktail of various enzymes together with acidic pH can remodel the extracellular matrix (ECM) in a way that favors (a) the epithelial-mesenchymal transition (EMT) process necessary for survival of cancer cells, (b) migration of cancer cells from their primary setting to distant organs/tissues, and (c) formation of a TME with altered interstitium in which the interstitial fluid pressure (IFP) is markedly high, opposing the permeation and penetration of anticancer agents into the core of solid tumors. 45 It is believed that, during metastasis, the EMT process assists cancer cells to avoid anoikis through various mechanisms, enabling them to circulate within the blood stream or even lymphatic routes and colonize beyond their primary niches. 46,47 Further, neovascularization within solid tumors is often incomplete, encompassing non-integrated endothelial cells with pores and gaps (120-1200 nm) between these cells, upon which the tumor vasculature shows an enhanced permeability and retention (EPR) effect.
48,49 This latter phenomenon has widely been used for passive targeting of solid tumors 50 despite the opposing impact(s) of the high IFP of the TME. 51,52 Stimuli-responsive polymers are able to alter their physical properties in response to environmental changes (e.g., temperature, pH, light, ultrasound, etc.), and are therefore considered superior carriers for targeted delivery of drugs to, and on-demand drug release at, the site of solid tumors. 53 The pH difference between the TME and normal tissues seems to be the main driving force for the development of pH-sensitive DDSs. Biodegradable pH-responsive polymeric carriers in the form of micelles, vesicles or NPs have great potential to provide selective/on-demand drug release at tumor sites, and can then be rapidly degraded with no/trivial undesired impacts. Poly(β-amino ester), as a biodegradable cationic polymer, is used in the development of pH-sensitive DDSs. At low pH (≤6.5), the polymer dissolves rapidly and releases the drug. 54 Hydrogels based on PCL, methacrylic acid (MAA), and Pluronic have been developed as potential biodegradable polymers for drug delivery uses. The hydrolytic degradation of the hydrogel was shown to be enhanced with increasing PCL content, mainly due to acid cleavage of ester bonds. 55 Biodegradable polymeric micelles composed of PEG and polycarbonate functionalized with disulfide and carboxylic groups can be synthesized as pH- and redox-dual-responsive DDSs. The DOX-loaded micelles, with small particle size and narrow size distribution, indicate high drug loading capacity. When the NPs were exposed to the endosomal pH of 5.0, the DOX release rate was found to be accelerated by at least two-fold, and the DOX-loaded micelles showed enhanced cytotoxicity in nude mice bearing BT-474 tumors. 56 Hydrophilic thermo-sensitive biodegradable polymeric nanocarriers, as another example of smart DDSs, collapse at the hyperthermic condition of 42°C, which causes greater drug release and may lead to a synergistic effect of chemotherapy and hyperthermia for treatment of solid tumors. 57,58 A new pH-/temperature-sensitive, biocompatible, biodegradable, and injectable hydrogel based on poly(ethylene glycol)-poly(amino carbonate urethane) (PEG-PACU) copolymers has been developed for the sustained delivery of human growth hormone (hGH). The copolymer is a sol at low pH and temperature (pH 6.0, 23°C), while it forms a gel under physiological conditions (pH 7.4, 37°C). In vivo investigation of the prepared hydrogel confirmed in situ gel formation and controlled degradation at the injection site. 59 Wang et al prepared a thermo-sensitive hydrogel of chitosan/hydroxypropyl methylcellulose/glycerol, and the hydrogel showed in situ gel formation at physiological conditions (pH ranging from 6.8 to 6.9 at 37°C). The synthesized hydrogel indicated low toxicity, good fluidity, thermosensitivity, biodegradability and controlled release of bovine serum albumin. 60 Furthermore, biodegradable polymeric carriers have been modified with tumor-targeting agents such as specific ligands (e.g. folic acid), 61 antibodies 28 and aptamers 62 to enhance NP translocation into tumor cells. A PEG-PCL-PEG thermo-sensitive hydrogel containing tumor-targeted biodegradable folate-poly(ester amine)/DNA complexes has been synthesized and investigated for targeted gene delivery. The hydrogel composite indicates slight cytotoxicity with high transfection efficiency in vitro. The synthesized hydrogel, with sustained gene release and local gene delivery, could meet the demand for an effective tumor-targeted gene delivery system. 63 Taking all these issues into consideration, it seems that effective therapy of cancer demands specific targeting of the cancerous cells by smart NSs and delivery of anticancer agents to the target cells but not the healthy normal cells. Efficient delivery of anticancer drugs specifically to solid tumors requires implementation of nanocarriers with high payload capacity and suitable permeability and degradability within the TME. 64,65 Advanced biodegradable NSs armed with homing devices have the ability to penetrate into the TME and target the cancerous cells solely, while they impose no/little effects on the healthy cells and on the immunosurveillance activity of the immune system. 14,45,[66][67][68] Fig. 2 represents schematic structures of advanced DDSs and multifunctional NSs used for targeted therapy of cancer.
Fig. 2. Schematic illustration of advanced multifunctional drug delivery systems. Image was adapted with permission from our previous publication. 58
It should also be pointed out that, because of the wide-ranging applications and different properties of the polymeric biomaterials, there is no ideal polymer with universal use. Hence, depending on the endpoint use of the biopolymer, the right structure and formulation must be selected/devised, while there exists an array of macromolecular biomaterials that may meet the need(s) for development of therapeutic NSs. Biodegradable polymers, no matter synthetic or natural, can be degraded in vivo into biocompatible by-products through enzymatic transformation (e.g., hydrolysis). 17 Several factors (chemical structure and composition, distribution of repeat units in multimers, presence of ionic groups, structural configuration, molecular weight, morphology and pH) can affect the biodegradation process of a polymeric system. 25,70 Hence, a better understanding of all these influencing factors can facilitate the development of advanced DDSs and multimodal NSs. As shown in Table 2, several biodegradable polymeric nanoformulations are under investigation for treatment of a wide spectrum of diseases. Further, the biodegradability of these polymers makes them among the safest implantable systems, as they can be degraded and hence need no subsequent surgical operation for removal of the implanted system. 17 A multilayer cylindrical implant made of PLGA was used for the controlled- and extended-release of DOX molecules in murine breast cancer. Compared to the traditional intravenous route for DOX, this implant system could deliver a greater amount of DOX with better coverage of the local tumor, prevention of metastatic spread and less drug toxicity, without weight loss, splenomegaly or cardiac toxicity. 79 Biodegradable scaffolds composed of PLA and β-tricalcium phosphate have been developed for complex maxillofacial reconstruction. Biocompatibility tests with mesenchymal stem cells indicated better proliferation, without toxicity. The porous interconnected structures enable cellular adhesion and vascular proliferation. The in vivo investigation in rats led to complete bone ingrowth within 30 days with minimal inflammatory impacts. 80
Final remarks
To engineer the most compatible biomaterials, a number of central characteristics need to be met. These materials must (a) provoke no or only a trivial inflammatory response; (b) possess a degradation time coinciding with their function; (c) have appropriate mechanical properties for their intended use; (d) produce nontoxic degradation products that can be readily reabsorbed or excreted; and (e) include appropriate permeability and processability for the designed application. 81 These properties are greatly affected by a number of features of degradable polymeric biomaterials including, but not limited to, material chemistry. Due to the wide-ranging use of polymeric biomaterials, a single, ideal polymer or polymeric family does not exist. Instead, a library of materials is available to researchers that can be synthesized and engineered to best match the specifications of the material's desired biomedical function. Current efforts in biodegradable polymer synthesis have been focused on custom designing and synthesizing polymers with tailored properties for specific applications by: (i) developing novel synthetic polymers with unique chemistries to increase the diversity of polymer structure, (ii) developing biosynthetic processes to form biomimetic polymer structures and (iii) adopting combinatorial and computational approaches in biomaterial design to accelerate the discovery of novel resorbable polymers. Taken together, an ideal biodegradable polymeric DDS must be tailored in a way that it provides a number of imperative characteristics such as (a) suitable permeability and drug release profile based on the physicochemical properties (e.g., lipophilicity and hydrophilicity) of the cargo molecules, (b) biodegradability and biocompatibility, (c) tensile strength, and (d) possibility for surface modification and decoration. 17 It should also be pointed out that, for broadening the potential applications of biodegradable polymers, they should be modified utilizing several methods such as random and block copolymerization, grafting, blending and composite forming, which lead to new advanced biomaterials with unique properties including high performance, low cost, and good processability. 82 Given the fact that various non-biodegradable polymers and lipids used as DDSs and/or gene delivery systems (GDSs) impose intrinsic inadvertent cytotoxic and genotoxic impacts 1-4, 83-87 and some inevitable downsides of the currently used biodegradable polymers, we need to advance DDSs/GDSs towards mimicking the natural polymers found in the human body. Perhaps it is the right time to move on and implement the biopolymers of the human body, and engineer human-origin polymeric scaffolds able to specifically deliver drugs into the target cells/tissue without any detrimental impacts on the healthy normal cells by the delivery vehicles per se.
Ethical issues
There is none to be declared.
Competing interests
No competing interests to be disclosed.
What is current knowledge?
√ Biodegradable polymers (synthetic, semi-synthetic and natural) used for development of NPs possess unique characteristics including nanoscaled structures, high encapsulation capacity, biocompatibility and a controlled-/sustained-release profile for lipophilic/hydrophilic drugs.
√ PLGA-based NPs, as a synthetic biodegradable polymeric system, are FDA approved and widely used.
√ Chitosan, as a semisynthetic natural-based polymer, can be chemically modified to improve its applicability for various pharmaceutical/biomedical applications.
What is new here?
√ Advanced biodegradable NSs armed with homing devices have the ability to penetrate into the TME and target the cancerous cells solely, with no/little effects on the healthy cells.
√ Stimuli-responsive biopolymeric NSs offer selective drug release at tumor sites.
5‐HT 2B receptor antagonists attenuate myofibroblast differentiation and subsequent fibrotic responses in vitro and in vivo
Abstract Pulmonary fibrosis is characterized by excessive accumulation of connective tissue, along with activated extracellular matrix (ECM)‐producing cells, myofibroblasts. The pathological mechanisms are not well known, however serotonin (5‐HT) and 5‐HT class 2 (5‐HT 2) receptors have been associated with fibrosis. The aim of the present study was to investigate the role of 5‐HT 2B receptors in fibrosis, using small molecular 5‐HT 2B receptor antagonists EXT5 and EXT9, with slightly different receptor affinity. Myofibroblast differentiation [production of alpha‐smooth muscle actin (α‐SMA)] and ECM synthesis were quantified in vitro, and the effects of the receptor antagonists were evaluated. Pulmonary fibrosis was also modeled in mice by subcutaneous bleomycin administrations (under light isoflurane anesthesia), and the effects of receptor antagonists on tissue density, collagen‐producing cells, myofibroblasts and decorin expression were investigated. In addition, cytokine expression was analyzed in serum. Lung fibroblasts displayed an increased α‐SMA (P < 0.05) and total proteoglycan production (P < 0.01) when cultured with TGF‐β1 together with 5‐HT, which were significantly reduced with both receptor antagonists. Following treatment with EXT5 or EXT9, tissue density, expression of decorin, number of collagen‐producing cells, and myofibroblasts were significantly decreased in vivo compared to bleomycin‐treated mice. Receptor antagonization also significantly reduced systemic levels of TNF‐α and IL‐1β, indicating a role in systemic inflammation. In conclusion, 5‐HT 2B receptor antagonists have potential to prevent myofibroblast differentiation, in vitro and in vivo, with subsequent effect on matrix deposition. The attenuating effects of 5‐HT 2B receptor antagonists on fibrotic tissue remodeling suggest these receptors as novel targets for the treatment of pulmonary fibrosis.
Introduction
Pulmonary fibrosis is characterized by an excessive accumulation and remodeling of connective tissue and is considered to be the result of an imbalanced wound healing response, which fails to terminate correctly (Wilson and Wynn 2009). Progressive pulmonary fibrosis is often detected late, when tissue remodeling is extensive, creating a challenging approach for effective treatment (Meltzer and Noble 2008). To date there are no curative therapies for pulmonary fibrosis, leaving lung transplantation as a last resort.
The accumulation and deposition of extracellular matrix (ECM) in fibrosis are thought to rely on myofibroblasts, a cell type featuring increased expression of alpha-smooth muscle actin (α-SMA) and enhanced production of ECM proteins such as collagens and proteoglycans (Lijnen and Petrov 2002; Venkatesan et al. 2002). Myofibroblast differentiation is induced by profibrotic growth factors, and especially by transforming growth factor (TGF)-β1 (Verrecchia and Mauviel 2007). Another profibrotic mediator is serotonin (5-hydroxytryptamine, 5-HT), and studies support 5-HT-driven production of other profibrotic mediators and cell differentiation (Gairhe et al. 2012; Chen et al. 2014). The physiological levels of 5-HT are normally low due to uptake from plasma into platelets (Mercado and Kilic 2010). However, upon tissue damage, for example endothelial injury, platelets are activated and release 5-HT, which results in increased local concentrations of 5-HT (Schattner 2014; Mauler et al. 2015). An alternative source of pulmonary 5-HT originates from pulmonary neuroendocrine cells (Pan et al. 2006) and activated mast cells (Kushnir-Sukhov et al. 2007; Larsson-Callerfelt et al. 2013a).
The differentiation into myofibroblasts and subsequent ECM synthesis appear to be key events in the development of pulmonary fibrosis, and thus represent a highly interesting therapeutic target (Hinz 2007). We therefore hypothesized that 5-HT through interaction with 5-HT 2B receptors participate in remodeling processes observed in pulmonary fibrosis, by exerting effect on myofibroblast differentiation. This study aimed to investigate potential antifibrotic effects of two small molecular 5-HT 2B receptor antagonists with slightly different receptor affinity, EXT5 and EXT9. The antifibrotic effects of the antagonists were studied both in vitro and in vivo. Our results suggest involvement of 5-HT 2B receptors in myofibroblast differentiation and ECM production in fibrosis, emphasizing the need for further investigations of 5-HT 2B receptor antagonists as potential therapeutics for pulmonary fibrosis.
Western blot analysis of α-SMA in human lung fibroblasts
HFL-1 cells, densely seeded in six-well cell culture plates, were stimulated with either TGF-β1 (10 ng/mL), 5-HT (1 µmol/L or 10 µmol/L), or the combination of TGF-β1 and 5-HT, with or without the 5-HT 2B receptor antagonists EXT5 (10 µmol/L) or EXT9 (10 µmol/L). In addition, a preventive regimen (pretreatment with the receptor antagonists 1 h before adding the combination of TGF-β1 and 5-HT) was investigated. After 24 h, cell lysates were collected in supplemented 1% NP-40 (Sigma-Aldrich) and total protein content in the cell lysate was quantified with a BCA protein assay kit (Thermo Scientific). Collected lysates were then reduced, and size-separated protein samples were transferred to a PVDF membrane (Merck Millipore, Darmstadt, Germany). The membrane was incubated for 2 h with α-SMA antibody (0.3-1 µg/mL, ab5694, Abcam, Cambridge, UK), along with β-tubulin antibody (1:30,000-40,000, ab6046, Abcam) or GAPDH antibody (0.2 µg/mL, sc-47724, Santa Cruz) in blocking buffer (0.5% casein in 0.5% TBS-Tween), before washing in TBS-Tween, and incubation (30 min, RT) with a secondary DyLight680- or DyLight800-conjugated antibody, prior to final washing steps. Core protein bands of α-SMA and endogenous controls (GAPDH and β-tubulin) were imaged with Odyssey FC (LI-COR Inc., Lincoln, NE), controlled and analyzed by Image Studio v.3.1 (LI-COR Inc.). Data are presented as fold change of α-SMA compared to control (medium alone or the combination of TGF-β1 and 5-HT).
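The densitometric read-out described above amounts to normalizing each α-SMA band to its loading control and expressing the result relative to the chosen control condition. A minimal sketch of that fold-change calculation, with hypothetical band intensities, is given below; the actual quantification was performed in Image Studio.

```python
import numpy as np

def fold_change(target_bands, loading_bands, control_target, control_loading):
    """Normalize target-band intensity (e.g. α-SMA) to the loading control
    (GAPDH or β-tubulin) and express it relative to the control condition."""
    norm = np.asarray(target_bands, float) / np.asarray(loading_bands, float)
    norm_control = control_target / control_loading
    return norm / norm_control

# Hypothetical densitometry readings (arbitrary units)
print(fold_change([1200, 1850], [900, 880], control_target=700, control_loading=860))
```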
Measurements of proteoglycan synthesis in human lung fibroblasts
Confluent HFL-1 cells were subjected to a stepwise culturing regimen. Cells were first cultured in 0.4% FCIII-supplemented DMEM for 2 h and then stimulated with TGF-β1 (10 ng/mL), 5-HT (1 µmol/L), or the combination of TGF-β1 and 5-HT, with or without the receptor antagonists EXT5 (10 µmol/L) and EXT9 (10 µmol/L), in 0.4% FCIII-supplemented sulfate-poor medium (Invitrogen, Waltham, MA). After 2 h, [35S] (50 µCi/mL) was added to the wells. Cell medium and lysate were then collected after 24-h stimulation. Total protein content was quantified with a BCA protein assay kit (Thermo Scientific), and proteoglycans were extracted by column-based separation and size-separated with gel electrophoresis as described previously. Bands of size-separated [35S] sulfate-labeled proteoglycans were quantitated for the proteoglycans decorin, versican, perlecan, and biglycan with densitometry. The gels were imaged with Molecular Imager FX (Bio-Rad Laboratories Inc., Hercules, CA) and analyzed with Quantitative One 4.6. The amount of proteoglycans was related to total protein content (mg), and results were compared to controls (medium alone or the combination of TGF-β1 and 5-HT).
Bleomycin-induced pulmonary fibrosis in vivo model
Female C57/Bl6 mice, aged 12.5 weeks (Scanbur research A/S, Karlslunde, Denmark), were injected subcutaneously three times/week for 2 weeks with bleomycin 50 IE/animal (Baxter Medical AB, Kista, Sweden) as described previously (Rydell-Tormanen et al. 2012). Treatment groups (seven mice/group) received daily per oral (p.o.) administration of either EXT5 (30 mg/kg) or EXT9 (30 mg/kg) dissolved in water-based Tween80 (2.5 w/v %, Sigma-Aldrich) in parallel to the bleomycin administrations. Positive controls received bleomycin only and negative controls received saline injections, with p.o. administration of vehicle. Following sacrifice, 14 days post study initiation, lungs were fixed in 4% formaldehyde and embedded in paraffin. Blood was collected from subclavian arteries and serum stored at −20°C. Study protocols were approved by the local ethics committee (Malmö/Lund, Sweden, M103-14).
Immunofluorescence
For all analyses, 4-µm rehydrated sections and a standard protocol were used. Briefly, sections were rehydrated, pretreated with proteinase K (20 mg/mL) for 30 min (37°C), and washed in TBS. A primary antibody was applied and the sections incubated for 1.5 h (RT), before rinsing in TBS. A secondary antibody (standard dilution 1:200) was applied and incubated for 45 min (RT). The sections were then rinsed (for double staining, new primary and secondary antibodies were applied according to the protocol) and mounted. The following primary antibodies were used: α-SMA (1:2000, C6198, Sigma-Aldrich), prolyl-4-hydroxylase (P4H, 1:200, CTX101468, GeneTex, Irvine, CA), or decorin (1:200, BAF1060, R&D Systems). Nuclei were visualized with DAPI, and for negative controls the primary antibodies were omitted. All quantifications were done in a blinded fashion.
Myofibroblasts
Myofibroblasts (defined as solitary cells copositive for α-SMA and P4H) were visualized by double staining, and the number of double-positive cells was manually counted in five randomly obtained images of lung parenchyma (20× magnification); results were given as number of double-positive cells/mm².
Collagen-producing cells
The number of single-positive cells for P4H was used as a measurement of ongoing fibrosis (as P4H is necessary in collagen synthesis); the number of P4H single-positive cells, negative for α-SMA, was manually counted in five randomly obtained images of lung parenchyma (20× magnification), and results were given as number of single-positive cells/mm².
Decorin
Tissue area positive for decorin was analyzed according to a previously described protocol (Rydell-Tormanen et al. 2014). The positively stained area was calculated using ImageJ, and the results were given as positively labeled area per image (0.14 mm²).
Tissue density
Tissue sections were stained with hematoxylin/eosin according to standard protocol, and the tissue fraction of the total lung parenchymal area (%) was analyzed in 7-10 randomly obtained images (20× magnification) per animal using ImageJ (Wayne Rasband, NIH, Bethesda, MD).
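Conceptually, the tissue-density measurement reduces to thresholding each parenchymal field and reporting the stained fraction. The sketch below shows one way such a percent-area measurement could be reproduced in Python with scikit-image; the Otsu threshold and the omission of manual airway/vessel exclusion are simplifying assumptions, and the published analysis was performed in ImageJ.

```python
import numpy as np
from skimage import color, filters, io

def tissue_fraction(image_path):
    """Estimate the percentage of a parenchymal field occupied by tissue.

    Assumes an RGB hematoxylin/eosin image; a global Otsu threshold on the
    grayscale image separates stained tissue (dark) from airspace (bright)."""
    rgb = io.imread(image_path)
    gray = color.rgb2gray(rgb)
    threshold = filters.threshold_otsu(gray)
    tissue_mask = gray < threshold       # stained tissue is darker than airspace
    return tissue_mask.mean() * 100.0    # percent tissue of the total field
```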
Collagen staining
Tissue sections were stained with Masson's trichrome (HT15, Sigma-Aldrich) according to the manufacturer's protocol, dehydrated, and mounted in Pertex. Whole tissue sections were scanned (20× magnification) using an Olympus (Hamburg, Germany) VS120 slide scanner with an XV image processor L100 VS-ASW. With Visiopharm (VIS 6.1.0, Hoersholm, Denmark), parenchymal tissue was manually delineated and analyzed for collagen content. Larger airways, vessels, and pleura were excluded from regions of interest. The positive-stained area for collagen (blue staining) was quantified and related to total tissue area (excluding airspaces). Results were given as positive-labeled area of collagen versus total tissue area (%). The image viewer software VS-OlyVIA (version 2.9) (Olympus Soft Imaging Solutions GmbH; Münster, Germany) was used for image visualization.
Statistical analysis
Data were statistically tested using Graph Pad Prism 5.04 (in vitro data, La Jolla, CA) and Analyse-It v.2.26 (in vivo data, Leeds, UK). The following statistical analyses were performed: one sample t-test (P value two tailed) for in vitro analyses and Kruskal-Wallis test with LSD post hoc test for in vivo analyses. Significant P -values are defined as *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.005, ****P ≤ 0.001.
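For reference, the two main tests described above are available in SciPy; the sketch below applies a two-tailed one-sample t-test to hypothetical in vitro fold changes (tested against the control value of 1) and a Kruskal-Wallis test to hypothetical in vivo group data. The numbers are made up, and the LSD post hoc comparisons used in the paper are not shown.

```python
import numpy as np
from scipy import stats

# One-sample t-test for in vitro fold changes against the control value of 1.0
fold_changes = np.array([1.6, 1.4, 1.9, 1.5])          # hypothetical n = 4 experiments
t_stat, p_two_tailed = stats.ttest_1samp(fold_changes, popmean=1.0)

# Kruskal-Wallis omnibus test across in vivo treatment groups
# (saline, bleomycin, EXT5, EXT9); post hoc comparisons would follow a significant result
saline = [9, 11, 7, 12, 10]
bleomycin = [48, 55, 61, 50, 53]
ext5 = [27, 31, 25, 33, 29]
ext9 = [30, 36, 32, 35, 31]
h_stat, p_kw = stats.kruskal(saline, bleomycin, ext5, ext9)

print(p_two_tailed, p_kw)
```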
Results
Profiles of receptor antagonists and 5-HT 2A and 5-HT 2B receptor expression in HFL-1
The 5-HT 2B receptor antagonists EXT5 and EXT9 are structurally well-defined benzylidene aminoguanidine derivatives that fulfill Lipinski's rule of five (Leeson 2012). Receptor studies of EXT5 and EXT9 indicated that the antagonists primarily antagonized 5-HT 2B receptors, but also presented low to moderate affinity for the 5-HT 2A and 5-HT 2C receptors. Furthermore, the two antagonists possessed slightly different selectivity for the 5-HT 2 receptors (Table 1). Our experiments in vitro, utilizing 1 µmol/L of 5-HT, necessitated an antagonist concentration of 10 µmol/L to effectively compete for 5-HT 2B receptor occupancy, since 5-HT is a potent agonist, demonstrating a half-maximal effective concentration (EC50) of approximately 2.4 nmol/L (Eurofins Panlabs Inc., 2016). The expression of 5-HT 2 receptors on HFL-1 was identified with antibody labeling, confirming the expression of both 5-HT 2A and 5-HT 2B receptors, but not 5-HT 2C, by this cell type (Fig. 1).
The effect of the antagonists on individual proteoglycans (decorin, biglycan, perlecan, and versican) revealed that EXT5 (10 µmol/L) significantly reduced decorin synthesis (P = 0.0317, Fig. 3C and D), whereas there were no significant effects of EXT5 on the production of either perlecan, versican, or biglycan. EXT9 had no significant effects on individual proteoglycans (data not shown).
5-HT 2B receptor antagonists attenuate bleomycin-induced pulmonary remodeling
Development of pulmonary fibrosis in response to subcutaneous administration of bleomycin was confirmed by significantly increased tissue density of the alveolar parenchyma in animals exposed to bleomycin compared to animals receiving saline injections (P < 0.001, Fig. 4A and B). The increased tissue density was completely abolished (P < 0.005) by daily administration of the 5-HT 2B receptor antagonists EXT5 (30 mg/kg) or EXT9 (30 mg/kg) (Fig. 4A).
The number of myofibroblasts increased significantly from 9 ± 4 to 53 ± 11 cells/mm² (P < 0.0001, Fig. 5A) following bleomycin administration compared to animals receiving saline. Daily administration of EXT5 or EXT9 significantly (P < 0.001) reduced the number of myofibroblasts (29 ± 8 cells/mm² and 33 ± 8 cells/mm², respectively) compared to animals receiving bleomycin. Myofibroblasts were visible as solitary cells copositive for α-SMA and P4H within the lung parenchyma (Fig. 5B). Bleomycin administration resulted in increased collagen synthesis, detected as an increased number of P4H single-positive cells, compared to animals receiving saline injections (Fig. 5C). Daily administration of EXT5 significantly reduced the bleomycin-induced increase of P4H-positive cells (P = 0.0091), whereas EXT9 had no effect (P = 0.06, Fig. 5C). P4H-positive cells were identified as solitary, single-positive cells in the lung parenchyma.
In addition, bleomycin administrations caused increased total collagen content in the lung parenchyma (P < 0.01), however, treatment with receptor antagonists did not affect the amount of collagen ( Fig. 6A and B).
Bleomycin was also associated with a significantly increased decorin expression within the lung parenchyma compared to saline (P < 0.005, Fig. 7A and B). This increase was abolished by daily administration of the 5-HT 2B receptor antagonist EXT9 (P < 0.05), whereas EXT5 had no significant effect (Fig. 7A).
Effect of 5-HT 2B receptor antagonists on TNF-α and IL-1β in vivo
Both EXT5 and EXT9 reduced the systemic levels of TNF-α (P ≤ 0.005) in comparison to animals receiving bleomycin alone (Fig. 8A). A strong tendency toward a bleomycin-induced increase in TNF-α was found, but we were unable to confirm this statistically (P = 0.077). The serum concentration of IL-1β was significantly reduced by EXT9 (P = 0.037), compared to animals receiving bleomycin (Fig. 8B). Administration of EXT5 and EXT9 had no significant effect on the production of IL-2, IL-4, IL-5, IL-6, IL-10, IL-12p70, KC-GRO, or IFN-γ.
Discussion
The present study highlights a role for 5-HT 2B receptors in the development of pulmonary fibrosis, and our in vitro and in vivo results imply that 5-HT 2B receptor antagonists attenuate myofibroblast differentiation and subsequent ECM synthesis.
Pulmonary fibrosis has been suggested to be the result of aberrant wound healing (Wilson and Wynn 2009), with myofibroblasts considered the key effector cells. Both TGF-β1 and 5-HT have been shown to induce myofibroblast differentiation and increase ECM deposition in different studies (Lijnen and Petrov 2002; Venkatesan et al. 2002; Konigshoff et al. 2010; Dees et al. 2011). In pulmonary fibrosis induced by intratracheal bleomycin administration, a 5-HT 2A/2B receptor antagonist displayed effects on tissue remodeling (Konigshoff et al. 2010). Interestingly, 5-HT has previously been described to promote tissue repair in liver (Nocito et al. 2007; Ebrahimkhani et al. 2011), highlighting its involvement in regenerative processes. In this study, biologically relevant concentrations of 5-HT (Wouters et al. 2007) and TGF-β1 (Blaauboer et al. 2014) (Xu et al. 2007) were used in combination as potent fibrotic stimuli in vitro, inducing synthesis of total proteoglycans and the proteoglycan decorin in human lung fibroblasts. 5-HT alone did not appear to have any significant effects on ECM in our in vitro system; however, other studies have shown that 5-HT may exert its effect as a helper agonist in synergy with other mediators (Li et al. 1997; Larsson-Callerfelt et al. 2013a). In similarity to previous studies (Larsson-Callerfelt et al. 2013b), TGF-β1 did not induce decorin production, which likely is related to the role of decorin as a negative regulator of TGF-β1, due to the binding and neutralizing of significant amounts of this growth factor (Yamaguchi et al. 1990). Decorin is also essential for correct collagen fibrillization and thus also deposition (Orgel et al. 2009), and upregulation of decorin has been shown to reduce lung fibrosis induced by TGF-β1 (Kolb et al. 2001). Interestingly, in our in vivo study, subcutaneous bleomycin administration enhanced tissue density, myofibroblast quantity, decorin expression, and collagen deposition and synthesis (measured indirectly as increased collagen-synthesizing cells by P4H) in the lung parenchyma. Nearly all of these effects were significantly reduced following treatment with the 5-HT 2B receptor antagonists, supporting our in vitro findings in human lung fibroblasts, as shown by a reduction in α-SMA and proteoglycans.
Figure caption fragment: HFL-1 cells stimulated with TGF-β1 (10 ng/mL), 5-HT (1 µmol/L), or their combination, with or without the 5-HT 2B receptor antagonists EXT5 and EXT9; α-SMA, total proteoglycan and decorin production were analyzed with densitometry, normalized to total protein content and compared to control (one sample t-test, two-tailed; *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.005; n = 3-5 experiments).
The decreased fibrosis (shown by decreased tissue density in vivo) is probably associated with the reduced numbers of myofibroblasts and collagen-producing cells. Fewer cells result in decreased decorin synthesis, thus underlining the ability of the 5-HT 2B receptor antagonists to influence the formation of fibrotic tissue. The specific mechanism whereby 5-HT 2B receptor antagonists exert an effect on myofibroblast differentiation, and potentially on ECM production, is not well known. Our results support a close interaction between TGF-β1 and 5-HT, where 5-HT may potentiate the fibrotic properties of TGF-β1. 5-HT 2B receptor antagonization has been suggested to physically interfere with the TGF-β1 pathway, thus preventing activation of p38, which is essential for myofibroblast differentiation (Hutcheson et al. 2012). Interestingly, we found that EXT5 and EXT9 displayed somewhat dissimilar effects, both in vivo and in vitro. Our receptor studies indicated that the compounds present high affinity and functionality to the 5-HT 2B receptor and low to moderate affinity to the 5-HT 2A and 5-HT 2C receptors. Thus, involvement of these receptors cannot be completely ruled out. The variations of the two compounds in functionality profiles, as well as in binding profiles and possibly also in bioavailability, may therefore reveal specific characteristics favorable for regulating diverse antifibrotic events. In this study, we have used a well-characterized human fetal lung fibroblast cell line that features several advantages in reproducibility and also regenerative capacity; nonetheless, primary adult fibroblasts more closely resemble the clinical picture of lung fibrosis. Further studies with primary cells from patients with different stages of disease are warranted to further explore the role of 5-HT and the effects of 5-HT 2B receptor antagonists.
Figure 4. 5-HT 2B receptor antagonists decreased bleomycin-induced pulmonary fibrosis. The lung density, that is, fraction of tissue within a lung section, was significantly increased following bleomycin (BLM) administrations, and this increase was completely abolished by the administration of the 5-HT 2B receptor antagonists, EXT5 (30 mg/kg) and EXT9 (30 mg/kg) (A). Illustrative images (B) of 4-µm thick hematoxylin/eosin-stained tissue sections from a control animal (Sal) and following BLM administration (BLM), as well as after treatment with EXT5 and EXT9 (30 mg/kg). Scale bars represent 50 µm and are applicable to all images. Data are presented as individual animals (mean ± SD, n = 5-7); the Kruskal-Wallis test with LSD post hoc test was used for statistical analysis; *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.005, ****P ≤ 0.001.
In this study, we used repeated systemic administrations of bleomycin to induce pulmonary fibrosis (Rydell-Tormanen et al. 2012; Andersson-Sjoland et al. 2016), in contrast to most studies, which use local (intratracheal) administration. Local administration results in extensive epithelial damage, acute inflammation, and subsequent heterogeneous pulmonary fibrosis. In contrast, systemic administration results in a more homogeneous parenchymal fibrosis that develops in parallel with a mild inflammation (Rydell-Tormanen et al. 2012). The most commonly used approach to study pulmonary fibrosis in vivo relies on homogenization of the lung, thus including pleura, airways, and vessels as well as lung parenchyma, which makes it impossible to study specific compartments or cell types. Immunohistochemistry therefore has several advantages in vivo; most importantly, it allows a detailed study of a specific compartment, such as the lung parenchyma that is most commonly affected in pulmonary fibrosis, while excluding pulmonary vessels and airways. The receptor antagonists EXT5 and EXT9 did not show any significant effect on collagen deposition in the lung parenchyma. However, the evaluation of total collagen quantity considers neither collagen turnover, collagen subtypes, nor the structure and stability of the proteins, making it challenging to assess the compounds' regulatory effect on this matrix component in relation to the disease model.
Figure 6. Collagen content in parenchymal lung tissue was increased in bleomycin-treated mice. Total collagen content related to the total examined area was quantified in tissue slides stained with Masson's trichrome. Bleomycin (BLM) administrations increased collagen content in lung parenchyma compared to saline (Sal) (A). Treatment with EXT5 or EXT9 (30 mg/kg) did not affect the amount of collagen compared to BLM. Illustrative images show Masson's trichrome staining in 4-µm tissue slides, portraying collagens (blue) as well as muscle fibers and cell cytoplasm (red) (B). Data are presented as individual animals (mean ± SD, n = 5-7) and statistical analysis was performed by the Kruskal-Wallis test with LSD post hoc test; *P ≤ 0.05, **P ≤ 0.01.

Immunofluorescence negative control (omitting the primary antibody) showed no labeling. Scale bars represent 20 µm and apply to all images. Data are presented as individual animals (mean ± SD, n = 5-7) and statistical analysis was performed by the Kruskal-Wallis test with LSD post hoc test; *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.005, ****P ≤ 0.001.

Figure 8. 5-HT 2B receptor antagonists decreased serum levels of TNF-α and IL-1β. To investigate the immune response induced by bleomycin (BLM) and the effect of the 5-HT 2B receptor antagonists EXT5 (30 mg/kg) and EXT9 (30 mg/kg), serum was analyzed by a multispot sandwich immunoassay. The level of TNF-α (A) was reduced by the administration of both EXT5 and EXT9, whereas IL-1β (B) was decreased after administration of EXT9. Protein levels are presented as pg/mL, n = 6-7, and the Kruskal-Wallis test with LSD post hoc test was used for statistical analysis; *P ≤ 0.05, ***P ≤ 0.005.

Pulmonary fibrosis may develop due to diverse pathological conditions (Wynn 2011). However, due to the close proximity between epithelial and endothelial cells within the lung parenchyma, any damage to either cell type will affect the other. By systemic administration of bleomycin, we have aimed to mimic these processes. Circulating platelets passing a site of activated or injured endothelium will become activated and release 5-HT, an event that is associated with increased endothelial permeability, myofibroblast count, and ECM deposition (Dees et al. 2011; Andersson-Sjoland et al. 2016). The 5-HT 2B receptors have been associated with tissue remodeling caused by platelet activation and vascular damage (Dees et al. 2011), thus highlighting platelet recruitment and activation as potentially important fibrotic mediators, as hypothesized in Figure 9. In the present study, treatment with 5-HT 2B receptor antagonists also resulted in systemically reduced levels of the proinflammatory cytokines TNF-α and IL-1β, which is supported by previous studies showing anti-inflammatory properties of these compounds (Wenglén et al. 2010). Our disease model, utilizing systemic administrations of bleomycin and per oral administrations of therapeutic agents, proposes a systemic evaluation of cytokine regulation. For this reason, bronchoalveolar lavage fluid was not examined, as distal pulmonary regions rather than central airways were studied. Other studies have shown that systemic administration of bleomycin in mice induces IL-1β in serum as well as in lung (Hoshino et al. 2009), with similar expression patterns over time, making the evaluation of serum cytokines essential in examining pathological events in lung fibrosis. This pilot study has generated a first indication of the compounds' abilities to regulate cytokine quantities in serum.
The study was not optimized for measuring proinflammatory cytokines (for example, by time-dependent sampling), which in part could explain the absence of a significant cytokine increase following bleomycin treatment. The impact of these compounds on systemic inflammation in pulmonary fibrosis warrants further investigation.
Conclusions
In conclusion, our study implies a profibrotic role for 5-HT 2B receptors in myofibroblast differentiation and ECM production both in vivo and in vitro, and a possible effect on systemic inflammation. We propose that these receptors are important in this regard and further suggest that 5-HT 2B receptor antagonists may have the potential to modulate, and therapeutically target, aspects of profibrotic processes in pulmonary fibrosis.
|
2018-04-03T04:27:47.462Z
|
2016-08-01T00:00:00.000
|
{
"year": 2016,
"sha1": "ebed8f921fce47a7ec418cfd1001e343946ef25d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.14814/phy2.12873",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ebed8f921fce47a7ec418cfd1001e343946ef25d",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
55417695
|
pes2o/s2orc
|
v3-fos-license
|
Effect of ultrasound power on physicochemical and rheological properties of yoghurt drink produced with thermosonicated milk
doi: 10.9755/ejfa.2015-09-719
INTRODUCTION
Efforts in the development of novel technologies have recently increased in order to meet consumer expectations for high quality food products. Ultrasound treatment is one of these technologies with potential in the food industry. Ultrasound or sonication refers to an oscillating sound pressure wave with a frequency greater than the upper limit of the human hearing range, i.e. above 16 kHz (Dolatowski et al., 2007; Soria and Villamiel, 2010). During sonication, longitudinal waves are formed in a liquid medium, which creates alternating compression and expansion regions. The pressure change in the liquid forms small gas bubbles. These bubbles continue to expand until a volume is reached where they cannot absorb more energy. At this point, a rapid condensation occurs. This phenomenon is called cavitation (Jambrak, 2011) or acoustic cavitation (Chandrapal et al., 2012). Cavitation may result in pressures over 101 MPa, and large amounts of energy are released. This energy heats up regions around the bubbles and causes chemical reactions. Cavitation may occur in applications of low-frequency (20-100 kHz), high-power ultrasound used in food processing. Extremely high temperatures (up to 5000 K) and pressures (up to 50 MPa) build up in cavitation bubbles (Jambrak, 2011). Reactive radicals can be formed in these bubbles. Besides chemical reactions, severe physical forces such as microjets, shear forces, shock waves and turbulence can occur by acoustic cavitation. For example, the stability of capsules containing pharmaceutical and food ingredients formed by ultrasonication can be substantially enhanced by the chemical crosslinking between protein molecules formed during the sonication process (Chandrapal et al., 2012). According to the effects they produce in a medium, ultrasonic frequencies can be classified as low frequency-high power (20-100 kHz), medium frequency-medium power (100 kHz-1 MHz) and high frequency-low power (1-10 MHz) (Bhaskarachary et al., 2009).
High intensity ultrasound treatment may be applied on its own, in combination with moderate heat (thermosonication, TS), or combined with heat and pressure (manothermosonication, MTS). In the literature, there are a variety of research and review articles regarding the applications of ultrasound treatments in the food industry (Mason et al., 1996; Knorr et al., 2004; Dolatowski et al., 2007; Suzuki et al., 2007; Riener et al., 2009; Carcelen et al., 2012; Jambrak et al., 2012; Pingret et al., 2013). Ultrasonication has been used in the dairy industry in the field of research and development for various purposes such as food preservation, regulation of enzyme activity and improvement of microstructure by ingredient interaction. The physical effects enhanced by ultrasonication include the homogenization of fat globules, improvement of whey ultrafiltration, viscosity development, lactose crystallization, yoghurt production with improved rheological properties, reduction of the total fermentation time of yoghurt and cutting of cheese blocks (Chandrapal et al., 2012).
Yoghurt and yoghurt-like fermented milk products have been widely consumed throughout the world. Differences in the production techniques of yoghurt drink are critical for the production of yoghurt drinks with different flavors (Kocak and Avsar, 2010). Two major methods are used industrially to produce yoghurt drinks: dilution of yoghurt with water (Atamer, 1986) and the incorporation of starter cultures into standardized milk (Colakoglu and Gursoy, 2011; Akkaya et al., 2015). Plain yoghurt drinks are popular in Central Asia, Anatolia, the Balkans and the Middle East, while yoghurt drinks with fruits and sweeteners are mostly preferred in Europe and the US (Colakoglu and Gursoy, 2011).
Optimal structural characteristics such as viscosity, gel structure, water holding capacity, particle size, particle density and particle distribution are the most important factors determining consumer preferences for yoghurt drinks. Serum separation and optimum consistency have been used industrially as quality parameters for commercial production. The effect of ultrasonication on the serum separation and the physicochemical and structural properties of yoghurt drinks has been previously reported by Ertugay et al. (2012) and Erkaya et al. (2015); however, these studies focused on the application of ultrasound and/or thermosonication on final products rather than on raw milk. In the present study, the effect of ultrasound power on the physicochemical and rheological properties of yoghurt drink produced with thermosonicated milk was determined.
Materials
Raw cow milk samples were obtained from local farms in Burdur, Turkey. Analytical grade chemicals were used in chemical analyses.
Thermosonication
Thermosonication treatments were performed using an ultrasonic processor (Bandelin Sonopuls UW3200, Germany) working at a constant frequency of 24 kHz. An ultrasonic probe (TT13) with a diameter of 13 mm was attached to the processor. Standardized milk samples of 800 mL were transferred into beakers (1L) in a circulating water bath and equilibrated for 5 min at 70°C. The ultrasonic probe was immersed in the geometrical center of a beaker. Ultrasonic processor was turned on for 15 min with a power output of 100, 125 or 150 W.
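For orientation, the nominal acoustic energy delivered per unit volume under these settings can be estimated from the power output, treatment time, and sample volume; the back-of-the-envelope sketch below is not from the article and assumes that the full nominal electrical power is dissipated in the 800 mL milk sample.

```python
# Estimated energy input for the thermosonication settings described above
# (assumes all nominal power is delivered to the 800 mL sample; losses ignored).
volume_ml = 800
time_s = 15 * 60  # 15 min

for power_w in (100, 125, 150):
    energy_j = power_w * time_s
    print(f"{power_w} W for 15 min -> {energy_j / 1000:.0f} kJ total, "
          f"{energy_j / volume_ml:.0f} J/mL")
# e.g. 150 W corresponds to roughly 169 J/mL under these assumptions.
```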
Yoghurt drink production
Yoghurt drinks were manufactured in the laboratory. The approximate chemical composition of raw cow milk included 11.56% total solids, 3.02% fat, 3.01% protein, 4.74% lactose, 0.15% acidity (as % lactic acid), and the pH was 6.47. The raw milk sample was standardized to a 7% total solid content with distilled water. Conventionally heat-treated samples (90°C for 10 min) were used as control, and both heat-treated and thermosonicated milk samples were cooled to 45±1°C and inoculated with 2.5% of commercial starter cultures including Streptococcus thermophilus and Lactobacillus delbrueckii subsp. bulgaricus (YO-MIX TM 496, Danisco, France). Inoculated samples were incubated at 43.5°C until a pH of 4.5±0.1 was reached. After incubation, fermented milk samples were rapidly cooled to 20±1°C in a cold water bath by mixing thoroughly with a mechanical overhead stirrer (Mtops MS3040, Misung Scientific Co. Ltd., Korea) at 200 rpm for 2 min. All samples were stored at 4±1°C for up to 10 days. Samples were analyzed on the 1st, 5th and 10th days of storage. Yoghurt drink samples were produced in triplicate.
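The standardization of the raw milk to 7% total solids is a simple solids mass balance; the sketch below uses the composition figures quoted above and is illustrative only, since the actual batch sizes are not stated.

```python
# Dilution needed to standardize total solids (TS): solids are conserved, so
# m_milk * TS_initial = (m_milk + m_water) * TS_target.
ts_initial = 11.56   # % TS of raw milk (from the composition above)
ts_target = 7.0      # % TS after standardization
m_milk_kg = 1.0      # basis: 1 kg of raw milk

m_water_kg = m_milk_kg * (ts_initial / ts_target - 1)
print(f"Add about {m_water_kg:.2f} kg distilled water per kg raw milk")  # ~0.65 kg
```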
Chemical and physicochemical analyses
Total solids, fat, lactose and protein contents of raw milk samples were analysed by an infrared milk analyser (Bentley B150, Bentley Instruments Inc., USA). The pH values of milk and yoghurt drink samples were determined by a pH meter (Jenco 6173, Jenco, San Diego, CA, USA). Titratable acidity values of milk and yoghurt drink samples were determined according to TS 1018 (Anonymous, 1981) and Oysun (2001), respectively. Yoghurt drink samples were analyzed for total solid contents (ISO/IDF, 2010), fat (Gerber method) (Oysun, 2001) and total nitrogen [Dumas method by the Dumatherm analyzer (Gerhardt GmbH & Co. KG, Königswinter, Germany)]. Protein content was calculated by using the factor 6.38.
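The protein figure reported from the Dumas analysis is the total nitrogen multiplied by the conversion factor 6.38; a one-line illustration with a hypothetical nitrogen value follows.

```python
# Nitrogen-to-protein conversion used for the Dumas method (factor 6.38 for milk)
total_nitrogen_pct = 0.47                 # hypothetical total N (% w/w)
protein_pct = total_nitrogen_pct * 6.38
print(f"Protein content: {protein_pct:.2f} %")  # about 3.00 %
```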
Serum separation
Yoghurt drink samples were transferred into 100 mL volumetric cylinders and stored at 4°C for 10 days. Serum separation was determined as the volume of separated serum at the top on days 1, 5 and 10 (Koksoy and Kilic, 2003).
Rheological analyses
Rheological measurements were carried out using a Brookfield viscometer (Model DV-II Pro, Brookfield Engineering Laboratories, USA) with an RV2 spindle. Yoghurt drink samples (0.8 L) in 1 L beakers were used for measurements at a constant temperature of 5±1°C. Analyses were performed at 10-s intervals and repeated 3 times. Measurements were taken at shear rates between 30 and 120 rpm. Apparent viscosity and torque (%) values for each shear rate were recorded during the rheological measurements. Flow behavior indices (n) and consistency coefficients (K, Pa·s^n) were calculated using the power law model, δ = K(γ)^n, where δ is the shear stress (Pa) and γ is the shear rate (s⁻¹) (Steffe, 1996).
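The flow behavior index n and consistency coefficient K are obtained by fitting the power law to the recorded shear stress-shear rate pairs, typically as a linear regression on the log-transformed model; the sketch below uses hypothetical data, not the study's measurements.

```python
import numpy as np

# Hypothetical shear rate (1/s) and shear stress (Pa) pairs from a viscometer run
shear_rate = np.array([10.0, 20.0, 40.0, 60.0, 80.0, 100.0])
shear_stress = np.array([1.9, 2.7, 3.8, 4.6, 5.3, 5.9])

# Power law: stress = K * rate**n  ->  log(stress) = log(K) + n * log(rate)
n, log_K = np.polyfit(np.log(shear_rate), np.log(shear_stress), 1)
K = np.exp(log_K)

# Coefficient of determination of the log-log fit
pred = log_K + n * np.log(shear_rate)
residual = np.log(shear_stress) - pred
r2 = 1 - np.sum(residual**2) / np.sum((np.log(shear_stress) - np.log(shear_stress).mean())**2)

print(f"n = {n:.3f}, K = {K:.3f} Pa.s^n, R^2 = {r2:.3f}")  # n < 1 indicates shear thinning
```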
Color measurement
Color values of yoghurt drink samples were determined in CIE (Commission International de L'Eclairage) scale of L*, a* and b* by a tristimulus colorimeter (Model CR-400, Konica Minolta, Japan). Color measurements were taken by using reflectance specular included with D65 illuminant, 10° observer angle and 8-mm aperture. Each yoghurt drink sample (20 mL) was placed in an optical glass cell provided by the manufacturer of the colorimeter (diameter of the cell 34 mm) and four measurements were taken at 3-second intervals at approximately 5°C. Measurements were averaged for each sample. For lightness (L*), 0 indicates black and 100 indicates white. Positive values of a* and b* indicate red and yellow, respectively while negative values of a* and b* indicate green and blue, respectively.
Statistical analyses
Analysis of variance (ANOVA) of the SAS software program (The SAS System for Windows 9.0, Chicago, USA) was used to determine statistically significant differences. Separation of means for significant differences was conducted using the Duncan's multiple-range test at α = 0.05 level. Data were presented as means of three replicates (± standard deviation).
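SciPy offers a quick way to reproduce the ANOVA stage of this analysis; the sketch below uses hypothetical triplicate values and omits Duncan's multiple-range test (used in the study via SAS), since it is not available in SciPy.

```python
from scipy import stats

# Hypothetical triplicate measurements per treatment (e.g. apparent viscosity, mPa.s)
control = [150, 155, 148]
ts_100w = [180, 176, 184]
ts_125w = [205, 210, 198]
ts_150w = [240, 246, 238]

f_stat, p_value = stats.f_oneway(control, ts_100w, ts_125w, ts_150w)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")
# If P < 0.05, means would then be separated, e.g. with Duncan's test at alpha = 0.05.
```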
Proximate composition of yoghurt drinks
Total solids, protein and fat contents of yoghurt drink samples are presented in Table 1. The composition of control and experimental yoghurt drinks was very similar. Thermosonication treatment, storage time and their interaction had an insignificant influence on the total solids, protein and fat content of yoghurt drinks (p>0.05).
Acidity affects major storage stability parameters such as serum separation and rheological properties of yoghurt drinks (Ozdemir and Kilic, 2004). Titratable acidity values of yoghurt drink samples during storage are shown in Fig. 1. There were insignificant differences (p>0.05) in titratable acidity values between control samples and yoghurt drinks produced with thermosonicated milk. Similar results were reported in yoghurt drinks by Uzunoglu (2012) and Erkaya et al. (2015). Storage period influenced the acidity values significantly (p<0.05) while its interaction with thermosonication treatment was insignificant (p>0.05). In general, acidity values of all samples gradually increased after the first day of storage until the 10th day. Accordingly, when all the titratable acidity values of yoghurt drink samples were considered (n = 12 per group), an average titratable acidity value of 0.43% at the beginning of storage increased to 0.48% and 0.52% after 5 and 10 days of storage, respectively. Similar results were noted by Tamucay-Ozunlu and Kocak (2010a), who reported that acidity values of yoghurt drinks increased from 0.46% to 0.52% during the storage period.
Thermosonication treatment and its interaction with storage period had an insignificant influence on the pH values of yoghurt drink samples (p>0.05) (Table 2). The change pattern of pH values was very similar for all yoghurt drink samples during 10 days of storage. All pH values of yoghurt drink samples decreased during the storage period (p<0.05). pH values of yoghurt drinks slightly decreased at the 5th day of storage, then slightly increased at the 10th day, but this change was statistically insignificant (p>0.05) except for the sample produced with (Tonguc, 2006). Our results were in good agreement with these studies.
Serum separation
Serum separation in acidic fermented dairy products is the formation of a serum phase at the top, which is due to the loss of water from a continuous protein matrix. Serum separation is one of the main quality parameters in yoghurt and yoghurt drink. The results of this research indicated that the thermosonication process significantly decreased the serum separation of yoghurt drink during storage (Table 3). The individual effects of the thermosonication process and storage, and their interaction, on serum separation were found statistically significant (p<0.01) (Table 3). Increasing ultrasound power reduced serum separation, while serum separation increased with the storage period. Initially, serum separation values of yoghurt drink samples were statistically similar. Throughout the storage, serum separation was not observed in yoghurt drinks produced with milk treated at 150 W ultrasound power for 15 min. Riener et al. (2009) reported increased water holding capacity in yoghurts made with thermosonicated milk. In another study, increasing ultrasound power applied directly to yoghurt drink samples reduced serum separation (Ertugay et al., 2012). The authors reported that the serum separation value was about 34% in control samples, while it was reduced to about 10% in samples treated with 44.54 W ultrasound power for 4 minutes. Our results were in accordance with these studies.
Ultrasonication itself generates a considerable amount of heat in a liquid matrix (Demirdoven & Baysal, 2009; Riener et al., 2009), and the control of this heat is significant in determining the individual effect of ultrasonication on the physical and functional properties of liquid foods like milk. In a study by Nguyen and Anema (2010), the effect of ultrasound on the properties of skim milk used in the formation of acid gels was distinguished from the heat-related effects of ultrasonication. The authors sonicated skim milk at 50 W for up to 30 min at different temperatures (20, 40, 60 and 70°C) to produce acid gels, and the maximum firmness (final G′) values were obtained for skim milk ultrasonicated at 60 or 70°C for 15 min, which were very similar to those for traditionally heated milk. In that study, the ultrasound effect was found to be less influential on the modification of skim milk properties than the heat generated during ultrasonication. The authors also reported that ultrasonication slightly reduced the particle size of the casein micelles and produced acid gels with slightly increased final G′ values, and these effects were additive to those of heating. Heat-generated effects were reported to include the denaturation of the whey proteins, the modification of the particle size of the casein micelles and influences on the interaction of the denatured whey proteins with the casein micelles. The present study was mainly focused on the thermosonication effect on yoghurt drink properties, and the increases in viscosity and decreases in serum separation values of yoghurt drinks in comparison to conventional heat treatment arose mostly from the combined effect of the ultrasound power (100, 125 and 150 W for 15 min) and the heat treatment (70°C for 15 min) applied.
Viscosity
Rheological properties of yoghurt drink produced by the traditional method and from raw milk thermosonicated at different ultrasound powers are presented in Table 4. High R² values (ranging from 0.974 to 0.996) indicated that the power law model was appropriate for determining the rheological properties of yoghurt drink samples. Depending on the ultrasound power used, the ultrasonication of raw milk at 70°C increased the apparent viscosities of yoghurt drink samples. Apparent viscosity values increased with an increase in the ultrasound power. The viscosity of yoghurt and yoghurt-like fermented dairy products is one of the most important quality parameters, and the physical structure is directly related to the viscosity of the products. The physical structure of these products is associated with the physical interactions of proteins with each other and the entrapment of serum and fat globules in the protein network (Lucey, 2004; Riener et al., 2009). In yoghurt and yoghurt drink products, the denaturation of entire whey proteins is critical for a better gel structure formation. Riener et al. (2009) reported that the whey protein denaturation ratio in milk samples thermosonicated at 400 W for 10 minutes was 50% less than the ratio in milk samples treated with a traditional thermal process at 90°C for 10 minutes. The authors stated that the molecular interactions that provide gel structure formation and stabilization in yoghurts produced with thermosonicated milk samples were different from those in yoghurts produced with traditionally thermally processed milk samples. Thermosonication is more likely to promote the disassociation of casein micelles into subunits that can form strong networks by re-aggregating strongly with each other and/or with partially denatured whey proteins during fermentation (Riener et al., 2009). The authors confirmed this hypothesis by comparing scanning electron microscopy (SEM) images of yoghurt samples. The microstructure of yoghurt samples produced with thermosonicated milk was more stringent and complex than the microstructure of control yoghurts, which obviously increased the viscosity.
In this present study, the highest apparent viscosity value (244mPa.s at 100rpm) was determined in yoghurt drink produced with milk sonicated at 150W ultrasound power for 15 minutes at 70°C. Apparent viscosity of yoghurt samples increased with the severity of sonication power applied to milk and apparent viscosity values decreased over storage time (Table 4). In yoghurt drink samples, the apparent viscosity decreased with an increase in shear rate (Figs. 2 and 3). Flow behavior indices of yoghurt drink samples ranged from 0.480 to 0.919, meaning that they all exhibited a non-Newtonian type of flow behavior. Similar results were also reported by Bayraktaroğlu and Obuz (2008) and Ertugay et al. (2012) in yoghurt drinks with different fat contents.
Color
Color values of yoghurt drink samples determined in CIE (Commission International de L'Eclairage) scale of L*, a* and b* at the first day of the storage are presented in Table 1. High L* value is a significant quality parameter of yoghurt drink. Insignificant differences in L*, a* and b* values were found between yoghurt drinks produced with milk thermosonicated at different ultrasound powers in comparison to control yoghurt drink (p>0.05). Storage time and its interaction with thermosonication treatment were also insignificant statistically (p>0.05). Erkaya et al. (2015) produced yoghurt drinks by diluting yoghurt which had been manufactured with raw milk pasteurized at 90°C for 10min, and the authors found that L* values of yoghurt drinks produced with milk sonicated in an ultrasonic bath at a frequency of 35kHz for 1, 3 and 5 min followed by heat treatment at 60, 70 and 80°C changed between 81.22±0.34 and 83.62±0.28 while L* value of control sample was 83.54±0.18. Unlike our results, they reported that b* values of samples increased with an increase in power and time of ultrasonication. Dissimilarities may arise from the differences in raw materials used and thermosonication conditions. Color values of yoghurt drinks in our study were similar to those reported by Guler and Park (2009) for yoghurt samples.
CONCLUSION
Overall results indicated that the processing of milk with thermosonication treatment significantly increased viscosity while decreasing serum separation values of yoghurt drinks in comparison to conventional heat treatment (p<0.01). During the storage period, serum separation was not detected in yoghurt drinks produced with milk thermosonicated at 150 W ultrasound power for 15 min. Under the conditions studied, the thermosonication process at different ultrasound powers applied in yoghurt drink production did not alter the proximate composition and color values of the final products. Thus, thermosonication can be successfully used as an alternative process to conventional heat treatment with positive effects on the serum separation and viscosity properties of yoghurt drinks. Although some evidence concerning the pasteurization effect of ultrasonication has been previously reported for milk (Bermúdez-Aguirre et al. 2009a,b), future studies are needed to determine its pasteurization efficiency under the conditions used and its effect on the sensory properties of final products.
|
2018-12-06T23:23:02.840Z
|
2016-01-05T00:00:00.000
|
{
"year": 2016,
"sha1": "aeb69f2e8839bc38d2e966fe2633cf7fb4f3e135",
"oa_license": "CCBY",
"oa_url": "http://www.ejfa.me/index.php/journal/article/download/1029/750",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "aeb69f2e8839bc38d2e966fe2633cf7fb4f3e135",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
237340172
|
pes2o/s2orc
|
v3-fos-license
|
Oil-Based Polymer Coatings on CAN Fertilizer in Oilseed Rape (Brassica napus L.) Nutrition
Fertilizer coating can increase the efficiency of N fertilizers and reduce their negative impact on the environment. This may be achieved by the utilization of biodegradable natural coating materials instead of polyurethane-based polymers. The aim of this study was to detect the effect of calcium ammonium nitrate (CAN) fertilizer coated with modified conventional polyurethane enhanced with vegetable oils on the yield and quality of Brassica napus L. compared to CAN fertilizer with a vegetable oil-based polymer and to assess the risks of nitrogen loss. Three types of treatments were tested for both coated fertilizers: divided application (CAN, coated CAN), a single application of coated CAN, and a single application of CAN with coated CAN (1:2). A single application of coated CAN with both types of coating in the growth stage of the 9th true leaf significantly increased the yield, the thousand seed weight, and oil production compared to the uncoated CAN. The potential of using coated CAN may be seen in a slow nitrogen release ensuring the nitrogen demand for rapeseed plants throughout vegetation and eliminating the risk of its loss. The increased potential of NH4+ volatilization and NO3− leaching were determined using the uncoated CAN fertilizer compared to the coated variants. Oil-based polymer coatings on CAN fertilizer can be considered as an adequate replacement for partially modified conventional polyurethane.
Introduction
With the world's exponential population growth and diminishing of arable lands, the agriculture industry has faced a great challenge of crop and food resources for the past decades [1,2]. Predictions are that the earth's population could approach 9.5 billion by 2050, which may result in an almost double increase in food demand and crop production. In one specific example, cereal production is expected to increase from 940 million tons to 3 billion tons a year [3,4]. Satisfying increasing grain yield demands has been achieved by enhancing the use of mineral fertilizers to cropland soil. However, the excessive application of fertilizers presents one of the main sources of polluting soil (heavy metals), water (nitrates leaching into groundwater), and air environments (emission of greenhouse gases), which could be a threat to human health [5,6].
Nitrogen occupies a unique position among essential plant nutrients. Nitrogen and water availability are considered the two major limiting factors in plant growth and the development of metabolic processes-nutrient distribution, photosynthesis, biomass, and ultimately yield building [7][8][9]. The deficiency of nitrogen strongly decreases chlorophyll content, enzymatic activity, photosynthesis, respiration rate, and the yield of crops [10]. Nitrogen can be directly absorbed by plant roots in inorganic forms (mineral nitrogen) as ammonium (NH4+) and nitrate (NO3−). These forms are the key components of nitrogen fertilizers such as ammonium nitrate (AN) and urea, the two most widespread nitrogen fertilizers [11].
Results and Discussion
The effect of the coated fertilizers was evaluated by comparing the data within groups of treatments sharing the same fertilizer application system (divided, single, and blends). Each method of fertilization was assigned a control treatment (treatments D and S): D served as the control variant for the group with divided application, and S served as the control for the groups with single application and blends.
Yield and Oiliness of Rapeseed and N Content in Plant Biomass
The appropriate type of fertilizer and method of fertilization are important for high-yield production of rapeseed. Several studies describe an increase in yield and qualitative parameters of crops after the application of coated fertilizers [38][39][40][41]. Our study showed that the use of coated CAN fertilizers has no negative effect on the yield and qualitative parameters of winter rapeseed. Statistical evaluation of the data shown in Figure 1 revealed no significant differences between the treatments in the groups with divided application (D, D-opu, D-o) and blends (S, Bl-opu, Bl-o). A significant positive effect was recorded in the group of treatments with a single application of coated CAN fertilizers (opu-CAN, oil-based polyurethane-coated CAN; o-CAN, oil-based polymer-coated CAN) on seed yields and oil contents. Seed yields of this group showed a trend of opu-CAN > o-CAN > CAN, with opu-CAN up to 18% higher in comparison to the uncoated CAN. Similar results were recorded in the study by Tang et al. [42], in which a single basal application of coated nitrogen fertilizers contributed to an increase in the yield and quality of rice in comparison to divided application. A different trend was recorded in the case of the oil content, which was up to 5.5% higher after a single application of the oil-coated CAN fertilizer compared to the use of the uncoated CAN fertilizer. The presumption was that the total nitrogen applied in the single application of coated fertilizers was released over a longer period of time and thus was available during the phase of seed formation, as confirmed by Tian et al. [38]. In that study, rapeseed yield increased by an average of 17.3% after the application of coated fertilizers compared to the control. That study also showed that lower doses of total N applied in coated fertilizers contributed to a yield increase of 14.2%, which confirmed their environmental potential in terms of nitrogen release. The study by Lu et al. [43] showed positive effects of CRF application on rapeseed yield, manifested in an increase in rapeseed pods of 27 to 32% in comparison to non-coated urea. In comparison to the treatments with coated CAN fertilizers, a single application of the uncoated CAN (treatment S) showed a decline in oil production and thousand seed weight (TSW), as shown in Table 1. Both coated CAN fertilizers showed similar positive effects on the yield and qualitative parameters of rapeseed; it can be concluded that o-CAN may be a proper alternative to opu-CAN.
Figure 1. Rates of yield and oiliness of rapeseed. Groups of treatments: D, divided application; S, single application; Bl, blend. The columns represent the mean (n = 4); error bars represent the standard deviation (SD). The same letters at the bottom of the columns indicate no statistically significant differences between the treatments (Fisher's LSD test, p ≤ 0.05). Each group of treatments (D, S, Bl) was evaluated separately.

Groups of treatments: D, divided application; S, single application; Bl, blend; opu, oil-based polyurethane polymer; o, oil-based polymer; TSW, thousand seed weight. The same letters next to the numbers indicate no statistically significant differences between the treatments (Fisher's LSD test, p ≤ 0.05). Each group of treatments (divided, single, blend) was evaluated separately. The values represent the mean (n = 4) ± standard deviation (SD).
Figure 2. Nitrogen content in aboveground plant biomass. Groups of treatments: D, divided application; S, single application; Bl, blend. The columns represent the mean (n = 4); error bars represent the standard deviation (SD). The same letters indicate no statistically significant differences between the treatments (Fisher's LSD test, p ≤ 0.05). Each group of treatments (D, S, Bl) was evaluated separately.
The data from Figure 2 indicate a connection between the yield rates and the nitrogen concentration in aboveground plant biomass. In general, plants can only consume a part of the nutrients (in our case nitrogen) from conventional fertilizers, and the rest may be subject to losses to the environment [44]. This trend is mainly visible in the treatment with the application of conventional uncoated CAN fertilizer in a single dose (S), resulting in a significantly lower concentration of nitrogen in plant biomass in the growth stage of flower bud emergence (t2) compared to the growth stage of stem elongation (t1). This decrease indicates that the overdose of quickly released nitrogen from the uncoated CAN fertilizer led to a loss of N available for direct plant consumption and ultimately caused the lowest yield and oil content. The declining trend in the supply of the available form of N released from conventional uncoated CAN over the period, and the increased supply of mineral N released from coated CAN, is also evident from the assessment of N content in aboveground biomass (Table 2). The nitrogen content in the plant shows a gradual release of the available forms of this nutrient from the coated CAN, which is particularly evident in the group of singly applied fertilizers (S). While the nitrogen content detected in the aboveground mass of rapeseed fertilized with uncoated CAN (S) was almost 4 and 2 times higher in term t1 compared to the treatments with coated CAN (S-opu, S-o), in term t2 the nitrogen content of the treatments fertilized with coated fertilizers had increased. These values show that the oil-coated CAN is able to release nitrogen more rapidly than the oil-based polyurethane-coated CAN and thus may supply the plant's demand for this nutrient. The nitrogen contents in plants treated with coated fertilizers applied in blends with conventional CAN (Bl-opu, Bl-o) confirm this trend.
The relationship between the optimal nitrogen supply and its impact on the yield and oil content of rapeseed is described in many studies [45,46]. A similar trend was recorded in the treatments with coated CAN fertilizers applied in blends with the uncoated CAN fertilizer (Bl-opu, Bl-o). Nitrogen content in plant biomass in the growth stage of stem elongation decreased about 1.3% and 0.9% compared to the uncoated CAN fertilizer applied in a single dose. The N content in plant rapeseed showed the most even N pumping during vegetation in the variant with divided application and a single application of coated CAN fertilizers.
Table 2 notes: Groups of treatments: D, divided application; S, single application; Bl, blend; opu, oil-based polyurethane polymer; o, oil-based polymer. The same letters next to the numbers indicate no statistically significant differences between the treatments (Fisher's LSD test, p ≤ 0.05). Each group of treatments (divided, single, blend) was evaluated separately. The values represent the mean (n = 4) ± standard deviation (SD).
Mineral Nitrogen Content in the Soil
The release of nitrogen from coated CAN fertilizers significantly affected the dynamic change of the soil mineral N (N min) content during the growth of rapeseed. Contents of N min and its ionic forms (NO3−, NH4+) were determined in the soil in three experimental phases (t1-t3). Although enough available nitrogen is essential for direct plant consumption, an excessive content may inevitably increase its loss from the soil [47]. Average contents of N min in soil (without differentiating into layers), shown in Table 3, serve as an overview of the development of nitrogen release in the treatments during the rapeseed vegetation.

Table 3 notes: Groups of treatments: D, divided application; S, single application; Bl, blend; opu, oil-based polyurethane polymer; o, oil-based polymer. The same letters next to the numbers indicate no statistically significant differences between the treatments (Fisher's LSD test, p ≤ 0.05). Each group of treatments (divided, single, blend) was evaluated separately. The values represent the mean (n = 4) ± standard deviation (SD).
One of the important aspects of coated fertilizers is the longevity of nutrient release at levels sufficient for plant uptake. The use of coated CAN fertilizers in each form of application (D, S, and Bl) showed a positive effect on the N min release pattern, as can be seen from Figure 3. The effect was especially visible in the period between the first (t1) and the second (t2) terms of soil sample collection, during which the change in N min was significantly milder compared to conventional uncoated CAN.
The positive effect of coated CAN fertilizers on N min content was also visible in the nitrogen distribution between soil layers during the experiment ( Figure 3). The application of conventional uncoated CAN fertilizer (D and S treatment) showed high N min concentrations mainly in the top and middle layers of the soil right after fertilization. The treatments with coated CAN fertilizers showed that N min content was, in general, focused mainly on the top layer of the soil during t1 and t2. N min content was evenly distributed between each layer of the soil in the harvest time (t3). This indicates that both coated CAN fertilizers (opu-CAN and o-CAN) proved a high ability of gradual nitrogen release leading to more efficient nitrogen use by the plant and a reduction in the environmental risk. A gradual N min release by coated fertilizers was also described in the study by Zheng et al. [49], who found that the application of coated fertilizers resulted in enhanced N min concentration in soil, especially during later crop stages.
Considering the placement of the fertilizers (on the soil surface without incorporation into the soil), the highest potential for NH4+ volatilization is most likely closest to the soil surface [50]. Ammonium nitrate (the CAN used in our experiment), depending on N dose and irrigation, belongs to the conventional nitrogen fertilizers with a high potential of NH4+ volatilization [51]. This assumption was confirmed by the data obtained from the top layer of the soil samples (Figure 4). The data showed the greatest potential for NH4+ volatilization in the treatments with conventional uncoated CAN (D and S treatments), expressed in significantly high NH4+ concentrations in t1 and t2. Analogous to N min, the volatilization potential of the uncoated CAN was visible between t1 and t2, when the NH4+ concentration in soil decreased by up to 39.8%. Higher NH4+ concentrations after the application of blend fertilizers (Bl-opu, Bl-o) were attributable to the uncoated CAN fraction (1/3 of the total N dose), which, similarly to the S variant (a single application of uncoated CAN), was released rapidly. NH4+ contents in Bl-opu and Bl-o were over a half lower in t1 than in the S treatment; therefore, no major risks of NH4+ losses were found. In addition to volatilization, a rapid NH4+ release also presents the risk of an increased concentration of nitrates, as NH4+ is an initial component of nitrification in soil, and thus an increased risk of NO3− leaching [52]. The positive effect of coated fertilizers was expressed by significantly lower NH4+ concentrations during t1-t3 in comparison to conventional uncoated CAN. The data were indirectly consistent with the findings of Xiao et al. [48], who mentioned that the application of coated fertilizers resulted in lower NH4+ rates in soil samples in comparison to conventional uncoated nitrogen fertilizer. A gradual NH4+ release was also expressed by the increase in NH4+ content toward t3: the treatments with coated CAN showed significantly higher NH4+ contents in t3 compared to the treatments with conventional uncoated CAN. On the contrary, NH4+ contents in Bl-opu and Bl-o showed no significant difference from the S treatment. This led to the assumption that all nitrogen contained in the coated fertilizers applied in blends was released during the rapeseed vegetation, predetermining the blend application as the most suitable alternative.
Contents of NO3− were monitored as the main potential source of N loss in the soil samples due to their high leaching ability. One of the first studies, by Liegel and Walsh from 1976 [53], proved that the application of controlled-release N fertilizers was the most effective technique in sandy irrigated soils with a high risk of nitrate leaching. Preventing the leaching of nitrates presents one of the greatest environmental challenges in terms of nitrogen fertilizer use. The estimation of the potential for N losses due to NO3− leaching from the experimental treatments was provided by isolating the data from the bottom and middle layers of soil. The data obtained from the middle layer (ML) of the soil (Figure 5) served for the evaluation of potential NO3− migration to the lower layers of the soil, which might consequently lead to its leaching into the groundwater. The data obtained from the bottom layer (BL) of the soil (Figure 6) served to evaluate the potential of nitrate leaching to the groundwater during the rapeseed vegetation and directly after its harvest.

As predicted, the significantly highest potential for NO3− leaching was due to the rapid nitrogen release from conventional uncoated CAN fertilizer, recorded in the single and divided CAN applications. The potential for NO3− leaching after uncoated CAN application could be confirmed from the NO3− concentrations in t1 and t2 shown in Figures 5 and 6. The NO3− content of the ML and BL was over three times higher (>3.3-fold) in the treatment fertilized with a single application of uncoated CAN in t1 compared to the treatments with coated CAN fertilizers. The data showed that the NO3− content decreased by up to 73.9% in the ML and up to 75.5% in the BL in the S treatment between t1 and t2. Considering the amount and the duration (up to 14 days), it is most likely that nitrates from the uncoated CAN fertilizer were lost due to nitrate leaching. These findings corresponded with the data on leached nitrate rates reported by Zhang et al. [54].
Identical to N min and NH4+, the positive effect of coated CAN fertilizers was recorded in the form of a gradual NO3− release over the course of the whole experiment. The gradual release of nitrates was most visible between t2 and t3 in the coated fertilizers: the NO3− contents increased by up to 64.7% in the ML and up to 119.9% in the BL. While the NO3− amount decreased in the ML and BL in the treatments fertilized with uncoated CAN, the coated CAN fertilizers were able to supply the plants with nitrogen even in the later stages of development, in contrast to the low levels of NO3− content in the treatments with conventional CAN fertilizers (due to rapid nitrogen release and subsequent N loss). This increase correlated with the data on seed yield and qualitative parameters (Figure 1) and can serve as a potential supply of available nitrogen for the next crops. The data correlated with the findings of Xiao et al. [48]. A similar N min release (especially of NO3−) was proved using the oil-based polymer-coated CAN, which can therefore be a proper alternative to the oil-based polyurethane-coated CAN; the latter is less suitable for future use due to polyurethane's lower biodegradability. The positive effect of coated fertilizers on nitrate leaching was recorded in several studies [55][56][57][58].
Materials and Methods
The pot experiment was performed under controlled conditions in the vegetation hall of Mendel University in Brno, Brno, Czech Republic (49°12'36.94" N, 16°36'49.95" E).
Plant Material and Growth Conditions
Rapeseed (Brassica napus subsp. napus) cv. DK Exception (Bayer s.r.o., Prague, Czech Republic) was used in this study. Mitscherlich pots (STOMA GmbH, Siegburg, Germany) were filled with 6 kg of air-dried soil sieved to <2 cm and placed in the vegetation hall. The properties of the soil used for the pot experiment are shown in Table 4. Ten rapeseed seeds were sown at a depth of 2 cm in each pot. Three weeks after sowing, the number of rapeseed plants was adjusted to three plants per pot.
Experimental Design
In the experiment, coated CAN fertilizers were compared with a conventional uncoated CAN. The same total dose of nitrogen was applied in all treatments using different N sources: calcium ammonium nitrate (CAN; up to 13% N-NH4+ and 13% N-NO3−; Lovochemie a.s., Lovosice, Czech Republic), oil-based polyurethane-coated CAN (opu-CAN) and oil-based polymer-coated CAN (o-CAN). The coated fertilizers were prepared by spreading the coating on the conventional fertilizer CAN using the LDP-3 fluidized bed granulating machine (Changzhou Jiafa Granulating Drying Equipment Co., Ltd., Changzhou, China). The coating consisted of an oil-based polyurethane polymer (opu-CAN; coating up to 7.6 wt.%; up to 13% N-NH4+ and 13% N-NO3−; VUCHT a.s., Bratislava, Slovakia) or an oil-based polymer (o-CAN; coating up to 6.1 wt.%: triglycerides of fatty acids up to 75 wt.%, of which unsaturated up to 45 wt.%, and polylactic acid up to 10 wt.%; up to 13% N-NH4+ and 13% N-NO3−; VUCHT a.s., Bratislava, Slovakia). The composition of the polyurethane-based coating (opu-CAN) differed from conventional polyurethanes, which are prepared by the reaction of diisocyanates with polymeric diols. The polymeric diols were replaced with a vegetable oil having hydroxy groups in its structure. The prepolymer obtained in this way was finally applied in the crosslinking. These modifications led to a substantial increase in the biodegradable fraction of the coating. In the oil-based coating (o-CAN), the prepolymer was completely replaced with a more biodegradable component, further increasing the biodegradable fraction of the coating material.
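Because the coating itself contains no nitrogen, it slightly dilutes the N concentration of the finished granule; the estimate below assumes the quoted 13% + 13% N refers to the uncoated CAN core, which is an assumption rather than a statement from the study.

```python
# Effective N content of the coated granules, assuming a 26% N core
# (13% NH4-N + 13% NO3-N) and an N-free coating (assumption).
n_core = 0.26

for name, coating_fraction in (("opu-CAN", 0.076), ("o-CAN", 0.061)):
    n_effective = n_core * (1 - coating_fraction)
    print(f"{name}: about {n_effective * 100:.1f} % N in the coated product")
# opu-CAN ~24.0 % N, o-CAN ~24.4 % N under these assumptions.
```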
The individual treatments and fertilizer addition are detailed in Table 5. The fertilizers were applied to the soil surface. Each treatment was replicated 8 times in a complete randomized block design in the vegetation hall (Figure 7). The fertilizer treatments were divided into 3 groups according to the term of application and the type of fertilizer chosen. The first group was the divided application of fertilizers (treatments designated D). The total nitrogen dose was divided into two parts: the first was applied as conventional uncoated CAN in the 1st term (1st Fertilization), and the second was applied as uncoated CAN (treatment D) or coated CAN (treatments D-opu and D-o) in the 2nd term (2nd Fertilization). The second and third groups consisted of treatments with a single application of the total nitrogen dose in one term (1st Fertilization), where either fertilizers of one type (treatments designated S) or fertilizer mixtures (treatments designated Bl) were applied. The fertilizer mixtures (Bl) were created by mixing conventional CAN and coated CAN in a 1:2 ratio (converted to N rate).
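Converting the 1:2 CAN-to-coated-CAN ratio expressed on an N basis into fertilizer masses is a small calculation; the sketch below assumes a hypothetical total dose of 1.0 g N per pot (the actual dose is listed in Table 5, not reproduced here) and the approximate N contents discussed above.

```python
# Masses of uncoated and coated CAN for a blend supplying N in a 1:2 ratio.
total_n_g = 1.0     # hypothetical total N dose per pot (g); see Table 5 for the real dose
n_can = 0.26        # N fraction of uncoated CAN
n_coated = 0.24     # approximate N fraction of coated CAN (see estimate above)

n_from_can = total_n_g * 1 / 3      # 1 part of the 1:2 ratio
n_from_coated = total_n_g * 2 / 3   # 2 parts of the 1:2 ratio

mass_can_g = n_from_can / n_can
mass_coated_g = n_from_coated / n_coated
print(f"uncoated CAN: {mass_can_g:.2f} g, coated CAN: {mass_coated_g:.2f} g")
# roughly 1.28 g uncoated and 2.78 g coated CAN per pot for this hypothetical dose
```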
The pot experiment was carried out under semi-natural conditions (under a rain shelter) in the vegetation hall. Figure 8 shows the average daily temperature and the average daily relative humidity during the experiment. A controlled watering regime, identical for all treatments (pots), was used throughout the experiment: plants were watered to 70% of the maximum water holding capacity throughout the growing season, and the pots were hand-watered with demineralized water on the soil surface.
Rapeseed plants were harvested manually by cutting above the soil surface from each pot. The rapeseed was threshed using a laboratory thresher (HALDRUP LT-20, Haldrup GmbH, Ilshofen, Germany).
The rape seeds were purified from coarse impurities by repeated sifting. Rapeseed yield was measured in three plants within each pot, and the value was adjusted to 9% moisture. Seed yield was determined by weighing (laboratory scale PCB Kern, KERN & Sohn GmbH, Balingen, Germany) and expressed as grams per pot (g/pot). The seeds were then counted and hand-ground in a mortar for further analysis of the oil content. Treatment group abbreviations: D, divided application; S, single application; Bl, blend; opu, oil-based polyurethane polymer; o, oil-based polymer.
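For clarity, the moisture adjustment can be written out explicitly. The following is a minimal Python sketch of the standard dry-matter-preserving conversion to the 9% reference moisture; the function name and the example numbers are illustrative and are not taken from the study.

def yield_at_reference_moisture(fresh_weight_g, measured_moisture_pct, reference_moisture_pct=9.0):
    """Express a weighed seed yield at a reference moisture; dry matter is preserved."""
    return fresh_weight_g * (100.0 - measured_moisture_pct) / (100.0 - reference_moisture_pct)

# Example: 38.4 g of seed weighed at 7.2% moisture, expressed at the 9% standard (illustrative values)
print(round(yield_at_reference_moisture(38.4, 7.2), 2))  # -> 39.16 g/pot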
Plants and Soil Sampling
The evaluation of soil mineral nitrogen content (NO3−, NH4+) and plant nutritional status was performed on the soil samples and plant biomass collected in the specific experimental phases shown in Table 6. The soil samples were collected with a probe with an aligned tip. After collection, the soil profile was divided into three zones to observe the movement of mineral nitrogen in the soil, and the samples were subsequently frozen for further analysis (Figure 9).
The plant biomass was dried at 50 °C and homogenized to determine the nitrogen content in the dry matter.
Analytical Methods
The Nmin determination was performed according to the methodology of Zbíral et al. [62], in which nitrate and ammonium nitrogen are extracted from the soil with a neutral salt solution (1% K2SO4). The NH4+ content was determined spectrophotometrically (λ = 660 nm), and the NO3− content was determined with an ion-selective electrode (ISE) [63].
The nitrogen content of the aboveground plant biomass was determined according to the methodology of Zbíral et al. [64], using the Kjeldahl method on a Kjeltec 2300 device (Foss, Hillerød, Denmark).
The thousand seed weight (TSW) was determined using an MK laboratory seed counter (MEZOS spol. s r.o., Hradec Králové, the Czech Republic). To limit measurement error, two batches of 500 seeds each were counted and weighed.
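A worked example of the TSW calculation may help. In the sketch below, the tolerance used to compare the two 500-seed batches is an assumed plausibility check, not part of the cited methodology, and the weights are illustrative.

def thousand_seed_weight(w500_a_g, w500_b_g, max_rel_diff=0.05):
    """TSW from two independent 500-seed weighings; the tolerance is an assumed check."""
    if abs(w500_a_g - w500_b_g) / max(w500_a_g, w500_b_g) > max_rel_diff:
        raise ValueError("500-seed replicates differ too much; recount the seeds")
    return w500_a_g + w500_b_g  # grams per 1000 seeds

print(thousand_seed_weight(2.41, 2.38))  # -> 4.79 g for these illustrative weighings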
The seed oil content was determined according to the methodology of the Central Institute for Supervising and Testing in Agriculture [65]. The oil content was determined gravimetrically after extraction of the samples with diethyl ether using the Soxhlet method, based on NMR measurement of the rapeseeds in a continuous-flow Minispec mq series TD-NMR analyser (Bruker Corporation, Ettlingen, Germany).
Statistical Analysis
The effect of the treatment on the evaluated parameters was statistically analyzed in the STATISTICA 12 program (TIBCO Software, San Jose, CA, USA) [66]. The effect of the treatment on seed yield, oil content, oil production, thousand seed weight, nitrogen concentration and content in aboveground plant biomass, and the content of mineral nitrogen (ammonium and nitrate) in soil was analyzed separately for each group of treatments (divided, single and blend application of fertilizers). Normality and homogeneity of variances were verified by the Shapiro-Wilk and Levene tests, respectively, at p ≤ 0.05. The influence of the monitored factors was analyzed by analysis of variance (level of significance p ≤ 0.05). The effect of the treatment on these parameters was analyzed using two-way analyses of variance with the treatment as a fixed effect and the pot as a random effect, to account for the grouping of individuals in the same pot. Differences between means were evaluated with Fisher's LSD test.
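The model described above can be reproduced with standard statistical software. The following Python sketch (scipy/statsmodels) shows the normality and homogeneity checks and a model with treatment as a fixed effect and pot as a random effect; the file name and column names are assumptions, and in practice the Shapiro-Wilk test is usually applied to model residuals rather than the raw response.

import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("rapeseed_pot_experiment.csv")  # hypothetical data file

# Shapiro-Wilk normality and Levene homogeneity of variances (p <= 0.05 criterion)
print(stats.shapiro(df["seed_yield"]))
print(stats.levene(*[g["seed_yield"].values for _, g in df.groupby("treatment")]))

# Treatment as a fixed effect, pot as the random (grouping) effect
result = smf.mixedlm("seed_yield ~ C(treatment)", data=df, groups=df["pot"]).fit()
print(result.summary())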
Conclusions
The use of coated CAN fertilizer shows the potential to gradually release plant-available nitrogen during the growing season in winter rape nutrition and thus to meet the needs of the plants continuously. Compared with the effect of conventional CAN, the use of coated CAN fertilizers has been shown to increase the efficiency of nitrogen fertilization and reduce its losses. A suitable approach appears to be the application of a mixture of conventional CAN and coated CAN in a ratio of 1:2 during spring fertilization, ensuring a sufficient amount of rapidly released N during the regeneration of rapeseed and its slower release during the subsequent developmental stages. The CAN fertilizer coated with a biodegradable oil-based polymer demonstrated the ability to release the optimum amount of nitrogen for canola nutrition.
Particle dependence of elliptic flow in Au+Au collisions at $\sqrt{s_{NN}}=$ 200 GeV
The elliptic flow parameter ($v_2$) for $K_S^0$ and $\Lambda+\bar{\Lambda}$ has been measured at mid-rapidity in Au + Au collisions at $\sqrt{s_{_{NN}}}=200$ GeV by the STAR collaboration. The $v_2$ values for both $K_S^{0}$ and $\Lambda+\bar{\Lambda}$ saturate at moderate $p_T$, deviating from the hydrodynamic behavior observed in the lower $p_T$ region. The saturated $v_2$ values and the $p_T$ scales where the deviation begins are particle dependent. The particle-type dependence of $v_2$ shows features expected from the hadronization of a partonic ellipsoid by coalescence of co-moving quarks. These results will be discussed in relation to the nuclear modification factor ($R_{CP}$) which has also been measured for $K_S^0$ and $\Lambda+\bar{\Lambda}$ by the STAR collaboration.
Introduction
The elliptic component of the event-wise azimuthal anisotropy of particle production (i.e. elliptic flow or v 2 ) is thought to probe the early stages of relativistic heavy ion collisions [2]. Measurements of v 2 for identified particles [3,4] and charged hadrons [5,6] at the Relativistic Heavy Ion Collider (RHIC) indicate a conversion of spatial anisotropy to momentum anisotropy near the hydrodynamical limit [2,7,8]. For p T greater than 2 GeV/c, however, the charged hadron v 2 deviates from hydrodynamical calculations and saturates at a large value approximately independent of p T up to 6 GeV/c [6]. The measurements of v 2 for K 0 S and Λ + Λ also indicate a saturation at moderate p T . Models using large parton energy loss [9,10] and transport opacity [11] have been discussed in relation to the saturation and centrality dependence of charged hadron v 2 at large p T . A saturated v 2 could also arise if the extent of the p T region where soft processes dominate the spectrum is particle dependent [12]. In this case the hyperon v 2 may continue to rise at intermediate p T (perhaps following hydrodynamic model calculations) while v 2 of the less massive meson decreases. It has also been suggested that if a partonic state exists prior to hadronization, the process of particle formation at moderately high p T , by string fragmentation, parton fragmentation [13] or quark coalescence [14,15,16,17], may lead to a dependence of v 2 and R AA on particle type. As such, it is possible that these measurements will provide information on the existence and nature of an early partonic state. (For a complete collaboration list see [1].)
Analysis and Results
This analysis uses 1.6 × 10^6 minimum-bias trigger events and 1.5 × 10^6 central trigger events detected in the STAR detector system [18]. The particles K 0 S , Λ and Λ̄ were identified from the charged daughter tracks produced in the decays K 0 S → π + + π − , Λ → p + π − and Λ̄ → p̄ + π + . A detailed description of the analysis, including track finding, decay vertex topology cuts, and the estimation of detection efficiency, is given in Refs. [4,19].
We use the particle yield as a function of (φ ij − Ψ R j ), where φ ij is the azimuthal emission angle of particle i in event j and Ψ R j is the reaction plane angle for event j; to remove autocorrelations, the decay daughter tracks associated with particle i are excluded from the reaction plane calculation. Within statistical errors, the Λ v 2 is the same as the Λ̄ v 2 , so they are summed together.
Table 1. The systematic errors for minimum bias v 2 from non-flow effects (n-f) and background contamination (bkg). The values represent the absolute errors. The p T resolution, δp T /p T , is also listed.
Possible sources of systematic error in the calculation of v 2 are correlations unrelated to the reaction plane (non-flow effects), uncertainties in the extraction of yields from the invariant mass distributions, the particle momentum resolution (δp T /p T ), and biases introduced by the cuts used in the analysis. Table 1 lists the dominant systematic errors for three transverse momenta. The non-flow systematic error is dominant. The non-flow effects for charged particle v 2 are discussed in Refs. [5,6], but the particle dependence of these effects has not been measured. We assume a similar magnitude of non-flow contribution to the Λ + Λ and K 0 S v 2 . A 4-particle cumulant analysis of Λ + Λ and K 0 S v 2 will be less sensitive to non-flow effects but, to be conclusive, will require a larger data sample than is currently available.
[Figure: v 2 of K 0 S and Λ + Λ as a function of p T for the centrality intervals 30-70% (a), 5-30% (b), 0-5% (c), and 0-80% (d) of the geometrical cross section.]
The p T dependence of the v 2 for all the centrality bins has a similar trend. There is a saturation and particle dependence at moderate p T for each of the centrality intervals and, as such, the saturation for the minimum-bias v 2 (0-80%) cannot be due to the superposition of drastically different p T dependencies in various centrality bins. This measurement establishes the saturation and particle dependence of v 2 in the moderate to high p T region.
[Figure: v 2 as a function of m T − m 0 , approximately the kinetic energy of the hadron.]
In the low momentum region the K 0 S and Λ + Λ v 2 appear to fall on a single straight line, and the hydrodynamic calculations in Fig. 3 seem to capture this trend. In the bottom panels of Fig. 2 we have scaled the v 2 values by the eccentricity of the overlap region for the various centralities; hydrodynamic models predict that v 2 should scale with this initial eccentricity.
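The event-plane estimate of v 2 outlined above can be illustrated with a short numerical sketch. The reaction-plane resolution correction and the exclusion of decay daughters from the event-plane calculation are deliberately omitted, and the toy event generator is purely illustrative.

import numpy as np

def event_plane_angle(phis):
    """Second-harmonic event-plane angle Psi_2 from azimuthal angles (radians)."""
    qx = np.sum(np.cos(2.0 * phis))
    qy = np.sum(np.sin(2.0 * phis))
    return 0.5 * np.arctan2(qy, qx)

def v2_observed(events):
    """events: list of (all_phis, candidate_phis); returns <cos 2(phi - Psi_2)>."""
    terms = [np.cos(2.0 * (np.asarray(cand) - event_plane_angle(np.asarray(allp))))
             for allp, cand in events]
    return float(np.mean(np.concatenate(terms)))

# Toy events with an input v2 of 0.1
rng = np.random.default_rng(0)
events = []
for _ in range(200):
    psi_true = rng.uniform(0.0, np.pi)
    phis = rng.uniform(0.0, 2.0 * np.pi, 500)
    keep = rng.uniform(0.0, 1.0, 500) < 0.5 * (1.0 + 2.0 * 0.1 * np.cos(2.0 * (phis - psi_true)))
    events.append((phis[keep], phis[keep][:50]))
print(v2_observed(events))  # rough estimate of the input anisotropy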
In Fig. 4 (top) we show v 2 (p T ) for K 0 S , Λ + Λ, and charged hadrons [20] along with hydrodynamic model calculations of v 2 for identified particles [7]. Below p T ∼ 1.2 GeV/c v 2 is consistent with the calculations and in agreement with the previous results for K 0 S and Λ + Λ at √ s NN = 130 GeV [4]. Contrary, however, to hydrodynamical calculations, where at a given p T heavier particles will have smaller v 2 values, the measured v 2 of the heavier hyperon saturates at a value significantly larger than the v 2 of the lighter K 0 S meson. The p T scale where the measured v 2 deviates from the hydrodynamical prediction is particle dependent with the hyperon v 2 following the prediction up to p T ∼ 2.0 GeV/c while the K 0 S v 2 deviates much sooner. The ratio (R CP ) of the yields in central and peripheral collisions scaled by the number of binary nucleon-nucleon collisions (N binary ), may also be sensitive to the effects of energy loss and hadronization via parton coalescence [16,9]. In Fig. 4 (bottom) we show R CP for K 0 S , Λ+Λ and charged hadrons using the centrality intervals 0-5% (central) and 60-80% (peripheral) [21,22]. The charged hadron spectrum at p T > 2 GeV/c for the 60-80% centrality bin approximately follows binary collision scaling without medium modification [22]. As such, when this bin is used, R CP approximates R AA . The bands in Fig. 4 represent the expected values of R CP for N binary and N part scaling including systematic variations within the calculation [22].
For p T < 5 GeV/c, the K 0 S and Λ + Λ yields are suppressed (relative to N binary scaling) by different magnitudes. In addition, the p T scale associated with the onset suppression has a dependence on particle-type that is similar to the dependence in v 2 for the onset of saturation. At p T ∼ 5.0 GeV/c, R CP values for K 0 S and Λ + Λ are both approaching the value of the charged hadron R CP .
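The construction of R CP itself is simple enough to spell out. The sketch below forms the binary-collision-scaled ratio of central to peripheral yields per p T bin; the N binary values and raw counts are placeholders rather than the Glauber-model numbers used by STAR.

import numpy as np

def r_cp(yield_central, n_events_central, nbin_central,
         yield_peripheral, n_events_peripheral, nbin_peripheral):
    """Per-pT-bin ratio of Nbinary-scaled per-event yields, central over peripheral."""
    central = np.asarray(yield_central, float) / (n_events_central * nbin_central)
    peripheral = np.asarray(yield_peripheral, float) / (n_events_peripheral * nbin_peripheral)
    return central / peripheral

# Illustrative raw counts for three pT bins
print(r_cp([5.1e4, 8.3e3, 1.2e3], 1.0e6, 1000.0,
           [2.4e3, 4.0e2, 6.5e1], 1.0e6, 20.0))  # values below 1 indicate suppression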
Discussion
Although R CP depends only on the yield in the central and peripheral bins, and the v 2 in Fig. 4 is from a minimum-bias centrality interval, the two parameters may be intimately related. The differential elliptic flow (v 2 (p T )) measures the ratio of the p T spectrum of particles emitted in the direction of the reaction plane (in-plane) to that of particles emitted perpendicular to the reaction plane (out-of-plane). In hydrodynamic models, we expect the pressure gradient to be larger in the in-plane direction than in the out-of-plane direction. In this case, v 2 will be the ratio of a p T spectrum with more hydrodynamic flow to one with less flow. As such, taking the ratio (R CP ) of a p T spectrum that exhibits large flow (central) to one exhibiting less flow (peripheral) should lead to a very similar p T and particle-type dependence. A detailed study of v 2 and R CP for identified particles should reveal the extent (in p T and centrality) to which hydrodynamical models are valid.
A surface emission scenario, where partons traversing a dense medium experience large energy losses, has been discussed in relation to the large, p T independent v 2 measured for charged hadrons [10]. This mechanism should also lead to a suppression of particle production in central Au+Au collisions. This scenario, however, is inconsistent with STAR measurements of v 2 and R CP for K 0 S and Λ + Λ. The smaller suppression manifested in the Λ + Λ R CP contradicts the larger azimuthal anisotropies manifested in Λ + Λ v 2 values. In addition, calculations based on a surface emission model [10] cannot produce v 2 values as large as those measured.
The absence of a net suppression of Λ + Λ for p T from 1.8-3.5 GeV/c in central Au+Au collisions could also indicate the presence of dynamics beyond the framework of parton energy loss followed by fragmentation. The stronger dependence on centrality (and thus parton density) for baryon production indicated by the larger R CP would naturally be expected from multi-parton mechanisms such as gluon junctions [23], quark coalescence [14], or recombination [16]. Within the framework of these models, the measured v 2 and R CP features may reflect the anisotropy and hadronization properties of the bulk quark matter. Fig. 5 shows v 2 of K 0 S and Λ + Λ as a function of p T where the v 2 and p T values have been scaled by the number of constituent quarks (n). Above p T /n ∼ 0.8 GeV/c, the v 2 /n vs p T /n is the same, within errors, for both species. In a scenario where hadrons at intermediate p T (∼ 1 − 5 GeV/c) are predominantly formed from bulk partonic matter by quark coalescence, e.g. Ref. [14], v 2 /n should reveal the v 2 developed by partons prior to the hadronic phase. The verification of this scenario, to the exclusion of other possible explanations, would be strong evidence for the formation of a quark-gluon plasma at RHIC. The scenario discussed in Ref. [12], where soft processes dominate the hyperon v 2 up to a higher p T than mesons, could also lead to a particle dependence for R CP that is qualitatively consistent with these measurements. Quantitative calculations of v 2 and predictions for R CP from this scenario, however, are still needed. Up-coming measurements of R CP for identified particles in d + Au collisions at RHIC will also make it possible to study the effect of initial state interactions on R CP .
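The constituent-quark scaling applied in Fig. 5 amounts to a simple rescaling of both axes. The sketch below shows the operation with placeholder points (n = 2 for the K 0 S meson, n = 3 for the Λ baryon); the arrays stand in for the measured values.

import numpy as np

def nq_scale(pt, v2, n_quarks):
    """Scale both pT and v2 by the number of constituent quarks."""
    return np.asarray(pt) / n_quarks, np.asarray(v2) / n_quarks

pt_k, v2_k = nq_scale([1.0, 2.0, 3.0, 4.0], [0.08, 0.13, 0.14, 0.14], n_quarks=2)
pt_l, v2_l = nq_scale([1.5, 2.5, 3.5, 4.5], [0.09, 0.17, 0.20, 0.21], n_quarks=3)
# If coalescence holds, the two scaled curves should overlap above pT/n ~ 0.8 GeV/c
print(pt_k, v2_k)
print(pt_l, v2_l)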
Summary
We have reported the measurement of v 2 for p T up to ∼ 6.0 GeV/c for K 0 S and Λ + Λ from Au + Au collisions at √ s NN = 200 GeV. For p T < 1.2 GeV/c, hydrodynamic model calculations agree well with the p T and mass dependence of the measured v 2 . At this low momentum region K 0 S and Λ + Λ v 2 lie on a single straight line when plotted versus m T − m 0 . In the moderate p T region, however, the particle type and p T dependence of v 2 suggests hydrodynamics no longer describes the collision dynamics. The value of v 2 for K 0 S saturates earlier and at a lower value than the Λ + Λ v 2 . Measurements of R CP show that the suppression of particle production in central collisions depends on particle-type in a similar way. The measurement of the particle-type and p T dependence of v 2 and R CP at moderate p T may provide a unique means to establish the existence (and study the properties) of a quark-gluon plasma that may be formed in collisions at RHIC.
Joint EANM/SNMMI/ESTRO practice recommendations for the use of 2-[18F]FDG PET/CT external beam radiation treatment planning in lung cancer V1.0
Purpose 2-[18F]FDG PET/CT is of utmost importance for radiation treatment (RT) planning and response monitoring in lung cancer patients, in both non-small and small cell lung cancer (NSCLC and SCLC). This topic has been addressed in guidelines composed by experts within the field of radiation oncology. However, up to present, there is no procedural guideline on this subject, with involvement of the nuclear medicine societies. Methods A literature review was performed, followed by a discussion between a multidisciplinary team of experts in the different fields involved in the RT planning of lung cancer, in order to guide clinical management. The project was led by experts of the two nuclear medicine societies (EANM and SNMMI) and radiation oncology (ESTRO). Results and conclusion This guideline results from a joint and dynamic collaboration between the relevant disciplines for this topic. It provides a worldwide, state of the art, and multidisciplinary guide to 2-[18F]FDG PET/CT RT planning in NSCLC and SCLC. These practical recommendations describe applicable updates for existing clinical practices, highlight potential flaws, and provide solutions to overcome these as well. Finally, the recent developments considered for future application are also reviewed.
Lung cancer
Lung cancer is a major cause of cancer death in both men and women, with an incidence of 11.6% and mortality of 18.4% worldwide (World Health Organization cancer report 2020 [1]). Despite declining in incidence, it is estimated to remain the leading cause of cancer deaths in the USA in 2040 [2]. The two main types of lung cancer are non-small cell lung cancer (NSCLC) and small cell lung cancer (SCLC).
NSCLC represents more than 80% of lung cancer cases and includes two subtypes: (a) non-squamous (including 40% adenocarcinoma, 5-10% large-cell carcinoma, and other subtypes) and (b) 30% squamous cell (epidermoid) carcinoma [3]. Since 2017, NSCLC is staged according to the eighth edition of the IASLC (International Association for the Study of Lung Cancer) in tumour, nodes, and metastases (TNM) based on the American Joint Committee on Cancer (AJCC) staging system [4]. Patients are grouped in stages I, II, III, and IV. Approximately 55% of cases have distant metastases at diagnosis, while roughly 30% present with locally advanced disease, including mediastinal lymph node involvement [5].
SCLC represents fewer than 20% of lung cancers. Despite the fact that TNM staging [6] also has been proposed for SCLC, it is commonly classified in two clinical stages based on the possibilities of including the disease in radiotherapy (RT) fields: (a) limited stage that typically includes TNM stage I to III and (b) extensive stage that includes TNM stage IV (presence of metastases), but also cT3-4 tumour (multiple lung nodules) and/or tumour/nodal volume that is too extended to be encompassed in a tolerable radiation plan. Around 66% of SCLC cases present with metastatic disease [7,8].
Radiotherapy
External beam radiation therapy (EBRT) focuses radiation, mainly high energetic photons, but sometimes electrons, protons, and heavy ions, from outside the body onto the tumour. Newer EBRT techniques enable lowering the radiation dose to nearby healthy tissues. These include the following: (a) Intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) are an advanced form of three-dimensional conformal radiation therapy. Using inverse treatment planning, beams from different angles are shaped according to the target form and their intensity is adjusted throughout the treatment to optimize dose to the target while limiting dose to surrounding normal tissues.
(b) Stereotactic body radiation therapy (SBRT), also known as stereotactic ablative RT (SABR), is most often used to treat the primary tumour only, particularly in early-stage lung cancer, and is increasingly used to treat oligometastatic disease. Instead of giving a small dose of radiation (typically 2 Gy) each day for several weeks (usually 4-5), SBRT uses focused beams of high-dose radiation (typically 6-18 Gy) in fewer (usually 2-8) treatment sessions. Such plans achieve a high biological effectiveness, i.e., a high level of tumour cell kill while sparing the surrounding tissues.
Common clinical indications for RT in lung cancer
The following recommendations for RT in lung cancer are based on the National Comprehensive Cancer Network (NCCN), the European Society of Medical Oncology (ESMO), and the Advisory Committee for Radiation Oncology Practice of the European Society for Radiotherapy and Oncology (ESTRO-ACROP) guidelines [5, 8-12]. In NSCLC, RT is recommended in the following situations:
Early-stage disease - SBRT as primary treatment in stage I and selected node-negative stage IIA disease when patients are medically inoperable or when patients refuse surgery. In case of positive pathological margins, postoperative RT is also advocated.
Locally advanced NSCLC - depending on the age and comorbidity of the patient, concurrent or sequential chemoradiotherapy (CRT), or RT alone, is the standard in inoperable (node-positive) stage II disease and in unresectable stage III disease. Yet, even in potentially resectable cases, decisions on the optimal local treatment strategy - either surgery or RT - will be based on expected benefits and side effects. RT will be delivered with three-dimensional conformal radiotherapy or, more commonly, with IMRT.
Advanced/metastatic NSCLC - local palliation or prevention of symptoms (such as pain, bleeding, or obstruction of vessels or bronchi), or definitive local therapy to unifocal or oligo-metastases (the latter most frequently being addressed with SBRT).
Furthermore, RT also has a role in the two stages of SCLC, as part of either definitive or palliative therapy, as follows:
Limited stage SCLC - concurrent CRT, ideally delivered with twice daily RT sessions, is the treatment of choice for stage IIB-III SCLC, although a sequential approach may be preferred in case of an initial volume that is not amenable to RT or in a patient unfit to tolerate such an intensive treatment scheme. In rare cases, when resection revealed unexpected positive lymph nodes in patients with clinical stage I-IIA SCLC, postoperative loco-regional RT will be considered.
Extensive stage/metastatic SCLC - consolidation thoracic RT after partial or complete response to systemic therapy, especially if there is a low burden of extrathoracic metastatic disease.
Both in limited stages responding to chemotherapy and in extensive disease stages without progression after chemotherapy, prophylactic cranial irradiation has been shown to be of benefit and should be offered to the patient.
Selective versus elective nodal irradiation
According to ESTRO-ACROP guidelines [8,12], elective nodal irradiation is not recommended. Selective nodal irradiation (e.g. lymph nodes with proven metastatic involvement or highly suspicious on imaging) instead of elective nodal irradiation (e.g. all lymph node territories included in the primary tumour drainage) gives the opportunity of increasing the dose to the involved lymph nodes, while reducing toxicity [13][14][15].
PET/CT imaging
A PET/CT system is an integrated imaging device, capable of acquiring both PET and CT scans. Reconstructed PET and CT images are spatially co-registered with the caveat that the CT is acquired very rapidly, while the PET is usually acquired in multiple steps over several minutes. PET/ CT fusion is the simultaneous display of co-registered CT and PET images. The CT component of a PET/CT scan can be acquired with variable parameters (e.g., mAs, kVp, pitch, with or without contrast) to suit the clinical need or according to local protocols and regulations, for instance using a low-dose, low-resolution CT scan only for attenuation correction and anatomical localization, or a higher-dose, higherresolution CT if greater anatomic detail is required.
2-[ 18 F]FDG PET/CT is a standard imaging modality for staging, selection for curative RT, defining and delineating the target volume in the RT planning phase, and detection of residual or recurrent disease. 2-[ 18 F]FDG PET/CT can also be used for treatment response assessment, and it is the strongest independent predictor of overall survival after RT [21-26]. Since 2-[ 18 F]FDG PET/CT has higher staging accuracy than CT alone, it may reduce healthcare costs by avoiding unnecessary RT or surgery, enabling better selection of patients amenable to treatment with curative intent and reducing toxicity [21,27].
The limitations of 2-[ 18 F]FDG PET/CT include (a) suboptimal brain staging due to high 2-[ 18 F]FDG uptake in normal cerebral tissue. Magnetic resonance imaging (MRI) continues to be the primary modality to detect brain metastases; (b) uptake in reactive or granulomatous nodes and in infectious processes, which may usually be recognized by experienced readers based on the distribution of abnormality (pattern recognition) or CT images; (c) subcentimeter nodules, mucinous adenocarcinomas with a relatively small amount of cells, and low-grade malignancies are insufficiently detected; (d) chest wall invasion assessment is suboptimal; and (e) respiratory blurring causing misregistration between the PET and CT components, particularly at the lung bases, which may be addressed by respiratory motion correction techniques (see "Respiratory motion correction").
According to the ESMO guidelines about NSCLC [9], correct diagnostic work-up is necessary to detect regional lymph node metastases prior to multidisciplinary management.
When abnormal mediastinal and/or hilar lymph nodes are found on CT and/or PET, endoscopic (endobronchial) ultrasound [E(B)US] is recommended over surgical staging. EUS-guided fine needle aspiration complements 2-[ 18 F] FDG PET by improving the overall specificity and positive predictive value to 100%, with an overall accuracy of 97% [38]. Based on data from 5 meta-analyses, Peeters et al. [39] calculated that the addition of E(B)US can decrease the false negative rate of 2-[ 18 F]FDG PET/CT (from 13 to 3% in enlarged nodes, and from 6 to 1% in normal-sized nodes). However, for 2-[ 18 F]FDG PET/CT-positive but E(B) US-negative nodes, the false negative rate of E(B)US was as high as 14-16%. Therefore, these authors recommended to include such PET+/EBUS− nodes in the RT planning volume. Moreover, since a negative EBUS cannot rule out metastatic disease reliably, they suggested proceeding to surgical staging/mediastinoscopy if PET findings are highly suspicious for mediastinal invasion.
The joint guideline by the European Society of Gastrointestinal Endoscopy (ESGE), together with the European Respiratory Society (ERS) and the European Society of Thoracic Surgeons (ESTS) [40], for the diagnosis and staging of lung cancer, recommends that EBUS be performed in peripheral NSCLC without clear mediastinal involvement on CT or PET/CT if at least one of several criteria applies, the first being enlarged or 2-[ 18 F]FDG-positive mediastinal lymph nodes. The ESMO guidelines on SCLC [10] also recommend excluding mediastinal lymph node involvement if a surgical approach is an option for patients with limited-stage disease.
Target volumes in RT
In RT planning, it is important to define the target lesion and to delineate the following volumes [41]: (a) Gross tumour volume (GTV) includes the lesion that can be imaged.
(b) Clinical target volume (CTV) contains the GTV plus a margin for subclinical disease spread, which cannot be imaged.
(c) Internal target volume (ITV) is the margin needed around the CTV to compensate for possible motion or deformation of the CTV, considering respiratory motion.
(d) Planning target volume (PTV) ensures that the RT dose is delivered to the CTV, compensating for systematic and random uncertainties during treatment planning or delivery. (Fig. 1)
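The nested volumes above are, computationally, successive margin expansions of a binary mask. The following is a minimal Python sketch of an isotropic expansion on a voxel grid; clinical margins are protocol-specific and often anisotropic, and the masks, margins, and spacings below are illustrative only.

import numpy as np
from scipy.ndimage import binary_dilation

def expand_mask(mask, margin_mm, spacing_mm):
    """Dilate a boolean mask by margin_mm using a spherical structuring element."""
    radii = [int(np.ceil(margin_mm / s)) for s in spacing_mm]
    grids = np.ogrid[tuple(slice(-r, r + 1) for r in radii)]
    dist2 = sum((g * s) ** 2 for g, s in zip(grids, spacing_mm))
    return binary_dilation(mask, structure=dist2 <= margin_mm ** 2)

gtv = np.zeros((60, 60, 60), dtype=bool)
gtv[28:33, 28:33, 28:33] = True                                      # toy GTV
ctv = expand_mask(gtv, margin_mm=6.0, spacing_mm=(3.0, 1.0, 1.0))    # subclinical spread
ptv = expand_mask(ctv, margin_mm=5.0, spacing_mm=(3.0, 1.0, 1.0))    # setup/delivery uncertainty
print(gtv.sum(), ctv.sum(), ptv.sum())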
2-[ 18 F]FDG PET/CT for target volume definition and delineation
2-[ 18 F]FDG PET/CT plays an important role in RT planning of lung cancer [42-44]. It improves tumour definition and has the advantage of reducing inter- and intra-observer variation when used to guide target volume delineation [45,46]. Target volume definition entails the identification of all recognizable tumour locations, allowing the delineation of the GTV of the primary tumour and the GTV of the lymph nodes separately, if anatomically distinguishable. During image interpretation, the challenge is to define those tumour and/or nodal volumes that should be included in the GTV, thus aiding in their subsequent delineation and discrimination from organs at risk (OAR). Usually, metabolic information from PET is used to identify tissues that contain tumour, and the anatomic information from CT is used to delineate (the margins of) the primary tumour and lymph nodes, provided that there is sufficient contrast to define these margins [44].
RT planning using 2-[ 18 F]FDG PET/CT is particularly helpful in identifying tumour boundaries in case of extrathoracic or mediastinal tumour extension, when the tumour and normal tissue have similar visual appearance on CT, and when there is atelectasis caused by compression of the airways by tumour (enabling the discrimination between collapsed lung and tumour) [44,47]. In lesions with high 2-[ 18 F]FDG uptake intensity, the spill over effect can artificially increase apparent GTV beyond that confirmed by anatomical boundaries. This situation can be solved using different contrast, level, and window settings on both PET and CT imaging [48,49]. The pre-set lung window setting (approximate window -W = 1600 and level -L = -600) should be used to delineate tumour surrounded by lung tissue, while the mediastinum pre-set window setting (approximate W = 400 and L = 20) should be used to delineate lymph nodes and primary tumour invading the mediastinum or chest wall [12]. Although there are no validated quantitative approaches for PET contouring, the procedure can be improved with visual calibration of the W/L settings, for example, standardizing signal intensity visually according to the normal background, or using linear grayscale for PET images alone. For PET/CT image fusion, it is recommended to use a linear scale with one or at most two colours [44].
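The window/level pre-sets quoted above correspond to a simple linear display mapping. The sketch below applies it to a few Hounsfield values; the pre-set numbers are the ones given in the text, and the function is only an illustration of the mapping, not a vendor implementation.

import numpy as np

def apply_window(hu, window, level):
    """Map CT numbers onto [0, 1] for display with the given window/level."""
    lo = level - window / 2.0
    return np.clip((np.asarray(hu, dtype=float) - lo) / window, 0.0, 1.0)

ct_values = np.array([-1000, -600, -200, 20, 200])
print(apply_window(ct_values, window=1600, level=-600))  # lung pre-set
print(apply_window(ct_values, window=400, level=20))     # mediastinum pre-set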
The time between staging 2-[ 18 F]FDG PET/CT and start of RT should not exceed 3 weeks, because disease may progress rapidly, invalidating prior target definition [8,12,44].
In patients having undergone neoadjuvant/induction chemotherapy prior to radiation treatment planning, 2-[ 18 F]FDG PET/ CT scan prior to chemotherapy needs to be taken into account when identifying metastatic lymph nodes. Lymph nodes fulfilling the above mentioned criteria for inclusion in the target volume (see "2-[ 18 F]FDG PET/CT for lung cancer staging"), still need to be included irrespective of their 2-[ 18 F]FDG uptake or appearance on CT after chemotherapy. For inclusion in the target volume, the initial 2-[ 18 F]FDG PET/CT is to be registered in the subsequent 2-[ 18 F]FDG PET/CT acquired in radiation treatment position. Caution must be taken regarding geometrical alignment as well as CT dose calibration.
The recent multicentre, randomized, controlled PET-PLAN Trial (ARO-2009-09, NCT00697333) confirmed the safety of using 2-[ 18 F]FDG PET/CT to define the target for primary tumour and selective nodal treatment in patients with locally advanced NSCLC undergoing CRT [15]. The study showed that the mean total RT dose was significantly higher in the 2-[ 18 F]FDG PET-based target group than in the conventional target group, allowing doses of 68 Gy or more to be achieved more frequently (47% vs. 33% of cases). The risk of loco-regional progression in the 2-[ 18 F]FDG PET-based target group was lower than in the conventional target group (14% vs. 29%), without increasing toxicity.
Considering the lack of information in the literature specifically about SCLC, the following paragraphs about primary tumour and lymph node definition and delineation will mainly focus on NSCLC.
Primary tumour
2-[ 18 F]FDG PET/CT has the advantage of increasing inter- and intra-observer reproducibility. This enables a reduction of the primary tumour GTV in at least 13-17% of patients compared with the CT-measured tumour volume [50,51].
The Phase II prospective trial by the Radiation Therapy Oncology Group, RTOG 0515 [52] demonstrated that 2-[ 18 F]FDG-derived tumour volumes were significantly smaller than those derived by CT alone (86.2 vs. 98.7 mL), resulting in RT planning modification.
A systematic review and meta-analysis [53] estimated that the use of 2-[ 18 F]FDG PET/CT imaging for RT planning purposes led to changes in target definition in 36% of cases (43% in NSCLC and 26% in SCLC) and a change of treatment intent from curative to palliative treatment in 20% of cases (22% in NSCLC and 9% in SCLC).
Lymph nodes
Several reasons for false negative and false positive lymph nodes on 2-[ 18 F]FDG PET imaging are widely reported in the literature [54]. For instance, tumours with low cellular density (such as carcinoid, mucinous, and lepidic adenocarcinoma histology) or subcentimeter in size may not show 2-[ 18 F]FDG uptake higher than background. Conversely, areas of inflammation or infection, including granulomatosis (e.g. tuberculosis, sarcoidosis, and Langerhans cell histiocytosis), pneumoconiosis (e.g. asbestosis, anthracosis, and silicosis), and post-surgery and post-irradiation fibrosis may show 2-[ 18 F]FDG uptake unrelated to tumour.
There is consensus that 2-[ 18 F]FDG PET be used to define lymph nodes included in the GTV. Although pathological confirmation was not systematically obtained in every study, several authors concluded nodal staging by 2-[ 18 F]FDG PET improved GTV definition and delineation, enabling dose intensification to involved nodes, while reducing irradiation and resultant toxicity to normal tissues. The RTOG 0515 trial [52] reported that 2-[ 18 F]FDG changed nodal GTV contours in 51% of patients.
A few studies compared the target definition based on CT and/or 2-[ 18 F]FDG to surgical information, which was considered the gold standard. Vanuytsel et al. [55] observed that the inclusion of 2-[ 18 F]FDG PET/CT information changed nodal GTV in 62% of patients compared to CT information, and improved GTV coverage compared to pathological data from 75% with CT alone to 89% with 2-[ 18 F]FDG PET/CT. Nevertheless, true nodal GTV may still be underestimated, in particular in higher TNM stages [56].
The delineation of regional nodal disease on PET has been conducted in similar ways as that for the primary tumour. Taking all available information into account, it is recommended that 2-[ 18 F]FDG positive lymph nodes only be omitted from the RT plan in the setting of a representative negative nodal biopsy, for instance, showing granulomatous disease [39,57]. Nestle et al. [58] reported improved inter-observer agreement for mediastinal involvement on 2-[ 18 F]FDG PET/CT after a standardized training process for PET readers.
2-[ 18 F]FDG PET/CT for response evaluation and residual or recurrent disease detection
Local tumour recurrence after RT usually occurs within 2 years after treatment and represents a diagnostic challenge. According to the NCCN, ESMO, and ESTRO-ACROP guidelines [5,9,11], follow-up imaging after lung cancer surgery or SABR should be done with chest CT. The selective use of 2-[ 18 F]FDG PET is recommended when recurrence is suspected based on serial CT scans, to differentiate true malignancy from benign conditions, such as atelectasis, consolidations, and radiation-induced fibrosis.
2-[ 18 F]FDG PET/CT may help to differentiate recurrent tumour from post-radiation fibrosis if sufficient time has elapsed since last treatment, to avoid false positive uptake due to inflammation [59,60]. Considering the low positive predictive value of 2-[ 18 F]FDG PET, pattern recognition is important for detecting recurrence. The areas treated with SBRT can have well-defined intense 2-[ 18 F]FDG uptake up to 6 months after treatment, and low-level, ill-defined uptake can last up to 2 years because of radiation pneumonitis in the surrounding lung parenchyma [61,62]. Moreover, local recurrence tends to be more focal, whereas inflammation has a more diffuse appearance [63,64]. 2-[ 18 F]FDG uptake and structural lung parenchyma changes in geographic distribution concordant to the prior radiation treatment area may assist in differentiating between these entities. Software that allows fusion of radiation dose-volume contours as DICOM-object with PET images can be particularly helpful in this regard. Moreover, the 2-[ 18 F] FDG uptake pattern suggesting post-radiation pneumonitis may precede symptoms [65,66], while oesophageal toxicity causing clinical symptoms can be detected as increased linear 2-[ 18 F]FDG uptake along the oesophagus [65,67].
The reduction in 2-[ 18 F]FDG accumulation at the tumour site after treatment indicates tumour response and is associated with better prognosis [68][69][70]. A decrease in 2-[ 18 F]FDG uptake may be an earlier indicator of response to treatment, occurring before a decrease in tumour size. The greater the decline in uptake, the better the response. Considering that 2-[ 18 F]FDG PET/CT has a high negative predictive value, a residual uptake equal to or below background is defined as a complete metabolic response [68,70]. However, it is debatable whether background is better evaluated in liver or blood pool. Metabolic response criteria, such as the PET Response Evaluation Criteria in Solid Tumours (PERCIST), that assess change in standardized uptake value (SUV) corrected for lean body mass (SUL) have also been used to quantify metabolic response to treatment in clinical trials [71]. The Hopkins criteria consider focal 2-[ 18 F] FDG uptake greater than that of the liver (scores of 4 and 5) to represent residual disease [72]. Considering possible false-positive findings on 2-[ 18 F]FDG PET, patients suitable for salvage therapy should undergo a biopsy for confirmation because it remains difficult to differentiate fibrosis from residual disease or local recurrence. Currently, there is not a validated method or reference cut-off SUVmax number to accurately differentiate responders from non-responders.
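Because SUV and SUL recur throughout response assessment, their definitions are worth spelling out. The sketch below uses body-weight SUV with the injected dose decay-corrected to scan time and the James equation for lean body mass; this is one common formulation among several (PERCIST specifies its own lean-body-mass formula), and the numbers in the example are illustrative.

import math

F18_HALF_LIFE_S = 109.77 * 60.0  # fluorine-18 half-life

def decay_corrected_dose_bq(injected_dose_bq, uptake_time_s):
    return injected_dose_bq * math.exp(-math.log(2.0) * uptake_time_s / F18_HALF_LIFE_S)

def suv_bw(conc_bq_per_ml, injected_dose_bq, uptake_time_s, weight_kg):
    """Body-weight SUV: tissue concentration over decay-corrected dose per gram."""
    return conc_bq_per_ml / (decay_corrected_dose_bq(injected_dose_bq, uptake_time_s) / (weight_kg * 1000.0))

def lbm_james_kg(weight_kg, height_cm, sex):
    """James equation for lean body mass (one of several formulations in use)."""
    if sex == "male":
        return 1.10 * weight_kg - 128.0 * (weight_kg / height_cm) ** 2
    return 1.07 * weight_kg - 148.0 * (weight_kg / height_cm) ** 2

def sul(conc_bq_per_ml, injected_dose_bq, uptake_time_s, weight_kg, height_cm, sex):
    dose = decay_corrected_dose_bq(injected_dose_bq, uptake_time_s)
    return conc_bq_per_ml / (dose / (lbm_james_kg(weight_kg, height_cm, sex) * 1000.0))

# Illustrative: 250 MBq injected, 60 min uptake, 10 kBq/mL lesion concentration, 80 kg, 175 cm male
print(round(suv_bw(10_000.0, 250e6, 3600.0, 80.0), 2))
print(round(sul(10_000.0, 250e6, 3600.0, 80.0, 175.0, "male"), 2))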
2-[ 18 F]FDG PET/CT for predicting outcome after RT
Some studies have suggested that the presence of heterogeneous tumour uptake on baseline 2-[ 18 F]FDG PET/CT can predict local failure and can therefore be used to define areas at risk of recurrence after treatment [73,74].
Usmanij et al. [24] verified that changes in metabolic parameters could predict response to concomitant CRT as early as the end of the second week of treatment in patients with locally advanced NSCLC; i.e. a total lesion glycolysis (TLG) decrease ≥38% was associated with a significantly longer one-year progression free survival (80% vs. 36%). In a meta-analysis, Na et al. [23] reported that the SUVmax in the primary tumour both before and after RT was able to predict patient outcome with regard to local control and overall survival. Other studies have also shown that post-treatment response assessment with 2-[ 18 F]FDG PET/CT can predict survival [24,75,76].
Goal
The aim of this guideline is to provide general information about 2-[ 18 F]FDG PET/CT in lung cancer (both NSCLC and SCLC) and specific considerations for RT planning with an emphasis on the collaboration between nuclear medicine physicians and radiation oncologists. In this guideline, concepts about target definition, target delineation, and pertreatment evaluation will be included.
This field is rapidly evolving, and this guideline should be regarded as a dynamic rather than a definitive document; nor is it a summary of all existing protocols. Local variations should be taken into consideration when applying this guideline, preferably in a multidisciplinary setting.
Physicians
RT planning for lung cancer is at the intersection of radiation oncology, nuclear medicine, and diagnostic radiology expertise. It has been shown that mutual training and close collaboration of specialists from these fields optimize the RT target delineation process [77]. It is important to consistently check hybrid image co-registration before any target delineation, and joint GTV delineation by a radiation oncologist together with a nuclear medicine physician and a radiologist has produced encouragingly consistent results [78].
Treatment planning includes professionals trained in multimodality imaging according to interdisciplinary training programs, who also participate in the lung multidisciplinary tumour board. Specific training programs, including case-based training (e.g. ESTRO target volume determination courses, https://www.estro.org/Courses), should be consulted to facilitate exchanges between specialties.
The nuclear medicine physician or PET/CT specialist confirms that the radiopharmaceutical administration and image acquisition are according to the guidelines [42] and verifies that the acquired image is adequate for medical diagnosis. Informed consent might need to be obtained for 2-[ 18 F]FDG PET/CT, according to national/institutional requirements.
The target volume definition and delineation for treatment planning is performed by the radiation oncologist. Where radiation oncology departments own a PET/CT scanner and conduct their own simulation scans, it is recommended that staff performing the target delineation are properly trained in 2-[ 18 F]FDG PET/CT image interpretation. In any case, consultation of a radiologist and/or nuclear medicine physician should be easily accessible, for example when in doubt about possible physiologic uptake or abnormal findings during the delineation process.
Different approaches are proposed in the interpretation and GTV delineation (see "Interpretation and target volume delineation"). Therefore, it is recommended to develop departmental instructions for GTV delineation, which should include testing of the reproducibility of metabolic GTV delineation within a nuclear medicine department. All delineation steps for GTV should be performed or supervised by radiation oncologists according to local practice. An additional peer review by another radiation oncologist is highly recommended because inter-observer variation in the delineation is one of the main uncertainties in RT planning of lung cancer.
Technologists
PET/CT scans should be performed by a qualified registered and/or certified nuclear medicine technologist [79]. If specific PET knowledge and training have been gained by nuclear medicine technologists, they should be able to perform PET quality control testing. It is advisable that technologists receive and maintain training in each other's fields to create a group of professionals with complete competence to acquire PET/CT scans in the RT setting. In practice, cooperation between departments and personnel is fundamental to guarantee adequate execution of the protocol.
Imaging technologists are responsible for proper patient preparation to achieve optimal 2-[ 18 F]FDG biodistribution, tracer administration according to radiation safety requirements, adequate handling of radioactive patients during the imaging procedure, appropriate PET/CT image acquisition and reconstruction, and acquisition of the planning CT scans (with or without intravenous contrast). The technologists trained in RT planning are also involved in the imaging process. They are responsible for installing the RT equipment on the PET/CT (e.g. flat bed, treatment positioning devices) and for ensuring stable and reproducible positioning of the patient, confirming that it is suitable for RT planning according to the region of the body to be treated and guaranteeing patient comfort. This also includes respiratory gating, if available and validated at the site, and marking of the isocentre reference points on the patient. Patient positioning is of utmost importance to avoid mistakes and pitfalls that interfere with RT planning and treatment; common points that should be borne in mind are the site of treatment, isocentre position, inserted references and measurements, time of contrast and bolus administration, and any additional medication. A multidisciplinary approach of both nuclear medicine and RT technologists is needed when verifying image quality and applicability for RT planning.
Physicists and information technology personnel
A multidisciplinary and collaborative approach should also apply to physicists, information technology personnel, and technical support team. Quality control of the PET/CT should be done by a medical physicist with special expertise in nuclear medicine. PET/CT scanners must adhere to regional, national, and international quality standards, including international dosimetry and radiation precautions for patients and staff alike. A further task is to develop and implement more refined and reproducible methods of PET/CT segmentation (automatic algorithms and/or artificial intelligence-based) to improve the detection of lung lesions. A medical physicist should ensure adherence to good practice, perform radiation dose monitoring, and develop algorithms to minimize the radiation exposure [80,81]. Quality control of the RT equipment should be done by a physicist with expertise in RT. The physical RT planning and the dose calculation should be reviewed by a dedicated physicist before the final treatment plan has been approved.
Procedure and specifications of the examination
As the availability of imaging modalities assisting RT planning is variable between institutions and continuously evolving, embedding 2-[ 18 F]FDG PET/CT imaging in the RT plan is considered in the ESTRO-ACROP current guidelines [8,11,12], but should be tailored to local workflow. The workflow should be defined and managed in a multidisciplinary manner. Several specific aspects need to be considered in 2-[ 18 F]FDG PET/CT-based treatment planning of lung cancer. These will be described in the following paragraphs.
Request
The acquisition and interpretation of imaging studies are guided by the clinical questions that need to be answered. The request for a 2-[ 18 F]FDG PET/CT in RT position should be written (preferably digitally) and contain all standard information for an oncological 2-[ 18 F]FDG PET/CT, in particular, location of the (former) primary lung tumour and/ or known metastases, previous radiation therapy dates, dose and locations, and previous/simultaneous chemotherapy regimens prescribed. Information about other lung diseases (e.g. tuberculosis, sarcoidosis, and other granulomatous diseases, pneumonia), prior talc pleurodesis, thoracic surgery, or recent biopsy should also be provided. It should explicitly include the request for performing the scan in the RT position. In most cases, the administration of intra-venous contrast will be requested, and, in these cases, kidney function (or glomerular filtration rate) and history of contrast allergy should be noted.
Patient preparation and precautions
Patient preparation should be done according to the "2-[ 18 F] FDG PET/CT EANM procedural guidelines for tumour imaging version 2.0" and the American "Society of Nuclear Medicine and Molecular Imaging (SNMMI) procedure guideline for tumour imaging with 18F-FDG PET/CT 1.0" [42,82]. This includes fasting for at least 4 h prior to imaging, proper hydration, verification of a serum glucose level <11 mmol/L, and resting in a quiet and warm environment during the 2-[ 18 F]FDG-uptake time that should ideally last 60 ± 5 min.
The administration of intravenous contrast improves primary tumour delineation, regional lymph nodes identification, and OAR definition on CT. In such cases, kidney function and history of contrast allergy should be verified before intravenous contrast injection. The administration of contrast media and premedication should follow local chest CT radiological protocols.
If no diagnostic thoracic CT is available, it should be considered to include an additional low-dose, deep-inspiration thoracic CT to adequately evaluate the lung parenchyma. This is also important for comparison with previous or later thoracic CTs.
Patient setup should be performed with levelling lateral and sagittal lasers, to ensure accurate alignment and positioning. Reference ink or tattoo marks of the isocentre should be used (one on the right side, left side, and ventral centre) to ensure reproducibility of setup at the time of treatment [83].
Radiopharmaceuticals
The administration of 2-[ 18 F]FDG should follow the EANM/ SNMMI guidelines about PET/CT imaging in the oncology context and should be in concordance with the "As Low As Reasonable Achievable (ALARA)" principle, which may enable the reduction of administered activity, mainly in the newest generation scanners [42,82,84].
Hardware
According to the International Atomic Energy Agency (IAEA) consensus report 2014, the PET/CT scanner should be equipped with a flat RT table top, patient positioning devices, and the CT component should be calibrated for its safe use in RT planning and dose calculation [44,85]. The EORTC recommendations for RT planning in lung cancer state that a stable and reproducible patient positioning is essential [86]. If possible, patients should be in supine position with both arms above the head. It is recommended to use support devices for arms and knees to improve the position reproducibility and patient comfort. The equipment used for patient immobilization should be similar when performing PET/CT and RT. The PET/CT imaging should be verified before contouring to avoid co-registration misalignment, even in hybrid PET/CT systems.
Protocol/image acquisition
2-[ 18 F]FDG PET/CT performed from the mid-thighs to skull base, after bladder voiding, is recommended, and the acquisition details should follow the EANM/SNMMI procedural guidelines and also the specifications of the PET/CT scanner used [42,82]. When 2-[ 18 F]FDG PET/CT is performed for staging with the possibility of doing RT planning in one setting, a flat RT table-top should be used. Patients should be informed about the need to place tattoo marks.
Respiratory motion correction
Respiratory motion may have an impact on tumour localization, delineation, SUV quantification, and, consequently, dose delivery in RT. Although the details depend strongly on the PET manufacturer's implementation, based on the "Report of the American Association of Physicists in Medicine Task Group 76" [87], respiratory motion correction may be organized into the following four categories: (1) Motion-encompassing methods include (a) slow CT scanning, (b) four-dimensional (4-D) CT/respiration-correlated CT, and (c) forced shallow breathing with abdominal compression. In slow CT acquisition, multiple respiration phases are averaged per slice. The disadvantages are the increased dose compared with conventional CT scanning and the loss of resolution due to motion blurring; therefore, it is not recommended for lung tumours that are adjacent to either the mediastinum or the chest wall.
A suitable solution for obtaining high-quality CT data in the presence of respiratory motion is 4-D CT or respiration-correlated CT. The 4-D CT could be combined with a 3-D or a 4-D PET scan. In 4-D acquisitions, the scans are retrospectively binned into a number of breathing cycle phases, using a respiratory tracking system [44,88]. The impact of additional 4-D PET information to 3-D PET is promising but is still a matter of active investigation [89][90][91][92], with several translational research projects within prospective SBRT trials, such as the Freiburg mono centre phase II STRIPE trial or the EORTC 2113-0813 Lungtech trial [93,94]. A limitation of 4-D CT is that it may be affected by variations in respiratory patterns during acquisition and various techniques are currently being investigated to reduce these respiratory artefacts [95,96].
The forced shallow breathing with abdominal compression technique employs a stereotactic body frame with an attached plate or an inflatable belt that is pressed against the abdomen. The applied pressure to the abdomen reduces diaphragmatic excursions, while allowing limited normal respiration. Implementation of these techniques is highly dependent on patient cooperation, and previous training can help increasing image quality.
(2) Controlled breathing methods include (a) moderate or deep-inspiration breath-hold, (b) active-breathing control, (c) self-held breath-hold without respiratory monitoring, and (d) self-held breath-hold with respiratory monitoring [97,98]. Moderate or deep-inspiration breath-hold is advantageous because it significantly reduces respiratory tumour motion and changes internal anatomy (the diaphragm pulls the heart posteriorly and inferiorly) in a way that often protects critical normal tissues.
Breathing control is a method that enables reproducible breath-hold. After a nose clip is put on, the patient breathes through a mouthpiece connected via flexible tubing to a spirometer according to the technologist's instructions. The patient breathes normally through a device consisting of a digital spirometer to measure the respiratory trace. In active breathing control, the patient is connected to a balloon valve.
Self-held breath-hold with or without respiratory monitoring means that patients hold their breath at some point in the breathing cycle according to a respiratory monitor device or voluntarily, respectively.
(3) Respiratory gating includes gating based on (a) external respiration signal or (b) internal fiducial markers that are implanted in or close to the tumour using percutaneous or bronchoscopic techniques. It involves image acquisition in a defined part of the breathing cycle, and the gating characteristics are established according to the patient's respiratory motion [99]. The gating PET/CT imaging improves the assessment of intra-tumour heterogeneity and may be adequate for dose painting [100] (see "Imaging tumour metabolism and dose painting"). (4) Data-driven gating techniques -instead of using hardware-driven motion correction strategies (as described in previous sections), new methods are being explored using data-driven software analysis. Some examples include (a) motion characterization directly from a patient's gated scan using the signal to create a single optimal bin, and leading to conformal adaptive imaging [101], and (b) motion information extraction from the reconstructed images [102]. The real-time data-driven motion correction, as opposed to post-processing methods, represents an important innovation in the speed of processing data for clinical practice [103]. Currently, there is no robust evidence to choose one method over the other, so the decision is based on the availability in the department and its implementation should follow institutional and national regulations.
Interpretation and target volume delineation
2-[ 18 F]FDG PET/CT images should be discussed between the nuclear medicine physician, the radiologist, and the radiation oncologist to define the best treatment planning. Several PET-based tumour volume delineation methods have been evaluated, and algorithms for semiautomatic 2-[ 18 F]FDG PET segmentation have evolved exponentially in the last decade [104,105]. Since 2017, artificial intelligence-based segmentation approaches seem to outperform previous state-of-the-art algorithms [105,106]. Classifications of PET auto-segmentation methods can be based on image processing algorithms, pre- and post-processing steps, and automation level [105]. Some of the possible segmentation methods are summarized in Table 1, and may include the following four groups: (1) Manual methods - visual interpretation and manual delineation of the PET-based GTV using a computer mouse slice-by-slice are common in daily practice and widely used [107]. One of the main disadvantages of manual delineation of PET images is its strong operator dependence that can, therefore, result in high intraobserver variability and reduced reproducibility [108]. (2) Threshold-based segmentation methods - they are commonly used because of their simplicity. They rely on a fixed or adaptive threshold (e.g. SUV, background noise, contrast, signal-to-noise ratio, size), above which all voxels are considered to belong to the tumour volume (a simple illustrative threshold rule is sketched at the end of this section). Fixed thresholding alone should be avoided since it strongly depends on tumour contrast, size, shape, and heterogeneity [105]. It may be used only as an initial guidance for a subsequent manual delineation. Adaptive thresholding approaches accounting for both contrast and size are more appropriate, although they require a scanner-specific phantom-based calibration procedure [105]. (3) Image processing methods - these auto-segmentation approaches allow delineation of uptake semi-automatically without prior calibration and have been developed to either guide or generate the tumour volume [105,109,110]. According to ESTRO-ACROP [12], the tumour volume should be delineated on the CT image, with guidance of the PET. These methods may help to overcome the low signal-to-noise ratio and the poor spatial resolution of PET images. They may explore the image contrast and the spatial resolution of CT; may create combinations of co-registered PET, CT, or MR data sets; or may use deep learning to analyse a large number of imaging features. Some examples include gradient-based methods, hybrid methods, deformable contour models, model-based methods, statistical image methods, multimodality-based methods, and machine learning methods [111][112][113][114][115][116][117][118][119][120].
(4) Consensus methods -the combination of several segmentation methods improves segmentation accuracy when compared to a single method. Additionally, it compensates the weaknesses of individual methods and, therefore, may be advantageous for RT planning [121]. Currently, three consensus algorithms are available: majority vote (MJV), simultaneous truth and performance level estimation (STA-PLE), and automatic decision tree-based learning algorithm for advanced segmentation (ATLAAS) [122][123][124][125].
It has been shown that consensus methods improve accuracy and reproducibility in volume segmentation compared to all separate segmentation methods in different experimental circumstances [121,126,127].
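A minimal sketch of the majority-vote (MJV) idea, assuming a set of co-registered binary masks produced by different segmentation methods (illustrative only; this is not the STAPLE or ATLAAS algorithm):

```python
import numpy as np

def majority_vote(masks):
    """Majority-vote (MJV) consensus of several binary segmentation masks.

    masks : list of boolean arrays of identical shape, one per segmentation method
    Returns the voxels labelled as tumour by more than half of the methods.
    """
    stacked = np.stack([m.astype(np.uint8) for m in masks], axis=0)
    votes = stacked.sum(axis=0)
    return votes > (len(masks) / 2.0)
```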
The state-of-the-art 2-[ 18 F]FDG PET auto-segmentation algorithms, relying on advanced image analysis paradigms, seem to be more accurate than approaches based on manual methods and 2-[ 18 F]FDG activity thresholds [105,106]. However, optimization to scanning conditions, tumour type, and tumour location is still necessary. Currently, there is no approved method and, therefore, all auto-segmentation contours should be critically verified by a physician [105]. An institutionally well organized manual delineation may cover the needs in clinical routine, albeit potentially more time consuming.
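For the threshold-based approach described under (2) above, a minimal sketch (the function name, the 40%-of-SUVmax default, and the simple background-adjusted variant are illustrative assumptions, not recommendations of this guideline):

```python
import numpy as np

def threshold_segmentation(suv_volume, fraction_of_max=0.4, background_suv=None):
    """Crude PET tumour segmentation by thresholding a SUV volume.

    fraction_of_max : fraction of SUVmax used as the cut-off (0.4 is a commonly
                      quoted illustrative value, not a guideline recommendation)
    background_suv  : if given, a simple contrast-oriented cut-off that also
                      accounts for background activity is used instead
    """
    suv_max = float(np.max(suv_volume))
    if background_suv is None:
        cutoff = fraction_of_max * suv_max                           # fixed threshold
    else:
        # Illustrative adaptive variant: threshold between background activity
        # and a fraction of SUVmax (scanner-specific calibration not modelled).
        cutoff = background_suv + fraction_of_max * (suv_max - background_suv)
    return suv_volume >= cutoff   # boolean tumour mask
```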
Documentation/reporting
The interpretation and reporting of 2-[ 18 F]FDG PET/CT scans should be done by a trained and certified nuclear medicine physician, or by a radiologist trained in 2-[ 18 F]FDG PET/CT, with experience in lung malignancies. One combined report including both PET and CT information, or two separate reports for each imaging modality with a summary of the main findings and an integrated conclusion, may be written, according to local circumstances and national reimbursement policies.
The following aspects should be included in a structured report: (1) patient and study identification; (2) clinical information (including the question from the referring clinician, complementary information obtained from the medical history or data collected from the clinical process); (3) procedure including the administered radiopharmaceutical activity, route of administration, uptake time, blood glucose level, PET scanner type, field of view, CT protocol (low dose or dedicated), additional imaging acquisition (e.g. respiratory-gating or delayed thoracic images), details on administered intravenous contrast, ancillary medications, reconstruction technique, and if the PET/CT was performed for RT planning; (4) comparison studies used for correlation; (5) main findings described by order of importance (may follow the TNM staging classification, anatomic site, or hybrid formats); and (6) summary and final impression aiming to answer to the clinical question, to mention the TNM staging in the initial evaluation, to classify the study as complete metabolic response, partial metabolic response, stable disease, pseudo-progression or progressive metabolic disease in restaging, and to provide guidance to the referring doctor. When using PET/CT scans for delineation, the person performing the delineation should be trained in 2-[ 18 F]FDG PET/CT image interpretation. Moreover, it is advisable to document the method of delineation (manual or automatic), including whenever appropriate the threshold used and/or other methodology related parameters (e.g. %SUVmax) to facilitate a second definition of the GTV if necessary (see "27").
Equipment specifications, quality control, and radiation safety in imaging
The EANM/SNMMI procedural guidelines for tumour imaging apply for quality control of PET [42,82]. Also, it is recommended to adhere to the EANM Research Ltd (EARL©, http://earl.eanm.org/cms/website.php) accreditation program, which is aimed at harmonizing quantification among different equipment in a wide range of tumour types and is available for 2-[ 18 F]FDG PET/CT and PET/MRI [128,129].
It is recommended that the PET/CT equipment used for RT is in accordance with the requirements for RT planning, including the flat table-top, positioning devices, laser systems, and increased gantry diameter [85]. According to the national and/or international guidelines, the quality control of the PET/CT hardware should also include the quality control of the CT, the PET, and the PET/CT alignment [85,130,131]. In the RT context, it is required that the quality control follows the RT recommendations, including table positioning and movement, and laser geometry and accuracy [83,132].
When put into perspective against the dose received from external beam RT, the radiation dose to patients from PET/CT imaging is negligible. The majority (40-60%) of the radiation exposure of technologists is related to 2-[ 18 F]FDG preparation, injection, and patient positioning [133,134].
Measures to reduce the personnel exposure to radiation should be promoted, and some examples include patient instruction before 2-[ 18 F]FDG injection, trained staff in positioning patients, and room preparation prior to patient arrival [84].
For tumour delineation purposes, it is recommended to review the acquired images, namely the alignment of the CT and PET components. Then, the images should be transferred to the RT planning system to enable the final display of the PET/CT images on the planning computer. It is important that each part of the process has undergone appropriate quality assurance testing and that the complete process has been validated.
Safety, infection control, and patient education concerns
Imaging should follow local safety protocols, but some guidance may be obtained from the "American College of Radiology Position Statement on Quality Control and Improvement, Safety, Infection Control and Patient Education" [135].
Imaging tumour metabolism and dose painting
Dose painting is a sophisticated approach to selectively deliver dose to different parts of a tumour, including delivery of higher doses to treatment-resistant areas, rather than escalating the dose to the whole tumour [136]. Usually, areas of high pre-treatment 2-[ 18 F]FDG uptake within the primary tumour are considered to be more aggressive. Therefore, these areas may be considered the target for dose-escalation [109,137,138].
Defining the biologic target volume, i.e. tumour subvolumes requiring a higher or lower dose based on the tumour microenvironment, is a crucial step in RT planning for dose painting and possibly partial dose-escalation, termed boosting. Several methods for pre-treatment segmentation have been proposed, but none of them has been proven superior to another [104,106,139]. However, there are some promising results. For instance, the Netherlands randomized phase II PET-boost trial (NCT01024829) [140] showed the feasibility of dose-escalation using an integrated boost to the primary tumour or high 2-[ 18 F]FDG uptake regions (>50% SUVmax) whilst keeping the pre-defined dose constraints. In this trial, the dose could be escalated to at least 72 Gy in 75% of patients, without increasing the dose to the OAR.
Intermediate/mid-treatment 2-[ 18 F]FDG PET/CT and adaptive RT
Image-based adaptive RT was initially introduced in an effort to overcome the challenge of tumour motion, but it also enabled an earlier assessment of treatment response [141][142][143]. It offers an opportunity to identify ineffective therapies and switch to an alternative treatment regimen, preventing futile radiation toxicity. Interim imaging can be performed anytime during the scheduled treatment duration. Several authors have demonstrated that 2-[ 18 F]FDG activity changes remarkably during the course of RT. They found that change in mid-treatment 2-[ 18 F]FDG activity correlated with post-RT response, which was predictive of overall survival [75,[144][145][146]. Wang et al. [75] concluded that a 75% decrease of SUV predicted overall survival and 2-year progression free survival (hazard ratio of 0.97 for both). The prospective study RTEP1 analysed 2-[ 18 F]FDG PET/CT examinations performed during thoracic RT (the first PET was performed before the first RT fraction, and five additional PET scans were performed after each 14-16 Gy of dose up to the total dose of 70 Gy), given either alone or with chemotherapy [147]. They observed an average 50% decrease in SUVmax at approximately 40-45 Gy (i.e., during week 5 of RT). The subsequent multicenter RTEP2 study (NCT01261598) [148] demonstrated the prognostic value of 2-[ 18 F]FDG PET/CT during curative-intent RT with or without concomitant chemotherapy in patients with NSCLC. The SUVmax of the PET2 scan, performed during the fifth week of treatment, was the single variable predictive of death or tumour progression at 1 year in multivariate analysis.
Additionally, the prospective phase 2 RTOG1106 trial (NCT01190527) [149] in patients with locally advanced NSCLC showed the feasibility of dose escalation to persistent 2-[ 18 F]FDG avid tumour seen on mid-treatment 2-[ 18 F]FDG PET/CT. The interim analysis led to an improved 2-year loco-regional tumour control rate, reaching in-field and overall loco-regional tumour control rates of 82% and 62%, respectively. The final results of this trial are expected by the end of 2021. However, a phase 3 study will be required before this adaptive approach starts being used in standard clinical care.
PET/MRI
There are few studies comparing 2-[ 18 F]FDG PET/MRI and 2-[ 18 F]FDG PET/CT in lung cancer patients, but both seem to show similar high diagnostic performance [160][161][162]. Nevertheless, MRI of the chest or sub-regions of interest could be added to the workup in cases with chest wall infiltration, superior sulcus tumours (including Pancoast tumours) or para-spinal tumours [163]. To allow a co-registered planning, MRI sequences should be acquired in the RT planning position.
Radiomics
Radiomics is an emerging field with significant potential for prognostic stratification, RT planning, and response assessment in patients with lung cancer. Radiomics involves the extraction of a large number of quantitative features from medical images using advanced imaging processing and analysis tools, and it is actively explored in lung cancer [164][165][166][167]. The integration of artificial intelligence in this radiomic-driven pipeline may also allow mainstreaming their use in clinical practice [168,169]. Some studies indicate that an analysis of pretreatment 2-[ 18 F]FDG PET/CT images based on the use of radiomics may allow to predict local control for patients undergoing SBRT [166,170]. A recent retrospective multicentre trial study [170] showed that both PET and CT features were predictive of local control, with a predictive model combining two PET features reaching a sensitivity and specificity of 100% and 81%, respectively. Another group found that specific PET features closely correlated to tumour volume definition, specifically, in larger tumours [171]. Due to the variability in acquisition and reconstruction protocols, a "Radiomics Quality Score" was created in order to harmonize the radiomic feature calculation methods and protocols, enabling comparison between different studies [172][173][174].
Supplementary information
The Society of Nuclear Medicine and Molecular Imaging (SNMMI) is an international scientific and professional organization founded in 1954 to promote the science, technology, and practical application of nuclear medicine. The European Association of Nuclear Medicine (EANM) is a professional non-profit medical association that facilitates communication worldwide between individuals pursuing clinical and research excellence in nuclear medicine. The EANM was founded in 1985. SNMMI and EANM members include physicians, radiologists, technologists, and scientists specializing in the research and practice of nuclear medicine.
The European Society for Radiotherapy & Oncology (ESTRO) was founded in 1980, and is a non-profit scientific organisation that fosters the role of radiation oncology in order to improve patients' care in the multimodality treatment of cancer. With over 6,500 members inside and outside Europe, ESTRO supports all the radiation oncology professionals in their daily practice. ESTRO members include radiation oncologists, medical physicists, radiobiologists and radiation technologists and members of the wider oncology community. Its mission is to promote innovation, research, and dissemination of science through congresses, special meetings, educational courses and publications.
The SNMMI and EANM periodically define new guidelines for nuclear medicine practice to help advance the science of nuclear medicine and improve the quality of service to patients throughout the world. Existing practice guidelines are reviewed for revision or renewal, as appropriate, on their fifth anniversary or sooner, if indicated. Each practice guideline, representing a joint policy statement by the SNMMI/EANM, has undergone a thorough consensus process in which existing evidence has been subjected to extensive review. The SNMMI and EANM recognize that the safe and effective use of diagnostic nuclear medicine imaging requires specific training, skills, and techniques, as described in each document. Reproduction or modification of the published practice guideline by those entities not providing these services is not authorized.
These guidelines represent an educational tool designed to assist practitioners in providing appropriate care for patients. They are not inflexible rules or requirements of practice and are not intended, nor should they be used, to establish a legal standard of care. For these reasons, and those set forth below, both the SNMMI and the EANM caution against the use of these guidelines in litigation in which the clinical decisions of a practitioner may be called into question.
The ultimate judgment regarding the propriety of any specific procedure or course of action must be made by the physician or medical physicist in light of all the circumstances presented. Thus, there is no implication that an approach differing from the guidelines, standing alone, is below the standard of care. To the contrary, a conscientious practitioner may responsibly adopt a course of action different from that set forth in the guidelines when, in the reasonable judgment of the practitioner, such course of action is indicated by the condition of the patient, limitations of available resources, advances in knowledge or technology subsequent to publication of the guidelines, local regulatory requirement, or reimbursement frameworks. The practice of medicine includes both the art and the science of the prevention, diagnosis, alleviation, and treatment of disease. The variety and complexity of human conditions make it impossible to always reach the most appropriate diagnosis or to predict with certainty a particular response to treatment.
Therefore, it should be recognized that adherence to these guidelines will not ensure an accurate diagnosis or a successful outcome. All that should be expected is that the practitioner will follow a reasonable course of action based on current knowledge, available resources, and the needs of the patient to deliver effective and safe medical care. The sole purpose of these guidelines is to assist practitioners in achieving this objective.
Members of the EANM Oncology Committee (Sofia C. Vaz, Judit A. Adam, Ken Herrmann, Lioe-Fee de Geus-Oei), EANM Physics Committee (Dimitris Visvikis), EANM Technologists Committee (Andrea Santos), the SNMMI representative (Heiko Schöder) and the Advisory Committee on Radiation Oncology Practice (ACROP) of ESTRO (Bernard Dubray, Wouter van Elmpt, Yolande Lievens, Esther G.C. Troost), invited experts from Europe (Roberto C. Delgado Bolton, Pierre Vera) and Australia (Rodney J. Hicks) took part in developing this guideline. holder in this company, and he is also an honorary Trustee of the International Cancer Imaging Society and honorary Board Member of Neuroendocrine Cancer Australia. All the remaining authors declare no conflict of interest.
Disclaimer This guideline summarizes the views of the EANM Oncology and Theranostics Committee, the SNMMI and ESTRO. It reflects recommendations for which the EANM/SNMMI/ESTRO cannot be held responsible. The recommendations should be taken into context of good practice of nuclear medicine and radiation oncology and do not substitute for national and international legal or regulatory provisions.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Identifying SCC Lesions Capable of Spontaneous Regression by Using Immunohistochemistry: A Systematic Review and Meta-Analysis
Introduction Keratoacanthoma (KA) and squamous cell carcinoma (SCC) are two cutaneous conditions with morphological resemblance, which can complicate the diagnosis in some cases. Using immunohistochemistry staining of biomarkers could be beneficial in resolving this obstacle. Objectives We investigated a variety of biomarkers assessed in different studies in order to find the most important and helpful biomarkers for differentiation between SCC and lesions capable of spontaneous regression. Methods MEDLINE via PubMed and Google Scholar database were used to identify relevant literature up to 15 June 2022. The aim of our analyses was to determine the capability of biomarkers to distinguish between SCC and lesions capable of spontaneous regression using calculated individual and pooled odds ratios (OR) and 95% confidence intervals (CI) and I2 tests. Results Six potential biomarkers were CD10 with pooled OR= 0.006 (95% CI: 0.001–0.057) and I2=0%; COX-2 with pooled OR=0.089 (95% CI: 0.029–0.269) and I2=17.1%; elastic fibers with pooled OR= 6.69 (95% CI: 2.928–15.281) and I2=0%; IMP-3 with pooled OR=0.145 (95% CI: 0.021–1.001) and I2=44.5%; P53 with pooled OR=0.371 (95% CI: 0.188–0.733) and I2=55.9%; AT1R with OR=0.026 (95% CI: 0.006–0.107). Conclusions We suggest the utilization of the following IHC biomarkers for discrimination between lesions with spontaneous regression such as KA and SCC: CD10, COX-2, and elastic fibers.
Introduction
Cutaneous squamous cell carcinoma is the second most common non-melanoma malignant tumor of the skin, following basal cell carcinoma (BCC), and is also the leading cause of death related to non-melanoma skin cancer. The incidence rate of cutaneous SCC is rising continuously, primarily because of population aging and an increased screening rate [1,2]. It is characterized by the uncontrolled proliferation of atypical keratinocytes within the epidermis, which should be excised. Apart from population aging, other risk factors are mainly genetic factors, male sex, smoking, immunosuppression, and ultraviolet irradiation, primarily due to sun exposure [3,4]. Early diagnosis of such lesions is crucial. The diagnosis is based on the appearance, location (sun-exposed), and the patient's medical history. More importantly, the physician's suspicion would lead to more evaluation and eventually to reaching the diagnosis [5]. The gold standard for SCC diagnosis is still to obtain a skin biopsy and histopathologic evaluation [6]. Keratoacanthoma (KA), on the other hand, is considered a premalignant lesion with the potential capacity for transformation into SCC and is, therefore, a precursor of SCC. However, meta-analysis studies have pointed out a 12% probability of the transformation of KA into SCC [7]. Indeed, KA is a spontaneously regressing type of SCC [8]. If not transformed into SCC, KA would regress spontaneously within weeks [9]. Similar to SCC, the gold standard method of diagnosis of lesions capable of spontaneous regression is tissue biopsy and histological findings [7].

It should be mentioned that some studies consider KA, which are lesions capable of spontaneous regression, as a subbranch of SCC [10,11], and other studies do not consider these two diseases as separate from each other [12]. Although the features of these 2 types of lesions are alike in some aspects, the outcomes diverge. Hence, it is imperative to discriminate between these lesions. Nevertheless, a solid criterion is lacking for this purpose [9]. A number of studies have assessed the role of diverse cellular and nuclear markers in the differentiation between these two lesions. Some of these markers have been evaluated in several studies, while other markers have been determined in single studies. In this study, we intended to analyze the effectiveness of these markers in identifying lesions capable of spontaneous regression like KA and SCC.
Methods
The present systematic review and meta-analysis was performed based on the PRISMA statement.
Search Strategy and Screening
To determine research studies that assessed Immunohistochemistry (IHC) biomarkers participating in differentiating between SCC and lesions capable of spontaneous regression, a literature search was conducted using MEDLINE via PubMed and the Google Scholar database up to 15 June 2022. The following keywords were used in the search: "keratoacanthomas," "KA," "lesions capable of spontaneous regression," "squamous cell carcinoma," "SCC," "differentiation," "diagnoses," "biomarkers," and "IHC." The authors screened the titles, abstracts, and full texts of selected articles to choose the relevant articles.
Inclusion and Exclusion Criteria
The inclusion criteria based on the full text were: (1) assessing the IHC biomarkers in differentiating between SCC and lesions capable of spontaneous regression; (2) analyzing the IHC biomarkers on subtypes of skin cancers that must contain lesions capable of spontaneous regression and SCC.
The exclusion criteria were: (1) publications not in English; (2) non-IHC methods; (3) analysis of other subtypes of skin cancers without containing both SCC and lesions capable of spontaneous regression.
Data Extraction
Two authors reviewed all the suitable publications. Extracted data were organized into an Excel spreadsheet. The following data were collected from each study: first author's name, publication year, journal, biomarker(s), sample size (total and individual SCC and lesions capable of spontaneous regression), IHC staining positivity of lesions capable of spontaneous regression samples, and the significance of statistical analyses (obtained P value).
Statistical Analysis
R software was used to conduct statistical analyses to compare the odds ratio (OR) with 95% CI of SCC and lesions capable of spontaneous regression. Pooled ORs with 95% CI and the I² test for heterogeneity were calculated for the biomarkers investigated in at least 2 publications. The consistency of studies was evaluated by the I² heterogeneity test, which is interpreted as follows: 0% represents no inconsistency, and 100% represents total heterogeneity. The significance of heterogeneity was considered if the P value was <0.1.
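The analyses were run in R; the minimal Python sketch below shows the standard fixed-effect (inverse-variance) computation of a study OR, a pooled OR with 95% CI, and Cochran's Q / I². Whether the authors used a fixed- or random-effects model is not stated, so this is illustrative only:

```python
import math

def odds_ratio(a, b, c, d):
    """OR and its log-scale standard error from a 2x2 table
    (a, b = positive/negative stains in one group; c, d in the other).
    A 0.5 continuity correction is applied when any cell is zero."""
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return or_, se_log

def pooled_or_fixed(tables):
    """Inverse-variance fixed-effect pooled OR, 95% CI, and I^2 (%)."""
    logs, weights = [], []
    for t in tables:
        or_, se = odds_ratio(*t)
        logs.append(math.log(or_))
        weights.append(1.0 / se**2)
    pooled_log = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    q = sum(w * (l - pooled_log)**2 for w, l in zip(weights, logs))  # Cochran's Q
    df = len(tables) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 and df > 0 else 0.0
    ci = (math.exp(pooled_log - 1.96 * se_pooled), math.exp(pooled_log + 1.96 * se_pooled))
    return math.exp(pooled_log), ci, i2
```

For example, using the IMP-3 counts quoted later in the text, `pooled_or_fixed([(9, 25, 19, 14), (0, 8, 10, 5)])` reproduces the individual study ORs of roughly 0.265 and 0.031 reported below; the pooled estimate itself depends on the weighting model chosen.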
Relevant Studies and Flowchart
Among 64 relevant manuscripts, 33 were excluded based on the inclusion and exclusion criteria. Thirty-one relevant publications from 1989 to 2021 were reviewed, and data were extracted for analysis (Table 1). Overall, 43 biomarkers were studied, of which 14/43 were assessed in at least two studies, and 23/43 were investigated once. This selection is shown in Figure 1. The OR and 95% CI of these biomarkers were evaluated. Finally, seven significantly effective biomarkers that could differentiate between SCC and lesions capable of spontaneous regression were selected and are discussed.
Cluster of Differentiation 10
The cluster of differentiation (CD) 10 is a cell surface ectoenzyme marker used for the diagnosis and differentiation of cancers [42]. There is a correlation between tumor cell proliferation and the number of CD10+ dermal tumor-associated macrophages (TAM) and epidermal Langerhans cells (LC) in the development of epidermal tumors. Indeed, these components are important cellular elements of the tumor microenvironment. It has been reported that the number of LCs in SCC and malignant melanoma is lower than in normal skin.

It is assumed that the induction of CD10+ stromal cells may be associated with the infiltration of TAMs and loss of LCs.

Therefore, CD10+ stromal cell induction, increased TAMs, and decreased LCs are related to each other, and these 3 items correlate with the rate of tumor proliferation [27]. Two similar studies were evaluated to determine whether CD10 can serve as a differentiating biomarker for SCC and lesions capable of spontaneous regression [27,34]. Together, these two studies had 60 samples, of which only four samples of lesions capable of spontaneous regression were positive for CD10 IHC staining compared with all positive SCC samples (33 samples). The pooled OR was calculated at 0.006 (95% CI: 0.001-0.057) for lesions capable of spontaneous regression compared to SCC, meaning that the tendency of SCC lesions to have positive IHC staining for CD10 was 166.7 times higher in comparison with lesions capable of spontaneous regression. There was also no statistically significant heterogeneity between studies (I² = 0%).
Cyclooxygenase 2
Cyclooxygenase 2 (COX2) is a key enzyme that produces prostaglandins involved in the inflammatory process [43].
A total of two studies were reviewed for this biomarker [23,29]. The pooled OR of these studies (lesions capable of spontaneous regression compared with SCC) was calculated at 0.089 (95% CI: 0.029-0.269). These two studies did not have heterogeneity, since I² = 17.1%. For this reason, the OR of COX-2 IHC staining for SCC compared with lesions capable of spontaneous regression was 11.2.
Elastic Fiber
Elastic fibers are extracellular components that exist in many tissues, such as the skin, and are important for the skin's physiological processes [44]. Data extracted from two studies [30,45] were assessed for this biomarker, revealing a calculated pooled OR of 6.69 (95% CI: 2.928-15.281) for lesions capable of spontaneous regression compared with SCC. Of 118 total samples, 47 were positive for lesions capable of spontaneous regression, while 15 were positive for SCC. The calculated I² was 0%, meaning that the studies were consistent with one another.
Insulin-like Growth Factor 2 mRNA-binding Protein
The insulin-like growth factor 2 (IGF-2) mRNA-binding protein (IMP) is a protein family that binds to IGF-2 mRNA and regulates its transcription [46]. Previous studies evaluating IMP-3 as a biomarker for differentiating between SCC and lesions capable of spontaneous regression were conducted by Soddu et al. and by Kanzaki et al. [31,32]. In the former study, 9/34 were positive for lesions capable of spontaneous regression compared to 19/33 for SCC. In the latter study, all eight samples of lesions capable of spontaneous regression were negative, while 10/15 of SCC samples were positive for IHC staining of IMP-3. The pooled OR of these two studies was 0.145 (95% CI: 0.021-1.001), which corresponds to a 6.9 times greater tendency of SCC lesions to stain positive for IMP-3 compared with lesions capable of spontaneous regression.
Angiotensin II Receptor Type 1
Angiotensin II receptor type 1 (AT1R) is one of the two types of angiotensin II receptors, which, after activation, is responsible for the homeostasis of blood pressure and body electrolytes, alongside other effects. Among its other effects, AT1R has also been implicated in tumour biology, as discussed below.
Discussion
Mentioned above is that the gold standard for the diagnosis of both SCC and lesions capable of spontaneous regression is biopsy sampling and histopathological evaluation, and that all of the evaluated lesion samples in the studies that we reviewed for IHC staining of various biomarkers had been previously diagnosed as SCC or as lesions capable of spontaneous regression. Therefore, it is worth mentioning that IHC staining of these diverse biomarkers is beneficial where a net diagnosis between these two lesions, with the capability of spontaneous regression, is complex and challenging [9,50]. More importantly, it should be considered that this diversity in biomarkers indicates that a diagnosis based on one biomarker alone is not reliable. Hence, a combination of these biomarkers should be utilized to differentiate between SCC and lesions capable of spontaneous regression. For this reason, we investigated a variety of biomarkers assessed in different studies to identify the most significant and useful biomarkers for this differentiation. Of all 43 biomarkers, after statistical analyses of the extracted data, six biomarkers had significantly more probability of distinguishing between the two entities. Five out of six biomarkers were assessed in two or more studies, including CD10, COX-2, elastic fibers, IMP-3, and P53. The other biomarker, AT1R, was evaluated in only one study. Other biomarkers that were investigated repeatedly were Ki-67, B2M, PCNA, P16, P21, BAK, Bcl-2, Caspase-3, and CD-1a. Repeated biomarkers are highly important since they give us information across the studies, and their analysis results are more reliable. For a complete evaluation, the consistency of these studies was assessed by statistical analysis.
The reviewed studies for CD10 had a similar trend of OR for comparison of SCC and lesions capable of spontaneous regression.Therefore, the results of the statistical significance of this biomarker for differentiation are reliable.
CD10 is expressed in the epithelial cells of various tissues and has been widely used in diagnosing different skin cancers, including SCC, BCC, and melanoma [51]. The overexpression of CD10 in skin cancer cells promotes rapid tumor progression and proliferation, leading to a higher grade and a larger tumor size [52][53][54].
Following the analysis for CD10, the pooled OR was calculated at 0.006 (95% CI: 0.001-0.057) for lesions capable of spontaneous regression compared to SCC. For this reason, CD10 can distinguish between SCC and lesions capable of spontaneous regression, since SCC lesions are 166.7 times more likely to be positive for the IHC of this biomarker.
Therefore, SCC lesions are biologically more progressive, with a higher proliferation rate, than lesions capable of spontaneous regression [55]. All reviewed studies had the same OR calculation, equal to 0.006. Although the number of studies was limited, the combined total sample size of these studies was acceptable. Furthermore, there was no significant heterogeneity between studies, as I² = 0%. Taken together, CD10 can be used as an applicable biomarker to differentiate between these lesions.
Another biomarker is COX-2, which is expressed in skin lesions and, through the production of prostaglandins [56], is involved in the initiation, invasion, and angiogenesis of tumors and also participates in the suppression of the immune system [57,58]. UV exposure can induce COX-2 expression, leading to skin malignancies [59,60]. COX-2 has been demonstrated to be effective in the differentiation between benign and malignant skin lesions [59,61].
Two studies were reviewed to assess the capability of COX-2 to differentiate between SCC and lesions capable of spontaneous regression. These studies were consistent, since the calculated I² = 17.1% was not significant. Moreover, the trend of OR of these studies was similar. In this comparison, SCC lesions again showed more invasive behavior compared to lesions capable of spontaneous regression. Taken together, with a considerable sample size, we can indicate COX-2 as an acceptable biomarker for the differentiation between SCC and lesions capable of spontaneous regression.

Although the activation of AT1R has homeostatic effects, studies suggest its role and expression in different types of malignancies [79]. Also, a number of studies have investigated its expression on cancerous cells, revealing AT1R's role in tumor genesis [80]. Consistent with this implication, several treatment approaches were utilized for the management of a number of malignancies, and the results were promising [81,82]. Moreover, IHC staining of this receptor was evaluated in previous studies for skin lesions, which suggests a high probability of its differentiation capability between skin lesions [83]. AT1R was investigated in one study, by Takeda et al. [13]. SCC significantly expressed AT1R compared with lesions capable of spontaneous regression, with an OR (lesions capable of spontaneous regression compared with SCC) of 0.026 (95% CI: 0.006-0.107).
Ki-67, aside from the biomarkers discussed above, had a significant OR in our analyses as well. This biomarker was investigated in six studies [14,25,34,37,38,84], five of which were excluded during the analyses due to infinite calculated ORs, which were not interpretable. The OR of the remaining study was 0.143, indicating a lower probability of IHC positivity for Ki-67 in lesions capable of spontaneous regression compared with SCC, which was significant. However, when we assessed the excluded studies, all SCC and lesions capable of spontaneous regression samples for each study were entirely positive. This implies that Ki-67 may not be an acceptable biomarker for the differentiation between SCC and lesions capable of spontaneous regression.
Some of the evaluated biomarkers had a high calculated OR compared with other biomarkers, although they were not statistically significant. These biomarkers were B2M, BAK, Dsg1 & 2, HSP60, and MIB-1, and their calculated ORs were 18.1, 4.9, 10.7, 8.81, and 0.074, respectively. Therefore, these biomarkers could be potentially evaluated in future studies for this purpose.
Elastic fibers are components of dermal connective tissue that maintain the elasticity of the skin. They have considerable distinguishing capabilities between malignancies. Loss of these fibers more often occurs in malignant tumors than in benign lesions [62][63][64]. The calculated pooled OR, comparing lesions capable of spontaneous regression with SCC, was 6.69 (95% CI: 2.928-15.281) with I² = 0%. Considering the total sample size of these studies, which was 118, and the absence of significant heterogeneity, the results are reliable. The tendency of OR for these two studies was consistent (6.167 and 13.333), confirming the reliability of the results.
The other biomarker is IMP-3. The members of the IMP family are IMP-1, IMP-2, and IMP-3, which bind to the transcript of IGF-2 mRNA [65]. Particularly, the overexpression of IMP-3 has been detected in several malignant cancers [65][66][67][68][69][70]. Also, the role of this protein has been investigated as a biomarker to differentiate between benign and malignant lesions, including melanoma [71] and SCC [72]. Furthermore, IMP-3 increases the migration of the malignant cells and leads to the invasiveness of the tumor [73]. For this reason, as it is likely that IMP-3 can differentiate between SCC and lesions capable of spontaneous regression, it was evaluated in this manner.
In our study, two studies were reviewed for the assessment of IMP-3 IHC staining. These studies were not significantly heterogeneous; however, the calculated I² was 44.5%, which is high compared to other biomarkers. Therefore, the results should be considered more cautiously. The pooled OR was 0.145 (95% CI: 0.021-1.001), with individual ORs equal to 0.265 and 0.031. This means that SCC lesions' biological behavior is more invasive than that of lesions capable of spontaneous regression. Although this pooled OR was considered statistically significant, IMP-3 was 3.9 times more likely to be positive for SCC in the Soddu et al. study [32], while in the Kanzaki et al. study [31], it was 32.2 times more likely to be positive for SCC. This suggests the diversity between the results of these two studies. In other words, IMP-3 should be taken into consideration for the differentiation between SCC and lesions capable of spontaneous regression, but more cautiously and in addition to other biomarkers.
P53 protein is one of the most studied tumor suppressant proteins recognized to date. It is capable of regressing and inhibiting tumors, and the development of many tumors may occur with the mutation of the P53 gene [47]. As discussed in other studies, skin cells with previously mutated P53 genes can develop skin lesions after exposure to the sun [74]. It would seem that P53 is a good biomarker for differentiating between malignant tumors and other lesions, especially in the skin, as other studies have investigated in the past and recently [75,76].
The highest number of studies in our review concerned this biomarker. Among these 12 studies [14, 18, 23, 25, 28, 35-38, 48, 77, 78], two were excluded because their ORs were calculated as infinite. Of the remaining ten studies, the trend of OR was towards lesions capable of spontaneous regression in some studies, while in others this trend was towards SCC. The calculated OR of that single study was 2.67; the weight of this study was 11.93% compared with the other studies.
A limitation of our study may be the high number of investigated biomarkers relative to the small number of studies for each biomarker. There was also heterogeneity in the reference standard used by the studies we reviewed; because the reference standards differed from one another, and since the number of studies was not sufficient, we could not group studies with a similar reference standard into a subgroup or exclude some of them. Therefore, heterogeneity may be seen in the results explained above; for future studies, it is suggested to consider studies that use a similar reference standard (such as hematoxylin). The same applies to the range of the markers' cutoff for positivity or negativity, although the cutoffs of the included studies were almost the same and showed no statistically significant differences.
Conclusion
In summary, the results of our study show that a number of biomarkers, including CD10, COX-2, and elastic fibers, have a high capability of differentiating between SCC lesions and lesions with the capability of spontaneous regression, such as KA, in cases with a difficult diagnosis. IMP-3, P53, and AT1R could also be utilized in this manner; however, more investigation is required. The presence of these biomarkers was investigated through IHC staining and can be used in the clinical approach to SCC and KA lesions.
Figure 1. PRISMA flow diagram illustrating the selection of articles.

Table 3. Results of 24 biomarkers with single study.
Perturbation Method for Linear and Non-Linear Fractional Order Systems and Integral Representation for Evaluation of Integrals
abstract: In the present work, the authors used the Laplace transform perturbation method to solve certain linear and non-linear systems of fractional differential and difference equations with constant coefficients with the fractional derivatives in the Caputo sense. We also considered the problems of string vibrations in different cases with fractional damping. Another purpose of this article is to evaluate certain integrals. Illustrative examples are also provided.
Introduction and Definitions
In the present study, the fractional derivatives are understood in the Caputo sense. The reason for adopting the Caputo definition is as follows: there are several approaches to the generalization of the notion of differentiation to fractional orders, e.g. Riemann-Liouville, Grünwald-Letnikov, Caputo, and the generalized functions approach [12]. The Riemann-Liouville fractional derivative is mostly used by mathematicians, but this approach is not suitable for real-world physical problems since it requires the definition of fractional order initial conditions, which have no physically meaningful explanation yet. Caputo introduced an alternative definition, which has the advantage of defining integer order initial conditions for fractional order differential equations [4].

2000 Mathematics Subject Classification: 26A33, 34A08, 34K37, 35R11, 44A10.
By $I^{\alpha}_{t}f(t)$ we denote the fractional integral of $f$ of order $\alpha > 0$ on $[0,t]$; this integral is sometimes called the left-sided fractional integral. For the concept of fractional derivative we adopt Caputo's definition, which is a modification of the Riemann-Liouville definition and has the advantage of dealing properly with initial value problems in which the initial conditions are given in terms of the field variables and their integer-order derivatives, as is the case in most physical processes. The Caputo fractional derivative is better suited than the usual Riemann-Liouville derivative for applications in several engineering problems, due to the fact that it has better relations with the Laplace transform and because the differentiation appears inside, rather than outside, the integral, which alleviates the effects of noise in numerical differentiation.
For an arbitrary real number $\alpha > 0$ ($n-1 \le \alpha < n$, $n \in \mathbb{N}$) the Caputo fractional derivative is defined accordingly. The direct Laplace transform of a function $f(t)$ defined for $0 \le t < \infty$ is an ordinary calculus integration problem, where $F(s)$ is analytic in the region $\mathrm{Re}(s) > c$ and $f(t) = 0$ for $t < 0$; the corresponding inversion result is called the complex inversion formula, also known as Bromwich's integral formula. The one-dimensional convolution theorem for $f(x)$ and $g(x)$, and the two-parameter function of the Mittag-Leffler type defined by its series expansion, are also used below. Proof: See [11]. ✷ Theorem 1.2. Let $f(t)$ be continuous, positive, and increasing for $0 < t < +\infty$.
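For the reader's orientation, the standard forms of the quantities named in the two preceding paragraphs are recalled below (classical textbook definitions; the notation may differ slightly from the authors' original displayed formulas):
\[
I^{\alpha}_{t}f(t)=\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-\tau)^{\alpha-1}f(\tau)\,d\tau,\qquad
{}^{C}_{0}D^{\alpha}_{t}f(t)=\frac{1}{\Gamma(n-\alpha)}\int_{0}^{t}\frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\,d\tau,
\]
\[
F(s)=\mathcal{L}\{f(t)\}=\int_{0}^{\infty}e^{-st}f(t)\,dt,\qquad
f(t)=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}e^{st}F(s)\,ds,
\]
\[
(f*g)(x)=\int_{0}^{x}f(x-\xi)\,g(\xi)\,d\xi,\qquad
E_{\alpha,\beta}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+\beta)}.
\]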
Then the following complex integral relationships hold true. Proof: (a) Let us consider the corresponding integral; changing the order of integration, which is permissible, yields the stated relation, where h is Heaviside's unit step function. (b), (c) The proofs are straightforward; note that (b) is a particular example of (a).
(d) Let us consider the corresponding integral; changing the order of integration, which is permissible, and then using the table of Laplace transforms and elementary properties of Heaviside's unit step function, we get the stated result.
Perturbation-Laplace Transform Method for Solving Fractional Order System
Most scientific problems and physical phenomena occur nonlinearly. Except in a limited number of cases, finding exact analytical solutions of such problems is rather difficult. Therefore, there have been attempts to develop new techniques for obtaining analytical solutions which reasonably approximate the exact solutions. In recent years, several such techniques have drawn special attention, such as Hirota's bilinear method, the homogeneous balance method, the inverse scattering method, Adomian's decomposition method (ADM), the variational iteration method (VIM), and the homotopy analysis method (HAM), as well as the homotopy perturbation method (HPM). The latter method has been used by many authors to handle a wide variety of scientific and engineering applications and to solve various functional fractional equations. In this method, the solution is considered as the sum of an infinite series, which converges rapidly to accurate solutions. Recently, considerable research work has been conducted in applying this method to fractional linear and nonlinear equations. The concept of He's homotopy perturbation method is introduced briefly for applying this method to problem solving. The results of HPM as an analytical solution are then compared with those derived from Adomian's decomposition method (ADM) and the variational iteration method (VIM). The results reveal that the HPM is very effective and convenient in predicting the solution of such problems, and it is expected that HPM can find wide application in new engineering problems [5]. Convergence of the homotopy perturbation method can be found in [20]. In the following, we solve the fractional Chen system with the perturbation-Laplace transform method. Li and Peng found that chaos does exist in Chen's system with a fractional order. Deng, Li and Lu [24] studied the stability of n-dimensional linear fractional differential equations with time delays. By using the Laplace transform, they introduced a characteristic equation for the above system with multiple time delays. They discovered that if all roots of the characteristic equation have negative real parts, then the equilibrium of the above linear system with fractional order is Lyapunov globally asymptotically stable if the equilibrium exists, which is almost the same as in classical differential equations.
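As a minimal sketch of the homotopy perturbation idea referred to above, for a generic equation $A(u)=f(r)$ with linear part $L$ and initial guess $u_{0}$ (generic form only; this is not the authors' specific system):
\[
H(v,p)=(1-p)\bigl[L(v)-L(u_{0})\bigr]+p\bigl[A(v)-f(r)\bigr]=0,\qquad p\in[0,1],
\]
\[
v=v_{0}+p\,v_{1}+p^{2}v_{2}+\cdots,\qquad u=\lim_{p\to 1}v=v_{0}+v_{1}+v_{2}+\cdots.
\]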
Problem 2.1. We consider the generalized fractional Chen system. Solution. According to the perturbation method, we replace the previous system by its perturbed counterpart. By substituting the above representations of $x_{\varepsilon}$, $y_{\varepsilon}$ and $z_{\varepsilon}$ into the system, and setting equal all the coefficients of the same powers of $\varepsilon$ on both sides, we obtain a sequence of subsystems. The Laplace transform of the first system gives the transformed unknowns; from these we obtain $x_{0}, y_{0}, z_{0}$, then $x_{1}, y_{1}, z_{1}$, and lastly, by applying the previous method successively, all $x_{2}, y_{2}, z_{2}, \ldots$. A special case: applying the Laplace transform inversion, we get the corresponding time-domain solution. Fractional order systems, or systems containing fractional derivatives and integrals, have been studied by many authors in the engineering area. Additionally, very readable discussions, devoted specifically to the subject, are presented by Oldham and Spanier (1974), Miller and Ross (1993) and Podlubny (1999). It should be noted that there is a growing number of physical systems whose behavior can be compactly described using fractional system theory.
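For orientation, the fractional-order Chen system in the form most commonly studied in the literature reads as follows (a standard form assumed here for illustration; the authors' generalized version and its parameters may differ), together with the perturbation ansatz used above:
\[
{}^{C}_{0}D^{\alpha}_{t}x=a\,(y-x),\qquad
{}^{C}_{0}D^{\alpha}_{t}y=(c-a)\,x-xz+c\,y,\qquad
{}^{C}_{0}D^{\alpha}_{t}z=xy-b\,z,\qquad 0<\alpha\le 1,
\]
\[
x_{\varepsilon}=x_{0}+\varepsilon x_{1}+\varepsilon^{2}x_{2}+\cdots,\qquad
y_{\varepsilon}=y_{0}+\varepsilon y_{1}+\varepsilon^{2}y_{2}+\cdots,\qquad
z_{\varepsilon}=z_{0}+\varepsilon z_{1}+\varepsilon^{2}z_{2}+\cdots.
\]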
Computation of Certain Integrals and Inverse Laplace Transform of the Object Functions by Means of Integral Representation
An interesting application of Laplace transforms involves the evaluation of integrals. In this section, we have implemented the integral representation method to evaluate certain integrals and the inverse Laplace transform of the object functions. Lemma 3.1. By using the integral representation, we may show the stated relation, where $J_{\nu}$ is the Bessel function.
Proof: Starting from a well-known representation, the left-hand side of the above integral relation can be written as follows.
Note that C1 and C2 are contour integrals. On the other hand, by setting $x = \tfrac{n}{2} + \tfrac{\nu+1}{2}$ in the above relation, we get the stated result, where $I_{0}$ is the modified Bessel function of order zero and h is Heaviside's unit step function.
Proof: It is shown that dz.
According to definition of Laplace transform, we must have ε > 0 or ✷
Evaluation of Integrals
In applied mathematics, the Kelvin functions $\mathrm{ber}_{\nu}(x)$ and $\mathrm{bei}_{\nu}(x)$ are the real and imaginary parts, respectively, of $J_{\nu}(xe^{3\pi i/4})$, where $x$ is real and $J_{\nu}(z)$ is the $\nu$-th order Bessel function of the first kind. Similarly, the functions $\mathrm{ker}_{\nu}(x)$ and $\mathrm{kei}_{\nu}(x)$ are the real and imaginary parts, respectively, of $K_{\nu}(xe^{\pi i/4})$, where $K_{\nu}(z)$ is the $\nu$-th order modified Bessel function of the second kind. The Kelvin functions were investigated because they are involved in the solutions of various engineering problems occurring in the theory of electrical currents, elasticity, and fluid mechanics. One of the main applications of the Laplace transform is evaluating integrals, as discussed in the following.
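In display form, the definitions just stated read (as given in the text; some references include an additional phase factor in the ker/kei relation):
\[
\operatorname{ber}_{\nu}(x)=\Re\,J_{\nu}\!\bigl(xe^{3\pi i/4}\bigr),\quad
\operatorname{bei}_{\nu}(x)=\Im\,J_{\nu}\!\bigl(xe^{3\pi i/4}\bigr),\quad
\operatorname{ker}_{\nu}(x)=\Re\,K_{\nu}\!\bigl(xe^{\pi i/4}\bigr),\quad
\operatorname{kei}_{\nu}(x)=\Im\,K_{\nu}\!\bigl(xe^{\pi i/4}\bigr).
\]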
Proof: Let us define the following function Laplace transform of I(t) yields Changing the order of integration, which is permissible, leads to or, After simplifying, we get At this point, by using table of integrals or residue theorem, we have the following Taking inverse Laplace transform of the above relationship, we arrive at Letting t = 1, we get = ber(2 ξ).
✷
Problem 4.2. With the aid of the integral representation, we can find the inverse Laplace transform of the following object functions. Solution. Using the Sonine integral relation, we have the stated identity. For the proof, we may use the following integral representation for the modified Bessel function. With the change of variable $\frac{s^{2}+a^{2}}{s^{2}}\,\tau = t$, we obtain the intermediate expression, and by replacing $u = \sqrt{x^{2}+z^{2}}$, we obtain the result. Also, $H$ denotes Struve's function [22].
Fractional Oscillations and Fractional Delay Systems
In this section, the authors considered certain time-fractional differential equations which are a generalization of the problems of harmonic oscillators studied earlier by many researchers in the literature. In this work, only the Laplace transformation is considered, as it is easily understood and popular among engineers and scientists. The basic goal of this work has been to employ the Laplace transform method for studying the above-mentioned problem. The goal has been achieved by formally deriving exact analytical solutions. The transform method introduces a significant improvement in this field over existing techniques. The study of fractional calculus equations, i.e., fractional-order differential equations (FODE) and fractional-order integral equations (FOIE), which can describe more accurately the behavior of real physical phenomena and systems, has become a hot topic in the last decades. The fractional derivative provides a perfect tool when it is used to describe the memory and hereditary properties of various materials and processes; this is the main reason that fractional differential equations are being used in modeling mechanical and electrical properties of real materials, rheological properties of rocks, and many other fields. As an important application field of fractional calculus, the topic of fractional-order control and systems has attracted many researchers.
The Laplace transform inversion yields the solution y(t). In the special case when $\alpha = \beta$, $v_{0} = 1$ and $y_{0} = 0$, we obtain a closed-form expression, and we plot y(t) for $v_{0} = k = \lambda = 1$, $y_{0} = 0$ and different values of $\alpha$ and $\beta$.

Problem 5.2. The vibrations of the mechanical system of two masses attached to three springs with fixed ends are governed by the following fractional differential equations with different orders,
\[
{}^{C}_{0}D^{2\alpha}_{t}x + k_{1}x = -k(x-y), \qquad {}^{C}_{0}D^{2\beta}_{t}y + k_{2}y = k(x-y), \qquad 0.5 < \alpha, \beta \le 1,
\]
with $y'(0) = -x'(0) = -\sqrt{3k}$, where $k$, $k_{1}$ and $k_{2}$ are the spring moduli of each of the three springs and x(t), y(t) are the displacements of the masses from their position of static equilibrium. The masses of the springs and the damping are neglected.
Let us assume that $\mathcal{L}\{x(t)\} = X(s)$ and $\mathcal{L}\{y(t)\} = Y(s)$. Using the Laplace transform, we may obtain the transformed system. 5.3. (B) Let us consider the following fractional difference equation with the initial condition. Solution. By using the Laplace transform, we have the transformed equation; hence, without loss of generality, assume that $\lambda_{1} < \lambda_{2}$. Then, for the special case $\lambda_{1} = k_{1} = k_{2} = 1$ and $\lambda_{2} = 2$, we plot y(t). We have shown y(t) and z(t) when T = 1.
Conclusion
In recent years, integral transforms have become essential working tools of every engineer and applied scientist. The Laplace transform, which undoubtedly is the most familiar example, is basic to the solution of initial value problems. In this article, the authors used the hybrid perturbation-Laplace transform method to solve certain linear and non-linear systems of fractional differential and difference equations with constant coefficients. We also considered the problems of string vibrations in different cases with fractional damping. Constructive examples are also provided.
Problem 5.2.
The vibrations of the mechanical system of two masses attached to three springs with fixed ends are governed by the following fractional differential equations with different orders:
\[
  {}^{C}_{\,0}D^{2\alpha}_{t}x + k_{1}x = -k(x-y), \qquad
  {}^{C}_{\,0}D^{2\beta}_{t}y + k_{2}y = k(x-y), \qquad 0.5 < \alpha, \beta \le 1,
\]
with $y'(0) = -x'(0) = -\sqrt{3k}$.
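As a hedged illustration of the solution route (assuming zero initial displacements x(0) = y(0) = 0, which are not stated in the surviving text; for 0.5 < α, β ≤ 1 the Caputo orders 2α, 2β lie in (1, 2], so two initial conditions enter per equation), applying the Caputo transform rule to the system above gives an algebraic system for X(s) and Y(s):

% Sketch under the stated assumption x(0) = y(0) = 0; initial velocities as given above.
\[
  \bigl(s^{2\alpha} + k_{1} + k\bigr)X(s) - k\,Y(s) = s^{2\alpha-2}\,x'(0),
\]
\[
  -k\,X(s) + \bigl(s^{2\beta} + k_{2} + k\bigr)Y(s) = s^{2\beta-2}\,y'(0),
\]
% which can be solved by Cramer's rule and inverted term by term into
% Mittag-Leffler-type functions of t^{2\alpha} and t^{2\beta}.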
|
2018-12-11T05:32:14.586Z
|
2014-02-21T00:00:00.000
|
{
"year": 2014,
"sha1": "49ad587abc1249f60d8bfd1d974aeee020cee0a3",
"oa_license": "CCBY",
"oa_url": "https://periodicos.uem.br/ojs/index.php/BSocParanMat/article/download/21411/751375140098",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d1725e66c210a20785f5ab9939b1bb9b98c33b2f",
"s2fieldsofstudy": [
"Engineering",
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
252616054
|
pes2o/s2orc
|
v3-fos-license
|
Y Chromosome Haplotypes Enlighten Origin, Influence, and Breeding History of North African Barb Horses
Simple Summary Bred over centuries in the Maghreb region, on a corridor between the Arab and the Western world, the North African Barb horse has been touched by many influences in the course of history. The present study investigated the paternally inherited Y chromosome in today´s Barbs and Arab-Barbs collected from North Africa and Europe, with the aim to link genetic patterns and narrative history. A broad Y chromosomal spectrum was observed, as well as regional disparities among populations. Y chromosomal patterns illustrated a tight connection of Barb horses with Arabians and several other breeds, including Thoroughbreds. Besides, results depict footprints of past migrations between North Africa and the Iberian Peninsula. Abstract In horses, demographic patterns are complex due to historical migrations and eventful breeding histories. Particularly puzzling is the ancestry of the North African horse, a founding horse breed, shaped by numerous influences throughout history. A genetic marker particularly suitable to investigate the paternal demographic history of populations is the non-recombining male-specific region of the Y chromosome (MSY). Using a recently established horse MSY haplotype (HT) topology and KASP™ genotyping, we illustrate MSY HT spectra of 119 Barb and Arab-Barb males, collected from the Maghreb region and European subpopulations. All detected HTs belonged to the Crown haplogroup, and the broad MSY spectrum reflects the wide variety of influential stallions throughout the breed’s history. Distinct HTs and regional disparities were characterized and a remarkable number of early introduced lineages were observed. The data indicate recent refinement with Thoroughbred and Arabian patrilines, while 57% of the dataset supports historical migrations between North Africa and the Iberian Peninsula. In the Barb horse, we detected the HT linked to Godolphin Arabian, one of the Thoroughbred founders. Hence, we shed new light on the question of the ancestry of one Thoroughbred patriline. We show the strength of the horse Y chromosome as a genealogical tool, enlighten recent paternal history of North African horses, and set the foundation for future studies on the breed and the formation of conservation breeding programs.
Introduction
The history and origin of the North African horse have been long debated [1]. Still, there is no confirmation of horses inhabiting Africa, or evidence of domesticated horses roaming around the continent in early prehistoric time, but discussions about an "Equus Algericus" found near Tiaret (Algeria) still remain [1,2]. However, historical and archeological findings indicate that the introduction of the domesticated horse to North Africa was likely in the late second millennium BCE, via several routes following human migrations and conquests (e.g., through Strait of Gibraltar or Egypt) [3][4][5].
The origin stories of the North African Barb horse lead off the Barbary coast in the Maghreb region (today's Algeria, Tunisia, Morocco), hence the name "Barb". Foremost, Numidian horses and their crosses are especially discussed as founders of the breed [6,7]. Complex patterns of human and horse migrations in the North African region peaked around the 7th century, concurrent with the Muslim conquests [8,9]. Later, during the occupation of the Iberian Peninsula by the Moors, from the early 8th to the late 15th century, migrations between North Africa and Iberian Peninsula were frequently ongoing [3,8,10,11] and the influence of the North African horses onto Iberian stocks was substantial [12,13].
Numerous myths exist on the multilayer history of the Barb horse, for example, phenotypic traits relate the discussion about the progenitors to Mongolian horses, as well as the rare light-colored (cream-gene) and piebald (sabino) horses, corresponding to the Turkoman and the Akhal Teke breed [1,14]. Barbs had a prominent role as war horses and for breeding in Europe [1,12]. Notably, Barb horses were used in the Punic wars (264-146 BCE) that were fought between Romans and Carthage, and later exported to Europe by Carthaginian conquests [14]. Likewise, more heavy horses were introduced to the Maghreb region first by Romans (from 146 BCE) and later in the 17th century by Louis XIV [7]. However, after the 18th century, breeding declined dramatically because Barb horses were no longer used for the military cavalry, due to the shift of military tactics that began in the 19th century [1,15]. More recently, from the end of the 19th century onwards, cross-breeding of North African and coldblooded horses from France resulted in the "Breton-Barb". In addition, crosses of the Barb horse with Thoroughbreds, Anglo-Arabs, and French Trotters in North Africa were reported [1,12,15]. Above all, systematic cross-breeding with Arabian horses founded the "Arab-Barb" breed in the Maghreb region. In the 20th century during both world wars, French colonial cavalry and later also under Rommel s regime, captured Barb horses and this contributed to their diffusion throughout Europe [16]. Moreover, from 1965 onwards, the African horse sickness significantly reduced North African Barb horse populations and prevented horse export to Europe for over ten years from Algeria [17], and from Morocco during 1987-1991 [18].
In 1987, the "Organisation Mondiale du Cheval Barbe (OMCB)" was founded to preserve the purebred Barb horse and its cross populations ("derivates"), especially the Arab-Barb horse [19]. The OMCB is nowadays recognized as a competent authority for setting up the breeding programs. Breed registries were only recently established for Barbs and Arab-Barbs in the Maghreb region (1886 in Algeria, 1896 Tunisia, and 1914 Morocco) [1,14]. Since then, the studbooks remained open so that phenotypically classified horses can be entered retrospectively, even if no known ancestry can be proven (defined as "Inscription à Titre Initial", "ITI") [20]. Additionally, European registries are established (in France in 1989, Germany 1992, Switzerland 1993, and in Belgium from 1992-2017) and their studbooks are closed. Barb horses and the Arab-Barb horses are separated in different studbooks or studbook sections according to the OMCB stud-book regulations. The stud-book section for Arab-Barbs is still open for Arab/Barb crosses as well as crosses of Arab-Barbs with either Barbs or Arabs. All over, studying ancestry and breeding histories in North African horses via pedigree documentation is limited.
The census population size in the Maghreb countries is about 5500 for Barbs and 180,000 for Arab-Barbs [21,22]. Out of those, 1800 Barbs and 26,000 Arab-Barbs are registered in studbooks. In contrast, the European subpopulation constitutes about 2800 Barbs and 4000 Arab-Barbs, out of which 520 and 440 horses (Barbs and Arab-Barbs, respec-tively) are registered for breeding in the OMCB recognized studbooks. They produce about 160 foals per year [21,22]. The breeding programs for Barbs and Arab-Barbs are mainly based on characteristic phenotypic traits, robustness, and behavior rather than uniform breeding goals. Today, these horses are used for "Fantasia" (also known as "Tbourida" in Morocco and "Mchef " in Tunisia) a traditional equestrian war game dating back to the 16th century, as well as for agricultural work, carriage, riding, dressage, and equestrian art, as well as racing (only Arab-Barbs) in North Africa [1,12,19]. In Europe, they are used as leisure horses, for endurance-riding, historical dressage, jumping, and working equitation [1,16].
According to the diverse use and breeding areas, the North African Barb and Arab-Barb horse populations are characterized by broad phenotypic variation [1,22,23]. Within the Arab-Barbs, this strongly depends on the percentage of Arabian ancestry [24,25]. Investigation of blood group markers, protein, and DNA polymorphisms in North African subpopulations showed a pronounced genetic variation within the Barbs and the Arab-Barbs. Private alleles and high levels of heterozygosity were noted, however, no significant genetic differentiation was observed between Barb and Arab-Barb populations [26][27][28][29]. Likewise, apparent phenotypic differences distinguish the purebred Barb horse from the Arabian horse [1,23,25,30,31]. Microsatellite analysis showed similarities between the Arab-Barb and Arabian horses and a clear genetic separation of both breeds from Thoroughbreds [27][28][29]. The maternally inherited mitochondrial DNA showed close genetic relationships between Iberian breeds and Barb horses [11,32]. Nevertheless, the relationship between the North African Barb and the Arab horse has been continuously debated, till today [33].
A prominent genetic marker for inferring the ancestry of populations is the non-recombining, male-specific region of the Y chromosome (MSY). The MSY is inherited exclusively from the father to his sons, and thus MSY haplotypes (HTs) mirror the paternal lineages in a population. MSY analysis is best established in humans, where it is widely used in population genetics, genealogical research, and forensics [34][35][36]. In domestic horses, the MSY was long excluded from population genetic studies due to the lack of informative sequence polymorphism (reviewed in [37]). Nevertheless, a stable MSY HT topology based on slowly evolving biallelic markers was constructed by mapping next generation sequencing (NGS) data to a 6.5 Mb horse MSY draft reference [38]. The MSY HTs of domestic horses are clearly distinct from those in the extant Przewalski's horses. The most pronounced MSY signature among domestic horses is the ~2000-year-old "Crown" haplogroup (HG), encompassing various breeds from Central and South Europe, East Asia, and North and South America [38,39]. It was proposed that the dominance of the Crown HG is a hallmark of the recent breeding influence of stallions of Oriental origin [38,40]. The Crown topology supports the hypothesis [41,42] that only a limited number of stallions contributed to today's horse population. Only some Asian horses [43,44] and Northern European breeds (e.g., [45]) seemed to be unaffected by the recent Oriental introgression, and thus kept their autochthonous HTs outside the Crown ("Non Crown"). Within the Crown, three HGs were defined (H, A, and T), and the HT signatures of three English Thoroughbred founders [38], as well as Arabian patrilines [39], were recently successfully delineated.
In horses, MSY analysis can unmask patrilines that contributed to a breed; thus, impart motifs of their male demography, and shed light on complex breeding histories. In this study, we investigated MSY HTs in North African Barb horses with the aim to link Ychromosomal patterns to narratively known historical events. We hypothesize that the long-lasting input of foreign blood and complex migrations in the Maghreb region will be mirrored in their MSY HT spectrum. In addition, due to indigenous origin, regional and less intensive selection strategies [1], we might detect the preservation of autochthonous HTs in some North African horses' patrilines.
Sample Set
Biological samples were collected from 119 males, of Barbs (n = 84) and Arab-Barbs (n = 35) in Morocco, Algeria, Tunisia, and the European subpopulations. To ensure that many patrilines were represented in the dataset, pedigree information (available for 86 horses), provided by breeding authorities and associations, was considered in the sampling strategy as previously described [39]. Hence, oversampling of relatives was averted from the dataset by keeping six males per foundation sire at maximum. Additionally, we included 33 randomly sampled horses without pedigree information (10 European and 23 North African samples) to complement and capture population variation beyond documented patrilines. The dataset including individual male tail line information for ancestors born prior to 1990 is given in a string format in Table S1.
MSY Genotyping
We inferred MSY haplotype spectrum of 119 samples according to the previously reported horse Y phylogeny [38,39]. For genotyping, we created a downscaled HT structure based on 65 selected HT-determining variants as markers (61 SNVs, 3 short Indels, and 1 microsatellite, see Supplementary Table S2). The resulting tree served as the backbone and samples were placed onto branches of the tree via MSY marker screening.
For variant screening, genomic DNA was isolated from hair roots or blood with the nexttec® DNA Isolation Kit. The DNA was then diluted with TE buffer to the uniform concentration of 5 ng/µL. Genotyping of variants was performed using competitive allele-specific PCR SNV genotyping assays (KASP™, lgcgroup.com (accessed on 2 July 2021)), following the standard protocol on a CFX96 Touch™ Real-Time PCR Detection System. Samples with known allelic state were included as positive controls, while DNA from females and non-template controls were used as negative controls. Information on variants (coordinates on LipY764, alleles, and flanking regions) is published in [38,39].
Genotyping of the amplicon length of the tetranucleotide microsatellite fBVB (GATA14/GATA15) was performed on an ABI 3130xl Genetic Analyzer, as previously described [38]. In synopsis, for the fragment analysis, one PCR primer was tagged with FAM fluorescent dye (fwd_FAM: ACAACCTAAGTGTCTGTGAATGA; rev: CCCAATAATATTCCACTGCGTGT, expected amplicon length 204 bp). PCR was carried out in a 20 µL reaction volume containing 0.4 µM of each primer. The reaction temperature was increased to 95 °C for 5 min for initial DNA denaturation, followed by 35 cycles of 30 s at 95 °C, 40 s at 58 °C annealing temperature and 40 s at 72 °C, and a final extension step of 30 min at 72 °C. Finally, GeneMarker® was used to size the alleles relative to the internal size standard.
Genotyping was conducted in a consecutive manner by first testing the Crown determining variant rAX. If a sample carried the derived C-allele for this variant, allocation of the sample into the main Crown HGs H, A, or T was conducted by testing markers fYR, rW, and rA. Each sample was then typed for the markers determining the substructure of the HG it clusters into. We then merged the genotyping information of all tested variants and imputed the allelic state of markers that were not tested or detected in the sample set according to the previously published HT structure [38,39] (see Figure 1 and Supplementary Table S2). We generated a median-joining HT network with the program Network 10.2 [46] and redrew it as a HT frequency plot (Figure 1).

Figure 1. (a) HT determining variants used to construct the downscaled tree for genotyping are denoted on branches in red [38,39]. Additional information is given in Supplement, Table S2. Clustering of 119 North African horses based on genotyping results is illustrated as pies. Pie radiuses are scaled to the number of allocated individuals and colors of the portions correspond to different breeds. HG names are labeled accordingly. HTs located on internal nodes are denoted with an asterisk (*) and trailed with dashed lines that originate from corresponding internal nodes. Unascertained variants that would determine * HTs are denoted with question marks (?). HTs framed with blue and/or red borders denote that they were detected previously in Arabian (blue border) and Thoroughbred (red border) horses [38,39]. Non-colored points express HTs that were not detected in the North African sample set. The gray list on the sides of the network indicates the breeds in which the HTs were previously reported [38][39][40]. (b) Number of individuals that allocate within detected HTs. Sample information details are given in Supplement Table S1.
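The consecutive marker-testing scheme described above is essentially a walk down a decision tree, from the Crown-determining variant to finer HT-determining markers. The following Python sketch illustrates the idea with a hypothetical, heavily truncated tree; the marker names rAX, fYR, rW, and rA and the derived C-allele of rAX come from the text, but the nested structure and the other allele labels are illustrative placeholders, not the published topology.

# Minimal sketch of hierarchical haplogroup assignment from biallelic marker calls.
# The tree below is a hypothetical stub; the real topology is given in [38,39].
TREE = {
    "marker": "rAX", "derived": "C",
    "if_derived": {  # sample belongs to the Crown; refine into H, A, or T
        "marker": "fYR", "derived": "T",
        "if_derived": {"haplogroup": "H"},
        "if_ancestral": {
            "marker": "rW", "derived": "T",
            "if_derived": {"haplogroup": "A"},
            "if_ancestral": {
                "marker": "rA", "derived": "T",
                "if_derived": {"haplogroup": "T"},
                "if_ancestral": {"haplogroup": "Crown*"},  # left at an inner node
            },
        },
    },
    "if_ancestral": {"haplogroup": "Non-Crown"},
}

def assign_haplogroup(genotypes: dict, node: dict = TREE) -> str:
    """Walk the decision tree; an untested marker leaves the sample at an inner node."""
    while "haplogroup" not in node:
        call = genotypes.get(node["marker"])
        if call is None:
            return node["marker"] + "*"  # cannot refine further: inner-node placement
        node = node["if_derived"] if call == node["derived"] else node["if_ancestral"]
    return node["haplogroup"]

# Example: derived at rAX and rA, ancestral at fYR and rW, lands in HG "T" under this stub tree.
print(assign_haplogroup({"rAX": "C", "fYR": "C", "rW": "C", "rA": "T"}))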
Results
To investigate the MSY HT spectra of North African horses, 119 males representing 84 Barbs and 35 Arab-Barbs were genotyped. The results showed that all samples allocated into the Crown HG. In total, we distinguished 18 HTs and all three previously defined Crown HGs (A, H, and T) were represented in our sample set. The broad Crown MSY HT spectra was comparable in Barbs and Arab-Barbs (Figure 1). This is in contrast to patterns in other today's breeds [38,39] that showed distinct clustering on the tree. Remarkably, only half of the males analyzed carried defined HTs, whereas 61 males got placed at internal nodes of the backbone topology (See Figure 1 and Table S2). The samples allocated at inner nodes are marked with an asterisk (*) in their HT identifier and distinguished with dashed lines in Figure 1. For instance, the sample that allocates into Tb-oB* HT carried the derived allele for the fUJ marker and was placed onto the branch Tb-oB, but it carried the ancestral allele at the markers determining subsequent HTs in our backbone tree (rP, qFM, fQI, and fBVB). The inner node clustering of samples occurs when the HT of the horse is not represented by the tree due to ascertainment bias, and only the HG and the branching point could be determined.
More than half (56%, n = 67) of the analyzed individuals are distributed across two HGs, Am (n = 34) and Hs-b (n = 33), respectively (see Figure 1). Other than in the North African horse, these HGs were so far only detected in some South American and Iberian breeds [35,36]. Besides, we observed grouping of 28 (24%) males into Ao-aA1a* and Ao-aD2 HTs. Those HTs were recently designated as signatures for Arabian horses [39]. The arrangement of the internal branching points in the strictly hierarchical MSY HT tree topology reflects the emergence of the mutations over time. Hence, the HTs Ao-aA* (n = 2) and Ao-aA3 (n = 2) can be interpreted as hints of earlier introduced lines of presumably Arabian origin that evolved and are still preserved in the North African Barb horse. We further aggregated ten males in the Tb HG. Among those, two males clustered onto early branching points (T2* and Tb-oB*) and six were allocated to the HT Tb-oB1*. This HT was previously reported in Akhal Teke, Turkoman, Thoroughbred, as well as Arabian horses [38][39][40]. The Tb-oB1* in North African horses can be explained as the recent influence of stallions from that region. Notably, we detected Tb-oB3b1*, the HT basal to the HTs detected in the progeny of the Thoroughbred's founder sire 'Godolphin Arabian' (Tb-oB3b1a/b/c) [38], in a Barb breeding stallion from Morocco. We found the signature of recent influence of Warmblood or Thoroughbred in a single horse from France carrying Tb-oB3b1b, but did not observe the typical Thoroughbred and Trotter HGs Tb-dW and Tb-dM [38,40]. Moreover, ten males carried HGs which are today mainly found in Coldbloods and European Ponies [39,40], namely Ad-h (8), Ad-b (1), and Ao-n (1). Here, we again observed well resolved HTs (for example Ad-hA1), as well as earlier branching-off HTs (Ad-bN*, Ad-h*).
Roughly half of our sample set was collected in Europe and the other half in Algeria, Morocco, and Tunisia (see Figure 2 and Table S1). The samples from Algeria and Morocco clustered in 8 HTs each. The European samples clustered into 16 HTs. Seven HTs were represented only within this population group in our sample set, noting that two HTs, Ad-hA* and Tb-oB*, were detected in ITI horses directly imported from, respectively, Algeria and Morocco. The broad HT spectrum detected in samples collected in Algeria, Morocco, and Europe was not corroborated by Tunisian data. All collected Barbs (n = 9) and Arab-Barbs (n = 2) from Tunisia and all males exported from Tunisia to Europe (see below) allocated into HT Hs-bL.
Figure 2. Summary information of genotyping results and regional differences are visualized with bar plots. The x axis on the bar plots corresponds to detected HTs, while the y axis indicates the number of samples that correspond to each of the bars (HTs). The samples assigned to inner nodes are marked with an asterisk (*) in their HT identifier. Red stars indicate HGs that were found exclusively in the corresponding subpopulation (e.g., seven HGs denoted with red stars in the European subpopulation are found only among samples collected in European countries, and were not observed in samples from Maghreb countries).
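The per-region HT tallies behind such bar plots are straightforward to reproduce. The sketch below assumes a hypothetical tab-separated sample table with "region" and "haplotype" columns; these column names and the file name are assumptions, not the published format of Supplementary Table S1.

# Sketch: tally MSY haplotypes per subpopulation from a hypothetical sample table.
# Column names ("region", "haplotype") and the file name are assumptions.
import csv
from collections import Counter, defaultdict

def haplotype_spectrum(path: str) -> dict:
    """Return {region: Counter of haplotype labels} from a tab-separated table."""
    spectrum = defaultdict(Counter)
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh, delimiter="\t"):
            spectrum[row["region"]][row["haplotype"]] += 1
    return spectrum

if __name__ == "__main__":
    spectra = haplotype_spectrum("samples.tsv")  # hypothetical file
    for region, counts in spectra.items():
        total = sum(counts.values())
        # HTs private to one region can be flagged by comparing Counters across regions.
        print(region, total, counts.most_common(3))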
Among the 66 European samples, nine were collected from horses imported from Algeria (4), Morocco (4), or Tunisia (1). Complementing pedigree information was available for another 56 European samples (see Supplementary Table S1). This documentation reveals that the majority, namely 50, of the European males also directly trace back paternally to Maghrebian stallions exported from Algeria, Morocco, and Tunisia to Europe during the last 35 years (see Figure 3 and Supplementary Table S1). Hence, only seven out of the 66 males in the European dataset could not be linked explicitly to a hitherto known Maghrebian line from documented records. Among those, five individuals descend from four stallions, who were inscripted as ITI in the course of the foundation of the French studbook in 1989. For one sample, we had no pedigree information, and for one founder, the country of origin was unknown (see Supplementary Table S1).
Overall, the full dataset (n = 119) included 33 individuals without pedigree information (10 European and 23 horses from the Maghreb), and the HT patterns in horses with and without pedigree were comparable (Supplementary Table S1).
Discussion
The significant role of North Africa as a transit route during the Islamic conquest, and the migratory movements between countries of the region [3,14], raised our interest in the Y chromosomal signature of North African Barb horses. While the MSY HT signatures of the Arabian and the Thoroughbred and their recent breeding influences are well described [38,39], the historically impactful North African horse remains enigmatic. We applied MSY haplotyping to a total of 84 Barbs and 35 Arab-Barbs, with half of our samples collected in Europe and the other half in Algeria, Tunisia, and Morocco (see Figure 2 and Supplementary Table S1), and hypothesized that the MSY signature would mirror the variety of encountered influences. On the other hand, due to the documented indigenous origin and regional subgroups in North Africa, we expected partial representation of autochthonous patrilines.
The results of haplotyping indicate that no distantly related lineages were retained in the collected sample set, since all horses clustered within the Crown HG. In line with the previously determined time to the most recent common ancestor [38], we can state that the MSY of North African horses only reflects the last 1500 years of population history. The sole detection of the Crown mirrors influences of Oriental stallions [40]. Interestingly, we report a broad HT spectrum of North African horses across the Crown HGs (18 HTs). However, unlike other breeds (like Arabians and Thoroughbreds), for which it was possible to pin-point characteristic HGs and even discriminate discrete sublines with the use of pedigrees [38,39], the diffuse HT distribution results in a tangled MSY footprint of North African horses. The observed preservation of a variety of HTs may be the consequence of less intensive selection on males and different breeding goals in North African regions. Interestingly, MSY results were comparable in Barbs and Arab-Barbs. This verifies the ongoing inter-crossing and gene flow between the North African horse populations, as already depicted with autosomal genetic markers [27,28,48].
However, the broad HT spectrum was not supported from Tunisian samples (n = 11), where all nine Barbs and two Arab-Barbs were monomorphic, carrying a single HT (Hs-bL) ( Figure 2). This may demonstrate geographical disparities in breeding goals, supported by regional differences reported in the phenotype [1,23,30], as well as genetic spatial interpolation (e.g., [27]). In contrast, the analysis of microsatellites resulted in similarity of Moroccan and Tunisian Barb horse populations [29]. Regional differences are highlighted when we compare the HTs represented in Europe to the Maghreb region. Samples from European countries harbored seven HTs that were not represented in the samples collected in North Africa. Three of those patrilines were imports from North Africa after 2001 and four HTs trace back to the French ITI-inscriptions in 1989. Their private HTs may be explained with geographical separation of former exports to France. Additionally, we found two HTs each private for Moroccan and Algerian Barbs (Tb-oB3b1* and Ad-bN*, respectively). Compared to Tunisia, we observe similar MSY patterns in Europe, Morocco, and Algeria. One explanation for greater similarity of HTs among the latter three could be the tighter historical connection between those regions (export especially of ITI horses from Morocco and Algeria to Europe as seen in Figure 3). Nevertheless, we should interpret these findings with caution since it is possible that despite our efforts to collect a representative sample set from the Maghreb, the numbers of horses available from Tunisia was lower (n = 11). Hence, we could have underestimated HT diversity in that region.
All we see today is what is left throughout the time, and the MSY is a perfect tool to trace patrilines that shaped present populations. The relationship between the North African Barb and the Arabian horse has been continuously debated [33]. We noted a prominent clustering to Ao-aA1*, a HG previously detected in Arabian lines [39]. The detection of numerous Arabian HTs demonstrates the significant influence of Arabian stallion lines in Barbs and Arab-Barbs. A clear Arabian signature was visible in about a third of the analyzed samples. For the Arab-Barbs, the results are not surprising since the breed is based on Barbs refined with Arabians [49]. On the other hand, assignment of "purebred Barbs" to Arabian HGs may reflect, as hypothesized, recent historical migratory movements resulting in admixture, because the studbooks for the "purebred Barbs" are still open in North Africa and stallions without pedigrees are used for breeding.
Two thirds of the analyzed samples (85 North African horses) did not carry the Arabian signature HTs. Particularly interesting is that among those were 27 Arab-Barbs. In addition, we detected indications of recent upgrading with European Coldbloods in four males (Ad-hA1), which could be explained by the discussed influence of Coldblood stallions imported to North Africa [12]. Moreover, only a single individual carried an unambiguous sign of Warmblood or Thoroughbred male ancestry (Tb-oB3b1b) [38].
Barbs were used for upgrading and formation of many modern breeds [12,50]. There have also been reports on their contribution to Thoroughbreds, Anglo-Arabs, and French Trotters. Interestingly, North African horses' HTs share branching points basal to the HTs observed in many of today's Coldblood, British, and European Pony breeds (Figure 2; detected in Ad-h, Ad-b, and Ao-n HTs) [39,40], which can be interpreted as the influence the North African horses had on those breeds further back in time. Deeper investigation is needed to validate the proposed correlation.
A particularly remarkable finding was the observation of the HT basal to the HTs spread through the Godolphin Arabian sire line (Tb-oB3b1*) [38] in a Barb horse. There is still controversy about the ancestry of Godolphin Arabian, one of the foundation sires of the English Thoroughbred (exported from Tunisia to France in 1731). He is often referred to as Godolphin Barb due to his North African origin [51] and phenotypic marks different from the Arabian horse [1,12,13]. The MSY finding, namely the detection of the basal Godolphin Barb HT in a Barb horse, again fuels the discussion on the origin of Godolphin Arabian: whether he was a Turkoman stallion with partial Arabian blood [52], or whether the Tb-oB3b1 HG made its way into the Thoroughbred via the Barb horse, as has been hypothesized [1,49].
When we look further back in time, from the Carthaginian civilization in the 1st century and Muslimic conquests in the 7th century to recurrent migrations with Iberian Peninsula (8th to 15th century), North Africa served as a main migratory route for many cultures [3,14]. Every culture that was present in the region could have left footprints in the horses' genomes, and this was depicted on the MSY. Notably, influence from the Middle East could be attributed to inner clustering of individuals to Ao-aA* and Tb-oB*, as well as allocation to Ao-aA3, Ao-aD2, Tb-oB1*, and T2* HGs. This grouping may indicate previously discussed influence of the ancestors of Arabian and Turkoman lineages on North African horses.
From the viewpoint of interactions between the North African regions and the Iberian Peninsula, previous research delineated homogenous mtDNA patterns within ancient [53] and modern [11,32] horse populations in Iberia and North Africa. Particularly, it is speculated that Barb and Iberian horses have a common origin [54]. A great number of North African horses that were analyzed [32] shared mtDNA HTs reported in South American and Iberian breeds. Accordingly, we note that two highly frequent HGs (Am and Hs-b) represented in our dataset also allocate Iberian and New World horse breeds, like Marchador (Am), Lusitano, and Sorraia (Hs-b) [39]. Iberian and New World breeds are not yet comprehensively studied for their MSY HTs, but the preliminary joint clustering could reflect the gene flow and recent shared ancestry of North African Barb and Iberian horses. However, to fully explain the assumed shared ancestry further back in time, as well as the magnitude of gene flow, and indices on New World horses ancestry, we should complement the dataset with additional Iberian and New World horse breeds in the future. Early separated populations, like the West African Barb, the Spanish Barb (USA), and South American breeds, as well as ancient DNA samples from the Maghreb, should enlighten another chapter in horse history. Additionally, basal allocation of samples in the tree topology and underrepresentation of private HTs (Figure 1) raises a discussion on technical limitation of our analysis. The MSY backbone topology was constructed based on the ascertainment panel from [39], where five Barbs and one Arab-Barb were sequenced. However, it seems this is still insufficient, and more individuals need to be sequenced in order to clarify MSY signatures private for North African horses, in particular in HGs Hs-b, and Am.
Overall, North African horses retained the imprint of the "early Oriental influence" starting with the Muslim conquests. With the observed broad HT spectrum, these horses could be a reservoir of genetic diversity, although their population is small. Further investigation of additional males, especially from the Maghreb regions, is needed to pinpoint influential patrilines, as this is of particular practical interest for breeding. The MSY patterns should be considered together with autosomal markers, as well as mitochondrial DNA, while constructing the necessary conservation breeding programs to preserve the North African Barb horse.
Conclusions
Our study highlights the value of Y chromosome analysis for horse population genetics and, for the first time, enlightens the recent paternal population history of the North African Barb horses. The obtained MSY HT spectra indicate, on the one hand, that stallions were probably widespread hundreds of years before the formation of modern horse breeds and, on the other hand, the impact of historical migrations and recent upgrading. However, with our approach it is at the moment not possible to pinpoint where and when the ancestors of North African Barbs came from, nor the direction of gene flow. Future analyses of ancient DNA, as well as the inclusion of more diverse Barb populations, are essential for dating the origin of HGs and for exact inference of genetic influences. In addition, the ascertainment bias, represented by HTs that are not fully resolved, indicates that, even though the Crown is well described, there is still a lot left to explore in future research. Finally, our findings enhanced our knowledge of the paternal ancestry of the breed and provided a basis for future work and the establishment of conservation breeding programs.
Bénédicte Fournel (France). We also thank the Royal Horse Breeding Association (SOREC, Morocco) for the support. Open Access Funding by the University of Veterinary Medicine Vienna.
|
2022-09-30T15:05:52.970Z
|
2022-09-27T00:00:00.000
|
{
"year": 2022,
"sha1": "978781822baa5fd219bf29a3e303a341d0ecad79",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/12/19/2579/pdf?version=1664276557",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "91bf08bd09d7a73781dc321ec5f6eb5ea7fa1672",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
74299191
|
pes2o/s2orc
|
v3-fos-license
|
The Common Cold, Influenza, and Immunity in Post-Pandemic Times: Lay representations of Self and Other among older people in Sweden
The need for new knowledge about lay representations of contagions, immunity, vaccination, common colds, and influenza has become clear after the A(H1N1) pandemic and the resulting challenges regar ...
The Common Cold, Influenza, and Immunity in Post-Pandemic Times: Lay representations of Self and Other among older people in Sweden B. Lundgren
I. Introduction
On the cold winter days during my childhood in the 1950s in the north of Sweden, my mother's voice echoed every time I went outdoors. "Put on your cap, don't forget your gloves, or you will catch a cold". If she was really worried, she warned me about the risk of getting pneumonia. Cold weather was in itself seen as a danger, and dressing accordingly was a way to handle it. Safely out of sight from my mother's eyes, I took off my cap and put it in my schoolbag, where it remained until I returned home.
I have smiled at myself when hearing my own admonitions to my (now grown-up) children, "Put on your cap, you'll get a cold", and I have noticed their own looks of tired reluctance. The warnings have sounded alike through the generations, but the difference would be that I know more about the scientific facts: no, you don't catch a cold from being out in cold weather, there has to be contagion, there has to be a virus, and there are many kinds of viruses. In any case, dressing warmly still seems like good common sense.
In Sweden, the older generations, i.e. those born in 1950 or earlier, have lived through times of many colds, seasonal flus, and the last three pandemic influenzas, and a few of them even lived through the Spanish flu.These generations have also been intermittently subjected to several public health interventions and improvements, including child immunization, school health care, occupational health improvements, hygienic requirements, various kinds of diagnostic screenings, monitoring of risky behaviors, health controls, seasonal influenza vaccination, etc.They have probably caught many colds over their lifetimes, and statistically they will have been afflicted by influenza every fifth year on average (Kucharski et al., 2015).Surprisingly little is known about how older people in general interpret and handle the kind of diseases that range from the rather trivial common cold to influenza-like illnesses and the life-threatening phases of influenza.The obvious need for this knowledge became particularly clear in face of the A(H1N1) pandemic, and such knowledge is important for dealing with the challenges regarding pandemic preparedness for the future.
Pandemic influenza is considered to be a recurrent threat to public health and a global issue related to biosecurity, preparedness, and control (Lakoff, 2008;MacPhail, 2010;Lohm et al., 2015), and this places significant demands on public health authorities to react efficiently and responsibly before and during a pandemic.Sweden is one of many Western countries with highly developed pandemic preparedness schemes.These procedures evolved from experiences with the SARS and avian flu epidemics, and efforts to improve these procedures have intensified since 2005.
For many decades, vaccination has been considered to be the most effective means of fighting both seasonal influenza and new strains of pandemic influenza. The onset of the A(H1N1) pandemic in June 2009 was a starting point for enacting measures that were planned for in the pandemic preparedness documents. In Sweden, the most important measure was a mass-vaccination intervention that was made possible by an advanced purchase agreement from 2007 with a vaccine producer (Socialstyrelsen, 2011). That the main adult population expected or demanded vaccination was more or less taken for granted. The worry was rather that young people would not be willing to get vaccinated, so special campaigns were launched to get attention from these groups. The mass-vaccination was deemed successful with over 60% uptake (Socialstyrelsen, 2011).
The A(H1N1) pandemic turned out to be milder than was initially expected, and the handling of the pandemic, especially the mass-vaccination, has since been criticized for many reasons, including the economic costs, the miscalculation of the disease burden, the 'crying-wolf' that the authorities have been accused of, and most importantly the severe side effects of the Pandemrix vaccine that caused narcolepsy, a lifelong neurological disorder, in 200-300 children and young adults (Medical Products Agency, 2011; Lundgren, 2015a). Taken together, the A(H1N1) experience and the global issue of pandemic preparedness need to be examined from new angles (Lohm et al., 2015; HEG Expert Group, 2011), and this means that we need to know more about the general public's representations of contagions, immunity, common colds, and influenza.
Aims
The aim of this article is to investigate how older lay people, mostly women, reflect on the common cold and its relation to influenza and pandemics. How do people perceive the different infections and the immunological reactions they cause? Are common colds and influenza considered as problems to be solved by different strategies or technologies, or do they represent something unavoidable (Davis et al., 2015) and inherent in human existence? What kind of measures do people take when, on the one hand, they are looking inwards, focusing on their own bodies and corporeal immunities (Davis et al., 2015), and, on the other hand, when looking outwards on their bodily and spatial interactions with other people, spaces, and places? Does this entanglement of self-perceptions, experiences, recollections, beliefs, and actions have any bearing on pandemic preparedness?
II. Method
The material for this article is 67 written responses to a semi-structured questionnaire sent out from a Swedish folk life archive at the Nordiska museet, Sweden's largest museum of cultural history, in Stockholm.These kinds of questionnaires have been used frequently in Swedish ethnological and folkloristic research.As I have discussed in an earlier article (Lundgren, 2015b), the folk life archives have a long history in Sweden and represent the development of the changing roles of their regular contributors.From the beginning of the 1920s, the archives have used the written accounts of people's actual experiences to collect facts about traditional life, but since the 1960s the respondents have been expected to take a more independent position as interpreters of society rather than passive messengers (Klein, 2003).Respondents choose voluntarily if they want to participate in a specific questionnaire or not, and they are free to choose the length and style of writing and may express themselves in whatever way they prefer.
Nordiska Museet has sent out 460 questionnaires on different subjects since 1928.Recent topics have included "Alcohol in my life", "Our monarchy", "Love", "Amateur photography", and "Mass radiography".Today the list of respondents contains 220 persons from all over the country, a majority of whom are women (80%).Most of them are now quite old and have served as respondents for many years.This particular questionnaire (Nm 243 Common Cold and Flu) was sent out in October 2014.It was constructed by the author and aimed at encouraging people to reflect on issues concerning the common cold, influenza, and pandemics in terms of causes, symptoms, and cures.The questionnaire also provided space for associations and meaning-makings concerning contagions and immunity (Nordiska museet 2014).
There were 67 responses to the questionnaire (30% response rate) from 58 women and 9 men.The majority (49 women and 7 men) were 65 years or older.They lived in different parts of the country and had various social backgrounds.For the most part, the respondents treated the bio-medically framed questions with great confidence that their life-long experiences would suffice, and this was precisely what was hoped for with the questionnaire.Some of the questions deliberately touched upon complicated biomedical issues such as "How do you think the immune system works"?"Where do you think it is 'situated' in the body?"These kinds of questions were designed to make people think of their embodied experiences in a more reflexive way, but it is possible that such questions could at times also make people feel uneducated and stupid (cf. Martin 1994, p12).A few also 'confessed' to having looked some things up in a medical handbook.
III. Reflecting on the Common Cold and Influenza
The answers to the questionnaire indicated on the one hand that the topic of colds and the flu were well known and trivial (maybe even boring) to the respondents, and on the other hand that writing caught their interest once they began reflecting on the questions.Thinking and writing about colds and influenza enabled reflections on the body, contagions, and immunity while using "what one knows without knowing that one knows it" (Linger, 2005, p18).These reflections came from their memories of what their parents had told them, what they knew from their earlier experiences, and the experiences of their friends and relatives, and these were added to the biomedically framed knowledge they had gained in one way or another.
I will discuss the following most salient themes in the responses to the questionnaire: 1) Common colds and flus as ritualized common experiences, 2) Me, my body, and my immune defense, and 3) Regulations of place, space, and behaviors.
The first theme comprises common narratives of symptoms, diagnostics, and the course of events in catching a cold or the flu, including suggested remedies and the need to change one's everyday behavior. The second and third themes include discussions about personal responsibilities for keeping oneself fit and healthy and thereby helping one's immune system. These strategies also work to create social and spatial regulations of behaviors, such as hygiene and avoidance, that are connected to morally framed 'othering' processes.
Common colds and the flu as ritualized common experiences
Everyone has a view on colds and what to do about them (Tyrrell and Fielder 2002:151).
The common cold is a highly prevalent and relatively innocuous group of diseases that are most often self-diagnosed and self-treatable. Studies have shown that there is a high degree of shared beliefs and explanations about causes for the common cold between lay people and professionals (see Baer et al., 2008, p153). However, some cross-cultural studies have also described lay notions of causality for the common cold that are tied to folk concepts. These include notions such as the hot-cold balance, exposure to dampness or drafts, or exposure to cold after a hot bath or when one has wet hair (Helman, 1978; Baer et al., 2008, p150). Deborah Lupton (2003, p08) has argued that the more common and less serious the illness, the more likely it is that the lay representation is informed by traditional folk concepts. However, in this material, only a few of the respondents talked about cold weather, drafts, or dampness as causal explanations, and the most common explanations fit well with current scientific knowledge. Although there were variations in how the respondents described the cause of the common cold and the flu, most responses talked of how viruses (sometimes they mentioned bacteria instead) resulting in colds or influenza were brought into the body by viral droplets through the air and/or transferred via handled objects ('fomites').
The cold is spread through droplets, when you're sneezing, coughing, etc. through the air. And through contact by handshakes and all the contact you have when you are out among people, in shops, and on public transport. And all the codes that you have to key in; on the small keys there is a whole bunch of bacteria, I guess. It is also good to avoid eating from buffets at times when colds and influenza are common. When I get home I always try to sanitize my hands or wash them with soap and water before I put away my groceries and stuff like that. (KU 20976, woman born 1945).
The respondents had clear opinions about how to recognize the well-known symptoms, the "assorted seeping, dribbling, spraying of excessive bodily fluid" (quoted in Greenhough, 2012a, p291), from colds and flus while reflecting on life experiences reaching back to childhood. The answers provided many examples of diagnosis as a self-performed subjective ritual of disclosure (cf. Rosenberg, 2002, p242): I notice that a cold is on the march from many things; I get tired, feel poorly, get a small headache, maybe a little fever. I look pallid, my hair is tired, and soon the runny nose […] If it's influenza I feel much more tired. Muscular pain is usually seen with the flu. […] Quite soon the coughing starts. I usually cough terribly through much of the night and day. Sometimes I have believed that I would be kicked out from my home because I am so disturbing. I cough so deeply and so loudly. It's awful. And there is mucus, mucus, mucus. I cannot swallow and I don't want to swallow so I spit it out […] I never see any doctor […] It will take time and I will endure (KU 20976, woman born 1945).
Colds and flus, like all infectious diseases, are the result of a contagion that comes from the outside. The outside-in movement was apparent in all responses, but so was the inside-out transmission when infecting other people.
Once the contagion has 'come in' to you, you are 'infected' and a candidate for playing the role of the individual sufferer (Rosenberg, 2002, p242).However, for the common cold both the status of being infectious, a carrier (cf.Newman et al., 2015, p19), and the sickness role were blurred and open to contestation.If you surrendered to being ill and went to bed too early, you could suffer one kind of criticism.Some of the respondents talked about this as a 'male behavior' (cf.Lohm et al., 2015, p121).On the other hand, if you did not mind your symptoms so much and carried on working or meeting with other people you could be criticized for spreading disease.One of the few answers from a younger person, a woman born in 1967, showed her awareness of this avoidance-norm when she described why she went to work as usual as long as she did not have a fever: I know some of my colleagues get very angry when I come to work coughing, but I usually claim that they are already infected because you are most contagious before the symptoms occur (KU 20972, woman born 1967).
However, this blurriness concerning if, how, and why it was possible to perform a sickness role also created space to make your own decisions and to make the best of your situation. The onset of a common cold created time and space for resting a while, to lie down, and to have a socially accepted reason to be alone and to keep away from people, to have a cozy time. Many respondents described the voluntary rituals surrounding being nice to yourself and interrupting your other daily activities, not because the illness (such as the flu) forced you to, but because a cold invited you to take some rest [1]. In the evening I can have a really hot bath and lie down in bed with a book and a cup of tea within reach. I turn the light off early and if I am lucky I can sleep for ten hours (KU 20981, woman born 1942).
I often say that I won't get infected, but sometimes it happens that I catch a cold. Then I try to take the opportunity to sleep a little extra, hopefully several times every day. On rare occasions when I have a headache I take a Magnecyl and put on a warm nightcap. Often I get well quite fast. I drink warm milk and make sure I have warm feet and more blankets […] I think you should help the body to get well by not moving too much (KU 20983, woman born 1929). Some also reflected on the peculiar state of not knowing if one is sick or well: The common cold is a strange illness because I am not really ill, but not well either, just generally feeling miserable, really miserable […] I absolutely don't want to stay in bed, because it feels so shabby to lay in bed all day with a cold, but I like to rest on a freshly made bed, by an open window, with a warm blanket and a good book in the afternoon (KU 20966, woman born 1948).
Several cures were mentioned, from different kinds of herbs, fruits, and plants (garlic, ginger, and oranges), to health-enhancing products based on natural remedies (Echinacea, vitamin C) and liquors (cognac, whisky, Jägermeister) (cf.Baer et al., 2008, p160).Quite often these cures were passed inter-generationally, being taught by parents and transferred to children.Several reasons were mentioned for trying them, such as because your mother or father used to do that, because "at least they won't hurt you", and because they experienced that it was possible to influence the course of events and get healthy quicker, i.e. the rituals were vehicles for addressing diseases (Napier, 2002, p22).Some cures were used just because they added to the somewhat cozy experience of being nice to yourself.
Although many studies talk of the uncertainties concerning influenza diagnoses among clinicians (Prior, Evans & Prout, 2011;Lohm et al., 2015), most of the respondents, when reflecting back on their life experiences, were quite certain how to separate a cold from the flu.The flu is heavier, it 'attacks' you with muscle pain, headache, and fever, simply forcing you to surrender, but with an expectation of recovery: "When you get the flu, you have no choice, you have to rest (KU 20986, woman born 1948)." Often the causal agents, inhibiting agents, and the means of transmission (cf.Prior et al., 2011. p925) were conflated in the responses.One woman talked of 'bacteria' instead of viruses when describing the 'uninvited intruders' that should be removed as soon as possible, or 'defeated' as she writes, by the white blood cells, but also by herself as a healthy person: The white blood cells, that hopefully are alert, are doing their job fast and effectively.And I expect that from them because I consider myself to be a healthy and sound human being (KU 20976, woman born 1945).
In the following, I will reflect on the conflation of inhibiting agents and means of transmission and how this is seen through the lens of embodied personal responsibility for body and health, together with what was perceived as the most effective agent: the immune system.
IV. Me, My Body, and My Immune Defense
During the growth of immunological science, several immunological motifs, metaphors, and models have transcended into the language of discourses of philosophy and other branches of social and cultural theory (cf.Martin, 1994;Napier, 2002;Cohen, 2009;Anderson, 2014).The metaphorical transfer has also taken place the other way around.Ed Cohen has argued that the concept of biological immunity has its roots in a political concept, immunitas, that has been appropriated into biomedical contexts (Cohen, 2009), something that Michelle Jamieson has questioned when discussing politics and biology not as separated but as ontologically entangled (Jamieson, 2015).This is not the place for digging deeply into the vast literature on immunology and its transgressions, but following the work of Martin (1994) I will compare some of the fundamental concepts and distinctions within immunology to what was brought up in the responses to the questionnaire.
Emily Martin wrote in 1994 that the concept of the 'immune system' emerged (Martin, 1994, p16) after an era of increased awareness of hygiene and had "moved to the very center of our culture's conception of health" (Martin, 1994, p86).Now, more than 20 years later, the position of the immune system in the center seems to be even more established, even among older lay people without any specific connections to medical education, biomedical professions, or the pharmaceutical industry.Obviously, the respondents to my questionnaire did not distinguish between or reflect on the cellular or bio-physiological immunological functions, but still the immune system took center stage in the descriptions of how to protect oneself from colds and the flu.However, the immune system's relation to the body and to the subjective self were perceived in various ways.
The metaphorical uses of central immunological distinctions, providing cultural meaning-making about what happened in the body, were easy to trace in the responses.One reappearing and culturally reworked distinction is the one that biomedical discourses draw between innate immunity and acquired or adapted immunity.The innate immunity is the first line of defense and consists of the inherited mechanisms of recognition and defense against microorganisms.The adaptive immunity is a specialized system that creates immunological memory after having reacted to a specific pathogen (Janeway et al., 2001).Although some answers reflected on the automatic, inherited, and independently functioning immune system, they put much more effort into describing it as taught, adapted, and dynamic.It was described as not being created at its maximum potential (cf.Martin, 1994, p201) and as something that needed to be nurtured, exercised, and tended to (cf.Davis et al., 2015, p13;Martin, 1994): Those antibodies circulating in the blood and the functions of the white blood cells to destroy invaders are not equally effective in every person.This differs from individual to individual, and diseases affect us differently.And you have your own responsibility to take care of yourself when disease strikes (KU 20963;woman born 1932).
The nurturing of the immune system provided ways to imagine yourself as a healthy and responsible person who would be able not only to face the trivial colds, but also the more severe flu epidemics and perhaps even pandemics.Imagining immunity (cf.Wald, 2008, p29) in this way meant placing a lot of trust and respect in the immune system, but also in one's own responsibility and capacity (Moore 2010, p101).
However, the distinction between imagining the immune system as something you inherited (innate) and/or as something that you taught and helped (adapted) often collapsed and was transferred into a culturally based metaphorical use of the concept of self/nonself, which since the 1960s has been axiomatic in immunological biomedicine and, although contested, still dominates much of the texts in medical handbooks and in popular medical discourse.
In the responses, there were several varying articulations of self and 'helpers' that indicated profound "sensibilities about the body" (Martin 2004, p33).Sometimes it was 'I' or 'me' acting, sometimes together with 'my body', and sometimes it was just 'the body'.Sometimes the 'immune defense' was included within the concept of 'me' and/or 'my body'.The immune defense could also be seen as an agent working on its own, with or without help from 'me' or 'the body': Most often I don't get a cold, but I don´t know for certain why.I usually look at it as if I have conquered the cold.If I meet someone who has a severe cold, I usually feel sick a couple of days later, but it very seldom breaks out.I guess I have had the infection in my body but have been able to handle it (KU 20958, woman born 1941).This respondent used the pronoun 'I' to explain that she had conquered the cold by herself, and she did not place any agency in something detached from herself.In another part of her response, she wrote about another ritual she performed when she felt she was getting a cold: she "suns away the cold" by sitting in a sunny spot.Later in the text, her immune system was mentioned as a specific asset, which she was trying to nurture with QiGong every morning and sometimes 'Louhan Patting', fresh air, and sunlight.On the whole, however, she regarded her own self as the most important actor, sometimes with the help from fever, which she separated from herself, and something she did not want to act upon: "I want to let the fever have its own way.Fever is a way for the body to fight the infection".In phrasing it like this, she saw 'the fever' and 'the body' as entities to help herself but they were not completely the same as herself.
It was quite common in the written responses to look at oneself in this way, with a continuum from what was regarded as the specific 'I' who acted, to the agency of the 'helpers' -'the body', 'the fever', or the 'immune defense'.In many answers, these agents were intertwined with and dependent upon responsible action from the 'I': Building your immune defense is a long process.It depends on what you eat, how you exercise, and also hopefully feeling mentally well.But it is not always that easy.The world is not always good (KU 20963, woman born 1932).
Many answers revealed how people nurtured or helped the immune defense: The body protects itself against influenza and viruses with a strong immune defense.It is situated in the stomach and you build it up by eating fruits and vegetables.Everything that brings vitamins or minerals is good.You should be outside in the fresh daylight at least 15 minutes every day and exercise at least 30 minutes every day […] I don´t believe in vaccination.You have to endeavor to take care or your body (KU 20975, woman born 1959).
Having a holistic view and engaging in different kinds of therapeutic self-help were also considered to be of help to the immune system: It is my way of living and it means that for example I don´t get stuck in conflicts or negative emotions.I handle things, clear things out, and go on […] I don´t have to spend a lot of energy to defend myself against certain feelings, and consequently I have more energy for all the biological processes that keep the body going, including the immune defense […] what I am trying to say is that I have a holistic view.You can´t really separate body and psyche from each other, not even for something as trivial as the common cold (KU 20980, woman born 1947).
In other responses, the body and the immune defense were specific autonomous actors separated from the 'I': The body has built its defense, which should protect us from invaders that don't have the right to be there.
Where it [the immune defense] sits and how it looks, I have no idea … I usually don't take any vaccines because I think that the body feels well when it takes care of a cold or an influenza sometimes … the body has to work, otherwise it stops producing antibodies, because if there is nothing to take care of, why bother to produce them? (KU 20965, woman born 1943) Many studies have pointed to the militarization of medical thinking, particularly in immunology but also in other medical areas, for example, the "war on cancer" (cf. Martin, 1994; Napier, 2002, 2012). Although it was common in the responses to speak about 'invaders', 'uninvited intruders', 'attack', 'defense', etc., there were also some variations in the metaphorical uses, such as 'patrolling policemen' and hard-working 'small, kind blood cells'.
There were also different opinions on whether vaccination was put into the same military categories or if it was thought of as weakening the immune system.Just as quoted above, many respondents included arguments about vaccination in their perceptions of the function of the immune system.Many people over 65 expressed anger about being put into a 'risk group' after the age of 65 (cf.Evans et al., 2007).Suddenly they got information from the authorities on the need for being vaccinated against seasonal flu.One woman, who was angry about what she perceived as an ageist attitude in a "terrible letter in her mailbox" suggesting that she get a flu vaccination, wrote: Of course it is not the case that your immune defense suddenly changes after a certain date (KU 20980, woman born 1947).
Another woman wrote: I will soon turn 70, and I absolutely don't regard myself as belonging to a risk group because I am over 65!I feel a bit angry over this careless thinking … I have trust in my own body and its ability to cure me in the best way.Furthermore, I build up my immune defense every time I get sick.My immune defense, and the white blood cells, who do all the work, are exercised, and this 'training' is lost with vaccination (KU 20976, woman born 1945).
V. Regulation of Places, Spaces, and Behaviors
It is well known that pandemics and epidemics cause people to actively reconfigure social and spatial relations (Greenhough 2012a, p282).In my material, the concept of nonself, whether interpreted as microbes or as other people, provided a conceptual tool for the regulation of places and behaviors (cf.Davis et al., 2015, p5).
Although the respondents did not regard the common cold or even influenza to be very serious, normative accounts were presented about how to change your own behavior, i.e. being on the watch for certain places, spaces, and people to avoid.This concerned both the 'othering' of the origins of influenza implying the risk of having contact with foreign places or people, and the othering of problematic behavior where some people were considered not to take responsible action such as having good hygiene or a healthy lifestyle.Sometimes these two were also connected.
The foreign dimension was most apparent when specifically answering questions about pandemics, which were commonly phrased as a consequence of people living close to animals: These nasty influenza viruses come from Asia where people live close to their domestic animals such as tame birds and pigs (KU 20981, woman born 1942).
This was also said about the swine flu, in spite of the huge media attention to the outbreak of this pandemic in Mexico and California: I think the swine flu came from Asia.Maybe the bird flu turned into the swine flu.… Influenza pandemics start in the poor parts of the world (KU 20967, woman born 1938).
One of the questions was how people interpreted the word 'pandemic'.The foreign dimension (Asia, or somewhere South-Eastern, and Africa) was further emphasized in the answers suggesting that pandemics came as a result of an overcrowded earth and were nature's way of handling the problem.The word 'pandemic' was also articulated together with value-laden words such as 'panic', 'something big and dangerous', 'something frightening', 'tsunami', and 'the Black Death'.One person associated the word with the animal 'panther' and one with the animal 'panda' (!).The variations in the answers suggest that there is no absolute and coherent way of interpreting what a pandemic implies.As Abeysinghe (2015, p64) has argued, this also goes for the WHO in their failure to produce a robust definition of 'pandemic' concerning A(H1N1).
The articulation of viruses and contagions together with far-away places also made people consider long airplane flights to be a risk factor -"sit for a long time in a closed space and share expired air with many people" (KU 20958, women born 1941).The far-away dimension could also reside at home, which was now correlated to unhygienic behavior: I was in a store where they sold fresh cakes and pastries that you could pick yourself with a set of tongs that you were supposed to use.I waited for my turn to get apple cakes.In front of me in the queue were a woman and a man.I could not tell what kind of language they spoke.The woman took a cake with her hand and held it in front of the man who shook his head and the cake was put back.They went on until the man nodded.Then I left the queue.I don´t want to eat what other people have picked up with their hands.You can´t wash a cake (KU 20931, woman born 1937).
Obviously, it was important for this woman to mention that she was unable to tell what language the couple in front of her spoke.To her, it was a strange, supposedly far-away language.It was important also to mention that the tongs should have been used, but in this case were not.In telling the story in this manner, she also created a cultural and ethnic othering of un-hygienic behavior.There were also other examples in the responses where this othering was not restricted only to pointing out people from other countries and speaking other languages, but rather emphasizing bad hygienic behavior: It is easy to observe if you go by subway on an ordinary day during the cold season; almost everyone sits there sniffling, coughing, and sneezing, many directly into the air or into their hands!Instead of into the bend of the arm.I believe that knowledge about contagions among people is incredibly low (KU 20978, man born 1948).
The subway is a place with an extremely high risk for contagions, I think: bad circulation, many people, crowded. Washing your hands after every trip on the subway is a must, and you must also pull your scarf up over your nose and mouth (KU 20949, woman born 1971).
What to do about the perceived risk of widespread contagions?According to the responses, different measures were taken, including hand sanitizers, wearing gloves, touching door handles with the end of the sleeve, not shaking hands, not touching one's own face, and changing seats or queues to avoid people who are sneezing or coughing.
I was strict with my hand hygiene long before everybody else […] My habit is to wash my hands with soap and water the first thing I do after taking off my coat.My children have learned this and they have taught their children to do the same (KU 20963, woman born 1932).
Another example was about family lore with talk about relatives who infected others because they did not eat healthy food and, therefore, often caught colds: I remember a couple of older relatives from the countryside who used to have heavy colds all the time.We guessed that it was because they never ate fruits or vegetables.Probably there were not so many vitamins in their diet.They often infected other relatives (KU 20958, woman born 1941).
The capacity for nurturing the immune system also served as a dividing measure to distinguish responsible people from those who did not play their part in the projects of 'imagining immunities', attending to the logic of social responsibility (Wald 2008, p22), and creating healthy communities.The "unselfconscious embodiments of modes of behavior" (Napier 2002, p76) and the unhealthy lifestyles of other people also caused frustration and "empowered powerlessness", as phrased by Martin (1994, p122).Being a socially responsible individual (Wald 2008, p112), or "response-able" (Greenhough 2012a, p295), for one's own health might not be enough because the outcome was also dependent on other people's choices.The strategy of avoiding viruses and bacteria was not the only one described in the answers.Some talked of the wish to engage with the pathogen (because it was good for you and the immune system was exercised), rather than to avoid it (cf.Napier 2002, p21;Davis et al., 2015, p21).Some respondents were quite relaxed and talked of a kind of peaceful coexistence (Greenhough 2012a, p283) with the virus and bacteria and acknowledged the unavoidability of infection (Lohm et al., 2015, p123): Of course there are contagions, but you can't go around all the time thinking about it.They are everywhere (KU 20979, woman born 1935).
One of the few male respondents went further and expressed the need for exposing oneself to microbes to keep the immune defense working and that one should not interfere in that: I have no special hygienic rules to protect myself from colds.Instead, I believe in the theory that you should expose yourself to bacteria and viruses to keep the immune defense of the body in shape (KU 20968, man born 1952).Some also expressed some nostalgic longing for the past during their childhood, where they experienced themselves as being in a necessary relationship with different kinds of germs or viral companions (cf.Davis et al., 2015. p13;Greenhough 2012a, p281).This meant that they did not conceive of this microbial otherness as something principally dangerous (cf.Napier 2012, p128, 130).On the contrary, one woman also regarded the incoming viral cold to be a kind of a "catharsis" for the body and as something to be grateful for (KU 20976, woman born 1945).
VI. Discussion & Concluding Remarks
According to Davis et al. (2015. p2), immunity talk is multiple.In this study, this talk included lay interpretations of immunological self/nonself with its implied use of fundamental metaphorical distinctions and sometimes military metaphors, as well as interpretations that are related to modern medicine and that are used to inform public health governance and body politics (Davis et al., 2015, p3).
The respondents placed significant emphasis on what Davis et al. (2015, p3) has called 'choice immunity', meaning that subjects were practically and morally responsible for their way of living and that they experienced confidence in how they could help their immune systems to conquer colds or the flu.The self-confidence was shaped like a common-sense life story, nurtured by the homogeneity of the experiences in a life-long perspective together with influences from public health narratives about self-help practices and healthy behavior.The more or less shared models of explanations between the informants and biomedical information (cf.Baer et al., 2008, p61) strengthened the experienced trust.Even influenza-like illnesses and influenza were mostly seen as endurable using a plethora of suggested coping strategies.
The explanatory models about the viral causes of disease also created models for spatial regulation (changing queues, moving away from people who cough and sneeze, wearing gloves, washing hands) or moralizing about relevant behaviors.People used hygiene and infection-avoidance measures while at the same time they tried to cultivate their own immunity as a defense against future viruses (cf.Davis et al., 2015, p2).
The different accounts about colds and flu contained compatible and complementary versions of the self, including the embodied self, the self as narrative, and the autonomous self (Brison 1997, p15).All three versions were entwined in one another, but depending on what kind of explanatory work (Brison 1997, p14) was going on, the respondents emphasized different things in their stories.Whether talking of the self as something distinct from the body or as something identical to the body, the embodied self was given a prominent position in narratives about symptoms, resistance, remedies, and recoveries.The ownership of the body and its organs (Church, 1997, p86) was an important theme, showing how they could apply their intentional will to their bodies, or parts of their bodies, such as the immune system, through nurturing, training, and exercising for achieving the utmost efficiency.
In narrating their selves together with their bodies, a dimension of legacy, time, and purposeful agency was added with help from self-regulation and the responsible autonomous self who made choices and decided about actions.Overall, the narratives were about trust, value, and respect in the body, in the immune system (cf.Martin, 1994. p80), in the lived experiences, and in the self, but not always trust in other people's parallel interpretations and behaviors.Many accounts showed how they faced empowered powerlessness in their interactions with other people where they suspected a lack of relational personhood or relational solidarity (Baylis et al., 2008, p5).
As mentioned earlier, most of the responses were from women, which means that it was difficult to analyze if there were any gendered differences.Research has confirmed that the relations between sex/gender and influenza and vaccination are still unclear in many respects.The WHO report "Sex, gender and influenza (2010)" concludes that the precise impact of sex and gender on influenza infection and vaccination is unknown because most studies do not disaggregate data by both age and sex.The WHO report suggests that the outcome of infection is generally worse for women, but the outcome also varies across geographical regions.Differences in hormone concentrations and immune responses together with pregnancy and/or other biological mechanisms seem to have an impact, as do societal and behavioral differences that contribute to differences in exposure (WHO 2010, p37).
Several feminist works on women and health have pointed out the gender dimension as being crucial in understanding health and illness. Of special interest for this study is Sarah Moore's article about the healthy body as gendered (Moore, 2010). In discussing the "new morality of health" she argues that femininity involves a certain attitude toward the body, that the body is essentially uncontrollable, that it is good in and of itself, and that it is synonymous with the self. Sarah Nettleton has argued that the morality of health especially affects women because they are seen as being responsible for initiating healthy lifestyles and for experiencing guilt and worry when they fail (Nettleton, 1996; Moore, 2010, p104). As I have shown from the answers to the questionnaire, this picture can be nuanced. The male answers, although they were few, did not explicitly point to other attitudes toward the body than the women's when it came to handling colds and flu. For men and women alike, the body, and especially the immune system, was in many ways regarded as controllable, and it was hard to find real evidence for gendered norms. A few women mentioned that men were more sensitive to colds and flu, i.e. they gave in to being 'sick' faster than women. This was also pointed out by Lohm et al. (2015) in their discussion about the 'man flu' (p121). But there was also the opposite opinion, stating that it takes a longer time for men to confess to being ill than for women. Regarding female guilt and worry (Moore, 2010), I have not been able to observe anything in this material when it comes to illness experiences of influenza, whether seasonal or pandemic.
By focusing on the responses to the questionnaire, my wish for this article was to be able to contribute to a critical epistemology (Farmer, 2001, p40) about colds and flu. Generally, the material revealed a shared understanding of the common cold that matched the medical understanding (cf. Baer, 2008, p151), as well as shared understandings of what causes influenza and its origins. What seems to differ, however, is that vaccination as a preventive measure was not seen as the grand solution among the respondents, and there were some gaps between the individual experiences of common colds and influenza that counteracted the information campaigns from authorities. The most obvious gap is that, although recognizing the effects that colds and especially flus might have, some had difficulties in seeing themselves as 'frail' people in need of special preventive measures (cf. Cedrachi et al., 2013). As Greenhough has put it, "The common cold is more likely to invoke stoicism than panic" (2012a, p285). This also seemed to be the case when it came to seasonal flu. As I have discussed, the 'immunity talk' was more about how to treat your own health and how to help the immune system to work as well as possible (cf. Prior, 2003, p50).
In face of the A(H1N1) pandemic in Sweden, the national pandemic preparedness plan emphasized mass-vaccination as the dominant conceptual framework and as the most effective measure, i.e. it placed a bio-political intervention ahead of non-pharmaceutical managements such as avoidance and hygiene measures.Although the vaccination had a high uptake, it is not obvious what the exact motivations were for people to comply (see Lundgren, 2015b).In hindsight, when the respondents reflected on the vaccination intervention, the explanation most often was about the fear mongering or the exaggeration of the threat from both the authorities and the media (cf.Lundgren 2015 a, b): That information about the swine flu was a terrible case of fear mongering.You thought you were facing the Black Death.Did people in the National Board of Health and Welfare believe in this or did they have shares in the pharmaceutical companies?Where is all the vaccine now?Couldn´t it be sent to Africa so the poor people down there can be relieved from this flu, then it would be of some use (KU 20982, woman born 1929).
From the responses, it is certainly not obvious that vaccination against influenza would be what people demanded or expected from medical authorities.One sign of this was the irritation about being put in a 'risk group' just because one had reached the age of 65.In addition, it is not apparent if a certain point was imagined where people would find it necessary to have a vaccination even if it were a pandemic threat.
Lohm et al. have argued that the radical uncertainty about seasonal and pandemic influenza resonates with 'risk society' and can have the "unforeseen consequence of increasing ontological insecurity, that is, weakening people's sense of well-being in the real world" (2015, p117). For this study, my overall impression is that people put trust and faith in their own abilities to cope, not only with different kinds of colds, but also with influenza-like illnesses and influenza. Their emphasis on healthy lifestyles, on ways of helping the immune system, on different remedies and cures, and on the strong faith in recovery all in all pointed to a sense of security and control, although they acknowledged that sometimes there could be unavoidable dangers that are inherent in life itself. I will end with a short outline of three issues of interest to pandemic preparedness in relation to the public. The first is the use of the word 'pandemic', both from a public-health perspective and from an individual perspective. The different interpretations in this study showed that there were several diverging, and often emotional, associations with the word. A clear and consistent usage of the concept would be beneficial. The second issue concerns the military framings that are commonly used in immunological discourses. Politicians and public health authorities should be careful in framing pandemics and biosecurity in military or hostile terminology, although it might supply "a conceptual bridge between pandemic influenza and biosecurity" (Davis et al., 2015, p4). The cases of excluding and stigmatizing 'others' that I have observed in the responses might be endorsed or enhanced in a way that could prove harmful for pandemic security. A third point would be to elaborate on the findings in the responses pointing to notions of what is termed as 'network immunity': the idea that the immune system also depends on "productive, ongoing relations with the other" (Davis et al., 2015, p3; Napier, 2012, p121), thereby relying on people's trust and resilience rather than on their fear.
|
2017-07-11T08:56:51.104Z
|
2015-12-17T00:00:00.000
|
{
"year": 2015,
"sha1": "cc5c60d62abd9e45cb3ef3c30080d3bbc5364b1a",
"oa_license": "CCBY",
"oa_url": "http://hcs.pitt.edu/ojs/index.php/hcs/article/download/200/260",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cc5c60d62abd9e45cb3ef3c30080d3bbc5364b1a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
122841639
|
pes2o/s2orc
|
v3-fos-license
|
A quantum transport model for the double-barrier nonmagnetic spin filter
A model for calculating the current and spin polarization in a double-barrier InGaAs resonant tunnelling structure is described with the aim to account for phase-breaking scattering. It is based on the nonequilibrium Green's function method with both elastic and inelastic (LO-phonon) scattering described within the self-consistent first Born approximation. It has been found that the maximum current spin polarization of around 0.4 in the ballistic limit decreases to around 0.1 for scattering transport with scattering-induced broadening of quasi-bound states of around 4 meV.
Introduction
Recently, it has been suggested that resonant tunnelling structures (RTS) [1] based on III-V semiconductor heterostructures are a promising candidate for a nonmagnetic spin filter, [2,3,4,5]. The mechanism is based on electron tunnelling through quasibound states which are spin-split due to the spin-orbit interaction, manifested as Rashba [6] and Dresselhaus [7] effects.
Numerical estimates of the attainable spin polarization reported so far ignored the detrimental effects of carrier scattering. Here we use the method of nonequilibrium Green's functions [8] adapted for modeling quantum transport in nanodevices, [9,10,11].
The model
The spin-dependent transport through the InGaAs double-barrier resonant tunneling structure (RTS) shown in Figure 1 is a consequence of the Rashba spin-orbit interaction (RSOI), whose coupling term involves the Pauli matrices σ_x, σ_y and the components k_x and k_y of the lateral (perpendicular to the growth direction, denoted as the z axis) wave vector k. Its strength α, the Rashba parameter [12], is given by Eq. (2) in terms of the externally applied electric field E_ext and the InAs band parameters m = 0.023 m_0, E_g = 356 meV and ∆_SO = 410 meV. RSOI creates the energy difference ∆E(|k|) = 2α|k| between the spin subbands.
Figure 1. The effective electron potential profile in a biased InGaAs RTS. The conduction band offset is taken to be ∆E_c = 792 meV, m_GaAs = 0.067 m_0, the barrier and well widths are L_b = 3 nm and L_w = 6 nm, respectively, as in [2], while the Fermi energies in the contacts are E_F,E = E_F,C = 20 meV.
The value of α obtained from (2) for E_ext = 20 mV/nm (corresponding to the current peak in the J(V_CE) curves shown in Figure 2) has an order-of-magnitude agreement with values found experimentally in [13]. For k = k_F (E_F = 20 meV) it yields ∆E(k_F) = 2 meV. Since the spin splitting has the same order of magnitude as the scattering-induced broadening of the resonant energy levels [1], we conclude that to model the spin polarization in a RTD, one has to go beyond the ballistic transport picture. The quantities describing the electronic system are the greater (G^>(E)) and lesser (G^<(E)) Green's functions. Once G^>(E) and G^<(E) are found, it is straightforward to obtain the electron concentration, density of states and current density [11]. The interaction with InAs optical phonons (ℏω_LO = 29 meV) and elastic phase breaking are calculated within the self-consistent first Born approximation, the procedure for which we briefly outline here.
We start by finding the retarded (Σ^R(E)), lesser (Σ^<(E)) and greater (Σ^>(E)) self-energies. They are obtained by adding up the contributions of the various interactions, Σ^γ(E) = Σ^γ_E(E) + Σ^γ_C(E) + Σ^γ_ph(E) + Σ^γ_el(E) with γ = R, <, >. The indices E and C refer to terms due to interaction with the emitter and collector reservoirs, respectively [11]. Σ^γ_ph(E) and Σ^γ_el(E) describe the LO-phonon and elastic phase breaking. In the first iteration, these two are assumed zero. At the end of each iteration, new values for the scattering self-energies are obtained and compared with previous ones. Typically, 10 to 30 iterations were required to obtain a relative error below 10^−5.
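Schematically, this outer self-consistency loop can be summarized as the fixed-point iteration sketched below in Python. The two update callables are hypothetical placeholders for the Green's-function and first-Born self-energy steps described in the text; they are not library routines, and the sketch only illustrates the iteration structure and the convergence test.

```python
import numpy as np

def self_consistent_born(update_green_functions, update_scattering_sigma,
                         sigma0, tol=1e-5, max_iter=50):
    """Outer fixed-point loop of the self-consistent first Born approximation.

    `update_green_functions(sigma)` should rebuild G^R, G^<, G^> for the
    current scattering self-energy, and `update_scattering_sigma(greens)`
    should return the updated self-energy from the first Born expressions.
    Both callables are supplied by the caller (placeholders, not library code).
    """
    sigma = sigma0                                   # first iteration: zero self-energy
    greens = None
    for iteration in range(1, max_iter + 1):
        greens = update_green_functions(sigma)
        sigma_new = update_scattering_sigma(greens)
        scale = max(np.max(np.abs(sigma_new)), 1e-30)
        rel_err = np.max(np.abs(sigma_new - sigma)) / scale
        sigma = sigma_new
        if rel_err < tol:                            # typically 10-30 iterations suffice
            break
    return greens, sigma, iteration
```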
Then, we find G^R(E) according to the Dyson equation, [E − H − Σ^R(E)] G^R(E) = 1, where H is the device Hamiltonian at fixed σ and |k|, and V(z) comprises the band-edge potential and the linear term due to the external field, see Figure 1; σ = ± denotes the spin subband. For calculations we use the |k, z⟩ basis with N grid points for the z axis and the finite difference approximation. Thus, for fixed values of σ, |k| and E, G^R(σ, |k|, z, z′; E) becomes an N × N matrix. The lesser and greater Green's functions are found from the kinetic equation G^≶(E) = G^R(E) Σ^≶(E) G^A(E), where G^A(E) is the Hermitian conjugate of G^R(E) and the z-integration goes from the emitter to the collector contact (left and right end of the structure shown in Figure 1). New values of the lesser and greater scattering self-energies are given by the first Born approximation (ϕ = 'ph', 'el'); the ± in front of the phonon energy ℏω_ϕ means + for λ = < (electron relaxation from E + ℏω_ϕ to E) and − for λ = > (hole relaxation from E − ℏω_ϕ to E). The above expression accounts only for spontaneous phonon emission, which is appropriate for low temperatures such that the average number of phonons n_B(ℏω_LO) = (exp(ℏω_LO/kT) − 1)^−1 is much smaller than 1. For elastic scattering, we put ω_el = 0. Finally, the imaginary part of the retarded scattering self-energy is given by the difference of the greater and lesser self-energies, Im Σ^R(E) = [Σ^>(E) − Σ^<(E)]/(2i),
while the real part of Σ^R(E) is neglected since it merely shifts the energy levels in the system. The self-energies found from (7) and (8) are local (∼ δ(z − z′)) due to the assumption that the coupling to scatterers is independent of k. It is exact for the deformation potential coupling to optical phonons, but not for polar coupling, in which case the effective local scattering strength U² can be found by averaging, see Appendix C of [14]. In this work U²_ph = 2500 meVÅ/D_InAs and U²_el = 250 meVÅ/D_InAs with D_InAs = m/(2πℏ²) (the local density of states of a spin subband in an InAs two-dimensional electron gas) have been used. These gave average elastic and phonon scattering rates of around 10^12 s^−1 at the current main peak shown in the Figure 2 inset (V_CE ≈ 270 mV) and the enhanced value for phonon scattering of around 10^13 s^−1 at the phonon peak (V_CE ≈ 320 mV), which is comparable to values reported in Ref. [10]. For V_CE ≈ 270 mV it has been found that the quasi-bound state broadening is 0.8 meV in the ballistic limit, 4.5 meV with only elastic scattering and 4.9 meV when both elastic and inelastic scattering are taken into account. Thus the scattering-induced broadening of quasi-bound states found using the above scattering strengths is consistent with typical values found in the literature [1].
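For illustration, the two central operations of each iteration at fixed σ, |k| and E, namely the matrix form of the Dyson equation and the kinetic equation, can be written in a few lines of numpy. Everything below (grid size, barrier profile, contact broadening, occupations) is a made-up toy example, not the actual InGaAs parameters of the model.

```python
import numpy as np

N = 60                                   # grid points along z
dz = 0.5                                 # grid spacing (arbitrary units)
t = 1.0 / dz**2                          # finite-difference "hopping" coefficient

V = np.zeros(N)                          # toy double-barrier potential profile
V[20:25] = 0.3
V[35:40] = 0.3
H = np.diag(2.0 * t + V) - t * np.eye(N, k=1) - t * np.eye(N, k=-1)

# Toy contact self-energies: imaginary broadening on the boundary sites
# (corresponding broadening Gamma = -2 Im Sigma^R = 0.1)
Sigma_R = np.zeros((N, N), dtype=complex)
Sigma_R[0, 0] = Sigma_R[-1, -1] = -0.05j

E = 0.35
eta = 1e-6
G_R = np.linalg.inv((E + 1j * eta) * np.eye(N) - H - Sigma_R)   # Dyson equation
G_A = G_R.conj().T                                              # advanced GF

# Kinetic equation: lesser GF from the lesser self-energy (emitter filled, collector empty)
f_E, f_C = 1.0, 0.0
Gamma = 0.1
Sigma_less = np.zeros((N, N), dtype=complex)
Sigma_less[0, 0] = 1j * Gamma * f_E
Sigma_less[-1, -1] = 1j * Gamma * f_C
G_less = G_R @ Sigma_less @ G_A

local_dos = -G_R.diagonal().imag / np.pi            # local density of states at this E
electron_density = G_less.diagonal().imag / (2 * np.pi)   # proportional to n(z) at this E
```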
The energy-resolved spin-subband current density i_γ(σ, E) entering the system through contact γ = E, C (emitter and collector) is given by the expression from [11]. The total spin-subband current densities flowing through the device are obtained by integrating i_γ(σ, E) over energy, while the total current density is J = J(+) + J(−). In the absence of a lateral electric field, the Kramers degeneracy implies a zero spin polarization of the total current. Thus, the one-sided collector geometry is assumed [4], in which case the spin polarization is given by P = [J(+) − J(−)]/[J(+) + J(−)].

Results

Figure 2 shows the dependence of P and the current density J (inset) on the external bias. For reasons of compactness, curves for J(+) and J(−) separately are not shown. They have a shape similar to J (multiplied by a factor of 0.5) and are mutually displaced in voltage by approximately ∆V_CE = 2 mV ∼ ∆E(k_F)/e. Maxima in P occur at places where there is a sudden change of the magnitude of either J(+) or J(−): around the current turn-on (V_CE^on ≈ 245 mV) the J(−) component suddenly rises, leading to a negative peak in P; the positive peak in P occurs at the current turn-off voltage (V_CE^off ≈ 285 mV), where J(−) goes to zero before J(+). The polarization is reduced by the broadening of features in J(V_CE), which is the main effect of carrier scattering. The maximal value of P for the fully ballistic transport (U_ph = 0, U_el = 0) of around 0.4 is reduced to around 0.1 when elastic scattering is present. LO-phonon scattering causes the phonon peak at V_CE ≈ 320 mV in both subband currents, but it is so wide that no feature in P(V_CE) is observed. Compared to the case when only the elastic scattering is taken into account (U_ph = 0, U_el ≠ 0), the case with LO-phonon scattering does not create a significant difference in P(V_CE), because when V_CE is large enough to make resonant transmission with phonon emission possible, there are no carriers that can be resonantly transmitted (without scattering), hence no sharp features in J(V_CE) exist. Three cases have been considered: ballistic transport (diamonds), elastic scattering only (stars) and both elastic and LO-phonon scattering (circles).
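The extraction of the polarization from the two subband currents can be sketched as follows, assuming the definition P = (J(+) − J(−))/(J(+) + J(−)) used above; the energy grid, line shapes and supply window are made-up toy inputs chosen only to mimic two resonances split by about 2 meV.

```python
import numpy as np

E = np.linspace(0.0, 0.1, 800)                       # energy grid in eV (toy)

def lorentzian(E, E0, gamma):
    """Toy resonant-transmission line shape."""
    return gamma / ((E - E0) ** 2 + gamma ** 2)

# Made-up energy-resolved subband currents: two resonances split by ~2 meV
i_plus = lorentzian(E, 0.050, 0.002)
i_minus = lorentzian(E, 0.052, 0.002)

# Toy emitter supply window: only states below an effective cutoff contribute
f_supply = (E < 0.051).astype(float)

J_plus = np.trapz(i_plus * f_supply, E)              # integrate over energy
J_minus = np.trapz(i_minus * f_supply, E)

P = (J_plus - J_minus) / (J_plus + J_minus)          # current spin polarization
print(f"J(+)={J_plus:.4f}, J(-)={J_minus:.4f}, P={P:.3f}")
```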
In summary, we have described a model to account for effects of carrier scattering on spin polarization of the current. By choosing numerical values of scattering strengths that yield average relaxation times of around 1ps (and around 0.1ps for resonant LO-phonon emission), the peak in spin polarization is found to decrease from 0.4 in the ballistic case to around 0.1 when carrier scattering is present.
|
2019-04-20T13:02:22.409Z
|
2010-07-01T00:00:00.000
|
{
"year": 2010,
"sha1": "43afb9b4a479cfa8bd04633f5f904f167ddda09e",
"oa_license": null,
"oa_url": "http://iopscience.iop.org/article/10.1088/1742-6596/242/1/012008/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "8ed478f7d4a5d5cfed83a492e4734c1b1ba68820",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
}
|
9497363
|
pes2o/s2orc
|
v3-fos-license
|
Initial Data for Black Holes and Black Strings in 5d
We explore time-symmetric hypersurfaces containing apparent horizons of black objects in a 5d spacetime with one coordinate compactified on a circle. We find a phase transition within the family of such hypersurfaces: the horizon has different topology for different parameters. The topology varies from $S^3$ to $S^2 \times S^1$. This phase transition is discontinuous -- the topology of the horizon changes abruptly. We explore the behavior around the critical point and present a possible phase diagram.
Several black object solutions exist in spacetimes with dimensionality greater than four containing compact dimensions. Among these solutions are black strings (BS) and black holes (BH), two black objects with distinct horizon topology. However, no general analytic solution is known for a black object in a compactified spacetime of more than four dimensions.
Gregory and Laflamme (GL) [1,2] discovered that a uniform black string, a product of a Schwarzschild solution with a line, develops a dynamical instability if the compactification radius is 'too large'. They postulated the existence of a new branch of non-uniform black string solutions. The endpoint of an unstable uniform string was expected to be a BH. However, Horowitz and Maeda [3,4] argued that the GL instability cannot cause a uniform BS to decay to a BH, at least in a finite affine time. This is because of the 'no tear' property of horizons. They conjectured that the endstate of such a decay would rather be a non-uniform BS. Non-uniform BS solutions connected to the GL point were constructed perturbatively by Gubser [5]. Wiseman [6] found non-uniform BS solutions using fully non-linear numerical calculations. However, Wiseman stresses that these solutions are most likely unstable.
We consider here the five-dimensional spacetime with one compact dimension of length 2L. We denote the coordinate along the compact dimension by z. A neutral black object has a characteristic dimensionful parameter, its asymptotic 4d mass, M. It is convenient to describe the system by a single dimensionless parameter, ζ ≡ G_4 M/(2L), where the effective Newton constant is G_4 = G_5/2L. If ζ ≪ 1 the black object is expected to resemble a 5d Schwarzschild BH since local measurements cannot probe the compactness of the z direction. The horizon of this BH is therefore expected to have an S^3 topology. At the opposite end, ζ ≫ 1, one expects that the horizon would extend over the compact dimension, wrapping it completely. The horizon topology of this object, S^2 × S^1, suggests the name 'BS'. A uniform BS is described by a 4d Schwarzschild solution times a circle. A non-uniform BS is characterized by a non-trivial z-dependence along the circle. An attempt, based on thermodynamical reasoning, to construct a phase diagram that consistently includes the different phases of solutions was recently undertaken by Kol [7].
In this letter we make a step toward the self-consistent study of black objects in a compact 5d background. We solve numerically the initial-value problem at a moment of time symmetry for spacetimes containing black objects. We determine the apparent horizon of the black objects present in these solutions and we follow the transition from spherical to cylindrical topology in a family of solutions with different values of the dimensionless parameter ζ.
Although the configurations we find are not 5d static solutions, these solutions give us some insight into the behavior of the black objects in a compactified background. A similar approach was previously applied [8] to explore the time-symmetric BH problem in the brane world scenario. We consider a time-symmetric slice through the black object's space-time. To generate the black object solution we consider a configuration with artificial matter around the origin. This matter gives rise to a non-flat spatial metric along the slice. We solve numerically for this metric. Then we solve for the apparent horizon of the black object. The method allows us to determine the spatial geometry of the black object. Since we solve along a time-symmetric slice, this spatial geometry constitutes the initial data for subsequent dynamical time evolution of this black object's geometry. Limiting ourselves only to the determination of the initial data, we find explicitly time-symmetric solutions with distinct horizon topologies and construct the phase diagram for a family of such solutions parametrized by ζ. We show that there is a phase transition within this family. We identify the critical ζ_c and show explicitly that if ζ_c is reached from below the corresponding BH solution becomes deformed and the horizon has a cigar-like shape. If ζ_c is approached from above, the corresponding BS solution becomes more and more non-uniform. However, this cannot be considered as evidence for the existence of non-uniform static BS solutions. This initial data may relax to a uniform string during its dynamical evolution. We find that the transition between both topologies is not smooth: the spherical-like horizon jumps suddenly to become a cylindrical-like horizon. Put differently, the BS does not pinch off, nor do the south and north poles of the BH touch each other.
The method that we describe here enables us to study properties of black objects with different apparent-horizon topologies in a single consistent numerical scheme. Our method does not require a prescription of the topology of the horizon. The topology is not predetermined but rather is a derived result of the numerics. Usually, to obtain a static black object solution of the 5d Einstein equations one would have to specify the topology of the horizon. In other words, there is no single numerical scheme that can find black objects with distinct horizon topologies.
Let us consider a time-symmetric, spacelike hypersurface Σ_t with a vanishing extrinsic curvature. The Hamiltonian constraint reads R = 16π T_μν t^μ t^ν, where R is the scalar curvature of Σ_t, t^μ is the unit normal to Σ_t and T_μν is the 5d stress-energy tensor. From now on we work in units where c = G_5 = 1. The momentum constraint is trivially satisfied provided that the matter is static and the extrinsic curvature of this slice vanishes. We choose the metric on Σ_t to be conformally flat, dl² = ψ²(r, z)(dr² + r² dΩ_2² + dz²), where r is the radial coordinate in the three extended spatial directions and z runs along the compact one. This ansatz has been adopted for simplicity. We do not expect the static solution to be conformally flat. However, we expect that the trend we discuss is insensitive to this assumption. The method can be easily generalized to non-conformal choices. More complicated metrics will be considered elsewhere.
With this choice of the metric the constraint equation takes the form ∇²ψ = −(8π/3) ρ ψ³, where ∇² is the flat-space Laplacian. Here we have defined ρ ≡ T_μν t^μ t^ν > 0, the physical energy density. This ρ is the matter density, distributed around the origin. In practice, we pick a localized matter distribution that occupies some finite region around the origin. This artificial matter gives rise to a non-trivial ψ in a way that does not involve specifying an inner boundary of a black object. For sufficiently concentrated mass an apparent horizon appears. For lower values of ρ the solution describes a momentary static star.
To solve the elliptic Eqn (3) we have to specify boundary conditions. These conditions are: (i) At the equatorial plane, z = 0, we have a reflection b.c. ∂_z ψ = 0; (ii) At the symmetry axis, r = 0, we also have a reflection symmetry: ∂_r ψ = 0; (iii) Periodicity in z implies also a reflection symmetry at z = L: ∂_z ψ = 0; (iv) At r → ∞ we set ψ → 1 + G_4 M/r, which is consistent with the 4d asymptotic behavior.* This condition is implemented by rewriting it as ∂_r(rψ) = 1, eliminating the need to specify M. Instead, M is determined from the numerical solution.
* The asymptotic metric is expected to take the form of a 4d Schwarzschild times a line. This metric generally cannot be brought to the conformally flat form (2). However, asymptotically, as r → ∞, a transformation is possible at the leading order. The asymptotic behavior of the conformal factor would become ψ → 1 + G_4 M/r. The length of the circle tends asymptotically to 2L, so we can read off the asymptotic 4d mass in the units of the effective Newton constant G_4 = G_5/2L.
There is a wide literature about equations like (3), see [9] and references therein. This equation is ill posed and it does not have a unique solution. In order to bring the equation to a well-posed form we rescale the matter density, ρ → ρ̂ ψ^(−3−s), with some s > 0 and a non-physical density ρ̂, see e.g. [9]. The resulting equation is well posed and has a unique solution. The rescaling of the matter density is unimportant as we are interested only in the external vacuum part. The solution for ψ is found using relaxation.
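A minimal relaxation solver of this kind could look like the Python sketch below. The grid, the source profile and the source term −(8π/3) ρ̂ ψ^(−s) follow the reconstructed form of Eq. (3) with the rescaled density, so the coefficients and the boundary-condition discretization should be read as assumptions for illustration rather than the authors' actual implementation.

```python
import numpy as np

# Toy grid covering 0 < r < R_cut, 0 < z < L (cell-centred to avoid r = 0)
R_cut, L = 5.0, 1.0
nr, nz = 100, 40
dr, dz = R_cut / nr, L / nz
r = (np.arange(nr) + 0.5) * dr
z = (np.arange(nz) + 0.5) * dz
R, Z = np.meshgrid(r, z, indexing="ij")

s = 1.0
rho_hat = 1.0e2 * ((R < 0.5) & (Z < 0.5))        # toy compact source near the origin

psi = np.ones((nr, nz))
diag = -2.0 / dr**2 - 2.0 / dz**2                # diagonal of the discretised Laplacian

for sweep in range(20000):                       # plain Jacobi relaxation
    p = np.pad(psi, 1, mode="edge")              # reflection b.c. at r = 0, z = 0, z = L
    # outer boundary r = R_cut: d(r*psi)/dr = 1 fixes the ghost value
    p[-1, 1:-1] = (dr + r[-1] * psi[-1, :]) / (r[-1] + dr)

    lap = ((p[2:, 1:-1] - 2 * psi + p[:-2, 1:-1]) / dr**2
           + (2.0 / R) * (p[2:, 1:-1] - p[:-2, 1:-1]) / (2 * dr)
           + (p[1:-1, 2:] - 2 * psi + p[1:-1, :-2]) / dz**2)
    source = -(8.0 * np.pi / 3.0) * rho_hat * psi**(-s)   # rescaled, well-posed source
    offdiag = lap - diag * psi                   # neighbour contributions only
    psi_new = (source - offdiag) / diag          # Jacobi update of the centre value

    if np.max(np.abs(psi_new - psi)) < 1e-9:
        break
    psi = psi_new
```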
We obtain a sequence of momentary time-symmetric solutions by fixing the density of the artificial matter and its location as ρ̂ = 10^6 Θ(0.5 − r) Θ(0.5 − z), and varying continuously the length of the compact asymptotic circle, 2L. The figures below are obtained for this source. We checked that our results are not affected by the specific choice of the source. Taking the smooth distribution ρ̂ = 10^6 exp(−r²/σ_r²) [exp(−z²/σ_z²) + exp(−(z − 2L)²/σ_z²)] with various σ_r and σ_z we found the same overall picture. The values of ζ_c varied, as expected, depending on the source. The variations in ζ_c are ζ_c = 0.98–1.8. Moreover, taking σ_r ≃ 0.5, σ_z ≫ L, i.e. a practically cylindrical source, we were able to reproduce the uniform BS solution. The position of the horizon in this solution (see Eq. (5) and the subsequent discussion) was determined to within 0.1%. This provides an independent check on the overall accuracy of our calculation in addition to other standard tests of convergence, error scaling, etc.
To solve the equation for ψ we used grids, covering the domain 0 < z < L, and 0 < r < R cut , with typical grid spacings of ∆r ∼ 0.02 and ∆z ∼ 0.01. The value of R cut , where the grid was cutoff and the asymptotic b.c. (iv) was implemented, has been taken as R cut = 5, 10 and 20. We checked that the results are insensitive to variation of R cut , provided that R cut > 5.
To envisage the spatial metric around the black objects in Fig.1 we plot the contours of ψ in two cases that correspond to a BH solution and a non-uniform BS solution. The matter is located near the origin and is encircled by the horizon in either case. The geometry outside the apparent horizon has an axisymmetric structure and it becomes asymptotically flat. The ψ contour lines are spherical near the origin and become cylindrical as r increases.
Once we obtain ψ we determine the existence of the apparent horizon. An apparent horizon is defined by a zero expansion of the null rays generating the horizon [10]. For the time-symmetric hypersurface Σ_t this condition reduces to the vanishing divergence of the unit normal, ∇_μ n^μ = 0, where n^μ is the unit normal to the apparent horizon. In both plots one observes how the deformation of the contour lines fades asymptotically. We also plot the apparent horizons in both cases. These are designated by thick curves. In the upper plot, which corresponds to the BS phase, there are two horizons: the inner spherical apparent horizon, designated by the dotted thick curve, and the outer cylindrical horizon, designated by the solid curve.
To simplify the treatment, we distinguish between two different topologies for the horizon.
(1) When the horizon has the topology S 2 × S 1 we choose cylindrical coordinates (r, z) and we solve for a curve r = h(z).
(2) When the horizon has the topology of S 3 it is convenient to transform to spherical coordinates R, χ defined by r = R sin(χ), z = R cos(χ). The horizon in the R, χ plane is given by a curve R = h(χ).
The unit normal to the curve that defines the horizon is proportional to the gradient of the level function defining it, n_μ = C(r, z) ∂_μ(r − h(z)) in the cylindrical case and n_μ = C(R, χ) ∂_μ(R − h(χ)) in the spherical case. The parameter C(r, z) could be read from the normalization condition n_μ n^μ = 1. In both cases we solve numerically equation (5) to obtain the position of the apparent horizon.
A useful qualitative parameter employed as a measure of the non-uniformity of a BS [5] is λ ≡ (1/2)(r_max/r_min − 1), where r_min and r_max are the minimal and the maximal 4d Schwarzschild radii of the apparent horizon. For a uniform string λ = 0. In the BH phase λ = ∞. Therefore, for a BH we define another parameter, λ′ ≡ R_max/R_min − 1, where R_max and R_min are the 5d Schwarzschild radii of the horizon. This parameter gives an idea of the deformation of the BH's horizon. We find that there are two topologically distinct apparent horizon solutions. At small ζ the topology of the horizon is S^3 and the horizon is close to being exactly spherical. When ζ increases we see that the horizon begins to deform, deviating from a spherical shape but still remaining topologically a 3-sphere. At a certain value, ζ_m ≃ 1.78, a phase transition takes place: the topology of the apparent horizon changes from S^3 to S^2 × S^1. In fact there are two apparent horizons in the BS phase. The outer horizon has a cylindrical topology, while the inner one has a spherical topology. In Fig. 2 we plot the sequence of solutions parametrized by ζ. In this figure there are finite values of ζ for which λ = 0 or λ′ = 0. In fact, for these ζ the deformation of the horizon becomes so small that it cannot be resolved by our numerics, and we put the corresponding lambdas to zero.
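As a small illustration of how these diagnostics can be read off a numerically determined horizon curve, the sketch below computes λ for a made-up cylindrical horizon r = h(z) and λ′ for a made-up spherical horizon R = h(χ); taking the proper radius of the horizon sections as ψ times the coordinate radius is an assumption of this illustration, not a statement of the authors' exact prescription.

```python
import numpy as np

# Black-string phase: horizon curve r = h(z) on the half-period 0 <= z <= L (toy data)
z = np.linspace(0.0, 1.0, 200)
h_z = 0.30 + 0.05 * np.cos(np.pi * z)                # made-up non-uniform horizon
psi_on_h = 1.0 + 0.2 * np.exp(-z)                    # made-up conformal factor on the horizon
r_schw = psi_on_h * h_z                              # proper radius of the S^2 sections (assumption)
lam = 0.5 * (r_schw.max() / r_schw.min() - 1.0)      # non-uniformity parameter lambda

# Black-hole phase: horizon curve R = h(chi) (toy data)
chi = np.linspace(0.0, np.pi / 2, 200)
h_chi = 0.25 + 0.03 * np.cos(2.0 * chi)              # made-up deformed S^3 horizon
psi_on_h_bh = 1.0 + 0.1 * np.exp(-h_chi)             # made-up conformal factor
R_schw = psi_on_h_bh * h_chi                         # proper 5d radius (assumption)
lam_prime = R_schw.max() / R_schw.min() - 1.0        # deformation parameter lambda'

print(f"lambda = {lam:.3f}, lambda' = {lam_prime:.3f}")
```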
Another interesting result is the measure of the geometrical deformation of the horizons. In the BS and the BH phases this measure is supplied by λ and λ′, respectively. The non-uniformity of the BS is displayed in the upper panel of Fig. 2. Near the critical point the most non-uniform string has λ ≃ 0.22. The non-uniformity disappears as ζ increases. For ζ ≥ 3.0 the BS becomes a uniform BS. The deformation of the horizon in the BH phase can be seen in the bottom panel of the same Figure. The maximal radius, R_max, always occurs at the axis, r = 0. The BH becomes more and more oblate and stretched along the symmetry axis as we approach the critical point. The most deformed BH has λ′ ≃ 0.15 just before the transition.
The phase transition is discontinuous. At the critical value ζ_c the spherical horizon jumps suddenly to become a cylindrical one. To get insight into the behavior of the phase transition, in the BH phase we have computed the proper distance along the r = 0 axis from the BH horizon at the axis to z = L: ℓ = ∫ ψ(r = 0, z) dz, where the integration runs from the horizon location on the axis (determined from Eq. (5)) up to z = L.
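Numerically, this proper-distance integral amounts to a one-dimensional quadrature of ψ along the axis; a toy version (with a made-up ψ profile and a made-up horizon location) is:

```python
import numpy as np

L = 1.0
z = np.linspace(0.0, L, 400)                          # axis coordinate (toy units)
psi_axis = 1.0 + 0.8 * np.exp(-3.0 * z)               # made-up conformal factor at r = 0
z_h = 0.35                                            # made-up horizon location on the axis

mask = z >= z_h
ell = np.trapz(psi_axis[mask], z[mask])               # proper distance from horizon to z = L
print(f"ell = {ell:.3f}")
```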
This distance decreases as we increase ζ. One could expect that as the horizon grows and as ℓ → 0 the north pole of the BH would tend towards its south pole and they would touch. However, we find that as ζ → ζ_c, ℓ reaches a finite value. We plot the behavior of ℓ as a function of ζ − ζ_c in Fig. 3. One observes that ℓ tends to a positive constant just before the transition. The initial data that we have constructed here is analogous to the Misner initial data [11] for a family of two momentarily static, equal-mass BHs in 4d GR. Misner chose a sequence of conformally flat metrics with the conformal factor parametrized by a certain parameter µ that is related to the mass of the BHs and their proper mutual separation. As µ varies, the shape of the initial apparent horizons varies. If the BHs are close enough, which occurs for small µ, a new apparent horizon suddenly appears, surrounding both BHs on the initial hypersurface. In other words, there is a critical µ_0 that divides two distinct possibilities for the topology of the apparent horizon on the initial slice. Just by looking at this initial data sequence one has an indication that the event horizons of the two BHs will merge and form a distorted BH during an actual evolution. The value of µ at which this merger occurs would generally not coincide with the theoretical µ_0. In fact, the numerical evolution [12,13] of Misner initial data shows that the qualitative picture obtained for the sequence of the initial data is correct. The actual critical value of µ does not coincide with µ_0; they are, however, not that different from each other.
Here we have an infinite array of BHs that are approaching each other simultaneously. We have shown that there is a family of initial data parametrized by ζ.
When the separate BHs in the array get closer they become distorted from the spherical shape. At a critical value ζ_c the separate horizons are suddenly engulfed by a single cylindrical-like horizon. The effective cylindrical horizon after the transition is non-uniform.
It is important to stress that the sudden jumps of the apparent horizon topology are different from the first-order transition that can happen between different phases of static solutions, discussed in [5,7]. This is because the apparent horizon that we discuss here is not causal, as it is defined only locally. The event horizon is a global concept and it is expected to be larger than the apparent horizon. Only for static solutions do both horizons coincide. Since our solution is not static, the jumps of the apparent horizon cannot exclude the possibility of a smooth transition between the BH and BS event horizons, discussed in [14].
The concrete numerical value of ζ_c isn't important as it is just a number characteristic of the specific initial data sequence. However, we expect that the qualitative behavior would be similar in the static solutions as well. We believe that a dynamical evolution of our initial data would confirm this qualitative picture and will yield the actual critical value, ζ_c.
|
2018-04-03T00:11:01.691Z
|
2002-11-22T00:00:00.000
|
{
"year": 2003,
"sha1": "00fc4ade703b1ef7bbca239749e97516af6ea744",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/0211210",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "dd87447356fae9a8325ac2af08cc8035385fd38f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
}
|
269007487
|
pes2o/s2orc
|
v3-fos-license
|
Promotion of self-directed learning abilities among Chinese medical students through preparing for career calling and enhancing teaching competencies in medical education: a cross-sectional study
Background Medical students face a heavy burden as they are tasked with acquiring a vast amount of medical knowledge within a limited time frame. Self-directed learning (SDL) has become crucial for efficient and ongoing learning among medical students. However, effective ways to foster SDL ability among Chinese medical students are lacking, and limited studies have identified factors that impact the SDL ability of medical students. This makes it challenging for educators to develop targeted strategies to improve students' SDL ability. This study aims to assess SDL ability among Chinese medical students and examine the effects of career calling and teaching competencies on SDL ability, as well as the possible mechanisms linking them. Methods Data were collected from 3614 respondents (effective response rate = 60.11%) using cross-sectional online questionnaires and analyzed using IBM SPSS Statistics 22.0. The questionnaire comprised a Demographic Characteristics Questionnaire, Self-directed Learning Ability Scale (Cronbach's alpha = 0.962), Teaching Competencies Scale, and Career Calling Scale. Results The average SDL ability score of Chinese medical students was 3.68 ± 0.56, indicating a moderate level of SDL ability. The six factors of the Self-directed Learning Ability Scale (self-reflection, ability to use learning methods, ability to set study plans, ability to set studying objectives, ability to adjust psychological state, and willpower in studying) accounted for 12.90%, 12.89%, 12.39%, 11.94%, 11.34%, and 8.67% of the variance, respectively. Furthermore, career calling was positively associated with SDL ability (β = 0.295, p < 0.001), and SDL ability was positively associated with teaching competencies (β = 0.191, p < 0.01). Simple slope analysis showed that when the level of teaching competencies was higher, the influence of career calling on SDL ability was stronger. Conclusions Chinese medical students' SDL ability has room for improvement. Medical students could strengthen their willpower in studying by setting milestone goals with rewards, which could inspire their motivation for the next goal. Teachers should guide students to learn from experience to improve their reflective ability. Educators play a crucial role in bridging the gap between career calling education and SDL ability enhancement, highlighting the significance of optimal teaching competencies. Colleges should focus on strengthening teachers' sense of career calling and teaching competencies.
Background
Medical education functions in a highly dynamic environment [1]. Both during their time in college and in their future professional careers, medical students need to continually update their medical knowledge to maintain their clinical skills [2]. They are required to acquire more information than students of other subjects and are expected to develop a lifelong learning ability [3]. However, the overwhelming volume of medical information [4], extensive literature [5], and limited time [6] impose a huge burden on medical students. In the face of these challenges and the evolving nature of medical education, it has become an urgent challenge for educators to help develop efficient and ongoing self-directed learning (SDL) abilities among medical students.
After Malcolm Knowles first introduced the concept of SDL, several scholars understood that SDL would significantly advance medical students' careers [5][6][7][8].Since then, educators have increasingly focused on medical students' abilities to effectively plan their time and acquire the necessary professional skills for their future careers.Holec deeply studied the concept of SDL ability, considering it the capability to take responsibility for one's learning [9], including setting learning objectives, deciding on learning content and pace, choosing learning methods and technologies, monitoring learning progress, and evaluating learning outcomes.SDL has gradually become an indispensable competency for medical students [10], helping them stay abreast of the latest medical advancements [11].Without a sense of SDL for academic learning, medical students may struggle to meet professional demands in the present and future [12].Therefore, it is critical to integrate a sense of SDL into daily life to adequately prepare medical students for their careers.
Nevertheless, many medical students struggle with effective time management [13].Due to the traditional exam-oriented education system in China, some Chinese medical students, accustomed to acquiring knowledge from teachers, remain passive learners with limited learning activity [14,15].These students may not have the ability to learn independently when faced with problems [16] and may lack a sense of learning consciousness [17], ultimately lagging behind the pace of medical development.This is a significant problem that could affect the quality of medical services in China in the coming years.To become outstanding medical practitioners, Chinese medical students must urgently enhance their SDL ability.Fostering an adaptive and sustainable SDL ability among Chinese medical students is an imminent requirement that medical educators must address.
Before proposing appropriate strategies to enhance this ability, it is imperative to evaluate the current SDL ability of Chinese medical students.However, after reviewing the literature, we found minimal research on measuring the current level of SDL ability among medical students.Therefore, we required a suitable tool to assess the current SDL ability of Chinese medical students and identify effective strategies to enhance it.Based on previous literature, this study explored two factors that may have a significant influence on Chinese medical students: career calling [18,19], and teaching competencies [20,21].
In recent years, there has been an increasing focus on medical practitioners' career calling.Scholars have defined career calling as a subjective experience in which individuals are determined to work voluntarily and positively [22], indicating a passion or drive toward working in a particular field [23].Bunderson noted that when individuals strongly identify with their jobs, they tend to focus all their attention on work [24].Moreover, a high level of career calling is related to positive emotions [25], and this active feeling can lead to proactive behaviors [26].In essence, career calling can maintain the passion of medical students for learning, encouraging them to actively plan and conduct their studies [27].Can it be extrapolated that the higher the career calling of a Chinese medical student, the better their SDL ability?Based on these predictions, career calling may serve as a protective factor in fostering SDL ability.Therefore, this study aimed to explore the relationship between career calling and SDL ability, along with constructive factors to mobilize SDL ability.
Teaching philosophy reflects an individual's beliefs and values about teaching and learning.It discusses the self-identity of teachers and how they educate others [28].Thus, teachers can play the role of a bridge between career calling education and medical students' learning.Teachers with strong beliefs and values related to career calling may influence students in a subtle way.Thus, it is important to focus on teachers' teaching capacity.Studies have defined teaching competencies as comprising teachers' personal characteristics, knowledge, skills, and attitudes required in various teaching environments [29].Teachers with appropriate characteristics that align with educational requirements can benefit students' academic achievements [30].Deep subject knowledge can also improve students' grades [31], while good teaching skills can direct students' focus toward learning [32].Additionally, positive attitudes toward teaching can promote students' positive attitudes [33].Overall, teachers' teaching competencies influence students' learning and academic achievements and are highly significant for nurturing future talents.Therefore, this study posits that teaching competencies play a positive moderating role between Chinese medical students' career calling and SDL ability.In practical teaching, teachers' teaching competencies directly impact students, who observe and judge these competencies more objectively and comprehensively.To better evaluate teaching competencies, this study used "students' perception of their teachers' competencies in teaching" as an evaluation method.
This study aimed to measure Chinese students' SDL ability level and explore the correlations among career calling, teaching competencies, and SDL ability.To accomplish these aims, we proposed the following two hypotheses: Hypothesis 1 Career calling is positively associated with SDL ability among Chinese medical students.
Hypothesis 2 Teaching competencies positively moderate the relationship between career calling and SDL ability among Chinese medical students.
Ethics statement
The procedures of this study adhered to the guidelines of the Declaration of Helsinki and were reviewed and approved by the Ethics Committee of the Institutional Review Board of Harbin Medical University (ECHMU: HMU202072). Each participant provided written online informed consent before participating in this study. All data collected from the participants were kept anonymous and confidential to protect their privacy.
Survey design and data collection
Initially, according to the calculation method and standard requirements for the cross-sectional sample size based on Zhou et al. [34], the minimum sample size for this study was calculated to be 1824. Considering a minimum response rate of approximately 40% based on previous online survey experience, the sample size was expanded to 4560. To further ensure data quality, we determined the final number of respondents to be 6000.
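The inflation of the minimum sample size to allow for the expected response rate is simple arithmetic; the sketch below uses only the figures quoted above.

```python
# Sample-size inflation for an anticipated 40% response rate, using only the
# figures quoted above (1824 minimum, 40% expected response, final target 6000).
minimum_n = 1824
expected_response_rate = 0.40

required_invitations = minimum_n / expected_response_rate
print(required_invitations)   # -> 4560.0, further rounded up by the authors to 6000
```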
After determining the sample size, six medical universities were selected based on their size, academic programs, research performance, admission scores, and number of students.Different specialties and grades were then randomly selected in each university.These universities are located in Nanjing, Guangzhou, Dalian, Harbin, Mudanjiang, and Daqing.
To ensure the cost-effectiveness, time-effectiveness, and accessibility of the study [35], a cross-sectional anonymous online survey was conducted using a multistage stratified convenient sampling method to collect data from medical students from July to September 2021.Based on the characteristics of medical students, we used a multi-staged stratified convenient sampling method, with quotas allocated by the division of students' years and majors.First, we grouped medical students according to their majors.Next, we further divided these groups into smaller groups based on their years.Finally, we distributed questionnaires and received responses in accordance with the predetermined quantity.The survey was conducted through the online survey platform "Questionnaire Star." The researchers monitored the collected questionnaires in real-time through the platform and used it to effectively manage the data.
Data quality control
Data quality is key to ensuring the reliability and validity of a study.In this study, a data quality control process was implemented in three stages: questionnaire design, survey administration, and data processing.
Questionnaire design
The questionnaire included three "seriousness test questions" placed at the beginning, middle, and end.These questions prompted respondents to select specific answers to test their seriousness [36].Additionally, a "self-evaluation question of answer quality" was included at the end of the questionnaire for respondents to evaluate the quality of the questionnaire.Each participant was allowed to respond only once.
Survey administration
One or two research leaders were selected from each university to conduct an "accurate survey" of the target participants.This ensured that all the questionnaires were completed by the target groups.
Data processing
During data processing, strict data screening criteria were applied. Responses with incorrect selections to any of the "seriousness test questions" were deleted. Respondents who took less than three hundred seconds to complete the questionnaire were considered "speeders," and their questionnaires were deleted. Questionnaires that participants suggested deleting were also excluded. Finally, each remaining questionnaire was reviewed by the authors, and those with an irregular distribution of answers were deleted.
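The first three screening rules can be automated; a minimal sketch is given below, assuming the raw responses sit in a CSV with hypothetical columns duration_sec, check1, check2, and check3 (the seriousness-test items) and assumed correct options for those items. None of these names or values come from the study.

```python
import pandas as pd

# Minimal sketch of the screening rules above. The file name, column names and
# the "correct" seriousness-test options are hypothetical placeholders.
raw = pd.read_csv("responses.csv")

expected = {"check1": 3, "check2": 1, "check3": 5}   # assumed correct options

passed_checks = pd.Series(True, index=raw.index)
for col, answer in expected.items():
    passed_checks &= raw[col] == answer               # drop wrong seriousness answers

not_speeder = raw["duration_sec"] >= 300              # "speeders": under 300 seconds

clean = raw[passed_checks & not_speeder].copy()
print(f"kept {len(clean)} of {len(raw)} questionnaires")
```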
Study instruments
A Demographic Characteristics Questionnaire, Selfdirected Learning Ability Scale, Teaching Competencies Scale, and Career Calling Scale were used.Permissions were obtained for using the Teaching Competencies Scale and the Career Calling Scale.
Measurement of demographic characteristics
Eight demographic characteristics were collected using a self-designed questionnaire: gender, grade, major, experience of leadership, hometown, monthly living expenses, parenting style, and education level of parents. Student grade was collected as a continuous variable ranging from 1 to 5. The majors of students were categorized into eight groups: "basic medical science," "clinical medicine," "stomatology," "public health," "pharmacy," "medical technology," "nursing," and "others." Leadership experience was divided into "student leaders" and "ordinary students." Students' hometowns were categorized as "rural" or "urban." The monthly living expenses of students were categorized into four groups: RMB "0 ∼ 1000," "1000 ∼ 1500," "1500 ∼ 2000," and "2000 and above." Parenting style was divided into four categories: "neglecting," "permissive," "authoritarian," and "authoritative." The education level of parents was categorized as "primary school or below," "junior middle school," "high school," "junior college," or "bachelor's degree or above."
Measurement of SDL ability
According to the definition of SDL ability in previous studies [37,38], SDL ability was divided into six dimensions: ability to set studying objectives, willpower in studying, ability to set study plans, ability to use learning methods, ability to adjust psychological state, and ability to self-reflect.A 28-item instrument designed by the authors was used to measure SDL ability level.In a previously published article, the self-designed SDL ability scale was tested and implemented [39].To ensure the applicability of the scale, a pre-survey was conducted with 454 students, which showed good reliability and validity.Items were scored on a 5-point Likert scale ranging from 1 "totally inconsistent" to 5 "totally consistent, " with higher scores representing a higher degree of SDL ability.
Measurement of teaching competencies
The teaching competencies of the teachers were assessed using a 5-item Teaching Competencies Scale, a questionnaire developed for German students by Thomas and Müller [40].Items were scored on a 5-point Likert scale ranging from 1 "totally not in line with" to 5 "fully in line with, " with higher scores indicating a higher level of teachers' teaching competencies.The cross-cultural adaptation of the scale into Chinese included performing forward and backward translations, with an assessment of its cultural equivalence and clarity.High reliability was demonstrated in the reliability analysis, with a Cronbach's α-coefficient of 0.943 for the scale in this study.
Measurement of career calling
Medical students' career calling level was assessed using the 4-item Career Calling Scale revised by Dik et al. [41].The scale has been cross-culturally adapted and verified in other studies in China [42,43].Items were scored on a 5-point Likert scale ranging from 1 "never" to 5 "every day, " with higher scores indicating a higher degree of career calling.Cronbach's α-coefficient for the Career Calling Scale in this study was 0.843.
Statistical analysis
This study utilized Amos version 24.0 and SPSS version 22.0 for statistical analysis, and a two-tailed p < 0.05 was considered statistically significant. We assessed the suitability of the data for factor analysis using the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Bartlett's χ² test of sphericity. Subsequently, exploratory factor analysis (EFA) was conducted to explore the factor structure. We employed principal component analysis (PCA) with varimax rotation to extract six factors, removing items with factor loadings lower than 0.4.
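For readers who want to reproduce this step outside SPSS, the sketch below runs the same suitability checks and a six-factor varimax extraction with the Python factor_analyzer package; the file name and the assumption that the 28 items are the columns of the data frame are illustrative, not part of the study.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Sketch only: the 28 SDL items are assumed to be the columns of this
# hypothetical CSV file; nothing here is the authors' actual code or data.
items = pd.read_csv("sdl_items.csv")

chi_square, p_value = calculate_bartlett_sphericity(items)   # Bartlett's test
kmo_per_item, kmo_total = calculate_kmo(items)                # KMO adequacy
print(f"Bartlett chi2 = {chi_square:.1f}, p = {p_value:.3g}, KMO = {kmo_total:.3f}")

efa = FactorAnalyzer(n_factors=6, rotation="varimax", method="principal")
efa.fit(items)

loadings = pd.DataFrame(efa.loadings_, index=items.columns)
print(loadings[loadings.abs() >= 0.4].round(2))               # suppress loadings < 0.4
```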
Confirmatory factor analysis (CFA) with maximum likelihood estimation was used to validate the factor structure of the Self-directed Learning Ability Scale. Various indexes were used to assess model fit, including the root mean square error of approximation (RMSEA), goodness-of-fit index (GFI), adjusted goodness-of-fit index (AGFI), and normed fit index (NFI), among others. The chi-square goodness-of-fit test (χ²) and the normed chi-square (χ²/df) were used to evaluate the null hypothesis that the model fits the data. However, satisfying this criterion in large samples can be challenging, so we relied on the aforementioned fit indexes in our analysis. Cronbach's alpha coefficient was used to measure the reliability of our instrument.
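The reliability coefficient is straightforward to compute directly; the sketch below implements the standard Cronbach's alpha formula and applies it to random placeholder data (the reported value of 0.962 comes from the real survey, not from this example).

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Random 5-point responses standing in for the 28 SDL items (placeholder data).
rng = np.random.default_rng(0)
fake_items = pd.DataFrame(rng.integers(1, 6, size=(200, 28)))
print(round(cronbach_alpha(fake_items), 3))
```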
We used descriptive statistics and frequencies to analyze the demographic variables and total scores of the three scales.SDL ability scores across different demographic categories were examined using independent samples t-test or one-way ANOVA.In cases where one-way ANOVAs were found to be significant, we conducted least-significant-difference (LSD) tests for multiple comparisons.Pearson correlation analysis was used to examine correlations among SDL ability, teaching competencies, and career calling.Hierarchical multiple regression analysis was employed to test the moderating effect of teaching competencies on the relationship between career calling and SDL ability.All variables related to SDL ability in univariate analysis (p < 0.05) were included in the hierarchical multiple regression model.We performed the model estimation using PROCESS, a convenient, free, and easy-to-use computational add-on for SPSS documented by Hayes [44].Before conducting the regression analysis for moderating effects, we employed mean centering (subtracting raw scores from the mean) to mitigate multicollinearity.
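The moderation test itself amounts to an ordinary regression with a mean-centered interaction term plus simple-slope probing at plus or minus one SD of the moderator; a minimal sketch using statsmodels is shown below, with hypothetical column names (sdl, calling, teaching) and the demographic covariates omitted for brevity.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of the PROCESS-style moderation analysis described above. The file and
# column names are hypothetical; covariates from the univariate analyses are omitted.
df = pd.read_csv("survey_scores.csv")

# Mean-center the predictor and moderator to mitigate multicollinearity.
df["calling_c"] = df["calling"] - df["calling"].mean()
df["teaching_c"] = df["teaching"] - df["teaching"].mean()

model = smf.ols("sdl ~ calling_c * teaching_c", data=df).fit()
print(model.summary())

# Simple slopes of career calling at -1 SD and +1 SD of teaching competencies.
sd_teach = df["teaching_c"].std()
for label, level in [("low (-1 SD)", -sd_teach), ("high (+1 SD)", +sd_teach)]:
    slope = model.params["calling_c"] + model.params["calling_c:teaching_c"] * level
    print(label, round(slope, 3))
```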
Results of EFA, CFA, and reliability
A total of 6012 students were invited to participate in this study. Of these, 3614 questionnaires were completed and passed the quality control procedure, yielding an effective response rate of 60.11%. Results from both the KMO and Bartlett's tests demonstrated that the sample met the criteria for factor analysis, with a KMO measure of sampling adequacy of 0.975.
The six-factor model explained 70.12% of the variance, with each factor contributing as follows: ability to self-reflect (12.90%), ability to use learning methods (12.89%), ability to set study plans (12.39%), ability to set studying objectives (11.94%), ability to adjust psychological state (11.34%), and willpower in studying (8.67%). The pattern and structure of the rotated common factors are shown in Table 1.
The results of the CFA are presented in Table 2.The model fit the data reasonably well, with GFI, AGFI, NFI, and RMSEA all indicating a good fit.While the χ 2 /df was slightly higher than ideal, it is considered acceptable in large sample sizes [45].The path diagram with standardized parameter estimates is shown in Fig. 1.
In this study, Cronbach's alpha coefficients were used to assess the internal reliability of the instrument.The Cronbach's alpha coefficient for the total score was 0.962.The alphas for the sub-scales of ability to set studying objectives, willpower in studying, ability to set study plans, ability to use learning methods, ability to adjust
Current SDL ability level among Chinese medical students
The results indicated that the SDL ability among Chinese medical students was at a moderate level (M = 3.68, SD = 0.56). The scores for specific aspects of SDL ability, from highest to lowest, were: ability to set studying objectives (M = 3.88, SD = 0.68), ability to use learning methods (M = 3.81, SD = 0.61), ability to set study plans (M = 3.71, SD = 0.65), ability to adjust psychological state (M = 3.61, SD = 0.70), willpower in studying (M = 3.56, SD = 0.65), and ability to self-reflect (M = 3.52, SD = 0.70), as presented in Table 3 and Fig. 2.
Difference in SDL ability based on participant characteristics
Significant differences were observed in SDL ability scores depending on students' demographics, including gender, grade, major, experience of leadership, hometown, monthly living expenses, parenting style, and education level of parents.Details of the scores under different demographic characteristics and LSD test results are detailed in Table 4.The demographic breakdown of participants indicated that 74.41% were women and 25.59% were men.Regarding grades, the majority were freshmen (56.79%), followed by sophomores (18.26%).In terms of majors, 4.10% were in basic medicine, 34.26% in clinical medicine, 6.34% in stomatology, 4.73% in public health and preventive medicine, 15.55% in pharmacy, 11.73% in medical technology, 14.78% in nursing, and 8.52% in other majors.Over half of the participants (53.90%) had experience as student leaders.Regarding hometowns, 55.76% were registered as urban residents, while the rest were from rural areas.In addition, 45.63% of students reported monthly living expenses between 1000 ∼ 1500 RMB.Regarding parenting types, the majority of students (61.23%) reported experiencing permissive parenting.Finally, the education levels of the participants' parents varied as follows: primary school or below (5.67%), junior middle school (32.57%), high school
Correlations among continuous variables
Table 5 presents the correlations among SDL ability, teaching competencies, and career calling.The three variables were found to be significantly correlated with each other.The level of SDL ability was positively correlated with career calling and teaching competencies.Career calling was positively correlated with teaching competencies.Therefore, Hypothesis 1 was supported.
Career calling, Teaching competencies, and SDL ability
Following the suggestions by Aiken and West [46], the data were mean-centered (by subtracting the average value) before testing the interaction. The results indicated that teaching competencies significantly moderated the association between career calling and SDL ability, as shown in Table 6 and Fig. 3. Therefore, Hypothesis 2 was confirmed, suggesting that teaching competencies positively moderated the relationship between career calling and SDL ability among Chinese medical students.
SDL ability among Chinese medical students
This study investigated the level of SDL ability in Chinese medical students.The results derived from this study indicate that the instrument measuring SDL ability had high reliability and validity, with Cronbach's α exceeding 0.90 and the designed six-factor structure confirmed by CFA.The standardized factor loading coefficient of the items and the cumulative variance contribution rate also confirmed the reliability and validity of the instrument.In summary, the instrument appears to be appropriate for the assessment of SDL ability among Chinese medical students.
The mean score of SDL ability among the surveyed medical students was 3.68 ± 0.56 (Mean ± SD). Similar results were found by Yang et al. [47], suggesting that the SDL ability of Chinese medical students is at a moderate level. Among the demographic characteristics, gender, grade, major, experience of leadership, hometown, monthly living expenses, parenting style, and education level of parents were found to have an impact on the SDL ability of Chinese medical students. The scoring order of the SDL ability dimensions indicates that Chinese medical students can actively set studying objectives and plans, and are able to use learning methods correctly. However, in the process of SDL, the ability to adjust psychological state, willpower in studying, and ability to self-reflect were relatively low. This indicates that students' executive and reflective abilities were poor, reducing the effectiveness of SDL or even rendering their study plans formalistic. Scholars have found that students may be disturbed by minor distractions before fully engaging in the learning process, leading to potential disruptions in their ability to execute their learning goals, even when they have meticulously planned their studying routines in advance [48]. Medical students could add milestone achievement rewards to their study plans; the sense of achievement gained from completing small milestones could inspire them to move on to the next goal and thus enhance their willpower to learn. Additionally, studies have pointed out that students may struggle to describe what an experience has taught them without drawing lessons from it [49]; such reflection may be ineffective. In this context, teachers could guide medical students to delve deeply into the experiences hidden behind various events, thereby improving their reflective ability.
Career calling and its positive association with SDL ability among Chinese medical students
The findings of this study confirm that career calling can positively affect SDL ability among Chinese medical students.This result is similar to Lang's findings, which suggest that students with a strong career calling or a steadfast commitment to their professions tend to have higher levels of energy and a greater sense of control over their professional success [50].For medical students, a stronger sense of career calling is associated with a greater SDL ability.Chinese medical students' societal value is closely tied to their academic skills.Additionally, the medical industry requires its workers to cultivate the capacity for lifelong learning to keep up with the latest developments [51].In essence, Chinese medical students must continue learning on the job to maintain their societal value and status.Therefore, Chinese medical students are encouraged to develop SDL abilities during their undergraduate education to become qualified medical practitioners and smoothly transition into formal work.
It is therefore crucial for medical universities to devote sufficient effort to developing medical students' sense of career calling during higher education.Chinese medical universities could implement a series of curriculum changes focused on the missions of the medical profession, aiming to enhance students' sense of responsibility and morality, which in turn would promote selfregulation in learning.By using real-life cases to highlight the responsibilities that medical practitioners bear concerning human life, Chinese medical universities can help students cultivate a noble sense of career calling.This can motivate students to invest more energy and time in academic learning, leading to higher academic achievements through enhanced SDL abilities.
Moderating role of teaching competencies in the positive association between career calling and SDL ability among Chinese medical students
This study provides evidence that teaching competencies can play a positive moderating role between Chinese medical students' career calling and SDL ability.Strong teaching competencies can capture students' attention during lectures.For instance, medical teachers can use engaging teaching techniques to make the transfer of seemingly dull knowledge interesting and memorable.Teaching competencies, such as a passionate teaching attitude, can inspire students to unlock their learning potential [52].Accordingly, when students' learning potential is unleashed, teachers' strong teaching competencies, coupled with a broad knowledge base, can cater to students' academic curiosities.This mutual relationship can stimulate students' interest, leading them to immerse themselves in learning and create a virtuous cycle.As a result, Chinese medical students may recognize the significance of SDL and proactively improve their SDL ability.
The path to a medical professional learning career is undoubtedly challenging and lengthy, but it should not lack academic assistance and motivation.Properly combining extrinsic teaching competencies with intrinsic career calling can provide the physical and psychological energy needed for medical students to advance further.Therefore, from the perspective of medical colleges, addressing how to improve medical teachers' teaching competencies seems to be an urgent issue.Medical colleges could design questionnaires to evaluate existing teaching competencies and gather feedback from students to target improvements in teachers' teaching competencies.Moreover, colleges could invite teachers from other disciplines to deliver lectures and provide training for medical teachers, as few medical teachers have received systematic educational training and may lack knowledge in educational theory and practice [53].Enhancing teaching competencies is an ongoing endeavor, but it can greatly assist Chinese medical students in improving their SDL abilities and optimizing the quality of education.
Limitations
Although the present study reveals important findings, it has some limitations.First, the data collected are crosssectional, indicating that establishing causal relationships among the factors was not possible.Second, considering that each medical college has its own unique circumstances and our sample did not include all medical students in China, the generalizability of the results may be limited.Third, the questionnaires were collected online, which may have introduced response bias and made it challenging to control data quality.In future studies, scholars could consider researching a wider and more diverse sample through face-to-face investigations to address these limitations.
Conclusions
This study presented the development process of the six-factor, 28-item SDL Ability Scale and again verified its reliability and validity. We found that Chinese medical students' SDL ability is at a moderate level, suggesting room for improvement. We also identified eight demographic factors that influence Chinese medical students' SDL ability and explored the relationships among career calling, teaching competencies, and SDL ability. Both career calling and teaching competencies were found to be effective factors that can strengthen Chinese medical students' SDL ability.
Table 1
Rotated factor loading matrix of all items
Ability to set studying objectives: "... completing a learning objective, I will determine the next learning objective as soon as possible."
Willpower in studying: B1. "Even if the content of the final exam is a lot, I can stay up late and finish the review." (0.723); B2. "When there is a conflict between learning and entertainment, I will not affect learning due to entertainment." (0.587); B3. "In learning, I can do an important but boring thing for a long time." (0.623); B4. "No matter what difficulties I encounter, I will stick to my learning goal." (0.552); B5. "I think I have strong self-discipline in the process of learning."
Ability to set study plans: C1. "I can arrange my study time reasonably." (0.613); C2. "I can break down a rough learning goal into multi-stage learning steps." (0.604); C3. "I know exactly what I should learn every day." (0.623); C4. "I can make plans in my heart on how to complete the learning tasks of each week." (0.612); C5. "Before I start learning, I ask myself, 'What should I learn next?' and other related issues."
Ability to use learning methods: D1. "I take the initiative to learn some efficient learning methods." (0.681); D2. "I can use appropriate learning methods according to different learning contents." (0.644); D3. "I keep improving my learning methods." (0.683); D4. "I can use many ways to solve the problems encountered in learning (such as asking teachers, searching on the Internet, etc.)." (0.673); D5. "I pay great attention to observing and learning from other people's learning methods and experiences."
Ability to adjust psychological state: E1. "I am always full of energy to learn." (0.487); E2. "I have ways to prevent the bad emotions that arise during study." (0.794); E3. "I have ways to ease the bad emotions that arise during study." (0.838); E4. "When anxiety occurs in my learning process, I will think of some relaxed and happy things to get rid of anxiety."
Ability to self-reflect: F1. "I can summarize a stage of learning." (0.632); F2. "Before I go to bed, I often think about what I learned today and how well did I learn." (0.732); F3. "After each stage of study, I would think about whether I had completed the study plan." (0.683); F4. "I evaluate my learning effect regularly." (0.722); F5. "I regularly reflect on how to improve my learning." (0.671)
Table 2
Summary of fit indices
Fig. 1 Path diagram for the model with standardized parameter estimates
Table 3
Means (M) and standard deviations (SD) of SDL ability scores among Chinese medical students (n = 3,614)
Fig. 2 Radar chart of SDL ability among Chinese medical students
Table 4
Sample characteristics and one-way ANOVA analysis / independent samples t-test of SDL ability of Chinese medical students Note: * Independent Samples t-Test ** One-Way ANOVA
Table 5
Means, Standard Deviation (SD) and correlations of continuous variables (n = 3614) Note: ** P < 0.01; the Pearson Correlation is significant at the 0.01 level (two-tailed)
|
2024-04-10T06:17:45.065Z
|
2024-04-08T00:00:00.000
|
{
"year": 2024,
"sha1": "12dbf7ddbce081e12ad529fcc3e75dd82af8aec3",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "93a89af8e76337a690fc5f5b6c49a9de813a30f2",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
15511320
|
pes2o/s2orc
|
v3-fos-license
|
Evolutionary aspects of plastid proteins involved in transcription: The transcription of a tiny genome is mediated by a complicated machinery
Chloroplasts in land plants have a small genome consisting of only 100 genes encoding partial sets of proteins for photosynthesis, transcription and translation. Although it has been thought that chloroplast transcription is mediated by a basically cyanobacterium-derived system, due to the endosymbiotic origin of plastids, recent studies suggest the existence of a hybrid transcription machinery containing non-bacterial proteins that have been newly acquired during plant evolution. Here, we highlight chloroplast-specific non-bacterial transcription mechanisms by which land plant chloroplasts have gained novel functions.
Evolutionary Footprint of the Chloroplast Genome
Chloroplasts are plant organelles that are responsible for photosynthesis. It is generally accepted that chloroplasts are derived from a single endosymbiotic event in which the ancestor of the plant lineage engulfed a green photosynthetic cyanobacterium more than one billion years ago. Although green algal cells contain only chloroplasts, land plant plastids differentiate into specialized plastid types that can be distinguished by their structure, pigment composition (color) and function. Chloroplasts are found mainly in the leaves, whereas plastids convert into non-photosynthetic amyloplasts and chromoplasts that accumulate starch and a variety of secondary metabolic products in roots and fruits, respectively. All types of plastids are derived from undifferentiated proplastids in meristematic tissues, and plastid differentiation is controlled by environmental and tissue-specific cues.
Plastids have their own genome, whose size and number of genes decreased drastically during plant evolution as compared to genomes of living cyanobacteria. The unicellular cyanobacterium Synechocystis sp. PCC6803 genome is 3,573,471 bp and contains 3,317 genes, whereas the Arabidopsis thaliana plastid genome (154,478 bp) consists of only 87 proteincoding genes for photosynthesis, lipid metabolism, RNA polymerase and ribosome components, along with 37 tRNA and 4 rRNA genes. 1,2 Most cyanobacterial protein-coding genes have been lost or transferred to the host nuclear genome during plant evolution. On the other hand, chloroplast proteome analyses identified more than 3,000 proteins in chloroplasts, indicating that chloroplasts are semi-autonomous and largely dependent on nuclear-encoded proteins. 3 Chloroplast gene expression is generally mediated by prokaryotic machinery analogous to cyanobacterial RNA polymerase and ribosomes. However, no homologues of bacterial transcription factors or bacterial nucleoid proteins have been found in higher plant genomes. On the other hand, recent proteomic analysis of higher plant chloroplasts has identified a number of plant-specific non-bacterial eukaryotic type proteins that are likely involved in plastid gene expression. These eukaryotic type chloroplast proteins might be newly acquired from the host genome after the endosymbiotic event, and might be involved in the regulation of plastid differentiation and the adaptation of plastids to the land environment. This implies that growth phase. Thus, it is suggested that only the essential sigma factor σ 70 has been retained in chloroplasts, while other nonessential alternative sigma factors were lost during the early evolution of the green plant lineage. The sigma factor gene copy number in chloroplasts may have increased as a result of genomic duplication to support plastid differentiation in land plants. The σ 70 recognizes a consensus promoter sequence that is characterized by -10 and -35 core elements spaced at 17-19 nt. Such promoter sequences are commonly found upstream of chloroplast-encoded genes. Molecular characterization of Arabidopsis sigma factors AtSIG1-AtSIG6 demonstrated that they have specific and partially overlapping roles in transcription of photosynthesis and ribosomal RNA genes (For review see ref. 5).
On the other hand, NEP is a phage-type chloroplast RNA polymerase that originated by gene duplication of a mitochondrial RNA polymerase gene. Although higher plants have one or two NEPs, green algae have none. NEP is required for transcription of housekeeping genes such as genes encoding PEP core subunits, ribosomal proteins and a lipid metabolism protein (accD). It is well known that PEP and NEP activities are inversely regulated during chloroplast development from seeds (white) to leaves (green). 6,7 Briefly, NEP becomes active to produce the PEP transcription system and chloroplast translation machinery at an early stage of seed germination. Subsequently, PEP actively transcribes photosynthesis genes to construct and maintain the photosynthesis system, while NEP activity gradually declines during the greening process. High NEP and low PEP activities are characteristic of non-green plastids, such as amyloplasts and chromoplasts in roots and colored fruits, respectively. During tomato fruit ripening, a subset of NEPdependent genes including accD, trnA and rpoC2 are specifically up-regulated. 8 This coordinated functioning of the two RNA polymerases is essential for plastid differentiation. Thus, chloroplasts inherited the bacterial-type transcription system from their cyanobacterial ancestor, but likely have evolved a complex transcription network with multiple RNA polymerases to control plastid differentiation (Fig. 1D).
the chloroplast genome as rpoA, rpoB, rpoC1 and rpoC2 (except in the moss Physcomitrella patens in which the rpoA gene has been transferred to the nuclear genome), whereas all genes for plastid sigma factors have been transferred to the nuclear genome. Interestingly, green algae including Chlamydomonas reinhardtii, Chlorella and the most primitive, Ostreococcus tauri, have only one plastid sigma factor, while land plants such as moss (five genes in Physcomitrella), ferns (three genes in Selaginella), dicots (six genes in Arabidopsis), and monocots (seven genes in rice) possess multiple sigma factors. Phylogenic analysis revealed that all chloroplast sigma factors fall into the σ 70 family, members of which are responsible for transcription of housekeeping genes in bacteria during the exponential higher plant chloroplasts have developed complicated gene expression systems supported by a number of eukaryotic proteins, despite their dramatically reduced genome information.
Basic Transcription Machinery for the Chloroplast Genome
In angyosperms, chloroplast transcription is mediated by two distinct types of RNA polymerases, plastid-encoded plastid RNA polymerase (PEP) and nuclearencoded plastid RNA polymerase (NEP) (for review see ref. 4). PEP is a bacterial type enzyme that is composed of four core subunits (α, β, β′, β″) and a sigma factor allowing recognition of promoter sequences ( Fig. 1A and D). Genes encoding PEP core subunits are retained in RNA polymerase subunits but also pTAC proteins. However, their primary functions still remain unclear. Several pTAC proteins have also been identified as components of PEP-A or co-immunoprecipitated with PEP subunits, suggesting that some pTAC proteins interact directly with PEP (for review see ref. 18). For example, pTAC3 binds directly to the PEP complex and its defect results in decreased transcriptional activity of PEP. 9 Furthermore, chloroplast ChIP analysis demonstrated that the pTAC3 association pattern along the PEP-transcribed region is the same as that of the PEP core alpha subunit. These data suggest that pTAC3 associates with the PEP complex during transcription and is essential for PEP activity.
On the other hand, the maize whirly 1 (WHY1/pTAC1) protein has been identified in the Chloroplast RNA Splicing 1 (CRS1) protein complex, which promotes the splicing of the chloroplast atpF group II intron. 19 ZmWHY1 is part of the plant-specific non-bacterial 'Whirly' protein family, members of which have been described as DNA-binding proteins and are localized at nucleoids in chloroplasts and mitochondria. Genome-wide DNA or RNA immunoprecipitation assays showed that ZmWHY1 is associated with maize chloroplast DNA and with a subset of plastid RNAs including atpF transcripts. Moreover, ZmWHY1 is required for PEPdependent transcription, but not directly involved in either DNA replication or global plastid transcription. Although it is not clear whether the WHY1 protein is associated with PEP, its DNA binding pattern and association with the chloroplast RNA splicing complex suggest that WHY1 is involved in post-transcriptional regulation at chloroplast nucleoids and the coupling of transcription and splicing. In addition to the role of WHY1 in chloroplasts, it has been reported that AtWHY1 also acts as a nuclear transcription factor regulating the salicylic-acid dependent defense system. 20 Likewise, pTAC12/HEMERA protein is also localized to both nuclei and chloroplasts, and is involved in phytochrome light signaling. 21 These dual-localized pTAC proteins might be involved in the crosstalk between chloroplast and nuclear gene expression. genome uncoupled 1 (GUN1) has been plastid elongation regulators, since they possess similarity to the NusG domain and the nuclear transcription elongation factor TFIIS, respectively. 10,11 Molecular characterization of these factors in PEPdependent transcription will provide insight into the PEP elongation mechanisms. Termination of chloroplast RNA polymerase activity was found to occur at intrinsic bacterial-like terminators in vitro. 12 However, most chloroplast 3' termini are generated by RNA processing rather than by termination at accurate positions in vivo. 13
Novel Plant-Specific Transcriptional and Post-Transcriptional Regulators in Plastids
It is known that the molecular size of the PEP complex changes during leaf development. 14 In mustard, two distinct PEP complexes have been identified in chloroplasts: PEP-A and PEP-B, which differ in terms of subunit composition, functional properties and abundance during etioplast-to-chloroplast conversions. PEP-B consists of four proteins corresponding to the predicted sizes of PEP core subunits, whereas PEP-A is larger than PEP-B and contains at least 13 additional polypeptides. Etioplasts in dark-grown seedlings contain PEP-B, whereas PEP-A is predominant in chloroplasts. The PEP complex likely alters in size and activity through PEP-A-associated proteins during chloroplast development. Plastid transcriptionally active chromosomes (pTACs) have been isolated from the chloroplast membrane by treatment with Triton X-100 followed by gel filtration. 15 Electron microscopic observation revealed huge protein-DNA complexes with a mesh-like structure. 16,17 Thus, pTACs are assumed to be a part of plastid nucleoids. Proteomic analysis of pTAC proteins has identified 18 novel non-bacterial proteins that are named pTAC1-pTAC18, together with PEP core subunits, DNA gyrase, DNA polymerase and some ribosomal proteins. 10 Interestingly, most pTAC gene-inactivated mutants display a chlorophyll-deficient phenotype and reduced PEP transcription activity, suggesting that PEP transcription requires not only core
Missing Parts for Chloroplast Transcription
The molecular understanding of transcriptional regulation in bacteria is well advanced. The transcription cycle comprises multiple steps including initiation, elongation and termination (Fig. 1A). RNA polymerase complexes with different types of σ factors recognize different promoter sequences and initiate transcription of specific sets of genes. Gene-specific transcriptional activation and repression are also regulated by DNA-binding transcription factors, although they are not part of the holoenzyme (RNAP core-σ complex). The sigma factor is released from the RNA polymerase holoenzyme during the transition from initiation to elongation, and the RNA polymerase complex is converted into an elongation complex (EC). The EC slides along DNA with the assistance of elongation regulators such as NusA, NusG and GreA. Bacterial transcription is terminated by two mechanisms: Rho protein-dependent termination and intrinsic terminator sequence-dependent (Rho-independent) termination.
In chloroplasts, it has been clearly shown that promoter recognition by PEP is also conferred by sigma factors in a similar manner to that in bacteria. Several promoters in higher plant chloroplasts have unique cis elements. 4 However, no transcription factors or DNA-binding proteins that are conserved between bacteria and higher plants have been identified. In contrast to the initiation steps, much less is known about the post-initiation steps. Recently, we have developed a chloroplast ChIP assay, and analyzed the binding of PEP core subunit, α, along chloroplast DNA. 9 The association of PEP is enriched at promoter-proximal regions, and its signal attenuated toward termination regions, similar to the distribution patterns of E. coli RNAP, suggesting that PEP-dependent transcription initiation, elongation and termination steps are regulated by mechanisms similar to those of bacteria. However, for PEP and NEP elongation and termination factors remain unidentified. Plastid transcriptionally active chromosome 13 (pTAC13) and Etched 1 (ET1) are candidates to be development. Furthermore, recent proteomic analysis of maize chloroplast nucleoids during chloroplast development identified not only basic proteins involved in DNA replication, repair and transcription, but also a number of proteins involved in post-transcriptional events such as RNA processing and editing. 30 It is suggested that the plastid transcription system is spatially and functionally coupled to post-transcriptional events at plastid nucleoids. Interestingly, most of the proteins involved in post-transcriptional processes in plastid nucleoids are not related to bacterial proteins. Further characterization of plastid nucleoid proteins should provide insights into the coevolution of non-bacterial proteins with the basic plastid transcription system.
Summary
Proteomic analysis of plastid transcription machinery such as pTACs and plastid nucleoids has raised the question of why plastid transcription requires many additional factors besides the basic transcription machinery to transcribe a small genome encoding only a hundred genes. It is likely that almost all pTAC and plastid nucleoid proteins were acquired when plants became adapted for life on land, as evidenced by the absence of homologous proteins in green algae such as Chlamydomonas reinhardtii. It is reasonable that green algae have simplified their transcription regulatory system in chloroplasts by reducing the number of cyanobacterium-derived regulatory factors, since green alga chloroplasts exist in a stable environment, the cytoplasm of the host cells. Furthermore, chloroplasts of green algae stay green even in the dark, suggesting a lack of light-dependent regulation of chloroplast development. Indeed, chloroplasts of green algae such as Chlamydomonas retain only one sigma factor and have acquired no NEP. By contrast, chloroplasts in land plants differentiate into several different types of plastids in response to cell differentiation. To gain a plastid differentiation system, chloroplasts were forced to develop both repression (green to white) and activation (white to green) systems for photosynthetic machinery. The acquisition of the One of the major bacterial nucleoid proteins, HU, has been identified in the red alga Cyanidioschyzon merolae and in apicomplexa. However, mosses and higher plants have lost all bacterial-type nucleoid proteins, including HU, 25 suggesting that higher plants have adopted novel eukaryotic-type proteins as nucleoid proteins to compact plastid DNA and regulate nucleoid function. It should be noted that chloroplasts in land plants contain several unique proteins as plastid nucleoid proteins (Fig. 1F). Sulfite reductase (SiR) is a 70 kDa soluble protein and one of the most abundant proteins in the plastid nucleoid. SiR induces the compaction of plastid DNA and effectively represses chloroplast transcription activity in vitro. These results suggest that SiR may regulate the global transcriptional activity of chloroplast nucleoids through changes in DNA compaction. 26 Moreover, small molecular proteins containing a eukaryotic SWIB (SWI/SNF complex B) domain have been recently identified in TAC fractions. 27 Among them, SWIB4, which is localized in both plastid nucleoids and cellular nuclei, has a histone H1 motif and could functionally complement an E. coli mutant lacking the histone-like nucleoid structuring protein H-NS, indicating that SWIB4 might be a counterpart of the bacterial nucleoid proteins that is involved in the maintenance of nucleoid structure.
It has been proposed that plastid envelope DNA-binding protein (PEND) and MAR-binding filament-like protein (MFP1) are also unique nucleoid anchor proteins that bind both DNA and chloroplast membranes. PEND is composed of a cbZIP domain and a C-terminal hydrophobic domain. The cbZIP domain is involved in dimerization of PEND and sequence-specific DNA binding, whereas the C-terminal hydrophobic domain is required for targeting the PEND protein to the chloroplast envelope membrane. 28 In contrast to PEND, MFP1 has been shown to be localized to the thylakoid membranes and its C-terminal domain has DNA binding activity. 29 Thus, it is assumed that two anchor proteins are likely involved in the relocation of plastid nucleoids from the envelope to thylakoid membranes during chloroplast identified as a key player involved in the plastid-to-nucleus signaling that coordinates nuclear gene expression with the chloroplast status via signaling molecules such as chloroplast-generated ROS and Mg-ProtoIX. 22 Interestingly, GUN1 is colocalized with pTAC2 in chloroplasts and has both DNA-and RNA-binding activities in vitro. Genetic analysis has implied that GUN1 integrates multiple signaling pathways responsible for recognition of aberrant chloroplasts, which leads to ABI4-mediated repression of nuclearencoded genes, but the transcriptional roles of GUN1 and pTAC2 in chloroplast gene expression are still unknown.
Taken together, transcription by the bacterial type chloroplast RNA polymerase PEP is mediated by a number of pTAC proteins, which are not conserved in bacteria and have been likely acquired during higher plant evolution. pTAC proteins might be involved in plastid maintenance processes such as plastid differentiation and the recognition of aberrant chloroplasts.
Plastid Nucleoid Proteins
In addition to gene-specific transcriptional regulatory mechanisms, it has also been shown that global plastid transcription activity is under the control of the spatial architecture of the genome. Bacterial DNA is packed into a protein-DNA complex, a bacterial chromosome termed a nucleoid. DNA binding proteins such as HU, Fis, and H-NS induce compaction and supercoiling of DNA through their DNA binding activity (Fig. 1C). 23 Chloroplast transcription also occurs at nucleoids. In bacteria, nucleoid compaction and DNA supercoiling are differentially regulated depending on the growth phase and transcription status (Fig. 1B). Similarly, plastid nucleoids drastically change in size, morphology and localization during chloroplast development. 24 The plastid nucleoid is located in the envelope membrane of immature proplastids, whereas it relocates to the thylakoids in mature chloroplasts (Fig. 1E). Relocation of plastid nucleoids might be involved in transcriptional regulation of plastid-encoded genes during chloroplast development.
NEP type novel plastid RNA polymerase likely enabled a transcriptional switching system in chloroplasts, which allows selective transcription of a set of housekeeping genes in non-photosynthetic tissues, such as roots and flowers. In addition, PEP accessory proteins, including a number of pTAC proteins, may be required for the establishment and maintenance of active transcription of photosynthesis genes through PEP. These non-bacterial factors have been probably acquired from host cells, since plant cells lost most of the cyanobacterium-derived regulatory proteins early during evolution of green algae. Finally, nucleoid architecture and/ or intra-chloroplast localization may be involved in the regulation of plastid transcription activity in response to plastid differentiation, and the regulatory proteins likely have been developed during plant evolution.
Taken together, land plants have acquired several novel non-bacterial proteins that are involved in transcriptional and post-transcriptional regulation of plastid gene expression during chloroplast differentiation and adaptation to the environment. Further molecular characterization of pTACs and plastid nucleoid proteins will provide insights into the complex regulatory mechanisms of plastid gene expression in response to plastid differentiation and environmental cues.
Current-Voltage Characteristics of the Composites Based on Epoxy Resin and Carbon Nanotubes
Polymer composites based on epoxy resin were prepared. Multiwalled carbon nanotubes synthesized on iron-cobalt catalyst were applied as a filler in a polymer matrix. Chlorine or hydroxyl groups were incorporated on the carbon nanotube surface via chlorination or chlorination followed by hydroxylation. The effect of functionalized carbon nanotubes on the epoxy resin matrix is discussed in terms of the state of CNT dispersion in composites as well as electrical properties. For the obtained materials current-voltage characteristics were determined. They had a nonlinear character and were well described by an exponential-type equation. For all the obtained materials the percolation threshold occurred at a concentration of about 1 wt%. At a higher filler concentration (>2 wt%), better conductivity was demonstrated by polymer composites with raw carbon nanotubes. At a lower filler concentration (<2 wt%), higher values of electrical conductivity were obtained for polymer composites with modified carbon nanotubes.
Introduction
Epoxy resins are used as high-performance polymers because of their excellent mechanical properties, chemical resistance, thermal stability, and low production costs. They are commonly used as surface coatings, electrical insulation materials, adhesives, and glues. However, there are many cases where enhanced electrical conductivity of polymeric materials is needed, for example in electromagnetic shields or to prevent electrostatic charging of electronic devices. The unique electrical and structural properties of carbon nanotubes (CNT) and their high aspect ratio enable the formation of electrically conducting paths in the polymer matrix at a very small percolation concentration [1,2]. In the case of CNT/polymer composites the main problem is to create a good homogeneous dispersion of nanotubes in the polymer matrix in spite of van der Waals interactions between nanotubes [3]. To improve interfacial bonding and the dispersion of nanotubes in the polymer matrix, chemical functionalization of CNT is commonly used [4,5]. It was reported that epoxy composites filled with well dispersed CNT provided much higher values of electrical and thermal conductivity than samples with poor dispersion [6][7][8]. In the literature there can be found reports on the reduction of the percolation threshold for composites based on functionalized multiwalled carbon nanotubes (MWCNTs) and epoxy resin [9]. On the other hand, although covalent functionalization could be detrimental to the electrical conductivity of the nanotubes [10][11][12], it does not always translate into a deterioration of conductivity of composites filled with functionalized CNT.
In this study we present a preparation method of polymer composites based on epoxy resin and multiwalled carbon nanotubes. In the first stage carbon nanotubes were subjected to chlorination in the gas phase followed by treatment with sodium hydroxide to introduce chlorine and/or hydroxyl groups on the CNT surface. The effect of carbon nanotube functionalization on the epoxy resin matrix is discussed in terms of electrical properties as well as the state of CNT dispersion in composites.
Experimental
Multiwalled carbon nanotubes were prepared on an iron-cobalt catalyst (which will be referred to as CNT). Details of catalyst preparation were reported elsewhere [13]. The synthesis of CNT was carried out in a high-temperature furnace (HST 12/400 Carbolite). In the first stage the reduction of catalyst occurred at 600 °C, in the second the mixture of ethylene with argon was introduced into the chamber and the process proceeded at 700 °C under atmospheric pressure. Detailed characteristics of the carbon material were given elsewhere [13].
The functionalization of CNT was performed in the gas phase using chlorine gas at 400 °C. The chlorination temperature was selected based on previous studies [14]. After 3 h the chlorine was cut off and the carbon nanotubes were cooled to room temperature under vacuum. In the last step the CNT were washed with acetone for 12 h and dried at 100 °C for 12 h. Finally the carbon material was treated with 5 M NaOH for 3 h to substitute chlorine with hydroxyl groups. The obtained material was then boiled in distilled water and dried at 100 °C for 12 h. Carbon nanotubes after chlorination and hydroxylation are denoted as CNT/Cl and CNT/OH, respectively.
CNT/epoxy resin composites (PCs) were prepared by mixing the epoxy resin Epidian 5 (obtained from bisphenol A and epichlorohydrin, number average molecular weight ≤700, ORGANIKA, SARZYNA SA, Poland) with a commercially available hardener Z-1 (triethylenetetramine, ORGANIKA, SARZYNA SA, Poland) and raw or modified multiwalled carbon nanotubes. The content of MWCNTs in the epoxy matrix was in the range from 0.25 wt% to 2.5 wt%. In the second step the mixture was blended using a mechanical stirrer at a rotation rate of 1900 rpm. Liquid resin with dispersed MWCNTs was poured into specially prepared aluminum plaques which facilitated the conductivity examination of the composites. Polymer composites containing raw carbon nanotubes, carbon nanotubes after chlorination, or carbon nanotubes after hydroxylation are denoted as PC/CNT, PC/CNT/Cl, and PC/CNT/OH, respectively.
The morphology of the raw carbon nanotubes was studied using transmission electron microscopy (Jeol JEM 3010) and the amount of metal particles embedded in the carbon material using thermogravimetry (DTA-Q600 SDT TA Instruments). The Mohr titration method was used to quantify the chlorine introduced on the carbon nanotube surface by determining chloride in the filtrate after the dechlorination reaction. Dispersion of CNT in the epoxy resin matrix was investigated using an optical microscope (Delta Optics). The current-voltage characteristics of the polymer composites were determined at 298 K for cross-conductivity. For this purpose a polymer sample (with a defined shape) was placed between metal electrodes and the current flow at a given voltage was measured. The voltage waveform was generated using the Owon AG1022F function (waveform) generator. Measurements were taken at a voltage ramp rate of 400 V/s (10 Hz) and an amplitude of 20 V (±10 V). Current measurement was performed using a UNI-T UTD2102CEX digital storage oscilloscope. For concentrations <1.25 wt%, conductance was measured with the static method using a Sefelec M1501M teraohmmeter.
Results and Discussion
The morphology of the carbon material obtained during ethylene decomposition on the iron-cobalt catalyst was studied previously and the results are presented elsewhere [13]. TEM images of the same carbon material subjected to the chlorination or chlorination/hydroxylation process are presented in Figures 1(a) and 1(b), respectively. It is clearly visible that in both cases carbon existed in the form of multiwalled carbon nanotubes and the diameters ranged from 20 to 30 nm. We did not observe any obvious differences between the samples after modification regardless of the kind of treatment. It is worth noting that the methods used for carbon nanotube modification did not destroy the carbon nanotube structure, in contrast to commonly used methods using acids. In the TEM images catalyst particles were not visible, which indicates that the catalyst particles had been removed. These observations are consistent with the calculated data based on the thermogravimetric curves (Figure 2). The initial carbon material contained about 9.4% of residues. After the chlorination and chlorination/hydroxylation process about 4.6% and 3.0% of ash, respectively, remained in the samples. It means that the content of the catalyst in the carbon material decreased to 30%. Despite the high degree of catalyst particle removal, the structure of the carbon nanotubes was also preserved.
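As a quick check of the numbers above, the relative amount of catalyst left after each treatment can be estimated from the ash residues, assuming the ash consists essentially of the catalyst metal. The short sketch below is only illustrative arithmetic based on the 9.4, 4.6 and 3.0 wt% values quoted in the text:

```python
# Residual ash (wt%) from the thermogravimetric curves quoted above.
ash_raw = 9.4   # raw CNT
ash_cl = 4.6    # CNT after chlorination
ash_oh = 3.0    # CNT after chlorination/hydroxylation

# Fraction of the original catalyst remaining, assuming ash ~ catalyst content.
for label, ash in [("CNT/Cl", ash_cl), ("CNT/OH", ash_oh)]:
    remaining = ash / ash_raw * 100.0
    print(f"{label}: ~{remaining:.0f}% of the original catalyst remains "
          f"(~{100.0 - remaining:.0f}% removed)")
```

Under this assumption roughly half of the catalyst is removed by chlorination and about two-thirds by chlorination followed by hydroxylation.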
The functionalization degree of the samples after the chlorination process was described in detail elsewhere [14]. The presence of chlorine was confirmed using the Mohr titration method and the results are presented in Table 1. Comparing the sample before and after chlorination it was found that the chlorine content significantly increased after the modification and equaled about 1.86 mmol/g. For the material after sodium hydroxide treatment the chlorine content was very low and equaled about 0.33 mmol/g. It indicates that during the hydroxylation process chlorine atoms were substituted by hydroxyl groups.
Optical microscopy enables fast observation of the overall state of nanotube dispersion in the epoxy matrix. Figures 3(a)-3(c) show optical micrographs of the obtained composites containing 1.25 wt% of MWCNTs. Single and small carbon nanotube clusters with sizes less than about 10 μm can be readily seen in the sample based on the raw carbon material (Figure 3(a)). Much better dispersion was obtained when multiwalled carbon nanotubes after chlorination (Figure 3(b)) or chlorination followed by a reaction with sodium hydroxide (Figure 3(c)) were applied as a filler in the polymer composite. Although a few MWCNT aggregates can still be observed, their sizes are rather small and they are homogeneously distributed in the polymer matrix. It indicates that the functional groups present on the carbon nanotube surface allow a homogeneous dispersion to be obtained, which is consistent with findings reported by Yu et al. [15]. Functionalization of carbon nanotubes improved the compatibility of the filler with the polymer, which ensured better wetting and adhesion between both phases as well as a satisfactory distribution of the nanofiller. Surface functionalization of CNT with reactive groups may help in achieving a better dispersion via the improvement in polymer-CNT physical interactions or even chemical bonds between functional groups and polymer chains.
For the obtained composites the current-voltage characteristics were determined and presented in Figure 4. It was found that the positive and negative branches are symmetric and exhibit a nonlinear current-voltage relationship. Together with the voltage increase, the conductivity increases exponentially. It can be the result of the positive conductivity temperature coefficient of carbon nanotubes. The increase of the power emitted on the percolation paths causes their heating and leads to the increase of their conductivity. To describe the current-voltage characteristics the following equations were applied:

$$I = G_0 U \exp(A U) \quad (1)$$

where $I$ is the current (mA), $U$ is the voltage (V), $G_0$ is the electrical conductance for $U = 0$ V (S), and $A$ is the constant, and

$$j = \sigma_0 E \exp(B E) \quad (2)$$

where $j$ is the current density (A/m$^2$), $E$ is the electric field strength (V/m), $\sigma_0$ is the conductivity for $E = 0$ (S/m), and $B$ is the constant. In Table 2 the parameters of (1) and in Table 3 the electrical conductivity for $U = 0$ were presented. The $\sigma_0$ values were determined from (2), while for the composites containing 1.25 wt% of CNT from (3), which is presented below:

$$\sigma_0 = \frac{G h}{S} \quad (3)$$

where $G$ is the sample conductance determined by the static method (S), $S$ is the sample cross-sectional area (m$^2$), and $h$ is the sample thickness (m).
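For illustration, the sketch below shows how such an exponential current-voltage model and the static-method conductivity could be evaluated numerically. It assumes the functional forms reconstructed in (1) and (3) above; the data points, starting guesses and sample dimensions are made-up placeholders, not values from this work:

```python
import numpy as np
from scipy.optimize import curve_fit

# Exponential-type current-voltage model, I = G0 * U * exp(A * U), cf. Eq. (1).
def iv_model(U, G0, A):
    return G0 * U * np.exp(A * U)

# Illustrative (made-up) data: voltage in V, current in A.
U = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
I = np.array([1.1e-4, 2.4e-4, 5.5e-4, 9.5e-4, 1.5e-3, 2.3e-3])

(G0, A), _ = curve_fit(iv_model, U, I, p0=(1e-4, 0.05))
print(f"G0 = {G0:.3e} S (conductance at U = 0), A = {A:.3e} 1/V")

# Static method for the lowest filler contents, cf. Eq. (3): sigma0 = G * h / S.
G = 1.0e-9   # measured conductance, S (illustrative)
S = 1.0e-4   # sample cross-sectional area, m^2 (illustrative)
h = 2.0e-3   # sample thickness, m (illustrative)
sigma0 = G * h / S
print(f"sigma0 = {sigma0:.3e} S/m")
```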
In Figure 5 the dependence of conductivity on the nanomaterial concentration is presented. It was found that when a higher concentration of raw carbon nanotubes was added to the polymer matrix, the obtained composite material demonstrated the highest electrical conductivity. In the case of composites filled with modified carbon nanotubes, the higher conductivity was displayed by the material filled with CNTs with hydroxyl groups on the surface. Probably the nature of the surface hydroxyl groups improved the dispersion of the nanomaterials, which created more percolation paths. At a lower concentration of the filler, the highest values of electrical conductivity were exhibited by polymer composites filled with modified carbon nanotubes, especially those with hydroxyl groups, in contrast to the composites filled with the raw material.
According to the thermogravimetric studies it is known that raw carbon nanotubes possess significantly more catalyst particles than the same material after modification, because the functionalization process enables not only changes of the CNT surface but also metal removal. In a two-component system (raw carbon nanotubes containing metal particles) at a certain concentration (about 1.6% by weight), percolating pathways involving metal particles are formed. In that case the probability of percolation pathway creation is higher because they can be formed not only from carbon nanotubes but also from metal particles. It is obvious that metal has a much higher conductivity than carbon, resulting in a more rapid increase in the conductivity of the composite. This is observed as higher values of electrical conductivity for the composites filled with 2.0 wt% or 2.5 wt% of raw carbon nanotubes.
In the case of samples after modification, with a lower content of the more conductive catalyst particles, the probability of percolation pathway formation from metal is lower, resulting in a lower conductivity of the polymer composite.
A different situation was noticed for polymer composites obtained with a lower filler concentration. Higher values of electrical conductivity were observed for the material with an addition of modified carbon nanotubes. Samples after the chlorination or hydroxylation processes (with partial removal of catalyst) contained a larger volume fraction of the carbonaceous material. As a result, more percolation paths were created for modified than for raw carbon nanotubes. Finally, better conductivity was achieved for PC/CNT/OH and PC/CNT/Cl compared to PC/CNT composites.
The influence of the treatment on the percolation threshold was not noticed, which is presented in Figure 5. According to the percolation theory, the static conductivity of these composites is given by [16][17][18]

$$\sigma = a (V - V_c)^t \quad (4)$$

where $V_c$ is the percolation threshold, $V$ the filler content, and $t$ the critical exponent. The critical exponent was determined by linearization of dependence (4) to the logarithmic form $\ln \sigma = \ln a + t \ln(V - V_c)$, where $t$ is the slope. The percolation threshold occurred for all the samples in the concentration range from 0.95 to 1.2 wt%, similarly to findings reported by Allaoui et al. [19]. The critical exponent equals 4.9, 3.8, and 3.9 for polymer composites filled with CNT, CNT/Cl, and CNT/OH, respectively.
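A minimal sketch of the percolation fit described above is given below, assuming the power law (4) and its linearised form; the conductivity values and the assumed threshold are illustrative placeholders, not the measured data of this work:

```python
import numpy as np

# Filler content V (wt%) above the threshold and conductivity sigma (S/m);
# the numbers are illustrative, not the measured values from this work.
V = np.array([1.25, 1.5, 2.0, 2.5])
sigma = np.array([1.0e-7, 2.0e-6, 8.0e-5, 6.0e-4])

Vc = 1.0  # assumed percolation threshold (wt%), consistent with ~1 wt% reported above

# Linearised power law: ln(sigma) = ln(a) + t * ln(V - Vc); the slope is t.
x = np.log(V - Vc)
y = np.log(sigma)
t, ln_a = np.polyfit(x, y, 1)
print(f"critical exponent t = {t:.2f}, prefactor a = {np.exp(ln_a):.3e} S/m")
```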
Conclusions
The effect of carbon nanotube functionalization on the epoxy resin matrix was discussed and the incorporation of chlorine and hydroxyl groups on the carbon nanotube surface was found to improve the dispersion of a filler in the polymer matrix.
For the obtained materials current-voltage characteristics were determined. They had a nonlinear character and were well described by an exponential-type equation. At a higher filler concentration, better conductivity was demonstrated by polymer composites with raw carbon nanotubes. This behavior is probably due to the coexistence of metal particles together with the carbon material and their ability to create more percolation pathways at a higher filler concentration. At a lower filler concentration, higher values of electrical conductivity were obtained for polymer composites with modified carbon nanotubes. Modified fillers containing a larger volume fraction of the carbonaceous material create more percolation paths at a lower concentration in the polymer matrix, which favors a better dispersion of carbon nanotubes. For all the obtained materials the percolation threshold occurred at a concentration of about 1 wt%.
Figure 1: TEM images of the carbon material after chlorination (a) and after reaction with sodium hydroxide (b).
Figure 2: Thermogravimetric curves of the raw and modified carbon materials.
Table 1: Chlorine content in the samples analyzed using the Mohr titration method.
* one point measured.
International Criminal Law vs State Sovereignty: Another Round?
This is a review of five recent works which deal with international criminal law. By an analysis of those works, the essay queries whether the relationship between international criminal law and state sovereignty is always accurately conceptualized. International criminal lawyers often see sovereignty as the enemy of international criminal law, though frequently failing to discuss in any depth the nature and malleability of sovereignty. Although international criminal law does involve some challenges to sovereignty, it also needs, and in some ways empowers, that sovereignty too. The works under review tend to pay less attention to the substantive aspects of international criminal law than its institutional part. This is unfortunate, as some of the most interesting interactions between international criminal law and sovereignty occur at this level. The essay finishes with some broader reflections on how the works under review conceptualize the international legal order, regrets the absence at times of engagement with relevant constructivist scholarship but notes that the answer to the question of the precise relationship between international criminal law and sovereignty is unlikely to be agreed upon soon.
Introduction
When sovereignty appears in international criminal law scholarship, it commonly comes clothed in hat and cape. A whiff of sulphur permeates the air. Generally, international criminal law scholars see sovereignty as the enemy. It is seen as the sibling of realpolitik, thwarting international criminal justice at every turn.
Although this may sometimes be an adequate description of reality, the relationship between sovereignty and international criminal law is more complex, and we are beginning to see this coming through in more sophisticated international criminal law scholarship. Indeed the books reviewed here can be seen as belonging to the second wave of post-Cold War international criminal law scholarship. 1 They also represent a more highly developed, worldly-wise approach to international criminal law than some of the earlier literature in the field. 2 Four of the five works under consideration have international criminal law as their primary focus. Two of them are monographs concentrating on the International Criminal Court and its relationship to international law more generally. The first of these, International Justice and the International Criminal Court: Between State Sovereignty and the Rule of Law is by Bruce Broomhall, who is now a Professor at the University of Québec in Montréal. The second, longer, and more optimistic in outlook is The International Criminal Court and the Transformation of International Law by Leila Sadat, a Professor at Washington University. Two of the books are collections of essays edited by Philippe Sands, Professor at University College London. From Nuremberg to the Hague is a short work, consisting of five essays that derive from public lectures arranged by Matrix Chambers, and given at the Wiener Library in London. The larger collection, Justice for Crimes against Humanity, is edited with the Executive Director of Minority Rights International, Mark Lattimer, and covers both legal and personal views on international criminal law from a plethora of scholars and practitioners.
The final work, Justice, Humanity and the New World Order, by Ian Ward, Professor of Law at the University of Newcastle, is a disquisition on the role of sensibility in jurisprudence. It contains a section on international criminal law, which will be the focus of comment on that book here. Of the five, only Ward's is generally critical of international criminal law, but the fact that this issue is of interest at all in a more general theoretical work, alongside the fact that these books represent only part of the ever-increasing literature on international criminal law, shows that the topic is no longer the preserve of a small number of scholars publishing for a small audience.

1 For earlier efforts see, e.g., B. Ferencz, The International Criminal Court, A Step Towards World Peace (1980). 2 As Bruce Broomhall says in International Justice and the International Criminal Court [hereinafter International Justice], at vii, 2, 'International justice can work; but to work in a legitimate and a politically, legally and financially viable way requires that problems be honestly addressed and the first steps taken towards defining solutions. . . Oversimplifications will not achieve these aims.'
Despite all their merits, however, the volumes assessed here also show that international criminal law scholarship has not yet fully come to grips with the interrelationship of international criminal law and sovereignty.
Sovereignty vs International Criminal Law: Are We Sure?
Let us turn, therefore, to this bête-noire of the international criminal lawyer. State sovereignty is often placed in the dock by such scholars, 3 the attitude of whom is accurately summarized by Ian Ward, 4 '[t]he overweening nation-state all too readily begat the horrors of nationalism. The jurisprudence of sovereignty, in turn, all too easily lent a spurious legitimacy to these horrors.' Often, those espousing such an opinion have a point (although nationalism is by no means the only guilty ideology). Sovereignty can also be used to pre-empt fuller debate on the advisability of developing the law. At Rome, for example, 'this would intrude on our sovereignty' was often used as a euphemism for 'we don't like this' per se. As we will see, though, there is more to the relationship between sovereignty and international criminal law than meets the eye. Before moving on to this, however, it is interesting to note the similarities and differences in the approach to sovereignty taken in the more traditionally doctrinal/legal works under review here.
A Sovereignty and Malleability
Antonio Cassese, as noted by Bruce Broomhall in his extremely useful, if rather short, book, has made it clear that in his view 'either one supports the rule of law, or one supports state sovereignty. The two are not . . . compatible'. 5 Although Cassese has both the understanding of legal theory and the practical experience that makes such a view carry considerable weight, it is worth investigating the matter a little further.
To begin with issues of theory, as a number of the works here accept, there are two views of sovereignty. The first of these views is that of sovereignty as pre-legal, in which sovereignty represents a monolithic entity that is of clearly determinate content. This approach to sovereignty, although not absent in some of the debates in Rome, for example on the definitions of crimes, does not reflect how most states and scholars see sovereignty. Bodin's original, fairly absolutist, concept of sovereignty was empirically defended, so to raise such a concept of sovereignty to the normative level would be to derive an 'ought' from an 'is', or perhaps more accurately, a 'was'.

3 See, e.g., King International Justice, at 56. It must also be noted, however, that Cassese's approach to sovereignty is by no means simplistic or Manichean. Indeed, those international lawyers accused of adopting an absolute concept of sovereignty rarely did any such thing. 6

The other idea is that sovereignty is a more flexible concept, with sovereignty being constituted by the international legal order, which defines the basic rights and duties of states, a view typically associated with Hans Kelsen 7 and apparent in such cases as the Wimbledon case in the Permanent Court of International Justice (PCIJ). 8 The works considered here, understandably, tend to take the latter view of sovereignty and the international legal order. To take the view that sovereignty is pretty much absolute and unchangeable tends to lead to a dim view of the prospects of international criminal law. 9 Thus Andrew Clapham, in an excellent chapter in Justice for Crimes against Humanity 10 tells us 'Sovereignty as such is a changing notion which adjusts to the developing nature of international law . . . in the end the debate turns on what one chooses to understand by the term sovereignty and who should be protected . . . the rule that there should be no interference in state sovereignty simply begs the question: what are the rights and duties associated with sovereignty?' (at 305, 312, 313).
Similarly Bruce Broomhall accepts, at one point (at 59), that 'the 'terms and conditions' imposed by the international community on those recognized as participants are variable over time. Qualities that are constitutive of sovereignty, and functional limits to which the exercise of sovereignty is subject, may occasionally appear or disappear, and certainly change their emphasis.' 11 However, he is by no means as certain as Clapham that change has occurred, asserting elsewhere, 'the institution of sovereignty, at least in areas relevant to international criminal law, is in no danger of being replaced or of its importance being radically diminished in the foreseeable future' (at 5). 12 It would thus appear that Broomhall is somewhat sceptical about the transformative nature of international criminal law in relation to notions of sovereignty (e.g. p. 2). We will return to this in a moment. What his comments do, however, is give the impression that Broomhall's vision of sovereignty is more static than that of some of the other books.
It is certainly less dynamic than that of Leila Sadat, who takes the view in her The International Criminal Court and the Transformation of International Law 13 that '[t]he negotiation of the Rome Treaty has worked a quiet, albeit uneasy, revolution that has the potential to profoundly transform the landscape of international law. Yet no revolution would be complete without a counterrevolution, and many aspects of the Statute reflect the constraints of classical international law that did not yield to the forces of innovation and revolution at Rome. This is not surprising, for if State sovereignty . . . is often blamed for the violent condition of world affairs, international governance is not necessarily looked upon as a superior alternative' (at 8). Clapham and Sadat may have a point. A perfectly reasonable case can be made that the ICC does represent a new era in international law. 14 Or, as Ian Ward claims in Justice (at 73-95), globalization 'demands that we should radically rethink our politics . . . [and] take a fresh look at the institutions which act as transmission belts for our sentiments and ideals, at the legal systems that are supposed to be an expression of them, and at the jurisprudential conceptions within which we clothe these same sentiments and ideals'.
But, as Frédéric Mégret has implied, the debate on the transformation of international law has been going on for a long time. 15 In the 20th century, there was a procession of claims that international law and society were undergoing fundamental changes. For example, in the 1960s there was Wolfgang Friedmann's assertion that the international legal system was moving from an international law of coexistence to an international law of cooperation. 16 In the 1940s there was Philip Jessup's A Modern Law of Nations 17 and Jorge Americano's The New Foundations of International Law, 18 and in the pre-war era, there was Alfred Zimmern's distinction between the 'old' and the 'new' diplomacy, the latter represented by the League of Nations. 19 Perhaps the international system has traditionally been characterized by a continual tension in the international legal order between some elements of multilateralism and some of unilateralism. Or as Georg Schwarzenberger put it, states are like Schopenhauer's hedgehogs, huddling together in the cold, but repelled by each other's spines. 20 At the least, we should not be quick to assume that the international order has fundamentally changed, without looking at the evidence closely.
B The International Criminal Court as a Threat to Sovereignty
This alone would be reason to follow Broomhall and to express some doubt that the fundamentals of sovereignty or international law are likely to change. But there is also a question about whether the ICC is really that threatening to sovereignty in the first place. If it is not, then it can hardly be considered likely to transform it. Cherif Bassiouni, for example in his chapter on the ICC in Justice, asserts that . . .

14 For a discussion of this, see, e.g., Mégret, 'Epilogue to an

It would appear, therefore, that there is no consensus on the extent to which the ICC represents a fundamental challenge to sovereignty, or requires a reappraisal of the nature of international law.
Queries can rightly be expressed about Bassiouni's exclusion of any supranational element in the ICC, but in relation to the law, Bassiouni has a strong point. 21 Although it is true that the International Criminal Court, being both permanent and having a broad jurisdictional reach, is institutionally a huge innovation, the drafters at Rome were very careful to ground the developments they were making in pre-existing law. This was because it was certain by the late stages of the Rome conference, if not before, that some states were going to oppose the Rome Statute whatever the outcome. The drafters were fully aware that such states would seize any parts of the statute in advance of international law as a stick with which to beat the new court should the ICC ever seek to exercise its jurisdiction over them as non-parties. 22 It is notable that this debate is also taking place amongst those who support the International Criminal Court. All the works specifically concentrating on international criminal law reviewed here contain defences of the ICC against the critiques levelled at it by the US that it violates pre-existing international law. 23 Interestingly, those authors who assert that the ICC is transformative of the nature of international law may weaken the claim that the ICC is consistent with pre-existing international law. For example, Sadat, in a work that is at once supportive of the ICC, enjoyable and perhaps deliberately provocative, 24 states that: [a]nother aspect of establishing the ICC outside of the United Nations system is the possibility that the Rome Conference represented a Constitutional Moment in international law, a decision to equilibrate the constitutional, organic structure of international law, albeit sotto voce. . . . [various aspects of the Statute and its creation] . . . suggest an important shift in the substructure of international law upon which the Court's establishment is premised. Unable to effectuate the change explicitly, through formal amendment of the Charter, the international community, including not only States but global civil society, seized upon imaginative ways to bring about the shifts in constitutional structure necessary to permit international law to respond to the needs of international society and changing times. 25 In applauding the Rome Statute for this, Sadat concedes too much to the critics of the ICC who say the ICC significantly alters the charter and international law generally. It is more prudent, as James Crawford is in his contribution to the short but substantial Nuremberg, to note that the ICC reflects the fact that international law may have changed slightly (with a greater focus on international criminal law), although not really at the institutional level. 26

21 This is not to say that the ICC does not reflect a shift in attitudes. It does. 22 The US has used international legal arguments to claim that the ICC is flawed, see, e.g., Bolton
C International Criminal Law and Sovereignty
As has already been noted, the relationship between international criminal law and state sovereignty is complex, and perhaps often misunderstood. 27 We must accept that international criminal law does affect state sovereignty (the law on crimes against humanity and genocide in particular) by prohibiting behaviour perhaps previously outside of the purview of international law. Or, as Bruce Broomhall comments, the idea that certain acts 'undermine the international community's interest in peace and security and, by their exceptional gravity, "shock the conscience of mankind"', 28 and thus are not the concern of one state alone. The obligations undertaken by states parties to the Rome Statute, to cooperate with the Court and to, essentially, submit their judicial processes (or lack thereof) to external oversight also have implications for sovereignty.
However the prevention of international crimes cannot occur without sovereignty. Violations of international criminal law were frequent, for example in Somalia, where there was no government that could control the various factions. It is the same in cases such as Sierra Leone, where rebel forces were fighting a government that is weak and does not control much territory. 29 The state (and its powers) have a protective role that cannot be ignored here, at the very least unless and until the UN or another body chooses to take it over. 30 Turning more specifically to the ICC, it also bears recalling that creating that body was an exercise of sovereignty. No other entities than states had the authority to create a permanent international criminal court. So the ICC, perhaps paradoxically, also owes its existence to state sovereignty. The grounding of the ICC in the consent of states means, in particular, that the ICC may lawfully exercise jurisdiction over nationals of non-party states when they commit crimes on the territories of consenting states. There is no reason that states cannot determine that crimes committed on their territory or by their nationals are prosecutable by courts acting on their behalf. In creating the Court, those states have accepted that the ICC may exercise some of their sovereign powers (the right to exercise jurisdiction) in that way. Non-party states have not had their sovereignty limited in any additional way by this concession made by states parties, who have locked themselves into a regime that can take over part of the protective role of the state, by prosecuting offences if the state later becomes unwilling or unable to do so.
Admittedly, the rights of the ICC to do so are hedged with conditions protecting sovereignty, most notably, complementarity. Most of the works reviewed here discuss complementarity, and tend to do so well. 31 However, although some of the authors accept that complementarity was intended to limit the power of the ICC (or, the 'international') over states, 32 the idea behind complementarity can also be seen as a use of state sovereignty for international ends. As Sir Robert Jennings has written in another context, the classical international lawyer's call for a surrender of sovereignty was erroneous. What was and is most urgently needed is not a surrender of sovereignty but a transformation and augmentation of it into new directions by harnessing it, through proper legal devices, to the making of collective decisions, and the taking of effective collective action, over international political problems. 33 The reason for this is that to be effective, international law needs developed domestic structures like courts and police services. 34 Although Jennings' comments were not written with the ICC expressly in mind, it is an excellent explanation of complementarity. 35 States have decided that international crimes ought to be repressed, and have determined that the most effective way of doing this is by encouraging national efforts at prosecution, i.e., using state sovereignty. Indeed, Philippe Sands, in his contribution to From Nuremberg to the Hague identifies this as one of the advantages of complementarity (at 76-77), as it 'recognises that national courts will often be the best placed to deal with international crimes', and provides them with an incentive to act.
The exercise of legislative and adjudicative jurisdiction is an important part of state sovereignty. What the ICC does is provide a mechanism where states are actually encouraged to use their sovereignty in this way. 36 This effect is not necessarily limited to states parties. 37 Still, the extent to which the ICC can provide such an incentive is not helped by what a number of the authors here accept: that the cooperation regime for the ICC is not strong, owing to an unwillingness of states to go too far in relation to their perceived sovereign prerogatives. 38

31 See International Justice, pp. 86-93, Sands, supra note 14, at 74-81. 32 E.g., Sands, supra note 14, at 75; Transformation, at 123-128. 33 Jennings, supra note 29, at 42. 34 Ibid., at 37, 41. 35 And, perhaps more generally of international criminal law, as Broomhall points out, international criminal law had a political project. It is simply one that many people (this author included) support. See

The above point can perhaps be generalized a little more. International criminal law may have the effect of limiting sovereignty through its substantive norms (although we will return to this matter later), but it also empowers states in relation to jurisdiction. This should come as no surprise, as can be seen from the double-structured nature of the argumentation in the Lotus case, and the commentary it inspired. 39 To assert jurisdiction over an action is to exercise a form of sovereignty over it, and where the jurisdiction being asserted is extraterritorial, this may cause consternation in the state where the offence occurred. What is at issue is who is to be empowered to exercise sovereignty, the locus delicti alone, or other states?
International criminal law has traditionally adopted a broad view of extraterritorial jurisdiction. For example, passive personality jurisdiction is generally frowned upon in international law, yet it is unquestionably available in relation to international crimes. 40 The broadest jurisdiction granted to states in international law, universal jurisdiction, is granted by international criminal law. As jurisdiction involves one state asserting rights to adjudicate events in (and often involving the officials of) other states, this involves an assertion of sovereignty. Thus international criminal law, by accepting universal jurisdiction and limiting material immunities empowers states, enabling them to expand their sovereign rights to events beyond their borders, through the assertion of such a broad form of jurisdiction. Although most international criminal lawyers would accept that in the case of international crimes this is right, it also shows that sovereignty is not always the enemy. Without sovereignty there are no courts, and without courts there are no prosecutions.
In dealing with universal jurisdiction, however, we also have to take into account the claims that universal jurisdiction is, albeit notionally available to all, in practice a tool of the powerful. This was one of the bases upon which the President of the ICJ, Gilbert Guillaume, opined that to accept universal jurisdiction in absentia would 'be to encourage the arbitrary for the benefit of the powerful, purportedly acting as agent for an ill-defined 'international community'. 41 Guillaume's point might be countered with a claim that all states remain, in spite of modern imbalances of power, equally sovereign, 42 thus legally with equal jurisdictional authority. However, as a number of the authors recognize, international criminal law operates in a political, as well as a legal sphere, so practical opportunities to exercise that jurisdiction are not equally distributed. 43 43 See, e.g., Overy, 'The Nuremberg Trials: International Law in the Making', in Nuremberg, at 29, Lattimer and Sands, supra note 14, at 13-17.
It would be one thing for France to prosecute a former Head of State of Haiti before its domestic courts, and quite another for the Marshall Islands to prosecute a former President of the United States. If regular enforcement, the rule of law, is to become even a clearly emergent reality, then supporters of universal jurisdiction will have to propose credible means of addressing the complex decisions and (sometimes political) value-judgements faced by those operating in real-world situations. (at 126) Mark Lattimer and Philippe Sands, in the very useful introductory chapter of Justice, go further, and also note that it is by no means solely at the national level that political considerations enter the equation: 'Outside the courtroom at least, international criminal justice cannot be immune from strategic influences. It is plain that global and regional politics renders the commitment of some states to international justice more decisive than that of others. This leads to some uncomfortable conclusions: for example, one could speculate that if the Tribunal had issued indictments against NATO personnel over incidents in the Kosovo war, it might have seriously undermined Western support for the Tribunal and possibly compromised the whole project of international criminal justice, including the International Criminal Court' (p. 17). This takes us to the fact that sovereign equality is a legal rather than empirical concept. It also takes us to the crux of Broomhall's argument that the rule of law, insofar as it requires 'consistent, impartial practice . . . raises profound difficulties, at least as the international system exists and is likely to develop' (at 54). He is right that the nature of the international system does not provide an easy welcome for entirely consistent practice, although the situation in relation to selective enforcement may have improved somewhat recently. After all, Belgium, the defendant state in the Arrest Warrant case, is no example of a superpower arbitrarily throwing its weight around.
Substantive International Criminal Law: What Are We Trying to Do?
With the exception of Sadat's Transformation, there is a tendency in the works under review here to downgrade detailed discussion of issues of substantive international criminal law to a secondary level. For example, in Lattimer and Sands' Justice, only Eric David discusses the substantive aspects of international criminal law in any depth (and that discussion is limited to a 10-page chapter). 44 This is unfortunate, as precisely what international criminal law is trying to prevent and punish is a hugely important question, as it provides an insight into what values the law is trying to promote. 45 The complexity of international criminal law's relationship with sovereignty comes through not only in the procedural or institutional aspects of international criminal law. It is also present in substantive international criminal law. Indeed, in at least one instance, substantive international criminal law supports state sovereignty. As David Luban has noted, although crimes against humanity limit states' freedom of action in relation to their own nationals (thus limiting their sovereignty), aggression has a sovereignty-protecting role. The prohibition of aggression protects states by criminalizing armed violations of their sovereignty. 46

44 There is also a fairly short, albeit sophisticated section on the extent of criminal liability in the chapter by Clapham in Nuremberg, at 50-62. 45 There is a very useful section in International Justice on this point, however, at 41-51.

International criminal law certainly has its 'schizophrenias', 47 such as the distinction between national and international armed conflicts. As Mark Lattimer and Philippe Sands put it in Justice, 'the gaps in that protection are sufficiently large to allow much blood to flow in between' (at 11). Sovereignty has a lot to do with what is, or is not, considered to be part of international criminal law, as the distinction between international and non-international conflicts shows. The boundaries of international criminal law are not apolitical. International criminal law has areas of blindness. One of these areas is famine, which is traditionally seen not as a problem of criminal law, or perhaps even law at all, but one of development aid. 48 However, as Alex de Waal has reported, '"to starve" is transitive, it is something people do to each other'. 49 Despite an upturn in interest in using criminal law, and the fact that some humanly created famines may come under the definitions of crimes against humanity and genocide, international criminal law proscriptions remain inadequate to respond even to famines that are the result of intentional human decision-making. 50 As Ian Ward tells us in Humanity, 'Law, it should always be remembered, is as potent in its absence as its presence' (at 86).
The fact that international criminal law is not a body of law that has fallen from on high fully formed, but is the outcome of political contestation seems to have been recognized by a number of the works under consideration. Broomhall, for example, quite accurately notes that '[b]ecause the judgement of states, individually and collectively, is subject to diverse extra-legal influences, the process of international criminalization will always be less orderly than its conceptual formulation' (at 39). This is absolutely correct, the modern discussion on whether or not terrorism is an international crime, for example, reflects contestation over where (at the national or international level) such actions ought to be punished, and in other situations, whether criminal law is the appropriate model to adopt.
This political contestation over the substance of international criminal law was clearly in evidence in Rome. As Broomhall notes, the decision in relation to the ICC that the crimes had to be spelt out in considerable detail was not solely because of an abstract commitment to a systematic presentation of international criminal law, but 'also resulted from the awareness of governments that they were designing an institution that could possibly bring indictments against even their highest-ranking officials' (at 31). Indeed he goes further, noting the, perhaps 'promiscuous', 51 use of legal concepts, sometimes for ulterior purposes, mentioning in relation to the nullum crimen sine lege principle that it offered 'a means both of limiting exposure to the obligations imposed by the Statute and of fostering codification and development of the law . . . [as well as reflecting] . . . a desire to forestall any repetition of the criticisms aimed at the Nuremberg Tribunal, which had already been taken into account in the establishment of the ICTY and ICTR' (at 30). Indeed, it is notable that the approach taken to the ambit of criminal responsibility differs quite significantly between 'safe' and 'unsafe' tribunals, i.e., those which could exercise jurisdiction over their creators and those that cannot. 52 It is a pity that on this, as on a number of points, Broomhall makes highly perspicuous assessments, but does not really expand upon them as much as might be hoped. This is one of the few flaws in what is a sophisticated and well-rounded work.
Broomhall is not the only one to note the interplay of substantive norms and state interests at Rome. Sadat also is fully aware that there might be a fundamental incompatibility between the political agendas of States and the process of codifying, in a progressive manner, the customary international law of war and crimes against humanity. Thus, the codification process was fated to produce a text that represented a set of political compromises, rather than a new set of progressive norms criminalizing behaviour on a broad scale. 53 Like Broomhall, Sadat also highlights the interplay between legal argumentation on how specific the substantive criminal law provisions in the Rome Statute had to be and the extent to which states were prepared to allow the ICC to judge their own nationals (see, e.g., at 174-182). Despite this, Sadat, consistent with her idea that the ICC has probably altered international society, at times takes a very broad view of the normative impact of the drafting process at Rome. She is not alone in this, for example, Lattimer and Sands assert that the Rome Statute 'provides the most comprehensive, definitive and authoritative list of war crimes and crimes against humanity attracting individual criminal liability'. 54 But Sadat perhaps goes the furthest, asserting that the definition process at Rome was a 'quasi-legislative event that produced a criminal code for the world' (at 263). This is part of an argument that the Rome Statute provides a ground floor for definitions of crimes. This would provide a defence against those who claim that if the Security Council were to make the law applicable to conflicts in non-party states (as it has now done in relation to Darfur, Sudan, in Resolution 1593) there could be a violation of the nullum crimen principle. 55 It is easy to agree with the conclusion that the Rome Statute reflects a minimum content of international criminal law. There are very few norms in the Rome Statute that were not already clearly established and, indeed, if the Rome Statute can be criticized for anything, it is for diluting some war crimes prohibitions and raising the bar for the prosecution of crimes against humanity. 56 There are probably only two areas in which the Rome Statute can seriously be thought to be in advance of the law in existence in 1998. The first of these is the criminalization of the recruitment of child soldiers, the second being the inclusion of gender (and perhaps culture) as prohibited grounds of discrimination in crimes against humanity. 57 It would be difficult to argue now that these are not established in international criminal law. As the Canadian implementing legislation for the Rome Statute makes clear, 'crimes described in Articles 6 and 7 and paragraph 2 of Article 8 of the Rome Statute are, as of July 17, 1998, crimes according to customary international law'. 58 But there are also problems with getting to this result the way that Sadat does.

54 Justice, at 5. Crawford, in his contribution to Nuremberg, is more circumspect, describing the Rome Statute (at 152) as a limited code of international criminal law. Aceves and Hoffmann, in Justice, however, in relation to crimes against humanity, treat the Rome Statute's provision on crimes against humanity as 'the most authoritative interpretation of crimes against humanity in international criminal law' (at 245). 55 Sadat clearly is concerned with such an argument, see Transformation, at 169.
Sadat's argument is that the Rome Statute involved a reconfiguration of the sources of international law, or, in her words the prescriptive jurisdiction of the international community and the adjudicative jurisdiction of the Court are premised on transformative redefinitions of those principles in current international law. Indeed, through a rather astonishing mutation, jurisdictional principles concerning which State may exercise its authority over particular cases have been transformed into norms establishing the circumstances under which the international community may prescribe rules of international criminal law and punish those who breach such rules (at 103). This is difficult to reconcile at times with other statements in the work: Sadat also asserts that 'the definitions of crimes are for purposes of the ICC Statute only, and do not embody progressive developments that may be considered new formulations of customary international law (some would even argue that they do not even embody current international law)'. 59 Despite this, it is unclear why the argument that the Rome Statute definitions are at least a minimal definition of custom cannot be made on perfectly traditional principles relating to the interrelationship of treaties and custom. It is true that the crimes are said, in Articles 6(1), 7(1) and 8(2) to be defined 'for the purpose of this Statute', but Article 10 of the Rome Statute provides that 'nothing in this Part shall be interpreted as limiting or prejudicing in any way existing or developing rules of international law for purposes other than this Statute'.
As far back as the North Sea Continental Shelf case it was accepted that the drafting process of treaties, and treaties themselves, can have a developmental role in custom. 60 There is no reason not to believe that this happened here. The drafters at Rome were for the most part very careful to stay within the bounds of established custom. As we have seen, there were only a very small number of cases where the drafters stepped even arguably beyond the pre-existing law. Rome was not seen as the place for large steps forward in the law, but as a place for creating a court to enforce some of the law. This is, for the most part, the way in which the ICTY has taken the Rome Statute, its most important statement on the point being a comment of the Trial Chamber in the Kupreškić case: In many areas the Statute may be regarded as indicative of the legal views, i.e. opinio juris of a great number of States. Notwithstanding article 10 of the Statute, the purpose of which is to ensure that existing or developing law is not 'limited' or 'prejudiced' by the Statute's provisions, resort may be had cum grano salis to these provisions to help elucidate customary international law. Depending on the matter at issue, the Rome Statute may be taken to restate, reflect or clarify customary rules or crystallise them, whereas in some areas it creates new law or modifies existing law. At any event, the Rome Statute by and large may be taken as constituting an authoritative expression of the legal views of a great number of States. 61 In the Norman Child Soldiers decision of the Special Court for Sierra Leone, a decision which dealt with one of the few crimes that could be argued to be new in the Rome Statute (and in which the Appeals Chamber agreed with the Security Council, in determining that in fact it was not), 62 even the dissenter Judge Robertson was prepared to accept that the crime crystallized at the negotiations in Rome. 63 This is perfectly consistent with the traditional rules relating to treaties as evidence of customary international law, and there is thus no need to go further and assert that there has been a transformation in the nature of the international law-making procedure, albeit one which ended up with what was in some ways, as Sadat put it, a 'lowest common denominator' (at 267) list of crimes. A list which, Broomhall argues, is now being treated as a 'de facto criminal code' (at 29).

56 See Cryer, supra note 52, at 268-283. 57 Both of which were eminently appropriate innovations (if that is what they were) in Rome. 58 Crimes Against Humanity and War Crimes Act 2000 c. 24, Section 4(4). 59 Transformation, at 169. See also at 146, '[t]he Statute adopted by the Diplomatic Conference is a montage of historically-based texts, massaged during difficult political negotiations, that improved the existing law in some respects but left it either unchanged or more restrictive in other cases' and at 141, where Sadat notes that the substantive law 'is oriented towards the prosecution of "major" war criminals, not their subordinates or other lesser offenders. This is consistent with the approach taken in establishing international criminal tribunals since Nuremberg'.
Where Are We Going?
So, where does this leave us? Is the international criminal law system always to be ineffective owing to the interplay of the limitations of the ICC's procedure, the lacunae in substantive criminal law and sovereignty? Or are we to move on to a more rule of law-based international criminal justice system? It is quite possible that, as Lattimer and Sands worry in Justice, 'international politics, rather than judicial innovation . . .'. McCormack states in his well-researched and thoughtful chapter in the same volume that inconsistencies in international criminal law enforcement are 'most readily explicable on the basis of an "us" and "them" mentality' (at 108), where states advocate the prosecution of 'others', whilst having 'an aversion to accept the ugliness of what their own troops have done against the enemy they have come to dehumanise'. 64 The respective works here are all moderately optimistic, although none could be considered naïve or utopian. Sadat's work is perhaps the most upbeat, saying that 'the repartition of competences between national and international jurisdictions incorporated in the Statute as a matter of prescriptive and adjudicative jurisdiction may presage a quasi-federal organization of international legal authority in the future' (at 11). Further along, Sadat insists that it is conceivable, perhaps, that we have reached a stage during which a quantum leap in our thinking and behaviour has become possible, enabling us to transform the prohibitions on the commission of genocide, war crimes, crimes against humanity and aggression into real tools to deter the cruel and powerful. The journey from the Hague to Rome was long and arduous; it is to be hoped that the journey back to the Hague will be shorter, less encumbered, and ultimately successful. Humanity deserves no less.
Broomhall, as we have already seen, has more doubts. His prognosis at times looks fairly bleak: 'The required practice (and consistency of practice) called for by the accountability literature sits uneasily alongside some of the fundamental characteristics of the modern State system' (at 58). This is amplified later in the work: 'the role of States in making key decisions affecting the credibility of international criminal law remains a central fact of the emerging system of international justice, and this fact sits uneasily with any assertion that the international rule of law is gaining strength'. 65 Broomhall also does not see much change in the international legal environment either. This is partially as he considers there to be an inextricable link between international criminal law, 'the call for the reduction in sovereignty and . . . the call for increased use of force in support of international criminal law' (at 56). But, as he notes (ibid.) states are unwilling to put the decision to use force outside of their control, in particular in support of international criminal law. Broomhall's second proposition, about the link to force (which has links to Martti Koskenniemi's point that 'the "criminalization" of international politics, whatever else it may achieve, also strengthens the hand of those who are in a position to determine what acts count as "crimes" and who are able to send in the police' 66 ), is perhaps more controversial. Although some (Antonio Cassese and Madeline Morris are both cited by Broomhall as examples (at 57)) have called for use of force in support of international criminal law (it is not entirely clear that the Cassese quote quite supports this), there are other ways in which coercion can be brought to bear. It is not the threat of military force that persuaded many of the states in former Yugoslavia to cooperate with the ICTY, but economic incentives. Still, these instruments are also open to critique about their lack of transparency and equal application (International Justice, at 57).

64 Justice, at 142. McCormack considers this (ibid) to be one of the strongest arguments in favour of having an international system for prosecution. 65 International Justice, at 185. See also at 103 'Domestic trials will remain fraught with all the political, social, and resource difficulties that have always accompanied them, and the resulting imperfections will be slow to improve'.
However, Broomhall is not entirely downbeat, he identifies a metajuridical reason for hope. This is what he describes as a 'new legitimation environment' in which states operate (at 5), one in which they are increasingly under pressure from NGOs and their electorates to justify their decisions. According to Broomhall, 'it is in this context that the impact of the ICC and international criminal law are most likely to be felt'. 67 Although Broomhall's views here are unquestionably sensible and thoughtful, there is an extent to which two issues could have been further separated out, and the second elaborated on more in the work. The first is the extent to which states which are subject to the Rome regime (be it by becoming parties, or by having personnel subject to its jurisdiction) are likely to begin to prosecute their own nationals to avoid the ICC stepping in. The second is the extent to which states may begin, by doing this, to inculcate the values of international criminal law and normalize the prosecution of international crimes. This may create a feeling that the investigation and prosecution of international crimes is, simply, the normal response to allegations of their commission. In other words states internalize the value of prosecution of international crimes without thought of the external reasons for doing so. 68 Broomhall is cognisant of the first possibility, accepting that [S]tates have begun taking steps to amend national law to reflect the jurisdictional scope of the Rome Statute. Were this trend to extend widely, the resulting enhancement of the capacity of national law to prosecute international crimes, with any additional incentive provided by the jurisprudence of the ICC, could lay the foundations for a significant increase in the number and credibility of national proceedings against international crimes. 69 He also at least alludes to the second: the best remaining hope for the entrenchment of international criminal law as a regular feature of the international system is the development of a deeply rooted culture of accountability that leads to a convergence of perceived interests and of behaviour on the part of the States responsible for enforcing this law. The ICC and related developments may in fact contribute to the emergence of such a culture, although present signals are not uniformly positive (at 3).
Such a statement, in fact, puts Broomhall in a similar position to Amnesty International in 1998, when that organization stated that

[t]he true significance of the adoption of the Statute may well lie, not in the actual institution in its early years, which will face enormous obstacles, but in the revolution in moral and political attitudes towards the worst crimes in the world. No longer will these crimes be simply political events to be addressed by diplomacy at the international level. 70

Perhaps the difference between Broomhall and Amnesty International is one of judgment, rather than evidence. However, it is unfortunate that although he seems prepared to concede that states are beginning to take such a view (see, e.g., at 106), Broomhall does not engage in any extended way with the most relevant international relations scholarship, particularly in the area of constructivism. 71 To be fair to Broomhall, IR theorists, including constructivists, have not dealt with international criminal law in any detail. However, such an engagement by Broomhall could have made for a richer finale to what is already an excellent work.

67 International Justice, at 6. See also at 188. 68 See, e.g., Koh, 'Why Do Nations Obey International Law?', 106 Yale Law Journal (1997) 2599. 69 International Justice, at 93. Although he is more pessimistic when he qualifies himself by saying that despite the Rome Statute, '[d]omestic trials will remain fraught with all of the political, social and resource difficulties that have always accompanied them, and the resulting imperfections will be slow to improve' (at 102-103).
A constructivist account of the development of international criminal law would take very seriously the role of ideas about international criminal responsibility and the effect those have on states, especially how they perceive their interests and what values they internalize and act upon. The ideas in international criminal law include the appropriateness of the repression of certain identified conduct by prosecution, and that such offences affect everyone, threatening the international system as a whole. Such ideas were contained in the Resolutions that created the ICTY and ICTR (827 and 955 respectively), and those institutions acted as repositories and reminders of those ideals.
The way many states see themselves in relation to international criminal law, and the appropriate role of prosecution has changed over the last decade and a half. Constructivism would place emphasis on the fact that a number of states have begun to internalize those ideas and see their own identity as involving a commitment to the prosecution of international crimes. After a while, rhetoric has a habit of becoming at least partially reified. Or, as Edward P. Thompson said, 'the law may be rhetoric . . . it need not be empty rhetoric'. 72 International criminal law is perhaps particularly susceptible to such an analysis, given the suffusion of its own rhetoric with ideals of universality and crimes against humanity as a whole. 73 A constructivist account would build upon this to use the power of ideas and identity to explain how this led to the ICC.
Furthermore, the account would then expand on the role of the ICC in acting as a repository of those ideas, and persuading states, through the incentive to them to adopt domestic legislation, and oversight of prosecutions, to prosecute international crimes. Constructivist accounts could accept that at the beginning this might be on the basis that states would rather prosecute international crimes themselves than have the ICC do it. Later though, through the existence of the ICC as an embodiment of the ideals of international criminal law, and state interactions with it, states would internalize the ideals, and simply prosecute international crimes on the basis that they ought to be prosecuted per se, without regard to the concern that the ICC might otherwise do it. Admittedly, this is a skeletal, and perhaps caricatured, constructivist argument, but it shows how an engagement with such literature could have taken Broomhall further.
Although a realist could retort that the ICC was created as a cheap way of appearing to act against international crimes without having to create an effective regime that could limit the actions of the powerful, there is some evidence in favour of the constructivist view. Lattimer and Sands quite rightly, although not without caveat, point (at 9-10) to the possibility that perceived state interests have begun to shift, to take into account the importance of prosecuting crimes which 'threaten the peace, security and well being of the world'. 74 Having shifted to issues of theory, it is apposite to turn now to Professor Ward's Humanity. This is a work that attempts to show how jurisprudence, and law more generally, took a long turn when it moved away from emotion and empathy. Perhaps understandably therefore, Ward seems sceptical of the coercive forms of international criminal law. 75 He has a very jaded view of the ICTY, for example, seeing it as an example of victor's justice. He takes the view that unlike the US, which avoided the ICTY's jurisdiction over the Kosovo conflict 'in their different ways, all three communities . . . [being prosecuted] . . . were vanquished' (at 130). Ward has a point about selectivity, however, he understates the fact that although the US has not accepted the Rome Statute, 100 states have, and thus have accepted that they ought to prosecute their own nationals, as well as showing they believe the law ought to be applied to others.
The second problem Ward identifies with prosecutions (at 131) is drawn from Hannah Arendt: that such trials are anticlimactic, as evil is banal, and '[f]lashy show trials of certain individuals . . . allow the rest of us to pretend that we are not ourselves in some way responsible'. Against this we might note Alain Finkielkraut's contention that such trials are important as they reiterate the point that we always maintain moral responsibility for our actions: banality is no defence. 76 As to the contention that trials allow us to fool ourselves about our own responsibility, it might be noted that, as Karl Jaspers showed, there are many different types of guilt. 77 There is criminal guilt, political guilt (which is the responsibility of people for the acts of their governors), moral guilt (our moral responsibility for all our deeds) and finally metaphysical guilt, which arises as '[t]here exists a solidarity among men as human beings that makes each co-responsible for every wrong and injustice in the world, especially for crimes committed in his presence or with his knowledge'. 78 Ward's point appears to elide the first and last of the types of responsibility. In contrast Jaspers accepted that although there was a close connection between the forms of guilt, '[t]his differentiation of concepts of guilt is to preserve us from the superficiality of talk that flattens everything out on a single plane'. 79 One leads to criminal punishment, the other, for Jaspers, leads to a 'transformation of human self-consciousness . . . [and] . . . may lead to a new source of active life, but one linked with an indelible sense of guilt and humility'. 80 It is by no means clear that the acceptance that some ought to bear criminal guilt must lead to a negation of the metaphysical guilt that we may all bear for crimes committed in particular with our knowledge, but which we did not prevent. Indeed, in the two cases where international criminal tribunals have been set up (Yugoslavia and Rwanda), the conflicts have remained in the public eye, and this has led to at times agonised reflection on what states, through the UN, ought to have done to prevent those offences. 81 It is arguable that the swing to accepting the emerging responsibility to intervene 82 (which also has interesting links to the concept of metaphysical guilt) has been assisted, if not catalysed, by the movement towards criminal repression of criminal guilt. 83 It is unfortunate that Ward does not engage with Jaspers directly, given that both have an affinity for Kant, and Jaspers' conceptual framework remains one of the most nuanced accounts of what we mean when we refer to guilt.

74 Rome Statute, preambular paragraph 3. 75 As is Philip Allott, see his
Ward's final argument against the over-use of international criminal law perhaps has more purchase: [t]he forms of law relieve us of the deeper ethical problems, of shared responsibility for the fate of humanity. They also relieve us of the more material responsibilities too. The imprisoning of individual soldiers and politicians does not rebuild schools, hospitals and roads. It does not rebuild trust within devastated societies either. (at 131) This may be true, but it is also the case that the money (and there is a lot of it) that has gone into the ICTY would not have been given to reconstruction. The funds paid to the ICC by its states parties are not taken from the development or reconstruction aid budgets. That is not to say that the Tribunals have been cheap or always cost-effective, or indeed that some of the money that has been allocated to them could not have been used constructively elsewhere, for example in rebuilding the Rwandan justice system. It is simply that the existence of those Tribunals has probably released more money from contributing states than otherwise would have been given in aid to the countries currently under their consideration.
Against Ward, it can be argued that the individualization of guilt may help rebuild trust among communities. Haris Silajadzic, the Bosnian foreign minister during the war, told Tim Judah that the Tribunal 'helps a cathartic process in societies on all sides. The message is that you cannot murder, kill or dislocate people without punishment'. However, he also noted 'I am against reconciliation as seen from the Hague perspective. I never wronged anyone. I did nothing wrong, Reconciliation means we have to meet halfway, but that's offensive. I was wronged and almost my entire family was killed. I care about justice and truth.' 84 Ward's suggestion that local courts ought to have prosecuted offences has been partially taken up by the ICTY, with the recent passing of cases to the Bosnian war crimes chamber under ICTY Rules of Procedure and Evidence 11bis. 85 But this procedure has involved the Bosnian chamber proving that it is capable of fair, impartial trials. As McCormack points out in Justice, the actions (or lack thereof) of national trials are why the ICC has been considered necessary (at 107). Ward underestimates these problems.
Ward is far more sanguine about the South African Truth and Reconciliation Commission (TRC) than about the ICTY. Ward sees more humanism in the TRC, and believes that it will help establish a culture of human rights by focusing on 'participating in the pain of others' (at 134). However, the South African TRC is more complex than this. As Alex Boraine, one of the members of the TRC notes in Justice, there is a lingering concern over impunity (at 337) and '[t]he South African experiment, with all its benefits, illustrates vividly the need for an international criminal court' (at 347). Others have gone further, and claimed that the TRC was a flawed institution designed to serve the interests of a new political elite rather than the victims. 86 Either way, it is by no means clear that the TRC has led to reconciliation in South Africa, or contributed to the social justice it was intended to foster.
To take Ward's own suggestion, and to look to literature for assistance, Aleksandr Solzhenitsyn was deeply critical of claims that there should be reconciliation and amnesty: 'Fie! What naturalism. Why keep talking about all that? And that is what they usually say today, those who did not themselves suffer, who were themselves the executioners, or who have washed their hands of it, or, who put on an innocent expression: Why rake over all that? Why rake over old wounds? (Their wounds!!)' 87 Ward's response, that there are many more who would prefer restorative over retributive justice, is problematic on two grounds. First, it responds to a normative claim with an empirical observation. Second, on its own terms, the assertion needs empirical support, but none is given. 88 The reason for Ward's support is that he has hope for humanity, and in the transformative power of empathy. I would like to agree. The only problem is that many people over literally millennia have shown themselves to be prone to the opposite side of human nature. Ward is aware of this, fearing early on that [p]erhaps Hobbes was right, perhaps our lives are meant to be 'nasty, brutish and short'? Not only are we programmed for disappointment, we also appear to be programmed for self-destruction. How else can we explain the serial horrors of the countless holocausts of the last century? More pertinently perhaps, is there anything we can do to prevent their reoccurrence? (at 13) Ward hopes that sensibility is the way. Others, such as Reinhold Niebuhr, would retort that people need to have their impulses controlled through strict rules, which international criminal law provides. Even if Ward has the better of the argument on human nature, international criminal law and prosecutions of international crimes may help inculcate the values that Ward seeks to foster. For example, as Jaspers said: What happened in Nuremberg . . . is a feeble, ambiguous harbinger of a new world order, the need of which mankind is beginning to feel. The new world order is not at hand by any means . . . but it has come to seem possible to thinking humanity; it has appeared on the horizon as a barely perceptible dawn, while in case of failure the menace of self-destruction of mankind looms as a fearful menace before our eyes. . . . our salvation on the world depends on the world order which -although not established in Nuremberg -is suggested by Nuremberg. 89 Indeed, there may be empirical reasons for the argument that resort to criminal law is not a first, but a last resort, and that having tried trusting humanity, we have come to seek to limit its destructive urges. As Leila Sadat puts it, the ICC was created as states, having tried all the other methods of repressing such offences, decided to 'give justice a chance' (at 72). Before we abandon the exercise we need to see that prosecution is not the least worst option. As Sadat notes, the system of international criminal law is in its infancy, and it needs time before the evidence is in and we can simply dismiss prosecution as a means of dealing with international crimes (at 75).
Conclusion
As should be clear from the above, there is plenty to engage with in all the works under review. All have much to say in their favour. Lattimer and Sands' Justice has a number of extremely well thought-through chapters, 90 although as might be expected from a fairly lengthy edited collection, the variety of views on offer means that it is difficult to draw an overall 'message' from the work over and above the idea that international criminal law is basically a good thing.
Although Sands' Nuremberg is short, and the chapters tend to show their provenance in public lectures, there is considerable analysis in them, which makes them worth careful reading. It is not simply an introductory work, even if some expansion of the ideas it contains would have been welcome. The same can be said about Broomhall's International Justice. This is the work of a serious and talented scholar, who also has an excellent feel for the subject. The only serious criticism that can be made of the work is that, as we have had cause to note already, the number of thoughts and issues packed into a fairly short work mean that some ideas are not as fully developed as they could have been.

89 Jaspers, supra note 77, at 54. 90 And, it is fair to say, a greater number than is often the case in the curate's-egg world of the international criminal law edited collection.
Sadat's work is both longer and more wide-ranging, dealing with almost all aspects of the ICC, procedural and substantive, in addition to attempting to use the creation of the ICC to argue for an alteration in the international legal order. It is interesting to compare the visions of Sadat and Broomhall, which are in some ways similar. Both hope for a better future for international criminal law. What distinguishes them is the extent to which they believe the ICC represents a change in the international legal system. Sadat is optimistic with caveats; Broomhall is cautious, but willing to take a glance toward the clouds. Ward, in a more general manner, looks further and hopes for more, little short of a transformation of society through a rejuvenated set of human sensibilities. Our hearts may be with Ward and Sadat, but our heads are with Broomhall and those who have yet to be convinced of human perfectibility through institutions or love.
In some ways this maps on to the ambivalent role that sovereignty plays in international criminal law. An excess of sovereignty and state power can lead to international crimes, as in the Holocaust, but so can a lack of sovereign authority, as in Somalia or Sierra Leone. Ironically, we act through state sovereignty in order to restrict actions justified in the name of state sovereignty. 91 Sovereigns need limitation, but then maybe we all do. Either way, as it is hoped has been shown, whatever human nature, sovereignty is still part of the society in which we find ourselves, and its relationship to international criminal law is multifaceted and not easily reducible to shibboleths on either side. And so it is likely to stay. 91 I owe this felicitous formulation to Neil Boister.
Current State of Knowledge on the Immune Checkpoint Inhibitors in Triple-Negative Breast Cancer Treatment: Approaches, Efficacy, and Challenges
Triple-negative breast cancer (TNBC) is the most aggressive breast cancer subtype with limited treatment options. Recently, there has been a growing interest in immunotherapy with immune checkpoint inhibitors (ICIs) in TNBC, leading to extensive preclinical and clinical research. This review summarizes the current state of knowledge on ICIs' efficacy and their predictive markers in TNBC and highlights the areas where the data are still limited. Currently, the only approved ICI-based regimen for TNBC is pembrolizumab with chemotherapy. Its advantage over chemotherapy alone was confirmed for non-metastatic TNBC regardless of programmed death-ligand 1 (PD-L1) expression (KEYNOTE-522) and for metastatic, PD-L1-positive TNBC (KEYNOTE-355). Pembrolizumab's efficacy was also evaluated as monotherapy, or in combination with niraparib or with radiation therapy, showing potential efficacy and an acceptable safety profile in phase 2 clinical trials. Atezolizumab + nab-paclitaxel increased the overall survival (OS) over placebo + nab-paclitaxel in early TNBC, regardless of PD-L1 status (IMpassion031). In IMpassion130 (untreated, advanced TNBC), the OS improvement was not statistically significant in the intention-to-treat population but clinically meaningful in the PD-L1-positive cohort. The durvalumab–anthracycline combination showed an increased response durability over placebo + anthracycline in early TNBC (GeparNuevo). Several phase 1 clinical trials also showed a potential efficacy of atezolizumab and avelumab monotherapy in metastatic TNBC. ICIs appear to be applicable in both neoadjuvant and adjuvant settings, and in both pretreated and previously untreated patients. Further research is necessary to determine the most beneficial drug combinations and optimize patient selection. It is essential to identify the predictive markers for ICIs and factors affecting their expression.
Introduction
Breast carcinoma (BC) is the most common malignancy among women worldwide 1 and the leading cause of death among women. 2 BC can be classified into several clinically relevant subtypes based on the expression of estrogen receptor (ER), progesterone receptor (PrR), and overexpression of human epithelial growth factor receptor 2 (HER2). 3 The positivity of the expression of each identifies breast cancer clinical subtype and can predict the effectiveness of targeted therapeutic agents. 4 A distinct breast cancer clinical subtype-triple-negative breast cancer (TNBC)-characterized by the lack of expression of ER, PrR, and no overexpression of HER2, represents approximately 12% to 20% of all BC diagnoses. [5][6][7] TNBC tends to occur more commonly in younger patients, with poor cellular differentiation and a higher stage at the diagnosis. 8,9 The rate of local recurrence in TNBC reaches more than 50%, 10 with a high rate of distant metastases. 11 Thus, TNBC is associated with the least favorable prognosis of all BC subtypes. From the biological point of view, TNBC is not a specific cancer type, but a heterogeneous subset of neoplasms brought together due to their immunohistochemical similarities. 10,12 Thus, due to the molecular differences, the search for new treatment modalities is significantly more complex. So far, there are no targeted molecular-based therapies for TNBC, and it is routinely managed with chemotherapy (ChT), including anthracyclines or taxane-based regimens. 13 The need to develop more effective treatment options for TNBC-affected patients results in extensive research in this field, bringing in many new therapeutic approaches evaluated in clinical trials, including immunotherapy with immune checkpoint inhibitors (ICI).
The immune checkpoints, programmed death-receptor 1 (PD-1) and cytotoxic T cell antigen 4 (CTLA-4), act as negative regulators of T cell immune function. 14 PD-1, expressed by T lymphocytes, interacts with programmed death-ligand 1 and 2 (PD-L1, PD-L2) on tumor cells, inhibiting the T cells' proliferation and production of interferon-γ (IFN-γ) and tumor necrosis factor-α (TNF-α), and reducing their survival and cytotoxic abilities. 15 CTLA-4 inhibits the interaction between T cells and antigen-presenting cells (APCs), which weakens the immune response against the neoplastic cells. 16,17 It also binds the co-stimulatory B7 ligands on APCs with higher affinity than CD28, a protein that provides co-stimulatory signals required for T cell activation and survival, but does so without providing co-stimulation. Inhibiting the checkpoints' function facilitates the immune response against tumor cells.
Immunotherapy in TNBC-Clinical Trials
ICIs that are currently investigated for their efficacy in TNBC include PD-1 inhibitors (pembrolizumab, nivolumab), PD-L1 inhibitors (atezolizumab, avelumab, durvalumab), and CTLA-4 inhibitors (tremelimumab). Details of the clinical trials referred to below are shown in Table 1 (PD-1 inhibitors) and Table 2 (PD-L1 inhibitors). Due to the aforementioned synergy between ICIs and ChT, most of the described trials investigated the efficacy of different combinations of ICIs and ChT regimens.
PD-1 inhibitors
Pembrolizumab + ChT is currently the only ICI-based treatment combination approved by the Food and Drug Administration (FDA) for locally recurrent unresectable or metastatic, PD-L1-positive TNBC. On July 26, 2021, it was also granted accelerated approval for high-risk, early-stage TNBC as neoadjuvant treatment, continued as a single-agent adjuvant treatment. 24 Different pembrolizumab regimens were initially tested in open-label trials. Phase 2 KEYNOTE-086 evaluated pembrolizumab monotherapy as second- or later-line treatment in metastatic TNBC. 25,26 Pretreated patients reached an objective response rate (ORR) of 5.3% (95% CI: 2.7-9.9), and it was slightly higher in the PD-L1-positive subgroup: 5.7% (95% CI: 2.4-12.2). Notably, the ORR was lower than in the case of single-agent ChT; however, the responses were highly durable and associated with fewer adverse events than ChT. 26,27 The disease control rate (DCR) was 7.6% in general, 9.5% in PD-L1-positive and 4.7% in PD-L1-negative populations. 26 The previously untreated, PD-L1-positive cohort presented an ORR of 21.4% and a DCR of 23.8%. 25 However, phase 3 KEYNOTE-119 28 comparing pembrolizumab monotherapy to single-agent ChT for pretreated (second- or third-line treatment) metastatic TNBC showed a median OS of 9.9 months (95% CI: 8.3-11.4) for the pembrolizumab group and 10.8 months (9.1-12.6) for the ChT group (HR 0.97 [95% CI: 0.82-1.15]), showing no advantage of pembrolizumab over ChT. 28 The placebo-controlled, double-blind phase 3 KEYNOTE-522 trial assessed the efficacy of adding pembrolizumab to neoadjuvant ChT (paclitaxel + carboplatin) followed by adjuvant pembrolizumab vs ChT + placebo followed by adjuvant placebo for non-metastatic TNBC. 29 Results showed a significant increase in both primary endpoints, pathological complete response (pCR) and event-free survival (EFS) rates, in the experimental arm. The pCR rate in the pembrolizumab-ChT group reached 64.8% (95% CI: 59.9-69.5) vs 51.2% (95% CI: 44.1-58.3) in the placebo-ChT group. 29 The pCR in the PD-L1-positive population for pembrolizumab and placebo groups was 68.9% vs 54.9% respectively, while in the PD-L1-negative population, it was 45.3% vs 30.3%. This showed a benefit of adding ICI to neoadjuvant ChT regardless of PD-L1 expression, consistent with the IMpassion031 trial. 30 An updated analysis of the study showed an increase in EFS in the pembrolizumab group that exceeded expectations based on pCR percentage. 31 At 36 months, the EFS was 84.5% (95% CI: 81.7-86.9) in the pembrolizumab-ChT group and 76.8% (95% CI: 72.2-80.7) in the placebo-ChT group. 31 The most common event was distant recurrence (7.7% in the pembrolizumab-ChT group and 13.1% in the placebo-ChT group). 31 A similar strategy, ICI (pembrolizumab) + ChT (nab-paclitaxel, paclitaxel, or gemcitabine + carboplatin) vs placebo + ChT for previously untreated metastatic TNBC, was evaluated in KEYNOTE-355. 32 In the intention-to-treat population, the median progression-free survival (PFS) in the pembrolizumab-ChT group was 7.5 vs 5.6 months in the placebo-ChT group (HR 0.82, 95% CI: 0.69-0.97). In PD-L1-negative patients, median PFS was 6.3 months in the pembrolizumab-ChT group and 6.2 months in the placebo-ChT group (HR 1.08, 95% CI: 0.77-1.53). Patients with PD-L1 positivity were further subdivided into groups with a PD-L1 combined positive score (CPS) of ⩾ 1 and ⩾ 10. For the CPS ⩾ 1 cohort, the median PFS in the pembrolizumab vs placebo group was 7.6 vs 5.6 months (HR 0.74, 95% CI: 0.61-0.90) and did not reach statistical significance.
The respective pembrolizumab vs placebo PFS rates were 56.4% vs 46.6% at 6 months and 31.7% vs 19.4% at 12 months. 32 In the CPS ⩾ 10 group, pembrolizumab significantly improved PFS duration, which reached 9.7 months in the pembrolizumab-ChT group and 5.6 months in the placebo-ChT group (HR for progression or death, 0.65, 95% CI: 0.49-0.86). 32 Thus, the study provided further evidence for increased pembrolizumab efficacy in higher PD-L1 enrichment. In addition to combining ICIs with ChT, pembrolizumab is also being evaluated for its synergy with other therapeutics. Examples include niraparib, a poly(adenosine diphosphate ribose) polymerase (PARP) inhibitor; ladiratuzumab vedotin, an anti-LIV-1 antibody-drug conjugate with a protease-cleavable linker to monomethyl auristatin E; and sacituzumab govitecan, an antibody-drug conjugate composed of an anti-trophoblast cell surface antigen 2 IgG1 kappa antibody and SN-38, the active metabolite of irinotecan and a topoisomerase I inhibitor.
PARP inhibitors, apart from inhibiting the detection and repair of DNA damage, 33 were found to increase PD-L1 expression on tumor cells, providing more targets for PD-L1 inhibitors. 34 The efficacy of the combination of pembrolizumab and niraparib for metastatic or locally advanced TNBC was studied in phase 2 KEYNOTE-162. Enrolled patients had a median history of 1 prior treatment in the metastatic setting. The ORR and DCR were 21% and 49%, respectively. 35 In the efficacy-evaluable population, 11% achieved a complete response (CR), 11% had a partial response (PR), 28% experienced stable disease (SD), and 51% had disease progression. OS could not be determined at the time of publishing. 35 A numerically higher response rate was achieved in groups with confirmed tBRCA mutation vs tBRCA wild-type (ORR = 47% vs 11%, DCR = 80% vs 33%, median PFS = 8.3 vs 2.1 months) and PD-L1-positive vs PD-L1-negative disease (ORR = 32% vs 8%). While the ORR difference between BRCA statuses was similar to that observed with PARP-inhibitor monotherapy, PFS was nearly 3 months longer. 35 An ongoing phase 1b/2 trial (NCT03310957) studies the combination of ladiratuzumab vedotin with pembrolizumab as a first-line treatment in patients with locally advanced or metastatic TNBC. 36 At the time of writing, after a follow-up of ⩾ 3 months, the ORR was 54% (95% CI: 33.4-73.4), showing encouraging clinical activity of this regimen and a manageable safety profile. 36 Sacituzumab govitecan is an FDA-approved drug in pretreated metastatic TNBC. Due to promising results of the trials comparing it to ChT's efficacy (significant increase in PFS and OS in the sacituzumab govitecan cohort 37 ), it is now being explored in different combinations. An ongoing phase 2 trial, NCT04468061, aims to compare the efficacy of sacituzumab govitecan with pembrolizumab to that of sacituzumab govitecan monotherapy in metastatic, PD-L1-negative TNBC, with PFS being the primary endpoint. 38 The primary completion date is estimated for April 2024. 39 A single-arm, phase 2 clinical trial, NCT02730130, aimed to determine the safety and efficacy of pembrolizumab with radiation therapy (RT) for mTNBC treatment. 40 By the 13th week of the study, 29% of the patients had died of disease-related complications. Out of the participants evaluable at week 13, 50% had disease progression, 33% had a PR, and 17% had SD which was durable for 30 weeks. 40 Overall, 33% of patients with durable responses presented them outside of the RT field, 40 indicating some efficacy of pembrolizumab in this combination. The treatment was reported to be well tolerated. 40 Another PD-1 inhibitor, nivolumab, was found to inhibit the growth of tumors derived from injecting a TNBC cell line into a mouse model that develops a significant population of human B and T lymphocytes. 41 The phase 2 TONIC trial investigated the efficacy of nivolumab in metastatic TNBC administered after different induction protocols, such as hypofractionated irradiation, low-dose cyclophosphamide, cisplatin, or doxorubicin. Overall, the ORR was 20%, with most responses observed in the cisplatin (ORR 23%) and doxorubicin (ORR 35%) cohorts. 42 The study provided a solid rationale for considering induction treatment before introducing ICIs; however, the specific regimens and timelines are to be explored in further trials.
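A brief clarification of the response metrics quoted throughout this section (these definitions follow the usual RECIST-based conventions and are not restated in the individual trial reports): the objective response rate and disease control rate are derived from the best-response categories as

ORR (%) = 100 × (nCR + nPR) / N and DCR (%) = 100 × (nCR + nPR + nSD) / N,

where N is the number of response-evaluable patients. Applied to the KEYNOTE-162 figures above, the 11% CR and 11% PR rates give the reported ORR of roughly 21%, and adding the 28% with SD gives the reported DCR of roughly 49%, with the small discrepancies attributable to rounding.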
The combination of nivolumab, paclitaxel, and bevacizumab (anti-vascular endothelial growth factor antibody) as a first-line treatment in patients with HER2-negative metastatic breast cancer is the subject of the single-arm, phase 2 NEWBEAT trial. 43 At the time of writing, the published results regarding specifically patients with TNBC are limited to the ORR, which reached 83.3% in this subgroup. 43 As the trial is still ongoing, more data can be expected in the future.
To date, research on nivolumab's efficacy in TNBC is not as advanced as in the case of pembrolizumab. Nonetheless, many noteworthy combinations that include nivolumab are currently being evaluated for TNBC, and we are likely to find out more about its most promising regimens in the coming years. The ongoing trials on pembrolizumab and nivolumab in different combinations for TNBC are summarized in Table 3.
PD-L1 inhibitors
IMpassion130, a multicenter, randomized, placebo-controlled, double-blind phase 3 study, assessed the efficacy of atezolizumab + nab-paclitaxel vs placebo + nab-paclitaxel in patients with previously untreated, locally advanced or metastatic TNBC. 46 Generally, taking into account PD-L1-positive and PD-L1-negative cases, the study found no advantage of atezolizumab over placebo in combination with nab-paclitaxel in the intention-to-treat population: median OS in the atezolizumab group reached 21 vs 18.7 months in the placebo group (HR 0.86, 95% CI: 0.72-1.02). However, the updated analysis of the IMpassion130 study provided evidence for atezolizumab's efficacy in patients with PD-L1 immune cell-positive tumors, as in those patients, median OS in the atezolizumab group reached 25 vs 18 months in the placebo group (stratified HR 0.71, 0.54-0.94), showing a clinically meaningful, nearly 30% reduction in the risk of death in the atezolizumab group. 46 Preliminary results of the IMpassion131 study of previously untreated metastatic TNBC showed that atezolizumab with conventional paclitaxel had no survival advantage over placebo + paclitaxel treatment, 47 with corresponding medians of . . . and 5.6 months (95% CI: 5.4-6.5), respectively, for the atezolizumab and placebo groups. Mature survival results are to be expected; however, so far, atezolizumab + paclitaxel appears not to be an effective regimen in TNBC and is not recommended by EMA. 48 A particularly noteworthy trial was IMpassion031, assessing neoadjuvant atezolizumab and nab-paclitaxel for early TNBC. 30 The study showed a statistically significant difference between atezolizumab vs placebo, with respective pCR rates of 58% (95% CI: 50-65) and 41% (95% CI: 34-49). Interestingly, the study found no statistically significant difference in pCR rates between PD-L1-positive and PD-L1-negative populations. 30 The pCR rates for atezolizumab vs placebo were 69% vs 49% for PD-L1-positive and 48% vs 34% for PD-L1-negative patients, suggesting the potential effectiveness of this combination in early TNBC regardless of PD-L1 status. In an open-label, multicenter phase 1 study, NCT01375842, atezolizumab monotherapy administered intravenously every 3 weeks for patients with metastatic TNBC was found to be generally well tolerated and of effectiveness similar to ChT. 49 The ORR reached 24% in patients receiving atezolizumab as the first-line treatment and 6% for second- or later-line treatment groups. OS was 17.6 months for first-line patients and 7.3 months for second- or later-line patients. The duration of response ranged between 3 and 38 months with a median of 21 months. 49 As a part of the same study, a 48-year-old woman with a 31-year history of PD-L1-positive TNBC received atezolizumab monotherapy and showed a remarkable CR. 50 Previously, the patient had been treated surgically with adjuvant RT, followed by surgical resection of regional recurrences with adjuvant ChT. She met the PR criteria and immune-related response criteria (irRC) after 4 cycles of atezolizumab, 50 and after re-treatment due to disease progression, she had a PR and a CR 2 months later. 50 Another emerging combination of ChT and PD-L1 inhibitors in early TNBC is durvalumab with anthracycline in the neoadjuvant approach. It was assessed in a multicenter, prospective, randomized, double-blind, placebo-controlled phase 2 trial, GeparNuevo. 51
Out of the patients treated with durvalumab, 53.4% achieved a pCR compared with 44.2% treated with placebo, although the difference did not reach statistical significance. However, the difference in pCR in the window cohort (single-agent durvalumab vs placebo 2 weeks prior to neoadjuvant chemotherapy) and the no-window cohort was statistically significant (window: 61.0% vs 41.4%, OR 2.22; non-window: 37.9% vs 50.0%; OR 0.61), suggesting the window treatment regimen to be more promising. 51 Notably, recently presented follow-up results showed a significant increase in response durability in durvalumab-treated patients 52 (Table 3). The results were consistent regardless of window vs non-window approach. 52 This would further confirm an emerging claim that achieving pCR does not necessarily drive long-term survival in ICI-treated TNBC and may not be as meaningful as in the case of ChT-based treatment. 53 It could be justified by the different mechanisms of ChT's and ICIs' action, as the latter does not aim at tumor reduction via cytotoxicity. Thus, patients with residual disease after ICI are still likely to benefit from the therapy in the long run. 53 Avelumab, apart from acting as a PD-L1 inhibitor, was also found to facilitate the antibody-dependent cellular cytotoxicity of natural killer (NK) cells against tumor cells. 54 In a study on TNBC cell lines in vitro, avelumab's effect on enhancing antibody-dependent cell-mediated cytotoxicity was stronger against tumor cells with higher PD-L1 expression. 54,55 The efficacy of avelumab monotherapy in locally advanced or metastatic breast cancer was studied in the phase 1 JAVELIN Solid Tumor trial. 56 The ORR was 3% overall and 5.2% in the TNBC subset, with all responses being durable. Out of the patients with a CR, PR, or SD, 29.8% had no progression of the disease for ⩾ 6 months. Tumor shrinkage was noted in 45.7% of TNBC patients, reaching ⩾ 30% in half of them. The overall DCR was 31% in the TNBC subset. 56 A case report was published describing a 48-year-old woman with locally advanced TNBC involved in the aforementioned study of avelumab monotherapy, who had also received adjuvant RT to the tumor bed and regional lymph nodes. 57 At the moment of writing, 16 months after the initial diagnosis, the patient remained alive and disease-free. Therefore, the combination of ICIs with RT could also represent a potential therapeutic regimen worth further research.
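A short note on reading the hazard ratios cited in this and the preceding sections (a general statistical convention rather than a figure from any of the trials): under a proportional-hazards model, the relative reduction in the risk of the event is approximately

relative risk reduction ≈ 1 − HR,

so the stratified HR of 0.71 reported for the PD-L1-positive cohort of IMpassion130 corresponds to 1 − 0.71 = 0.29, that is, the 'nearly 30% reduction in the risk of death' described above, while the upper confidence limit of 0.94 would correspond to a reduction of only about 6%.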
All atezolizumab, avelumab, and durvalumab are currently being evaluated in different combinations in phase 1 to 3 trials as shown in Table 4. There are now attempts to combine PD-L1 inhibitors with PARP inhibitors, sacituzumab govitecan, cytotoxic agents, and others, so further advances in PD-L1-inhibitor-based regimens for TNBC are warranted.
CTLA-4 inhibitors
In contrast to a list of trials evaluating the efficacy of PD-1 and PD-L1 inhibitors in TNBC, the data on CTLA-4 inhibitors' efficacy are more limited. The combination of tremelimumab and RT was examined in a phase 1 study, which enrolled 5 patients with metastatic hormone-receptor-positive BC and 1 patient with mTNBC. 58 It was shown that tremelimumab in combination with RT was generally well tolerated, with manageable adverse events. The overall DCR was 33%, however with no objective response, and the mTNBC patient did not achieve SD. Median PFS was 1.5 months and median OS was 50.8 months since the diagnosis and 27 months since initiating tremelimumab + RT treatment. 58 Currently, tremelimumab monotherapy in advanced solid tumors including TNBC is a subject of an ongoing, phase 2 NCT02527434 trial. After disease progression, patients will have the option of being sequenced to durvalumab monotherapy or durvalumab + tremelimumab combination therapy, for up to 12 months or until disease progression.
Combining different ICIs
Synergy between PD-1 or PD-L1 inhibitors and CTLA-4 inhibitors has been well studied in the setting of metastatic melanoma in a number of clinical trials. In patients with unresectable and metastatic melanoma, combined ICIs proved significantly more effective, albeit with an increased risk of adverse events. [59][60][61] In 2 independent trials including patients with advanced melanoma, the HR with respect to median PFS was 0.42 59 and 0.4 60 when comparing the efficacy of nivolumab + ipilimumab vs ipilimumab only. Pre-clinical studies and case reports referred to below showed the potential benefit of this approach in TNBC.
In BRCA1-deficient mice with TNBC, the combination of cisplatin with a simultaneous PD-1 and CTLA-4 blockade inhibited the tumor growth and significantly increased subjects' OS. 62 In the same study, a single checkpoint blockade or double checkpoint blockade without cisplatin gave unsatisfactory results, providing a rationale for the clinical studies of the dual immune blockade in combination with classic ChT agents.
Moreover, a case has been reported of a 50-year-old woman with wild-type BRCA1 (BRCA1wt) and stage IV TNBC with bilateral pulmonary metastases. 63 The patient received treatment of concurrent nivolumab and ipilimumab with regional hyperthermia, followed by 1 low dose of cyclophosphamide and IL-2 with taurolidine. Taurolidine had been suggested to reduce IL-2-caused vascular leak syndrome while maintaining its therapeutic effect in patients with stage IV melanoma. 64 The patient was brought to a durable, complete remission of pulmonary metastases, though the disease progressed in mediastinal and axillary lymph nodes. The patient, initially with a very poor prognosis, remained alive for another 27 months after initiating the treatment. 63 The combination of nivolumab and ipilimumab for TNBC treatment is being evaluated in a few ongoing trials summarized in Table 5.
The combination of a PD-L1 and CTLA-4 inhibitor (durvalumab + tremelimumab) in TNBC was assessed in an open-label, pilot study, which enrolled 18 patients, 7 of whom had TNBC. 65 Among TNBC patients, the ORR reached 43% and median PFS was not reached, whereas none of the hormone receptor-positive BC patients had an objective response, and the median PFS in this group was 2.2 months. The most common adverse events were hepatitis, electrolyte abnormalities, and rash, while there were no grade 4 or 5 adverse events observed. The regimen is now the subject of the phase 2 MATILDA trial (Table 5) for solid tumors including TNBC, so more data on this approach can be expected.
ICIs with cancer vaccines
There are several ongoing clinical trials assessing the efficacy and tolerability of ICIs with cancer vaccines in TNBC treatment. The rationale behind combining cancer vaccines with ICIs focuses on enhancing the vaccine-elicited tumor-directed immune response via immune checkpoint blockade. The need for combination therapy derives from the overall modest results of cancer vaccine monotherapy even in FDA-approved indications, such as talimogene laherparepvec in advanced unresectable melanoma 66 or sipuleucel-T for metastatic castration-resistant prostate cancer. 67 Several trials evaluating ICI with cancer vaccines in advanced TNBC focus on pembrolizumab with either the investigational multi-peptide vaccine PVX-410 (NCT03362060), a specific vaccine targeting p53 (NCT02432963), or Galinpepimut-S, a Wilms Tumor-1-targeting vaccine (NCT03761914). Other combinations include durvalumab with PVX-410 (NCT02826434), durvalumab with a neoantigen DNA vaccine (NCT03199040), and a personalized synthetic neoantigen vaccine with nab-paclitaxel + durvalumab and tremelimumab or ChT (NCT03606967). The ongoing trials evaluating ICI-vaccine combinations are summarized in Table 6.
ICIs with NK cells
NK cells are part of the innate, non-specific immune system and play a role in the response against malignancy. They interact with the major histocompatibility complex (MHC) on altered cells through multiple activating and inhibitory receptors, promoting cytotoxicity through a number of pathways. 68 For instance, MHC-NK cell interaction results in the release of cytotoxic granules and proinflammatory cytokines, such as IFN-γ. 68 IFN-γ acts as an activator of APCs, resulting in the induction of a T-helper cell-mediated immune response. 69 PD-1 and CTLA-4 molecules act as negative regulators of NK cells' function, 70,71 which justifies the evaluation of synergy between NK cells and ICI in cancer treatment. Moreover, ICI-resistant TNBC often presents downregulation of major MHC class I elements. 72 NK cells' interaction with MHC and their ability to target cells with improper MHC function 72 may comprise a potential gateway for achieving response in these patients. A meta-analysis by Nersesian et al 73 showed an association between increased NK cell infiltration and more favorable prognosis in solid tumors including BC (8 studies on BC, n = 1631 patients, including 278 patients with TNBC). Overall, the BC studies showed a decreased risk of death in patients with documented increased NK cell tumor infiltration (HR = 0.27, 95% CI: 0.09-0.68, P = .027). 73 At the moment of writing, ICI-NK cell regimens for TNBC treatment are the subject of 2 trials: an ongoing phase 1 trial, NCT04551885 (FT516 with avelumab for solid tumors including TNBC), and a completed phase 1b trial, QUILT-3.067 (NCT03387085), assessing avelumab with high-affinity NK (haNK) cell therapy, IL-15 cytokine administration, cancer vaccines, and metronomic chemoradiation for metastatic TNBC. Interim results of the latter appear particularly encouraging, with an ORR of 67%, a DCR of 78%, a CR rate of 22%, and a PFS ranging from 2 to over 12 months (n = 9 patients). 74
thrombocytopenia (25%) were also reported. 35 Tremelimumab and pembrolizumab + RT-based treatment studies described cases of lymphopenia. 58,82 Overall, the irAEs in TNBC treatment, though frequent, are rather low-grade and controllable. Immunotherapy, as an alternative to ChT, seems to have an acceptable safety profile in TNBC, similar to other cancers. 77,84 It was suggested that high BMI can be a potential risk factor for a worse tolerance of ICI in TNBC, 85 despite its greater efficacy in these patients. 86
Predictive Markers
Overall, TNBC is associated with a poor prognosis and a high mortality rate. However, this heterogeneous group of neoplasms includes subtypes that respond relatively well to ChT (a so-called "triple-negative paradox"). 87 Research shows that the efficacy of immunotherapy can also vary between cases of TNBC, depending on several factors. Several predictive markers of the tumor's response to the treatment have been proposed so far. They are likely to become criteria for selecting the treatment method with the most accurate response prediction for a particular patient.
Tumor-infiltrating lymphocytes
The mononuclear immune cells that infiltrate tumor tissue 88,89 (tumor-infiltrating lymphocytes [TILs]) can be identified as either stromal (sTILs) or intratumoral (iTILs). 90 Depending on the study, these can be considered as separate TIL groups or taken together as a whole due to the continuity of the infiltration. 90 The TILs level is known to reflect the Th1 immune response in BC 91 and tends to be higher in more aggressive cancer types. 88 It was confirmed to be both a prognostic and a predictive marker for both ChT-treated and immunotherapy-treated patients with TNBC in a number of studies referred to below. In the KEYNOTE-086 trial (pembrolizumab monotherapy), patients with TILs levels higher or equal to the median vs lower than the median had an ORR of 6% vs 2% in previously treated patients (cohort A) and 39% vs 9% in previously untreated patients (cohort B). 92 Responders vs non-responders had mean TILs levels of 10% vs 5% in cohort A and 50% vs 15% in cohort B. The relationship between higher TILs level and higher ORR was statistically significant in the combined cohorts. 92 Similarly, in KEYNOTE-173 (pembrolizumab + ChT), patients with pCR had higher median sTILs levels before and during treatment. 83 The sTILs levels before the treatment for pCR ypT0/Tis ypN0 were 42% (IQR 10-74) among achievers vs 10% (IQR 5-25) in non-achievers, and for pCR ypT0 ypN0, 40% (IQR 10-75) for achievers vs 10% (IQR 5-38) for non-achievers. The respective data on median on-treatment sTILs levels were 65% (IQR, 5-89) vs 25% (IQR, 2-48) in the case of pCR ypT0/Tis ypN0 and 65% (IQR, 5-86) vs 25% (IQR, 3-60) for pCR ypT0 ypN0. 83 The GeparNuevo study (durvalumab + nab-paclitaxel) showed that sTILs levels at baseline were a statistically significant predictor of pCR in the durvalumab arm, the placebo arm, and the complete cohort, and thus were not a specific predictor of response to durvalumab. 51 However, change in iTILs during treatment significantly predicted achieving pCR in the durvalumab arm. Similar conclusions for ER-negative/HER2-negative tumors were drawn from the BIG 02-98 study comparing doxorubicin-based treatment with the addition of docetaxel. 93 An increase of 10% in TILs level was associated with a 17% decreased risk of relapse in the case of iTILs and 15% for sTILs. The risk of death was reduced by 27% and 17% for iTILs and sTILs levels, respectively. In GeparSixto, a study investigating the addition of carboplatin to anthracycline with a taxane, patients with increased sTILs levels had a pCR rate of 59.9% vs 33.8% in patients with low sTILs levels. Thus, it was concluded that sTILs level might be a predictive marker for a response to carboplatin in TNBC, 94 and carboplatin is currently being evaluated for its synergy with pembrolizumab. 83,95 In the FinHER, ECOG 2197, and ECOG 1199 trials, TILs were confirmed to be a significant prognostic factor for patients with ChT-treated TNBC. In FinHER, a 10% increase in TILs was associated with a 13% decrease in the risk of distant recurrence. 96 The ECOG-sponsored studies showed that a 10% increase in sTILs level decreased the risk of recurrence or death by 14%, of distant recurrence by 18%, and of death by 19% over a median follow-up of 10.6 years. 97 At the moment of publishing, the correlation between TILs level and both the response to different treatment methods and prognosis is clearly documented for TNBC. Similar results were obtained in the case of non-luminal HER2-positive tumors, but not for luminal BC. 93,98
The TIL level was reported to increase after ChT, 88,99 which may represent a promising approach for patients with low TILs and provides further justification for the pursuit of finding optimal combinations of ChT and ICI in TNBC treatment. This was further confirmed by the previously mentioned TONIC trial, which aimed to evaluate the effects of induction treatment on the tumor microenvironment and showed a statistically significant increase in T cell infiltration after induction with cisplatin and doxorubicin. 42 Even though the KEYNOTE-086 trial showed a less favorable response in the previously treated cohort, it did not consider patients' TIL levels, which may have affected the final conclusions. 26 The expression of 4 particular genes (HLF, CXCL13, SULT1E1, and GBP1) was found to be associated with the increase in TILs after anthracycline-containing neoadjuvant ChT in TNBC in the training set, but not confirmed in the validation set. 100 Thus, the mechanisms affecting TILs expression and the response to treatment remain unclear and are to be determined in further studies.
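As a point of reference for the TIL percentages discussed above, sTILs are conventionally scored (for example, under the International TILs Working Group recommendations) as the share of the intratumoral stromal area occupied by mononuclear immune cells:

sTILs (%) = 100 × (stromal area occupied by mononuclear infiltrate) / (total intratumoral stromal area),

so a '10% increase in TILs' in the FinHER and ECOG analyses is generally read as an absolute change of 10 percentage points in this score rather than a relative change.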
PD-1 and PD-L1 expression
The level of PD-L1 expression is a well-established predictive marker of response to immunotherapy in certain malignancies, such as non-small cell lung carcinoma (NSCLC) 101 or urothelial carcinoma. 102 PD-L1 positivity is found in 20%-31% of TNBC cases. 103,104 However, the methods of assessing PD-L1 expression, establishing PD-L1 cut-off values, and the type of studied cells (tumor cells, TILs, or both) have greatly varied between FDA-approved studies, resulting in heterogeneous conclusions about PD-L1 predictiveness. 105 The expression of PD-1 and PD-L1 on immune and tumor cells as a predictive marker for immunotherapy-treated TNBC was assessed in several aforementioned clinical trials. However, the immunohistochemistry (IHC) assays used to determine PD-L1 status tend to differ between studies. In IMpassion130 (atezolizumab + nab-paclitaxel), PD-L1 positivity (defined as ⩾ 1% PD-L1 expression on immune cells evaluated via the SP142 IHC assay) was associated with a mean increase in median OS of 7 months (HR 0.71 [95% CI: 0.54-0.94]). 46 Median PFS was 7.5 months (95% CI: 6.7-9.2) in the PD-L1 immune cell-positive population and 5.6 months (95% CI: 5.5-7.3) in the PD-L1 immune cell-negative group. 46 Interestingly, a post hoc analysis of 614 patients (68.1% of the IMpassion130 intention-to-treat population) showed a lack of equivalence in PD-L1 positivity prevalence determined by SP142, SP263, and 22C3 IHC assays. 106 Respective PD-L1 positivity (⩾ 1% expression) rates were 46.4% (95% CI: 42.5%-50.4%), 74.9% (95% CI: 71.5%-78.3%), and 73.1% (95% CI: 69.6%-76.6%). 106 Thus, many cases that were PD-L1-negative based on SP142 were designated as positive with SP263 (29.6%) and 22C3 (29.0%). 106 The difference in PD-L1 proportion yielded by SP142 and 22C3 was also noted in the case of NSCLC 107 and bladder cancer. 108 In IMpassion130, SP142 seemed to be the most accurate assay in terms of determining a potential OS benefit from the therapy; however, the PFS benefit appeared consistent across different IHC assay-defined groups. 106 The SP263 PD-L1 ⩾ 4% subgroup could then comprise a potential additional population that would benefit in terms of PFS. Importantly, the SP263 PD-L1 ⩾ 4% population excluded 26.3% of SP142 PD-L1 ⩾ 1% patients, 106 suggesting that optimal patient selection requires considering different IHC assays rather than one specific method with a fixed cut-off.
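The assays mentioned above also use different scoring systems, which partly explains the discordance; in approximate terms, as these scores are commonly defined for the SP142 and 22C3 assays, respectively:

IC score (%) = 100 × (tumor area occupied by PD-L1-staining immune cells) / (total tumor area)
CPS = 100 × (PD-L1-staining tumor cells + lymphocytes + macrophages) / (total viable tumor cells),

so the 'PD-L1-positive' populations defined by IC ⩾ 1% in IMpassion130 and by CPS ⩾ 1 or ⩾ 10 in KEYNOTE-355 are overlapping but not identical groups of patients.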
Tumor mutational burden
The accumulation of somatic mutations within a tumor cell can lead to the creation of neoantigens that are associated with either malignant transformations (driver mutation) or raised genetic instability (passenger mutation). 109 The neoantigens can be recognized by the immune system provoking an immune response. 110,111 Its predictive value for immunotherapy was reported in the case of melanoma 78,112 and NSCLC, 113 but is of no significance for Hodgkin's lymphoma, which responds to ICI despite not having a high tumor mutational burden (TMB). 114 As for metastatic BC, the responders to durvalumab and tremelimumab were found to have a greater number of non-synonymous somatic mutations and higher numbers of predicted neoantigens compared with nonresponders. 65 BCs, in general, are associated with relatively low TMB; however, this potential marker is more abundant in the case of TNBC, 65,109 indicating its greater immunogenicity. Within the TNBC group, relatively high TMB was found in the luminal androgen receptor subtype and low in the case of mesenchymal stem-like subtype. 115 As mentioned, TNBC is a highly heterogeneous set of tumors. The differences in TMB in this group indicate that further evaluation of specific TNBC subtypes could lead to a more precise tumor profiling and better-tailored treatment selection.
Moreover, there seems to be an association between TMB and the level of TILs. In one of the studies on TNBC, for patients with high TMB, the 5-year OS was 100% in highly infiltrated, 76% for moderately infiltrated, and 60% for immune-cold tumors. 116 In the case of TMB-low cancers, the difference between tumors of different levels of infiltration was absent with a 5-year OS of 81%-86%. 116 The difference in OS was statistically significant in the case of highly and moderately infiltrated tumors, but not in low-infiltrated. In immune-cold patients, the OS was reversely correlated with TMB levels suggesting a less favorable prognosis for TMB-Hi cases with low immune infiltration. 116 The actual impact of TMB on immune activities 117 and the correlation between TMB and TIL/ PD-L1 levels [118][119][120] and its predictive value in ICI-treated TNBC require further evaluation.
Mismatch-repair deficiency and microsatellite instability
Deficiencies in DNA mismatch-repair (MMR) leading to microsatellite instability (MSI) are known to cause the development of certain cancers, such as colorectal cancer (CRC) and endometrial cancer. 121 In a study regarding the impact of MSI on OS, the combined HR estimate was 0.65, which indicated a better prognosis for patients with ChT-treated MSI. However, it did not provide satisfying evidence for the predictive value of MSI in respect to ChT for CRC. 121 A study of MSI as a predictive marker of pembrolizumab-treated CRC and non-CRC showed a greater clinical benefit in the MMR-deficient cohort. 122 An analysis of MMR deficiency among BCs suggested a low frequency of this phenomenon in BCs in general and particularly low in non-TNBC. 123 It also showed that not all MMR deficiencies may lead to MSI. However, the small sample did not give satisfactory evidence for MSI being either a prognostic or predictive marker in TNBC.
Gene signatures
Research regarding predictive markers for immune manipulations used in cancer treatment resulted in identifying several pathways more frequently occurring in patients presenting a better response. These pathways include Th-1 signaling and CXCR3/CCR5 ligands and effector immune functions and are referred to as Immunologic Constant of Rejection (ICR). 124 Other immune-regulatory genes include, eg, CD274/PD-L1, PDCD1/PD1, CTLA4, FOXP3, and IDO1. Their expression was found to be strongly correlated with ICR. 124 When divided into 4 clusters based on the immune gene expression level (ICR1 for tumors with the lowest expression-ICR4 for the highest), the prognosis of the tumors representing different groups differed to a certain extent. For instance, basal-like tumors classified as ICR4 had a significantly higher OS than subgroups ICR1 to 3. Overall, the ICR4 tumors had a greater frequency of amplifications and deletions, with a potential immunomodulatory impact. The analysis of TMB also showed a significantly greater number of non-silent mutations with increasing immune-related genes' level. 124 Another 3-gene signature, consisting of the B cell/plasma cell (B/P), T cell/natural killer cell (T/NK), and monocyte/ dendritic cell (M/D) immune metagenes, was reported to be associated with a more favorable response to ChT in BCs in general. 125 Its prognostic value was particularly significant in the case of highly proliferating tumors, with more favorable distant metastasis-free survival in most basal-like tumors. 125 A 1-unit increase in the expression of HLF, CXCL13, SULT1E1, and GBP1 was reported to be significantly associated with better distant relapse-free survival in patients with residual disease after ChT (HR: 0.17, 95% CI: 0.06-0.43) and regardless of the response to ChT (HR: 0.29, 95% CI: 0.13-0.67). 100 No association was found between the expression of the 4-gene signature and the probability to achieve pCR. 100 In the TONIC trial (induction treatment + nivolumab) of metastatic TNBC, the inflammation-related gene signatures were significantly higher in responders than in non-responders. 42 They were found to be upregulated after induction treatment with cisplatin and doxorubicin, which was even more pronounced after nivolumab treatment. 42 No similar trend was observed after irradiation-based induction, 42 suggesting ChT as a preferred induction method of inflammatory-gene signature upregulation.
BRCA1/2 mutation
The proportion of driver mutations and several variants of frequent alleles were reported to be higher in the case of BRCAwt rather than hereditary BRCA mutation (BRCAmut). 126 However, the total number of mutations is higher in hereditary tumors. 126 Therefore, hereditary BRCA mutation may result in a lesser number of driver mutations being sufficient for the development of cancer. In the KEYNOTE-162 study (pembrolizumab + niraparib), the BRCA status was analyzed, giving a numerically higher ORR in the tBRCAmut group (47%, 90% CI: 24-70) vs the tBRCAwt group (11%, 90% CI: 3-26). The DCR was 80% (90% CI: 56-94) and 33% (90% CI: 19-51) for the respective populations. 35 It was also suggested that the presence of BRCA1/2 mutation is associated with a greater expression of PD-1 and PD-L1, thus leading to a better potential response to ICI. In KEYNOTE-162, the PD-L1 positivity was higher in tBRCAmut patients (80%) compared with tBRCAwt patients (56%). 35 However, at this point, the research on the association between BRCA1 and BRCA2 type and PD-1/PD-L1 expression has given conflicting results, suggesting either a correlation 127 or lack of relationship 128 between these variables in TNBC.
Body mass index
Interestingly, despite the increased frequency of adverse effects among obese patients, the tumor response to ICIs in TNBC was found to be higher in this group. 86 Higher BMI was also reported to be a positive predictive factor in patients with NSCLC treated with ICI as a second-or later-line of treatment. 129 It may comprise a potential predictive factor for ICI-treated TNBC, though at the moment of writing the data on its significance is limited.
Conclusions
Completed and ongoing clinical trials show that ICIs in TNBC treatment are of promising efficacy and acceptable safety profile. While certain ICIs are already a subject of randomized trials both in monotherapy and in various combinations, some regimens remain described only in case reports or preclinical studies, so future advances in ICI-based therapies in TNBC are warranted. ICIs appear to be applicable in both neoadjuvant and adjuvant approaches, and in both pretreated and previously untreated patients, which raises hope for developing well-tailored, targeted treatment for TNBC in the future. Currently, more attention seems to be drawn toward combination therapy, especially the synergistic effect of ICIs and ChT. Pembrolizumab + ChT is currently the only FDA-approved ICI-based treatment regimen for TNBC. However, given the impressive long-term response to durvalumab + nab-paclitaxel vs nab-paclitaxel only, 52 this combination is likely to follow. Overall, nab-paclitaxel appears to be the most promising co-agent for ICIs, along with carboplatin, known for its efficacy in TNBC and recently reported effectiveness in combination with pembrolizumab.
Further research is particularly necessary for determining the most beneficial drug combinations and optimizing patient selection. An issue of essence is identifying the predictive markers for ICIs and factors affecting their expression. Currently, progress appears to be limited by the inconsistency of reported data and incoherencies between the criteria established in different studies. In the case of determining PD-L1 positivity, recent FDA approval for pembrolizumab-based treatment of CPS ⩾ 10 TNBC is likely to draw the researchers toward the CPS-based approach. The post hoc analysis of IMpassion130 also indicates the importance of the IHC assay choice and its impact on determining PD-L1 positivity. Optimal criteria establishing TIL status are still to be determined.
Thorough evaluation of different TNBC subtypes regarding their molecular and histological profile could also lead to a better understanding of this heterogeneous group and possibly contribute to more accurate treatment tailoring. The attempts to use monoclonal antibodies in TNBC treatment are not limited to ICIs, so establishing the predictive markers for newly emerging therapies together with better profiling of tumors within the TNBC group may greatly facilitate research advances in this field.
Author Contributions
KU and AS-S wrote the manuscript in consultation with AMB-K and AD.
|
2022-06-16T15:09:56.029Z
|
2022-01-01T00:00:00.000
|
{
"year": 2022,
"sha1": "7f9f754454537bd841abc0bf3e2138954060706c",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/11795549221099869",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0b0480626d9001562a70b779b8941d9f71ab2939",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
247503874
|
pes2o/s2orc
|
v3-fos-license
|
Comparison Between Offline and Online Distribution Practices for the Insurance Industry in India
Insurance has been the backbone of Indian society and the economy. For India, a developing country, insurance serves this need only to a limited extent, as social security practices are largely absent in India compared with foreign nations. Insurance has been sold mainly through traditional channels, with the agency channel garnering a major share of the pie. The study started with a robust literature review and formulated objectives to examine existing offline distribution channels for insurance and the relative significance of the emerging online distribution channel. Accordingly, hypotheses were formulated on the difference between online and offline insurance in terms of cost and convenience, the difference in economies of scale and accuracy, the benefits to the customer and/or the company, and on whether the growing base of young internet users is a high-potential target market for insurance business suited to the online channel. The study further gives suggestions and recommendations for the promotion of online channels and highlights their relative significance to various stakeholders.
INTRODUCTION
The insurance industry is very important for any country's economic development. A well-developed insurance sector promotes risk-taking in the economy, as it provides a certain amount of security in case of an unforeseen, loss-causing incident. It also provides much-needed financial security to family members in case of loss of life or health. Since the insurance companies having assets under management represent long-term capital, they also act as a funds pool to invest in long-term projects such as infrastructure development. The Indian economy's globalization and liberalisation in 1991 and formation of Insurance Regulatory Development Authority (IRDA) in year 2001 led to the sudden spike in growth of Insurance Industry. The spirited amplification of Indian Insurance industry is due to the evolving and unbelievable performance of various channels of distribution. These channels are interdependent mediums, interlinked in moving the products, services and information from business houses to consumers. They are considered as the most important links of Insurance Industry which bring an association between consumers; who are looking out to procure insurance policies or products and insurance companies; who are exploring ways to sell those policies or products. They are playing a strong intermediary role between buyer and seller and whose role is to study the overall market, matching the requirements of both parties and thereby helping their clients to choose from best competing offers. There are numerous channels involved in undertaking the business in the industry in the form of issuance of policies and collection of the premium amount. Firstly, independent agents are certified individuals who sell insurance policies or products on behalf of any insurance company. They receive commission from the company on all policies sold and serviced. As far as the marketing of insurance in Indian market is concerned, the agent has been instrumental and major factor in spreading the awareness with respect to market growth and insurance penetration. The only and major difference between individual and corporate agents is that, the latter are considered as employees of insurance companies who sell insurance policies on behalf of the company itself. Thereafter, another channel prevailing strongly in the insurance sector is Bancassurance, which is a joining hands and pact of banks and insurance companies, where insurance policies are sold by existing and upcoming banking networks. It is one of the low cost distribution channels which have emerged as a viable, most preferred and reliable insurance distribution channel in the market. The new and prevailing market condition for private participation has resulted in strong competition in the industry and thus a need for a new and specialized distribution channel was strongly felt that can bolster the clientele in the evaluation of their total risk exposure and present suitable insurance policies to cover such risks. Thus insurance brokers emerged as professional entities with the required technical expertise to offer customized insurance solutions. While an agent represents only one insurance company, a broker may deal with more than one insurer or both or multiple companies both in life and general insurance domain. With the growing awareness of insurance, many customers now prefer to transact either online or through phone or email. For such customers, there is a rise of direct selling channel. 
It is a process of selling insurance either online or through phone where the insurer sells directly to the insured via its employees.
STATEMENT OF THE PROBLEM
The focus of the literature review has been to improve the understanding of the important constructs and concepts. There is a need to study the existing and emerging online distribution channels for life insurance and general insurance, and to study the viability and convenience of the online and offline distribution channels. Marketing has been an ever-evolving field, with marketing budget allocation spreading thin across various modes and ways of reaching out to customers. The advent of new technology and its adoption by the mass and target audience is also a big push towards new ways of reaching customers. One of the powerful tools used by marketers across big corporates is the offline and online distribution model. The researcher would like to study and analyse the 'Online Distribution Model' and the 'Offline Distribution Model' and how corporates in the Indian context can use them to their advantage. Another area to look at would be the relevance of the offline and online distribution model in bringing awareness to the consumer.
While planning for the research study various aspects were explored and studied and following points came out very strongly which strengthened the need for the research study. They have been enumerated below: India as a country has a huge and young population and thus there is a huge potential for selling and distribution of Insurance. In spite of insurance industry having been there in the Indian market for many years, a very marginal population has been insured which brings us to the conclusion of India being an untapped market. As per IBEF report, insurance penetration was only 3.69% as of 2017. With the Indian population no more restricted to the rural sector and a lot of migrations happening to the urban sector, the concept of joint family system is giving way to nuclear family system with both spouses normally found working, thus the significance of Insurance. The rural to urban influx is happening not just physically but also virtually i.e. the advent of internet and smart phones have given rise to the shopping trend moving from offline to online. So a lot of scope is there for insurance companies to tap and harness this emerging trend. EMarketer estimates that a quarter of India's population will become digital shoppers by end of 2018, the figure touching 41.6% by 2022. With the number of threatening and retarding accidents and illnesses on the rise, there is huge risk of life on the working population. Insurance also plays a very important role in the improved standard of living of individuals, family and society as a whole due to the innovative policies ensuring, health, accident and income protection along with wealth creation and retirement planning. Due to the increased literacy rate, the population is well aware of its needs and wants and insurance awareness has increased and the need for insurance is seeing an upward trend. With the Government bringing major legislations and policy changes with regard to insurance coverage for the population, there is a highly conducive environment for promoting insurance business.
With more than 460 million internet users, India is the second largest online market ranked only behind China. There will be about 635.8 million internet users in India by the year 2021. In spite of the large base of internet users in India, only 26 percent of the Indian population accessed the internet in 2015. This is surely a significant increase in comparison to the previous years, considering that the internet penetration rate in India stood at about only 10 percent in 2011. (Source: https://www.statista.com/topics/2157/internet-usage-in-india)
SIGNIFICANCE OF THE STUDY
Significance for Regulators and Economy: The economic reforms of 1991, viz. Liberalisation, Privatisation and Globalisation have introduced the structural change in the economy. All controls on business have been removed. The public sector model of growth has been dismantled and now Indian economy is integrated to the Global Economy. We are now in the Market economy. The Market is a nucleus centre to determine the demand, investment, consumption, choice and preferences of consumers/users. Thus, in a competitive market, the management is highly sensitised for Cost, Efficiency, Productivity and Profitability.
Social and Professional Significance: In this social market scenario, the insurance sector has been opened to the multi-national insurance companies. These foreign insurance companies with their marketing management expertise and modern information technology have introduced innovative insurance products and services profiles. This has created a cut-throat competition for business among the insurance companies.
Significance for Consumers and Regulators: India is one of the biggest emerging markets in the global scenario for insurance business. According to an IBEF report, approximately 67% of India's population is young, educated and working. These people form the core group of internet users. Their number recorded an increase at the rate of 32.8 percent year on year during 2000-2011, and it is estimated that it will surpass 700 million by 2020.
This constitutes the target group of potential buyers of insurance products. In view of this, insurance companies have to explore viable distribution channels for insurance products that suit this group's style, convenience and comfort. The online sales channel can suit their buying behaviour in terms of product profiles, clarifications, guidance, etc.
Significance for Insurance Companies: Projections from a global research study have shown that the online insurance industry witnessed 11 percent expansion during 2011-2018 (Research by IBIS World: https://www.reportlinker.com/ci02109/Online-insurance.html). The aggregate growth rate of the industry is forecasted at 19 percent in 2020. This will generate $107 billion and accelerate effective demand, investment, employment and finally economic growth. Our country is witnessing a growing young, educated working population that uses internet applications day to day. There is also an increasing number of insurance players in the emerging market due to the growing insurance business in India, which has increased the competition for insurance business. Moreover, there is a change in the buying behaviour of insurance buyers. So, an attempt has been made to conduct research exploring the emerging distribution channel of marketing online insurance to exploit the growing potential of the insurance business.
Significance to the Academia: The outcome of this research will add a lot of value for teachers, who want to teach the different types of distribution channels in the insurance industry. They can get insights on digital marketing in insurance. They can also take insights on Consumer Behaviour and Customer Relationship Management. This study will also help the students of Insurance Management on enhancing their knowledge on the various types of distribution channels in insurance industry. They will get a good insight on the careers in the insurance industries distribution departments. (McKinsey & Co., 2018) This report titled 'Digital Insurance in 2018: Driving real impact with digital and analytics' talks about digital insurance and trends driving insure tech, insurance claim settlement in digital age, impact of AI on future of insurance, insurance beyond digital and the rise of aggregators and their impact on traditional insurance. It also throws light on the pathways to digital acceptability. (Bimabazar, 2018) in this article 'The time has come for IRDAI to review its Insurance Distribution Policy' talks about insurance penetration and distribution playing an important role, sales relationships, thinking beyond traditional to increase penetration, focus on rural and downtrodden, sale of government sponsored products, creating a need for insurance and harnessing online channels for selling and overcoming rural challenges. It also talks about product innovations, controlling mis selling and exploring newer distribution channels. (Giri, 2018) in his PhD thesis studies four different aspects of the insurance market in India. First, we try to develop an econometric model for insurance demand at household level. Second, we investigate how individual beliefs, attitudes and social norms affect insurance purchase decisions. Third, we look at why individuals choose different types of policies, whether it is term, endowment or multiple policies, and whether this choice meets their individual needs or is led by social pressure. Finally, we look at the reasons that lead to lapsation of policies and whether this is related to the original motive for insurance purchase. The results from both the third and final chapters of this thesis are indicative of possible mis-selling of insurance in India, where individuals may purchase insurance due to social pressures and aggressive selling tactics by insurance agents. (Sarkar & Das, 2017) depicts that the retail sector is growing at a very fast pace in India. It is one of the pillars of the economy and accounts for about 10 percent of the country's GDP. The Indian retail market has an estimated value of US$ 600 billion and is considered as one of the top five retail markets in the world by economic value. The Indian retail sector growth is one of the fastest globally. Indian consumers are very particular about their products. The consumer choices differ based on online shopping versus offline shopping preference. The online shopping and traditional one has their own advantages and disadvantages. Online shopping doesn't need long distance travelling, offers more choices, remains functional 24*7, offers huge discounts and provides customer reviews facility. In contrast, traditional shopping allows customers to physically examine products which is lacking in online mode. 
Consumers can use both online and/or traditional mode of shopping depending on their preferences at a particular moment, which results in primarily different behaviours across the two modes of shopping. This article tries to throw some light on the differences emerging out of online shopping behaviour and offline shopping behaviour. (Syed & Acharya, 2017) have given a general review which studies the distribution channels in the Indian non-life insurance industry and the trends that are evident after the liberalisation of the insurance industry with the entry of the private sector. Some new channels introduced in the recent past to improve the insurance penetration and help in the further spread and availability of insurance. The usefulness of each of the channels listed and the customer groups they are catering to have also been specified. The purpose of this study is to introduce the new incumbents to this space to the various alternatives available which is already known to the practitioner. We have also tried to map the progress of the distribution channel network and the trend it is displaying at present and the challenges likely to be faced by the insurance companies in future. (App Design, 2017) Rodney Johnson in his blog 'Mobile a Key Piece to Insurance Industry's Customer Experience Puzzle' says that if you stop and take a look at any technology vertical, mobile has quickly become the most leading thing. With the growth of smart phones, mobile is an extension of us as against just a way to communicate as most of the approximately six billion mobile subscribers across the globe will agree. With 24 billion connected devices by 2020, this mindset isn't going to change anytime soon, and it's already having an huge effect on the insurance industry. Many insurers are following integrated multi-distribution strategies with high focus on creating mobile capabilities so consumers can easily calculate and buy an insurance product, file claims online and more. (Bawa & Chathha, 2016) have analysed the level of awareness among life insurance customers regarding services provided by distribution channels of life insurance industry in India. Using Ordered Probit Regression on primary study of 617 customers reveals that in spite of present efforts of insurance companies, the expected possibility of awareness among customers towards distribution channels is not at all up to the mark. They show complete awareness towards individual agent channel amid the other channels prevailing in the industry. The factors like television and internet are the major sources in spreading complete information among policyholders. The insurers have to improve the level of awareness by looking into the factors viz. information through phone and through friends/ relatives/ colleagues which are unsuccessful to spread complete understanding among their clients. From demographic variables; gender and education have a significant impact with regard to awareness of customers regarding services provided by intermediaries. The analysis shows that life insurers need to improve the understanding of their clients regarding various distribution channels. For successive and thriving growth of the industry, it is necessary to make watchful inspection of the varied alternatives related to various channels. (Mehta et al., 2016) in his research states that there are noteworthy differences between consumers fascinated to shopping online versus in traditional stores. The internet and traditional marketing each have separate features. 
Online shopping involves no travel, product carrying or limits on shopping hours, offering ease of access, convenience and time saving. In contrast, offline shopping allows physical examination of the products, interpersonal communication but requires high travel and search costs and also has limits on shopping hours. Consumers may use the two channels differently resulting in the same consumers showing different behaviours when shopping across online and offline channels. This study tries to provide a thorough review of past available literature of online vs offline consumer behaviour. (Ezhilarasi & Kumar, 2016) explore the perception of customers on the online insurance. The study tries to find out the factors influencing customers to purchase online insurance products and satisfaction level of online insurance products. This study does a city specific study on the thinking of customers towards online insurance and helps to draw strong conclusions on distribution channels. The appearance of new financial technology and rise of outsourcing services in insurance is creating a highly competitive market condition which has a strong impact on consumer buying behaviour. Thus, it is important for the insurance sector, to have a better understanding of their customer's perceptions towards technology and to ensure improved satisfaction of their customers using online insurance. If they are successful, insurance companies will be able to affect customer behaviour, which will become a major reason in forming suitable strategies in the future. This study finds that various factors are affecting the customers to buy online insurance products and customers are highly satisfied with the online insurance products which are offered by the insurers. studied that online has been a catchphrase in India over the past few years across industries like travel, retail, tourism, insurance etc. The online business model needs the service provider and the consumer to become partners, a relationship that demands a certain level of trust. The insurer even allows the client to self underwrite. The new liaison in this world is "word of mouth" and the clients are compensated with cheaper premiums. The study focuses mainly on digital marketing, advantages and disadvantages of online insurance, Indian landscape, responsibilities to the insurers, E-insurance, challenges faced by the insurer to implement the digital strategy, etc. (Singhal & Shekhawat, 2015) states that consumer buying behaviour is the overall total of a consumer's outlooks, likings, purposes and decisions at the time of buying any product or service. The aim of the paper is to provide a comprehensive and extensive literature review of previous related studies since 1999 till date. The study of numerous literatures for last fifteen years, led to the data mining of various factors affecting online purchasing of various products and services. The most motivating factors have been acknowledged which push consumers to shop online. The study also highlights the various resisting factors, which act as hurdle and distract the consumers towards traditional buying system. (Maheswari & Chandrasekaran, 2015) studied that insurance is a difficult product and is normally sold through channels like individual insurance agents, corporate insurance agents, or insurance brokers. The new style in insurance marketing is to sell the insurance policies through the Internet. 
Though the Internet is changing the way insurers engage with customers, traditional channels remain important. The agent channels continue to play a central role in the sales cycle in India. The present study looks at the online channel acceptance plan of the insurance agents for their work-related activities through the growth of e-consumption model. The e-consumption model has taken variables from the existing literature like System Quality, Information Quality Perceived Usefulness, and Perceived Ease of Use to understand the agents' technological prowess and has introduced appealing and coercing role of the insurer viz., Management involvement for training on technology and information access mandated by the company through the Agents' Portal. It is found that training for technology is more effective in influencing behavioural patterns than the coercive role. (Arya, 2015) studied that the consumer buying behaviour has always been a renowned marketing subject, widely studied and pondered over the last decades. It is believed that consumers make buying decisions on receiving small selectively preferred pieces of information. Thus, it is very important to understand what and how much information is needed by the customer for him to evaluate the goods and service offerings. (Bhavan, 2015) has studied entire online process of developing, marketing, selling, delivering, servicing and paying for products and services of Coimbatore city. The population is highly tech savvy and the city was filled with offices of many successful entrepreneurs. It has many industries, estates, corporate hospitals and engineering colleges. Future online shopping is bound to grow further in a big way because of the growing youth population there. (Gupta, 2015) in his case Study of Rourkela in Odisha' tries to analyse how consumers decide channels for their purchasing. Specifically, it studies a conceptual model that looks at consumer value proposition for using online shopping versus the traditional shopping. Past study showed that perceptions of price, product quality, service quality and threat strongly force apparent value and buying intents in the offline and online network. Studies of online and offline buyers can be looked at to see how value is shaped in both channels. The objective of this study is to analyse online shopping decision process by comparing offline and online decision making and finding factors that motivate customers to decide on doing online shopping or go for offline shopping. The study finds that females are more into online shopping than male. The last two years, as population is more aware of technology the online shopping has increased enormously. Age group 35 and above are less likely to do online shopping because they are less aware of the technology. However, the respondents said that they will love to do online shopping if price of the product is less than the market. in article 'Swot Analysis for Online Insurance India' proposes a clear action agenda for insurers and lists specific imperatives for each of its element. It suggests that insurers should define digital goal, adapt a digital mindset, inculcate right capabilities within their organisations and accelerate their current digital efforts to benefit from this digital opportunity. 
(DNA Web desk, 2015) report titled 'Make in India-initiative to foster growth in insurance industry in 2016', industry heads say that in FY15, the insurance industry is witnessing a growth rate of around 12-13%, said M Ravichandran, President -Insurance, Tata AIG Insurance. He further expects the industry to outperform this number in FY16, and the growth rate would be around 15%. K G Krishnamurthy Rao, Managing Director and Chief Executive Officer, Future Generalli India Insurance said that the range of policy initiatives taken by the Government and the regulators have led to renewed optimism about the future prospects of the Indian economy. (Octane Research, 2015) in its report titled 'State of e-Marketing India 2015' talks that the last half decade has been a great journey for Indian online growth. The online users doubled from 120 million in 2011 to 278 million in 2014. Mobile also showed remarkable growth with 900 million mobile connections and 220 million smart phone shipments in India in 2014. They have surveyed 465 marketers who put in their efforts and time and shared their insights and ideas with us. They have also collaborated with DMAi, RAI and IAMAI who have given their support, helping them in extending their reach for this study. This study has trending data, on how online marketing space in India has grown from 2011 to 2015. The study highlights newer insights that online marketing has seen in 2014. The last part of the research study tracks how Digital Marketing (Email & SMS) has changed and how digital marketers in India are using it. (PWC, 2015) carried out a survey "Future of India -The Winning Leap", and inferred that emergence of new technologies in India, particularly mobile, has sparked a social change that's difficult to compute. While mobile, internet, and social media incursion and growth can be quantified; describing the changes in social values and styles that have accompanied those trends is far more difficult. New technologies like virtual walls and virtual mirrors will further enhance the retail customer experience, thereby promoting greater consumption. Virtual mirrors let buyers 'try on' clothes and accessories virtually before making buying decisions. In their view, there is huge potential for online shopping companies owing to the ever-growing internet user base and technology advancements. However, it will have its share of challenges, be it operational, regulatory, or digital. (NagaRaj et al., 2014) states that the insurance sector in India came alive in the early period of the nineteenth century. Numerous acts have been passed from time to time for improved management of the insurance business. Two notable events in the history of insurance are i) the formation of the Life Insurance corporation of India in 1956, which acted as a monopoly till the year 2000 followed by ii) the opening up of the insurance sector to the private players in 1999, who were given the permission to operate either single headed or as joint venture with any other private player(s) and/or overseas partners. This extreme regulatory change ended the monopoly status of LIC and also coaxed the private players to enter into the insurance space. To keep pace with the dynamic industry environment, one of the functions that is capturing strategic space is the arrival of newer channels of distribution, which would take the business to newer markets and serve the customer cost effectively, over time and quickly. 
This study discusses the numerous different channels (that mushroomed in the deregulated period and changed the overall industry pattern). The research considered seven top private insurance players to understand the recent trends in distribution. The analysis clearly shows a change from the traditional channels by the private sector. (Business Standard, 2014) Masoom Gupte report titled 'Go beyond the traditional' talks that retail and service players are reshaping the methods of traditional distribution touch-points by exploring the unusual, meaning-Purchase a term-insurance policy or holiday off-the-shelf at a supermarket-Book a pre-packaged holiday as you visit a recovering friend at the hospital-Shop at a virtual store while you sip your coffee at your neighbourhood store. Aegon Religare, Kuoni-SOTC, Thomas Cook and Yebhi.com are few brands working hard to convince customers that abnormal can sometimes be more efficient. From the above review process, it has been observed that companies' value chain which is also called P (Place) of Marketing Mix play a crucial role creating value proposition for the customer. The insurance industry which has been applying an offline value chain is in a position to implement the online distribution channel to deliver its products and services looking at the change in consumer buying behaviour shifting towards virtual mode. The penetration of internet and smart phones at throw away prices has increased the user base for the same drastically and these very users who are young and trendy are moving into the virtual market space by virtue of it being there as an app base and the young trying and exploring the new. Whereas the old offline methods of distributing insurance cannot be eradicated, however the virtual penetration can go up drastically. (Kumar, 2013) has studied 'Electronic shopping: a paradigm shift in buying behaviour among Indian consumers' and found that the consumers have accepted online shopping in a positive way. This clearly proves the recent growth of online shopping in the country. However, the frequency of online shopping is comparatively less in the country. Online shopping firms can use the applicable variables and factors inferred from the study, to form their strategies and plans in the country. Better understanding of consumer online shopping behaviour will help companies in acquiring more online consumers and growing their e-business earnings. Plus, due to discounts and benefits from E-commerce, consumers are more willing to make online purchases. With growing popularity of Internet, the number of Internet users will keep growing and more Internet users will become online consumers. (Guru, 2013) states that online shopping is mostly male, young, single and educated. It's also found that internet usage trend like average time spent, place of accessing internet, main tasks accomplished and types of sites visited using internet between both buyers, and non-buyers are almost same. Most of the online buyers ask for product return/money refund in case of product dissatisfaction. It is observed that around 42% of the sample was unsure on buying or not in next 2/3 months. Three most important factors contributing to trust on online merchants were keep promises and commitments, care for customer welfare and help when in problem. (Bashir, 2013) states that online shopping is getting popular among the young generation as they feel more comfortable, find it time saving and convenient. 
It is found from survey that a consumer deciding to purchase online electronic goods was affected by multiple factors. The main important identified factors were time saving, best price and convenience. Price factor was popular among the consumers because online market prices were usually lower compared to physical markets. People normally compare prices in online stores, review feedbacks and rating about product before making the final selection of product. (Parikh, 2011) revealed that demographic indicators like age, gender, marital status, and income status have been customarily used in consumer behaviour study and market segmentation, shopping trends have also come up as dependable separators for categorising different types of shoppers based on their approach. Researchers have tapped into shopping trends to study loyalty behaviour among elderly consumers, window shoppers, out-shoppers and mall shoppers. By including this shopping behaviour pattern to online shopping, the study aimed at adding to the facts and understanding of consumer response to online shopping. It is becoming increasingly clear that to go on and more importantly to do well, online merchants should accept and actively follow elementary principles of good vending that apply to any medium. Based on the findings from this study, it is expected that the study of shopping trends can also help e-retailers recognize and understand those consumers who prefer to shop online and why. Additionally, shopping patterns could be used to fragment customers and form different strategies based on each segment's comparative inclination to adopt and use online shopping.
OBJECTIVES OF THE STUDY
1. To examine the existing offline distribution channel for insurance
2. To examine the relative significance of emerging online distribution channel for insurance
3. To make effective suggestions based on survey of literature and data analysis

Hypotheses
H1: The existing offline distribution channel is relatively costly and less convenient to the users as compared to online channel.
H2: The online distribution channel has relatively high economies of scale and accuracy as compared to the offline channel.
H3: Distribution model of online insurance business is beneficial to customers as well as insurance company.
H4: A growing trend of young internet users is a high potential target market for insurance business suitable to online channel.
A questionnaire was used as the tool for data collection. The collected data from the respondents was analysed using SPSS 21. Reliability analysis was carried out to ensure the reliability of the questionnaire; the resulting reliability values are high and considered a fairly good measure for this study. The skewness and the kurtosis were found to be within the limits suggested for normally distributed data.
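The article reports the SPSS output only in summary form. As a minimal sketch of how such a reliability check could be reproduced, assuming Likert-scale item responses (the data below are hypothetical, not the study's), Cronbach's alpha, skewness and kurtosis can be computed in Python:

# Minimal sketch of the reliability checks described above (hypothetical data).
# Assumption: `items` holds Likert-scale responses, one row per respondent and
# one column per questionnaire item.
import numpy as np
from scipy import stats

items = np.array([
    [4, 5, 4, 3, 4],
    [3, 4, 3, 4, 4],
    [5, 5, 4, 4, 5],
    [2, 3, 3, 2, 3],
    [4, 4, 5, 4, 4],
])

def cronbach_alpha(item_scores):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

total_score = items.sum(axis=1)
print("Cronbach's alpha:", round(cronbach_alpha(items), 3))
print("Skewness:", round(stats.skew(total_score), 3))      # near 0 for roughly normal data
print("Kurtosis:", round(stats.kurtosis(total_score), 3))  # excess kurtosis, 0 for normal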
Hypothesis Testing & Validation
The researcher has tested the presumptions on the basis of data collected and interpreted using different techniques of analysis. No good research is complete without validation of hypothesis. From this point of view the researcher has compared the statements of hypothesis with the findings drawn and has validated the basic assumptions of the study.
Hypothesis No. 1
H0: The existing offline distribution channel is not relatively costly and less convenient to the users as compared to online channel.
H1: The existing offline distribution channel is relatively costly and less convenient to the users as compared to online channel.
Level of Significance α = 5% = 0.05
Interpretation
It is observed from the one-sample t test results that the P value for the variables comparing offline and online distribution channels on the cost and convenience factor is less than 0.05 at the 5% level of significance. Hence, it is revealed that the existing offline distribution channel is relatively costly and less convenient to users as compared to online distribution channels, and it is concluded that online distribution channels are less costly and more convenient than offline distribution channels. So the alternative hypothesis is accepted and the null hypothesis is rejected at the 5% level of significance.
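The underlying response data and SPSS output are not reproduced in the article, so the following is only a minimal sketch of how a one-sample t test of this kind could be run, assuming 5-point Likert-scale responses tested against the neutral midpoint of 3; the variable name and values are hypothetical:

# Minimal sketch of the one-sample t test used for H1-H3 (hypothetical data).
# Assumption: respondents rated agreement on a 5-point Likert scale and the
# item is tested against the neutral midpoint of 3.
import numpy as np
from scipy import stats

online_cost_convenience = np.array([4, 5, 4, 3, 5, 4, 4, 5, 3, 4, 5, 4])

t_stat, p_value = stats.ttest_1samp(online_cost_convenience, popmean=3)

alpha = 0.05
if p_value < alpha:
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}: reject H0 at the 5% level")
else:
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}: fail to reject H0")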
Hypothesis No. 2
H0: The online distribution channel does not have relatively high economies of scale and accuracy as compared to the offline channel.
H2: The online distribution channels have relatively high economies of scale and accuracy as compared to the offline channel.
Level of Significance α = 5% = 0.05
Interpretation
It is observed from the one-sample t test results that the P value for the variables comparing offline and online distribution channels on the economies of scale and accuracy factor is less than 0.05 at the 5% level of significance. Hence, it is revealed and concluded that the online distribution channel has relatively high economies of scale and accuracy as compared to the offline channel. So the alternative hypothesis is accepted and the null hypothesis is rejected at the 5% level of significance.
Hypothesis No. 3
H0: Distribution model of online insurance business is not beneficial to customers as well as insurance company.
H3: Distribution model of online insurance business is beneficial to customers as well as insurance company.
Level of Significance α = 5% = 0.05
Interpretation
It is observed from the one-sample t test results that the P value for the variables comparing offline and online distribution channels on the benefit factor is less than 0.05 at the 5% level of significance. Hence, it is revealed that the distribution model of the online insurance business is beneficial to customers as well as to the insurance company, and it is concluded that online distribution channels are beneficial to both customers and the company. So the alternative hypothesis is accepted and the null hypothesis is rejected at the 5% level of significance.
Hypothesis No. 4
H0: A growing trend of young internet users does not have a high potential target market for insurance business suitable to online channel.
H4: A growing trend of young internet users is a high potential target market for insurance business suitable to online channel.
Level of Significance α = 5% = 0.05
Statistical Test: Chi-Square test

Interpretation
It is observed from the chi-square test result that the P value of the variables compared is more than 0.05 at the 5% level of significance. Hence, it is revealed and concluded that the data do not support a growing trend of young internet users being a high potential target market for insurance business suited to the online channel. So the null hypothesis is retained and the alternative hypothesis is rejected at the 5% level of significance.
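Again, the contingency table behind this test is not reported, so the following is only a minimal sketch of how a chi-square test of independence could be run, assuming a hypothetical cross-tabulation of age group against preferred channel:

# Minimal sketch of the chi-square test used for H4 (hypothetical data).
# Assumption: a 2x2 contingency table of age group vs. preferred channel;
# the counts below are illustrative only.
import numpy as np
from scipy.stats import chi2_contingency

#                      online  offline
observed = np.array([
    [55, 45],   # younger internet users
    [40, 60],   # older respondents
])

chi2, p_value, dof, expected = chi2_contingency(observed)

alpha = 0.05
if p_value < alpha:
    print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}: reject H0 at the 5% level")
else:
    print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}: retain H0 at the 5% level")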
Objectives and Their Findings
Objective 1 - To examine the existing offline distribution channel for insurance
The current offline distribution channels being used by insurance companies and customers are agency, brokers, bancassurance, company branch and sales force, with the most preferred being agency. It is observed from the rank analysis of offline distribution channels that 51.3% of customers prefer agents as the offline distribution channel to obtain insurance, giving it rank 1. Sales force is ranked 2 (16.52%), kiosk rank 3 (11.31%), company branch rank 4 (8.26%), brokers rank 5 (6.96%) and bancassurance/bank rank 6 (5.65%). It can be concluded that agents are considered the most preferable offline distribution channel to obtain insurance.
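As a small illustration, the ranking can be reproduced from the first-preference shares quoted above (a minimal sketch in Python; only the percentages reported in the text are used):

# Minimal sketch reproducing the rank analysis of offline channels from the
# first-preference percentages reported in the text.
preferences = {
    "Agents": 51.30,
    "Sales force": 16.52,
    "Kiosk": 11.31,
    "Company branch": 8.26,
    "Brokers": 6.96,
    "Bancassurance/bank": 5.65,
}

ranked = sorted(preferences.items(), key=lambda kv: kv[1], reverse=True)
for rank, (channel, share) in enumerate(ranked, start=1):
    print(f"Rank {rank}: {channel} ({share:.2f}%)")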
Objective 2 -To examine the relative significance of emerging online distribution channel for insurance
The major factors for using online distribution channels are that online purchasing is easy, efficient and eco-friendly; it saves operational cost and the process is paperless; online insurance renewal is an easy, one-click process with auto-reminders; and customer service is fast, quick and reliable.
The programs undertaken to increase the awareness level of online insurance channels in order of ranking are social media, mobile apps and direct mailers.
Current online distribution channels used by insurance companies, in order of ranking, are web-based applications, mobile apps and kiosks, with web-based applications the most preferred.
The emerging online distribution channel carries the risk of fraud in the form of cybercrime, data insecurity, etc., and since it is an indirect channel there is no personal guidance on the right purchase.
Objective 3 -To make effective suggestions based on survey of literature and data analysis
In the findings, the customers' focus is more on the core distribution model concept, and hence the effort on further refinement and augmentation of the online distribution model needs a review. Companies need to analyse and focus on building a hybrid channel which will have all the benefits of the online channel and also provide personal guidance and support through virtual face-to-face help or personal visits.
Customers Findings
The following are the values of Cronbach's alpha coefficient for the instruments measuring customer responses.
• Level of agreement/disagreement about the various reasons for using offline distribution channels for life insurance/insurance (Cronbach's Alpha, α = 0.925, N = 11).
• Level of agreement/disagreement about the use of online distribution channels while purchasing life insurance/insurance (Cronbach's Alpha = 0.845).
• Various limitations faced by customers while dealing with online distribution channels for life/insurance (Cronbach's Alpha, α = 0.921, N = 12).
These reliability values are high and are considered an excellent measure for this study.
• Lack of personal interface/service, doubts about the safety of transactions/online scams/cyber security issues, and contact updation are the major problems faced by customers while using online distribution channels.
• The major factors for using online distribution channels are that online purchasing is easy, efficient and eco-friendly; it saves operational cost and the process is paperless; online insurance renewal is an easy, one-click process with auto-reminders; and customer service is fast, quick and reliable.
• Direct interaction between customer and agent/sales force, easy doubt resolution, high tangibility and good customer relationships are the major reasons for using offline distribution channels for insurance.
CONCLUSIONS AND SUGGESTIONS
From the above findings, it can be inferred that the insurance sector is growing at an annual rate of 21.9%. However, insurance penetration in the country is very low. Most private insurance companies have had joint venture with recognized foreign players across the globe. To achieve success in the marketing of the insurance products, the entire business environment is required to be evaluated. The strategies are to be prepared based on the dynamics of the market trends. A company must have quality people, innovative management, be able to employ technology effectively, besides having the right products and distribution channels to be successful. The need for distribution channels can never be denied because these channels are the major reasons behind the successful working of insurance companies. This study is focused on the offline and online distribution channels in Insurance. It is observed that though offline channel has been there in various forms since inception of insurance, it is gradually losing its usage due to the advent of online distribution and a regular upgrade in the form of technological advancements. Regulatory and capital barriers to enter the insurance industry limit the impact of "standalone" technology companies. However, I believe the use of technical capabilities with a capital backing, regulatory fit and a recognized brand would be transformational for the insurance sector along with product innovations and aggressively leveraging the enabling technologies.
The customers focus is more on the core distribution channel concept in the findings and hence the effort on further refinement and augmentation of online distribution channel needs a review. In fact, the companies should focus on a Hybrid Channel where the online channel is combined with Offline channel to provide the option of personal guidance which will greatly add numbers and penetration. Companies need to appreciate that the events and facilities they provide as a part of their augmentation exercise deliver only marginal returns and some of these factors are critical for getting more customers to switch to online distribution model. Efforts have to be taken for increasing awareness of online insurance distribution channel. Similarly, efforts have to be taken by the companies to make online insurance channel user friendly by more demo usage meetings, user friendly webbased applications, mobile apps and kiosks. Current online distribution channels used by Insurance Companies in order of ranking are web-based application, mobile apps and kiosks with Web based application the most preferred. With the advent of smart phones companies have to create new apps and restructure existing ones. Companies have to make tie ups with Payment apps and give various discount offers for existing and new customers using online channel. Lack of personal interface/ service, doubts in safety of transaction/online scams/cyber security issues, contact updation are the major problems faced by customers while using online distribution channels. Companies have to come out with innovative ideas like speaking apps, online chat options, bulletins and newsletters from government regarding online safety to build confidence among internet users. Companies should have separate section in Web based applications and Mobile Apps for contact updation in line with KYC options offered by banks. Companies should hire and appoint exclusive online business generation employees. Companies should also partner with online business aggregators for offering online insurance. The Insurance companies through their intermediaries and by adopting sound distribution strategies for both offline and online channels can reach the potential customers. There is a need to promote Internet Marketing and Worksite marketing as sales generating channels by both public and private insurance companies.
Hybrid Channel Model Recommendation
By combining traditional distribution with online channels, companies that follow a hybrid channel approach enjoy better reach and coverage. For such companies, approaching consumers directly through digital channels gives them room to move into new and previously unreachable markets and achieve a greater share of their target market. Product selections can be offered online that match the products sold via the agency channel, helping customers move to the higher-margin online channel. An online approach to customers also gives existing companies more flexibility in launching and marketing products. Established insurance companies can leverage real-time data through digital tools to better analyse the expected performance of a pre-launch product. Post launch, companies can initiate more targeted and personalized marketing strategies for their traditional as well as online channels.
Advantages of the Hybrid Channel
The dimensions of each of the channels, summarized below, illustrate the many advantages that can be leveraged in a hybrid channel.
Capitalizing on digital platforms to extend digital capability can deliver many benefits across financial, operational and market dimensions: Revenue growth - Companies can build their own direct links with customers, increasing engagement and conversion rates through their own digital channels.
Improved margins - Companies no longer have to negotiate margin sharing with wholesale brokers, managed agencies and program administrators. Expanded market reach - Insurance companies are not bound by geography or their promoters' reach when they market and sell their products directly to consumers online. They can sell to various customer segments and go global overnight.
Reduced capital expenditure - Companies can cut capital investment costs as they do not necessarily need to develop costly, time-consuming new agency channels to drive growth. However, they do have to invest in digital channels.
Improved customer data - Companies can utilize the huge volume of data generated by digital tools and platforms to better understand their customers' preferences, styles, demographics and purchase behaviour. High-value segments can be identified and targeted, while pain points in the customer journey can be addressed.
Improved customer relationships -Companies can own their customer relationships by leveraging their data-driven understanding of customer behaviour to deliver a more targeted value proposition.
Comprehensive products - Companies can provide a full line of products across a customer's needs, such as business owners' insurance, workers' compensation, liability, home and auto. The new opportunity for these companies lies in growing the profit they derive from higher-margin direct channels. Insurance firms could achieve this by broadening their insurance product offerings, although additional capital investment will be required.
The Road Ahead
There is a continuous and noteworthy change in the approach of consumers towards online insurance. The change has occurred in a phased manner, from curiosity to attraction and then to action. Misconceptions, doubts and questions about online insurance as a distribution channel still exist, as the entire model of online insurance is quite new and relatively unheard of in India. Online insurance distribution as a marketing revolution is of recent origin. The customers' focus in the findings is still largely on the traditional distribution channel concept, and hence the effort on further fine-tuning and refinement of the online distribution channel needs reassessment. The insurance business in India needs special care compared with other businesses, as it is still at a very nascent stage. Both theory and practice have to be incorporated to provide the best services to policyholders. The industry has to be prepared for more challenges due to ongoing changes in the economy and modes of employment. More global players plan to enter the Indian market because of its high potential. It is therefore very important to understand customer expectations and attitudes towards this product. This is a time for business model re-engineering. With several changes in the regulatory framework leading to further change in the way the industry conducts its business and engages customers, the future looks promising for the insurance industry. A growing middle class, a young insurable population and growing awareness of the need for protection and retirement planning will support the growth of Indian insurance. The burgeoning usage of the internet, and its emergence as an important distribution channel for insurance companies in various modes, shows a strong sign of the gradual movement of marketing from the traditional mode to the online mode, with a strong focus on the hybrid channel. The number of internet users in India is expected to reach 627 million in 2019, driven by rapid internet penetration and growth in rural areas, market research agency Kantar IMRB has said. Of the total user base, 87 percent of Indians are categorised as regular users, having used the internet in the last 30 days. Approximately 293 million active internet users live in urban India and around 200 million in rural India. The report found that 97 percent use a mobile phone to access the internet. While internet users grew by 7 percent in urban India, reaching 315 million users in 2018, digital growth is now being driven by rural India, which registered 35 percent growth in internet users over the past year. Bihar has shown the highest growth of internet users across both urban and rural areas, with a growth rate of 35 percent over the last year. The report also pointed out that internet usage is better gender balanced than before, with 42 percent of women using the internet. The report observed that the digital revolution is now sweeping small towns and villages, perhaps due to affordable data costs, and that more than two-thirds of active internet users in rural India now use the internet daily to meet their entertainment, communication and shopping needs. The insurance industry thus has a huge opportunity today: it can use digital distribution to reach its consumers in both urban and rural India and achieve much better market penetration and coverage.
FUNDING AGENCY
Publisher has waived the Open Access publishing fee.
Identifying Patterns For Short Answer Scoring Using Graph-based Lexico-Semantic Text Matching
Short answer scoring systems typically use regular expressions, templates or logic expressions to detect the presence of specific terms or concepts in student responses. Previous work has shown that manually developed regular expressions can provide effective scoring; however, manual development can be quite time-consuming. In this work we present a new approach that uses word-order graphs to identify important patterns from human-provided rubric texts and top-scoring student answers. The approach also uses semantic metrics to determine groups of related words, which can represent alternative answers. We evaluate our approach on two datasets: (1) the Kaggle Short Answer dataset (ASAP-SAS, 2012), and (2) a short answer dataset provided by Mohler et al. (2011). We show that our automated approach performs better than the best performing Kaggle entry and generalizes as a method to the Mohler dataset.
Introduction
In recent years there has been a significant rise in the number of approaches used to automatically score essays. These involve checking the grammar, syntax and lexical sophistication of student answers (Landauer et al., 2003; Attali and Burstein, 2006; Foltz et al., 2013). While essays are evaluated for the quality of writing, short answers are brief and evoke very specific responses (often restricted to specific terms or concepts) from students. Hence the use of features that check grammar, structure or organization may not be sufficient to grade short answers. Regular expressions, text templates or patterns have been used to determine whether a student answer matches a specific word or phrase present in the rubric text. For example, Moodle (2011) offers a "Regular Expression Short-Answer question" type that allows instructors or question developers to code correct answers as regular expressions. Consider the question: "What are blue, red and yellow?" This question can evoke a very specific response: "They are colors." However, there are several ways (with the term "color" spelled differently, for instance) to answer this question, e.g. (1) they are colors; (2) they are colours; (3) they're colours; (4) they're colors; (5) colours; or (6) colors. Instead of having to enumerate all the alternatives, the answer can be coded as a regular expression: (they('|\s(a))re\s)?colo(u)?rs.
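As a concrete illustration, the following minimal Python sketch (not from the paper) applies this regular expression with the standard re module to a few candidate answers:

```python
import re

# A minimal sketch of applying the example regular expression from the text.
PATTERN = re.compile(r"(they('|\s(a))re\s)?colo(u)?rs")

answers = [
    "they are colors",
    "they're colours",
    "colours",
    "colors",
    "they are shapes",   # contains no form of "colors", so it should not match
]

for answer in answers:
    match = PATTERN.search(answer)
    print(f"{answer!r:<25} -> {'match' if match else 'no match'}")
```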
Manually generated regular expressions have been used as features in generating models that score short answers in the Kaggle Short Answer Scoring competition (ASAP-SAS, 2012). Tandalla (2012)'s approach, the best performing one of the competition, achieved a Quadratic Weighted (QW) Kappa of 0.70 using just regular expressions as features. However, regular expression generation can be tedious and time consuming, and the performance of these features is constrained by the ability of humans to generate good regular expressions. Automating this approach would ensure that the process is repeatable, and the results consistent.
We propose an approach to identify patterns to score short answers using the rubric text and top-scoring student responses. The approach involves (1) identification of classes of semantically related words or phrases that a human evaluator would expect to see among the best answers, and (2) combining these semantic classes in a meaningful way to generate patterns. These patterns help capture the main concepts or terms that are representative of a good student response. We use a word-order graph (Ramachandran and Gehringer, 2012) to represent the rubric text. The graph captures the order of tokens in the text. We use a lexico-semantic matching technique to identify the degree of relatedness across tokens or phrases. The matching process helps identify alternate ways of expressing the response.
The generated patterns contain (1) positional constraints ((?=), which indicate that the search for the text should start at the beginning), and (2) the choice operator (|), which captures alternate ways of expressing the same term, e.g. diet or eat or grub. We then look for a match (or non-match) between the set of generated patterns and new short answers.
We evaluate our patterns on short answers from the Kaggle Automated Student Assessment Prize (ASAP) competition, the largest publicly available short answer dataset (Higgins et al., 2014). We compare our results with those from the competition's best model, which uses manually generated regular expressions. Our aim with this experiment is to demonstrate that automatically generated patterns produce results that are comparable to manually generated patterns. We also tested our approach on a different short answer dataset curated by Mohler et al. (2011).
One of the main contributions of this paper is the use of an automated approach to generate patterns that can be used to grade short answers effectively, while spending less time and effort. The rest of this paper is organized as follows: Section 2 discusses related work that uses manually constructed patterns or answer templates to grade student responses. Section 3 contains a description of our approach to automatically generate patterns to grade short answers. Sections 4 and 5 discuss the experiments conducted to evaluate the performance of our patterns in scoring short answers. Section 6 concludes the paper.

Related Work

Leacock and Chodorow (2003) developed a short-answer scoring system called C-rater, which focuses on semantic information in the text. They used a paraphrase-recognition based approach to score answers. Bachman et al. (2002) proposed the use of a short answer assessment system called WebLAS. They extracted regular expressions from a model answer to generate the scoring key. Regular expressions are formed with exact as well as near-matches of words or phrases. Student answers are scored based on the degree of match between the answer and the scoring key. Unlike Bachman et al., we do not use patterns to directly match and score student answers. In our approach, text patterns are supplied as features to a learning algorithm such as Random Forest (Breiman, 2001) in order to accurately predict scores. Mitchell et al. (2003) used templates to identify the presence of sample phrases or keywords among student responses. Marking schemes were developed based on keys specified by human item developers. The templates contained lists of alternative (stemmed) tokens for a word or phrase that could be used by the student. Pulman and Sukkarieh (2005) used hand-coded patterns to capture different ways of expressing the correct answer. They automated the approach of template creation, but the automated templates did not outperform the manually generated ones. Makatchev and VanLehn (2007) used manually encoded first-order predicate representations of answers to score responses. Other work reformulated queries as declarative sentence segments to aid query-answer matching, under the condition that the (exact) content words appearing in a query would also appear in the answer. Consider the sample query "When was the paper clip invented?", and the sample answer: "The paper clip is a very useful device. It was patented by Johan Vaaler in 1899." The word patented is related in meaning to the term invented, but since the exact word is not used in the query, it will not match the answer. We propose a technique that uses related words as part of the patterns in order to avoid overlooking semantically close matches.
Approach
In this section we describe our approach to automatically identify text patterns that are representative of the best answers. We automatically generate two types of patterns, containing (1) content words and (2) sentence structure information. We use the rubric text provided to human graders and a set of top-scoring student answers as the input data to generate patterns. Top-scoring responses are those that receive the highest human grades. In our implementation we use the top-scored answers from the training set only. Figure 1 depicts an overview of our approach to automated pattern generation.
Extracting Content Tokens
We rewrite the rubric text in order to generate a string of content words that represent the main points expected to appear in the answer. The aim of our approach is to generate patterns with no manual intervention. The re-writing of the rubric is also done automatically. It involves the removal of stopwords while retaining only content tokens.
We eliminate stopwords and function words in the text and retain only the important prompt-specific content words. Short answer scoring relies on the presence or absence of specific tokens in the student's response. Content tokens are extracted from sample answers, and the tokens are grouped together without taking the order of tokens into consideration.
Students may use words different from those used in the rubric (e.g. synonyms or other semantically related words or phrases). Therefore we have to identify groups of words or phrases that are semantically related. In order to extract semantically similar words specific to the prompt's vocabulary, we look for related tokens in top-scoring answers as well as in the prompt and stimulus texts.
Semantic Relatedness Metric
We use WordNet (Fellbaum, 1998) to determine the degree of semantic match between tokens because it is faster to query than a knowledge resource such as Wikipedia. WordNet has been used successfully to measure relatedness by Agirre et al. (2009).
A match between two tokens can be one of: (1) exact, (2) synonym, (3) hypernym or hyponym (more generic or specific), (4) meronym or holonym (sub-part or whole), (5) presence of common parents (excluding generic parents such as object, entity), (6) overlap across definitions or examples of tokens, i.e., using context to match tokens, or (7) distinct or non-match. Each of these matches expresses a different degree of semantic relatedness across the compared tokens. The seven types of matches are weighted on a scale of 0 to 6. An exact match gets the highest weight of 6, a synonym match gets a weight of 5 and so on, and a distinct or non-match gets the lowest weight of 0.
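A rough, illustrative sketch of how such a weighting could be computed with NLTK's WordNet interface is shown below; this is an assumption about one possible implementation rather than the authors' code, and it omits the filtering of generic parents and the handling of multi-word phrases:

```python
from nltk.corpus import wordnet as wn

def pair_weight(sa, sb, token_b):
    """Score the relation between two synsets on the 0-6 scale described above."""
    if sa == sb or token_b in sa.lemma_names():
        return 5                                  # synonym
    if sb in sa.hypernyms() or sb in sa.hyponyms():
        return 4                                  # hypernym / hyponym
    if sb in sa.part_meronyms() or sb in sa.member_holonyms():
        return 3                                  # meronym / holonym
    if set(sa.hypernyms()) & set(sb.hypernyms()):
        return 2                                  # shared parent (no generic-parent filter here)
    if set(sa.definition().split()) & set(sb.definition().split()):
        return 1                                  # definition/context overlap
    return 0                                      # distinct / non-match

def relatedness_weight(token_a, token_b):
    if token_a == token_b:
        return 6                                  # exact match
    weights = [
        pair_weight(sa, sb, token_b)
        for sa in wn.synsets(token_a)
        for sb in wn.synsets(token_b)
    ]
    return max(weights, default=0)

print(relatedness_weight("size", "volume"))
```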
In the pattern (?=.*(larg(e)?|size|volum(e)?).*)(?=.*(dry).*)(?=.*(surface).*), the set (?=.*(larg(e)?|size|volum(e)?).*) contains semantically related alternatives. The pattern looks for the presence of three tokens: any one of the tokens within the first (?=.*···.*) and the tokens dry and surface. These tokens do not have to appear in any particular order within the student answer. A combination of these tokens should be present in a student answer for it to get a high score. The steps involved in generating content-token based patterns for the text "size or type of container to use" are described in Algorithm 1.
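As an illustration of how such patterns can be assembled and applied, the Python sketch below AND-s classes of related tokens into a single unordered-content pattern; it assumes the token classes are already available (Algorithm 1 below formalizes how they are obtained) and is not the authors' implementation:

```python
import re

# Hypothetical classes of semantically related tokens for one rubric sentence.
token_classes = [
    ["larg(e)?", "size", "volum(e)?"],
    ["dry"],
    ["surface"],
]

# AND the classes together with lookaheads so token order does not matter.
pattern = re.compile(
    "".join(f"(?=.*({'|'.join(cls)}).*)" for cls in token_classes),
    re.IGNORECASE,
)

print(bool(pattern.search("A large, dry surface area increases evaporation.")))  # True
print(bool(pattern.search("The surface was wet.")))                              # False
```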
Algorithm 1: Generating patterns containing unordered content tokens.
Input: Rubric text, top-scoring answers, and prompt and stimulus texts (if available)
Output: Patterns containing unordered content words
for each sentence in the rubric text do    /* Rubric text: "size or type of container to use" */
  1. Remove stopwords or relatively common words.    /* Output: size type container use */
  2. Rank tokens in the top-scoring answers and the prompt and stimulus texts by frequency, and select the most frequent tokens.    /* size container type */
  3. For each rubric token, identify a class of alternate tokens from among the most frequent tokens (from Step 2).    /* {size, large, mass, thing, volume} {container, cup, measure} {type, kind} */
  4. Stem words and treat the suffix as optional.    /* container → (stem: contain, suffix: er) → contain(er)? */
  5. Generate the pattern by AND-ing the classes of words.    /* (?=.*(large|mass|size|thing|volume).*)(?=.*(contain(er)?|cup|measure).*)(?=.*(kind|type).*) */
end

Extracting Phrase Patterns

In order to capture word order in the rubric text we extract subject-verb, verb-object, adjective-noun and adverb-verb structures from the sample answers. The extraction process involves generating word-order graph representations for the sample answers and extracting edges that represent the structural relations listed above.

Generating word-order graphs: We use word-order graphs to represent text because they encode the ordering of words or phrases, which helps capture context information. Context is not available when using just unigrams.
Word graphs have been found to be useful for the task of determining a review's relevance to the submission. Word-order graphs' f-measure on this task is 0.687, while that of dependency graphs is 0.622 (Ramachandran and Gehringer, 2012). No approach is highly accurate, but word graphs work well for this task.
Figure 2: Word-order graphs for texts (A) "Generalists are favored over specialists" and (B) "The paper presented important concepts." Edges in a word-order graph maintain ordering information, e.g. generalists-are favored, paper-presented, important-concepts.

Algorithm 2: Generating patterns containing sentence structure or phrase pattern information.
Input: Rubric text, top-scoring answers, and prompt and stimulus texts (if available)
Output: Patterns containing ordered word phrases
for each sentence in the rubric text do    /* Rubric text: "...particles like sodium, potassium ions into membranes..." */
  1. Generate word-order graphs from the text, and extract edges from the word-order graph.    /* The extracted segment: particles like--sodium potassium--ions into membranes. Graph edges are connected with a "--" */
  2. Replace stopwords or function words with \w{0,4}.
end

Structure information is crucial in a pattern-generation approach since some short answers may capture relational information. Consider the answer: "Generalists are favored over specialists", to a question on the differences between generalists and specialists. A pattern that does not capture the order of terms in the text will not capture the relation that exists between "generalists" and "specialists". Figure 2(A) contains the graph representation for this text.
During graph generation, each sample text is tagged with parts-of-speech (POS) using the Stanford POS tagger (Toutanova et al., 2003), to help identify nouns, verbs, adjectives, adverbs etc. For each sample text consecutive noun components, which include nouns, prepositions, conjunctions and Wh-pronouns are combined to form a noun vertex. Consecutive verbs (or modals) are combined to form a verb vertex; similarly with adjectives and adverbs. When a noun vertex is created the generator looks for the last created verb vertex to form an edge between the two. When a verb vertex is found, the algorithm looks for the latest noun vertex to create a noun-verb edge. Ordering is maintained when an edge is created i.e., if a verb vertex was formed before a noun vertex a verb-noun edge is created, else a noun-verb edge is created. A detailed description of the process of generating word-order graphs is available in Ramachandran and Gehringer (2012).
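A simplified sketch of this vertex-merging step is given below; it uses NLTK's tokenizer and tagger in place of the Stanford tagger used in the paper (the corresponding NLTK data must be installed), and it links adjacent vertices rather than distinguishing noun-verb, adjective-noun and adverb-verb edge types:

```python
import nltk

def word_order_edges(text):
    """Merge consecutive tokens of the same kind into vertices and connect
    adjacent vertices in order (a rough sketch, not the paper's implementation)."""
    tags = nltk.pos_tag(nltk.word_tokenize(text))
    vertices, current, current_kind = [], [], None

    def kind(tag):
        if tag.startswith(("NN", "IN", "CC", "WP")):
            return "noun"       # nouns, prepositions, conjunctions, Wh-pronouns
        if tag.startswith(("VB", "MD")):
            return "verb"       # verbs and modals
        if tag.startswith("JJ"):
            return "adj"
        if tag.startswith("RB"):
            return "adv"
        return None

    for word, tag in tags:
        k = kind(tag)
        if k is None:
            continue
        if k == current_kind:
            current.append(word)                 # extend the current vertex
        else:
            if current:
                vertices.append((" ".join(current), current_kind))
            current, current_kind = [word], k
    if current:
        vertices.append((" ".join(current), current_kind))

    # Connect each vertex to the next one, preserving the order of the text.
    return list(zip(vertices, vertices[1:]))

print(word_order_edges("Generalists are favored over specialists"))
```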
For this experiment we do not use dense representations of words (e.g. Latent Semantic Analysis (LSA) (Landauer, 2006)) because they are extracted from a large, general corpus and tend to extend the meaning of words to other domains (Foltz et al., 2013). In place of a dense representation we use word-order graphs, since they capture order of phrases in a text.
Substituting stopwords with regular expressions: Stopwords or function words in the extracted word phrases are replaced with the regular expression (\s\w{0,x}\s){0,n} where x indicates the length of the stopwords or function words, and n indicates the number of stopwords that appear contiguously. We use x=4, and n can be determined while parsing the text. We allow for 0 occurrences of stopwords (in {0,n}) between content tokens. Some students may not write grammatically correct or complete answers, but the answer might still contain the right order of the remaining content words, which helps them earn a high score. Identifying semantic alternatives for content words: Just as in the case of tokens-based patterns (Section 3.1), semantically related words are identified to accommodate alternative responses (relatedness metric described in Section 3.1.1). Tokens in top-scoring answers and prompt texts are ranked based on their frequency, and the most frequent tokens are selected for comparison with words in the rubric text. Apart from that we also add other synonyms of the token to the class of related terms. For instance some synonyms of the token droplets are raindrops, drops, which are added to its class of semantically related words.
Stemming accommodates typos, the use of wrong tenses, and the use of morphological variants of the same term (containing singular-plural or nominalized word forms). For instance, if the "s" is missed in "drops", it is handled by the expression "drop(s)?". These are correctly spelled variants of the same token. We use the Porter (1980) stemmer to stem words. The final class of words from the example above looks as follows: {droplet(s)?, driblet, raindrop(s)?, drop(s)?}. Humans tend to overlook typos as well as differences in tense; therefore the trailing "s" is considered optional.
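A minimal sketch of how a phrase pattern with stopword gaps and stemmed, optional-suffix alternatives might be assembled is shown below; the toy stopword list, the gap bound {0,2} and the helper names are illustrative assumptions, not the released implementation:

```python
import re
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
STOP = {"the", "a", "an", "of", "to", "into", "like", "or", "and"}   # toy stopword list
GAP = r"(\s\w{0,4}\s){0,2}"   # up to two short function words between content tokens

def token_alternative(token):
    """Make the suffix beyond the Porter stem optional, e.g. drops -> drop(s)?."""
    stem = stemmer.stem(token)
    if token.startswith(stem) and len(token) > len(stem):
        return f"{re.escape(stem)}({re.escape(token[len(stem):])})?"
    return re.escape(token)

def phrase_pattern(words):
    parts = [token_alternative(w) for w in words if w.lower() not in STOP]
    return re.compile(GAP.join(parts), re.IGNORECASE)

pat = phrase_pattern(["particles", "like", "sodium"])
print(bool(pat.search("particles like sodium ions into membranes")))   # True
```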
Algorithm 2 describes the steps involved in extracting phrase patterns from a sample answer "...particles like sodium, potassium ions into membranes...".
Kaggle Short Answer Dataset
The aim of the Kaggle ASAP Short Answer Scoring competition was to identify tools that can score answers comparably to humans (ASAP-SAS, 2012). Short answers along with prompt texts (and in some cases sample answers) were made available to competitors. The dataset contains 10 different prompts scored on either a scale of 0-2 or 0-3. There were a total of 17207 training and 5224 test answers. Around 153 teams participated in the competition. The metric used for evaluation is QW Kappa. The human benchmark for the dataset was 0.90. The best team achieved a score of 0.77.
Tandalla's Approach
Tandalla's (2012) model was the best performing one at the ASAP Short Answer Scoring competition. One of the important aspects of Tandalla's approach was the use of manually coded regular expressions to determine whether a short answer matches (or does not match) a sample pattern. Specific regular expressions were developed for each prompt set, depending on the type of answers each set evoked (e.g. the presence of words such as "alligator", "generalist", "specialist" etc. in the text). These patterns were entirely hand-coded, which involved a lot of manual effort. Tandalla built a Random Forest model with the regular expressions as features. This model alone achieved a QW Kappa of 0.70. Tandalla also manually labeled answers to indicate match with the rubric text. A detailed description of the best performing approach is available in Tandalla (2012).
Experiment
Our aim with this experiment is to compare system-generated patterns with Tandalla's manually generated regular expressions. The goal is to determine the scoring performance of automated patterns, while keeping everything (but the regular expressions) in the best performing approach's code constant.
We substituted the manual regular expressions used by Tandalla in his code with the automated patterns. We then ran Tandalla's code to generate the models and obtain predictions for the test set. We evaluate our approach on each of the 10 prompt sets from the Kaggle short answer dataset.
The final predictions produced by Tandalla's code are the average of four learning models' (two Random Forests and two Gradient Boosting Machines) predictions. The learners were used to build regression (not discrete) models. We used content tokens and phrase patterns to generate two sets of predictions, one for each run of Tandalla's code. We stacked the output by taking the average of the two sets of predictions. We compare our model with the following:
1. Tandalla's model with manually generated regular expressions: This is the gold standard, since manual regular expressions were a part of the best performing model.
2. Tandalla's model with no regular expressions: This model constitutes a lower baseline, since the absence of any regular expressions should cause the model to perform worse. Since the code expects Boolean regular expression features as inputs, we generated a single dummy regular expression feature with all values as 0 (no match).
Results
From Table 1 we see that Tandalla's base code along with our patterns' stacked output performs better than the manual regular expressions. On 8 out of the 10 sets our patterns perform better than the manual regular expressions. Their performance on the remaining 2 sets is better than that of the lower baseline, i.e., Tandalla's code with no regular expressions. The mean QW Kappa achieved by our patterns is 0.78 and that achieved by Tandalla's manual regular expressions is 0.77. Although the QW Kappas are very close (i.e. the difference is not statistically significant), their unrounded difference of 0.00530 is noteworthy by the Kaggle competition's standards. For instance, the difference between the first and second place teams (Luis Tandalla and Jure Zbontar) in the competition is 0.00058.
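For reference, the evaluation metric can be computed with scikit-learn's implementation of Cohen's kappa with quadratic weights; the scores below are made-up placeholders, not competition data:

```python
from sklearn.metrics import cohen_kappa_score

# Placeholder human and system scores for a handful of answers.
human_scores  = [0, 1, 2, 2, 3, 1, 0, 2]
system_scores = [0, 1, 2, 1, 3, 1, 1, 2]

qwk = cohen_kappa_score(human_scores, system_scores, weights="quadratic")
print(f"Quadratic weighted Kappa: {qwk:.3f}")
```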
Analysis of Behavior of Regular Expressions
While the overall performance of the automated regular expressions is better than that of Tandalla's manual regular expressions, there are some respects in which they may fall short of the manual regular expressions.
In the case of Sets 5 and 7, the stacked model performs worse than the model that uses manual regular expressions. This indicates that the manual regular expressions play a very important role for these prompts. In the case of Set 5, the prompt evokes information on the movement of mRNA across the nucleus and ribosomes. We found that the answers discuss the movement of mRNA in a certain direction, e.g. out of (exit) the nucleus and into the (entry) ribosome. Although students may mention content terms such as nucleus and ribosome correctly, they tend to miss the directionality (of the mRNA). Since terms such as into, out of etc. are prepositions or function words, they get replaced, in our automated approach, by \w{0,x}. Hence, if the student answer mentions "the mRNA moved into the nucleus" as opposed to saying "out of the nucleus", our pattern would incorrectly match it.
As described above we found that retaining stopwords (e.g. prepositions such as "into" or "out of") in the regular expressions may be useful in the case of some prompts. Our approach to regular expression generation may be tweaked to allow the use of stopwords for some prompts. However, our aim is to show that with a generalized approach (in this case one that excludes stopwords) our system performs better than Tandalla's.
In the case of prompt 7, the answers are expected to contain a description of the traits of a character named Rose, as well as an explanation of why students thought that the character was caring. An automated pattern such as (?=.*(hard|difficult).*)(?=.*(work(ing)?).*) captures some of Rose's traits. The answer "Rose was a very hard working girl. She felt really lonely because her dad had just left and her mother worked most of the day." matches the above pattern. However, the explanation provided by the student in the second sentence is not correct. This answer was awarded a score of 1 by the human grader, but was given a 2 by the system. Although the pattern succeeds in capturing partial information, it does not capture the explanation correctly for this prompt.
Mohler et al. (2011)'s Short Answer Dataset
In this section we evaluate our approach on an alternate short answer scoring dataset generated by Mohler et al. (2011). The aim is to show that our method is not specific to a single type of short answer, and could be used successfully on other datasets to build scoring models. Mohler et al. use a combination of graph-based alignment and lexical similarity measures to grade short answers. They evaluate their model on a dataset containing 10 assignments and 2 examinations. The dataset contains 81 questions with a total of 2273 answers. The dataset was graded by two human judges on a scale of 0-5. Human judges have an agreement of 57.7%.
Mohler et al. apply a 12-fold cross validation over the entire dataset to evaluate their models. On average, the train fold contains 1894 data points while the test fold contains 379 data points. Models are constructed with data from assignments containing questions on a variety of programming concepts such as the role of a header file, offset notation in arrays and the advantage of linked lists over arrays. Although all the questions are from the same domain (e.g. computer programming) the answers they evoke are very different. Mohler et al. achieved a correlation of 0.52 with the average human grades, with a hybrid model that used Support Vector Machines as a ranking algorithm. The hybrid model contained a combination of graph-nodes alignment, bag-of-words and lexical similarity features. The best Root Mean Square Error (RMSE) of 0.98 was achieved by the hybrid model, which used Support Vector Regression as the learner. The best median RMSE computed across each individual question was 0.86.
Experiment and Results
We use the same dataset to extract text patterns. Since patterns are prompt or question specific we cannot create models using the entire dataset like Mohler et al. do. Patterns extracted from across different questions may not be representative of the content of individual questions or assignments. Questions within each assignment are on the same topic. Table 2 contains a list of all questions from Assignment 5, which is about insertion, selection and merge sort algorithms. We therefore extract patterns containing content tokens and phrases for each assignment.
The data for each assignment is divided into train and test sets (80% train and 20% test). The train set contains a total of 1820 data points and the test set contains a total of 453 data points. The train data is used to extract content tokens and phrase patterns from sample answers.
Most short answer grading systems use term vectors as features (Higgins et al., 2014), since they work as a good baseline. Term vectors contain frequency of terms in an answer. We use a combination of term vectors and automatically extracted patterns as features.
We use a Random Forest regressor as the learner to build models. The learner is trained on the average of the human grades. We stack results from models created with each type of pattern to compute the final results. Results are listed in Table 3. Our approach's correlation over all the test data is 0.61. The RMSE is 0.86, and the median RMSE computed over questions is 0.77. The improvement in correlation of our stacked model over Mohler et al.'s performance of 0.52 is significant (one-tailed test, p-value = 0.02 < 0.05; thus the null hypothesis that this difference is a chance occurrence may be rejected). The correlation achieved by using just term vectors is 0.56 (the difference from Mohler et al.'s result is not significant). These results indicate that the use of patterns results in an improvement in performance.
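A sketch of the kind of setup described here is given below, assuming the compiled pattern regexes and the train/test splits are prepared elsewhere; the feature construction, helper names and hyperparameters are illustrative assumptions, not the code used in the experiments:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_extraction.text import CountVectorizer

def build_features(answers, patterns, vectorizer=None):
    """Combine term-vector features with Boolean pattern-match features."""
    if vectorizer is None:
        vectorizer = CountVectorizer().fit(answers)
    term_vectors = vectorizer.transform(answers).toarray()
    pattern_flags = np.array(
        [[1 if p.search(a) else 0 for p in patterns] for a in answers]
    )
    return np.hstack([term_vectors, pattern_flags]), vectorizer

def fit_and_predict(train_answers, train_grades, test_answers, patterns):
    """Train a Random Forest regressor on the average human grade and predict."""
    X_train, vec = build_features(train_answers, patterns)
    X_test, _ = build_features(test_answers, patterns, vectorizer=vec)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, train_grades)
    return model.predict(X_test)
```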
The above process was repeated at the granularity level of questions. Data points from each question were divided into train and test sets, and models were built for each training set. There were a total of 1142 training and 1131 test data points. Results from the stacked model are computed over all the test predictions. This model achieved a correlation of 0.61, and an RMSE of 0.88. The median RMSE computed over each of the questions is 0.82. As can be seen from Table 3 our stacked model performs better in terms of correlation, RMSE and median RMSE over questions than Mohler et al.'s best models. One of the reasons for improved performance could be that models were built over individual assignments or questions rather than over the entire data. Patterns are particularly effective when built over assignments containing the same type of responses. Short answer scoring can be very sensitive to the content of answers. Hence using data from across a variety of assignments could result in a poorly generalized model.
Conclusion
Automatically scoring short answers is difficult. For example, none of the Kaggle ASAP short answer scoring competitors managed to consistently reach the level of human-human reliability in scoring. The results of the Kaggle competition, however, do show that manually generated regular expressions are a promising approach for increasing performance. Regular-expression-like patterns are easily interpretable features that can be used by learners to boost short answer scoring performance. They capture semantic and contextual information contained within a text. Thus, determining the best ways to incorporate these patterns, as well as making it efficient to develop them, is critical to improving short answer scoring.
In this paper we introduce an automated approach to generate text patterns with limited human effort, and whose performance is comparable to manually generated patterns. Further, we ensure that the method is generalizable across datasets.
We generate patterns from rubrics and sample top-scoring answers. These patterns help capture the desired structure and semantics of answers and act as good features for grading short answers. Our approach achieves a QW Kappa of 0.78 on the Kaggle short answer scoring dataset, which is greater than the QW Kappa achieved by the best performing model that uses manually generated regular expressions. We also show that on Mohler et al. (2011)'s dataset our model achieves a correlation of 0.61 and a median RMSE of 0.77. This result is an improvement over Mohler et al. (2011)'s best published correlation of 0.52 and median RMSE of 0.86.
The fundamental difference between shear alpha viscosity and turbulent magnetorotational stresses
Numerical simulations of turbulent, magnetized, differentially rotating flows driven by the magnetorotational instability are often used to calculate the effective values of alpha viscosity that is invoked in analytical models of accretion discs. In this paper we use various dynamical models of turbulent magnetohydrodynamic stresses, as well as numerical simulations of shearing boxes, to show that angular momentum transport in MRI-driven accretion discs cannot be described by the standard model for shear viscosity. In particular, we demonstrate that turbulent magnetorotational stresses are not linearly proportional to the local shear and vanish identically for angular velocity profiles that increase outwards.
INTRODUCTION
It has long been recognized that molecular viscosity cannot be solely responsible for angular momentum transport in accretion discs. Shakura & Sunyaev (1973) offered an appealing solution to this problem by postulating a source of enhanced disc viscosity due to turbulence and magnetic fields. The standard accretion disc model rests on the idea that the stresses between adjacent disc annuli are proportional to the local shear, as in a Newtonian laminar shear flow, but that it is the interaction of large turbulent eddies that results in efficient transport. The idea that turbulent angular momentum transfer in accretion discs can be described in terms of an enhanced version of the molecular transport operating in (laminar) differentially rotating media has been at the core of the majority of studies in accretion disc theory and phenomenology ever since (see, e.g., Frank, King, & Raine 2002).
The origin of the turbulence that leads to enhanced angular momentum transport in accretion discs has been a matter of debate since the work of Shakura & Sunyaev (1973). The issue of whether hydrodynamic turbulence can be generated and sustained in astrophysical discs, given the large Reynolds numbers involved, is currently a matter of renewed interest. However, this idea has long been challenged by analytical (Ryu & Goodman 1992; Balbus & Hawley 2006), numerical (Balbus & Hawley 1997; Hawley, Balbus, & Winters 1999), and, more recently, experimental work (Ji, Burin, Schartman, & Goodman 2006).
During the last decade, it has become evident that the interplay between turbulence and magnetic fields is at a more fundamental level than originally conceived. There is now strong theoretical and numerical evidence suggesting that the process driving the turbulence in accretion discs is related to a magnetic instability that operates in the presence of a radially decreasing angular velocity profile. Since the appreciation of the relevance of this magnetorotational instability (MRI) to accretion physics (Balbus & Hawley 1991; see also Balbus & Hawley 1998 and Balbus 2003 for a more recent review), a variety of local (Hawley, Gammie, & Balbus 1995; Stone, Hawley, Gammie, & Balbus 1996; Brandenburg, Nordlund, Stein, & Torkelsson 1995; Brandenburg 2001; Sano, Inutsuka, Turner, & Stone 2004) and global (Armitage 1998; Hawley 2000, 2001; Hawley & Krolik 2001; Stone & Pringle 2001) numerical simulations have revealed that its long-term evolution gives rise to a turbulent state and provides a natural avenue for vigorous angular momentum transport.
The fact that the overall energetic properties of turbulent magnetohydrodynamic (MHD) accretion discs are similar to those of viscous accretion discs (Balbus & Papaloizou 1999) has led to the notion that angular momentum transport due to MRI-driven turbulence in rotating shearing flows can be described in terms of the alpha model proposed by Shakura & Sunyaev (1973). This, in turn, has motivated many efforts aimed at computing effective alpha values from numerical simulations (see, e.g., Gammie 1998; Brandenburg 1998, and references therein) in order to use them in large scale analytical models of accretion discs.
In this paper, we address in detail how the transport of angular momentum mediated by MHD turbulence depends on the magnitude of the local disc shear and contrast this result with the standard model for shear viscosity. We find that one of the fundamental assumptions on which the standard viscous disc model is based, i.e., that angular momentum transport is linearly proportional to the local shear, is not appropriate for describing turbulent MRI-driven accretion discs.
ALPHA VISCOSITY VS. TURBULENT MHD STRESSES
The equation describing the dynamical evolution of the mean angular momentum density of a fluid element, \bar{l}, in an axisymmetric, turbulent MHD accretion disc with tangled magnetic fields is

\frac{\partial \bar{l}}{\partial t} + \frac{1}{r}\frac{\partial}{\partial r}\left(r\,\bar{v}_r\,\bar{l}\right) = -\frac{1}{r}\frac{\partial}{\partial r}\left(r^{2}\,\bar{T}_{r\phi}\right) .    (1)

Here, the over-bars denote properly averaged values (see, e.g., Balbus & Hawley 1998; Balbus & Papaloizou 1999), \bar{v}_r is the radial mean flow velocity, and the quantity \bar{T}_{r\phi} represents the total stress acting on a fluid element as a result of the correlated fluctuations in the velocity and magnetic fields in the turbulent flow, i.e.,

\bar{T}_{r\phi} = \bar{R}_{r\phi} - \bar{M}_{r\phi} ,    (2)

where \bar{R}_{r\phi} and \bar{M}_{r\phi} stand for the r\phi-components of the Reynolds and Maxwell stress tensors, respectively. Equation (1) highlights the all-important role played by correlated velocity and magnetic-field fluctuations in turbulent accretion discs; if they vanish, the mean angular momentum density of a fluid element is conserved. In order for matter in the bulk of the disc to accrete, i.e., to lose angular momentum, the sign of the mean total stress must be positive. Note that the potential for correlated kinetic fluctuations to transport angular momentum in unmagnetized discs is still present in the hydrodynamic version of equation (1). However, the dynamical role played by the correlated velocity fluctuations in hydrodynamic flows is radically different from the corresponding role played by correlated velocity and magnetic field fluctuations in MHD flows, even when the magnetic fields involved are weak (Balbus & Hawley 1997). In order to calculate the structure of an accretion disc for which angular momentum flows according to equation (1), it is necessary to obtain a closed system of equations for the second-order correlations defining the total stress \bar{T}_{r\phi}. In this context, the original proposal by Shakura and Sunyaev (see also Lynden-Bell & Pringle 1974) can be seen as a simple closure scheme for the correlations defining the total turbulent stress in terms of mean flow variables (e.g., the pressure).
The model for angular momentum transport on which the standard accretion disc is based rests on two distinct assumptions. First, it is postulated that the vertically averaged stress exerted on any given disc annulus can be modeled as a shear viscous stress (Lynden-Bell & Pringle 1974), i.e., in cylindrical coordinates,

\bar{T}^{\mathrm{v}}_{r\phi} = -\nu_{\mathrm{turb}}\,\Sigma\, r\,\frac{d\Omega}{dr} ,    (5)

where \Sigma and \Omega stand for the vertically integrated disc density and the angular velocity at the radius r, respectively. This is a modification of the Newtonian model for the viscous stress between adjacent layers in a differentially rotating laminar flow (Landau & Lifshitz 1959); in this case, the coefficient \nu_{\mathrm{turb}} parametrizes the turbulent kinematic viscosity. In this model, the direction of angular momentum transport is always opposite to the angular velocity gradient. This is the essence of a shear-driven viscous disc. Second, on dimensional grounds, it is assumed that the viscosity coefficient can be parametrized as \nu_{\mathrm{turb}} \equiv \alpha c_s H. This is because the physical mechanism that allows for angular momentum transport is envisaged as the result of the interaction of turbulent eddies of typical size equal to the disc scale-height, H, on a turnover time of the order of H/c_s, where c_s is the isothermal sound speed. The parameter alpha is often assumed to be constant and smaller than unity. In the presence of a shear background, with shear parameter q \equiv -d\ln\Omega/d\ln r, the vertically integrated stress is then given by

\bar{T}^{\mathrm{v}}_{r\phi} = \alpha\, q\, \gamma\, \bar{P} ,    (6)

where \bar{P} = \Sigma c_s^{2}/\gamma stands for the average pressure, \gamma is the ratio of specific heats, and we have used the fact that the scale-height of a thin disc in vertical hydrostatic equilibrium is roughly H \sim c_s/\Omega. This parametrization of the coefficient of turbulent kinematic viscosity implies that the efficiency with which angular momentum is transported is proportional to the local average pressure. This is the idea behind the standard alpha-disc model.
It is worth mentioning that the expression usually employed to define the stress in alpha-models, i.e., \bar{T}^{\mathrm{v}}_{r\phi} = \alpha\bar{P}, only provides the correct order of magnitude for the stress in terms of the pressure for a Keplerian disc. Indeed, in this case, the shear parameter is equal to 3/2 \sim 1. However, the fact that the viscous stress is proportional to the local shear cannot be overlooked in regions of the disc where the local angular velocity can differ significantly from its Keplerian value. This is expected to be the case in the boundary layer around an accreting star or close to the marginally stable orbit around a black hole. These inner disc regions are locally characterized by different, and possibly negative, shear parameters q. More importantly, if the explicit dependence of the stress on the local shear is not considered, then equation (6) predicts unphysical, non-vanishing stresses for solid body rotation, q \equiv 0 (cf. Blaes 2004). More than a decade after the paper by Balbus & Hawley (1991), the MRI stands as the most promising driver of the turbulence thought to enable the accretion process. The stresses associated with this MRI-driven turbulence have long been considered as the physical mechanism behind the enhanced turbulent viscosity postulated in the standard model for shear-driven angular momentum transport. It is important to note, however, that there is no a priori reason to assume that the correlated fluctuations defining the total turbulent stress in MRI-driven turbulence are linearly proportional to the local shear. In fact, this assumption can be challenged on both theoretical and numerical grounds.
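A small numerical illustration of this point (not from the paper): under an alpha prescription of the form of equation (6), the stress is linear in q, so it vanishes for solid-body rotation and changes sign for angular velocity profiles increasing outwards. The parameter values below are arbitrary and chosen only for illustration.

```python
# Illustrative values (assumptions, not disc parameters from the paper).
alpha, gamma, P_bar = 0.01, 5.0 / 3.0, 1.0

def alpha_stress(q):
    """Alpha-model viscous stress, linear in the shear parameter q."""
    return alpha * gamma * q * P_bar

for q in (-0.5, 0.0, 1.5, 1.9):
    print(f"q = {q:+.1f}  ->  T_rphi^v / P = {alpha_stress(q) / P_bar:+.4f}")
```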
PREDICTIONS FROM STRESS MODELLING
There are currently few models that aim to describe the local dynamics of turbulent stresses in differentially rotating magnetized media by means of high-order closure schemes (Kato & Yoshizawa 1993, 1995; Ogilvie 2003; Pessah, Chan, & Psaltis 2006b). In these models, the total stress, \bar{T}_{r\phi} = \bar{R}_{r\phi} - \bar{M}_{r\phi}, is not prescribed, as in equation (6), but its value is calculated considering the local energetics of the turbulent flow. This is achieved by deriving a set of non-linear coupled equations for the various components of the Reynolds and Maxwell stress tensors. These equations involve unknown triple-order correlations among fluctuations, making necessary the addition of ad hoc closure relations.
Although the available models differ on the underlying physical mechanisms that drive the turbulence and lead to saturation, some important characteristics of the steady flows that they describe are qualitatively similar. In particular, all of the models predict that turbulent kinetic/magnetic cells in magnetized Keplerian discs are elongated along the radial/azimuthal direction, i.e., \bar{R}_{rr} > \bar{R}_{\phi\phi} while \bar{M}_{\phi\phi} > \bar{M}_{rr}. Furthermore, turbulent angular momentum transport is mainly carried by correlated magnetic fluctuations, rather than by their kinetic counterpart, i.e., -\bar{M}_{r\phi} > \bar{R}_{r\phi}. All of these properties are in general agreement with local numerical simulations. However, a distinctive quantitative feature of these models that concerns us here is that they make rather different predictions for how the stresses depend on the magnitude and sign of the angular velocity gradient.
In the remainder of this section, we highlight the most relevant physical properties characterizing the various models and assess the functional dependence of the total stress, \bar{T}_{r\phi}, on the sign and steepness of the local angular velocity profile. For convenience, we summarize in Appendix A the various sets of equations that define each of the models.
Kato & Yoshizawa 1995's Model
In a series of papers, Kato & Yoshizawa (1993, 1995) developed a model for hydromagnetic turbulence in accretion discs with no large scale magnetic fields (see Kato, Fukue, & Mineshige 1998 for a review, and Nakao 1997 for the inclusion of large scale radial and toroidal fields). In their closure scheme, triple-order correlations among fluctuations in the velocity and magnetic fields are modeled in terms of second-order correlations using the two-scale direct interaction approximation (Yoshizawa 1985; Yoshizawa, Itoh, & Itoh 2003), as well as mixing length concepts.
In the model described in Kato & Yoshizawa (1995), the shear parameter q appears explicitly only in the terms that drive the algebraic growth of the turbulent stresses. The physical mechanism that allows the stresses to grow is the shearing of magnetic field lines. This can be seen by ignoring all the terms that connect the dynamical evolution of the Reynolds and Maxwell stresses. When this is the case, the dynamics of these two quantities are decoupled. The Maxwell stress exhibits algebraic growth while the Reynolds stress oscillates if the flow is Rayleigh stable (i.e., q < 2). The growth of the magnetic stresses is communicated to the different components of the Reynolds stress via the tensor S_{ij} \equiv C^{S}_{1}\bar{R}_{ij} - C^{S}_{2}\bar{M}_{ij}, where C^{S}_{1} and C^{S}_{2} are model constants. A characteristic feature of this model is that the physical mechanism that leads to saturation is conceived as the escape of magnetic energy in the vertical direction. This process is incorporated phenomenologically by accounting for the leakage of magnetic energy with terms of the form -\beta\bar{M}_{ij}, where \beta stands for the escape rate. Although turbulent kinetic and magnetic dissipation act as sink terms in the equations for the various stress components (proportional to the model parameters \epsilon_G and \epsilon_M, respectively), if it were not for the terms accounting for magnetic energy escape, the system of equations would be linear. This means that, in order for a stable steady-state solution to exist, either the initial conditions or the constants defining the model will have to be fine-tuned.

Figure 1. Total dimensionless turbulent stress at saturation as a function of the shear parameter q \equiv -d\ln\Omega/d\ln r. The various lines show the predictions corresponding to the three models for turbulent MHD angular momentum transport discussed in §3. The data points correspond to the volume and time averaged MHD stresses calculated from the series of shearing box simulations described in §4. The diamonds and circles represent the results obtained in the isothermal (\gamma = 1) and adiabatic (\gamma = 5/3) cases, respectively. The vertical bar in the simulations corresponding to the shear parameter q = 3/2 shows the rms spread in the stresses as calculated from ten numerical simulations for a Keplerian disc. This spread, of roughly 20%, can be taken as a representative value for the typical rms spread in runs with different values of the parameter q for both the isothermal and adiabatic cases. The dotted line corresponds to a linear relationship between the stresses and the local shear, like the one assumed in the standard model for viscous angular momentum transport. All the quantities in the figure are normalized to unity for a Keplerian profile, i.e., for q = 3/2.
The functional dependence of the total stress on the shear parameter q for the model proposed in Kato & Yoshizawa (1995) is shown in Figure 1. The model predicts vanishingly small stresses for small values of |q|; the total stress \bar{T}_{r\phi} is a very strong function of the parameter q. Indeed, for q > 0, the model predicts \bar{T}_{r\phi} \sim q^{8}.
Ogilvie 2003's Model
Based on a set of fundamental principles constraining the nonlinear dynamics of turbulent flows, Ogilvie (2003) developed a model for the dynamical evolution of the Maxwell and Reynolds stresses. Five non-linear terms accounting for key physical processes in turbulent media are modeled by considering the form of the corresponding triple-order correlations, energy conservation, and other relevant symmetries.
The resulting model describes the development of turbulence in hydrodynamic as well as in magnetized non-rotating flows. An interesting feature is that, depending on the values of some of the model constants, it can develop steady hydrodynamic turbulence in rotating shearing flows. For differentially rotating magnetized flows with no mean magnetic fields, the physical mechanism that allows the stresses to grow algebraically is the shearing of magnetic field lines, as in the case of Kato & Yoshizawa (1995). The transfer of energy between turbulent kinetic and magnetic field fluctuations is mediated by the tensor c_{3}\bar{M}^{1/2}\bar{M}_{ij} - c_{4}\bar{R}^{-1/2}\bar{M}\bar{R}_{ij}, where \bar{R} and \bar{M} are the traces of the Reynolds and Maxwell stresses, respectively, and c_{3} and c_{4} are model constants. Note that, in this case, the terms that lead to communication between the different Reynolds and Maxwell stress components are intrinsically non-linear. The terms leading to saturation are associated with the turbulent dissipation of kinetic and magnetic energy and are given by -c_{1}\bar{R}^{1/2}\bar{R}_{ij} and -c_{5}\bar{M}^{1/2}\bar{M}_{ij}, respectively. The dependence of the total stress on the shear parameter for the model described in Ogilvie (2003) is shown in Figure 1. For angular velocity profiles satisfying 0 < q < 2, the stress behaves as \bar{T}_{r\phi} \sim q^{n} with n between roughly 2 and 3. Although the functional dependence of the total stress on negative values of the shear parameter q is different from the one predicted by the model described in Kato & Yoshizawa (1995), this model also generates MHD turbulence characterized by negative stresses for angular velocity profiles increasing outwards.
Pessah, Chan, & Psaltis 2006's Model
Motivated by the similarities exhibited by the linear regime of the MRI and the fully developed turbulent state (Pessah, Chan, & Psaltis 2006a), we have recently developed a local model for the growth and saturation of the Reynolds and Maxwell stresses in turbulent flows driven by the magnetorotational instability (Pessah, Chan, & Psaltis 2006b). Using the fact that the modes with vertical wave-vectors dominate the fast growth driven by the MRI, we obtained a set of equations to describe the exponential growth of the different stress components. By proposing a simple phenomenological model for the triple-order correlations that lead to the saturated turbulent state, we showed that the steady-state limit of the model describes successfully the correlations among stresses found in numerical simulations of shearing boxes (Hawley, Gammie, & Balbus 1995).
In the model described in Pessah, Chan, & Psaltis (2006b), a new set of correlations couples the dynamical evolution of the Reynolds and Maxwell stresses and plays a key role in developing and sustaining the magnetorotational turbulence. In contrast to the two previous cases, the tensor connecting the dynamics of the Reynolds and Maxwell stresses cannot be written in terms of \bar{R}_{ij} or \bar{M}_{ij}. This makes it necessary to incorporate additional dynamical equations for these new correlations. In this model, all the second-order correlations exhibit exponential (as opposed to algebraic) growth for shear parameters 0 < q < 2, in agreement with numerical simulations. Incidentally, this is the only case in which the shear parameter, q, plays an explicit role in connecting the dynamics of the Reynolds and Maxwell stresses. The terms that lead to non-linear saturation are proportional to -(\bar{M}/\bar{M}_{0})^{1/2}, where \bar{M} is the trace of the Maxwell stress and \bar{M}_{0} is a characteristic energy density set by the local disc properties.
The functional dependence of the total turbulent stress on the local shear for the model developed in Pessah, Chan, & Psaltis (2006b) is shown in Figure 1. For angular velocity profiles decreasing outwards, i.e., for q > 0, the stress behaves like \bar{T}_{r\phi} \sim q^{n} with n between roughly 3 and 4. For angular velocity profiles increasing outwards, i.e., for q < 0, the stress vanishes identically. Note that this is the only model that is characterized by the absence of transport for all negative values of the shear parameter q.
It is worth mentioning that in all three high-order closure schemes described above, the pressure, $\bar{P}$, does not appear explicitly in the equations defining the models. However, it does play a role in setting the overall scale at which the stresses saturate. This is because the pressure provides a characteristic velocity (e.g., the sound speed) or a characteristic length (e.g., the disc scale height) which in turn determine the saturation level of the stresses $\bar{T}_{r\phi}$. In order to compare the predictions of how the dimensionless stresses depend on the local shear independently of other factors, we normalized the quantity $\bar{T}_{r\phi}/\bar{P}$ predicted by each model in Figure 1 with the values corresponding to the Keplerian cases with the same pressure. Figure 1 illustrates the sharp contrast between the functional dependence of the saturated stresses predicted by all three MHD models with respect to the standard shear viscous stress defined in equation (5). It is remarkable that, despite the fact that the various models differ in their detailed structure, all of them predict a much steeper functional dependence of the stresses on the local shear. Indeed, for angular velocity profiles decreasing outwards, they all imply $\bar{T}_{r\phi} \sim q^n$ with $n \gtrsim 2$. The predictions of the various models differ more significantly for angular velocity profiles increasing outwards. In this case, the models developed in Kato & Yoshizawa (1995) and Ogilvie (2003) lead to negative stresses, while the model developed in Pessah, Chan, & Psaltis (2006b) predicts vanishing stresses.
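Concretely, each curve plotted in Figure 1 can be read as the ratio (our restatement of the normalization just described)
$$ \left.\frac{\bar{T}_{r\phi}}{\bar{P}}\right|_{q} \Bigg/ \left.\frac{\bar{T}_{r\phi}}{\bar{P}}\right|_{q=3/2}, $$
so that every model curve passes through unity at the Keplerian shear $q = 3/2$.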
RESULTS FROM NUMERICAL SIMULATIONS
There have been only a few numerical studies to assess the properties of magnetorotational turbulence for different values of the local shear. Abramowicz, Brandenburg, & Lasota (1996) carried out a series of numerical simulations employing the shearing box approximation to investigate the dependence of turbulent magnetorotational stresses on the shear-to-vorticity ratio. Although the number of angular velocity profiles that they considered was limited, their results suggest that the relationship between the turbulent MHD stresses and the shear is not linear. On the other hand, Hawley, Balbus, & Winters (1999) carried out a series of shearing box simulations varying the shear parameter from q = 0.1 up to q = 1.9 in steps of ∆q = 0.1 and reported on the dependence of the Reynolds and Maxwell stresses on the magnitude of the local shear, but not on the corresponding dependence of the total stress.
In order to investigate the dependence of MRI-driven turbulent stresses on the sign and magnitude of the local shear, we modified a version of ZEUS-3D to allow for angular velocity profiles increasing outwards (i.e., characterized by shear parameters q < 0). ZEUS is a publicly available code and is based on an explicit finite difference algorithm on a staggered mesh. A detailed description of this code can be found in Stone & Norman (1992a,b) and Stone, Mihalas, & Norman (1992).
The Shearing Box Approximation
The shearing box approximation has proven to be fruitful in studying the local characteristics of magnetorotational turbulence from both the theoretical and numerical points of view. The local nature of the MRI allows us to concentrate on scales much smaller than the scale height of the accretion disc, H, and regard the background flow as essentially homogeneous in the vertical direction.
In order to obtain the equations describing the dynamics of a compressible MHD fluid in the shearing box limit, we consider a small box centered at the radius $r_0$ and orbiting the central object in corotation with the disc at the local speed $\mathbf{v}_0 = r_0 \Omega_0 \check{\phi}$. The shearing box approximation consists of a first-order expansion in $r - r_0$ of all the quantities characterizing the flow. The goal of this expansion is to retain the most important terms governing the dynamics of the MHD fluid in a locally Cartesian coordinate system (see, e.g., Goodman & Xu 1994; Hawley, Gammie, & Balbus 1995). This is a good approximation as long as the magnetic fields involved are subthermal (Pessah & Psaltis 2005). The resulting set of equations is then given by equations (8)-(11), where $\rho$ is the density, $\mathbf{v}$ is the velocity, $\mathbf{B}$ is the magnetic field, $P$ is the gas pressure, and $E$ is the internal energy density. In writing this set of equations, we have neglected the vertical component of gravity and defined the local Cartesian differential operator, where $\check{r}$, $\check{\phi}$, and $\check{z}$ are coordinate-independent, orthonormal basis vectors corotating with the background flow at $r_0$. Note that the third and fourth terms on the right-hand side of equation (9) account for the Coriolis force, present in the rotating frame, and the radial component of the tidal field, respectively. We close the set of equations (8)-(11) by assuming an ideal gas law with $P = (\gamma - 1)E$.
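The explicit equations (8)-(11) are not reproduced in the text above; a standard form of the compressible MHD shearing-box equations consistent with this description (our reconstruction, not necessarily the authors' exact notation) is
$$ \frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{v}) = 0, $$
$$ \frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v}\cdot\nabla)\,\mathbf{v} = -\frac{1}{\rho}\,\nabla\!\left(P + \frac{B^2}{8\pi}\right) + \frac{(\mathbf{B}\cdot\nabla)\,\mathbf{B}}{4\pi\rho} - 2\,\mathbf{\Omega}_0\times\mathbf{v} + 2\,q\,\Omega_0^2\,(r - r_0)\,\check{r}, $$
$$ \frac{\partial \mathbf{B}}{\partial t} = \nabla\times(\mathbf{v}\times\mathbf{B}), $$
$$ \frac{\partial E}{\partial t} + \nabla\cdot(E\,\mathbf{v}) = -P\,\nabla\cdot\mathbf{v}, $$
with the local Cartesian operator $\nabla = \check{r}\,\partial_r + \check{\phi}\,(1/r_0)\,\partial_\phi + \check{z}\,\partial_z$ and $\mathbf{\Omega}_0 = \Omega_0\,\check{z}$; in this form the third and fourth terms on the right-hand side of the momentum equation are the Coriolis and tidal terms mentioned above.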
Numerical Set Up
We set the radial, azimuthal, and vertical dimensions of the simulation domain to Lr = 1, Lφ = 6, and Lz = 1 and consider a grid of 32 × 192 × 32 zones. This resolution corresponds to the standard resolution used in most shearing box simulations carried out to date (see, e.g., Hawley, Gammie, & Balbus 1995; Sano, Inutsuka, Turner, & Stone 2004).
The density scale in the shearing box is arbitrary and we choose ρ0 = 1 as in, e.g., Hawley, Gammie, & Balbus (1995) and Sano, Inutsuka, Turner, & Stone (2004). We consider the case of zero net magnetic flux through the vertical boundaries by defining the initial magnetic field according to $\mathbf{B} = B_0 \sin[2\pi(r - r_0)/L_r]\,\check{z}$. The plasma beta in all the simulations that we perform is $\beta = P/(B_0^2/8\pi) = 200$, so the magnetic field is highly subthermal in the initial state. The initial velocity field that corresponds to the steady-state solution is $\mathbf{v} = -q\,\Omega_0 (r - r_0)\,\check{\phi}$ and we choose the value $\Omega_0 = 10^{-3}$ in order to set the time scale in the shearing box. Note that for q = 3/2, this velocity field is simply the first-order expansion of a steady Keplerian disc around $r_0$. In order to excite the MRI, we introduce random perturbations at the 0.1% level in every grid point over the background internal energy and velocity field in all of the cases.
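For illustration only, the initial state described above can be sketched as follows (this is not the authors' code; the actual runs used the Fortran code ZEUS-3D, and the pressure normalization P0 below is an assumption, since only the plasma beta and the density scale are specified in the text):

import numpy as np

# Illustrative sketch of the initial conditions; the real simulations used ZEUS-3D on 32 x 192 x 32 zones.
Lr, Lphi, Lz = 1.0, 6.0, 1.0
nr = 32
r = np.linspace(-Lr / 2.0, Lr / 2.0, nr)       # radial coordinate measured from r0

rho0 = 1.0                                      # arbitrary density scale
Omega0 = 1.0e-3                                 # sets the time scale of the box
q = 1.5                                         # Keplerian shear as the fiducial case
beta = 200.0                                    # plasma beta of the initial field
P0 = 1.0e-6                                     # assumed initial gas pressure (not stated explicitly in the text)

B0 = np.sqrt(8.0 * np.pi * P0 / beta)           # from beta = P / (B0^2 / 8 pi)
Bz = B0 * np.sin(2.0 * np.pi * r / Lr)          # zero net vertical flux through the box
v_phi = -q * Omega0 * r                         # linearized background shear flow
gamma = 5.0 / 3.0                               # adiabatic run; the isothermal run used gamma = 1.001
E0 = P0 / (gamma - 1.0)                         # internal energy of an ideal gas

rng = np.random.default_rng(0)
v_phi = v_phi + 1.0e-3 * np.abs(v_phi).max() * rng.uniform(-1.0, 1.0, nr)   # 0.1% velocity perturbations
E = E0 * (1.0 + 1.0e-3 * rng.uniform(-1.0, 1.0, nr))                        # 0.1% internal energy perturbations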
Results
Keeping all the numerical settings unchanged, we perform two suites of numerical simulations with different values of the shear parameter q, from q = −1.9 up to q = 1.9 in steps of ∆q = 0.1. The two sets of runs differ only in the value of the adiabatic index γ; we considered an isothermal case, with γ = 1.001, and an adiabatic case, with γ = 5/3. For each value of the shear parameter q, we run each simulation for 150 orbits. We then compute a statistically meaningful value for the saturated stress $\bar{T}_{r\phi}$ and pressure $\bar{P}$ by averaging over the last 100 orbits in each simulation (Winters, Balbus, & Hawley 2003; Sano, Inutsuka, Turner, & Stone 2004). Figure 1 shows the dimensionless stress $\bar{T}_{r\phi}/\bar{P}$ obtained for both the isothermal and the adiabatic cases (represented with diamonds and circles, respectively) as a function of the local shear. It is evident from this figure that the turbulent magnetorotational stresses are not proportional to the local shear in either the isothermal or the adiabatic cases. There is indeed a strong contrast with respect to the standard assumption of a linear relationship between the stresses and the local shear (dotted line in the same figure) for both positive and negative shear profiles. For angular velocity profiles that decrease with increasing radius, i.e., for q > 0, all of the turbulent states are characterized by positive mean stresses and thus by outward transport of angular momentum. In these cases, stronger shear results in larger saturated stresses, but the functional dependence of the total stress on the local shear is not linear. For angular velocity profiles that increase with increasing radius, i.e., for q < 0, all the numerical simulations reach the same final state regardless of the magnitude of the shear parameter q. The stresses resulting from the initial seed perturbations (at the 0.1% level) quickly decay to zero. This is in sharp contrast with the large negative stresses that are implied by the standard Newtonian model for the shear viscous stress in equations (5) and (6).
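A minimal sketch of this averaging step, assuming hypothetical per-orbit time series of the volume-averaged stress and pressure (not the authors' analysis scripts):

import numpy as np

def saturated_dimensionless_stress(stress_per_orbit, pressure_per_orbit, n_avg=100):
    """Average T_rphi / P over the last n_avg orbits of a run (here, the last 100 of 150)."""
    stress = np.asarray(stress_per_orbit, dtype=float)[-n_avg:]
    pressure = np.asarray(pressure_per_orbit, dtype=float)[-n_avg:]
    ratio = stress / pressure
    return ratio.mean(), ratio.std(ddof=1) / np.sqrt(ratio.size)   # mean and standard error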
In order to explore further the decay of MHD turbulence found for angular velocity profiles increasing outwards, we also performed the following numerical experiment. We seeded a run with a shear parameter q = −3/2 with perturbations at the 100% level of the background internal energy and velocity field. In this case, the timescale for the decay of the initial turbulent state was longer than the one observed in the corresponding run seeded with perturbations at the 0.1% level. The final outcome was nonetheless the same. After a few orbits, the stresses decayed sharply and remained vanishingly small until the end of the run at 150 orbits. This result highlights the strong stabilizing effect of a positive angular velocity gradient on the dynamical evolution of the turbulent stresses due to tangled magnetic fields. This behavior can be understood in terms of the joint restoring action due to magnetic tension and Coriolis forces acting on fluid elements displaced from their initial orbits.
DISCUSSION AND IMPLICATIONS
In this paper, we investigated the dependence of the turbulent stresses responsible for angular momentum transport in differentially rotating, magnetized media on the local shear as parametrized by q = −d ln Ω/d ln r. The motivation behind this effort lies in understanding whether one of the fundamental assumptions on which much of the standard accretion disc theory rests, i.e., the existence of a linear relationship between these two quantities, holds when the MHD turbulent state is driven by the MRI. We addressed this problem both in the context of current theoretical turbulent stress models as well as using the publicly available three-dimensional numerical code ZEUS.
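As a concrete reading of this parametrization, for a power-law angular velocity profile one has
$$ q \equiv -\frac{d\ln\Omega}{d\ln r}, \qquad \Omega \propto r^{-3/2} \;\Rightarrow\; q = \frac{3}{2}, $$
so a Keplerian disc corresponds to q = 3/2, rigid rotation to q = 0, and an angular velocity increasing outwards (as near the surface of a slowly rotating accreting star) to q < 0.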
From the theoretical point of view, we have seen that, despite their different structures, all of the available high-order closure schemes (Kato & Yoshizawa 1993, 1995; Ogilvie 2003; Pessah, Chan, & Psaltis 2006b) predict stresses whose functional dependence on the local shear differs significantly from the standard model for angular momentum transport. In order to settle this result on firmer grounds, we performed a series of numerical simulations of MRI-driven turbulence in the shearing box approximation for different values of the local shear characterizing the background flow. The main conclusion to be drawn from our study is that turbulent MHD stresses in differentially rotating flows are not linearly proportional to the local background disc shear. This finding challenges one of the central assumptions in standard accretion disc theory, i.e., that the total stress acting on a fluid element in a turbulent magnetized disc can be modeled as a (Newtonian) viscous shear stress.
We find that there is a strong contrast between the stresses produced by MHD turbulence and the viscous shear stresses regardless of whether the disc angular velocity decreases or increases outwards. In the former case, i.e., for q > 0 as in a Keplerian disc, the total turbulent stress generated by tangled magnetic fields is not linearly proportional to the local shear, q, as it is assumed in standard accretion disc theory. On the other hand, for angular velocity profiles increasing outwards, i.e., for q < 0 as in the boundary layer close to a slowly rotating accreting stellar object, MHD turbulence driven by the MRI fails to transport angular momentum, while viscous shear stresses lead to enhanced negative stresses.
The functional dependence of the local stress on the shear profile determines the topology of transonic accretion flows onto black holes and the radial position of the corresponding critical points (Abramowicz & Kato 1989; Kato, Fukue, & Mineshige 1998; Afshordi & Paczynski 2003). It also plays a critical role in determining the global structure of accretion flows onto stellar objects, determining the exchange of angular momentum between the disc and the gravitating body and even the angular velocity distribution itself (Popham & Narayan 1991). If magnetorotational turbulence is the main mechanism enabling angular momentum transport in accretion discs, then the dependence of the stress on the local shear discussed in this paper can have important implications for the global structure and long-term evolution of accretion discs around proto-stars, proto-neutron stars, accreting binaries, and active galactic nuclei.
ACKNOWLEDGMENTS
We are grateful to Jim Stone for useful discussions and for helping us with the necessary modifications to the ZEUS code. We thank Gordon Ogilvie for his detailed comments on an earlier version of this manuscript. We have also benefited from fruitful discussions with Eric Blackman, Omer Blaes, Phil Armitage, and Andrew Cumming on different aspects of stress modelling and accretion theory. We also thank an anonymous referee for useful comments and constructive criticisms. MEP was supported through a Jamieson Fellowship at the Astronomy Department at the UA during part of this study. This work was partially supported by NASA grant NAG-513374.
APPENDIX A: HIGH-ORDER CLOSURE MODELS
We summarize here the various sets of equations defining the models described in §3. In order to simplify the comparison between the different models, we adopt the notation introduced in §2, even if this was not the original notation used by the corresponding authors. With the same motivation, we work with dimensionless sets of equations obtained by using the inverse of the local angular frequency ($\Omega_0^{-1}$) as the unit of time and the relevant characteristic speeds or lengths involved in each case. We also provide here the values of the various model constants that were adopted in order to obtain the total turbulent stresses as a function of the local shear shown in Figure 1.
A1 Kato & Yoshizawa's Model
The relevant characteristic speed used to define dimensionless variables in this model is the local sound speed.
In this model, the flow of turbulent energy from kinetic to magnetic field fluctuations is determined by a tensor $\bar{S}_{ij}$, in which $C^S_1$ and $C^S_2$ are model constants. This quantity plays the most relevant role in connecting the dynamics of the different components of the Reynolds and Maxwell tensors. If $\bar{S}_{ij}$ is positive, the interaction between the turbulent fluid motions and tangled magnetic fields enhances the latter. The pressure-strain tensor is modeled in the framework of the two-scale direct interaction approximation in terms of $\bar{R}$ and $\bar{M}$, the traces of the Reynolds and Maxwell stresses, respectively. This tensor accounts for the redistribution of turbulent kinetic energy along the different directions and tends to make the turbulence isotropic. The dissipation rates are estimated using mixing length arguments and are modeled with the dimensionless constants $\nu_G$ and $\nu_M$. The escape of magnetic energy in the vertical direction is taken into account phenomenologically via the terms proportional to the (dimensionless) rate $\beta = X \bar{M}^{1/2}$, with $0 < X < 1$.
The values of the constants that we considered in order to obtain the curve shown in Figure 1 are the same as the ones considered in case 2 in Kato & Yoshizawa (1995), i.e., $C^\Pi_0 = C^\Pi_1 = C^\Pi_2 = C^S_1 = C^S_2 = 0.3$ and $\nu_G = \nu_M = 0.03$. We further consider X = 0.5 as a representative case.
A2 Ogilvie's Model
Here $\bar{R}$ and $\bar{M}$ denote the traces of the Reynolds and Maxwell tensors, and we have defined the quantities $c_1, \ldots, c_5$, which are related to the positive dimensionless constants $C_1, \ldots, C_5$ defined by Ogilvie via $C_i = L c_i$, where $L$ is a vertical characteristic length (e.g., the thickness of the disc). Note that Ogilvie's original equations are written in terms of Oort's first constant $A = q/2$ (in dimensionless units).
In this model, the constant $c_2$ dictates the return to isotropy expected to be exhibited by freely decaying hydrodynamic turbulence. The terms proportional to $c_3$ and $c_4$ transfer energy between kinetic and magnetic turbulent fields. The constants $c_1$ and $c_5$ are related to the dissipation of turbulent kinetic and magnetic energy, respectively. Note that, in order to obtain the representative behavior of the total turbulent stress as a function of the local shear that is shown in Figure 1, we set $c_1, \ldots, c_5 = 1$.
A3 Pessah, Chan, & Psaltis 2006's Model
We have recently developed a local model for the growth and saturation of the Reynolds and Maxwell stresses in turbulent flows driven by the magnetorotational instability that leads to exponential growth of the stresses and can account for a number of correlations observed in numerical simulations (Pessah, Chan, & Psaltis 2006b). In this model, the Reynolds and Maxwell stresses are not only coupled by the same linear terms that drive the turbulent state in the previous two models, but there is also a new tensorial quantity that couples their dynamics further. This new tensor cannot be written in terms of $\bar{R}_{ij}$ or $\bar{M}_{ij}$, making it necessary to incorporate additional dynamical equations.
The set of equations defining this model is written using dimensionless variables based on the mean Alfvén speed $\bar{v}_{Az} = \bar{B}_z/\sqrt{4\pi\rho_0}$, with $\bar{B}_z$ the local mean magnetic field in the vertical direction and $\rho_0$ the local disc density. The tensor $\bar{W}_{ik} = \overline{\delta v_i\, \delta j_k}$ is defined in terms of correlated fluctuations in the velocity and current ($\delta \mathbf{j} = \nabla \times \delta\mathbf{B}$) fields. The (dimensionless) wavenumber $k_{\max}$ corresponds to the scale at which the MRI-driven fluctuations exhibit their maximum growth, and $\bar{M}_0$ is a characteristic energy density set by the local disc properties, with H the disc thickness. The parameters ζ ≃ 0.3 and ξ ≃ 11 are model constants which are determined by requiring that the Reynolds and Maxwell stresses satisfy the correlations observed in numerical simulations of Keplerian shearing boxes with q = 3/2 (Hawley, Gammie, & Balbus 1995).
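The defining expression for this wavenumber is not reproduced above; for reference, the standard scale of maximum MRI growth for a vertical field, which we believe is the quantity intended (quoted here in units of $\Omega_0/\bar{v}_{Az}$), is
$$ k_{\max} = \sqrt{1 - \frac{\kappa^4}{16\,\Omega_0^4}} = \frac{\sqrt{q\,(4-q)}}{2}, \qquad \kappa^2 = 2\,(2-q)\,\Omega_0^2, $$
which gives $k_{\max} = \sqrt{15}/4 \simeq 0.97$ for a Keplerian profile with q = 3/2.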
Chimeric Antigen Receptor Engineered T (CAR-T) Cells and Cancer Therapy
Gene therapy has demonstrated significant potential as a cancer therapy in the last few years. The greatest successes have been achieved by genetic modification of autologous patient T cells with chimeric antigen receptors (CARs), which are novel, synthetic receptors composed of the antigen-binding domain of a B cell receptor fused to the signaling elements associated with a T cell receptor [1].
Structure of CAR
A CAR is composed of an extracellular antigen-binding domain derived from an antibody (mostly a single-chain variable fragment of an antibody), a hinge region, a transmembrane domain, and an intracellular signaling chain, frequently the TCR-derived CD3ζ chain [3].
Most investigators use the hinge and transmembrane domains of CD8 or CD28 [4]. The antibody domain mediates target recognition independently of major histocompatibility complex and enables the targeting of a plethora of antigens, including proteins, carbohydrates and gangliosides, as long as the antigen is present on the surface of the target cell [5].
Retroviruses are used to introduce the CAR constructs into T cells. One potential disadvantage of a retrovirus as vehicle is the potential for silencing of CAR expression based on silencing of the long terminal repeats. This could be an advantage of CAR-based therapies if they are used as a bridge to another definitive treatment such as allogeneic bone marrow transplant [4]. Lentiviral vectors are potentially safer than retroviruses based on integration preferences examined in hematopoietic stem cells, though it is not clear that this applies to primary human T cells. Use of specific promoters in combination with lentiviral transduction has enabled sustained surface expression of CARs on transduced T cells; this likely extends the survival of functional CAR T cells in vivo [4].
The Generations of CARs
CARs are described as first-, second-, or third-generation [6]. The "generations" of CARs typically refer to the intracellular signaling domains they contain. First-generation CARs include only CD3ζ as an intracellular signaling domain. They lack costimulatory properties. Second-generation CARs include a single costimulatory domain derived from either CD28 or 4-1BB to fully activate T cells. Third-generation CARs include two costimulatory domains, such as CD28, 4-1BB, and other costimulatory molecules in tandem [4]. Preclinical experiments suggest that third-generation CARs may be more potent than second-generation CARs [7].
Clinical studies of first-generation CAR T cells showed that lymphodepleting chemotherapy may enhance CAR T-cell responses by eradicating regulatory T cells, eliminating other immune cells that may compete for homeostatic cytokines, and enhancing antigen-presenting cell activation [8].
Clinical Studies of CAR T Cells
CAR T cells and hematologic malignancies
The most investigated target for CARs is CD19 because of its common expression in most B cell leukemias and lymphomas, and its absence in all normal tissues other than B cell lineage [9]. CD19-targeted CAR constructs have demonstrated consistently high antitumor efficacy in children and adults with relapsed B-cell acute lymphoblastic leukemia (B-ALL), chronic lymphocytic leukemia (CLL), and B-cell non-Hodgkin lymphoma (B-NHL) in the non-transplantation setting [8].
ALL: Dramatic results have been reported with use of CAR T cells in ALL [10]. Anti-CD19 CAR T cells can lead to complete response rates of up to 90 % in heavily pretreated ALL patients. These high response rates are tempered by the requirement for individual product manufacturing for each patient, the high costs of gene transfer technology and emerging problems such as limited persistence in some patients and antigen-loss relapses.
Preliminary results indicate that molecules other than CD19 can also be effectively targeted [11].
a) NHL: Promising results have been seen in NHL patients [10]. b) CLL: Limited clinical efficacy of CAR T cells has been observed in CLL patients compared to B-ALL. Potential explanations include the limited persistence of CAR T cells in CLL patients, the immuno-inhibitory tumor microenvironment of CLL, the lymph-node-based disease in CLL compared to the mostly bone-marrow-based nature of B-ALL, and the lower tumor burden at treatment in B-ALL patients. Potential methods to overcome these possible barriers include incorporation of other signaling domains or other immune effectors into the CAR T cells. Limitations of persistence may be overcome by incorporating co-stimulatory domains such as CD28, CD137 or CD134 into third-generation CARs or by directing secretion of pro-inflammatory cytokines such as IL12 in a second-generation CAR [1].
c) Acute myeloid leukemia: A CD33-specific CAR has been developed [12] and is effective in preclinical experiments. However, this approach needs further evaluation, as CD33 is a pan-myeloid marker and so CD33-CAR-redirected T cells may lead to profound and prolonged myeloid depletion. The isoform variant 6 of CD44 (CD44v6) represents another possible target for CAR T cells in myeloid leukemias. Preliminary results show that CD44v6-CAR-redirected T cells had antitumor effects against CD44v6-positive malignancies [13]. d) Multiple myeloma (MM): A recent case report described the use of CD19 CAR T cells after a second ASCT in a MM patient. The patient sustained a CR without evidence of recurrence at 12 months [10]. Allogeneic CD19-directed CAR T cells (derived from donor lymphocytes) have induced remissions without induction of GVHD in post-allo-HCT relapsed patients. Thus, the allo-HCT or ASCT platform could be adapted to subsequent CAR T technology [14]. Given the extremely low expression of CD19 on the patient's neoplastic plasma cells [10], several promising antigenic targets have been identified for the development of anti-MM CARs, such as B-cell maturation antigen, CD138, kappa light chains and CS-1 [14].
CAR-T cells and minimal residual disease
CAR-T cell-based therapy may be better suited to minimal residual disease, or as an adjuvant for patients at high risk of relapse who have responded to salvage treatment or after transplant [7].
CAR T cells and transplantation
CAR T cells targeting CD19 have served as a bridge to transplantation or have been used as salvage for patients who relapse or progress after transplantation. Ongoing studies are examining the role of combining these therapies with stem cell transplantation to further improve outcomes in lymphoma and MM patients [10].
CAR T-cell dose
The cell dose for patients with morphologic disease is lower than for those with MRD (1 × 10^6 vs. 3 × 10^6 19-28z CAR T cells per kg). A higher CAR T-cell dose is well tolerated in patients with MRD [8].
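As a worked illustration with a hypothetical 70 kg patient, the lower dose corresponds to 1 × 10^6 cells/kg × 70 kg = 7 × 10^7 CAR T cells for morphologic disease, versus 3 × 10^6 cells/kg × 70 kg = 2.1 × 10^8 cells in the MRD setting.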
Toxicities of CD19-targeted CAR T cells
All trials of CD19-targeted CAR T cells have reported similar treatment related toxicities, particularly cytokine release syndrome (CRS), neurological toxicities, and B-cell aplasia, although severity of observed toxicities differs. CRS reflects a systemic inflammatory response syndrome hours to days following CAR T-cell infusion, characterized by elevations of proinflammatory cytokines and T-cell activation and expansion [8]. Clinical features include fever, myalgia, malaise, and, in more severe cases, a capillary leak syndrome associated with hypoxia, hypotension, and occasionally renal dysfunction and coagulopathy. Severe CRS may be treated with the IL-6 receptor inhibitor tocilizumab or with lymphotoxic corticosteroids [8].
CAR T cells and solid tumors
Unfortunately, the clinical results in solid tumors have been much less encouraging. Specific target antigens on solid tumors are more difficult to identify. Roughly 30 solid tumor antigens are being evaluated for CAR T-cell therapy. The two most positive trials reported were GD2 CARs to target neuroblastoma and HER2 CARs for sarcoma [15].
The solid tumor landscape presents unique barriers that are absent in hematological malignancies. Even after successful trafficking and infiltration, T cells must surmount challenges conferred by: (i) an environment characterized by oxidative stress, nutritional depletion, acidic pH and hypoxia; (ii) the presence of suppressive soluble factors and cytokines; (iii) suppressive immune cells (regulatory T cells, myeloid-derived suppressor cells, tumor-associated macrophages or neutrophils); and (iv) T-cell-intrinsic negative regulatory mechanisms (e.g., upregulation of cytoplasmic and surface inhibitory receptors) and overexpression of inhibitory molecules [15].
TRUCKs (T cells redirected for universal cytokine mediated killing)
TRUCKs are CAR-redirected T cells used as vehicles to constitutively produce, or release upon induction, a product (mostly a proinflammatory cytokine) in the targeted tissue. CAR T cells, when activated by their CAR, deposit IL-12 in the targeted tumor lesion, which in turn attracts an innate immune cell response toward cancer cells that are invisible to CAR T cells [5]. TRUCKs exhibited remarkable efficacy against solid tumors with diverse cancer cell phenotypes, suggesting their evaluation in clinical trials [5].
Experience of Great Britain in organization of healthcare system for pharmaceutical provision with medicines for privileged categories of citizens
Background. Health is the highest value not only in the context of the perception of a person, but also of society and the state as a whole. According to the content of international documents approved by the UN General Assembly, the WHO, and the World Medical Association, governments should be responsible for the health of the population and ensure the implementation of the human right to life and health. Previously, the state of affairs in Ukraine regarding the pharmaceutical provision of privileged categories of citizens was studied. However, to form a powerful and effective health care system in Ukraine for the pharmaceutical provision of privileged categories of citizens, it is useful to analyze the experience of providing health care in economically developed countries. The purpose of the research was to study the experience of Great Britain in organizing a healthcare system for the pharmaceutical provision of medicines to privileged population groups. Materials and methods. Common methods of normative legal, documentary, retrospective and comparative analysis were used to achieve this purpose. Results. The system of financing health care in Great Britain provides for financing mainly from the state budget, with funds distributed along the management vertical from the highest level to the lower ones. The taxes that make up about 90% of the health care system budget are the financial basis of the national health system of Great Britain. For comparison, only 7.5% comes from employers. Therefore, one can argue that the healthcare system of Great Britain is in fact almost completely financed by contributions from taxpayers and by the government. Three main models of financing health care are distinguished in world practice: private, budget (Beveridge model), and mixed (Bismarck model). A private financing model operates through the creation of sustainable competition between healthcare facilities. The share of private insurance is about 40% of total costs, and the patient himself covers the cost of pharmaceutical provision. A private financing model is typical for such developed countries as the United States of America and Japan. The budget model of funding, or the Beveridge model, means covering a large part of the costs of pharmaceutical provision by state institutions. This model is typical for Great Britain. A mixed financing model, or Bismarck model, is based on three foundations: the state, enterprises, and the personal funds of the citizen. This system of insurance financing is typical for Germany, Austria and France. Private medicine in Great Britain is among the most advanced and most expensive in the world. There are about 300 non-state hospitals in the country, which are licensed by a local National Health Service unit and inspected twice a year. There are no queues, and medical care is provided in full and to the extent necessary. The services of private doctors and clinics are paid for either by insurance companies or by patients themselves. Large companies in Britain offer health insurance as an additional paycheck bonus. Conclusions. According to the experience of Great Britain, compulsory medical insurance takes place in countries with predominantly state funding. The Beveridge model is widespread in many countries where the state provides coverage of 80% or more of health care costs (Canada, Australia, Greece, Sweden, and Spain).
Despite significant changes in the health care system of Great Britain, patients retain the opportunity to choose the type of insurance and to take advantage of benefits for the purchase of medicines and medical products, which has increased competition between health facilities and, accordingly, improved the quality and speed of pharmaceutical provision for patients. Thus, the budget medicine of Great Britain is a priority for many countries of the world and a guarantee of state financing of pharmaceutical provision for privileged categories of citizens, regardless of income level and social status.
Introduction
Health is the highest value not only in the context of the perception of a person, but also society and the state as a whole. According to the content of international documents approved by the UN General Assembly, the WHO, the World Medical Association, governments should be responsible for the health of the population, ensure the implementation of the human right to life and health [6].
Previously, the state of affairs in Ukraine regarding the pharmaceutical provision of privileged categories of citizens was studied [9].
However, to form a powerful and effective health care system in Ukraine for the pharmaceutical provision of privileged categories of citizens, it is useful to analyze the experience of providing health care in economically developed countries.
The purpose of the research was to study the Great Britain experience with the organization of a healthcare system for the pharmaceutical provision of medicines to privileged population groups.
Materials and methods
Common methods of normative legal, documentary, retrospective and comparative analysis were used to achieve this purpose.
Results and discussion
The system of financing health care in Great Britain provides for financing mainly from the state budget, with funds distributed along the management vertical from the highest level to the lower ones.
The taxes that make up about 90% of the health care system budget are the financial basis of the national health system of Great Britain. For comparison, only 7.5% comes from employers. Therefore, one can argue that the healthcare system of Great Britain is in fact almost completely financed by contributions from taxpayers and by the government [2].
Three main models of financing health care are distinguished in world practice: private, budget (Beveridge model), and mixed (Bismarck model) [10]. A private financing model operates through the creation of sustainable competition between healthcare facilities. The share of private insurance is about 40% of total costs, and the patient himself covers the cost of pharmaceutical provision. A private financing model is typical for such developed countries as the United States of America and Japan.
The budget model of funding, or the Beveridge model, means covering a large part of the costs of pharmaceutical provision by state institutions. This model is typical for Great Britain. A mixed financing model, or Bismarck model, is based on three foundations: the state, enterprises, and the personal funds of the citizen. This system of insurance financing is typical for Germany, Austria and France.
Private health insurance covers healthcare services that are not provided by the National Health Service. Private insurance companies are essentially complementary to the public health care system of Great Britain; therefore, only risks beyond the competence of the health service are insured. Private health insurance in Great Britain covers only paid medical care at commercial and public health facilities. Budget financing in Great Britain has a number of disadvantages: the monopoly of the insurance market and the lack of an actual ability to choose a doctor or health care provider [3].
Currently, the government of Great Britain is working to increase the effectiveness of medical care and pharmaceutical provision for different categories of citizens by increasing competition between types of funding [4].
Great Britain has a centralized state healthcare and social welfare system, the National Health Service (NHS). The healthcare system is headed by a minister in charge of 14 regional health departments, which in turn oversee 145 local health departments and 90 family health departments [9].
The main principle of the healthcare of Great Britain is free medical care for all contingents of the population living legally on the territory of the country. The main source of funds for health care is the state budget. The basis of the functioning of the dynamic health sys… Private insurance is used by about 12% of the population of Great Britain, who receive services from private companies as a supplement to funding from the NHS. In this case, the patient cannot rely on free medicines from the NHS, and dental care, dental prostheses, etc., must be paid for by the patient out of pocket, in part or in full [1].
OTC medicines are paid for out of patients' own funds. Prescription medicines were initially dispensed free of charge, but this led to unjustified consumption of free medicines and became an overwhelming burden for the state, which led to a revision and the introduction of a fixed co-payment for each prescription. The prescription validity period in Great Britain is up to 6 months, while for medicines with narcotic and psychotropic components it is 28 days [8].
The conditions for the provision of medicines differ in different parts of Great Britain: inhabitants of England pay a prescription charge of £7.65 per item (from April 2012), while in Wales, Scotland, and Northern Ireland the co-payments have been abolished.
About 90% of medicines and medical products are released free of charge for certain categories of the population (table).
It should be noted that there are monthly and annual certificates for patients who take medicines continuously, which reduce the cost of prescriptions.
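As a rough illustration (the per-item charge of £7.65 is quoted above; the usage pattern is hypothetical), a patient in England collecting two prescription items per month would pay 2 × 12 × £7.65 = £183.60 per year item by item, so an annual prepayment certificate priced below that amount reduces the patient's out-of-pocket cost.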
There is a so-called "black list" in Great Britain, which includes medicines prohibited for free provision but permitted for purchase at the patient's own expense. There is also a "gray list" of medicines, which includes medicines that may be prescribed only in special cases or for special categories of patients [10].
Private medicine in Great Britain is among the most advanced and most expensive in the world. There are about 300 non-state hospitals in the country, which are licensed by a local NHS unit and inspected twice a year. There are no queues, and medical care is provided in full and to the extent necessary. The services of private doctors and clinics are paid for either by insurance companies or by patients themselves. Large companies in Britain offer health insurance as an additional paycheck bonus [5].
Conclusions
According to the experience of Great Britain, compulsory medical insurance takes place in countries with predominantly state funding. The Beveridge model is widespread in many countries where the state provides coverage of 80% or more of health care costs (Canada, Australia, Greece, Sweden, and Spain). Despite significant changes in the health care system of Great Britain, patients retain the opportunity to choose the type of insurance and to take advantage of benefits for the purchase of medicines and medical products, which has increased competition between health facilities and, accordingly, improved the quality and speed of pharmaceutical provision for patients. Thus, the budget medicine of Great Britain is a priority for many countries of the world and a guarantee of state financing of pharmaceutical provision for privileged categories of citizens, regardless of income level and social status.
The Expression of Activin Receptor-Like Kinase 1 (ACVRL1/ALK1) in Hippocampal Arterioles Declines During Progression of Alzheimer’s Disease
Abstract Cerebral amyloid angiopathy (CAA) in Alzheimer’s disease (AD)—deposition of beta amyloid (Aβ) within the walls of cerebral blood vessels—typically accompanies Aβ buildup in brain parenchyma and causes abnormalities in vessel structure and function. We recently demonstrated that the immunoreactivity of activin receptor-like kinase 1 (ALK1), the type I receptor for circulating BMP9/BMP10 (bone morphogenetic protein) signaling proteins, is reduced in advanced, but not early stages of AD in CA3 pyramidal neurons. Here we characterize vascular expression of ALK1 in the context of progressive AD pathology accompanied by amyloid angiopathy in postmortem hippocampi using immunohistochemical methods. Hippocampal arteriolar wall ALK1 signal intensity was 35% lower in AD patients (Braak and Braak Stages IV and V [BBIV-V]; clinical dementia rating [CDR1-2]) as compared with subjects with early AD pathologic changes but either cognitively intact or with minimal cognitive impairment (BBIII; CDR0-0.5). The intensity of Aβ signal in arteriolar walls was similar in all analyzed cases. These data suggest that, as demonstrated previously for specific neuronal populations, ALK1 expression in blood vessels is also vulnerable to the AD pathophysiologic process, perhaps related to CAA. However, cortical arterioles may remain responsive to the ALK1 ligands, such as BMP9 and BMP10 in early and moderate AD.
Introduction
The pathophysiology of Alzheimer's disease (AD) is characterized by progressive accumulation of beta-amyloid (Aβ) deposits in the brain. In the parenchyma, Aβ is present as diffuse amyloid or in the form of plaques. In addition, Aβ deposits in the walls of blood vessels, a process referred to as cerebral amyloid angiopathy (CAA). In CAA, Aβ deposits are predominantly found in the periphery of arterioles (Weller et al. 1998). CAA is pathogenic, associated with microbleeds (Yates et al. 2014) and cognitive deficits (Arvanitakis et al. 2011), and is presumably caused by abnormal vessel structure leading to increased risk of hemorrhage or reduced local blood supply (Greenberg et al. 2020). Indeed, imaging results indicate that vascular dysregulation and cerebral hypoperfusion are associated with increased risk of dementia and accelerated cognitive decline (Iturria-Medina et al. 2016; Wolters et al. 2017). Therefore, it is important to understand the pathogenesis of vascular dysfunction in AD; preserving vascular function is thus a therapeutic target for this disease. A key regulator of vascular development and function is the activin receptor-like kinase 1 (ALK1) transmembrane protein, which acts as a signaling receptor protein kinase for its circulating ligands BMP9/GDF2 and BMP10 (Brown et al. 2005; David et al. 2007; Scharpfenecker et al. 2007; Upton et al. 2009; Townson et al. 2012). ALK1 is broadly expressed in the endothelium (David et al. 2009; Pardali et al. 2010), where its activity is central for normal vascular development and remodeling (Roman and Hinck 2017). Mutations in the ACVRL1 gene (reviewed in Abdalla and Letarte 2006), which encodes ALK1, cause hereditary hemorrhagic telangiectasia type II [OMIM #600376], a disease characterized by arteriovenous malformations (Roman and Hinck 2017), and are associated with pulmonary arterial hypertension (Trembath et al. 2001; Harrison et al. 2003; Yokokawa et al. 2020).
We have previously reported that ALK1 protein is expressed in human and rat hippocampus and that its expression in human CA3 neurons is reduced in advanced, but not early stages of AD (Adams et al. 2018). Here we describe ALK1 expression in human hippocampal cortical and leptomeningeal blood vessels in autopsy brains in which AD pathology was accompanied by CAA. We show that ALK1 immunoreactivity in hippocampal arteriolar walls is reduced in AD patients, as compared with subjects with early AD pathologic changes that are either cognitively intact or with minimal cognitive impairment, irrespective of amyloid accumulation measured by the intensity of Aβ vascular immunohistochemistry (IHC) signal. Overall, the data indicate a similar pattern of neuronal and arteriolar loss of ALK1 in advancing AD and suggest that this loss may contribute to the mechanisms of vascular pathophysiology of AD, thus potentially targeting ALK1-agonist therapy (e.g., with BMP9/BMP10) in early stages of AD pathology as a strategy for improving vascular function in AD.
Study Subjects and Human Postmortem Hippocampi
Human formalin-fixed paraffin-embedded (FFPE) tissue blocks of hippocampi were acquired through the Framingham Heart Study Brain Donation Program (Framingham, Massachusetts) and the Netherlands Brain Bank (Amsterdam, Netherlands) as described in Table 1. The study focused on arteriolar walls in hippocampal cortex and adjacent leptomeninges from individuals divided into 2 groups, matched for age and sex, based on clinical dementia rating (CDR) score (Blessed et al. 1968;Hachinski et al. 1975;Davis et al. 1991) and Braak and Braak (BB) stage (Braak and Braak 1991). The CDR was assigned based on antemortem assessment months before death and a postmortem retrospective CDR based on a family interview with one or more family members (Au et al. 2012). Group 1 included subjects either cognitively intact or with minimal cognitive impairment (CDR0-0.5) in the limbic BB stages (CDR0-0.5, BBIII; n = 5, age mean 87.4 years, 3 F/2 M), and Group 2 consisted of subjects with mild to moderate dementia (CDR1-2), definite AD by NINCDS-ADRDA criteria and the isocortical BB stages (CDR1-2, BBIV-V; n = 4, age mean 88.5 years, 2 F/2 M) ( Table 1). All subjects had various degrees of CAA (mild to severe), similar distributions of vascular pathology (atherosclerosis, arteriolosclerosis, and infarcts) and the absence of non-AD neurodegenerative pathology with no Lewy body pathology reported in any of the subjects. The consortium to establish a registry for AD (CERAD) plaque density ranged from sparse to high in both groups. Only one subject in Group 1 had no neuritic plaques but did exhibit severe CAA. All subjects were deidentified, and authors were blinded to subjects' CDR score and BB stage during data acquisition. Quantitative analysis of ALK1 immunoreactivity was conducted within the CA1 subregion distinctly identifiable at the level of the lateral geniculate nucleus.
Immunohistochemistry
FFPE blocks were sectioned at 5 μm thickness, dried at room temperature for 24 h, and heated at 80 °C for 24 h before IHC processing. Deparaffinization, antigen retrieval, and subsequent staining were performed with a Ventana Benchmark Ultra automated IHC instrument using Ventana Medical System reagents including ultraView Universal DAB (Cat#760-500), Hematoxylin II (Cat#790-2208), and Bluing Reagent (Cat#760-2037) (Ventana Medical Systems, Inc., Roche Diagnostics Ltd, Tucson, AZ) at the Boston Medical Center Pathology Department. ALK1 protein and Aβ peptide expression was analyzed in 3 independent IHC experiments for each subject; in each experiment all the subjects were processed collectively. Therefore, the experiments performed yielded 3 independently stained stepwise sections, separated by at least 10 μm, per subject for analysis. Automated IHC with the Ventana Benchmark Ultra allowed for maximally replicative conditions in IHC experiments, eliminating variability in reagent composition, quantity, incubation time, and human error, and minimizing variability between experiments. Internal control sections from established subjects were stained collectively with any newly added subjects to ensure reproducibility of staining for the protein of interest. Quantitative analysis of ALK1 was generated from the imaged triplicate sections. Data from triplicate sections were averaged to obtain representative values for each subject.
Quantitative Image Analysis
Slides were imaged using an Olympus BX60 light microscope, a QImaging Retiga 2000R camera, and QCapture Suite and Suite PLUS software. For each subject, in order to capture (nearly) all the cortical and leptomeningeal arterioles identified on a single section, 10 ×40 cortical fields and 10 ×20 leptomeningeal fields of CA1-subiculum were imaged by 2 independent observers. Average immunoreactivity signals from 3 sections for each subject were obtained by automated IHC as previously described (Adams et al. 2016, 2017, 2018) (see above). Before the quantitative analyses of ALK1 and Aβ immunoreactive signals, MSA immunoreactivity was used to perform qualitative identification and randomization of arterioles in the hippocampal parenchyma and leptomeninges (Fig. 1). This approach also prevented bias that could arise from blood vessel selection based on the features of interest, that is, ALK1 and/or Aβ. All images used in quantitation were analyzed with ImageJ, version 1.8.0 (Bethesda, MD: National Institutes of Health) (Abramoff et al. 2004; Schneider et al. 2012). Intensity was quantified in ImageJ by converting the red, green, and blue (RGB) images to 8-bit grayscale images and subtracting background noise using a rolling ball radius. After outlining the blood vessel of interest, the image was inverted, and the lookup table was inverted. This created an image with inverted pixel values, with intensity values ranging from 0 (white) to 255 (black). Mean intensity values from the ×40 and ×20 field images in triplicate experiments comprised representative values for each subject.
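As a minimal sketch of an equivalent pipeline outside ImageJ (our illustration in Python with scikit-image, not the authors' workflow; the rolling-ball radius, the vessel mask, and the order of inversion relative to background subtraction are assumptions):

import numpy as np
from skimage.color import rgb2gray
from skimage.restoration import rolling_ball
from skimage.util import img_as_ubyte

def mean_vessel_intensity(rgb_image, vessel_mask, ball_radius=50):
    """Mean staining intensity (0 = white, 255 = black) inside an outlined arteriole."""
    gray = img_as_ubyte(rgb2gray(rgb_image))             # RGB -> 8-bit grayscale
    inverted = 255 - gray                                 # invert so darker staining -> larger values
    background = rolling_ball(inverted, radius=ball_radius)
    signal = np.clip(inverted.astype(float) - background, 0, 255)   # rolling-ball background subtraction
    return float(signal[vessel_mask].mean())              # average over the outlined vessel wall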
Data Presentation and Statistical Analyses
All individual data points are presented as well as means ±SEM. P value < 0.05 was considered statistically significant. The data were analyzed by t-test. Statistical analyses were performed with JMP software (Version 15.0.0 SAS Institute Inc., Cary, NC).
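For illustration, the group comparison described above can be reproduced with SciPy (a sketch assuming a standard unpaired two-sample t-test; the original analysis was run in JMP):

import numpy as np
from scipy import stats

def compare_groups(group1_values, group2_values, alpha=0.05):
    """Unpaired t-test plus mean +/- SEM summaries for two groups of subjects."""
    g1 = np.asarray(group1_values, dtype=float)
    g2 = np.asarray(group2_values, dtype=float)
    t_stat, p_value = stats.ttest_ind(g1, g2)
    return {
        "group1": (g1.mean(), stats.sem(g1)),   # mean and standard error of the mean
        "group2": (g2.mean(), stats.sem(g2)),
        "t": t_stat,
        "p": p_value,
        "significant": p_value < alpha,
    }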
ALK1 Protein Expression in the Hippocampal Parenchymal Arterioles Decreases in AD Irrespective of Amyloid Angiopathy
The relatively strong ALK1 signal is present not only in the cytoplasm of pyramidal neurons and neuropil in hippocampi of non-AD subjects (Adams et al. 2018) but also in the arteriolar walls ( Fig. 2A-C). ALK1 signal is uniformly strong in the arteriolar walls free of Aβ deposition (Fig. 2C,D). We have occasionally observed faint or apparently absent ALK1 signal in the portions of the arteriolar walls bearing Aβ deposition (Fig. 2E,F; Supplementary Figure 1). ALK1 signal in the parenchymal CA1 arteriolar walls decreased significantly in AD patients regardless of the presence of CAA in comparison with subjects with early AD neurofibrillary tangles accumulation (BB III) that were either cognitively intact (CDR0) or had mild cognitive impairment (CDR0.5) (Figs 3A-D and 4A,B). Although arteriolar walls in the hippocampal leptomeninges seem to exhibit weakened ALK1 immunoreactivity in AD patients (BBIV-V, CDR1-2) versus non-AD subjects (BBIII, CDR0-0.5) (Fig. 3E,F), quantitative comparison showed that only parenchymal and not leptomeningeal arteriolar walls undergo significant reduction in ALK1 signal in AD patients (Fig. 4C,D). That reduction is independent of the measured Aβ deposition signal associated with CAA.
Discussion
In this study, we focused on the ALK1 expression in the hippocampal arteriolar walls in progressive stages of AD pathology. Our initial qualitative observations pointed to the possibility that the arteriolar wall regions with Aβ deposition were characterized by reductions, or even absence, of the ALK1 immunoreactive signal (Fig. 2E,F; Supplementary Figure 1), suggesting that CAA could lead to vascular ALK1 loss. Although this may be the case in individual vessels, our unbiased quantitative assessment of the vascular expression of ALK1 in the hippocampal parenchyma, in groups of subjects matched for age, sex, the degree of CAA, and vascular changes as well as the absence of non-AD pathology, indicates that the reduced ALK1 signal that accompanies AD progression is a feature of the arterioles in general (Fig. 4), apart from the presence of Aβ in the analyzed vessels or the global CAA evaluation of the subjects documented in the neuropathology reports.
The number of our subjects is, nevertheless, small. This is due to the criteria that we imposed at the onset of the study: that the subjects be matched for age and sex, that all have CAA, that none have non-AD pathology, and that they cover the CDR score scale. The apolipoprotein E4 (APOE4) allele happened to be present in some AD subjects but not in any of the non-AD subjects. Postmortem interval varied greatly (Table 1) but evidently did not affect the quality of tissue processing or the protein expression. Again, our approach to IHC in human postmortem cortical sections (Adams et al. 2016, 2017, 2018) resulted in a reliable and reproducible yield of immunoreactivity signals in each subject. The range of ALK1 signal intensity did not exceed 16 and 23 units in cortical and leptomeningeal arteriolar walls, respectively, in any of the subjects. Similarly, the range of Aβ signal intensity did not exceed 22 and 13 units in cortical and leptomeningeal arteriolar walls, respectively, in any of the subjects. Studies in living brains are still hampered by technological limitations when it comes to defining the relationship between Aβ deposition and the breakdown of the blood-brain barrier. The breakdown of the blood-brain barrier has recently been suggested as a potential early biomarker for cognitive dysfunction in humans irrespective of positron emission tomography (PET)- and cerebrospinal fluid (CSF)-detected Aβ or tau accumulation (Nation et al. 2019; Montagne et al. 2020). The presence of one APOE4 allele apparently promotes blood-brain barrier breakdown in cognitively intact (CDR0) and in mildly cognitively impaired (CDR0.5) individuals (Montagne et al. 2020).
Vascular Aβ deposits are thought to diminish blood flow and reduce vessel diameter, potentially impeding the Aβ clearance rate, promoting inflammation, and thus likely contributing to neurodegeneration in AD (Koronyo et al. 2015; Bakker et al. 2016). Aβ accumulation in the muscular walls of cortical and leptomeningeal arterioles is similarly associated with the risk of large hemorrhage in the brain (Vonsattel et al. 1991). Vascular amyloidosis in the brains of AD patients and animal models is also accompanied by degeneration of pericytes leading to altered permeability of the blood-brain barrier (Winkler et al. 2012; Halliday et al. 2016). [Figure 4 caption fragment: ALK1 signal in parenchymal (A, B) and leptomeningeal (C, D) arterioles was determined as described in Methods; the original data points as well as means ± SEM are plotted; the data were analyzed by t-test; a statistically significant decrease in ALK1 signal was found in advanced AD patients (CDR1-2; BBIV-VI) as compared with subjects with early AD-associated pathological changes (CDR0-0.5; BBIII); no other comparisons were statistically significant.] Data from animal studies suggest an intricate interplay between vascular Aβ accumulation (CAA), blood-brain barrier stability, and AD pathology. The mechanism underlying the association between CAA and cortical microhemorrhages (Vernooij et al. 2008; van Veluw et al. 2016) has been recently probed in APP/PS1 mice with CAA (van Veluw et al. 2020). Although the presence of vascular Aβ deposits in these mice did not directly predispose arterioles in their brains to leak, the physical alterations surrounding the vascular network likely contributed to the formation of spontaneous leakage sites. As in CAA, ALK1 deficiency in a genetic mouse model with focal cerebral Alk1 gene inactivation was associated with compromised vascular integrity such as extravasation of intravascular components and a reduced number of pericytes (Chen et al. 2013). Similarly, homozygous Alk1 deletion in mice caused albumin extravasation in the retina (Akla et al. 2018). Moreover, in the same study, ALK1 expression was downregulated in the diabetic retinal blood vessels of wild type mice, and Alk1 heterozygotes (presumably expressing 50% of the wild type levels of the protein) were characterized by a dramatically exacerbated retinal vascular leakage evoked by diabetes, indicating Alk1 haploinsufficiency. In the current study, we observed a 35% reduction in the apparent ALK1 levels in the arterioles of AD subjects, suggesting that the magnitude of this reduction could, by analogy with the mouse model, result in functional vascular defects. However, human studies on larger cohorts than ours are warranted.
Molecules at the point of convergence for neuronal and vascular pathology represent potentially doubly valid targets for a therapeutic intervention. We previously demonstrated that the immunoreactivity of ALK1 in CA3 pyramidal neurons is reduced in advanced, but not early, stages of AD (Adams et al. 2018). Given that BMP9 administration ameliorates hippocampal AD-like pathology in mouse models of this illness (Burke et al. 2013; Wang et al. 2017), ALK1 may constitute a viable therapeutic target in early and moderate AD for the treatment of vascular abnormalities of this disease. Indeed, BMP9 administration ameliorated vascular diabetic retinopathy (Akla et al. 2018) and reduced pulmonary arterial hypertension in rat and mouse models by acting on endothelial cells (Long et al. 2015).
Our current data and published results (Adams et al. 2018) showing concomitant changes in vascular and neuronal ALK1 expression during AD progression are in line with our previous studies documenting simultaneous neuronal and arteriolar abnormalities in the expression of methionine sulfoxide reductase B3 (MSRB3) in hippocampi of AD patients (Adams et al. 2017). A single nucleotide polymorphism, rs61921502, in MSRB3 is associated with the risk of low hippocampal volume and AD. We also investigated the relationship between the rs61921502 G (minor/risk) allele and magnetic resonance imaging (MRI) measures of brain vascular injury and the incidence of stroke, dementia, and AD in 2038 Framingham Heart Study Offspring participants. When adjusted for age and age squared at the MRI exam, sex, and APOE4, individuals with the MSRB3 rs61921502 minor allele and no APOE4 had increased odds for brain infarcts on MRI (Conner et al. 2019).
Collectively, the data from our current and previous studies on ALK1 and MSRB3 (Adams et al. 2017, 2018; Conner et al. 2019) suggest that, in some cases, common molecular mechanisms may regulate vascular and neuronal function. These mechanisms may be vulnerable to pathophysiological processes, such as those of AD, in a similar fashion and thus be amenable to common therapeutic strategies. In the case of ALK1 dysfunction in early AD, these strategies could include treatment with agonists (Burke et al. 2013; Long et al. 2015; Wang et al. 2017; Akla et al. 2018) or with drugs that enhance ALK1-mediated signaling (Ruiz et al. 2017).
Supplementary Material
Supplementary material can be found at Cerebral Cortex Communications online.
Notes
We thank Terri Lima and Cheryl Spencer for expert IHC advice and assistance, Dr Joel Henderson for the use of imaging equipment, and Kerry Cormier of Framingham Heart Study Brain Bank and Michiel Kooreman of Netherlands Brain Bank for specimen procurement. Conflict of Interest: None declared.
Family and Its Role in the Cultivation and Preservation of Traditional Folk Music at Junior Primary School Age
The cultivation of folk tradition begins in the family and continues in a systematic and organized way in school, as an important task in the education and upbringing of children of junior primary school age. Factors contributing to the realization of this task include: a) school; b) family; and c) other out-of-school factors. Starting from the fact that the cooperation between one's family and school should be based on partnership, and that learning about traditional folk music requires coordinated action, the authors organized a survey aimed at: 1) examining the extent to which the cultivation of musical tradition within the family is reflected in the learning and adoption of related content in music education classes at junior primary school age; 2) determining the extent to which traditional folk music is cultivated in the family by listening to and playing such music; and 3) examining the role of family in the process of introducing students to traditional folk music at junior primary school age with regard to other in-school and out-of-school factors. The authors have concluded that teachers believe the cultivation of traditional folk music in one's family is reflected in the recognition of, and improved student motivation for, learning content related to traditional folk music. Students believe that the activity of listening to folk music within the family is an insufficiently utilized resource. The survey results confirm the hypothesis that the role of family should be significantly encouraged in relation to other out-of-school factors.
Introduction
The learning and development of each individual begins within the family. Parents are the first educators and teachers of their children, therefore children acquire their first knowledge, skills and habits in the family. Preschool institutions and school join in the process later, but the influence of one's family never disappears.
Modern family "is a product of historical trends in which elements of the traditional and the modern are intertwined" (Zuković, 2012a, 16). It functions even in turbulent times that generate uncertainty. As the society has changed, so has family, however, the way it operates, family relations and family values have remained permanent and unchangeable. Regardless of its faults, family environment is perceived as a place where one will always find security, support and protection. The current social context and the generally accepted value system define the quality of family life in the broadest sense, and thus impact family functioning (Zuković, 2012a).
It should be noted that functional families are cohesive and stable, and their members cooperate frequently and productively. Such families are capable of facing and overcoming problems in a constructive way. This points to an important role of family - the transmission and building of values that arise from our musical tradition. Through musical tradition, a child is introduced to the culture and history of his/her people, but also to the "history of mankind, because characters and events portrayed in traditional music also possess an element of universality, i.e. timelessness" (Pavlović and Cicović Sarajlić, 2013, 276). Given the fact that "children's folklore has witnessed childhood through countless ages and the spiritual maturation of mankind" (Ljubinković, 1976, 57), the role of family is that of an intermediary of sorts between individuals and the values characteristic of a certain society, then and now. Based on the knowledge that parents possess a complete value system, they will serve as a model for identification which translates its beliefs into "systems of norms and customs, expressing them in an understandable language and linking them to specific child's behavior" (Manić, 2016, 5).
With the birth of a child, the first contact and the first perception of music in one's native language occurs within the family, including: spontaneous singings (lullabies, jingles, nonsense verse, counting rhymes, amusements, etc.), listening to and playing music on different media, and body movement (clapping, tapping, rocking, swaying, rattling, etc.), all of which stimulates and develops children's sense of melody and rhythmic pulse, as well as tempo (Nikšić, 2016, 18).
Research in the field of musicolinguistics suggests that the musical experience of children, acquired primarily within the family prior to starting school and "authentically preserved" in our long-term memory, has invaluable importance in the learning of new music content (Levitin, 2011, 184).
Family environment and Traditional Folk Music
These first musical experiences a child acquires and develops within the family contribute to the development of a generally positive attitude toward music and playing instruments, which greatly and directly impacts "the formation of one's musical taste and preferences" (Radoš, 2010, 124).
Children's attitude toward music greatly depends on their family, and the things it offers as a family model of behavior. If a family nurtures a positive relationship toward traditional folk music, it is likely that a child will also build a personal relationship with this music genre. In contrast, if children are left to their own devices when it comes to the formation of musical taste, it is likely that commercial music, as the most dominant and accessible at the moment, will shape their musical taste to a great extent (Nešić et al, 2006). One of potential solutions is to design and place a certain type of auditory performance containing elements of traditional folk music in every national environment. We can make such content more relatable and more familiar to children by repeating certain patterns based on which they should gain a certain musical experience. By enriching and expanding the children's auditory experience, they will form their musical taste with an inherent affection toward the musical tradition of their people. Of course, in addition to traditional folk music, children should be introduced to other forms of traditional and original music, both from their own, and other countries as well. Nešić reminds us that "accepting the music of other nations and other historical periods, and understanding it (universalization of musical inclinations) doesn't damage or threaten one's affection for their own traditional music, just as learning foreign languages doesn't make one forget their mother tongue" (Nešić, 2003, 232). Čokorilo (2013) points out that family still successfully resists modern challenges thanks to traditional patterns, and represents one's intimate and very significant emotional community which helps preserve traditional values. It is, therefore, important that family should lay strong foundations for the preservation of tradition, language, origin and culture of one's nation in contrast to other nations, which could help develop sensitivity toward musical values and familiarize people not just with their own musical tradition, but the tradition of other nations as well. Thus, family may help communities which have undergone significant cultural changes in the global age to strengthen their national values and preserve their tradition (Deletić, 2013).
On the other hand, it is a fact that family is nowadays faced with serious challenges, media influence, globalization, social changes, and that the pressure of these changes has left family exposed to various influences and threats, fighting to preserve its identity and reconcile modernity and tradition (Zuković, 2009). Every individual is increasingly influenced by the media, and the environment that insists on commercial content and neglects traditional folk music. Learning about one's musical tradition primarily depends on the experience children acquire in their family, but also on the parents' relationship toward it, which is why it is important, as stated by Grandić (2007), to rely on the principles of coordinated action between all educational factors. Thus, by simultaneous action of teachers and the family toward children, and by introducing them to a wide range of domestic and foreign, original and traditional compositions, we will ensure further development of their musical taste, and shape their cultural identity (Vidulin and Martinović, 2015). Ethnomusicologists remind us that "children's songs (lullabies, jingles and nonsense verse) are slowly disappearing" (Fracile, 1987, 68), and the reason for this lies in the fact that parents no longer sing songs to their children, but play music on different media. In addition, seeing the media as a very influential factor, ethnomusicologist Dević explains that in the age of "very aggressive media domination over our traditional folk music, we have no other alternative but, just as we speak in our native language, to continue to sing our songs, passing on our musical and poetic heritage orally to our sons and daughters" (Dević, 2001, 14). The role of family is to build a positive attitude toward traditional music from an early age, counterbalancing other circumstances and content present in the media. In this regard, Zuković (2012a) believes that "a healthy and functional family is one that can grow and develop despite the challenges and obstacles it encounters. It always strives to expand its experience, and solves problems through family rules and values. Its strengths include love, positive emotion, support, tolerance and compromise" (Zuković, 2012a, 74).
From the aspect of ethnomusicology, folk art has always had a pronounced social function in the historical and cultural development of Serbian people, and a special place in the development of family. It is a well-known fact when certain songs are to be sung, why they are to be sung, how certain folk dances are to be performed, with what intention and purpose. It is no coincidence that in the past, traditional folk singing was the most common form of musical performance in this part of the world, just as ethnomusicologist Golemović reminds us -"there was no person living in rural areas who wouldn't sing, whether the occasion was everyday or ceremonious" (Golemović, 1998, 7).
In order to develop moral education, singing and listening to traditional folk songs were used to nurture patriotism and identification with one's nation. The content of traditional folk song reflects moral feelings, including honor, pride, dignity, responsibility, awareness of the necessity of work, respect for truth, and respect of other nations and cultures (Pavlović, 2013). Family is the main drive and initiator responsible for the moral behavior and actions of children both in the closest and a broader environment. In addition to listening to and performing traditional music, attending concerts and other performances and events of domestic and foreign artists that cultivate the spirit of musical tradition significantly enriches musical experience of young generations (Ćalić and Đurđanović, 2016). Moreover, school as the main representative of formal education since the beginnings of civilized society should provide support for the family, and utilize its mechanisms and instruments so as to realize the aspiration to preserve traditional folk music (Kostović, 2005).
There are many ways in which family can help to familiarize children with their musical tradition. Some of them include:
- establishing a partnership between family and school, and exchanging experiences referring to musical tradition;
- cultivating the habit of listening to traditional music within the family;
- cultivating traditional music during religious holidays and family celebrations (Christmas, Easter, family patron saint - slava, etc.);
- encouraging children to attend elective curricular and extracurricular activities that honor and celebrate traditional values;
- including children into recreational activities (folk ensembles, ethno workshops, etc.) that cultivate traditional music;
- attending concerts of traditional music and similar events;
- visiting ethnographic museums, libraries (heritage departments), ethno villages and other institutions that aim to preserve national cultural heritage.
The key role of family involves building elements of folk tradition that will be later upgraded and adopted through different subject areas in school.
The cooperation between school and family helps to familiarize children with the musical tradition of their country, and thus with the criteria for identifying authentic folk music and art in general (Ivanović, 2007).
Moreover, the Rulebook on the Curricula (Teaching and Learning Plan) proposes that parents should be involved in the implementation of the content related to folk tradition, because the general opinion is that we have little space left "to develop sensitivity toward musical values by learning about folk tradition of our own, and other people's" (Curriculum for the First Cycle of Primary Education, 2006, 90). Among other things, the Strategy for Educational Development in Serbia 2020 specifies that "the cooperation between school and family is not based on partnership", and that in order to overcome this outdated concept, "schools should apply the concept of partnership between school and parents/guardians" (Ibid, 2006, 90). The coordination between school and family will achieve a broader synergistic relationship, which means that school and family together can achieve much more than by acting on their own (Zuković, 2012b).
The new concept based on partnership between school and family provides the opportunity to design learning and purposefully organize part of students' free time thanks to an early discovery of different interests and abilities of students. Such an approach can reinforce the cultivation of local musical tradition.
Many analysts (Amato, 2001;Berk, 2005;Kieman, 2003;Nelson, 1993) point out that modern family is undergoing a crisis, which often results in increased divorce and separation rates, and such events put pressure on children, exposing them to constant stress and forcing them to adapt to new circumstances in order to overcome the crisis (As cited in: Vulfolk et al. 2014, 174-175). All socio-economic problems are inevitably reflected on family, which causes the value system to collapse, or rather to become vague and undefined, confusing parents and complicating their educational efforts. The cause of this situation lies in "everyday stress and uncertainty arising from socio-economic circumstances in which society has been caught at the turn of the century" (Nikolić, 2012, 22). If we view a child as a social being who "constantly interacts with his environment, and actively participates in the construction of his knowledge -discovering new meanings, developing new mental structures and accepting the values of the culture he/she is a part of" (Zuković, 2012a, 149), then it is crucial to provide support to children at all institutional levels.
Therefore, it is important to point out the outlook (DeFrain, 1999, as cited in: Zuković, 2012a) according to which the strategy for strengthening modern family should exist and should be carried out by different partakers in social life: teachers (...), politicians (...), media, counselors, social workers and volunteers, as well as by every family member who seeks ways and opportunities to create healthy family relations imbued primarily with love (Zuković, 2012a). A family thus empowered and working in partnership with school provides far better opportunities for the cultivation of traditional folk music, and consequently raises awareness of the importance of musical tradition in the life of every individual.
The aim of this paper is to draw attention to the importance and role of family in the cultivation, and thus in the preservation of traditional folk music, because learning about this topic "begins in the family, and continues in an organized, systematic, planned and continuous way in school" (Ćalić, 2011, 253). The role of school is to establish good cooperation and partnership with parents/guardians, and to further encourage the nurturing of musical tradition in the family.
Pedagogical research (Nikolić, 2012) indicates that it is necessary to cultivate the pedagogical culture of parents so as to raise their awareness of the responsibility and importance of good upbringing. School and family are in constant interaction and exchange with the environment as complex, living systems which intersect and intertwine through their relationship with the child (Zuković, 2012a). It should be noted that there are numerous studies (Epstein and Sanders, 2002;Milošević, 2002) which analyze the mutual relationship between family and the school system, dealing primarily with the influence of family situation on the child's behavior and their academic achievement. For this reason, we wanted to examine the role of family in the cultivation of musical tradition at junior primary school age. We tried to obtain teachers' opinions on whether family encourages learning about traditional folk music, and if such encouragements are reflected on the realization of formal music education. We also wanted to learn if traditional music is played in the family (in contrast with classical, pop and commercial music). And finally, we wanted to examine the contribution of students' families in music education, i.e. teachers' efforts toward the delivery of content related to traditional folk music in junior grades of primary school with regard to other out-of-school factors.
Materials and Methods
Research aim. The aim of this survey was to estimate whether family motivates students to learn more about traditional music in music classes at junior primary school age, and the extent to which musical tradition is cultivated in the family by means of listening to traditional folk music, and finally, to determine the influence of family (in relation to other factors) when it comes to the adoption of content related to traditional music.
Research instruments. We designed two questionnaires for the purposes of this survey, one for class teachers (classified by years of service), and the other for students of the fourth grade of primary school (classified by gender and academic achievement). The questionnaire for teachers contained questions aimed at establishing teachers' opinions on whether the family stimulates learning about folk music tradition and how much that encouragement is reflected in the process of conducting music lessons. The questionnaire for students contained questions aiming to examine students' attitudes on how much traditional music is listened to within the family and how much the family, in comparison with other out-of-school factors, contributes to learning the folk music tradition. The questionnaires were created by the authors of this work. The research was anonymous, both for teachers and students. The value of the Cronbach's alpha coefficient for the teachers' questionnaire (0,0897) and for the students' questionnaire (0,0898) indicates a good reliability of both instruments and justifies their acceptability.
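The reliability check described above can be reproduced in a few lines of code. The sketch below is a minimal illustration of how Cronbach's alpha is typically computed from an item-response matrix; it is written in Python rather than SPSS, and the item scores shown are hypothetical, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    n_items = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item across respondents
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (n_items / (n_items - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical responses of 6 respondents to a 4-item Likert-type questionnaire.
responses = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
    [3, 2, 3, 2],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.3f}")
```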
Research sample and techniques. The survey was carried out on a sample of 597 respondents, i.e. students of the fourth grade of primary schools in Užice and Kraljevo, as well as 196 teachers (random selection) who work in primary schools in Užice and Kraljevo. Independent variables for the student sample included student gender and academic achievement, whereas the independent variable for the teacher sample involved professional experience (less than 10 years, 10-20, 20-30, over 30 years).
The research was based on a descriptive research method, and the data were collected using a survey technique.
Statistical data processing was based on the IBM SPSS 20 software package, statistical description and inference. We used a chi-square test to determine the statistical significance in students' opinions.
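The study's analyses were run in IBM SPSS 20; the sketch below shows what an equivalent chi-square test of independence looks like in Python with SciPy. The contingency table is hypothetical (the paper does not reproduce its full cross-tabulations), and the degrees of freedom follow from (rows − 1) × (columns − 1).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical (gender x preferred genre) counts; the paper's full
# cross-tabulation is not reproduced here.
table = np.array([
    # pop, folk, art, undecided
    [160,  85,  18,  58],   # boys
    [146,  75,  23,  32],   # girls
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p_value:.4f}")
# A difference is treated as statistically significant only when p <= 0.05.
```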
Family as an Empowerment Resource Helping School in Its Effort to Improve Student Knowledge of Traditional Folk Music
The research was aimed at identifying and analyzing the influence of family as perceived by teachers on the existence and sensitivity of children of junior primary school age to traditional music. We were primarily interested in the teachers' observations of the extent to which family encourages learning about traditional music in junior grades of primary school.
Table 1 Assessment of the motivational influence of family on students with regard to adoption of content related to traditional folk music
Results show (Table 1) that the majority of the 196 teacher-respondents participating in the research agree that introducing students to musical tradition within their family improves their motivation for learning similar content in music education classes (137 or 69,90%).
Table 2
Assessment of the motivational influence of family on students aimed at adopting content related to traditional folk music with regard to teachers' professional experience
If we analyze respondents' opinions with regard to their professional experience, we can see that the largest number of teachers who positively assess motivation from the family are also the most professionally experienced (over 30 years of service); in other words, they believe that family is a significant motivating factor and a good incentive for learning about musical tradition. Least experienced respondents (less than 10 years of service) …
We had in mind that the new age and modern technologies reinforce content which, in a way, suppresses and changes the priorities in the assessment of music content in education, and diminishes the role of both family and school. Therefore, we wanted to ask students what kind of music is played within their family environment on different media.
We wanted to learn what kind of music is played, and therefore cultivated in students' families (Table 3).
Table 3
Music preferences in students' families with regard to student gender
Most students (51%) listed pop music as the favorite music genre played in their family. 90 students, or 15,08%, were undecided, 160 or 26,80% prefer folk music, whereas 41 or 6,87% named art music as the favorite music genre of their family. We observed small differences in the opinions in relation to student gender. A slightly higher percent of boys, 58 or 18,07%, classified themselves as undecided when it comes to a preferred music genre of their family, in contrast to 32 or 11,59% of girls who share this opinion.
Testing the statistical significance of differences in respondents' opinions with regard to the preferred music genre in their family with a chi-square test, we obtained the following results: χ² = 6,282 for df = 4, which means the difference is not statistically significant.
Family should significantly contribute to the cultivation and adoption of traditional folk music by encouraging children to listen to this music genre. Therefore, we wanted to learn what is the preferred music genre in students' families with regard to student academic achievement (Table 4).
Table 4
Student preferences toward folk music with regard to academic achievement
The data in Table 4 show that there are differences in respondents' answers in relation to the independent variable - student academic achievement. When it comes to the preferred music genre within their family, students whose academic achievement was estimated as outstanding responded that it is primarily pop music (201 or 53,03%). 65 or 17,16% of the respondents were undecided. Based on the respondents' preferences, i.e. their attitude toward the music genre preferred in their family, 89 (or 23,48%) opted for folk music, and 24 (or 6,33%) opted for art music. Respondents with very good academic achievement also prefer pop music (92 or 51,69%), whereas 56 or 31,46% of students from the same category opted for folk music, in contrast to students with outstanding academic achievement who settled on undecided when it comes to other music genres played in their family. The smallest percent of respondents chose undecided as their attitude (19 or 10,67%). There is also a difference in the category of respondents with satisfactory academic achievement in relation to other categories of respondents, because their preferred music genre is folk music (15 or 41,67%), which is not the case in the outstanding and very good achievement categories.
Testing the statistical significance of differences in respondents' opinions about the preferred music genre of their family, with student academic achievement as the independent variable, with a chi-square test, we obtained the following results: χ² = 25,799 with df = 9, which means the difference is statistically significant.
Results of the analysis confirm that traditional folk music (TFM) is insufficiently present in respondents' families.
The Role of Family in the Adoption of Traditional Music in School
Research shows that students recognize family as "an institution that preserves traditional values", and which is also an indisputably important factor in the cultivation and preservation of traditional music, as well as in familiarizing children with it (Ćalić and Grkić, 2013, 301). Starting from the fact that the cooperation between family and school plays a significant role in introducing one to the musical identity of their country, and thus helps them adopt criteria for recognizing authentic folk music, the second research task was aimed at determining the role of family with regard to other factors that contribute to the adoption of traditional folk music in school (Table 5).
Table 5
Teachers' opinions on the role of family in the adoption and cultivation of traditional folk music with regard to their professional qualifications
Looking at Table 5, we can see that teachers believe peers (47,96%) are the most influential factor contributing to the adoption and cultivation of traditional music. The second most influential factor in the adoption of traditional music is school (56 or 28,57% of respondents), whereas 33 (or 16,84%) believe family is the decisive factor when it comes to the adoption and cultivation of traditional music. Respondents do not recognize the media as a factor of importance for the cultivation of traditional music. Survey results show that respondents' opinions only slightly differ with regard to their professional qualifications, which was additionally confirmed with a chi-square test. In other words, differences in opinions are not statistically significant, because χ² = 0,723 with df = 3.
We also wanted to examine if there are any differences in answers to the question -what is the role of other factors, beside family, in the adoption of traditional music in school -in relation to teachers' professional experience (Table 6).
Table 6
Role of family among other factors in the adoption and cultivation of traditional music with regard to teachers' professional experience
Looking at Table 6, we can see that family is gaining in importance as a factor relevant for the cultivation and adoption of traditional folk music as the teacher's professional experience increases. Teachers with most extensive professional experience (over 30 years) share the opinion that family, school and peers play an equally important role when it comes to the adoption and cultivation of traditional folk music. Respondents with less experience (under 30 years of service) believe peers are the decisive factor (around 50%) in the adoption of this content. This group of respondents does not recognize the media as a factor of importance in the cultivation of traditional music.
By calculating the statistical significance of differences in teachers' opinions on the role of family and other factors in the adoption and cultivation of traditional folk music with regard to their professional experience, we can conclude that the difference is not statistically significant, given that the chi-square test showed χ² = 9,187 with df = 9.
Discussion and Conclusion
Results of the survey have confirmed that family is an insufficiently utilized resource which can significantly enrich music education in terms of content, as well as other subjects that include similar content, and help in the development of a generally positive attitude of students toward traditional music, and which directly shapes their musical taste and preferences for this music genre.
When it comes to the teachers who participated in the survey, the majority (69,90%) believe that traditional music is sufficiently present in the family. However, it is very significant that 30,10% of the teacher-respondents believe that traditional values are cultivated insufficiently (23,47%) or not at all (6,63%) within the family.
Students responded that the most popular music genre in their family environment is pop music (51,26%), followed by folk music (26,80%), and we also identified a statistically significant difference between students' opinions about the preferred music genre of their family with regard to their academic achievement.
Factors that favorably influence teachers' work on familiarizing their students with traditional music include peers (47,96% of respondents), school (28,57%), and family (16,84%). Teachers with less professional experience do not perceive family as an important factor and, consequently, do not involve parents/guardians in the adoption of content related to traditional folk music, whereas teachers with over 30 years of service think that family is an equally important factor as school and peers (mostly members of folk ensembles where they learn traditional dances and songs), and that together they greatly improve the realization of content related to traditional music in music education at junior primary school age.
School has a significant role in the cultivation and preservation of traditional music and culture, because it introduces students to this content in an organized and systematic manner. Music education classes provide plenty of topics referring to folk tradition, and as such, provide a good basis for getting to know it. However, we should not ignore the fact that the role of the teacher in the systematic and gradual introduction of traditional music of one's own and other countries into students' lives, is crucial. Teachers should strive to utilize this content in the best possible way, to motivate and engage students, to make the content relatable, to encourage students to join extracurricular activities, and to motivate parents as well, thus increasing the role of family in the cultivation and preservation of traditional music in school.
A Mediterranean diet plan in lactating women with obesity reduces maternal energy intake and modulates human milk composition – a feasibility study
Introduction Maternal obesity is associated with increased concentrations of human milk (HM) obesogenic hormones, pro-inflammatory cytokines, and oligosaccharides (HMOs) that have been associated with infant growth and adiposity. The objective of this pilot study was to determine if adherence to a Mediterranean meal plan during lactation modulates macronutrients and bioactive molecules in human milk from mothers with obesity. Methods Sixteen healthy, exclusively breastfeeding women with obesity (body mass index ≥30 kg/m2) enrolled between 4 and 5 months postpartum. The women followed a 4-week Mediterranean meal plan which was provided at no cost. Maternal and infant anthropometrics, HM composition, and infant intakes were measured at enrollment and at weeks 2 and 4 of the intervention. Thirteen mother-infant dyads completed the study. Additionally, participants from an adjacent, observational cohort who had obesity and who collected milk at 5 and 6 months postpartum were compared to this cohort. Results Participants’ healthy eating index scores improved (+27 units, p < 0.001), fat mass index decreased (−4.7%, p < 0.001), and daily energy and fat intake were lower (−423.5 kcal/day, p < 0.001 and-32.7 g/day, p < 0.001, respectively) following the intervention. While HM macronutrient concentrations did not change, HM leptin, total human milk oligosaccharides (HMOs), HMO-bound fucose, Lacto-N-fucopentaose (LNFP)-II, LNFP-III, and difucosyllacto-N-tetrose (DFLNT) concentrations were lower following the intervention. Infant intakes of leptin, tumor necrosis factor (TNF)-α, total HMOs, HMO-bound fucose, LNFP-III and DFLNT were lower following the intervention. Specific components of the maternal diet (protein and fat) and specific measures of maternal diet quality (protein, dairy, greens and beans, fruit and vegetables) were associated with infant intakes and growth. Discussion Adherence to a Mediterranean meal plan increases dietary quality while reducing total fat and caloric intake. In effect, body composition in women with obesity improved, HM composition and infants’ intakes were modulated. These findings provide, for the first time, evidence-based data that enhancing maternal dietary quality during lactation may promote both maternal and child health. Longer intervention studies examining the impact of maternal diet quality on HM composition, infant growth, and infant development are warranted.
Introduction
In the United States, more than 50% of women enter pregnancy with either overweight or obesity (1).Pre-gravid obesity has been associated with changes in the macronutrient (2) and bioactive composition of human milk (HM) (2)(3)(4), which may impact infant health (5).HM from women with obesity has higher energy, fat, and protein content compared to milk from mothers with normal weight throughout lactation (2,6,7) and obesity-associated elevations in HM hormones (8,9), pro-inflammatory cytokines (10,11) and HM oligosaccharides (HMOs) (4), are positively associated with infant growth and adiposity (2,4,9).Therefore, obesity-associated alterations in HM composition may play a role in early-life nutritional programing of infant adiposity.
Dietary interventions during the postnatal period may provide a window to temper the effects of obesity on HM composition.It has been shown in observational studies that a higher Mediterranean diet score is associated with lower HM saturated fatty acid concentrations and with increased monounsaturated fatty acids and total antioxidant capacity (12,13).However, very few dietary intervention studies have been conducted in breastfeeding women that have also analyzed components of HM.A crossover study, employing four different dietary paradigms (galactose vs. glucose and high carbohydrate vs. high fat) during lactation, showed an association between dietary energy source and HMO concentrations (14).Another study, aimed at decreasing maternal energy, fat, and sugar intake over 2 weeks postpartum, found that HM insulin, leptin and adiponectin were reduced by 10-25% following the dietary intervention (15).Together these studies suggest that dietary interventions can modulate HM composition.As such, the Mediterranean diet has shown efficacy in decreasing body mass index (BMI) (16,17), circulating obesogenic hormones (17), adipokines (18), and systemic inflammation (16,19) in non-pregnant/non-lactating women with obesity.However, it is yet unknown whether similar results can be attained in lactating women or if these changes may affect human milk content.
In this within-subject pilot intervention trial, we aimed to determine if adherence to a Mediterranean meal plan during lactation could modulate the macronutrient and bioactive (hormone, HMO, and cytokine) content of HM from women with obesity.
Participants and study design
The within-subject intervention study took place at the Arkansas Children's Nutrition Center in Little Rock, Arkansas between April 2019 and February 2020.Healthy women with obesity who were exclusively breastfeeding were recruited from the surrounding community.Of the 90 participants screened, 28 were eligible and of those, 16 enrolled between 4 and 5 months postpartum (Supplementary Figure S1).Three participants did not complete all study visits (19%), resulting in 13 participants for the current analysis.Inclusion criteria were: BMI = 30-50 kg/m 2 , ≥ 18 years of age, singleton pregnancy, intent to continue breastfeeding exclusively until at least 6 months postpartum, and child being able to be fed expressed milk from a bottle.Exclusion criteria included: pre-existing conditions (e.g., diabetes, hypertension, heart disease); use of recreational drugs, tobacco, or alcohol; food allergies, intolerances or preferences incompatible with meal plan; and the use of medications or supplements that are contraindicated for lactating mothers.Maternal age, race and ethnicity, and infant sex were self-reported.Assessments took place at enrollment (pre), 2 weeks and 4 weeks following the start of the dietary intervention (Wk2 and Wk4, respectively).To examine the impact of time on milk composition and infant intakes, participants from an adjacent, observational cohort from the same study center (2,4,20) were matched to the participants of the within-subject intervention study based on maternal BMI and HM sample availability at postpartum months 5 and 6.From the adjacent, observational study, there were only 10 participants that had a BMI above 30 and collected milk samples at both 5 and 6 months postpartum, therefore, all 10 were used to compare with the participants from this within-subject study.To learn about the observational study sample used, please refer to our group's previous publications on this cohort (2,4,20).
Ethics statement
Written, informed consent was obtained from all participants prior to study procedures.All study procedures were approved by the Institutional Review Board of the University of Arkansas for Medical Sciences (Protocol #: 228407).This trial was registered at clinicaltrials.gov (NCT03744429).
3-day food records
Habitual maternal dietary intake was assessed prior to the initiation of the dietary intervention using 3-day food records (two weekdays, one weekend day) and analyzed with the Nutrition Data System for Research (Nutrition Coordinating Center, University of Minnesota, MN) software by trained interviewers.Participants recorded all food, beverages, supplements, and medications that they consumed during the 3 day period.
Dietary intervention
Participants met with a registered dietitian at the initial study visit to receive education about the dietary intervention based on the Mediterranean diet (21) and weekly thereafter to monitor adherence to the meal plan.Motivational interviewing, active listening, and goal setting techniques were used to help participants comply with the intervention.The goals of the counseling sessions were to identify and resolve barriers to adherence as well as provide encouragement and support.The initial session educated on the study intervention and tracking dietary intake while subsequent sessions reviewed compliance to problem solve challenges and celebrate successes.The macronutrient distribution (20-35% of calories from fat, 45-65% carbohydrates, 10-35% protein) and provided caloric intake met the Dietary Guidelines for Americans recommendations (22).All lunches and dinners (2/day, in the form of fresh packaged meals) were provided weekly to the participants throughout the 4 weeks by Trifecta Nutrition (Sacramento, California).Breakfast (breakfast sandwiches and oatmeal, 1/day) and snacks (walnuts, granola bars, Greek yogurt, and fruits, 2/ day) were provided by the research team.Participants were also provided with extra virgin olive oil to add to their meals and were instructed to buy 1% low fat milk to drink or combine with fruits as a smoothie.Participants recorded all food, beverages, supplements, and medications that they consumed and where they made substitutions in the meal plan for the entirety of the trial.Dietary intake was analyzed using the Nutrition Data System for Research.Healthy Eating Index (HEI) and Mediterranean Diet scores were derived from published guidelines (23)(24)(25).The overall intervention dietary composition is summarized in Supplementary Table S1 and an example of a week's menu is shown in Supplementary Table S2.Intervention compliance was calculated as the participants HEI score of consumed meals divided by the HEI score of the prescribed meals multiplied by 100.
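The compliance measure just described is a simple ratio; a minimal sketch of the calculation is shown below, with hypothetical HEI scores rather than actual participant data.

```python
def intervention_compliance(hei_consumed: float, hei_prescribed: float) -> float:
    """Compliance (%) = HEI score of meals actually consumed / HEI score of prescribed meals * 100."""
    return hei_consumed / hei_prescribed * 100.0

# Hypothetical example: consumed meals scoring 72 HEI points against a prescribed menu scoring 85.
print(f"Compliance: {intervention_compliance(72, 85):.1f}%")  # ~84.7%
```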
Anthropometrics and body composition
Maternal and infant anthropometrics and maternal body composition were measured at each visit.Maternal weight and height and infant weight and length were measured as previously described (2).Weight-for-length, weight-for-age and length-for-age z-scores were calculated based on the World Health Organization Child Growth Standards (26,27).Maternal BMI was calculated as kg/m 2 .Maternal fat mass (FM) and fat free mass (FFM) were measured using air displacement plethysmography (Cosmed BodPod ® , Concord, CA).
Infant fat mass and lean mass were measured using quantitative nuclear magnetic resonance (EchoMRI-AH, Echo Medical Systems, Houston, TX).FM and FFM index (FMI and FFMI, respectively) were calculated as FM (kg)/m 2 and FFM (kg)/m 2 .
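The anthropometric indices above are straightforward ratios of mass to height squared. The following sketch shows the arithmetic with hypothetical values; WHO weight-for-length, weight-for-age, and length-for-age z-scores are not reproduced here because they additionally require the published WHO growth-standard reference tables.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index (kg/m^2)."""
    return weight_kg / height_m ** 2

def mass_index(mass_kg: float, height_m: float) -> float:
    """Generic mass/height^2 index; used for both FMI (fat mass) and FFMI (fat-free mass)."""
    return mass_kg / height_m ** 2

# Hypothetical participant: 97 kg body weight, 1.64 m height, 46 kg fat mass.
weight, height, fat_mass = 97.0, 1.64, 46.0
print(f"BMI  = {bmi(weight, height):.1f} kg/m^2")                    # ~36.1
print(f"FMI  = {mass_index(fat_mass, height):.1f} kg/m^2")           # ~17.1
print(f"FFMI = {mass_index(weight - fat_mass, height):.1f} kg/m^2")  # ~19.0
```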
24-h human milk collection
Participants collected HM over 24 h, prior to each visit. Mothers were given the option to either feed their infant the expressed milk from a bottle or to feed the baby from one breast and pump the other breast during the 24-h collection period. If only one breast was pumped, mothers were instructed to alternate the nursed breast and the pumped breast at each feed and record accordingly. At each feed, the mothers were asked to gently invert the expressed HM and aliquot 4 mL of HM into the provided polypropylene tubes. HM was stored at 4°C until the full 24-h collection was complete. Afterwards, the 24-h samples were pooled and stored intact at −80°C.
Human milk composition and infant intakes
Macronutrients (fat, protein, and carbohydrates) were measured in milk from all visits using a Miris HM Analyzer (Miris, Uppsala, Sweden) according to the manufacturer's instructions, from which caloric content was derived. Leptin, insulin, CRP, IL-6, IL-8, and TNF-α concentrations were measured in milk from all visits using high-performance electrochemiluminescence immunoassays (Meso Scale Diagnostics, Rockville, MD). Concentrations of HMOs (nmol/mL) were measured in milk from the pre-intervention and Wk4 visits only, by high-performance liquid chromatography on an amide-80 column (2 μm particle size, 2 mm ID, 15 cm length) with fluorescent detection, as previously described (14). The absolute quantification of the 19 most abundant HMOs (4) was determined using the non-HMO oligosaccharide raffinose as an internal standard added to all milk samples at the beginning of analysis. Infant intakes were estimated using test weighing, which is considered a useful and precise method for assessing milk intake (28)(29)(30): at each visit, a single-feed milk intake volume was obtained and multiplied by the regular, daily feeding frequency as reported by the mothers.
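As a concrete illustration of the intake estimation just described, the sketch below multiplies a test-weighed single-feed volume by the reported feeding frequency and then scales a milk-component concentration by that daily volume. The numbers are hypothetical and are not drawn from the study.

```python
def daily_milk_volume_ml(single_feed_intake_ml: float, feeds_per_day: int) -> float:
    """Estimated 24-h milk volume from one test-weighed feed and the reported feeding frequency."""
    return single_feed_intake_ml * feeds_per_day

def daily_component_intake(concentration_per_ml: float, daily_volume_ml: float) -> float:
    """Daily intake of a milk component = concentration (per mL) x estimated 24-h milk volume (mL)."""
    return concentration_per_ml * daily_volume_ml

# Hypothetical example: 95 mL per feed by test weighing, 8 feeds/day,
# and an HM leptin concentration of 0.4 ng/mL.
volume_ml = daily_milk_volume_ml(95, 8)             # 760 mL/day
leptin_ng = daily_component_intake(0.4, volume_ml)  # 304 ng/day
print(f"Estimated daily milk volume: {volume_ml:.0f} mL")
print(f"Estimated daily leptin intake: {leptin_ng:.0f} ng")
```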
Statistical analyses
Demographic data was summarized using mean and standard deviation for continuous variables and counts (percentages) for categorical variables.Comparisons were made using linear mixedeffect models constructed with random intercepts for each participant followed by type 2 analysis of variance for measurements with no Wk2 values or using linear mixed-effects models constructed with random intercepts for each participant followed by contrasts of estimated marginal means using the lme4, car and modelbased R packages (31)(32)(33).Repeated measures correlations were performed to assess the relationship between dietary components and human milk content using the rmcorr R package (34) and were FDR-adjusted.Power analysis determined that n = 13 participants would allow consideration of an effect size greater than 1.6 g/100 mL for HM fat, 248 pg/mL for leptin, 0.16 pg/mL for TNF-α and 92 ng/mL for CRP.Significance was set at alpha ≤0.05.Data analyses were performed using R (version 4.1.0)(35).Extreme outliers were removed if they were 3 times above the upper quartile or 3 times below the lower quartile for all measurements.
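The analyses above were carried out in R (lme4, car, modelbased, and rmcorr); the sketch below is a rough Python analogue of two pieces of that pipeline (a random-intercept mixed model and a Benjamini-Hochberg FDR adjustment) using statsmodels. The data frame, column names, and p-values are simulated or hypothetical, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

# Simulated long-format data: one row per participant per visit (column names are hypothetical).
rng = np.random.default_rng(0)
subjects = np.repeat([f"S{i:02d}" for i in range(13)], 3)
visits = np.tile(["pre", "wk2", "wk4"], 13)
leptin = rng.normal(loc=[700 if v == "pre" else 500 for v in visits], scale=80)
df = pd.DataFrame({"subject": subjects, "visit": visits, "hm_leptin": leptin})

# Linear mixed-effects model with a random intercept per participant,
# a rough analogue of lmer(hm_leptin ~ visit + (1 | subject)) in R.
result = smf.mixedlm("hm_leptin ~ visit", data=df, groups=df["subject"]).fit()
print(result.summary())

# Benjamini-Hochberg FDR adjustment across a set of p-values from repeated tests.
raw_pvals = [0.004, 0.021, 0.014, 0.19, 0.019]
rejected, adjusted_pvals, _, _ = multipletests(raw_pvals, method="fdr_bh")
print(adjusted_pvals)
```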
Mother and infant baseline characteristics
Participants were on average 32.8 ± 3.8 years of age and 77% of participants were of non-Hispanic, White descent (Table 1).All participants had obesity at enrollment (mean BMI: 35.9 ± 5.0 kg/m 2 , FM: 46.1 ± 12.3 kg, and FMI: 17.1 ± 4.3 kg/m 2 ).Infants showed expected growth with increases in weight and length parameters over the 4-week study (Supplementary Table S3).Of all the measured characteristics, only baseline plasma IL-8 was significantly different between the participants that completed the intervention (2.6 ± 1.1 pg/ mL) and those that did not (4.3 ± 1.4 pg/mL).
Changes in human milk bioactive molecule concentrations and daily infant intakes
HM collections occurred 5.4 ± 4.5 days before the beginning of the dietary intervention, 0.77 ± 1.5 days before the Wk2 visit and 1.3 ± 2.5 days after finishing the dietary intervention.Following the 4-week intervention, mean HM leptin concentrations significantly decreased by 37.1% (p < 0.001, Figure 2; Supplementary Table S4), even after adjusting for maternal weight loss during the intervention.HM total energy (p = 0.77) and macronutrient levels (fat: p = 0.75, carbohydrate: p = 0.60, and protein: p = 0.78) did not change, nor did HM concentrations of insulin (p = 0.28), CRP (p = 0.78), IL-6 (p = 0.25), IL-8 (p = 0.37), as shown in Supplementary Table S4.HM concentrations of TNF-α (p = 0.11) were not significantly different between time points, albeit levels decreased in 9 out of the 13 participants (Figure 2).
Effect of time on lactation outcomes, a comparison with an observational cohort
To understand the potential impact of time on HM composition and infant intakes, we examined HM parameters and infant intakes from the within-subject study compared with those of an observational cohort of lactating women with obesity at the same months postpartum.There were no demographic differences between the two cohorts (Supplementary Table S5).There were no differences between the studies in HM concentrations of fat, protein, energy, insulin, IL-6 or CRP, or infants' total milk intake and daily intakes of carbohydrates, protein, energy, insulin, TNF-α, IL-6, or CRP (Supplementary Table S6).There were also no differences in infant weight-for-age, length-for-age or weight-for-length Z-scores or FFMI between cohorts nor did these parameters change with time (Supplementary Table S6).Infant fat mass index was also not different between cohorts, however we did observe that it increased with time, as would be expected in healthy growing infant cohorts.
Association of maternal diet components with human milk composition and daily infant intakes
To determine if specific dietary components had a direct relationship with HM or infant outcomes, we performed repeated measures correlations (Figure 5). After FDR-adjustment, no components of maternal diet were associated with HM composition. Several dietary components showed significant, negative associations with infant daily intake of leptin (Figure 5), including maternal dairy HEI score (r_rm = −0.76, adjusted p = 0.004), total HEI score (r_rm = −0.70, adjusted p = 0.014), total fruit HEI score (r_rm = −0.65, adjusted p = 0.021), total vegetable HEI score (r_rm = −0.68, adjusted p = 0.019), and greens and beans HEI score (r_rm = −0.72, adjusted p = 0.01). Conversely, maternal fat intake as a percentage of daily calories was positively associated with infant daily intake of leptin (r_rm = 0.65, adjusted p = 0.021).
Adjusted linear mixed effects models were used to investigate the relationships between HEI score components and infant intakes while adjusting for total HEI score (Table 3).The adjusted β coefficients were less than 25% different for the following models: dairy HEI score and daily infant intake of TNF-α (14.5%), and greens and beans HEI score and daily infant intake of leptin (16.5%).Together, these data suggest that the observed relationship between dietary components of maternal HEI scores and infant intakes of human milk components were not dependent on total HEI scores.
Discussion
The growing evidence that having overweight and obesity modulates HM composition in ways that can promote infant adiposity (2,4,5,9) warrants the development of interventions that may temper these effects.In this study, we tested the effect of a 4-week Mediterranean meal plan, implemented at 5 months postpartum in women with obesity and demonstrated for the first time that the intervention improved maternal HEI scores and plasma lipid profiles, reduced maternal BMI and fat mass, decreased HM concentrations and infants' intakes of leptin, TNF-α, LNFP II, LNFP-III, DFLNT, total HMOs and HMO-bound fucose.While we observed differences in the concentrations of HM leptin, maternal circulating levels of leptin did not change, indicating other potential avenues for maternal dietary interventions to alter HM composition such as altering leptin production locally in the mammary gland (36).Additionally, we identified individual dietary components (e.g., protein, fat) and HEI score components (e.g., dairy, total fruit) that were significantly associated with intakes of bioactive molecules in HM.These findings provide compelling evidence that dietary interventions during lactation can mitigate obesity-associated alterations in HM composition that may ultimately affect early-life nutritional programing of infant health while promoting maternal health.
Maternal diet and maternal outcomes
In non-pregnant/non-lactating women with obesity, adherence to a Mediterranean diet has been shown to decrease BMI (16,17), circulating obesogenic hormones (17), adipokines (18), and systemic inflammation (16,19).Findings from this study suggest that similar results can be attained in lactating women with obesity, which is of importance to prevent postpartum weight retention and optimize maternal and child health (37).Several randomized controlled trials have demonstrated the efficacy of caloric restriction and/or exercise on weight loss and body composition during the postnatal period, although they failed to evaluate their impact on HM composition or infant health (38).By proxy, replacing habitual post-partum maternal diet with a Mediterranean dietary pattern in our study population resulted in a reduced caloric intake while meeting dietary guideline recommendations.Consistent with our findings, caloric restriction resulted in significant weight loss and improved body composition that was sustained for up to 1 year in some studies (38, 39).Stendell-Hollis et al. also demonstrated that 4 months of a Mediterranean diet or a MyPyramid diet were effective in reducing postpartum maternal weight, fat mass, and plasma TNF-α levels (40).Low-fat diets, and diets that are high in fiber decreased HDL levels in adults with normal weight and overweight/obesity, similar to what we observed in lactating women (41,42).Future studies will need to elucidate the unique contributions of energy deficit vs. maternal dietary quality to changes in HM composition and their benefits to the child.
Maternal diet and human milk bioactives
Women with obesity have elevated pro-inflammatory chemokines (2,11), leptin (2,3), and insulin (2,8) HM content compared to peers with normal weight. Importantly, infants' intakes of HM insulin and CRP were significantly and positively associated with their fat mass index (2). It is believed that obesity-related systemic and local (mammary gland microenvironment) inflammation (43) may contribute to elevations in pro-inflammatory cytokines that have been observed in HM from women with obesity (2,11). While much research has focused on how maternal obesity influences bioactives in HM (10), few studies exist describing the relationships between maternal dietary intake and HM bioactives. In animal models of obesity, caloric restriction has led to decreases in mammary gland inflammation (43,44). Our study expands on these findings and demonstrates a reduction in HM pro-inflammatory cytokines (TNF-α and IL-8) from women with obesity who underwent a Mediterranean dietary intervention. In agreement with our current data, improved maternal dietary quality and reduced caloric intake led to lower HM insulin and leptin levels (17). Critically, infant intakes of these HM components were also reduced following the 4-week intervention. In accordance with our study, a previous investigation reported that reduced caloric intake for 2 weeks did not result in changes in concentrations or infant intakes of HM macronutrients or in changes in infant weight-for-length, weight-for-age, or length-for-age z-scores, despite changes in infant intakes of leptin and insulin (15). With such an acute intervention, changes in infant growth that could be attributed to HM components would not necessarily be expected. Therefore, future studies that can implement dietary interventions throughout the postnatal period are critical to understanding the potential positive impact such dietary interventions may have on HM composition and subsequent offspring body composition. We and others have also shown positive associations between maternal obesity and HMO content (4,45). These associations are important because HMOs are among the most predominant bioactive components in HM, supporting infant gut development (46,47) and the prevention of infectious diseases (47). However, recent evidence also suggests that some HMOs that are elevated in HM from women with obesity are associated with infant growth (greater weight-for-length Z-scores) and fat accretion (4,48,49). While LNFP III was not associated with maternal obesity in our previous study (4), it showed a strong positive association with infant fat mass at 2 months of age (4).
Figure 5 caption: Repeated measures correlations between maternal diet components and infant intake of human milk components. Scatterplots showing relationships between maternal dietary components and infant intakes of human milk components. Maternal dietary intake was analyzed using the Nutrition Data System for Research. Infant intakes were estimated at each visit using test weighing and feeding frequency reported by the mothers. FDR-adjusted p-values are presented. HEI: Healthy Eating Index, 6'SL: 6'-Sialyllactose, LNT: Lacto-N-tetrose, LSTb: Sialyl-lacto-N-tetraose b, IL: Interleukin, LNH: Lacto-N-hexaose, TNF-α: Tumor Necrosis Factor α.
Our data are not completely aligned with these previous reports (51)(52)(53). It is possible that these discrepancies are related to differences in the analyses of dietary intake data (e.g., food frequency questionnaires vs.
daily food records), in the timing of sample collection, or in the analytical approaches of the studies. Comparable to our data, Azad et al. reported no significant association between maternal HEI score and HMO concentrations (51). However, Azad et al. did find a weak but significant negative association between maternal total protein intake and LSTb concentrations, consistent with the infant intake data presented herein. LSTb showed a strong, positive association with infant fat mass as well as weight-for-length and weight-for-age Z-scores in our previous study (4), suggesting that maternal protein intake may be a modifiable factor that can be used in future intervention studies to improve infant body composition. A recent short-term crossover study employing four different dietary paradigms (galactose vs. glucose and high carbohydrate vs. high fat) demonstrated a significant association between maternal dietary energy source and the concentrations of HMOs (14), further supporting the notion that interventions focused on specific dietary components may benefit infant health through alterations to HMOs.
Limitations and strengths
Caution should be taken when interpreting these results because of the small sample size of mainly non-Hispanic White lactating women and the lack of a concurrent control group for comparison.Yet, given the US population-wide exclusive breastfeeding rates at 6 months of 24.9% (54), this study assesses the feasibility of Mediterranean diet pattern implementation in an exclusively breastfeeding cohort of women with obesity.Therefore, reporting data from 13 participants of this population provides foundational knowledge for future Mediterranean diet intervention designs.There is a clear need to conduct randomized control trials to confirm our pilot-study findings and to use standardized methodology to increase reproducibility and rigor of future research.A second limitation is the confounding effect of calorie restriction that occurred by replacing habitual dietary patterns with the Mediterranean diet plan in this study.While this prevents us from exclusively attributing the assessed effects to the change in diet quality, it provides insights to the prevailing nutrient poor, calorie dense habitual diets in the assessed cohort.Furthermore, participants consumed an average of 1841 kcal/d during the intervention, which is in-line with DGA for sedentary women ages 19-50y, as an additional 450-500 kcal/d intake during breastfeeding is only recommended for women aiming to maintain post-partum weight (22).Third, the effects of storage at 4°C of the HM during the 24-h collection at the participants' homes were not investigated.Despite these limitations, this pilot study provides unique results that healthy dietary habits can influence maternal health, HM composition, and children's HM intakes during the postpartum period in women with obesity.There were several significant strengths to this study, including: (1) greater than 80% adherence to the dietary intervention that resulted in significant improvements in maternal diet quality and body composition in only 4 weeks, (2) significant changes in human milk bioactive components and (3) measuring infant HM intakes and acquiring representative milk samples over 24-h to use best practices in estimating infants' exposures.Future studies will need to evaluate a more diverse population, larger cohort, and a longer length of intervention while maintaining isocaloric intakes and body weight from baseline.
Conclusion
This study is the first to demonstrate the feasibility of implementing a Mediterranean meal plan in lactating women with obesity while examining its impact on human milk composition, infant intake, and infant anthropometrics.This in-depth investigation allows for a better understanding of the dynamic of the breastfeeding triad of mother/milk/infant and how a healthy diet could improve maternal and child health.
FIGURE 1
FIGURE 1 Changes in maternal outcomes of lactating women with obesity following a 4-week Mediterranean dietary intervention.Paired plots showing changes in maternal body mass index [BMI, (A)], fat mass index [FMI, (B)], fat free mass index [FFMI, (C)], cholesterol (D), high density lipoprotein [HDL, (E)], and low-density lipoprotein [LDL, (F)] between pre-intervention (Pre) and the end of the intervention (Wk4).Dotted lines connect the Pre and Wk4 measures for each participant.Quantiles within the density plots are indicated by the solid horizontal lines.
FIGURE 2
FIGURE 2Changes in leptin and TNF-α composition and infant intake following a 4-week Mediterranean dietary intervention.Paired plots showing changes in human milk leptin and tumor necrosis factor α (TNF-α) between pre-intervention (Pre) and the end of the intervention (Wk4).Dotted lines connect the Pre and Wk4 measures for each participant.Quantiles within the density plots are indicated by the solid horizontal lines.
FIGURE 3
FIGURE 3 Changes in human milk oligosaccharide composition and infant intake following a 4-week Mediterranean dietary intervention.Paired plots showing changes in composition and intake of human milk oligosaccharide (HMO)-bound fucose, Lacto-N-fucopentaose (LNFP) II, LNFP III, Difucosyllacto-Ntetrose (DFLNT), and Total HMOs between pre-intervention (Pre) and the end of the intervention (Wk4).Dotted lines connect the Pre and Wk4 measures for each participant.Quantiles within the density plots are indicated by the solid horizontal lines.
FIGURE 4
FIGURE 4Comparison of changes in human milk composition between the within-subjects intervention and an adjacent, observational cohort.Boxplots showing the changes from 5 to 6 months postpartum in each of the studies for carbohydrate, leptin, tumor necrosis factor-α (TNF-α), and interleukin-8 (IL-8) concentrations.Linear mixed-effects models were used to compare the studies and the p-values are presented.
TABLE 2
Maternal dietary characteristics before (Pre) and during (Wk2 and Wk4) the Mediterranean dietary intervention that was provided from 5 to 6 months postpartum to 13 lactating women with obesity. Maternal dietary characteristics are summarized as mean (SD). Intervention compliance was calculated as the participant's Healthy Eating Index (HEI) score of consumed meals divided by the HEI score of the prescribed meals, multiplied by 100. Comparisons were made using linear mixed-effect models followed by contrasting estimated marginal means, and values with different superscripts are significantly different. The bolded values are those that were significantly different (p < 0.05). (−48.3% and −48.3%, p < 0.001), refined grains (−79.0% and −75.0%, p < 0.001, respectively), and sodium (−62.0%
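Spelled out, the compliance calculation described in the note above amounts to a simple ratio (restated here for clarity; the symbols are ours, not the table's):
$$ \text{Compliance (\%)} = \frac{\mathrm{HEI}_{\text{consumed meals}}}{\mathrm{HEI}_{\text{prescribed meals}}} \times 100 . $$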
TABLE 3
… of raw and adjusted Healthy Eating Index models. The bolded values are those that had % differences less than 25%. *Model adjusted for Total HEI Score. HEI, Healthy Eating Index; IL, Interleukin; LNH, Lacto-N-hexaose; TNF-α, Tumor Necrosis Factor α.
This is important because in the current study, HM concentrations and infant intakes of LNFP III were significantly lower following the dietary intervention. Similarly, previous studies have found significant positive associations between HMO concentrations (disialyllacto-N-tetraose, LNFP II, total HMO concentrations, and total HMO-bound fucose) and infant fat mass at 5-6 months of age (48, 49), many of which were significantly reduced following the Mediterranean diet intervention in this study. It is important to recognize that HMO concentrations change over lactation (50, 51) and change in relation to maternal BMI (4). While these pilot data present compelling evidence for dietary influences on HMOs, our study did not include a prospective control group to test sufficiently the effect of dietary intervention vs. time on HMO concentrations, nor did we have time-matched retrospective data on HMO concentrations in our observational cohort.
|
2024-03-17T16:03:29.665Z
|
2024-03-13T00:00:00.000
|
{
"year": 2024,
"sha1": "031513d323ed4cd2e6c3dc6a8c4d28ef99b7b452",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnut.2024.1303822/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "18c0e20b81a97d01e6ff8d0c48074e9d79f93264",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
17516755
|
pes2o/s2orc
|
v3-fos-license
|
Nonabelian harmonic analysis and functional equations on compact groups
Making use of nonabelian harmonic analysis and representation theory, we solve the functional equation $$f_1(xy)+f_2(yx)+f_3(xy^{-1})+f_4(y^{-1}x)=f_5(x)f_6(y)$$ on arbitrary compact groups. The structure of its general solution is completely described. Consequently, several special cases of the above equation, in particular, the Wilson equation and the d'Alembert long equation, are solved on compact groups.
Introduction
Let G be a group. The d'Alembert equation
$$f(xy) + f(xy^{-1}) = 2f(x)f(y), \qquad (1.1)$$
where f : G → C is the function to determine, has a long history (see [2]). It is easy to check that if ϕ is a homomorphism from G into the multiplicative group of nonzero complex numbers, the function f(x) = (ϕ(x) + ϕ(x)^{-1})/2 is a solution of Eq. (1.1) on G. Such solutions and the zero solution are called classical solutions. Kannappan [13] proved that if G is abelian, then all solutions of Eq. (1.1) are classical. This was generalized to certain nilpotent groups in [6,7,11,15,16]. On the other hand, Corovei [6] constructed a nonclassical solution of Eq. (1.1) on the quaternion group Q_8. It was realized later that Corovei's solution is nothing but the restriction to Q_8 of the normalized trace function tr/2 on SU(2), which is a nonclassical solution of Eq. (1.1) on SU(2) (c.f. [1,22]). Recently, it was proved in [22,23] that any nonclassical continuous solution of Eq. (1.1) on a connected compact group factors through SU(2), and that the function tr/2 is the only nonclassical continuous solution on SU(2). This was generalized by Davison to arbitrary compact groups in [8], and further to any topological groups in [9] (with the group SU(2) replaced by SL(2, C)). Hence Eq. (1.1) on topological groups has been completely solved. For more results related to Eq. (1.1), we refer to [4,5,10,18,19,20] and the survey [17].
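As a quick check (a short verification sketch; it uses only that ϕ is multiplicative and nowhere zero), the classical solutions indeed satisfy Eq. (1.1):
$$f(xy)+f(xy^{-1})=\tfrac12\big(\varphi(x)\varphi(y)+\varphi(x)^{-1}\varphi(y)^{-1}+\varphi(x)\varphi(y)^{-1}+\varphi(x)^{-1}\varphi(y)\big)=\tfrac12\big(\varphi(x)+\varphi(x)^{-1}\big)\big(\varphi(y)+\varphi(y)^{-1}\big)=2f(x)f(y).$$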
A well-known generalization of the d'Alembert equation is the Wilson equation
$$f(xy) + f(xy^{-1}) = 2f(x)g(y), \qquad (1.2)$$
where f and g are unknown complex functions on G. It was first considered by Wilson [21] and has also been extensively studied (see [9,10,11,16,17] and the references therein). It turns out in [9] that Eq. (1.2) is directly related to Eq. (1.1), where solutions of Eq. (1.2) were used to construct the homomorphism G → SL(2, C) mentioned in the previous paragraph. Furthermore, it was shown (see, e.g., [16]) that if f and g satisfy Eq. (1.2) and f ≢ 0, then g is a solution of the d'Alembert long equation
$$g(xy) + g(yx) + g(xy^{-1}) + g(y^{-1}x) = 4g(x)g(y). \qquad (1.3)$$
The question of solving Eq. (1.3) on arbitrary topological groups was raised in [8]. However, the approaches in [8,9,22] do not apply to Eqs. (1.2) and (1.3). The purpose of this paper is to study the equation
$$f_1(xy) + f_2(yx) + f_3(xy^{-1}) + f_4(y^{-1}x) = f_5(x)f_6(y), \qquad (1.4)$$
where f_1, ..., f_6 are unknown complex functions on G (cf. [14]). Our main ingredients are nonabelian harmonic analysis on compact groups and representation theory. Let G be a compact group. Then the Fourier transform transforms a square integrable function f on G into an operator-valued function f̂ on Ĝ, the unitary dual of G. Applying the Fourier transform to both sides of Eq. (1.4) and taking some representation theory into account, we will convert Eq. (1.4) into a family of matrix equations. We call a tuple of matrices satisfying such matrix equations an admissible (matrix) tuple. There are three types of admissible tuples, i.e., complex, real, and quaternionic types, which correspond to the three types of the representations [π] ∈ Ĝ, respectively. To determine the admissible tuples is a question of linear algebra. We will find all admissible tuples of each type. Then applying the Fourier inversion formula, we obtain the general solution of Eq. (1.4).
The structure of the general solution of Eq. (1.4) can be compared with that of linear differential equations, where any solution is the sum of a particular solution and a solution of the associated homogeneous differential equation. In our case, the homogeneous equation associated with Eq. (1.4) is
$$f_1(xy) + f_2(yx) + f_3(xy^{-1}) + f_4(y^{-1}x) = 0. \qquad (1.5)$$
The paper is organized as follows. Some basic properties of the Fourier transform on compact groups and some facts in representation theory will be briefly reviewed in Section 2. In Section 3 we will give some basic definitions related to Eq. (1.4), introduce the notion of admissible matrix tuples, reveal their relations with Eq. (1.4), and present some examples which are the building blocks of the general solution. Then in Section 4 we will determine all admissible matrix tuples. The main results will be proved in Section 5. The general solutions of several special cases of Eq. (1.4) will be given in Section 6.
We should point out that one could apply our method in this paper to some other types of functional equations on compact groups, and that the method may be also generalized to solve functional equations on non-compact groups admitting Fourier transforms.
Throughout this paper, G denotes a compact group, dx the normalized Haar measure on G, and L 2 (G) the Hilbert space of all square integrable functions on G with respect to dx. By solutions of Eq. (1.4) (or its special cases) on G we always mean its L 2 -solutions.
We would like to thank Professor H. Stetkaer for giving many valuable comments.
Preliminaries
As mentioned in the introduction, our basic tools in this paper are Fourier analysis on compact groups and some results in representation theory. In this section, we briefly review some fundamental facts in these two subjects that will be used later.
2.1. Fourier analysis. We mainly follow the approach of [12, Chapter 5]. Let Ĝ be the unitary dual of the compact group G. For [π] ∈ Ĝ, we view π as a homomorphism π : G → U(d_π), where d_π is the dimension of the representation space. Let M(n, C) denote the space of all n × n complex matrices. For f ∈ L^2(G), the Fourier transform of f is defined by
$$\hat f(\pi) = d_\pi \int_G f(x)\,\pi(x)^{*}\,dx.$$
Note that for the sake of convenience, our definition is different from the one in [12] by a factor d_π. In our setting, the Fourier inversion formula is
$$f(x) = \sum_{[\pi]\in\hat G} \operatorname{tr}\big(\hat f(\pi)\pi(x)\big).$$
By the Fourier inversion formula, one can show that f ∈ L^2_c(G)^⊥ if and only if tr(f̂(π)) = 0 for every [π] ∈ Ĝ. A crucial property of the Fourier transform is that it converts the regular representations of G into matrix multiplications. As usual, the left and right regular representations of G in L^2(G) are defined by
$$(L_y f)(x) = f(y^{-1}x), \qquad (R_y f)(x) = f(xy),$$
respectively, where f ∈ L^2(G) and x, y ∈ G. Then it is easy to show that
$$(L_y f)^{\wedge}(\pi) = \hat f(\pi)\pi(y)^{-1}, \qquad (R_y f)^{\wedge}(\pi) = \pi(y)\hat f(\pi).$$
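As a sanity check on these conventions (a short verification sketch, assuming the definition of f̂ with the factor d_π as stated above), the first identity follows from a change of variables and the invariance of the Haar measure:
$$(L_yf)^{\wedge}(\pi)=d_\pi\int_G f(y^{-1}x)\,\pi(x)^{*}\,dx=d_\pi\int_G f(u)\,\pi(yu)^{*}\,du=\Big(d_\pi\int_G f(u)\,\pi(u)^{*}\,du\Big)\pi(y)^{*}=\hat f(\pi)\,\pi(y)^{-1},$$
where u = y^{-1}x; the identity for R_y is proved in the same way.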
2.2. Representation theory. For a positive integer n, let I_n denote the n × n identity matrix, and if n is even, let
$$J_n = \begin{pmatrix} 0 & I_{n/2} \\ -I_{n/2} & 0 \end{pmatrix}.$$
If n is clear from the context, we will simply denote I = I_n and J = J_n. Recall that Sp(n) = {x ∈ U(n) | xJx^tJ^t = I} if n is even, where A^t refers to the transpose of a matrix A. We recall the following definitions.
(1) π is of complex type if [π] ≠ [π̄], where π̄ denotes the conjugate representation of π.
(2) π is of real type if there exists x ∈ U(n) such that xπ(G)x^{-1} ⊆ O(n).
(3) π is of quaternionic type if n is even and there exists x ∈ U(n) such that xπ(G)x^{-1} ⊆ Sp(n).
What is really important for us is the equivalence classes of representations. So if π is of real (resp. quaternionic) type, we will always assume that π(G) ⊆ O(n) (resp. π(G) ⊆ Sp(n)).
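For orientation, one can check directly what the definition gives in the smallest quaternionic case n = 2 (a standard observation added here as an illustration, consistent with the later use of SU(2)): since every A ∈ M(2, C) satisfies A J_2 A^t = det(A) J_2, the condition x J_2 x^t J_2^t = I for x ∈ U(2) reduces to det(x) = 1, so
$$J_2=\begin{pmatrix}0&1\\-1&0\end{pmatrix},\qquad \mathrm{Sp}(2)=\{x\in \mathrm U(2)\mid xJ_2x^{t}J_2^{t}=I\}=\mathrm{SU}(2).$$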
Let Ĝ_c (resp. Ĝ_r, Ĝ_q) denote the set of (equivalence classes of) irreducible representations of G of complex (resp. real, quaternionic) type. Then we have the following basic fact.
Theorem 2.1. Ĝ is the disjoint union of Ĝ_c, Ĝ_r, and Ĝ_q.
Proof (sketched). For an irreducible representation π : G → U(n), we consider the representation ρ of G in M(n, C) defined by ρ(g)(A) = π(g)Aπ(g) t . Let M(n, C) G denote the space of matrices A such that ρ(g)(A) = A for all g ∈ G.
Then [π] = [π̄] if and only if dim M(n, C)^G = 1. In this case, any nonzero matrix in M(n, C)^G is invertible. It is easy to see that M(n, C) is decomposed as the G-invariant direct sum of the space of symmetric matrices M_symm(n, C) and the space of skew-symmetric matrices M_skew(n, C). Hence [π] = [π̄] if and only if either dim M_symm(n, C)^G = 1 (which means that π(G) lies in a conjugate of O(n)), or dim M_skew(n, C)^G = 1 (which means that n is even and π(G) lies in a conjugate of Sp(n)). Since dim M(n, C)^G = 1, the two cases can not occur simultaneously. For more details, see [
Constructing solutions from admissible tuples
We first introduce some notions on solutions of Eq. (1.4), and examine their basic properties. For g, h ∈ L 2 (G), let g ⊗ h be the function on G 2 defined by g ⊗ h(x, y) = g(x)h(y). As being a solution of Eq. (1.4) is a property about f 1 , f 2 , f 3 , f 4 and f 5 ⊗ f 6 , it is natural to denote a solution as a 5-tuple F = (f 1 , f 2 , f 3 , f 4 , f 5 ⊗ f 6 ) of functions. But sometimes we will also write the 5-tuple F as (f i ) 6 i=1 or simply (f i ) for convenience. The corresponding homogeneous equation (1.5) is important for us. Its solutions are 4-tuples of functions (f 1 , f 2 , f 3 , f 4 ), and form a closed subspace of L 2 (G) 4 in the usual way.
i=1 is a solution of Eq. (1.5). In this case, without loss of generality, we always assume that , and call such a solution a homogeneous solution of Eq. (1.4). We say that it is the trivial solution if furthermore is a homogeneous solution, then their sum is a homogeneous solution. We say that a solution ( Then any solution of Eq. (1.4) can be uniquely decomposed as a sum F + F c 1 ,c 2 , where F is normalized and F c 1 ,c 2 is given by (3.1). Furthermore, in the Hilbert space of homogeneous solutions of Eq. (1.4), normalized homogeneous solutions form the orthogonal complement of the space of solutions of the form F c 1 ,c 2 . Finally, we say that a solution . In this case, we say that F is supported on ̟.
In Section 5, we will determine all pure normalized solutions of Eq. (1.4), prove that pure normalized homogeneous solutions span the space of normalized homogeneous solutions, and that any solution is the sum of a pure normalized solution and a homogeneous solution.
We will convert Eq. (1.4) into a family of matrix equations. We call solutions of these matrix equations admissible matrix tuples, whose definitions are as follows. For A, B, C, D, E, F ∈ M(n, C), we consider the linear maps It is easy to see that Ψ E⊗F depends only on E ⊗ F ∈ M(n, C) ⊗ M(n, C), and that if n is even we have It is obvious that trivial admissible tuples are homogeneous. If T is homogeneous, we always assume that E = F = 0.
We should mention that the trace conditions in Definition 3.1 are not essential. As we will see later, they are imposed so that admissible tuples correspond to normalized solutions. This will simplify some arguments below.
We will determine all admissible matrix tuples in the next section. In the rest of this section, we explain how to construct pure normalized solutions of Eq. (1.4) from admissible tuples. We also exhibit some examples of admissible tuples, which indeed include all nontrivial ones. The solutions constructed from these examples form the building blocks of the general solution of Eq. (1.4).
We begin with a simple example.
Then it is easy to check that F
It is homogeneous if and only if it is the trivial solution.
The general principle of constructing solutions from admissible tuples of real and quaternionic types is as follows. For a closed irreducible subgroup K of U(n) and a matrix L ∈ M(n, C), we define the function f L on K as where A, . . . , F ∈ M(n, C), we define the 5-tuple of functions Proposition 3.1. We keep the notation as above.
is homogeneous if and only if T is homogeneous.
(2) If n is even and T is an n-ordered q-admissible tuple, then F is homogeneous if and only if T is homogeneous.
is a solution of Eq. (1.4) on Sp(n). The proofs of the other assertions in (2) are similar to those of the corresponding parts in (1) and omitted here.
is a solution on G. Some relations between F K and F K • ϕ are revealed in the following assertion.
a,b is homogeneous. This fact is meaningful when we construct the general solution of Eq. (1.4) on arbitrary compact groups (see Section 5). For later reference, 2a,−2a . In our notation of homogeneous solutions, F Now we consider admissible matrix tuples of higher order. Since the bilinear pairing (X, Y ) → tr(XY ) on M(n, C) is non-degenerate, for a linear map Γ : M(n, C) → M(n, C), we can define its adjoint Γ † by tr(Γ(X)Y ) = tr(XΓ † (Y )) for all X, Y ∈ M(n, C). It is straightforward to check that Proof. We first prove the assertions for T r . Hence (T q A,B ) † and (T r A,B ) † are admissible tuples of quaternionic and real type, respectively.
The conditions of being homogeneous are easy to prove and left to the reader.
These solutions are homogeneous if and only if tr(A) = 0 and B = −A, and in this case we have F Now we consider 3-ordered r-admissible tuples. We view elements of C 3 as column vectors. For u, v ∈ C 3 , let u, v = u t v be the standard bilinear pairing, and define Let M skew (3, C) denote the space of 3 × 3 skew-symmetric complex matrices.
Note that for w ∈ C 3 , σ u w is (the complex analogue of) the cross product u × w of u and w.
Lemma 3.4. For any u, v ∈ C 3 , the tuple is r-admissible. It is homogeneous if and only if it is the trivial tuple.
Proof. Firstly we consider the representations ρ 1 and ρ 2 of the Lie algebra gl(3, C) in M skew (3, C) and C 3 defined by We claim that the linear isomorphism σ : C 3 → M skew (3, C) sending w to σ w is an equivalence between ρ 1 and ρ 2 , i.e., for all A ∈ gl(3, C) and w ∈ C 3 . To prove this, we note (the complex analogue of) the equality for scalar triple products, i.e., for all w, w 1 , w 2 ∈ C 3 , we have where [w, w 1 , w 2 ] is the 3 × 3 matrix specified by column vectors. Now let A ∈ gl(3, C) and w, w 1 , w 2 ∈ C 3 . Then we have for all A ∈ gl(3, C) and w, w 1 , w 2 ∈ C 3 . Now we notice that From these identities, (3.9), and (3.10), it follows that for all X ∈ M(3, C) we have This proves that T u,v is r-admissible. If T u,v is homogeneous, then σ u = 0 or σ v = 0, which implies that u = 0 or v = 0. Hence it is the trivial tuple.
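For the reader's convenience, the classical identities underlying the computation above are the following (a sketch in the notation σ_u w = u × w introduced before the lemma; these are standard facts rather than the paper's own displays (3.9)–(3.10)):
$$\sigma_u=\begin{pmatrix}0&-u_3&u_2\\ u_3&0&-u_1\\ -u_2&u_1&0\end{pmatrix}\in M_{\mathrm{skew}}(3,\mathbb C),\qquad \langle w,\sigma_{w_1}w_2\rangle=\det[w,\ w_1,\ w_2],$$
for all u, w, w_1, w_2 ∈ C^3, where ⟨u, v⟩ = u^t v and [w, w_1, w_2] is the 3 × 3 matrix with the indicated columns.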
Determination of admissible tuples
In this section we determine all admissible matrix tuples, which are completely described in the following three propositions. We keep the same notation from Section 3. (1) If n = 1, then T = (a, a, 2a) for some a ∈ C.
(2) If n ≥ 2, then T is the trivial tuple. Proof of Proposition 4.1 (2). Denote Φ = Φ c A,B and N n = {1, . . . , n}. Since Φ = Ψ E⊗F , we have dim Im(Φ) ≤ 1. So the entries Φ(X) ij (i, j ∈ N n ) of Φ(X), viewed as linear polynomials in the entries X ij of X, are mutually linearly dependent. We make the convention that if a linear polynomial p in the variables y 1 , . . . , y m is written in the reduced form as p(y) = a 1 y 1 + a 2 y 2 + · · · , then the terms being omitted do not contain y 1 and y 2 .
Since they are linearly dependent, we must have A ij = 0. So A is diagonal. Similarly, B is diagonal. Now we have Φ(X) rs = (A rr + B ss )X rs for all r, s ∈ N n .
Setting (r, s) = (i, i), (i, j), (j, i), (j, j), we get four polynomials. Their mutual linear dependence implies that at most one of the four sums A ii +B ii , A ii +B jj , A jj + B ii , A jj + B jj is nonzero. This forces that they are all zero. So A = −B ∈ CI. But we have tr(A) = tr(B). Hence A = B = 0. This proves that T is the trivial tuple.
We use the similar idea to prove 4.2 (4).
Since they are linearly dependent, we have A ij = 0. So A is diagonal. Similarly, B, C, D are diagonal. Now we have Φ(X) rs = (A rr + B ss )X rs + (C ss + D rr )X sr for all r, s ∈ N n .
Setting (r, s) = (i, j), (i, l), (k, j), (k, l), we get four polynomials. Their mutual linear dependence implies that at most one of A ii + B jj , A ii + B ll , A kk + B jj , A kk + B ll is nonzero. This forces that they are all zero. So A ii + B jj = 0 whenever i = j. This is impossible unless A = −B ∈ CI. Similarly, due to (3.2), Proposition 4.2 (2) is equivalent to Proposition 4.3 (1). We find that the proof of Proposition 4.3 (1) is easier to write up. So we prove it first. In the following proof, we will constantly use the fact that Y + JY t J t = tr(Y )I for all Y ∈ M(2, C) without any further mention.
Step (i). First we assume that C = −A and D = −B. We prove that tr(A) = 0, B = A, and Φ(X) = 2tr(X)A.
In this case, we have Let (i, j) = (1, 2) or (2, 1). Since their linear dependence implies that A ij = B ij . Using this, it is easy to compute that We claim that A ii + B jj = 0. For otherwise, if A ii + B jj = 0, then by the mutual linear dependence, we have But tr(A) = tr(B). Hence tr(A) = 0 and B = A. This also implies that C = D = −A = JA t J t . By (3.8), we have Φ(X) = 2tr(X)A.
Step (ii). Now we prove the general case. Since Step (i), we have tr(A − C) = 0, of C 2 is less than or equal to 1. This implies that A + D is a scalar matrix. By (4.1), B + C is also a scalar matrix. Hence Similarly, we have C = JB t J t . From (3.8), we see that Φ = Ψ I⊗(A+B) and T = (T q A,B ) † . Proof of Proposition 4.2 (2). By (3.2), the tuple (A, −JBJ, −C, JDJ, −(JE)⊗ (F J)) is q-admissible, which must be T q A,−JBJ or (T q A,−JBJ ) † by Proposition 4.3 (1). This implies that T is equal to T r A,B or (T r A,B ) † . Finally we prove 4.2 (3). We will make use of the representations ρ 1 and ρ 2 of gl(3, C) in M skew (3, C) and C 3 defined in (3.9).
We prove that
their linear dependence implies that We now prove that If both Φ(X) ik and Φ(X) jk are identically zero, from the expressions we get A ii + B kk = A jj + B kk = A kk + B ii = A kk + B jj = 0, which implies (4.4). If one of Φ(X) ik and Φ(X) jk , say Φ(X) ik , is not identically zero. Then we have the linearly dependent polynomials Since Φ(X) ik ≡ 0 and B jk = A kj , we must have A ii + B jj = A jj + B ii , which also implies (4.4). By (4.4), there exists α ∈ C such that B ii = A ii + α. But tr(A) = tr(B). So we have for Y ∈ M skew (3, C), and we have dim Im(Φ 1 ) ≤ 1. Now we consider the representations ρ 1 and ρ 2 of gl(3, C) in M skew (3, C) and C 3 defined in (3.9). Note that Φ 1 = 2ρ 1 (A). From the proof of Lemma 3.4, we know that ρ 1 and ρ 2 are equivalent. So rank(A − tr(A)I) = rank(tr(A)I − A t ) = dim Im(ρ 2 (A)) = dim Im(ρ 1 (A)) = dim Im(Φ 1 ) ≤ 1.
The main theorems
Using the results about admissible tuples obtained in the previous section, in this section we prove our main theorems (Theorems 5.2-5.5 below). We first prove a lemma, which is crucial for converting Eq. (1.4) to matrix equations.
In our first theorem we determine all pure normalized solutions of Eq. (1.4). We keep the notation from Examples 3.1-3.5.
Theorem 5.2. Let [π] ∈Ĝ, and let F be a nontrivial pure normalized solution of Eq. (1.4) according to the type of π. Then F = F K • π, where F K is a solution of Eq. (1.4) on K, and the only possibilities of K and F K are as follows: According to the types of π (c.f. Theorem 2.1), there are three cases to consider.
Our next theorem gives all pure normalized homogeneous solutions. Proof. This follows directly from the proof of Theorem 5.2 and the conditions for F K being homogeneous given in Examples 3.1-3.5.
The next theorem characterizes the space of normalized homogeneous solutions. (2)). It is well known that for each positive integer d there exists exactly one d-dimensional irreducible representation of SU(2) (see, e.g., [3]). The 1-dimensional one is the trivial representation. So it is a representation into O(1). The 2-dimensional one is the identity representation. The 3-dimensional one is the adjoint representation Ad in the Lie algebra su(2) of SU (2), which can be viewed as a representation into O(3). As the 1-dimensional representation is into O(1), when applying Theorem 5.2 (1), we can use Example 3.2. Indeed, as the 1-dimensional representation is trivial, the pure normalized solutions obtained from Theorem 5.2 (1) are constant solutions. They are of the form for some a, b ∈ C. The pure normalized solutions obtained by applying The- where A ′ ∈ M(2, C), c 1 , c 2 ∈ L 2 c (G), α ∈ C. Finally, by Theorem 5.5, the general solution of Eq. (1.4) on SU(2) is given by u,v • Ad} and F h is given by (5.3).
Applications
In this section, we consider some functional equations on compact groups which are special cases of Eq. (1.4). In particular, we solve the Wilson equation and the d'Alembert long equation on compact groups. We also recover the general solution of the d'Alembert equation that was obtained in [8,22].
We first consider the equation is also a solution of Eq. (6.1). We first construct some homogeneous solutions of Eq. (6.1).
Example 6.1. Let π : G → O(1) be a homomorphism, and let a ∈ C. We view π as a function on G. Then is a homogeneous solution, provided that j≥1 |a j | 2 < ∞.
It is easy to check that (f, g, h ⊗ k) is a solution of Eq. (6.1) on U(1).
We leave the verification of the above examples to the reader. The following result claims that the above examples are the building blocks of the general solution of Eq. (6.1) on G.
where π : G → K is an irreducible representation with K = U(1), O (2) or SU(2), F is a solution of Eq. (6.1) on K as in Examples 6.2-6.4, and j≥1 F π j ,a j as in Example 6.1.
Proof. Let (f, g, h ⊗ k) be a solution of Eq. (6.1). Then (f, 0, g, 0, h ⊗ k) is a solution of Eq. (1.4). By Theorems 5.2-5.5, there exist c 1 , c 2 ∈ L 2 c (G) and irreducible representations π j : where is a homogeneous solution of Eq. (1.4) on K j , and the only possibilities of K j , π j , and F K j are given in Theorems 5.2 and 5.3. Note that this implies Without loss of generality, we may assume that each F K j is a nontrivial solution. We first prove that where τ v,u is as in Lemma 3.4. Since c 1 is a central function, vu t =ĉ 1 (π 0 ) + u, v I 3 /2 is a scalar matrix. This implies that vu t = 0, i.e., u = 0 or v = 0. Hence F K 0 is the trivial solution, a contradiction. Now we prove that if K j = O(2), then j = 0 and A,B ) † for some A, B ∈ M(2, C) with tr(A) = tr(B), and B = A t with tr(A) = 0 if j ≥ 1.
A,B , similar to the above proof, we obtain that B =ĉ 1 (π j ) is a scalar matrix. So B = tr(A)I/2. If j ≥ 1, then A = B = 0, conflicting with the assumption that F K j is nontrivial. Hence j = 0. If F K j = (F
O(2)
A,B ) † , then similarly B =ĉ 1 (π j ) and −A t =ĉ 2 (π j ) are scalar matrices. So A = B = λI for some λ ∈ C. By Remark 3.1, this case can be absorbed into the former case. Note that if we set P = A + tr(A)I/2, then we have 4 (x)) = tr(P x), f K 0 5 ⊗ f K 0 6 (x, y) = −tr(JP x)tr(Jy), x, y ∈ O(2). (6.4) A similar argument shows that if K j = SU(2), then j = 0 and F K 0 = F SU (2) A, 1 2 tr(A)I for some A ∈ M(2, C). In this case if we set P = A + tr(A)I/2, then we have 6 (x, y) = tr(P x)tr(y), x, y ∈ SU(2). (6.5) The above proofs also imply that if j ≥ 1, then K j = O(1) and F K j = F O(1) a j for some a j ∈ C. In this case we have Now we know that there are three possibilities for K 0 , i.e., K 0 = U(1), O(2), or SU(2). In each case, it is easy to see from (6.3)-(6.6) that where F is a solution of Eq. (6.1) on K 0 as in Examples 6.2-6.4. The proof of the theorem is completed by setting K = K 0 and π = π 0 . Now we consider the special case of Eq. (6.1) where f ≡ g. is f (x) = tr(P π(x)), h ⊗ k(x, y) = tr(P π(x))tr(π(y)), where π : G → SU(2) is a homomorphism and P ∈ M(2, C).
Proof. Clearly, the general solution of Eq. (6.7) corresponds to the solutions of Eq. (6.1) for which f ≡ g. By Theorem 6.1, the functions f and g in a solution of Eq. (6.1) has the form where K = U(1), O(2) or SU(2), π : G → K and π j : G → O(1) are distinct irreducible representations, f K and g K are functions on K as in Examples 6.2-6.4. Applying the Fourier transform, it is easy to see that f ≡ g if and only if f K ≡ g K and a j = 0. Restricting our attention to nontrivial solutions, we can see that either K = U(1) and δ 1 = δ 2 (in the notation of Example 6.2), or K = SU(2). If K = SU(2) we have reached the conclusion of the theorem. If K = U(1) and δ 1 = δ 2 =: δ, then the homomorphism x → diag(π(x),π(x)) ∈ SU(2) and P = diag(ε 1 δ, ε 2 δ) satisfy our requirements.
From Corollary 6.9, we see that the solutions of the d'Alembert long equation (1.3) and the d'Alembert equation (1.1) are the same. The similar result for step 2 nilpotent groups was proved in [16].
The factorization property of the d'Alembert equation on compact groups was studied in [8,9,22,23]. To conclude this section, we summarize the same property of the above equations as follows.
Corollary 6.10. The following factorization properties hold.
|
2008-09-04T21:02:09.000Z
|
2008-09-04T00:00:00.000
|
{
"year": 2008,
"sha1": "2f966b840791da7e04089d6b04f67daa5f755319",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2f966b840791da7e04089d6b04f67daa5f755319",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
260558879
|
pes2o/s2orc
|
v3-fos-license
|
Risk factors for mortality among hospitalized COVID-19 patients in Northern Ethiopia: A retrospective analysis
Background COVID-19 is a deadly pandemic caused by an RNA virus that belongs to the family of CORONA virus. To counter the COVID-19 pandemic in resource limited settings, it is essential to identify the risk factors of COVID-19 mortality. This study was conducted to identify the social and clinical determinants of mortality in COVID-19 patients hospitalized in four treatment centers of Tigray, Northern Ethiopia. Methods We reviewed data from 6,637 COVID-19 positive cases that were reported from May 7, 2020 to October 28, 2020. Among these, 925 were admitted to the treatment centers because of their severity and retrospectively analyzed. The data were entered into STATA 16 version for analysis. The descriptive analysis such as median, interquartile range, frequency distribution and percentage were used. Binary logistic regression model was fitted to identify the potential risk factors of mortality of COVID-19 patients. The adjusted odds ratio (AOR) with 95% confidence interval was used to determine the magnitude of the association between the outcome and predictor variables. Results The median age of the patients was 30 years (IQR, 25–44) and about 70% were male patients. The patients in the non-survivor group were much older than those in the survivor group (median 57.5 years versus 30 years, p-value < 0.001). The overall case fatality rate was 6.1% (95% CI: 4.5% - 7.6%) and was increased to 40.3% (95% CI: 32.2% - 48.4%) among patients with critical and severe illness. The proportions of severe and critical illness in the non-survivor group were significantly higher than those in the survivor group (19.6% versus 5.1% for severe illness and 80.4% versus 4.5% for critical illness, all p-value < 0.001). One or more pre-existing comorbidities were present in 12.5% of the patients: cardiovascular diseases (42.2%), diabetes mellitus (25.0%) and respiratory diseases (16.4%) being the most common comorbidities. The comorbidity rate in the non-survivor group (44.6%) was higher than in the survivor group (10.5%). The results from the multivariable binary regression showed that the odds of mortality was higher for patients who had cardiovascular diseases (AOR = 2.49, 95% CI: 1.03–6.03), shortness of breath (AOR = 9.71, 95% CI: 4.73–19.93) and body weakness (AOR = 3.04, 95% CI: 1.50–6.18). Moreover, the estimated odds of mortality significantly increased with patient’s age. Conclusions Age, cardiovascular diseases, shortness of breath and body weakness were the predictors for mortality of COVID-19 patients. Knowledge of these could lead to better identification of high risk COVID-19 patients and thus allow prioritization to prevent mortality.
Introduction
The first case of the coronavirus disease (COVID- 19) was confirmed in December 2019 in Wuhan, Hubei province, China [1][2][3]. It evolved from wildlife [4], and can cause fever and severe respiratory syndrome in human beings and was declared as a pandemic by the World Health Organization. COVID-19 is an emerging infectious disease due to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). It is associated with lower or upper respiratory infections [5,6]. The infection fatality rates of COVID-19 patients, patient outcomes and related complications reported so far have varied considerably between countries.
Previous studies showed that the overall case fatality rate of COVID-19 patients is 3.77% -5.4%, and 41.1% -61.5% among critically and severely ill patients respectively [7][8][9][10][11][12]. To reduce the infection fatality rate, identifying the determinants related to mortality in COVID-19 patients is urgently needed. This is crucial for the decision-making process at national and international levels in order to properly respond to the pandemic. Although previous studies reported that old age, and underlying comorbidities were closely associated with disease severity or death of COVID-19 patients [8,11,13,14], the risk factors related to the mortality of COVID-19 patients in low and middle income countries are not well studied. Besides, the prevalence of underlying chronic non communicable diseases, known to be important risk factors for mortality in COVID-19 patients are also different between countries [15,16].
Identifying the determinants related to mortality in COVID-19 patients is urgently needed to reduce the case fatality rate of the deadly disease. The present study analyzed the social and clinical determinants of mortality among hospitalized COVID-19 patients in four treatment centers in Tigray, Ethiopia. This study provided useful information to associate risk factors with case fatality of COVID-19 hospitalized patients and support decision making regarding COVID-19 in resource limited settings.
Study design and settings
A retrospective cohort study was used that involved all COVID-19 patients from 7 May to 28 October 2020, from six COVID-19 isolation and treatment centers namely Mekelle, Maichew, Axum, Adigrat, Shire and Humera. Maichew and Shire were isolation centers whereas Mekelle, Axum, Adigrat and Humera had both isolation and treatment centers. Patients who were in an isolation center were transferred to one of the 4 isolation and treatment centers if hospitalization needed. Following the declaration of a pandemic situation by WHO, the Tigray regional state government with Tigray Health Bureau (THB) implemented mass screening of all travelers who enter to the region, individuals who had been in contact with confirmed cases of COVID-19 and individuals in high risk settings (health care workers, private business employees, long track drivers and merchants). Regardless of sign or symptoms development, all individuals with laboratory confirmed COVID-19 infection were admitted to the isolation and treatment centers within 24 hours. Moreover, anyone who has contact with confirmed COVID-19 cases was being isolated for 14 days. Persons who failed to develop symptoms within 14 days were being discharged from the isolation centers. Cases were confirmed by polymerase chain reaction (PCR) in the treatment centers.
Study participants and study period
The study participants were all laboratory-confirmed positive cases of SARS-CoV-2 admitted to the six isolation and treatment centers between May 7 and October 28, 2020.
Data source and sample
The data were collected using standardized form from electronic medical records from the six isolation and treatment centers. The data set contains clinical information of the patients, demographic characteristics and patient outcomes. All laboratory-confirmed COVID-19 cases admitted to the isolation and treatment centers and who were candidates for hospitalization were included in this study.
Operational definitions
COVID-19 cases are all individuals tested positive for SARS-CoV-2 by polymerase chain reaction (PCR) in the isolation and treatment centers. Symptomatic case is defined as any SARS-CoV-2 positive individual by PCR in the treatment centers with at least one sign or symptom for COVID-19. Signs and symptoms of COVID-19 include but not limited to: fever, cough, shortness of breath, headache, sore throat, pain, fatigue, myalgia, nasal congestion, diarrhea, nausea, vomiting, loss of smell, loss of taste and loss of appetite. Severe cases are with clinical signs of pneumonia (fever, cough, dyspnea, fast breathing) and have one of the following conditions: i) respiratory rate interval > 30 breaths/min; ii) SpO2 (saturation of peripheral oxygen) < 93% at rest; iii) severe respiratory distress. Critical cases have to meet one of the following conditions: i) respiratory failure and consequent needs of mechanical ventilation; ii) shock; iii) require intensive care because of multiple organ dysfunction. Asymptomatic patient is any patient who tested positive for COVID-19 but does not have any of the symptoms.
These patients are detected after isolation and contact tracing. Cases with comorbidity are COVID-19 patients with at least one known preexisting chronic medical illness.
Study variables
Dependent variable. The dependent variable in this study was hospitalized COVID-19 patient outcome. It was dichotomized as 1 if the patient has died and 0 if the patient has recovered. The confirmed COVID-19 patients after spending some days in isolation and treatment centers were retested and they can be discharged when the symptoms have subsided, the body temperature remains at a normal range for at least three days, two consecutive laboratory tests are negative and radiological improvement.
Independent variables. In this study the independent variables were sex, age, occupation, signs and symptoms, comorbidity and type of comorbidity, disease severity status, temperature, travel history, nationality and source of infection.
Statistical data analysis
The data were coded, cleaned, and checked for completeness. STATA version 16 software was used for data processing and analysis. Continuous variables were presented as median and interquartile range (IQR), while categorical variables were described as frequencies (%) and compared using the chi-square test or Fisher's exact test. A binary logistic regression model was used to explore the risk factors for mortality of COVID-19 patients. Because maximum likelihood estimates (MLE) are systematically biased for rare events, the penalized maximum likelihood estimation (PMLE) method was used to estimate the logistic regression model [17]. PMLE reduces the bias of the estimates for rare events and converges even for small sample sizes. Bivariate analysis was first used to identify independent variables that were associated with the outcome (i.e., death versus recovered). Independent variables that were significant at the 0.25 level in the bivariate analysis were further included in the multivariable binary logistic regression. Odds ratios with 95% confidence intervals were used to measure the degree of association between predictors and the outcome variable, and multicollinearity between independent variables was checked using variance inflation factors (VIFs). Goodness of fit of the model was assessed using the Hosmer-Lemeshow test. The discriminatory accuracy of the model was also assessed through the receiver operating characteristic (ROC) curve.
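A minimal Python sketch of this modelling workflow is given below for illustration. The file name and column names ("died", "age_group", etc.) are hypothetical, and the sketch fits an ordinary maximum-likelihood logistic model; the study's Firth-type penalized likelihood correction would require a dedicated implementation and is not shown.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.metrics import roc_auc_score

df = pd.read_csv("covid_admissions.csv")                    # hypothetical data file
y = df["died"]                                              # 1 = died, 0 = recovered
X = pd.get_dummies(
    df[["age_group", "sex", "cardiovascular", "shortness_of_breath", "body_weakness"]],
    drop_first=True,
).astype(float)
X = sm.add_constant(X)

# Multicollinearity check via variance inflation factors.
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)

# Logistic regression (ordinary MLE here, unlike the paper's penalized MLE).
fit = sm.Logit(y, X).fit(disp=0)
odds_ratios = np.exp(fit.params)                            # adjusted odds ratios
or_ci = np.exp(fit.conf_int())                              # 95% confidence intervals

# Discrimination: area under the ROC curve.
auc = roc_auc_score(y, fit.predict(X))
print(odds_ratios, or_ci, vif, auc, sep="\n")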
Ethical considerations
Permission to assess the data was obtained from Tigray Health Bureau (THB) and Mekelle University. This study was approved by the research ethics review committee of the College of Health Sciences, Mekelle University with reference number: IRB 1826/2021. The Tigray Health Bureau waived the requirement informed consent before the study started due to the urgent need to collect epidemiological and clinical data. The confidentiality of data was kept as there were no personal identifiers used and neither the raw data nor the extracted data were passed to a third person.
Clinical characteristics
The source of infection for the majority (68.4%) of the hospitalized COVID-19 patients was the community (Table 2). From the total, 56 (6.1%) patients have died. The most frequently reported signs and symptoms on admission were: cough (70.6%), fever (40.1%), sore throat (26.9%) and body weakness (25.9%). Based on the body temperature on admission, 403 (73.8%) of the patients had temperature < 37.3˚C, 77 (14.1%) had temperature > 38 0 C. One or more pre-existing comorbidities were present in 116 (12.5%) patients: cardiovascular diseases (42.2%), diabetes mellitus (25.0%) and respiratory disease (16.4%) being the most common (Fig 2). The comorbidity rate in the non-survivor group was higher than in the survivor group (44.6% versus 10.5%, p-value < 0.001). Moreover, the disease severity status between groups, the proportions of severely and critically ill patients in the non-survivor group were significantly higher than those in the survivor group (19.6% versus 5.1% for severe illness and 80.4% versus 4.5% for critical illness, all p-value < 0.001). The case fatality rate among patients with severe illness and critically ill were 20.0% and 53.6%, respectively. The overall case fatality rate was 6.1% and was increased to 40.3% among patients with critical and severe illness ( Table 2).
Risk factors of mortality among hospitalized COVID-19 patients
To determine the potential risk factors for mortality among the hospitalized COVID-19 patients, a binary logistic regression model was fitted. We initially performed bivariate analysis followed by multivariable analysis. Predictors that were statistically associated with the outcome at p-value < 0.25 in the bivariate analysis were further included in the multivariable analysis. The multivariable binary logistic regression analysis showed that age, shortness of breath, body weakness, and cardiovascular disease were significant predictors of mortality (Table 3).
Keeping the effect of other predictors constant, COVID-19 patients aged < 30 years had 79% lower odds of dying (AOR = 0.21, 95% CI: 0.08-0.53) and patients aged 30-49 years had 86% lower odds of dying (AOR = 0.14, 95% CI: 0.06-0.36) compared to patients aged 70 years and above. In addition, COVID-19 patients who had cardiovascular disease were about 2.5 times more likely to die (AOR = 2.49, 95% CI: 1.03-6.03) than patients who had no cardiovascular disease. Patients who had shortness of breath at admission were about 9.7 times more likely to die (AOR = 9.71, 95% CI: 4.73-19.93) than their counterparts. COVID-19 patients who had body weakness at admission had about 3 times higher odds of dying (AOR = 3.04, 95% CI: 1.50-6.18) than those who had no body weakness. The receiver operating characteristic (ROC) analysis showed very good accuracy in classifying mortality due to COVID-19 (area under the curve = 0.8713). Finally, the Hosmer-Lemeshow test confirmed that the model was a good fit for the data (χ²(9) = 7.82, p-value = 0.553).
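For readers unfamiliar with the reporting convention, the percentages quoted above follow directly from the adjusted odds ratios; for a fitted logistic coefficient β̂ (a standard conversion, with the confidence interval typically computed as a Wald interval),
$$\widehat{\mathrm{AOR}}=e^{\hat\beta},\qquad 95\%\ \mathrm{CI}=e^{\hat\beta\pm 1.96\,\mathrm{SE}(\hat\beta)},\qquad \text{\% lower odds}=(1-\widehat{\mathrm{AOR}})\times 100,$$
so, for example, AOR = 0.21 for patients aged < 30 years corresponds to (1 − 0.21) × 100 ≈ 79% lower odds of death relative to the reference group.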
Discussion
COVID-19 is an emerging infectious disease due to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and has become a global pandemic. To reduce the infection fatality rate, identifying the determinants related to mortality in COVID-19 patients is urgently needed. This paper assessed the risk factors of mortality among hospitalized COVID-19 patients. This study comprised of 925 hospitalized COVID-19 patients with 56 deaths and 869 patients discharged improved. Our study showed that men accounted for a higher proportion of COVID-19 patients than women, which was consistent with most of the confirmed cases in France, China and Italy [15,[18][19][20][21]. The observed overall case fatality rate was 6.1%, which was similar with previous studies from China. The case fatality rate of COVID-19 was reported nearly 3.7-5.4 [7][8][9]22]. However, the magnitude was significantly lower than the previous studies conducted in France and New York City [15,23]. The discrepancy could be due to the older age of patients in these study (median ages: 72 and 63 years respectively) that could have led to high severe disease, which explain the higher mortality rates reported in these studies. The mortality rate increased to 40.3% among patients with severe and critical illness, which was significantly higher than the study conducted in Hubei province, China [22]. However, it was lower than the study conducted in France [15]. The younger age of patients in our study (median age 30 years) could have led to lower mortality rate. Previous studies reported that non-survivor patients of COVID-19 were older [11,14,22]. Moreover, it could be due to the treatment experience and awareness during the later period of this pandemic.
In this study the case fatality rate was 53.6% among patients with critically ill patients and 20.0% among the severely ill patients. The case fatality rate among the critically ill patients was similar with previous studies [10][11][12]22]. Our results showed that non-survivor patients were older and they had underlying diseases. This was consistent with most previous research studies [8,11,14,15,22,24,25]. Higher proportion of cardiovascular diseases and diabetes mellitus were in non-survivor than in the survivor groups, which was similar to those reported in other studies [9,15,21,22,[23][24][25][26][27][28]. As reported previously, cardiovascular diseases and diabetes mellitus were the most common comorbidities [21,26]. This is in line with our findings. Our data also showed that the most frequently reported symptoms on admission were cough, fever, sore throat, body weakness, pain and shortness of breath. As reported from previous studies the top three common symptoms were fever, cough and fatigue [22,29]. In this study there were no differences in gender, symptoms fever, cough and headache between the survivor and nonsurvivor groups. This finding was similar with some previous studies [13,22].
The multivariable binary logistic regression analysis demonstrated that age, shortness of breath, body weakness and cardiovascular disease were associated with the mortality of hospitalized COVID-19 patients. We found that older age was associated with high risk of mortality of COVID-19 patients, which was similar with previous studies [8,11,13,14,22]. Our study also showed that the odds of mortality were higher for patients who had cardiovascular disease, shortness of breath and body weakness at admission.
Our study is limited to the time frame of March-October 2020 because of the ongoing war in Tigray region. Since November of 2020, a destruction was launched over Tigray region by the Ethiopian and Eritrean forces with destruction of healthcare infrastructure and disruption of services [30]. COVID service has been severely affected (83%). This has impacted COVID vaccine access and vaccine coverage has remained below 2%. Future study should look at the impact of healthcare disruption on COVID-19 mortality. In addition, risk factors for COVID-19 mortality pre and post introduction of vaccine in Tigray region should be studied.
Conclusions
In conclusion, this study showed that non-survivors hospitalized COVID-19 patients were old and they had underlying comorbidity diseases. The factors affecting mortality among COVID-19 patients were age, shortness of breath, body weakness and cardiovascular diseases. Knowledge of these could lead to better identification of high risk COVID-19 patients and thus allow prioritization to prevent mortality.
|
2022-08-13T15:07:57.798Z
|
2022-08-11T00:00:00.000
|
{
"year": 2022,
"sha1": "ff2f906cb68a8758567d4b3ddc1512d9b117357d",
"oa_license": "public-domain",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0271124&type=printable",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "dd74351faec49a6da9ec688e3268ada62437cc56",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
257991572
|
pes2o/s2orc
|
v3-fos-license
|
Status-Based Asymmetries in Relative Deprivation During the COVID-19 Pandemic
The COVID-19 pandemic has amplified existing inequalities by disproportionately affecting marginalized groups, which should differentially affect perceptions of, and responses to, inequality. Accordingly, the present study examines the effects of the pandemic on feelings of individual- and group-based relative deprivation (IRD and GRD, respectively), as well as whether these effects differ by ethnicity. By comparing matched samples of participants assessed before and during the first 6 months of the pandemic (Ntotal = 21,131), our results demonstrate the unique impacts of the pandemic on IRD and GRD among ethnic minorities and majorities. Moreover, our results reveal the status-based indirect effects of the pandemic on support for both collective action and income redistribution via IRD and GRD. As the pandemic rages on, these results foreshadow long-term, status-specific consequences for political mobilization and support for social change.
The COVID-19 pandemic has amplified existing inequalities and left some more disadvantaged than others.
Critically, objective deprivation only partially explains people's feelings of dissatisfaction and injustice arising from inequality (see Schmalor & Heine, 2022;Smith et al., 2012). For example, subjective inequality measures better predict well-being than do objective measures (Vezzoli et al., 2022). Moreover, objectively disadvantaged groups that overlook their deprivation (relative to other groups) are less supportive of collective action to redress inequities (Osborne, Garcı´a-Sa´nchez, & Sibley, 2019). Hence, examining the pandemic's impact on feelings of relative deprivation is integral to understanding the consequences of the pandemic on social change and efforts to redress inequality (see Grant & Smith, 2021).
To these ends, the present study assesses the impact of the pandemic on individual-and group-based relative deprivation (IRD and GRD, respectively). Specifically, we draw on a large, nationwide sample of participants collected in the first 6 months of the pandemic (i.e., March-August 2020) and compare them with a propensity-matched control sample who completed the survey in 2019, well before the first reported cases of COVID-19. In addition, given that New Zealand's Alert Level system began with a wide-scale set of restrictions that gradually eased during these first 6 months (New Zealand Government, 2021; see Table 1), we compare participants' responses in different levels of ''lockdown.'' Critically, we examine (a) whether the effects of the pandemic differ between ethnic minorities and majorities and (b) whether the different lockdown conditions are indirectly associated with support for ethnic-based collective action and income redistribution via IRD and GRD.
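To make the mediation language concrete, indirect effects of this kind are typically quantified with a product-of-coefficients decomposition (a generic illustration; the study's exact estimation approach may differ):
$$M_i = a\,X_i + \varepsilon_{1i},\qquad Y_i = b\,M_i + c'\,X_i + \varepsilon_{2i},\qquad \text{indirect effect} = a \times b,$$
where X is the lockdown condition (versus the pre-pandemic control), M is IRD or GRD, and Y is support for collective action or income redistribution.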
Relative Deprivation Theory
Beginning with Stouffer et al.'s (1949) war-time studies, relative deprivation theory argues that responses to inequality depend on a person's subjective comparisons with similar others rather than their objective (dis)advantage (Pettigrew, 2015;Smith & Huo, 2014). Runciman (1966) expanded this concept by distinguishing between egoistic (individual) and fraternal (group) relative deprivation-an individual can believe they are deprived relative to other individuals (IRD) or that their ingroup is deprived relative to other groups (GRD). These discrete comparison targets produce different ''yardsticks'' to which individuals measure their (or their ingroup's) position in society (Kim et al., 2018;Smith & Huo, 2014).
Although measuring the effects of COVID-19 on objective indicators is important, it is critical to assess the pandemic's impact on one's relative position. Indeed, the pandemic presents unique challenges that should elicit IRD or GRD, depending on the type of comparison (see Osborne, Sibley, & Sengupta, 2015). For example, the unprecedented financial hardship associated with COVID-19 (e.g., Cortes & Forsythe, 2022) collapsed the global economy, with job loss up to four times higher than the 2009 global financial crisis (United Nations, 2021). However, economic hardship arose in tandem with stark increases in wealth for the elite (Neate, 2020;Ryan, 2022). Such inequities should emphasize individual economic conditions and foster social comparisons (see Cheung & Lucas, 2016). The salience of income inequality during the pandemic should thus increase IRD among the general population, particularly during the strictest lockdown conditions where these effects were most pronounced (Fletcher et al., 2022;Prickett et al., 2020).
The pandemic also emphasized group-based inequalities and, as such, should increase GRD. Although ethnic minorities were disproportionally affected by unemployment and economic hardship before the pandemic (see Iceland, 2019;Pager & Shepherd, 2008), the pandemic exacerbated these trends by differentially impacting job and income loss (Gemelas et al., 2022;Hu, 2020;Katikireddi et al., 2021). The salience of these inequities should thus increase GRD among ethnic minorities.
Recent research supports these theses and demonstrates the unique effects of the pandemic on people's perceptions of, and attitudes toward, inequality. For example, the pandemic altered people's attributions for poverty (Wiwad et al., 2021), elicited frustration over class-based inequalities (Ravenelle et al., 2022), and increased inequality aversion (Asaria et al., 2021). Moreover, Kiebler and Stewart (2021) show that IRD increased among low-income students during the pandemic. Thus, the pandemic (re)shaped people's frustration and attitudes toward injustice, creating a unique context to study feelings of relative deprivation (see Grant & Smith, 2021).
Indirect support for our hypotheses also comes from research revealing that distinct forms of objective inequality elicit individual- or group-based comparisons. For example, Osborne, Sibley, and Sengupta (2015) demonstrate that personal income correlates negatively with IRD, while neighborhood-level deprivation correlates positively with GRD. Similarly, the objective disadvantages faced by people of low subjective socioeconomic status (SES) elicit feelings of personal deprivation (Greitemeyer & Sagioglou, 2016). That is, individual- and group-based objective circumstances promote greater individual- and group-based relative comparisons, respectively. The pandemic's unique effects on individual- and group-level circumstances should thus elicit IRD and GRD, respectively.
Study Overview
The current study examines the impact of the pandemic on feelings of IRD and GRD. To do so, we compare propensity-matched samples of New Zealanders who completed our survey before the pandemic (October 01-December 31, 2019) to those who completed the survey during the first 6 months of the pandemic (March 26-August 30, 2020). Propensity-score matching strengthens causal inferences by providing a matched ''control'' sample when random assignment to a treatment group is impossible (Austin, 2011). There are, however, limitations to propensity-score matching, as unmatched variables may account for group differences that would be controlled for in experiments via randomization. To minimize this limitation, we match participants on objective deprivation (e.g., SES), demographic covariates (e.g., gender, age, and ethnicity), and other socioeconomic and health indicators (for a complete list, see Table S1). We thus increase confidence that our analyses uniquely examine the impact of the pandemic on participants' IRD and GRD (relative to the matched control sample). We investigate these effects in the context of New Zealand's four-tier Alert Level System for the COVID-19 pandemic (New Zealand Government, 2021; see Table 1). On March 25, 2020, New Zealand entered a national lockdown (Alert Level 4), which required people to stay at home save for essential movement. On April 27, 2020, New Zealand entered Alert Level 3, which eased restrictions slightly by allowing 10-person gatherings for weddings and funerals. On May 13, 2020, Alert Level 2 allowed businesses to reopen (with social distancing) and permitted gatherings of up to 100 people. Alert Level 1 began on June 08, 2020, which eased restrictions back to ''normal'' with no social distancing or limits on social gatherings. However, on August 12, 2020, Auckland (New Zealand's largest city) returned to Alert Level 3 following a second community outbreak. 1 Because these Alert Levels capture restrictions of increasing severity, we investigate the potentially different effects of the Alert Levels on our focal variables.
Although the pandemic increased the salience of objective personal-and group-based inequalities, the unique challenges of each Alert Level should have differential impacts on relative deprivation. Given the unique financial pressures and salience of income inequality during the pandemic, participants should report greater feelings of IRD during the pandemic relative to those in the pre-lockdown control group. In addition, Alert Levels with the strongest restrictions presented unprecedented hardships and, thus, should elicit larger increases in IRD than less restrictive Alert Levels.
Likewise, GRD should increase during the pandemic relative to the control group. However, minority groups are overrepresented in COVID-19 unemployment statistics and experience disproportionate rates of COVID-19-related infections, hospitalizations, and deaths (Gemelas et al., 2022;Hu, 2020;Ministry of Health, 2022). Because the pandemic exacerbated existing ethnic inequalities, GRD should be particularly heightened among ethnic minorities.
We also examine the indirect effects of the different Alert Levels on support for (a) ethnic-based collective action and (b) income redistribution via IRD and GRD. Because perceiving injustice-particularly when one is angry or frustrated by their ingroup's status (Jost et al., 2017;Smith et al., 2012;Thomas et al., 2020)-is a necessary antecedent to collective action, the pandemic should shape support for these social issues through relative deprivation (Grant & Smith, 2021). By examining this thesis, we contribute to a growing literature examining the effects of the pandemic on subjective experiences of, and responses to, inequality.
Sampling Procedure and Participants
We analyzed data from Time 11 of the New Zealand Attitudes and Values Study (NZAVS)-an ongoing nationwide longitudinal panel study of New Zealand adults that began in 2009. Participants were initially sampled from the New Zealand electoral roll and represent New Zealand's general population in age, SES, and region of residence (see Sibley, 2021). We focus on time 11 (N = 42,684), as data collection occurred between October 2019 and September 2020, including the first 6 months of the pandemic (March-August 2020). Because data collection began before the pandemic, a priori power analyses were not conducted.
We used propensity-score matching to match the respondents who completed the survey during the pandemic (N = 10,464) with respondents from a pool of pre-pandemic ''controls'' (N = 10,667) on a range of socioeconomic and demographic factors (see Table S1). Participants in the control group completed the questionnaire between October 2019 and December 2019, before the first cases of COVID-19 were reported (see Sibley et al., 2020).
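As a rough illustration of the matching procedure, the sketch below performs greedy 1:1 nearest-neighbour propensity-score matching in Python; the column names and toy data are hypothetical stand-ins rather than NZAVS variables, and the actual study matched on the fuller covariate list in Table S1 using its own software.

# Minimal sketch of 1:1 nearest-neighbour propensity-score matching.
# Column names (treated, age, ses) are hypothetical placeholders,
# not the actual NZAVS variable names.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def propensity_match(df, treat_col, covariates):
    """Return a matched control index for every treated row (greedy 1:1)."""
    X, y = df[covariates].values, df[treat_col].values
    # Propensity score = P(treated | covariates)
    ps = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    df = df.assign(ps=ps)
    treated = df[df[treat_col] == 1]
    controls = df[df[treat_col] == 0].copy()
    pairs = []
    for idx, row in treated.iterrows():
        if controls.empty:
            break
        j = (controls["ps"] - row["ps"]).abs().idxmin()  # nearest propensity score
        pairs.append((idx, j))
        controls = controls.drop(j)                      # match without replacement
    return pairs

# Toy usage with simulated data
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "age": rng.normal(45, 15, n),
    "ses": rng.normal(0, 1, n),
})
matched_pairs = propensity_match(df, "treated", ["age", "ses"])
print(f"{len(matched_pairs)} treated-control pairs formed")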
Measures
Unless noted, items were measured on a 1 (strongly disagree) to 7 (strongly agree) scale and averaged to assess their respective constructs.
Predictors
Alert Levels. Participants in the pandemic condition completed the questionnaire within New Zealand's Alert Level System (New Zealand Government, 2021; see Table 1). The different Alert Level conditions and the matched control sample were dummy-coded (0 = no, 1 = yes) so that the effects reflect differences between the pre-pandemic control and the given Alert Level.
Outcome Variables. IRD was assessed using two items adapted from Abrams and Grant (2012). GRD was assessed using two items adapted from Abrams and Grant (2012). Support for income redistribution was measured by asking participants how strongly they oppose or support ''Redistributing money and wealth more evenly among a larger percentage of the people in New Zealand through heavy taxes on the rich.''
Results
We regressed IRD and GRD simultaneously onto our predictors in two separate models. In the first model, we regressed IRD and GRD onto the different Alert Levels, minority status and our covariates. The second model included interaction terms for each Alert Level with minority status. 2 Table 2 displays the descriptive statistics and bivariate correlations for the variables included in this study. The complete questionnaire and syntax used in this study are available at: https://osf.io/nzev8/.
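To make the model setup concrete, the following sketch fits analogous regressions with dummy-coded Alert Levels and Alert Level x minority interactions in Python; the variable names and toy data are placeholders (the paper's models were estimated simultaneously for IRD and GRD in Mplus with the full covariate set), and IRD and GRD are fitted separately here for simplicity.

# Sketch of the regression setup: outcomes regressed onto dummy-coded Alert Levels,
# minority status, and (in Model 2) their interactions. Variable names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "ird": rng.normal(3.5, 1.0, n),
    "grd": rng.normal(3.0, 1.0, n),
    # 0 = pre-pandemic control, 1-4 = Alert Levels (toy coding)
    "alert": rng.integers(0, 5, n),
    "minority": rng.integers(0, 2, n),
    "ses": rng.normal(0, 1, n),
})

# Model 1: main effects; Model 2: Alert Level x minority interactions
model1 = smf.ols("ird ~ C(alert) + minority + ses", data=df).fit()
model2 = smf.ols("ird ~ C(alert) * minority + ses", data=df).fit()
print(model1.params.round(3))
print(model2.params.filter(like=":minority").round(3))  # interaction terms only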
After adjusting for these associations, participants in Alert Level 4 (b = 0.076, SE = 0.030, p = .013), Alert Level 3 (b = 0.112, SE = 0.043, p = .009), and Alert Level 2 (b = 0.133, SE = 0.040, p = .001) were higher in IRD than the control group. In contrast, Alert Level 1 and the Auckland Level 3 lockdown did not differ from the propensity-matched control group on IRD (b = -0.028 for Alert Level 1; see Table 3). Interestingly, the effects of Alert Level 2 were stronger than those of the stricter Alert Levels, suggesting that IRD increased in the later, less restrictive, lockdown period. As shown in Model 2, minority status only moderated the relationship between Alert Level 4 and IRD (b = -0.215, SE = 0.084, p = .011). The remaining interaction effects were non-significant 3 (see Table 3), suggesting that ethnic-group differences in IRD did not vary across the remaining Alert Levels. Interestingly, simple slopes analyses revealed that the association between Alert Level 4 and IRD was positive among ethnic majorities (b = 0.108, SE = 0.033, p = .001), but negative and non-significant among ethnic minorities (b = -0.107, SE = 0.078, p = .168). Thus, only majority ethnic group members experienced an increase in IRD at Alert Level 4 vis-à-vis the matched control.
After adjusting for these associations, our results revealed that different Alert Levels uniquely predicted GRD (see Table 3). Specifically, only participants in Alert Level 1 (b = 0.101, SE = 0.024, p < .001) and Auckland Alert Level 3 (b = 0.101, SE = 0.043, p = .005) were higher in GRD than those in the control group. The remaining Alert Levels had comparable levels of GRD relative to the matched control group. Thus, the pandemic only increased GRD in the latter stages of the 2020 COVID-19 response, when the lockdown effects began to accumulate.
Once again, Model 2 in Table 3 revealed status-specific effects: GRD was elevated among ethnic minorities relative to the control group across the Alert Levels. Conversely, GRD among ethnic majorities did not differ from the control group across Alert Levels, except for a slight increase at Alert Level 1 (b = 0.067, SE = 0.027, p = .011). Thus, the pandemic uniquely increased GRD among ethnic minorities across the first 6 months of the pandemic. However, ethnic majorities began to experience an increase in GRD relative to the control group at Alert Level 1, though to a lesser degree than ethnic minorities (b_diff = 0.198, SE = 0.062, p = .001).
Given the status-specific impact of the Alert Levels on IRD and GRD, the pandemic may evoke distinct responses to inequality. Accordingly, we assessed the indirect effects of the Alert Levels on support for (a) ethnic-based collective action and (b) income redistribution via IRD and GRD. To examine whether these effects differed by ethnicity, we conducted multiple group analyses and compared models where estimates varied across groups to models where estimates were constrained to equality. Constraining the estimates to equality significantly decreased model fit for both collective action support (Δχ2(17) = 401.85, p < .001) and support for income redistribution (Δχ2(17) = 210.29, p < .001), suggesting that some associations differed by ethnic group membership. Figure 1 displays the significant associations between the Alert Levels, relative deprivation, and collective action support by ethnic group membership (for the full results, see Tables S2 and S4). Among ethnic majorities, Alert Levels 4, 3, and 2 were associated with greater collective action support via IRD, whereas Alert Level 1 was associated with greater collective action support via GRD. In contrast, IRD did not mediate associations between the pandemic and collective action support among ethnic minorities. Instead, Alert Levels 2, 1, and (Auckland) 3 were associated with greater collective action support via GRD for ethnic minorities.
Regarding support for income redistribution, Figure 2 reveals that Alert Levels 4, 3, and 2 were associated with greater support for income redistribution among ethnic majorities via IRD. Interestingly, Alert Level 1 was associated with reduced support for income redistribution among ethnic majorities via GRD (see Tables S3 and S5). However, among ethnic minorities, IRD did not mediate any associations between the Alert Levels and income redistribution support, and Alert Levels 2, 1, and 3 (Auckland) were associated with greater income redistribution support via GRD. These findings suggest status-specific indirect associations between Alert Level 1 and support for income redistribution via GRD.
Discussion
The COVID-19 pandemic has had unprecedented consequences for health, well-being, and the economic landscape worldwide, yet has disproportionally affected minority groups (e.g., Gemelas et al., 2022;Hu, 2020). As such, we examined the effects of the pandemic on perceptions of relative deprivation and whether these impacts differed across ethnic groups. Because the pandemic exacerbated existing inequalities-which should elicit greater upward social comparisons (Cheung & Lucas, 2016)-we expected IRD would increase, particularly during the strictest Alert Levels. Moreover, we expected the pandemic to increase GRD among ethnic minorities. As hypothesized, participants in Alert Levels 4, 3, and 2 had higher levels of IRD than those in the control group. Unexpectedly, the effects of Alert Level 2 were larger than that of the stricter Alert Levels. These results may be due to the cumulative effects of the lockdown ''outweighing'' the initial effects of the strictest Alert Levels. Nonetheless, our results were reliable after controlling for objective deprivation indicators and other covariates, demonstrating the pandemic's unique effect on IRD as COVID-19 began to spread, and that IRD is often rooted in reality (albeit imperfectly; see Osborne, Sibley, & Sengupta, 2015).
Interestingly, the pandemic only increased IRD during Alert Level 4 among ethnic majorities. This suggests that ethnic minorities did not feel more personally deprived in the initial stages of the pandemic, while ethnic majorities perceived greater IRD in the strictest ''lockdown'' conditions. Although unexpected, this replicates work showing that individuals are rarely high in both IRD and GRD (see Osborne, Sibley, Smith, & Huo, 2015). Indeed, objectively disadvantaged group members often deny being personally deprived but nonetheless recognize that their group as a whole is disadvantaged (see Crosby, 1984;D. M. Taylor et al., 1990). That ethnic minorities experienced higher levels of GRD-but not IRD-during the pandemic corroborates this literature. Conversely, ethnic majorities are structurally advantaged and often report greater levels of personal (vs group) inequities (Operario & Fiske, 2001). As such, objectively advantaged individuals may have felt less personally advantaged during the ''strictest'' lockdowns.
It is noteworthy that GRD was elevated among minorities at all Alert Levels but only during Alert Level 1 for majority group members. These results corroborate research showing that COVID-19-and subsequent lockdowns-disproportionately impacted ethnic minorities worldwide (Hu, 2020;Katikireddi et al., 2021;Mathur et al., 2021). Thus, it is unsurprising that ethnic minorities feel more collectively deprived during the pandemic.
That ethnic majorities experienced an increase in GRD during Alert Level 1 (i.e., when most restrictions had been lifted) alludes to how objectively advantaged groups may respond to the continual pressures of the pandemic. Ethnic majorities who feel collectively deprived are more likely to oppose efforts to redress inequality, as doing so conflicts with their self-interest (Leviston et al., 2020;Osborne, Jost, et al., 2019;Pettigrew et al., 2008;M. C. Taylor, 2002). In the pandemic context, ethnic majorities who perceive themselves as less relatively advantaged may be less likely to support efforts to redress inequalities. Indeed, our mediation analyses revealed status-specific indirect associations between the pandemic and support for collective action and income redistribution via IRD and GRD. Namely, the pandemic increased support for collective action via IRD (Alert Levels 4, 3, and 2) and GRD (Alert Level 1) for ethnic majorities, and only via GRD (Alert Levels 2, 1, and 3 [Auckland]) for ethnic minorities. Moreover, although the pandemic increased support for income redistribution among ethnic majorities via IRD (Alert Levels 4, 3, and 2) and among ethnic minorities via GRD (Alert Levels 2, 1, and 3 [Auckland]), Alert Level 1 decreased support for income redistribution among ethnic majorities via GRD. That is, the pandemic's effects on GRD among ethnic majorities (a) increased collective action support on behalf of the dominant group and (b) reduced support for income redistribution. Given that August 2020 (Alert Level 1) marked the beginning of far-right anti-lockdown protests in New Zealand (Molyneux & Satherley, 2020;Pearse, 2020), the relationship between the pandemic and relative deprivation among majority group members may illustrate how advantaged groups respond to situational factors that increase inequality.
In addition to important practical implications, our results corroborate Runciman's (1966) assertion that IRD and GRD develop from distinct comparison processes. Indeed, IRD and GRD increased under different Alert Levels and among different ethnic groups, highlighting that, while correlated constructs, they emerge from distinct processes and are uniquely affected by one's environment. Moreover, our study implemented an innovative methodology which allowed us to approximate experimental conditions (Austin, 2011). While propensity-score matching may allow for unmatched variables to explain group differences, the inclusion of indicators of objective deprivation closely associated with relative deprivation (i.e., income, employment, and education) and other relevant covariates increases confidence that the pandemic uniquely impacted IRD and GRD.
Our ability to compare participants who experienced the pandemic to those who completed the study before the pandemic began is a novel strength of our study, as only a few studies have data to compare these conditions directly (see also Howard et al., 2022;Sibley et al., 2020). Moreover, our data spans the first 6 months of the pandemic, allowing us to compare the effects of different restriction levels. This provides a broader understanding of the effects of the pandemic on relative deprivation than studies utilizing data from only the initial lockdown(s) in March 2020. Such information is critical should another pandemic emerge requiring strict lockdowns.
Despite these strengths, our measures only assessed the fiscal component of relative deprivation and, as such, may not generalize to forms of relative deprivation that focus on interpersonal treatment. However, fiscal dimensions of relative deprivation are particularly relevant to the pandemic, given the economic consequences of COVID-19 on individuals and groups (Fletcher et al., 2022;Hu, 2020). In addition, our measures included cognitive and affective measures of relative deprivation, which are integral to the experiences of relative deprivation (see Smith et al., 2012). As such, we are confident that our measures accurately reflect their respective constructs and can generalize beyond our sample population.
We should also note that the current study does not reflect within-person changes in relative deprivation throughout the pandemic, as most of our sample had not completed the previous wave (i.e., the year before the pandemic began). Instead, we identified important differences between those in the pre-pandemic condition and those in the different Alert Levels. Future research should investigate whether people experienced a within-person change in relative deprivation before and during the pandemic.
Finally, we investigate the pandemic from March to August 2020 in a country whose initial response was highly successful but unique in its quick implementation of strict lockdowns (Cumming, 2022). As such, our results may not generalize outside the initial wave of the pandemic nor to countries with less immediate and restrictive responses to COVID-19. That said, the pandemic has continued beyond the initial wave(s), with varying lockdown restrictions extending into 2022 (see New Zealand Government, 2021), particularly with the emergence of more transmissible variants of the virus (World Health Organization, 2022). Moreover, our results provide general insight into the effects of government restrictions on citizens' relative perceptions of inequality. As such, the current study provides a springboard for future research to investigate the relationship between the pandemic and relative deprivation well beyond the initial COVID-19 outbreak.
Conclusion
The present study examines the effects of the COVID-19 pandemic on IRD and GRD and whether these effects were disproportionately felt by the structurally disadvantaged.
Our results reveal that, relative to the pre-pandemic control group, the strictest Alert Levels increased IRD among the general population but selectively increased GRD among minorities. Moreover, the pandemic was indirectly associated with status-specific support for collective action and income redistribution via IRD and GRD. These results demonstrate that salient forms of inequality elicit individual-and group-based comparisons and suggest that the pandemic presents unique opportunities for social change. The current study thus provides the foundations for future research examining the consequences of the pandemic on relative deprivation and associated individual-and groupbased responses to inequality.
Authors' Note
The data described in the paper are part of the NZAVS. Full copies of the NZAVS data files are held by all members of the NZAVS management team and advisory board. A de-identified dataset containing the variables analyzed in this manuscript is available upon request from the corresponding author, or any member of the NZAVS advisory board for the purposes of replication or checking of any published study using NZAVS data. The Mplus syntax used to test all models reported in this manuscript will be available on the Open Science Framework upon publication: https://osf.io/nzev8/.
Author Contributions
KJL conceptualized the study, performed the statistical analysis, and wrote the manuscript and manuscript revisions. CGS curated the data and acquired funding. CGS and DO provided supervision and extensive feedback on the manuscript.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Preparation of this manuscript was supported by a University of Auckland Doctoral Scholarship awarded to the first author and a Templeton Religion Trust grant (TRT-2021-10418)
Supplemental Material
Supplemental material for this article is available online.
1.
While Auckland returned to Alert Level 3, the remainder of the country returned to Alert Level 2. Businesses outside of Auckland remained open and experienced less disruption than in the stricter Alert Levels.
2.
As the pandemic has made objective economic conditions more salient, we also examined the moderating effects of socioeconomic status (SES). Although SES was negatively associated with both IRD (b = -0.220, SE = 0.006, p < .001) and GRD (b = -0.067, SE = 0.006, p < .001), no interactions between SES and Alert Levels were significant (ps ≥ .086).
3.
Given the disproportionate impact of the pandemic on women, we also examined the moderating effects of gender on the association between the different Alert Levels and IRD. Similar to the results for ethnicity, IRD did not differ across women and men at any Alert Level (ps ≥ .152).
|
2023-04-07T15:33:43.653Z
|
2023-04-04T00:00:00.000
|
{
"year": 2023,
"sha1": "fe44bc7fc50280dfffaf4f15808a4a62bfe1049a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1177/19485506231163016",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "52f9614f231889358e520b9b7e83af045fdbec58",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
125198156
|
pes2o/s2orc
|
v3-fos-license
|
The ETS challenges: a machine learning approach to the evaluation of simulated financial time series for improving generation processes
This paper presents an evaluation framework that attempts to quantify the "degree of realism" of simulated financial time series, whatever the simulation method may be, with the aim of discovering unknown characteristics that are not being properly reproduced by such methods in order to improve them. For that purpose, the evaluation framework is posed as a machine learning problem in which some given time series examples have to be classified as simulated or real financial time series. The "challenge" is proposed as an open competition, similar to those published on the Kaggle platform, in which participants must send their classification results along with a description of the features and the classifiers used. The results of these "challenges" have revealed some interesting properties of financial data, and have led to substantial improvements in our simulation methods under research, some of which will be described in this work.
A common way of validating a simulation method is to compare the statistical properties ("stylized facts") of simulated and real time series, and to make a decision on whether the differences observed are acceptable or not, usually by means of some statistical hypothesis test. However, as the final goal of our simulations is to obtain simulated time series that behave as real ones do, and not to test the fitness of a model in order to explain the time series behavior, the comparison can be better summarized by deciding whether real and simulated time series are distinguishable or not.
On the other hand, and even more importantly, if we just check the stylized facts we are constraining the search for differences to a set of already known properties, while there may be other unknown important properties that we are simply ignoring because we have not observed them previously in real time series. If, instead of looking for what we already know, we just look for differences between simulated and real time series, we may find some interesting property or behavior shared by real financial data. With this aim, the goal of checking the goodness of a simulation method is tackled through an open competition posed as a binary classification problem in which a set of examples, consisting of raw return values, have to be classified as real or simulated financial time series.
For every challenge, two balanced sets of real and simulated time series are given to participants: one of them (the training set) is provided along with the true class labels for development purposes, while the other one is unlabeled (the testing set). For this latter set, participants should run their feature extractors and classifiers, developed with the aid of the training set, and provide a score for every time series segment indicating the probability of the segment belonging to one of the two classes. Answers must be submitted within a month beginning at the challenge release date, including a description of the feature extractors and classifiers used. Classification results for every submitted system are evaluated by means of the Area Under the Curve (AUC) of their Receiver Operating Characteristic (ROC) [Fawcett (2004)]. For this metric, values close to 0.5 mean that the outputs of the classifier are almost random, while values close to 1 mean almost perfect accuracy (values close to 0 also indicate almost perfect discriminative properties, but with outputs pointing to the opposite class).
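As a concrete illustration of the scoring, the following minimal Python sketch computes the AUC from per-segment scores with scikit-learn; the labels and scores below are mock values standing in for a participant's submission.

# Sketch of the challenge scoring: AUC of the ROC computed from per-segment
# scores (probability of belonging to the "real" class). Values near 0.5 mean
# the classifier cannot separate real from simulated segments.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = np.concatenate([np.ones(6000), np.zeros(6000)])          # real vs simulated
y_score = np.clip(y_true * 0.1 + rng.uniform(0, 1, 12000), 0, 1)  # mock submission

auc = roc_auc_score(y_true, y_score)
print(f"AUC = {auc:.3f}")  # ~0.5 means random, ~1.0 (or ~0.0) means separable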
Both training and testing datasets comprise 6000 time series segments of 260 returns per class, as shown in Table 1. Time series segments are extracted at random from a larger dataset, either real or simulated, but training and testing datasets are independent for both classes, as the segments for each purpose (train or test) are extracted from different time series (different investment funds or different stocks). However, they may share the time period and may come from the same market (same type of investment fund or same index). Generation methods are trained on the whole real dataset used for the challenge (both training and testing subsets), and simulations are generated in the same proportion (same number of simulations per investment fund or stock). However, the generation methods tested may not be fitted to a particular time series but to a set of them, so there may not be a one-to-one correspondence between real and simulated time series. Figure 1 shows how training and test subsets are generated.
TABLE 1: Composition of provided datasets.
2016 CHALLENGE: DETECTION IN THE CONTEXT OF INVESTMENT FUNDS
The first edition of the ETS Challenges was focused on generation methods for investment funds. Time series from two types of investment funds were used: fixed income and equity funds. The real time series sets used for the challenge are illustrated in Figure 2, showing the time series of both prices (upper panel) and returns (lower panel). For a better visualization, prices have been forced to start at a price value p(t = 0) = 1. The fixed income subset involves 64 different funds, while the equity subset involves 198 different funds. The dataset built by merging time-aligned series from both types of investment funds was used first to train the generation process summarized in the next subsection, and then split to obtain the training and testing datasets as described in the previous section.
Tested simulation method
The simulation method used in the first edition of the ETS Challenges was an earlier version of that described in [Franco-Pedroso et al. (2018)]. The generation process can be summarized as follows: • Analysis stage: the whole multivariate training data set (each dimension being a different time series) is split into several time periods based on the trend changes estimated ex-post over the averaged time series (equally-weighted market index). Then, for each trend, a non-overlapping sliding window is used to compute mean vectors and covariance matrices from the multivariate returns (again, each dimension being a different time series) within each window. This sequence of N_w mean vectors and covariance matrices (N_w being the number of windows) constitutes the "model" of the trend.
• Synthesis stage: first, a random sequence of alternating trends (upwards and downwards), among those obtained in the analysis stage, is hypothesized. Then, for each trend, random multivariate returns are generated by drawing multivariate samples from Gaussian distributions whose parameters are updated according to the sequence of windows observed in the analysis stage.
• New assets generation stage: by following the two previous stages, simulated versions of the original dataset can be generated, keeping the correlations between the given time series thanks to the covariance matrices. In order to generate additional artificial assets with similar correlation properties, a PCA-based procedure is used. PCA is performed first in order to decompose the original set of time series, R, into eigenvectors (transformation matrix, W) and components (projected time series, R′). Then, the transformation matrix is enlarged by adding artificial eigenvectors generated from a multivariate Gaussian distribution with mean and covariance matrix obtained from W, leading to a new transformation matrix W′ with as many eigenvectors as the desired number of time series in the final simulated dataset. Finally, the components R′ are projected back into the original space to obtain the simulated dataset with the desired dimensions. A minimal sketch of these stages is given right after this list.
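The following Python sketch illustrates the three stages on toy data, assuming a single trend, an arbitrary window length, and a plain SVD-based PCA; it is only a rough sketch of the procedure described above, not the implementation of [Franco-Pedroso et al. (2018)].

# Rough sketch of Method 1 within one trend: windowed Gaussian analysis/synthesis
# plus PCA-based generation of additional assets. Sizes and window length are toy choices.
import numpy as np

rng = np.random.default_rng(0)
T, n_assets, win = 240, 10, 20                      # trend length, assets, window length
R = rng.normal(0, 0.01, size=(T, n_assets))         # toy "real" multivariate returns

# Analysis: one mean vector and covariance matrix per non-overlapping window
windows = [R[i:i + win] for i in range(0, T, win)]
params = [(w.mean(axis=0), np.cov(w, rowvar=False)) for w in windows]

# Synthesis: Gaussian returns drawn window by window with the stored parameters
sim = np.vstack([rng.multivariate_normal(mu, cov, size=win) for mu, cov in params])

# New assets: enlarge the PCA transformation matrix with artificial eigenvectors
centered = sim - sim.mean(axis=0)
_, _, Wt = np.linalg.svd(centered, full_matrices=False)   # rows of Wt = eigenvectors
components = centered @ Wt.T                              # projected time series
mu_w, cov_w = Wt.mean(axis=0), np.cov(Wt, rowvar=False)
extra = rng.multivariate_normal(mu_w, cov_w, size=5)      # 5 artificial eigenvectors
W_big = np.vstack([Wt, extra])                            # enlarged transformation matrix
sim_enlarged = components @ W_big.T                       # back-projection to 15 assets
print(sim_enlarged.shape)                                 # (240, 15)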
Examples of simulated datasets obtained with this generation method, and an exhaustive analysis of their empirical properties, can be found in [Franco-Pedroso et al. (2018)] for stock time series. From now on, this will be referred to as Method 1 in order to distinguish it from the generation process followed in the second edition of the ETS Challenges (described in Section 4).
Submitted systems and results
For this edition of the ETS Challenges, only a few systems were submitted, most of them not being able to distinguish between real and simulated time series and accompanied by rather shallow descriptions of the development process followed by participants. For those systems, no further analysis was done. However, one of the submitted systems achieved a very high performance in our classification task (0.95 AUC), and it was further analyzed in order to relate the features used with possible shortcomings of the generation process.
Fortunately, this submission was described in detail by the participant, who performed on the training set an analysis of the discrimination capability provided by the features used. These features consist of 100 coefficients of the autocorrelation function (ACF) of each sample (260 return values). Each sample was extracted from a longer series, so in this case the ACF was a relevant characteristic in a local context. The participant found that the first principal components of each class (real and simulated) significantly differed in the training set, a fact that allowed the classes to be distinguished using an ensemble of 40 k-nearest neighbor (kNN) classifiers [Altman (1992)], with k = 1, based on the cosine distance between feature vectors.
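A minimal sketch of such a system is given below; the exact ensembling scheme was not specified by the participant, so plain bootstrap aggregation of cosine-distance 1-NN classifiers over toy data is assumed here.

# Sketch of the best-performing 2016 system as described by the participant:
# 100 autocorrelation coefficients per 260-return segment, classified with an
# ensemble of 40 cosine-distance 1-NN classifiers. Data and labels are toy values.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def acf_features(segment, n_lags=100):
    """First n_lags autocorrelation coefficients of a return segment."""
    x = segment - segment.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, n_lags + 1)])

rng = np.random.default_rng(0)
X = np.stack([acf_features(rng.normal(0, 0.01, 260)) for _ in range(200)])
y = rng.integers(0, 2, 200)                       # 0 = simulated, 1 = real (toy labels)

# Ensemble of 40 cosine-distance 1-NN classifiers, each on a bootstrap sample
scores = np.zeros(len(X))
for _ in range(40):
    idx = rng.choice(len(X), len(X), replace=True)
    knn = KNeighborsClassifier(n_neighbors=1, metric="cosine").fit(X[idx], y[idx])
    scores += knn.predict(X)
scores /= 40                                      # averaged votes used as a soft score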
As mentioned in [Franco-Pedroso et al. (2018)], Method 1 does not follow an auto-regressive approach to reproduce the time series behavior, since the most frequently reported empirical property regarding this statistic is, in fact, that returns do not present significant auto-correlation, as its value quickly decays within the first time lags [Pagan (1996)]. This was also observed in [Franco-Pedroso et al. (2018)] for the simulated time series. However, it seems that, while the ACF presents insignificant values for both types of time series (real and simulated), there is still a difference in how these values behave in simulated time series compared to real ones, which allows them to be easily distinguished.
Post-evaluation analysis
In order to corroborate the discriminant capabilities of the best-performing submitted system, several experiments were performed on a different time series set, consisting of stocks from the S&P 500 index (this dataset was the one used in the second edition of the ETS Challenges, and is described in Section 4). It was observed that the system was still able to distinguish between real and simulated samples with high accuracy, when a protocol similar to that one used in the challenge was followed. In order to discard any possible error or bias in the random extraction process by which samples from both classes were extracted, several experiments were performed involving only real time series.
By doing this, we were trying to confirm whether the features used were capturing some property shared by real financial data or, conversely, a specific particularity of the samples extracted and used for different purposes (train or test). If the features were capturing such a general property, there should not be any partition of real data that, considered as different classes, could be classified with high accuracy (that is, a classifier should not be able to distinguish between them). For that purpose, four different experiments were performed: • Experiment 1: the whole dataset was divided into two different time periods, being each period assigned to a different class. Examples for both training and testing purposes were extracted from the same subset (see Figure 3(a)).
• Experiment 2: the whole dataset was divided into two different time periods, being each period assigned to a different class. For each class, data were further divided into two different time series subsets for training and testing purposes (Figure 3(b)).
• Experiment 3: the whole dataset was divided into two different time series subsets, being each subset assigned to a different class. Examples for both training and testing purposes were extracted from the same subset (Figure 4(a)).
• Experiment 4: the whole dataset was divided into two different time series subsets, being each subset assigned to a different class. For each class, data were further divided into two different time periods for training and testing purposes ( Figure 4(b)).
Results for these experiments are shown in Table 2. As can be seen, it is possible to divide the real dataset in such a way that data from different subsets can be distinguished, by classifying them as belonging to different classes, indicating that the features used do not capture a general property of real time series but rather particular differences between specific subsets. These differences notoriously arise when samples for different classes are extracted from different time periods (experiments 1 and 2), even though time series are shared among different classes. However, it is much more difficult (experiment 3) to distinguish between classes when the time period is shared, and even impossible (experiment 4) when, in addition, training and testing data within each class come from different time periods (that is, time periods are overlapped among classes, but not between train and test within each class).
The fact that the auto-correlation is similar for different time series if they are close in time could be partially explained by the usual presence of cross-asset correlations between different assets of the same type or coming from the same market [Plerou et al. (1999)], as they evolve over time in a similar way. However, similar experiments performed on simulated datasets generated with Method 1 showed that no partition of the dataset produced subsets that could be distinguished or classified as belonging to different classes, even though cross-asset correlations were properly reproduced [Franco-Pedroso et al. (2018)]. The reason such an auto-correlation pattern is not reproduced in simulated data is that Method 1 does not follow an autoregressive approach but only attempts to match distributional properties. In order to reproduce this behavior of real time series, a new generation method was developed, which is described in the next Section.

TABLE 2: Experiments performed involving only real time series (columns: Experiment, Classes represent, AUC).
Returning to the ability of the submitted system to distinguish between real and simulated datasets, it was observed that the classifier also achieved a high performance (0.9 AUC) if ACF coefficients were computed for absolute return values instead, revealing that significant differences in volatility clustering [Mandelbrot (1963)], which is an already well known "stylized fact" [Cont (2001, 2007)], [Chakraborti et al. (2007)], could be found as well. Thus, both systems (the submitted system and this latter one) were used as our reference systems (Reference 1 and 2, respectively), or sanity checks, for every generation process developed from that moment on.
2017 CHALLENGE: DETECTION IN THE CONTEXT OF STOCKS
The second edition of the ETS Challenges was focused on testing our generation methods on stock data. Particularly, the main dataset used consists of the daily prices/returns between 01/01/2000 and 04/29/2016 of a set of 330 stocks that have been part of the S&P500 index at some time within this given period. This dataset is illustrated in Figure 5, showing the time series of both prices (upper panel) and returns (lower panel). For a better visualization, stock prices have been forced to start at a price value p(t = 0) = 1.
FIGURE 5: Stocks dataset used in the 2017 ETS Challenge.
The whole dataset was used as the training dataset for the generation methods in order to obtain the simulated dataset. Then, it was split into two halves (different stocks, same period, as indicated in Figure 1) to extract time series segments for both train and test datasets used in the challenge. Only segments coming from this dataset were included as training data, while testing data also included time series segments from different datasets (considered as out-of-set examples): • same stocks as those in the main dataset but coming from a different time period.
• stocks from a different market (EUROSTOXX index), same or different time period from the main dataset.
Tested simulation method
In order to overcome the main issue that our previous generation process presented (Method 1), as revealed in the previous edition of the ETS Challenges (Section 3), a different approach was followed. Similarly to the previous approach, the generation method can be summarized into the following stages: • Analysis stage: the whole multivariate training data set is split into several time periods based on the trend changes estimated ex-post over the averaged time series (equally-weighted market index), as it was done in Method 1.
However, the multivariate data within a trend is processed in a very different way. Instead of considering the data as a time sequence of multivariate return values (each dimension being a different asset), each time series (or asset) within a trend is considered a multivariate sample itself in which the return values at different time steps are seen as different dimensions (see Figure 6). Then, a mean vector and covariance matrix (whose dimensions depend on the length of the trend) are obtained for each trend, which constitutes the "model" of the trend and represents the average behavior of the market within this time period. In this way, the average auto-correlation is captured by the covariance between dimensions (time steps).
• Synthesis stage: as it was done for our previous approach, a random sequence of alternating trends (upwards and downwards) is hypothesized first. Then, for each trend, random return values are generated by drawing multivariate samples from a Gaussian distribution with mean vector and covariance matrix equal to those observed in the analysis stage. Note that, in this approach: i) the whole trend for a specific asset is generated at once by drawing a multivariate sample, and ii) there is no need for a procedure such as the PCA-based one used by Method 1 if more assets are to be generated, as we can simply draw more multivariate samples for the same trend. A minimal sketch of both stages is given right after the figure caption below.

FIGURE 6: Comparison of the information modeled by multivariate vectors (x) in Method 1 (a) and in Method 2 (b). For Method 2, the represented dataset is assumed to come from an isolated trend.
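The core of these two stages can be sketched as follows in Python; the trend length, number of assets, and toy input data are illustrative assumptions, not values from the paper.

# Sketch of Method 2's core idea: within a trend, each asset's return sequence is one
# multivariate sample whose dimensions are time steps, so the trend "model" is a mean
# vector and covariance matrix over time steps, and new assets are new Gaussian draws.
import numpy as np

rng = np.random.default_rng(0)
L, n_assets = 60, 330                                   # trend length (days), number of assets
trend = rng.normal(0.0005, 0.01, size=(n_assets, L))    # rows = assets, cols = time steps

# Analysis: mean vector (L,) and covariance matrix (L, L) across assets;
# the covariance between adjacent dimensions captures the average auto-correlation.
mu = trend.mean(axis=0)
cov = np.cov(trend, rowvar=False)

# Synthesis: each multivariate draw is a complete simulated asset for this trend,
# so generating more assets just means drawing more samples.
sim_assets = rng.multivariate_normal(mu, cov, size=500)  # 500 simulated assets
print(sim_assets.shape)                                  # (500, 60)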
While this approach does not explicitly model the correlation between assets, it has been observed that the simulated data do present cross-asset correlation, as different assets are different samples drawn from the same multivariate Gaussian distribution and thus the time evolution within a trend is similar for different time series. On the other hand, the time series produced by following the previously described stages do not show a key feature of financial time series, namely heavy tails. For this reason, the following additional steps were included: • In the analysis stage, the cumulative distribution function (CDF) of the returns within each trend is estimated (Figure 7(a)).
• In the synthesis stage, after the generation of random samples from the multivariate Gaussian distribution, the histogram of the return values from each sample (one time series within a trend) is first equalized (Figure 7(b)), and finally transformed by applying the inverse of the CDF estimated in the analysis stage (Figure 7(c)). A minimal sketch of these steps is given after the figure caption below.
FIGURE 7: Additional steps performed to fit the real CDF on simulated return values: (a) Step 1: real CDF estimation; (b) Step 2: histogram equalization for simulated return values; (c) Step 3: transformation of the equalized histogram to fit the real CDF. Bar figures represent histograms, while line figures represent the estimated CDF or its inverse, for a specific return time series within a trend.
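A minimal Python sketch of steps 2 and 3 is shown below; it uses a plain empirical quantile function in place of the kernel-based CDF estimate mentioned in the text, and the heavy-tailed "real" returns are toy data.

# Sketch of the distribution-fitting steps: equalize the histogram of the Gaussian
# simulated returns and map it through the inverse of the CDF of the real returns.
import numpy as np

rng = np.random.default_rng(0)
real = rng.standard_t(df=3, size=1000) * 0.01     # toy heavy-tailed "real" returns
sim = rng.normal(0, 0.01, size=1000)              # Gaussian simulated returns

# Step 2: histogram equalization = map each simulated value to its empirical quantile
u = (np.argsort(np.argsort(sim)) + 0.5) / len(sim)   # uniform ranks in (0, 1)

# Step 3: inverse CDF of the real returns applied to the uniform values
sim_fitted = np.quantile(real, u)

# The transformed simulation now shows heavier tails, closer to the real kurtosis
kurt = lambda x: ((x - x.mean()) ** 4).mean() / x.var() ** 2
print(f"kurtosis: sim {kurt(sim):.1f} -> fitted {kurt(sim_fitted):.1f}, real {kurt(real):.1f}")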
In this way, the simulated asset returns within a trend better fit the distribution of real time series. However, this fit is not "complete", as revealed by our reference system developed in order to check distributional properties ("Reference system 3" in Table 3), among others. This system uses as features several statistics (average, standard deviation, median, kurtosis, skewness), the Hurst exponent [Mandelbrot and Wallis (1969)], the Sharpe ratio [Sharpe (1966)] and some other metrics that also quantify the shape of the distribution (percentage of return values bigger or smaller than some specific thresholds); the classification technique for this system consists of 100 bootstrap-aggregated decision trees (bagged trees) [Breiman (2001)]. On the other hand, as can be seen for Reference systems 1 and 2, the problem previously observed regarding both the auto-correlation of asset returns and volatility clustering was completely solved by the approach followed in Method 2.
TABLE 3: Summary of reference systems and results.
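In the spirit of Reference system 3, the sketch below extracts a few distribution-shape features and trains 100 bagged decision trees on toy data; the Hurst exponent and the threshold-based features of the actual reference system are omitted for brevity.

# Sketch of a distribution-oriented detector: simple statistics plus a Sharpe-like ratio
# per 260-return segment, classified with 100 bagged decision trees.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def stats_features(seg):
    return np.array([seg.mean(), seg.std(), np.median(seg),
                     kurtosis(seg), skew(seg),
                     seg.mean() / seg.std()])        # Sharpe-like ratio (no risk-free rate)

rng = np.random.default_rng(0)
real = [rng.standard_t(3, 260) * 0.01 for _ in range(300)]   # heavy-tailed toy "real" data
fake = [rng.normal(0, 0.01, 260) for _ in range(300)]        # Gaussian toy "simulated" data
X = np.stack([stats_features(s) for s in real + fake])
y = np.array([1] * 300 + [0] * 300)

clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")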
Submitted systems and results
Seven submissions were received in this edition of the ETS Challenge, one of them including two different systems.
Those systems are described in the following paragraphs, including both the features and the classifiers used.
• System 2: -Features. It used five sets of features: products and differences between consecutive return values (r(t) and r(t + 1)) in the time series, as well as the rolling standard deviation within small windows (2 and 3 return values) and the return values themselves. Each of these sets was finally sorted in ascending order. Feature vectors were of dimension 1295.
-Classifier. It used a gradient boost classifier.
• System 3: -Classifier. It used a per-class density model [Everitt (1981)], obtained the score for each class as the probability density of the test segment under that class, and finally returned the ratio for the positive class.
• System 4: -Classifier. It used a one-hidden-layer neural network (NN) [Bishop (1995)] with 65 rectified linear units (ReLU) [Nair and Hinton (2010)], and one output unit with sigmoid activation. The network was trained with mean squared error (MSE) as the cost function.
• System 5: -Classifier. It used a binary regression tree.
• System 6: -Features. It used the difference between the autocorrelation (ACF) and the partial-autocorrelation (PACF) for the first 10 coefficients of each segment. Feature vectors were of dimension 10.
-Classifier. It used a k-Nearest Neighbors (kNN) classifier, with segments classified as belonging to a class by applying a heuristic threshold.
Table 4 summarizes the features and the classifier used for each system along with the results obtained in the challenge. It is interesting to note that only those systems that used gradient boost classifiers (Systems 1, 2 and 7) were able to distinguish between real and simulated time series, while the rest of the submitted systems provided almost random outputs (∼ 0.5 AUC). Among those best-performing systems, there was, however, a significant gap in performance (AUC) between System 1 (0.61) and Systems 2 (0.82) and 7 (0.89) that can be partially explained by the number of features used: just 7 features in System 1 versus hundreds in System 7 or even more than a thousand in System 2. However, most of the features used by System 1 consist of normality tests, a property that real time series are already known not to present, and which has been avoided in the simulated time series as explained in the previous section. The importance of the classifier used is also revealed by the results obtained by System 5, which uses features similar to those used by our third reference system described in the previous section but obtains much worse performance.
Post-evaluation analysis
In order to improve the simulation method used for this Challenge, the two best-performing submitted systems were further analyzed. The conclusions and findings obtained are presented in this section.
First, we look at the features used by the submitted System 7, which obtained the best performance in the Challenge.
As previously mentioned, this system used an open-source library (tsfresh) specifically designed to extract relevant features from time series [Maximilian Christ (2016)]. This feature extractor computes 57 sets of features (see Figure 8), some of which are obtained for several values of the parameters they depend on, leading to a final set of 222 features. It was observed, however, that for some of those features NaN values were always obtained and converted to zero values, while some others always presented the same value for every sample (for example, the feature "length", as all time series were equally long). Those features (highlighted in Figure 8) were removed, without changing the performance of the system, leading to a final set of 169 features that was further analyzed.
FIGURE 8: Features removed after the data 'cleaning' process are highlighted.
Next, the relative importance of individual features was analyzed by means of a classifier based on a bag of decision trees, which revealed that 6 predictors were significantly more important among the whole set of 169. Those features were the following: • Percentage of reoccurring datapoints to all datapoints.
• Percentage of reoccurring values to all values.
• Sum of reoccurring values.
• Ratio of the number of distinct values to the time series length (below 1 if any value is repeated).
The difference between reoccurring datapoints and reoccurring values can be seen in the following example: the time series {1, 4, 0.5, 1, 2.9, 1.8, 4} presents 4 reoccurring datapoints (4 array positions whose value also appears at another position) but only 2 reoccurring values (2 distinct values, namely the number 1 and the number 4, that appear at more than one position).
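These quantities are easy to compute; the following sketch reproduces the example from the text (the exact tsfresh definitions may differ slightly in edge cases).

# Sketch of the "reoccurring" features on the example series from the text.
import numpy as np

x = np.array([1, 4, 0.5, 1, 2.9, 1.8, 4])
values, counts = np.unique(x, return_counts=True)

reoccurring_datapoints = counts[counts > 1].sum()    # positions whose value appears elsewhere -> 4
reoccurring_values = (counts > 1).sum()              # distinct values that repeat -> 2 (the 1 and the 4)
sum_reoccurring_values = values[counts > 1].sum()    # 1 + 4 = 5
ratio_value_number_to_length = len(values) / len(x)  # below 1 whenever any value is repeated

print(reoccurring_datapoints, reoccurring_values, sum_reoccurring_values, ratio_value_number_to_length)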
All those features are related to the randomness of the return values in a time series sample: the more random the values, the less likely they are to be repeated (and the greater the entropy). As the simulation process generates return values by drawing random samples from a Gaussian distribution, which are further transformed through a continuous function (steps 2 and 3 in Figure 7), simulated returns are assured to be completely random, not presenting any repeated values. However, the real dataset was discovered to have a high percentage of repeated values (close to 25% of the whole training dataset), probably due to truncation of price values as they were obtained from the database. In order to ensure that this was the main difference between real and simulated data detected by System 7, the following steps were performed: • First, an independent feature extractor was developed to extract only the previously mentioned features, and the same classifier was used. The difference in performance between this modified system, which used only 6 features instead of 222, and the submitted one was as low as 0.2% AUC in absolute terms.
• Then, a very low-power random noise was added to the real time series samples to avoid repeated return values; a minimal sketch of this step is given after this list. The maximum amplitude of the noise was set to 10^-13, nine orders of magnitude smaller than the minimum absolute return value in the real time series (10^-4). This specific value was the minimum that led to non-repeated values in the real dataset. As can be seen in Figure 9 for a particular sample, the behavior of the time series was not changed appreciably. The same was done for the simulated time series as well, to assure that every input sample was processed in the same way regardless of its class, which is unknown in the testing phase.
• Finally, the system was used to process those modified time series samples replicating the Challenge protocol, obtaining a large drop in performance that led to an almost random output (0.53 AUC), therefore confirming the initial hypothesis. On the other hand, the performance of the other submitted systems was not affected.
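The de-duplication step can be sketched as follows; the sample return values are made up for illustration.

# Sketch of the de-duplication step: add uniform noise with amplitude 1e-13 (far below
# the smallest absolute return, ~1e-4) so that no two return values are exactly equal,
# leaving the series visually and statistically unchanged.
import numpy as np

def add_tiny_noise(returns, amplitude=1e-13, seed=0):
    rng = np.random.default_rng(seed)
    return returns + rng.uniform(-amplitude, amplitude, size=returns.shape)

returns = np.array([0.0012, -0.0007, 0.0012, 0.0000, 0.0000, 0.0031])  # repeated values
noisy = add_tiny_noise(returns)
print(len(np.unique(returns)), "->", len(np.unique(noisy)))            # 4 -> 6 distinct values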
Submitted System 2 obtained the second best performance at the Challenge. As described in the previous section, the features used by this system can be grouped in the following subsets: • Sorted return values in the time series sample (260 features).
• Sorted products between consecutive return values (259 features).
• Sorted standard deviation values computed over a rolling window of length 2 (259 features).
• Sorted standard deviation values computed over a rolling window of length 3 (258 features).
Similarly to the process done with submitted System 7, an analysis of the relative importance of individual features was performed first. The aim was not to look for a specific feature being particularly discriminant among the whole set, but for subsets of them. As shown in Figure 10, all subsets present features with higher relative importance at the edges of the feature subset, and some of them at the middle as well. It is important to note that, because of the sorting, the features at the edges and middle of each subset correspond to the extremes and center of the return distribution. Accordingly, within each trend we looked at the differences between the "real" distribution (empirical CDF) and the one used to transform the simulated returns (step 3 in Figure 7), which was estimated through a kernel function (step 1 in Figure 7).
Those differences are highlighted in Figure 11 for two different trends of a particular stock. As can be seen in Figure 11(a), high differences arise close to the extremes of the distribution for short trends, as few samples are observed and they are concentrated around the mean value. Thus, the kernel CDF cannot be properly estimated in those return ranges. When the trend is longer (Figure 11(b)), the kernel estimate is much closer to the empirical CDF. However, if we look closer at an interval around zero, we can notice that, even for long trends, differences also arise as real time series may have a large number of zero return values, as shown in Figure 11(c). These differences explain, at least partially, the relative importance of predictors at the extreme and mid-range positions of the first subset of features (1 to 260 in Figure 10), but they could also affect the remaining ones as they are based on calculations involving consecutive return values. The relations between consecutive return values are captured in Method 2 by the covariance between adjacent dimensions. However, as the covariance matrix is computed as the sample covariance between time series segments (trends) belonging to different time series, an overall behavior is being estimated, while those relations may change from one time series to another.
On the other hand, it was noticed that the performance of Reference system 3 was also highly affected, dropping from 0.76 to 0.6 AUC. This is a consequence of the better fit to the CDF, as some of its features directly attempt to quantify the distribution of stock returns, especially those representing the percentage of return values smaller/larger than a given threshold (computed for several thresholds).
SIMULATED-SERIES DETECTION SYSTEMS AS AN EVALUATION FRAMEWORK FOR GENERATION PROCESSES
As has been shown in previous sections, the proposed challenges allow us to objectively measure the goodness of a simulation method and to easily relate the features used with shortcomings in the properties modeled by the process.
Similarly, we can compare different simulation methods by choosing those systems that performed better on the proposed task. In this section, we use the Reference systems 1, 2 and 3, and the Systems 2 and 7 submitted to the second edition of the Challenges (note, however, that for System 7 we only include the most relevant features found in the previous section). The methods compared, apart from our Method 2 after applying the improvements already mentioned in the previous section, are the following (a minimal GBM sketch is given after this list): • Stochastic Differential Equation (SDE) models: Geometric Brownian Motion (GBM) and Constant Elasticity of Variance (CEV).
• Multivariate GARCH: BEKK implementation for 2 and 5 dimensions. Note that this is one of the main modern models that currently define the state of the art when generating financial series.
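For reference, a GBM path generator can be sketched in a few lines of Python; the drift and volatility values below are illustrative and not fitted to the challenge data.

# Sketch of a Geometric Brownian Motion baseline, one of the compared models.
import numpy as np

def simulate_gbm(s0=1.0, mu=0.05, sigma=0.2, days=260, n_paths=10, seed=0):
    rng = np.random.default_rng(seed)
    dt = 1.0 / 252.0
    # Exact discretization of dS = mu*S*dt + sigma*S*dW
    log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, days))
    prices = s0 * np.exp(np.cumsum(log_increments, axis=1))
    returns = np.diff(np.log(prices), axis=1)   # log-returns, as used for challenge segments
    return prices, returns

prices, returns = simulate_gbm()
print(prices.shape, returns.shape)              # (10, 260) (10, 259)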
Results are shown in Table 5. As the goal of the simulation methods is to generate time series as similar as possible to real ones, the better the performance of the detection systems (AUC values), the worse the simulation method.
That is: "good performance" of a simulation method means a low AUC value for a detection system. Then, it can be seen that Method 2 clearly outperformed the other well-known models for all the detection systems used. It is also interesting to note that the results agree with previous knowledge regarding those other models: for example, SDE models do not follow an autoregressive approach, so they perform poorly on Reference system 1, which is based on autocorrelation; they also do not model the time-varying behavior of variance, and so they perform poorly on Reference system 2 as well. The converse happens with GARCH models, although they do not perform as well as Method 2. GARCH models also perform better than SDE ones on Reference system 3 and submitted System 2, as they used Student's t innovations and so better reproduce statistical features such as heavy tails. For the multivariate implementation tested, the effect previously reported in [Franco-Pedroso et al. (2018)] can be seen: volatility clustering is reproduced worse (see results for Reference system 2) when more dimensions are modeled simultaneously. For this implementation, Gaussian innovations were used, so again we have lower performance for Reference system 3.

In summary, the proposed challenges pose the evaluation of simulation methods as a classification task in which real and simulated financial time series have to be distinguished using features developed with the aid of a labeled training set. The goal of the competitions is two-fold: first, to test the goodness of our developed simulation methods in a factual manner in terms of their final goal (to generate financial time series as similar to real ones as possible), and secondly, to find clues, not necessarily related to already known properties, that allow us to identify their weaknesses in order to improve them.
In the first edition of the Challenges, one of the submitted systems showed that the simulation method used generated time series that could be easily distinguished from real ones by comparing their autocorrelation coefficients in a local way. Moreover, it was observed that this system was also capable of distinguishing between different subsets of real time series when they came from different time periods, revealing that, although financial time series do not present significant autocorrelation that would allow predicting future price movements, they do exhibit some kind of pattern that is shared among different stocks within a given time period and changes over time.
In the second edition of the Challenges, two findings were made. On the one hand, one of the submitted systems (System 7) revealed an issue not related to the simulation process but to the real time series themselves, as it detected that repeated return values were present in the original dataset. On the other hand, the submitted System 2 showed that the distributional properties of real returns, among others, were not perfectly reproduced by the simulated ones, as was also pointed out by our Reference system 3. This allowed us to focus our efforts on finding better ways of reproducing this property. Although the simulation method was improved, as shown by the drop in performance of the system, some differences remain between real and simulated time series and further research is needed.
Moreover, the evaluation framework defined for the challenges has allowed us to factually compare the latest version of our simulation method with some other well-known and widely used methods, showing that ours performed significantly better for all of the detection systems. Finally, it can be noted that this cyclic process involving the evaluation of simulated samples and consequent improvement of the simulation method seems to be a perfect scenario for applying Generative Adversarial Networks (GANs) [Ian J. Goodfellow et al. (2014)], which have shown very promising results in other applications, especially image synthesis. In this way, the whole process could be fully automated, avoiding the need to find a good combination of features and classifier. As a counterpart, the most discriminative features could remain hidden depending on the discriminator complexity, which would prevent us from finding unknown properties of financial data.
Negative Magnetoresistance in a Magnetic Semiconducting Zintl Phase: Eu 3 In 2 P 4
A new rare earth metal Zintl phase, Eu 3 In 2 P 4 , was synthesized by utilizing a metal flux method. The compound crystallizes in the orthorhombic space group Pnnm with the cell parameters a = 16.097(3) Å, b = 6.6992(13) Å, c = 4.2712(9) Å, and Z = 2 (T = 90(2) K, R1 = 0.0159, wR2 = 0.0418 for all data). It is isostructural to Sr 3 In 2 P 4 . The structure consists of tetrahedral dimers, [In 2 P 2 P 4/2 ] 6− , that form a one-dimensional chain along the c axis. Three europium atoms interact via a Eu-Eu distance of 3.7401(6) Å to form a straight-line triplet. Single-crystal magnetic measurements show anisotropy at 30 K and a magnetic transition at 14.5 K. High-temperature data give a positive Weiss constant, which suggests ferromagnetism, while the shape of the susceptibility curves (χ vs T) suggests antiferromagnetism. Heat capacity shows a magnetic transition at 14.5 K that is suppressed with field. This compound is a semiconductor according to the temperature-dependent resistivity measurements, with a room-temperature resistivity of 0.005(1) Ω m and E g = 0.452(4) eV. It shows negative magnetoresistance below the magnetic ordering temperature. The maximum magnetoresistance (Δρ/ρ(H)) is 30% at 2 K with H = 5 T.
Introduction
Zintl phases have been extensively studied since the Zintl concept was first presented. In this class of compounds, electropositive elements (alkali and alkaline earth metals) donate electrons to electronegative group 13-16 elements, and the compound is valence precise. 1,2 In recent years, the Zintl boundary has been extended to rare earth metal containing compounds. This approach has led to the discovery of many complex new structures. 3,4 In addition, because of the unique electron configuration of rare earth metals, rare earth metal Zintl phases were found to have special physical properties. Thermoelectricity 5,6 and superconductivity 7 have both been discovered in rare earth metal Zintl phases. Rare earth metal Zintl phases containing Eu and Yb are most interesting, because of the possibility of variable valence states and localized magnetic moments. Eu 2+ has seven unpaired f electrons. There are several Eu-containing Zintl phases that show unusual magnetic and magnetotransport properties. [8][9][10][11][12][13] Eu-containing Zintl phases are typically antiferromagnetic with a low magnetic ordering temperature, 14,15 but many of them show a positive Weiss constant, which suggests a ferromagnetic correlation in the paramagnetic region. 11,12,16 In the last several years the Ln 14 MnPn 11 system has been explored, [8][9][10]17,18 where Ln = Eu, Yb and Pn = P, Sb, As, because of the unique magnetic and electronic properties found in this system. In an effort to make large crystals of Eu 14 MnP 11 10 utilizing a metal In flux, Eu 3 InP 3 was discovered. 19 The structure of Eu 3 InP 3 can be explained by the Zintl concept, and the compound is a semiconductor. The valence of Eu was determined by Mössbauer spectroscopy to have a value of 2+. There are three crystallographically different Eu 2+ sites. The three sites magnetically interact with each other, which results in several magnetic ordering transitions below 14.5 K.
In this paper, we introduce a new Eu-containing Zintl compound, Eu 3 In 2 P 4 . The compound is isostructural with the main group Zintl phases Sr 3 In 2 P 4 and Ca 3 In 2 As 4 . 20 Magnetic measurements of this compound show features of both ferromagnetism and antiferromagnetism. At low temperatures, Eu 3 In 2 P 4 shows magnetoresistance.
Experimental Section
Synthesis. The starting materials were 1/8" Eu ribbon (99.999%, Ames Lab), cut into small pieces, red P (J. Matthey, Puratronic), crushed into small pieces, and In shot (Aesar, 99.99%), used as received. All reactants were mixed in a mole ratio of Eu:In:P = 3:120:4 under N 2 atmosphere. The elements were placed in a 2 mL cylindrical alumina crucible with the Eu (136.8 mg) and P (37.2 mg) between two layers of In (4.1335 g). Another crucible filled with quartz wool was inverted and covered the reaction crucible, and the entire system was sealed in a quartz ampule under 1/5 atm Ar. The sealed reaction container was heated as follows: ramp to 500°C over a period of 1 h, dwell for 1 h, ramp to 1100°C over a period of 1 h, dwell for 6 h, cool at 3°C/h to 850°C, and dwell for 15 h. The reaction vessel was removed from the furnace at 850°C, inverted, and centrifuged. Large crystals, 1-2 mm 3 , were obtained. When exposed to air, the black crystals decompose into a yellow powder. Therefore, the reaction product was kept in a N 2 -filled drybox equipped with a microscope and protected from air exposure for all subsequent measurements.
Single-Crystal X-ray Diffraction. The compound structure was determined by single-crystal X-ray diffraction. The air-sensitive crystal was stored in Exxon Paratone-N oil when taken out of the drybox. To obtain a crystal of suitable size, a large crystal was cut with a stainless steel blade into 0.14 × 0.12 × 0.09 mm 3 . The crystal was mounted on the tip of a quartz fiber and positioned under a 90(2) K cold N 2 stream provided by a CRYO Industries low-temperature apparatus on the goniometer head of a Bruker SMART 1000 diffractometer. Diffraction data were collected using graphite-monochromated Mo Kα radiation. An absorption correction was applied using the program SADABS v2.10. The structure was solved and refined with the aid of the SHELXTL 6.10 program package. 21 Direct methods were used in solving the structure. The final refinement gave R1 = 0.0159 and wR2 = 0.0418, with the largest difference in the Fourier map being 1.107 e Å -3 , 0.87 Å from Eu(2). Some details of the crystallography and refinement parameters are listed in Table 1.
X-ray Powder Diffraction. The crystals were ground to a fine powder with an agate mortar and pestle in a N 2 atmosphere drybox and then mixed with approximately 15% silicon standard and placed between two pieces of cellophane tape. The sample was transferred to a Guinier camera (Cu Kα1) with a vacuum sample chamber. The diffraction pattern was compared with the calculated diffraction pattern obtained from the single-crystal refinement using the computer program Crystaldiffract 3.2. The powder diffraction pattern could be indexed according to the single-crystal structure. There were no unindexed diffraction lines.
Magnetic Susceptibility Measurement. The magnetic measurements were obtained using a Quantum Design MPMS superconducting quantum interference device (SQUID) magnetometer. A 2.01 mg single crystal was used for the measurements. The crystal was protected in Paratone oil and fixed inside a drinking straw, which was used as a sample holder. The a, b, and c axis orientations were determined by single-crystal diffraction, so the magnetic properties in directions parallel to a, b, and c axis could be measured. Zero-field-cooling (ZFC) and field-cooling (FC) measurements were performed between 2 and 300 K with applied fields of 0.01, 0.1, and 5 T. Magnetization curves were also measured between -2 and 2 T at 5 K. The data were reproduced on several different crystals.
Resistivity Measurement. Wires were attached to the crystal with Epo-Tek silver epoxy. The temperature-dependent resistivity of this compound was measured by both two-lead and four-lead methods because of the measurement limits. A four-lead method was employed from 300 to 130 K. A constant current of 100 nA was applied through two outer leads with a Keithley Model 224 current source, and a Keithley 181 voltmeter was used to measure voltage between the two inner leads. Below 130 K, the resistance became too high to measure with the voltmeter (200 mV), and a two-lead method was used. A Keithley Model 617 programmable electrometer that can measure up to 200 GΩ was employed for the two-lead method. Resistance as a function of temperature was measured in both 0 and 3 T applied fields.
Heat Capacity Measurement. Heat capacity of Eu 3 In 2 P 4 was measured with a Quantum Design Physical Property Measurement System (PPMS) in the temperature range from 1.8 to 30 K. The measurement was performed with applied fields of 0 and 1 T using a thermal relaxation method. A crystal of 1.93 mg was used for the measurement. The sample was mounted on the sample holder with Apiezon N-grease. There is a possibility of exposure of the sample to air for less than 30 s when putting the sample into the PPMS. Addenda measurements were done prior to sample measurements. Entropy was calculated by integrating the specific heat divided by temperature in the measured temperature range. Large crystals of the nonmagnetic analogue, Sr 3 In 2 P 4 , were not available for subtracting the nonmagnetic contributions to C p . However, the electronic and phonon contributions to the heat capacity are expected to be small compared to the magnetic contribution at low temperatures in this semiconductor.
Results and Discussion
Synthesis. Large, high-purity crystals can be grown from flux reactions. 22,23 In the synthesis of this compound, indium was used as a flux because of its low melting point, inertness to the alumina crucible, small wetting effect, and low vapor pressure. The appearance of Eu 3 In 2 P 4 is very similar to that of Eu 3 InP 3 , and the compound was initially discovered as a side product in the flux synthesis of Eu 3 InP 3 . Both are black needle-shaped crystals. The flux composition was varied until Eu 3 In 2 P 4 could be produced as the sole product. The synthetic conditions for these two compounds are similar, except that the reactants of Eu 3 In 2 P 4 have a higher P:Eu ratio. Eu 3 In 2 P 4 could not be prepared from stoichiometric amounts of the elements in a sealed tantalum ampule heated at 1100°C for 5 days. Instead, Eu 3 InP 3 is obtained as the main product. It is possible that part of the In or P reacted with the tantalum tube, causing the reaction to be off-stoichiometry. We did not attempt further experiments varying temperature or stoichiometry for tantalum tube reactions, since the product could be obtained in high yield from the flux reaction.
Structure. A study of two main group Zintl phases of the 3-2-4 structure, Sr 3 In 2 P 4 and Ca 3 In 2 As 4 , 20 was published in 1986. The structure type is orthorhombic with space group Pnnm. Eu 3 In 2 P 4 is the first magnetic compound of this structure type. Table 1 provides the X-ray data collection parameters and the structure solution and refinement results. There are two crystallographically unique Eu and P sites in this compound.
The crystal structure of Eu 3 In 2 P 4 is shown in Figure 1. The compound is composed of units of two edge-shared tetrahedra, [In 2 P 4 ] 6-, that are corner shared and stacked to form a chain along the c axis. Eu atoms surround the chains. Figure 2 shows the local environments of Eu(1), Eu(2), and In. In-P distances range from 2.5618(16) to 2.6369(10) Å and the P-In-P angles are from 99.42(4)°to 113.16(4)°, almost identical to those of Sr 3 In 2 P 4 . 20 The longer In-P distances come from corner-shared phosphorus (P1). There are six P atoms around each Eu atom forming distorted octahedra. The Eu-P distances are also very similar to the Sr-P distances in Sr 3 In 2 P 4 . The Eu-P distances are between 2.9599(11) and 3.1905(16) Å, except the 3.5827(17) Å Eu(1)-P(1) (shown as a dash line in Figure 2). The distance range is typical for Eu-P binaries, such as EuP 7 , Eu 3 P 4 , and EuP. 24 With such distances, Eu and P generally show covalent interactions. 24 The Eu-only lattice is very interesting in this compound. Eu(2) occupies a site with 2/m symmetry, while Eu(1) resides on the mirror plane. By symmetry, two Eu(1) atoms build a Eu(1)-Eu(2)-Eu(1) straight line triplet with each Eu(2) atom, as indicated with the dotted line in Figure 1. These triplets are in the ab plane. The Eu(1)-Eu(2) distance in a triplet is 3.7401(6) Å. This is the shortest Eu-Eu distance in this compound, and it is close to the shortest Sr-Sr distance (3.665 Å) in Sr 3 In 2 P 4 . The next shortest distance is much longer: 4.2712(9) Å between two adjacent Eu(1) (or adjacent Eu (2)) atoms along the c axis direction. The structure of Eu 3 In 2 P 4 can be explained by the Zintl concept, as has been done for Sr 3 In 2 P 4 and Ca 3 In 2 As 4 . 20 The charge of the anionic unit, [In 2 P 4 ] 6-, can be balanced with divalent Eu atoms.
Magnetism. The temperature-dependent magnetic susceptibility between 2 and 40 K with 0.1 T applied field is shown in Figure 3. ZFC and FC data were identical in all three orientation measurements. At high temperatures, the curves of the different orientations are very similar, and they can be fit with the Curie-Weiss law, χ = C/(T − θ). The result gives an average of C = 22.65(5) emu K/mol and θ = 16.9(8) K. The C value corresponds to a total formula effective moment of 13.46(4) µ B , close to the theoretical value of 13.75 µ B for three 8 S 7/2 Eu 2+ cations. The 1/χ vs T data for the c axis orientation with an applied field of 0.1 T are shown in the inset to Figure 3.
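The quoted moments can be checked directly from the fitted Curie constant. The short sketch below is our own sanity check, not the authors' analysis code; it uses the standard relation mu_eff = sqrt(8 C) mu_B for C expressed in emu K/mol.

```python
# Sanity check of the Curie-Weiss numbers quoted above (illustrative, not the authors' code).
import numpy as np

C = 22.65                      # emu K / mol (fitted Curie constant, per formula unit)
mu_eff = np.sqrt(8.0 * C)      # effective moment in Bohr magnetons
g, J = 2.0, 7.0 / 2.0          # Eu2+: 8S7/2, spin-only
mu_theory = np.sqrt(3.0) * g * np.sqrt(J * (J + 1))   # sqrt(3): three Eu2+ per formula unit
print(f"mu_eff from fit   : {mu_eff:.2f} mu_B")        # ~13.5, matching the quoted 13.46(4)
print(f"mu_eff theoretical: {mu_theory:.2f} mu_B")     # ~13.7, matching the quoted 13.75
```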
In the low-temperature range there is an obvious magnetic transition at 14.5 K. There is a deviation of the magnetic susceptibility curves as a function of crystal anisotropy at temperatures as high as 30 K. This may arise from short-range ordering above the transition temperature. According to the shape of the susceptibility curves, antiferromagnetism is dominant at low temperatures. However, there are two anomalies with regard to normal antiferromagnetic (AFM) behavior: first, the susceptibility in the c axis direction is much smaller than in the other two directions; second, below 14.5 K, the susceptibility decreases with temperature in both the a and c axis directions. In addition, the positive Weiss constant suggests a ferromagnetic (FM) interaction at high temperatures, another deviation from traditional antiferromagnetism. This phenomenon has been observed in several Eu-containing intermetallic and Zintl phases such as EuGa 4 , EuMn 2 P 2 , and EuSnP. 11,12,16 In those reports the authors suggested that there is a coexistent ferromagnetic ordering component that gives rise to this effect. 11,12,16 The χ vs T data were also obtained with an applied field of 5 T. The shape of the susceptibility curve is no longer reminiscent of antiferromagnetism. Instead, the susceptibility (Figure 4) monotonically increases with decreasing temperature as with a ferromagnet.
The magnetic hysteresis curves at 5 K also suggest antiferromagnetism. As shown in Figure 5, the magnetization increases linearly with the applied field in all three orientations. The magnetization value saturates at about 1.6 T for the c axis and 1 T for the a and b axes. These are low saturation fields compared to typical antiferromagnets and suggest that the antiferromagnetic interaction is very weak. No spin-flop transition was observed for any crystal orientation up to 3 T. The saturation magnetization is approximately 1.070(6) × 10 5 emu/mol (19.23(2) µ B per formula), which is slightly smaller than the value calculated from three Eu 2+ ions per formula (21 µ B ). The hysteresis measurement provides insight into the χ vs T data, which show AF ordering at low fields (Figure 3) and FM ordering at high fields (Figure 4). The FM ordering occurs because the high field saturates the sample and makes all the moments align in the same direction in a ferromagnetic fashion.
Heat Capacity. Figure 6 shows the result of heat capacity measurements. The data are shown as C p /T, normalized per mole of europium. There is a sharp peak at about 14.5 K in the zero-field measurement, which corresponds to the transition seen in the susceptibility measurement. The tail above 14.5 K is considered to be due to short-range magnetic ordering. Under an applied magnetic field of 1 T, this peak broadens and shifts to a higher temperature due to the saturation of the magnetic moment associated with the FM character of the spin. The entropy of R ln 8, which corresponds to the full Eu 2+ multiplet, is reached at about 22 K, indicating that the system has an 8-fold degenerate Eu 2+ ground state, as expected from susceptibility data. The broad increase between 3 and 8 K is very similar to that observed in Eu 3 InP 3 . 19 This broad increase in Eu 3 InP 3 was explained as an intrinsic magnetic transition, since a peak was seen in the susceptibility measurement at around 5 K. 19 However, there is no transition in the χ vs T plot of Eu 3 In 2 P 4 in this temperature range, suggesting it is not specifically due to the particular structure of Eu sites. Indeed, this kind of broad feature appears in several Eu 2+ and Gd 3+ systems, regardless of the crystal structures. [25][26][27][28][29] Below the transition temperature, there remains R ln 2 of entropy per europium, which corresponds to R ln 8 for the three Eu taken together. These facts together suggest that this broad feature is most likely due to the Zeeman splitting of the 8-fold degenerate states at the local Eu 2+ site by the internal magnetic field produced by magnetic ordering below the transition temperature. We expect the same physics in Eu 3 InP 3 for the low-temperature feature around 5 K; that is, the increase in heat capacity at around 5 K in Eu 3 InP 3 is not due to an intrinsic magnetic transition, and the peak in the χ vs T plot of Eu 3 InP 3 at 5 K may be attributed to the local environment of the spins.
Resistivity. The temperature-dependent resistivity (ρ) data are shown in Figure 7. Figure 7a shows the high-temperature data measured with a four-lead method, and Figure 7b shows the low-temperature data measured with a two-lead method. The resistivity at room temperature is 0.005(1) Ω m. In the high-temperature region, ln ρ as a function of inverse temperature can be linearly fit with the equation ln ρ = E g /(2k B T) + f, which suggests that this compound is a semiconductor. The fitting of the data between 130 and 300 K provides the gap energy E g = 0.452(4) eV. Since this compound is a charge-balanced Zintl phase, its semiconducting property is expected according to the Zintl concept. Magnetoresistance at high temperatures is not observable, while at low temperatures, as shown in Figure 7b, the magnetoresistance onset occurs below approximately 30 K. This temperature is consistent with the temperature at which the magnetic anisotropy becomes noticeable. The maximum magnetoresistance is about -30% at 2 K with an applied field of 5 T. AFM materials with magnetoresistance are very rare, and very often these types of materials are metallic. [30][31][32] If this compound is antiferromagnetic, and the applied field is parallel to the spin moment direction (c axis), the magnetoresistance should be positive according to theory. 33 But here a negative magnetoresistance is observed. In a field as high as 5 T, the spin moments in the compound are saturated in the field direction, and the compound is in an induced ferromagnetic configuration. This transition could cause a "red-shift effect" similar to that of EuSe, 34 which can lower the resistivity. Also, in the ferromagnetic state, the current carriers (electrons) can avoid scattering when hopping from site to site, because all sites have the same spin moment direction. This can also decrease the resistivity.
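The band-gap extraction from the Arrhenius-type fit can be illustrated with a short sketch. This is a hedged example with synthetic data (the prefactor, noise level and the use of numpy's polyfit are our own assumptions, not the authors' procedure); it only shows how a slope of ln ρ versus 1/T converts into E g.

```python
# Illustrative fit of ln(rho) vs 1/T; the slope equals E_g / (2 k_B).
import numpy as np

k_B = 8.617e-5                                   # eV / K
E_g_true, rho0 = 0.452, 5e-7                     # assumed gap (eV) and prefactor (Ohm m)

T = np.linspace(130.0, 300.0, 40)                # temperature range used in the fit above
rho = rho0 * np.exp(E_g_true / (2.0 * k_B * T))  # intrinsic-semiconductor model
rho *= np.exp(np.random.default_rng(0).normal(0.0, 0.02, T.size))   # measurement noise

slope, intercept = np.polyfit(1.0 / T, np.log(rho), 1)
print(f"fitted E_g = {2.0 * k_B * slope:.3f} eV (expected ~0.452 eV)")
```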
Since this compound is a semiconductor, the interaction should be the Bloembergen-Rowland (BR) coupling. 35 That is, interband exchange produces the magnetic interaction between Eu sites: phosphorus p electrons excited to the Eu d band and polarized by intraband f-d exchange. Alternatively, superexchange may occur between the two shortest Eu-Eu distances of 3.7401(6) and 4.2712(9) Å. A recent theoretical calculation suggests that Eu f shell superexchange interaction can produce a sizable effect in compounds with similar Eu-Eu distances. 36 These two possible mechanisms can compete between neighboring europium atoms and thus cause the weak, easily saturated, low-temperature antiferromagnetism in this compound. The positive Weiss constant in this compound is controversial. However, if we consider the structure, it is still understandable. The distances within one Eu triplet and between these triplets are very different, which means that the coupling within and between these triplets is different. There could be different types of couplings (one FM, the other AFM). If this is the case, a possible model might be with FM interactions within triplets and AFM interactions between triplets. With this model, the FM ordering would be short range and could be reflected in the Weiss constant, while the AFM ordering could be long range at low temperatures. This model provides one way to explain all the data. We plan to use neutron diffraction to determine the magnetic structure, and efforts are underway to grow large crystals for this experiment.
Summary. Eu 3 In 2 P 4 is a magnetic semiconducting compound with a transition temperature at 14.5 K. While there are aspects of the temperature dependence of the magnetic data that suggest weak antiferromagnetic order, there are also components that suggest ferromagnetism. Its magnetic property is anisotropic, and the Weiss constant θ is positive. Heat capacity shows a magnetic transition that is suppressed with field, consistent with AF at low field and FM at high field. At the magnetic ordering temperature, this compound shows a negative magnetoresistance. There are two crystallographically different Eu 2+ sites in this compound. We have proposed a simple model relating structure to the magnetic properties. Further efforts are under way to investigate this model with neutron diffraction.
Resolving Tensions between Congestion Control Scaling Requirements
Low Latency, Low Loss Scalable throughput (L4S) is being proposed as the new default Internet service. L4S can be considered as an 'incrementally deployable clean slate' for new Internet flow-rate control mechanisms, because, for a brief period, researchers are free to develop host and network mechanisms in tandem, somewhat unconstrained by any pre-existing legacy. Scaling requirements represent the main constraints on a clean-slate design space. This document confines its scope to the steady state. It aims to resolve the tensions between a number of apparently conflicting scalability requirements for L4S congestion controllers. It has been produced to inform and provide structure to the debate as researchers work towards pre-standardization consensus on this issue. This work is important because clean-slate opportunities like this arise only rarely and will only be available briefly, for roughly one year. The decisions we make now will tend to dirty the slate again, probably for many decades.
Introduction
A new Internet service has been proposed called L4S, for Low Latency, Low Loss Scalable throughput. It enables so-called 'Scalable' congestion controls to keep queuing delay and congestion loss to extremely low levels, while still sharing Internet capacity with existing traffic and remaining isolated from its highly variable queuing delay and loss. The best background reference on L4S for the present document is [DSBTB15]. The aim of this paper is to articulate the tensions between a number of conflicting scaling requirements. The scope is limited to the steady state. Various ways to resolve these tensions are given, including consideration of whether each requirement is best resolved in the network or on hosts. The idea is to determine the design space that flow-rate control mechanisms are confined to, because scaling requirements are the main constraints on a clean-slate design space.
Realistically, L4S is not truly a clean slate; it is a 'slightly-dirty slate', because it is built within the Internet architecture, which imposes a number of additional constraints. Some of these are documented explicitly as assumptions. But many implicit assumptions remain hidden, by definition! This work is important, because clean-slate opportunities like this arise only rarely and will only be open briefly. The period of research freedom will end as experimental standards start to be approved for the network mechanisms (perhaps late 2017 or early 2018). Therefore, the decisions we make now will dirty the slate again, probably for many decades.
The paper is structured as follows. § 1.1 follows with definitions of terms, variables and assumptions. § 2 states a number of scaling requirements that are mutually in tension, then § 3 proposes various ways to resolve these tensions. Some unequivocally solve the dilemmas, others are compromises that partially satisfy some of the apparently mutually incompatible requirements.
Terminology & Assumptions
Assumption 1. Scaling of topology is not part of this exercise. For traffic scaling purposes, it will be sufficient to consider a mini-scenario of a number of flows competing for capacity at a single bottleneck.
Consider a bottleneck link of capacity X serving a number of traffic flows indexed by i, where each flow has: • bit-rate x i ; • round trip time R i ; • and consists of segments of typical (usually maximum) size s i .
The number of segments sent but not acknowledged by source i is termed its window, W_i = x_i R_i / s_i (Equation 1). Assumption 2. We initially assume first-in first-out (FIFO) queuing, so all L4S microflows for a site (customer/user) will share the same queuing delay, q.
The round-trip time R_i of flow i consists of the base propagation delay between the endpoints, R_0i, and the queuing delay, that is, R_i = R_0i + q. It is unlikely that carrier-scale equipment will implement per-microflow queuing, not only due to cost, but also due to concerns over the tension between transport layer packet inspection and network layer encryption for privacy. Also, per-flow queuing in the network requires the network to schedule each microflow, which raises concerns over constraining application flexibility (e.g. variable-bit-rate video).
Assumption 3. We assume an L4S-enabled bottleneck implements some form of unary per-packet explicit congestion notification (not necessarily the standardized form of ECN [RFB01]), so all flows share the same packet marking probability p, with 0 ≤ p ≤ 1.
Assumption 3 does not preclude congestion controls that use both delay and explicit marking as complements. It does imply that solutions based solely on delay and/or loss would require a completely different analysis.
Assumption 4. We assume traffic will sometimes share the link with legacy ('Classic') TCP traffic, but it will be isolated from the harmful large and variable queue induced by 'Classic' TCP using, for example, the DualQ Coupled AQM [DSBEBT16].
Scalable Congestion Control Tensions
Here we show that a number of ideal scaling requirements are not all mutually compatible:
1. Scalable congestion signalling;
2. Limited RTT-dependence;
3. Unlimited responsiveness;
4. Low relative queuing delay;
5. Unsaturated signalling;
6. Coexistence with Classic TCP.
The scope is limited to scalability under steadystate conditions. Nonetheless, the purpose of some of the requirements (e.g. scalable control signalling) is to enable scaling of dynamic control. However, that linkage will not be explored in this paper.
Scalable Congestion Signalling
Requirement 1. For all flows, in the steady state, the number of congestion signals per round-trip, v i should be no less than a minimum.
Formally: v_i ≡ p W_i ≥ v_0 (Equation 2), where v_0 will be a widely agreed lower bound for all flows. v_0 need not be > 1, but it should not be a lot less than 1, so that even in the worst case (steady state) there will still be a signal nearly every round trip. The dependence of the variable v_i on other variables will be investigated below.
This requirement ensures that flow rate can hug variations in available capacity as tightly as possible within the minimum delay that feedback takes to reach the sender. It ensures adjustments can remain as small as possible, which minimizes excursions into both queuing delay and under-utilization.
It also ensures that the sender can detect the absence of congestion signals within a small number of round trips, which can be used to rapidly trigger probing for more available capacity.
Limited RTT-Dependence
Requirement 2. In the steady state, the throughput of a flow with very large base RTT should not approach starvation while other flows sharing the same bottleneck queue receive plenty.
This requirement is deliberately not as strong as 'RTT-Fairness', in which the bit rates of competing flows are required to be independent of their RTTs. It may be argued that existing congestion controls are RTT-dependent, and the lower throughput of large-RTT flows has not been problematic. However, this is because the RTT-dependence of TCP has been cushioned by queuing delay, which L4S aims to remove.
Specifically, with tail-drop queues, the RTTs of all long-running flows have included a common queuing delay component that is no less than the worst-case base RTT (due to the historical rule of thumb of sizing access link buffers at 1 worst-case RTT). So, even where the ratio between base delays is extreme, the ratio between total RTTs rarely exceeds 2 (e.g. if the worst-case base RTT is 200 ms, the worst-case total RTT imbalance tends to (200+200)/(0+200) = 2).
Classic AQMs reduce queuing delay to a typical, rather than worst-case, RTT, but this still cushions the effect of RTT-dependence. For instance, with PIE, the target queuing delay common to each flow is 15 ms. Therefore, even if the ratio between RTTs is 100× (e.g. 200 ms/2 ms) worst-case rate imbalance is only roughly 13 (see Table 1).
However, because L4S all-but eliminates queuing delay, any RTT-dependence translates (nearly) directly into rate imbalance. For instance, if the target L4S queuing delay is 500 µs, the same 100× imbalance of base RTTs leads to a rate imbalance of about 80 (also shown in Table 1).
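The trend can be verified with a few lines of arithmetic. The sketch below is our own check of the numbers discussed around Table 1, assuming a purely RTT-dependent control whose rate is inversely proportional to total RTT (base RTT plus the common queuing delay of the bottleneck).

```python
# Rate imbalance between a 200 ms and a 2 ms base-RTT flow for different queuing delays.
base_rtt_long, base_rtt_short = 200e-3, 2e-3           # 100x spread in base RTT
for name, q in [("tail-drop (~worst-case RTT)", 200e-3),
                ("Classic AQM, e.g. PIE target", 15e-3),
                ("L4S target", 0.5e-3)]:
    imbalance = (base_rtt_long + q) / (base_rtt_short + q)
    print(f"{name:32s} rate imbalance ~ {imbalance:.0f}")
# Prints roughly 2, 13 and 80, reproducing the trend discussed above.
```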
It is hard to state requirement 2 precisely. Initially it will be stated as a rough equality requirement, but this will be nuanced in later discussion. For now, consider two flows i & j sharing the same bottleneck. Then, x_i ≈ x_j (Equation 3).
Combining Equation 1 and Equation 2, v_i = p W_i = p x_i R_i / s_i. The marking probability p is common to all flows, that is, p = v_i s_i / (x_i R_i) = v_j s_j / (x_j R_j). Substituting into Equation 3: v_i s_i / R_i ≈ v_j s_j / R_j (Equation 4). Therefore, for flow bit-rate to be (roughly) independent of RTT, source i would have to make either the segment size s_i or v_i (or the product of both) proportionate to its RTT R_i. Both lead to problems when the RTT is small: • The segment size is usually set to the maximum that all the links along the path can support. Therefore, to make s_i proportionate to R_i, segment size would have to be reduced on shorter RTT paths. Then the packet processing rate, and therefore the likelihood of processor overload, would be much higher than necessary whenever content was sourced locally, rather than remotely. Such perverse inefficiency is not a feasible proposition.
• If v_i were proportionate to R_i, the number of round-trips between signals would become very large over short RTT paths, leading to slack control of dynamics (failing Requirement 1). We will tease apart this dilemma between requirements 1 & 2 when we consider potential compromises between requirements in § 3.
One possible escape from this dilemma is that the range of feasible RTTs will not need to scale infinitely, although this point is controversial: • The RTT in glass over the earth's surface between two points at opposite poles (200 ms or 240 ms allowing for typical indirect routing) could be considered as an upper bound to RTT. However, this excludes inter-planetary communication, which is likely to become less and less unusual.
• There is clearly a minimum distance and therefore RTT between two machines capable of running application processes and congestion control algorithms, given physics sets a minimum bound on the size of transistors. But such a limit would be hard to pin down precisely.
Traditionally, flows at very different scales of RTT do not coexist in the same bottleneck. Instead, a domain at one scale (e.g. a data centre) is often separated from a domain at another scale (e.g. the public Internet) by an intermediate buffering node; a congestion control proxy. Nonetheless, one purpose of designing scalable control algorithms is to remove the need for such proxies.
Unlimited Responsiveness
Requirement 3. An L4S congestion controller must continue to remain responsive to congestion for all values of the window, W i .
The ACK-clocking mechanism of Classic TCP cannot work if the window is less than d segments, where d is the delayed ACK factor. For example, with a delayed ACK factor of 2, the ACK-clock fails if the window is less than 2. If the base RTT is so low that the window needs to be below d to fit available capacity, Classic TCP never reduces its congestion window below d. Instead, TCP holds the congestion window at d, which forces the queue to grow. This grows the total RTT until a window of d packets will fit within it [BDS15].
Traditionally, it was thought that this was only a problem with very low capacity. However, once queuing delay is all-but removed, it is not uncommon for the base RTT to be low enough to exhibit this problem. For instance, consider available capacity x_i = 2 Mb/s, which might occur when a few flows happen to be sharing the link. With a common segment size s_i = 12 kb and base RTT R_i = 6 ms, the window to fill this capacity is W_i = x_i R_i / s_i = (2 Mb/s × 6 ms) / 12 kb = 1 segment, below the delayed-ACK factor d = 2. If L4S controllers became unresponsive at some limit, like Classic TCP does, they would ruin the low queuing delay feature of the L4S service in many realistic cases like those above. This is why requirement 3 states that an L4S controller must not exhibit such a limit.
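The example generalizes easily. The short sketch below (with capacities and RTTs we have chosen for illustration, beyond the paper's single 2 Mb/s case) shows how often the window needed to fill the available capacity falls at or below the delayed-ACK factor once queuing delay is removed.

```python
# Window (in segments) needed to fill a given per-flow capacity at a given base RTT.
def window_segments(rate_bps, rtt_s, segment_bits=12_000):
    return rate_bps * rtt_s / segment_bits

for rate_mbps, rtt_ms in [(2, 6), (10, 1), (100, 0.1), (1000, 0.05)]:
    w = window_segments(rate_mbps * 1e6, rtt_ms * 1e-3)
    print(f"{rate_mbps:5.0f} Mb/s, RTT {rtt_ms:5.2f} ms -> window = {w:5.2f} segments")
# The first line reproduces the example above: 2 Mb/s at 6 ms needs a window of 1 segment.
```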
Low Relative Queuing Delay
Requirement 4. Queuing delay should remain small relative to the likely shortest base RTT of any bottlenecked flow.
There is no need for queuing delay to be smaller than some absolute delay limit as long as it does not have significant impact relative to the base round trip delay of any communication.
Queuing delay has been steadily eroded as we have moved from i) tail drop to ii) AQMs for Classic TCP traffic to iii) an L4S AQM designed to be used with Scalable congestion controls. Assuming the bottleneck is in a link where flow multiplexing is low, these respectively keep queuing delay to i) the worst-case base RTT; ii) a typical base RTT; and iii) the minimum expected base RTT. Therefore L4S brings us to the point where this requirement can be satisfied.
Each case is briefly explained in the following: 1. In a low stat-mux case, a well-sized drop-tail buffer is not configured smaller than 1 worst-case bandwidth-delay product, which equates to 1 worst-case RTT of delay. Otherwise all lone flows except those with worst-case RTT would under-utilize the bottleneck link, and continual unavoidable bursts would exacerbate under-utilization even for the longest RTT flows.
2. An AQM is designed to absorb bursts up to a worst-case RTT in duration, so it can be configured to aim for a typical RTT of queuing delay, accepting that there will be some under-utilization by lone large RTT flows. For instance, an AQM in a data centre is configured with a much lower target delay than an AQM in the public Internet.
3. The utilization of Scalable traffic is relatively insensitive to a lower-than-optimal target delay [AJP11] so an L4S queue can be configured for close to the minimum likely RTT with very little under-utilization.
If we aim to enable a wider range of flows to coexist in the same bottleneck (e.g. 1 µs-200 ms), it will be necessary to either manually configure target delay lower, to reflect the lowest typical base RTT, or perhaps to design AQMs that auto-detect the lowest RTT flow that is using the bottleneck at any one time and auto-tune its target delay accordingly.
Satisfying this scaling requirement for a wider range of RTTs seems to require a change to AQM algorithms in the network, whereas the other requirements have so far been addressed with host-only changes. Nonetheless, this requirement is still relevant to mention here because it complements requirements 2 & 3.
Unsaturated Signalling
Since the marking probability cannot exceed 1 while Equation 2 requires v_0 ≤ p W_i, this combination of inequalities implies v_0 ≤ W_i. So, when the window is small, congestion signalling could saturate at p = 100%. Then the controller will effectively stop reducing the window W_i in response to further increases in congestion, contravening requirement 3. This causes the queue to grow, until the total RTT grows large enough to satisfy (substituting from Equation 1) R_i ≥ v_0 s_i / x_i. This inequality is plotted in Figure 1 to illustrate the region where signalling saturates for two example values of v_0. It might seem that v_0 could be set as low as possible to reduce the likelihood of saturation. However, at the other end of the window spectrum, this would reduce the number of control signals per RTT, compromising requirement 1.
Coexistence with Classic TCP
Requirement 6. No standard Classic TCP flow [APB09] should be pushed towards starvation while any L4S flows are not.
As with requirement 2, the words 'fairness' or 'TCP-friendliness' are deliberately not used for this requirement, because it is not trying to justify some unsubstantiated feeling that different users or applications should have similar rates [Bri07]. It is expressed in terms of each flow avoiding starvation, which Floyd and Allman [FA08] explained was the underlying motivation behind TCP-fairness. Per-flow starvation-avoidance is all that is necessary for end-systems to implement. Networks might (and often do) additionally enforce or police the relative rates of users, but networks need to be careful not to limit application flexibility without strong reasons.
As long as there is plenty of capacity, this requirement then allows flows to weight their rates to be different from each other as long as they do not increase congestion to a level at which a standard TCP flow [APB09] would approach starvation.
We use the term 'approach starvation' rather than just 'starvation' because strictly starvation is a condition where one congestion control continually reduces another, driving it to its minimum throughput whatever capacity is available. It will probably be necessary to define and standardize 'approaching starvation' as some minimum throughput of a Classic TCP flow, or equivalently some maximum level of Classic drop (or Classic marking). We shall call these the 'tolerable throughput' or 'tolerable congestion level', but not quantify them here.
The RTT used by the comparable standard TCP flow also needs to be considered, for two reasons: • As was explained in § 2.2, RTT-dependent congestion controls are no longer cushioned by queuing delay when queuing delay is kept low by AQMs.
• The traditional definition of TCP-fairness has always applied to flows of similar RTT, but this is not appropriate for comparing flows that are served by queues with different target queuing delay within the same bottleneck (as in the DualQ AQM [DSBEBT16]).
It would be over-restrictive to prohibit Scalable flows from pushing a long RTT Classic flow towards starvation, given short RTT Classic flows already push long RTT Classic flows towards starvation.
We take the position that we only have to prohibit Scalable flows from pushing low-RTT Classic flows towards starvation. Just as the aim here is to design Scalable CCs with limited RTT-dependence (requirement 2), it can be asserted that there is nothing to stop Classic CCs being redesigned for limited RTT-dependence. If so, there is no doubt that the aggression of long-RTT Classic flows would be increased, rather than that of short-RTT flows decreased.
This still begs the question of what RTT we mean by a 'low-RTT' Classic flow. The RTT of a Classic TCP flow will never be less than the queue delay target in an AQM for Classic traffic. As explained in § 2.4, the queuing delay target of a Classic AQM is configured for the typical RTT of the flows it controls. We do not know of a study that measures the average base RTT of traffic on the public Internet weighted by usage. Nonetheless, the lowest opinion of what is 'typical' is the 5 ms target of CoDel. 2 By Assumption 4, L4S and Classic traffic share capacity through a mechanism like the DualQ Coupled AQM [DSBEBT16]. Currently, this relates the loss (or ECN marking) probability seen by Classic traffic, p_C, to that seen by L4S traffic, p, as follows: p_C = (p/k)². By the above arguments, it is sufficient for coexistence to set the coupling factor, k, with only 'low-RTT' Classic flows in mind.
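A minimal sketch of this square-law coupling follows; the value of k is an illustrative assumption, not a recommendation. Because a Reno-like flow's rate scales roughly as 1/sqrt(p_C) while a scalable flow's scales as 1/p, the square-law coupling makes their rate ratio independent of the congestion level.

```python
# Coupled Classic drop probability and resulting rate ratio (up to constants).
import numpy as np

k = 2.0                                       # assumed coupling factor
for p in (0.01, 0.05, 0.2):                   # L4S marking probability
    p_C = (p / k) ** 2                        # Classic drop/mark probability
    ratio = (1.0 / np.sqrt(p_C)) / (1.0 / p)  # Reno-like rate : scalable rate
    print(f"p = {p:.2f} -> p_C = {p_C:.4f}, rate ratio (up to constants) = {ratio:.1f}")
# The ratio stays constant (equal to k), whatever the congestion level.
```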
This leaves the question open of what value to agree on for the aggressiveness of Scalable flows (v_0 in Equation 2). The choice of v_0 already requires a tough compromise to be struck between requirements 1, 2, 3 and 5. So it would be sensible to wait for some consensus to emerge over the choice of v_0 before recommending a value for the coupling factor k.
Unsaturated Marking
A scheme such as REM [ALLY01] could be used in the network to reduce the likelihood that signalling will saturate (requirement 2.5). Nonetheless, below we propose a scheme that purely involves the sender's control algorithm.
It is proposed to use the number of unmarked packets, u, between marked packets to drive the sender's congestion control algorithm. If p is the packet marking probability, as already defined, then the number of packets delivered per marked packet is 1/p. Therefore, the number of unmarked packets between the marked packets is u = 1/p − 1. Whereas p is confined to the range [0, 1], the range of 1/u is [0, ∞). This is the unsaturating property that is needed. Other examples are illustrated in Figure 2.
These unsaturating congestion signals will sometimes be called virtual marks, because the host (or any observer) can calculate the occurrence of virtual marks from the spacing between real marks.
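A small sketch of the idea follows; the probabilities are illustrative. The point is that the virtual-mark signal 1/u keeps growing where p itself would saturate at 1.

```python
# Virtual-mark signal derived from the count of unmarked packets between marks.
for p in (0.001, 0.1, 0.5, 0.9, 0.999):
    u = 1.0 / p - 1.0                 # expected unmarked packets between marks
    signal = float("inf") if u == 0 else 1.0 / u
    print(f"p = {p:5.3f} -> u = {u:8.3f}, virtual-mark signal 1/u = {signal:8.3f}")
```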
Scalable Signalling vs. RTT Independence
A difficult tension remains between the scalable congestion signalling requirement (2.1) and the requirement to limit RTT-dependence (2.2).
The authors cannot find an elegant resolution to this tension. Instead, we have considered a number of inelegant compromises. Those ideas that are decent enough to present here are named "Compromise 4" and "Compromise 5". The flow of the argument continues from § 2.2, where we initially set the bit rates of two competing flows, i & j with different RTTs to be roughly equal, x i ≈ x j , which we shall call our 'interim RTT-independence requirement'. Different compromises soften this requirement in different ways.
Before continuing, we shall simplify. § 2.2 concluded that the maximum segment size could not be varied upwards, given existing link limitations, and nothing would be gained by varying it downward. Therefore we shall simplify by discussing packet rate, r, not bit rate. Then Equation 1 can be restated as W_i = r_i R_i (Equation 8).
Compromise 4
This compromise ends up not being chosen, but it is included here to illustrate the problem.
Returning to our interim RTT-independence requirement, it led us to Equation 4, which required that the marks per RTT, v_i ∝ R_i. Substituting in Equation 2, p W_i ∝ R_i. Substituting from Equation 8 and introducing a constant of proportionality c_0, p r_i R_i = c_0 R_i, that is, p r_i = c_0 (Equation 10). The constant c_0 = p r_i can be interpreted as the constant number of marks per unit time necessary for RTT-independence. For example, if c_0 = 1000, in each flow one packet would be marked per ms.
Such RTT-independence would be problematic in two cases: Low rate: If r < 1 packet/ms, there would not be enough packets to be marked once per ms; Low RTT: If R_i < 1 ms, there would be less than 1 mark per round trip. For example, if R_i = 1 µs, there would be only one mark every 1,000 round trips, which would not provide the tight control demanded by the scalable signalling requirement (requirement 1).
The first problem is a signalling saturation problem, which can be solved using the technique in § 3.1. The second problem is not surprising, because Equation 10 is derived from pW i = c 0 R i , and when R i is small this contravenes v i ≥ v 0 from Equation 2, which expresses the scalable signalling requirement.
In contrast, DCTCP [AGM + 10] is a good example of the advantage of scalable congestion signalling (requirement 1). In the steady state its congestion window converges to W_i = v_0/p, where in DCTCP's case v_0 = 2 segments. However, this contravenes requirement 2 because, substituting from Equation 8, the packet rate r_i = v_0/(p R_i) (Equation 11) is inversely dependent on RTT.
One possible compromise is to replace the dependence on RTT in Equation 11 with dependence on another scalable property, perhaps the inter-packet departure time, 1/r: r_i = v_0 r_i / p. This should scale reasonably well, because the inter-packet time will only be greater than the RTT if the window is less than 1 segment, which is uncommon (but not impossible—see requirement 3). However, this simplifies to p = v_0, which would be an impractical rate control, because the congestion level would not change with rate, so it would never converge.
A possible alternative compromise would be to replace dependence on RTT with dependence on the square root of the inter-departure time, 1/√r. For completeness, we will also address the saturation problem by replacing p with 1/u: r_i = v_0 u √r_i, which simplifies to r_i = (v_0 u)². Renaming the squared constant, r_i = c_0 u² (Equation 12). We can then plot the inter-mark time 1/(p r_i) compared to the inter-packet departure time, using c_0 = 1000. At any flow rate, for example the vertical at 1 Gb/s, the ratio of the times where this vertical intersects the two plots (120 µs / 12 µs = 10) represents a likely worst-case number of round trips per mark at that flow-rate. This assumes that the worst-case is a window of one segment, so that the intersection of the vertical with the inter-packet time plot represents a worst-case RTT, which is perhaps reasonable, but not strictly true, as already discussed.
Therefore, at flow rates below about 100 Mb/s, there is little likelihood of unscalable control signalling (many round trips between marks). However, at higher flow rates, and low RTTs, this approach compromises the scalable control signalling requirement (1) in favour of RTT-independence.
Further, because of the squared congestion metric in Equation 12, the coupling between Classic and L4S congestion signals would have to be altered from that given in Equation 6. In order to coexist with Classic TCP (requirement 6), the coupling would require an exponent of 4, rather than 2.
It is questionable whether it will be worthwhile to standardize an exponent of 4 rather than 2 in the L4S coupling mechanism, solely to support an approach that does not reliably satisfy one of the conflicting requirements, specifically scalable signalling (requirement 1).
Compromise 5
The nub of the tension can be seen by restating the equations representing the scalable signalling requirement (Equation 2) and the limited RTT-dependence requirement (Equation 9) together. A better compromise might be possible if the marks per RTT can take the form of a function of RTT, v_i(R_i), such that, as RTT reduces, marks per RTT are lower bounded (or at least reduce slowly) while, as RTT rises, marks per RTT become proportional to RTT. Equation 13 fits this description fairly well.
R 0 would probably need to be standardized, at least to within a range. It is a configuration parameter common to all flows that represents the RTT at which W i /u i = v 0 . This formula is illustrated in Figure 4 using parameters v 0 = 2, R 0 = 500 µs.
It will be noted that non-saturating congestion signals, 1/u, have been used in place of p, as described in § 3.1. We use the unit 'marked packet' for these signals, which is a good enough approximation at the low marking probabilities used in the examples here.
The marks have been contrived to become proportional to RTT as RTT rises 3 so that, when marks/RTT is divided by R i to derive the formula for marks per second, it will tend to a constant asymptote. The resulting formula for marks per second is given in Equation 14 and Figure 5 implies that it does indeed tend to a constant of about 2,800 as R i → ∞.
Approximating p by 1/u, the marks per second can be written r_i/u = f(R_i) (Equation 15). For two flows, i & j with RTTs R_i & R_j, the ratio between their packet rates will be the ratio of the functions f(R_i)/f(R_j), using Equation 15. This is because u will always be common to both flows. For example, reading off from Figure 5 at R_i = 10 µs & R_j = 130 ms, r_i/r_j ≈ 35,000/2,800 ≈ 13. Thus, a round-trip ratio of over 4 orders of magnitude only results in a rate imbalance of a little more than 1 order of magnitude.
This relatively small rate imbalance is not at the expense of control signal scaling. For instance, in a round trip of 10 µs there are about 0.35 marks (about 3 round trips per mark).
Therefore, in theory at least, 'Compromise 5' is a good compromise between scaling requirements that were thought to be mutually incompatible.
Summary
The status of the requirements set at the start of this document is summarized in Table 2.
The tension between the first two requirements is resolved fairly well by Compromise 5 ( § 3.2.2), but this does not preclude finding a better compromise.
The unlimited responsiveness requirement (2.3) was set aside for the purposes of the present paper because it is not so obviously in tension with any other requirements. It remains to be resolved.
Anisotropic fluid spheres in the framework of f(R,T) gravity theory
The main aim of this paper is to obtain analytic relativistic anisotropic spherical solutions in the f(R,T) scenario. To do so we use the modified Durgapal-Fuloria metric potential, and the isotropic condition is imposed in order to obtain the effective anisotropy factor ∆̃. Besides, a notable and viable choice of f(R,T) gravity formulation is made, specifically f(R,T) = R + 2χT, where R is the Ricci scalar, T the trace of the energy-momentum tensor and χ a dimensionless parameter. This choice of f(R,T) function modifies the matter sector only, adding new ingredients to the physical parameters that characterize the model, such as the density and the radial and tangential pressures. Moreover, other important quantities are affected, such as the subliminal speeds of the pressure waves in both the radial and transverse directions and observational parameters, for example the surface redshift, which is related to the total mass M and the radius r_s of the compact object. Key mechanisms such as equilibrium through the generalized Tolman-Oppenheimer-Volkoff equation and the stability of the system are also affected. We analyze all the general physical and mathematical requirements of the configuration, taking M = 1.04M⊙ and varying χ from −0.1 to 0.1. It is shown by the graphical procedure that χ < 0 yields a more compact object than χ ≥ 0 (where χ = 0.0 corresponds to general relativity theory) and increases the value of the surface redshift. However, negative values of χ introduce an attractive (inward) anisotropic force into the system and the configuration is completely unstable (corroborated employing Abreu's criterion). Furthermore, the model in Einstein gravity theory presents cracking, while for χ > 0 the system is fully stable. The relationship between the effective pressures and the effective density ρ̃ is obtained and discussed. This is achieved by establishing the corresponding equation of state.
I. INTRODUCTION
Put forward by Harko and his collaborators [1], f(R,T) gravity theory was designed to face the late-time acceleration of the Universe and the existence of dark matter, issues raised by recent observational data [2][3][4][5][6]. At present f(R,T) theory is an active research field in the cosmological context [8][9][10][11][12][13][14]. Other interesting works available in the literature are, for example, the study by Sharif et al. on the non-static line element for the collapse of a spherical body containing an anisotropic fluid [15], and the static spherical wormhole solutions found in [16,17]. Moreover, perturbation techniques were used by Bhatti et al. in the study of spherical stars [18]. The effects on gravitational lensing due to f(R,T) gravity were discussed by Houndjo in [19]. Furthermore, Baffou et al. [20] employed perturbations on de Sitter space-time and power-law models in order to explore some cosmic viability bounds. Even though the study of the Universe as a whole is a very intriguing current problem, structures within it, such as neutron stars, white dwarfs and black holes, among others, constitute real laboratories in which the most hidden secrets of the Universe can be analyzed piece by piece; once these secrets are revealed, they are expected to provide the answers that will finally put the pieces of this great puzzle in their place. In this direction, many authors have investigated the existence of collapsed structures within the framework of f(R,T) theory, exploring how different models of f(R,T) affect the principal properties of these kinds of objects, and contrasting the reported results with general relativity theory (GR hereinafter) [21][22][23][24][25][26][27][28]. Of course, it is not an easy task to solve the f(R,T) field equations (as in the GR case); for this reason, one needs to prescribe additional information such as a suitable metric potential, an electric field (in the corresponding case), an adequate anisotropy factor or an equation of state (EoS from now on). Concerning the latter, obtaining rather than imposing the EoS leads to a better understanding of how the matter confined in the stellar interior behaves. Moreover, it is possible to determine what type of material constitutes the distribution, for example, ordinary matter such as neutrons or strange matter such as quarks [29] (and references contained therein). Additionally, the EoS provides the relation between the macroscopic observable parameters of the star, such as its mass and radius. Following the same spirit, in this work we study the existence of compact objects, specifically neutron stars (or possibly quark stars, also known as strange stars), in the framework of f(R,T) gravity. The main goal is to build up the full geometrical description of the interior space-time, taking as the departure point the Durgapal-Fuloria [30] metric potential ξ, and to introduce an anisotropic behavior of the matter distribution. The latter is achieved by imposing the isotropic condition at the level of the field equations, considering p_r = p_t, once a viable form of the f(R,T) function is chosen.
On this occasion we have selected the modified gravity model f(R,T) = R + 2χT [1], where R is the Ricci scalar, T = g_μν T^μν the trace of the energy-momentum tensor (as pointed out by Harko et al. [1], the dependence on T may be induced by exotic imperfect fluids or quantum effects) and χ a dimensionless coupling constant. This coupling constant in some sense quantifies the effect of the modifications on the matter sector and also on the geometrical one (it is included in the Durgapal-Fuloria ansatz).
The study of compact objects driven by anisotropic matter distributions must fulfill some general and basic requirements in order to constitute a physically and mathematically admissible model from the astrophysical point of view.
Taking into account such conditions, with the support of graphical analysis we have studied and corroborated the fulfillment of each of them. These requirements have been established throughout history, from the pioneering work by Bowers and Liang [31] to research developed by Herrera, Ponce de León, Cosenza, Di Prisco and Ivanov, to name a few [32][33][34][35][36][37][38][39][40][41][42][43][44][45][46][47]. Although these demands have been widely developed and applied to the study of collapsed structures within the framework of GR [48] (and references contained therein), in general they must also be satisfied in the scenario of modified gravity theories, since these theories must reproduce GR and its results in some limit, results that are known to be very precise and have been verified several times. Of course, the f(R,T) model chosen in this research reproduces the original Durgapal-Fuloria solution developed in the context of GR in the limit χ = 0.0 and α = 1. So, the obtained model has interesting properties that can be compared with those obtained in the GR frame. Considering −0.1 ≤ χ ≤ 0.1 and M = 1.04M⊙ (regarding also the GR limit, i.e. χ = 0.0), we have performed a full study and checked all the necessary and sufficient conditions needed to describe an acceptable compact object. In section II we derive the complete set of equations corresponding to f(R,T) theory and fix the matter content, i.e. the energy-momentum tensor T_μν and the matter Lagrangian L_m. In Sec. III the full model is specified, i.e. the inner space-time geometry and the material content. In Sec. IV the internal geometry is smoothly joined to the exterior Schwarzschild geometry. In Secs. V and VI we analyze the main salient features of the obtained model, studying the physical and mathematical behavior of the space-time geometry and of all the thermodynamic variables, such as the effective radial and tangential pressures and the effective energy density. In Sec. VII, by means of the energy conditions, we check whether the energy-momentum tensor driving the material content is well behaved at every point within the star. In Sec. VIII the causality condition and its implications are discussed; furthermore, we discuss the influence of anisotropic fluid distributions and of the f(R,T) model on the surface redshift. Sec. IX is devoted to the analysis of the Tolman-Oppenheimer-Volkoff equilibrium equation in the f(R,T) context, in order to explore how the whole system is affected by the different forces. We provide in Sec. X the corresponding equation of state, examining its properties and discussing its importance in the study of collapsed configurations such as neutron stars. In Sec. XI the stability of the system is analyzed using Abreu's criterion. Finally, in Sec. XII we summarize the properties obtained in this work and give comparisons between f(R,T) gravity theory and GR.
Let us consider the integral action S for modified f(R,T) gravity theory,

S = (1/16π) ∫ f(R,T) √−g d⁴x + ∫ L_m √−g d⁴x,   (1)

where relativistic geometrized units c = G = 1 are employed. Here f(R,T) is an arbitrary function of the Ricci scalar R and of the trace T of the energy-momentum tensor T^μ_ν, while L_m denotes the matter Lagrangian density. Variation of the action S with respect to the metric tensor g_μν yields the following field equation,

f_R(R,T) R_μν − (1/2) f(R,T) g_μν + (g_μν □ − ∇_μ ∇_ν) f_R(R,T) = 8π T_μν − f_T(R,T) (T_μν + Θ_μν),   (2)

where f_R(R,T) denotes the partial derivative of f(R,T) with respect to R and f_T(R,T) the partial derivative of f(R,T) with respect to T, while R_μν is the Ricci tensor.
The box operator □ ≡ ∂_μ(√−g g^μν ∂_ν)/√−g is the d'Alembert operator, and ∇_μ represents the covariant derivative associated with the Levi-Civita connection of the metric tensor g_μν. The stress-energy tensor T_μν and the tensor Θ_μν are defined as

T_μν = −(2/√−g) δ(√−g L_m)/δg^μν,   (3)

Θ_μν ≡ g^αβ δT_αβ/δg^μν = −2T_μν + g_μν L_m − 2 g^αβ ∂²L_m/(∂g^μν ∂g^αβ).   (4)

Using Eq. (2), the Einstein tensor G_μν can be written in the form of Eq. (5). Equation (6) shows that the covariant derivative of the stress-energy tensor T_μν does not vanish in modified f(R,T) gravity, as it does in other theories of gravity. Throughout our study we consider the matter Lagrangian L_m = −P, where P = (1/3)(p_r + 2p_t) [49]. Then, using Eq. (4), we obtain Θ_μν = −2T_μν − P g_μν.
In order to obtain the effective stress-energy tensor for the modified theory of gravity, we consider the simplest linear functional form of f(R,T) (proposed by Harko et al. [1]),

f(R,T) = R + 2χT,   (7)

where χ is a coupling constant. This f(R,T) function has been widely used to develop different f(R,T) gravity compact objects. By inserting f(R,T) from Eq. (7) into Eq. (5) we obtain

G_μν = 8π T_μν + χ T g_μν − 2χ (T_μν + Θ_μν).   (8)

Here we consider the energy-momentum tensor T_μν corresponding to an anisotropic fluid distribution, defined as in Eq. (9), where u_ν is the four-velocity, satisfying u_μ u^μ = −1 and u^μ ∇_μ u^ν = 0, ρ is the matter density, and p_r and p_t are the radial and tangential pressures, respectively. The modified energy-momentum tensor T̃_μν can then be written as in Eq. (10). By inserting f(R,T) = R + 2χT into Eq. (6) we get Eq. (11), and by using Eqs. (10) and (11) we can write Eq. (12). Let us consider the space-time describing the interior of the object to be static and spherically symmetric, written in the form

ds² = −e^ν(r) dt² + ξ⁻¹ dr² + r²(dθ² + sin²θ dφ²).   (13)

Since the modified energy-momentum tensor T̃_μν contains the contribution of Θ_μν, it clearly generates anisotropic pressures within the effective matter distribution. Hence, using Eqs. (8) and (10) together with the line element (13), the field equations for the spherically symmetric anisotropic stellar system can be written as

(1 − ξ)/r² − ξ′/r = 8π ρ̃,   (14)
ξ (ν′/r + 1/r²) − 1/r² = 8π p̃_r,   (15)
(ξ/4)(2ν″ + ν′² + 2ν′/r) + (ξ′/4)(ν′ + 2/r) = 8π p̃_t.   (16)

The primes denote differentiation with respect to the radial coordinate r. Using the above equations, the effective quantities, i.e. the effective density ρ̃, the effective radial pressure p̃_r and the effective tangential pressure p̃_t, can be written in terms of the anisotropic pressures p_r and p_t and the energy density ρ of the anisotropic matter distribution, Eqs. (17)-(19). Equation (14) defines the gravitational mass contained inside a sphere of radius r; using Eqs. (14) and (17) one obtains

m̃(r) = 4π ∫₀^r ρ̃(x) x² dx,  i.e.  ξ = 1 − 2m̃(r)/r.   (20)

For this modified anisotropic matter distribution it is also necessary that the anisotropic fluid satisfies an additional conservation equation, Eq. (21). This conservation equation is also known as the modified Tolman-Oppenheimer-Volkoff (TOV) equation for the modified theory of gravity; note that it reduces to the GR conservation equation when χ = 0.0. On the other hand, the energy density ρ, the radial pressure p_r and the tangential pressure p_t for the anisotropic stellar model in modified f(R,T) gravity can be written as in Eqs. (22)-(24).

A. Solution of the field equations (14)-(16) in f(R,T) gravity theory

To solve Eqs. (14), (15) and (16), we use the isotropy condition in Eqs. (15) and (16), which leads to a second-order differential equation, Eq. (26). This pressure isotropy equation in f(R,T) gravity is the same as in GR if χ = 0. Since Eq. (26) contains three unknowns, ν, ξ and ∆ = p_t − p_r, to solve it we choose a modified ansatz for the gravitational potential ξ, proposed by Durgapal-Fuloria [30], of the form given in Eq. (27). The choice of this metric potential (27) is well motivated, because it is free from physical and mathematical singularities everywhere within the compact structure.
Furthermore, it yields a finite, well-defined and outward-decreasing energy density at all points inside the star. The inclusion of χ in the potential ξ affects the modified energy density ρ̃. It can be noted that if χ = 0 and p_t = p_r, then ν = 4 ln(1 + A r²) satisfies the isotropy Eq. (26). Keeping this point in mind, we construct the expression for ∆ by using the isotropy condition (26) and the potential ξ, obtaining the form given in Eq. (28). It is observed from Eq. (28) that the anisotropy ∆ is zero at the centre, so that p_t = p_r there; the other physical features of ∆ are discussed in Sec. V. Now, by substituting the gravitational potential ξ and the anisotropy factor ∆ from Eqs. (27) and (28) into Eq. (26), and by using the transformation ν = 2 ln Ψ and x = A r², we obtain Eq. (29). It can be noted that Ψ = (1 + α x)² satisfies the differential Eq. (29), which means that this value of Ψ is a particular solution of Eq. (29). The most general solution of Eq. (29) is then given (using the change of dependent variable method) by Eq. (30), where C and D are arbitrary constants of integration. By plugging ξ and ν = 2 ln Ψ from Eqs. (27) and (30) into Eqs. (14)-(16), we find the effective energy density ρ̃, the effective radial pressure p̃_r and the effective tangential pressure p̃_t in f(R,T) gravity theory.
IV. MATCHING CONDITION
Since all compact structures are bounded objects, to ensure a well-behaved stellar interior, i.e. finite material content and a smooth geometry at the surface Σ ≡ r = r_s (where r_s is the radius of the sphere) of the configuration, one needs to join the inner space-time M⁻ with the corresponding outer space-time M⁺. In this case we are dealing with an uncharged anisotropic fluid sphere described by (27) and (30). Moreover, due to the choice (7) of the f(R,T) function, the appropriate exterior space-time M⁺ corresponds to the Schwarzschild geometry,

ds² = −(1 − 2M/r) dt² + (1 − 2M/r)⁻¹ dr² + r²(dθ² + sin²θ dφ²),

this is so because the modification introduced in the matter sector, represented by T, vanishes beyond Σ. So, to join the interior geometry with the Schwarzschild outer space-time one must impose the so-called first and second fundamental forms across Σ. The first fundamental form refers to the continuity of the intrinsic metric g_μν induced by both M⁻ and M⁺ on Σ. Explicitly it reads ds²_Σ = 0 ⇒ e^{λ⁻(r_s)} = e^{λ⁺(r_s)} and e^{ν⁻(r_s)} = e^{ν⁺(r_s)}.
So, for the present model these conditions are given explicitly by Eqs. (36) and (37). The second fundamental form is related to the continuity of the extrinsic curvature K_μν induced by M⁻ and M⁺ on Σ. The continuity of the K_rr component across Σ yields p̃_r(r_s) = 0, Eq. (38). This requirement determines the size of the object, i.e. the radius r_s, which means that the material content is confined within the region 0 ≤ r ≤ r_s. The continuity of the remaining components K_θθ and K_φφ leads to Eq. (39), the total effective mass contained in the sphere, which is expressed by Eq. (20). After solving Eqs. (36)-(38) we obtain the parameters C, D and A, given by Eqs. (40)-(42); the expressions for the coefficients used there are given in Appendix A. Then Eqs. (40)-(42), obtained from the Israel-Darmois [50,51] junction conditions (first and second fundamental forms), provide the complete set of constant parameters that characterize the model.
V. GEOMETRIC CHARACTERIZATION OF THE MODEL
The studied model is described by a spherically symmetric static manifold whose radial and temporal metric tensor components are given by Eqs. (27) and (30), respectively. These metric potentials are free from physical and mathematical singularities throughout the compact object, a fact that guarantees a well-behaved space-time region. Once the parameters A, C and D have been determined, the behavior of the metric potentials is studied through graphical analysis. As shown in Fig. 1, both ξ⁻¹ and e^ν are regular, increasing functions of the radial coordinate everywhere inside the star, for all values of the parameter χ. As usual, ξ⁻¹ takes the value 1 at r = 0, while e^ν > 0 at the same point. The choice of the Durgapal-Fuloria type potential ξ has well-founded physical reasons: it is clearly free from physical and mathematical discontinuities. The extension carried out in this work to the context of f(R,T) gravity maintains the same spirit; however, we have made a small modification in order to include the effects of this modified gravitational theory in the full obtained model.
VI. THERMODYNAMIC OBSERVABLES
In this section we study and analyze the behaviour of the main salient features of the model, i.e. the matter density ρ̃ and the radial p̃_r and tangential p̃_t pressures. We also examine the role played by the effective anisotropy factor ∆̃ inside the stellar structure. It is well known that the main physical parameters of any compact object describing a stellar interior should be free from physical and mathematical drawbacks. Furthermore, they should be monotonically decreasing functions of the radial coordinate towards the surface, with their maximum values attained at the centre of the configuration. These general requirements ensure, in principle, a well-behaved model which can serve to describe natural objects such as white dwarfs, neutron stars and even quark stars. Moreover, in the study of compact structures there are other ingredients as essential as the aforementioned ones, which provide a more realistic view of the behaviour of celestial bodies, for example the inclusion of anisotropies in the material content contained in the fluid sphere. Anisotropy in this context means that the pressure in the radial direction differs from the pressure in the angular directions, i.e. p_r ≠ p_t. The effective anisotropy factor is thus defined by ∆̃ = p̃_t − p̃_r. The inclusion of anisotropies within the stellar content introduces improvements in the stability and balance mechanisms and increases the value of the surface redshift. However, regarding the equilibrium mechanism, the contribution depends on the sign, i.e. whether it is positive, ∆̃ > 0 ⇒ p̃_t > p̃_r, or negative, ∆̃ < 0 ⇒ p̃_t < p̃_r. In the first case the system experiences a repulsive force that helps to counteract the gravitational gradient; in the second case the force due to the anisotropy helps the gravitational force compress the object. If the pressure exerted by the nuclear force fails to overcome the gravitational attraction, the structure will eventually continue to collapse down to its Schwarzschild radius; at this point the object forms a black hole with many unusual properties. This means that the presence of an attractive force due to anisotropies damages the balance and stability of the configuration. It is clear that the collapse of the structure towards a singularity depends on the pressure gradient (hydrostatic force) exerted by the matter inside the star. Figure 2 shows the behaviour of all the thermodynamic observables and of the anisotropy factor. From the upper panels we can see the behaviour of the effective radial and tangential pressures (left and right, respectively). These physical quantities have their maximum values at the centre of the configuration and are monotonically decreasing functions of the radial coordinate. It is observed that for negative values of χ the maximum value is greater than the values obtained considering 0.0 (GR limit) and 0.1. With respect to the effective density (lower right panel), it attains its maximum value at the centre for χ = −0.1, is a monotonically decreasing function towards the surface and is positive everywhere within the star. All the thermodynamic observables at the centre of the star thus increase as χ moves from −0.1 to 0.1. The behaviour of the effective anisotropy factor ∆̃ (lower left panel) depends strongly on the value taken by χ. For χ = 0.1 it is positive at all points within the star, vanishing at the centre and increasing with increasing radius.
As explained above, this behaviour introduces a repulsive (outward) force in the system. On the other hand, for χ = −0.1 the system is subject to an attractive (inward) force; the effect of this attractive force on the system is analyzed in the dynamical equilibrium section IX. Finally, the GR limit corresponding to χ = 0.0 shows a positive anisotropy factor throughout the object, attaining its maximum value within the configuration. Furthermore, comparing GR (χ = 0.0) with χ < 0, it is observed that in f(R,T) gravity the object is more compact than in GR; the corresponding values are collected in Table I. In Einstein's general relativity the anisotropy is positive throughout and attains its maximum value within the stellar compact object. We also note that the configuration is more compact in f(R,T) gravity as compared to Einstein's general relativity for χ ≥ 0.
VII. ENERGY CONDITIONS
It is well known that the matter distribution that makes up celestial bodies can be composed of a large number of material fields. Even knowing the components that describe this material content inside the compact structure, it can be very complex to describe exactly the form of the energy-momentum tensor; in fact, one only has some ideas on the behaviour of matter under extreme conditions of density and pressure. On the other hand, there are certain inequalities which are physically reasonable to assume for the energy-momentum tensor. So, in this section we verify these inequalities at all points in the interior of the star. In the literature these inequalities are known as energy conditions: the null energy condition (NEC), dominant energy condition (DEC), strong energy condition (SEC) and weak energy condition (WEC). Explicitly, they read

NEC: ρ̃ + p̃_i ≥ 0,
WEC: ρ̃ ≥ 0 and ρ̃ + p̃_i ≥ 0,
SEC: ρ̃ + p̃_i ≥ 0 and ρ̃ + p̃_r + 2p̃_t ≥ 0,
DEC: ρ̃ ≥ |p̃_i|,

where i ≡ (radial r, transverse t); in covariant form these conditions are statements about T̃_μν contracted with an arbitrary timelike vector l^μ or null vector t^μ. To verify a well-defined energy-momentum tensor everywhere within the compact configuration, the above inequalities must be satisfied simultaneously. We check the energy conditions with the help of a graphical representation: in Fig. 3 we have plotted the left-hand sides of the above inequalities, which verifies that all the energy conditions are satisfied in the stellar interior.
Beyond capturing the idea that the energy must be positive definite, these energy conditions have a clear physical and geometric interpretation [52]. From the physical point of view, NEC means that an observer traversing a null curve will measure the ambient (ordinary) energy density to be positive. WEC implies that the energy density measured by an observer crossing a timelike curve is never negative. SEC states that the trace of the tidal tensor measured by the corresponding observers is always non-negative, and finally DEC expresses that mass-energy can never be observed to flow faster than light. Furthermore, violations of the energy conditions have sometimes been presented as being produced only by unphysical stress-energy tensors. Using SEC as a fundamental guide would be extremely idealistic; indeed, SEC is violated in many cases, e.g. minimally coupled scalar field and curvature-coupled scalar field theories, and this may or may not imply the violation of the more basic energy conditions, i.e. NEC and WEC. It is worth mentioning that both SEC and DEC imply NEC, and DEC also implies WEC. Additionally, the fulfillment of the SEC and DEC conditions imposes strong restrictions on the maximum plausible bound of the surface redshift Z_s of the compact structure when there are anisotropies in the stellar interior. These implications are discussed in more detail in the next section. (Fig. 3 legend: long-dashed-dotted lines for ρ̃ + p̃_t; solid lines for ρ̃ + p̃_r + 2p̃_t.)
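As a side note on how such a check can be automated, the sketch below evaluates the NEC, WEC, SEC and DEC inequalities point-wise on sampled radial profiles; the profiles used here are hypothetical placeholders rather than the closed-form expressions of this model, so the snippet only illustrates the bookkeeping behind Fig. 3.

import numpy as np

def check_energy_conditions(rho, p_r, p_t):
    """Check NEC, WEC, SEC and DEC point-wise on sampled radial profiles.

    rho, p_r, p_t: 1-D arrays of effective density and pressures
    (geometrized units). Returns True for each condition that holds
    at every sampled radius.
    """
    nec = (rho + p_r >= 0) & (rho + p_t >= 0)
    wec = nec & (rho >= 0)
    sec = nec & (rho + p_r + 2.0 * p_t >= 0)
    dec = (rho >= np.abs(p_r)) & (rho >= np.abs(p_t))
    return {"NEC": nec.all(), "WEC": wec.all(),
            "SEC": sec.all(), "DEC": dec.all()}

# Hypothetical monotonically decreasing profiles, for demonstration only.
r = np.linspace(1e-3, 9.0, 200)
rho = 1e-3 * (1.0 - 0.5 * (r / 9.0) ** 2)
p_r = 2e-4 * (1.0 - (r / 9.0) ** 2)
p_t = p_r + 5e-5 * (r / 9.0) ** 2   # positive anisotropy
print(check_energy_conditions(rho, p_r, p_t))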
VIII. CAUSALITY AND SURFACE REDSHIFT
Among the modifications introduced by the presence of anisotropies in the stellar interior are the propagation velocities of the pressure waves along the principal directions of the sphere, that is, the radial and transverse directions, and the modification of the upper bound of the surface redshift Z_s. First of all, the subliminal speeds corresponding to each direction are defined by v_r² = dp_r/dρ and v_t² = dp_t/dρ.
In order to obtain a physically admissible model, both speeds v_r and v_t must be bounded by the speed of light (c = 1 in relativistic geometrized units). This tells us that the pressure (sound) waves in the fluid do not propagate at arbitrary speeds, and is known as the causality condition. This condition is peremptory regardless of whether the material content of the star is isotropic or anisotropic; the only difference is that in the anisotropic case there is propagation along the two principal directions of the sphere, i.e. the radial and transverse directions. Moreover, in the isotropic case the subliminal sound speed should be a decreasing function; this is not necessarily true when there is anisotropy, since the speed behaviour depends on the rigidity of the material. So, the causality condition reads 0 ≤ v_r² ≤ 1 and 0 ≤ v_t² ≤ 1 (Eq. 48). Causality condition (48) has strong implications for the behaviour of the matter distribution within the object. One of them is related to the energy-momentum tensor that describes the material content: if causality is preserved, the energy-momentum tensor is well defined. Secondly, imposing this important condition and once the radial pressure p̃_r is obtained, a relation between it and the density ρ̃ can be established, i.e. an equation of state p̃_r = p̃_r(ρ̃). This last statement is very important because, in searching for solutions of the Einstein field equations, the equation of state is usually imposed; however, this often results in the violation of causality. Additionally, the fact of having different speeds in the two directions influences the stability of the system (this subject is discussed in the stability section). From Fig. 4 it can be appreciated that v_r² and v_t² satisfy the causality condition for all χ. For χ < 0 both subliminal sound speeds are decreasing in nature; nevertheless, v_t² is greater than v_r² at all points in the star. In the case χ > 0 the radial speed is always greater than the tangential one, whilst in the GR scenario the radial sound speed is greater than the tangential one for 0 < r < 8.802 and the behaviour reverses for r > 8.802. The surface redshift Z_s, a significant observational parameter relating the mass m̃(r_s) = M and the radius r_s of the star, is also affected when anisotropies are introduced into the system, regardless of the mechanism that originated them. For isotropic fluid spheres its maximum value is Z_s = 2, a value determined by the Buchdahl constraint on the compactness factor, u = 2M/r_s ≤ 8/9 [53]. The explicit relation between Z_s, M and r_s is

Z_s = (1 − 2M/r_s)^(−1/2) − 1.

The effect of anisotropies on Z_s has a long history. For example, Bowers and Liang [31] considered a hypothetical model with constant density ρ = ρ_0 (incompressible fluid) and a specific form of the anisotropy factor ∆. They concluded that when the anisotropy factor is null, i.e. ∆ = 0 ⇒ p_r = p_t, the maximum value of the surface redshift is Z_s = 4.77, and that in the case of a positive anisotropy factor, ∆ > 0 ⇒ p_t > p_r, this value can be exceeded (and conversely if ∆ < 0); moreover, if the anisotropy factor is extremely large, then the surface redshift will be too. Furthermore, Ivanov showed that for realistic anisotropic star models obeying SEC the maximum surface redshift is Z_s = 3.842 (a value corresponding to a model without cosmological constant), while for models satisfying DEC it is Z_s = 5.211 [47]. These values correspond to mass-radius relations of 0.957 and 0.974, respectively.
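A numerical illustration of the causality check and of the surface redshift relation: the sketch below estimates v_r² and v_t² by finite differences of tabulated pressure-density profiles and evaluates Z_s = (1 − 2M/r_s)^(−1/2) − 1 subject to the Buchdahl bound. The input profiles and the chosen radius are hypothetical, not the model's actual output.

import numpy as np

def sound_speeds(rho, p_r, p_t):
    """Subliminal sound speeds via finite differences of p(rho)."""
    v2_r = np.gradient(p_r, rho)
    v2_t = np.gradient(p_t, rho)
    return v2_r, v2_t

def surface_redshift(M, r_s):
    """Z_s = (1 - 2M/r_s)^(-1/2) - 1 in geometrized units (G = c = 1)."""
    u = 2.0 * M / r_s
    if u >= 8.0 / 9.0:
        raise ValueError("Compactness violates the Buchdahl bound u < 8/9.")
    return 1.0 / np.sqrt(1.0 - u) - 1.0

# Hypothetical profiles in geometrized units, for demonstration only.
r = np.linspace(1e-3, 9.0, 300)
rho = 1e-3 * (1.0 - 0.4 * (r / 9.0) ** 2)
p_r = 2e-4 * (1.0 - (r / 9.0) ** 2)
p_t = p_r + 4e-5 * (r / 9.0) ** 2
v2_r, v2_t = sound_speeds(rho, p_r, p_t)
print("causality holds:",
      bool(np.all((v2_r > 0) & (v2_r < 1) & (v2_t > 0) & (v2_t < 1))))
# Example: M = 1.04 Msun expressed in km (1 Msun ~ 1.4766 km) and an assumed radius of 9.0 km.
print("Z_s =", surface_redshift(1.04 * 1.4766, 9.0))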
In Fig. 5 the surface redshift Z_s shows a monotonically increasing behaviour towards the boundary, with its maximum value attained at the boundary of the object. Besides, for negative values of χ the surface redshift takes larger values in comparison with χ ≥ 0.

IX. EQUILIBRIUM UNDER THE TOLMAN-OPPENHEIMER-VOLKOFF EQUATION

In this section we discuss the dynamical equilibrium condition of the stellar model by using the Tolman-Oppenheimer-Volkoff (TOV) [54,55] equation in the framework of f(R,T) gravity. This modified TOV equation for f(R,T) theory, already given as Eq. (21), reads

−dp_r/dr − (ν′/2)(ρ + p_r) + (2/r)(p_t − p_r) + [χ/(6(4π + χ))](3 dρ/dr − dp_r/dr − 2 dp_t/dr) = 0,   (50)

where we denote the first term −dp_r/dr = F_h, the second term −(ν′/2)(ρ + p_r) = F_g, the third term (2/r)(p_t − p_r) = F_a and the fourth term [χ/(6(4π + χ))](3 dρ/dr − dp_r/dr − 2 dp_t/dr) = F_χ. These terms describe the hydrostatic force (F_h), the gravitational force (F_g), the anisotropic force (F_a) and the coupling force (F_χ), respectively. In the case of isotropic fluid spheres (p_r = p_t) and taking χ = 0.0 (GR limit), this equation drives the equilibrium of relativistic compact structures such as neutron stars, white dwarfs, etc. In considering the inclusion of
anisotropies and the effects of relativistic modified gravity theories, this equation still drives the balance of the system; however, its form changes slightly, as shown in Eq. (50). As we can see, this equation relates the effective thermodynamic quantities to the metric potential e^ν. In order to keep the system in equilibrium and prevent it from collapsing below its Schwarzschild radius, it is necessary to have a relationship between the thermodynamic variables p̃_r and ρ̃, that is, an equation of state (EoS) p̃_r = p̃_r(ρ̃) linking them. Nonetheless, in the present case, where there are contributions from the anisotropies and from the theory considered, under certain conditions this may not be enough to withstand the gravitational attraction. Thus the structure equations (20) and (50) imply that there is a maximum mass that a star can have. As pointed out before, the present model is subject to four forces; their impact on the system is shown in Fig. 6. We observe that the system is in equilibrium: when χ > 0 the gravitational gradient is counterbalanced by the hydrostatic F_h, anisotropic F_a and coupling F_χ forces (although the contribution of the latter is very small). On the other hand, when χ < 0 the anisotropic force takes negative values, which means that the system is subject to an attractive force. This can damage the balance of the system if the hydrostatic gradient is not strong enough to counteract the combined pull of the gravitational attraction and the anisotropic force, in which case the system can collapse towards a singularity. However, as noted, the negative anisotropic force is very small in magnitude and the pressure gradient overcomes the action of F_g + F_a. Another interesting point is that for positive values of χ the hydrostatic gradient is lower in f(R,T) theory than the corresponding one in GR (the inverse situation occurs for χ < 0); the same happens with the gravitational attraction.
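For completeness, the force decomposition of Eq. (50) can be evaluated numerically from tabulated profiles, as sketched below; the profiles are hypothetical, so the residual sum is only meant to show how the balance F_h + F_g + F_a + F_χ = 0 would be checked, not to reproduce Fig. 6.

import numpy as np

def tov_forces(r, nu, rho, p_r, p_t, chi):
    """Evaluate the four force terms of the modified TOV balance of Eq. (50)."""
    dnu = np.gradient(nu, r)
    F_h = -np.gradient(p_r, r)                     # hydrostatic gradient
    F_g = -0.5 * dnu * (rho + p_r)                 # gravitational force
    F_a = 2.0 * (p_t - p_r) / r                    # anisotropic force
    F_chi = chi / (6.0 * (4.0 * np.pi + chi)) * (
        3.0 * np.gradient(rho, r)
        - np.gradient(p_r, r)
        - 2.0 * np.gradient(p_t, r))               # coupling force
    return F_h, F_g, F_a, F_chi

# Hypothetical profiles, for demonstration only.
r = np.linspace(1e-3, 9.0, 300)
nu = 0.2 * (r / 9.0) ** 2
rho = 1e-3 * (1.0 - 0.4 * (r / 9.0) ** 2)
p_r = 2e-4 * (1.0 - (r / 9.0) ** 2)
p_t = p_r + 4e-5 * (r / 9.0) ** 2
F_h, F_g, F_a, F_chi = tov_forces(r, nu, rho, p_r, p_t, chi=0.1)
print("max |F_h + F_g + F_a + F_chi| =", np.max(np.abs(F_h + F_g + F_a + F_chi)))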
X. THE EQUATION OF STATE (EOS)
In the study of compact structures such as neutron stars it is very important to know how the principal thermodynamic variables are connected. This relation, known as the equation of state (EoS), establishes a relationship between the effective radial pressure p̃_r and the effective energy density ρ̃. The microphysics, as described by the EoS, is linked to the macroscopic properties of the neutron star, in particular its mass and radius, via the Tolman-Oppenheimer-Volkoff equations, which provide the direct relation needed to use astrophysical observations to constrain nuclear physics at very high densities. However, the composition of a neutron star chiefly depends on the nature of the strong interactions. Depending on the type of interaction, the models can be grouped into three broad categories: non-relativistic potential models, relativistic field-theoretical models, and relativistic Dirac-Brueckner-Hartree-Fock models. In addition, in each of these perspectives the presence of softening components, such as hyperons, Bose condensates or quark matter, can be incorporated [56,57]. On the other hand, one can classify the EoS into two classes: first, normal equations of state, whose pressure vanishes as the density tends to zero; second, self-bound equations of state, whose pressure vanishes at a significant finite density. Regarding self-bound EoS, the most famous example is the MIT bag model EoS. It was pointed out by Witten [58] that strange quark matter is the ultimate ground state of matter. This leads to the fact that the internal and external vacuum densities of the hadrons are completely different and that the vacuum pressure of the bag wall balances the pressure of the quarks, stabilizing the whole system [59,60]. So, the MIT bag model EoS reads

p_r = (1/3)(ρ − 4B),

where B is the so-called bag constant and represents the difference between the energy density of the perturbative and non-perturbative QCD vacuum. In this model the interactions of quarks and gluons are sufficiently small, quark masses are neglected, and the quarks are supposed to be confined to the bag volume. Concerning normal matter, the EoS describes an interacting nucleon gas above a transition density of 1/3 ρ_s to 1/2 ρ_s (with ρ_s the surface density). Below this density, the ground state of matter consists of heavy nuclei in equilibrium with a neutron-rich, low-density gas of nucleons; nonetheless, the equilibrium of the system exists below the transition density [61,62]. So, in order to explain the structural properties of compact star models at high densities, several authors have proposed that the EoS P = P(ρ) should be well approximated by a linear function of the energy density ρ [63][64][65]. Some authors have also expressed more convincing approximate forms of the EoS P = P(ρ) as a linear function of the energy density ρ [29,66]. Furthermore, a linear relation between the pressure P and the energy density ρ ensures the preservation of the causality condition. For the present model the EoS is not a linear relation between pressure and energy density: the functional relation is more complicated. In terms of the surface ρ̃_s and central ρ̃_c effective energy densities, the effective radial pressure p̃_r takes the form given in Eq. (52); the expressions for the coefficients used there are given in Appendix B. In Fig. 7 we can graphically appreciate the shape of the EoS of the model under study.
In spite of the complex relation given by Eq. (52), the behaviour that appears from the surface to the core of the object is linear (the curve grows from regions of low to high densities). This curve can be described approximately by the following linear interpolation (keeping only the first order in ρ̃_s),

p̃_r ≈ α (ρ̃ − ρ̃_s),   (54)

where α is a non-negative constant. It is clear from Eq. (54) that when r = r_s then p̃_r = 0; this is so because ρ̃(r_s) = ρ̃_s, i.e. the energy density at zero pressure (the surface energy density). This approximately linear trend can also be appreciated in Fig. 7.
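The linear trend of Eq. (54) can be extracted from sampled (ρ̃, p̃_r) pairs by a least-squares fit, as sketched below on synthetic data standing in for the model's output.

import numpy as np

def fit_linear_eos(rho, p_r):
    """Fit p_r ~ alpha * (rho - rho_s), with rho_s the density at zero pressure."""
    rho_s = rho[np.argmin(np.abs(p_r))]          # surface density (where p_r ~ 0)
    alpha = np.sum(p_r * (rho - rho_s)) / np.sum((rho - rho_s) ** 2)
    return alpha, rho_s

# Synthetic (rho, p_r) samples that mimic a nearly linear EoS.
rho = np.linspace(6e-4, 1e-3, 50)
p_r = 0.3 * (rho - 6e-4) + 1e-6 * np.sin(50 * rho)   # small deviation from linearity
alpha, rho_s = fit_linear_eos(rho, p_r)
print(f"alpha ~ {alpha:.3f}, surface density ~ {rho_s:.2e}")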
XI. STABILITY
In this section we analyze the stability of the model by means of Abreu's criterion [67]. Basically, the method consists in contrasting the speeds of the pressure waves along the two principal directions of the spherically symmetric star, the subliminal radial sound speed and the subliminal tangential sound speed; based on their values at particular points in the object, one can conclude whether the system is stable or unstable against cracking. Put forward by Herrera [39], cracking involves the possibility of the fluid sphere breaking up in view of the appearance of total radial forces of different signs, and therefore in different directions, at different points within the configuration. It should be emphasized that this effect has never been observed; however, under appropriate physical assumptions it is a likely scenario. The cracking process is a mechanism to study instability when anisotropic matter distributions are present; nevertheless, it can be characterized most easily through the subliminal speeds of the pressure waves. Following [67] we have

−1 ≤ v_t² − v_r² ≤ 0  potentially stable,
0 < v_t² − v_r² ≤ 1  potentially unstable.

Moreover, from the causality condition one has 0 ≤ v_r² ≤ 1 and 0 ≤ v_t² ≤ 1, hence |v_t² − v_r²| ≤ 1. Therefore, the main idea behind Abreu's criterion is that if the subliminal tangential speed v_t² is larger than the subliminal radial speed v_r², then cracking instabilities could potentially occur in the object, rendering it an unstable configuration. So, with the help of graphical analysis one can determine the potentially stable/unstable regions within the star and then conclude whether the system is stable or not. From Fig. 8 (upper plot) it is observed that for χ > 0 all regions are stable, that is, the whole system is stable; for χ < 0 the behaviour is completely unstable; and for χ = 0 the region 0 < r < 8.802 is stable while the region r > 8.802 is unstable. There is therefore cracking (a change in the sign of v_t² − v_r²) in GR for the present model, since for r > 8.802 the tangential velocity of the pressure waves is greater than the radial one. The instability of the system for negative values of χ could be anticipated from Fig. 4, since v_t² is greater than v_r² at every point inside the star, in contrast with positive values of χ, for which v_t² is always less than v_r² everywhere within the object. Although |v_t² − v_r²| lies between 0 and 1 (lower panel of Fig. 8), this does not mean that the system is stable; it does, however, warn us of the presence of cracking in the sphere, as in the GR case. As seen in the lower panel of Fig. 8, the red curve (GR) decreases up to a certain value of the radial coordinate r and then suddenly changes its behaviour to a growing one. Therefore, for the specific value χ = 0.0, the system has both stable and unstable regions.
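Programmatically, Abreu's criterion amounts to inspecting the sign of v_t² − v_r² along the radius and flagging sign changes as potential cracking points; a minimal sketch on hypothetical sound-speed profiles follows.

import numpy as np

def cracking_radii(r, v2_r, v2_t):
    """Return radii where v_t^2 - v_r^2 changes sign (potential cracking points)."""
    delta = v2_t - v2_r
    sign_change = np.where(np.diff(np.sign(delta)) != 0)[0]
    return r[sign_change]

# Hypothetical profiles in which v_t^2 - v_r^2 changes sign inside the star.
r = np.linspace(0.1, 9.0, 500)
v2_r = 0.45 - 0.01 * (r / 9.0)
v2_t = 0.40 + 0.06 * (r / 9.0) ** 2
print("potential cracking radii:", cracking_radii(r, v2_r, v2_t))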
XII. SUMMARY AND OUTLOOK
In the present paper we have obtained an analytic relativistic anisotropic spherical model in the framework of f(R,T) gravity theory. This was achieved through the imposition of three ingredients, which form the fundamental pillars of the obtained model. The first of these was to consider a simple, notable and viable modified gravity model given by f(R,T) = R + 2χT [1], where R is the usual Ricci scalar, T the trace of the energy-momentum tensor and χ a coupling constant; in addition, we have taken the matter Lagrangian density to be L_m = −(1/3)(p_r + 2p_t). The second ingredient is the imposition of the metric potential ξ, which corresponds to a modification of the original potential proposed by Durgapal-Fuloria [30] in the context of GR; the modification of ξ consists in the inclusion of χ (see Eq. (27)). Finally, we have imposed the isotropic condition under the restriction p_r = p_t in order to obtain the effective anisotropy factor ∆̃. With these ingredients in hand we arrive at the differential equation given by Eq. (29), from which the e^ν metric potential is obtained; once this differential equation is solved, the inner geometry of the whole system is completely specified. The choice of ξ is well motivated because it is free from physical and mathematical singularities everywhere inside the compact object; moreover, the complete internal manifold (27)-(30) is well behaved at all points within the star. After that we proceeded to obtain the constant parameters of the solution. For this purpose we have matched the model to the external Schwarzschild solution on the surface Σ of the compact structure; thus, the first and second fundamental forms provide the corresponding parameter space that characterizes the model. The junction between the collapsed configuration and the Schwarzschild space-time is possible because the modifications introduced by T in the matter sector remain finite and bounded by the object, which means that beyond Σ we have an empty space-time. It is of our interest to check the outcomes and predictions of one of the extended gravity theories, i.e. f(R,T) theory, regarding the existence, stability and equilibrium of spherical stars. Therefore, we have explored the behaviour of the main salient features, such as the effective radial and tangential pressures, the effective energy density and the effective anisotropy factor; the behaviour of all these quantities is influenced by χ. In particular, the analysis of v_t² − v_r² shows that cracking does not appear within the anisotropic matter distribution for χ > 0, while it does appear in the GR case, so the anisotropic compact star model is more stable in f(R,T) gravity (with χ > 0) than in the Einstein scenario. Throughout the study we have taken −0.1 ≤ χ ≤ 0.1, M = 1.04M⊙ and α = 1.12.
Democratizing the access to college education: Brazilian race/color classification in affirmative action's debate
One of the principal aspects of developing affirmative action in Brazil is how to define the target population, which includes uses and perceptions of ethnic/racial/color categories. The present paper has the main objective of analyzing how IBGE's race/color classifications contribute to the design of affirmative action in Brazil, using categories historically constructed with the endorsement of official statistics. The color question in the Brazilian Census, and the related experiences, including racial designation as an open response, have been studied since 1872, and it is noted that there are two dimensions to be observed in the affirmative action debate: one structural and another cultural, involving race/color classifications in Brazil. Statistics are fundamental to building the best evidence for shaping public policy. On the other hand, we must recognize ethnic and racial identities as cultural phenomena that are susceptible to change, which drives us to continue the discussion, trying to capture the meaning of these transformations. The affirmative action debate may not disqualify either of these approaches to knowledge about race relations in Brazil.
One of the principal aspects of developing affirmative action in Brazil is how to define the target population. Many authors recognize this (ROSEMBERG, 2004; SANTANA, 2010; RATTS; CIRQUEIRA, 2010; GONÇALVES, 2014), which includes uses and perceptions of ethnic/racial/color categories. The present paper has the main objective of analyzing how IBGE's race/color classifications contribute to the design of affirmative action in Brazil, using categories historically constructed with the endorsement of official statistics.
Since the 1940s, when the first census conducted by IBGE occurred, population data on race/color have been used by social scientists in the analysis of race relations in Brazilian society (AZEVEDO, 1955; PINTO, 1952; FERNANDES, 1978).
These data and the corresponding information analysis were useful to subsidize the discussion of concepts such as "racial democracy" and patterns of miscegenation (interracial marriage), which contributed to developing studies on racial inequalities in Brazilian society. These studies included inequalities in schooling, income distribution, access to the labor market and social security. More recently, these same data have been the substrate for the debate on affirmative action for native Brazilians and Black people in Brazil. As expected, when data leave the abstract realm of research to steer public policies, there is a resurgence of the debate about these surveys, the kind of information/data they produce and which categories they contribute to reinforce. These discussions are not entirely new and have been present at all times since prior surveys (OLIVEIRA, 2003; BELTRÃO; TEIXEIRA, 2009). For this reason, it is important to recover the past debate in order to understand the present and to analyze prospects for improvements that allow us to capture the phenomenon in the best possible way.
Different dimensions of the racial question as an object of statistical research
Studying the race/color questions in the Brazilian Censuses conducted between 1872 and 1960, Costa (1974, p. 100) came to the following conclusions: 1. there has not been a common criterion across the censuses, which has undermined the comparability of the data; 2. both self-classification and the classification made by an interviewer were based on more than one criterion; 3. there was a consensus about the importance of collecting this kind of data to understand the contribution that different ethnic groups give to the formation of the current population. Concerned about the quality of the data during the preparation of the 1970 Census, IBGE consulted experts on the race/color issue, and their opinions pointed to the need to take into account three levels: 1. classification criteria; 2. race/color terms used in everyday discourse by the population; and 3. the multiple relationships between these two levels.
Therefore, the research question would be how people classify themselves and, in doing so, which terms they normally use (COSTA, 1974).
As is known, the consultation led to the withdrawal of the topic in the 1970 Census, with the proposal that the Office would implement further studies in order to answer the question - done for the first time in 1976 with a household sample survey.
The working hypotheses were: 1. classification by race/color is a proxy for an ethnic/racial identification; 2. research conducted so far indicated that the terms, primarily those of color, also included several other physical features besides skin pigmentation; 3. the perception and the resulting classification by color is influenced by physical criteria as well as by social prestige and other situations of interaction between different groups and individuals; 4. the perception of color is expressed in a vocabulary with a rich variety of terms (HARRIS, 1970; TEIXEIRA, 1987; SHERIFF, 2002); 5. this vocabulary is a sort of cultural manifestation - the results of the 1976 PNAD and the July 1998 PME clearly stated that (Table 1); 6. the objective would then be to study the classification expressed in the vocabulary in order to obtain a classification which reflects the variety of criteria the Brazilian population effectively uses to identify itself racially or ethnically, making this classification comprehensible for society as a whole.
In this first experience (IBGE, 1976), IBGE asked for the interviewee's designation of color in a question with an open answer. It was a way to test the use of categories and whether people would use IBGE's traditional categories - brancos (whites), pretos (blacks) and pardos (mixed-race) - to identify themselves. Results showed that, even though interviewees declared more than 130 different terms to identify themselves by color, the most frequent categories were those used historically by the institute.
Twenty years later, IBGE conducted a similar survey as a supplementary questionnaire in the PME (IBGE, 1978), again as part of the discussions during the preparation of the 2000 Demographic Census. Table 1 shows the results of these two surveys conducted by IBGE, which tried to capture, in an open question, the vocabulary used by the population to identify their own race/color. We can see that branca is the modal category in both surveys, with an even higher percentage in 1998. The second most cited category is morena, which was more representative in 1976, while the reverse happened with the parda category. The category preta increased its incidence between 1976 and 1998, and again in a 2008 survey (this survey is explained in more detail further on). Looking specifically at the PME data and at the differences between Brazilian regions (Table 2), we can see that a near-perfect coincidence between the terms of the closed categories and the use of their namesakes in answering the open question - defined as a percentage of coincidence above 90% - occurs for the category branca in São Paulo, Rio de Janeiro and Porto Alegre and for amarela in São Paulo. The other categories have a lower degree of acceptance by the population. It is in the Rio de Janeiro and Porto Alegre metropolitan areas that the categories preta and parda appear with a higher percentage of coincidence, between 52% and 60%. In the Porto Alegre metropolitan area, the modal categories for the open question are always namesakes of the corresponding closed category, and in the Rio de Janeiro metropolitan area the discrepancy occurs only for the indígena population, who spontaneously prefer the omnibus category of morena.
Analyzing the categories of the closed question whose spontaneously preferred options in the open question were not homonymous (Table 3), we observed that in the Recife metropolitan area there seems to be a dichotomy in which the population is classified in the open question as branca or morena: the amarela category falls mainly into the first option and all the other closed categories into the second. It is important to observe the preference of the indígena group for the open category morena in all metropolitan areas, with the exception of Porto Alegre (even though both the open and closed questions mention color or race). At the other extreme we have the open homonymous option, which is always modal for individuals classified as branca in the closed question. Of the 14 non-empty cells in Table 3, 11 refer to the open option morena, reinforcing the view already expressed by Silva (1996) when analyzing the data from 1976.
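The "percentage of coincidence" used in Tables 2 and 3 is, in essence, a row-normalized cross-tabulation of the closed-question category against the spontaneous open answer. A minimal sketch of that computation, using invented records rather than IBGE microdata:

import pandas as pd

# Invented illustrative records: closed-question category vs. open answer.
records = pd.DataFrame({
    "closed": ["branca", "branca", "parda", "parda", "preta", "indigena"],
    "open":   ["branca", "morena", "morena", "parda", "preta", "morena"],
})

# Row-normalized cross-tab: share of each closed category choosing each open term.
table = pd.crosstab(records["closed"], records["open"], normalize="index") * 100

# "Coincidence" = share of respondents whose open answer matches the closed category.
coincidence = (records["closed"] == records["open"]).groupby(records["closed"]).mean() * 100
print(table.round(1))
print(coincidence.round(1))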
There are other implications for the study of vocabulary and its meanings. Studies about racial and ethnic identities, in addition to the multiplicity of terms and categories that can be used according to local culture (TEIXEIRA, 1987; SHERIFF, 2002), inform us that we could, in principle, recognize and distinguish at least five levels of classification: i) an individual's point of view about himself; ii) an individual's point of view about someone close to him (a relative, for example); iii) an individual's point of view about an unknown person, based only on that person's appearance; iv) an individual's point of view about how he is perceived by society in general; and v) a description of how an individual wants to be perceived in a given context (cf. BELTRÃO; TEIXEIRA, 2008).
This diversity may generate different perceptions, and even conflict in some cases, from the way individuals understand the question to which terms and criteria are used to classify people, especially in the case of people classified as parda (census category) and morena (the preference of respondents in surveys with open response). There are many examples given by field research. In 2008, a new survey entirely dedicated to investigating the dimensions involved in the ethnic and racial classification of the population was conducted by IBGE. With a sample of 15,000 households in five Brazilian states and the Federal District - Amazonas, Paraíba, Mato Grosso, São Paulo, Rio Grande do Sul and Distrito Federal - the survey brought a methodological innovation: the exclusive selection of a single respondent per household, to guarantee color or race self-declaration. The research sought to reproduce, in an objective questionnaire, the approach of field research on racial identity. So the PCERP started the survey with a question that sought to introduce the subject to the respondent, asking about the importance of such information: 63.9% said they believe that color or race has an influence on people's lives in Brazil.
In sequence, an initial inquiry different from the traditional question was made.
In place of "what is your color or race", it was asked whether the respondent "could tell what color or race": 96% of respondents said yes, that they could say what their color or race were.When responding, openly, what would be the majority of, respondents ranked among six most frequent categories, representing about 93% of total respondents (Table 4).Among the alternatives, we can see that there was an increase in the identification as negro (category relating to family origin) over as preto (category relating to color), compared to the previous experience conducted by PME in July 1998.It is noticed that the more frequent categories remain the same: branco (white), moreno (dark), negro, preto (black), pardo (brown) (Tables 4 and 5).
Comparing these data with the PNAD data for the same year and the same Brazilian states investigated by the PCERP, we realize that, in relative terms, the PNAD category "parda" could represent the sum of the "brown" and "dark" PCERP categories in all states. It is interesting to note that the "white" declaration is much higher in the PNAD, showing that when people who would declare themselves "white" under the classic question with closed response alternatives can choose, they prefer other terms, possibly compound adjectives: branco brasileiro (white Brazilian), branco moreno (dark white), etc. In other words, when left to choose the categories with which they better identify, fewer people identify themselves as "white" and more people identify themselves as "negros" (except in Paraíba state).
The survey also included an interviewee's classification by color or race made by the interviewer, to investigate the level iii mentioned above, i.e., the dimension of race/color classification related to how a person (the respondent) can be perceived by society (the interviewer).The results are presented below (Table 6).
Among people who see themselves as "white", this is not always matched by the interviewer, especially in Amazonas, Paraíba, Mato Grosso and the Federal District, where 14.2%, 21.3%, 10.6% and 16.4%, respectively, were classified as "mulatto". In São Paulo and Rio Grande do Sul the classification as "white" was more consistent between interviewee and interviewer. Even so, of all the categories this is the one that offers the highest degree of consistency between the two types of classification - by the self and by the "other".
Among those who classified themselves as "brown", the greatest consistency with the interviewer's classification occurred in Paraíba (65.9%), followed by Amazonas (58.8%) and São Paulo (54.8%). In Rio Grande do Sul, more people - 43.9% - were considered "white" by the interviewer, while in Mato Grosso 56.2% were considered "black". In the Federal District, 39% were also considered "brown" by the interviewer and 34.6% "dark".

Among those who said they were "black" there was reasonable consistency, between 50% and 67%, in Rio Grande do Sul, the Federal District, Paraíba and São Paulo. Regarding the classification as "negro", it is perceived that this category was used more by the interviewee than by the interviewer, since the majority of those who used it were classified by the interviewer with other terms: in Paraíba, 84.1% as "brown"; in Amazonas, 35.3% as "brown" and 25.1% as "mulato"; and 45% in São Paulo and 58.8% in Rio Grande do Sul as "black".

The self-declared "indigenous" only appeared in Amazonas, and 68% were also considered as such by the interviewer.
Regarding the criteria, we know that in Brazil the color or race of people has always been associated with phenotypic features or phenotypic characteristics (NOGUEIRA, 1985). The PCERP also investigated this dimension of ethno-racial classification, detailing the main criteria (up to 3, in order of importance) that society uses (in the interviewee's opinion) and the criteria that each respondent used to characterize himself. The criteria were identified, based on the literature, as skin color, physical traits in general, descent, social and economic origin (social class), political or ideological motivation, or cultural traits.
The chart below presents the results of the combination of criteria used to define color or race, both for the general population (according to the interviewee's opinion) and for the interviewee himself. We grouped the alternatives into "phenotypic features criteria" (physical features and skin color) as Code 1, "origin criteria" (culture/tradition; family origin; social and economic origin) as Code 2, and "political and ideological criteria" as Code 3. We can observe that most interviewees state three criteria (the question asked for up to 3) and very few (less than 1%) do not report any criterion. It is also interesting to note that, when describing the behavior of the "other", respondents report a greater number of criteria than when informing their own color or race (Graphic 1).
In Table 7 we can see the distribution of the criteria by type, in the order of importance in which they appeared, in the opinion of the interviewee, both as the criteria used by society in general and as those used for himself. We realize, for example, that the "phenotypic features criteria" appear in first place in both situations, but are more representative for people in general (70%) than for the respondent himself (60%). Moreover, origin criteria are more representative for defining the color or race of the respondent himself (38%) than for people in general (28%). In turn, political and ideological criteria are more representative for people in general and appear in descending order of importance: more often as a third criterion, then as a second, and less often as the first criterion. It is also interesting to note that people use fewer criteria to define their own color or race than what they think people in general use to classify others. This can be verified in the amount of non-responses in the second and third positions when the interviewee talks about people in general and when he speaks of himself. We can think of two possible reasons for this: that respondents consider that more criteria are needed to define the color or race of people in general, or that they are trying to represent the thinking of many people about it. Anyway, the fact that most people use more than one criterion to identify themselves or to identify the other, and that the second most frequent criterion is family origin (even more frequent for identifying themselves), may explain why people put this criterion in first place when identifying themselves as candidates for affirmative action. In other words, thinking in terms of only one criterion, policymakers think of skin color or "phenotypic features criteria", while candidates manipulate their identities by putting family origin in first place to define themselves. Although in most cases both criteria may coincide - people with particular "phenotypic features" have the corresponding descent - we know that in many cases the criterion of family background offers a more flexible scope for manipulating identity. This also explains why many institutions have resorted to photos, as UnB (University of Brasília, Federal District) did, or to personal interviews, so that they could ensure the application of the phenotypic features criterion in the process of identifying candidates for affirmative action.
Graphic 1. Combination of criteria for color/race definition according to the number (up to 3) of criteria used.
Source: IBGE (2008a).
In this sense we can say that the advent of affirmative action policies has added a new dimension to the agenda: people come to characterize themselves racially based on criteria of "descent" in order to have access to seats reserved for negros, blacks or browns.
Evidence of these changes in classification standards, in our view related to a change of criteria, can be seen in official statistics through the increase in the population self-reported as black and brown (see the PNAD data in Table 8).
This change in the way of self-classifying appears to be occurring among young age groups, as can be seen in Graphics 2, 3 and 4. Source: IBGE (1999; 2004; 2008b; 2015).
The question seems to be how to bring this experience from the color or race surveys of official statistics to the universities implementing their affirmative action programs, since people seem to respond in the same way to surveys that have different characteristics and purposes.
Based on this research, everything indicates that the different views that make up the universe of Brazilian racial/ethnic classification cannot be understood from a single question in a survey, whatever its nature, be it a census or a registration form for the university entrance exam.
We cannot fail to highlight the issue of the nature or objectives of the survey, which may also influence how individuals identify themselves. Rallu, Piché and Simon (MORNING, 2005, p. 243) describe four types of governmental approach to ethnic enumeration: 1. Enumeration for political control (compter pour dominer); 2. Non-enumeration in the name of national integration (ne pas compter au nom de l'intégration nationale); 3. Discourse of national hybridity (compter ou ne pas compter au nom de la mixité); and 4. Enumeration for antidiscrimination (compter pour justifier l'action positive).
In this sense, different directions can be perceived in the survey processes conducted in the IBGE population censuses and in the more recent surveys conducted by universities, which keep a closer relationship with the fourth type of approach mentioned above. By all indications, this recent move of surveys towards the purposes of affirmative policy will have repercussions on how the population traditionally responds to the question in IBGE surveys. The PCERP also investigated categories of family origin. The distribution of the combinations of declared origin by declared color/race shows that declaring a single origin is the most common pattern among all color and race groups, being higher among those who declared themselves yellow (81.5%), followed by those who declared themselves white (78.6%) (Table 9).
Table 10 presents the same information as the previous table, but specifying the most cited combinations of origins. The category with the highest incidence among all color/race groups was the declaration of a single "Far East" origin among the yellows (72.6%), followed by European origin among whites. Exclusively indigenous origin was the category of those who classified themselves as moreno (14.5%), pardo (brown) (15.2%) and Black (15.3%), with very similar incidences. Among Blacks, exclusive African origin (17%) was the most cited combination of origins. However, among those who declared themselves black, exclusive indigenous origin was the most cited (15.3%), followed by Africa alone (14.9%) and by the combination of African and European (14.5%). Among the yellows, the most frequent origin was the single Far East origin (72.6%).
Final considerations: dimensions for analysis
Observing all these attempts to understand the uses of a terminology to characterize the Brazilian population ethnically or racially, we conclude that the field of studies on racial/ethnic classification in Brazil comprises two different approaches, corresponding to two major strands of research. One works with cultural and social representations, discussing what our national identity would be and which ethnic categories could define it, whether they are representations of color, race or origin: who is white, black or mixed race in the country; ultimately, who we are and what we want to be as we elaborate our ethnic and cultural identity; how we conceptualize people and nation; how we represent ourselves and others according to our own notions and concepts; and also how each person develops his or her own identity based on personal references, experiences and history.
The other strand of research, I think, approaches the same social reality through broader, more comprehensive representations and relates it to what we have become from the principles that guide our actions. It is within this perspective that we treat (or invent) analytical concepts such as social, racial or gender inequalities. We are speaking here from the perspective of social structures and of how they operate in real society.
It is evident that these two approaches intercommunicate and interact in terms of social reality, one helping the understanding of the other. We are talking about the effect of the specific perspectives that different social scientists bring to the subject, which are complementary in my view. Many of the arguments and misunderstandings regarding affirmative action policies can be translated as a confrontation between these two points of view, which represent different areas of knowledge that had always worked together in building the field of studies on race relations in Brazilian society, as long as these studies were not brought into the public policy arena.
These two forms of analysis are simply different approaches to the same social reality. In the first approach, we emphasize ethnic and racial diversity, with its richness and variety of cultural identities of almost endless regionalism. Portrayed in such analysis, Brazil is dynamic and changes more easily than the Brazil portrayed in the second approach, that of the social structure, which allows us to speak about the same country in terms of its great polarities. From this perspective it is possible to represent the country, often in the form of two Brazils: the Brazil of the poor and that of the rich (people who have access to goods and services), the Brazil of whites and the Brazil of non-whites (or blacks), the Brazil of those who have employment and of those who are unemployed, the Brazil of those who have access to education and of those who do not. That Brazil, from the structural point of view, is more difficult to change; it is more lasting. It is organized and structured upon inequalities that seem to be crystallized in society and difficult to overcome in the short term. And precisely on that point lies the strength of this analysis: the emphasis on the more permanent aspects of our social reality.
In Brazil, the investigation of the color and race of the population has existed since the first Census of 1872, with little variation (1872: whites, blacks, browns and caboclos; 1890: whites, blacks, mestizos and caboclos; 1940, 1950, 1960 and 1980: whites, blacks, browns and yellows; 1991 and 2000: the same, with the inclusion of indigenous people). Very differently from other Latin American countries, we have a long history of surveys on this issue, with all the problems and criticisms we can raise. This information has enabled Brazil to discuss the issue at the level of structures.
We can say that data since the 1940 census have enabled researchers to point to the underprivileged situation of blacks and browns in Brazilian society, as in the UNESCO studies of the 1950s: Costa Pinto analyzed data for Rio de Janeiro (PINTO, 1952); Thales de Azevedo, for Bahia (AZEVEDO, 1955); and Florestan Fernandes studied 1950 data for São Paulo (FERNANDES, 1978). Since 1976, when a special PNAD was carried out to prepare the return of the item to the 1980 census (it was not part of the 1970 Census), many researchers have carried out analyses that seek to expose Brazilian racism through official statistics: at IBGE, Tereza Cristina Araújo, Rosa Maria Porcaro and Lucia Elena de Oliveira (OLIVEIRA; PORCARO; ARAÚJO, 1985); at IUPERJ, Carlos Hasenbalg (HASENBALG, 1979) and Nelson do Valle Silva (1996), who formed a group of researchers currently working on the topic, have focused on these same data, as have many others. More recently, specifically after the Durban Conference in 2001, even IPEA has incorporated new researchers working on such data in the same direction and tackling racial inequalities.
Since the success in demonstrating it through the "impartiality" of numbers, the questioning of the existence of racial prejudice in Brazil has ceased. Some social investments were made, but they seem to have been insufficient to significantly alter the structural framework of racial inequalities. Analysis of data from the 1990s, from the PNAD, demonstrates this. It shows that we are dealing with an issue that lies at the level of the social structure, and thus such initiatives take time to be reflected in structural change.
But with changes in the political sphere brought about by the movement that succeeded in favor of affirmative action, some began to doubt these sixty years of studies of racial inequality based on official statistics. When we bring this knowledge to support projects of intervention in society through affirmative action policies, especially in higher education (where the bottleneck has always been greater and where a change only at the base would take many years to bear fruit), the discussion takes a step back to points that social scientists who study the subject believed to be settled.
In my view, this is a battle that reflects these two dimensions of analysis of the racial issue, and this is the point made in this article. Statistics are fundamental to building the best evidence for shaping public policy. If they are not good or appropriate, as in the past, it is also because we keep advancing in their study, and we know how they can be improved. It is foolhardy to claim they are good for nothing, that they are false, or any other adjective that serves to disqualify the work of the many researchers who, over these 50 years of studies, have sought to advance the knowledge of the racial issue in Brazil through official statistics.
On the other hand, we must recognize ethnic and racial identities as cultural phenomena that are susceptible to change, which drives us to continue the discussion, trying to capture the meaning of these transformations. Until the late 1940s, the ideology of the three races in Brazil (whites, blacks and indians) was being formed, the foundation of the idea that we are primarily a country of "mestizos". The analyses that sought to uncover the persistent racial prejudice and discrimination in Brazilian society used the device of joining the census categories "blacks" and "browns" into the same category of origin (negro), which might also be regarded as a synonym of African descent, since they also came close in terms of socioeconomic background. Those who used this device always knew that this analytical strategy gives an approximate sizing of the problem and that not all "browns", of course, have "black" origin. Until the 1991 census, when the Indians and their descendants earned a separate category in order to identify themselves, they, even though relatively few (less than 1% of the population), were always aggregated under the term "brown". Prof. José Jorge de Carvalho (2005) makes a very interesting reflection on the browns of Midwest Brazil, in his view historically identified more with whites than with blacks. At this moment of reflection on the forms of ethno-racial classification and their implications for the construction of public policies based on them, we need to resume the studies done in the past, recognizing their merits and achievements, without forgetting their limitations. In addition, in that sense, we must remember that one thing is to build a category of analysis as an artifice, and quite another is to bring it to the real world of individual or collective identities.
We must think about the fact that this transposition of structural analysis onto the reality of diversity and complexity has contributed to the design of a framework that may also be strengthening, ideologically or in the cultural field, a bipolarized representation of the country. In fact, the discussions around the affirmative action projects have moved in this direction, polarizing the debate around the issue of race in the country, to the point that the two strands of research mentioned before, instead of interconnecting the different perspectives on the same issue, have come to share a single vision of it. That is precisely the discussion that, in my view, must be intensely debated. The debate cannot proceed by disqualifying either of these approaches to knowledge about race relations in Brazil; if it does, we will be shooting ourselves in the foot. What happens is that those who formulate public policy seem to have in mind the dichotomous structural model, while those who apply to the policies may be declaring racial-ethnic origins on the basis of other criteria, such as family origin. In this sense, we need to ask ourselves whether this is exactly what we want: for example, to become like the Americans of the past, for whom one drop of black blood made a person black; or whether, unlike this, we reaffirm our mixed and pacifist self-image; or whether we want something different, to overcome our inequalities while affirming our ethnic and cultural diversity in an even broader way. We need to reflect on whether we are not substituting one bias for another, closing the doors of affirmative action to those who wish to assert a different identity, e.g. that of the mestizo or mixed race, or whatever represents the racial mix, but without the connotation of "false consciousness of the black" of the 1970s and 1980s. That is why affirmative action proposals should be alert to what is being proposed, so as not to leave out exactly one population segment, also excluded, that in order to be included in the project has to declare itself "black", perhaps denying its own proposal of ethno-cultural identification.
Table 1. Distributions of responses to race/color self-identification (open answer)*.
Table 6. Distribution of the respondent's color/race self-declaration according to the classification given by the interviewer - PCERP 2008.
Table 7. Distribution by type of criteria, in order of importance, as reported for people in general and for the interviewee himself - PCERP 2008.
Source: IBGE (2008b; 2015).
Table 9. Distribution of combinations of origin declaration by complexity of response, according to colour stated - PCERP 2008.
Table 10. Distribution of declared origin combinations by declared colour - most significant answers - PCERP 2008.
New phenyl derivatives from endophytic fungus Botryosphaeria sp. SCSIO KcF6 derived of mangrove plant Kandelia candel
Two new phenyl derivatives (1 and 3), together with two new natural products (4 and 5) and three known compounds (2, 6 and 7), were isolated from the endophytic fungus Botryosphaeria sp. SCSIO KcF6. The structures of compounds 1–7 were elucidated by extensive 1D and 2D NMR and HRESIMS data analysis and by comparison with reported data. The absolute configurations of compounds 1 and 3 were assigned by optical rotation and CD data. The isolated compounds were evaluated for their cytotoxic, anti-inflammatory (COX-2) and antimicrobial activities. Compound 3 exhibited specific COX-2 inhibitory activity with an IC50 value of 1.12 μM.
Introduction
Endophytic microorganisms, mostly bacteria, fungi and actinomycetes, live in the intercellular spaces of plant tissue without causing any apparent damage to their host. In particular, some of the fungi isolated from coastal mangrove plants produce a diverse array of bioactive substances. Mangrove plants grow in subtropical and tropical intertidal habitats (Wang et al. 2003). The marine mangrove system is a special eco-environment, and an increasing number of bioactive secondary metabolites have recently been reported from mangrove plants and mangrove-derived fungi (Yang et al. 2010; Blunt et al. 2012, 2013; Rukachaisirikul et al. 2012; Zhou et al. 2014). Recent research supports the view that endophytic fungi derived from coastal mangrove plants can be a good source of potentially new bioactive secondary metabolites, some of which feature novel carbon skeletons hitherto unprecedented in nature, as reported in our previous work (Ai et al. 2014; Bai et al. 2014; Wang et al. 2014; Yang et al. 2014).
In recent years, much attention has been focused on a particular group of endophytic fungi belonging to the genus Botryosphaeria. Several interesting classes of chemical moieties, such as naphthalenones, lactones, polyketides, diterpenoids, benzofuran derivatives and exopolysaccharides, have recently been isolated and identified from Botryosphaeria species (Rukachaisirikul et al. 2009). These metabolites show notable biological properties, including antibacterial (Pongcharoen et al. 2007), antiseptic (Voegtle et al. 2008), phytotoxic (Venkatasubbaiah et al. 1991) and antimicrobial (Yang et al. 2006) activities. As an extension of this research theme, the fungal endophyte Botryosphaeria sp. SCSIO KcF6 was obtained from the fruit part of the mangrove plant Kandelia candel collected on the South China Sea coast, and herein we report the isolation of two new compounds and two new natural products, together with three known compounds, from the ethyl acetate extract. The isolation, structural elucidation through 1D and 2D NMR spectra and HRESIMS spectrometry, and biological screening of the isolated compounds are described herewith.
Results and discussion
Two new phenyl derivatives (1 and 3), along with two new natural products (4 and 5), and three known compounds (2, 6, and 7) were isolated from the ethyl acetate crude extracts of the rice medium.
Compound 1 was isolated as a white powder. The high-resolution mass spectrum of 1 gave [M + H]+ at m/z 267.0861, corresponding to the molecular formula C13H14O6. The IR spectrum showed the presence of an ester/lactone carbonyl at 1682 cm−1 and a hydroxyl at 3356 cm−1. The 1H NMR spectrum of compound 1 exhibited signals of an olefinic proton at δH 6.61, an aromatic methine proton at δH 6.38, an aromatic methoxyl at δH 3.76 and a 2-hydroxypropyl moiety at δH 4.16, 2.62 (2H) and 1.26. The 13C NMR spectrum revealed signals of a carbonyl carbon at δC 167.9, two sp2 quaternary carbons at δC 132.4 and 98.3, four oxygenated quaternary carbons at δC 161.7, 161.0, 156.0 and 136.0, three methines at δC 103.9, 101.7 and 66.4, an aromatic methoxyl carbon at δC 61.8, a methylene carbon at δC 44.2, and a methyl carbon at δC 23.5. The planar structure of compound 1 was confirmed by the key HMBC correlations. In the HMBC spectrum, the signal of H-4 showed correlations with C-3, C-5a and C-8a, and the signal of H-5 showed correlations with C-5a and C-8a, supporting that the lactone ring is connected to the aromatic ring at C-5a and C-8a. The HMBC correlations of H-11 to C-3, H-4 to C-10, and H-9 to C-3 and C-4 reveal that a 2-hydroxypropyl moiety is attached at C-3. The NMR data of 1 were very similar to those of the known compound diaporthin (Hallock et al. 1988), with the sole difference being the presence of a hydroxyl at C-7 instead of a proton. The configuration of C-10 was determined by comparison of the optical rotation with that of the known compound orthosporin (2) (Ichihara et al. 1989). The result indicated that compound 1 has the same positive sign as 2. Thus, the structure of compound 1 was determined and the compound named botryosphaerin A (Figure 1).
Compound 3 was isolated as a yellow oil. Its molecular formula was assigned as C15H16O6 on the basis of the HRESIMS [M + H]+ ion at m/z 293.1023. The IR spectrum indicated the presence of OH and CO groups. The structure of 3 was determined from its NMR data and by comparison with those of guignardianone (Buckel et al. 2013). The key HMBC correlations of compound 3, from H-3/3a to C-5 and from H-5 to C-7, C-3/3a and C-6, supported a tri-substituted olefin (δH 6.43/δC 110.0 and 134.3) attached to a phenyl residue (δH 6.85, 7.56/δC 116.0, 132.0 and 156.8) and a 1,3-dioxolan-4-one moiety (δC 108.6, 134.3 and 163.4). In addition, signals of the isopropyl moiety (δH 1.03, 2.64/δC 14.7, 15.5 and 33.2) were observed, and further inspection of the HMBC spectrum indicated that the isopropyl moiety and the ester carbonyl (δC 166.5) are both linked to a carbon at δC 108.6. The 1H and 13C NMR data of compound 3 closely resembled those of guignardic acid (Bai et al. 2014), the only difference being a proton substituted by a hydroxyl at C-1 (δC 156.8). The absolute configuration at C-8 was determined as S by comparison of the optical rotation ([α]25D −31.6 (c = 0.24, acetone)) and CD profile with those of guignardic acid (Bai et al. 2014). Therefore, the structure was defined and the compound named botryosphaerin B.
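As a quick sanity check on the molecular formula assignments of compounds 1 and 3, the observed [M + H]+ ions can be compared against monoisotopic masses computed from standard isotope masses. The short Python sketch below is purely illustrative; the isotope and proton masses are standard reference values and the formulas are the ones assigned above.

```python
# Monoisotopic masses (u) of the most abundant isotopes, plus the proton mass for [M + H]+.
MASS = {"C": 12.0, "H": 1.0078250319, "O": 15.9949146221}
PROTON = 1.007276467

def mz_m_plus_h(formula: dict) -> float:
    """Monoisotopic m/z of the [M + H]+ ion of a neutral molecule given as element counts."""
    neutral_mass = sum(MASS[element] * count for element, count in formula.items())
    return neutral_mass + PROTON

print(f"C13H14O6 [M+H]+ = {mz_m_plus_h({'C': 13, 'H': 14, 'O': 6}):.4f}")  # ~267.086 (obs. 267.0861)
print(f"C15H16O6 [M+H]+ = {mz_m_plus_h({'C': 15, 'H': 16, 'O': 6}):.4f}")  # ~293.102 (obs. 293.1023)
```

Both calculated values agree with the observed ions to within about 1 mDa, consistent with the assigned formulas.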
The isolated compounds were evaluated for their cytotoxic, anti-inflammatory (COX-2) and antimicrobial activities. Only compound 3 exhibited specific COX-2 inhibitory activity, with an IC50 value of 1.12 μM.
Experimental
General procedures
Optical rotations were measured using an Anton Paar MCP-500 polarimeter (Hertford, UK). The NMR spectra were measured on a Bruker AC 500 MHz NMR spectrometer (Bruker, Fällanden, Switzerland) with TMS as internal standard. High-resolution mass spectra (HR-ESI-MS) were recorded on a Bruker micro TOF-QII mass spectrometer (Bruker). The CD spectrum was measured with a Chirascan circular dichroism spectrometer (Applied Photophysics, Surrey, UK). Size-exclusion chromatography was done on Sephadex LH-20 gel (GE Healthcare, Uppsala, Sweden). Column chromatography was carried out on silica gel (Qingdao Marine Chemical Factory, Qingdao, China). TLC spots were detected under UV light or by heating after spraying with 5% H2SO4 in EtOH. Thin-layer chromatography was carried out with precoated silica gel plates (GF-254, Jiangyou Silica Gel Development, Inc., Yantai, China).
Fungal strain
The endophytic fungus SCSIO KcF6 was derived from the inner fruit part of the mangrove plant K. candel, which was collected at Daya Bay, Shenzhen, Guangdong Province, China, in March 2012. Isolated fungal strains were cultured on MB agar medium at 25°C. The strain was stored on MB agar slants at 4°C and deposited at the Marine Microbial Collection Center of the CAS Key Laboratory of Tropical Marine Bio-resources and Ecology. The isolate was identified as a member of the genus Botryosphaeria on the basis of its ITS phylogenetic analysis and was designated Botryosphaeria sp. SCSIO KcF6. The 489-base-pair ITS sequence (NCBI GenBank accession number KM 246294) has 99% sequence identity to that of the Botryosphaeria dothidea strain CBS 116743 (NCBI GenBank accession number AY786322).
Fermentation, extraction and isolation
Botryosphaeria sp. SCSIO KcF6 stored on MB agar slants at 4°C was cultured on MB agar plates and incubated at 25°C for 7 days. Seed medium (potato 200 g, dextrose 20 g, sea salt 10 g, distilled water 1000 mL) was inoculated with Botryosphaeria sp. SCSIO KcF6 and incubated for 48 h on a rotating shaker (180 rpm, 25°C). Large-scale fermentation in a solid rice medium in 1000 mL flasks supplemented with 3% NaCl (rice 200 g, sea salt 6.0 g, distilled water 200 mL) (n = 35) was inoculated with 10 mL of seed solution. Flasks were incubated at 25°C under static conditions and fermented for 45 days.
Antimicrobial activity
The isolated compounds 1-7 were subjected to antibacterial screening (Bauer et al. 1996) against five human bacterial pathogenic strains: Acinetobacter baumannii ATCC 19606, Klebsiella pneumoniae ATCC 13883, Escherichia coli ATCC 29213, Staphylococcus aureus ATCC 29213, and Enterococcus faecalis ATCC 29212. The minimum inhibitory concentration (IC50) value was determined by assessing significant bacterial growth inhibition at the lowest dose.
COX-2 inhibitory activity assay
COX-2, a well-established target, is an inducible enzyme whose expression is activated by cytokines, mitogens, endotoxin and tumour promoters. The anti-inflammatory and analgesic properties of traditional NSAIDs are primarily due to the inhibition of COX-2. Hence, the isolated compounds were tested for COX-2 inhibitory activity using the COX (ovine) inhibitor screening kit according to the manufacturer's instructions. The test compounds were dissolved in DMSO and the final concentration was set to 30 μM. The percentage inhibition was calculated by comparison with the control group.
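For illustration, the percentage inhibition relative to the uninhibited control can be computed as in the minimal sketch below; the enzyme-activity readings shown are hypothetical placeholders, not values from the actual assay.

```python
def percent_inhibition(activity_sample: float, activity_control: float) -> float:
    """Percentage inhibition of COX-2 activity relative to the uninhibited control."""
    return (1.0 - activity_sample / activity_control) * 100.0

# Hypothetical activity readings (e.g. absorbance-derived rates) for the control and a test compound.
control_activity = 0.85
compound_activity = 0.34
print(f"inhibition = {percent_inhibition(compound_activity, control_activity):.1f} %")
```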
Conclusion
In the present study, we have isolated and characterised two new phenyl derivatives, botryosphaerin A (1) and botryosphaerin B (3), along with five other known compounds, from the mangrove-derived endophytic fungus Botryosphaeria sp. Compound 3 exerted a moderate COX-2 anti-inflammatory effect with an IC50 value of 1.12 μM.
Disclosure statement
No potential conflict of interest was reported by the authors.
Involvement of neutrophils in rat livers by low-dose thioacetamide administration
The administration of a high dose (close to the LD50) of thioacetamide (TAA), a hepatotoxicant widely used to induce experimental liver lesions, produces hepatocellular necrosis and subsequent inflammation (mainly M1-/M2-macrophages without neutrophil infiltration) in rats. We analyzed rat livers treated with a low dose of TAA (50 mg/kg body weight) at 6, 12, 18, 24 and 48 hr. The lesions in the affected centrilobular areas consisted of slight hepatocyte degeneration at 12 hr and inflammatory cell infiltration at 18 and 24 hr; the lesions recovered by 48 hr. Translocation of HMGB1, a representative molecule of the damage-associated molecular patterns, from nuclei to cytoplasm was seen in some hepatocytes mainly at 6, 12 and 18 hr. As an interesting finding, at 12 hr, myeloperoxidase-positive neutrophil infiltration was observed in the affected centrilobular area. Additionally, CD68-positive M1 and CD163-positive M2 macrophages increased consistently at 12 to 48 hr. CXCL1, a chemokine for the recruitment of neutrophils, began to increase at 6 hr and gradually increased at 12, 18 and 24 hr, apparently corresponding to the appearance of neutrophils. Collectively, the present findings at the low dose of TAA indicate that, along with M1-/M2-macrophages, neutrophils were characteristically seen, which might be elicited by the cytoplasmic translocation of HMGB1 from nuclei. These findings would be useful for the evaluation of hepatotoxicity at early stages.
Neutrophils are immune cells that originate and mature in the bone marrow [16]. Generally, when animals are infected with pathogens, particularly bacteria, neutrophils are rapidly recruited from the bloodstream to the infected site as the first innate immune cells, in order to kill the pathogens in various ways, including phagocytosis, reactive oxygen species and neutrophil extracellular traps [16]. In sterile settings, the recruitment of neutrophils is well studied in hepatic ischemia/reperfusion injury, where they mediate the progression of the injury at later stages after reperfusion [21]. Infiltration/migration of neutrophils in the liver may be mediated by Kupffer cells via activation of complement and release of chemokines including C-X-C motif chemokine ligand (CXCL)-1 and CXCL-2 [2,14]. When cells are injured or undergo necrosis due to ischemia or chemical exposure, damage-associated molecular patterns (DAMPs) are released into the extracellular space and can activate the innate immune system through complicated mechanisms [8,11,13,15]. As a representative DAMP, high-mobility group box 1 (HMGB1), a non-histone DNA-binding protein participating in DNA transcription [1,17], plays important roles in inflammation [7,13].
Participation of neutrophils in chemically induced liver injury is known for a limited number of chemicals, such as halothane [20] and acetaminophen [9,10]. Thioacetamide (TAA) has been used to induce hepatotoxicity in rats and mice; the liver lesions induced by TAA are characterized by coagulation necrosis of hepatocytes in the centrilobular area followed by macrophage infiltration. The dose of TAA usually used for intraperitoneal injection in rats is 300 mg/kg body weight, which is close to the median lethal dose (LD50) [5,8]. In the present study, we analyzed the pathological lesions induced in rat livers by administration of a lower dose of TAA, focusing on neutrophils and HMGB1.
Animals
Six-week-old male F344/DuCrlCrlj rats were purchased from Charles River Japan (Yokohama, Japan). The TAA group was injected intraperitoneally with TAA dissolved in saline (50 mg/kg body weight; Wako Pure Chemicals, Osaka, Japan). The dose (50 mg/kg) was decided based on data from preliminary experiments. The control group was administered an equal volume of saline. The animals were housed in an animal room at a controlled temperature of 22 ± 3°C with a 12-hr light-dark cycle; they were provided a standard diet (DC-8; CLEA, Tokyo, Japan) and tap water ad libitum. Rats were euthanized under deep isoflurane anesthesia, and the blood (from the abdominal artery) and liver were collected at 6, 12, 18, 24 and 48 hr after injection (n=4 at each point). Aspartate transaminase (AST) and alanine transaminase (ALT) were measured by SRL Inc. (Tokyo, Japan). The animal experiments were conducted under the institutional guidelines approved by the ethical committee of Osaka Prefecture University for animal care (No. 29-5).
Histopathology and immunohistochemistry
Tissues from the left lateral lobe of the liver were fixed in 10% neutral buffered formalin or periodate-lysine-paraformaldehyde (PLP) solution. The tissues were dehydrated and embedded in paraffin. Deparaffinized sections, cut at 4 µm thickness, were stained with hematoxylin and eosin (HE) for histopathologic examination. Immunohistochemistry was conducted according to methods reported previously [3]. PLP-fixed sections were used for immunohistochemistry with mouse monoclonal antibodies: cluster of differentiation (CD) 68 (clone ED1, for M1 macrophages; 1:500; Chemicon, Tokyo, Japan), CD163 (clone ED2, for M2 macrophages; 1:500; AbD Serotec, Oxford, UK) and myeloperoxidase (for neutrophils; 1:500; R&D Systems, Minneapolis, MN, USA). After pretreatment by microwave for 20 min in 0.01 M citrate buffer (pH 6.0) for myeloperoxidase or by proteinase K (100 µg/ml) for 10 min for CD68 and CD163, sections were incubated with each primary antibody for 1 hr at room temperature, followed by 1-hr incubation with a peroxidase-conjugated secondary antibody (Histofine Simple Stain MAX-PO; Nichirei, Tokyo, Japan). Positive reactions were detected with 3,3′-diaminobenzidine (DAB Substrate Kit; Nichirei). Sections were counterstained lightly with hematoxylin. In addition, liver samples obtained from rats injected with the high dose of TAA (300 mg/kg body weight) were used for immunohistochemistry for myeloperoxidase.
Immunofluorescence
PLP-fixed sections were also used for immunofluorescence with a rabbit polyclonal antibody against HMGB1 (1:500; Abcam, Cambridge, UK). After pretreatment by microwave for 20 min in 0.01 M citrate buffer (pH 6.0), sections were incubated with the primary antibody for 24 hr at room temperature, followed by 1-hr incubation with a fluorescence-conjugated secondary antibody (Alexa Fluor 488; Thermo Fisher Scientific, Waltham, MA, USA). Samples were mounted with medium including 4′,6-diamidino-2-phenylindole (DAPI) for nuclear fluorescence.
Cell count
The numbers of myeloperoxidase-, CD68- or CD163-positive cells in the affected centrilobular area were counted in three different areas from each of 4 rats using WinROOF (Mitani Corp., Fukui, Japan) and are expressed as the number of positive cells per unit area (cells/mm²).
Statistics
The obtained data are represented as mean ± standard deviation (SD). Statistical analyses were performed using Dunnett's test (versus the control group). Significance was accepted at P<0.05.
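A minimal Python sketch of this kind of analysis is shown below; the cell counts and field area are hypothetical placeholders (not values from the study), and scipy.stats.dunnett requires SciPy 1.11 or newer.

```python
import numpy as np
from scipy import stats

# Hypothetical myeloperoxidase-positive cell counts per field, 4 rats per group;
# field_area_mm2 is an assumed field size used to convert counts to cells/mm^2.
field_area_mm2 = 0.25
control = np.array([1, 0, 2, 1]) / field_area_mm2
taa_12h = np.array([4, 6, 5, 7]) / field_area_mm2
taa_24h = np.array([14, 18, 12, 20]) / field_area_mm2

for name, group in (("12 hr", taa_12h), ("24 hr", taa_24h)):
    print(f"{name}: {group.mean():.1f} ± {group.std(ddof=1):.1f} cells/mm^2")

# Dunnett's test of each treated group versus the control group.
result = stats.dunnett(taa_12h, taa_24h, control=control)
print(result.pvalue)  # one p-value per treated group; P < 0.05 is taken as significant
```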
The low dose TAA (50 mg/kg body weight) injection induces hepatocellular degeneration and inflammation
In control livers (Fig. 1A) and at 6 hr, no histopathological abnormalities were seen. At 12 hr, some hepatocytes in the injured centrilobular area showed slight degeneration (Fig. 1B, arrowhead), with a small number of inflammatory cells. At 18 and 24 hr, inflammatory cell infiltration was more prominent in the affected area (Fig. 1C). At 48 hr, the lesions had almost recovered. The inflammatory cells consisted of neutrophils and macrophages, as specified by the immunohistochemical analyses mentioned below.
Consistent with the histopathologic lesions, the AST and ALT values began to increase at 12 hr, with a peak at 24 hr. Both ALT and AST showed a statistically significant increase at 18 and 24 hr, and the increased values had decreased by 48 hr (Fig. 1D for ALT; data not shown for AST).
Nuclear to cytoplasmic translocation of HMGB1 occurs in the early stage of low dose TAA-induced hepatic lesions
In the control liver, positive reactivity for HMGB1 was seen exclusively within the nucleus (Fig. 2A and 2B). At 6 and 12 hr, some hepatocytes in the centrilobular areas showed cytoplasmic positivity for HMGB1 (Fig. 2C and 2D). Hepatocytes with cytoplasmic positivity for HMGB1 were still observed at 24 hr; they were rarely seen at 48 hr, as in the control liver.
Myeloperoxidase-positive cells appear as inflammatory cells, in correlation with increased CXCL1
In control livers and at 6 hr, there were no myeloperoxidase-positive cells (neutrophils) (Fig. 3A and 3E). At 12 hr, a few cells reacting to myeloperoxidase were present in the injured centrilobular area (Fig. 3B and 3E), and their number increased gradually at 18 and 24 hr, showing a significant increase at 24 hr (Fig. 3C and 3E). At 48 hr, the positive cells were rarely seen (Fig. 3D and 3E).
The expression level of CXCL1, a neutrophil-activating/chemotaxis chemokine [8,18], began to increase at 6 hr and gradually increased at 12, 18 and 24 hr, with a peak at 18 hr and showing a statistical increase (Fig. 3F).
Infiltrating macrophages represent M1-/M2-phenotypes
To evaluate the immunophenotypes of the infiltrating macrophages, immunohistochemical analysis was performed using CD68 (for M1 macrophages) and CD163 (for M2 macrophages). In control livers and at 6 hr, CD68-positive M1 macrophages were rarely seen in the centrilobular area. At 12 and 18 hr, the positive macrophages aggregated in the centrilobular area (Fig. 4A), and the increased number was retained at 24 and 48 hr (Fig. 4B).
In control livers and at 6 hr, CD163-positive M2 macrophages were seen along the sinusoids, indicating Kupffer cells, without an increase in number. At 12 and 18 hr, besides the positive cells along the sinusoids, CD163-positive macrophages appeared in the centrilobular area (Fig. 4C), and the increased number of positive cells was retained at 24 and 48 hr (Fig. 4D).
Myeloperoxidase-positive cells do not accumulate in the injured centrilobular area in rats administered with high dose TAA
To confirm the relevance of neutrophil appearance to TAA administration, a comparative histopathological and immunohistochemical analysis was performed using high-dose TAA samples. In HE-stained sections, there were no histopathological lesions in the centrilobular area at 10 hr (Fig. 6A), whereas many infiltrating cells were seen on day 3 (Fig. 6B). Neither at 10 hr (Fig. 6C) nor on day 3 (Fig. 6D) were myeloperoxidase-positive neutrophils more than rarely seen. The infiltrating cells were almost all macrophages reacting to CD68 and CD163, as confirmed previously [18].
DISCUSSION
Injection of TAA at the present low dose (50 mg/kg body weight) induced hepatocyte degeneration and an inflammatory cell reaction in the centrilobular area, with simultaneously increased values of AST and ALT. The progression of the histopathological changes was in accordance with that in rats injected with the high dose of TAA (300 mg/kg body weight) [8], although the hepatic lesions were much milder in the present study than in our previous studies with the high dose. The time course (injury/inflammation and recovery) was also shorter than in the high-dose study; the present low-dose injection developed lesions at 12 and 24 hr, and the lesions then almost recovered, whereas the high-dose administration induced hepatic lesions consisting of necrosis/inflammation on days 2 and 3, with subsequent recovery on day 5 [6,8].
Interestingly, with the present low-dose TAA administration, neutrophil infiltration was characteristically seen. This finding has not been reported in hepatotoxicity experiments with the high dose of TAA [6,8,18]; to confirm it, we conducted immunohistochemistry for myeloperoxidase using liver samples from rats injected with the high dose of TAA. Cells reacting to myeloperoxidase were rarely seen even on day 3 (Fig. 6D), when infiltration was the greatest. In contrast, CD68-expressing M1 macrophages and CD163-expressing M2 macrophages appeared simultaneously in the affected liver lesions, as seen with the high dose of TAA [4]. Of note, myeloperoxidase-positive neutrophils tended to increase mainly at 18 and 24 hr, whereas CD68 M1-/CD163 M2-macrophages increased consistently at 12 to 48 hr. These findings indicate that the appearance of neutrophils was transient in rat livers treated with the low dose of TAA.
CXCL1 is a chemokine for the recruitment of neutrophils [2,14]. Interestingly, CXCL1 mRNA began to increase at 6 hr and thereafter gradually increased at 12, 18 and 24 hr, apparently corresponding to the appearance of neutrophils, particularly at 12, 18 and 24 hr. CXCL1 may be produced by activated Kupffer cells. The increased number of CD163-expressing M2 macrophages (possibly including Kupffer cells reacting to CD163), which began to be seen at 12 hr (Fig. 4D), might have been related to the appearance of neutrophils.
In the present study, translocation of intranuclear HMGB1 to the cytoplasm was observed in some hepatocytes mainly at 6 and 12 hr. Possibly, neutrophil infiltration might be elicited by HMGB1. Release of HMGB1 into the extracellular space promotes transcription of pro-inflammatory factors, including IL-1β, IL-6 and TNF-α, through complicated innate immune pathways [12,17]. It would be worthwhile to investigate the roles of HMGB1 to clarify the mechanisms of neutrophil infiltration at the low dose of TAA. In conclusion, the present study showed that injection of TAA at the low dose (50 mg/kg body weight) could induce liver lesions with neutrophil infiltration, along with M1-/M2-type macrophages. The neutrophil infiltration might be related to the translocation of intranuclear HMGB1 to the cytoplasm. Although the significance of neutrophils should be investigated at different time points using various hepatotoxicants including TAA, analyses of HMGB1 translocation and of neutrophil appearance and its related factors (such as CXCL1) would be useful for the evaluation of hepatotoxicity at early stages.
Multiscale machine-learning interatomic potentials for ferromagnetic and liquid iron
We develop and compare four interatomic potentials for iron: a simple machine-learned embedded atom method (EAM) potential, a potential with machine-learned two- and three-body-dependent terms, a potential with machine-learned EAM and three-body terms, and a Gaussian approximation potential with the SOAP descriptor. All potentials are trained to the same diverse database of body-centered cubic and liquid structures computed with density functional theory. The four presented potentials represent different levels of complexity and span three orders of magnitude in computational cost. The first three potentials are tabulated and evaluated efficiently using cubic spline interpolations, while the fourth one is implemented without additional optimization. We compare and discuss the advantages of each implementation, transferability and applicability in terms of the balance between required accuracy versus computational cost.
I. INTRODUCTION
As the principal component of all steels and hence arguably the most important metal for industrial and structural applications, iron is one of the most intensely modelled materials. Its magnetic nature is the source of many interesting properties that separate iron from other body-centered cubic metals, such as its high-temperature phase transitions [1] and the exotic landscape of radiation-induced defects [2-4]. This makes developing accurate interatomic potentials for large-scale atomistic modeling of iron challenging. Consequently, a large number of interatomic potentials have been developed in the last decades, targeting different key properties. Most existing potentials are traditional parametric analytical potentials, like embedded atom method (EAM) potentials [5-13], angular-dependent modified EAM potentials [14-17], and Tersoff-like or magnetic analytical bond-order potentials (ABOP) [18-21]. Even though these potentials have been very successful in describing most properties of iron, recent machine-learning potentials have provided a new level of accuracy for e.g. thermal, defect, and screw dislocation properties [22-24]. Very recently, there has also been progress in explicitly including spins in machine-learning potentials [25,26] or coupling a machine-learning potential to a spin model [27] to quantitatively reproduce magnetism in iron and other materials.
Exploiting machine learning (ML) is now rapidly becoming routine when constructing and fitting interatomic potentials. A growing number of different ML frameworks and descriptors have been developed in what is now an extremely active research field [28,29]. Potentials using different underlying ML methods (artificial neural networks [30], kernel regression [31], linear regression [32,33], and deep learning [34]) have all demonstrated near-quantum accuracy for all classes of materials [35].
Despite their success and excellent accuracy, machine-learning potentials have not and will not completely replace traditional parametric interatomic potentials. This is partly because traditional fixed-function potentials offer a transferability that is difficult to achieve with ML potentials, as ML models are inherently poor at extrapolation. Secondly, most ML potentials are computationally much more costly than simple traditional potentials like EAM or Tersoff. The choice of the type of potential one develops or applies in a simulation should be based on the balance between the desired accuracy and the acceptable computational cost. For many molecular dynamics (MD) applications, such as simulating large-scale or long-term irradiation damage [36], the computational price of highly accurate machine-learning potentials is simply too high. With this in mind, the aim of this work is to develop, using machine-learning methods, a set of increasingly complex interatomic potentials for iron that provide different levels of accuracy and computational efficiency. In particular, we further develop the methodology of tabulated low-dimensional machine-learning potentials (tabGAP [37]) and show that they can provide an excellent balance between speed, accuracy, and transferability.
A. Gaussian approximation potentials
All potentials developed here are trained as Gaussian approximation potentials (GAP) [31] using different combinations of increasingly complex descriptors. All potentials include a fixed short-range repulsive pair potential (E_rep) appropriate for handling high-energy collisions correctly [38-40], so that the total energy of a system of N atoms is given by

E_{\mathrm{tot}} = E_{\mathrm{rep}} + E_{\mathrm{ML}} = \sum_{i<j}^{N} E_{\mathrm{rep}}(r_{ij}) + E_{\mathrm{ML}}.

The energy (and corresponding forces and stresses) to be machine-learned is hence E_ML = E_tot − E_rep, where E_tot is the total energy of a given structure in the training database computed with density functional theory (DFT). The repulsive pair potential is a screened Coulomb potential fitted to the Fe-Fe repulsion and forced to zero by a smooth cutoff function f_cut(r_ij), as in Ref. [41]:

E_{\mathrm{rep}}(r_{ij}) = \frac{1}{4\pi\varepsilon_0} \frac{Z^2 e^2}{r_{ij}} \, \phi(r_{ij}/a) \, f_{\mathrm{cut}}(r_{ij}),

where Z is the atomic number and a is the screening length, chosen as in the universal ZBL potential [38]. The cutoff function forces the potential smoothly to zero in the range 1.1-2.2 Å. This is well below the nearest-neighbour distance in bcc (2.45 Å) and hence leaves all near-equilibrium interactions to be machine-learned. The screening function φ is fitted to reproduce all-electron DFT data for the Fe-Fe dimer repulsion [40].

The simplest and least accurate potential version, a machine-learned EAM potential, contains two machine-learning terms with pairwise (E_2b) and embedding-energy (E_emb) contributions:

E_{\mathrm{ML}} = \sum_{i<j} E_{\mathrm{2b}}(r_{ij}) + \sum_i E_{\mathrm{emb}}(\rho_i).

All machine-learning terms are evaluated using Gaussian process regression as implemented in quip [42] and part of the GAP framework. Including the EAM-like embedding term has not been done previously in GAP and is explained in detail below. The two-body term can be written

E_{\mathrm{2b}}(r_{ij}) = \delta^2 \sum_{s=1}^{M_{\mathrm{2b}}} \alpha_s K_{\mathrm{se}}(r_{ij}, r_s),

where δ² is a prefactor, α_s are the regression coefficients, and K_se is the squared-exponential kernel. The sum runs over a selected (sparsified) subset of known descriptor environments from the training structures (here just the M_2b interatomic distances r_s) [43]. The embedding energy is similarly given by

E_{\mathrm{emb}}(\rho_i) = \delta^2 \sum_{s=1}^{M_{\mathrm{emb}}} \alpha_s K_{\mathrm{se}}(\rho_i, \rho_s).

Here, the descriptor input to the kernel function is the total density contributed by all atoms j in the local atomic environment of i, as in a normal EAM potential:

\rho_i = \sum_{j \neq i} \varphi(r_{ij}).

The use of an EAM-like density as a simple many-body descriptor for ML potentials was first demonstrated in Ref. [44], although with different expressions for the pairwise and total density. We have implemented several functions for the pairwise density contributions ϕ_ij. Here, we use the polynomial function

\varphi(r_{ij}) = \begin{cases} (-1)^n (r_{ij} - r_{\mathrm{cut}})^n / r_{\mathrm{cut}}^n, & r_{ij} \le r_{\mathrm{cut}} \\ 0, & r_{ij} > r_{\mathrm{cut}}, \end{cases}

with n = 3, making the cutoff continuous up to the second derivative. n = 2 would be the Finnis-Sinclair density function (normalised so that ϕ(0) = 1) [45]. r_cut is the cutoff radius. Since the descriptor is the total pairwise-contributed density, training the GAP-EAM potential effectively means machine-learning the embedding function of an EAM potential together with the pair potential. Note that in normal EAM potentials, the pair potential and the pair density function are often fitted freely using cubic spline functions. The GAP-EAM potential is hence actually less flexible, because the pair density function is fixed as a part of the descriptor during the fitting process (although it could in principle be pre-fitted and used as a descriptor). The main practical advantage of the machine-learned embedding term arises when combining it with an angular-dependent descriptor, as discussed below. The simple GAP-EAM potential is here mainly included for the purpose of comparison with the increasingly more flexible machine-learning potentials.
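To make the density descriptor concrete, the sketch below evaluates the polynomial pair-density function and the resulting per-atom densities ρ_i for a small, non-periodic cluster of atoms. The cutoff radius and atomic positions are illustrative assumptions for the example only, not the values used in the actual potentials.

```python
import numpy as np

def pair_density(r, r_cut=4.0, n=3):
    """Polynomial pair-density phi(r) = (-1)^n (r - r_cut)^n / r_cut^n for r <= r_cut, else 0."""
    r = np.asarray(r, dtype=float)
    return np.where(r <= r_cut, (-1.0) ** n * (r - r_cut) ** n / r_cut ** n, 0.0)

def eam_densities(positions, r_cut=4.0, n=3):
    """Per-atom EAM-like densities rho_i = sum_{j != i} phi(r_ij) (no periodic boundaries)."""
    positions = np.asarray(positions, dtype=float)
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(r, np.inf)  # exclude the i == j self-term
    return pair_density(r, r_cut, n).sum(axis=1)

# Illustrative bcc-like fragment (lattice parameter of ~2.83 Å assumed for this example only).
a = 2.83
atoms = [[0, 0, 0], [a / 2, a / 2, a / 2], [a, 0, 0], [0, a, 0]]
print(eam_densities(atoms))
```

Note that phi(0) = 1 and phi(r_cut) = 0 for n = 3, so the density descriptor goes smoothly to zero at the cutoff, as stated above.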
It would also be possible to include several embedding terms with different pair density functions, which could be seen as a machine-learning multi-band generalisation of the 2-band EAM potential [46]. Here, we only use one embedding term and leave the investigation of machine-learned multi-band EAM potentials for future work. We also train a potential with only two- and three-body terms as

E_{\mathrm{ML}} = \sum_{i<j} E_{\mathrm{2b}}(r_{ij}) + \sum_i \sum_{j,k} E_{\mathrm{3b}}(\mathbf{q}_{ijk}).

The three-body machine-learning term is

E_{\mathrm{3b}}(\mathbf{q}_{ijk}) = \delta^2 \sum_{s=1}^{M_{\mathrm{3b}}} \alpha_s K_{\mathrm{se}}(\mathbf{q}_{ijk}, \mathbf{q}_s),

where the descriptor is the three-valued permutation-invariant vector [43]

\mathbf{q}_{ijk} = \left( r_{ij} + r_{ik}, \; (r_{ij} - r_{ik})^2, \; r_{jk} \right).

The GAP-EAM potential represents the simplest possible many-body potential and is computationally efficient. However, it contains no angular dependence and can only be expected to work reasonably well for simple metals. In contrast, the GAP-3b potential captures angular information, but the pure three-body dependence is not enough for liquids or amorphous structures, where many-body (higher than three-body) and proper coordination dependencies are needed to reach good accuracy (as we demonstrate in Sec. III A). For a more flexible and generally applicable potential, therefore, it is obvious that both the three-body and the embedding terms should be used, as

E_{\mathrm{ML}} = \sum_{i<j} E_{\mathrm{2b}}(r_{ij}) + \sum_i \sum_{j,k} E_{\mathrm{3b}}(\mathbf{q}_{ijk}) + \sum_i E_{\mathrm{emb}}(\rho_i).

The GAP-3b+EAM potential can be considered a machine-learning alternative to the angular-dependent modified EAM potentials [47,48]. The final and most complex potential is a typical GAP where the main ingredient is the well-established SOAP descriptor [49], used here together with the repulsive and machine-learned pair potentials as

E_{\mathrm{ML}} = \sum_{i<j} E_{\mathrm{2b}}(r_{ij}) + \sum_i E_{\mathrm{soap}}(\mathbf{q}_i).

We refer to this potential as GAP-SOAP. The many-body SOAP term is given by

E_{\mathrm{soap}}(\mathbf{q}_i) = \delta^2 \sum_{s=1}^{M_{\mathrm{soap}}} \alpha_s K_{\mathrm{SOAP}}(\mathbf{q}_i, \mathbf{q}_s),

where K_SOAP is the SOAP kernel and q_i is the SOAP descriptor vector of the local environment of atom i [49].
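The following minimal Python sketch illustrates the form of these sparse kernel sums for the two-body term. The sparse points and regression coefficients are random placeholders here; in a real GAP they are produced by the sparsification and regression steps of the training.

```python
import numpy as np

def k_se(x, x_sparse, theta=0.5):
    """Squared-exponential kernel between descriptor values x and the sparse points x_sparse."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.exp(-0.5 * ((x[:, None] - x_sparse[None, :]) / theta) ** 2)

def predict_2b(r, r_sparse, alpha, delta=1.0, theta=0.5):
    """Two-body prediction E_2b(r) = delta^2 * sum_s alpha_s K_se(r, r_s)."""
    return delta ** 2 * k_se(r, r_sparse, theta) @ alpha

# Placeholder sparse points and regression coefficients (assumed values, not a trained potential).
rng = np.random.default_rng(0)
r_sparse = np.linspace(1.5, 5.0, 20)
alpha = rng.normal(size=r_sparse.size)
print(predict_2b([2.45, 2.83], r_sparse, alpha))
```

The cost of each prediction scales with the number of sparse points, which is what motivates the tabulation described in the next section.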
B. tabGAP: tabulated Gaussian approximation potentials
The GAP-EAM, GAP-3b, and GAP-3b+EAM potentials depend only on simple low-dimensional descriptors. Hence, after training they can all be tabulated by mapping the machine-learning energy predictions onto suitable grids [37]. This bypasses the Gaussian process regression sum over the training environments and yields a significant computational speed-up. The pairwise energies can be trivially tabulated as a function of the interatomic distance r_ij and evaluated using a smooth and differentiable one-dimensional cubic spline interpolation. Similarly, the machine-learning embedding term can be tabulated as a function of the total density ρ, which in turn is tabulated as a function of r_ij. The three-body term must be mapped onto a three-dimensional grid and evaluated by a 3D spline interpolation. For this, we choose a grid of (r_ij, r_ik, cos θ_ijk) points and a 3D cubic spline implementation. θ_ijk is the angle between the ij and ik bonds. With sufficiently dense grids, the interpolation errors are negligible compared to the accuracy of the potential, as demonstrated in Appendix B. Similar tabulation schemes have been developed before for other types of ML potentials [50,51], although some details differ from our approach.
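As an illustration of the one-dimensional case, the sketch below tabulates a stand-in for an expensive machine-learned pair-energy function on a dense grid once, and then evaluates energies and derivative (force) contributions from the cubic spline. The grid range, grid density and stand-in function are assumptions for the example only.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def expensive_ml_pair_energy(r):
    """Stand-in for an expensive machine-learned pair-energy prediction (e.g. a GAP kernel sum)."""
    return 0.1 * np.exp(-r) * np.cos(3.0 * r)

# Tabulate once on a dense grid (assumed range 1.0-5.0 Å with 2000 points for this example) ...
r_grid = np.linspace(1.0, 5.0, 2000)
spline_2b = CubicSpline(r_grid, expensive_ml_pair_energy(r_grid))

# ... then evaluate energies cheaply, and the derivative needed for pairwise force contributions.
r = np.array([2.45, 2.83, 4.05])
print("E_2b   :", spline_2b(r))
print("-dE/dr :", -spline_2b(r, 1))  # nu=1 evaluates the first derivative of the spline
```

The three-body term is handled analogously, except that the spline interpolation is three-dimensional over (r_ij, r_ik, cos θ_ijk).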
We refer to the tabulated versions of the low-dimensional GAPs as tabGAPs [37]. With S representing cubic splines, the tabGAP-EAM can be written

E_{\mathrm{tot}} = \sum_{i<j} S_{\mathrm{2b}}(r_{ij}) + \sum_i S_{\mathrm{emb}}(\rho_i),

where the repulsive and ML pair potentials are combined into one spline interpolation. In practice, this represents a normal tabulated EAM potential file, and the tabGAP-EAM can thus be evaluated normally using any EAM implementation. The tabulated version of GAP-3b becomes

E_{\mathrm{tot}} = \sum_{i<j} S_{\mathrm{2b}}(r_{ij}) + \sum_i \sum_{j,k} S_{\mathrm{3b}}(r_{ij}, r_{ik}, \cos\theta_{ijk}).

We have implemented this 1D+3D cubic spline interpolation as the pair style tabgap for lammps, available from Ref.
[52] along with code for making tabGAP potential files from GAP potential files. The GAP-3b+EAM becomes the tabulated version

E_{\mathrm{tot}} = \sum_{i<j} S_{\mathrm{2b}}(r_{ij}) + \sum_i \sum_{j,k} S_{\mathrm{3b}}(r_{ij}, r_{ik}, \cos\theta_{ijk}) + \sum_i S_{\mathrm{emb}}(\rho_i).

For simplicity, and because this version is the most accurate and practically useful tabulated potential, we refer to it hereafter simply as the tabGAP. The tabGAP is in practice used with the hybrid/overlay functionality in lammps, combining the eam/fs and tabgap pair styles. Note that our original tabGAP for refractory alloys in Ref. [37] used only the 2b and 3b terms, as in tabGAP-3b.

Table I lists the key hyperparameters used when training the GAPs. The interaction range for all descriptors except the three-body one includes the third-nearest neighbour atoms in bcc iron. For the three-body descriptor, we found that using a shorter cutoff that only includes second-nearest neighbours provides the best compromise between speed and accuracy. The numbers of sparse points M were converged to sufficient values by looking at the test errors as functions of M. For the energy, force, and virial regularization parameters σ used in GAP training [53], the default values were set to 1 meV/atom, 0.04 eV/Å, and 0.1 eV. For surface structures we used stronger regularization with twice the default values (2σ), and for liquids 10σ.
D. Training and testing data
The training data consists of total energies, forces, and (for some structures) virial stresses computed by density functional theory for 1078 structures containing 1-259 atoms (in total 38613 atoms). The training and testing structures are available from Ref. [54]. The following types of structures are included in the training data:
• Elastically and randomly distorted bcc unit cells.
• Single-crystal bcc cells at finite temperatures and a few different volumes.
• Single vacancies and clusters up to three vacancies, including various migration path saddle points.
• Short-range structures for interatomic repulsion, where one interstitial atom is placed randomly in the bcc crystal without relaxation (so that it is relatively close, but not too close, to its neighbour atoms).
• Liquids at various densities. To get a reasonable spread from low to high densities, we sampled liquids according to a χ²-distribution around the density of liquid iron at the melting point and normal pressure from experiments [55].
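A minimal sketch of such skewed density sampling is given below. The reference density (~7 g/cm³), the degrees of freedom and the spread are illustrative assumptions, not the values used to build the actual training set.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed reference density of liquid iron near the melting point (~7 g/cm^3; illustrative value).
rho_melt = 7.0
dof = 4                   # degrees of freedom of the chi-squared distribution (assumed)
width = 0.08 * rho_melt   # assumed spread of the sampled densities

# chi-squared samples have mean dof and std sqrt(2*dof); centre and rescale them around rho_melt,
# which keeps the skewed (long-tailed) shape of the distribution.
samples = rng.chisquare(dof, size=50)
densities = rho_melt + width * (samples - dof) / np.sqrt(2.0 * dof)
print(f"min {densities.min():.2f}, mean {densities.mean():.2f}, max {densities.max():.2f} g/cm^3")
```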
In all structures except the distorted unit cells, the atoms are slightly displaced from the perfect lattice positions to induce non-zero forces and to create unique local atomic environments. This is either done by introducing small random displacements or by picking frames from finite-temperature MD simulations. For many structure types (mainly the liquids and the defect clusters), new structures were created by relaxing or running MD with an early version of the GAP or tabGAP.
During training and when testing and converging hyperparameters, the accuracy was monitored with a test set of crystalline and liquid structures. The test set crystals include bcc lattices with random atom displacements and five 250-atom lattices containing 3-5 randomly inserted Frenkel pairs to test defect properties. The test set also includes five 128-atom liquids.
E. Density functional theory calculations
All density functional theory calculations are performed with the vasp code [56-59]. We used the PBE GGA exchange-correlation functional [60] and the Fe_sv projector-augmented wave potential [61,62] with 16 valence electrons. The energy cutoff for the plane-wave expansion was 500 eV. The spacing of k-points for the Brillouin-zone integration was set to a maximum of 0.15 Å−1 on Γ-centered Monkhorst-Pack grids [63]. A 0.1 eV first-order Methfessel-Paxton smearing [64] was applied. All calculations were done with spin polarization and collinear magnetic configurations (corresponding to ferromagnetic Fe in the bcc crystalline structures).
F. Molecular dynamics and statics simulations
All molecular dynamics and statics simulations for benchmarking the potentials are done using lammps [65]. Migration barriers are computed with the climbing-image nudged elastic band (NEB) method [66] as implemented in lammps.
For most of the test calculations and simulations, to minimise the effort for the slow GAP-SOAP potential, we used the fast tabGAP-EAM to find converged box sizes and simulation times. The thermal expansion for both the bcc and the liquid phase was simulated using 1024 atoms in 20 ps simulations in the NPT ensemble at zero pressure [67,68], averaging the volume over the last 16 ps. The structure and properties of the liquid were further examined by equilibrating a molten 2000-atom cell at the melting temperature for 100 ps at zero pressure in MD. The final 85 ps were used to obtain the average potential energy, volume, and radial distribution function. We determined the melting temperature using the solid-liquid interface method in a box of 1372 atoms, i.e. by finding the temperature at which the solid and the liquid phase are in equilibrium [69].
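A minimal sketch of how a radial distribution function can be computed from a single snapshot of such a liquid simulation is given below (cubic box, minimum-image convention). In practice g(r) would be averaged over many MD frames, and the box size, cutoff and bin count here are illustrative assumptions; the random coordinates only stand in for real MD positions.

```python
import numpy as np

def rdf(positions, box, r_max=6.0, n_bins=120):
    """g(r) from one snapshot in a cubic box of side `box`, using the minimum-image convention."""
    positions = np.asarray(positions, dtype=float)
    n = len(positions)
    diff = positions[:, None, :] - positions[None, :, :]
    diff -= box * np.round(diff / box)                              # minimum image
    r = np.linalg.norm(diff, axis=-1)[np.triu_indices(n, k=1)]     # unique pair distances
    hist, edges = np.histogram(r, bins=n_bins, range=(0.0, r_max))
    shell_volumes = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    number_density = n / box ** 3
    ideal_pair_counts = 0.5 * n * number_density * shell_volumes   # expected pairs for an ideal gas
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal_pair_counts

# Placeholder snapshot: 200 random coordinates in an assumed 12 Å box (a real analysis uses MD frames).
rng = np.random.default_rng(1)
box = 12.0
r_centres, g_r = rdf(rng.uniform(0.0, box, size=(200, 3)), box)
```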
All defects were relaxed by minimising the energy and pressure of the system. The single vacancies and divacancies were relaxed in 250-atom bcc lattices, including the NEB calculations. For the single SIA relaxations and NEB calculations we used 1024-atom bcc lattices. The small (size 2-4) parallel and nonparallel SIA clusters were inserted and relaxed in 2000-atom bcc lattices. The bigger SIA clusters (up to 100 SIAs) were inserted and relaxed in boxes of 16000 atoms. The dislocation loops were (close-to) circular loops with Burgers vectors ⟨1 0 0⟩ and 1/2⟨1 1 1⟩. For sizes below 25 SIAs, we relaxed 1/2⟨1 1 1⟩ loops with both {1 1 1} and {1 1 0} habit planes and used the lowest-energy configuration for the final formation energy. Overall, the difference in energy was small, so for sizes above 25 SIAs we only considered the {1 1 1} plane.
To find low-energy C15 clusters, we carried out growth-annealing simulations with the tabGAP similar to what is described in detail in Ref. [19]. In short, we started from a stable C15 cluster and inserted random interstitials close to the cluster one by one followed by annealing and final energy minimisation. During annealing, the C15 cluster captures the added interstitial and grows. This is repeated until a desired size is reached. In this way, we grew C15 clusters between sizes 4-40 SIAs, starting from stable size-4, size-11, size-17, and size-30 C15 clusters. We simulated 40 different growth runs for every size range, and extracted the lowest-energy C15 clusters for comparison with the formation energies of dislocation loops in all potentials.
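The growth-annealing loop described above can be sketched at a high level as follows. This is not the production setup (which used the tabGAP in lammps); the sketch assumes an ASE-readable EAM potential file for iron, uses very short illustrative anneal and relaxation settings, and omits the careful insertion checks that would be used in practice.

```python
import numpy as np
from ase.build import bulk
from ase import Atoms, units
from ase.calculators.eam import EAM
from ase.md.langevin import Langevin
from ase.optimize import BFGS

rng = np.random.default_rng(0)
calc = EAM(potential="Fe.eam.alloy")  # assumed potential file, not from the paper

# Start from a 2000-atom bcc Fe cell plus one extra atom as a stand-in for
# an initial stable interstitial cluster.
atoms = bulk("Fe", "bcc", a=2.86, cubic=True).repeat((10, 10, 10))
atoms += Atoms("Fe", positions=[atoms.get_center_of_mass() + [1.0, 0.5, 0.0]])
atoms.calc = calc

for step in range(5):  # grow the cluster by 5 interstitials (illustrative)
    # Insert a new interstitial at a random position close to the cluster centre.
    offset = rng.normal(scale=3.0, size=3)
    atoms += Atoms("Fe", positions=[atoms.get_center_of_mass() + offset])
    atoms.calc = calc

    # Short anneal so the cluster can capture the added interstitial.
    dyn = Langevin(atoms, 2.0 * units.fs, temperature_K=600.0, friction=0.02)
    dyn.run(500)

    # Final energy minimisation and bookkeeping of the cluster energy.
    BFGS(atoms, logfile=None).run(fmax=0.05)
    print(step, atoms.get_potential_energy())
```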
III. RESULTS AND DISCUSSION
A. Accuracy versus speed
Fig. 1 illustrates the balance between achievable accuracy and computational cost of various types of interatomic potentials for iron. It shows the root-mean-square errors with respect to the DFT force components in the test structures plotted as functions of the computational cost of the potential. EAM potentials are by far the fastest many-body potential, but can only reach a limited level of accuracy. Angular-dependent potentials, like MEAM and ABOP, can be slightly more accurate at the expense of some speed. Fig. 1 shows that the machine-learned potentials developed in this work fall into favourable spots in the balance between speed and accuracy compared to existing potentials. However, it should be emphasised that most of the existing potentials have not been force-matched to DFT data, but instead fitted to a mix of experimental and DFT-computed material properties. This makes the comparison with our DFT-computed forces somewhat unfair, but still provides an approximate measure for the performance of different types of potentials. The few notable exceptions that were fitted to DFT forces and liquid properties (EAM-A04, ADP-S21, MEAM-A15, and MEAM-E18) stand out with the lowest errors among the existing potentials in Fig. 1. Fig. 1 also shows the speedup gained by the tabulation of the various low-dimensional GAPs into the corresponding tabGAP versions. The reduction in computational cost is more than two orders of magnitude. For the most accurate and relevant version, the tabGAP, the GAP-3b+EAM → tabGAP tabulation provides a 175-times faster potential with no loss in accuracy.
GAP-SOAP is the most accurate of the new potentials, but also by far the slowest (about 65 times slower than the tabGAP). Fig. 1 also shows the previous GAP-SOAP (GAP-D18 [22]), which due to differences in hyperparameters is slower than our GAP-SOAP, but also very accurate. Note that liquids were not included in their training data, which explains the higher test errors for the liquid test set.
To further examine and compare the training and testing accuracy of the new tabGAPs and GAP-SOAP, Figs. 2 and 3 show energy and force errors for the test and training data sets. Fig. 2 is the same test data as in Fig. 1. Comparing the potentials in Figs. 2 and 3 reveals several noteworthy points. First, the limited flexibility of tabGAP-EAM and tabGAP-3b makes it impossible to reproduce certain structure types with good accuracy. For example, tabGAP-EAM is somewhat overfitted to defects and gives much larger energy errors for simple finite-temperature bulk bcc iron (Fig. 3). The accuracy of tabGAP-EAM is overall much worse than the other potentials, which is expected, and can to some degree be accepted given its low computational cost. Second, tabGAP-3b provides very good accuracy for all crystalline structures, but the pure 3-body dependence is clearly insufficient to accurately describe the liquid phase, as seen in both Figs. 2 and 3. Using both the 3-body and the EAM descriptor in the tabGAP provides enough flexibility to overcome the above-mentioned issues. Figs. 2 and 3 show that the accuracy of tabGAP for crystalline structures is still excellent, often very close to GAP-SOAP, and the liquid errors are greatly reduced compared to tabGAP-EAM and tabGAP-3b. The RMS errors for crystalline structures are at most a few meV/atom and around 0.06 eV/Å. For liquids they are reduced to only around 10 meV/atom and 0.3 eV/Å, compared to 20-50 meV/atom and 0.4-0.5 eV/Å for tabGAP-EAM and tabGAP-3b. GAP-SOAP outperforms the tabGAP for all structures slightly, although at a significantly higher computational cost as discussed above.
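The energy and force errors quoted above are straightforward to compute once predicted and reference values are available. A minimal sketch is shown below; the arrays are hypothetical stand-ins for per-structure energies (eV/atom) and force components (eV/Å), not data from the paper.

```python
import numpy as np

def rmse(pred, ref):
    """Root-mean-square error between predicted and reference values."""
    pred, ref = np.asarray(pred), np.asarray(ref)
    return np.sqrt(np.mean((pred - ref) ** 2))

# Hypothetical test data standing in for DFT references and potential predictions.
e_dft = np.array([-8.212, -8.105, -7.950])
e_pot = np.array([-8.210, -8.101, -7.955])
f_dft = np.random.default_rng(1).normal(scale=1.0, size=300)
f_pot = f_dft + np.random.default_rng(2).normal(scale=0.05, size=300)

print(f"energy RMSE: {1000 * rmse(e_pot, e_dft):.1f} meV/atom")
print(f"force  RMSE: {rmse(f_pot, f_dft):.3f} eV/Å")
```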
From here on, we will not include tabGAP-3b in the discussion as it is overall much less accurate than tabGAP but at the same computational cost (the additional cost of the EAM term in tabGAP is negligible compared to the 3-body term).
B. Bulk and surface properties
As a benchmark for how well the energy and force test errors translate to actual material properties, Tab. II lists basic structural, elastic, surface, defect, and thermal properties of iron compared between experiment, DFT, and the three potentials (tabGAP-EAM, tabGAP, and GAP-SOAP). All three potentials reproduce the properties of iron well, with the noteworthy exception of the elastic constants by tabGAP-EAM. It overestimates all elastic constants by about 20% compared to DFT, which already overestimates the experimental values by 10-20%. This shortcoming of tabGAP-EAM is clear in the energy-volume curve of bcc iron shown in Fig. 4, where the DFT points are well captured by tabGAP and GAP-SOAP, but tabGAP-EAM produces a stiffer curve around the equilibrium volume. Overall, from Tab. II it is clear that tabGAP-EAM is by far the least accurate potential, as expected, while tabGAP and GAP-SOAP show in general very similar agreement with the reference data.
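For reference, the sketch below shows how an energy-volume curve like the one in Fig. 4 can be reduced to an equilibrium volume and bulk modulus through a simple polynomial fit near the minimum. The E(V) values are placeholders and the quadratic fit is only illustrative; the paper does not state which fitting procedure was used.

```python
import numpy as np

# Placeholder energy-volume data per atom (Å^3, eV); replace with computed values.
V = np.linspace(10.8, 12.4, 9)
E = 0.05 * (V - 11.6) ** 2 - 8.3  # toy parabola around an assumed minimum

# Quadratic fit E(V) = a V^2 + b V + c near the minimum.
a, b, c = np.polyfit(V, E, 2)
V0 = -b / (2 * a)                 # equilibrium volume per atom
B = 2 * a * V0 * 160.2176634      # B = V d^2E/dV^2, converted from eV/Å^3 to GPa

print(f"V0 = {V0:.2f} Å^3/atom, bulk modulus B = {B:.0f} GPa")
```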
On the other hand, the thermal expansion coefficient at 300 K as listed in Tab. II is close to the experimental value in all three potentials. Furthermore, Fig. 5 shows the volume expansion at zero pressure for the temperature range from 0 K to far beyond the melting point for all three potentials and experimental measurements. All three potentials show very similar trends in the range of the ferromagnetic bcc phase. The experimental transition to the fcc phase and back to the bcc phase indicated in Fig. 5 is not captured by any of the tested potentials. With no magnetic degrees of freedom, this phase transition is not possible. However, the solid-liquid phase transition is captured the closest by the tabGAP and GAP-SOAP potentials, although the volume of the liquid phase above the melting point increases more slowly with temperature than in the experiment, see Fig. 5.
C. Liquid properties
The melting temperature predicted by both tabGAP and GAP-SOAP is 1900 K, only 5% higher than the experimental 1811 K. The tabGAP-EAM potential overestimates it by 12% (2020 K). The latent heat is also overestimated compared to experimental measurements (Tab. II) in all three potentials, which is likely linked to the slight overestimation of the melting point [15]. Fig. 6 shows the radial distribution function computed as an average over time for an equilibrated liquid at the melting point in each potential. The tabGAP and GAP-SOAP data overlap almost perfectly with experimental measurements [6], and only the tabGAP-EAM potential shows small discrepancies.
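The radial distribution function in Fig. 6 is a standard time average over liquid snapshots; a compact single-snapshot version for a cubic periodic box is sketched below. The positions used here are random stand-ins, so the resulting g(r) is structureless; a real MD configuration would be used instead.

```python
import numpy as np

def rdf(positions, box_length, r_max, n_bins=200):
    """g(r) for one snapshot of a cubic periodic box (minimum-image convention)."""
    pos = np.asarray(positions)
    n = len(pos)
    # All pairwise separation vectors with minimum-image convention.
    diff = pos[:, None, :] - pos[None, :, :]
    diff -= box_length * np.round(diff / box_length)
    dist = np.linalg.norm(diff, axis=-1)[np.triu_indices(n, k=1)]

    hist, edges = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[1:] + edges[:-1])
    shell_vol = 4.0 * np.pi * r**2 * (edges[1] - edges[0])
    rho = n / box_length**3
    # Normalise pair counts by the ideal-gas expectation.
    g = hist / (0.5 * n * rho * shell_vol)
    return r, g

# Hypothetical snapshot: 2000 random positions in a 28.6 Å box.
rng = np.random.default_rng(3)
r, g = rdf(rng.uniform(0.0, 28.6, size=(2000, 3)), 28.6, r_max=10.0)
```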
D. Repulsive potential
We benchmarked the repulsive parts of the potentials both statically and dynamically. In the static test, an atom is moved step-wise along a given crystal direction while computing the change in energy. We chose to sample the ⟨1 1 0⟩ direction, as it provides an interesting energy landscape when the atom moves past its nearest neighbours. Reproducing the ⟨1 1 0⟩ energy landscape was also recently shown to correlate with other properties relevant for radiation damage simulations and was hence suggested as a good way to ensure that the repulsive part of the potential is accurate [80]. Fig. 7(a) shows the results from all three potentials and DFT. Again, only tabGAP-EAM shows visible deviations from DFT while tabGAP and GAP-SOAP accurately follow the DFT points.
In the dynamic test, we simulated a low-energy recoil in a direction close to ⟨1 0 0⟩ with the tabGAP. The choice of direction, ⟨0 1 5⟩, and recoil energy (20 eV) corresponds to a near-threshold event for defect creation (the minimum threshold displacement energy in Fe is around 20 eV and around the ⟨1 0 0⟩ direction [81,82]). From the recoil simulation trajectory with the tabGAP, we recomputed the energies with the other two potentials and picked a set of interesting trajectory frames for DFT. The potential energy variation of the recoil trajectory is shown in Fig. 7(b), compared between the three potentials and DFT. All potentials are very close to the DFT points, suggesting that they can reliably model the interatomic collisions and initial defect creation processes that are important in collision cascade simulations.
E. Defects
Tab. II lists basic properties of single vacancies, divacancies, and single self-interstitial atoms. GAP-SOAP and tabGAP compare well with the DFT data, in particular they reproduce accurate binding energies of divacancies and the correct order of stability and energy differences of single SIA configurations.
The migration barriers of the single vacancy and SIA, computed with the NEB method, are shown in Fig. 8 compared to DFT data [77,78]. All three potentials reproduce the migration energies well, although tabGAP-EAM and tabGAP show a double-hump profile for the vacancy migration barrier which is not present in DFT but is a common feature of existing interatomic potentials [8]. Fig. 9 compares various migration barriers of single SIAs and divacancies between the potentials and DFT data from Ref. [8]. The migration paths and corresponding energies are illustrated and listed in the Supplemental material. Fig. 9 shows that overall, tabGAP and GAP-SOAP produce migration energies that are quite consistent with the DFT data. GAP-SOAP has a tendency to slightly underestimate (di)vacancy migration energies and shows a RMS error of 0.08 eV compared to DFT for both SIAs and divacancies. The tabGAP is somewhat more accurate with RMS errors 0.05-0.06 eV, while tabGAP-EAM performs reasonably well for divacancy migration but quite poorly for SIA migration paths.
The energy landscape and possible geometries of self-interstitial clusters in iron are rich and challenging for classical interatomic potentials to reproduce. It is now well established by DFT calculations that nonparallel clusters are at small sizes much more energetically stable than parallel dumbbells and dislocation loops [2,84,85]. There have been several attempts to reproduce this complex landscape of SIA clusters in iron with analytical interatomic potentials [2,13,19], although none have been completely successful. GAP-SOAP and tabGAP provide improvements over the existing analytical potentials, but still leave some room for improvement. Table III lists formation energies of the parallel and nonparallel ⟨1 1 0⟩ dumbbell configurations discussed above. Both tabGAP and GAP-SOAP correctly reproduce the triangular configuration as the most stable cluster of 2 SIAs. Only tabGAP predicts the hexagonal size-3 cluster to be the most stable, although the energy difference compared to the parallel configuration is very small also in GAP-SOAP. For size 4, both tabGAP and GAP-SOAP correctly reproduce the C15 clusters to be significantly more stable than parallel dumbbells. From Tab. III it is clear that tabGAP-EAM, due to its limited flexibility and lack of angular dependence, struggles to reproduce the relative formation energies of SIA clusters.
Most of the small SIA clusters discussed above are well-covered by the training database and good accuracy is therefore to be expected. We also computed the formation energies of clusters up to 100 SIAs in the form of C15 clusters and dislocation loops with the ⟨1 0 0⟩ and 1/2⟨1 1 1⟩ Burgers vectors. Fig. 10 shows the formation energies per interstitial in all three potentials. The results are compared to DFT data for small clusters from Ref. [85]. Note that there are often many geometrically different ways to construct clusters of a given size. Hence, our clusters may not be exactly the same as the DFT data used for comparison [85]. For dislocation loops at sizes larger than a few interstitials, the difference in formation energy between different configurations is typically quite small. For C15 clusters, however, there are vast amounts of possible configurations for any given size and the formation energy may vary significantly. Only a few sizes allow for well-defined high-symmetry shapes, which have relatively low formation energy. Previous work, including the DFT work with which we compare here, has employed various criteria for constructing possible low-energy C15 clusters. Here, we use a growth-annealing method as described in section II F to find low-energy C15 clusters. We only report the formation energy of the lowest-energy C15 cluster that we found at each size in Fig. 10 (which, however, is most likely not the most stable out of all theoretical C15 configurations and also likely not the same configuration as in the DFT study). Fig. 10 shows that the tabGAP and GAP-SOAP reproduce the relative stability of the three clusters in good agreement with the DFT data. The formation energies of C15 clusters are somewhat overestimated, and very much so in tabGAP-EAM, which does not provide any improvement over existing EAM potentials [2,13]. 1/2⟨1 1 1⟩ loops are more stable than ⟨1 0 0⟩ loops for the entire size range in all potentials, consistent with DFT extrapolation [85]. The DFT-based extrapolation model developed in Ref. [85] suggests a crossover in the energy of C15 and 1/2⟨1 1 1⟩ loops at around 50 interstitials and between C15 and ⟨1 0 0⟩ loops at around 90 interstitials. The tabGAP and GAP-SOAP predict the corresponding crossovers at much lower sizes, 34 and around 45 for tabGAP, and 23 and 33 for GAP-SOAP. In comparison, the recent ML potentials based on linear regression achieved a crossover between C15 and 1/2⟨1 1 1⟩ at around 40 SIAs [24], i.e. somewhat closer to the DFT estimate than tabGAP. Given that the stability of C15 and other clusters can vary significantly between different exchange-correlation functionals and pseudo- or PAW potentials in DFT [84], it remains unclear if the differences in crossovers are a shortcoming of the potentials, or if the difference can to some extent be attributed to differences in our DFT compared to the reference DFT data from Ref. [85].
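The formation energies per interstitial plotted in Fig. 10 follow from the usual defect formation-energy expression, E_f = E_defect(N+n) − (N+n)E_bulk/atom, divided by the number of interstitials n. A minimal helper is sketched below with placeholder energies (not results from the paper).

```python
def formation_energy_per_sia(e_defect, n_atoms_defect, e_bulk_per_atom, n_sia):
    """Formation energy per SIA for a cell containing n_sia extra atoms."""
    e_f_total = e_defect - n_atoms_defect * e_bulk_per_atom
    return e_f_total / n_sia

# Placeholder numbers (eV); prints roughly 3.9 eV per SIA for this example.
print(formation_energy_per_sia(e_defect=-136460.0,
                               n_atoms_defect=16020,
                               e_bulk_per_atom=-8.523,
                               n_sia=20))
```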
It is noteworthy that while GAP-SOAP is most accurate among the potentials for small clusters (Tab. III), the tabGAP shows better transferability to larger clusters (Fig. 10). We also tested the screw dislocation properties of the potentials. We confirmed that the tabGAPs and GAP-SOAP all reproduce the symmetric nondegenerate core structure of the 1/2⟨1 1 1⟩ screw dislocation as predicted by DFT. Fig. 11 shows the relaxed core of the screw dislocation in all three potentials and our DFT. We used 135-atom boxes with the quadrupolar periodic arrangement of screw dislocation dipoles [87], produced by inserting two screw dislocations (around 17 Å apart) with opposite Burgers vectors.
We also computed the Peierls barrier for screw dislocation migration in the tabGAPs and GAP-SOAP using the NEB method. Fig. 12 shows the results. The barriers are computed in two ways, with simultaneous migration of both dislocations (Fig. 12a), and with only one of the dislocations migrating (Fig. 12b). The latter approach replicates the method used in Ref. [22], which allows a direct comparison between the potentials and their DFT barrier. We used the same 135-atom box to be consistent with the DFT results. The obtained Peierls barriers from simultaneous migration (Fig. 12a) are compared to DFT data from Ref. [87]. Fig. 12 shows that both tabGAP and GAP-SOAP produce similar barriers with shapes and heights consistent with the DFT results. The tabGAP-EAM potential, like most existing EAM potentials, fails to reproduce the Peierls barrier and predicts an almost flat energy barrier. The tabGAP and GAP-SOAP agree much better with the DFT barrier from Ref. [22], which we believe is more consistent with our DFT training data.
IV. SUMMARY AND OUTLOOK
We have developed four interatomic potentials for iron using machine-learning methods and increasingly flexible combinations of descriptors for the local atomic environments. Three out of these potentials were thoroughly benchmarked, and two of them (tabGAP and GAP-SOAP) showed overall great accuracy for a range of solid and liquid properties. The three tested potentials span more than three orders of magnitude in computational cost, and hence provide options depending on the required accuracy and speed. All potentials contain accurate repulsive parts that make them applicable to collision cascade simulations.
The results demonstrate that our method for tabulation of low-dimensional Gaussian approximation potentials (tabGAP) provides interatomic potentials with a good balance between accuracy, speed, and transferability. The tabGAP combines simple two-body, three-body, and EAM-like density descriptors that together provide good flexibility and can be mapped onto suitable grids, making them computationally efficient. In particular, we showed that our new simple EAM-like descriptor provides the many-body coordination dependence necessary to accurately describe the liquid phase. The tabGAP developed here shows overall similar accuracy to the GAP-SOAP potential but at a much lower computational cost, similar to that of classical analytical angular-dependent potentials. Given its modest computational cost and good accuracy and transferability for defect properties, the tabGAP is well-suited for large-scale radiation damage simulations.
ACKNOWLEDGMENTS
This work has received funding from the Academy of Finland through the HEADFORE project (grant number 1333225). The authors wish to thank the Finnish Computing Competence Infrastructure (FCCI) and CSC - IT Center for Science for supporting this project with computational and data storage resources. This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 - EUROfusion). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them.
Appendix A: tabGAP pair potential and embedding functions
Fig. 13 shows the tabulated one-dimensional functions of the tabGAP-EAM and tabGAP potentials. At short interatomic distances, the machine-learned pair potentials are smoothly joined to the repulsive screened Coulomb potential. The pair density function is fixed as part of the descriptor, as described in the Methods section, and is identical for both potentials. The machine-learned embedding functions shown in Fig. 13(d) have the physically reasonable monotonically decreasing but convex shape [6]. The grey vertical line in Fig. 13(d) indicates the maximum total density encountered in the training structures, after which the embedding energy starts approaching zero due to lack of training points (densities higher than this will in practice never be encountered as it would require multiple atoms simultaneously very close to each other).
Appendix B: tabGAP grid convergence
Fig. 14 shows the convergence of the tabGAP interpolation error as functions of grid size. For 1D interpolation, the errors are already vanishingly small when using more than a few hundred points. For the final tabGAPs, we used 5000 points. For the 3D (r ij , r jk , cos θ ijk ) grid, using thousands of points in each dimension is out of reach, but Fig. 14 shows that the interpolation error is already negligible compared to the accuracy of the potential itself when using more than 50 grid points in each dimension. For the final tabGAP, we used an N × N × N grid with N = 80, for which the interpolation error is well below 0.1 meV/atom and 0.01 eV/Å.
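The 1D part of this convergence test is easy to reproduce in a few lines. The sketch below measures the maximum cubic-spline interpolation error of an arbitrary smooth, pair-potential-like function as the grid is refined; the function itself is a stand-in, not the actual tabGAP term.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def pair_like(r):
    # Stand-in for a smooth tabulated 1D function (not the real tabGAP term).
    return np.exp(-r) * np.cos(2.0 * r) / r

r_dense = np.linspace(1.0, 5.0, 200001)
exact = pair_like(r_dense)

for n_grid in (50, 200, 1000, 5000):
    r_grid = np.linspace(1.0, 5.0, n_grid)
    spline = CubicSpline(r_grid, pair_like(r_grid))
    err = np.max(np.abs(spline(r_dense) - exact))
    print(f"{n_grid:5d} grid points: max interpolation error {err:.2e}")
```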
Triadic Social Structure Facilitates Backing for Crowdfunding Projects
Crowdfunding is a new funding method through which founders request small amounts of funding from a large number of people through an online platform. Crowdfunding facilitates a new type of social capital and exhibits a unique form of social dynamics, thus attracting the interest of sociologists and other social scientists. Previous studies have focused on social relationships in crowdfunding such as direct reciprocity and consider how they contribute to the success of funding. The social structure of crowdfunding, however, involves more complex social relationships and it may contribute to the success of a new venture or project in many ways. In this study, we focus on a specific type of triadic social structure, the buddy relation, which can be described as a relationship through which project founder x, who previously backed another founder z's project, receives financial backing from the other backers of z's project. We found that the buddy relation occurs significantly more often than randomly, concluding that this structure facilitates the gathering of financial backing and may contribute to the success of a crowdfunded project.
I. INTRODUCTION
Crowdfunding is a funding method through which founders, to realize their goals, request funds from a crowd of many unspecified individuals through an online platform. Crowdfunding sites are used for a variety of projects, such as video games, free software, inventions, scientific research, environmental initiatives, social welfare, and political activities. Founders make their own proposals in public on crowdfunding sites and ask backers to contribute a small amount of money. The founder sets a target amount of funding and offers returns for the backers' support, such as products, thank-you letters, or acknowledgment of backers' names. Individuals examine and compare proposals on the site. Those motivated select their favorite projects, decide the amount they want to pledge, and transfer money via micropayment. Finally, such backing expands via word of mouth on social networking services.
Backers have two types of motivation: extrinsic and intrinsic. The former is based on returns, that is, self-interest, whereas the latter is based on sympathy toward founders [1]. For example, intrinsically motivated backers value the feeling of "connectedness" to a community that comes from participating in social interactions [2].
The more attractive the proposal or return, the more fundraising is likely to be achieved. However, the major characteristic of crowdfunding is that new social capital emerges directly among people unknown to each other, and such social dynamics have attracted the interest of sociologists and other social scientists. Particularly, a large proportion of users in crowdfunding may interact with others not once but repeatedly; thus, their interactions form a type of social structure. Furthermore, present backers may become future founders and vice versa. This interdependent relationship creates the social dynamics of crowdfunding. For example, b's previous experience of being backed by a may make b willing to back a this time; this is known as the reciprocity principle [3]. The social dynamics of crowdfunding, however, go beyond simple reciprocity, as discussed later in this study.
Previous studies on crowdfunding have mainly focused on factors contributing to funding success. They have examined the contents of a project and the way it is presented to identify elements that are crucial for success, such as the "goal" (the target amount of funding), "deadline," "news updates" (frequency of news updates), "Facebook likes" (number of Facebook likes), and length of explanation. Meanwhile, typographical errors usually result in failure [4].
Among the characteristics of crowdfunding, particularly interesting are those related to its social capital. References [4], [5], [6], and [7] explored the effects of founders' social capital on the success of a funding initiative. Reference [4] showed that the possibility of success is significantly correlated to the number of Facebook likes, which they regarded as a founder's social capital. Reference [5] showed that success is correlated to the social capital inside the crowdfunding site. As for the inside capital, [6] showed that a founder who has received backing previously from many individuals is likely to be able to collect more backing for future initiatives. Reference [7] also showed that a founder who has backed many others is more likely to receive more backing. Particularly, [7] have confirmed that direct reciprocity between founders exists in crowdfunding sites. As a related work, [8] showed that there is no indirect reciprocity among founders.
These studies shed light on the important mechanisms of social capital that make it possible to fundraise through crowdfunding sites. However, they do not go beyond dyadic structures such as direct reciprocity (but see [8] and the discussion). In this study, we explore such a more complex social structure and its dynamics.
II. THEORY
The questions explored in this study include the following: Why do backers back certain founders and not others? What is the effect of backing? How does previous backing by present founders contribute to gathering funds and thus the success of their projects?
The act of backing can be considered as the emergence of a direct tie between a backer and a founder. Thus, the backing-backed relationship forms a type of social network in crowdfunding sites. The network dynamics can be analyzed through social network theory [9] [10] [11] [12].
One of the most common theories is the theory of reciprocity; that is, if your project was backed by someone previously, then you should back her if she is seeking funding for her project. In fact, [7] showed the existence of direct reciprocity. However, direct reciprocity cannot be the major dynamic because the number of founders is very small compared to the number of backers. Other mechanisms should be considered.
In social network theory, there are other candidates of mechanisms of tie creation apart from reciprocity. An important theory is that of triadic closure [13]. This theory states that you are likely to become friends with those who are friends of your friends. This dynamic has been found in various sites such as e-mail and social media [14] [15].
Furthermore, a shared focus tends to promote friendship between users. A focus here is defined as a "social, psychological, legal, or physical entity around which joint activities are organized (e.g., workplaces, voluntary organizations, hangouts, and families)" [16]. Reference [14] showed that class participation as a shared focus promoted friendship between college students. This is called focal closure.
Based on the ideas of triadic and focal closure, this study proposes a new hypothesis, different from direct reciprocity, on how founders' previous backing behaviors contribute to gathering funds. Our hypothesis is as follows: If x backs project P z , then backers of project P z (hereafter referred to as w) will be more likely to back x's project P x , as shown in Fig. 1. This may be interpreted as the interaction of focal and triadic closures, as displayed in Fig. 2. First, this relationship may manifest focal closure because P z is a project in which the joint activity (backing the project) is organized. Specifically, when x backs P z , this backing relationship may express x's shared concerns in the cause of the project launched by z. In the same way, w's backing P z may indicate w's shared concerns with z. Hence, it is expected that x and w may share similar concerns in the cause of the project that x is trying to launch. In this way, P z , as a focus, promotes shared concerns between x and w.
However, shared concerns alone may not guarantee a financial contribution to an unknown person. w's trust in x will also be required. Here, it must be noted that founder z may not only be a shared focus for x and w but may also be a mutual friend, which implies that a type of triadic closure may also be involved in this relationship. As a mutual friend, z may promote this trust relationship. We assume that this is attributed to the following story. When z knows that x is launching her project P x , z informs w of this and asks w to back x's project P x . Subsequently, w may be convinced that x and her project P x are trustworthy because her friend z has guaranteed it. When this interaction of two mechanisms activates a friendship between x and w, and w backs x's newly launched project P x , we refer to this as the "buddy effect." This is not merely our speculation. We interviewed the founder of a popular crowdfunding site in Japan (D.S., interviewed on June 27, 2018) to confirm whether the above relationship is observed often. He told us that successful projects typically gather 1/3 of their funding from the founder's direct friends, 1/3 from friends of friends, and 1/3 from others. Furthermore, he stated,
As a rule of thumb, what a founder should do is not to ask for contributions from her direct friends, but to say to them, "Please let your friends know that I am launching my project."
This clearly refers to the process of closure that we hypothesized. However, this is only anecdotal evidence; quantitative evidence must be provided to verify the hypothesis, which we will do later in the study. His story suggests that our theoretical hypothesis is not merely speculative but worth exploring.
In sum, our theory predicts that if x backed P z , which was backed by w, then x is more likely to be backed by w in the future. As a corollary, if x backed P z and a founder z has many backers, then x can gather more funding.
A. Data Collection and Organization
Crowdfunding consists of four categories [17]: donations [18], rewards [4][5] [19], debts [20], and equity [21] [22]. Donations do not give backers any returns, whereas rewards, debts, and equity give backers non-monetary returns (e.g., a product or a thank you letter), dividends, and interest, respectively. Donations and rewards represent an essential aspect of crowdfunding because they have nothing to do with money. In the study, we collected data for these crowdfunding categories, particularly from Readyfor, Japan's largest crowdfunding site. The data were collected from the site's activities spanning from May 16, 2011 to September 5, 2017. The data are organized into the following two classes: user data and project data. The user data were collected from 306,968 users. Several variables were included such as user ID, username, and user description. While users can be founders, they are not necessarily founders in this case; the majority of Readyfor users only provide backing and do not launch their own projects. We will refer to founders as the "community," and users who only back others' projects as the "crowd." The project data consists of 6,559 different projects. The project information includes project ID, founder ID, project category, who commented on the project and when, and project deadline. Comment information is critical because it can be used to identify backing relationships and the dates of these links. In Readyfor, it is a norm to leave comments if you back a project. Since the commenting time is recorded, this can be used as a proxy for the time of the occurrence of the backing event. Project deadline is another valuable source of information. It specifies when the call for project funding ended. The problem is that it is impossible to specify the starting time of a project. As a proxy for this, we used the time of the first comment on a project as its starting time.
By combining user and project data, we obtained a bipartite network whose nodes are projects and users and whose edges are backing relations from users to projects, which is shown in Fig. 3. The number of backer nodes is 203,568 (this is smaller than the total number of users because the latter includes users who do not back at all). The number of project nodes is 6,559, and the number of edges is 279,676.
There are two points to note in this case. First, project nodes have their founders. This implies that the user-project relationship also represents the user-founder relationship. Second, this is longitudinal network data; that is, edges have time stamps, and project life spans (from cut off to deadline) are also recorded. This time information is critical for our analysis.
B. Basic Statistics and Preliminary Analysis
The bipartite network created from backing relations in Readyfor had a very skewed degree distribution, as expected. The basic statistics of the project in-degree distribution are shown in Table I and Fig. 4. The mode of this distribution is 0. Specifically, 411 projects received no backing throughout their life spans. On the other hand, less than 25% of the projects obtained support from over 50 different backers, with the highest number reaching 1,402. This shows the skewness of the project in-degree distribution.
The user out-degree distribution was much more skewed, as shown in Table II. In fact, about 85% of backers (172,232 backers) backed someone's project only once. The number of two-time and three-time backers was 19,006 and 5,705, respectively. These numbers account for more than 96% of the backers. Very few people backed projects more than ten times (approximately 0.4% of the total backers).
As the next step, we examined whether there was any prima facie evidence of the buddy effect. A buddy relation is observed when the following two conditions hold.
1. Both x and w backed P z . The backer that backed first (x or w) is irrelevant.
2. After condition 1 is satisfied, w backs P x .
We can calculate the buddy ratio by dividing the number of case 2s by the number of case 1s.
Buddy ratio = (number of cases where w backs P x ) / (number of cases where both x and w backed P z )
If the buddy ratio in Readyfor is significantly high, then it can be considered as prima facie evidence for the existence of the buddy effect in this platform.
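Given a table of backing events, the two counts in this ratio can be computed directly. A simplified sketch is shown below with a tiny hypothetical event list; for brevity it ignores the time ordering required by condition 2, which the real analysis enforces.

```python
from collections import defaultdict

# Hypothetical backing events: (backer, project, founder_of_project, time).
events = [
    ("x", "Pz", "z", 1), ("w", "Pz", "z", 2),   # x and w both back z's project
    ("w", "Px", "x", 5),                        # w later backs x's own project
    ("v", "Pz", "z", 3),
]

backers_of = defaultdict(set)
founder_of = {}
projects_by = defaultdict(list)
for backer, project, founder, _t in events:
    backers_of[project].add(backer)
    if project not in founder_of:
        founder_of[project] = founder
        projects_by[founder].append(project)

pairs = 0      # cases where both x and w backed some P_z (x also founds P_x)
closures = 0   # of those, cases where w backs x's project P_x
for pz, backers in backers_of.items():
    for x in backers:
        if x == founder_of[pz]:
            continue
        for px in projects_by.get(x, []):
            if px == pz:
                continue
            for w in backers - {x}:
                pairs += 1
                closures += w in backers_of[px]

print("buddy ratio:", closures / pairs if pairs else float("nan"))
```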
First, we calculated the denominator of the buddy ratio, that is, the number of cases where both x and w backed P z . The mean was 106.5. It must be noted that we excluded the 0-cases in calculating the mean because when the denominator is 0, the buddy ratio cannot be calculated. This means that if you (x) are not the only backer of someone's (z's) project P z , there are, on average, 106.5 other backers (w) of P z that are potential candidates for backing your project P x through the buddy relation. In the real data, 1.7 out of these 106.5 people backed P x , on average. Thus, the buddy ratio is 0.031. Fig. 6 shows a typical example of a buddy relation. Y and W are the sets of backers of P x and P z , respectively. P x is backed by the backers in Y ∩ W, who also backed P z , which was in turn backed by x. Although the buddy relation appears to exist here, it is difficult to determine whether the raw ratio 0.031 indicates the existence of the buddy effect. The number may be generated through a purely random process. If w has a large out-degree, then w may back P x randomly without knowing that x backed the same P z as itself. In that case the relation is created purely accidentally and is not considered a genuine buddy relation. To exclude this possibility, a more sophisticated method must be employed.
IV. METHOD
To confirm whether the buddy effect works, we conducted a conditional uniform graph (CUG) hypothesis test [10]. In a CUG test, the null hypothesis is that the observed graph is generated uniformly at random, conditional on assumed properties of nodes and edges. Under this hypothesis, many simulated graphs are generated via Monte Carlo simulation. Subsequently, the statistics of the observed graph and of the simulated graphs are compared to examine whether there are any significant differences.
For our null hypothesis, we must erase only the fact that w backed P z while preserving backers' tendencies to back and projects' tendencies to attract backings. First, to preserve the backers' tendencies, backers' out-degrees were fixed; if backers a, b, and c backed 4, 8, and 2 times, respectively, in the observed network, then their out-degrees in the simulated network should remain the same at 4, 8, and 2, respectively. Second, to preserve the projects' tendencies, backers choose projects to fund according to the "popularity" of the projects. The popularity in this sense is defined as the number of backers the project gathers in the real world. Thus, if project P a gathered more backing than project P b in the real world, project P a is more likely to be chosen as a backing destination than project P b . More details concerning this condition are provided below.
The Monte Carlo simulation runs as follows. As we shall see below, all edges in the bipartite graph are randomly rewired according to the conditions mentioned above. Let us define the edge from backer a to project P b as e ab , which has its own time stamp. Next, we define the set of candidate projects that are targets for rewiring as CS, with elements P k . The actual edge e ab is rewired, and a new edge e ak (with P k ∈ CS) is created in the simulated network. P k is chosen probabilistically from CS, the set of candidates. CS is subject to the time constraint that all projects that exist when backer a backed project P b are assigned as candidates. In other words, the life span of a candidate should include the creation time of e ab . The assumption here is that backers would choose among the candidate projects that were running simultaneously with project P b at the time when backer a backed it in the real world. Thus, the number of elements of CS as rewiring targets is much smaller than the total number of project nodes in the graph.
How is P k chosen from CS? This process is formalized in terms of the popularity of projects. The in-degree of P k is denoted as d k ; d k is the number of backers that project P k gathered in the real world. In the simulation, the backer chooses the backing destination probabilistically according to a categorical distribution, where x k is an element of a K-dimensional indicator vector whose value is 0 or 1, with Σ k x k = 1. In other words, x k is the indicator of P k being chosen as the backing destination.
p k is the probability of P k being chosen, which is defined as p k = d k / Σ j∈CS d j . In other words, P k is chosen with a probability equal to its in-degree divided by the sum of the in-degrees of all candidates for the rewiring targets.
Let us explain the procedure using a hypothetical example illustrated in Fig. 7. Consider rewiring the edge from a to P b . This backing occurred on July 1, 2016. Assume that the set of candidates for the rewiring targets consists of P i , P j , P k , and P l . It must be noted that the life span of every candidate project includes the date when the backing from a to P b occurred. The rewiring probability is determined in terms of each in-degree; that is, the rewiring probability for P i is 4/25, for P j is 7/25, for P k is 12/25, and for P l is 2/25. In this case, the edge is rewired from a to P k . In sum, this simulation satisfies the two conditions discussed earlier. First, since each actual edge is rewired one by one, the out-degree distribution remains unchanged. Second, the rewiring probability follows the categorical distribution, reflecting the popularity of each project. We conducted this simulation 100 times.
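The rewiring step can be written compactly. The sketch below rewires each observed edge to a popularity-weighted candidate whose life span contains the edge's time stamp; the data structures are hypothetical stand-ins for the Readyfor tables, with the same example numbers as in Fig. 7.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical project table: real-world in-degree (popularity) and life span.
projects = {
    "Pi": {"indeg": 4,  "span": (0, 10)},
    "Pj": {"indeg": 7,  "span": (2, 12)},
    "Pk": {"indeg": 12, "span": (1, 9)},
    "Pl": {"indeg": 2,  "span": (3, 11)},
}

def rewire(edge_time):
    """Pick a rewiring target among projects running at edge_time,
    with probability proportional to real-world in-degree."""
    candidates = [p for p, info in projects.items()
                  if info["span"][0] <= edge_time <= info["span"][1]]
    weights = np.array([projects[p]["indeg"] for p in candidates], dtype=float)
    return rng.choice(candidates, p=weights / weights.sum())

# Rewire every observed edge once; out-degrees are preserved because each
# backer's edges are rewired one by one, never added or removed.
observed_edges = [("a", "Pb", 5), ("a", "Pk", 7), ("b", "Pi", 4)]
simulated_edges = [(backer, rewire(t), t) for backer, _proj, t in observed_edges]
print(simulated_edges)
```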
V. RESULTS
For each simulated network, we calculated the buddy ratio defined above. The distribution of the ratio over 100 trials is shown in Fig. 8. The mean of the ratio is 0.0011.
The actual buddy ratio is 0.031, so the probability of observing a ratio this large under the null hypothesis, that is, under the above two conditions, is less than 0.01. Thus, it was concluded that the buddy effect exists in the crowdfunding site Readyfor.
VI. DISCUSSION
This study proposed the buddy relation as a more complex structure than a dyad to explain the facilitation of backing for projects. As a similar work, [8], focusing on the transitive triplet, which is a type of hierarchical network (Fig. 9), showed that such triplets occur significantly more often than expected by chance in the network among founders of Kickstarter, the leading crowdfunding site in the world. Although the transitive triplet and the buddy relation appear to have similar structures (Fig. 9), the two are different in the following ways. First, all nodes of the transitive triplet, x, z, and w, are founders, while the w nodes in the buddy relation are pure backers. Second, the edges of the triplet, xz, wz, and wx, have no ordering in time because the network data were aggregated over time, while the order of the edges in the buddy relation is that wx occurs after xz and wz. Finally, this study showed fine micro-structures among founders and backers, which reflect a causal relationship. There are some limitations to the present study. First, the study focused on a single platform; thus, the generalizability of the finding is open for debate. A cross-comparison of different crowdfunding platforms is a possible field for future research. Second, the study found that the buddy relation worked effectively, but this does not exclude the possibility that other social mechanisms are also effective in gathering funding. For our next study, we aim to consider other mechanisms and compare their strengths with the buddy effect. Finally, although we showed that the buddy relation increased the probability of receiving backing, which indicates that it may contribute to the success in gaining funds eventually, direct evidence for this possibility was not provided. This should be demonstrated through future research.
VII. CONCLUSION
Crowdfunding can be a significant force of social change in modern society. Generally, cultural, environmental, and welfare issues have difficulty gaining funding because their marketability is poor (the failure of the market), and they do not serve as vote-gathering mechanisms (the failure of government). Conversely, besides marketing (self-aid) and governmental aid (public aid), crowdfunding serves to provide funding to social entrepreneurs as a form of mutual aid. As a result, crowdfunding can be regarded as a new institution responsible for the redistribution of resources; thus, crowdfunding has a high social significance. However, the transfer of money between people who have weak or no existing relations with each other is quite difficult. To establish trustworthy relations among people beyond direct reciprocity, both the triadic and focal closure conditions need to be present. The performance of a joint activity to support a project and the existence of mutual friends provide opportunities for the creation of sympathetic feelings and trust building. We found evidence that such an essential dynamic, the buddy effect, exists and helps founders to gain funding for their projects. However, the ratio of 0.031 is quite small. The next generation of crowdfunding should facilitate the establishment of the buddy relation.
Detectability of Microwave Background Polarization
[NOTE: Previous versions of this paper (both on astro-ph and published in Phys. Rev. D) contain results that are in error. The power spectra C_l were normalized incorrectly by a factor of 2 pi. All observing times in detector-years in those versions are too large by a factor of 2 pi. The main place these numbers appear is on the vertical axes of Figures 4 and 5. Note that because all calculations were based on the same power spectra, all conclusions pertaining to comparisons of different techniques remain unchanged. This error has been corrected in the present version of the paper. An erratum is being sent to Phys. Rev. D. I apologize for the error.] Using a Fisher-matrix formalism, we calculate the required sensitivities and observing times for an experiment to measure the amplitudes of both E and B components as a function of sky coverage, taking full account of the fact that the two components cannot be perfectly separated in an incomplete sky map. We also present a simple approximation scheme that accounts for mixing of E and B components in computing predicted errors in the E-component power spectrum amplitude. In an experiment with small sky coverage, mixing of the two components increases the difficulty of detecting the subdominant B component by a factor of two or more in observing time; however, for larger survey sizes the effect of mixing is less pronounced. Surprisingly, mixing of E and B components can enhance the detectability of the E component by increasing the effective number of independent modes that probe this component
density perturbations produce only the E component, leaving the B component as a clean probe of subdominant sources of polarization such as tensor perturbations. 2 Given a full-sky polarization map, it is possible to separate the E and B components perfectly; with incomplete sky coverage, however, there is inevitably some cross-contamination between the components. This naturally makes detecting the B-component more difficult, as it is in danger of being swamped by the (typically much larger) E-component. In this paper, we will examine the experimental requirements to detect both E and B-component polarization signals in a degree-scale experiment, accounting for this cross-contamination.
One approach to separating E from B is to observe in a circular ring [24,25,26]. The separation of components is particularly clean in this case, but a strategy involving a two-dimensional map is likely to be much better for measuring power spectrum amplitudes [27], as many more independent modes at a given scale can be found in the data. Attention has therefore been paid to finding normal modes that minimize the complications due to E-B cross-contamination in a two-dimensional map [27,28]. In the present paper, we adopt a more straightforward approach: we consider the likelihood function of a polarization map in pixel space and compute the Fisher matrix for the normalizations of both E and B power spectra. Since this "brute-force" approach is based on the likelihood function of the full data, it must be at least as good as (i.e., give as small error bars as) any method based on an expansion in normal modes.
The remainder of this paper is organized as follows. In Section II, we review some properties of polarization maps and illustrate the difficulties in splitting a partial-sky map into E and B components. In Section III, we present the Fisher-matrix formalism we will use to determine the detectability of the two components in a given experiment and also present two simple approximation schemes for determining the detectability. Section IV presents our results, and Section V contains a brief discussion.
II. THE E-B DECOMPOSITION
In this section we describe the nature of the E-B decomposition of a polarization field. This description makes no pretense of completeness; much more information on this subject can be found in the literature. See in particular [29,30] and references therein.
A. The flat-sky approximation
Polarization is a spin-2 quantity (i.e., it is invariant under 180° rotations). The linear polarization expected to be found in the CMB is described by the two Stokes parameters Q and U, which are related to the magnitude P and direction φ of the polarization as follows:
Q = P cos 2φ, (1)
U = P sin 2φ. (2)
There are several ways to describe the division of a polarization map into E and B components. One simple way is to examine a small patch of the sky, which can be well approximated as flat. In this approximation, an E-component polarization field is one whose Stokes parameters satisfy the following equation:
(∂_x² − ∂_y²) U − 2 ∂_x ∂_y Q = 0, (3)
while a B-component field satisfies this equation:
(∂_x² − ∂_y²) Q + 2 ∂_x ∂_y U = 0. (4)
(These equations are simply the spin-2 analogues of the equations ∇ × v = 0 and ∇ · v = 0 for the scalar and pseudoscalar components of a vector field.) An arbitrary polarization field can always be written as a sum of an E and a B component. One easy way to see this is to work in Fourier space. If we consider a mode with spatial dependence e^{ik·x}, then equation (3) implies that the E component has a polarization that is always parallel or perpendicular to k. In terms of the Stokes parameters, it can be written
(Q, U) ∝ (cos 2φ_k, sin 2φ_k) e^{ik·x}, (5)
where φ_k is the angle k makes with the x axis. Similarly, a B-mode has a polarization direction that is always at a 45° angle to k. It can be obtained by simply rotating the polarization of an E mode through 45° at each point:
(Q, U) ∝ (−sin 2φ_k, cos 2φ_k) e^{ik·x}. (6)
To decompose an arbitrary map into E and B components, therefore, we simply take the Fourier transforms of Q and U and then, for each wavevector, project (Q, U) onto the axes defined by equations (5) and (6). Of course, to perform this decomposition, we need to know Q and U over all space. If we only know Q and U over a finite region, the Fourier transform and hence the E-B split will depend on the boundary conditions we assume. Figure 1 shows Gaussian "hot spots" for both the E and B components. Note that the B component, unlike the E component, has handedness; this is the reason that density perturbations, which lack handedness, cannot excite this component. Of course, we could generate another pair of E, B maps by replacing (Q, U) by (−Q, −U), which is equivalent to rotating the polarizations in these plots through 90° at each point.
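The Fourier-space projection described above is only a few lines of code. A flat-sky sketch is given below; the input maps are synthetic stand-ins (a pure E-mode built from a random scalar field), and the sign conventions follow equations (5) and (6) as reconstructed here.

```python
import numpy as np

def qu_to_eb(Q, U):
    """Flat-sky E/B decomposition of periodic Q, U maps via FFT."""
    ny, nx = Q.shape
    kx = np.fft.fftfreq(nx)[None, :]
    ky = np.fft.fftfreq(ny)[:, None]
    phi_k = np.arctan2(ky, kx)           # angle of k with the x axis

    Qk, Uk = np.fft.fft2(Q), np.fft.fft2(U)
    Ek = Qk * np.cos(2 * phi_k) + Uk * np.sin(2 * phi_k)
    Bk = -Qk * np.sin(2 * phi_k) + Uk * np.cos(2 * phi_k)
    return np.fft.ifft2(Ek).real, np.fft.ifft2(Bk).real

# Build a pure E-mode (Q, U) pair by rotating a random scalar field in
# Fourier space; decomposing it should return (numerically) zero B.
rng = np.random.default_rng(0)
npix = 64
scalar = rng.normal(size=(npix, npix))
kx = np.fft.fftfreq(npix)[None, :]
ky = np.fft.fftfreq(npix)[:, None]
phi_k = np.arctan2(ky, kx)
fk = np.fft.fft2(scalar)
Q = np.fft.ifft2(fk * np.cos(2 * phi_k)).real
U = np.fft.ifft2(fk * np.sin(2 * phi_k)).real

E, B = qu_to_eb(Q, U)
print("rms E:", E.std(), " rms B (should be ~0):", B.std())
```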
To illustrate the ambiguity in the E-B decomposition of an incomplete data set, suppose that we have observed only one quadrant of the E-component hot spot, as shown in the left panel of Figure 2. This is of course a pure E polarization field: it satisfies equation (3) at every point. However, it is also the sum of the E and B polarization fields shown in the center and right panels. These fields were produced by Fourier transforming the region covered by the data with periodic boundary conditions and splitting each Fourier mode into E and B parts according to equations (5) and (6). There are infinitely many other ways the decomposition could have been done; for instance, the data could have been padded with zeroes on all sides before Fourier transforming.
In this particular example, the "contamination" of a pure E component by B is not a small effect: the r.m.s. amplitude of the B component shown in Figure 2 is 0.87 times that of the E component. This is because the map being decomposed has a large amount of power on scales of order the width of the map. In general, we can expect a large amount of cross-contamination between E and B on the largest scales probed by a given data set, with much less contamination on smaller scales. The reason for this is simple: the ambiguity in the E-B decomposition is a result of our ignorance of the boundary conditions to impose on the two components, so the natural length scale associated with E-B mixing is the width of the map.
B. Beyond the flat-sky approximation
Although the flat-sky formulae are in general simpler to work with, for many applications we need to consider the exact, full-sky formulae. In this section we will very briefly summarize this formalism; see [21,22,27,28] for further details and useful identities.
Since polarization is a spin-2 quantity, the natural basis functions to use in expressing the Stokes parameters (Q, U) on the sphere are the spin-2 spherical harmonics. Specifically, we can write
(Q ∓ iU)(n̂) = Σ_{lm} a_{∓2,lm} ∓2Y_{lm}(n̂),
where ∓2Y_{lm} is a spin-2 spherical harmonic. Detailed information on the spin-2 spherical harmonics can be found in the sources cited above. For our purposes, all we need to know is that decomposing a polarization field into E and B components is quite simple in the spherical harmonic basis: the coefficients a_{∓2,lm} in the expansion can simply be written
a_{∓2,lm} = E_{lm} ± i B_{lm}.
This, combined with the reality condition a*_{−2,lm} = (−1)^m a_{2,l−m}, allows one to determine E_{lm} and B_{lm} from the spherical harmonic coefficients.
If we assume that the polarization is a realization of a statistically isotropic random process (i.e., that there is no preferred direction), then the ensemble averages ⟨E_{lm}⟩ and ⟨B_{lm}⟩ must both vanish, and the covariances must satisfy
⟨E_{lm} E*_{l'm'}⟩ = C_l^E δ_{ll'} δ_{mm'},   ⟨B_{lm} B*_{l'm'}⟩ = C_l^B δ_{ll'} δ_{mm'}.
If in addition the random process is parity-invariant (lacks handedness), then
⟨E_{lm} B*_{l'm'}⟩ = 0.
Furthermore, if the random process is Gaussian, then the two power spectra C_l^E and C_l^B form a complete description of the random process. (There is also the cross-correlation between the E component and the temperature anisotropy; we choose to focus exclusively on polarization data in the present paper, ignoring temperature data.) These power spectra are therefore the only thing a Gaussian theoretical model needs to predict about CMB polarization. Fortunately, theoretical models are capable of computing predicted power spectra with great precision and speed [21,22], for instance using the publicly available CMBFAST software [31].
Of course, actual observations always involve convolving the true polarization field with the telescope beam. As long as the beam is azimuthally symmetric and purely co-polar, this results in a simple replacement of C_l^{E,B} with the beam-smoothed spectra, i.e., C_l^{E,B} multiplied by the square of the harmonic-space beam window function.
Throughout this paper, we will use C_l^{E,B} to denote the beam-smoothed power spectra. In the next section, we will consider the analysis of data from a hypothetical CMB polarization experiment. The key ingredients in the analysis are the real-space correlations between measurements at different points, ⟨Q(x_1)Q(x_2)⟩, ⟨U(x_1)U(x_2)⟩, and ⟨Q(x_1)U(x_2)⟩, which can be expressed as sums over the power spectra. Specifically, if Q and U are defined with respect to coordinate axes such that the x axis joins the two points, then the correlations can be written as sums over l of the power spectra weighted by two functions F_{1l} and F_{2l}, which can be expressed in terms of Legendre functions.
In practice, we often wish to know the correlations in some other coordinate system. Since we know how (Q, U ) transforms under rotations, we can easily get these correlations by applying the appropriate rotation matrices to the correlations given above. (A pleasingly explicit recipe for doing this can be found in [27].)
A. Likelihoods and Fisher Matrices
The Fisher information matrix provides a useful way to quantify the ability of a data set to estimate parameters. It has been applied to great effect in the study of CMB temperature anisotropy (e.g., [32]) and also to CMB polarization studies [27,33]. In this section, we show how the Fisher matrix can be used to calculate the significance with which the amplitudes of the E and B power spectra can be measured from a polarization map.
Suppose we have made maps containing Q and U measurements at N pixels. The pixel locations are x_1, ..., x_N. Our 2N data points can be written
d_{Qi} = Q(x_i) + ε_{Qi},   d_{Ui} = U(x_i) + ε_{Ui}.
Here i ranges over pixels in the map, and ε_{Qi} (ε_{Ui}) is the noise in the ith pixel of the Q (U) map. We will assume uncorrelated noise:
⟨ε_{Ai} ε_{Bj}⟩ = σ²_{Ai} δ_{AB} δ_{ij}. (20)
Here A and B range over {Q, U}, and σ²_{Ai} is the noise variance in pixel i of map A. We will arrange our data points into a data vector d. The 2N × 2N data covariance matrix is defined to be M = ⟨d d^T⟩. We can label elements of the data vector with a pair of indices iA, with i ranging from 1 to N and A being either Q or U. Then a typical element of the covariance matrix is M_{iA,jB}. The covariance matrix contains contributions from signal and noise, M = M_S + M_N, with the noise contribution diagonal, (M_N)_{iA,jB} = σ²_{Ai} δ_{AB} δ_{ij}, and the signal contribution built from the correlation functions. As described in the previous section, the correlation functions w_Q, w_U, w_X can be expressed as sums over the power spectra.
A theory predicts a pair of power spectra C^E_l and C^B_l and hence a covariance matrix M. The likelihood of a theory is the usual multivariate Gaussian in the data vector d with covariance M. It is more convenient to work with the quantity L ≡ −2 ln(likelihood), which (up to an irrelevant constant) can also be written in the following convenient form: L = Tr[ln M + M^−1 dd^T]. (The logarithm of a matrix is as usual defined via the Taylor series, or equivalently by diagonalizing the matrix and taking the logarithms of its eigenvalues.) Now suppose that we are considering a class of theories that contains P unknown parameters α_1, . . . , α_P, and suppose for simplicity that the covariance matrix is linear in these parameters, so that we can write M in the form of equation (33), with matrices M^(p) multiplying the parameters. Here M^(0) represents the (unknown) true covariance matrix. In other words, the true values of the parameters have been taken to be one. Our ability to measure parameters will be determined by how sharply peaked the likelihood is about its maximum, so we perform a Taylor expansion in L to determine this. If we let ∂_p stand for ∂/∂α_p, then the first derivative ∂_p L can be computed directly. Note that in the ensemble average, ⟨dd^T⟩ = M^(0), so ⟨∂_p L⟩ = 0 when all parameters are equal to one. The quantity that characterizes the sharpness of the likelihood peak is of course the second derivative, so we must plunge ahead and differentiate again. Let us take an ensemble average of this quantity and evaluate it at M = M^(0) (which is both the true value and the ensemble-average maximum-likelihood location). Then the result defines the quantities F_qp, which are the elements of the P × P Fisher matrix F. They tell us the expected uncertainty with which a parameter α_p can be determined from a likelihood analysis. Specifically, if all parameters except the pth are known a priori, then α_p will be determined with an expected error of 1/√(F_pp). If on the other hand all parameters are unknown, then the uncertainty in α_p is √((F^−1)_pp). Furthermore, these expected uncertainties are the smallest that can be obtained from this data set by any unbiased data analysis method. We will be interested in determining whether the E- and B-component polarizations can be detected by a given experiment. Let us make an admittedly optimistic assumption: suppose that the shapes of the power spectra C^E_l and C^B_l are known but the amplitudes are not. Then we will have two parameters α_E and α_B, such that C^E,B_l = α_E,B Ĉ^E,B_l, where Ĉ^E,B_l represents the true power spectrum. Then the matrices M^(E) and M^(B) that appear in equation (33) are simply the parts of M (and in particular M_S) proportional to C^E_l and C^B_l. We will say that the E (B) component is detectable if the parameter α_E (α_B) can be measured with an expected uncertainty considerably less than one. Specifically, if α_E has an expected uncertainty of 1/q, then we can expect to measure the amplitude of C^E_l with a signal-to-noise ratio of q. So if, say, we are interested in knowing whether a particular experimental design will provide a 3-sigma detection of the E component, we simply compute the 2 × 2 Fisher matrix F and determine whether [(F^−1)_EE]^−1/2 > 3.
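To make the bookkeeping concrete, here is a minimal numerical sketch (not from the paper) of the two-parameter Fisher calculation. It assumes the standard Gaussian-likelihood result F_pq = (1/2) Tr[M^−1 M^(p) M^−1 M^(q)], with M_E and M_B the parts of the signal covariance proportional to C^E_l and C^B_l and M_N the noise part; these names are placeholders.

import numpy as np

def fisher_2x2(M_E, M_B, M_N):
    # 2x2 Fisher matrix for the E and B amplitude parameters, assuming the
    # fiducial covariance is M = M_E + M_B + M_N and the standard result
    # F_pq = 0.5 * Tr[M^-1 M^(p) M^-1 M^(q)]
    M = M_E + M_B + M_N
    Minv = np.linalg.inv(M)
    parts = [M_E, M_B]
    F = np.zeros((2, 2))
    for a in range(2):
        for b in range(2):
            F[a, b] = 0.5 * np.trace(Minv @ parts[a] @ Minv @ parts[b])
    return F

def amplitude_snr(F):
    # expected signal-to-noise on alpha_E and alpha_B when both are unknown
    return 1.0 / np.sqrt(np.diag(np.linalg.inv(F)))

With this convention, amplitude_snr(F)[1] > 3 corresponds to a 3-sigma detection of the B-component amplitude, mirroring the criterion described above.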
B. The JKW Approximation
In the next section, we will present experimental requirements for detecting the E and B components, using the formalism described above. This process is somewhat laborious, since it involves manipulating 2N × 2N matrices, so simpler methods are clearly desirable. One very useful such approximation has been provided by Jaffe, Kamionkowski, & Wang (hereinafter JKW) [33]. Similar results may be found in [25]. In this section, we present a brief heuristic "derivation" of the JKW approximation.
First, consider an experiment in which the N pixels cover the entire sky uniformly. Let us also suppose that the noise level σ² is the same in each pixel of the Q and U maps. (In other words, σ² is the same as σ²_Ai of equation (20) and is assumed to be the same across all pixels of both maps.) The analysis of this experiment is quite simple: we can estimate each coefficient a_±2,lm, and hence each E_lm and B_lm, independently by exploiting the orthonormality of the spherical harmonics over the whole sphere. Each coefficient will have a noise variance set by the pixel noise σ and the number of pixels N (equation (35)). It is convenient to define the weight w of a set of Q and U maps to be the sum of the inverse noise variances over all pixels of both maps. (Beware: this definition differs by a factor 4π from the weight as defined in JKW. While we're on the subject, note that the polarization power spectra in JKW and also in Ref. [22] differ by a factor of two from those found elsewhere in the literature and in this paper.) The results of this section depend on the assumption of uniform noise, so w = 2N/σ². Equation (35) can therefore be rewritten in terms of the weight. If we wish to estimate the E-component power spectrum C^E_l, therefore, we have at our disposal a set of independent Gaussian random numbers Ê_lm (estimators of the true coefficients E_lm) with zero mean and variances C^E_l + 8π/w. Since all the variables are independent, the likelihood has the usual Gaussian form.
If we assume as before that only the overall normalization of the power spectrum is unknown, we can determine its expected uncertainty just as in the previous section, by computing the Fisher matrix (which is diagonal for a full-sky experiment). Differentiating the above expression for L twice, we find that the uncertainty in the parameter α_E is given by equation (40). Here SNR stands for "signal-to-noise ratio." An experiment with an SNR of, say, 3, is one in which we would expect to measure the power spectrum normalization with an accuracy of three sigma. Of course, an identical relation applies to α_B. This result has a very simple interpretation [25]. The quantity wC^E_l/8π is the square of the signal-to-noise ratio for a single mode at multipole l. Modes for which this quantity is much greater than unity contribute 1 to the sum, while modes for which it is much less than one contribute nothing. Therefore, what this formula says, roughly, is that the square of the signal-to-noise ratio of the power spectrum amplitude is equal to 1/2 the number of modes that are detected with high signal-to-noise. Now consider an experiment that only covers a fraction of the sky f_sky. The number of independent modes that can be detected will of course be reduced by a factor f_sky. On the other hand, the noise variance for each mode will be reduced by the same factor, since the total weight of the experiment is concentrated in a smaller area. We might guess, therefore, that the generalization of equation (40) to the case of partial sky coverage is obtained by scaling both the mode count and the per-mode noise by f_sky. Equation (41) is the JKW approximation (compare to equation (1) of JKW). A question arises as to the lower limit of the sum. JKW advocate starting the sum at l_min = 180°/L, where L is the survey size, on the grounds that modes with smaller l cannot be probed. It can be argued, though, that the sum should begin at the lowest possible l (namely 2). After all, the fact that modes with low l are not probed is already accounted for by the inclusion of the prefactor f_sky. The effective number of independent modes below some multipole l_0 is approximately f_sky l_0², which is better approximated by f_sky Σ_{l=2}^{l_0} (2l + 1) than by f_sky Σ_{l=l_min}^{l_0} (2l + 1). The choice made makes little difference to the final results (at most about 20%, and usually much less, for the results to be shown below), so for consistency we follow JKW's prescription.
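The JKW sum is easy to evaluate numerically. The sketch below assumes the structure described above (a per-mode squared signal-to-noise of wC_l/(8π f_sky), a prefactor f_sky, and a factor 1/2 per mode); the exact prefactors should be checked against equation (41).

import numpy as np

def jkw_snr2(C_l, w, f_sky, l_min=2):
    # squared signal-to-noise on a power-spectrum amplitude, JKW-style
    # C_l   : array of beam-smoothed power spectrum values, indexed by l
    # w     : total weight of the experiment (2N/sigma^2 for uniform noise)
    # f_sky : observed sky fraction
    ls = np.arange(l_min, len(C_l))
    per_mode = 1.0 / (1.0 + 8.0 * np.pi * f_sky / (w * C_l[ls]))
    return 0.5 * f_sky * np.sum((2 * ls + 1) * per_mode ** 2)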
Of course, the JKW approximation does not account for the mixing of E and B modes due to incomplete sky coverage. We therefore expect it to overestimate the detectability of the subdominant B component, since some modes that are used to estimate B will be swamped by contamination from the larger E component. As we will see below, this is indeed the case. Perhaps more surprising is the fact that the JKW approximation sometimes underestimates the detectability of the E component. In the next subsection, we will consider an enhanced version of the JKW approximation that sheds some light on the reason for this.
C. A toy model of E-B mixing
In this section, we will consider an experiment that covers a square patch of sky that is small enough to permit the use of the flat-sky approximation. Suppose as usual that the sky polarization is a Gaussian random field p = (Q, U).
It can of course be written as a Fourier transform, and each Fourier component can be split into an E and a B part. The assumption that the polarization is a realization of a homogeneous, isotropic, parity-invariant Gaussian random field means that Ẽ and B̃ are Gaussian random variables with zero mean and covariances set by power spectra C_E and C_B that depend only on the magnitude of k. The factor of (2π)² is inserted for consistency with standard normalization conventions: C_E,B(k) ≈ C^E,B_l with l = k. Now, suppose that the polarization is measured over a square patch of sky of area L². We might choose to analyze such a data set by expanding the observed region in a Fourier series with coefficients ã(q), where q = (2π/L)(n_x, n_y) for integers n_x and n_y. These Fourier series coefficients are related to the true Fourier transform in the usual way, through a window function W(k − q). In other words, each mode ã(q) probes a range of k values of width ∼ L^−1 around q. Suppose that we wish to estimate the E and B power spectra from these Fourier coefficients. We might proceed by decomposing each ã(q) into an "E" and a "B" piece, ã_E(q) and ã_B(q). However, since ã(q) contains contributions from a wide range of k's, not all of which are parallel to q, these will not really be purely E or B. In fact, the mean-square value of the supposedly E component involves both C_E and C_B, weighted by cos² 2α and sin² 2α respectively, where cos α = k̂ · q̂. A similar equation holds for the B component. When q is large compared to 1/L, of course, all values of k that contribute significantly to the integral are quite close in direction to q, so sin² 2α ≈ 0 and ã_E(q) really does depend almost entirely on the E component. In other words, mixing of E and B is relatively unimportant on small scales. But for small q, α cannot be taken to be small. In fact, if we assume that the power spectra are approximately constant over the range where the window function is large, we can pull them out of the integral to find ⟨|ã_E(q)|²⟩ ∝ C_E c²_q + C_B s²_q, where c²_q and s²_q are the averages of cos² 2α and sin² 2α weighted by W²(k − q). Similarly, ⟨|ã_B(q)|²⟩ ∝ C_E s²_q + C_B c²_q. We can find these averages by numerical integration. Averaging over the direction of q gives approximate values of c²_q and s²_q as functions of qL. (If we are a bit more sophisticated and taper the edges of the data before Fourier transforming, then the window function W doesn't ring so much, and s²_q can be reduced by ∼ 10-20%. As we will see in the next section, though, this approximation gives surprisingly good results as is.) Now suppose that C_E ≫ C_B. Then for each q with a reasonably large value of s²_q we will have two independent modes that are dominated by E, rather than the expected one: according to equation (53), even the nominal B mode ã_B(q) is mostly E! Since, as we have seen in the previous section, the signal-to-noise ratio is determined by counting the number of modes with high signal-to-noise, this enhances the significance with which α_E can be measured.
In fact, we can modify the JKW approximation to take this into account. Each "nominal E mode" ã_E(q) provides a measurement of the E-mode with mean-square amplitude C^E_l c²_l and mean-square noise 8πf_sky/w + C^B_l s²_l. (Here |q| = l. We are imagining an attempt to measure the E-component, so we treat the B-component part of the signal as if it were noise.) We can therefore replace the noise-to-signal ratio 8πf_sky/(w C^E_l) in equation (41) with the corresponding ratio κ_1l for the nominal E modes, and we should also include a term that accounts for the fact that ã_B(q) can be used to measure the E-mode amplitude, with ratio κ_2l. The final result is equation (57). Although this looks quite messy, the interpretation is fairly simple. κ⁻¹_1l and κ⁻¹_2l are the squared signal-to-noise ratios with which the E-component amplitude can be detected in each mode (counting the B-component contribution as noise). Modes with high signal-to-noise contribute one to the sum, while modes with low signal-to-noise (or low E-signal-to-B-signal, since B-signal is being counted as noise) contribute zero. In the no-mixing limit, s²_l → 0, the κ_2l term doesn't contribute, and the κ_1l term agrees with equation (41). Incidentally, we now face the same question as in the original JKW approximation regarding the lower limit of the sum. As before, it makes relatively little difference whether we start the sum at 2 or at 180°/L. In the results shown below, we have started the sum at 2.
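A corresponding sketch of the enhanced sum, assuming κ_1l and κ_2l are the per-mode noise-to-signal ratios described above (with the B-component power counted as noise). The mixing coefficients c²_l and s²_l must be supplied, e.g. from a numerical average of cos² 2α and sin² 2α over the window function, and the prefactors are assumptions to be checked against equation (57).

import numpy as np

def enhanced_jkw_snr2(C_E, C_B, w, f_sky, c2, s2, l_min=2):
    # enhanced JKW estimate of the squared SNR on the E amplitude;
    # c2, s2 are arrays of the window-averaged cos^2(2a) and sin^2(2a),
    # indexed by l (s2 must be nonzero wherever it is used)
    ls = np.arange(l_min, len(C_E))
    noise = 8.0 * np.pi * f_sky / w
    kappa1 = (noise + C_B[ls] * s2[ls]) / (C_E[ls] * c2[ls])   # nominal E modes
    kappa2 = (noise + C_B[ls] * c2[ls]) / (C_E[ls] * s2[ls])   # nominal B modes
    terms = (1.0 + kappa1) ** -2 + (1.0 + kappa2) ** -2
    return 0.5 * f_sky * np.sum((2 * ls + 1) * terms)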
We can try to use this approximation to find the B-component detectability, but it turns out to give terrible results. This is not surprising: because we can always measure the E component more accurately than the B component, modeling it simply as an unknown source of noise is a poor approximation.
IV. RESULTS
Consider an experiment in which a square patch of sky of length L is observed. We will suppose that the patch is pixelized into an N_side × N_side square array and that both Q and U are measured in each pixel, so that the dimension of our data vector d is 2N²_side. Suppose furthermore that the noise level σ is the same for all of these measurements. The weight of the experiment is then w = 2N²_side/σ². The weight is of course proportional to the observing time: w = N_d t_obs T₀²/s², where N_d is the number of detectors and t_obs is the observing time. T₀ = 2.728 K is the current CMB temperature. Its presence is simply to convert units: we choose to measure the sensitivity s in temperature units but the power spectra in dimensionless ∆T/T units. Instead of quoting weights, therefore, we can simply quote observing times in, say, detector-years at some nominal sensitivity level. In the results below, we will use s = 100 µK s^1/2 as our nominal sensitivity, so our observing times in detector-years will simply be Detector-years ≡ (N_d t_obs / 1 year) (100 µK s^1/2 / s)² = w / (2.35 × 10^16).
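The conversion between weight and detector-years quoted above is easy to verify numerically (a back-of-the-envelope sketch using T_0 = 2.728 K and the nominal sensitivity s = 100 µK s^1/2):

T0_uK = 2.728e6            # CMB temperature in microkelvin
s = 100.0                  # nominal sensitivity in microK * sqrt(s)
year = 3.156e7             # seconds in a year

# weight accumulated by one detector in one year: w = N_d * t_obs * T0^2 / s^2
w_per_detector_year = 1.0 * year * T0_uK ** 2 / s ** 2
print(w_per_detector_year)   # ~2.35e16, matching the conversion in the text

def detector_years(w):
    # observing time, in detector-years at 100 microK sqrt(s), for a given weight
    return w / w_per_detector_year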
We consider a "concordance" ΛCDM cosmological model with parameters chosen for reasonable agreement with the bulk of the available data: h = 0.70, Ω b = 0.041, Ω Λ = 0.7, Ω tot = 1, n = 1. In most of our results, we ignore the effects of reionization by setting the optical depth τ to last scattering equal to zero. We allow the tensor-to-scalar ratio T /S, defined as the ratio of the tensor and scalar temperature power spectra at l = 2, to vary. E and B power spectra for these models are shown in Figure 3.
Once all of the above parameters have been specified, we can compute the E and B component power spectra with CMBFAST, COBE-normalize [31, 34], and smooth with a Gaussian beam σ_beam. We then use the formalism of Section III A to calculate the signal-to-noise ratios for the amplitude of the E and B components. Since we are interested in determining the required experimental parameters for a strong detection of the two amplitudes, we fix the desired signal-to-noise at 3 and solve numerically for the required weights. The results are shown in Figures 4 and 5. For comparison, we also show the result of the JKW approximation and, for the case of the E component, the result of the "enhanced" JKW approximation of equation (57). Although most of our results were calculated with σ_beam = 0.3°, we calculated the T/S = 0.01 power spectra for σ_beam = 0.1° and 0.5° as well to illustrate the effect of beam size on our results.
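The solve-for-the-required-weight step can be mimicked with a simple root find. The sketch below reuses the jkw_snr2 and detector_years helpers from the earlier sketches (the Fisher-matrix version would simply swap in a Fisher-based SNR); the bracketing interval is a guess and all names are placeholders.

import numpy as np
from scipy.optimize import brentq

def required_weight(C_l, f_sky, target_snr=3.0):
    # weight needed for a target SNR on the power-spectrum amplitude
    f = lambda log10_w: np.sqrt(jkw_snr2(C_l, 10.0 ** log10_w, f_sky)) - target_snr
    return 10.0 ** brentq(f, 10.0, 25.0)   # widen the bracket if the root escapes it

# e.g. detection time in detector-years for the B component:
# detector_years(required_weight(C_B_smoothed, f_sky))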
In the Fisher matrix analysis, we analyze maps with several different pixel sizes, as indicated in the figure captions. For the 0.3° beam, for example, the largest pixel size is 0.5°, which is somewhat larger than one would ideally like to use. We use this large pixel size so that we can investigate data sets with large sky coverage without having to invert inconveniently large matrices. As shown in the figures, there is overlap in map size between different pixel sizes, which gives a crude idea of the amount of information being lost when large pixel sizes are used. In general, the bulk of the information is found on scales larger than the pixel scale; for maps with a reasonable number of pixels, the fact that we are undersampling the beam does not appear to affect the results very much.
Note that the JKW approximation underestimates the difficulty of detecting the B component as expected. For small survey sizes, though, it overestimates the difficulty of measuring the E component amplitude. The enhanced JKW approximation provides a better fit in this regime.
The detectability of the E component is quite flat as a function of survey size even for fairly small maps, but that of the B component is not. For the case of a 0.3° beam, the optimal survey size for detecting the B component is 22°, as compared to 17° found via the JKW approximation. For σ_beam = 0.5°, the optimum is 26°, while the JKW approximation would give 19°. For the σ_beam = 0.1° case, the optimum survey size was too large to find with the Fisher-matrix formalism. The JKW optimum for this case occurs at L = 15°, with a detection time of 30 detector-years; it is reasonable to suppose that the true optimum is at even larger scales.
While it may not be necessary for an experiment to achieve the optimum survey size, one would certainly like to stay well to the right of the "knee" in Figures 4 and 5. For the case of a 0.3° beam, the required detection time is relatively flat for L ≳ 12° but rises sharply at smaller L. This suggests that a map of at least 40 × 40 σ_beam-sized pixels should be made in order to measure the B-component amplitude in such an experiment. The experimental weight required for a 3-sigma B-component detection in such an experiment would be such that the B-component signal-to-noise per pixel is approximately unity.
In order to determine whether reionization made a significant difference to our results, we examined a model in which the optical depth to last scattering was τ = 0.3. In order to keep the temperature power spectrum more or less the same, we tilted the spectral index n from 1 to 1.15. As shown in Figure 3, this makes a large difference to the polarization power spectra, but most of the difference is on larger scales than those to which our degree-scale experiments are most sensitive. The required detection times for this model were very similar to those in the no-reionization case: for the E-component, the difference was less than 4%, while for the B-component it ranged from 15% down to 5% over the range of survey sizes shown in Figure 4.
V. DISCUSSION
As we have seen in the previous section, the Fisher-matrix formalism gives a slightly better detectability for the E-component power spectrum amplitude than is predicted by the no-mixing JKW approximation, at least in the case of small survey sizes. The reason for this is that some modes that "should" probe the B component are actually dominated by the E component, increasing the number of independent modes that can be used to estimate the E-component amplitude. The enhanced JKW approximation (57) does surprisingly well at characterizing the E-component detectability, confirming that this "extra mode" explanation is indeed correct.
The main practical consequence of the enhanced E-component detectability is that experiment designers need not be terribly concerned about ensuring large sky coverage: the detection requirements are fairly flat as a function of survey size. For the B component, on the other hand, survey size is quite important. The required observation time rises sharply as the survey size decreases. For a 0.3° beam, the "knee" in detectability occurs at a survey size of about 12°.
As expected, the neglect of mixing in the JKW approximation causes it to underestimate the difficulty of measuring the B component. We might expect this underestimation to depend primarily on the survey size, measured relative to some sort of coherence length of the signal being looked for. In Figure 6, we illustrate this by plotting the ratio of the detection time calculated using the JKW approximation to the Fisher-matrix calculation. On the horizontal axis is plotted the survey size, measured in units of the B-component coherence length. The coherence length is defined by θ_coh = π/l_coh, where the coherence multipole is the weighted average multipole, with weights given by l(2l + 1)C^B_l. (Here C^B_l is the beam-smoothed power spectrum as usual.) All of the models discussed in the last section are plotted here. In general, the JKW approximation becomes good when the survey size is many coherence lengths, but for L ≲ 20 coherence lengths, it underestimates the detection difficulty by a factor of ∼ 2.5.
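For reference, the coherence multipole as defined here can be computed directly from the beam-smoothed B spectrum (a small sketch; the weighting follows the definition in the text):

import numpy as np

def coherence_scale(C_B, l_min=2):
    # coherence multipole and coherence angle (degrees) for a B power spectrum
    ls = np.arange(l_min, len(C_B))
    wts = ls * (2 * ls + 1) * C_B[ls]
    l_coh = np.sum(ls * wts) / np.sum(wts)
    return l_coh, 180.0 / l_coh        # theta_coh = pi / l_coh, expressed in degrees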
It is important to note that the Fisher-matrix formalism used in this paper gives the best possible error bars that can be achieved from a given experiment. Even if, for example, we expand the data set in some set of normal modes that minimize E-B mixing [26,27,28], we cannot do any better than the brute-force likelihood analysis of the entire data set on which the Fisher-matrix formalism is based. This is not to say, of course, that such methods are not useful. They may reduce computation time, provide insight into the nature of the E-B decomposition, and allow filtered real-space maps of the E and B components to be made, for example.
Although the Fisher-matrix results represent the idealized minimum possible error bars, they are likely to be quite close to the errors achievable in real experiments. After all, lossless or nearly lossless methods such as those that have been applied to recent temperature anisotropy measurements [1,2] can be easily adapted to the polarization case. The other idealization that has been made in this analysis is that only the amplitudes, and not the shapes, of the power spectra are unknown. This will of course not be true in a real experiment (although, as long as CMB temperature anisotropy experiments continue to return results consistent with standard models, strong constraints on the shapes will be available); however, for the first few experiments to detect the E and B modes, a band-power estimate of the power spectrum in a few bands will probably be sufficient and may be expected to yield error bars similar in size to those calculated by the Fisher-matrix formalism.
E-B mixing is significant only on scales comparable to the size of the survey, and therefore affects a relatively small number of modes [27,28]. It might therefore be surprising that mixing has as large an effect on the B-component detectability as it does. The main point to remember in this regard is that the modes with wavelengths of order the survey size are always the ones that are detected with the highest signal-to-noise. (After all, the beam-smoothed C B l is invariably a sharply decreasing function of l, whereas the noise has a flat power spectrum.) The loss of these modes to mixing therefore has a disproportionately large effect.
It should be noted that the criterion adopted for "detectability" of a component in this paper is that the amplitude of that component's power spectrum can be measured with small fractional uncertainty. It is possible for a component to be "detected" (i.e., for the null hypothesis that the component is absent to be ruled out) at high significance even if this criterion is not met. For instance, if a single mode is detected with high signal-to-noise, and if that mode is known to be a pure B-mode, with no E-component contamination, then the B-component will have been detected with high significance. However, based on a single mode, the amplitude of the power spectrum cannot be determined with fractional uncertainty less than O(1), so such an experiment would not meet the detectability criterion considered in this paper.
This scenario could easily occur in an experiment designed to measure the Stokes parameters in a thin ring [24,25,26]. Because an accurate characterization of the power spectrum amplitudes will be extremely important in interpreting polarization results, and because a detection of the polarization in many different modes is much more robust than a detection in only a few, we have chosen to adopt the stronger detectability criterion of this paper.
The formalism described in this paper has numerous applications. Although we have considered only square maps, it can of course be used to explore the effects of survey geometry on detectability of the two components. It can also easily be adapted to examine the degree to which inhomogeneous noise alters detectability. One might suppose, for instance, that measuring the edges of the survey with high precision would help in separating the two components, since the modes that are "ambiguous" with respect to the E-B split tend to be supported most strongly near the boundary [28]. Finally, the formalism described herein may prove useful in studies of weak gravitational lensing (e.g., [35,36]): the shear induced by lensing is a spin-2 field, and the mathematics is therefore quite closely analogous to the case of CMB polarization.
Why not? Understanding the spatial clustering of private facility-based delivery and financial reasons for homebirths in Nigeria
Background In Nigeria, the provision of public and private healthcare vary geographically, contributing to variations in one’s healthcare surroundings across space. Facility-based delivery (FBD) is also spatially heterogeneous. Levels of FBD and private FBD are significantly lower for women in certain south-eastern and northern regions. The potential influence of childbirth services frequented by the community on individual’s barriers to healthcare utilization is under-studied, possibly due to the lack of suitable data. Using individual-level data, we present a novel analytical approach to examine the relationship between women’s reasons for homebirth and community-level, health-seeking surroundings. We aim to assess the extent to which cost or finance acts as a barrier for FBD across geographic areas with varying levels of private FBD in Nigeria. Method The most recent live births of 20,467 women were georeferenced to 889 locations in the 2013 Nigeria Demographic and Health Survey. Using these locations as the analytical unit, spatial clusters of high/low private FBD were detected with Kulldorff statistics in the SatScan software package. We then obtained the predicted percentages of women who self-reported financial reasons for homebirth from an adjusted generalized linear model for these clusters. Results Overall private FBD was 13.6% (95%CI = 11.9,15.5). We found ten clusters of low private FBD (average level: 0.8, 95%CI = 0.8,0.8) and seven clusters of high private FBD (average level: 37.9, 95%CI = 37.6,38.2). Clusters of low private FBD were primarily located in the north, and the Bayelsa and Cross River States. Financial barrier was associated with high private FBD at the cluster level – 10% increase in private FBD was associated with + 1.94% (95%CI = 1.69,2.18) in nonusers citing cost as a reason for homebirth. Conclusions In communities where private FBD is common, women who stay home for childbirth might have mild increased difficulties in gaining effective access to public care, or face an overriding preference to use private services, among other potential factors. The analytical approach presented in this study enables further research of the differentials in individuals’ reasons for service non-uptake across varying contexts of healthcare surroundings. This will help better devise context-specific strategies to improve health service utilization in resource-scarce settings. Electronic supplementary material The online version of this article (10.1186/s12913-018-3225-4) contains supplementary material, which is available to authorized users.
Background
Despite ongoing efforts by the Nigerian health system to increase maternal health service utilization, including midwives service schemes, removal of user fees and increasing the involvement of the private sector [1,2], population usage of many life-saving obstetric interventions remains suboptimal. National statistics for 2009-2013 show that, for instance, 22.6% of all births occurred in a public health facility and 13.2% in the private sector, leaving approximately two-thirds of childbirths based outside of a health facility [3]. At the subnational scale, likelihoods for facility-based delivery (FBD) and private FBD vary considerably, and were significantly lower for women residing in parts of the South South zone and the majority of the northern zones [3].
Both in Nigeria and other low-and middle-income countries (LMICs), having a FBD is a practical way to ensure assistance by a skilled birth attendant and access to life-saving interventions for mothers and newborn [4]. Previous reviews addressing factors related to FBD in sub-Saharan Africa and other LMICs have identified an array of determinants [4][5][6][7]. Moyer and Mustafa's literature review, published in 2013, highlighted an overwhelming reliance on population/ survey data with which maternal sociodemographic factors were well-represented [4]. The limited body of literature around community-level factors of FBD in LMICs emphasizes community socio-demographic characteristics, community views on skilled and traditional births [8,9], service accessibility such as distance to care and community uptake of antenatal care [4]. Communities likely have other unique characteristics that influence demand for and supply of healthcare [10], many of which are overlooked.
Unlike other health service seeking, childbirth can happen unexpectedly throughout the day and the woman may need to reach a nearby care provider at relatively short notice. The types of childbirth delivery services most accessible to, or most accessed by, the community directly relate to an individual's perception of, wishes for, and actual uptake of services. Women also exchange information and experience surrounding childbirth in social settings, and one's planning for future delivery may be conditioned by assessing factors important to their peers, culture and community [11,12]. A better understanding of one's healthcare surroundings is imperative to developing effective strategies to increase healthcare utilization among groups currently "left behind". Part of the dearth of research in this area might be due to the lack of suitable data, especially at the national scale.
In a study of the characteristics of health facilities across Nigeria, Nwakeze and Kandala found vast geographic disparities in the country, including greater dominations of lower-level and primary care and private health services in some areas but not others [13]. In addition, despite the Nigerian government's aspiration to provide free/subsidized maternity care in the public sector, some women who stay home for childbirth reported cost or finance as a barrier to using maternity care, among other factors [14,15]. This raises questions regarding current understanding of the factors for service uptake vs. non-uptake in relation to one's healthcare surroundings. In some settings, e.g. where public maternity care is free of charge, it is likely that some of those who stay home for childbirth for financial reasons only considered using private services, the alternative being homebirths (over public care). This speculation might be more pertinent where private FBD is common, due to the potential impact that one's peers and healthcare surroundings have on their reasons for service non-uptake.
The aim of this study is to assess the extent to which cost or finance is a barrier for FBD across geographic areas with varying levels of private FBD in Nigeria. To overcome the limitation of community-level data availability, we present an innovative approach applying geographic information system (GIS) tools to examine the clustering of maternity care utilization using individual-level survey data. This study will help motivate and enable further investigation of the way in which childbirth services frequented by the community influences community members to deliver in or outside a health facility, adding contribution to the current effort to support maternity care utilization for groups and individuals most "left behind".
Data and study sample
This analysis was based on data from the 2013 Nigeria Demographic and Health Survey (NDHS). The data is representative at the national level, of the six geopolitical zones and of the 36 states and the Federal Capital Territory (FCT-Abuja). The survey sample was selected using a stratified multi-stage cluster probability sampling design with census ward as the primary sampling unit. As part of the DHS sampling procedure, all households in each sampled ward were enlisted, which was then used as the sampling frame for household selection [16]. Eligible individuals aged 15-49 in selected households were interviewed with a standardized questionnaire. The final sample of the 2013 NDHS consisted of 896 census wards and 39,902 eligible women; 98% (38,948) were successfully interviewed. Women with a live birth in the five years before the interview were asked to self-report the care received during pregnancy and delivery. The sample of the current analysis was restricted to the circumstances of 20,467 women's most recent live births during the five-year survey recall period, as some of the required data was only collected for this subsample.
Geography and administration of Nigeria
Nigeria is divided into six geopolitical zones (Fig. 1): North Central, North East, North West, South East, South South and South West; and within these zones, into 36 states and the FCT-Abuja. For administrative purposes, the states are subdivided into 774 local government areas [3], each made up of approximately 10-15 wards [17].
Measurement
Population centroids of wards, recorded as latitude and longitude, were obtained by DHS enumerators using Global Position System (GPS) receivers [18]. All individuals residing in a ward therefore have the same georeference. For privacy considerations, the coordinates were randomly displaced by up to 2 km in urban areas and up to 5 km in rural areas by the NDHS. An additional 1% of rural wards were displaced by 10 km.
Delivery location was based on women's answer to: "Where did you give birth to [name of child]?" on the Women's Questionnaire. The major categories of response were domestic environments (home of respondent or of a traditional birth attendant (TBA)), public or governmental health facilities (HFs), private or non-governmental HFs, as well as all other unspecified locations [3]. FBD was obtained by coding responses as "any HF" and "not in a HF". For the analysis of private FBD, all births were categorized as "any private HF" and "not in a private HF". We note that the 2013 NDHS had conflated all non-governmental, for-profit and not-for-profit providers as one category of "private" provider.
The outcome of interest was financial barrier for FBD. Women who did not deliver in a HF indicated the reasons that applied to them from a list of potential barriers, including "cost too much". Other covariates and demographic information considered as potential confounders were: wealth quintile, maternal education, maternal age and parity at the time of the most recent birth, and whether the woman had health insurance coverage. Household wealth quintile was derived from the wealth index constructed by the DHS using household asset data via a principal component analysis [19]. The sampled households were then ranked and divided into five quintiles. Each woman is assigned her household's wealth quintile.
Spatial scan statistics of private FBD
To identify geographic clusters of high and low private FBD, the number of most recent births and those based in a private HF were aggregated at the ward level, with adjustment of survey sampling weighting. Together with ward latitude and longitude as inputs, each ward was treated as an analytical unit to test whether private FBDs were distributed randomly in space or not.
At the ward level, the observed numbers of private FBD varied from zero to the total number of eligible births. To detect clustering of private FBDs, we chose a Poisson distribution to represent the expected distribution of this count over space. Under the null hypothesis, the expected number of private FBDs in each area is proportional to its population size (approximated using sample size) [20,21]. Spatial scan statistics was performed using the SaTScan™ software (version 9.4) [20,21]. Spatial clusters were identified by taking into consideration the rates of nearby wards [22,23]. The spatial scan method used circular windows of various sizes that move across the map to find clusters of wards with either higher and lower than expected rates under the null hypothesis of uniform spatial distribution [24,25]. The radius of the circle varies continuously from zero to a predefined value that specified the percentage of the maximum total population at risk within the scanning window [21]. The recommended maximum size is 50%; we conducted additional scans at the 10 and 5% levels to account for independent smaller clusters that may be contained in a large cluster. The alternative hypothesis is that there is a reduced/elevated rate within the scanning window as compared to outside. The test of significance, based on likelihood ratio and the null distribution, was obtained from Monte Carlo Simulation [26]. The number of permutations was set at 999 and the significance level was set at 0.05 [21]. Identified clusters are ordered based on their likelihood ratio test values.
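The scan statistic itself is simple to write down. As a rough, unvalidated sketch (the actual analysis used the SaTScan software, including its Monte Carlo significance testing), the Poisson likelihood-ratio scan over circular windows can be expressed as follows; the ward coordinates, case counts (private facility births) and population counts (eligible births) are placeholder inputs.

import numpy as np

def poisson_llr(c, e, C):
    # Kulldorff Poisson log-likelihood ratio for a window with c observed and
    # e expected cases, out of C total cases (0 * log 0 treated as 0)
    inside = c * np.log(c / e) if c > 0 else 0.0
    outside = (C - c) * np.log((C - c) / (C - e)) if C - c > 0 else 0.0
    return inside + outside

def best_circular_cluster(coords, cases, pop, max_frac=0.5):
    # scan circles centred on each observed location, growing them ward by ward
    # until they contain max_frac of the total population; return the window
    # with the largest log-likelihood ratio (high- or low-rate alike)
    C, P = cases.sum(), pop.sum()
    best = (None, 0.0, -np.inf)                     # (centre index, radius, LLR)
    for i, centre in enumerate(coords):
        d = np.linalg.norm(coords - centre, axis=1)
        order = np.argsort(d)
        c_cum, p_cum = np.cumsum(cases[order]), np.cumsum(pop[order])
        for k in range(len(order)):
            if p_cum[k] > max_frac * P:
                break
            e = C * p_cum[k] / P                    # expected cases under the null
            llr = poisson_llr(c_cum[k], e, C)
            if llr > best[2]:
                best = (i, d[order[k]], llr)
    return best

Significance would then be assessed by recomputing the maximum LLR over many randomized data sets, as described above.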
Geographic locations of, and wards contained in each, identified spatial cluster were merged back to the 2013 NDHS women's data. We considered women living in the same SaTScan spatial cluster to be in the same "community". Estimates on private service use as a percentage of all most recent births, financial barriers reported among women who did not deliver in a HF, as well as other covariates were recalculated for each SaTScan spatial cluster to generate community-level data. This was done in Stata SE version 14 (StataCorp LP, College Station, TX, USA), adjusted for survey-specific weighting and stratified, cluster sampling design.
Relating community-level private facility use to nonusers' self-reporting of financial barrier

Using SaTScan spatial cluster as the analytical unit, the percentage of nonusers reporting financial barrier (main outcome) was related to the percentage of births occurring in private facilities. The SaTScan spatial clusters were weighted by the number of most recent births circled within. To account for a proportion as the outcome (bounded between 0 and 100%), we adopted a generalized linear model, specifying a logit link and the binomial family [27][28][29]. We denoted y_i = number of private FBD in SaTScan spatial cluster i, N_i = number of most recent births in i and p_i = probability of having a private FBD. We also specified the Huber-White (i.e. robust) estimators of the standard errors in case of heteroskedasticity arising from potential misspecification in the distribution family [30]. The z test was used for significance testing of model coefficients. We generated predictions from both the bivariate and multivariate fits and back-transformed them as the percentages of women with a non-facility birth who cited cost as a barrier, at 5%-intervals of community-level private FBD.
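A minimal sketch of such a model in Python (the original analysis was run in Stata): a binomial GLM with a logit link, cluster-level counts as the outcome, and heteroskedasticity-robust (Huber-White) standard errors. The input file and column names (counts of nonusers citing cost, cluster-level covariates) are placeholders, not the actual dataset.

import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("clusters.csv")   # hypothetical file: one row per SaTScan cluster

# outcome as (successes, failures): nonusers citing cost vs. other nonusers,
# so clusters with more births automatically carry more weight
endog = np.column_stack([df.n_cite_cost, df.n_nonusers - df.n_cite_cost])
exog = sm.add_constant(df[["pct_private_fbd", "pct_public_fbd",
                           "pct_rural", "pct_poorest"]])

res = sm.GLM(endog, exog, family=sm.families.Binomial()).fit(cov_type="HC1")
print(res.summary())

# adjusted predictions on a grid of private-FBD levels, other covariates at means
grid = pd.concat([exog.mean().to_frame().T] * 21, ignore_index=True)
grid["pct_private_fbd"] = np.linspace(0, 100, 21)
print(res.predict(grid))           # predicted share of nonusers citing cost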
Missing data
We found missing data in geographic coordinates in seven wards, containing < 1% of the respondents from the study sample. These were removed from analyses where location data was required. We also found 0.4% of missing data for health insurance coverage and coded these as uninsured. There was no missing data in the other variables in the model.
Sub-national private facility-based delivery

Overall private FBD was 13.6% (95%CI = 11.9,15.5; Table 1). Using SaTScan analysis, ten spatial clusters of low and seven spatial clusters of high private FBD were identified (Table 2). The number of wards contained in these geographic clusters ranged from five to 88; the number of most recent births circled within a geographic cluster ranged between 63 and 1201, and spatial cluster radii varied between 21.2 and 208.5 km. Altogether, 648 wards and 14,434 births occurred in these 17 clusters.
The location and size of these geographic clusters were drawn in Fig. 2. Clusters of low private FBD were primarily located in the North West and North East zones, with an exception near Jos North in Plateau State, where one spatial cluster of high private FBD (50.5, 95%CI = 35.5,65.5) was identified. In addition, southern Cross River state and central and southern Bayelsa state (in the South South zone) also showed spatial clustering of low private FBD: 2.9% (95%CI = 1.0,4.7) and 2.1% (95%CI = 0.0,5.5), respectively. Communities of high private FBD were identified around the Lagos and Ogun States (52.8, 95%CI = 47.7,57.9), Edo State (32.9, 95%CI = 24.7,41.1) as well as large parts of the South-East zone (e.g., Imo and Abia States) and the North Central zone.
Mean percentages of private FBD in high and low clusters were 37.9% (95%CI = 37.6,38.2) and 0.8% (95%CI = 0.8,0.8), respectively. Average public FBD among all births was 37.2% (95%CI = 37.1,37.4) in high private FBD clusters. On the other hand, 14.8% of all births were public facility-based in the ten spatial clusters of low private FBD. Substantial differences in sociodemographic characteristics of women living in the two groups of spatial clusters were also seen (Fig. 2). Women in low private FBD clusters were more rural, poorer and less educated compared to women in high private FBD clusters.
We performed additional cluster detections setting maximum cluster size to 10 and 5% of the survey sample. The first yielded the same set of results. The details of the 19 SaTScan spatial clusters returned from using the 5% limit is given in Additional file 1: Figure S1. No substantial differences to the model with 17 SatScan clusters were observed.
Reporting cost as a barrier for facility-based delivery
Across the seven spatial clusters of high private FBD, 24.5% (95%CI = 21.1,27.8) of women delivered at home. Unadjusted analysis showed that the factors associated with self-reported financial barrier for FBD at the spatial cluster unit level included living in Cross River and Bayelsa States, the percentage of public facility utilization, rural setting, wealth, the level of maternal education, and the percentage of women covered by health insurance (Table 3). All of these were significant at the p < 0.001 level.
In multivariate analysis, all predictors remained significantly associated with the proportion of women citing cost as a barrier (Table 3). After controlling for the proportion of births occurring in public HFs, rurality, wealth, maternal education, health insurance and residency in Cross River and Bayelsa States, a 10-percentage-point increase in private facility use for childbirth was associated with an average 1.94-percentage-point increase (95%CI = 1.69,2.18) in nonusers citing cost as a barrier for FBD. The adjusted predicted percentages of self-reported financial barrier across varying levels of private service use were also computed based on the adjusted regression model. Table 3 and Fig. 3 illustrate a steady rise in the extent to which financial consideration was a barrier as community-level private FBD increased.
Discussion
To our knowledge, this is the first study to examine national geographic disparities in private facility use for childbirth in a sub-Saharan African country at a small geographic scale. We found substantial spatial variation in the utilization of private facilities for delivery care across Nigeria. The level of private FBD was very low in the northern part of the country except for Jos in Plateau State. Private FBD was medium to high in the North Central zone and the highest in the South West and South East zones. Certain areas in Lagos, Imo, Ogun and Abia States had particularly high levels of private FBD. Using a novel approach, we examined the association between private healthcare utilization contexts and financial barriers for FBD. We found cost was more likely to be cited as a barrier to FBD in settings where private FBD was high. We found exceptions, however, for southern Cross River and Bayelsa States, where a large proportion of nonusers reported cost as a barrier and overall facility delivery (in both the public and private sectors) was very low.
Limitations
Our findings have important implications, but they should be understood with certain limitations. Firstly, the 2013 NDHS response option for private delivery included both for-profit and not-for-profit establishments operating under different financial motives and potentially charging widely varying fees for childbirth care. However, we still believe that our assumption that private sector childbirth costs more than the public sector is valid. Self-reported reasons to deliver in non-healthcare settings might also be subject to accuracy and reliability issues [31]. In addition, women could list more than one barrier to FBD (approximately 50% of women who cited cost as a barrier also listed one or more other reasons; data not shown), and the relative importance of cost compared to other reasons is not known. Contributions of other potential factors, including but not limited to individuals' perceptions of the care received and of healthcare professionals, warrant further investigation. The analytical approach presented in this study offers a novel method for such future research with available, secondary data.
The SaTScan spatial clusters identified were relatively large in geographical size (even with a smaller maximum allowable limit), and there might be substantial heterogeneity in the characteristics of the women living in the same spatial cluster. Some of these characteristics, including parity, pregnancy complications and marital status, may confound our primary association of interest at the individual level, but were omitted as their relevance at the community level is likely low. Lastly, some loss of power in cluster detection might have occurred through a degradation of spatial information between the exact geographic coordinates of individuals and those at aggregated levels [32,33].
Giving birth in the private sector
In Nigeria and other LMICs, pregnant women who opt for private FBD have a similar sociodemographic profile: higher SES, higher education and, in some contexts, certain ethnic or religious affiliations [34][35][36][37]. A search of peer-reviewed articles and the grey literature returned little information on the cost of private FBD in Nigeria. However, one study showed 1.8 times more spending in private hospitals than public hospitals by users residing in urban south-eastern Nigeria [38]. Despite higher cost, for-profit healthcare may have more appeal for a wide range of reasons, such as privacy, shorter waiting times, higher perceived quality of care, empathy and a respectful approach, availability of doctors, and its value as a status symbol [39,40]. For users of private services, cost or affordability might be a relatively weaker determinant of where to seek care.

(Notes to Table 3: unadjusted and adjusted effects were back-transformed from parameter estimates obtained using a logit link transformation; the z test was used for significance testing of model coefficients; adjusted estimates describe the adjusted curve drawn in Fig. 4; adjusted predicted percentages of nonusers citing financial barriers were obtained with all other covariates fixed at their mean values.)

Community-level private service use and self-reported financial barriers for facility-based delivery

Our findings extend the current knowledge about preference towards private HFs for their users. We found that in contexts with relatively high private FBD, a greater proportion of facility non-users reported financial barriers for any care, including both private care and the relatively more affordable public care. In Edo, Ogun and Abia States, for instance, the majority of health facilities are privately owned [13]. Our results may indicate that facility nonusers living in high private FBD contexts are unable to gain effective access to any healthcare due to personal financial barriers (for private care) and insufficient provision of public services in their lived environment. In other places of high private FBD where such practice may have become normalized, women who lack adequate funds for private providers might perceive delivering at home or at a TBA's home as their best alternative due to social pressure and low acceptability of publicly provided services. The observed preference for homebirths is in line with qualitative findings from various states including FCT-Abuja and Lagos, where women who do not deliver in a health facility had poor confidence in the public health sector and strong desires to deliver with a TBA [41][42][43]. According to these studies, women perceive home delivery with a TBA, and especially with family members present, to be personal and supportive [41]. Some TBAs often allow for flexible finance options, such as payment in kind or in instalments, making it easier for families to pay [42].
On the other hand, in settings where private facility delivery use is relatively low, and especially where overall FBD utilization is also low, such as most of North West zone and North East zone, women's reasons to not give birth in a HF were less connected to cost. In these settings, other cited barriers included service availability, distance or physical accessibility, social norms and lack of perceived need [43]. In a study set in the Jigawa State, approximately 25% of nonusers claimed they did not attend facilities for childbirth because they did not think it was necessary [44]. In addition, household decision-making dynamics also varies across this large multi-ethnic country; Abuja city/FCT-Abuja, for instance, is generally associated with greater gender equality when compared to other southern and northern cities [45]. Especially in the north, women's relative lack of participation in intra-household decision making and access to money have been associated with very low FBD rates [45].
Exceptions to the positive relationship found between financial barriers and private FBD were noted in southern Cross River State and Bayelsa State, where overall percentages of FBD were midrange, private FBD was very low, and a relatively large proportion of nonusers reported financial barriers to delivering in a HF. This highlights the importance of contextualizing personal factors alongside other community- or macro-level factors. Bayelsa State is primarily covered by marshlands and waterways; it is also an important gas- and petrol-producing region in Nigeria that has generated interest among prospective companies [46,47]. However, most Bayelsans remain poor, and the state's public infrastructure development is insufficient [47]. Lack of transportation and the riverine setting pose tremendous impediments to overcoming physical barriers to reaching health services [46,48]. In a study looking at barriers to utilization of maternal health services in Bayelsa State, a majority of respondents reported infrastructure-related barriers to access (availability of facilities/equipment, schedule of maternal health clinics, accessibility and so on), and much lower percentages of women reported deterrents such as cultural acceptance and language problems [49]. Compared to the rest of the country, special economic and environmental contexts and the additional resources required to overcome physical accessibility barriers may have caused financial considerations to operate differently among people living in Bayelsa and Cross River States. The role of financial barriers, separating direct payment for delivery from other expenses and trade-offs, including the cost of transport as well as time and income lost from other daily/productive activities, warrants further research.
A note on using DHS data to study healthcare utilization surroundings

Various studies have looked at the service provision environment as a determinant of FBD. A common approach consists of conducting interviews with women about the availability of maternity care in their community as a measure of service provision [50][51][52][53][54]. Alternatively, geocoded master facility list (MFL) data or the like, with which the entire health infrastructure of a spatial area is mapped out, are geographically linked to population data in a GIS to facilitate calculation of measures of people's healthcare availability [55][56][57]. The present study used available secondary data on individual-level service utilization and women's location of residence to construct the geographic patterning of healthcare surroundings across Nigeria. Our variable of interest was community-level utilization surrounding the individuals, which is somewhat conditioned on the healthcare provision environment, but is also a consequence of other cultural, contextual and individual-level determinants. Nwakeze and Kandala examined the spatial distribution of health establishments using data collected by the National Bureau of Statistics of Nigeria, and found moderate to low numbers of private health establishments in the Benue, Nasarawa and Kogi States, compared to the number of public health facilities [13]. In the present analysis, however, parts of these places showed high levels of private FBD. Our findings therefore also tangentially shed light on people's decision-making about which services to use from the options available to them. Such knowledge is useful for the formulation of appropriate interventions to concurrently address provision of and demand for services [58,59]. In the case of these states, additional provision of public health services might not be as effective a strategy to boost FBD as trying to strengthen the quality and acceptability of existing public services.
Conclusion
In this study, we found that self-reported financial reasons for service non-uptake were more common in communities where private care-seeking for childbirth was higher. This extends current understanding of the influence of financial barriers for maternity care. We argue that further investigation of the determinants of maternal health-seeking, and potentially other health-seeking, should look beyond individual-level barriers to consider community-level factors. Many LMICs continue to be challenged by poor maternal health outcomes, driven to some extent by wide subnational disparities in maternal healthcare provision, utilization and care quality. The lack of research and attention to community-level factors in the existing literature is possibly due to the lack of suitable data, especially since studies of determinants of FBD are mostly based on individual- and household-level data. Working with geographic data and GIS tools, including mapping techniques and spatial cluster detection, we developed a novel way to bridge this persistent knowledge gap. Our approach offers a new way to examine how childbirth services frequented by the community influence community members to deliver in or outside a health facility. The method presented can be extended to other research questions related to barriers and different health service characteristics, such as service acceptability and the level/standard of care most frequently sought, as well as perceived need, cultural drivers and social norms affecting overall utilization. Our approach also preserves spatial patterns in the data, a component that is often neglected but requires specific analytical considerations and carries contextual significance, including policy implications.
Overall, we suggest that the approach presented is best suited for 1) illustrating the service utilization environment in the population and 2) examining associations between individual-level and community-level factors. The complex reasons behind underutilization of delivery care services indicate the need for a multi-focus approach that addresses service provision and usage suited to the local context of healthcare uptake and non-uptake. Further research is needed to help inform policies and health system responses to provide adequate health services that people will utilize.
Additional file
Additional file 1: Figure S1. Nineteen SaTScan spatial clusters (drawn proportionate to cluster radii) of higher and lower than expected proportions of private facility birth among all most recent births. The DHS wards contained in each spatial clusters are also shown. (DOCX 130 kb)
Funding
This research is supported by funding from MSD, through its MSD for Mothers programme. MSD has no role in the design, collection, analysis, and interpretation of data, in the writing of manuscripts, or in decisions to submit manuscripts for publication. The content of all publications is solely the responsibility of the authors and does not represent the official views of MSD. MSD for Mothers is an initiative of Merck & Co., Inc., Kenilworth, N.J., U.S.A. OJB is supported by a Sir Henry Wellcome Fellowship funded by the Wellcome Trust (grant number 206471/Z/17/Z).
Availability of data and materials
The dataset is available to the public freely at dhsprogram.com. Questionnaires used for the survey are attached to the final report published, which can be found at https://dhsprogram.com/pubs/pdf/FR293/FR293.pdf (last accessed: 12th May 2018).
Authors' contributions
KW and OMRC conceptualised the study. KW conducted the analysis, developed the statistical methodology and approach, and prepared the first draft of the manuscript. ER contributed to drafts of the paper, interpretation of the findings and revising of the paper. OO contributed to interpretation of the findings and revising of the paper. OB contributed to developing the statistical methods, interpretation of the findings and revising of the paper. CL contributed to interpretation of the findings. LB contributed to developing the statistical methods, drafts of the paper, interpretation of the findings and revising of the paper. All authors read and approved the final manuscript.
Ethics approval and consent to participate
The DHS receive government permission and obtain informed consent from all participants. The Research Ethics Committee of the London School of Hygiene and Tropical Medicine approved our secondary analysis of anonymised data.
Consent for publication
The consent to publish is not applicable for the current analysis as individual data is not reported.
The typology problem and the doxastic approach to delusions 1
This paper explores one of the most fundamental philosophical worries underlying the occurrence of delusions, namely, the problem about the specific type of mental state that grounds a delusional report or, as I shall call it, 'the typology problem'. The analysis is developed as follows: (i) after formulating and circumscribing the target problem, (ii) I explore the main tenets and advantages of the doxastic view of delusions, perhaps the strongest candidate currently available within the typology debate. (iii) Afterwards, I clarify and evaluate four of the main counter-arguments against the doxastic view, offering a number of counter-replies to these attacks. (iv) Finally, I conclude that the anti-doxastic argumentation offers no good reasons to abandon the doxastic model and that this model does not need to appeal to external resources to reply to such counter-arguments. At the same time, I finish with some of the challenges that remain open within the doxastic view.
From clinical observations to philosophical worries: The complexities of delusion
Delusional cases challenge our most fundamental assumptions about the normal functioning of the human mind. Often, from a clinical point of view, delusions have usually been regarded as the hallmark of psychosis, the sign par excellence of a broken mind. As Jaspers (1963, p. 93) claims: "since time immemorial, delusion has been taken as the basic characteristic of madness. To be mad was to be deluded". Nowadays, delusions are considered a major symptom of a number of psychiatric conditions such as schizophrenia and major depression, although they can also be observed in neurological conditions such as dementia (Coltheart et al., 2011).
The study of delusions is complex, as they are heterogeneous in terms of content, scope, aetiology, and phenomenological features. In terms of content, the delusion that one is dead and that one's internal organs are rotten (Cotard delusion), for example, is clearly bizarre. However, not all delusions are bizarre; some of them might even be true in different circumstances; these are called 'mundane' delusions. Take the case of the delusion that my mother is a serial killer. Although this is not the case - not that I know of, at least - this situation might be true if, in fact, my mother has got the atrocious habit of killing people 3. Now, when patients exhibit a single delusion or a small set of delusional states that are all related to a single theme, delusions are called monothematic. In the Cotard case, the patient holds a specific delusional state and, although some other delusional states might emerge - like the delusional belief of being immortal - they seem to be associated with the main delusional belief (being dead). In contrast, if patients exhibit delusional states about a variety of topics that are unrelated to each other, delusions are polythematic. For example, Capps (2004) comments on the case of John Nash, who believed he was the Emperor of Antarctica, the left foot of God on Earth, and that his real name was Johann von Nassau, among many other delusional beliefs.
In terms of scope, delusions are circumscribed when they do not lead to the formation of other intentional states that might be related to the content of the delusion, nor do they have important effects on the subject's behaviour 4. Delusions are elaborated when the subject reporting them draws consequences from the delusion (often manifested in specific behaviours) and forms other beliefs that orbit around the main delusion. It is claimed that elaborated delusions can turn into complex narratives that might help to make sense of the whole delusional situation that the patient is living through (Davies et al., 2001) 5.
Regarding the aetiology of delusional phenomena, a basic distinction is usually drawn between motivational and deficit approaches (McKay et al., 2009; López-Silva, 2015). The former class claims that the production of delusions is motivated by the psychological benefits they confer on the deluded subject (see Bentall and Kaney, 1996; Bell, 2003). On this view, delusional phenomena are characterized as active psychological responses to threatening internal or external psychological stimuli, these responses not being necessarily linked to any particular type of affective, perceptual, or cognitive deficit or malfunction. Contrasting with the motivational formulation, deficit approaches conceptualize delusional phenomena exclusively as the result of different impairments at different stages of the process of belief and thought formation (McKay and Dennett, 2009). Rather than adaptive psychological responses, delusions involve disorders and alterations in the normal functioning of beliefs produced by a combination of anomalous first-order perceptual experiences (Maher, 1974, 2003), impairments in the process of hypothesis evaluation (Langdon and Coltheart, 2000), or unusual experiences accompanied by reasoning biases (Garety et al., 2001), among many others.
All these distinctions are a source of a number of debates within the current clinical community. However, although delusions are a crucial clinical phenomenon - given their consequences for human mental health - when paying closer attention, they also become a source of a number of philosophical discussions. Over the last years, delusions have attracted the attention of philosophers from different traditions, as they raise a number of questions regarding the most fundamental aspects of the human mind, such as the rules of rationality (Bermúdez, 2001; Gerrans, 2002), the intentionality of conscious mental states (Berrios, 1991), the nature of self-knowledge (Fernández, 2010), the structure of self-awareness (Gallagher, 2014), and the reality of the self (Bentall, 2003), among many others. Certainly, the exploration of delusions is a matter that necessarily requires a great deal of cooperation between different disciplines.
3 The bizarreness of a delusion seems to be a matter related to the degree of empirical likelihood of a certain situation or state of affairs.
4 As we will see in the section "The anti-doxastic stance: Objections and replies", this issue represents a current debate within the context of the discussion about the nature of delusions.
5 The circumscribed-elaborated distinction seems to be relevant to specify the degree of integration between the delusional state and other states of a different kind held by the subject. Coltheart and Davies (2000) claim that while polythematic delusions tend to be elaborated, monothematic delusions tend to be circumscribed. However, it is important to note that the same type of delusion might be circumscribed in some cases and elaborated in others. For instance, a patient with the belief that her left limb is not hers but belongs to her mother (somatoparaphrenia) who shows no preoccupation for her original limb and does not look for it seems to have a circumscribed delusion. Now, a patient with the same type of delusion (somatoparaphrenia) might show preoccupation and even develop paranoid thoughts about the situation. In addition, she might engage in behaviours aimed at finding her original limb. In this case, the patient would have an elaborated delusion.
Following the interdisciplinary spirit that the study of delusions requires, this paper aims at exploring one of the most fundamental philosophical worries underlying this phenomenon, namely, the problem about the specific type of mental state that grounds a delusional report or, as I call it, 'the typology problem'. The analysis will be developed as follows: (i) after formulating and circumscribing the typology debate, (ii) I will explore the main tenets and advantages of the doxastic view of delusions, perhaps the strongest candidate currently available within our discussion. (iii) Afterwards, I will clarify and evaluate four of the main counter-arguments against the doxastic view and offer some counter-replies to these attacks. (iv) Finally, I will conclude that the anti-doxastic argumentation offers no good reasons to abandon the doxastic model and that this model does not need to appeal to external resources to plausibly deal with such counter-arguments. At the same time, I will refer to some of the challenges that remain open within the doxastic view.
The definition of delusions
The current version of the Diagnostic and Statistical Manual of Mental Disorders (DSM-V) defines delusions as 'a false belief based on incorrect inference about external reality that is held despite what almost everyone else believes and despite what constitutes incontrovertible and obvious proof or evidence to the contrary' (APA, 2013, p. 819). As we can easily observe, this definition is controversial in many ways. For example, some delusions might be characterized as accidentally true - as in the case of my mother being a serial killer - and others might not even be about external reality but rather about internal mental or bodily states 6. Sometimes, even the internal-external distinction becomes obscure, as in the case of a patient who claimed to be in Boston and Paris at the same time (Weinstein and Kahn, 1955) or in cases where patients report that some of their thoughts have been stolen (Mullins and Spence, 2003).
Now, while there are elements of disagreement surrounding the DSM's definition of delusions, almost everyone would agree that delusions - as a mental symptom - are an important source of stress to patients (Van Rossum et al., 2011; López-Silva, 2015). In this context, some might be misled by a certain 'neutrality' or lack of affect that certain patients show towards their delusions when reporting them; however, this seems misleading, as the important moment to take into account is when patients are actually 'having' their delusions. In those moments, delusions are an undeniable source of stress. Delusions are usually regarded as 'pathological' for many reasons (Campbell, 2001; Frankish, 2009; Gallagher, 2009). A basic point to be made here is that, even within motivational approaches, delusions emerge as an abnormal way of dealing with mental conflicts or deficits (depending on the view from within which delusions are formulated; see the last section). Furthermore, a common view in the philosophy of psychiatry and psychopathology is that they are not biologically adaptive (Zolotova and Brüne, 2006). In this sense, the development of delusions would not help in any way to increase a subject's probabilities of survival in a specific environment (McKay and Dennett, 2009). Finally, it is agreed that delusions are also reported in a sincere way even if the affective and behavioural reactions observed in patients are not those that one might expect if those reports were true (Zahavi, 2005; Bortolotti, 2010).
The typology problem
A key debate underlying the clinical and philosophical understanding of delusions is the one concerning the type of mental states in which delusional reports are actually grounded. Think about the following cases. When Agustín says he is watching Star Wars, we can conclude without further discussion or doubt that Agustín's statement is grounded in a perceptual experience. Agustín reports what he reports just because he is having a visual experience of that movie. Now, when Nelly says she was thinking about what it would be like to be a caveman, we can uncontroversially conclude that she was imagining that something was the case. Nelly's report is based on an imaginative-cognitive experience of a certain kind. However, what can we say in cases where a subject asserts that she is dead (Cotard delusion; Berrios and Luque, 1995), that her bodily movements are under the control of aliens (delusions of alien control; Frith, 1992), or that an external agent is inserting thoughts into her mind (Mullins and Spence, 2003)? Independently of specific theoretical frameworks, all these cases are usually regarded as delusions, but what types of mental state are these patients reporting? Are delusional reports grounded in actual perceptual states? Does this make them perceptual states? Are they just the product of unmonitored imaginative activity that ends up deceiving the subject? Let's call this the typology problem, namely, the problem about the specific type of mental state that grounds delusional reports.
Why is the typology problem a problem at all? The answer to this question seems to have two dimensions. First, the typology problem is a problem from a clinical point of view. Clinicians aim not only at identifying pathological mental phenomena, but also at building up explanatory theories and therapeutic treatments. Without a clear idea about the type of mental state a delusion is, it is hard to see how clinicians might be able to offer specific explanatory theories, as these theories would necessarily take the form of explanatory frameworks about specific mental states and their disruptions (Coltheart, 2015). In the same way, it is hard to see how one might build specific therapeutic techniques without having clear clues about the type of mental state one is dealing with. Second, the typology problem is a philosophical problem, as it leads to a number of debates about our most fundamental ideas about human rationality, the nature of phenomenal consciousness, the nature of intentionality, and other key issues within the philosophy of mind. Certainly, a clarification of the typology debate might not only contribute to the development of conceptual fields such as philosophy but also to applied fields such as psychiatry, clinical psychology, and psychopathology.
A potential solution?
The doxastic approach to delusions and its appeal
One of the strongest solutions to the typology problem currently available in the literature is the so-called doxastic approach to delusions. This view takes its name from 'doxa', the Greek word for belief or opinion, and its main tenet is that delusions are better understood as a type of belief (Bayne and Pacherie, 2005; Bortolotti, 2010, 2012; Bayne and Hattiangadi, 2013) 7. In contrast with other views on the same problem, here beliefs are broadly understood as propositional attitudes 8. More specifically, most advocates of the doxastic view tend to endorse McKay and Dennett's notion of belief as "mental states of a subject that implement or embody that subject's endorsement of a particular internal or external state of affairs as actual" (2009, p. 493). Of course, it is important to note that within the doxastic approach, delusions are not just a type of belief; they are an abnormal type of belief, or, in the words of McKay and Dennett (2009), they are misbeliefs.
Two main doxastic expressions can be identified in the literature. On the one hand, top-down views suggest that delusional doxastic frameworks might influence the phenomenal character of experiences and actions (Campbell, 2001). On this view, abnormal doxastic states would act as an intelligibility framework for certain experiences in a way that such frameworks would penetrate the subjective features of experiences. On the other hand, bottom-up views suggest that 'the proximal cause of the delusional belief is a certain highly unusual experience' (Bayne and Pacherie, 2004, p. 2). On this view, certain abnormal experiences would propose highly implausible doxastic hypotheses that would be accepted by a cognitive system as a result of a number of deficits in the process of evaluating that type of information (McKay et al., 2009). As we can see, these two doxastic expressions differ when trying to set up the causal direction in the relationship between experience and belief. Within bottom-up approaches the causal relation goes from experience to belief (Maher, 1974, 2003; Coltheart, 2015), whilst in top-down approaches it goes from belief to experience (Campbell, 2001). This distinction is important, as it will be part of the main challenges that remain open within the doxastic approach. However, it is not my job to decide here which one of these doxastic expressions is better.
The doxastic view enjoys a well-deserved popularity within current philosophy of mind and neuropsychiatry. This can be easily explained by the empirical and conceptual advantages that this view offers over its main rivals. Now, let me explore some of these main advantages:
Diagnostic Evidence: It has been noted that delusions are usually reported as beliefs by patients (Bortolotti and Miyazono, 2014). Generally, when asked whether they really believe what they report within a psychotherapeutic setting, delusional patients claim that they really do so (Bisiach and Geminiani, 1991). For example, when asked if he really believed what he was reporting, one of my patients exclaimed, "What do you mean? Of course! I'm not inventing it!" 9. This type of reply and the high degree of subjective certainty commonly associated with delusional reports would be nicely explained by the doxastic view of delusions.
Subjective Certainty: Delusions are usually reported with a considerable degree of subjective certainty. This issue can be plausibly explained if we conclude that delusions are beliefs, because high degrees of subjective certainty seem to be characteristic of beliefs (Langdon and Bayne, 2010). However, it is important to note that the degree of subjective certainty with which delusions are reported varies considerably from subject to subject (Parnas, 2003).
Discriminative Power and Conceptual Clarity: the doxastic view offers a conceptually and phenomenologically appealing way to distinguish delusions from other types of psychopathological mental states. While delusions reflect disturbances in the process of formation of beliefs, hallucinations, for instance, might reflect disturbances related to perceptual processes (Langdon and Bayne, 2010). Indeed, this ability is always desirable, for it allows clinicians to develop a more specific diagnosis, which in turn might guide better-defined and more specific treatments in psychotherapeutic contexts. This conceptual clarity is highly desirable when trying to develop empirical research on delusions.
Pathological Nature of Delusions: The doxastic approach to delusions explains nicely the pathological nature of delusions. As Bortolotti and Miyazono (2014, p. 32) suggest: LA-O's mental condition is pathological partly because she seriously denies that her left hand belongs to her. If she did not believe it, but merely imagined it, there would not be anything particularly pathological about her condition, as acts of imagination do not necessarily reflect how things are for the person engaging in the imagining. It is a strange thing for LA-O to imagine that her left hand does not belong to her, but we can easily entertain various kinds of strange possibilities in our imagination without losing mental health.
Strong Research Framework: the doxastic approach provides a robust conceptual framework to guide empirical research on delusions (Coltheart and Davies, 2000; Coltheart, 2015). Once we accept that delusions are beliefs, psychiatrists and philosophers would only need to focus on the way human beings come to form beliefs and understand the different alterations of these mechanisms that give rise to delusions (Coltheart, 2015). Naturally, here the challenge is to explore and comprehend such mechanisms in adequate ways. A number of researchers in current neuropsychiatry have endorsed the doxastic approach to delusions and claim that, in order to understand delusions, philosophers and practitioners should have a closer look at the different perceptual and cognitive mechanisms involved in the process of production of beliefs (Maher, 1974; Coltheart, 2002, 2009). The thought here is that by understanding how these mechanisms break down under certain circumstances, we might be able to decipher the psychogenesis of delusions (Coltheart, 2005, 2015).
The anti-doxastic stance: Objections and replies
Despite all the aforementioned advantages, over the last years a number of authors have argued that the doxastic approach fails to make sense of delusions in a plausible way. The anti-doxastic stance can be divided into two main aspects: a negative and a positive one. Whilst the negative aspect refers to the reasons offered by anti-doxastic theorists to believe that delusions are not beliefs and to abandon a doxastic stance, the positive aspect refers to the alternative answers that advocates of anti-doxasticism would offer to the typology problem 10. Now, the main focus of the negative dimension of the anti-doxastic stance has been the idea that delusions would fail to instantiate the main features of paradigmatic beliefs and that, therefore, delusions should not be understood as beliefs. In this section, I evaluate these anti-doxastic arguments and offer counter-replies to these attacks. As I will stress in the conclusions section, as it stands, the doxastic approach seems to be in a good position to deal with these counter-arguments without appealing to external argumentative elements.
The argument about subjective certainty
The first attack on the doxastic view of delusions concerns the subjective features that would characterize delusional reports. The argument about subjective certainty would run something like this: (1) Beliefs are consistently reported with high degrees of subjective certainty. (2) Delusions are reported with variable degrees of subjective certainty.
(C) Delusions are not beliefs because they do not possess the degree of subjective certainty that paradigmatic beliefs possess.
This argument rests on the idea that normal beliefs are reported with high and invariable degrees of subjective certainty. In contrast, delusions would be reported with variable degrees of certainty. For example, De Haan and De Bruin (2010) claim that, in some cases, patients report their delusional episodes 'as if' they were the case: "it is as if my girlfriend can read my thoughts […] it is as if I am from another planet" (p. 385, note 16). This fluctuation in subjective certainty would not be present in paradigmatic beliefs, so it would give the anti-doxastic theorist a reason to suggest that delusions do not instantiate the expected features of beliefs and, therefore, that we should not characterize them as such.
10 It has been suggested that delusions might be better characterized as cognitive imaginings, i.e. imaginative states that are misidentified by the subjects as beliefs (Currie, 2000; Currie and Jureidini, 2001). Hohwy and colleagues argue that delusions should be understood as the result of perceptual inferences (Hohwy and Rosenberg, 2005; Hohwy and Rajan, 2012). Egan (2009) claims that delusions are bimaginations, i.e. states with some belief-like and imagination-like features, and Schwitzgebel (2012) finally concludes that we should think of delusions as neither beliefs nor non-beliefs, but rather as in-between beliefs.
There are a number of ways in which the doxastic advocate might reply to this attack. The main problem seems to be the plausibility of premise (1), as it imposes a too demanding requirement, in fact, a requirement that cannot even be met by normal beliefs. The truth is that even normal beliefs are reported with variable degrees of subjective certainty. Beliefs are not a static mental state; they are highly context-dependent, flexible, and fluctuating (Bayne and Hattiangadi, 2013). I can report a certain normal belief with variable degrees of subjective certainty depending on a number of internal (mood, affective processing, cognitive conditions, etc.) and external (social role, specific task in which it emerges, etc.) elements. Given the contextual nature of beliefs, all these elements might influence their degrees of subjective certainty. Take the case of P believing in G: God. The degree of subjective certainty with which P reports G might vary depending on the situation in which the belief is reported. Perhaps, after reading Nietzsche, P seems to believe in G but is not quite sure about it. P's belief that G seems to show a low degree of subjective certainty in this situation. Now, P might report that G with high degrees of subjective certainty right after having experienced a massive earthquake. Arguably, P's doubts about God's existence do not show that P does not believe in G, but rather that P has got a certain belief and some reasons to doubt it. For many beliefs, it seems uncontroversial to say that one can have them while nurturing doubts about them. In that case, such doubts are just the product of the exercise of one's rational abilities. Therefore, it seems plausible to say that variable degrees of subjective certainty also characterize reports of normal beliefs and that one should not rule out a doxastic stance towards delusions on this basis.
Another option available for the doxastic defender is to say that it is perfectly possible for a single cognitive system to have contradictory beliefs (which is quite different from holding a single belief with contradictory content of the type <P ∧ ~P>). According to Davidson (1982), a subject can have two mutually contradictory beliefs, as long as he does not believe their conjunction at the same time, i.e. <P ∧ ~P>. Regardless of the argumentative power of this last argument, the main issue at hand here is that the phenomenon of subjective certainty should be understood as a matter of degree. One should not think about this issue as a black-or-white phenomenon, but rather as a continuum in which different degrees of subjective certainty can be located depending on the external and internal elements that accompany the emergence of such states. While one pole of this continuum might be associated with the lower degrees of subjective certainty that characterize imaginative and dream-like states, the other pole would be related to states showing the higher degrees of subjective certainty that characterize normal beliefs. Thus, although the degree of certainty in asserting the content of certain delusions can vary, this degree of certainty would not be comparable with that of imaginings, for example, where subjective certainty seems low or even non-existent. The truth is that, in most cases, delusional patients assert the content of their delusions with high degrees of subjective certainty at different stages of their aetiological development, and given that variability (within the higher pole of this continuum) is present even in normal beliefs, delusions might be nicely explained by the doxastic model.
The argument about responsiveness
The second anti-doxastic argument concerns the way in which delusions respond to counter-evidence. This argument runs something like this: (1) Beliefs are responsive to counter-evidence.
(2) Delusions are not responsive to counter-evidence.
(C) Delusions are not beliefs because beliefs are responsive to counter-evidence and delusions are not.
The idea behind this argument is that M is a belief of the subject if and only if M is responsive to evidence. Thus, delusions fail to meet this requirement and, therefore, they should not be understood as beliefs of any kind. The first point to be made is that it is just not true that delusions are never responsive to counter-evidence. As Schreber observes in his Memoirs of My Nervous Illness, "what objectively are delusions and hallucinations are to him [the patient] unassailable truth and adequate motive for action" (1988, p. 282, my emphasis).
A second reply emerges from a more practical point of view. Take the case of Cognitive Behavioural Therapy (CBT), perhaps the most popular approach to the treatment of delusions. This approach is premised on the idea that delusions are a type of belief (Alford and Beck, 1994). One of the main techniques of CBT consists in questioning the patient's delusional belief in light of counter-evidence (Dickerson, 2000). CBT has been proven effective in the treatment of some delusions, and such effectiveness can be accounted for by the fact that delusions are sometimes responsive to counter-evidence (Garety et al., 1997) 11. In this context, the truth is that delusions are sometimes responsive to evidence while at other times they are not.
A final reply to the responsiveness objection is that the argument is based on the idea that paradigmatic beliefs are necessarily rational in virtue of their responsiveness to evidence. However, this idea seems to be way too demanding, for it has been shown that, sometimes, not even ordinary beliefs are entirely responsive to counter-evidence (Nisbett and Ross, 1980; Bentall, 2003; Bortolotti, 2010). Clear examples of ordinary beliefs lacking the degree of responsiveness to evidence supposedly characteristic of paradigmatic beliefs are racist and sexist beliefs. A male sexist subject might have the belief that being a woman is enough for a person to be considered inferior in many ways. Sexist beliefs are the result of a number of biases, and they lack the degree of responsiveness to evidence that paradigmatic beliefs exhibit; however, they are not denied the status of beliefs. The problem with this objection is that some of our ordinary beliefs are irrational in the same way delusions can be and, therefore, again, the objection establishes a requirement that not even some ordinary (non-delusional) beliefs meet. Nonetheless, it is important to note that the main difference between delusional and non-responsive ordinary beliefs seems to be given by the degree of responsiveness. Irrational ordinary beliefs seem to be more responsive to evidence than delusional beliefs, and the challenge for the advocate of the doxastic approach is to account for this difference 12. The argument seems to take responsiveness as an all-or-nothing phenomenon, an idea that is far from plausible. Different beliefs can be responsive to counter-evidence in different degrees depending on the quality of the information, the subject's personal cognitive patterns, the subject's current affective situation, social context, and the psychological role that the relevant belief plays in the subject's mind, and the same seems to apply to delusions if understood as beliefs.
The argument about doxastic integration
(1) Beliefs are integrated with other beliefs of the subject. (2) Delusions are not integrated with other beliefs of the subject. (C) Delusions are not beliefs because beliefs are integrated with other beliefs of the subject and delusions are not.
The idea behind this argument is that M is a belief of the subject if and only if M is integrated with other beliefs of the subject. Thus, delusions fail to meet this requirement and, therefore, they should not be understood as beliefs of any kind. Currie and Jureidini (2001, p. 161) conclude that delusions "[fail], sometimes spectacularly, to be integrated with what the subject really does believe".
First, it is important to note that it is not clear how Currie and Jureidini are in a position to know what the patient really believes. If we take delusional reports at face value and consider the way in which patients report their delusional episodes, one might be able to say without much discussion that they do believe that aliens are controlling their bodily movements, that they are dead, that machines insert thoughts into their minds, and so on. The problem here is that delusional beliefs seem not to be integrated with some other beliefs of the subject. However, in this context we can ask two simple questions. The first is (i) do delusions really fail to be integrated with other beliefs of the subject? The answer seems to be 'not always'. In many cases, delusions are integrated well with other beliefs of the subject (Bortolotti, 2010). Prima facie, the patient who planned to remove one of his two heads with an axe was able to integrate his 'perceptual delusional bicephaly' with the belief with the content <I can use an axe to remove my second head> (see Ames, 1984). The second question to ask here is (ii) are paradigmatic beliefs always integrated with each other? The answer seems to be, again, 'not always'. The main problem with this objection is that the failure of delusions to be integrated with some other beliefs of the subject is exaggerated and, therefore, it imposes a requirement that not even ordinary beliefs meet. However, as Bortolotti (2010) claims, it is necessary to note that delusions are evidently less integrated than ordinary beliefs, so they show the mark of irrationality (bad integration) to a higher degree than non-delusional beliefs. Although this is a phenomenon that the advocate of the doxastic view should be able to explain, it is by no means a good argument to deny the status of belief to delusions.
The argument about action guidance
(1) Beliefs guide specific actions of the subjects that hold them. (2) Delusions do not guide specific actions of the subjects that hold them. (C) Delusions are not beliefs because beliefs guide actions of the subjects that hold them and delusions do not.
The idea behind this argument is simple: M is a belief of the subject if and only if M guides actions of the subject holding M. Thus, delusions fail to meet this requirement and, therefore, they should not be understood as beliefs of any kind.
This seems to be one of the weakest arguments against the doxastic approach, for it looks like a great number of deluded patients do act on their delusional beliefs (de Pauw and Szulecka, 1988). Blount (1986) reports the case of a patient suffering from Capgras delusion who decapitated his stepfather trying to find the batteries in his head. Certainly, the action of decapitation was guided by the belief that the patient's stepfather was some kind of machine. Similarly, Young and Leafhead (1996) showed that all Cotard patients showed some form of delusion-related behaviour. In fact, after planning to remove his second head with an axe, the aforementioned patient with perceptual delusional bicephaly decided to remove it with a gun, leading to a number of injuries (see Ames, 1984). A number of patients with delusions of superhuman strength have been reported injured after acting on their delusions (Petersen and Stillman, 1978). These cases show that, sometimes, delusions do guide actions in deluded patients. Now, for the sake of the discussion, let me propose a refined version of this objection. This version might propose that delusions fail to produce the right kind of action and emotional response that patients would be expected to produce if delusions were beliefs. For instance, it is claimed that some patients suffering from the Capgras delusion, who claim to have beliefs with the content <my wife is an imposter>, fail to react in the way we would expect if their delusions were actual beliefs. However, there are some basic difficulties with this new version of the argument.
First, one cannot expect psychotic patients to react or show the same type of reactive behaviours that non-psychotic people commonly exhibit. This is to ignore a number of cognitive, affective, and motivational impairments that patients suffer and that might influence the way they react to certain mental states such as their own delusions (Fuchs, 2005). Second, having a certain belief is different from the behaviour derived from it. It has been shown that schizophrenic patients tend to have problems with introspection and general problems with identifying their own mental states (Taylor et al., 1997). Parnas and Sass (2003) conclude that schizophrenic patients usually show a condition called 'hyperreflexivity', i.e. an exacerbated explicit awareness of otherwise tacit elements that usually remain in the background of consciousness. Arguably, one might say that hyperreflexivity arises in the context of an informational surplus in consciousness that does not allow the patients to behave in the way that is expected when having a certain clear and well-identified belief. Third, the argument seems to assume that we always act consistently with our beliefs, but do we always behave consistently with our beliefs? Is 'expected behaviour' a good parameter to distinguish between those states that are beliefs and those that are not? It seems that we do not always act consistently with the beliefs we hold. One can have the belief that there is a helper God that looks after his sons while acting as though there is no God. However, the status of belief is not denied to the belief in God, even if the subject holding it acts like there is no God.
Up to this point, the reader might be able to realize that most of the anti-doxastic argumentation seems to rely on an idealization of the features of normal beliefs, imposing constraints on delusional phenomena that not even ordinary beliefs would meet. Therefore, there seem not to exist sufficiently compelling reasons to reject the doxastic approach, at least on the basis of the four arguments analysed here.
Concluding remarks: The challenges of the doxastic approach
Over the last years, the doxastic approach to delusions has become a strong candidate in the context of the typology problem, i.e. the problem about the type of mental state that delusional reports instantiate. However, this approach has not been free from attacks. Taking into consideration the analysis offered here, it seems reasonable to say that the counter-arguments against the doxastic stance offer no sufficient reasons to reject it. If we are to reject such a view, it is not in virtue of the four arguments analysed in section 4, which are the most popular in the current literature. Think about this issue in this way: the mere fact that delusions fail to perform some of the roles typically associated with paradigmatic beliefs is not sufficient reason to say that they are not beliefs at all (Reimer, 2010). Such a conclusion seems too hasty. One might say that they are not paradigmatic beliefs - just as the doxastic view suggests - or, as McKay and Dennett (2009, p. 493) suggest, that they are misbeliefs, namely, beliefs that are not correct in all particulars. Metaphorically speaking, the fact that penguins cannot fly does not entail that penguins are not birds at all; rather, they are just birds that cannot fly. In the same way, the fact that delusions fail to instantiate certain features of paradigmatic beliefs to the same degree that paradigmatic beliefs do does not entail that delusions are not beliefs, but rather that delusions are just abnormal beliefs. Strictly speaking, if we define a belief simply as a mental state of a subject that implements or embodies that subject's endorsement of a particular internal or external state of affairs as actual (see McKay and Dennett, 2009, p. 493), delusions can clearly count as an abnormal type of belief.
However, at this point it is important to note that, although the anti-doxastic stance does not seem very successful, the doxastic model of delusions still faces a number of conceptual and empirical challenges. The doxastic view needs to refresh its main tenets, taking into consideration a broader and more complete definition of belief and what beliefs imply in relation to other mental states of a single subject. The model needs to specify a contextualized definition of belief that takes into account their flexible, context-dependent, and fluctuating nature. In addition, the model needs to clarify the issue of the continuum of subjective certainty and the way in which external and internal factors might influence the way in which beliefs are reactive to counter-evidence. In this sense, the definition needed involves the clarification of the role that beliefs play in a cognitive system's relationship with its environment and itself, and not only the definition of the issue in isolation. Only by specifying all these aspects of a refreshed definition of delusions will the doxastic model be able to keep informing good-quality empirical models and, in turn, the empirical understanding and therapeutic treatment of delusions. Of course, this is a task I cannot undertake here.
Group Antenatal Care in Ghana: Protocol for a Cluster Randomized Controlled Trial
Background: While group antenatal care (ANC) has been delivered and studied in high-income countries for over a decade, it has only recently been introduced as an alternative to individual care in sub-Saharan Africa. Although the experimental design of the studies from high-resource countries has been scientifically rigorous, findings cannot be generalized to low-resource countries with low literacy rates and high rates of maternal and newborn morbidity and mortality. The Group Antenatal Care Delivery Project (GRAND) is a collaboration between the University of Michigan in the United States and the Dodowa Health Research Centre in Ghana. GRAND is a 5-year, cluster randomized controlled trial (RCT). Our intervention—group ANC—consists of grouping women by similar gestational ages of pregnancy into small groups at the first ANC visit. They then meet with the same group and the same midwife at the recommended intervals for care. Objective: This study aims to improve health literacy, increase birth preparedness and complication readiness, and optimize maternal and newborn outcomes among women attending ANC at seven rural health facilities in the Eastern Region of Ghana. Methods: Data will be collected and managed using a web application, and the analysis will be conducted on an intention-to-treat basis to test the differences between the two arms: women randomized to group-based ANC and women randomized to routine individual ANC. We will conduct a process evaluation concurrently to identify and document patient, provider, and system barriers and facilitators to program implementation. Conclusions: This study is important because it is the first to be powered to examine the effects of group ANC among low-literacy and nonliterate participants. Our findings have the potential to impact how clinical care is delivered.
Background
In 2017, the maternal mortality ratio in Ghana was estimated to be 310 per 100,000 live births for the 7-year period prior to the Ghana Maternal Health Survey [1]. While 89% of women in Ghana surveyed had attended the minimum standard of four antenatal care (ANC) visits, 20% of women continued to give birth at home [1]. In contrast to the decline in infant and under-5 mortality, neonatal mortality has remained stagnant since 2007 [1].
ANC has the potential to play a pivotal role in ensuring positive pregnancy outcomes for both mothers and their newborns [2]. While ANC is widely available and attended by the majority of pregnant women in Ghana, the expected impact on birth outcomes is yet to be fully realized. Thus, it is vital to examine the way ANC is being delivered and to explore alternatives to the current model to enhance positive birth outcomes.
In addition to its clinical components, ANC is designed to teach pregnant women to recognize the danger signs that might warn them of complications that could affect either themselves or their babies, and to encourage prompt care seeking for such danger signs. ANC is also designed to promote a healthy lifestyle, to integrate positive health behaviors, and to develop a trusting relationship with a health care provider and the health system.
While group ANC has been delivered and studied in high-resource settings for over a decade, it has only recently been introduced as an alternative to individual care in sub-Saharan Africa. Two randomized controlled trials (RCTs) examining group ANC versus routine individual care conducted in the United States found that women assigned to group care had significantly better antenatal knowledge, had greater satisfaction with care, and were less likely to have a preterm birth than those in standard care. In addition, the trials showed more favorable birth, neonatal, and reproductive outcomes in the intervention groups [3,4]. Although the experimental design of the studies from high-resource countries is scientifically rigorous, findings cannot be generalized to low-resource countries with low literacy rates and high rates of maternal and newborn morbidity and mortality.
In sub-Saharan Africa, data from three pilot studies found ANC delivered in groups to be acceptable and feasible to both women and providers in Ghana, Senegal, Tanzania, and Malawi [5][6][7]. A two-country cluster RCT found a higher likelihood of birth in a health care facility for Nigerian women in group versus standard ANC and a higher frequency of ANC visits in both Kenya and Nigeria [8]. Finally, a large cluster RCT conducted in Rwanda to examine the impact of group ANC on gestational age at birth found no significant difference in gestational age between intervention and control groups. This is a critical time during which to examine group ANC in order to promote healthy pregnancy and optimize maternal and newborn outcomes in low-resource settings [9]. This paper describes the design and evaluation plan for a cluster RCT that is powered to fill the knowledge gap in women's health literacy skills in order to increase self-care knowledge and care seeking during intrapartum and postpartum periods.
Description of the Model for Group Antenatal Care
The World Health Organization (WHO) Standards for Maternal and Neonatal Care [9] guided our iterative process with content experts from the United States, Ghanaian health care providers, pregnant women, and stakeholders to ensure local and cultural relevance. The group-based ANC model in this study was developed and tested for acceptability and feasibility by the corresponding author (JRL) and her Ghanaian team for the first time in a clinical setting in Ghana [5,10]. At the core of the model is a negotiation process acknowledging that some health messaging may be in conflict with cultural beliefs. The model allows participants to incorporate safe, feasible, and culturally acceptable health beliefs into self-care actions by being inclusive of traditional practices that are not harmful. As part of the model, participants and the facilitator "agree" on safe and acceptable actions within the context of the setting that are then practiced by the group.
At the initial ANC visit, women are placed into small groups of 10 to 14 women with pregnancies of similar gestational age. Standard complete histories and physical exams as well as lab tests are completed, with group visits starting at the second ANC visit. Prior to the start of each group, blood pressure and weight are measured and a urinalysis is performed by each woman with help from the midwife. Each woman then receives an individual assessment with the midwife to measure fundal height, listen to fetal heart tones, and answer any questions she prefers not to raise in the group. The midwife and women then sit in a circle facing one another for a 60-to 90-minute facilitated discussion. The model uses strategies such as storytelling, peer support, demonstration, and teach-back to enhance its effectiveness. Health literacy is incorporated as an integral part of clinical practice within the model, not as an add-on to care.
Evidence-based information is presented in a nonhierarchical, patient-centered, participatory way. Picture cards ( Figure 1) are used to enhance communication and learning in the group setting. They provide a mechanism to help convey new concepts and ideas.
The picture cards encourage valuable group discussion and are an educational aid to stimulate thinking and reflection, dialogue, and learning among participants. Content is repeated multiple times in a variety of ways to enhance retention, including the following: (1) auditory (ie, listening to stories and signs of problems), (2) visual (ie, through the use of demonstration and picture cards), and (3) kinesthetic (ie, practicing actions and handling picture cards).
The Facilitator's Guide for Group Antenatal Care, developed by JRL, provides a step-by-step guide that details how to conduct each of the group ANC visits, become a facilitator, enhance adult learning, provide respectful maternity care, and monitor for program quality, performance, and fidelity.
Aims and Objectives
The Group Antenatal Care Delivery Project (GRAND) is designed to improve health literacy, increase birth preparedness and complication readiness, and optimize maternal and newborn outcomes among women attending group-based ANC at seven rural health facilities serving predominantly low-literacy and nonliterate pregnant women in the Eastern Region of Ghana. More specifically, GRAND is designed to achieve the following aims: • Aim 1: to quantify differences in birth preparedness and complication readiness, including knowledge of danger signs and recommended action steps, between women randomized to group-based ANC and those randomized to routine individual ANC.
• Aim 2: to assess behavioral differences in care-seeking patterns (eg, facility birth rates, postnatal care, and postpartum care) between women randomized to group-based ANC and those randomized to routine individual ANC.
• Aim 3: to evaluate the clinical outcomes of mothers and their newborns (eg, decrease in maternal morbidities and perinatal and neonatal mortality) between women randomized to group-based ANC and those randomized to routine individual ANC immediately postpartum and up to 1 year following birth.
We hypothesize that pregnant women randomized into group-based ANC, as compared to women who received routine individual ANC, will exhibit increased health literacy through the following: (1) increased birth preparedness, including recognition of danger signs and knowledge of how to respond to such signs; (2) higher rates of care-seeking behaviors, including seeking care for problems identified during pregnancy, higher facility delivery rates, and increased attendance at postnatal and postpartum care; and (3) better clinical outcomes for themselves and their newborns.
Overview
This study uses a theoretical model originally developed by Squiers et al [11] and modified in our preliminary research to assess maternal health literacy [5]. The Health Literacy Skills Framework uses an ecological perspective to help assist in the development and testing of potential interventions to impact a patient's health literacy [11]. As illustrated in Figure 2, our modified theoretical model, which is renamed the Maternal Health Literacy Skills Framework [5], is used to guide the aims and data analytic plan.
Our model addresses how group ANC builds knowledge by increasing the comprehension of stimuli, promoting self-determination, increasing action, and ultimately improving maternal health behaviors and outcomes. It considers how the individual's comprehension of stimuli and potential mediators may impact overall health behaviors and outcomes.
Study Design
GRAND is a 5-year cluster RCT. The study was registered at ClinicalTrials.gov (NCT04033003) on July 25, 2019, and is a collaboration between the University of Michigan in the United States and the Dodowa Health Research Centre in Ghana. Health facilities were selected based on the number of ANC registrants per month and the average gestational age of pregnancy among women at registration in each facility. Facilities were then matched based on facility type, district, and number of monthly ANC registrants.
Study Setting
The study setting for GRAND includes four districts-Akwapim North, Yilo Krobo, Nsawam-Adoagyiri, and Lower Manya Krobo-within the Eastern Region of Ghana. Ghana ( Figure 3) has a population of approximately 30 million people and is situated in Western Africa between Togo, Burkina Faso, Ivory Coast, and the Atlantic Ocean.
Ghana is divided into 16 administrative regions, with the Eastern Region situated north and adjacent to the region that includes the capital city of Accra, the Greater Accra Region. While Greater Accra is predominantly urban and periurban, the Eastern Region relies on a primarily agrarian economy, including both subsistence and commercial farming. Approximately 20% of residents never attended any formal schooling, with another 60% stopping their education at the primary level (14.5%) or at the junior secondary (ie, high school) level (45.3%). Women are twice as likely as men to have never received any schooling [12]. According to the 2017 Ghana Maternal Health Survey, the fertility rate for the region is 3.8, comparable to the national average of 3.9 [1].
Sampling and Randomization Frame
Facilities were randomized using a matched-pair method. Variables for matching included the number of deliveries and the average gestational age of pregnancy among women at the time of enrollment for ANC in each facility, so that facilities within each pair are similar to each other with regard to these matching factors. For each pair of facilities, one site was randomly assigned to group-based ANC (intervention) and the other to routine individual ANC (control). The matching and randomization process was completed using the nbpMatching package from R software (version 1.5.0; R Foundation for Statistical Computing) [13]. The locations of the chosen facilities ensure that participating facilities will be far enough apart to minimize the likelihood of cross-group contamination.
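To make the matched-pair step concrete, the following base-R sketch pairs a set of hypothetical facilities on two matching variables and then randomly assigns one facility from each pair to each arm. The facility values are invented for illustration, and the simple greedy pairing below is an assumption standing in for the optimal non-bipartite matching performed by the nbpMatching package; it is not the study's actual code or data.

set.seed(2019)

# Hypothetical matching variables for 14 facilities (not the real study data)
facilities <- data.frame(
  id = paste0("F", 1:14),
  anc_registrants_per_month     = c(38, 52, 41, 60, 35, 48, 55, 44, 39, 58, 50, 36, 47, 61),
  mean_gest_age_at_registration = c(16, 14, 18, 13, 19, 15, 14, 17, 18, 13, 15, 19, 16, 12)
)

# Standardize the matching variables so both contribute equally to the distance
z <- scale(facilities[, c("anc_registrants_per_month", "mean_gest_age_at_registration")])
d <- as.matrix(dist(z))   # Euclidean distance between every pair of facilities
diag(d) <- Inf            # a facility cannot be matched with itself

# Greedy pairing: repeatedly take the closest remaining pair of facilities
unmatched <- seq_len(nrow(d))
pairs <- list()
while (length(unmatched) > 1) {
  sub  <- d[unmatched, unmatched, drop = FALSE]
  idx  <- which(sub == min(sub), arr.ind = TRUE)[1, ]
  pair <- unmatched[idx]
  pairs[[length(pairs) + 1]] <- pair
  unmatched <- setdiff(unmatched, pair)
}

# Randomly assign one member of each pair to group ANC, the other to routine ANC
facilities$arm <- NA
for (p in pairs) {
  grp <- sample(p, 1)
  facilities$arm[grp] <- "group ANC"
  facilities$arm[setdiff(p, grp)] <- "routine individual ANC"
}
facilities[order(facilities$arm), ]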
Power and Sample Size Calculations
We calculated the sample size based on three primary outcomes: the change in birth preparedness and complication readiness index scores, the percent change in women obtaining maternal postpartum checkups, and the percent change in babies obtaining postnatal checkups within the first 2 days after birth. See Table 1 for a complete list of primary and secondary outcomes.
According to the literature, the median intraclass correlation coefficient (ICC) was 0.010 [14]. Since we proposed a cluster randomized design based on seven intervention facilities and seven control facilities, we considered the effect of the ICC. The ICC is a measure of the extent to which outcomes of women within the same facility are more similar to one another than to those of women at other facilities. Hence, we conducted our sample size calculation for an ICC equal to 0.01 using the CRTSize package from R software [15]. First, the percentage of women in Ghana who were categorized as "prepared for birth" was 30% [16]. We expect that our ANC intervention will improve preparedness to 45%, as measured by the birth preparedness and complication readiness index. At a significance level of .05, we need 84 women per facility to reach 80% power to detect such an effect. Next, approximately 60% of women in the Eastern Region of Ghana receive a maternal postpartum checkup in the first 2 days after birth [12]. We expect that our intervention will increase this value to 75%. To test this, we need 76 women per facility. Finally, the current percentage of newborns obtaining postnatal checkups in rural Ghana within 2 days is 22% [12]. We expect that our intervention will increase this value to 35%, in which case we will need at least 100 women per facility to see such an effect. To preserve power in the face of attrition, we proposed recruiting 120 women per facility. Hence, the total number of women to be recruited is 1680, based on the 20% attrition rate observed in our pilot work.
Table 1. Primary and secondary outcomes.
Aim 1: to quantify differences in birth preparedness, knowledge of pregnancy and newborn danger signs, and recommended action steps.
Primary outcomes: birth preparedness and complication readiness (eg, saved money, identified birth facility and emergency transportation to facility, and identified blood donor); ability to identify danger signs in pregnancy (eg, bleeding, severe headache, blurred vision, and fever); ability to identify postpartum danger signs (eg, increased bleeding or large clots, weakness or fainting, fever, pain in abdomen or breasts, and painful urination); ability to identify newborn danger signs (eg, poor suck, jaundice, difficulty or fast breathing, and convulsions); ability to identify the recommended action steps when a problem is identified (eg, call for help, have a plan for transportation, identify someone to accompany you to the facility, identify someone to care for the family, go straight to the facility, and supportive care along the way to the facility).
Secondary outcomes: self-efficacy, operationalized care-seeking history, and health information knowledge.
Training of Research Personnel
Prior to data collection, all research assistants (RAs) were trained for the study by the primary investigator and coinvestigators. All trainings were held in English, the official language of Ghana, with discussions regarding key terms in Dangbe, Ga, Akan, and Ewe. Trainings included the following: (1) an overview of the study and its protocol, (2) information about the ethical treatment of human subjects, (3) standardized record keeping and data collection for the study, and (4) strategies for reducing bias and error. Biannual refreshers will be conducted with all RAs. All RAs are fluent in English as well as in the dialect and culture of their assigned area.
Training of Clinical Personnel
Prior to data collection, we conducted a training of trainers for research personnel at the Dodowa Health Research Centre and maternal, newborn, and child health nurses representing the four District Health Directorates. All registered nurses and registered midwives providing ANC at both intervention and control facilities received an update on the essential components of ANC based on WHO guidelines to ensure equal quality at all sites at baseline. Providers at intervention sites were trained to implement group-based ANC, whereas providers at control sites will continue delivering routine individual ANC. Providers at study sites randomized to group care were trained in the delivery of the methodology. The provider training mirrors the facilitator's guide, including an emphasis on active listening, ideal conditions to maximize learning, and the use of picture cards as an important training resource for low-literacy learners. These trainers, with assistance from the primary investigator and two experienced trainers, then conducted a 3-day didactic training with groups of 10 to 12 clinical personnel focused on facilitating group ANC, use of the methodology, organizing groups, and an overview of the research. All trainings were in English, the official language of Ghana, with discussions regarding key terms in both English and the local languages. Participants then practiced delivering care using the group model with support from the trainers. A learning methods checklist and a fidelity checklist for provider readiness, which was established during preliminary studies, were used to provide feedback to participants during practice and to establish when each individual is ready to take on facilitating a group, based on the checklist scores.
Recruitment of Participants and Informed Consent
Recruitment of women will occur at individual health facilities. The trained RA works with clinic staff to identify women who meet the eligibility criteria and are healthy enough to discuss enrolling in an ANC intervention. The RA will inform health facility staff as to when they will be at the clinic and available to women interested in learning more about the study. Midwives will identify women (1) whose pregnancies are at less than 20 weeks' gestation; (2) who speak Dangme, Ga, Akan, Ewe, or English; (3) who are over the age of 15 years; and (4) who are not considered high risk.
The midwife will then instruct women who qualify to talk to the RA if they are interested in learning more about the study. Women who approach the RA will be read an approved recruitment script. Those who are willing to participate will be taken through an informed consent procedure and complete baseline data collection.
The procedure for informed consent includes the following:
1. An informed consent document in English is translated into Dangme, Ga, Akan, and Ewe.
2. The informed consent document is read aloud individually to all potential participants in private.
3. The Ghanaian RA asks the potential participant questions to ensure understanding of the research process and informed consent document and invites questions until the information is clear.
4. The participant signs the document or marks it with a thumbprint.
5. The RA uses the camera on the encrypted tablet to take a photograph of the signed or thumbprinted page; the image is then stored securely, similar to all study data.
In the Eastern Region, 10.4% of women and girls aged 15 to 49 years have never attended school, and only 15.7% have completed secondary school or higher [12]. A teach-back method will be used to confirm participant comprehension of the study requirements and methodology. The RA will ask potential participants to describe their understanding of the study's purpose, procedure, risks, and benefits using open-ended prompts and will repeat the material until understanding is achieved.
Data Collection and Measures
All quantitative data will be collected by trained RAs using encrypted and password-protected tablets as well as a secure web application for data collection and database management geared to support online and offline data capture for research studies. When an internet connection is not available, data will be collected offline and stored on the encrypted tablet. Once a connection is available, these data will be uploaded, verified for accuracy and completeness, and stored on a secure server.
No data will be collected by clinical providers. Data collection will occur at five time points in both intervention and control arms: at baseline (enrollment), during the third trimester, and at 6 weeks, 6 months, and 1 year postpartum (see Table 2 for measurements at each time point). Time point 0, the baseline session, occurs immediately following the consent processes; data are collected by trained RAs using a structured survey, and health information is self-reported. During visits, midwives record clinical health-related outcomes (ie, place of birth, hemoglobin levels, newborn birth weight, maternal and newborn morbidities, stillbirth, and postpartum visit within 2 days postbirth) on the women's ANC cards. These data will be collected by the RA postdelivery.
Overview
We will concurrently conduct a process evaluation to identify and document patient, provider, and system barriers and facilitators to program implementation. Using both quantitative and qualitative methods, we will identify potential and actual influences on the quality and conduct of the program's operations, implementation, and service delivery. We will employ structured observations of group sessions, interviews with providers, focus groups with women, and tracking logs to record how the intervention is delivered and received, document program fidelity, and identify opportunities to enhance the delivery of the intervention, while maximizing consistency in intervention delivery across sites. This process evaluation will add value to the analysis of the group ANC intervention by identifying barriers and facilitators at multiple levels throughout the study. For this process evaluation, we will focus on both fidelity of the intervention and dose, or frequency.
Individual Interviews With Midwives
All midwives involved in the intervention arm will be asked to participate in the process evaluation. Midwives will be approached by a member of the research team at the end of the seventh group meeting and asked if they are interested in providing feedback about group care. Those willing to participate will be taken through a consent process before the first interview begins. Each midwife will be interviewed at two random times, and each interview will last approximately 40 minutes. A structured interview using open-ended questions will be conducted to explore the midwife's perceptions of group versus standard ANC, barriers to implementation, challenges to integrating group-based ANC into the existing clinic workflow, and strategies that have helped with implementation. Interviews will be audiotaped, with permission from the participant, to ensure accuracy of responses; midwives can refuse to be audiotaped yet continue with the interview. The RA will write short-answer responses on a data collection form. Audiotapes will be transcribed and deidentified; tapes will be destroyed immediately after transcription. We have seven health facilities randomized to the intervention arm and 2 to 4 midwives at each facility; as each midwife may be interviewed twice, we expect up to 28 midwives and approximately 56 interviews in the process evaluation.
Focus Group Discussion With Participants
Groups of women in the intervention arm will be randomly selected to participate in focus groups for process evaluation. We anticipate 10 random groups of 10 women through the course of the study for a total of 100 women in the focus groups. Women will be asked to describe their perceptions of group versus standard ANC, their perceptions of the value of group ANC, and how they could envision the process being improved.
Focus group discussions will be led by a member of the research team with randomly selected groups of women completing group ANC throughout the study. Focus group participants will provide consent individually before they enter the focus group room so they may choose whether they want to participate. The group will be conducted in a private setting, and names will not be used during the discussion. Audiotapes will be transcribed and deidentified; tapes will be destroyed immediately after transcription. Each focus group discussion will last about 1 hour.
Structured Observations
A sample of 2 out of 7 group ANC visits will be observed for each provider to monitor fidelity to the model (eg, whether content is delivered as intended, women are engaged enough to actively participate in group discussions and activities, picture cards are used as written in the facilitator's guide, and feedback is provided to participants during demonstrations).
Tracking Logs
A brief form will be completed by the midwife provider each time an ANC visit is held to track the date of the session and the number of participants from the group in attendance, in order to track dose. See Figure 4 for flow diagram of enrollment and data collection.
Data Analysis: Aims 1 to 3
Data from all participants randomly assigned to the intervention or control groups will be analyzed on an intention-to-treat basis. Deviations from randomized allocation will be reported. We will also conduct per-protocol analysis by eliminating noncompliers in the analysis. Summary statistics based on mean, SD, or frequency will be used to characterize the sample distribution of each arm. Proper transformations will be investigated and taken if the sample distributions of continuous variables violate the normality assumption. For aims 1 to 3, generalized linear mixed models will be used to test the differences between the two arms since a cluster RCT design will be used. There are four components in generalized linear mixed models: outcome variable, fixed effects, random effect, and link function. The fixed effects include an explanatory variable and covariates. In this study, all three aims have the same explanatory variable, which is a binary variable indicating the arm to which women are assigned. The study sites, gestational age, and women's demographic variables, such as education or literacy, marital status, pregnancy history, and medical history, will be added to the generalized linear mixed models as covariates to increase the precision of the estimates. The random effect comprises the 14 facilities. In this study, a random intercept model will be used to account for the cluster effect. The outcome variables and link functions in generalized linear mixed models depend on the aims and are described below. The generalized linear mixed models are constructed to test whether the explanatory variable is significant at the .05 level using the likelihood ratio test.
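A minimal sketch of this model for one binary outcome is shown below using lme4's glmer; the variable names are illustrative rather than taken from the study database, and the covariate list is abbreviated.

```r
# Random-intercept logistic model for a binary outcome (eg, facility birth),
# assuming a data frame `dat` with one row per woman. Variable names are
# illustrative, not taken from the study database.
library(lme4)

m_full <- glmer(facility_birth ~ arm + gestational_age + education + parity +
                  (1 | facility),
                data = dat, family = binomial(link = "logit"))

# Null model without the arm indicator, for the likelihood ratio test
m_null <- update(m_full, . ~ . - arm)

# Likelihood ratio test of the arm effect at the .05 level
anova(m_null, m_full)

# Odds ratio for group-based ANC vs routine ANC
# (the coefficient name depends on how the `arm` factor is coded)
exp(fixef(m_full)["armgroup"])
```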
For aim 1, to quantify differences in the recognition of pregnancy and newborn danger signs and knowledge of recommended action steps, the birth preparedness and complication readiness index will be measured at enrollment and at the third trimester. We will add baseline data as covariates. The logit link function will be used for each binary birth preparedness and complication readiness question to test the efficacy of the group-based ANC method. The identity link function will be used when the outcome variable is a summary statistic of the birth preparedness and complication readiness index. When the P value of the explanatory variable is less than .05, we will declare a significant difference between the two arms. Average changes in birth preparedness and complication readiness summary statistics or odds ratios for each question will be used to quantify the effect of the group-based ANC intervention.
For aim 2, where we will assess behavioral differences in care-seeking patterns between the two arms, the outcome variables are frequency of attendance of ANC visits, facility birth, and postnatal or postpartum care. For the attendance outcome variable, the identity function will be used. For facility birth and postnatal or postpartum care, the logit link function will be used. When the P value of the explanatory variable is less than .05, we will declare a significant difference between the two arms. The effect of group-based ANC on attendance will be quantified by the average difference. The effects of group-based ANC on facility birth and postnatal or postpartum care will be quantified by odds ratios. For the secondary outcomes in aim 2, the logit or identity link function will be used in a way similar to the primary outcomes.
For aim 3, in order to evaluate the clinical outcomes of mothers and their newborns, the outcome variables are maternal pregnancy-related morbidities and newborn birth status. For maternal morbidities, the logit link function will be used. Since newborn birth status is classified into three categories-stillbirth, live birth, and early neonatal mortality-the cumulative logit link function [17] will be used. When the explanatory variable is significant at .05, the effect of group-based ANC on maternal pregnancy-related morbidities and newborn birth status will be interpreted using the odds ratio. We will illustrate the difference between outcomes using odds ratios for each pair of newborn birth statuses. The secondary outcomes will be analyzed similarly using identity or logit link functions.
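Because lme4 does not fit cumulative logit links directly, a hedged sketch of the aim 3 model is shown below using the ordinal package's cumulative link mixed model; the package choice, the category ordering, and the variable names are assumptions made for illustration, not tools named in the protocol.

```r
# Cumulative logit mixed model for the three-category newborn birth status.
# The ordinal package is one option for this; it is an assumption here,
# since the protocol names only the link function, not the software.
library(ordinal)

# Ordering of categories is shown only for illustration
dat$birth_status <- factor(dat$birth_status,
                           levels = c("stillbirth", "early neonatal mortality", "live birth"),
                           ordered = TRUE)

m_ord <- clmm(birth_status ~ arm + gestational_age + education + (1 | facility),
              data = dat, link = "logit")
summary(m_ord)   # exponentiated coefficients give cumulative odds ratios
```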
For multiple outcomes in the same family, we will conduct direct inference using the Holm multiple testing procedure [18] to control for the family-wise error rate at a level of .05 [19].
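In R this amounts to a single call to p.adjust; the p values below are placeholders, not study results.

```r
# Holm adjustment for a family of outcome p values (base R); values are hypothetical.
p_raw <- c(preparedness = 0.012, facility_birth = 0.030, pnc_visit = 0.048)
p.adjust(p_raw, method = "holm")   # compare adjusted values against .05
```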
The generalized linear mixed model analysis will be carried out using the lme4 package from R software [20]. All findings will be reported using the CONSORT (Consolidated Standards of Reporting Trials) statement as a guide [21]. Full transparency will be provided when reporting experimental details so that others may reproduce and extend our findings.
Data Analysis: Process Evaluation
The approach by Steckler et al [22] will guide the analysis of our process evaluation of the data. Qualitative data will be obtained from semistructured interviews. All qualitative data will be collected by the research team and will be transcribed verbatim into English, leaving key phrases that are difficult to translate intact, with the closest approximate meaning put into parentheses in the transcript. No data will be collected by clinical providers. All transcripts will be stored on a password-protected server. All data from semistructured interviews will be entered into NVivo qualitative software (QSR International) to assist with the identification of key themes. Structured observations will be recorded and summarized for key points. The use of an audit trail composed of methodological and analytical documentation and validation with colleagues will be used to achieve validity.
Ethics Approval
This study and all procedures were approved by the Institutional Review Boards (IRBs) at the University of Michigan (HUM-00161464) and the Ghana Health Service (GHS-ERC016/04/19). This is a report of a study protocol; therefore, human subject consent was not necessary. As required by the University of Michigan, regardless of the country of residence, all research staff, including principal investigators, coinvestigators, and RAs, on research projects that involve human study participants must complete the online program for education and evaluation in responsible research and scholarship or equivalent, and they must have their human subjects certification renewed every 3 years.
Results
The study was funded in September 2018. During the first year, we completed several preparatory milestones, including setting up REDCap [23,24] for secure data management, developing and pilot-testing data collection instruments with modifications for the local context, identifying study sites for inclusion, and randomizing study sites.
Study data are collected and managed using REDCap at the Dodowa Health Research Centre. REDCap is a secure, web-based software platform designed to support data capture for research studies. REDCap provides (1) an intuitive interface for validated data capture, (2) audit trails for tracking data manipulation and export procedures, (3) automated export procedures for seamless data downloads to common statistical packages, and (4) procedures for data integration and interoperability with external sources [23,24].
We also conducted a 3-day training for 10 champion trainers: 2 from each district in the research study and 2 from the provincial headquarters. The training covered an introduction to the study, an update or refresher on Ghanaian guidelines for ANC, and how to conduct group ANC using the facilitator's guide and methodology. A learning methods checklist was employed to ensure fidelity to the model. A schedule was prepared for the next 2 weeks of training at the district levels.
Recruitment and enrollment of participants and data collection started in July 2019. In November 2021, we completed participant enrollment in the study (n=1761), and we completed data collection at the third trimester in May 2022 (n=1284). Data collection at the additional three time points is ongoing: 6 weeks postpartum, 6 months postpartum, and 1 year postpartum. We are currently conducting preliminary data analysis and expect the results to be published in 2023.
Overview
We hypothesize that pregnant women randomized into group-based ANC will exhibit increased birth preparedness and complication readiness, including recognition of danger signs and knowledge of how to respond to such signs. This may result in higher rates of care-seeking behaviors, including seeking care for problems identified during pregnancy, higher facility-based delivery rates, and increased attendance at postnatal and postpartum care appointments.
This study is significant and timely because it is the first cluster RCT to be conducted in Ghana to examine the effects of group-based ANC on maternal and newborn clinical and behavioral outcomes. Ghana is one of 24 priority countries targeted by the United States Agency for International Development to improve maternal and child health and end preventable death [25].
Recent recommendations by the WHO call for rigorous research into group ANC to improve the use and quality of care [9]. A strength of our study is the use of a theoretical framework to examine health literacy. Initially considered as a patient's ability to read and understand written information, health literacy is now more broadly defined as a person's ability to acquire or access information, understand it, and use the information in ways that promote and maintain good health [26,27]. Despite a burgeoning emphasis on health literacy in high-resource countries [28], there is a dearth of studies examining interventions to improve health literacy in low-resource settings [29]. Even fewer studies have examined maternal health literacy, defined as the "cognitive and social skills which determine the motivation and ability of women to gain access to, understand, and use information in ways that promote and maintain their health and that of their children" [29]. New approaches to improve health literacy are sorely needed in countries where women and newborns continue to die from preventable causes [30].
Our process evaluation will allow us to contribute to a growing body of evidence that identifies barriers and facilitators to the implementation of group ANC. Findings from the process evaluation will contribute to eventual scale-up of the intervention in Ghana should group ANC be shown to improve maternal and newborn outcomes.
Our research team is committed to disseminating the findings from this proposed study in four different ways: (1) presentations at national and international conferences; (2) journal articles in peer-reviewed journals, including open access for our international colleagues; (3) community presentations, media events, and other public venues where we intend to discuss our findings; and (4) meetings and presentations with the Ghana Health Service to discuss cost-effective ways for scaling up the project and ensuring sustainability.
Limitations
Although we have designed a rigorous cluster RCT, neither the study sites nor the participants are blinded to the study conditions because providers at sites have been trained to deliver group ANC. We minimized selection bias by randomly selecting sites using a stratified random sampling method from the sampling package in R software. Participating sites are limited to one rural area of Ghana; thus, results may not be generalizable to urban settings. However, results could guide country-wide policies for improving maternal and newborn health, and results could highlight the benefits of group ANC for similar rural areas across Africa where maternal and newborn morbidity and mortality are high.
Improving maternal and newborn health outcomes has been a major focus for the governments of many low-and middle-income countries, including Ghana. Free maternal and child health has been introduced in Ghana as part of a comprehensive policy to improve maternal health care delivery and reduce maternal and child deaths [1]. Group ANC has the potential to improve the quality of care and pregnancy outcomes for women and their newborns. Findings from this study will provide strong evidence and lessons learned to contribute to future policies and scale-up for all of Ghana.
Determining Sustainable Tourism in Regions
The goal of achieving sustainable tourism is now a priority for many tourism planners. It has been suggested that stakeholder analysis is an essential step in determining sustainable tourism in regions, given its highly contextual nature. However, previous research has tended to focus heavily on stakeholders with the assumption that attitudes within groups are homogeneous. This research questions this assumption and in doing so, takes a critical approach by examining attitudes towards sustainable tourism and then assesses whether attitudes align with stakeholder groups. The study was conducted in the island state of Tasmania, Australia, and utilised the Q-methodology to examine attitudes towards sustainable tourism in the Bay of Fires region. The results concur with recent research, which shows that attitudes do not always align with those of stakeholder groups. The critical and reflexive approach suggests that assumptions regarding stakeholder attitudes need to be reviewed and more attention given to people’s contextualised attitudes, rather than the stakeholder group in which they sit.
Introduction
It is now widely accepted that stakeholders' attitudes must be taken into account in order for sustainable tourism to be achieved [1].The term "stakeholder" was first defined by Freeman [2] as 'any group or individual who can affect or is affected by the achievement of an organisation's objectives'.Stakeholder methodology generally accepts that the first step in stakeholder consultation is to determine what stakeholder groups exist, then to explore what potential or current power they have and incorporate these attitudes into planning for sustainable tourism [3][4][5].However, in recent years, a critical turn has taken place within sustainable tourism research [6,7].This approach argues that relationships that communities have with tourism change over time and are not static.Moreover, critical theorists question traditional assumptions and call for reflexive and empathetic modes of inquiry that challenge traditional assumptions [8,9].In response to this approach, recent research has examined whether stakeholders' attitudes align with their traditional stakeholder group [10], as is often postulated within tourism literature.It has found that while some alignment between stakeholders' groups and their individual attitudes occurs, it is not always the case.This paper responds to the calls by critical theorists and examines the issue of stakeholder groups' attitudes from an alternative perspective.Rather than identifying groups first and then examining whether attitudes within them aligned, the aim of this research was to explore the attitudes that individuals had towards sustainable tourism first and following this, examine whether their attitudes aligned with their stakeholder groupings.This was achieved through the use of the pictorial Q-methodology in the Bay of Bay Fires region of Tasmania, Australia.Consequently, the objectives of this paper are to: (1) Conduct a critical assessment of the literature related to sustainable tourism and stakeholder analysis; (2) Review the socio-political situation within the Bay of Fires region in Tasmania in order to determine issues that may affect individuals' attitudes towards the region and to inform the selection of the research approach; (3) Undertake a Q-method analysis of individuals' attitudes towards sustainable tourism and assess whether these align with their behavioural stakeholder group; (4) Contribute to a critical review of stakeholder theory regarding individuals' alignment with behavioural stakeholder groups; and (5) Identify the key issues of concern regarding sustainable tourism, for those living in the Bay of Fires region of Tasmania, Australia.
In order to meet these objectives, the paper is structured in the following way: It begins by exploring the literature pertaining to sustainability, sustainable tourism and stakeholder involvement and attitudinal research.Following this the paper details the case study region and the Q-methodology as a legitimate means to respond to the theoretical gaps and locational nuances of the case study region.Following the results, the paper makes theoretical contributions to stakeholder theory and specifically individuals' alignment with behavioural stakeholder groups.Finally, it makes recommendations for managers in the case study region and recommendations for further research.
Literature Review
As has been well documented, sustainable development was first defined by the Brundtland Commission [11] (p. 43) as that which "meets the needs of the present without compromising the ability of future generations to meet with own needs."Its uptake by the tourism industry, along with other industry sectors was rapid and resulted in it being defined specifically for tourism.For example, the UNEP/WTO [12] (pp.11-12) defined sustainable tourism as: " . . .a condition of tourism based on the principles of sustainable development, taking full account of its current and future economic, social and environmental impacts."Within academic literature a raft of academic definitions has also emerged: It was claimed by Graci and Dodds [13] that over 200 definitions exist.A broad and enduring definition of sustainable tourism was proposed by Muller [14] (p.132) and is similar to the aforementioned UNEP/WTO definition, as it defined sustainable tourism as an approach to tourism that influences: (1) economic health; (2) subjective well-being of local peoples; (3) protection of natural resources; (4) healthy culture; and (5) optimum satisfaction of guest requirements.
Recently, debate has centred on the concept of sustainable tourism, including arguments that it is inherently contradictory and difficult to achieve [15] and that it places too much emphasis on macro-level, regional-to-global concerns, which in turn detract from a focus on local issues [1]. Critics also suggest that too much scientific, quantifiable focus is given to issues, with only limited attention being given to intangible aspects such as cultural change. Moreover, the authors posit that the focus is not necessarily always centred upon local stakeholders, as absent international stakeholders, such as investors, must also be considered. They also argue that if sustainable tourism studies were to enhance their micro-level approaches through active engagement with stakeholders, this would provide solutions to operationalise the macro goals of sustainable tourism.
Given the requirement for sustainable tourism to address multiple and inherently conflicting issues, including environmental preservation, economic health, satisfying guests and ensuring community wellbeing [13], it appears logical that engagement with those stakeholders who represent the groups is essential in order to achieve sustainable tourism [1,[15][16][17][18][19][20][21][22].Arguably sustainable tourism cannot occur if agreement and collaboration between stakeholders does not exist [23,24].
Within tourism research, stakeholder analysis has been explored from a variety of perspectives. Research has explored engagement methods and classified different stakeholder groups with names that refer to their behaviour, such as visitors, community members, those in regulatory positions, and those in the private or public sector [25,26]. However, several methodological trends are apparent within this research. The first is a tendency to assess only one stakeholder group at a time, such as residents' attitudes towards development [27][28][29][30][31], tourists' attitudes [32,33], operators' attitudes [34] and policy makers' attitudes [35]. Very little research compares the attitudes of multiple stakeholder groups concurrently [16,[36][37][38][39][40]. While allowing for detailed understandings of different groups' perspectives, this trend runs the risk of not developing a holistic understanding of how the issue is affecting the entire community. Moreover, it assumes that specific attitudes are contained within specific stakeholder groups and are not shared by others.
The second trend is related to the first. That is, the tendency for researchers to propose that stakeholders' attitudes align with those of their fellow stakeholder group members [18,37], even across regions [41]. This approach assumes homogeneity exists within the groups, a notion that has become increasingly challenged in recent years [26].
A third issue is a tendency to focus on identifying stakeholder groups first and then assessing if synergies occur within those groups [41]. Leading critical thinkers suggest that the assumptions of approaches in research should not be taken for granted. Given recent research that suggests that stakeholders' attitudes may not align with their behavioural group, it would appear that this approach warrants examination. The critical theory approach has been embraced by many researchers in sustainable tourism. In addition to questioning approaches and assumptions, it advocates the selection of research approaches that empower those who have not had their voices heard, through reflexive and culturally appropriate methods [8]. This has synergies with a growing movement within sustainable tourism research to reverse the tendency to overlook minority groups and disadvantaged populations [42]. Consequently, with this in mind, this research sought to challenge traditional approaches to stakeholder analysis by assessing a wide variety of individuals and categorising them by their attitudes first, rather than their stakeholder group. This approach has rarely been used in tourism research before, with the exception of Ryan [43], who categorised stakeholders by their attitudes rather than their behavioural group, and Hunter [44], who categorised tourists by their subjective opinions. In doing so, it challenges assumptions regarding the alignment of individual stakeholder attitudes with their behavioural group.
Study Region
The Bay of Fires region is a coastal region, made up largely of white sandy beaches separated by occasional granite rocky outcrops that are covered in orange lichen.The region's boundaries are unclear but it is often referred to as stretching from Binalong Bay in the south to Eddystone Point, 30 km to the north, including Mount William National Park.The region was named in 1773 by Captain Tobias Furneaux who sighted many fires along the coast, which were created by the Larapauna Family Group [45].The region is rich in cultural heritage, containing many Aboriginal middens (shell and bone deposits).
The region is largely made up of dry coastal land and low land vegetation.It does however stretch inland to the west, including the wetter and elevated region of Ben Lomond and Mount Pearson (Figure 1) [46].Three distinct zones have been recognised within the region: (1) The northern section from Ansons Bay to Eddystone Point.This section is located within Mt William National Park and has camping facilities but only limited public access; (2) The middle section, which is located around and to the south of Ansons Bay.There are no shops or other facilities in this area; (3) The southern section, between Binalong Bay and The Gardens.This section contains houses, caravan and tent camping sites, and relatively good road access to facilities in the south of the section, such as Binalong Bay, and St Helens.
A variety of threatened flora and fauna is located within all three zones in the region, which was articulated in a proposal to increase the region from conservation to National Park status in 2009 [46].
Since 1950 and most recently following the listing of the region as "The World's Hottest Travel Destination" in 2009 [48], there has been a large degree of tourism development in Binalong Bay and the Bay of Fires region.At the time of writing there were around 200 shacks/dwellings within the Bay of Fires region; 100 of which were permanent residences and 100 that were holiday residences/tourist accommodations [49].Apart from a small community at Binalong Bay, where a café and fire station is located, there were no other services, apart from at St Helens, located 11 km to the south east of Binalong Bay.
The Research Experience
A key component of critical thinking research approaches is that researchers would take care to understand participants' contextualised meanings and beliefs, such that the research may have the opportunity to contribute towards positive changes in society [8].With this in mind, the research team took time during the planning phase to become sensitised to the major socio-political issues facing the region, as well as current tensions that may exist within the local community.Local media sources and local radio were consulted and a regional visit was undertaken prior to the research commencing to determine where the research should be undertaken and to determine the socio-political sensitivities within the region that were relevant to tourism.This process identified that there were concerns that had been voiced in local media about the growth of tourism, local recreational use and the development of tourism infrastructure.
During the research-planning phase, information sheets that outlined the goals of the project were sent to regional tourism bodies and local councils.Media releases were also made and a small amount of local radio exposure resulted from these.
The planning phase also identified who were to be the interviewees.This is a difficult issue in tourism-based communities, where owners of holiday shacks and rental properties often live outside of the region.Given our budgetary and timing constraints, a decision was made to limit the local community interviews to the residents of the towns of Binalong Bay, and St Helens.
Following the planning stage, interviews were arranged.Interviews of tourism operators, those in regulatory positions and members of community groups were arranged through a process of purposive sampling that used email and phone calls to make the first point of contact.
The Research Approach
The aim of this research was to explore the attitudes that individuals had towards sustainable tourism in the first instance and, following this, to examine whether their attitudes aligned with their stakeholder groupings. The critical theory approach that underpinned this research sought to address intangible aspects, such as attitudes, in order to respond to the critique of Dangi and Jamal [1] that research related to sustainability often overlooks local, intangible issues such as cultural change. Consequently, a mixed methods approach was taken, one that involved quantitative analysis but also placed a heavy emphasis on the attitudes and preferences of local people. In addition, to ensure that the voices of those who may have been overlooked in previous studies were heard, the researchers sought a design that would enhance discussion and be inclusive by not relying on written responses. Photographs were chosen in place of written surveys, a technique that methodological researchers regard as a surrogate for reality: it standardises the 'question', as every participant views the same picture, and it enhances response validity by keeping variables, such as crowding, constant within the photographs [50,51].
The pictorial technique utilised was the Q-methodology.A technique that has evolved from factor analysis, Q was first applied by Stephenson in 1935.It allows participants to sort through a variety of options, depicted by photographs or statements and then subjects their assessments to factor analysis, thus deciphering their individual subjectivities, as well as the relationships of their attitudes to other participants [52].Q-methodology as applied in this study required no level of literacy and its lack of use in our study region meant it was also a novel approach that the research team hoped would interest participants.
The research team followed the Q-methodology outlined by Stergiou and Airey [52] and Hunter [44], which involved the following five steps. The initial step of Q-methodology requires the research team to develop a concourse of issues that underpin the rationale for the study and the subsequent choice of photographs that study participants will be asked to sort and rank. The concourse may be developed from existing scales, literature reviews or interviews that elicit major issues [52]. Given the research team's desire to explore attitudes towards sustainable tourism, the research team selected Boyd and Butler's [53] Ecotourism Spectrum (ECOS), which was derived from Clark and Stankey's [54] visitor experience (Recreation Opportunity Spectrum) framework. This was most relevant to the largely nature based tourism experiences on offer within the case study region. The spectrum was augmented to conceptualise not only the existing, but also a possible, range of tourism development in the region. This meant that four defining attributes of sustainable tourism development options were present within the concourse: access; accommodation; impacts and management; and visitor experiences. These focused on the attributes, not the impact of tourism, to align with the focus of the study (Table 1). A final Q set of photographs was selected following several iterations of pilot tests with a variety of stakeholders. The set included 33 photographs, which was in line with conventional expectations for small-sample Q-studies [55]. It should be noted that the same set of photographs was also used in two other regions in Tasmania, which will be analysed in future publications. The set of photographs included a variety of options for each of the defining attributes, to allow stakeholders to be very specific about their preferences (Figure 2). Random numbers were assigned to each of the photographs, which facilitated ease of data recording [52]. The second step of the Q-methodology was to identify and recruit the P set, or interview participants. The critical approach of this study was also demonstrated by the selection of participants. The research team took into account the criticism by Dangi and Jamal [1] that sustainable tourism research, given its global focus, tends to place emphasis on non-local stakeholders such as absent investors. To counteract this issue and ensure a local focus was given to this research, the team decided only to include stakeholders living in our study region. As such, a theory-driven purposive sampling strategy was utilised [44]. The P set consisted of 43 respondents from four tourism stakeholder groups: operators (9 respondents), regulators (5 respondents), community group members (2 respondents from local development or advocacy groups) and locals (27 respondents). The size of the P set was appropriate for Q-studies, as the emphasis is on individual subjectivity, thus allowing for a small number of participants [54], as has been the case with many previous studies where the size of P sets has been 34 [44], 30 [55][56][57][58] and 27 [59].
The third step involved the 43 participants conducting a Q sort, by arranging the 32 photographs into three piles (most preferred, least preferred, unsure/undecided) and then ranking the photographs across 9 distribution columns that were printed out on a large poster (Figure 3) [60,61]. The Q sort interview was audio recorded and later transcribed to add depth to the data analysis.
The fourth step of Q-methodology involved analysis via the program PQMethod, Version 2.33 [62]. This process assessed the correlation between each individual's Q sort and every other participant's Q sort. Principal Components Analysis was used to create factors, or clusters of participants who sorted photographs in similar ways. This resulted in a list of participants with a nominal loading, and in the first instance it revealed that most participants loaded on the first factor. A Varimax rotation was then conducted to spread variance and allow participants to load on more than one factor. This resulted in identifying and counting significant loaders on each factor. The team used Brown's [61] (p. 222) method of including factors that had at least two significant loaders on the unrotated factor matrix. The team derived the significance level from the standard error formula 1/√N, where N equals the number of items in the Q set. The derived value for the 32 items in this study was 0.17, and at the 0.01 level of confidence this value was multiplied by 2.58 to set the significance level at 0.45. Loadings had to be 0.45 or above before they were determined to be significant. The research team also used traditional scree plots to assess where factor cut-offs should exist, to ensure a minimum sufficient set of factors that represented the data [63]. The analysis resulted in three factors that accounted for 43 of the 45 sorts, with levels of significance ranging from 0.89 to 0.42. Only two sorts were statistically insignificant in any factor and were excluded. The 43 sorts, their scores in relation to each factor and the explained variance are presented in Table 1.
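The computation itself was carried out in PQMethod; purely to make the steps explicit, the base-R sketch below reproduces the same arithmetic for an assumed matrix `sorts` of rankings (rows are photographs, columns are participants).

```r
# Illustrative re-creation of the PQMethod steps in base R: person-by-person
# correlations, principal components, varimax rotation, and the loading
# threshold. `sorts` is assumed to be an items-by-participants matrix of ranks.
R  <- cor(sorts)                     # correlations between participants' Q sorts

pc <- eigen(R)                       # principal components of the person correlations
L  <- pc$vectors[, 1:3] %*% diag(sqrt(pc$values[1:3]))   # unrotated loadings, 3 factors
rot      <- varimax(L)               # varimax rotation (stats package)
loadings <- unclass(rot$loadings)

# Significance threshold: 2.58 times the standard error 1/sqrt(number of items)
n_items   <- nrow(sorts)
threshold <- 2.58 / sqrt(n_items)    # roughly 0.45 for the Q set used here
significant <- abs(loadings) >= threshold

colSums(significant)                 # number of significant loaders per factor
```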
This analysis focused on the core factors and particularly on participants who loaded heavily on each factor.This resulted in the development of a rich knowledge set for each factor.Those participants that loaded strongly on a factor had a proportionately greater influence on the factor's characteristics.Following this, the research team could then determine the images that were highly positively or negatively significant for each factor, along with the images that could be regarded as exemplifying each factor.As with any factor analysis, it was determined that not every person would load on the identified factors.
The analysis of the Q-methods also involved the transcription of participants' responses while they sorted their photographs.This process revealed participants' rationale for their preferences.The analysis process involved matching the comments made with each of the corresponding photographs and emergent thematic analysis.Comparisons were then made of individuals' reactions to each of the photographs as well as their reasoning.
The fifth and final step involved the creation of descriptive names for each of the factors that would accurately reflect the predominant attitudes of the factors.Following this, the team determined the implications of their findings for both sustainable tourism development and the local tourism industry.
Results
The data analysis identified three factors of attitudes towards sustainable tourism development.These factors were based on three characteristics.First, they included consensus images, whereby all respondents in the Bay of Fires agreed with a significance rating of p > 0.05 (all p set).The second was that they included distinguishing images for each factor where all respondents within the factor agreed.Finally the factors contained qualitative defining statements gained through interview transcripts and thematic analysis, for each factor.
For the Bay of Fires region there were five consensus images that every respondent agreed upon (p > 0.05).Two of these were preferred images: a lone horse-rider; and a lone group of walkers.The remaining images were disliked by all respondents: an airplane hangar; a road with green edges and an image of several resorts.
In addition there were many words commonly used to describe the Bay of Fires by all three factors including; beautiful, spectacular, picturesque, clean, pristine, peaceful, and relaxed.However, when respondents were describing tourism in the Bay of Fires the following words were used: Underutilised, missed opportunities, not enough options for tourists, and enjoyment of driving, walking, being on the beach.
These consensus images and shared descriptors did not, however, capture the varying attitudes that respondents held towards the region. Rather, attitudes clustered clearly around the three factors that were determined to exist (Table 2). The three factors that emerged from the analysis may be described in the following ways:
Factor 1: Engagers with Nature
Engagers with Nature described the region as being characterised by its spectacular natural environment, its famous white sand, turquoise water and orange lichen covered rocks.They also described the region as being almost pristine.Participants in this factor also regarded the region as being relatively undeveloped and had a desire for tourists to be able to experience its isolation and peacefulness, and to see more wildlife than people.They also recognised the Bay of Fires as being a region with a significant Indigenous history.
The photographs that were ranked most highly in this group were an undeveloped track through rainforest, an image of a guided bushwalk and a hardened campsite (Table 3).Engagers with Nature believed that tourism experiences in the region should be immersive tranquil experiences that encourage tourists to engage with the natural surroundings and be of minimal impact.
[Member of a Community Group #11] " . . . to me (it's about) minimal impact on the environment . . .People walking through like that have minimal impact on the environment." [Regulator #29] " . . .I like the eco-looking lodge, it provides interpretation of the area and makes people better informed . . .they're built so their impact on the landscape is minimal." Participants in this factor believed that there was a need for increased tourism opportunities for the free independent tourists in the region.Suggestions for new opportunities included activities such as guided walking experiences that would prevent it from becoming a 'mass tourism' destination.
[Member of a Community Group #47] "When interpretation with guide is paid it creates employment." However, not all Engagers with Nature wanted an increase in tourism visitation.Indeed, this was the only factor that included some participants who wanted lower amounts of tourism than the current numbers.
The analysis also determined the photographs that Engagers with Nature did not like.These included a photograph of several high-rise resorts; an extensive resort with swimming pool; and several quad bikes.Engagers with Nature disliked these because they felt that large-scale accommodation was inappropriate for the Bay of Fires area.They also disliked high impact and noisy activities, believing that they were inappropriate to the region.
Factor 2: Environmental Accommodators
Environmental Accommodators described the Bay of Fires region as a remote place with unspoilt natural beauty.Participants in this factor valued a diversity of activities for tourists to engage in, but only on the condition that they encouraged an appreciation of the environment (fishing was the only consumptive activity mentioned).They placed importance on increasing opportunities that encouraged overnight stays in environmentally sensitive accommodation.Environmental Accommodators valued the region's Indigenous cultural heritage, and saw tourism as an opportunity to create custodians of the land: [Operator #43] " . . .tourism requires respect for the area and its values of course.I think you need to be aware of your role as a custodian, you can't take the place for granted." Analysis of this factor also revealed the photographs that Environmental Accommodators liked, including various forms of accommodation such as the eco-looking lodge, glamping and many cabins, as well as the indigenous canoes (Table 4).This focus on accommodation options is clearly the defining attitude of this factorial group.This selection of images revealed that Environmental Accommodators disliked large-scale resort style developments, believing that they would lead to an overtly commercial type of tourism in the region and that their environmental and visual impacts would be too great.Rather, Environmental Accommodators wanted to promote and provide access to the natural values of the area in a way that minimises harm to the environment.They were very keen to ensure that the tourism opportunities would be accessible to a spectrum of people.
[Operator #25] " . . .I don't want it (tourism) all to be for the people with lots of money.You can have your high-end stuff but still make it accessible for people who want more budget options.Permanent tents are quite good for that."[Regulator #44] " . . .These provide for a suite of opportunities to people who aren't high end or in the market for a guided standing camp experience to have access to the site (Bay of Fires) and provides a suite of opportunities in accommodation, whether they are tent, caravan or cabins."Like Engagers with Nature, Environmental Accommodators were divided over how much tourism the Bay of Fires should have.
Factor 3: Outdoor Recreationists
This factor group was small and included only local stakeholders (Table 5), making it the only factor that aligned with a single stakeholder group. While this group did not represent the breadth of attitudes of all local people in the region, the alignment of attitudes warranted its inclusion as its own factor. Outdoor Recreationists believed that the region should offer opportunities for relaxation and activities including fishing, walking and picnics. Their views were exemplified in the following quotes: [Local #63] " . . . driving/walking and enjoying it. People need more information about the area so they can engage with it, for example, walking along Binalong Bay sand with bare feet." [Local #72] " . . . underutilised, not enough options for tourism." The photographs that characterised Outdoor Recreationists' attitudes towards tourism centred on outdoor activities and included: a photograph of a man fishing; several horse-riders; and several quad bikes. Interestingly, there was a contradiction between what members of this group liked and disliked. While the photographs that they liked included quad biking, which is often considered environmentally destructive, members of this group expressed a desire to ensure that tourism activities would not impact negatively upon the environment. They had a strong desire to ensure that any future development did not impinge on their own recreational opportunities within the region.
Outdoor Recreationists supported an increase in tourism, but emphasised their desire for small-scale tourism and were wary of tourism that had large visual and environmental impacts.In particular they did not want to see high-density accommodation in the region, as they believed it was not needed, would impact negatively on the environment and was not in keeping with the values of their region, particularly that which was listed as a protected area.They believed that the natural environment and heritage values were more relevant for future tourism development than cultural heritage.
Other forms of tourism development that Outdoor Recreationists supported included: more visitor information; more promotion; and an increased range of tourism activities for visitors to engage in.
Alignment of Stakeholder Groups to Factors
The analysis illustrated that stakeholders' attitudes in the Bay of Fires could be neatly grouped into three factors: Engagers with Nature (1), Environmental Accommodators (2) and Outdoor Recreationists (3). All individual respondents (except for 2 participants) loaded cleanly onto the three preference factors (Table 3). This finding indicates that people's attitudes could be clearly differentiated into three groups and that these factors illustrated the breadth of attitudes within the community.
In order to determine whether there was any relationship between factors, we investigated the correlation that the factors had with each other. The correlation coefficient (a value between −1 and +1) illustrates the strength of the association between two variables, and the analysis identified how strongly the three factors were correlated (Table 6). Table 6 illustrates the high correlation between Factors 1 (Engagers with Nature) and 2 (Environmental Accommodators), indicating shared attitudes over some issues. Their similarities lay in their desire for the region to have a strong nature-based focus and for its values to be preserved. The differences were that Environmental Accommodators preferred an increase in built accommodation options and did not place such a great emphasis on highly immersive activities like bushwalking and camping. In contrast, Factors 2 and 3 had a low correlation score and shared very few attitudes towards tourism development.
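A minimal sketch, not the authors' analysis, of how the pairwise correlations between factor arrays reported in Table 6 could be computed. The factor z-score arrays below are hypothetical placeholders, not the study's data.

```python
# Hypothetical factor z-scores for a handful of Q-sort statements; the real
# analysis would use the full factor arrays produced by the Q analysis software.
import numpy as np
from scipy.stats import pearsonr

factor_scores = {
    "Engagers with Nature":        np.array([ 1.8,  1.2, -0.4, -1.5,  0.3, -1.4]),
    "Environmental Accommodators": np.array([ 1.5,  0.9,  0.2, -1.2,  0.1, -1.5]),
    "Outdoor Recreationists":      np.array([-0.2,  0.4,  1.6, -0.8,  1.1, -2.1]),
}

names = list(factor_scores)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r, p = pearsonr(factor_scores[names[i]], factor_scores[names[j]])
        # r close to +1 indicates shared attitudes between the two factors;
        # values near 0 indicate little overlap in how statements were ranked.
        print(f"{names[i]} vs {names[j]}: r = {r:.2f} (p = {p:.3f})")
```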
A final issue to resolve was the relationship of the stakeholder groups to attitudinal factors.While there is a mixture of stakeholder groups for two of the factors, Factor 3 consisted only of local people (Table 7).
The analysis revealed that operators and regulators were spread across two factors (1 and 2), while locals were spread across all three factors (1, 2 and 3) and members of community groups were only found in Factor 1 (Engagers with Nature). This indicates that the alignment of individuals' attitudes to their behavioural stakeholder groups is not clear. The other issue this highlights is that, while a purely 'local' attitude factor exists, it would be remiss to argue that this factor represents the view of all locals, as another clear group of locals existed in Factor 1.
Conclusions
This research took a critical perspective and examined individuals' attitudes towards sustainable tourism first and following this, assessed whether their attitudes aligned with their stakeholder groupings.This was achieved through the use of the pictorial Q-methodology in the Bay of Fires region of Tasmania, Australia.
The analysis of the Q sort results revealed three tourism development factors for the Bay of Fires.Significantly, one of these factors, the Outdoor Recreationists, was made up only of local people.However this did not represent all locals' views; a significant number of locals were also present in other groups, particularly Engagers with Nature.The other factor clusters that emerged from the analysis had a mix of stakeholders within them.Environmental Accommodators had respondents from three of the four stakeholder categories within it: operators, regulators and locals.Additionally, the factor group called Engagers with Nature, had individuals from each of the stakeholder groups within it, including all those who were representatives in community groups; and most of the local community members.
Traditional approaches to stakeholder research have tended to identify the stakeholder groups in the first instance, then assess their attitudes, and only then focus on the differences between these groups [64,65]. This research has demonstrated that the assumption that individuals within stakeholder groups have similar attitudes requires revision. While their primary day-to-day behaviour, such as being an operator or a person in a regulatory position, may be similar, this does not necessarily determine their attitudes. Moreover, this study demonstrated that individuals within stakeholder groups often had vastly divergent views. Locals in particular held very divergent views: some were identified in Factor 1, 'Engagers with Nature', and these individuals did not engage in recreational activities such as quad biking or horse riding. Locals with the opposite preferences for recreation were subsequently found in Factor 3, Outdoor Recreationists. These findings illustrate that research that uses stakeholder groups as its starting point runs the risk of artificially creating boundaries around behavioural groups, when in fact they should be drawn around attitudinal groups.
This research also determined that attitudes towards sustainable tourism are contextual and that the concept is perceived differently by different attitudinal groups. This finding aligns with previous work in this space [26]. The factor made up entirely of local people, termed Outdoor Recreationists, illustrated this clearly: their preferences for the style of tourism development that should occur within the Bay of Fires region were informed by the activities that they undertook during their own leisure time. As a consequence, their preferred style of tourism included horse riding, quad bikes and fishing. Relatively speaking, these activities have a higher impact on the environment than other options depicted in the photographs, such as bushwalking, yet Outdoor Recreationists did not select those lower-impact options as preferred. This was not because they had no regard for the environment; indeed, this group of participants expressed their concern for ensuring that environmental impacts were minimised and valued the region's untouched beauty. They also disliked larger-style developments such as eco lodges, due to their greater visual impact. Rather, their vision of a sustainable tourism future was one informed by their own livelihoods and recreational values.
A critical turn that has occurred within the broader field of tourism studies and more recently, studies of sustainable tourism, questions norms, assumptions and power relationships that may exist within the tourism industry and whilst undertaking research [8,9].This study has demonstrated the value of this approach by questioning the assumption of stakeholder groups' attitudinal homogeneity.Moreover the photographic, participatory approach has highlighted the importance of conducting research that focuses on the inclusion of those who may have been underrepresented by traditional means of research, due to their low levels of perceived power.
Further research is now needed to compare the attitudes of individuals across entire regions. This research was part of a broader study that examined the attitudes of stakeholders across three regions in Tasmania, Australia. The next step is to examine whether commonalities in attitudes exist across entire regions or whether they remain contextually bound. There is also a need for continued critical appraisal of the norms underpinning stakeholder research, in order to examine the issue of attitudinal alignment of individual stakeholders vis-à-vis their behavioural group. This is particularly pertinent given the consensus that stakeholder involvement at all stages of tourism development is an essential component in achieving sustainable tourism.
Figure 1. The location of the Bay of Fires Region in Tasmania, Australia [47].
Figure 2. Photographs used for the Q Sort.
Figure 3. The Q sort distribution table.
Table 1. The sustainable tourism concourse.
Table 2. Q sort factor analysis results: 43 Q sorts accounted for in 3 factors, with sorts 18 and 44 excluded due to not loading on any of the identified factors.
Notes: * Respondent key: Loc = locals; Reg = regulator; Comm = member of a community group; and Op = tourism operators.
Table 6. Correlation between factor scores.
Table 7. Count of stakeholder categories in each Bay of Fires factor.
Effectiveness of clinical assessment in Spanish forensic practice: detecting malingered psychological sequelae in victims of intimate partner violence
Background and Objectives: According to Spanish legislation, the psychological harm suffered by the victim of a criminal act is determined by assessing its impact on the victim's mental state. Usually, the victim's pain and suffering is estimated by administering clinical scales. The aim of the present study was to explore the effectiveness of psychopathological assessment using scales that are commonly used in clinical practice and whose results are presented as legal evidence in a forensic context, in order to detect malingered psychological sequelae (anxiety, depression and low self-esteem) in victims of intimate partner violence. Methods: Three scales developed in a clinical setting and regularly used in a forensic context (BDI, STAI and Rosenberg) were administered to assess malingering of symptoms. The sample comprised 66 women: 36 students and 30 real victims. The non-clinical sample was evaluated twice: the first time they gave sincere responses, and the second time they were instructed to answer as if they were victims. The real victims underwent testing in a forensic context. Results and Conclusions: The results of our research show that, even without previous knowledge of the scales, people can distort the test results by malingering symptoms that are normally accepted as sequelae of intimate partner violence, especially depression and low self-esteem; however, the results for anxiety were less homogeneous. Although these tests are used extensively in clinical psychology, our study confirms that, just by themselves, they are not a reliable source of information in a forensic context. Received: 23 November 2009. Revised: 6 September 2010. Accepted: 13 September 2010.
Introduction
According to the Spanish legislation regarding intimate partner violence that came into effect in 2004 (Ley Orgánica 1/2004), psychological violence is defined as a situation in which the victim is seen to be suffering psychological distress as a result of the crime committed. Therefore, from a legal perspective, pain and suffering (regardless of any economic compensation they may entail as evidence of civil responsibility within the criminal proceedings) need to be demonstrated. This is especially relevant nowadays, considering that the number of criminal prosecutions for intimate partner violence (IPV) in Spain has been increasing steadily, by 72.1%, from 47,262 cases in 2002 to 81,301 cases in 2007 1.
These cases rely on external reports, usually from public services or private organizations, to determine the existence of psychological sequelae that will be used as prosecution evidence in court.In order for this to happen, the psychological assessment must establish that the alleged offence is the cause of such sequelae.
Although the psychological harm suffered by victims of a criminal act is identified by assessing its impact on their mental state 2,3, clinical reports usually assess the presence of psychological sequelae using clinical scales that mainly measure depression, anxiety, sexual dysfunction, and the like [4][5][6]. However, the scientific literature establishes that the presence of these disorders and/or symptoms is not, legally, sufficient proof of the existence of psychological damage, especially given its possible interaction with previous or associated disorders 7. Other studies on gender-based violence likewise show that the psychological sequelae caused by battering are the same as those seen in victims of other types of violent crime 8,9.
Although other conditions associated with intimate partner violence -such as depression and anxiety-can be included in the clinical diagnosis, psychological signs can only be established on a forensic level when posttraumatic stress disorder (PTSD) is present 10 .
The diagnosis of a specific disorder is sufficient in a clinical assessment, but not in a legal-forensic context, in which the cause and effect relationship between the criminal act and the psychological sequelae must be established beyond any reasonable doubt.The possibility that the alleged symptomatology may be malingered must also be taken into account 11,12 .
We can assert that PTSD is the primary disorder in intimate partner violence cases 13,14 .On a secondary level IPV is associated with trauma, depression, anxiety and other symptomatologies 15 .Some authors have even established that this secondary trauma, observed in the absence of PTSD, cannot be attributed to the sequelae of a traumatic event 16 .In spite of this, in Spanish forensic practice, IPV normally tends to be accepted as a feasible hypothesis, especially in the absence of other contradictory forensic evidence or a more accurate assessment.
The ways of assessing malingering in its various forms in forensic practice have been studied in depth 17 .In our case, the most relevant aspect is the exaggeration of symptomatology, as it gives credence to the claims made by victims of intimate partner violence.It also enables the prosecution to establish a monocausal link between the alleged crime and the sequelae detected.In other words, the legal system itself acts as an external incentive, which is an indicator of malingering suspicion 18 .
Studies have shown that the prevalence of malingering fluctuates depending on the offence type, occurring in approximately one sixth of forensic cases 19, although in cases of assault the figure rises to 50% 20.
One of the main limitations of conducting a forensic assessment based on questionnaires lies in the characteristics of the population involved. The absence of specific instruments in this field only allows questionnaires to be used as complementary evidence 21, despite the efforts made to construct more sophisticated measurement systems 10,12.
It should also be noted that motivation plays a key role in how questionnaires are answered depending on the context in which they are used (clinical or legal). In a legal context, for example, the answers a respondent gives may influence the ruling, so the respondent may try to obtain scores that are more in line with his/her wishes and needs. If this tactic were successful, it would raise doubts about the tests' validity and reliability in a forensic context 22.
All this means that courts often have to assess the psychological evidence from a clinical point of view and not a socio-legal one; that is, the disorders will be assessed only secondarily, due to the difficulties faced when linking them to the reported crime or another stressful event. In addition, courts tend to underestimate or ignore the possibility that the victim may be malingering symptoms in order to gain an advantage in the judicial proceedings.
Our study had two objectives: first, to assess the ability of subjects who had not been victims of gender-based violence to malinger psychological distress; and second, to analyse whether the Beck Depression Inventory (BDI), State-Trait Anxiety Inventory (STAI) and Rosenberg Self-Esteem Scale, tests that are commonly used in clinical practice and whose results are presented as legal evidence in a forensic context, can properly discriminate real victims from malingerers.
Participants
The sample was composed of 66 subjects (mean age 27.06 years, range 20-48), divided into two groups.The control group consisted of 36 female university students selected via a purposeful sampling process.None had been victims of IPV.The average age of the control group was 22.39 years (age range: 20-34).
The forensic experimental group was initially made up of 91 women (reduced to a final sample of 30 following the inclusion criteria described below, mean age 32.67 years, range 20-48), all victims of domestic violence.
Measures
All subjects answered three clinical tests regularly administered by public health professionals in forensic practice to determine psychological sequelae, which measure anxiety, depression and low self-esteem.
The State-Trait Anxiety Inventory (STAI) 23 is composed of two subscales, state and trait, each with 20 items, enabling the self-assessment of anxiety as a transitory state (S/A) or an underlying trait (T/A). The inventory's test-retest reliability coefficient is high (0.81) on the trait anxiety scale and rather low (0.40) on the state anxiety scale 4; the inventory also has a high internal consistency, with alpha coefficients of 0.91 and 0.94. These results are supported by a study conducted on patients suffering from anxiety disorders and other pathologies 24.
The Beck Depression Inventory (BDI) was administered in the Spanish version 25 , based on the US version 26 .
The BDI has been applied in very different cultures and countries and has repeatedly shown its usefulness as well as clinical and psychometric validity 27. It has a test-retest reliability of 0.69 to 0.90, satisfactory validity of 0.62 to 0.66 and a reliability coefficient of 0.93, as measured by the split-half method 4,28,29. Other recent studies [29][30][31] support the inventory's psychometric qualities and linguistic adaptability.
Devised in 1965, the Rosenberg Self-Esteem Scale is a 10-item scale that assesses the degree to which people are satisfied with and accept themselves 32.
The original version obtained a test-retest reliability of between 0.82 and 0.88, and an internal consistency across various samples of 0.77 to 0.88. A number of studies support the appropriateness of its psychometric characteristics in other languages 33,34. It has also shown an internal consistency of 0.87 and satisfactory test-retest reliability (0.72 over two months and 0.74 over a year) 34. As for the instrument's validity, the authors report a correlation between the Rosenberg Self-Esteem Scale and the interpersonal sensitivity dimension of the Symptom Checklist-90-Revised (SCL-90-R).
Procedure and design
The study was conducted between 2008 and 2009.The tests were administered to both groups in the course of the first year, and in the second year the cases of violence in the experimental group were confirmed by the definitive legal sentences (after all appeals had been exhausted).
The control group was composed of legal psychology students from the University of Barcelona. On the second day of class they were asked to participate "in research related to forensic practice in legal psychology". After the tests were administered, the students were informed of the study's nature. They were asked to consent to the anonymous use of their data for research purposes, handled in a manner consistent with the guidelines of the World Health Organization and the Declaration of Helsinki, and were also given the opportunity to withdraw from the study if they wished to do so. Three subjects declined participation at the beginning, with no further drop-outs. Five participants were eliminated due to errors in the completion of the medical records on either the first or second test administration.
Each member of the control group answered each of the three tests (BDI, STAI and Rosenberg) on two occasions: first, under the standard condition (honest answers), and second, under the malingering condition (instructed to feign their answers). The order of administration was counterbalanced.
The instructions they were given are the following:
"Imagine you can obtain great benefits by giving false answers in the test and suggesting that you have been a victim of intimate partner violence"
Subjects had no free time between the first and second administration, nor were allowed contact with the other participants.No practice time was allowed assuming that participants had previous knowledge of the contents evaluated from previously conducted studies.The different tests were administered and corrected following the standard application procedures suggested by their authors.
The counterbalance technique was applied in order to neutralize the possible progressive error effect (the sum of positive and negative effects of the first tests on the later ones) generated in those cases in which each subject undergoes all the experimental conditions.To avoid these progressive error effects, each subject or group of subjects underwent the treatments in a different order.
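A minimal sketch, not the study's actual assignment procedure, of how a counterbalanced order could be allocated: participants alternate between the two possible administration orders so that any progressive (order) effect is balanced across the honest and malingering conditions. The participant IDs are hypothetical.

```python
# Two possible administration orders for the repeated-measures design.
ORDERS = [("honest", "malingered"), ("malingered", "honest")]

def assign_orders(participant_ids):
    """Return a dict mapping each participant to a counterbalanced order."""
    return {pid: ORDERS[i % 2] for i, pid in enumerate(participant_ids)}

if __name__ == "__main__":
    assignment = assign_orders([f"S{n:02d}" for n in range(1, 7)])
    for pid, order in assignment.items():
        print(pid, "->", " then ".join(order))
```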
In the forensic experimental group, the tests were administered by the forensic teams at the Girona Criminal Courts while the criminal hearings were in progress. Subjects were asked to give their consent for the results of the assessment to be used for research purposes, with their right to privacy respected and their data handled in a manner consistent with the guidelines of the World Health Organization and the Declaration of Helsinki. No records were rejected or eliminated on these grounds. As an external validity criterion, the only cases chosen were those in which, after the psychological assessment, a conviction was obtained that was not based exclusively or mainly on the proof of psychological damage as the prosecution's evidence, but in which such proof was referred to only as evidence for civil liability.
As a result, 38 subjects were eliminated during the first and second stages of the proceedings because psychological damage was stated as the prosecution's proof.A further 20 subjects were eliminated after the court made a complete or partial judgement in favour of the defence, and three more because the records were not completed properly.
In those cases where further forensic assessment was required, it was carried out after the three tests of our study had been administered.
Version 15 of SPSS was used to analyse the data obtained. The descriptive results of each group on each of the administered tests are given first.
To check the normality of the variables, the Shapiro-Wilk test was performed. As the data were normally distributed, Student's t-test for repeated measures was used to compare the real and feigned responses of the control group, and Student's t-test for independent samples was used to compare the control group and the experimental group.
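A minimal sketch of the analysis pipeline described above, using SciPy in place of SPSS 15. The score vectors are illustrative placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical BDI-like scores for eight control participants and eight victims.
control_real       = np.array([4, 6, 3, 8, 5, 7, 2, 6])      # honest condition
control_malingered = np.array([41, 47, 39, 52, 44, 48, 38, 46])
forensic           = np.array([35, 40, 29, 44, 37, 33, 41, 36])

# Normality check before choosing parametric tests.
for name, sample in [("real", control_real), ("malingered", control_malingered)]:
    w, p = stats.shapiro(sample)
    print(f"Shapiro-Wilk ({name}): W = {w:.3f}, p = {p:.3f}")

# Repeated-measures comparison within the control group (paired t-test).
t_rel, p_rel = stats.ttest_rel(control_real, control_malingered)
print(f"Paired t-test, real vs malingered: t = {t_rel:.2f}, p = {p_rel:.4f}")

# Independent-samples comparison between control (malingered) and forensic groups.
t_ind, p_ind = stats.ttest_ind(control_malingered, forensic)
print(f"Independent t-test, malingered vs forensic: t = {t_ind:.2f}, p = {p_ind:.4f}")
```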
Descriptive details of the forensic experimental group
On the BDI, the forensic sample obtained a mean score of 36.9 (range = 19-56; SD = 8.29), equivalent to severe depressive symptomatology.
The mean score on the Rosenberg Self-Esteem Scale was 18.30 (range = 8-25; SD = 5.02), a level considered discriminant, below the cut-off point of 25 and therefore indicating low self-esteem (Table 1).
Comparison between real and malingered situations in the control group
The scale for measuring depression, the BDI, revealed clear differences between the situations: in the real situation subjects showed no depressive symptoms, but in the malingered situation they presented extreme depression.The statistical tests applied show significant differences between situations (Table 2).
There was a very small difference between the state anxiety scores (STAI-S), which was not statistically significant. The state anxiety level in both conditions was situated in the 65th percentile (medium-high).
As far as trait anxiety (STAI-T) is concerned, when the subjects responded truthfully they were in the 55th percentile (normal), whereas in the feigned situation they moved to the 65th percentile (medium-high anxiety). The difference was not statistically significant (Table 2).
Finally, self-esteem based on true responses was normal, but low when the responses were malingered.The difference between the two situations was significant (Table 2).
Comparison between the control group and the forensic experimental group
Comparing the forensic group with the control group in the real condition, the BDI showed a difference in mean scores of 31.87 points.In the malingered condition the difference was also significant, but the mean score in the control group was 7.54 points higher than in the battered women.In other words, the malingerers exaggerated to such extent that they exceeded the level of severe depression and reached extreme depression.The Student's t-tests confirmed the statistical significance of these differences.
The results for state anxiety were inconsistent.The control group's scores were significantly lower than those of the forensic group when they gave true responses: (25.72 vs. 42.47).However, the difference remained when they were asked to feign their responses, and their mean STAI-S scores were still lower than those of the forensic group (25.25 vs. 42.47).The differences were again significant.
Regarding trait anxiety, however, no significant differences were obtained in any of the comparisons. The control group's mean score in the real situation was 26.50, which was very similar to that of the forensic group (28.23), and the difference was not statistically significant. Nor was there a large increase when the control group malingered its responses: the mean score of 29 did not differ significantly from that of the forensic group.
The scores on the self-esteem scale presented significant differences between the forensic and control groups in the real situation: 18.30 vs. 31.50, respectively. However, in the malingered situation, the control scores were similar to those of the forensic group (18.06 vs. 18.30) and the difference was not statistically significant (Table 3).
Discussion
Although the presence of gender-based violence often induces clinical symptomatology, the assessment of any of the constructs that may be studied should be standardized to ensure that the forensic conclusions are correct and that the clinical treatment is a suitable one 35,36 .Generally, one would expect forensic subjects of domestic violence to obtain higher scores than normal subjects on pathology tests 37 .Indeed, our study shows that, when responding truthfully, the normal population showed fewer symptoms.
When subjects were asked to malinger pathology associated with intimate partner violence, we expected that the forensic subjects, who knew the aim of the assessment, would have higher scores than the normal population. This study shows how, even though unfamiliar with the tests administered, subjects were able to falsify scores in order to feign depression and loss of self-esteem, although not in the case of trait anxiety.
Over half the items in the BDI refer to verbal or cognitive attitudes and symptoms, while only a small percentage refer to the affective traits of depression.For this reason some authors recommend that it should not be used by itself to assess depression 38 .Our results show that this is one of the tests with the highest scores in the malingering condition, as it can easily be manipulated in cases of IPV.However, when subjects malingered their responses, the obtained scores were significantly higher than expected.
This may be partly due to the fact that the test was not designed to diagnose depression (understood from a nosological perspective) and because an assessment of depression should be made with different instruments applied concurrently and the diagnosis made via a clinical interview.
The study also showed that the Rosenberg Self-Esteem Scale was unable to detect when people were malingering the symptoms of IPV victims.This test, therefore, does not distinguish between real symptomatology induced by criminal victimization and symptomatology intentionally manipulated in order to obtain an advantage in the courts.The Rosenberg scale is the easiest one to manipulate: the feigned responses obtained identical scores to those of the clinical sample.In the case of the BDI scale, feigned responses can be detected because the subjects obtained extreme scores in the simulation condition, while the clinical sample obtained only moderate scores.
The STAI scale represents symptoms which may be more difficult for the general population to malinger in cases of intimate partner violence. For trait anxiety, both the truthful control and forensic group scores were high, and the control group's score did not increase when responses were malingered. However, for state anxiety, the control group's scores were low and showed no change when the responses were malingered.
To sum up, the results show that the psychological sequelae shown by subjects malingering gender-based violence include depressive symptoms, low self-esteem, and average levels of trait and state anxiety.In contrast, the forensic sample presented severe depression, extremely high state anxiety, average trait anxiety and very low self-esteem.
Although all the tests used in the study are considered valid and reliable in a clinical context, it is precisely because the items clearly define the construct to be assessed that they are easier to manipulate, and this reduces their validity and reliability in a forensic context.
As mentioned above, the psychological motivations when responding to assessment instruments vary depending on whether the assessment is for legal or clinical purposes. This difference has a significant effect on responses to the tests and therefore makes them less useful in a legal context.
The results of our study show that special attention is required when clinical tests are used for the forensic assessment of intimate partner violence victims.A more detailed analysis of the extent to which the tests designed for use in clinical practice can be manipulated when used in a forensic context to assess the sequelae brought on by intimate partner violence is needed.
The main limitations of our study should be overcome in future studies, especially those regarding the control group, since legal psychology students have more knowledge about mental disorders in clinical and forensic settings and therefore might be able to malinger better than the general population. Furthermore, the size of the sample should be increased and the results should be generalized to other cultural groups. Finally, tests that contain control items should also be administered in future studies in order to assess malingering with greater precision.
a) Real situation. The mean scores obtained on the three tests by the control group subjects were within the normal range. The mean BDI score was 5.03 (range = 0-23; SD = 5.27), indicating the absence of depressive symptomatology. The mean STAI-S score was 25.72 (range = 9-39; SD = 5.95), indicating a medium-high level of state anxiety. The mean STAI-T score of 26.5 (range = 17-37; SD = 5.05) indicates normality. Finally, the mean score of 31.50 (range = 23-40; SD = 4.86) on the Rosenberg Self-Esteem Scale was also considered normal.
b) Malingered situation. The scores obtained when the subjects feigned their responses were generally higher and indicated the presence of disorder, except in the case of the STAI-S score, which remained at a medium-high level. The BDI score was 44.44 (range = 27-59; SD = 8.44), indicating extreme depression. The mean STAI-S score was 25.25 (range = 16-38; SD = 5.20), which indicates a medium-high level of state anxiety. The mean STAI-T score of 29.0 (range = 11-40; SD = 6.69) indicates medium-high anxiety. Finally, the mean score of 18.06 (range = 10-26; SD = 4.04) on the Rosenberg Self-Esteem Scale denoted low self-esteem.
Table 2. Control group: comparison between malingered and real conditions.
Table 3. Comparison of averages between the forensic group and the control group (real and malingered conditions).
Enterococcus faecalis rnc gene modulates its susceptibility to disinfection agents: a novel approach against biofilm
Background Enterococcus faecalis (E. faecalis) plays an important role in the failure of root canal treatment and refractory periapical periodontitis. As an important virulence factor of E. faecalis, extracellular polysaccharide (EPS) serves as a matrix to wrap bacteria and form biofilms. The homologous rnc gene, encoding Ribonuclease III, has been reported as a regulator of EPS synthesis. In order to develop novel anti-biofilm targets, we investigated the effects of the rnc gene on the biological characteristics of E. faecalis, and compared the biofilm tolerance towards the typical root canal irrigation agents and traditional Chinese medicine fluid Pudilan. Methods E. faecalis rnc gene overexpression (rnc+) and low-expression (rnc−) strains were constructed. The growth curves of E. faecalis ATCC29212, rnc+, and rnc− strains were obtained to study the regulatory effect of the rnc gene on E. faecalis. Scanning electron microscopy (SEM), confocal laser scanning microscopy (CLSM), and crystal violet staining assays were performed to evaluate the morphology and composition of E. faecalis biofilms. Furthermore, the wild-type and mutant biofilms were treated with 5% sodium hypochlorite (NaOCl), 2% chlorhexidine (CHX), and Pudilan. The residual viabilities of E. faecalis biofilms were evaluated using crystal violet staining and colony counting assays. Results The results demonstrated that the rnc gene could promote bacterial growth and EPS synthesis, causing the EPS-barren biofilm morphology and low EPS/bacteria ratio. Both the rnc+ and rnc− biofilms showed increased susceptibility to the root canal irrigation agents. The 5% NaOCl group showed the highest biofilm removing effect followed by Pudilan and 2% CHX. The colony counting results showed almost complete removal of bacteria in the 5% NaOCl, 2% CHX, and Chinese medicine agents’ groups. Conclusions This study concluded that the rnc gene could positively regulate bacterial proliferation, EPS synthesis, and biofilm formation in E. faecalis. The rnc mutation caused an increase in the disinfectant sensitivity of biofilm, indicating a potential anti-biofilm target. In addition, Pudilan exhibited an excellent ability to remove E. faecalis biofilm. Supplementary Information The online version contains supplementary material available at 10.1186/s12903-022-02462-1.
Recently, E. faecalis has been paid more attention due to its dominant role in the formation of extra radicular biofilm and periapical lesions [6]. According to the study by Barbosa-Ribeiro et al., E. faecalis was the most abundant bacteria in the teeth with endodontic treatment failure and was also associated with the periapical lesions of over 3-mm size [7]. It colonizes the biofilms, invades the dentinal tubules, and resists nutritional deprivation, thereby causing therapeutic failure and heavy economic burdens [4,8].
E. faecalis is a Gram-positive coccus, which is homologous with the dental caries pathogen Streptococcus mutans (S. mutans). The formation of biofilms results in the adhesion and aggregation of bacteria cells as well as increased resistance to root canal irrigants. The VicRK two-component signal transduction system is a key regulator in the synthesis of exopolysaccharide (EPS) in S. mutans. A previous study reported that the rnc gene, encoding ribonuclease III (RNase III), could promote the EPS synthesis and alter the morphology of biofilm [9]. However, the rnc gene function has rarely been detected in E. faecalis. Our previous study showed that rnc could repress vicRKX expressions at the post-transcriptional level via microRNA-size small RNAs (msRNAs) [10]. The WalRK signal transduction system in E. faecalis, which is homologous to VicRK, could also regulate EPS synthesis. It was reported that inhibiting the biofilm formation-related gene walR could reduce EPS synthesis and enhance the susceptibility of E. faecalis biofilms to chlorhexidine (CHX) [11]. Therefore, regulating the metabolism of biofilms might be a feasible way for eliminating E. faecalis biofilm infections. Due to the homology of the rnc gene in E. faecalis with that in S. mutans, it was speculated that the rnc gene could regulate the morphology of biofilms by promoting the EPS synthesis in E. faecalis.
Sodium hypochlorite (NaOCl) has been widely used in the irrigation of root canal due to its excellent antibacterial properties and ability to remove organic components and tissue remnants [12]. However, it has also raised concerns due to its cytotoxic effects on the periapical and pulp tissues [13]. CHX has also been widely used for the irrigation of root canal due to its excellent antibacterial activity. However, it is unable to dissolve the tissue remnants, which restricts its applications as a standard irrigation agent [14]. The current irrigation agents cannot be considered an ideal choice individually. Therefore, exploring new irrigation agents is needed.
Traditional Chinese medicine (TCM) has a history of thousands of years. These natural medicines are increasingly applied for the treatment of oral diseases. Pudilan is a TCM fluid, which has anti-inflammatory and antibacterial effects. It is made up of the extracts of multiple cold and calm herbs, including Scutellaria baicalensis root, Taraxacum mongolicum, Bunge corydalis herb, and Isatis indigotica [15]. Its anti-inflammatory effects have been confirmed in several classic inflammatory models [16]. It has also been applied to treat oral diseases, such as mild recurrent aphthous ulcers and chronic gingivitis [17,18]. The active ingredient in Pudilan has been shown to inhibit the production of various inflammatory factors, such as the periodontitis target IL-1β [19,20]. Nevertheless, the antibacterial effects of Pudilan on E. faecalis or periapical periodontitis have not been investigated yet. Therefore, this study aimed to explore potential targets for the disinfection of E. faecalis biofilms and to explore alternative clinical drugs. The main objectives of this study were as follows: (1) to construct and verify the rnc overexpression and low-expression mutant strains of E. faecalis; (2) to detect the regulatory effect of rnc on biofilm morphology and EPS production; and (3) to evaluate the rnc-modulated susceptibility of E. faecalis biofilms to root canal irrigation agents and Pudilan.
Strains and culture conditions
Enterococcus faecalis ATCC 29212 strain was provided by the State Key Laboratory of Oral Diseases (China) and stored at − 80 °C. The rnc gene sequence was acquired from NCBI (Gene ID: 60892348). The rnc overexpression recombinant plasmid was designed and synthesized by adding promoters upstream of the rnc gene and cloning them into a spectinomycin-resistant shuttle vector pDL278. The recombinant plasmids were transformed into the E. faecalis ATCC 29212 strain through the chemical transformation method using 1 μg/mL competence-stimulating peptides (CSP) and the rnc overexpression mutant strain (rnc+) was established [10]. In order to establish the rnc low-expression mutant strains (rnc−), the reverse complementary sequences of rnc were designed and introduced into the pDL278 vector with promoter sequences [21][22][23]. Then, the plasmids were transformed into E. faecalis ATCC 29212 strain similar to that of rnc+ strains. The strains were cultured in brain heart infusion (BHI) broth (Difco, Detroit. MI. USA) at 37 °C under anaerobic conditions (80% N 2 , 10% H 2 , 10% CO 2 ). Spectinomycin was added to BHI plates with a concentration of 1 mg/mL to select the rnc+ and rnc− strains as needed.
Growth curve measurement
A single colony of each of the three strains was inoculated into the BHI medium and incubated in anaerobic conditions overnight (14-16 h). Then, the cultures were diluted to 1:20 with BHI medium and grown under anaerobic conditions for 2.5-3 h until the cells reached the mid-log phase (OD 600nm = 0.3-0.5) with constant turbidity in each group. The bacterial suspensions were transferred into sterile 96-well microtiter plates at a dilution of 1:100 and covered with sterile mineral oil in each well. Then, the growth of the strains was recorded using a monitoring system (BioTek, USA) for 24 h. Six biological replicates were used for each group in this study.
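A minimal sketch, not the monitoring system's own software, of how the 24-h OD600 readings could be summarised per strain: the time needed to reach a mid-log threshold and the mean OD at the stationary phase. The threshold, the stationary-phase cut-off and the readings themselves are hypothetical examples.

```python
import numpy as np

def summarise_growth(times_h, od600, midlog_od=0.4, stationary_from_h=18.0):
    """Return (time to reach the mid-log OD threshold, mean stationary-phase OD)."""
    times_h, od600 = np.asarray(times_h), np.asarray(od600)
    above = np.nonzero(od600 >= midlog_od)[0]
    t_midlog = times_h[above[0]] if above.size else None
    stationary_mean = od600[times_h >= stationary_from_h].mean()
    return t_midlog, stationary_mean

times = np.arange(0, 24.5, 0.5)
# Synthetic logistic-like curve standing in for one well of the 96-well plate.
od = 0.74 / (1.0 + np.exp(-(times - 6.0)))
t_mid, od_stat = summarise_growth(times, od)
print(f"time to mid-log ~ {t_mid} h, stationary-phase mean OD600 ~ {od_stat:.3f}")
```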
Biofilm structure imaging and analysis
Scanning electron microscopy (SEM) was used to detect the structures of E. faecalis biofilms. The E. faecalis ATCC 29212 parent and mutant strains in their midlog phases (OD 600nm = 0.3-0.5) were diluted to 1:100 with the BHI medium supplemented with 1% sucrose (BHIS). Bacterial suspensions were then transferred into a 12-well plate (2 mL per well), containing a round glass slide (14 mm in diameter). After 24 h of incubation, the biofilms were gently washed using phosphate buffered saline (PBS), and 2 mL of 2.5% glutaraldehyde was added to each well. The samples were then stored at 4 °C overnight. The biofilm samples of each group were serially dehydrated with 30%, 50%, 75%, 85%, 95%, 99% ethanol (v/v) for 15 min each time. There were three biological replicates for each group, which were examined at 1000×, 5000×, and 20,000× magnifications using SEM (Inspect Hillsboro, OR, USA).
Confocal laser scanning microscopy (CLSM) was performed to acquire fluorescence images and to determine the EPS/bacteria composition of E. faecalis biofilms. EPS was stained with Alexa Fluor® 647 (Invitrogen, Eugene, OR, USA), and bacterial cells were stained with Syto 9 Nucleic Acid Stain (Invitrogen, Eugene, OR, USA). A CLSM (Olympus, Japan) was used to observe the fluorescence images under a 20× objective lens. There were three biological replicates for each group, each observed in three random fields. The three-dimensional biofilm images were reconstructed and the EPS/bacteria ratio was analyzed using Imaris 7.0.0 software (Bitplane, Zurich, Switzerland).
Crystal violet assay
The crystal violet assay was performed to quantitatively analyze the EPS matrix of the biofilms. The biofilms of E. faecalis ATCC 29212 parent and mutant strains were incubated for 24 h at 37 °C under anaerobic conditions. After gently washing out the planktonic cells twice using PBS, 200 μL of 0.01% crystal violet (v/v) was added to each sample at room temperature for 10 min. After the careful removal of residual dye with running water, 33% acetic acid (v/v) was used to elute the crystal violet at 37 °C and 150 rpm for 5 min. The OD575 values of the eluents were recorded. In order to evaluate the ability of the drugs to remove E. faecalis biofilm EPS, 1 mL of 5% NaOCl (v/v), 2% CHX (w/v), Pudilan (Pudilan Keyanning antibacterial mouthwash, China), or PBS was added to the biofilm samples and incubated for 10 min. Pudilan Keyanning antibacterial mouthwash is a product that mainly contains extracts of the herbs in the Pudilan formula; therefore, we selected it to represent Pudilan and assessed its antibiofilm effect. The drugs were then gently washed off using PBS, and the crystal violet assay procedures were the same as described above.
Detection of gene expression level
Total RNA was extracted from mid-log phase bacteria using a MasterPure™ RNA Purification Kit (Epicentre) following the manufacturer's instructions. A NanoDrop™ 2000c spectrophotometer (Thermo Scientific, USA) was used to measure the concentration and purity of the total extracted RNA. The PrimeScript™ RT reagent Kit with gDNA Eraser (Perfect Real Time) (Takara, Japan) was used for the removal of genomic DNA and reverse transcription of RNA to cDNA. Quantitative real-time PCR (RT-qPCR) was performed using a LightCycler 480 (Roche, Switzerland). TB Green® Premix Ex Taq™ (Tli RNaseH Plus) (Takara, Japan) was used according to the manual. The reaction process was as follows: 95 °C for 30 s in the holding stage; then 40 cycles of 95 °C for 5 s and 60 °C for 30 s in the cycling stage; followed by the melt curve stage and cool-down. The reactions were carried out in triplicate. 16S rRNA was used as an internal standard, and the relative expression level of the rnc gene was quantified using the 2^(−ΔΔCT) method. The RT-qPCR primer sequences are listed in Table 1.
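A minimal worked example of the 2^(−ΔΔCT) calculation referred to above. The Ct values are hypothetical, with 16S rRNA as the internal reference and the wild-type sample as the calibrator.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of the target gene versus the calibrator (wild-type) sample."""
    delta_ct_sample = ct_target - ct_ref              # normalise sample to 16S rRNA
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalise calibrator to 16S rRNA
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: an rnc+ sample whose rnc Ct (relative to 16S) is ~5.3 cycles earlier
# than in the wild-type calibrator corresponds to roughly a 40-fold increase.
fold = relative_expression(ct_target=18.2, ct_ref=10.0,
                           ct_target_ctrl=23.5, ct_ref_ctrl=10.0)
print(f"relative rnc expression ~ {fold:.1f}-fold")
```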
Antibacterial assays
The 24-h E. faecalis ATCC 29212 parent and mutant biofilm samples were prepared and the planktonic bacteria were removed using PBS. Each group of biofilms was incubated with 1 mL of 5% NaOCl, 2% CHX, Pudilan, or PBS for 10 min, after which the drugs were gently washed off. PBS (1 mL) was added to each sample to form a uniform bacterial suspension. The suspension was then diluted with PBS to different concentrations according to the antibiofilm ability of the drugs: to 10^-2 of the original concentration in the 5% NaOCl group, 10^-3 in the 2% CHX and Pudilan groups, and 10^-5 in the PBS group. After mixing, 10 μL of the diluted bacterial suspension was dropped onto a BHI plate [24].
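A minimal sketch of how viable counts (CFU/mL) could be back-calculated from the plated 10 µL drops, given the dilution used for each treatment group. The colony counts below are hypothetical examples, not the study's data.

```python
PLATED_VOLUME_ML = 0.010  # 10 µL drop plated on the BHI plate

def cfu_per_ml(colonies, dilution):
    """dilution is the fraction of the original suspension, e.g. 1e-3."""
    return colonies / (dilution * PLATED_VOLUME_ML)

samples = {
    "5% NaOCl (diluted to 1e-2)": (3, 1e-2),
    "2% CHX (diluted to 1e-3)":   (25, 1e-3),
    "Pudilan (diluted to 1e-3)":  (18, 1e-3),
    "PBS (diluted to 1e-5)":      (40, 1e-5),
}
for label, (colonies, dilution) in samples.items():
    print(f"{label}: ~{cfu_per_ml(colonies, dilution):.2e} CFU/mL")
```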
Statistical analyses
Data analyses were performed using SPSS 26.0 (SPSS Inc., Chicago, IL, USA). One-way ANOVA was used to identify the significance of the variables' effects. The Shapiro-Wilk test was applied and verified that the data were normally distributed. Fisher's least significant difference test was performed to compare the means of each group. Two-way ANOVA was applied to assess differences between the growth curves [25]. A P value < 0.05 was considered statistically significant.
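A minimal sketch of the one-way ANOVA comparison described above, using SciPy rather than SPSS 26.0. The OD575 readings stand in for the three strains' crystal violet values and are placeholders, not the study's data.

```python
from scipy import stats

wildtype  = [0.82, 0.79, 0.85, 0.80]
rnc_plus  = [1.15, 1.21, 1.18, 1.24]
rnc_minus = [0.41, 0.38, 0.44, 0.40]

# Normality check per group, then the omnibus one-way ANOVA.
for name, grp in [("ATCC29212", wildtype), ("rnc+", rnc_plus), ("rnc-", rnc_minus)]:
    w, p = stats.shapiro(grp)
    print(f"Shapiro-Wilk {name}: p = {p:.3f}")

f_stat, p_val = stats.f_oneway(wildtype, rnc_plus, rnc_minus)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

# Pairwise follow-up (Fisher's LSD amounts to unprotected pairwise t-tests after
# a significant omnibus ANOVA); shown here for a single pair only.
t, p = stats.ttest_ind(wildtype, rnc_plus)
print(f"ATCC29212 vs rnc+: t = {t:.2f}, p = {p:.4g}")
```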
Down-regulation of the rnc gene inhibited bacterial growth and EPS synthesis
We tried different methods to reduce the rnc expression level. Firstly, polymerase chain reaction ligation mutagenesis [26] was used to construct rnc deletion mutants, without success: no colonies grew on the antibiotic selection plate. The rnc gene therefore seems to be essential for E. faecalis ATCC29212 viability. We then introduced plasmids carrying rnc antisense sequences into E. faecalis ATCC29212. This method can effectively hinder the expression of rnc through base pairing, forming an rnc-antisense rnc duplex structure. Similarly, the rnc+ mutant strain was made by introducing plasmids carrying rnc sequences [22]. The expression level of the rnc gene was verified using RT-qPCR (Fig. 1A). The results showed that, compared to the E. faecalis ATCC 29212 wildtype, the rnc expression level of the rnc+ strain increased by 40.13 times, while that of the rnc− strain decreased to 0.22 times. This confirmed the successful construction of the rnc+ and rnc− mutant strains. The growth curves of the rnc+ and wildtype E. faecalis ATCC 29212 strains were similar; both reached the mid-log growth phase at nearly the same time (Fig. 1C). However, the rnc− strain showed a slower growth rate under the same culture conditions, indicating a weaker proliferation capability. The rnc− strain took longer to reach the mid-log growth phase and presented a lower OD600 value at the stationary phase. The average OD values of ATCC29212, rnc+ and rnc− were 0.737, 0.749 and 0.684, respectively. Statistical tests found a significant difference between the rnc− strain and the other two strains.
Crystal violet assays were performed to determine the differences in the total amount of EPS synthesis in the 24-h biofilms of wildtype, rnc+, and rnc− E. faecalis ATCC 29212 strains. As shown in Fig. 1B, the rnc+ strain showed significantly higher EPS productions as compared to the wildtype strain, while the rnc− strain showed significantly lower EPS contents (both P < 0.0001).
The morphology of the biofilms was evaluated using SEM (Fig. 1D). As compared to the wild-type strain, the rnc+ biofilm presented a more abundant extracellular matrix; under 20,000× magnification, the biofilm looked uneven and the bacterial cells aggregated through the extracellular matrix. In contrast, the biofilm of the rnc− strain contained a sparse matrix with fewer cracks on the surface. Under 20,000× magnification, the rnc− strain showed a loose combination between the matrix and the bacterial cells.
The microscopic morphologies of the wildtype, rnc+, and rnc− strains were consistent with their appearance to the naked eye. While preparing the samples, the rnc+ biofilms were found to be firmly attached to the glass slide and more resistant to the impact of water, while the rnc− biofilms were fragile. CLSM showed that both the EPS and the bacteria accumulated thickly in the rnc+ biofilm, while those in the rnc− biofilm showed decreased production and were scattered and unevenly distributed (Fig. 2A). Furthermore, the EPS/bacteria ratio in the rnc+ biofilm was higher than that of the wildtype strain (P < 0.05), while that of the rnc− strain was the lowest (P < 0.05) (Fig. 2B). Overall, the results consistently revealed that the rnc gene could positively regulate bacterial growth and biofilm formation in E. faecalis.
Biofilms of rnc mutant strains showed an increased sensitivity to disinfectants
In order to compare the sensitivities of the E. faecalis ATCC 29212 wildtype and rnc mutant strains to the different antibacterial agents, crystal violet assays were performed to quantify the EPS residues in the biofilms after treatment with the respective agents. 5% NaOCl was set as a positive control. After incubating for 10 min with 5% NaOCl, all three biofilms were almost eliminated, with no significant differences (Fig. 3A). Interestingly, the rnc+ and rnc− groups showed lower EPS residues as compared to the wildtype strain after treatment with 2% CHX, Pudilan, and PBS, suggesting that the biofilms of the rnc mutant strains were more sensitive to these antibacterial agents (Fig. 3B). In particular, Pudilan showed better anti-biofilm activity than 2% CHX. Furthermore, the numbers of viable bacteria in the wildtype and rnc mutant biofilms were compared after treatment with the different drugs (Fig. 4). Due to the different antibiofilm abilities of the agents, the bacterial suspension was diluted to 10^-2 in the 5% NaOCl group, 10^-3 in the 2% CHX and Pudilan groups, and 10^-5 in the PBS group. As a positive control, 5% NaOCl showed the strongest antibiofilm ability towards the three strains, with no significant difference among the ATCC29212, rnc+ and rnc− groups. In the 2% CHX and Pudilan groups, there were significantly more colonies of ATCC29212 than of rnc+ and rnc−. The PBS-treated groups showed similar colony numbers.
Discussion
The pathogenic biofilms of E. faecalis are closely associated with periapical periodontitis. The removal of E. faecalis biofilms is crucial for avoiding the failure of root canal treatments. Due to the complexity of the root canal system [27], chemical irrigation agents are expected to disinfect the root canal, especially where mechanical preparation cannot reach. The characteristics of the root canal irrigation agents determine their effects. As irrigation agents, 5% NaOCl and 2% CHX are effective and widely used. However, due to its irritation of the periapical tissue [28], 5% NaOCl should be applied with caution in the clinic. CHX has also shown some side effects, such as irritation of the oral mucosa, a burning sensation, and alteration of taste perception [29]. Therefore, the development of alternative drugs and the improvement of bacterial susceptibility have been continuously sought. Pudilan is a commercial TCM made up of herbal extracts. Pudilan Keyanning mouthwash products contain Pudilan extract and 0.03%-0.06% cetylpyridinium chloride (CPC). CPC at this concentration (0.03-0.06%) has been reported to have little antibacterial effect, weaker than 0.12% CHX [15]. In the current study, Pudilan exhibited a stronger EPS-removing effect than 2% CHX and was more effective on the rnc+ and rnc− strains. In the colony number assay, Pudilan and 2% CHX also showed a stronger antibacterial effect on the rnc+ and rnc− strains than on ATCC29212. Overall, Pudilan preliminarily showed an excellent anti-biofilm effect on E. faecalis but still needs further investigation.
Fig. 2 The rnc gene altered the EPS/bacteria biomass ratio of biofilms. A Biofilm morphology observed by CLSM; double fluorescent labels marked bacteria (green, SYTO 9) and EPS (red, Alexa Fluor 647), respectively; scale bar = 50 μm. B The EPS/bacteria biomass ratio of E. faecalis ATCC29212, rnc+ and rnc−; three-dimensional reconstruction of the biofilms and quantitative data were performed with Imaris 7.0.0 (*P < 0.05).
Similar to E. faecalis, another Gram-positive coccus bacteria S. mutans, relies on the formation of stable biofilm to acquire resistance and cause virulence in the mouth. Preliminary studies have shown that the rnc deletion mutation could repress S. mutans cariogenicity in rat models, and the weakness of biofilm was attributed to the reduced EPS production and bacterial adhesion [10]. As rnc is a highly conserved gene, we proposed rnc as a target to eliminate the E. faecalis biofilms. The results demonstrated its excellent regulatory effects on biofilm metabolism and drug sensitivity. The rnc overexpression strain rnc+ and rnc low-expression strain rnc− were established and the effects of its rnc expression level on the growth status of E. faecalis were observed. The rnc+ strain showed a normal growth rate and formed a thriving biofilm, while the rnc− strain showed a delayed growth rate and fragile biofilm. These results revealed that the rnc gene could positively regulate the bacterial growth and formation of biofilm in E. faecalis, which was consistent with our hypothesis. The rnc gene encodes RNase III, which regulates gene expression at the posttranscription level [30]. Therefore, the rnc expression level might have a profound impact on the phenotypes of bacteria and biofilms. On the other hand, antisense walR as a post-transcriptional modulator, has been proven to regulate bacterial growth, virulence and EPS synthesis and aggregation. This is a successful precedent for posttranscriptional level regulation as an anti-biofilm target in E. faecalis [31].
In order to evaluate the drug resistance of the three strains, the EPS residues and colony numbers in biofilms after incubation with different drugs were tested. As expected, the 5% NaOCl group showed the lowest OD575 absorbance, followed by the PBS, Pudilan, and 2% CHX groups. The increased OD in the Pudilan and 2% CHX groups compared to the PBS group might be due to increased light absorbance from biofilm pigmentation caused by the color of these agents. After treatment, the EPS residues in the biofilms of the rnc mutant strains were less than those of the wild-type strain. It was concluded that the rnc− strain showed weakened drug resistance due to its thin and barren extracellular matrix. Interestingly, the thick rnc+ biofilm also showed increased sensitivity to the antibacterial drugs. The SEM and CLSM observations of the thick and uneven rnc+ biofilm suggested that this unevenness allowed the antibacterial drugs to penetrate, thereby exerting their antibacterial effects. The rnc interference strategy not only reduced the EPS metabolism of E. faecalis biofilms but also made the biofilms more fragile, resulting in increased drug susceptibility. In order to comprehensively understand the regulatory effects of rnc on biofilm formation and their mechanisms, further studies of related genes, including epaI/epaOX [32], gelE, and esp [33], are required. Regulating the rnc gene, rather than using antibiotics and driving the development of resistance, might be more in line with ecological regulation.
However, there are some limitations in this study. First, the biofilm models used in the experiment may not fully reflect the state of the disease. This was an in vitro experiment, and the biofilm samples were 24-h early mature biofilms. Ali et al. showed that substrate-conditioning substances and biofilm age can affect the cellular and extracellular matrix components of E. faecalis biofilms [34]. Moreover, we failed to delete the rnc gene from the genomic DNA by either chemical transformation or electroporation, although an rnc deletion mutant has been constructed in E. faecalis V19 [35], a plasmid-cured derivative of the vancomycin-resistant clinical isolate V583 [36]. The differences between the type strain ATCC29212 and the drug-resistant clinical strain V19 may explain the failure to knock out the rnc gene in ATCC29212. Nevertheless, the biofilm phenotype and drug resistance changes of the rnc− strain were obvious enough to judge the trend of the results. Therefore, we used the rnc− strain to observe the regulatory effect of the rnc gene. The rnc− strain was constructed by transforming a shuttle plasmid carrying an rnc antisense RNA sequence. Other possible explanations for the failure to construct an rnc deletion mutant strain are: (1) the exogenous plasmids are abnormally expressed in the bacteria, so the mutant strains cannot survive on a selective medium; (2) the thick capsule of the membrane shuts long-chain DNA out; (3) the transformation methods need further optimization. Although the rnc− strain showed decreased growth, copy number variation might cause genetic and expression instability [37]. In brief, more advanced biofilm models and mutant strains are expected to be used in exploring anti-E. faecalis targets.
Conclusions
In this study, rnc overexpression and low-expression mutant strains of E. faecalis were successfully constructed. The biological features of the rnc mutant strains and their sensitivity towards typical root canal irrigation agents and the TCM fluid Pudilan were evaluated. This study revealed that overexpression of rnc could promote bacterial growth and EPS synthesis, and vice versa. However, an altered rnc expression level could break the balance, forming a vulnerable biofilm. The altered biofilm structure made it more sensitive to the antibacterial agents, allowing for a decrease in antibiotic use and resistance. Taken together, these data suggest the rnc gene as a biofilm regulatory target and provide evidence for the antibacterial potential of Pudilan, offering a novel strategy for the management of root canal system and apical infections.
|
2022-09-21T13:47:31.985Z
|
2022-09-20T00:00:00.000
|
{
"year": 2022,
"sha1": "b956c990ad97b506c63b52e522dea26f65a0fa69",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "f10843ba7c0cb60dd3cafd1e07fffd908fdc6b96",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
235283704
|
pes2o/s2orc
|
v3-fos-license
|
Mass Optimization of Automotive Radial Arm Using FEA for Modal and Static structural Analysis
The radial arm is an important part of an automobile suspension system. A vehicle suspension system having radial arms that are connected to axle brackets through vertically arranged bushings. The radial arms are fabricated using sheet metal members that are assembled in a clam-shell arrangement. An important consideration in the design of radial arm is the natural frequencies and mode shapes for dynamic loading conditions along with stiffness. The present work is focused on mass optimization of radial arm for modal and static structural analysis. The work includes investigation of natural frequencies and mode shapes of the optimized design of radial arm and comparing with the static structural analysis results using Finite element analysis tool. The radial arm was modeled in Catia V5, finite element modeling was done in Altair Hypermesh and analysis was done using Optistruct solver. From the static stiffness analysis, it was observed that the mass of the assembly can be reduced for given loading and boundary conditions by carrying out design modifications to the assembly without changing the manufacturing process. It was found that mass has been reduced to 13% of baseline model and stiffness also increased in all the directions. Modal analysis results indicate that all the natural frequencies are well within the range of baseline model.
Introduction
The radial arms are sheet metal fabricated members that are assembled in a clam shell arrangement. The radial arm is used for supporting the body and dissimilar parts of an automobile. The Fig.1 shows the radial arm assembled in vehicle suspension system. It has to withstand the loads such that the displacement and stresses developed are within permissible limit [1]. The sizing of mesh was chosen according to grid independence test for improved and accurate result. Modal analysis was used to find the natural frequency of the component or assembly. There is some percentage of error in the simulation and analytical methods. To predict the error, stiffness model of the component was built [2]. The present work includes modal and static analysis of radial arm of an automotive suspension system made from steel and also to carry out the mass optimization of the component. In the study, the radial arm of a four-wheeler automobile was considered for a safe design process, modal and static stiffness analysis. Mass optimization was carried out by performing design modifications in radial arm. All the parameters such as mounting position, bolt position, rivet positions are taken care while modifying the radial arm, so that design modification does not alter the position of these parts. Mass reduction helps in improving the efficiency of the overall system. The Radial arm model was created in Catia V5 and imported to Altair Hypermesh for FE modeling and preprocessing was carried out by applying appropriate boundary and loading conditions. The optistruct solver was used for performing the analysis and Heperview was used for post processing. Further, comparison is made between existing steel radial arm and the optimized model of radial arm in terms of natural frequencies, deflection and stiffness. The simulation results and the key features of the design modified radial arm are discussed in the following section.
Material Properties
Table 1 gives the material properties of the radial arm used for the modal and static structural analysis.
Cad model of Baseline Radial Arm
The radial arm model created in Catia V5 is shown in Fig. 2. The baseline model of the radial arm consists of a total of 6 parts, namely the arm, arm rear cover, arm bush, arm stud, bush cover and C bracket, as shown in Fig. 3 along with the materials used and the gauge (thickness) properties. The total mass of the radial arm is 10.13 kg.
FE Model of Baseline Radial arm
The radial arm CAD model was imported into Altair HyperMesh. Figure 4 shows the FE model of the radial arm, where geometry clean-up was performed and the mid surface was extracted for all the components. The elements considered for the static and modal analysis are 2D PSHELL elements, using four-noded quadrilateral and three-noded triangular elements with six degrees of freedom at each node [4]. Welding between the components was modeled with rigid RBE2 elements. While meshing, quality parameters such as warpage, aspect ratio, skew, Jacobian and minimum element size are maintained.
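One of these element-quality checks can be illustrated with a short sketch. The aspect ratio below is computed simply as the longest-to-shortest edge ratio of a quad element; commercial pre-processors such as HyperMesh use their own, more elaborate definitions, so both the threshold value and the example coordinates here are assumptions for illustration only.

```python
import numpy as np

def quad_aspect_ratio(nodes):
    """Aspect ratio of a 4-node quad, taken here as the ratio of the
    longest to the shortest edge length (definitions vary between solvers)."""
    nodes = np.asarray(nodes, dtype=float)      # shape (4, 3): node coordinates
    edges = np.roll(nodes, -1, axis=0) - nodes  # vectors along the 4 edges
    lengths = np.linalg.norm(edges, axis=1)
    return lengths.max() / lengths.min()

# Hypothetical element: a 100 mm x 25 mm rectangle lying in the XY plane.
element = [(0, 0, 0), (100, 0, 0), (100, 25, 0), (0, 25, 0)]
ratio = quad_aspect_ratio(element)
print(f"aspect ratio = {ratio:.1f}")   # 4.0
if ratio > 5.0:                        # 5.0 is a commonly quoted limit, not a value from this paper
    print("element fails the aspect-ratio check")
```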
Modal Analysis of Radial Arm
Modal analysis is performed to determine the natural frequencies and mode shapes. Using modal analysis, it is possible to observe the characteristic behaviour of the structure and the natural frequencies of the component. The rigidity of the component can be analyzed, and by knowing the natural frequencies, resonance can be avoided. The main features of each mode of the structure can be identified through modal analysis, and the actual vibration response in this frequency range can be predicted [5]. The results from modal analysis can be used as reference values for other dynamic analyses such as random analysis, harmonic analysis, etc.
Modal analysis was carried out using free-free (without any constraints) boundary conditions. The first twelve modes were determined; the initial 6 modes are rigid body modes with zero frequency. For the 7th mode, the natural frequency is found to be 185.5 Hz and the maximum displacement of the arm is 3.41 mm at that frequency. Similarly, for the 12th mode, the natural frequency is 1002.2 Hz and the maximum displacement is 2.52 mm. Mode shapes of the baseline model are shown in Figure 5, and the related deformation patterns, corresponding displacements and natural frequencies are given in Table 2.
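As a minimal illustration of what the solver does in a modal run, the sketch below solves the undamped free-vibration eigenproblem and converts the eigenvalues to frequencies in Hz; for an unconstrained (free-free) model the first six eigenvalues come out as (numerically) zero rigid-body modes. The stiffness and mass values used are hypothetical and are not taken from the radial arm model.

```python
import numpy as np
from scipy.linalg import eigh

def natural_frequencies(K, M):
    """Solve K*phi = w^2 * M * phi and return natural frequencies in Hz."""
    eigvals, _ = eigh(K, M)                 # generalized symmetric eigenproblem
    eigvals = np.clip(eigvals, 0.0, None)   # clean tiny negative values from round-off
    return np.sqrt(eigvals) / (2.0 * np.pi)

# Toy 2-DOF example (hypothetical stiffness/mass values):
K = np.array([[ 2.0e6, -1.0e6],
              [-1.0e6,  1.0e6]])   # N/m
M = np.array([[5.0, 0.0],
              [0.0, 5.0]])         # kg
print(natural_frequencies(K, M))   # two frequencies in Hz
```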
Static Structural Analysis of Radial Arm
Static structural analysis was performed to determine the stiffness of the component at the load location for the given loading and boundary conditions [6]. For the analysis, displacements in all directions were fixed at the hinge location, as shown in Figure 6, and at the bush location a force of 1000 N was applied along the X, Y and Z directions, as shown in Figure 7. The displacements along the X, Y and Z directions were obtained from the analysis. Using the displacement values, the stiffness of the component was determined. The analysis was carried out in the OptiStruct solver [7][8]. From the analysis, the displacement and stiffness values for the baseline model were determined and are presented in Table 3.
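The stiffness values reported in Table 3 follow directly from the applied load and the resulting displacement at the loaded point, k = F/δ. A minimal sketch of that post-processing step is given below; the displacement numbers used here are placeholders for illustration, not the values from the paper's tables.

```python
def stiffness(force_N, displacement_mm):
    """Linear static stiffness at the loaded point, k = F / delta (N/mm)."""
    return force_N / displacement_mm

force = 1000.0  # N, applied at the bush location along each axis in turn

# Placeholder displacements (mm), for illustration only.
displacements = {"X": 0.070, "Y": 4.500, "Z": 1.600}

for direction, delta in displacements.items():
    print(f"{direction}: k = {stiffness(force, delta):.0f} N/mm")
```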
Design Modification
Design modification (re-design) is the process of achieving a desired set of specifications that minimizes the critical factors of the model. While modifying the model, the designer must have knowledge of the model and of its behaviour under the given loading conditions [9][10][11].
In this study, the objective is to minimize the displacement of the baseline radial arm model. Therefore, the displacement of a modified design should be less than that of the baseline design in the static analysis. The displacement of a radial arm has different standard values for different types of vehicle frames and for different types of analysis. Still, it is possible to minimize the deflection of the radial arm without increasing the mass of the components [12]. Design modification iterations are carried out by changing the back cover design, varying thicknesses, redesigning the C-bracket and changing the number of components, as presented below.
Iteration 1: The baseline model is modified by changing the gauge properties of two components, and one new component, an arm cup bracket, is added near the stud area. The total number of components in the model is 7.
Iteration 2: The model of the previous iteration is modified near the Arm C bracket, and the gauge property of the Arm C bracket is increased. The total number of components in the model is 7.
Iteration 3: The model of iteration 2 is modified near the Arm C bracket; the entire Arm C bracket is removed and the arm cup bracket is newly designed. The total number of components in the model is 6.
Table 4 shows a comparison of mass, number of nodes and elements in each model. Table 5 shows the design modifications carried out on the baseline model of the radial arm. Modal analysis is carried out for all modified designs and the natural frequencies are determined [13][14]. Natural frequencies of the 7th to 12th modes are listed in Table 6. Natural frequencies and mode shapes of the 7th to 10th modes of the baseline and all modified designs of the radial arm are shown in Figures 8-11: Figure 8 shows the 7th mode of all iterations, Figure 9 the 8th mode, and Figures 10 and 11 the 9th and 10th modes, respectively. From the modal analysis, it is observed that all frequencies are well within the range of the baseline model, that is, from 185 Hz to 1000 Hz. Static structural analysis was carried out for the modified designs using the same loading and boundary conditions as the baseline model, and the results for displacement and stiffness in the X, Y and Z directions are presented in Table 7. Figures 12-14 show the displacement of the radial arm under X, Y and Z loading, respectively. From the results, it is found that the stiffness increased in all directions for the iteration-3 model.
Conclusions
In the present work, mass optimization, modal and static structural analysis of a radial arm were carried out. Natural frequencies and mode shapes of the optimized radial arm design were determined and compared with the baseline model. The following observations were made from the analysis. Modal analysis was carried out to determine the natural frequencies of the baseline and modified design models of the radial arm, which is significant for studying the vibrational characteristics and avoiding resonance. It is found that the natural frequencies of the iteration-3 model are comparable with the baseline model results. From the static structural analysis, it is found that the deformation of the radial arm is reduced from 0.072 mm to 0.053 mm in the X direction, from 4.624 mm to 3.579 mm in the Y direction and from 1.666 mm to 1.574 mm in the Z direction by the design modification for the iteration-3 model. The stiffness is increased by 26, 23 and 6% along the X, Y and Z directions for the iteration-3 model. Design modifications were carried out on the radial arm to optimize the mass, and there is a net reduction of 13% in radial arm mass due to the design modification. From the modal and static analysis, it is found that the iteration-3 model satisfies all the design requirements compared to the baseline model. Therefore, the iteration-3 model can be used for further design development, and studies related to durability and fatigue analysis should be carried out before taking up manufacturing of the product.
|
2021-06-03T00:39:12.999Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "ffe20c833bc453dfe9dc1a9094fc272a80db9b0b",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/1116/1/012113",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "ffe20c833bc453dfe9dc1a9094fc272a80db9b0b",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
}
|
196647387
|
pes2o/s2orc
|
v3-fos-license
|
Insight into Risks in Aquatic Animal Health in Aquaponics
Increased public interest in aquaponics creates a greater need to monitor fish health in order to minimize the risk of infectious and non-infectious disease outbreaks resulting from problematic biosecurity. Fish losses due to poor health and disease, as well as reports of poor management practices and produce quality, which could in a worst-case scenario affect human health, can lead to serious economic and reputational vulnerability for the aquaponics industry. The complexity of aquaponic systems prevents using many antimicrobial/antiparasitic agents or disinfectants to eradicate diseases or parasites. In this chapter, we provide an overview of potential hazards in terms of risks related to aquatic animal health and describe preventive approaches specific to aquaponic systems.
Introduction
The European Food Safety Authority reported a variety of drivers and potential issues associated with new trends in food production, and aquaponics was identified as a new food production process/practice (Afonso et al. 2017). As a new food production process, aquaponics can be defined as 'the combination of animal aquaculture and plant culture, through a microbial link and in a symbiotic relationship'. In aquaponics, the basic approach is to benefit from the complementary functions of the organisms and from nutrient recovery. The aquaculture part of the system applies principles that are similar to recirculating aquaculture systems (RAS). Aquaponics has gained momentum due to its superior features compared to traditional production systems. Thus, aquaponics seems capable of maintaining ecosystems and strengthening the capacity for adaptation to climate change, extreme weather, drought, flooding and other disasters. These attributes are within reach, but as in other agri-/aquacultural production, aquaponics is not free of risks. Given the complexity of aquaponics as an environment for co-production of aquatic animals with plants, the hazards and risks may be more complicated.
The focus in this chapter is on categories of risk (i.e. animal health/disease) rather than specific risks (e.g. flectobacillosis disease). In traditional aquaculture, some of the more common types of production risks are diseases resulting from pathogens, unsuitable water quality and system failure. Snieszko (1974) reported that infectious diseases of fish occur when susceptible fish are exposed to virulent pathogens under certain environmental conditions. Thus, the interaction of pathogens, water quality and fish resistance is linked to occurrence of disease. Previous research using risk methods has studied the routes of introduction of aquatic animal pathogens in order to secure safe trade (e.g. import risk analyses) and support biosecurity (Peeler and Taylor 2011). Considering the similarity of aquaponics to RAS, it is expected that the health problems of aquatic animals in aquaponics may be identical to aquatic animals in RAS. Specifically, fluctuations in water quality may increase susceptibility of fish to pathogens (i.e. disease-causing organisms such as virus, bacteria, parasite, fungi) in RAS and cause disease outbreaks. Microorganisms in closed systems such as RAS or aquaponics are of significance in terms of maintaining fish health. Thus, Xue et al. (2017) reported the potential correlation between fish diseases and environmental bacterial populations in RAS. High pathogen density and limited medication possibilities make the system prone to disease problems. Disease or impaired health can cause catastrophic losses with decreased survival or poor feed conversion ratios. Regardless of which potential risk becomes problematic, each has the same impact: an overall decline in the production of a marketable quality product that then results in financial loss (McIntosh 2008). Diseases can be prevented only when the risks are recognized and managed before disease occurs (Nowak 2004). The severity of risks differs and will likely change depending on when each is encountered during the production cycle.
Aquaponics and Risk: A Development Perspective for Fish Health
Fish pathogens are prevalent in the aquatic environment, and fish are generally able to resist them unless overloaded by the allostatic load (Yavuzcan Yıldız and Seçer 2017). Allostasis refers to 'stability through change', as proposed by Sterling and Eyer (1988). Put simply, this is the effort of fish to maintain homeostasis through changes in physiology. The allostatic load of fish in aquaponics may be a challenging factor, as aquaponics is a complex system, mainly in terms of water quality and the microbial community in the system. Hence, the diseases of fish are generally species- and system-specific. Specific aquaponic diseases have not been described yet. From aquaculture, it is known that fish diseases are difficult to detect and are usually the end result of the interaction between various factors involving the environment, the nutritional status of the fish, the immune robustness of the fish, the existence of an infectious agent and/or poor husbandry and management practices. In order to sustain aquaponic systems, an aquatic health management approach needs to be developed considering the species cultured, the complexity of environments in aquaponics and the type of aquaponic system management. Profitability in aquaponic production can be affected by even small percentage decreases in production, as seen in aquaculture (Subasinghe 2005). Aquaponics is a sustainable, innovative approach for future food production systems, but this integrated production system currently shows difficulties in moving from the experimental stage or small-scale modules to large-scale production. It could be hypothesized that the lack of economic success of this highly sustainable production system is due to major bottlenecks not yet scientifically addressed. Without a doubt, the cost-effectiveness and technical capabilities of aquaponic systems need further research to realize a scaling up of production (Junge et al. 2017). Research activity and innovations applied since the 1980s have transformed aquaponic technology into a viable system of food production, and although small-scale plants and research-structured plants are already viable, commercial-scale aquaponics is not often economically viable. The claimed advantages attributed to and recognized for aquaponic systems are the following: a significant reduction in the usage of water (compared to traditional soil methods of growing plants), bigger and healthier vegetables than when grown in soil, production of plants without artificial fertilizer, and aquaponic products free of antibiotics, pesticides and herbicides.
Risk Analysis Overview
Risk is defined as 'uncertainty about and severity of the consequences of an activity' (Aven 2016), and the risk picture reflects (i) probabilities/frequencies of hazards/ threats, (ii) expected losses given the occurrence of such a hazard/threat and (iii) factors that could create large deviations between expected outcomes and the actual outcomes (uncertainties, vulnerabilities). Risk analysis offers tools to judge risk and assist in decision-making (Ahl et al. 1993;MacDiarmid 1997). Risk analysis is based on systematic use of the available information for decision-making, using the components of hazard identification, risk assessment, risk management and risk communication as indicated by World Organisation of Animal Health (OIE) (Fig. 17.1). This framework is commonly used for pathogen risk analysis (Peeler et al. 2007).
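To make the three elements of this risk picture concrete, a minimal numerical sketch is given below; the hazard names, probabilities and loss figures are invented for illustration and are not taken from the chapter.

```python
# Each entry: (hazard, annual probability of occurrence, expected loss if it occurs).
# All numbers are hypothetical placeholders.
hazards = [
    ("bacterial disease outbreak", 0.10, 50_000.0),
    ("system/pump failure",        0.05, 20_000.0),
    ("water quality excursion",    0.30,  5_000.0),
]

# (i) probabilities and (ii) expected losses: expected annual loss per hazard and in total.
expected_losses = {name: p * loss for name, p, loss in hazards}
total_expected_loss = sum(expected_losses.values())

# (iii) deviation: compare an actual outcome against the expectation.
actual_loss = 12_000.0  # hypothetical realized loss in one year
deviation = actual_loss - total_expected_loss

print(expected_losses)
print(f"total expected loss: {total_expected_loss:.0f}, deviation this year: {deviation:+.0f}")
```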
Risk analysis in food production, including aquaponics, can be applied to many cases, such as food security, invasive species, production profitability, trade and investment, and for consumer preference for safe, high-quality products (Bondad-Reantaso et al. 2005;Copp et al. 2016). The benefits of applying risk analysis in aquaculture became more clearly linked to this sector's sustainability, profitability and efficiency, and this approach can also be effective for the aquaponics sector. Therefore, disease introduction and potential transmission of pathogens can be evaluated in the context of risk to aquatic animal health (Peeler et al. 2007). Various international agreements, conventions and protocols cover human, animal and plant health, aquaculture, wild fisheries and the general environment in the field of risk. The most comprehensive and broad agreements and protocols are the World Trade Organization's (WTO) Sanitary and Phytosanitary Agreement, United Nations Environmental Program's (UNEP) Convention on Biological Diversity and the supplementary agreement Cartagena Protocol on Biosafety and the Codex Alimentarius (Mackenzie et al. 2003;Rivera-Torres 2003).
A key challenge regarding the field of risk relates to our depth of knowledge. Risk decisions relate to situations characterized by large uncertainties (Aven 2016). Specifically, animal health risk analysis depends on knowledge gained from studies of epidemiology and statistics. Oidtmann et al. (2013) point out that the main constraint in developing risk-based surveillance (RBS) designs in the aquatic context is the lack of published data to advance the design of RBS. Thus, to increase robust knowledge of risks in aquaponics, studies that both increase scientific data and reduce specific weaknesses and uncertainties in aquaponics operations are needed. Some research areas that require more data for risk analysis in aquaponic systems are presented below (Table 17.1).
Risk Communication
In terms of risk analysis for aquatic animal diseases or health in aquaponic systems, the OIE Aquatic Animal Health Code (the Aquatic Code) can be considered because the Aquatic Code sets out standards for the improvement of aquatic animal health and welfare of farmed fish worldwide and for safe international trade of aquatic animals and their products. This Code also includes use of antimicrobial agents in aquatic animals (OIE 2017).
Hazard Identification
In risk analysis, a hazard is generally specified by describing what might go wrong and how this might happen (Ahl et al. 1993). A hazard refers not only to the magnitude of an adverse effect but also to the likelihood of the adverse effect occurring (Müller-Graf et al. 2012). Hazard identification is important for revealing the factors that may favour the establishment of a disease and/or a potential pathogen threat, or that are otherwise detrimental to fish welfare. Biological pathogens are recognised as hazards in aquaculture. A broad range of factors can be taken into consideration as long as they are associated with disease occurrence, i.e. they are hazards.

Table 17.1 Research areas requiring more data for risk analysis in aquaponic systems:
• Understanding the aquatic animal health and welfare concept in aquaponics in terms of the species of aquatic organisms and the system used
• Understanding the stress/stressor concept for aquatic organisms in aquaponics by the species and the system used
• Understanding the allostatic load for aquatic organisms and the emergence of diseases
• Understanding the welfare concept in aquaponics
• Characterizing the critical water quality parameters against aquatic animal health
• Understanding the sensitivity of aquatic organisms to the aquaponic environment
• Revealing the microbial profile for the different systems of aquaponics
• Health indicators: developing and validating health indicators for aquatic animals raised in aquaponic systems
• Database development: field data on the health/disease of aquatic animals in aquaponics; field data on the microbial profile including pathogens

The sustainability of aquaponics is linked with a variety of factors, including system design, fish feed and faeces features, fish welfare and elimination of pathogens from the system (Palm et al. 2014a, b). Goddek (2016) reported that aquaponic systems are characterized by a wide range of microflora as fish and biofiltration exist in the same water mass. Since a great variety of microflora exists in aquaponic practices, the occurrence of pathogens and risks to human health should also be considered in order to guarantee food safety. In terms of sustainability of aquaponic systems, pathogen elimination to prevent losses due to diseases may be a challenging factor when aquatic animal production is intensified. The use of chemotherapeutants in aquaculture to fight pathogens presents a number of potential hazards and risks to production systems, the environment and human health (Bondad-Reantaso and Subasinghe 2008) (Table 17.2).
To eliminate hazards, the fish rearing and plant cultivation phases should be considered separately. The biggest risks in fish rearing are related to water quality, fish density, feeding quality and quantity, and disease (Yavuzcan Yildiz et al. 2017). Depending on the species of fish reared, the level of risk can increase if the species is not appropriate for the conditions of the particular system. For example, potassium is often supplemented in aquaponic systems to promote plant growth, but results in reduced performance in hybrid striped bass. Normally, freshwater and high-density culture-tolerant species are utilized in aquaponics. The most common species of fish in commercial systems are tilapia and ornamental fish. Channel catfish, largemouth bass, crappies, rainbow trout, pacu, common carp, koi carp, goldfish, Asian sea bass (or barramundi) and Murray cod are among the species that have been trialled (Rakocy et al. 2006). Tilapia, a warm-water species highly tolerant of fluctuating water parameters (pH, temperature, oxygen and dissolved solids), is the species largely reared in most commercial aquaponic systems in North America and elsewhere. The results of a recent online survey, based on answers from 257 respondents, showed that tilapia is reared in 69% of aquaponic plants (Love et al. 2015). Tilapia presents an economic interest in some markets but not in others. In the same survey (Love et al. 2015), other species utilized were ornamental fish (43%), catfish (25%), other aquatic animals (18%), perch (16%), bluegill (15%), trout (10%) and bass (7%). One of the major weaknesses in aquaponic systems is the management of water quality to meet the requirements of the tank-reared fish, while cultivated crops are treated as the second step of the process. Fish require water with appropriate parameters for oxygen, carbon dioxide, ammonia, nitrate, nitrite, pH, chlorine and others. A high level of suspended solids can affect the health status of fish (Yavuzcan Yildiz et al. 2017), provoking damage to the gill structure, such as epithelium lifting, hyperplasia in the pillar system and reduction of epithelial volume (Au et al. 2004). Fish stocking density and feeding (feeding rate and volume, feed composition and characteristics) affect the digestion processes and metabolic activities of fish and, accordingly, the catabolites, total dissolved solids (TDS) and waste by-products (faeces and uneaten feed) in the rearing water. The basic principle on which the aquaponic system is based is the utilization of catabolites in the water for plant growth. Aquaponic systems require 16 essential nutrients, and all these macro- and micronutrients must be balanced for optimal plant growth. An excess of one nutrient can negatively affect the bioavailability of others (Rakocy et al. 2006). Therefore, the continuous monitoring of water parameters is essential to maintain water quality appropriate for fish and crop growth and to maximize the benefits of the process. Reduced water exchange and a low crop growth rate can create toxic nutrient concentrations in the water for fish and crops. On the other hand, the addition of some micronutrients (Fe2+, Mn2+, Cu2+, B3+ and Mo6+), normally scarce in water where fish are reared, is essential to adequately sustain crop production. In comparison to hydroponic culture, crops in aquaponic systems require lower levels of total dissolved solids (TDS, or EC of 0.3-0.6 mmhos/cm) and require, like fish, a high level of dissolved oxygen in the water (Rakocy et al. 2006) for root respiration.
Fish Diseases and Prevention
While fish diseases caused by bacteria, viruses, parasites or fungi can have a significant negative impact on aquaculture (Kabata 1985), the appearance of a disease in aquaponic systems can be even more devastating. Maintenance of fish health in aquaponic systems is more difficult than in RAS, and, in fact, control of fish diseases is one of the main challenges for successful aquaponics (Sirakov et al. 2016). Diseases which affect fish can be divided into two categories: infectious and non-infectious fish diseases. Infectious diseases are caused by different microbial pathogens transmitted either from the environment or from other fish. Pathogens can be transmitted between the fish (horizontal transmission) or vertically, by (externally or internally) infected eggs or infected milt. More than half of the infectious disease outbreaks in aquaculture (54.9%) are caused by bacteria, followed by viruses, parasites and fungi (McLoughlin and Graham 2007). Often, although clinical signs or lesions are not present, fish can carry pathogens in a subclinical or carrier state (Winton 2002). Fish diseases can be caused by ubiquitous bacteria, present in any water containing organic enrichment. Under certain conditions, bacteria quickly become opportunistic pathogens. The presence of low numbers of parasites on the gills or skin usually does not lead to significant health problems. The capability of a pathogen to cause clinical disease depends on the interrelationship of six major components related to fish and the environment in which they live (physiological status, host, husbandry, environment, nutrition and pathogen). If any of the components is weak, it will affect the health status of the fish (Plumb and Hanson 2011). Non-infectious diseases are usually related to environmental factors, inadequate nutrition or genetic defects (Parker 2012). Successful fish health management is accomplished through disease prevention, reduction of infectious disease incidence and reduction of disease severity when it occurs. Avoidance of contact between the susceptible fish and a pathogen should be a critical goal, in order to prevent outbreak of infectious disease.
Three main measures to achieve this goal are:
• Use of pathogen-free water supply.
• Use of certified pathogen-free stocks.
Implementation of these measures will decrease fish exposure to pathogenic agents. However, it is practically impossible to define all agents which could cause disease in the aquatic environment and to completely prevent host exposure to pathogens. Certain factors, such as overcrowding, increase fish susceptibility to infection and pathogen transmission. For that reason, many pathogens which do not cause disease in wild fish can cause disease outbreaks with high mortality rates in high-density fish production systems. To avoid this, the infection level of fish in aquaponics must be continually monitored. Maintaining biosecurity in aquaponics is important not only from an economic point of view but also for fish welfare. Appearance of any fish pathogen in constrained tank space and under high population density will inevitably pose a threat to fish health, both to the individuals that are affected by the pathogen and those still unaffected.
The goal of biosecurity is the implementation of practices and procedures which will reduce the risks of:
• Introduction of pathogens into the facility.
• Spread of pathogens throughout the facility.
• Presence of conditions which can increase susceptibility to infection and disease (Bebak-Williams et al. 2007).
The achievement of this goal involves management protocols to prevent specific pathogens from entering the production system. Quarantine is an important biosecurity component for prevention of contact with infectious agents and is used when fish are moved from one area to another. All newly acquired fish are quarantined before they are introduced into established populations. Fish under quarantine are isolated for a specific period of time before release into contact with a resident population, preferably in a separate area with dedicated equipment (Plumb and Hanson 2011). New fish remain in quarantine until shown to be disease-free. It is advisable in some cases to quarantine new fish in an isolation tank for 45 days before adding them to the main system (Somerville et al. 2014). During quarantine, fish are monitored for signs of disease and sampled for presence of infectious agents. Prophylactic treatments may be initiated during the quarantine period in order to remove initial loads of external parasites.
For disease prevention, certain measures are recommended to reduce risk factors:
• Administer commercial vaccines against various fish viral and bacterial pathogens. Most common routes of application are by injection, by immersion or via food.
• Breed strains of fish which are more resistant to certain fish pathogens. Although Evenhuis et al. (2015) report that fish strains with increased simultaneous resistance to two bacterial diseases (columnaris and bacterial cold water disease) are available, there is evidence that increased susceptibility to other pathogens may occur (Das and Sahoo 2014; Henryon et al. 2005).
• Take preventive and corrective measures to prevent stress in fish. Since multiple stressors are present in every step of aquaponic production, avoidance and management of stress through monitoring and prevention minimize its influence on fish health.
• Avoid high stocking density, which causes stress and may increase the incidence of disease even if other environmental factors are acceptable. Also, high stocking density increases the possibility of skin lesions, which are sites of entry for various pathogens into the organism.
• Regularly remove contaminants from water (uneaten food, faeces and other particulate organics). Dead or dying fish should be removed promptly, as they can serve as potential disease sources to the remaining stock and a breeding ground for others, as well as fouling the water when decomposing (Sitjà-Bobadilla and Oidtmann 2017).
• Disinfect all equipment used for tank cleaning and fish manipulation. After adequate disinfection, all equipment should be rinsed with clear water. Use of footbaths and hand washing with disinfecting soap at the entrance and within the buildings are recommended. These steps directly decrease the potential for the spread of pathogens (Sitjà-Bobadilla and Oidtmann 2017). Certain chemicals used as disinfectants (such as benzalkonium chloride, chloramine B and T, iodophors) are effective for disease prevention.
• Administer dietary additives and immunostimulants for improvement of health and to reduce the impacts of disease. Such diets contain various ingredients important for improvement of health and disease resistance (Anderson 1992; Tacchi et al. 2011). There exists a wide range of products and molecules, including natural plant products, immunostimulants, vitamins, microorganisms, organic acids, essential oils, prebiotics, probiotics, synbiotics, nucleotides, etc. (Austin and Austin 2016; Koshio 2016; Martin and Król 2017).
• Segregate fish by age and species for disease prevention, since susceptibility to certain pathogens varies with age, and certain pathogens are specific to some fish species. Generally, young fish are more susceptible to pathogens than older fish (Plumb and Hanson 2011).
Maintaining the health of fish in aquaponics requires adequate health management and continuous attention. Optimal fish health is best achieved through biosecurity measures, adequate production technology and husbandry management practices which enable optimal conditions. As mentioned, avoidance through optimal rearing conditions and biosecurity procedures are the best way to avoid fish diseases. Invariably, however, a pathogen may appear in the system. The first and most important action is to identify the pathogen correctly.
Disease Diagnosis (Identification of Diseased Fish)
Early recognition of diseased fish is important in maintaining health of the aquaculture unit in the aquaponic system. Accurate diagnosis and prompt response will stop the spread of disease to other fish, thus minimizing losses.
Examination of live fish starts by observing their behaviour. Constant and careful daily observation enables early recognition of diseased fish. As a rule, fish should be observed for behavioural changes before, during and after feeding.
Healthy fish exhibit fast, energetic swimming movements and a strong appetite. They swim in normal, species-specific patterns and have intact skin without discolorations (Somerville et al. 2014). Diseased fish exhibit various behavioural changes with or without visible change in physical appearance. The most obvious indicator of deteriorating fish health is the reduction (cessation) of feeding activity, usually as a result of an environmental stress and/or an infectious/parasitic disease. The most obvious sign of disease is the presence of dead or dying animals (Parker 2012;Plumb and Hanson 2011).
Behavioural changes in diseased fish may include abnormal swimming (swimming near the surface, along the tank sides, crowding at the water inlet, whirling, twisting, darting, swimming upside down), flashing, scratching on the bottom or sides of the tank, unusually slow movement, loss of equilibrium, weakness, hanging listlessly below the surface, lying on the bottom and gasping at the water surface (sign of low oxygen level) or not reacting to external stimuli. In addition to behavioural changes, diseased fish exhibit physical signs that can be seen by the unaided eye. These gross signs can be external, internal or both and may include loss of body mass; distended abdomen or dropsy; spinal deformation; darkening or lightening of the skin; increased mucus production; discoloured areas on the body; skin erosions, ulcers or sores; fin damage; scale loss; cysts; tumours; swelling on the body or gills; haemorrhages, especially on the head and isthmus, in the eyes and at the base of fins; and bulging eyes (pop-eye, exophthalmia) or endophthalmia (sunken eyes). The internal signs are changes in the size, colour and texture of the organs or tissues, accumulation of fluids in the body cavities and presence of pathological formations such as tumours, cysts, haematomas and necrotic lesions (Noga 2010;Parker 2012;Plumb and Hanson 2011;Winton 2002).
Upon suspicion of deteriorating fish health, the first step is to check water quality (water temperature, dissolved oxygen, pH, levels of ammonia, nitrite and nitrate) and promptly respond to any deviations from the optimal range. If the majority of fish in the tank have abnormal behaviour and show non-specific signs of disease, there is likely a change in the environmental conditions (Parker 2012; Somerville et al. 2014). Low oxygen (hypoxia) is a frequent cause of fish mortality. Fish in water with low oxygen are lethargic, congregate near the water surface, gasp for air and have brighter pigmentation. Dying fish exhibit agonal respiration, with the mouth open and opercula flared. These signs are also evident in fish carcasses. High ammonia levels cause hyperexcitability with muscular spasms, cessation of feeding and death. Chronic deviation from optimal levels results in anaemia and decreased growth and disease resistance. Nitrite-poisoned fish show behavioural changes characteristic of hypoxia, with pale tan or brown gills and brown blood (Noga 2010).
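As a simple operational aid for the first step described above, a monitoring script can flag any parameter that drifts outside a configured range. The sketch below is illustrative only: the parameter ranges are hypothetical placeholders and must be replaced with limits appropriate to the species and system in question.

```python
# Hypothetical optimal ranges; replace with species- and system-specific limits.
OPTIMAL_RANGES = {
    "temperature_C":             (24.0, 30.0),
    "dissolved_oxygen_mg_per_L": (5.0, 12.0),
    "pH":                        (6.5, 8.5),
    "ammonia_mg_per_L":          (0.0, 0.5),
    "nitrite_mg_per_L":          (0.0, 0.5),
    "nitrate_mg_per_L":          (0.0, 150.0),
}

def check_water_quality(readings):
    """Return the parameters whose readings fall outside the configured range."""
    deviations = []
    for name, value in readings.items():
        low, high = OPTIMAL_RANGES[name]
        if not (low <= value <= high):
            deviations.append((name, value, (low, high)))
    return deviations

# Example reading set (hypothetical values).
readings = {"temperature_C": 27.0, "dissolved_oxygen_mg_per_L": 4.2, "pH": 7.1,
            "ammonia_mg_per_L": 0.8, "nitrite_mg_per_L": 0.1, "nitrate_mg_per_L": 60.0}

for name, value, (low, high) in check_water_quality(readings):
    print(f"ALERT: {name} = {value} outside optimal range [{low}, {high}]")
```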
When only few fish show signs of disease, it is imperative to remove them immediately in order to stop and prevent the spread of the disease agent to the other fish. In the early stages of a disease outbreak, generally only a few fish will show signs and die. In the following days, there will be a gradual increase in the daily mortality rate. The diseased fish must be carefully examined in order to determine the cause. Only a few fish diseases produce pathognomonic (specific to a given disease) behavioural and physical signs. Nevertheless, careful observation will often allow the examiner to narrow down the cause to environmental conditions or disease agents. In a serious disease outbreak, a fish veterinarian/health specialist should be contacted immediately for professional diagnosis and disease management options. In order to solve the disease problem, the diagnostician will need a detailed description of the behavioural and physical signs exhibited by the diseased fish, daily records of the water quality parameters, origin of the fish, date and size of fish at stocking, feeding rate, growth rate and daily mortality (Parker 2012;Plumb and Hanson 2011;Somerville et al. 2014).
Treatment Strategies in Aquaponics
Treatment options for diseased fish in an aquaponic system are very limited. As both fish and plants share the same water loop, medications used for disease treatments can easily harm or destroy the plants, and some may be absorbed by the plants, causing withdrawal periods or even making them unusable for consumption. The medications can also have detrimental effects on the beneficial bacteria in the system. If a medicinal treatment is absolutely necessary, it must be implemented early in the course of the disease. The diseased fish are transferred into a separate (hospital, quarantine) tank isolated from the system for treatment. When returning the fish after the treatment, it is important not to transfer the used medications into the aquaponic system. All these limitations require improvements in disease management options with minimal negative effects on the fish, the plants and the system (Goddek et al. 2015; Somerville et al. 2014; Yavuzcan Yildiz et al. 2017). One of the most used and effective, old-school treatments against the most common bacterial, fungal and parasitic infections in fish is a salt (sodium chloride) bath. Salt is beneficial for the fish, but can be detrimental to the plants in the system (Rakocy 2012), and the whole treatment procedure must be performed in a separate tank. A good option is to separate the recirculating aquaculture unit from the hydroponic unit (decoupled aquaponic systems) (see Chap. 8). Decoupling allows for fish disease and water treatment options that are not possible in coupled systems (Monsees et al. 2017) (see Chap. 7). One recent improvement for the control of fish ectoparasites and disinfection in aquaponic systems is the use of Wofasteril (KeslaPharmaWolfen GMBH, Bitterfeld-Wolfen, Germany), a peracetic acid-containing product that leaves no residues in the system (Sirakov et al. 2016). Alternatively, hydrogen peroxide can be used, but at a much higher concentration. While these chemicals have minimal side effects, their presence is undesirable in aquaponic systems, and alternative approaches, such as biological control methods, are required (Rakocy 2012).
The biological control method (biocontrol) is based on the use of other living organisms in the system, relying on natural relationships among the species (commensalism, predation, antagonism, etc.) (Sitjà-Bobadilla and Oidtmann 2017) to control fish pathogens. At present, this method is a complementary fish health management tool with high potential, especially in aquaponic systems. The most successful implementation of biocontrol in fish culture is the use of cleaner fish against sea lice (skin parasites) in salmon farms. It is best practiced in Norwegian farms where cleaning wrasse (Labridae) are co-cultured with salmon. The wrasse remove and feed on sea lice (Skiftesvik et al. 2013). Although cleaning is less common in freshwater fish, the leopard plecos (Glyptoperichthys gibbiceps), cohabiting with blue tilapia (Oreochromis aureus), successfully keeps infection with Ichthyophthirius multifiliis under control by feeding on the parasite cysts (Picón-Camacho et al. 2012). This biocontrol method is becoming increasingly important in aquaculture and can be considered in aquaponic systems. Additionally, it must be noted that the cleaner fish can also harbour pathogens that can be transmitted to the main cultured species. Therefore, they must also undergo preventive and quarantine procedures before introduction into the system.
Another biocontrol method, still in the exploratory application phase in fish culture, is the use of filter-feeding and filtering organisms. By reducing the pathogen loads in the water, these organisms can lower the chances of disease emergence (Sitjà-Bobadilla and Oidtmann 2017). For example, Othman et al. (2015) demonstrated the ability of freshwater mussels (Pilsbryoconcha exilis) to reduce the population of Streptococcus agalactiae in a laboratory-scale tilapia culture system. The potential of this biocontrol method in aquaponic systems is yet to be tested, and new studies are needed to explore the possibilities not only for fish disease control but also for control of plant pathogens.
The most promising and well-documented biocontrol method is the use of beneficial microorganisms as probiotics in fish feed or in the rearing water. Their usage in aquaponic systems as promoters of fish/plant growth and health is well known, and probiotics have also demonstrated effectiveness against a range of bacterial pathogens in different fish species. For example, in rainbow trout, dietary Carnobacterium maltaromaticum and C. divergens protected against Aeromonas salmonicida and Yersinia ruckeri infections (Kim and Austin 2006), and Aeromonas sobria GC2 incorporated into the feed successfully prevented clinical disease caused by Lactococcus garvieae and Streptococcus iniae (Brunt and Austin 2005). Sirakov et al. (2016) made good progress in simultaneous biocontrol of parasitic fungi in both fish and plants in a closed recirculating aquaponic system. In total, over 80% of the isolates (bacteria isolated from the aquaponic system) were antagonistic to both fungi (Saprolegnia parasitica and Pythium ultimum) in the in vitro tests. The bacteria were not classified taxonomically, and the authors assumed that they belonged to the genus Pseudomonas and to a group of lactic acid bacteria. These findings, although very promising, have yet to be tested in an operational aquaponic system. As a final alternative to chemical treatment, we suggest the use of medicinal plants with antibacterial, antiviral, antifungal and antiparasitic properties. Plant extracts have various biological characteristics with minimal risk of developing resistance in the targeted organisms (Reverter et al. 2014). Many scientific reports demonstrate the effectiveness of medicinal plants against fish pathogens. For example, Nile tilapia fed a diet containing mistletoe (Viscum album coloratum) showed increased survivability when challenged with Aeromonas hydrophila (Park and Choi 2012). Indian major carp showed a significant reduction in mortality when challenged with Aeromonas hydrophila and fed diets containing prickly chaff flower (Achyranthes aspera) and Indian ginseng (Withania somnifera) (Sharma et al. 2010; Vasudeva Rao et al. 2006). Medicinal plant extracts have also proven effective against ectoparasites. In goldfish, Yi et al. (2012) demonstrated the effectiveness of Magnolia officinalis and Sophora alopecuroides extracts against Ichthyophthirius multifiliis, and Huang et al. (2013) showed that extracts of Caesalpinia sappan, Lysimachia christinae, Cuscuta chinensis, Artemisia argyi and Eupatorium fortunei have 100% anthelmintic efficacy against Dactylogyrus intermedius. The use of medicinal plants in aquaponics is promising, but more research is needed to find appropriate treatment strategies without undesirable effects. As noted by Junge et al. (2017), even though research on aquaponics has developed greatly in recent years, the number of research papers published on the topic is still dramatically low compared to papers published on aquaculture or hydroponics. Aquaponics, still considered an emerging technology, is now recognized as having great potential for food production for the world's population, which, according to the UN World Population Prospects (UN 2017), numbered nearly 7.6 billion in mid-2017 and, based on the projections, is expected to increase by 1 billion within 12 years, reaching about 8.6 billion in 2030.
Nevertheless, considering the potential risks to the sustainability of aquaponics due to fish diseases, development of good ideas, and novel methods and approaches for pathogen control will be our major challenge for the future. There is a pressing need to initiate new knowledge to provide a better basis for management of fish and plant health, and to continue to develop operation and infrastructure systems for the aquaponic industry. The causes of fish losses in aquaponic systems, system-specific diseases and the interaction and alteration of microbial community, along with pathogens, are priority areas for study.
|
2019-07-16T22:04:33.516Z
|
2019-01-01T00:00:00.000
|
{
"year": 2019,
"sha1": "63ce502941988f3a982026cc70344897f7c3eab3",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-030-15943-6_17.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "b1e3b924fc5861eb28f82f3ad3889bd117809802",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Business"
]
}
|
14637596
|
pes2o/s2orc
|
v3-fos-license
|
Unified Gauge Field Theory and Topological Transitions
The search for a unified description of all interactions has driven many developments in mathematics and physics. The role of geometric effects in the quantum theory of particles, fields and spacetime has been an active topic of research. This paper attempts to obtain the conditions for a unified gauge field theory, including gravity. In Yang-Mills type theories with compactifications from a 10- or 11-dimensional space to a spacetime of 4 dimensions, the Kaluza-Klein and the holonomy approaches have been used. In the compactifications of Calabi-Yau spaces and submanifolds, the Euler number topological index is used to label the allowed states and the transitions. With an SU(2) or SL(2,C) connection for gravity and the U(1)*SU(2)*SU(3) or SU(5) gauge connection for the other interactions, a unified gauge field theory is expressed in the 10- or 11-dimensional space. Partition functions are constructed for the sum over all possible configurations of subspaces labeled by the Euler number index, together with the action for gauge and matter fields. Topological Euler number changing transitions that can occur in the gauge fields and the compactified spaces, and their significance, are discussed. The possible limits and effects on the physical validity of such a theory are discussed.
Index-invariant and index-changing transformations can arise due to changes of dimension and of the topology/geometry of the space on which the dynamical variables are defined. The dynamics and symmetries of the system, with the action and field equations, must be compatible with the geometry of the manifolds or other spaces. In string theory/brane theory/M theory and gauge field theory, the consistency of Calabi-Yau and other compactification spaces, and of the dynamics on them for the physically relevant objects, is required. The compactifications can be written in the Kaluza-Klein way of inserting gauge potentials in the metric of the n-dimensional space, or they can be expressed as a holonomy of the gauge fields on loops in the subspaces.
A 3 + 1 + 6 dimensional space and the allowed compactifications to 3 + 1 spacetime can consistently give the required limits for high energy physics. In condensed matter, the local variations in geometry/symmetry and topology are studied for their role in the existence of many phases and their phase transitions. In Yang-Mills gauge field theories, the unification program in terms of holonomy operators and the partition functions over the configurations gives the possibilities of high energy interactions in which there are changes in topology, that is, changes in the indexes and invariants of equivalence classes of the fields, and thus topological phase transitions.
In the classical and quantum gravity of black hole horizons and in pre-inflationary cosmological models, the possibility exists of topological phase transitions giving rise to thermal effects, and to dark matter and energy. There is also a convention of taking an 11-dimensional theory with the 3+1+1+2+4 = 11 split in the spacetime and gauge dimensions. There is a necessity to recover a time dimension for doing the dynamics of entities in the space. Hence the signature of the space has to have a locally Lorentzian form. In 11 dimensions it could be described locally by the representations of SO(10, 1). SU(5) is a subgroup of SO(10), and SO(3, 1) is also a subgroup of SO(10, 1).
II THE MODEL FOR THE UNIFIED GAUGE FIELD THEORY AND COMPACTIFICATIONS
The model for the unified gauge field theory is chosen keeping in view the developments in quantum gravity, M theory and Yang-Mills field theories of the past decades. A 10-dimensional space is taken, with four dimensions for spacetime and gravitation and six for the electromagnetic, weak and strong interactions. All these interacting fields are taken as Yang-Mills gauge fields, with the gauge group SU(2), as a subgroup of SL(2, C), giving the local Lorentz connection for gravitation, and U(1)*SU(2)*SU(3), as a subgroup of SU(5), giving the Yang-Mills gauge theory for the other three interactions in the standard model of particle physics.
In the contemporary way of writing field theories globally with a holonomy loop representation, the counting of loop dimensions is done as follows. Any representative element of the groups, taken in diagonal form, gives one independent entry on the diagonal for U(1), two for SU(2), and three for SU(3); this is the rank of the matrix, or the number of eigenvalues or elements in the trace. The number of generators would be 1 + 3 + 8 for the individual groups of the product group, but the underlying space is of dimension 1 + 2 + 4 = 7. Ten or eleven dimensional spaces are frequently used as a standard model coming down from string theory to gauge fields. With dim SO(11)/SO(10) = 10, a 10 dimensional space for the theory leaves six dimensions for the gauge compactifications, while the four dimensional spacetime manifold also carries a gravity gauge group for unification.
The light cone and null hypersurface have future and past halves, each of which carries an SU(2) group. Notice that a point is written with the combinations x ± iy and z ± ct in spacetime, as a 2 x 2 matrix on which SL(2,C) acts. The point can then be expressed as a Pauli matrix spinor times the position 4 vector; that is a general element for SU(2). In the time reversed combination z ∓ ct the other option of SU(2) is realised. This interpretation may avoid the difficulties raised about whether the connection for gravitation is real or complex. Counting two dimensions for each of the two SU(2) factors, the 4 dimensions for spacetime and gravitation are obtained. Lorentz connection based, gauge transformed gravity is diffeomorphed spacetime. Thus the classical idea of deformed or curved spacetime as a representation of the gravitational effect is restored. This could be a possible argument for the choice.
In the Kaluza Klein holonomy approach, each dimension is represented by a loop, giving a total of ten loops for ten dimensions. The weakness of gravity makes its loop macroscopic. If gravity becomes stronger in a unified theory at energies close to the Planck scale, the loop would shrink too, and at the beginning of the universe the very small radius of curvature may go along with a very small compactified loop for the gravitational connection. In the late universe we live in, the gravitational interaction is very weak and the geometry is classical, with a very large radius of curvature and loop size. In the Einstein-Hilbert action the Ricci scalar is roughly the inverse square of the radius of curvature; weak gravity corresponds to a nearly flat spacetime, or a very large radius of curvature.
This estimate is expressed as (1/g_YM^2) Tr(F ∧ *F) for the Yang Mills action and as (1/κ) R √g for the gravitational action. In the simple case the Ricci scalar is the product of the principal curvatures. In both cases the coupling is related to the field strength and the compactification size. For a very weak gravitational constant the radii of curvature are very large, while the other gauge fields, with couplings of order one, have very small compactification loops. With this estimate the unified theory permits all gauge fields to act freely, including gravity. With 8πG as the gravitational constant and a loop radius l, keeping the combination (8πG)^2 l constant gives a scaling between the two in the weak and strong gravity limits.
The gravitational interaction has a classical limit, which is the observable spacetime, while the electromagnetic interaction, being U(1), has an infinite range gauge particle of zero mass and hence also operates over the whole of spacetime, even though its holonomy loop is locally small. The SU(2) connection of gravity has the difficult task of producing a graviton at one end of the energy scale and the nonperturbative solutions of classical and quantum gravity at the other. Spin two representations exist for the SL(2,C) group, and that remains the local gauge group. The SU(2) subgroup and its connection may need a fresh approach if the unification has to include the gravitational connection too. Should SU(2) * SU(2) be considered?
Consider the group relations: SU(2) ≃ SO(3); SU(2) ⊗ C = SL(2,C); SU(2) ⊗ SU(2) ≡ SO(3) and SO(3) ⊂ SO(3,1); SO(3,1) ≡ SL(2,C); SU(2) = U(2) ∩ SL(2,C). From these relations the correct gauge group for gravity has to be determined, one that encodes all the necessary information from the Lie group down to the Lie algebra, whose representations give the spinors that go with the connection form. This choice should remain valid for small and large energies and curvatures; that is, it should determine the neighbourhood for the group action on the underlying space.
The loops for the other interactions, due to their coupling constants being larger than that of gravitation, have very small sizes. How does the partition of the ten dimensional space occur so that it is 4 + 6 = 10? What are the allowed compactifications, and what is the topology of these loops in terms of homotopy? What are the allowed differential forms, and hence the gauge fields, on this space? And hence what is the cohomology of closed and exact forms that gives the Betti numbers, from which the Euler number can be defined? What indices dependent on the field forms could be defined that are topological invariants, and that set up a classification of the field equations compatible with the geometry and topology of the lower dimensional subspaces of the full 10 dimensional space?
For an even dimensional manifold with a nowhere vanishing vector field, which exists since gravity is universally present, the Euler number is zero. This is true for the 3+1 dimensional spacetime, a subspace of the ten dimensional space, and for that 10 dimensional space itself. The gravity equations giving the dynamics of three surfaces will have Euler number zero for the odd dimensions, unless closed timelike loops or singularities are present. The number of isospectral Riemann surfaces of genus g grows exponentially with the square of g, so genus changing transformations of spacetime will have to be handled as exceptional. The gravitational field equations constrain the spacelike surfaces to evolve in such a manner that the 4 and lower dimensional manifolds have genus zero or one, which is not topologically complicated.
III PARTITION FUNCTIONS AND TOPOLOGICAL INDEX
The partition function methods of statistical mechanics, as applied to field theory, are used. In the presence of symmetry groups in molecular and solid state physics that give rise to allowed configurations, the partition functions have to sum over all the configurations, with the Boltzmann or Gibbs distribution and the density of states as the weight function.
The probability for a configuration, and the average values calculated with it, give the thermodynamic quantities. Thus, for example, the rotational partition function Σ_J (2J + 1) exp(−J(J + 1)/(2IkT)) has a sum over J, in which the 2J + 1 possibilities (the degeneracy) for the quantum number or index J are counted. If selection rules restrict the allowed J values, they determine the number of configurations to be summed over in the partition function. If the number of independent configurations to be summed over is given by any other rule, then that factor is included in the partition function as the weight for the ensemble.
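As a side illustration (not part of the original text), the degeneracy-weighted sum can be evaluated numerically. In the sketch below the combination playing the role of ħ²/(2IkT) is collapsed into a single dimensionless parameter; the function name and truncation point are arbitrary choices.

```python
import math

def rotational_partition_function(theta_over_t, j_max=200):
    """Sum Z = sum_J (2J+1) exp(-J(J+1) * theta/T), truncated once the terms have decayed."""
    return sum((2 * j + 1) * math.exp(-j * (j + 1) * theta_over_t)
               for j in range(j_max + 1))

# At high temperature (small theta/T) the sum approaches the classical value T/theta.
for ratio in (0.5, 0.1, 0.01):
    print(ratio, rotational_partition_function(ratio), 1.0 / ratio)
```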
For example, if there were N_i configurations of the i-th kind, then this factor enters the sums that give the free energy, the entropy and the probabilities. The configurations can come from the energy levels as well as from the symmetries.
The canonical ensemble Gibbs density exp(−βH) goes over, in path integral form with action S, to exp(iS), and the Euclidean form is exp(−S). This is understood as a functional integral over the matter and gauge fields.
The configurations could follow a symmetry group or a topological classification.
To introduce topological properties into the partition functions, topological indices such as the Euler number label the equivalence classes of subspaces and of the field configurations compatible with them. These are identified and summed over in the partition function. Thus the partition function takes the schematic form Z = Σ_χ ∫ D[fields] exp(−S), where the sum is over all the possible Euler number topological configurations and the integration is over all the fields in the action S. An index changing transformation will affect the partition function and other quantities. In writing the action integral and path integral form of the partition functions for perturbatively solvable interactions, the attempt is to obtain Gaussian integrals.
In the modern theory of manifolds and of gauge fields compatible with them, the spectrum of differential operators is related to, and controls, the allowed geometries. Some invariant indices are defined. The gauge fields, expressed as differential forms, have integrals that define the indices. The closed and exact forms give the cohomology and the Betti numbers. This defines the Euler number for the 10 dimensional space and its subspaces.
Consider the set X^p(M_n) of closed forms of order p on the submanifold M_n of dimension n, and the set Y^p(M_n) of exact forms of order p. The quotient space X^p/Y^p gives the cohomology H^p(M_n), whose dimension is the Betti number, dim(H^p(M_n)) = b_p(M_n). Consider the collection of all possible differential forms of order p, in this case the gauge fields, potentials and related forms. The Euler number (characteristic index) is then defined by the alternating sum χ(M_n) = Σ_{p=0}^{n} (−1)^p b_p(M_n) for the subspace M_n of dimension n. Consider the collection of subspaces of all allowed dimensions and the equivalence class of all such subspaces having the same Euler number and dimension; this describes the "degenerate excitation spectrum" of the theory.
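For concreteness, a minimal sketch (not from the paper) of this alternating-sum definition, evaluated on the standard Betti numbers of some familiar closed surfaces:

```python
def euler_characteristic(betti):
    """Euler number as the alternating sum of Betti numbers, chi = sum_p (-1)^p * b_p."""
    return sum((-1) ** p * b for p, b in enumerate(betti))

# Betti numbers (b_0, b_1, b_2) of some familiar closed 2-manifolds.
examples = {
    "2-sphere S^2": (1, 0, 1),      # chi = 2
    "2-torus T^2": (1, 2, 1),       # chi = 0
    "genus-2 surface": (1, 4, 1),   # chi = -2, consistent with chi = 2 - 2g
}
for name, betti in examples.items():
    print(name, euler_characteristic(betti))
```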
It is connected to the gauge field theory chosen and to the underlying geometry and topology of the subspaces or moduli spaces of the 10 or 11 dimensional space. In the simple case of 2 dimensional Riemann surfaces the Euler index is given by the formula 2 − 2g, where g is the genus, here taken as the number of holes minus the number of handles minus the number of boundaries. A simple model with a Gaussian integral for the action of the matter and gauge fields on a Riemann surface with n holes gives a partition function that can be summed as a series in terms of Riemann zeta functions.
IV TOPOLOGICAL PROPERTIES OF THE SUBMANIFOLDS AND TOPOLOGICAL TRANSITIONS
The fundamental interactions of the GUT form 6 dimensional subspaces or loop configurations that can have non trivial topologies. The Calabi-Yau spaces also provide a reduction of the 10 dimensional space to lower dimensional ones with various Euler numbers. As the genus increases, the number of possible configurations with the same genus (an equivalence class) increases rapidly. Compatibility conditions between the fields and the dynamics will reduce the number of allowed configurations. In the partition function, the sum is taken over all the possible (in)equivalent configurations labelled by the allowed Euler numbers. The 10 dimensional space has subspaces which could be manifolds of lower dimensions or Calabi-Yau spaces, and the holonomy loops have homotopy groups associated with them.
The Chern number, topological charge, Atiyah-Singer, Hopf-Poincare and other indices are used to characterise the topological invariants of the field theory. But these give the model and dynamics dependent part of the interactions, and they depend on the field configurations compatible with the underlying spaces. The compactification to subspaces of lower dimensions will lead to equivalent and inequivalent classes under the Euler number index, and this is the index chosen in the research literature. It is related to the Betti numbers and hence to the cohomology of the differential forms of the fields and the cycles for integration on subspaces. It is also obtained from the Gauss-Bonnet theorem and from the properties of the critical points of vector fields in terms of the Morse index, and it is a well defined number for discrete lattices, graphs and polyhedra. In condensed matter systems, structural phase transitions and discrete lattice topology changes involve the Euler number as the number of vertices minus the number of edges plus the number of faces.
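A hedged illustration of that combinatorial count, using textbook vertex, edge and face counts (the Csaszar triangulation of the torus is one standard example):

```python
def euler_number(vertices, edges, faces):
    """Combinatorial Euler number V - E + F for a polyhedron or cell complex."""
    return vertices - edges + faces

print(euler_number(4, 6, 4))    # tetrahedron -> 2 (sphere topology)
print(euler_number(8, 12, 6))   # cube        -> 2 (sphere topology)
print(euler_number(7, 21, 14))  # Csaszar torus triangulation -> 0
```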
In the picture suggested in this paper, high energy interactions could allow changes of topology for the intermediate states of excitations. Symmetry group changes are well known, and broken symmetry gives a number of well tested effects in physics. Topology changing transformations are increasingly being found in a variety of physical settings. Between the Planck scale and the energy scale of the experimental data there is a large range that may give rise to a number of phenomena. In the 10 dimensional unified gauge field theory of all interactions, the program of geometrisation of fundamental physics identifies the properties of the underlying space with the fields present on it, using the gauge connection and action; compactifications compatible with the dynamics are expected. The spectrum of differential operators is directly connected with the geometric properties of the manifold.
If Yang Mills theory had been recognised as the underlying theory of fundamental interactions while Einstein was working, he might have attempted to enlarge his 4+1 five dimensional unified theory of electrodynamics and gravitation, adding a further 2+3 dimensions to include the weak and strong interaction potentials in the metric as a ten by ten matrix. Perhaps realising that the modern theory of differential geometry, having evolved from Riemannian geometry, shifts the emphasis from the metric to the connection, a follower of Einstein might then have tried to write both gravitation and the Yang Mills theories in the language of differential forms and spinors once quantum theory too had to be accommodated. In that way the developments could have led to a unified gauge field theory.
However the developments took decades following Yang Mills work to actually lead to Abhay Ashtekar writing the SU(2) and SL(2,C) connection or gauge potential in the spinor form as new variables for gravity. This opened the possibility that all interactions are gauge field theories and the equations for fields would follow from the gauge connection. The Holonomy representation and path integral methods have now created a new possibility for a unified approach and possibly a real unification of all the interactions including gravity. The SU(2) subgroup of SL(2,C) and the U(1)*SU(2)*SU(3) as a subgroup of SU(5) provide the gauge groups for the four fundamental interactions.
The ten (or eleven) dimensional space in which the holonomy loop integrals, with these gauge groups, are defined is the theory's basic assumption. The compactification to obtain the reduced four dimensional subspace as the spacetime, with a gravitational connection form, is the program of those working in classical and quantum gravity. Quantum Riemann geometry and loop quantum gravity refer to this four dimensional subspace, and the macroscopic world arises as a spacetime manifold in the classical case. The reduction of the 6 dimensional subspace of the ten dimensional space into Calabi-Yau spaces and manifolds is the subject of high energy physics done with string/brane/M theory, or with Yang Mills gauge field theory in the holonomy loop representation. Non trivial topological properties can arise in the compactifications.
To illustrate the topological transitions that are possible, the path integral over the action is augmented by the sum over configurations labelled by the Euler number index to obtain the partition function. In this an analogy can be made with partition function applications in the statistical mechanics of condensed matter systems that give topological phase transitions. Yang Mills theory on a two dimensional Riemann surface of genus g has a partition function given by a sum of terms of the form (dim)^χ(M) times an action dependent factor. The Euler number here is 2 − 2g with genus g. If the Riemann surface is a 2-sphere, a 2-torus or a collection of tori, then the genus is easy to obtain.
The general case of any subspace of dimension 6 or less is to be obtained in a similar manner; however, the equivalence class of identical Euler number admits a large number of possible configurations. The weight for any configuration will be given by evaluating the path integral for the action of the theoretical model of the interactions. In the simple case in which the path integral can be written as Gaussian integrals, this gives a dimension dependent term and restricts the configurations to those of high symmetry for the field configurations over which the functional integral is defined. The Euler number could be counted or computed in a variety of ways, but taking the alternating sum of Betti numbers is the preferred choice. Gauge fields give a natural setting of closed and exact forms and a cohomology for the up to six dimensional subspaces of the ten dimensional space; this would be seven dimensional if the 11 dimensional theory is used.
The Euler index of a submanifold M_n of dimension n is also given by the Hopf theorem: for closed M_n and a vector field v on M_n with a finite number of isolated singularities p, the sum Σ_p J_v(p) = χ(M_n), where J_v(p) is the index of the vector field v at the singular point p. The singularities of the gauge vector field on M_n, and their indices at these points, hence control the Euler number. This defines an equivalence class, as several different choices of M_n could have the same Euler index, and hence it gives a sum in the partition function over these configurations. Changes in the Euler index, caused by the various quantities it depends on, give the "topologically inequivalent" spaces.
Topological transitions between inequivalent spaces in this sense can occur. The variation of the Euler index could be followed as the compactification to Calabi Yau spaces, the interaction model and its parameters, and the energy scale are varied. If the second derivative of the Euler index is calculated, then first and second order topological transitions can be described. In condensed matter physics, a model potential whose critical points vary as a parameter is varied is used to find the conditions ∂²χ/∂v² > 0 or < 0 for the two kinds of transition.
If this analogy is extended to the Euler number defined through the singularities of the gauge vector fields, then there is a classification of types of topological phase transitions: consider the matrix of second partial derivatives of the Euler index with respect to pairs of the gauge fields, and whether its determinant is greater than or less than zero. This can also be expressed in terms of the expectation values of the field operators of the usual field theory. The Betti number is given by dim(H^n) = b_n for the cohomology H^n, the quotient of the sets of closed and exact forms of order n on the subspace. The action S(φ_i, A^a_i) for the gauge and matter fields depends on the model of the interactions; the index i of the underlying space runs from 1 to 10, and the subspaces carry the index n from 4 to 9.
For the Yang Mills gauge fields, with the gauge group represented by the generators t^a and the connections or gauge potentials by A^a_i, the field strength is F = dA + A ∧ A and the action is −(1/e²) ∫ dⁿx Tr(F ∧ *F). The standard model, or its unification in SU(5), has the usual Yang Mills connections A^a_i t^a, where the t^a are the group representation and the A^a_i are the gauge potentials, with group index a and underlying space index i; the latter can go up to 10 or 11 dimensions, but is restricted to 4 after compactification, as the spacetime index. There is a topological invariant ∫ dⁿx Tr(F ∧ F), which can be interpreted like a Gauss-Bonnet term or second curvature form, and a topological charge ∫ dⁿx F or ∫ dⁿx *F, depending on whether the self dual or anti self dual case is taken, respectively *F = iF or *F = −iF. The gauge group acts on the potentials or connections as A′ = gAg⁻¹ + g∂_µ g⁻¹ and on the field strength as F′ = gFg⁻¹.
A similar construction would be expected for the gravity gauge group, but there are some differences. In the Sen and Ashtekar versions, the real and complex SU(2) cases respectively were taken for a Hamiltonian form of the theory. In A. Magnon's covariant and geometric generalisation, which includes the U(1) gauge group unified with SL(2,C), the role of the SL(2,C) connection becomes unified with the U(1). The Einstein Maxwell theory is obtained, with the possibility of a Yang Mills form. The action in the Einstein Hilbert form ∫ d⁴x √g R arises as a geometric invariant that is extremal when the consistency condition is satisfied, namely the Einstein equation for gravitation with the Yang Mills form of the U(1) electromagnetic stress energy tensor as source, together with the equations satisfied by the field and the Bianchi identities. The SL(2,C) Lorentz connection in spinor variables is A^{IJ}_α, which is antisymmetric in I, J and has a dual obtained using ε^{IJ}_{KL}, with the self dual and anti self dual pair A^(±) = (1/2)(A ∓ i*A). The field or curvature obtained from the connection carries the indices IJ and is self dual, *F = iF. The tensor quantities of general relativity are then recovered: the metric g_αβ = η_{IJ} e^I_α e^J_β in terms of the Lorentz frame and the Minkowski metric, the Christoffel symbol Γ^γ_αβ = A^J_{αI} e^I_β e^γ_J, and the Riemann tensor R^{+αδ}_{βγ} = F^{+IJ}_{βγ} e^α_I e^δ_J. From this the Einstein equation is R^+_{αβ} − (1/2) g_{αβ} R^+ = 0. The problem of having a complex connection and a non compact gauge group has been discussed but not resolved. There is also the question of defining a Lorentz gauge connection, like all the other gauge connections, on the 10 or 11 dimensional theory, reducible to the 4 dimensional one for obtaining the classical spacetime.
This construction in A. Magnon's work is expected to be generalisable to include the other non Abelian gauge field theories too, but the success of this framework has not been established convincingly in 4 dimensional spacetime. However, the possibility of starting with the higher dimensional space for a unified field theory and then carrying out compactifications in the holonomy representation for the gauge fields could yield an alternative method. The Poincare group, including the translations, is also taken as the gauge group, but that program leads to additional terms in the action, which could create an interpretation problem when the combined gauge fields are taken. In this paper it is assumed that the correct gravity gauge group will eventually be fixed among these candidates, and that the higher dimensional unification will work as a program to obtain the classical limit as well as a Yang Mills like quantisation of the theory. The basic quantity to calculate is the partition function.
For the compactified subspaces the partition function becomes a sum over the allowed configurations of the functional integral over the fields. The compactifications could be expressed in the Kaluza Klein way as a line element, to illustrate the concept, with µ, ν = 3 to 10 or 11. As an example, the Yang Mills partition function in 2 dimensions is Z = ∫ DA_µ exp((1/g²) ∫_M dµ Tr(F_µν F^µν)). For a simple model in two dimensions this has been evaluated as Z = Σ_R (dim R)^(2−2G) exp(−g⁻² A C₂(R)/2), where G is the genus, g is the Yang Mills coupling, A is the area of the metric on M, C₂(R) is the second Casimir invariant of the representation R, and the χ(M_i) are the Euler characteristics of the moduli spaces of a genus g Riemann surface. The quantity χ = (1/8π²) ∫ d⁴x √g F ∧ F is a Gauss-Bonnet type integrand, and one can have χ = 2 − 2g − n for genus g and n punctures. The compactification is done on the moduli spaces of the 10 or 11 dimensional space determined by the gauge subgroups. The Calabi-Yau spaces are the submanifolds described by a general formula of the type Σ_j C_j ζ₁^(s₁) … ζ_n^(s_n) = 0, a polynomial in products of the variables ζ, or a multidimensional polynomial. Computational packages such as Mathematica can give visualisations of these spaces, and a classification of their Euler indices can be used for the equivalence classes.
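The quoted two dimensional result can be explored numerically. The toy sketch below (not from the paper) specializes to SU(2), labels irreducible representations by their dimension n, folds g⁻²A into a single parameter, and assumes one common normalization C₂ = (n² − 1)/4 for the quadratic Casimir; conventions for the coupling and the Casimir vary between references.

```python
import math

def su2_partition_function_2d(genus, coupling_area, n_max=500):
    """Toy sum Z = sum_R (dim R)^(2-2G) exp(-coupling_area * C_2(R) / 2) for SU(2),
    with irreps labeled by their dimension n and C_2 = (n^2 - 1)/4 (an assumed normalization)."""
    return sum(n ** (2 - 2 * genus) * math.exp(-coupling_area * (n ** 2 - 1) / 8.0)
               for n in range(1, n_max + 1))

# The exponential Casimir factor makes the sum converge for any genus at nonzero coupling*area.
for genus in (0, 1, 2):
    print(genus, su2_partition_function_2d(genus, coupling_area=1.0))
```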
When one Calabi-Yau space changes into another in compactification, the exponents and the coefficients in the formula defining it change. The identical Euler number of a class of such spaces can be used to describe the equivalence class of the spaces; they have dimension up to n = 10 and a well defined Euler index, and the equivalence class of all such spaces with the same Euler number is taken. The collection of all such equivalence classes, labeled by their Euler numbers, is the topological excitation spectrum of the theory. A dimension changing transformation is expressed as one or more of the exponents going to zero in the product over the Calabi Yau coordinates in the multi-polynomial sum. The sum over all such configurations of each dimension allowed by the gauge field dynamics, and the compatibility of the differential operators and the manifolds they act on, is a complicated classification problem at the frontier of mathematics. However, it is seen that the dynamics of the fields and the properties of the manifolds they act on both play an important role.
As a simple example, consider a submanifold described by a quadratic expression in three variables that is a torus changing into a sphere. A sphere with two holes and a handle attached is topologically a torus. The genus, counted here as the number of holes minus the number of handles, is 2 − 1 = 1 for a 2-torus and zero for a 2-sphere. Hence the Euler number, which is 2 − 2g for a 2-surface, changes from 0 to 2 when the torus changes to a sphere by pinching off the handle, that is, when the torus is pinched off along its smaller radius. This involves the singular points of the vector field on the surface, and can also be seen as a holonomy loop being made to disappear from the product of holonomy loops, that is, becoming e⁰ = 1, the identity operator.
The partition function is expressed with the expectation values of the holonomies taken over the n loops, and a factor of −1 has been inserted in the exponent in place of the 2πi/h coefficient of the action. The topology changing transitions are understood as follows. A transformation takes a manifold M_n to M′_n such that, in the definition of the Calabi Yau spaces, some of the coefficients and powers of the variables ζ are transformed. The new space also has an Euler index, either the same or different, and a change of dimension by compactification can also occur. The equivalence class of the same dimension and Euler number can be found, and the total partition function is then re-evaluated. The simple example is to take the action to consist of quadratic forms in the matter and gauge field variables; then the matrix Gaussian is the exponential of the action integral, and this gives the normalisation factor upon doing the functional integral. There will be a fixed factor dependent on the dimension and volume of the space M_n over which the integral is done, and then the sum over all the M_n can be performed.
A closed, compactified, odd dimensional submanifold makes a change of topological index possible. Consider a conjugacy class of loops for the holonomy on the submanifolds with the same Euler number; then the Kaluza Klein compactification is a topology changing transition. The sum is over all configurations of the allowed subspaces of dimension up to 10 and over the equivalence classes of subspaces described by the same Euler indices for each such dimension. The integral is a path integral or functional integral over all the gauge fields A^a_j and matter fields φ_i in the action and in the measure. The example of the 2 dimensional Yang Mills theory gives a result involving the area and the Casimir invariant. In general, in the usual way of integrating over the allowed submanifolds, the exponential is expected to be turned into a multidimensional Gaussian functional and then integrated. This gives a quantity that depends on the dimension, through the volume of the submanifold, and on the couplings and characteristic Casimirs of the gauge fields, in the summation over the possible configurations in the partition function.
The coordinates in the compactified dimensions are angles, and the loop spaces for compactification have very large radii for the gravity gauge group and very small radii for the other gauge group connections. As the energy increases to the GUT scale, the three interactions give the single SU(5) coupling of 0.033. The gravity coupling is expected to be nearly constant until the GUT scale of energy is reached, and it could rise rapidly thereafter to become like the GUT coupling on the way towards the Planck scale. It is expected that in this regime the compactification loop for the SU(2) gravity gauge field will become very small, like those of the other interactions. When considered on a log scale for the energy as well as for the coupling constants, the variation for the three fundamental interactions, electromagnetic, weak and strong, is slow. For gravitation the gravitational constant could be flat, or constant, almost up to the GUT scale of energy, 10^15 GeV, and then rise until the Planck scale of 10^19 GeV, thus going from a value of −38 on the log scale to −2 for the gravitational coupling. This would provide a basis for considering a unified gauge field theory of all interactions and give the possibility of a consistent picture of physics.
But these conditions are likely to occur only near a singularity or at the beginning of the universe. The unified field theory in this regime will have a gauge group that includes SU(5) and SU(2) or SL(2,C) as subgroups. Otherwise, the weakness of the gravitational interaction compared to the others, by a factor of 10^−38, allows a compactification loop almost 10^38 times bigger. This gives a classical, macroscopic size universe whose dynamics is given by the general theory of relativity. On this spacetime manifold the other gauge fields exist in compactified dimensions and create the physics at all the energy scales below the GUT scale. This is indeed fortunate for all of us living in the Universe.
Many reasons have been given for the difficulties with the renormalisability of gravity; they do not include the possibility of the coupling becoming very large, like the GUT coupling, but only at energy scales beyond the GUT scale. If gravity can classically be expressed as a gauge theory, how does the breakdown of the perturbative scheme occur in quantised gravity of any form? Does coupling constant renormalisation work? In the non perturbative approaches the emphasis is on loop spaces of holonomies, and these arise from string/M theory or from quantum Riemann geometry. In a path integral form of field theory, the additional terms to be put in as perturbative corrections in the action, and the gauge fixing terms, have been known to give divergent quantities for gravity. How serious is this problem below the GUT scale, and can it be avoided at the higher scale by the coupling constant increasing rapidly as a single grand unified theory couples to gravity?
From the understanding that fundamental interactions occur locally in Minkowski spacetime, and the expectation, which Albert Einstein held in the first half of the 20th century, that the geometric properties of a suitable underlying space should determine the physics of all the interactions globally, the program of a unified description of physics has come a long way. The twenty first century has begun with, as yet, no complete theory, but many aspects of the fundamental theory are known. This paper has discussed some of the possibilities of a unified, Yang Mills type gauge field theory of all the interactions, and the significance of topological index transitions for physics.
VII ACKNOWLEDGEMENTS
I thank the Director, Institute of Mathematical Sciences, Chennai, India, and Prof. N. D. HariDass for supporting my visit. I appreciate the Institute's facilities and discussions with its members. My 5 papers were written at the Institute while on vacation from St Xavier's College, Mumbai, India, where I have an active theoretical physics group.
The Geriatric Nutritional Risk Index Predicts Prognosis in Japanese Patients with LATITUDE High-Risk Metastatic Hormone-Sensitive Prostate Cancer: A Multi-Center Study
Simple Summary
The geriatric nutritional risk index (GNRI) is used as a prognostic factor in a variety of cancers. We aimed to evaluate the prognostic significance of the pretreatment GNRI and retrospectively compared androgen deprivation therapy (ADT) plus up-front abiraterone acetate (AA) or bicalutamide in patients with metastatic hormone-sensitive prostate cancer (mHSPC) using large multi-institutional data. We found that ADT plus abiraterone may have advantages over ADT plus bicalutamide. In addition, our analysis revealed the importance of prolonged time to castration-resistant prostate cancer even in the era of upfront androgen receptor axis target therapy or docetaxel. It also highlighted the prognostic significance of the pretreatment GNRI for patients with LATITUDE high-risk mHSPC treated with upfront AA plus ADT.
Abstract
Malnutrition is associated with prognosis in cancer. The geriatric nutritional risk index (GNRI), based on the ratio of actual to ideal body weight and also serum albumin level, is a simple screening tool for assessing nutrition. We investigated the GNRI as a prognostic factor for oncological outcomes in patients with high-risk metastatic hormone-sensitive prostate cancer (mHSPC) using a Japanese multicenter cohort. This study included a total of 175 patients with LATITUDE high-risk mHSPC, of whom 102 had received androgen deprivation therapy (ADT) plus upfront abiraterone acetate, and 73 had received ADT plus bicalutamide (Bica), from 14 institutions associated with the Tokai Urologic Oncology Research Seminar. Patients were classified into GNRI-low (<98) or GNRI-high (≥98) groups. The GNRI was based on the body mass index and serum albumin level. Kaplan–Meier analysis revealed that the median overall survival (OS) of a GNRI-low group (median 33.7 months; 95% confidence interval [CI]: 26.2–not reached [NR]) was significantly worse than that of a GNRI-high group (median: NR; 95% CI: NR–NR; p < 0.001). Multivariate analysis identified Bica and low GNRI (<98) as independent prognostic factors for reduced times to both castration-resistant prostate cancer and OS, and, therefore, a poor prognosis. Our findings indicate the GNRI may be a practical prognostic indicator in the evaluation of survival outcomes in patients with LATITUDE high-risk mHSPC.
Introduction
Androgen deprivation therapy (ADT) is given to patients with metastatic prostate cancer (PCA). However, a good initial response to ADT is often followed by the development of progressive castration-resistant prostate cancer (CRPC), leading to death in most patients with PCA [1]. In metastatic hormone-sensitive prostate cancer (mHSPC), the survival benefits of combined ADT and second-generation androgen receptor axis-targeted agents (ARATA), which include abiraterone acetate (AA) [2,3], or docetaxel [4], when compared to ADT alone were highlighted in several recent Phase III trials. For high-risk patients with PCA, including LATITUDE high-risk individuals, recent evidence has led to a treatment framework of intense therapy given at an earlier treatment phase. Consequently, the prognosis of patients with mHSPC has gradually improved [5]. The LATITUDE study revealed that in patients with a large tumor volume, ADT together with AA led to improved radiologic progression-free survival (PFS) and overall survival (OS) compared to ADT alone [6]. Thus, various clinical guidelines strongly recommend ADT treatment combined with upfront AA [7].
For patients with mHSPC, combined androgen blockade (CAB) is one of several initial treatments given, especially in East Asian countries like Japan [8]. Although conventional CAB using bicalutamide (Bica) remains the prevailing choice for Japanese physicians, a direct prospective comparison of upfront AA plus ADT and CAB, particularly focused on data from Asian patients, might reveal new strategies for disease treatment. In Japanese patients, we and others have directly compared upfront AA plus ADT and CAB in those with high-risk mHSPC [9][10][11][12][13][14]. In terms of PFS, upfront AA plus ADT was deemed superior to CAB in all prior studies; however, both our group [9] and another group [10] found that the two treatments showed no significant difference in OS. In addition, whether all LATITUDE patients with high-risk mHSPC should be treated with upfront AA plus ADT remains unclear. Thus, the best treatment choice for a specific patient remains unresolved.
Malignancies in patients are often accompanied by malnutrition such that medical nutritional therapy has become a necessary part of multidisciplinary anticancer treatment programs [15]. Various nutritional assessment tools can identify survival-related prognostic indicators in multiple malignancies, including PCA [16]. The geriatric nutritional risk index (GNRI) is an excellent tool that is used to assess nutrition and is calculated using the ratio of actual to ideal body weight in addition to the albumin (Alb) level; it is one such potential prognostic indicator [17]. We recently described the utility of GNRI as a prognostic indicator that predicted survival outcomes in patients with bladder cancer treated with immunochemotherapy [18].
However, to date, no reports exist on the relationship between GNRI and outcomes for patients with LATITUDE high-risk mHSPC treated with upfront AA plus ADT or CAB. Therefore, we evaluated whether GNRI could be used as a prognostic indicator in such patients.
Patients
For this investigation, we collected further data from a previously studied cohort [8] of a total of 175 patients with mHSPC who were designated as "high risk" in the LATITUDE trial. Between January 2018 and September 2020, patient data were accrued at our institution as well as affiliated hospitals including Nagoya City University Graduate School of Medical Sciences, Hamamatsu University School of Medicine, Fujita Health University School of Medicine, and Gifu University associated with the Tokai Urologic Oncology Research Seminar group. Inclusion criteria for patients were as follows: those who underwent treatment with ADT together with AA taken orally (102 patients; 1000 mg once a day) + prednisolone (upfront AA plus ADT treatment), or ADT and Bica taken orally (73 patients; 80 mg once a day), also known as CAB treatment. Criteria from Prostate Cancer Clinical Trials Working Group 3 were used to define biochemical, radiographic or clinical progressive disease.
Oncological Assessment
Patients were treated until radiographic or clinical disease progression was noted as well as an increased prostate-specific antigen (PSA) level. Overall survival was determined from the time period between the start of first therapy and death due to all-cause mortality. A diagnosis of CRPC was based on radiographic or PSA progression, and time to CRPC (TTCR) was measured from the start of first treatment until the time of a CRPC diagnosis. The time from the start of initial treatment until a second/subsequent tumor progression on next-line treatment was defined as the time to second progression (PFS2).
Data Collection
The medical records of patients of the abovementioned institutions were used to extract patient information regarding age, height, weight, serum blood variables, ECOG-PS, initial PSA levels, the Gleason score from a prostate biopsy, and PSA kinetics. In addition, blood variables at treatment initiation, including alkaline phosphatase, C-reactive protein (CRP), Alb, and lymphocyte and neutrophil counts, were also collected. As previously reported [19], patients were grouped into three subgroups based on TTCR as outlined: 0-12, 12.1-18, and ≥18.1 months.
Nutritional Assessment by GNRI
Patients were classified into GNRI-low (<98) or GNRI-high (≥98) groups. The formula used for GNRI values was: 1.489 × serum Alb level (g/L) + 41.7 × (actual body weight [kg]/ideal body weight [kg]). If actual body weight was greater than ideal body weight, the ratio of these two factors was set to one. The formula for ideal body weight is: ideal body weight (kg) = 22 × height (m)² [9,20].
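For illustration only, the formula can be written as a small function; the patient values below are hypothetical and not drawn from the study data.

```python
def gnri(albumin_g_per_l, weight_kg, height_m):
    """Geriatric nutritional risk index, following the formula above:
    GNRI = 1.489 * albumin (g/L) + 41.7 * (actual / ideal body weight),
    with ideal body weight = 22 * height^2 and the weight ratio capped at 1."""
    ideal_weight = 22.0 * height_m ** 2
    weight_ratio = min(weight_kg / ideal_weight, 1.0)
    return 1.489 * albumin_g_per_l + 41.7 * weight_ratio

# Hypothetical patient: albumin 38 g/L, 58 kg, 1.68 m tall.
value = gnri(38.0, 58.0, 1.68)
print(round(value, 1), "GNRI-high" if value >= 98 else "GNRI-low")
```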
Ethics Approval
This study was undertaken with the approval of the ethics committees of all universities belonging to this group (approval nos. 60-21-0018, 2021-042, 21-051, HM20-465), and opt-out information for patients was provided on institutional websites. The investigation was conducted according to the Declaration of Helsinki (2013).
Statistics
To evaluate differences in categorical parameters, we used Fisher's exact test or, for ordinal variables, a Mann-Whitney U test, as appropriate. After receiver operating characteristic (ROC) curve analysis of the total cohort with regard to CRPC development, new cutoff values for the parameters were determined using the Youden index. Cumulative rates of survival were estimated from Kaplan-Meier curves, and log-rank tests were used to determine significant differences between curves. Univariate and multivariate analyses were based on Cox proportional hazards regression. Variables that were clinically important factors were used to predict TTCR and OS. Data were evaluated with the use of EZR software (Saitama Medical Center, Jichi Medical University, Yakushiji, Japan).
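As an illustrative analogue of this workflow (the study itself used EZR, not Python), the sketch below reproduces the Kaplan-Meier, log-rank and Cox steps on synthetic data with the lifelines package; all variable names and simulated effect sizes are assumptions.

```python
# Synthetic data only; this is not the study dataset.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 175
df = pd.DataFrame({
    "gnri_high": rng.integers(0, 2, n),
    "upfront_aa": rng.integers(0, 2, n),
})
# Synthetic survival times: longer on average when GNRI is high or upfront AA was given.
df["time"] = rng.exponential(12 + 18 * df["gnri_high"] + 10 * df["upfront_aa"])
df["event"] = rng.integers(0, 2, n)

# Kaplan-Meier curve for one group and a log-rank test between GNRI groups.
high, low = df[df.gnri_high == 1], df[df.gnri_high == 0]
km = KaplanMeierFitter().fit(high["time"], high["event"], label="GNRI-high")
print(km.median_survival_time_)
print(logrank_test(high["time"], low["time"], high["event"], low["event"]).p_value)

# Multivariable Cox proportional hazards model on both covariates.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()
```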
Characteristics and Outcomes of Patients by Further Follow-Up Study
A total of 175 patients, traceable from a previous analysis [8], were enrolled in this retrospective study. Of these, 73 and 102 patients received either CAB treatment or upfront AA plus ADT treatment, respectively. The two groups showed significantly different median follow-up periods of 33.7 months for the CAB group vs. 26.2 months for the upfront AA plus ADT group (p < 0.001). As shown in Table S1, peripheral blood markers and clinical parameters were not statistically different between the two patient groups. Furthermore, the median time to PSA nadir and rate of PSA decline >50% were similar. However, CRPC was found to occur more often among patients in the CAB group when compared to those in the upfront AA plus ADT group (47/73; 64.4% for CAB vs. 31/102; 30.4% for upfront AA plus ADT, p < 0.001). Upfront AA plus ADT treatment led to a significantly prolonged median TTCR compared to CAB treatment (not reached [NR]; 95% confidence interval [CI]: NR-NR vs. 12.9 months; 95% CI: 9.2-22.0, p < 0.0001; Figure S1a). In addition, PFS2 in the upfront AA plus ADT group was significantly superior, as shown in Figure S1b (median NR, 95% CI: NR-NR), compared to the PFS2 of the CAB group (median: NR, 95% CI: 25.7-NR; Figure S1b; p < 0.01). However, OS in the CAB group (median NR, 95% CI: 32.9-NR) did not significantly differ from that in the upfront AA plus ADT group (median NR, 95% CI: NR-NR; Figure S1c).
Thus, upfront AA plus ADT treatment led to decreased CRPC, and prolonged median TTCR and PFS2 compared to CAB treatment, although OS did not differ.
Setting New Cutoff Values and Prognostic Analysis Focusing on Serum Biomarkers
For the setting of new cutoff values for prognostic factors, clinical factors, including age, initial PSA, Alb, GNRI, CRP, and the neutrophil-to-lymphocyte ratio (NLR), were analyzed using ROC curves. New cutoff values were determined to be: 76 years of age, an initial PSA level of 82.4 ng/mL, an Alb level of 3.6 g/L, a GNRI of 98, a CRP level of 1.22 mg/dL, and an NLR of 2.003, as shown in Figure 1. Of the 175 patients, 66 were grouped within the GNRI-low group and 109 within the GNRI-high group. Baseline clinical oncological parameters were similar in the two groups (Table 1). However, the median body mass index (BMI), proportion of patients with better ECOG-PS, as well as serum Alb levels were significantly greater in the GNRI-high compared with GNRI-low group. Median age and median initial PSA levels were found to be significantly greater in the GNRI-low compared to GNRI-high group. Consistent with these data, CRPC development was found to be significantly lower in the GNRI-high compared to GNRI-low group (42/109 [38.5%] vs. 36/66 [54.5%] in GNRI-high vs. -low groups, respectively). Additionally, analysis of the whole cohort revealed that segregating patients based on age, initial PSA, and NLR status did not reveal significant differences in OS (Figure 2a,b,d). However, the median OS of the GNRI-low group (median 33.7 months; 95% CI: 26.2-NR) was significantly worse than that of the GNRI-high group (median NR; 95% CI: NR-NR; Figure 2f; p < 0.001). Concerning CRP, a significant difference in OS when comparing CRP-high and -low groups was also evident (Figure 2e). Furthermore, also in the analysis of cohorts receiving upfront AA plus ADT treatment, patients with high GNRI showed significantly prolonged OS compared to those with a low GNRI (p < 0.001; Figure 2h), similar to CRP (p < 0.05; Figure 2g). In summary, of the total cohort, a greater proportion of patients in the GNRI-high group showed a significantly greater BMI and serum Alb level, better ECOG-PS, lower median age, and initial PSA level, consistent with lower CRPC development and longer survival rates for this group. Patients undergoing upfront AA plus ADT treatment also showed significantly longer survival rates if they had a high GNRI.
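As a hedged sketch of how such cutoffs are typically read off an ROC curve via the Youden index (synthetic values only; the thresholds reported above came from the actual cohort):

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
developed_crpc = rng.integers(0, 2, 175)
# Synthetic GNRI values, drawn lower on average for patients who developed CRPC.
gnri_values = rng.normal(100 - 6 * developed_crpc, 8)

# GNRI is protective (higher = better), so score by its negative for ROC purposes.
fpr, tpr, thresholds = roc_curve(developed_crpc, -gnri_values)
youden = tpr - fpr
best = np.argmax(youden)
print("suggested GNRI cutoff:", -thresholds[best])
```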
Identification of Prognostic Factors for TTCR and OS
Concerning Alb, OS was statistically significant when comparing Alb-high and -low groups (p < 0.001; Figure 2c). The GNRI was calculated using a serum Alb-based formula when performing univariate and multivariate analyses. These two items strongly correlated and, therefore, both were not simultaneously included. When compared with GNRI, most patients that were divided by an Alb level cutoff were categorized as being in the Alb-high group (134 and 41 in high and low groups, respectively, for Alb; vs. 109 and 66 in high and low groups, respectively, for GNRI). Considering the substantial contribution of the Alb level to predicting OS in this study, and recent evidence on the efficacy of complex biomarkers, including serum Alb [21,22], the GNRI was therefore selected for this study. With regard to prolonged TTCR, the following were revealed in univariate analysis to be independent prognostic factors when initial treatment was started: existence of symptoms (HR: 1.87, 95% CI: 1.16-3.03), upfront AA treatment (HR: 0.38, 95% CI: 0.24-0.60), low CRP level (HR: 0.45, 95% CI: 0.28-0.74), and high GNRI (HR: 0.44, 95% CI: 0.28-0.69). Additionally, for prolonged TTCR, multivariate analysis identified significant prognostic factors: upfront AA treatment (HR: 0.39, 95% CI: 0.23-0.66) and high GNRI (HR: 0.38, 95% CI: 0.21-0.67) (Table 2). Furthermore, high GNRI was also identified as the sole prognostic indicator of OS both in univariate and multivariate tests (HR: 0.35, 95% CI: 0.19-0.64, in univariate, and HR: 0.45, 95% CI: 0.21-0.96, in multivariate, respectively; Table 3). Thus, the univariate analysis identified symptoms, upfront AA treatment, low CRP, and high GNRI to be significantly associated with survival outcomes. Multivariate analysis identified the variables of upfront AA treatment and high GNRI as being significantly associated with TTCR; high GNRI was also significantly associated with OS.
Analysis of OS Based on TTCR
As reported above, elongation of the follow-up period did not reveal a statistically significant difference in OS between patients receiving CAB and those receiving upfront AA plus ADT treatment. Therefore, an evaluation of OS based on TTCR classification was made. An analysis of the total cohort revealed that the median OS from diagnosis was 24.7 (95% CI: 16.6-31.9), NR (95% CI: 34.2-NR), and NR (95% CI: NR-NR) months in those patients with a TTCR of 0-12, 12.1-18, and ≥18.1 months, respectively (Figure 3a). Differences in OS between the three groups were statistically significant. In addition, it was observed that OS was proportional to TTCR in the upfront AA plus ADT treatment group (Figure 3b). Figure 3c shows the results of our analysis of "upfront AA plus ADT or not" and "GNRI high or low" as prognostic factors of PFS. The PFS showed significant differences between the three groups classified according to the positive number of these two independent factors (median 7.5 months, 95% CI: 4.1-NR in GNRI-low patients treated with CAB; median 14.9 months, 95% CI: 12.2-NR in GNRI-low patients treated with upfront AA plus ADT, or in GNRI-high patients treated with CAB; median NR months, 95% CI: NR-NR in GNRI-high patients treated with upfront AA plus ADT). Thus, a shorter TTCR in the total cohort and in the upfront AA plus ADT treatment group led to reduced OS. When patients with various TTCR rates were further segregated, PFS was found to be prolonged for those patients with a high GNRI and/or treated with upfront AA plus ADT.
Discussion
Malignancies in patients are often accompanied by malnutrition [15]. The GNRI is used to assess nutrition and is a potential prognostic indicator of the risk of morbidity and mortality [17]. However, to date, no reports exist on the relationship between GNRI and outcomes for patients with LATITUDE high-risk mHSPC treated with upfront AA plus ADT or CAB.
In this study on Japanese patients with LATITUDE high-risk mHSPC, although OS was unchanged over a longer follow-up period, we found PFS2 after upfront AA plus ADT treatment was significantly superior to that of CAB treatment. In the current era of upfront treatment given to patients showing high-risk mHSPC, the clinical importance of a prolonged TTCR has not been conclusively established. In this analysis, differences in OS between three groups of patients, distinguished according to TTCR values, were statistically significant. In addition, OS was directly proportional to TTCR in the upfront AA plus ADT treatment group. To our knowledge, only one study [23] has described worse OS in patients with a TTCR < 12 months, with OS gradually increasing as the TTCR period increased. However, this study was performed on a heterogeneous population and a variety of agents was used. Previously, before the era of upfront treatment, we grouped patients with mHSPC into four groups based on the length of TTCR in order to compare clinicopathological characteristics. We found that shorter TTCR in patients was associated with an unfavorable OS [19]. Here, we show a longer TTCR favored improved OS both in the total and the upfront AA with ADT cohorts. We, therefore, conclude that even in the era of intensified upfront therapy, TTCR should be extended for the maximum time possible so as to reach the best prognostic outcomes in patients with LATITUDE high-risk mHSPC.
Even in the era of intensified upfront treatment for mHSPC, it is unclear whether all patients with LATITUDE high-risk mHSPC should be treated upfront with ARATA. Furthermore, it is important to identify patients most likely to benefit from upfront AA plus ADT or CAB using a suitable indicator superior to other variables, such as Gleason score, clinical stage, or ECOG-PS [9]. Recently, several molecular mechanisms have been proposed to explain the role of inflammation in PCA. These include cellular turnover, induction of a genomic and cellular environment that promotes replication, and activation of tissue repair [24]. The NLR may be a reliable serum biomarker of inflammation, since an elevated NLR at initial treatment is predictive of poor OS rates in patients with mHSPC [25]. Additionally, before the era of intensified therapy for mHSPC, PSA and CRP independently predicted poorer cancer-specific survival in a patient cohort with mHSPC receiving ADT alone [26]. Although several studies recently linked CRP levels to CRPC [27], recent data linking CRP to prognosis in castration-sensitive PCA are lacking.
In our current study, we evaluated three immune-nutritional parameters as potential prognostic factors. Of these, high GNRI was superior to low GNRI as a prognostic indicator of both PFS and OS; this was the case for both the total cohort and the upfront AA plus ADT treatment cohort. Furthermore, after univariate and multivariate analyses, high GNRI was identified as the sole prognostic indicator that predicted both PFS and OS. Additionally, in the analysis of PFS focusing on two prognostic factors (upfront AA plus ADT or not, and GNRI high or low), significant differences were noted in PFS among the three groups classified according to the positive number of these two independent factors.
The GNRI is thought to reflect mortality in elderly patients as well as those on hemodialysis and with cardiovascular disease. This notion has also been applied to patients with various cancers, including lung, gastrointestinal, and urothelial cancers [18,28-30]. Markers of nutritional status, including a decreased serum Alb level or BMI, have been associated with a poor oncological outcome in metastatic prostate cancer [31]. However, only one report exists of an association between high levels of GNRI and a better prognosis in mHSPC [32]. As a novel finding, our study suggested the superiority of the GNRI compared to the NLR or CRP as a biomarker in patients with LATITUDE high-risk disease, and also suggested that the GNRI could be used to aid patient selection in the upfront treatment of such patients. In addition, our data also suggest that all patients with LATITUDE high-risk disease and high GNRI levels should receive upfront AA with ADT instead of CAB as initial treatment. Furthermore, considering the poor prognosis, upfront AA with ADT should be strongly recommended for patients with low GNRI. Further prospective studies on upfront intensified treatment are necessary to predict subsequent responses and survival outcomes.
Our study had several limitations. Although the data originated from among the largest cohorts of patients with LATITUDE high-risk mHSPC to receive upfront AA plus ADT or CAB, it is a retrospective analysis with the usual shortcomings of selection bias and small sample sizes. Second, both PFS and PFS2 were significantly superior in the upfront AA plus ADT cohort compared to the CAB cohort. However, though the follow-up period was extended by about 14 months from a previous study, a difference in OS was not observed between these two groups, which may be due to a clear difference in the median follow-up period. Third, the GNRI is a formula that incorporates the Alb level and is, therefore, not convenient compared with Alb alone. In this analysis, the patient population divided by Alb alone was imbalanced as described above; GNRI was selected as a biomarker. However, evaluating the superiority of GNRI compared with Alb was not an aim of this study. Finally, we could not perform second-line treatment-specific TTCR analyses because of limited sample sizes and baseline characteristics that differed between second-line treatment groups. A long-term follow-up study is required in the future to support our study conclusions.
Conclusions
In summary, we demonstrated that the GNRI may be a practical prognostic indicator of survival outcomes in patients with LATITUDE high-risk mHSPC. In addition, even in the era of upfront intensified ARATA or docetaxel, prolonging TTCR is required to achieve the best prognostic outcomes in patients with LATITUDE high-risk mHSPC. These data provide information that can aid in the selection of the first therapies for mHSPC patients, including those in Japan.
Figure 3. Kaplan-Meier curves for OS in LATITUDE high-risk mHSPC patients. OS from diagnosis in LATITUDE high-risk mHSPC patients after initial treatment according to the TTCR in the total cohort (a) and the upfront AA plus ADT group (b). "Upfront AA plus ADT or CAB" and "GNRI high or low" as prognostic factors for PFS (c). AA, abiraterone acetate; ADT, androgen deprivation therapy; CAB, combined androgen blockade; CRPC, castration-resistant prostate cancer; mHSPC, metastatic hormone-sensitive prostate cancer; OS, overall survival; TTCR, time to CRPC. p < 0.05, p < 0.01, p < 0.001, p < 0.0001: statistically significant; n.s.: not significant.
Table 3. Univariate and multivariate analyses of baseline parameters and overall survival in all 175 patients treated with upfront AA plus ADT or CAB. * p < 0.05, *** p < 0.001, statistically significant.
Performance Analysis in Rugby Union: a Critical Systematic Review
Background Performance analysis in rugby union has become an integral part of the coaching process. Although performance analysis research in rugby and data collection have progressed, the utility of the insights is not well understood. The primary objective of this review is to consider the current state of performance analysis research in professional rugby union and to assess the utility of common methods of analysing performance and the applicability of these methods within professional coaching practice. Methods The SPORTDiscus electronic database was searched for relevant articles published between 1 January 1997 and 7 March 2019. Professional, male 15-a-side rugby union studies that included relevant data on tactical and performance evaluation, and statistical compilation of time-motion analysis were included. Studies were categorised based on the main focus and each study was reviewed by assessing a number of factors such as context, opposition analysis, competition and sample size. Results Forty-one studies met the inclusion criteria. The majority of these studies measured performance through the collection and analysis of performance indicators. The majority did not provide context relating to multiple confounding factors such as field location, match location and opposition information. Twenty-nine performance indicators differentiated between successful and unsuccessful match outcomes; however, only eight were commonly shared across some studies. Five studies considered rugby union as a dynamical system; however, these studies were limited to analysing lower or national-level competitions. Conclusions The review highlighted the issues associated with assessing isolated measures of performance, lacking contextual information such as the opposition, match location, period within match and field location. A small number of studies have assessed rugby union performance through a dynamical systems lens, identifying successful characteristics in collective behaviour patterns in attacking phases. Performance analysis in international rugby union can be advanced by adopting these approaches in addition to methods currently adopted in other team sports.
Rugby performance analysis continues to rely heavily on isolated measures of performance, such as performance indicators, without providing context to confounding factors such as opposition behaviour, pitch location, period within match and venue location.
Some studies have investigated team behaviour in rugby union; however, to facilitate a better understanding of group behaviour in international rugby, a dynamical systems analysis approach at an elite level is recommended. Within and between team interactions have been measured in other sports including football and basketball. Rugby performance analysis may benefit from adopting strategies employed by these sports in order to gain a better understanding of team properties and the patterns that characterise their coordination.
Background
Performance analysis in team sports allows coaches to objectively assess the performance of the team while identifying their opposition's strengths and weaknesses, and opportunities to exploit these in competition. To do this effectively requires a comprehensive analysis of individual and collective actions, to provide objective summaries of game activities during competition [1]. There has been an exponential growth in performance analysis research over the last two decades, largely a consequence of the advancement and availability of computer and video technology. Broadly, performance analysis involves an objective assessment of documented behaviours recorded in a discrete sequential manner containing information on 'what', 'who', 'when' and 'where' the behaviours occurred. Behaviours are typically recorded through annotation software; however, advancements in video capture technologies are allowing player position information to be analysed with associated behaviours to provide a more meaningful understanding of game behaviours. This development has contributed considerably to our understanding of the performance requirements in elite-level competition. However, fundamental issues remain in the questions underpinning the research in the field; the cause-and-effect-based observations inherently assume linear relationships to predict and control match outcome. For example, the direction and scope of the research in rugby union has primarily explored a single or a combination of action variables (performance indicators) deemed relevant to successful outcomes such as possession and tackle success [2]. Furthermore, the analysis of these performance indicators has primarily focused on discrete, descriptive and comparative statistics. Other common research topics have simply studied technical and physical requirements during specific periods or game events, such as peak running intensities [1,3,4]. This type of research thus assumes human behaviour is causal, measurable and therefore predictable.
A further limitation of much of the research on performance analysis in rugby is the lack of evidence surrounding the implementation of this work into everyday practice by coaches and practitioners. The apparently limited influence is potentially due to an absence of consensus between practitioners and scientists on the information that drives action and implementation. Performance analysis research is commonly designed by researchers, who direct the methods and structure the studies, potentially neglecting the applicability and utility of the research findings. Developing the field of performance analysis in rugby requires collaboration between scientists and practitioners to improve the ability of science to influence practice. Bridging the theory-to-practice gap may require developing an applied research model that describes rugby performance in an integrated manner.
Given the issues besetting current methods, it seems pertinent to understand rugby performance as a complex dynamical system. In this sense, the patterns of game behaviour emerge from the self-organising interactions between players operating within task, environmental and physical constraints [5]. A corollary is that rugby performance is highly complex and requires players to perform coordinated tactical behaviours and high-intensity movements with adept technical proficiency, making it difficult to reduce game analysis to isolated measures of performance. Therefore, there is a clear need for performance analysis to reflect and capture this complexity and create a global understanding of performance.
This paper systematically reviews the literature to describe the state of rugby union performance analysis, highlighting the various methods of analysis and exploring variables used to assess performance. We then conclude with some recommendations for future research drawing upon research from Association Football (football [soccer]) as a means of envisaging where the field of rugby could evolve to in the future.
Methods
A systematic review of the relevant literature was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines. The SPORTDiscus electronic database was searched on 8 March 2019 for relevant articles published between 1 January 1997 and 7 March 2019 using the following search terms: Rugby AND "collective behav*" OR "tactic* analysis" OR "tactic* performance" OR "tactical indicator*" OR "performance indicator*" OR "performance analysis" OR "notational analysis" OR "game analysis" OR "observational analysis" OR "Pattern* of play" OR "dynamic* system" OR "tactic* behave*" OR "neural network" OR "system* think*" OR "performance model*" OR "player selection" OR "player evaluation" OR "game statistics".
The inclusion criteria were as follows: studies included relevant data on tactical performance or time-motion analysis, such as assessments of team movement patterns in relation to time; participants were professional adult male rugby players; the sport analysed was 15-a-side rugby union; and articles were published in English. Articles were limited to journal articles where the full text was available. Studies were excluded if they included females; involved males under the age of 18; analysed rugby league or 7-a-side rugby union; were a conference abstract or doctoral thesis; or did not include relevant data for the study. Major research topics of game analysis that emerged from the detailed analysis were identified and the studies grouped accordingly: performance indicators, attack and defence. Research topics were assigned according to whether the authors deemed that the majority of the observations involved (a) variables relating to the attacking team; (b) variables relating to the defensive team; or (c) predominantly the assessment of performance indicators. Successful and unsuccessful match outcomes were defined as matches won and lost, respectively.
Quality of Studies
Study quality was not assessed using a recognised classification method because the research of interest comprised observational, tactical studies. As no experimental studies were included, the Delphi, PEDro and Cochrane scales were not used for evaluation. All 41 articles outlined in Table 1 were assessed for suitability and evaluated by the panel of authors prior to inclusion. All studies had to meet every item on the criteria list to be included in the analysis.
Results
The initial search revealed 110 papers. Titles were screened by two members of the research team for inclusion/exclusion criteria. Ninety articles were then removed. The abstracts of the 20 remaining articles were then read by the same two members of the research team where a further six articles were removed, resulting in 14 articles remaining for review. After reading the full texts, all papers were deemed suitable for review. An iterative reference check was then performed of all eligible papers and any commonly cited papers were also included and a further 27 papers were identified. In total, 41 papers were included for discussion (Fig. 1).
Year of Publication and Competition
The 41 articles reviewed are presented in Table 1. In short, the articles were grouped into 5-year intervals by year of publication, which resulted in an inverse parabolic distribution of publication dates in which 49% of the articles were published between 2008 and 2013 (Fig. 2). When articles were grouped by year of data collection and analysis, ~50% of the articles analysed data from games played between 2000 and 2008 (Fig. 2). Following this period, there has been a linear decrease in the collection of data for publication in rugby union performance analysis research.
The year with the most publications was 2013 (n = 5) (Table 1), followed by 2010 (n = 4). The year of data collection and analysis was additionally considered important when interpreting results, as game styles may have evolved from the time data were collected to the date of publication (Fig. 2).
Analysis of Opposition and Context
The majority of the articles did not include the opposition in their analysis. The ~20% that considered the opposition included events such as ball carries (Table 1), tackles, rucks, scrums and performance indicators. Seventy-one percent of the articles that investigated performance indicators contextualised the data (Table 3). Variables were contextualised to field location, match outcome, period during match, numbers of players involved, match phase, team ranking and competition level. Of the 22 articles that contextualised their measures of performance, only five accounted for multiple contextual variables.
Sample Size and Events
The sample sizes ranged from seven matches to 313 matches, with a mean of 67 match observations (Table 1). Analysis of individual events ranged from 35, when try-scoring incidences were explored, to 8563 ruck contests. The events analysed included ball carries, line breaks, tackles, ruck contests, try-scoring observations and scrums. Ruck contests were the most commonly investigated individual events, totalling 15,677 individual events analysed across three studies.
Performance Indicators
A total of 392 performance indicators were identified across the reviewed articles (Table 3). Performance indicators were classified as either attack (n = 204); defence (n = 85); set piece (n = 53); or other (n = 50). Variables related to attack were the most frequently assessed measures of performance, followed by those related to defence.
Understanding the genesis of performance indicators might serve as a starting point for developing valid sets of quantitative tactical indicators. Therefore, the method used to select variables related to performance was also considered important. The methods of selection used by the investigators included the following: collaboration between investigators and coaches and/or experts; selection solely by the research group; variables sourced from a third-party company; and cases where the method of selection was not stated. Providing a detailed description of each performance indicator is essential to maintain transparency when measuring performance-related variables. These operational definitions allow a shared understanding of the variables used, ensuring their meaning is unambiguous [43]. Only seven articles provided full operational definitions, while the remaining 15 provided no definitions for the variables investigated (Table 3). Additionally, the majority of the articles that provided full operational definitions developed these in collaboration with coaches and/or experts.
Indicators linked to successful performance are displayed in Table 2. Across the articles investigating performance indicators, 29 variables differentiated between successful and unsuccessful match outcomes. Possession kicked was positively related to performance in three separate studies [22,25,37] at the international and Super Rugby level of competition. The second most frequently observed variables were lineout success on opposition ball; tries scored; points scored (including when possession starts in the opposition 22 m area); conversions; tackles completed; turnovers won; and kicks out of hand (Table 2).
Discussion
The purpose of this literature review was to describe the state of rugby union performance analysis, highlight the various methods of analysis and explore the variables used to assess performance. We have revealed that in the last two decades of rugby research, the approach to describing performance has remained largely unchanged. Investigations into successful performance typically continue to rely on univariate measures of performance, reducing performance to singular values (Table 3). In fact, 22 of the 41 studies retrieved focused on descriptive and comparative statistics and often lacked context. Confounding factors such as match venue, officials, weather and the nature of the opposing team have all been suggested to influence team performance, yet are rarely considered in the majority of the research [17]. This level of information details the origin of the data and arguably allows for more meaningful interpretations; without it, critical information is likely to be lost [44]. For instance, a major confounding factor is the opposition team, yet only eight of the articles retrieved considered the opposing team in the analysis [10,11,14,15,28,32,33,40]. More than half of the articles investigated successful and unsuccessful measures of performance by quantifying performance indicators over entire competitions. Although this approach is useful as a means of increasing the amount of data, this level of analysis ignores the variation in playing style over each match and typically lacks consideration of the influence of the opposition. Ignoring data from the opposition will likely distort any relationships present [41], particularly when one considers that various studies included data over multiple competitions [3,4,6,25,38] as well as over several seasons [9,21,22,25,30,34], potentially misrepresenting performance outcomes. One paper examined the efficacy of two methods of data analysis to predict match outcomes [41]; isolated performance indicators, considering only the data from a single team, were compared with a descriptive conversion method that calculates the differences between the two teams' data for each individual match. That study showed that match outcomes were better predicted by relative data sets. Relative predictors of success included an effective kicking game, ball-carrying abilities and not conceding penalties when the opposition were in possession. Although the majority of the studies included contextualised results, it should be noted that some research included contextual information from multiple confounding factors such as pitch location, match period and team ranking. For example, a study of effective strategies at the ruck in the 2010 Six Nations Championship accounted for team ranking, pitch location and the number of players involved [33]. The results indicated that the most successful strategy depended on field location [15]. Defending teams were more likely to turn over possession using an early counter-ruck strategy in the wide attacking channels. Conversely, a jackal (a player on the defending team competing for the ball using his hands after a tackle was made but prior to the formation of a ruck) was the most effective strategy in the central field areas. Another study identified that quick rucks within the first 20 min and within the 60-70 min time interval had the largest positive effect on match outcome [30], whereas slow rucks had the largest negative effect on winning a match, regardless of the time interval.
These results highlight the importance of contextualising performance indicators, as game tactics may need to be adapted depending on the field location, time interval and ruck strategy employed. Applying the outcomes of research using simple, descriptive and isolated variables, without consideration of confounding variables, is problematic in tactical preparation. For example, set piece tries discriminated between successful and unsuccessful teams [28]; however, without contextual information such as score differential, weather conditions, pitch location or team ranking, little inference can be made regarding how or why behaviours occurred. One study [14] investigating defending strategies in tackle contact events, which considered the playing situation, defensive characteristics and phase outcomes, offered some insights into effective defensive processes such as defensive speed, field location and period within a match. This study demonstrated that the period of the match and the distance of the contact event in relation to the previous phase are key variables that predict the likelihood of a successful phase outcome. In a practical sense, teams execute different lineout plays depending on the field location (i.e. 5-, 6- or 7-man lineouts; they may play off the top or maul). They may also be more reluctant to throw the ball to the back of the lineout in poor weather conditions. On this basis, set piece selection is commonly dependent on context and, therefore, it is important to consider these factors when assessing performance indicators. Furthermore, analysing the performance of a team assumes that the behaviours in one game will provide insights into future performance in subsequent matches. The fundamental issue is that game behaviours may only represent the performance of a team at the time the data were captured [45].
Performance Definitions and Indicators
Over 300 performance indicators were identified across 22 studies (Table 3). Interestingly, only 29 were identified as related to successful performance. International tests demonstrated 14 variables (Table 2) discriminating winning and losing teams, including higher points scored, kicks, turnovers and penalties conceded between the opposition's 50- and 22-m lines. In regional-level competitions, such as Super Rugby in the Southern Hemisphere, 25 variables were identified as successful indicators of performance, including a greater number of metres gained, kicks out of hand, line breaks and percentage tackles made compared to losing teams. To illustrate differences in styles of play at different levels of competition, performance indicators that discriminated between winning and losing teams in international test matches and Super Rugby games were investigated [25]. Winners of Super Rugby games kicked more possessions, made more tackles, completed more passes and made fewer errors. No performance indicators were able to discriminate between winners and losers in international test matches played during 2003 and 2006 when only close matches were investigated (< 15 points difference) [22]. In contrast, another investigation of international games in the same time period showed that winning teams had higher points scoring-related statistics, turnovers and kicks and were more successful at set piece [22]. This discrepancy in outcomes may be a function of close games potentially being played by two opposing high-quality teams, demonstrating similar levels of performance behaviours. This continues to highlight the importance of contextualising performance indicators as vital information is likely to be lost when confounding factors are not considered. There is typically a lack of transparency in the operational definitions used to describe and analyse rugby performance. Twenty-two retrieved articles quantified performance using performance indicators; however, only 7 actually defined the variables analysed. Furthermore, of the 22 articles, only 16 were explicit about the process of selecting the indicators used. The selection process included expert opinion together with the research group [1,17,21], commonly available statistics from a third-party company [22,23,25,28,38] and indicators selected solely by the research group [3,18,29,39] (Table 3). The method used when selecting performance indicators in the remaining articles was undisclosed. Challenges may arise given a lack of clarity (i.e. lack of definitions or objectivity when selecting performance indicators) when comparing or replicating investigations, making it difficult to advance the body of research and for coaching staff to implement the suggested practices. However, a summary of the research and performance indicators relevant to successful performance can provide useful insights.
As mentioned earlier, performance indicators provide an overview of certain events that may contribute to and predict successful performance. However, isolated performance indicators do not consider the opposition, nor do they account for unpredictability and inherent match specificity. For example, game behaviours tend to be inconsistent and performance indicators will most likely be influenced by player-opponent interactions. It is therefore unlikely that a complex, dynamic game such as rugby can be represented by isolated measures of frequency data.
Evolution of Performance Assessment
Studies relating to attack are more common than investigations into defence (Table 1). Topics such as try scoring, possession duration and ball carries were investigated in relation to the attacking team, whereas tackle contest events and rucks were detailed as measures of defence. Most studies analysing performance indicators investigated both attack and defence situations. Specific investigations into defensive strategies only appeared from 2013 most likely related to rule changes [36] favouring the defensive team during breakdown situations.
To accommodate changing game styles, rule changes were introduced in rugby during 2007 and 2013, expediting the speed of play to increase appeal and competitiveness [36,46]. The periods before, during and after these changes should be considered and compared, recognising that successful performance indicators prior to 2007 may not be relevant thereafter. For example, amendments to the laws surrounding the ruck led to a decrease in the number of players involved in ruck situations [19]. Teams instead favour committing more players to the defensive line in preparation for subsequent phases. As a result, game actions have increased due to the added pressure on attacking teams to expedite the speed of play [36].
Between 2004 and 2007, winning teams won more lineouts on the opposition's throw, scored more tries, had greater metres gained, kicks out of hand, line breaks and percentage tackles made in international, Super Rugby and professional domestic competitions [17,22,23]. Successful teams also had higher points scored, conversions, successful drop goals, mauls won, line breaks, possession kicked, tackles completed and turnovers won. In contrast, losing teams lost more scrums and lineouts. Following this epoch, between 2007 and 2013, winning teams conceded more penalties between 50 m and opposition 22 m, and had more total kicks, including kicks out of hand, than losing teams. After 2013, variables likely to result in winning included higher average carry metres, clean breaks made and kicks made relative to the opposition in a professional domestic league. Negative outcomes were more likely when teams conceded penalties while the opposition was in possession. Data were considered in relation to the opposition rather than isolated data of each team considered discretely [41]. Isolated methods of analysis indicated winning teams missed less tackles in the Super Rugby competition [38]. Analysis of knockout stages of the Rugby World Cup, however, indicated that winning teams kicked a greater percentage of possession in the opposition 22-50 m and won more lineouts on the opposition ball [37], suggesting that successful test rugby may require a territory style of play. Performance indicators investigated were inconsistent across the studies, making it difficult to compare and assess the relevance and impact of key attacking and defensive variables. As such, although points scored were unrelated to match outcome post 2013 [41], it is problematic to suggest that point scoring is not important in rugby performance.
Factors such as competition location may rationalise the differing game styles observed. Approximately 20% of studies reported on Northern Hemisphere teams, known to have a different style of play [47] to teams in Southern Hemisphere competitions. Southern Hemisphere teams tend to exhibit higher overall ball-in-play periods, resulting in more game actions and injuries due to greater game continuity [47]. Additionally, ~40% of articles investigated teams competing in international competitions (Table 1) and 13% included data sets from multiple competitions, possibly decreasing their relevance as some information may be missed given the loss of contextual information [48]. Maintaining the integrity of each individual match by using the established descriptive conversion method of analysis, which considers all performance indicators in relation to the opposition, is preferred [41].
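In practice, the descriptive conversion method referred to above reduces to a simple per-match differencing of each indicator between a team and its opponent. The sketch below is a minimal illustration of that idea; the indicator names and values are invented placeholders, not data from any of the reviewed studies.

```python
from typing import Dict

def relative_indicators(team: Dict[str, float], opposition: Dict[str, float]) -> Dict[str, float]:
    """Descriptive conversion: express each performance indicator relative to the
    opposition for a single match (team value minus opposition value)."""
    return {k: team[k] - opposition.get(k, 0.0) for k in team}

# Hypothetical single-match indicator counts (placeholders, not real match data).
team_a = {"kicks_out_of_hand": 28, "tackles_completed": 142, "penalties_conceded": 9}
team_b = {"kicks_out_of_hand": 21, "tackles_completed": 155, "penalties_conceded": 12}

print(relative_indicators(team_a, team_b))
# -> {'kicks_out_of_hand': 7, 'tackles_completed': -13.0, 'penalties_conceded': -3.0}
```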
In summary, studies of performance analysis in rugby often show methodological shortcomings regarding the genesis of performance indicators and the selection process, a lack of transparency and of operational definitions for the investigated performance indicators, and issues related to investigating performance indicators over entire competitions. The problems associated with investigating performance indicators without consideration of contextual and situational factors limit the application of research outcomes within the rugby community.
Advancing Rugby Performance Analysis
There are some notable studies that have explored the performance processes in rugby union. Recently, researchers have used clustering approaches to identify important patterns in match data associated with certain game outcomes [35,42]. These methods are useful for reducing large volumes of high-dimensional data to visualisable, low-dimensional output maps or for identifying key playing patterns. One method identified that multiple game styles tended to result in success, such as a ball-carrying, high-contact style of play. A low-possession, strategic kicking style of play was observed to be just as effective. However, it is important to consider that data were not explored in relation to opposition game style for each specific match, meaning that support for an ideal game style could not be established. Moreover, the level of competition analysed was low and restricted to a single nation. A K-modes cluster analysis was used to identify common playing patterns that preceded a try [42], suggesting that plays following lineouts, scrums and kick receipts were common approaches to scoring tries in Super Rugby. A limitation of these approaches is that the data related to collective team behaviour, such as player positioning and movements, were not collected in either of these studies.
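As a rough illustration of how such clustering might be applied, the sketch below groups categorical descriptions of try-scoring sequences with k-modes. It assumes the third-party kmodes Python package and a toy, hand-made dataset; it is not the pipeline used in the cited studies, and the column choices are placeholders.

```python
import numpy as np
from kmodes.kmodes import KModes  # assumes the third-party 'kmodes' package is installed

# Toy categorical descriptions of try-scoring sequences (placeholders, not real match data):
# columns = [possession source, attacking channel, number of phases]
sequences = np.array([
    ["lineout", "wide", "3+"],
    ["scrum", "central", "1-2"],
    ["kick_receipt", "wide", "3+"],
    ["lineout", "wide", "1-2"],
    ["turnover", "central", "3+"],
    ["scrum", "wide", "1-2"],
])

km = KModes(n_clusters=2, init="Huang", n_init=5, random_state=0)
labels = km.fit_predict(sequences)
print(labels)                  # cluster assignment for each try-scoring sequence
print(km.cluster_centroids_)   # modal category profile of each cluster
```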
Multiple studies have considered rugby union performance using a dynamical systems approach to analyse game characteristics [27,32,[49][50][51][52][53][54][55]; however, to the authors' knowledge, only three studies have used this approach in professional, male adult rugby union contexts [26,27,32]. In this approach, important characteristics of complexity are assessed by emergent patterns, due to the interactions between components in the system (i.e. players) over time [51]. This method has been found to successfully identify self-organising, emergent patterns from slight changes in interactions between players [56]. This suggests that players' decisions and actions are governed not only by prior instruction provided by coaches, but by constraints in the player-environment interaction. In team sports, these behaviours emerge in space and continuously change over time, under the influence of constraints such as task (rules governing the game), environmental (weather) and individual constraints (physical capacity of the athlete) [57], resulting in the spontaneous reorganisations of intrapersonal and interpersonal coordination [58]. Some research has measured the constraining influences of one team on the opposing team's playing system formation [32]. Attackers were observed to act as a coordinated sub-unit, measured through correlation values, accounting for distance and relative velocity values between each player within the sub-unit (two players from one team) [58]. When the sub-unit of the attacking team was able to disturb the coordination tendencies of the defending team's subunit, this resulted in opportunities for the attacking team to cross the gain line (an imaginary line parallel to the score line, set between the attackers and defenders every time that attackers and defenders perform a ruck, maul, scrum or lineout [32]). However, when both sub-units remained equally coordinated, neither the attacking nor the defending team was successful in crossing the gain line or regaining possession of the ball, respectively. Small adjustments in players' interpersonal distances and running line speed were considered useful tools to disturb the opponent's coordination patterns. Using a similar approach, pass decisional behaviour was found to be predicted by the time-to-contact between the attacker and the defender [27]. The type of pass that emerged was significantly correlated (p < 0.001) with the variables available in the interaction between players and the environment, suggesting that intrateam coordination is necessary for crossing the gain line as well as effective passing in rugby union.
Capturing movements at the team level associated with successful attacking phases of play, such as advances in territory (achieving a more advanced position in the field of play), has additionally been explored in rugby union [26]. Investigating multi-player sub-phases, ball displacement trajectory patterns were analysed, revealing that the maximum distance the ball travelled backwards from a pass was lower in successful phases of attack. Greater advances in territory were additionally observed when lower backward movements of the ball were coupled with rapid ball delivery. Assessing the macroscopic order therefore suggests that successful characteristics of collective behaviour patterns in attacking phases involve fast ball delivery to a receiver within a close distance [26].
This constraint-led approach is commonly used in the field of skill acquisition and motor learning and proposes that novel actions might emerge by manipulating key practice task constraints [51]. This approach has additionally been used to identify the interaction between the intrinsic dynamics and the external constraints within critical match events [27]. Examining the inter- and intrateam coordination patterns that influence successful performance may, therefore, yield critical insights into behaviours associated with successful match events, such as line breaks [22] and try scoring [42]. These methods have yet to be explored in international rugby union and should be addressed in future research.
Future Direction
A small number of studies have started to progress the field of performance analysis in rugby union [26,27,32,35,42]. However, compared to various other team sports, the field of dynamical systems analysis in rugby remains largely unexplored. Sports such as football, basketball and AFL have adopted dynamical system approaches in their analysis of tactical performance; however, there is limited understanding of the value of such approaches in a 'gain line' team sport, such as rugby union, where teams in possession of the ball aim to gain ground relative to the initial starting position, referenced by a projected line that runs parallel to the try line known as the gain line.
Recognising the need for a multi-dimensional approach to analysing performance, many football researchers have explored the use of novel indicators to assess the tactical behaviour of players [59,60]. Using positionally derived metrics (such as x- and y-coordinates), the synchronisation of players' movements was analysed, revealing positive outcomes associated with time spent synchronised with players from the same team [61]. Variables such as team centre, team dispersion, team interaction and coordination networks and sequential patterns have been explored to generate knowledge about team properties and the patterns that characterise their organisations [62]. These metrics capture intrateam coordination tendencies by measuring the synchronisation of a pair of teammates, known as a dyad, defined as a pair of two players who share the same environment and intentionality, and pursue common goal-directed behaviours [63]. These dyads form the basis of the local social interactions inherent to complex systems, in which individual agents (players) modify their behaviours on the basis of these local interactions and spontaneously organise themselves into coordinated patterns [64]. The local interaction rules are in fact context-dependent, given the presence of other teammates and opponents, demanding the continuous adaptive behaviour of players. Investigators have captured this context dependency by analysing the interpersonal distances between attacker-defender dyads and identifying periods of equilibrium when these distances remain at a specific value [50]. When interpersonal distance decreases, these systems evolve from a state of balance to critical performance moments, as the contextual dependency rules governing performance require constant co-adaptations of each player to their opponent [50,51]. It is these local interactions, or system components, governed by their simple local rules, that cause the system to evolve, forming new patterns of dynamics [51]. By understanding group behaviours and team dynamics during critical performance moments (goal scoring), football analysts are describing the phasic shifts in team dynamics, using team centroids, that can lead to scoring opportunities [65]. Social network theories have also been used to develop a deeper understanding of the passing interactions between team members that demonstrate the local interactions within the wider system [66,67]. As many of these methods have only been explored in football and basketball, investigating the coordinated patterns of players and their continuous interactions as the rugby game evolves is needed to provide a deeper understanding of why certain patterns emerge in critical regions and/or periods in elite-level competition.
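To make the positional metrics described above concrete, the sketch below computes a team centroid, a simple dispersion measure and a crude dyad "synchronisation" signal from x-y player coordinates. The coordinate array is invented placeholder data and the specific formulas vary across the cited football studies, so treat this as an illustrative approximation rather than a published method.

```python
import numpy as np

# Hypothetical positions: (time steps, players, xy). Placeholder data, not from any study.
rng = np.random.default_rng(0)
positions = np.cumsum(rng.normal(scale=0.2, size=(200, 15, 2)), axis=0)

centroid = positions.mean(axis=1)  # team centre at each time step
# Mean radial distance of players from the centroid: one simple dispersion measure.
dispersion = np.linalg.norm(positions - centroid[:, None, :], axis=2).mean(axis=1)

def dyad_synchronisation(player_a: np.ndarray, player_b: np.ndarray) -> float:
    """Correlation of two players' frame-to-frame displacements along x and y,
    averaged over both axes - one crude proxy for movement synchronisation."""
    va, vb = np.diff(player_a, axis=0), np.diff(player_b, axis=0)
    corr_x = np.corrcoef(va[:, 0], vb[:, 0])[0, 1]
    corr_y = np.corrcoef(va[:, 1], vb[:, 1])[0, 1]
    return float((corr_x + corr_y) / 2)

print(dispersion[:5])
print(dyad_synchronisation(positions[:, 0], positions[:, 1]))  # e.g. a wing-centre dyad
```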
Exploring collective system measures and assessing the coordination dynamics between players and teams in elite international level competition may provide valuable insights into team behaviours [68]. This information can then be used to identify patterns of interactions between teammates [62] which coaches can harness to enhance task representation design in training [69].
Conclusions
The aim of this paper was to critically review the performance analysis research in professional male, 15-a-side rugby union. Studies were assessed based on a number of elements such as context, opposition analysis, competition and number of events analysed.
Studies utilising performance indicators were additionally assessed to establish the genesis of the performance indicators and the inclusion of operational definitions. Twenty-nine variables were related to successful match outcomes. Possession kicked, lineout success on opposition ball, tries scored, points scored, conversions, tackles completed, turnovers won and kicks out of hand were the most frequently observed variables. Despite the majority of these articles including context in their analyses, very few accounted for multiple contextual variables, limiting insights into the process of game behaviours arising from the player-opponent interaction and the effect of multiple confounding factors, such as field location, number of players involved and period within a match.
Only a third of the studies investigating performance indicators defined the variables used in their analyses. These findings highlight the need for clarity when measuring performance-related variables by providing full operational definitions, to continue to advance the field of performance analysis.
Despite the number of studies published in the last two decades, only a few studies have begun to advance the field, while the majority of the studies reviewed involved a reductionist view of performance. The limited number of studies adopting an alternate view of performance has assessed rugby union performance through a dynamical systems approach by observing emergent patterns. The examination of inter-and intrateam coordination patterns that influence successful performance has the potential to yield critical insights into behaviours associated with successful match events; however, these methods have yet to be explored in international rugby union.
Finally, the advancements in other team sports are discussed to illustrate the potential of a range of performance analysis methods that assess team properties and patterns that characterise their organisation. These methods have been applied to develop a deeper understanding into collective system measures providing valuable insights into sports such as football and basketball.
The contraceptive behaviour of ever married women in tribal area of Ahmednagar district, Maharashtra
Introduction: The contraceptive behaviour of the population in the reproductive age group plays an important role in determining the population size of a country. Many factors decide the contraceptive behaviour of males and females. The scheduled tribes are known to be traditionally less educated and removed from the modern world. Hence, the present study attempted to examine the contraceptive behaviour of tribal women in Akole block of Ahmednagar district of Maharashtra. Methodology: A descriptive cross-sectional study was conducted among ever married women (EMW) in the tribal area of Akole taluka in Ahmednagar district of Maharashtra. Interviews of 392 EMW, selected by a systematic random sampling method, were carried out. Results and Discussion: A total of 392 EMW were interviewed; the mean age and mean age at marriage of the study population were 26.65±5.1 and 17.97±1.08 years, respectively. 85.50% of married women had knowledge of one or other method of family planning. Contraceptive use among EMW was 69.6%; however, it was only 41.40% among men. Contraceptive morbidity among the participants was quite low, at 11.3%. Various barriers to using contraception were also reported. Conclusion: Knowledge of contraception among tribal women was 85.5%, whereas the prevalence of use was only 69.6%. Contraceptive use by males was low, showing their poor involvement in family planning. © 2020 Published by Innovative Publication. This is an open access article under the CC BY-NC license (https://creativecommons.org/licenses/by-nc/4.0/)
Introduction
Planners and decision makers across the world face the huge challenge of controlling human population growth. The family planning programme is recognized as a key intervention for population control. Over the past 40 years, there have been significant advances in contraceptive methods, approaches and services. India ranks second in terms of population size, after China, and constitutes one-fifth of the world's population. As per UN estimates, India's population is likely to reach 1.53 billion by 2050, at which point India will rank first in terms of population size. 1 This rise in population has a deleterious effect on socio-economic development. During 2001-2010, the annual growth rate of India's population was 1.64%, compared with 1.23% for the world. The acceptance of contraception varies widely between societies, religions and castes in India. The majority of India's population (68.84%) is rural. 2 According to the National Family Health Survey (NFHS) 4, 3 use of any contraceptive method is 53.5%, with slightly higher use of contraception in urban areas (57.2%) than in rural areas (51.7%). The prevalence of contraception has declined from 56.3% to 53.5% when compared with NFHS-3. 4 In contrast, in 2017, 67% of married women and women in union worldwide used some kind of contraception. 5 The tribal population, designated 'Scheduled Tribes' by the constitution of India and accounting for around 104 million people, is traditionally among the poorest and most educationally and economically backward communities in India. The total tribal population of Maharashtra is 2,156,957, of which the tribal population of Ahmednagar district is 378,230; this constitutes 8.33% of the total tribal population of Maharashtra and 5.1% of the total tribal population of India. 2 The total fertility rate of the scheduled tribes is 2.5, which is higher than that of other social groups. 3 In Maharashtra, 66.9% of the population uses one or the other contraceptive method, 65.3% in urban areas and 68.3% in rural areas. The contraceptive behaviour of the population in the reproductive age group plays an important role in determining the population size of a country. Many factors decide the contraceptive behaviour of males and females. The scheduled tribes are known to be traditionally less educated and removed from the modern world. With this background, the present study was conducted among the tribal married women of Akole taluka in Ahmednagar district of Maharashtra state.
Methodology
This descriptive cross-sectional study was conducted in the tribal area of Akole taluka in Ahmednagar district of Maharashtra, among the ever married women of the tribal population. According to the 2011 census, the population of Akole taluka is 291,950, of which the tribal population is 47.9%, i.e. 139,730, including 69,403 females. The sample size was calculated by taking the overall prevalence of contraception in the tribal population as 44.6%, 6 giving a calculated sample size of 379 (Z = 1.96 and d = 0.05). Akole taluka has three major revenue blocks: Rajur (81.36% tribes), Kotul (46.04% tribes) and Akole (29.6% tribes). Rajur, the revenue block with the highest tribal population, was selected.
The criteria for inclusion of villages in the study were a tribal population of 95% or above and a female population of 51% or above. Fourteen villages met these criteria, and all were included in the study. Data were collected through one-to-one personal interviews of ever married women (EMW), selected by a systematic random sampling method from the list of EMW obtained from the ASHA and AWW of each village. The minimum sample per village (i.e. 28) was calculated by dividing the total sample size (379) by the number of villages included in the study (14) and rounding up. The sampling interval was calculated by taking the female population of the smallest village and dividing it by the minimum sample size per village, i.e. 28; hence, every sixth EMW on the list was included in the study. Non-tribal women on the list were excluded from the study. Data were collected with a pretested, semi-structured data collection tool.
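The sample-size and sampling-interval arithmetic described above can be reproduced in a few lines. The sketch below is a minimal check of those calculations, assuming the standard single-proportion formula n = Z²·p(1−p)/d²; the lowest village female population used for the interval is a placeholder, since the exact figure is not reported in the text.

```python
import math

# Sample size for estimating a proportion: n = Z^2 * p * (1 - p) / d^2
z, p, d = 1.96, 0.446, 0.05            # values stated in the Methodology
n = (z ** 2) * p * (1 - p) / d ** 2
print(round(n))                         # ~380, in line with the reported 379

villages = 14
per_village = math.ceil(n / villages)   # minimum sample per village (28 after rounding up)

lowest_female_population = 170          # placeholder; the actual figure is not given in the text
interval = lowest_female_population // per_village
print(per_village, interval)            # e.g. 28 and a sampling interval of about 6
```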
Knowledge of contraceptive methods
85.50% of married women had knowledge of one or other method of family planning, while 14.5% lacked knowledge of contraception. A previous study reported that the prevalence of knowledge of any contraceptive method among married tribal women of reproductive age was 97%. 6 Knowledge of modern methods of family planning was comparatively higher than that of traditional methods. Among the modern methods, knowledge of female sterilization, male sterilization, oral pills, emergency contraceptives and male condoms was slightly greater than knowledge of IUDs, female condoms and injectables. The increased involvement of family planning and health workers contributed much to generating awareness about family planning. The sources of knowledge identified in the study were classmates and friends, family planning professionals, TV and radio, family, newspapers and periodicals, medical staff, course education and networking.
Current use of contraceptive method
Despite the high level of knowledge about family planning in the population, it is not translated into practice. Contraceptive use was 69.6%, including both traditional and modern methods. Of the study population, 33% used the withdrawal method, 34% permanent sterilization (all were females) and 7% the calendar method, while only 19% used male condoms and 7% IUDs. The prevalence of contraception in the study area was considerably higher than in another study, which reported a prevalence of 40.7% among tribal eligible couples in Bankura district of West Bengal. 7
Male involvement in family planning
The study showed that, of the participants who had knowledge of contraception, 71.4% of married women said that their partners agreed to contraception. Use of contraception by male partners was found to be low, at only 41.40%; even though partners agreed to contraception, they were not using it themselves. The reasons given for not using male sterilization were expecting more children (29.3%), female sterilization being easier (9.52%), the belief that the female is the one who should undergo it (16.12%), discomfort with the procedure (8.1%), partner's disagreement (15%) and fear of losing sexual strength (18.7%). According to NFHS-4, 3 47.8% of women reported the use of a female modern method as against only 5.9% using any male method, and male sterilisation's share of family planning methods was 0.62% despite being safer, quicker and easier.
Contraceptive morbidity
The contraceptive morbidity encountered by the participants was quite low, at 11.3%, because of the minimal use of modern methods of contraception. The types of contraceptive morbidity encountered included weakness/inability to work (19.35%), abdominal pain (19.35%), body ache/backache (13%), weight gain (9.7%), excessive bleeding (3.22%), cramps (9.7%), pain during coitus (13%) and burning sensation during coitus (3.22%). Contraceptive failure, at 11.6%, was also recorded as a contraceptive morbidity. In a study conducted in the North East of India, 8 body ache/backache (4.5%) was the most commonly reported morbidity, followed by abdominal pain (3.6%) and irregular periods (2.3%), which is somewhat similar to the findings of the present study. The present study also indicates that even though males agreed to use contraception, they were not using it themselves, as shown by the low uptake of male condoms and male sterilization.
Barriers in using contraception
The most common reason for not using contraception, reported by 31% of participants, was that they had not prepared pills or other methods for unplanned sex; 21% of participants were expecting more children, 14.5% had partners who did not want them to use contraception, 14.5% were not sexually active, 6.45% thought occasional sex could not lead to pregnancy, and 1.61% did not know how to use contraceptives. Other studies have revealed that nearly 7.2% of women were not using any contraceptive method because of opposition from their husbands or others. 9
Conclusion
The knowledge of contraception among the ever married women of the tribal area was as high as 85.5%, while a gap between knowledge and practice was evident, as the prevalence of contraceptive use was only 69.6%. The use of modern contraceptive methods was relatively low in comparison with knowledge of them. Contraceptive morbidity among the study participants was very low. The study found poor male involvement in family planning, as contraceptive use by males was low, and a major barrier to contraceptive use among tribal women was the partner's disagreement.
Source of Funding
None.
Conflict of Interest
None.
Understanding how and why health is integrated into foreign policy - a case study of health is global, a UK Government Strategy 2008–2013
Background Over the past decade, global health issues have become more prominent in foreign policies at the national level. The process to develop state level global health strategies is arguably a form of global health diplomacy (GHD). Despite an increase in the volume of secondary research and analysis in this area, little primary research, particularly that which draws directly on the perspectives of those involved in these processes, has been conducted. This study seeks to fill this knowledge gap through an empirical case study of Health is Global: A UK Government Strategy 2008–2013. It aims to build understanding about how and why health is integrated into foreign policy and derive lessons of potential relevance to other nations interested in developing whole-of-government global health strategies. Methods The major element of the study consisted of an in-depth investigation and analysis of the UK global health strategy. Document analysis and twenty interviews were conducted. Data was organized and described using an adapted version of Walt and Gilson's policy analysis triangle. A general inductive approach was used to identify themes in the data, which were then analysed and interpreted using Fidler's health and foreign policy conceptualizations and Kingdon's multiple streams model of the policymaking process. Results The primary reason that the UK decided to focus more on global health is self-interest - to protect national and international security and economic interests. Investing in global health was also seen as a way to enhance the UK's international reputation. A focus on global health to primarily benefit other nations and improve global health per se was a prevalent though weaker theme. A well organized, credible policy community played a critical role in the process and a policy entrepreneur with expertise in both international relations and health helped catalyze attention and action on global health when the time was right. Support from the Prime Minister and from the Foreign and Commonwealth Office was essential. The process to arrive at a government-wide strategy was complex and time-consuming, but also broke down silos. Significant negotiation and compromise were required from actors with widely varying perspectives on global health and conflicting priorities. Conclusions As primarily an exploratory study, this research sheds significant light on the global health policymaking process at the level of the state. It provides a useful and important starting point for further hypothesis-driven empirical research that focuses on the integration of health in foreign policy, how and why this happens and whether or not it makes an impact on improving global health.
Background
Over the past decade, global health issues have become more prominent in foreign policies at the national level [1][2][3][4][5][6]. In 2007 the foreign ministers of Brazil, France, Indonesia, Norway, Senegal, South Africa and Thailand launched the Foreign Policy and Global Health (FPGH) initiative and the Oslo Ministerial Declaration and renewed it in 2010 [7,8]. Since 2008, the United Nations General Assembly has adopted three resolutions resolving that governments should pay more attention to global health in their foreign policies [9][10][11].
As nations become more interconnected and interdependent and health issues become increasingly global, state actors have more incentives to work together and with a variety of non-state actors on health issues that transcend national boundaries [12]. The process of negotiated collective action for global health has come to be referred to as 'global health diplomacy' (GHD), the 'policy-shaping processes through which state, non-state and other institutional actors negotiate responses to health challenges, or utilise health concepts or mechanisms in policy-shaping and negotiation strategies, to achieve other political, economic or social objectives' [13, p. 10]. The manner in which this concept is used, however, is highly diverse and the GHD process itself poorly understood [13,14]. Little empirical research that draws directly on the perspectives and experiences of those involved in GHD processes has been undertaken, leading to strong calls for more descriptive, analytical, conceptual and practical rigour [5,15-17]. This paper contributes to this goal by critically examining health in foreign policy (HiFP) through an empirical case study of Health is Global: A UK Government Strategy 2008-2013 (Health is Global).
Methods
The major element of the study consisted of an in-depth investigation and analysis of Health is Global, launched in 2008. Among the countries with a formal strategy for GHD, the UK's global health strategy is the most detailed and comprehensive.
Literature review, document analysis and semi-structured interviews were used to conduct the UK case, as well as three background case reviews (Norway, Switzerland and Brazil). This article reports only on the primary UK case. To structure data for subsequent analysis and interpretation using the theoretical frameworks (Fidler's health and foreign policy conceptualizations and Kingdon's multiple streams model of the policymaking process), an adapted version of Walt and Gilson's policy analysis triangle [18,19] was used as an heuristic device to gather and organize a comprehensive and relevant set of data in five areas (Figure 1): the policy context within which the policy was developed (i.e. context for and reasons why the policy was developed); the policy processes (i.e. how the policy was developed and is being implemented); the policy content (i.e. the global health issues to be addressed through the policy and how health is positioned in the policy discourse); and the actors involved (i.e. who was involved and what role they played in the process) [18,19]. A fifth important category, indications of impact, was added to capture data that focused on potential and actual effects of the policy.
Data were analyzed, interpreted and explained using Fidler's health and foreign policy conceptualizations [20] and Kingdon's multiple streams model of the policymaking process [21][22][23]. Fidler's work is grounded in international relations theory and posits three arguments for why health has risen as a foreign policy issue: revolution, remediation and regression (Table 1). Kingdon's model of the policymaking process is a highly reputable, evidence-based model that focuses on understanding why some topics become prominent on the policy agenda and others do not, and why certain policy alternatives are seriously considered while others are neglected (Figure 2). Together these frameworks provide a useful and novel mechanism for analyzing and interpreting the study findings and arriving at conclusions in light of the main research question: how and why is health integrated into foreign policy?
Purposive sampling was used to identify and recruit interviewees for semi-structured interviews. State and non-state actors who had been directly involved in health and foreign policy integration in each of the four countries were targeted for interviews. A total of twenty interviews were conducted, fourteen for the UK case (seven each with state and non-state actors). Access to interviewees was not an issue; however, repeated attempts to interview politicians who had been involved in the endorsement and approval of the UK strategy were not successful.
Interviews took place between August 27, 2009 and March 24, 2010. Six of the interviews with UK interviewees were conducted in person in London and the rest were conducted by telephone. Interviews lasted from 30 minutes to 1.5 hours. Informed consent was obtained before each interview began. Interviews were audio-taped and then transcribed verbatim. Ethics approval was obtained from the University of Ottawa and renewed on an annual basis until the study was completed; the study was undertaken in full accordance with the University's ethical guidelines.
A general inductive approach was used to analyze the interview data using manual coding [24]. Both the research objectives and questions (deductive) and multiple readings and interpretations of the raw data (inductive) guided data analysis. Data analysis encompassed three concurrent and iterative flows of activities: data reduction, data display and conclusion drawing/verification. The main themes in the data resulting from this process are reported on in the results section of this article within "why" and "how" categories consistent with the research question followed by a discussion and analysis of these themes using Fidler's conceptualizations and Kingdon's model.
Results
Health is Global was released in 2008 largely in response to globalization and the realization that 'the old distinction between 'over here' and 'over there' was becoming increasingly redundant' and required nations to cooperate to achieve 'health for all' [1]. When released, Health is Global was described as a 'cross government strategy' to highlight the breadth of challenges that face 'all of us' in the area of global health [25].

Table 1 Summary of Fidler's health and foreign policy conceptualizations [20]

Revolution
• Health's increasing role in foreign policy is transformative of the health-foreign policy nexus
• Health collapses the traditional distinction between high and low politics and creates a new political space in which health is an overriding normative value and the ultimate goal of foreign policy
• Health is broadly conceived and encompasses the social determinants of health
• Is consistent with health discourses that focus on health as a human right and the "health for all" ideal

Remediation
• Health's rise as a foreign policy issue reflects the continued persistence of the traditional hierarchy of foreign policy functions
• Health has become another issue that needs to be addressed through traditional approaches to foreign policy, or as a strategic vehicle through which traditional foreign policy goals can be achieved
• Foreign policy attention on health is focused when disease crises appear and fades when crises drop off the political spotlight
• Provides the strongest explanation for why health has risen as a foreign policy issue

Regression
• Health's integration into foreign policy is a regressive development - an indicator that health problems are getting worse
Health is Global is intended to span five years (2008-2013); however, 'its vision covers a 10- to 15-year period' [1]. The strategy comprises goals with specific action areas and includes ten principles that are meant to guide decision-making, particularly when conflicts among priorities arise. Sir Liam Donaldson, Chief Medical Officer for England, and Dr. Nick Banatvala, Head of Global Affairs, UK Department of Health, acknowledged in their proposal for the strategy that potential conflicts exist between policy priorities.
'For example, reconciling UK trade interests (including trade in commodities) with sound pro-poor development policy and maintenance of international human rights might be difficult… A coherent UK global-health strategy is important in navigating an economically and ethically acceptable path through the priority areas' [[26], p. 857].
The UK strategy's final priorities and principles reflect the potentially conflicting reasons why it was developed and allude to the difficult process to reach a consensus on what eventually ended up in it.
In general, interviewees described the strategy as a very positive development referring to it as "motivational", "a commitment to global health", and "more than just another report". Findings pertinent to how and why the strategy was developed follow, beginning with a brief overview of the British foreign policy from 1997 to 2008 that formed the backdrop to the strategy.
Why?
The late 1990s marked the beginning of an increasing focus in the UK (and elsewhere) on the relationship between globalization and health with the UK's Nuffield Trust playing a key role in catalyzing attention on this phenomenon and the importance of integrating health into foreign policy. The late 1990s also marked the beginning of Tony Blair's premiership of the UK, a position that he held from May 1997 to June 2007 after which Gordon Brown became Prime Minister until 2010.
Three main foreign policy themes of Blair's 10-year span as UK Prime Minister were an activist philosophy of international interventionism, the maintenance of strong alliances with the United States (US) and a commitment to placing Britain at the heart of Europe [[27], p.3]. Activist interventionism was regarded as a genuinely new perspective and approach, as was a focus on more 'joined-up government' [27]. Under Brown, some 'recalibration' of the three themes occurred but there was more continuity than change from Blair to Brown [[27], p.3]. Of these three prongs, interventionism and the UK's special relationship with the US appear to be the most relevant contextual factors that influenced why Health is Global was developed and what was included in it. A focus on more 'joined-up government' also helps explain why Health is Global is a whole-of-government strategy.
The Blair Administration
During the 1999 Kosovo crisis Blair delivered his famous Chicago speech in which he unveiled his 'doctrine of international community' [27]. This doctrine was based on the explicit recognition that nations were becoming interdependent and that national interest was to a significant extent governed by international collaboration. Mutual dependence was linked to the idea that boundaries between the domestic and the foreign were blurring; therefore, an overriding policy of non-intervention was no longer an option. Indeed, in cases of genocide or crimes against humanity, it was a moral imperative.
While initially expressed through a paradigm of humanitarian intervention, after 9/11, Blair's support for interventionism became linked with protecting national security, fighting terrorism and backing the US invasion of Iraq based on evidence about weapons of mass destruction in Iraq that it now appears Blair knew to have been fabricated [28]. This shift led critics to claim that respect for human rights and international law were subordinated to the UK's focus on its relationship with the US and the 'war on terror.' Blair gave more attention to international development in his second term, which some argue was an effort to improve Britain's tarnished reputation post Iraq [27]. One of the UK interviewees argued that Health is Global was in part politically motivated to the same end: "I think there was also, and I don't know how much this motivated the government, but I think because of the opprobrium and the criticism of the UK government's positions on the Iraq war and so on, I don't know to what extent that might have influenced them to try and see how they might get a better international profile by focusing on positive contributions the UK could make to strengthening health and development".
A 2006 commentary published in The Lancet highly critical of the UK's involvement in Iraq argued that 'a renewed foreign policy that might at least be one positive legacy of our misadventure in Iraq' [ [29], p. 1396] was desperately needed. It concluded that health 'is now the most important foreign policy issue of our time and should be used as an instrument of foreign policy' [ [29], p. 1397]. How much this article or the perspective it conveys influenced UK policymakers is not known. Its arguments, however, are explicitly referred to in the proposal that led to Health is Global, albeit without making any reference to the UK's role in Iraq as a contributing factor [26].
The UK's special relationship with the US may also have sparked Health is Global's development. The US Institute of Medicine's 1997 report, America's Vital Interest in Global Health, is listed as a key influence and rationale for a government-wide strategy in the proposal that led to the strategy. The US report 'identified three pillars: protecting people, enhancing the economy, and advancing international interests' [[26], p. 857], which eventually became the thematic backbone of Health is Global. One of the interviewees also indicated that the Nuffield Trust's relationship with US colleagues in the American Association of Academic Health Centers in the early 1990s was another key development that attracted more focused UK attention on the links between globalization and health and the relationship between health and foreign policy. As this interviewee explained, "Basically the Association wanted to know whether we would have a continuing interest in working with the Americans on matters of mutual interest and on the health agenda". Stemming from this initial discussion, the Nuffield Trust began collaborating with the US on efforts to better understand the impact of globalization on health, which led to attempts to attract UK attention on this issue as well.
While a more sophisticated understanding of global health appeared to emerge during the Blair years, this did not necessarily translate into action [30]. Public health was not a dominant force in driving policy in the UK and the overriding reason for focusing on health in foreign policy was to protect national interest. Narrowly focused domestic security concerns were key motivating factors as seen in the emphasis given to bio-terrorism and infectious diseases [30].
The Brown Administration
Gordon Brown's government retained the broad principle of interventionism but recast it to be less about hard power and more about conflict prevention and humanitarian agendas. He could not, however, completely repudiate the exercise of military power at a time when British troops were in both Afghanistan and Iraq, but he did emphasize that military action in the future would be a last resort [27]. In his speech to the Lord Mayor's banquet on November 12, 2007, Brown summarized his approach as 'hard-headed internationalism' [ [31], p. 15].
'…internationalist because global challenges need global solutions and nations must cooperate across borders - often with hard-headed intervention - to give expression to our shared interests and shared values; hard-headed because we will not shirk from the difficult long term decisions and because only through reform of our international rules and institutions will we achieve concrete, on-the-ground results' [[31], pp. 15-16] [32].
Brown made it clear that the government's primary obligation is the safety of the British people and the protection of the British national interests that would, in an interdependent world, be best realized through cooperation to overcome shared challenges [ [31], p.15]. Many of these fundamental policy themes and the reasons behind them (economic globalization) permeate Health is Global.
In addition to traditional 'high politics' priority areas and partners (the US), the Brown government continued to promote international development. Over the course of the Blair to Brown years, the Department for International Development (DFID) enjoyed a reputation as a progressive, innovative and effective donor agency with a strong voice across government [33]. During that time, British aid spending tripled in real terms, and the UK plans to spend 0.7% of gross national income on international development by 2013 [1,33]. The seeds for Health is Global were largely sown during the Blair years, but it was under Brown's leadership that the policy was launched.
'Economic prosperity, security and stability for the UK and the rest of the world'
The ultimate goal of the strategy is actually not global health per se but rather 'economic prosperity, security and stability for the UK and the rest of the world' [1]. As it states: 'a healthy population is fundamental to prosperity, security and stability -a cornerstone of economic growth and social development. In contrast, poor health does more than damage the economic and political viability of any one country -it is a threat to the economic and political interests of all countries' [[1], p.7].
Based on this reasoning it appears that global health is a means to an end and not an end in itself. Therefore, 'improvements in the health of the UK and world's population' through 'greater coherence and consistency between international policies that affect global health' [1] are sub-objectives that support the overriding goal of economic prosperity, security and stability -traditional preoccupations of foreign policymakers.
Globalization
Several interviewees referred to the recognition of the important linkage between globalization and health as the driving force behind the attention it garnered from the Foreign and Commonwealth Office (FCO) in the early millennium. "FCO was a key player in late 1990s/early 2000s in the context of globalization". "Globalization" required a "rethinking of how government works", one in which "you need a joined-up approach". This 'joined-up' approach had already been established "under the New Labour quite early on". Another interviewee said that the strategy development process: "…looked at the whole issue and within that it became clear that globalization had an important linkage with global health such as communicable disease".
As stated in the strategy, 'safeguarding good health is not simply the province of individual countries. A globalised, interdependent world, characterized by the increasing movement of individuals and populations - and where disease recognizes no borders - means that health has become a global issue' [[1], p.7]. But, as this same interviewee added: "…there were [also] opportunities clearly in healthcare as a growth area in terms of business opportunities".
"First it's UK" "I think it would be foolish not to admit that a large part of it is done for UK benefit and that it has been recognized that there are global threats. So first it's UK but longer-term benefits in terms of relationships and protection from threats and so on. There is a need and that runs through the development concept that it's about working with other countries to reduce the global risk. The UK would want to protect its own positions, its own population, by recognizing these global threats".
The most prevalent and strongest rationale for the development of Health is Global is to benefit the UK. This rationale is evident through the focus on global health security (i.e. protecting the UK population from global health threats) that permeates the strategy.
In the wake of the 2003 SARS epidemic, the need to strengthen global health security and 'ensure the safety' [[1], p.3] of the UK population, described as the 'first duty of any government' [[1], p.3], was clearly a strong, if not the strongest, rationale behind the development of Health is Global. As one interviewee put it, "we are united when it comes to being secure in the UK". This focus is also a priority of the UK's first ever national security strategy, also launched in 2008 [36]. There is meant to be a 'strong link' [[1], p.15] between Health is Global and the national security strategy, which includes the risk to the UK of diseases such as pandemic influenza along with international terrorism, weapons of mass destruction, conflicts and failed states [36,37].
While 'global health security' per se is not clearly defined in the strategy, findings from the interviews support global health security as the driving force behind the strategy. As one interviewee stated, "it does rather focus on diseases crossing borders which is probably one of the reasons it's come to such a high profile". Others noted that "One of the things that we've done in the UK is essentially accepted global health as the securitization of the health agenda". There was some support for this perspective: "Development of sources or pockets of insecurity has led to, from my perspective, an equivocation of global health to global health security".
One interviewee talked at length about how "through securitization health diplomats got into rooms that they weren't previously in". He described this as "piggybacking" on the securitization agenda to bring focus to global health issues more generally.
"They (academic researchers) got invited to cabinet committees to sit at tables with four-star generals in a way that they weren't able to previously-academic researchers suddenly found that they could advocate for research funding because they were talking about things that might kill millions of people, like AIDS".
While the majority of interviewees acknowledged that global health security was the main motivating factor behind the strategy, several of the non-state interviewees were highly critical of this positioning. One stated: "I know why they're doing it, for government buy-in, but it's not enough to think of health as a foreign policy as global security. With this global security thing you get governments who are in it for themselves".
Another commented that: "The security of health agenda has gone unchecked and unchallenged because too many people have too much to gain from it. I'm not saying it's a bad thing but I'm not sure it's not the great thing that we're making it out to be".
These responses highlight a theme related to global health security, namely its potentially uneasy and conflicting relationship to global health equity. As one interviewee stated, "There are a lot of unaltruistic drivers of the development of the securitization of health agenda and one of these is the diminution of the health equity agenda".
Although strengthening global health security, primarily as a way to keep the UK population safe, was the main rationale for its development, two other reasons were also cited (apart from the strategy functioning to deflect criticism from Blair's Iraq debacle).
First, as one interviewee commented, the UK's traditional "colonial" approach to foreign policy means it likes to be seen as a leader on the global stage and will do things in order to protect that reputation. Another interviewee stated that it was very "disappointing" that the UK did not sign onto the Oslo Ministerial Declaration. The comment was also made, "it's so typical UK - have still got this old colonial, oh, we're so great and think that we can go it alone". In the same vein, "there was this thing that the UK still likes to see itself as a leader in things whether it is or not. That we must lead in global health. So the UK will do things in order to lead". The proposal for the government-wide strategy also alludes to UK leadership as one of the driving forces behind it, 'the UK has been at the forefront of multilateral initiatives, such as cancelling the debt for poor countries, access to medicines…the 2005 UK presidency of the Group of 8 wealthiest nations (G8) drew attention to global health, climate change, investment in health systems, and partnerships with government of developing countries' [[26], p. 859]. Health is Global was seen as a logical extension of the UK's leadership in global health.
Second, the strategy was developed in part to enhance UK business opportunities overseas in the context of globalization. 'Health as a commodity' was identified as one of the main reasons for developing the government-wide strategy in 2007 [[26], p. 858]. Indeed, harnessing 'the force of globalization' is largely about trade and investment opportunities for the UK, although in doing so it is also regarded as a way to improve global health and access to care and services for the 'poorest people in the world' [[26], p. 858]. One interviewee commented that "you have UK companies looking to win business overseas" and another said "there were opportunities clearly in healthcare as a growth area in terms of business opportunities". The strategy seeks to enhance 'the UK as a market leader in wellbeing, health services and medical products (including pharmaceutical and medical devices)' including the priority to promote the 'best of British healthcare' both because it can contribute to strengthening health systems in other countries and also because it can bring 'significant benefits for the UK economy' [[1], p.29] [[38], pp. 66-67].
For the benefit of others
Other rationales focused more on contributing to improving health and prosperity outside the UK as a goal in its own right. As stated in the foreword by the Prime Minister, 'the strategy is one way for us in Britain to build a stronger, fairer world' [[1], p.3]. Along with global health being a question of security, it is also a question of 'morality' and is defined in the foreword as a 'force for good' [[1], p.3].
As several interviewees described, Health is Global stems in part from the UK's focus on development that became a more prominent part of the government's agenda under Tony Blair. A separate Department for International Development (DFID) was created in the late 1990s and several policies that focused on the UK's role in international development were released [38]. "There was a genuine interest in development in the government in the late 1990s and a growing concern about inequalities", said one interviewee. The strategy itself comments that 'improving health and reducing health inequalities requires tackling the underlying causes of ill health-the conditions in which people live and inequalities in the resources and opportunities to which they have access'. A few interviewees frequently linked the concept of "health equity" with that of "development". As one noted, "In the UK we talk about development rather than equity". Another commented, "I don't see equity being a central concept in the policy discourse. It's in a larger concept of development, which is then unpacked in various ways but I think very much informed by the neo-liberal premise". b One interviewee noted that the "equity lens is not fundamental. It's just part of the discourse -part of the mix", which another noted was placed and kept on the agenda by non-state actors: "activists, essentially, of one sort or another". These comments downplay the importance of the health equity argument and are interesting because promoting health equity and reducing health inequalities is a fairly prominent concept throughout the strategy. One of the strategy's ten principles explicitly refers to the importance of promoting equity within and between countries [1]. As well, health impact assessments are included in the strategy as a recommended approach to assessing the equity impact of domestic and foreign policy [38].
The inclusion of health equity as a priority in the strategy may well reflect the work and determination of a few strong non-state actors in the policymaking process. As one non-state interview noted, "I had to fight so hard to get human rights and health equity in it. They only play this card when it suits them". This interviewee expanded further on this viewpoint, "they don't really believe in equity either. Again it's good when they want to score brownie points or something but if it means they are going to have to give over sacrificing something they're not interested".
While some interviewees expressed doubt as to the strategy's commitment to global health equity, Health is Global nonetheless commits to investing in development. It aims to complement and build on DFID strategies by including actions focused on combating poverty and health inequalities in support of the MDGs and improving the social determinants of health in impoverished nations [1]. One interviewee praised the government's commitment to development saying, "I think the British government … has been very proactive … partly because civil society [was] onto it straight away, in saying we will keep our overseas development commitments".
Part of the rationale behind supporting development for health is based on the premise that 'a healthy population is fundamental to prosperity, security and stability' [[1], p.14]. Quoting the WHO Commission on Macroeconomics and Health (2001), the strategy reiterates that 'ill health is a drain on society, while good health is a cornerstone of economic growth and social development in developing countries' [[1], p. 14]. Taking this one step further, the strategy also asserts that in the context of globalization, poor health 'does more than damage the economic and political viability of any one country -it is a threat to the economic and political interests of all countries' [[1], p. 7]. While the development rationale is primarily about what the UK can do to help developing nations through trade and economic growth, it also includes elements of self-interest. As one stated, "security is now more centrally part of it (i.e. the reason for investing in global health)", not development.
Human rights also figure in the strategy and its development. Donaldson argued that one of the reasons the UK must engage with the global health agenda through the establishment of a coherent global health strategy is because 'health is a human right' [39]. The right to health underpins the ten principles included in Health is Global. The strategy highlights that the UK was one of the original 1948 signatories of the Universal Declaration of Human Rights but does not make any additional references to its specific obligations under international human rights covenants. Having said this, Health is Global does commit to including health as a section in the government's annual human rights report [1]. It also makes some explicit references to human rights with an emphasis on gender rights in the context of sexual and reproductive health and cautions that unfair or unethical trade can deprive workers of their 'rights to security of employment and compensation' [ [1], p.60].
Of the 14 UK interviewees, five did not make any reference to human rights, and only two mentioned international human rights frameworks c in their response. The majority of those who referred to human rights did so simply to affirm that human rights had been a consideration in the development of Health is Global. These comments embody the normative but not the legal dimensions of international human rights, which is consistent with how state actors tend to regard human rights as a rationale for focusing on health in foreign policy [37,40]. Only one interviewee explicitly referred to health as a "right" while others referred to concepts related to human rights such as social justice and improving global health as an obligation. A prominent theme in the interview data was challenges associated with ensuring that a human rights perspective had an equal seat at the table in policy discussions along with trade, economic growth and security.
Influences from outside government
The Nuffield Trust played a key role in bringing the issue of the effects of globalization on health and HiFP to UK policymakers' attention beginning in the later 1990s and early 2000s. As one interviewee explained, "the idea came a long time ago from outside government", in particular from the Nuffield Trust and "from people with academic interest in global health diplomacy and the emerging concept of global health diplomacy". The Nuffield Trust funded leading scholars to generate critical academic research as evidence in this area and used its position to build extensive networks of senior level officials engaged in health as a foreign policy issue [41].
Other members of civil society also appeared to play an influencing role in establishing the need for a greater focus on global health by state policymakers. As one interviewee commented, "I think civil society has definitely had an influence through campaigns like Make Poverty History". The UK had played a leading role in launching the Make Poverty History campaign in 2005 which challenged the 2005 G8 Summit in Gleneagles to tackle issues of trade, aid and debt [42].
Developments in the international community that focused on health and foreign policy, in particular the publication of the Oslo Ministerial Declaration in 2007 also played a role. As one interviewee stated, these developments exerted "international pressure" on the UK "to get in the game", though as already noted, and as this interviewee emphasized, "the UK likes to go it alone". Another interviewee reflected: "So you have countries putting their stamp on their field saying this is what we understand by it. To me that raises the significance of a policy area, that when more than one state starts to do it then it becomes important for the UK to have its version of this discourse because it's achieving a degree of international prominence".
Influences from inside government
The intent to put in place a whole-of-government approach to addressing global health was a major force behind its development. As one interviewee put it, "another thing to bear in mind with New Labour is the greater focus on government coherence, joined-up policies. It's also important to see this as a driver for looking at how one policy area can impact on another". As the strategy states: 'Many UK government departments and agencies work on issues that directly or indirectly affect the health of the world's population. To be most effective in our work on global health, and to make the most of opportunities to improve UK health, we need a consistent and joined-up approach across government' [[1], p. 15].
An important factor that influenced and enabled the development of the strategy was the political support it had from the Prime Minister, Gordon Brown, and his Ministers of the day. Brown signed the foreword demonstrating "support from number 10". Another interviewee reflected that Brown likely supported the strategy out of personal conviction: "…he's committed the government to getting up to the UN target of 0.7% of GDP, he's created this new financial vehicle for vaccinations and immunizations so he himself would seem very supportive of global health but has that been done for foreign policy reasons or because it happens to be his personal conviction? I don't think he's doing this in a major way for foreign policy objectives but out of personal conviction".
Ministerial support for the strategy reflected in a common voice and position across government was also critical and appears to have been a significant enabling factor leading to its development and eventual launch. Ministers that were the leads on collaborating to develop the strategy were present at its launch. The press release that accompanied the launch included quotes from each of them [25,43]. This demonstrated as one interviewee put it, "that the baseline was all signed up to this. That is why we have an HMG (Her Majesty's Government) document". Another reflected that: "…one of the things that I've learned working in government is that conducive personalities are the biggest driver for change. One minister getting on with another minister across the pond will do more for catalyzing or evolving a policy area or an agreement between countries than years and years of careful negotiation and planning".
As several interviewees communicated, the Foreign and Commonwealth Office (FCO) was a key player in the late 1990s/early 2000s in bringing attention across government to the rising significance of health in foreign policy. It also played a major role in ensuring that Health is Global was developed and launched. "FCO ran a series of workshops on a kind of interface of health and foreign policy that helped open a few doors to the strategy actually being published, to get the conversation going with FCO at an institutional level". Several interviewees noted that "FCO support was key" and that there was a "push within government from a powerful part of government -FCO -to see this delivered".
"It would have been difficult to have seen this thing delivered if it simply came from the Department of Health. The strategy was led principally by the Foreign and Commonwealth Office. They discussed this in the context of globalization and how the UK should respond to it and there was agreement from that that one of the deliverables could be setting out what our global health policy-strategic approach could be and this dovetailed very nicely with what people were saying on the outside".
Several interviewees commented that there would not be a strategy without the lead public servant, Dr. Nick Banatvala, in the Department of Health (DH) who kept it moving forward. "There was a very, very committed individual in international health who was a dynamo, very, very brilliant and even when the time is right if you don't have an individual, a sort of champion, then sometimes you don't get things done". Dr. Banatvala was described as the "real hero". His understanding of the NGO world from which he came and his previous work with DFID were seen as critical to his success. He was also a medical doctor. It appears that Dr. Banatvala was successful in moving the strategy along not only because he was from the bureaucracy where "it really happens" but also because he had experience in and understanding of the different worlds, players and issues that needed to be integrated into the strategy. Another very important support for the lead public servant was "having people outside giving him the leverage to help inside government and for networking".

Those developing the strategy also received written responses captured through the Health is Global website and reviewed commentaries published in health and medical journals about health and foreign policy. The results of the stakeholder workshop discussion were also published on the website and used to help shape the strategy. The interministerial working group for Health is Global oversaw the organization of these workshops, which aimed to involve the UK's devolved administrations in the process and a wide range of stakeholders from private, public and civil sectors, including those from the healthcare system, health insurance industry, academic and research organizations, the media, global health charities, health professional associations and advocacy groups [44].
Interviewees described the strategy development process as "an extensive exercise of consulting and getting feedback" that took about two years to complete. "It was clear how vast the agenda was", said another. The process included a "cross-government priority mapping" exercise that "helped crystallize who was coming from what perspective". As one non-state interviewee commented, "a lot of us learned a lot about how government works and in a way just that process itself was an important outcome. We got to know each other's business".
In addition to the development of background papers and stakeholder consultations, the policy development process also considered relevant research evidence. This evidence focused on the major causes of death and ill health in the world using data from the 2006 Global Burden of Disease and Risk Factors study and the 2006 Disease Control Priorities in Developing Countries report (DCP2) [38]. The findings from both reports appear to have informed a number of objectives and action areas in the strategy, including ensuring stronger, fairer and safer systems to deliver health and related actions such as focusing on non-communicable disease and injuries and identifying and supporting research and innovation that tackle global health priorities. The strategy also refers to both peer and non-peer reviewed literature and findings of important and relevant commissions such as the WHO Commission on the Social Determinants of Health, the WHO Commission on Macroeconomics and Health and the Codex Alimentarius Commission. One might conclude from this that there was significant attention paid to research evidence in the development of the strategy and in the final product. Interviewees, particularly those from the academic community and research organizations, tell a different story. While these interviewees acknowledged that there were deliberate efforts to involve academics and other sorts of researchers in the process because it was recognized that "there needed to be more evidence", evidence was only one of many factors considered in strategy deliberations alongside politics, ideology and values.
"The drivers are not necessarily that you've got a body of evidence why global health is important. Globalization is changing the context of health and that's a general body of evidence. There's a political and discursive element to this as much as an evidence-based one. It will always be couched as evidence based because that is the main legitimating discourse for policy innovation in the UK".
One interviewee stated, "My personal take is that there's kind of a political rationale that's important in understanding why this has happened rather than [it] being evidence based. To the extent that it is evidence based, its evidence of emerging infectious diseases". Another commented: "How do you start thinking about evidence based policy for trade, for example, when it is such a political topic? I mean there's an evidence base for pandemics because they're the more scientific things but its other things even the climate change stuff that we're just starting to do. So a lot of it is based on consensus, not evidence".
This particular interviewee also reflected on how evidence could be used going forward as the policy is being implemented: "My own ideal would be to have evidence collated now to develop policy further as well as supporting policy that exists…to be honest, it … could have done so much more in that section about the research and how research would be used to improve policy for the future and give a state of the art - where we are now and where do we need to get to".
In contrast to perspectives provided from researchers, one of the lead public servants provided another point of view: "At one stage we had quite a difficult time with some of those researchers because they felt that the document as a sort of earlier iteration was not sufficiently evidence based and there lies a tension between policymakers and researchers. 'You're identifying these four priority areas. Where's the evidence for that?' There is time when you accept that you take the evidence as it is and you move forward on a particular piece of policy".
Reconciling differences
Developing Health is Global and agreeing on a final product required significant consensus building and reconciliation of differences and interests across the many players involved. Moreover, the government of the day had committed to seeing the strategy developed so a "certain degree of pragmatism" was required to ensure a final product was arrived at in a timely manner. Early on in the policy process it was acknowledged that there were potential conflicts between the priorities that were emerging and that there might be difficulty reconciling UK trade interests with sound development policy [26,45]. Indeed, enhancing the ability to reconcile differences across government in the area of global health through a whole-of-government approach was one of the reasons the policy was developed in the first place [39]. As a way to reduce policy conflicts, in the strategy the Department of Health committed to supporting other departments in preparing global health impact assessments of their foreign and domestic policies [1].
According to several of the interviewees, the process of developing the strategy did indeed advance understanding about global health and the reconciliation of potential differences in this domain: "I guess one of the useful things that's come out of this is that we've been able to improve discussions between …across government on what the different elements and issues are that intersect global health and then try to iron out what has been, at times, glaring contradictions in policy positions".
The process of ironing out contradictions and differences was clearly not an easy one, with the majority of interviewees describing it as difficult and requiring significant compromise to arrive at a final document: "It's tough because our government doesn't think the same on anything and each department has its own priorities and mandates so trying to get something that all would sign-off on including the PM was a big challenge. So he (the lead from the Department of Health) took stuff out".
Another commented, "I think it shows a bit of a tussle that it had to settle in order to be written. It had to settle for a slightly narrower definition of health". Others described a "push and pull" process, "huffing and puffing over drafts" and being involved in interactions that "weren't altogether as productive as they might have been" resulting in a product that "wasn't truly a joint production". Another commented that the "broad tone is collegial and amicable but it's too far to say it's consensual, there were very definite trade-offs".
Interviewees provided significant insight into the trade-offs as well as to the power struggles that took place among government players. As one interviewee noted, "DFID is like an NGO in government. The powerful are trade, industry, FCO". In keeping with the traditional 'high politics' areas of foreign policy, this comment likely explains what priorities rose to the top and received the greatest profile in the strategy (security and trade) as compared to those that received less (social determinants of health, health as a human right).
Two main areas that required negotiation and compromise were clear in an analysis of the interview data. First, as one of the comments in the previous paragraph highlights, there was a lack of consensus as to what global health actually is. For example, there were those who regarded global health as primarily about diseases that cross borders (e.g. Health Protection Agency) and others who regarded it as being much broader and also encompassing the social determinants of health (e.g. Department of Health International Unit, DFID, NGO interviewees). As one interviewee explained: "The policy community that focuses on global health is very, very small. You're basically talking about one unit of a unit within the Department of Health. I think not more than half a dozen middle-ranking civil servants in DFID, and I'd be surprised if we've got half a dozen people in FCO who have a specific portfolio brief for global health. To them global health means all the things that have been before, special health regulations, but also trade and IP, migration policy and health - outside that group of people, global health is equated to health security".
To resolve this issue, it appears that the players agreed to settle on what one interviewee called a "slightly narrower definition of health", couched primarily within the rubric of global health security.
Second, as anticipated when the strategy was first under discussion, there were significant debates related to priorities that may conflict, such as 'enhancing the UK as a market leader in well-being, health services and medical products' on the one hand while 'promoting access to medicines' [1] on the other. In other words there were conflicts between priorities that would primarily benefit the UK and certain interests within the UK (e.g. trade, security) versus those that were meant to primarily benefit others (e.g. development, human rights). Several themes in the interview data elucidate this struggle further.
The first example has to do with international trade in conventional arms, which is a significant issue given the UK is one of the world's largest arms exporters [37]. As an interviewee from FCO said: "There are certainly a lot of civil society organizations saying if you are serious about improving global health outcomes, you should be tackling the arms industry. Now that takes us into very, very sensitive territory for FCO because you know, automatically there are going to be conflicting interests at play".
A few other interviewees also commented on the arms issue with one indicating that "there were some pretty robust discussions between the Ministry of Defense, Department of Health and the Foreign Office around what our global health strategy would mean to things like arms agreements". Another added: "I remember at one point we had a discussion of arms and how you know, how the arms industry was going to be integrated into all this and accepting that countries have a right to defend themselves but nevertheless, some arms exports end up in regimes which are unsavory, to say the least. I think that report rather dodged around that kind of issue".
This interviewee is likely referring to the section of the strategy in which the UK calls for a legally binding treaty for the international trade in conventional arms without impinging on 'legitimate, responsible defense exports' [ [38], p. 21]. What this means exactly is not elaborated on in the strategy but it can be assumed that "dodging the issue" through lack of clarity and the use of diplomatic language was perhaps the only way that relevant government departments would collectively sign off on this content in the strategy.
Another concern that at least half of the interviewees mentioned relates to the issue of advancing UK as a market leader in health and supporting UK industries abroad while at the same time also aiming to reduce health inequalities through, for example, contributions to improving health systems and access to medicine and technology in low and middle income countries. While the previous example brought in FCO, Defense and Health, comments about this issue focused primarily on conflicting priorities across DFID and the Department of Health. As one interviewee stated, "part of the DH role is to act as a sponsor for the UK health economy and you've got Trade and Industry which are responsible for trade promotion. DFID has been working on access to medicines so you can infer a potential kind of conflict there". Another elaborated further: "When you look at trade and intellectual property issues, DFID would always say, well, look, what can we do for the developing world? And when that comes into conflict with actually what might be most beneficial for UK companies in terms of how they can get stronger intellectual property protection globally, DFID will not soften its stance which would be at odds with what other departments are doing, say the Department of Health which is the lead sponsor department for the pharmaceutical industry within government and would want to pursue a policy position that government would be sympathetic to industry".
As an interviewee from DFID emphasized: "The thing that drives us and drives most development agencies are the MDGs. That's our focus, that's our mission. That will drive things first and then we will try to align with other domestic partners. Our first and foremost objective is reducing poverty. That comes before anything else".
One interviewee provided significant insight into the nature of the discussions that took place to hash out these sorts of conflicts and expressed some frustration with the DFID position. These comments also highlight that one of the positive aspects of going through the process of developing a strategy was the opportunity to hold discussions about contentious issues since it was imperative that a strategy be agreed upon and launched.
"There were times when we got to air some intellectual laundry that we never, never got to in public or private between ourselves before. I remember one particular exchange when we wanted issues related to medical devices, the pharmaceutical industry, the biotechnology industry, wanted all three of those approaches to get into the chapter and so we sat down, we had a meeting and agreed to the points we wanted to make and I went away and produced a draft. We shared it with our colleagues in DFID and it's not to say that there was ever any kind of intention of fighting but their comments displayed naiveté about the importance of economic issues and wealth generation for the UK and they'd state, which is true, our department can't support priorities which are about the UK getting richer. And we'd say, well, we understand that this isn't DFID speaking, this is the UK government. It's part of the kind of mentality that happens with all governments. It was frustrating that certain departments and certain colleagues still didn't make the intellectual leap required to have a joinedup piece".
When asked how these sorts of issue were eventually resolved so the strategy could be written, this interviewee said, "That particular chapter ended up being compromised and shrunk in size, unfortunately". It also appears that there was an agreement between DFID and DH that the strategy would reiterate DFID's commitment to working with 'the poorest countries in the world' [ [38] p.58] while DH would concentrate on middle income and emerging countries, such as Brazil, India and China [1].
Another interviewee provided some additional thoughts on how tensions were resolved during the policy development process: "I think there was a group of people who have a particular priority focus but who fundamentally have the same values and therefore discussion and rediscussion and redrafting and ensuring that the text reflected the commitments of the department whichever department you came from was not a completely painless process but it was done in a number of iterations to ensure that all stakeholders were content. And I think that was a critical part of ensuring that the strategy itself was actually accepted".
Despite the contentious issues that arose during the process and trade-offs and compromises that were required to "all meet in the middle", the majority of interviewees were satisfied with the final product. The process of developing it was seen as beneficial in achieving greater cross government understanding of issues and policy positions.
Policy implementation process
The strategy includes a detailed implementation plan with specific actions, each with an assigned lead department(s). An interministerial group made up of representatives from the departments involved in the development of the strategy (DH, Defense, DFID, FCO) is responsible for implementing the strategy and monitoring progress [1]. A cross-government steering group of senior officials supports the interministerial group [1]. Actions it is taking to ensure partner involvement include regular partner events to review global health challenges and to assess whether the strategy is making an impact [38].
The strategy did not commit new resources to support implementation but rather reiterated the relevant resources that it had already committed to global health, particularly those for international development funneled through DFID. The strategy also emphasized that existing resources from other government departments are important and that 'these resources need to be used strategically if they are to have maximum impact. This means supporting the priorities and approaches set out in the strategy and working with others to deliver them'.

One area of new investment included in the strategy pertains to the global health security priority. While details of the level of funding and which department is contributing to it are not provided, the strategy commits to 'new funding for the HPA (Health Protection Agency) to do more work internationally' [[1], p. 21] and support for a new Chatham House Centre on Global Health and Foreign Policy [38]. This investment demonstrates the importance of the strategy's global health security priority. In addition, the strategy also commits to providing funding for the new European Council on Global Health that aims to strengthen the European voice in global health governance and be a powerful advocate for a sustainable European commitment to global health. At the time this study was undertaken, Global Health Europe Task Force members included Dr. Nick Banatvala, who led the development of Health is Global, and Dr. David Heymann, Director of the Centre for Global Health Security at Chatham House [1,46,47].
Several interviewees mentioned the role of Chatham House in helping to implement the strategy. Most considered this to be highly positive given its long standing reputation as a 'world-leading source of independent analysis, informed debate and influential ideas' about international and global issues [48]. One interviewee from an NGO, however, was highly critical of this move, arguing that Chatham House has no experience in health and its focus on global health security as opposed to health equity was a "cop-out".
The Health is Global strategy set out a set of actions against which indicators would be developed and progress measured. It committed to reviewing progress regularly 'to improve the way we are working, ' [[1], p. 16] and overall impact at the end of the life of the strategy to determine what to do next [1]. As part of the evaluation process, it would commission annual independent reviews on progress on particular aspects of the strategy with a full review in 2013. It is not clear if such reviews have indeed been annual as only one such review conducted in June 2010 is publicly available [49]. It does appear, however, that the interministerial group is tracking progress on a regular basis since the strategy was released, as reported at partner meetings held at Chatham House and in partner newsletters [50,51]. Furthermore, the UK government launched a Health is Global outcomes framework in 2011 [52]. Starting with the original strategy and the recommendations from the first independent review just mentioned, the government developed an outcomes framework to support the next phase of the strategy. This framework reaffirms the guiding principles and focuses efforts towards achieving a consolidated set of twelve high-level global health outcomes by 2015 that will be underpinned across government by departments' own delivery plans.
Interviewees provided their perspectives both on what impact they thought the strategy has had so far as well as perspectives on success going forward. Overall, interviewees regardless of sector described the strategy as a positive and important milestone, particularly because it focused minds in a "more consistent way" across government, has been a "good driver" for individual and collective work because it is now "written down" and serves as a guide for identifying "how each department fits and where the gaps are". It was also described as a concrete example of the UK's commitment to global health, "sticking our flag in the sand is a successful output" said one. Several mentioned what they regarded as concrete positive outcomes of the strategy so far, including the launch of the research program at Chatham House and new funding for the Health Protection Agency. Another commented that the strategy is making a difference because it "builds awareness and support for the MDGs". A few interviewees nonetheless remained querulous about the strategy's impact: "Has the government kept to it? Is the government who signed onto it keeping to it? What has this strategy led to that would not have happened anyway?" And: "Success will be what happens to the policy community around this. Will there be greater interaction between FCO, DFID and Health? Greater cooperation? Genuine engagement?"
Discussion
The importance of actors and leaders
Different types of actors played a significant role in influencing the creation of Health is Global and ensuring that it was developed, launched and implemented.
The policy community
While Health is Global was launched in 2008, the policy community had been actively influencing its eventual development for at least a decade earlier. The Nuffield Trust, in particular, played a major role in attracting and sustaining focus and analytical scrutiny on the link between globalization and health and, with partners, in connecting the various players in the policy community (e.g. government, academia, think tanks). This leading and connecting role is critical to preventing the fragmentation of the community and the policy alternatives it espouses, which can significantly weaken such a community's clout as influencers in the process [21]. A more closely knit policy community can generate consistent ways of thinking, common language and issue framing, all of which are important to softening up a policy space and stabilizing a policy system to influence change. That Health is Global was framed according to recommendations stemming from the Nuffield Trust led processes indicates that this policy community had an impact.
Government actors are part of policy communities and in the UK case the most prominent of these were FCO, DH and DFID. Whether actors from these three sectors considered themselves to be part of the same policy community during the policy development process is not known, although given the significant consensus building that was required to arrive at an agreed upon strategy, likely they did not. Instead, as several interviewees described, the policy development process itself brought departments with disparate views closer together creating somewhat of a closer knit policy community in government.
An interesting observation stemming from the interview data pertains to the somewhat tense interactions that academics who contributed to the process had with government policymakers. On the one hand, academics thought that there needed to be a greater focus on gathering and scrutinizing evidence to inform the policy, while on the other, the policymakers were focused on being pragmatic and moving forward with whatever evidence they had on hand. This tension is not surprising and is supported by ample literature about the challenges associated with the evidence-informed policy and decision making processes [53][54][55][56][57][58][59].
It appears, then, that while there was representation and participation from the academic community in the Health is Global process this does not necessarily go hand in hand with the conclusion that research evidence played a central role in influencing policy decisions. Drawing on conclusions derived from the application of Kingdon's model, policy is primarily the result of politics, policy entrepreneurs and the convergence of the three streams and not the result of research evidence per se. The interview data corroborates this conclusion. To repeat one particularly relevant comment, "my personal take is that there's kind of a political rationale that's important in understanding why this has happened rather than being evidence based. To the extent that it is evidence-based, it's evidence of emerging infectious diseases". This comment resonates with Labonté's argument that technical evidence, especially about risk and pandemic preparedness, may have traction in global health policymaking as it aligns with the health security focus, but rarely is there a full consensus on evidence with respect to other global health areas such as aid and development, leading to a significant amount of political interpretation [60].
Policy entrepreneurs
According to Kingdon's model, policy change cannot take place without leadership from tenacious policy entrepreneurs [21]. In the UK case a policy entrepreneur played the key leadership role in advancing policy directions. While such entrepreneurs do not necessarily need to be politicians or public servants, based on the findings from this study (including the three background cases not reported on in this article) leaders in GHD processes appear to possess at least two special attributes. First, they are either politicians or senior public servants, and second, they encompass both health and international relations expertise through formal training and/or education or a combination of the two. Three of the four leaders in the case studies (including the three background cases) were medical doctors who could call upon their status as the elite profession within health, as needed. Despite their authority and influence, however, policy entrepreneurs cannot be successful unless they have backing of those from the highest level of political power. In the UK case, Prime Minister Brown was personally committed to Health is Global. Support from "number 10" was viewed as essential for the process to succeed. Similar political support and policy leadership from the very top was seen as necessary for the policy directions taken in the background cases.
According to Kingdon, policy entrepreneurs play the key role in 'softening up' the system and linking the problem, policy and politics streams. One way in which they do this is by developing their ideas and proposals in advance of when a policy window may open. This was indeed what happened in the process leading up to Health is Global. "The real hero", as one interviewee called him, Dr. Banatvala, contributed to the precursor proposal, a summary of which was published in The Lancet [26].
World Health Organization (WHO)
At about the same time that the UK released Health is Global, it also published a UK Institutional Strategy that will guide and frame its work with the WHO. The UK's WHO strategy is a joint strategy of the health, international development and foreign affairs departments. The strategy coheres with Health is Global and sets how the UK and WHO will work together most effectively to support the goals and objectives of the UK government and of the WHO [61]. This strategy and the multiple references that Health is Global makes to the priority that the UK places on working with and strengthening the WHO to advance global health objectives is consistent with findings from the background cases.
The UK WHO strategy acknowledges that as a 'major force for good in global public health, ' [[61], p.6] the WHO is at the heart of responding to global health challenges, is responsible for providing leadership in global health matters and is also a key development partner for delivering on the MDGs [61]. The UK acknowledges that the WHO as an institutional actor in the context of globalization plays a major role in helping them to cooperate to achieve common global health objectives. While self-interest prevails as the main reason that states like the UK are developing strategic approaches to investing in global health, acknowledging that the WHO is an important and relevant actor, though also in need of significant reform [62][63][64], signals that negotiation and consensus building to improve population health both within and across states is both necessary and possible.
The importance of timing and stream alignment
Timing and the alignment of the problem, policy and politics streams found in Kingdon's model were critical to the eventual development and government-wide agreement on Health is Global. The growing awareness of global health and the potentially important relationship between health and foreign policy (part of the policy stream) had been brewing for several years in the UK policy community before the SARS crisis hit as the "wake-up call" for the government to take concrete action. While SARS was what Kingdon would call the 'focusing event' (a component of the problem stream), there also appeared to be political motivation -to improve the UK's global reputation post Iraq. Investing in global health was arguably one potential way to do this. The UK's commitment at the time to helping to achieve the MDGs (another aspect of the problem stream) was also a strong motivating factor for focusing on global health. Within this mix of policy, problems and politics, leaders within the bureaucracy (policy entrepreneurs) had set the stage for catalyzing stream alignment when the policy window opened with the SARS crisis. A bandwagon effect occurred at that point in time and created incentives for the various government actors, with non-state actor participation, to arrive at an agreement on whole-of-government global health policy: the Health is Global strategy.
Revolution? remediation? regression? -self-interest dominates
Findings in the UK case lead to the conclusion that Health is Global was developed primarily to benefit the UK. Such self-interest is reflected in the strategy's focus on global health security, the priority the strategy places on capitalizing on global health as a business opportunity and the revelation that the strategy was likely developed in part to improve the UK's global reputation that had been tarnished as a result of its involvement in Iraq. These observations align with Fidler's remediation conceptualization, that the strategy is using global health to further other traditional foreign policy goals. In contrast to the revolution conceptualization, health is not an overriding normative goal of foreign policy but rather a means to an end.
While "First it's UK" was the driving motivation behind Health is Global, not all interviewees agreed with this rationale, arguing that it was a threat to health equity and undermined development efforts. Using development aid to further the UK's security agenda is one of the policies that Britain's new Prime Minister, David Cameron, appears to be supporting. In his first Lord Mayor speech in November 2010, Prime Minister Cameron, like Brown before him, focused on hard-headed internationalism albeit with an even stronger 'hard-headed' intent.
'Our foreign policy is one of hard-headed internationalism. More commercial in enabling Britain to earn its way in the world, more strategic in its focus on meeting the new and emerging threats to our national security…. Above all, our foreign policy is more hard-headed in this respect. It will focus like a laser on defending and advancing Britain's national interest' [65].
This statement reinforces the conclusion that it is the primacy of self-interest that will drive foreign policy under the Cameron government, potentially even more so than it had under Brown. In October 2010, Cameron unveiled the new UK security strategy allocating a larger proportion of DFID's budget to addressing issues of conflict [66]. Strengthening governance and security in fragile and conflict-affected countries, in particular Afghanistan and Pakistan, is among DFID's five priorities [67]. Critics described this as 'development as counterterrorism' , arguing that aid should be disbursed on a needs basis and not 'according to Whitehall's security agenda' [68]. Investing in development based on self-interest also appears to be part of the messaging in DFID's 2011-2015 business plan which refers to development as 'tremendous value for money and good for our economy, our safety, our health and our future ' [ [67], p.2]. In keeping with the government's new structural reform agenda, the plan strongly focuses on demonstrating value for money with an emphasis on results, transparency and accountability. While this approach to aid can potentially allow a better assessment of aid effectiveness, carried to an extreme it could end up favouring projects with short-term deliverables at the expense of long-term infrastructure, or on countries with a greater existing capacity to show returns at the expense of more vulnerable states [37]. Having said this, and while DFID's business plan contains certain 'hard-headed' elements, it also includes those that reflect the UK's commitment to benefiting others by investing in aid. The plan reiterates the UK's commitment to spending 0.7% of gross national income on aid by 2013, which OECD reports the country is well on the way to achieving, [69] and includes priorities such as leading international action to improve the lives of girls and women, combating climate change, responding to humanitarian disasters and improving the global development system [67].
While self-interest manifested through the global health security framing may attract the attention of foreign policymakers, such positioning is potentially fraught with risk for global health and can lead to what Fidler refers to as global health 'regression.' Health may have risen as a foreign policy concern but in a way that tarnishes its normative underpinning, or what made health special in the first place, leaving it at the margins of traditional foreign policy and vulnerable to shifting foreign policy attention. Adding to this risk is the lack of a clear and universal definition of global health security. As Aldis argues, policymakers in industrialized countries emphasize protection of their populations against external threats when talking about global health security, while policymakers in developing countries and the UN system understand the term in a broader public health or human security context [70]. This definitional problem may help explain why the term is used somewhat confusingly as a catch-all phrase in the Oslo Ministerial Declaration. As a policy position developed by Ministers from both developed and developing nations it raises the question as to whether there was a common understanding of this concept as presented in the Declaration. It is also difficult to assess the impact of such high profile framing on other global health policy processes, but as Kingdon's model highlights, issue framing is an important part of the policy process that can lead to a significant bandwagon effect [21].
Policy process is global health diplomacy
As noted in the Introduction, GHD is generally considered to involve policy-shaping processes around health challenges, or utilizing health concepts and issues to achieve other political, economic or social objectives. The examination of the Health is Global policy process provides evidence to support this definition. It also leads to a few specific conclusions about the nature of GHD at the state level when actors aim to develop whole-of-government strategies. As a starting point, the in-depth analysis of the UK process allows a number of more specific defining characteristics to be formulated:
- While non-state actors provide important inputs into the process, the final negotiation of the content of the strategy takes place among state actors, in particular those representing the health, foreign affairs and development government departments which, assuming that there is political will behind the policy direction, are compelled to arrive at a strategy within a given timeframe that is acceptable to all relevant government actors.
- As the UK case revealed and the Kingdon model helps to explain, the process leading up to the state negotiation stage can be lengthy, potentially lasting many years. It is during this time that non-state actors can act in policy communities as policy advocates, softening up and framing the policy space. Connections with policymakers and other policy entrepreneurs provide opportunities to influence policy direction further, as does the development of evidence to support varying policy alternatives. Non-state actors can play an important challenging function, particularly during the strategy framing process, by drawing attention to global health equity issues.
- State actors negotiate the finer details of the strategy, but as the UK case showed, a process that aims to include actors from the public, private and civil sectors who will work with government to eventually implement the strategy appears to be an effective approach. The desired outcome helps to determine what process to put in place. If the intent is to develop a comprehensive global health strategy that will require multi-sectoral actors to help implement, such actors should be involved as partners in the process from the outset.
- Leadership in the policy process by an authoritative, credible policy entrepreneur is a critical success factor. Such leaders have specific attributes, the most important of which is that they have knowledge, experience and training in both health/medicine and international relations, enabling them to understand, be credible and connect within both contexts. Political leadership from the head of government is also critical.
- The whole-of-government process is difficult, complex, fraught with differing policy perspectives and positions, and time consuming. Skillful negotiation and consensus building is required to arrive at an acceptable strategy for all involved, as are cross-government processes and structures such as interministerial working groups and committees. The UK case showed that significant compromise could be required to reach an agreement and 'sign-off' on a strategy. While the process is difficult, interviewee comments indicated that it was nonetheless an important way of building common understanding across government and broke down silos to working together. This was perceived to be a positive consequence of the policy development process.
Conclusions
This paper provides significant insight into why and how health is integrated in foreign policy, which has helped to better define and crystallize the global health diplomacy process at the state level. Many of the main conclusions are similar to the unreported findings from the three background cases conducted as part of this study. Self-interest is the dominant reason that the UK developed Health is Global, a rationale that could become even stronger and deeper in a climate of economic constraint. This conclusion is consistent with the results of the Norway and Swiss cases in which self-interest was also the dominant rationale for investing in global health, i.e. to protect national and international security and their economic interests. In these cases, consistent with that of the UK, investing in global health was also seen as a way to enhance the state's international reputation. In terms of self-interest, however, Brazil was an outlier. International solidarity and health as a human right have been the driving forces behind its long-term investment in development cooperation to date. Investing in health for normative reasons was also a prevalent though weaker theme in the UK, Swiss and Norwegian cases.
In the UK case and the three background cases, the role that policy entrepreneur leaders, particularly those with expertise and experience in both health and international relations and other actors play in the process is extremely important. The WHO is regarded as a highly important and relevant institutional actor in global health diplomacy but recently it has been argued that organizational reforms are greatly needed if it is to continue to play this role effectively. This discussion also highlighted particular characteristics of the global health diplomacy process at the state level that may be helpful for other states to consider when developing similar whole-of-government global health strategy. Even if the current context in such countries is not ideal for such a strategy to take root because of the world's current economic situation, based on a more in-depth understanding of the process, it is important for policy
|
2015-03-19T23:44:59.000Z
|
2013-06-06T00:00:00.000
|
{
"year": 2013,
"sha1": "67b232c19fc7ebcd652f548d0762d566f58f9b0b",
"oa_license": "CCBY",
"oa_url": "https://globalizationandhealth.biomedcentral.com/track/pdf/10.1186/1744-8603-9-24",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ff6e47a66ae879d95cfcf55aa48d83daa341803f",
"s2fieldsofstudy": [
"Political Science",
"Medicine"
],
"extfieldsofstudy": [
"Economics",
"Medicine"
]
}
|
223002283
|
pes2o/s2orc
|
v3-fos-license
|
REASONS FOR OMISSION OF PHYSIOTHERAPY IN A TERTIARY CARE INSTITUTE IN MUMBAI
Background: As critical care progresses, we are able to salvage many patients who, in past decades, would not have survived. A good number of these patients spend prolonged time in the intensive care unit (ICU), the after-effects of which include myopathy, respiratory muscle weakness, neuropathy, etc. Physiotherapy is an especially important tool in the arsenal of critical care medicine that helps in the prevention of morbidity. However, there are several reasons why physiotherapy is omitted in the intensive care unit.
INTRODUCTION
As intensive care research and development advance, we are able to salvage many more patients after long-drawn fights with death in the critical care unit. It is not only important to save a patient but equally important to give the patient a good quality of life. Physiotherapy is an important intervention that prevents and mitigates adverse effects of prolonged bed rest and mechanical ventilation during critical illness. Physiotherapy is incorporated as an integral part of management in intensive care in many countries. Many studies have reported reduced length of stay and improved outcomes [1,2]. In a systematic review of 10 randomized controlled trials, it was shown that physiotherapy had no effect on mortality but reduced length of hospital and ICU stay and increased the number of ventilator-free days [3]. However, in a recent meta-analysis of 5 randomized controlled trials with a sample size of 603 patients, it was shown that multimodal physiotherapy lowered mortality with no effect on length of stay; it is important to note, though, that all the trials enrolled very few patients [4]. Hence there is a definite need for high-quality trials and new evidence in this regard. Even the appropriate timing, nature of interventions and intensity ("dose") of physiotherapy remain unproven. However, the prevention of this physical deconditioning, and thus reduction in the length of mechanical ventilation and stay in the intensive care unit, will help in reducing cost. In most hospitals in developed countries, physiotherapy is seen as an integral part of the management of patients in ICUs. The precise role that physiotherapists play in the ICU varies considerably from one place to another depending on where the ICU is located, local tradition, staffing levels, training, and expertise. The referral process is one example of this variation, whereby in some ICUs physiotherapists assess all patients, whereas in other ICUs patients are seen only after referral from medical staff [5].
There is no study from India that has looked into the common causes of omission of physiotherapy in indicated patients. The authors therefore set out to find the reasons why physiotherapy was omitted and to take probable measures to prevent it.
MATERIALS AND METHODS
This study involved data collection from 240 patients admitted to a 30-bed mixed medical and surgical ICU over a period of 3 months at S.L. Raheja Hospital, Mahim, Mumbai, Maharashtra. Two dedicated physiotherapists were appointed to manage the daily physiotherapy routine for patients prescribed physiotherapy in the intensive care unit. The intensive care physician was responsible for the prescription of physiotherapy in the unit. Each physiotherapy session spanned between 20 and 30 minutes depending on the illness and prescription. This included assessments for initiation of early mobilization and permissible activity levels based on patient physiologic characteristics and diagnoses. The therapy included positioning, mobilization, manual hyperinflation, chest manipulation, suctioning, breathing exercises, limb exercises and postural drainage. The physiotherapy sessions were planned a day in advance by the designated physiotherapists based on patient diagnosis, requirement, and improvement or deterioration in the patient's condition. The same plan was followed on the day of the session.
The sessions were divided into three slots: morning session 8 am to 12 pm, afternoon session 1 pm to 4 pm, and evening session 5 pm to 8 pm.
The average occupancy of the intensive care unit was 70%, and on any given day there were no more than 10 patients scheduled for elective physiotherapy sessions. The physiotherapists formulated the exercise plans and timings for the concerned patients a day prior and created a timetable. If a session could not be delivered at the scheduled time (e.g. 9.30 am), the patient was revisited within the designated shift (e.g. between 8 am and 12 pm) by either physiotherapist in order to complete the session. The physiotherapist was provided with a survey table which had to be filled in by the physiotherapist in case a session was omitted. These forms were then collected by the investigators and analyzed after the patient was discharged from the unit. In case a physiotherapy session was missed, the appropriate reason was tick-marked by the physiotherapist. The common reasons included in the questionnaire are listed in Table 2.
The following results were drawn from the survey (Figure 1, Figure 2): 1. 32% of the patients were not able to receive physiotherapy at the scheduled timings. 2. The maximum hindrance to the delivery of physiotherapy was found during the morning shift.
DISCUSSION
The reasons implicated were:
A. Patient refusal (36%): attributed to inadequate sleep the night prior; this was the most common cause of delay or omission of the physiotherapy session.
B. Patient movement to alternate areas (22%): patients were shifted for scanning studies or subjected to imaging during the first half of the day.
C. Feeding cycle related (8%): because the time coincided with meal timings in orally fed patients.
D. Daily rounds (18%): physician rounds, nursing rounds, bedside teaching.
E. Unexpected change in vital parameters (16%): sudden change in medical condition, especially heart rate, blood pressure and respiratory parameters.
3. Afternoon (post-lunch) physiotherapy sessions were missed 22% of the time. This was mostly related to delays in delivering food at the proper timings.
4. Evening physiotherapy was omitted in only 2% of cases, the reason being limited to a sudden change in medical condition.
It is well understood that physiotherapy is an important intervention that prevents and mitigates adverse effects of prolonged bed rest and mechanical ventilation during critical illness. Rehabilitation delivered by the physiotherapist is tailored to patient needs and depends on conscious state, psychological status and physical strength. It incorporates any active and passive therapy that promotes movement and includes mobilization. The National Institute for Health and Clinical Excellence (NICE), the European Respiratory Society and the European Society of Intensive Care Medicine recommend early assessment and management of physical morbidity (including mobilization and muscle training) delivered by physiotherapists and other health professionals. They also recommend that physiotherapists should be responsible for implementing mobilization plans and exercise prescription in conjunction with other team members. Early mobilization can reduce ICU and hospital length of stay. A study that implemented a physiotherapy-led early mobility protocol showed decreased intensive care unit and hospital length of stay (11.2 versus 14.5 days) and a potential cost saving of 7% of standard patient care costs.
Even after understanding the potential advantages of physiotherapy in the ICU, this facet of therapy is not well studied. The authors, by way of this study, tried to understand the causes of omission of physiotherapy in indicated patients. This is the first Indian observational dataset from a large tertiary care intensive care unit to examine the reasons for omission of physiotherapy. As per the results of this study, it was noticed that among the sessions (i.e. morning, afternoon and evening), the morning sessions involved the maximum omissions. The prime reasons for this were:
A. Refusal as a result of improper sleep due to: 1. noise levels in the ICU at night (specifically noise from the monitors); 2. anxiety of having to sleep in an unaccustomed location and position; 3. pain; 4. collection of labs at 4.00 am, which disturbed all patients (this protocol was in place in the hospital so that laboratory results could be ready by 8.00 am); 5. sponging of the patient at 5.30 am (which disturbed the patient a second time and resulted in disturbed sleep).
B. Scheduled CT scans, MRI and planned procedures (with no information given to physiotherapists).
C. Morning rounds of physicians.
D. Sudden change in health, for example an arrhythmia like atrial fibrillation, supraventricular tachycardia or sinus tachycardia.
There were very few omissions of physiotherapy in the afternoon and negligible omissions in the evening sessions. Physical therapy indeed is an important part of patient care. The above causes are "real world" reasons for omission of physiotherapy in our institute. Taking a cue from the results of this study, a few protocol changes have been made in the authors' institute, which include: 1. reducing noise levels in the night by promptly addressing alarms; 2. switching off lights in the night so that patients get effective sleep; 3. giving a small dose of an injectable or oral anxiolytic so that the patient gets adequate sleep in the night and remains fresh in the morning; 4. scheduling blood collection and sponging together at 5.00 am in the morning, ensuring no further wake-ups; 5. early morning updates to the physiotherapist regarding scheduled scans so that physiotherapy can be planned accordingly; 6. completing rounds quickly and doing bedside teaching in an adjoining room so that physical therapy can continue; 7. ensuring indicated corrections of electrolytes and oxygen levels are made quickly in the morning, avoiding arrhythmias like atrial fibrillation; 8. scheduling bronchodilators and cardiac medication 20 minutes prior to physiotherapy sessions to ensure smooth conduct; 9. coordinating diet timings with physiotherapy sessions so that there is minimal interruption of the session.
CONCLUSION
Physiotherapy is an important part of the treatment of critically ill patients. However, there are many obstacles that lead to these therapy sessions not being conducted on time and effectively, as shown in this paper. There is a great need to formulate country-specific protocols for assessing the need for physiotherapy, the timing of physiotherapy, the dose of physiotherapy and the actual form of therapy per session. There is also a need for large randomized controlled trials in the field of physiotherapy that could address the above issues.
We acknowledge the physiotherapists Dr Reema and Dr Garima for helping the authors during data collection. Conflicts of interest: None
|
2020-10-16T14:22:09.139Z
|
2020-10-11T00:00:00.000
|
{
"year": 2020,
"sha1": "9227bcb5587b116e935ac2ae52136abdaaab2088",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.ijmhr.org/ijpr.8.5/IJPR.2020.166.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "9227bcb5587b116e935ac2ae52136abdaaab2088",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
195767513
|
pes2o/s2orc
|
v3-fos-license
|
A Joint Optimization Approach of LiDAR-Camera Fusion for Accurate Dense 3D Reconstructions
Fusing data from LiDAR and camera is conceptually attractive because of their complementary properties. For instance, camera images are higher resolution and have colors, while LiDAR data provide more accurate range measurements and have a wider Field Of View (FOV). However, the sensor fusion problem remains challenging since it is difficult to find reliable correlations between data of very different characteristics (geometry vs. texture, sparse vs. dense). This paper proposes an offline LiDAR-camera fusion method to build dense, accurate 3D models. Specifically, our method jointly solves a bundle adjustment (BA) problem and a cloud registration problem to compute camera poses and the sensor extrinsic calibration. In experiments, we show that our method can achieve an averaged accuracy of 2.7mm and resolution of 70 points per square cm by comparing to the ground truth data from a survey scanner. Furthermore, the extrinsic calibration result is discussed and shown to outperform the state-of-the-art method.
I. INTRODUCTION
This work is aimed at building accurate dense 3D models by fusing multiple frames of LiDAR and camera data as shown in Fig. 1. The LiDAR scans 3D points on the surface of an object and the acquired data are accurate in range and robust to low-texture conditions. However, the LiDAR data contain limited information of texture (only intensities) and are quite sparse due to the physical spacing between internal lasers. Differently, a camera provides denser texture data but does not measure distances directly. Although a stereo system measures the depth through triangulation, it may fail in regions of low-texture or repeated patterns. Those complementary properties make it very attractive to fuse LiDAR and cameras for building dense textured 3D models.
The majority of proposed sensor fusion algorithms typically augment the image with LiDAR depth. Then the sparse depth image may be upsampled to get a dense estimation, or used to facilitate the stereo triangulation process. However, we observe two drawbacks of these strategies. The first one is that the depth augmentation requires sensor extrinsic calibration, which, compared to the calibration of stereo cameras, is less accurate since matching structural and textural features can be unreliable. For example (see Fig. 2), many extrinsic calibration approaches use edges of a target as the correspondences between point clouds and images, which will have issues: 1) cloud edges due to occlusion are not clean but mixed, and 2) edge points are not on the real edge due to data sparsity but only loosely scattered. The second drawback is that the upsampling or LiDAR-guided stereo triangulation techniques are based on the local smoothness assumption, which becomes invalid if the original depth is too sparse. The accuracy of fused depth map is hence decreased, which may still be useful for obstacle avoidance, but not ideal for the purpose of mapping. For the reasons discussed above, we choose to combine a rotating LiDAR with a wide-baseline, high-resolution stereo system to increase the density of raw data. Moreover, we aim to fuse multiple sensor data and recover the extrinsic calibration simultaneously.
The main contribution of this paper is an offline method that processes multiple frames of stereo and point cloud data and jointly optimizes the camera poses and the sensor extrinsic transform. The proposed method has the benefits that:
• it does not rely on unreliable correlations between structural and textural data, but only enforces the geometric constraints between sensors, which frees us from handcrafting heuristics to associate information from different domains.
• it joins the bundle adjustment and cloud registration problem in a probabilistic framework, which enables proper treatment of sensor uncertainties.
• it is capable of performing accurate self-calibration, making it practically appealing.
The rest of this paper is organized as follows: Section II presents the related work on LiDAR-camera fusion techniques. Section III describes the proposed method in detail. Experimental results are shown in Section IV. Conclusions and future work are discussed in Section V.
II. RELATED WORK
In this section, we briefly summarize the related work in the areas of LiDAR-camera extrinsic calibration and fusion. For extrinsic calibration, the proposed methods can be roughly categorized according to the usage of a target. For example, a single [1] or multiple [2] chessboards can be used as planar features to be matched between the images and point clouds. Besides, people also use specialized targets, such as a box [3], a board with shaped holes [4] or a trihedron [5], where the extracted features also include corners and edges. The usage of a target simplifies the problem but is inconvenient when a target is not available. Therefore targetfree methods are developed using natural features (e.g. edges) which are usually rich in the environment. For example, Levinson and Thrun [6] make use of the discontinuities of LiDAR and camera data, and refine the initial guess through a sampling-based method. This method is successfully applied on a self-driving car to track the calibration drift. Pandey et al. [7] develop a Mutual Information (MI) based framework that considers the discontinuities of LiDAR intensities. However, the performance of this method is dependent on the quality of intensity data, which might be poor without calibration for cheap LiDAR models. Differently, [8]- [10] recover the extrinsic transform based on the ego-motion of individual sensors. These methods are closely related to the well-known hand-eye calibration problem [11] and do not rely on feature matching. However, the motion estimation and extrinsic calibration are solved separately and the sensor uncertainties are not considered. Instead, we construct a cost function that joins the two problems in a probabilistically consistent way and optimizes all parameters together.
Available fusion algorithms are mostly designed for LiDAR-monocular or LiDAR-stereo systems and assume the extrinsic transform is known. For a LiDAR-monocular system, images are often augmented with the projected LiDAR depth. The fused data can then be used for multiple tasks. For example, Dolson et al. [12] upsample the range data for the purpose of safe navigation in dynamic environments. Bok et al. [13] and Vechersky et al. [14] colorize the range data using camera textures. Zhang and Singh [15] show significant improvement in the robustness and accuracy of the visual odometry if enhanced with depth. For LiDAR-stereo systems [16]-[19], LiDAR is typically used to guide the stereo matching algorithms since a depth prior could significantly reduce the disparity searching range and help to reject outliers. For instance, Miksik et al. [17] interpolate between LiDAR points to get a depth prior before stereo matching. Maddern and Newman [18] propose a probabilistic framework that encodes the LiDAR depth as prior knowledge and achieves real-time performance. Additionally, in the area of surveying [20]-[22], point clouds are registered based on the motion estimated using cameras. Our method differs from these works in that LiDAR points are not projected onto the image since the extrinsic transform is assumed unknown. Instead, we use LiDAR data to refine the stereo reconstruction after the calibration is recovered.
A. Overview
Before introducing the proposed algorithm pipeline, we clarify the definitions used throughout the rest of this paper. In terms of symbols, we use bold lower-case letters (e.g. x) to represent vectors or tuples, and bold upper-case letters (e.g. T) for matrices, images or maps. Additionally, calligraphic symbols are used to represent sets (e.g. T stands for a set of transformations). And scalars are denoted as light letters (e.g. i, N ).
As basic concepts, an image landmark l ∈ R 3 is defined as a 3D point that is observed in at least two images. Then a camera observation is represented by a 5-tuple o c = {i, k, u, d, w}, where the elements are the camera id, the landmark id, image coordinates, the depth and a weight factor of the landmark, respectively. In addition, a LiDAR observation is defined as a 6-tuple o l = {i, j, p, q, n, w} that contains the target cloud id i, the source cloud id j, a key point in the source cloud, its nearest neighbor in the target cloud, the neighbor's normal vector and a weight factor. In other words, one LiDAR observation associates a 3D point to a local plane and the point-to-plane distance will be minimized in the later joint optimization step.
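To make the notation above concrete, the following is a minimal Python sketch of the two observation containers. The field names and defaults are illustrative choices, not the authors' actual data structures.

```python
# Hedged sketch of the camera and LiDAR observation tuples described above.
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraObservation:          # the 5-tuple o_c = {i, k, u, d, w}
    cam_id: int                   # i: which image observed the landmark
    lm_id: int                    # k: index into the landmark set L
    uv: np.ndarray                # u: pixel coordinates, shape (2,)
    depth: float                  # d: stereo depth of the landmark
    weight: float = 1.0           # w: set to 0 when flagged as an outlier

@dataclass
class LidarObservation:           # the 6-tuple o_l = {i, j, p, q, n, w}
    target_id: int                # i: target cloud index
    source_id: int                # j: source cloud index
    p: np.ndarray                 # key point in the source cloud, shape (3,)
    q: np.ndarray                 # nearest neighbour in the target cloud, shape (3,)
    n: np.ndarray                 # unit normal of the target's local plane, shape (3,)
    weight: float = 1.0           # w: set to 0 when flagged as an outlier
```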
The complete pipeline of proposed method is shown in Fig. 3. Given the stereo images and LiDAR point clouds, we first extract and match features to prepare three sets of observations, namely the landmark set L, the camera observation set O c and the LiDAR observation set O l . The observations are then fed to the joint optimization block to estimate optimal camera poses T * c and sensor extrinsic transform T * e . Based on the latest estimation, the LiDAR observations are recomputed and the optimization is repeated. After a number of iterations, the parameters converge to local optima. Finally, the refinement and mapping block joins the depth information from stereo images and LiDAR clouds to produce the 3D model. In the rest of this section, each component is described in detail individually.
B. Camera Observation Extraction
Given a stereo image pair, we firstly perform stereo triangulation to obtain a disparity image using Semi-Global Matching (SGM) proposed in [23]. The disparity image is represented in the left camera frame. Then SURF [24] features are extracted from the left image. Note that our algorithm itself does not require a particular type of feature to work. After that, a feature point is associated with a depth value if a valid disparity value is found within a small radius (2 pixels in our implementation). Only the key points with depth are retained for further computation. The steps above are repeated for all stations to acquire multiple sets of features with depth. Once the depth association is done, a global feature association block is used to find correlations between all possible combinations of images. We adopt a simple matching method that incrementally adds new observations and landmarks to O_c and L. Algorithm 1 shows the detailed procedures. Basically, we iterate through all possible combinations to match image features based on the Euclidean distance of corresponding descriptors. L and O_c will be updated accordingly if a valid match is found.
(Fig. 3 caption, continued: point clouds are abstracted with BSC features and roughly registered to find the cloud transforms T_l; point-plane pairs are then found to build the LiDAR observation set O_l. In the pose estimation and mapping phase (back-end), the BA problem and the cloud registration problem are solved simultaneously, with O_l recomputed after each convergence based on the latest estimates T_c, T_e and the optimization repeated for a few iterations. Finally, local stereo reconstructions are refined using LiDAR data and assembled to build the 3D model.)
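As a rough illustration of the kind of incremental association Algorithm 1 (not reproduced in the text) could perform, the Python sketch below matches descriptors by Euclidean distance and either extends an existing landmark track or creates a new landmark. The ratio test, the descriptor dimensionality and the way tracks are keyed are assumptions made for the sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def associate_features(stations, ratio=0.7):
    """stations: list of dicts with 'desc' (N,64), 'uv' (N,2), 'depth' (N,)."""
    landmarks, cam_obs = [], []                 # the sets L and O_c
    track_of = {}                               # (station, feature) -> landmark id
    for i in range(len(stations)):
        for j in range(i + 1, len(stations)):
            tree = cKDTree(stations[j]['desc'])
            d, idx = tree.query(stations[i]['desc'], k=2)
            for fi, ((d1, d2), (fj, _)) in enumerate(zip(d, idx)):
                if d1 > ratio * d2:             # Lowe-style ratio test (assumed)
                    continue
                lm = track_of.get((i, fi))
                if lm is None:                  # landmark first seen in image i
                    lm = len(landmarks)
                    landmarks.append(np.zeros(3))   # initialised later by triangulation
                    track_of[(i, fi)] = lm
                    cam_obs.append((i, lm, stations[i]['uv'][fi],
                                    stations[i]['depth'][fi], 1.0))
                if (j, fj) not in track_of:     # add the matching observation in image j
                    track_of[(j, fj)] = lm
                    cam_obs.append((j, lm, stations[j]['uv'][fj],
                                    stations[j]['depth'][fj], 1.0))
    return landmarks, cam_obs
```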
Additionally, an adjacency matrix A c encoding the correlation of the images can be obtained. Since the camera FOV is narrow, it is likely that the camera pose graph is not fully connected. Therefore, additional connections have to be added to the graph, which is one of the benefits of fusing point clouds.
C. LiDAR Observation Extraction
Although many 3D local surface descriptors have been proposed (a review is given in [25]), they are less stable and less accurate than image feature descriptors. In fact, it is preferable to use 3D descriptors for rough registration and refine the results using slower but more accurate methods such as Iterative Closest Point (ICP) [26]. Our work follows a similar idea. Specifically, the Binary Shape Context (BSC) descriptor [27] is used to match and roughly register point clouds to compute the cloud transforms T_l. As a 3D surface descriptor, BSC encodes the point density and distance statistics on three orthogonal projection planes around a feature point. Furthermore, it represents the local geometry as a binary string which enables fast difference comparison on modern CPUs. Fig. 4-left shows an example of extracted BSC features. However, feature-based registration is of low accuracy. As shown in the right plots of Fig. 4, misalignment can be observed in the roughly registered map. As a comparison, the refined map of higher accuracy obtained by our method is also visualized.
After the rough registration, another adjacency matrix A l encoding matched cloud pairs is obtained. We use the merged adjacency matrix A c ∨ A l to define the final pose graph, where ∨ means element-wise or logic operation.
To obtain O l , a set of points are sampled randomly from each point cloud as the key points. Note that the key points to refine the registration are denser than the features. For each pair of connected clouds in A l , the one with a smaller index is defined as the target while the other one as the source. Then each key point in the source is associated with its nearest neighbor and a local normal vector in the target within a given distance threshold. Finally, all point matches are formatted as a LiDAR observation and stacked into O l .
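A possible implementation of this point-to-plane association step is sketched below, assuming the clouds are numpy arrays already expressed in a roughly registered common frame. The normal estimation by local SVD and the numeric thresholds are assumptions; the paper does not specify these details.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_lidar_observations(target, source, i, j, n_keypoints=5000,
                             max_dist=0.2, k_normal=20):
    tree = cKDTree(target)
    keys = source[np.random.choice(len(source), n_keypoints, replace=False)]
    obs = []
    for p in keys:
        d, idx = tree.query(p)
        if d > max_dist:                         # reject far-away associations
            continue
        q = target[idx]
        _, nbr = tree.query(q, k=k_normal)       # local neighbourhood for the normal
        nbh = target[nbr] - target[nbr].mean(axis=0)
        # normal = direction of least variance of the local neighbourhood
        _, _, vt = np.linalg.svd(nbh, full_matrices=False)
        n = vt[-1]
        obs.append((i, j, p, q, n, 1.0))         # one LiDAR observation o_l
    return obs
```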
D. Joint Optimization
Given the observations O_c and O_l, we first formulate the observation likelihood as the product of two probabilities

p(O_c, O_l | T, L, T_e) = p(O_c | T, L) · p(O_l | T, T_e),   (1)

where T = {T_i | i = 1, 2, ...} is the set of camera poses with T_1 = I_4, and T_e is the extrinsic transform. Assuming the observations are conditionally independent, we have

p(O_c | T, L) = ∏_{o_c ∈ O_c} p(o_c | T_i, l_k),   (2)
p(O_l | T, T_e) = ∏_{o_l ∈ O_l} p(o_l | T_i, T_j, T_e),   (3)

where i, j are camera ids and k is the landmark id, which are specified by observation o_c or o_l. The probability of one observation is approximated with a Gaussian distribution as

p(o_c | T_i, l_k) ∝ exp( -w_{o_c} ( ||E_f||^2 / σ_p^2 + E_d^2 / σ_d^2 ) / 2 ),   (4)
p(o_l | T_i, T_j, T_e) ∝ exp( -w_{o_l} E_l^2 / (2 σ_l^2) ),   (5)

where w_{o_c}, w_{o_l} are the weighting factors of camera and LiDAR observations, and the residuals E_f and E_d encode the landmark reprojection and depth error, while E_l denotes the point-to-plane distance error. Those residuals are defined as

feature: E_f = φ(l_k | K, T_i) − u,   (6)
depth: E_d = ψ(l_k | T_i) − d,   (7)
laser: E_l = n^T ( ψ(p | T_{l,ij}) − q ).   (8)

Here, u and d are the observed image coordinates and depth of landmark k, and T_{l,ij} = (T_e T_i)^{-1} T_j T_e is the transform from target cloud i to source cloud j. Function φ(·) projects a landmark onto image i specified by the input intrinsic matrix K and transform T_i. Function ψ(·) transforms a 3D point using the input transformation. σ_p, σ_d and σ_l denote the measurement uncertainties of extracted features, stereo depths and LiDAR ranges, respectively. Substituting (2)-(8) back into (1) and taking the negative log-likelihood gives the cost function

C(T, L, T_e) = Σ_{o_c ∈ O_c} w_{o_c} ( ||E_f||^2 / σ_p^2 + E_d^2 / σ_d^2 ) + Σ_{o_l ∈ O_l} w_{o_l} E_l^2 / σ_l^2,   (9)

which is iteratively solved over parameters T, L, T_e using the Levenberg-Marquardt algorithm.
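A minimal sketch of the three residual types (E_f, E_d, E_l) is given below for fixed poses, landmark and extrinsic transform. The camera-to-world pose convention, the interpretation of the depth residual as a z-coordinate difference, and the sigma defaults are assumptions; packing the parameters into a vector and minimising with a Levenberg-Marquardt solver (e.g. scipy.optimize.least_squares with method='lm') is left out.

```python
import numpy as np

def camera_residuals(T_i, l_k, uv, d, K, sigma_p=1.0, sigma_d=1.0):
    # Landmark expressed in camera i's frame (T_i assumed camera-to-world).
    p_cam = (np.linalg.inv(T_i) @ np.append(l_k, 1.0))[:3]
    proj = K @ p_cam
    E_f = proj[:2] / proj[2] - uv            # reprojection residual, shape (2,)
    E_d = p_cam[2] - d                       # depth residual (scalar)
    return np.append(E_f / sigma_p, E_d / sigma_d)

def lidar_residual(T_i, T_j, T_e, p, q, n, sigma_l=0.05):
    # T_l,ij = (T_e T_i)^-1 T_j T_e maps the source key point into the target frame.
    T_lij = np.linalg.inv(T_e @ T_i) @ T_j @ T_e
    p_t = (T_lij @ np.append(p, 1.0))[:3]
    return float(n @ (p_t - q)) / sigma_l    # point-to-plane residual
```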
To filter out incorrect observations in both images and point clouds, we check the reprojection error ||φ(l_k | K, T_i) − u|| and depth error ψ(l_k | T_i) − d of camera observations and the distance error n^T(ψ(p | T_{l,ij}) − q) of LiDAR observations after the optimization converges. The observations whose errors are larger than prespecified thresholds will be marked as outliers and assigned zero weights. The cost function (9) is optimized repeatedly until no more outliers can be detected. The thresholds can be tuned by hand, and in the experiments we use 3 pixels, 0.01 m and 0.1 m, respectively.
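As an illustration of this gating loop, using the dataclass-style observations sketched earlier; `residual_of` is a hypothetical stand-in returning a scalar error magnitude for one observation.

```python
# Observations whose residual exceeds the threshold (e.g. 3 px / 0.01 m / 0.1 m)
# get zero weight and are excluded from the next optimisation pass.
def gate_outliers(observations, residual_of, threshold):
    flagged = 0
    for obs in observations:
        if abs(residual_of(obs)) > threshold:
            obs.weight = 0.0
            flagged += 1
    return flagged
```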
Similar to the ICP algorithm, the set O_l is recomputed based on the latest estimation of T_c, T_e, while O_c remains unchanged. Once O_l is updated, the outlier detection and optimization steps are repeated as mentioned above. O_l only needs to be recomputed a few times (4 times in our experiments) to achieve good accuracy.
Additionally, the strategy for specifying the uncertainty parameters is as follows. Based on the stereo configuration, the triangulation depth error e_d is related to the stereo matching error e_p by a scale factor as in e_d = (d^2 / (b f)) e_p, where b is the baseline, f is the focal length and d is the depth. Assuming the uncertainties of feature matching and stereo matching are equivalent, we have σ_d = (d^2 / (b f)) σ_p. Therefore, we can now set σ_p to the identity (i.e. 1) and set σ_d by multiplying the scale factor. On the other hand, the value of σ_l is tuned by hand so that the total costs of camera and LiDAR observations are roughly of the same magnitude.
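As a tiny numeric illustration of this scaling (the baseline and focal-length values below are made-up numbers, not the sensor pod's actual parameters):

```python
# sigma_d grows quadratically with depth: sigma_d = (d^2 / (b f)) * sigma_p.
def depth_sigma(d, baseline, focal_px, sigma_p=1.0):
    return (d ** 2 / (baseline * focal_px)) * sigma_p

print(depth_sigma(d=2.5, baseline=0.4, focal_px=4000.0))   # ~0.0039 m at 2.5 m range
```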
E. Mapping
With the camera poses estimated, building a final 3D model could be simply registering all stereo point clouds together. However, the stereo depth maps typically contain outliers and holes due to triangulation failure. In order to refine the stereo depth maps, we further perform a simple but effective two-fold fusion of LiDAR and camera data for each frame or station. In the first fold, the stereo depth is compared with the projected LiDAR depth and will be removed if there is a significant difference. In the second fold, LiDAR depth is selectively used to fill holes in the stereo depth. Particularly, we only use the regions that are locally flat (such that the local smoothness assumption is valid), and well observed (avoiding degenerated view angle). The curvature of the local surface is used to measure the flatness. And the normal vector is used to compute the view angle. Fig. 5 shows an example of refining the stereo point cloud. It can be observed that holes lying on a flat surface can be filled successfully, while the missing points close to the edges are not treated to avoid introducing new outliers.
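A rough sketch of such a two-fold refinement on per-pixel depth maps is given below; the thresholds and the flatness/view-angle tests are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def refine_depth(stereo_depth, lidar_depth, curvature, view_cos,
                 max_diff=0.05, max_curv=0.01, min_cos=0.5):
    out = stereo_depth.copy()
    both = np.isfinite(stereo_depth) & np.isfinite(lidar_depth)
    # Fold 1: remove stereo depths that disagree with the projected LiDAR depth.
    bad = both & (np.abs(stereo_depth - lidar_depth) > max_diff)
    out[bad] = np.nan
    # Fold 2: fill stereo holes from LiDAR only where the surface is locally flat
    # and observed at a reasonable view angle.
    hole = ~np.isfinite(out) & np.isfinite(lidar_depth)
    ok = hole & (curvature < max_curv) & (view_cos > min_cos)
    out[ok] = lidar_depth[ok]
    return out
```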
F. Conditions of Uniqueness
The proposed approach relies on the ego-motion of individual sensors to recover the extrinsic transform T_e, making it possible that T_e is not fully observable if the motion degenerates. It turns out to be the same problem encountered in hand-eye calibration, where the extrinsic transform between a gripper and a camera is estimated from two motion sequences. Here we discuss conditions for a fully observable T_e by borrowing knowledge from hand-eye calibration, whose classical formulation is given by

T_c T_e = T_e T_h,   (10)

where T_h, T_c represent the relative motion of the hand and the camera w.r.t. their own original frames. Incorporating multiple stations will result in a set of (10) and then T_e can be solved. According to [11], the following two conditions must be satisfied to guarantee a unique solution of T_e: 1) At least 2 motion pairs (T_c, T_h) are observed. Equivalently, at least 3 stations are needed, with one of them being the base station. 2) The rotation axes of T_c are not collinear for different motion pairs. In our case, the robot hand frame is substituted by the LiDAR frame. Therefore, the configuration of each station must also satisfy the above conditions of uniqueness. This provides formal guidance to collect data effectively. From our experience of deploying the developed system, an operator without adequate background knowledge in computer vision, particularly in structure from motion, is likely to miss the second condition and only rotate the sensor about the vertical axis, which will make the extrinsic calibration unobservable.
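A hedged helper for checking the second condition on collected data might look as follows; extracting rotation axes with scipy's Rotation class and the 5-degree tolerance are implementation choices, not the authors' tooling.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def axes_are_collinear(relative_rotations, tol_deg=5.0):
    """relative_rotations: list of 3x3 relative camera rotations between stations."""
    axes = []
    for Rm in relative_rotations:
        rotvec = R.from_matrix(Rm).as_rotvec()
        if np.linalg.norm(rotvec) > 1e-6:        # skip near-identity motions
            axes.append(rotvec / np.linalg.norm(rotvec))
    if len(axes) < 2:
        return True                              # fewer than 2 motions: degenerate
    ref = axes[0]
    # Collinear if every axis is within tol_deg of the first one (up to sign).
    angles = [np.degrees(np.arccos(np.clip(abs(ref @ a), -1.0, 1.0))) for a in axes[1:]]
    return all(a < tol_deg for a in angles)
```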
A. The Sensor Pod
To collect data for experiments, we developed a sensor pod (as shown in Fig. 6) which has a pair of stereo cameras (global shutter, resolution 4112 × 3008, baseline …). The calibration between the involved sensors is performed separately. We use the OpenCV library [28] to obtain the camera intrinsic and extrinsic parameters. The transform between the motor and the LiDAR frame is obtained by placing the sensor pod in a conference room, and carefully tuning the transform until the accumulated points on walls and ceiling form thin surfaces in the fixed motor base frame. From now on, we use the term LiDAR frame to denote the fixed motor base frame instead of the actual rotating Velodyne frame, and assume all point clouds have been transformed into the LiDAR frame.
B. Reconstruction Tests
The first reconstruction test is carried out at the Shimizu Institute of Technology in Tokyo to scan a T-shaped concrete specimen that is under structural tests. In total, 25 stations of data are collected around the specimen at a distance of about 2.5 meters. Each station contains a stereo image pair and a point cloud that accumulates scans for 20 seconds and contains approximately 1.6 million points. For stations 1-17, the sensor pod is placed on a tripod and pointed at the specimen. Stations 18-25 are collected with the sensor pod on the ground, tilted up to capture the bottom of the specimen. Fig. 7 shows the reconstructed model and Fig. 8 visualizes the camera poses and landmarks. In the lower plots of Fig. 8, correlations found between images (blue lines) and point clouds (grey lines) are visualized. Since the cameras have a narrow FOV (48° horizontal), it is likely that adjacent images don't have enough overlap, which makes the pose graph not fully connected. Fortunately, LiDAR clouds have a much wider FOV and therefore guarantee a fully connected graph.
(Fig. 8 caption: Top: Estimated camera poses (numbered in the order of capture) and visual landmarks (blue points); we follow the convention of defining the camera frame z (blue) forward, y (green) downward. Bottom: Pose graph connections from images (blue) and point clouds (gray).)
As to the computation statistics, we provide a rough measure of the processing time of the major components. On a standard desktop (i7-3770 CPU, 3.40 GHz × 8), it takes less than 2 min to remove vignetting effects and triangulate a stereo pair (40-50 min for the whole dataset). The feature-based cloud registration takes about 15 min in total, and the joint pose estimation and map refinement can be finished in about 15 min and 20 min, respectively.
In addition to the T-shaped specimen, we tested our algorithm in different environments, where the shapes of reconstructed objects vary from simple squared and cylinder pillars to more complex bridge pillars (see Fig. 9). Table I summarizes the model statistics. The averaged error is obtained by comparing to a ground truth model and more details are provided in Section IV-E.
C. LiDAR-Camera Calibration
In this section, we evaluate the accuracy of the recovered extrinsic transform. As a comparison, we implemented a target-free calibration method [6] which uses discontinuities in images and point clouds to iteratively refine an initial guess. The key steps of this method are shown in Fig. 10a-d. Basically, the initial guess is perturbed in each dimension (x, y, z, roll, pitch, yaw) separately and then moved towards the direction that increases the correlation between image edges and projected cloud edges. Eventually, a locally optimal solution can be found if any further changes will decrease the edge correlation.
Since it is difficult to get ground truth calibration, we choose to compare the extrinsic parameters computed from two methods. The extracted point cloud edges are projected on to the image plane and the projection is visualized in Fig. 10e and 10f. However, the edges are both well aligned and no obvious difference can be identified. We then compare the overlay of LiDAR clouds and stereo clouds (see Fig. 11). It can be observed that with our results, the models are aligned consistently while there exists an offset if calibrated using [6]. Further investigation shows that the offset happens along the camera's optical axis, in which direction the motion will generate less flow on the image. As a result, the total correlation score becomes less sensitive to the motion of the LiDAR along the optical axis. This observation suggests that calibration methods using direct feature alignment, including target-based and target-free, may require wide angle lenses.
D. Observability of Extrinsic Transform
The uniqueness conditions stated in Section III basically requires the sensor pod to change its position and orientation for different stations. In this section, we aim at providing more intuition behind the formal statements. Specifically, the conditions are experimentally demonstrated by perturbing the extrinsic parameters around their optimal values. Three tests are designed to clarify the situations of degeneration.
1) Rotation is fixed: In this case, the sensor pod is placed at 3 different positions but keeps its orientation unchanged. Specifically, station 1-3 are used for optimization. The total cost after the perturbation is visualized in the left 2 plots of Fig. 12. It can be seen that perturbing the translation won't affect the cost value at all, meaning unobservable. Besides, since the 3 frames are almost collinear, the pitch angle is also under-constrained (flat orange curve).
2) Rotation about one axis: In this case, stations 1-17 are used, where the sensor pod is placed around the T-shaped specimen and all rotations are about the camera's y-axis. As shown in the middle plots of Fig. 12, position y is underconstrained.
3) Rotation about two axes: For reference, we show the perturbed cost with all 25 available datasets in the right plots of Fig. 12. In this case, the rotations can be about the x- or y-axis. As expected, the extrinsic transform is well constrained.
E. Model Accuracy Evaluation
Since the ground truth data are not available during the test in Tokyo, we evaluate the reconstruction accuracy on the squared concrete pillar instead. A FARO FOCUS 3D scanner (see Fig. 13) with ±3mm range precision is used to obtain the ground truth. The comparison is performed by measuring the point to plane distance between the reconstructed model and the ground truth after precise ICP registration. Furthermore, we compare the results of three models reconstructed using: (1) stereo images only (standard stereo BA), (2) both LiDAR and stereo data but extrinsic calibration is pre-calibrated using [6], and (3) both LiDAR and stereo data with extrinsic calibration being adjusted jointly (proposed in this work).
Comparisons (1) and (2) share the same cost function as (3). However, in comparison (1) the LiDAR observations are given zero weight and T_e is fixed, and in comparison (2) only T_e is fixed during optimization.
[Fig. 10 caption] (a)-(d) The key steps of [6]. (e) Edge alignment after calibration using [6]. (f) Edge alignment after joint optimization. The color of the projected cloud edge points encodes the correlation score: yellow means high while red means low.
The error maps and histograms are visualized in Fig. 13. It can be observed that fusing LiDAR data helps to reduce the model error from 6 mm to 2.7 mm, which already lies within the precision range of the ground truth. In fact, due to the limited number of matches between some image frames, the pure image-based model does not align well, resulting in multiple layers of the surface. Compared with the pre-calibrated case, jointly optimizing the calibration improves the overall model accuracy, and we also benefit from the convenience of self-calibration. Additionally, since our model is reconstructed from multiple sets of data and each station is collected close to the wall (2-3 meters), it measures about 70 points/cm², which is much denser than the ground truth (10-15 points/cm²). The evaluation results are obtained using the CloudCompare software.
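The point-to-plane comparison described above can be reproduced with a few lines of standard scientific Python. The sketch below is not the CloudCompare computation; it assumes the two clouds are already ICP-registered and that ground-truth surface normals are available (e.g., from the scanner or a prior normal-estimation step).

import numpy as np
from scipy.spatial import cKDTree

def point_to_plane_errors(recon_pts, gt_pts, gt_normals):
    """For every reconstructed point, find its nearest ground-truth point and
    measure the distance along that point's surface normal."""
    tree = cKDTree(gt_pts)
    _, idx = tree.query(recon_pts, k=1)
    diff = recon_pts - gt_pts[idx]                    # vector to nearest GT point
    return np.abs(np.sum(diff * gt_normals[idx], axis=1))

# Example summary, mirroring the error maps/histograms reported above:
# errs = point_to_plane_errors(recon, gt, gt_n)
# print(f"mean: {errs.mean()*1000:.1f} mm, RMS: {np.sqrt((errs**2).mean())*1000:.1f} mm")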
V. CONCLUSIONS
This paper presents a joint optimization approach that fuses LiDAR and camera data for pose estimation and dense reconstruction. It is shown to build dense 3D models and recover the camera-LiDAR extrinsic transform accurately. In addition, the accuracy of the reconstructed model is evaluated against a ground-truth model, showing that our method achieves accuracy similar to that of a survey scanner.
The proposed method requires data to be collected station by station, which can be time-consuming and inconvenient if the viewpoint is difficult to access. For example, the I-shaped beams supporting the deck of a bridge are usually too high to reach. Therefore, future work will focus on handling sequential data with the sensor pod moving in the environment. Micro Aerial Vehicles (MAVs) may also be used to carry the sensor pod. Another thread of future work is to improve the quality of stereo reconstruction. For instance, given the LiDAR-camera extrinsic calibration obtained from our method, probabilistic fusion methods such as [18] can be applied to recover a dense local map.
VI. ACKNOWLEDGMENT
This work is supported by the Shimizu Institute of Technology, Tokyo. The authors are grateful to Daisuke Hayashi for his help with experiments in Japan. We also thank Huai Yu, Hengrui Zhang and Ruixuan Liu for building the sensor pod and helping with data collection.
Preparation of Sol-Enhanced Ni–P–Al2O3 Nanocomposite Coating by Electrodeposition
A Ni–P–(sol)Al2O3 coating was prepared on the surface of Q235 steel by direct-current electrodeposition. This method combined the sol-gel and electrodeposition techniques, instead of traditional nanopowder dispersion, to prepare highly dispersed oxide nanoparticle-reinforced composites. The effects of temperature, pH value, current density, and Al2O3 sol on the hardness of the composite coating were investigated. The coating surface morphology and structure were characterized by scanning electron microscopy and energy-dispersive spectroscopy, respectively. The corrosion resistance of the coatings, in the presence of intermediate layers, was evaluated by electrochemical measurement in 3.5% NaCl solution, including open-circuit potential measurement at room temperature. The hardness and wear resistance of the coating were measured by a microindentation instrument and a friction-wear machine, respectively. The results showed that Al2O3 sol can effectively improve the Ni–P alloy coating structure and refine the grains. When the bath temperature was 55°C, the pH value was 4.5, the amount of sol was 80 mL/L, and the current density was 1 A/dm², the hardness of the nanosol coating was 569 HV. Compared with Ni–P, the friction coefficient increased slightly, but the wear rate was only 1.768 × 10⁻⁶ g·m⁻¹. The corrosion resistance was also better than that of the Ni–P coating.
Introduction
With the rapid development of modern industry, it is a challenge for traditional nickel coatings to meet the special requirements of harsh service conditions. In recent years, how to improve the comprehensive performance of coatings has become a research hotspot [1]. Direct-current electrodeposition refers to the electrochemical deposition of metals or alloys from their aqueous compound solutions, nonaqueous solutions, or molten salts. It is the basis of metal electrolytic smelting, electrolytic refining, electroplating, and electroforming, all of which are carried out under specific electrolytes and operating conditions. The difficulty of metal electrodeposition and the shape of the deposit are related to the properties of the deposited metal and also depend on the composition of the electrolyte, pH value, temperature, current density, and other factors. Ni-based alloys show several attractive properties, such as their high hardness [2], toughness, and relatively good corrosion resistance in air. For these reasons, they are the first choice of protective coating materials. However, the nickel-phosphorus alloy coating obtained by electroless Ni-P plating is prone to pinholes and other surface defects, and the porosity and morphology directly affect the corrosion resistance of the coating [3]. The corrosion resistance of the coating decreases regardless of whether a penetrating pore is formed. Therefore, researchers have tried to add an appropriate amount of inert particles to the plating solution and codeposit them with Ni-P to obtain better composite coatings [4]. At present, nano-SiC [5], TiO2 [6-8], WC [9], SiO2 [10, 11], PTFE [12, 13], ZrO2 [14], and Al2O3 [15] are common solid particles that can be added. As a widely used ceramic material, alumina is not only of high hardness but also easy to combine with the matrix [16-19]. Nanocomposite coatings have higher hardness, abrasion resistance, friction reduction, and corrosion resistance than ordinary electroplated coatings. Balarju et al. [20] found that adding nanoalumina had no effect on the chemical structure of the composite coatings but significantly improved their hardness, corrosion resistance, and wear resistance. However, such composite coatings are prone to some problems. Because nanoparticles have a high-energy surface and high activity, they are unstable and easily agglomerate in the nickel plating bath without special surface modification. It is difficult to provide enough time for nanoparticles to deposit on the surface of the substrate even with long, high-speed stirring, so appropriate dispersants and stabilizers must be added. This directly affects coating homogeneity and weakens the mechanical properties of the coatings. Alumina sol can effectively avoid the agglomeration of nanoparticles in the coating matrix [21-23]; it disperses rapidly and uniformly in the plating solution, is easy to dope, and the required external conditions are easy to realize. Alumina sol was therefore used in this project, and the preparation and properties of Ni-P-(sol)Al2O3 composite coatings were studied.
Preparation of Nanocomposite Sol Coating
The substrate specimen used in the experiment was a Q235 cold-rolled steel sheet with a size of 40 × 25 × 2 mm³. The substrate was polished with 400, 800, and 1000 grade SiC paper and then washed and rinsed with distilled water. Figure 1 shows the experimental setup for the electrodeposition of the Ni-P-(sol)Al2O3 nanocomposite coating. The applied current is provided by a high-frequency plating rectifier. A CS2350 electrochemical workstation is used to provide a stabilized DC power supply. An HJ-5 constant-temperature magnetic stirrer is used to control the bath temperature and to provide magnetic stirring, ensuring a uniform solution and dispersion of the nanoparticles. The plated part is placed in the middle of the two anode plates to achieve double-sided growth of the plating on the substrate. In addition, it is necessary to ensure that the two anode plates are parallel to the cathode substrate and that the distance is kept at 25 mm to ensure the same plating quality on both sides of the plated part.
Workpieces are inevitably contaminated with oil during processing, storage, and transportation. The oil-removal formulation used in the experiment is shown in Table 1. The specimen is then rinsed with deionized water and ultrasonically cleaned for 2 min.
Pickling removes the oxide film, oxide scale, and rust from the metal surface after oil removal. Hydrochloric acid dissolves metal oxides strongly while attacking the iron and steel matrix only slowly and leaves a clean surface after pickling, but it generates a heavy acid mist that corrodes equipment. Sulphuric acid also causes little corrosion of the matrix and little acid mist, but it is prone to overcorrosion and hydrogen embrittlement. In this experiment, a mixture of 15 wt% nitric acid and 5 wt% phosphoric acid was used as the pickling solution.
The purpose of activation is to remove the very thin oxide layer on the surface of the matrix after pickling and to expose the matrix metal evenly so that the coating can grow uniformly on its surface. In this experiment, 5 wt% hydrochloric acid was used for activation. The activation time was 3 min at room temperature (about 25°C). After activation, the specimen was ultrasonically cleaned for 1 min and then plated. The composition and parameters of the composite plating bath are shown in Table 2. The alumina sol was provided by Crystal Fire Technology Glass Co., Ltd. (http://www.jinghuoglass.cn); its concentration was 20% and the average particle size was 60-70 nm. In order to ensure the quality of the coating, the distance between the two anodic pure nickel plates and the cathode substrate was 25 mm. Prior to the addition of the sol suspension into the bath, Ni-P plating was performed for ten minutes to improve the adhesion of the coating to the substrate. A reference specimen with a plain Ni-P coating was also prepared for the comparative study. The plating bath was agitated using a magnetic stirrer at 180 rpm throughout the plating process.
Methods.
The Vickers microhardness of the coatings was measured using a VMH-002V microhardness tester at a load of 50 g for 15 s. The final values are reported as the average of five measurements. Friction and wear tests were carried out by the ball-on-disk method using a tribometer (MS-T3000 Instruments, China) at a normal load of 500 g and a rotation speed of 300 r/min under reciprocating sliding in dry conditions at 25-30°C and 25% ± 10% humidity. A bearing steel ball with a diameter of 3 mm was used as the sliding counterpart. The anodic polarization curve was measured at room temperature using a CS2350 electrochemical workstation, and the corrosion resistance of the coating was evaluated from the Tafel curve. The coating was scanned over a range of -2.0 to 1.0 V at a scanning rate of 1 mV/s in 3.5 wt% NaCl solution. The auxiliary electrode was a platinum electrode, and the reference electrode was a saturated calomel electrode. The surface and cross-section morphology and the chemical composition of the coatings were analyzed by a Quanta 200 scanning electron microscope (SEM) coupled with energy-dispersive X-ray spectroscopy (EDS).
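As a rough illustration of how Ecorr and Icorr can be obtained from such a polarization scan by Tafel extrapolation, the sketch below fits the anodic and cathodic branches in E-versus-log|i| space and intersects them. It is not the workstation's built-in routine; the branch windows (offsets from the open-circuit potential) and the small smoothing constant are assumptions.

import numpy as np

def tafel_fit(E, i, ocp, window=(0.05, 0.25)):
    """Estimate Ecorr (V) and Icorr (A/cm^2) from potential E and current
    density i by intersecting linear fits to the two Tafel branches."""
    logi = np.log10(np.abs(i) + 1e-12)
    # anodic branch: potentials above OCP; cathodic branch: below OCP
    an = (E > ocp + window[0]) & (E < ocp + window[1])
    ca = (E < ocp - window[0]) & (E > ocp - window[1])
    ba, aa = np.polyfit(logi[an], E[an], 1)   # E = ba*log|i| + aa (anodic)
    bc, ac = np.polyfit(logi[ca], E[ca], 1)   # E = bc*log|i| + ac (cathodic)
    log_icorr = (ac - aa) / (ba - bc)          # intersection of the two lines
    Ecorr = ba * log_icorr + aa
    return Ecorr, 10 ** log_icorr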
Effect of pH Value on Microhardness of Composite Coating
Figure 2 shows the effect of bath pH value on the Ni-P-(sol)Al2O3 composite coating at a current density of 1 A/dm². The microhardness of the sol composite coating first increases and then decreases with increasing pH value. This is because the pH value has a critical impact on the nickel deposition process and the mechanical properties of the coating. In regions of high H+ concentration, the deposition rate is low owing to the slow formation and growth of crystal nuclei, resulting in a thin coating and no significant improvement in hardness. Similar situations have been reported for baths of similar pH [24, 25]. In addition, it has been pointed out that the pH of the plating solution affects the P content of the coating; the hardness of the coating increases as the P content decreases [26], and increasing the pH may decrease the P content in the coating. When the pH value of the plating solution reaches 4.5, the microhardness of the sol composite coating reaches its maximum of 537 HV. As the pH value continues to increase, nickel hydroxide and other impurities are easily incorporated into the coating, making it rough and brittle and lowering its hardness.
Effect of Temperature on Microhardness of Composite Coating
Figure 3 shows the effect of bath temperature on the Ni-P-(sol)Al2O3 composite coating at a current density of 1 A/dm² and a pH value of 4.5. With increasing temperature, the microhardness of the sol composite coating first increases and then decreases. This is because an increase in temperature raises the kinetic driving force, which leads to a higher nucleation rate, i.e., the formation of fine grains. These fine grains effectively inhibit their own growth, and the hardness of the coating increases accordingly [27]. As the temperature continues to increase, the thermodynamic driving force of crystallization decreases and the critical nucleus size increases, resulting in lower nucleus densities, i.e., the formation of coarse grains [28, 29]. Such coarse grains are likely to loosen the structure, and the hardness of the coating decreases.
This discrepancy may occur because an increase in the bath temperature has two contradictory effects, on the thermodynamic and on the kinetic driving force [27]. Moreover, when the temperature is too high, the viscosity of the plating solution decreases, the adhesion at the cathode surface decreases [30, 31], the Al2O3 content in the coating falls, and the microhardness drops. As a result, the plating temperature should be controlled at about 55°C.
Effect of Current Density on Microhardness of Composite Coating
Figure 4 shows the relationship between the current density and the hardness of the composite coating when the plating temperature is 55°C and the pH value is 4.5. As the current density increases, the microhardness of the coating gradually increases; when the current density reaches 1 A/dm², the microhardness reaches its maximum, and as the current density increases further, the hardness of the coating decreases instead. This is because, with increasing current density, the current efficiency increases and the amount of Al2O3 sol incorporated into the deposit per unit time increases. A current density that is too high causes the rate of Al2O3 sol embedding into the composite coating to fall behind the deposition rate of the matrix metal, thus reducing the Al2O3 content in the composite coating and decreasing its microhardness [15]. Therefore, the appropriate current density is 1 A/dm².
Effect of Al2O3 Sol on Microhardness of Composite Coating
Figure 5 shows the influence of the Al2O3 sol dosage on the composite coating when the current density is 1 A/dm², the pH value is 4.5, and the bath temperature is 55°C. It is clear that, as more nanosol is added to the bath, the hardness of Ni-P-(sol)Al2O3 increases to a maximum value of 569 HV and then decreases. On the one hand, Al2O3 nanoparticles can act as a barrier against plastic deformation and increase the microhardness of the coating by preventing the movement of dislocations [32, 33]. On the other hand, the nanoparticles improve the grain refinement of the matrix and hence favor a higher microhardness of the nanocomposite coating (Figure 6) [11]. When the amount of sol is 80 mL/L, the microhardness is 569 HV. When the amount of sol in the plating solution is less than 80 mL/L, increasing the sol dosage increases the amount of Al2O3 particles deposited on the surface of the substrate and their chance of being trapped in the coating, so the coating hardness increases [17]. When the Al2O3 sol content in the plating solution is too high, some Al2O3 particles may sink to the bottom without participating in the growth of the coating, resulting in uneven deposition of Al2O3 particles in the composite coating (Table 3). In addition, a high alumina concentration reduces the reduction efficiency of the matrix metal, thus reducing the microhardness of the composite coating, which is inconsistent with the literature report [34]. It can be seen that the coating microhardness is best when the Al2O3 sol dosage is 80 mL/L.
Microstructure and Composition Analysis of Coating.
According to the above results, a current density of 1 A/dm², a pH value of 4.5, a bath temperature of 55°C, and a sol dosage of 80 mL/L were selected in this study to prepare the Ni-P-(sol)Al2O3 composite coating. Figures 7(a) and 7(b) show the micromorphologies of the Ni-P alloy coating and the Ni-P-(sol)Al2O3 nanocomposite coating, respectively. Figure 8 demonstrates that the Ni-P-(sol)Al2O3 alloy coating deposited under these conditions has good quality and a uniform, smooth surface texture without porosity. Figure 9 shows the EDS analysis of the coating in Figure 8. It is found that the composite coating contains 3.87% Al, 12.39% P, and 83.74% Ni, which proves that nanoalumina particles from the sol enter the composite coating. As can be seen from Figure 7, the pure Ni-P alloy coating has uneven grain size, slight microporous defects, coarse grains, and an uneven surface. The reason may be that in the early electroplating stage the nickel ion concentration is high and the deposition rate on the substrate surface is fast; the nucleated crystals grow rapidly, and the older crystals prevent the later nucleated crystals from growing, resulting in an uneven distribution of crystal sizes and more surface defects [15]. After the addition of Al2O3 sol, the grains are refined, the size is uniform, and the deposit is relatively dense. The addition of Al2O3 sol increases the cathode polarization during composite electrodeposition, lowering the nucleation potential and facilitating the formation of new Ni2+ crystal nuclei. Solutes such as dispersants contained in the sol can inhibit the agglomeration and growth of metal grains. In addition, the incorporation of nanoalumina significantly changes the crystal growth orientation and morphology of the substrate surface. The use of nanoalumina particles as nucleation centers has an inhibitory effect on Ni crystal growth [35]. The particle distribution gradually changes from scattered to uniform and the degree of agglomeration decreases, which makes the coating more compact.
Corrosion Resistance of Nanocomposite Sol Coating.
Tafel polarization diagrams for the electroplated Ni-P and Ni-P-(sol)Al2O3 coatings in the as-plated condition in 3.5% NaCl solution are shown in Figure 10. Corrosion parameters such as the corrosion potential (Ecorr) and the corrosion current density (Icorr), calculated from the diagrams, are presented in Table 4. The obtained data demonstrate that the addition of alumina nanoparticles shifts the corrosion potential of the Ni-P-(sol)Al2O3 composite coating toward lower values. Also, the corrosion current density of the composite coating is lower than that of Ni-P. This is because the Al2O3 particles are uniformly distributed in the coating and the coating is relatively compact, reducing intergranular corrosion in the coating [17, 36], and because of the low conductivity of the Al2O3 particles.
Wear Resistance of Composite Sol Coating.
Figure 11 displays the variation of the coefficient of friction of the Ni-P and Ni-P-(sol)Al2O3 nanocomposite coatings recorded during the wear test. Wear is a constant and undesirable gradual loss of material at the surface during contact with other materials. The wear mechanisms of the Ni-P coatings are mainly abrasive and adhesive [37]. The coefficient of friction starts at lower values for all samples and reaches its maximum within about two minutes of the sliding test. The friction coefficient of each sample increases rapidly at the initial stage of the measurement until reaching an approximately stationary state at greater sliding distance. This early variation of the coefficient of friction in a dry wear process is often referred to as "running-in" or "break-in" and can be attributed to the formation and rupture of surface oxide films or to changes in the geometry of the contact surface [38]. The coefficient of friction varied with sliding distance in the range 0.1-0.3, values that coincide with reports from the literature [39]. As the friction process proceeds, the surface of the specimen becomes smoother and the friction coefficient tends to stabilize owing to the plastic deformation of the microprotrusions on the friction contact surface. It can be seen from Table 5 that the friction coefficient of the Ni-P-(sol)Al2O3 nanocomposite coating is greater than that of the Ni-P coating, while the wear rate of the composite coating is lower than that of the Ni-P coating. This is because the addition of alumina particles increased the roughness of the coating, partially eliminating the nodular structure while keeping a homogeneous and uniform distribution of the reinforcement [40]. When alumina sol is added to the plating solution, alumina particles enter the coating and form microprotuberances inside it; these microprotrusions cause the friction coefficient to increase. In addition, Al2O3 particles have high hardness and wear resistance, so they can support the load on the friction surface during sliding, reduce the wear of the matrix alloy, and resist plastic deformation. Therefore, the wear resistance of the Ni-P-(sol)Al2O3 composite coating is higher than that of the Ni-P coating.
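For reference, a mass-based wear rate of the order quoted above can be computed as mass loss divided by total sliding distance. The numbers in the example below (mass loss, test duration, wear-track radius) are illustrative assumptions, not measurements from this work.

def wear_rate(mass_loss_g, rpm, radius_m, minutes):
    # total sliding distance for a circular wear track
    sliding_distance_m = 2 * 3.141592653589793 * radius_m * rpm * minutes
    return mass_loss_g / sliding_distance_m  # g per metre of sliding

# e.g. a 0.3 mg loss over 30 min at 300 r/min on a 3 mm wear-track radius:
print(f"{wear_rate(3e-4, 300, 0.003, 30):.3e} g/m")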
Conclusions
A Ni-P-(sol)Al2O3 composite coating has been produced by a combined sol-gel and electrodeposition technique, yielding composites reinforced with highly dispersed nanoalumina particles. The hardness and corrosion resistance of coatings prepared with different processing parameters were investigated. The best conditions for electrodeposition of the Ni-P-(sol)Al2O3 composite coating are pH 4.5, temperature 55°C, current density 1 A/dm², and an Al2O3 sol dosage of about 80 mL/L. The Ni-P-(sol)Al2O3 composite coating has finer and more uniform grains, denser deposition, and higher hardness than the pure Ni-P coating. The corrosion current densities of the Ni-P coating and the Ni-P-(sol)Al2O3 coating are 5.57 × 10⁻⁴ A/cm² and 4.18 × 10⁻⁴ A/cm², respectively, and the corrosion resistance in 3.5% NaCl solution was improved compared with the Ni-P coating.
A Novel, Drug Resistance-Independent, Fluorescence-Based Approach To Measure Mutation Rates in Microbial Pathogens
Measurements of mutation rates—i.e., how often proliferating cells acquire mutations in their DNA—are essential for understanding cellular processes that maintain genome stability. Many traditional mutation rate measurement assays are based on detecting mutations that cause resistance to a particular drug. Such assays typically work well for laboratory strains but have significant limitations when comparing clinical or environmental isolates that have various intrinsic levels of drug tolerance, which confounds the interpretation of results. Here we report the development and validation of a novel method of measuring mutation rates, which detects mutations that cause loss of fluorescence rather than acquisition of drug resistance. Using this method, we measured the mutation rates of clinical isolates of fungal pathogen Candida glabrata. This assay can be adapted to other organisms and used to compare mutation rates in contexts where unequal drug sensitivity is anticipated.
result in a detectable phenotypic change. Because most genomes are extremely stable and mutation rates are typically very low, the most convenient and widely used mutation measurement assays are set up as selections, where mutation of a reporter gene confers resistance to a particular drug (1). In such an assay, mutations in the reporter gene arise at some low rate in cells proliferating in culture and are then selected by plating the cultures on drug-containing medium, which kills wild-type cells. The CAN1 mutation assay in Saccharomyces cerevisiae is based on this principle and has been used extensively to gain insights into mechanisms controlling genome stability in yeast (2)(3)(4)(5). Although drug resistance-based mutation assays have a number of advantages, most particularly in their relative ease and rapidity, their major limitation is that they allow direct comparisons of mutation rates only between strains that have the same level of drug sensitivity. This condition is largely satisfied when one compares the mutation rates of isogenic strains, e.g., a laboratory strain and a DNA repair mutant derived from it. However, this condition does not hold when one wishes to compare mutation rates among nonisogenic strains, e.g., a panel of clinical or environmental isolates, which may have various levels of drug tolerance. In this case, more tolerant strains are expected to survive for a longer period on selection medium, all the while continuing to produce resistance mutations, leading to an artificially high mutation rate estimate, and vice versa. Similar considerations preclude direct comparisons of mutation rates between different species, e.g., one that is highly drug susceptible versus one that is more drug tolerant, or between different growth conditions (e.g., growth in the presence of a stressor that may affect overall stress/drug tolerance). Thus, in order to rigorously measure and compare mutation rates in a way that is not restricted to a small number of laboratory strains and their derivatives, it is necessary to develop a drug resistance-independent method to measure mutation rates.
Candida glabrata is a yeast that is closely related to S. cerevisiae (6) and is also associated with the human microbiome (7). In contrast to S. cerevisiae, however, C. glabrata is an opportunistic pathogen that can cause life-threatening infections in immunocompromised individuals (8,9). The prevalence of C. glabrata in infections has been increasing, and it is now the second most prevalent cause of invasive candidiasis in the United States and Europe (10,11). One reason for this increase is that C. glabrata either is intrinsically resistant or acquires resistance relatively quickly to the limited number of antifungal drugs currently in clinical use (9,12). In C. glabrata, drug resistance is predominantly caused by point mutations, either in transcription factors regulating drug efflux (13)(14)(15) or in genes encoding drug targets (16)(17)(18). Several studies have also documented the emergence of multidrug-resistant (MDR) C. glabrata infections, for which there are no treatment options and which are associated with extremely high mortality (19,20).
Comparisons of DNA sequences (both of specific genes and of whole genomes) from different C. glabrata clinical isolates have revealed an exceedingly high level of genetic diversity, in terms of both single nucleotide polymorphisms (SNPs) and chromosomal arrangements (21)(22)(23)(24)(25)(26). A multilocus sequencing typing (MLST) scheme based on SNPs at six different loci has identified over 100 distinct sequence types (STs) of C. glabrata, which cluster into seven clades (22, 25; https://pubmlst.org/cglabrata/). However, even strains within the same clade exhibit high genetic diversity (22), which, together with rapid emergence of mutations that cause drug resistance, has led to the hypothesis that C. glabrata may have a highly plastic, or mutable, genome. However, mutation rates in C. glabrata have not been measured or compared to other organisms.
In a previous study, we began to examine the role of DNA mismatch repair (MMR) in maintaining genome stability and emergence of drug resistance in C. glabrata (24). In particular, we found that different STs of C. glabrata are associated with specific SNPs in MMR gene MSH2, some of which result in amino acid changes and, when introduced into an msh2Δ reference strain on a plasmid, do not fully rescue that strain's hypermutator phenotype. This result suggested that some C. glabrata isolates, e.g., those carrying certain variants of MSH2, may exhibit higher mutation rates and may therefore acquire drug resistance more rapidly. Indeed, in Cryptococcus, naturally occurring mutations in MSH2 have been shown to contribute to microevolution and population diversity (59, 60). Yet, recent clinical studies have not found an association between specific MSH2 alleles and drug resistance (27-29, 61, 62), raising the question of whether clinical isolates carrying these alleles are true mutators. To answer this question, it is necessary to measure and directly compare mutation rates between clinical isolates of C. glabrata. However, as described above, comparisons of different clinical isolates are complicated by the variation in their drug resistance profiles, some of which is due to varying activity of drug efflux pumps (14, 15, 18), which is likely to render any drug resistance-based mutation assay inapplicable.
In this study, we developed and validated a GFP-based mutation reporter that allowed us to measure mutation rates in a drug resistance-independent way. The reporter was shown to recapitulate the mutation rate and spectrum of a DNA mismatch repair mutant and detect DNA damage-induced mutagenesis in C. glabrata, recapitulate the mutation rates of wild-type and mutator strains of S. cerevisiae, and compare spontaneous mutation rates in C. glabrata and S. cerevisiae. Finally, we used this reporter to measure the mutation rates of a number of clinical isolates of C. glabrata, including those carrying a specific MSH2 variant previously suggested to increase mutagenesis.
RESULTS
Developing the GFP-based mutation rate reporter. To measure mutation rates in C. glabrata, at first we attempted to use traditional drug resistance-based reporters, such as CAN1, which has been used extensively to measure mutation rates in S. cerevisiae (2)(3)(4)(5). In that fungus, CAN1 cells are sensitive to the drug canavanine, whereas mutations in the can1 gene cause canavanine resistance and can be selected on canavanine-containing plates. However, although the C. glabrata genome contains several potential CAN1 orthologs (CAGL0J08162g and CAGL0J08184g), commonly used reference strain ATCC 2001 (also known as CBS138) was completely resistant to canavanine up to concentrations of 1 mg/ml (see Fig. S1A in the supplemental material; also data not shown), whereas the typical selection concentration in S. cerevisiae is 60 g/ml. We also tried using 5-fluoroanthranilic acid (5-FAA), which selects for mutations in the tryptophan biosynthetic pathway (30). Although ATCC 2001 and many clinical isolates of C. glabrata were sensitive to 5-FAA, we discovered that this sensitivity widely varied among different strains (Fig. S1B). Although this variation was not entirely surprising, as different clinical isolates are well known to show different levels of antifungal drug resistance, which is at least in part due to the activity of drug efflux pumps, it also eliminated the possibility of using 5-FAA-or likely any other drug resistance-based approach-to measuring mutation rates in C. glabrata clinical isolates.
To enable measurements of mutation rates in a way that was independent of drug resistance, we chose a fluorescence-based approach. We created a cassette where the gene encoding yeast enhanced green fluorescent protein (yEGFP) was driven by the strong constitutive promoter pTEF1 of S. cerevisiae (31), which was also previously shown to strongly induce gene expression in C. glabrata (32) (Fig. 1A). In order to facilitate the chromosomal insertion and subsequent tracking of this construct, the cassette also contained the gene conferring nourseothricin resistance (NAT) driven by its own promoter (Fig. 1A). This cassette was inserted into the right arm of C. glabrata chromosome K between two uncharacterized ORFs (Fig. 1A) and validated by sequencing. The resulting strain was constitutively and strongly fluorescent (Fig. 1B) and was used to measure mutation rates of yEGFP using fluorescence-activated cell sorting (FACS) in fluctuation experiments as described below (Fig. 1C; Fig. S2).
Briefly, in a typical experiment, a starter YPD culture was diluted into multiple (e.g., 8 to 12) parallel YPD cultures in a 96-well plate to a starting density of a few cells per well and incubated at 37°C overnight. The following morning, each culture, in its entirety, was diluted severalfold in YPD to ensure that cells collected for FACS analysis several hours later were in log phase, which was found to be necessary to achieve maximum expression of GFP and the optimal resolution between GFP-positive and GFP-negative populations. The cells were then collected by filtration, resuspended in PBS, and analyzed by FACS. Prior to FACS analysis, propidium iodide (PI) was added to each sample to gate out the inviable, PI-positive cell subset (Fig. S2). Although GFP levels of fluorescent cultures varied slightly between experiments (e.g., between the same strain analyzed on different days and between different strains), the overall fluorescence levels of GFP+ cells were always significantly higher than those of GFP− cells (Fig. S3), allowing for efficient sorting of GFP− cells from GFP+ populations.
The number of cells per culture (n) was optimized by varying the final volume of the cultures. For strains with lower mutation rates, such as the reference C. glabrata strain ATCC 2001, the optimal n was found to be ≥3 × 10⁶ cells/well, whereas for cells with elevated mutation rates (such as msh2Δ), 1 × 10⁶ cells/well was sufficient to obtain multiple cultures with mutants. Each culture, in its entirety, was analyzed by FACS, and GFP-negative cells were collected, immediately plated onto YPD plates, and allowed to form colonies. These colonies were then validated for (i) the presence of the NAT cassette (by replica plating onto nourseothricin medium) and (ii) reduction of GFP fluorescence (by flow cytometry). Next, yEGFP was sequenced in NAT+ GFP− colonies to identify the mutations responsible for reduced fluorescence. For every strain, 200 GFP+ cells were also collected by FACS and plated on YPD to calculate plating efficiency. Finally, mutation rates and 95% confidence intervals were calculated using the MSS maximum likelihood method (1, 33) based on n (the number of processed cells × PI-negative fraction × plating efficiency) and the number of NAT+ GFP− mutants in every culture (r). Importantly, we found that every NAT+ GFP− colony in which GFP was sequenced contained a single mutation in the yEGFP ORF or, in a few cases, the pTEF1 promoter (see below), indicating that loss of fluorescence is virtually always caused by mutations in yEGFP and that therefore this mutation assay is highly specific to a single locus.
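For readers unfamiliar with the MSS estimator, the sketch below shows the standard Ma-Sandri-Sarkar recursion for the Luria-Delbruck distribution and a maximum-likelihood fit over the observed mutant counts. It is a generic illustration, not the authors' code; the 95% confidence-interval calculation and the plating-efficiency/PI corrections folded into n are omitted.

import numpy as np
from scipy.optimize import minimize_scalar

def ld_probabilities(m, r_max):
    """Luria-Delbruck P(r mutants) for r = 0..r_max via the MSS recursion."""
    p = np.zeros(r_max + 1)
    p[0] = np.exp(-m)
    for r in range(1, r_max + 1):
        p[r] = (m / r) * sum(p[i] / (r - i + 1) for i in range(r))
    return p

def mss_mle(mutant_counts, cells_per_culture):
    """Estimate the expected mutations per culture (m) by maximum likelihood,
    then convert to a mutation rate per cell per generation as m / n."""
    counts = np.asarray(mutant_counts)
    r_max = int(counts.max())

    def neg_log_likelihood(m):
        p = ld_probabilities(m, r_max)
        return -np.sum(np.log(p[counts] + 1e-300))

    res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, r_max + 1.0),
                          method="bounded")
    m_hat = res.x
    return m_hat, m_hat / cells_per_culture

# e.g. eight parallel cultures of 3e6 viable cells each:
# m, rate = mss_mle([0, 1, 0, 2, 0, 0, 5, 1], 3e6)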
Mutations in GFP do not affect cellular fitness. A key condition that has to be satisfied by any mutation rate measurement assay is that mutations in the reporter gene should not affect the fitness of the strain, either positively or negatively, as this would result in overestimating or underestimating the mutation rate, respectively (1). To check that this condition is fulfilled in the GFP-based mutation reporter, we isolated two different loss-of-function mutations in yEGFP using FACS and measured their fitness compared to the parent strain over 24 h of growth in YPD, which is the duration of a typical mutation rate experiment. Each strain was mixed 1:1 with the parent strain, producing a coculture where approximately half the population was fluorescent and half was not (Fig. 2, Time 0). Both cocultures were diluted into multiple wells to several hundred cells per well and grown for 24 h at 37°C, mimicking a typical fluctuation experiment. After 24 h, fluorescence measurements showed that the proportions of fluorescent and nonfluorescent cells in the cultures had not significantly changed (Fig. 2, 24 h), indicating that mutations in yEGFP did not affect fitness relative to the parental GFP-positive strain.
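A simple way to quantify this "no fitness difference" conclusion is to convert the start and end mutant frequencies of the coculture into a per-generation selection coefficient. The sketch below does this with illustrative numbers; the frequencies and generation count are assumptions, not data from Fig. 2. A coefficient near zero indicates that the GFP-negative mutant and the fluorescent parent are equally fit.

import math

def selection_coefficient(f0_mut, f_end_mut, generations):
    """Per-generation selection coefficient from mutant frequencies at the
    start and end of the competition (log-ratio method)."""
    ratio0 = f0_mut / (1.0 - f0_mut)
    ratio1 = f_end_mut / (1.0 - f_end_mut)
    return math.log(ratio1 / ratio0) / generations

# e.g. 50% mutant at time 0 and 49% after roughly 10 generations:
print(f"s = {selection_coefficient(0.50, 0.49, 10):+.3f}")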
FIG 2 Mutation of yEGFP does not impact fitness relative to the fluorescent parent strain. Two different yEGFP mutants were each cocultured together with the fluorescent parent strain for 24 h. The relative proportions of the strains carrying wild-type and mutant yEGFP genes, quantified and shown as bar graphs, remained constant over the course of the experiment, indicating no difference in relative fitness.
GFP-based mutation reporter recapitulates mutation rates in Saccharomyces cerevisiae. In addition to inserting the GFP cassette into the C. glabrata genome, we also inserted it at the CAN1 locus in S. cerevisiae (Fig. 3A) in order to test whether this reporter would recapitulate the previously determined mutation rates of two S. cerevisiae strains: a wild-type strain of the W303 background and an isogenic mutant carrying a deletion of the SHU1 gene. SHU1 functions in promoting error-free DNA repair by homologous recombination, and its loss was shown to increase the CAN1 mutation rate by approximately 8.5-fold in one study (34) and by 4-fold in another study (35). The FACS-based assay described above (Fig. 1C) was used to measure the mutation rates of yEGFP in wild-type and shu1Δ strains. The assay recapitulated the increase in mutation rate in the shu1Δ mutant relative to the wild-type strain (4-fold; Fig. 3B). Furthermore, the absolute mutation rate obtained for the wild-type strain was ~2.7 per 10⁷ cells per generation, which agrees well with typical mutation rates obtained in S. cerevisiae (2, 5, 34).
Finally, using the fluorescence-based mutation reporter, we were able to directly compare spontaneous mutation rates at the C. glabrata locus carrying the cassette (Fig. 1A) and the CAN1 locus of S. cerevisiae. Interestingly, we found that the mutation rate was 9-fold higher at the S. cerevisiae CAN1 locus than at the analyzed locus in C. glabrata (Fig. 3B), suggesting that, at least during unperturbed growth in YPD, the examined strain of C. glabrata (ATCC 2001) does not behave as a spontaneous hypermutator.
GFP-based mutation reporter captures the msh2Δ mutator phenotype and mutation spectrum. To further validate the GFP-based mutation reporter in C. glabrata, we used CRISPR to insert the cassette into the same chromosomal location (Fig. 1A) in a C. glabrata strain derived from ATCC 2001 but carrying a deletion of DNA mismatch repair (MMR) gene MSH2 (24). MSH2 is the C. glabrata homolog of MutS, whose loss has been shown to result in a strong mutator phenotype in all organisms where it has been examined, including C. glabrata (24, 36). Both MSH2 and msh2Δ strains were used in fluctuation experiments to measure their mutation rates as described above (Fig. 1C and Fig. S2). We found that, as expected, msh2Δ resulted in a strong mutator phenotype, increasing the rate of mutation of yEGFP by 40-fold (Fig. 4A), which is very similar to its effect on mutation rates in S. cerevisiae, where the msh2Δ effect on the CAN1 mutation rate ranges from a 16- to a 40-fold increase, depending on the study (37-40). Furthermore, sequencing GFP− colonies revealed that the msh2Δ strain produced a very different spectrum of mutations in yEGFP from the MSH2 strain (Fig. 4B and C). Mutations in the wild-type (MSH2) strain largely comprised base pair substitutions (bps), whereas the majority of mutations in the msh2Δ mutant were due to single nucleotide deletions or insertions in mononucleotide repeats (e.g., AAAA or TTTT), with the strongest mutation "hot spot" at a run of seven T's (Fig. 4B). This mutation spectrum recapitulates that of the msh2Δ strain in S. cerevisiae and is thought to be due to DNA polymerase "slippage" errors at mononucleotide repeats, which are normally repaired by MMR (37, 38, 41). Thus, the GFP-based mutation reporter was able to accurately capture both the increase in mutation rate and the change in mutation spectrum of the msh2Δ mutant.
[FIG 3 caption] The assay detected the mutator phenotype of the shu1Δ mutant, previously shown to have an elevated CAN1 mutation rate (34). The assay also showed that spontaneous forward mutation in C. glabrata during standard laboratory growth in YPD is not higher than that in S. cerevisiae. Sc, S. cerevisiae; Cg, C. glabrata. Error bars, 95% confidence intervals.
GFP-based mutation reporter detects DNA damage-induced mutagenesis. To investigate whether the GFP-based mutation reporter would capture DNA damage-induced mutagenesis, we performed the fluctuation assay on the ATCC 2001 strain grown in the presence of 0.01% methyl methanesulfonate (MMS), an alkylating agent. Indeed, we detected a 48-fold increase in yEGFP mutation rate in cells cultured in the presence of MMS (Fig. 5A). Sequencing mutations in GFP− colonies recovered after growth in MMS showed that the spectrum of mutations did not change significantly from that in cells grown in the absence of MMS (Fig. 5B and C). Thus, MMS caused an overall increase in mutagenesis of yEGFP, as expected, but apparently did not significantly change the cellular pathways by which these mutations were generated. This was consistent with a previous report, where the MMS-induced mutation spectrum in yeast was similar to the spontaneous mutation spectrum (42).
[FIG 4 caption] The mutation rate of yEGFP was measured and found to be increased 40-fold by msh2Δ relative to the strain carrying wild-type MSH2. Error bars, 95% confidence intervals. (B and C) Sequencing of yEGFP in nonfluorescent mutants isolated by FACS from MSH2 and msh2Δ strains identified a number of mutations throughout the yEGFP ORF. Whereas mutations in the MSH2 strain were mostly base pair substitutions, the majority of mutations in the msh2Δ strain were single nucleotide frameshifts (insertions or deletions) in mononucleotide runs, recapitulating the mutational signature of msh2Δ in S. cerevisiae (37, 38, 41). bps, base pair substitution; ins/del, insertion or deletion.
Mutation rates of C. glabrata clinical isolates. We used the GFP-based mutation assay to measure mutation rates of six clinical C. glabrata isolates: three that, like ATCC 2001, belonged to sequence type (ST) 15 and carried the corresponding MSH2 sequence, and three belonging to ST16 and carrying the variant MSH2 E231G/L269F (23, 24). For every clinical isolate, the reporter cassette was integrated into the same chromosomal locus using CRISPR and validated by flow cytometry and DNA sequencing, and the mutation rate was measured using FACS as described above. Interestingly, we found that none of the clinical isolates, including those carrying MSH2 E231G/L269F, had elevated mutation rates relative to the reference strain ATCC 2001 (Fig. 6). Thus, even though our previous examination indicated that MSH2 E231G/L269F did not fully rescue the mutator phenotype of the msh2Δ mutant in ATCC 2001 (24), a direct, rigorous assessment of the mutation rate of the clinical isolates showed that under standard laboratory conditions this variant, in its native genomic context, is not associated with an elevated spontaneous forward mutation rate.
DISCUSSION
The goal of this study was to develop, validate, and use a new method for measuring mutation rates in a way that did not rely on drug resistance-based reporters. To this end, we designed a FACS-based scheme to capture and quantify loss-of-function mutations in the gene encoding GFP and used it in fluctuation experiments to measure mutation rates in C. glabrata and S. cerevisiae. We found that this fluorescence-based mutation reporter recapitulated the previously reported mutator phenotype of the S. cerevisiae shu1Δ mutant and captured the expected increase in mutation rates due to loss of MSH2 or treatment with a genotoxic agent in C. glabrata. This reporter also accurately captured the mutational spectrum (a predominance of single nucleotide insertions or deletions in homopolymeric runs) associated with loss of MSH2. Finally, the reporter was used to measure the mutation rates of several clinical C. glabrata isolates, including those carrying the MSH2 E231G/L269F variant previously suggested to contribute to increased mutagenesis (24), and showed that all clinical isolates examined had very similar spontaneous mutation rates.
[FIG 5 caption] (A) C. glabrata strain ATCC 2001 carrying the fluorescent reporter was cultured in the presence of 0.01% MMS, and its mutation rate was measured as described in the text and in Fig. 1. Error bars, 95% confidence intervals. (B and C) Sequencing of yEGFP in nonfluorescent mutants that formed in the presence of MMS showed a mutation spectrum similar to that produced in the absence of the drug. Several mutations generated during growth in the presence of MMS were in the promoter region (lowercase letters, top row). bps, base pair substitution; ins/del, insertion or deletion.
FIG 6 Fluorescence-based mutation reporter reveals similar mutation rates in a panel of clinical C. glabrata isolates. Using CRISPR, the yEGFP-NAT cassette was integrated into the same genomic locus in six clinical isolates of C. glabrata. Three of these isolates, like ATCC 2001, belonged to ST15 and therefore carried the same MSH2 allele, whereas the other three belonged to ST16, which carries the MSH2 E231G/L269F variant (23). Mutation rates of yEGFP were measured as described above and found to be similar among all examined isolates irrespective of MSH2 sequence. Error bars, 95% confidence intervals.
Our mutation assay showed that under standard laboratory growth conditions (YPD, 37°C), C. glabrata clinical isolates carrying the MSH2 E231G/L269F variant do not show elevated spontaneous mutation rates relative to ATCC 2001 or to clinical isolates carrying the "wild-type" version of MSH2 (i.e., one identical to that in ATCC 2001). This is consistent with several recent clinical studies that did not see an association between MSH2 genotype and prevalence of drug resistance in C. glabrata (27-29, 61, 62). However, in our previous study, we found that several MSH2 variants, including MSH2 E231G/L269F, when introduced on a plasmid into an ATCC 2001-derived msh2Δ mutant, did not fully rescue that strain's elevated mutation rate (24). There are several non-mutually exclusive possibilities that can reconcile previous data with this present study. First, it is possible that, although the plasmids were maintained by selection, a subpopulation of the culture had lost the plasmid and therefore lacked any copy of MSH2. Second, it is possible that the MSH2 variants carried on a plasmid were present in more than one copy, which, depending on the nature of the mutation, might either help restore function (for a partial loss-of-function mutation) or further exacerbate the associated defects (for a dominant negative mutation). Finally, it is important to consider that in the clinical isolates studied here, the MSH2 E231G/L269F variant is present in its normal genomic context, which is that of ST16 (23). ST16 is separated from ATCC 2001 (ST15) by hundreds to thousands of SNPs throughout the genome, including SNPs in genes that encode protein partners of Msh2, such as Msh3 and Msh6 (36). Thus, it is possible that each MSH2 variant has evolved in concert with its partner genes in a way that maintains efficient MMR and low mutation rates. In this scenario, moving a particular variant to a different genomic context would force it to form suboptimal partnerships with noncognate interacting proteins, resulting in less efficient MMR and an increased spontaneous mutation rate. Consistent with this hypothesis, our analyses of MSH3 and MSH6 sequences have revealed that each of these genes has multiple SNPs between ST15 and ST16 strains, including five SNPs in each gene that result in amino acid changes (the ST15 and ST16 whole-genome sequences have been generated by us for a different project [unpublished data]).
The fluorescence-based mutation assay developed in our study has specific advantages and specific limitations that need to be considered when deciding whether to use it or another assay to measure mutation rates in a given experimental system. First, similar to other mutation assays based on a loss of function of a reporter, only mutations that affect GFP function, i.e., its ability to fluoresce in the detectable range, can be identified. Thus, if it is desirable to calculate true mutation rates independently of whether a mutation is expressed, it is more appropriate to use the recently developed whole-genome sequencing (WGS) approaches (43-45). Although this is a very powerful technique, it requires WGS of multiple isolates per strain and is therefore still considerably more expensive and computationally heavy than our method, which requires no computational expertise. One unique hurdle of our assay not shared by other methods is that it requires access to a FACS instrument that can sort millions of cells in minutes; otherwise, the time frame of a single experiment becomes unfeasibly long. Once the sorting is completed, however, the rest of the steps require a standard flow cytometer (or fluorescence microscope) and standard laboratory techniques. As discussed above, this assay is going to be more informative than drug resistance-based assays when the strains under comparison have different drug tolerance profiles, as can be expected for environmental/clinical isolates or different species, or when one wishes to measure the mutation rates of strains exposed to different types of environmental stress (e.g., antimicrobial drugs).
We validated our mutation assay by analyzing strains with elevated mutation rates (e.g., shu1Δ in S. cerevisiae and msh2Δ in C. glabrata). It should also be possible to use this assay to identify antimutators (i.e., genes whose loss reduces mutagenesis), which can be extremely informative for identifying cellular pathways that promote mutagenesis and genetic instability (46-48). However, this would require sorting cultures with significantly larger numbers of cells and would therefore take longer, with the required sorting time negatively correlating with the mutation rate of the strain. One potential improvement over the current methodology that would reduce the required time and labor is developing a fluorescence-based assay that uses flow cytometry to count the nonfluorescent cells but skips the sorting and plating steps. In our current setup, this was not possible because the number of GFP-negative cells recorded by the FACS instrument was typically much greater than, and did not correlate with, the number of colonies that grew from the sorted cells. In other words, despite our use of propidium iodide (PI) to gate out membrane-permeable cells, a large subset of PI-negative GFP-negative cells were inviable/nonculturable. Perhaps, with further optimization, e.g., using other fluorescent markers that can be used in conjunction with live/dead dyes, it will be possible to accurately record the number of nonfluorescent live cells using flow cytometry only, without the need for sorting and plating.
We have used the fluorescence-based assay to measure forward mutation rates; however, because the reporter cassette contains two genes (yEGFP and NAT), it can be adapted to measure large deletions by looking for simultaneous loss of both yEGFP and NAT, similar to the CAN1-URA3 loss assay developed in S. cerevisiae (49). Such an assay would be extremely useful in C. glabrata and other fungal pathogens characterized by frequent genomic rearrangements (26, 50, 51). In the current genomic location of the cassette, we did not identify any simultaneous deletions of yEGFP and NAT among the >170 analyzed C. glabrata cultures, including those containing the genotoxic agent MMS, indicating that the spontaneous rate of deletions at this locus is extremely low. In future studies, the yEGFP-NAT cassette will be integrated at genomic loci more likely to undergo rearrangements, such as subtelomeric loci containing multiple and variable numbers of genes from the adhesin family (52, 53) and ribosomal DNA (rDNA) (54).
The high degree of genetic diversity in C. glabrata populations and the fast emergence of drug resistance both indicate that at least under some conditions, C. glabrata is able to rapidly mutate and diversify its genome. Our present study indicates that these conditions do not include unperturbed growth in YPD, suggesting that C. glabrata may be subject to stress-induced mutagenesis. Indeed, previously, stress-induced mutator phenotypes have been identified in a majority of natural isolates of Escherichia coli, whereas only 5% were shown to act as constitutive mutators (55). Future studies will examine whether mutagenesis is affected by stress conditions, including those encountered by C. glabrata in the host, such as oxidative stress and exposure to antifungal drugs. The fluorescence-based mutation assay is particularly well suited to address such questions because its outcome is independent of whether the strain's sensitivity to stress is altered by an exogenous treatment (e.g., by an antifungal drug). This assay can also be adapted and used to address questions regarding mechanisms driving mutagenesis in other clinically relevant microbes, including bacterial pathogens and haploid pathogenic fungi, such as Candida auris, Cryptococcus neoformans, and Candida lusitaniae, where emergence of drug resistance poses a serious public health threat.
MATERIALS AND METHODS
Construction of the GFP reporter cassette and creating fluorescent strains for fluctuation analyses. The S. cerevisiae TEF1 promoter was amplified using primers CACACCAGAGCTCCAAAATGTTTCTACTCC and CCATTTTGGATCCAAAACTTAGATTAGATTGC and subcloned into the BamHI-SacI sites of pYC54 (56), placing it directly upstream of the YFP ORF. Next, the YFP ORF was replaced by that of yEGFP as follows. yEGFP was amplified from pGRB2.3 (57) using primers ACTAGTGGATCCCCCGGGCTGCAGGAATTCATG and CGAATTGGCTAGCTTTACCTCTATATCGTGTTCG and subcloned into the plasmid using BamHI-NheI sites. The final plasmid contained the pTEF1-yEGFP-NAT cassette (Fig. 1A). This cassette was amplified from the plasmid using primers CCCCTCGAGGACGAAGTTCC and TGTAATACGACTCACTATAGGGCG and transformed into C. glabrata strain ATCC 2001 using nourseothricin resistance as selection.
Because there were no targeting homology sequences on the cassette, it integrated randomly into the C. glabrata genome. Several independent, constitutively fluorescent transformants were chosen and submitted for whole-genome sequencing at the New Jersey Medical School Molecular Resource Facility using the NextSeq (Illumina, San Diego, CA) platform. Libraries were prepared with the Nextera XT kit (Illumina, San Diego, CA) to produce paired-end reads of 150 bp for an approximate minimum coverage of 100×. Data analysis was performed using CLC Genomics Workbench (Qiagen, Hilden, Germany). Each transformant was found to contain a single integration of the cassette. The transformant carrying the cassette integrated between uncharacterized ORFs CAGL0K11132g and CAGL0K11198g on Chr K (strain ESCg36), as shown in Fig. 1A, was chosen for further analysis.
In all other C. glabrata strains reported in this study, the cassette was integrated at the same genomic locus as in ESCg36 using CRISPR-mediated targeted integration. The cassette was amplified from ESCg36 using primers GCAGTCTTTCTTGATCCACATATC and CACAGAATTGGTAGGACGGG, which produced approximately 500-nt 5′ and 3′ homology each to the desired integration locus. CRISPR was performed as in reference 58, except that cells were made competent for electroporation using the Frozen-EZ yeast transformation kit (Zymo Research) according to the manufacturer's instructions.
To replace the CAN1 ORF of S. cerevisiae with the yEGFP-NAT cassette, the cassette was amplified using primers AAAAGGCATAGCAATGACAAATTCAAAAGAAGACGCCGACATAGAGGACCAGTGAATTGTAATACGACTC and AGGTAATAAAACGTCATATCTATGCTACAACATTCCAAAATTTGTCCTGGTACCGGGCCCCCCCTCGAG, which introduced 48-nt regions of homology directly 5′ upstream and 3′ downstream of the CAN1 ORF. This PCR product was transformed into S. cerevisiae strains W4069-4C (wild-type W303 MATa) and W4220-15A (shu1Δ::HIS3 MATa) using the Frozen-EZ yeast transformation kit (Zymo Research) according to the manufacturer's instructions. Transformants were selected on nourseothricin plates as described above and validated by sequencing of the CAN1 locus and by flow cytometry to verify the acquisition of green fluorescence.
All primers were ordered from Integrated DNA Technologies, and all Sanger sequencing of the above-described constructs was done by Genewiz.
Measuring yEGFP mutation rates using fluctuation analysis and FACS. A starter YPD culture of the strain whose mutation rate was being measured was diluted into multiple (e.g., 8 to 12) parallel YPD cultures in a 96-well plate to a starting density of several hundred cells per well and incubated overnight at 37°C (C. glabrata) or 30°C (S. cerevisiae). The following morning, each culture was diluted severalfold in YPD and grown at the same temperature to ensure that when the cells were collected for FACS several hours later, they were in log phase, which was found to be necessary to achieve maximum expression of GFP and the best resolution between GFP-positive and GFP-negative populations. The cells were then collected by filtration using 0.45-μm mixed cellulose ester membrane filters (Millipore), resuspended in PBS, and sorted using the BD FACSAria II (BD Biosciences) at the New Jersey Medical School Flow Cytometry and Immunology Core Laboratory. Ten to 15 min before sorting, each sample was stained with 10 μg/ml propidium iodide (PI; ThermoFisher) to identify and gate out inviable cells. FSC and SSC parameters were set on log scale. Cells were gated for singlets (SSC-W versus SSC-H) followed by live gating on PI negative. Finally, cells were sorted based on GFP expression. GFP-negative cells from each entire culture were collected into microcentrifuge tubes containing 250 μl YPD and then plated immediately onto YPD agar plates. Two hundred GFP-positive cells were also sorted and plated to calculate plating efficiency. The plates were incubated at 37°C (C. glabrata) or 30°C (S. cerevisiae) to allow the sorted cells to form colonies.
The resulting colonies were checked for the presence of the NAT marker by replica plating or patching onto plates containing 100 μg/ml nourseothricin (Jena Bioscience) and then checked for the level of green fluorescence using a BD Accuri C6 flow cytometer (BD Biosciences). Mutation rates and 95% confidence intervals were calculated using the MSS maximum likelihood method (1, 33) based on the number of NAT+ GFP− mutants in every culture (r) and the average number of viable cells per culture (n). To identify the mutations responsible for loss of GFP fluorescence, the yEGFP ORF and promoter were sequenced using primers CTCTTTCGATGACCTCCCATTG and TGTAATACGACTCACTATAGGGCG, respectively.
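As an illustration of the rate calculation (a sketch, not code from the study), the following Python snippet implements the maximum-likelihood step of a fluctuation analysis: it evaluates the Luria-Delbrück distribution with the standard Ma-Sandri-Sarkar recursion, grid-searches the most likely number of mutations per culture, and divides by the average number of viable cells. The mutant counts and cell number below are made-up example values, and plating-efficiency and confidence-interval corrections are omitted.

```python
import math

def mss_probabilities(m, r_max):
    # Luria-Delbrueck distribution via the Ma-Sandri-Sarkar recursion:
    # p0 = exp(-m); p_r = (m / r) * sum_{k=0}^{r-1} p_k / (r - k + 1)
    p = [math.exp(-m)]
    for r in range(1, r_max + 1):
        p.append((m / r) * sum(p[k] / (r - k + 1) for k in range(r)))
    return p

def log_likelihood(m, mutant_counts):
    p = mss_probabilities(m, max(mutant_counts))
    return sum(math.log(p[r]) for r in mutant_counts)

def estimate_mutation_rate(mutant_counts, viable_cells_per_culture):
    # Grid-search the expected number of mutations per culture (m) that
    # maximizes the likelihood, then convert to a per-cell, per-generation rate.
    grid = [0.01 * i for i in range(1, 2001)]
    m_hat = max(grid, key=lambda m: log_likelihood(m, mutant_counts))
    return m_hat / viable_cells_per_culture

# Hypothetical example: NAT+ GFP- mutant counts (r) from ten parallel cultures,
# with an average of 2e7 viable cells per culture.
r_values = [0, 1, 0, 3, 0, 2, 7, 0, 1, 0]
print(estimate_mutation_rate(r_values, 2e7))
```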
ACKNOWLEDGMENTS
We thank Puneet Dhawan of the New Jersey Medical School Molecular Resource Facility for assistance with whole-genome sequencing and Sukhwinder Singh and Tammy Mui of the New Jersey Medical School Flow Cytometry and Immunology Core Laboratory.
|
2019-03-11T17:22:21.226Z
|
2019-02-26T00:00:00.000
|
{
"year": 2019,
"sha1": "143eefb055c658e09270b795ffbe16e6f77d9e8c",
"oa_license": "CCBY",
"oa_url": "https://mbio.asm.org/content/mbio/10/1/e00120-19.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "96aeedf17f9b3f5c7a6e915f8c8b423e0af450b0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
252459569
|
pes2o/s2orc
|
v3-fos-license
|
Interaction effects of sex on the sleep loss and social jetlag-related negative mood in Japanese children and adolescents: a cross-sectional study
Abstract Study Objectives Sleep problems, such as accumulated sleep loss and social jetlag (SJL), which is characterized by a discrepancy in a person’s sleep pattern between the weekday and the weekend, are associated with physical and mental health problems, and academic performance in young ages. However, sex differences in these associations are not fully understood. The purpose of this study was to investigate the effect of sex on sleep-related factors, mental health (negative mood), and academic performance in Japanese children and adolescents. Methods A cross-sectional online survey was conducted with 9270 students (boys: N = 4635, girls: N = 4635) ranging from the fourth grade of elementary school to the third grade of high school, which typically includes ages 9–18 years in Japan. Participants completed the Munich ChronoType Questionnaire, the Athens Insomnia Scale, self-reported academic performance, and negative mood-related questions. Results School grade-related changes in sleep behavior (e.g. delayed bedtime, shortened sleep duration, and increased SJL) were detected. Girls had greater sleep loss on weekdays and SJL on weekends than boys. Multiple regression analysis revealed that sleep loss and SJL were more associated with negative mood and higher insomnia scores in girls than in boys, but not with academic performance. Conclusions Sleep loss and SJL in Japanese girls had a higher correlation to their negative mood and tendency to insomnia than in boys. These results suggest the importance of sex-dependent sleep maintenance for children and adolescents.
Introduction
Sleep deprivation (sleep loss) has been reported to be associated with an increase in various health risks, such as cardiovascular diseases, diabetes, metabolic syndrome, and depression, and has attracted attention as a global health issue [1][2][3]. A survey by the Organisation for Economic Co-operation and Development (OECD) found that Japanese people get the least amount of sleep among 33 countries, revealing that Japan is one of the world's most sleep-deprived countries (https://www.oecd.org/health/health-data.htm). In fact, according to a nationwide survey in Japan conducted in 2019, although the government recommends 6-8 hours of sleep, 37.5% of men and 40.6% of women sleep less than 6 hours a day [4].
A sleep loss-related sleep problem is "social jetlag" (SJL). SJL is caused by the "gap between social and biological time (circadian clock)." The circadian clock regulates many physiological functions with a day-night difference and modulates the sleep-wake cycle [5]. The SJL occurs when people wake up early for their social constraints (school and work) on weekdays, but their circadian clock phase is still delayed because of the delayed sleep phase on weekends [6,7]. Previous studies have shown that SJL is associated with obesity and depression [8,9]. Because evening chronotype people prefer to sleep later and to be active and alert in the evening, they tend to have more sleep loss on weekdays and more delayed bedtime and wake-up time on weekends with longer sleep, compared with morning chronotype. This sleep behavior in the evening chronotype people also results in a large SJL [7].
Adolescents tend to have more sleep deprivation than adults due to biological and social changes [10]. Biological changes include delayed circadian clock phase beginning around adolescence and delayed bedtimes due to delayed accumulation of sleep pressure [11,12]. In addition, adolescents must cope with social changes, such as extracurricular activities, the use of electronic devices, and increased academic workloads, which can lead to sleep loss and other sleep disorders [13,14]. One study showed that sleep deprivation in adolescents is a predictor of sleep disorders in adults [15], suggesting that adolescent sleep plays an important role in long-term health. Sleep duration in adolescents is decreasing year by year, with a delay in bedtime [16]. The National Sleep Foundation recommends 8-10 hours of sleep for 14- to 17-year-olds [17], but it has been reported that approximately 25% of adolescents in Japan sleep less than 6 hours, and sleep loss among adolescents is a growing problem in Japan [18]. SJL has also been reported in Japanese adolescents [19]. Sleep loss and SJL have been shown to be associated with a variety of health risks in both adults and adolescents and have negative effects on daily life, such as lower academic performance, negative mood, and daytime sleepiness [20][21][22]. However, most of these studies were conducted on high school and college students, and there have been few large-scale survey studies at younger ages, such as in children.
Sex differences in sleep have been observed in adolescents and adults [23,24]. For example, circadian rhythms tend to be more delayed in men than in women between the ages of 20 and 40 [24]. On the other hand, subjective and objective data reported that girls generally tended to sleep longer than boys [23]. Women have been found to have a higher prevalence of sleep disorders, higher levels of daytime sleepiness, and longer desired sleep duration, suggesting that women may have a greater need for sleep than men [25,26]. In addition, there are not only sex differences in sleep variables but also in the effects of sleep variables on health risk [27]. However, a large study of Japanese children and adolescents has not yet clarified the existence of sex differences in sleep variables or in associations with health risks.
In this study, we focused on the sex differences in sleep habits among Japanese elementary, junior high, and high school students, and how these sex differences affect academic performance, negative mood, and insomnia.
Ethical approval
The Ethics Review Committee on Research with Human Subjects at Waseda University approved this experiment (No. 2021-101) on June 4, 2021, and the guidelines of the Declaration of Helsinki were followed. This cross-sectional study was conducted, analyzed, and reported in accordance with the STROBE statement. Approval for data collection and use for research analysis was obtained from the participants when they answered the survey.
Study design and participants
Using our previous cross-sectional web-based survey, we performed a power analysis for multiple regression analysis with confounding factors to detect the sample size [28]. An online survey company (Macromill Inc., Tokyo, Japan) was commissioned to conduct the current survey. The recruitment was done through the company's online membership who registered a family structure. The company asked the online members who had children of targeted grades to participate, and then asked their children as participants to answer the questionnaire. We also asked parents to help their children to answer if necessary. Gift cards or shopping points were rewarded to the participants. Respondents who did not meet the criteria (e.g. mismatched grade or incomplete answer) were excluded by the company. Finally, 1030 participants were randomly selected from each grade level (from the fourth grade of elementary school (9-10 years old) to the third grade of high school [17-18 years old]), with a boys: girls ratio of 1:1. The survey was conducted in June 2021 and included 35 items related to basic characteristics (grade, sex, family structure), academic performance, mental health, and life habits (sleep, eating, and physical activity).
Sleep loss and SJL
Sleep behaviors were calculated based on the Munich ChronoType Questionnaire (MCTQ) [29]. This study focused on three sleep variables: (1) sleep loss across the week (SLOSSweek), calculated as the difference in sleep duration between school days and free days; (2) SJL, calculated as the difference between the midpoints of sleep on school days and free days; and (3) the midpoint of sleep on free days corrected for sleep loss on workdays (MSFsc), which is an indicator of chronotype.
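For concreteness, a minimal Python sketch of these three quantities is given below; it is an illustration rather than the authors' scoring code, the clock-time arithmetic (wrap-around at midnight) is simplified, and the exact SLOSSweek convention simply follows the description above.

```python
def sleep_metrics(onset_school, wake_school, onset_free, wake_free):
    # Clock times in hours (e.g. 23.5 = 23:30); sleep episodes assumed shorter than 24 h.
    dur = lambda onset, wake: (wake - onset) % 24                   # sleep duration (h)
    mid = lambda onset, wake: (onset + dur(onset, wake) / 2) % 24   # mid-sleep clock time
    sd_w, sd_f = dur(onset_school, wake_school), dur(onset_free, wake_free)
    msw, msf = mid(onset_school, wake_school), mid(onset_free, wake_free)

    sloss_week = sd_f - sd_w                          # (1) free-day minus school-day duration
    sjl = min(abs(msf - msw), 24 - abs(msf - msw))    # (2) social jetlag |MSF - MSW|, wrap-safe
    sd_week = (5 * sd_w + 2 * sd_f) / 7               # average weekly sleep duration
    msfsc = msf - (sd_f - sd_week) / 2 if sd_f > sd_w else msf  # (3) chronotype (MSFsc)
    return sloss_week, sjl, msfsc

# Example: sleeps 23:30-06:30 on school days and 00:30-09:00 on free days.
print(sleep_metrics(23.5, 6.5, 0.5, 9.0))
```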
Athens Insomnia Scale
Symptoms of insomnia were assessed using the Japanese version of the Athens Insomnia Scale (AIS) [30]. Each question of the AIS was scored in the range of 0-3, and the total AIS score was in the range of 0-24. A higher value on each item of the AIS indicates more insomnia-related symptoms.
Self-reported academic performance
Academic performance was assessed using self-evaluations of four subjects (Japanese, mathematics, science, and social studies) for elementary school students, and five subjects (Japanese, mathematics, science, social studies, and English) for junior and high school students. The questionnaire item was "What is your level of academic performance in your class or grade?" and this was done for each academic subject. Responses were rated on a five-point scale: 0, lower; 1, lower middle; 2, middle; 3, upper middle; and 4, upper. The total score of these responses was calculated and analyzed as a continuous variable, with a score of 0-16 for elementary school students and 0-20 for junior and high school students. Since the total number of subjects assessed differed between elementary, junior, and high school students, the analysis was conducted for each school type.
Negative mood
The negative mood was assessed by five items: "fatigue," "irritable mood," "unmotivated," "depressed," and "poor appetite." The responses were rated on a four-point scale: 0: "I don't feel this at all," 1: "I don't feel this very much," 2: "I feel this quite often," and 3: "I feel this very much." Therefore, the higher the number, the more negative the mood.
Statistical analysis
Because most of the data did not pass the normality test, we chose nonparametric analysis in this study. To investigate the characteristics of the subjects by sex, Kruskal-Wallis tests were conducted for the sleep variable and the objective variables (academic performance, mood, and sleep) for grade changes, and Mann-Whitney U-tests were conducted for sex by grade.
Multiple regression analysis using the forced entry method with an interaction term was then conducted to confirm the interaction effect of sex on the objective variables of SLOSSweek and SJL. Sex was treated as a dummy variable coded as 0: "female" and 1: "male." Grade (age difference) was scored as an ordinal variable and analyzed. We set grade as the control variable; sex as the interaction variable; SLOSSweek or SJL as the explanatory variable; and academic performance, mood, or sleep as the objective variable. Only those values for which an interaction effect was confirmed were subjected to a simple slope analysis, which was a subtest. In all multiple regression analyses, the variance inflation factor (VIF) was <10 and there was no multicollinearity among the explanatory variables. All data were analyzed using the Statistical Package for the Social Sciences (SPSS ver. 27, IBM Corp., Chicago, IL), and a p value <0.05 indicated statistical significance. Data were expressed as the mean ± SD or the mean ± SEM.
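A hedged sketch of the same modelling step in Python (statsmodels) is shown below; the study used SPSS, so this is only an equivalent formulation, and the input file and column names (grade, sex, sjl, fatigue) are hypothetical placeholders for the survey data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file: one row per participant with columns grade, sex, sjl, fatigue, ...
df = pd.read_csv("survey.csv")
df["sex"] = df["sex"].map({"girl": 0, "boy": 1})   # dummy coding as in the paper

# Grade as control, SJL as explanatory variable, SJL x sex interaction, fatigue as outcome.
model = smf.ols("fatigue ~ grade + sjl * sex", data=df).fit()
print(model.summary())

# Simple-slope check: effect of SJL within each sex
# (run only when the interaction term is significant).
for code, label in [(0, "girls"), (1, "boys")]:
    sub = smf.ols("fatigue ~ grade + sjl", data=df[df["sex"] == code]).fit()
    print(label, float(sub.params["sjl"]), float(sub.pvalues["sjl"]))
```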
Participants
In this study, data from 9270 students (4635 boys and 4635 girls) from the fourth year of elementary school through the third year of high school were used for analysis (Table 1). We confirmed significant differences in all sleep variables between elementary, junior, and high school students (p < 0.001 by Kruskal-Wallis test, Table 1).
Grade-and sex-dependent sleep behavior
The sex differences in sleep variables according to grade are shown in Figure 1. On weekdays, there was a tendency for bedtime to become later and sleep duration to become shorter as students' grade level progressed. Among high school students, girls showed significantly earlier wake-up times than boys, and had shorter sleep durations. On free days, both wake-up time and bedtime became later as the grade level increased, and sleep duration decreased. Girls woke up significantly later and slept longer than boys in most grades. It was also found that the midpoint of sleep time became later as the grade increased for both weekdays and free days, but a sex difference was only seen for free days in each grade. The chronotype (MSFsc) was significantly later in girls through junior high school, but the trend was the opposite in high school students. Importantly, both SLOSSweek and SJL tended to increase with grade and were significantly greater in girls than in boys in almost all grades.
Sex differences in self-reported academic performance, negative mood, and insomnia
Sex differences in self-reported academic performance, negative mood, and insomnia scale by grade are shown in Figure 2. In some grade groups, academic performance was higher in girls than in boys. Girls scored higher in negative moods than did boys. In the content of the AIS questionnaire, a sex difference was found in many grades for the items "sense of well-being during the day" and "sleepiness during the day." However, the quality of sleep showed no consistent difference between sex. The total AIS score was also higher in girls than in boys in some grade groups.
Interaction effects of sex on the associations of SLOSSweek/SJL with other parameters
Multiple regression analyses were conducted to investigate the interaction effect of sex on the association of SLOSSweek/SJL with self-reported academic performance, negative mood, or insomnia. The results of the multiple regression analysis are shown in Table 2, and those of the simple slope analysis are shown in Table 3. In addition, simple slopes for each sex are shown in the graphs (Figure 3). These were graphed by placing the estimate for girls (z = 0) and SLOSSweek or SJL = 0 at 0. All variables of self-reported academic performance, negative mood, and AIS contents were significantly associated with SLOSSweek and SJL (Table 2). In terms of academic performance, no interaction effect of SLOSSweek and sex was identified in any school type (elementary, junior, or high school) (Table 2). On the other hand, a significant interaction effect was found for "fatigue" and "depressed." For "fatigue," there was a significant positive slope for both sexes, and the slope was larger for girls than for boys (Table 3 and Figure 3). In addition, only girls showed a significant positive slope for "depressed," indicating that the level of SLOSSweek in girls has a greater impact on their mood than in boys (Figure 3). A significant negative interaction effect was found for the following sleep categories: "overall quality of sleep," "sense of well-being during the day," "functioning (physical and mental) during the day," and "AIS score." "AIS score" showed a significant positive slope for both sexes, with a larger slope for girls. For the other items, a significant positive slope was found only for the girls. Therefore, SLOSSweek in girls has a more negative impact on overall quality of sleep, sense of well-being during the day, functioning (physical and mental) during the day, and AIS score than in boys. For academic performance, as with SLOSSweek, no interaction effect of SJL and sex was found in any school type (Table 2). In terms of mood, four items showed significant negative interaction effects: "fatigue," "irritable," "depressed," and "poor appetite" (Table 3). In addition, the items "depressed" and "poor appetite" were found to have a significant positive slope only for girls (Figure 3). In the case of sleep, a significant negative interaction effect was found for "sense of well-being during the day," "functioning (physical and mental) during the day," "sleepiness during the day," and "AIS score." For "sense of well-being during the day," only girls showed a significant positive slope. For the other items, there was a significant positive slope for both sexes, and the slope was larger for girls. Thus, SJL has a stronger negative impact on "sense of well-being during the day," "functioning (physical and mental) during the day," "sleepiness during the day," and "AIS score" in girls than in boys.

[Figure 2. Sex differences in academic performance, mood, and sleep-related contents of the AIS in each grade. *p < 0.05; **p < 0.01; ***p < 0.001, between sexes by Mann-Whitney U-test. Data are expressed as mean ± SEM. A higher score means higher academic performance or better mood. Each question of the AIS was scored in the range of 0-3, and the total AIS score was in the range of 0-24. A higher value on each item of the AIS indicates more insomnia-related problems. "Grade" runs from 4 (fourth grade of elementary school) to 12 (third grade of high school).]
Discussion
In this study, we found that girls experienced greater sleep loss and SJL. Girls slept longer and woke up later on free days than boys in almost all grades (9-18 years old), suggesting that girls in Japan did not have enough sleep and have a larger sleep debt than boys. These sex differences in sleep characteristics were consistent with the recent national survey database "Survey on Time Use and Leisure Activities (2016)" by the Statistics Bureau of Japan [31]. However, no one has focused on this sex difference in Japanese children and adolescents.
Several studies in other countries have demonstrated longer sleep duration in school-aged girls than boys, especially on free days [32,33], which is consistent with our results. However, one study showed no sex difference in adolescents' sleep behavior, which might be due to the small sample size [34]. In addition, our SLOSSweek and SJL calculation based on the MCTQ questionnaire identified greater sleep loss and SJL in girls than in boys. The evening chronotype is associated with a larger SJL [7], and the larger SJL in girls identified in the current study might be due to greater sleep loss on weekdays. This is because the sex difference was only detected at wake-up time, but not at bedtime on free days. It seems that sleep loss-induced longer sleep occurred on free days in girls, which induced later MSF and larger SJL. The mechanism of this sex difference in sleep characteristics might be explained by several factors, including physiological and social aspects. Generally, women have been found to have higher sleep needs than men, because women have a higher prevalence of sleep disorders, higher levels of daytime sleepiness, and longer desired sleep duration [23,24]. In addition, since we did not assess the use of an alarm clock in this survey, it remains unclear whether the observed earlier awakenings in females are natural occurrences or due to morning obligations.
However, a recent study in young mice (8-12 weeks old) showed the opposite results, in which female mice showed shorter sleep duration and smaller response to sleep deprivation than male mice [35]. Thus, further investigation is needed to determine sex differences in sleep physiology in the future.
Girls in high school woke up significantly earlier than boys on weekdays, which may be because girls require more time for dressing and grooming. A survey on time use and leisure activities (2016) has shown that the average time required to get ready for school becomes longer for girls than boys as they advance through school types [31]. Taken together, sleep loss occurs among adolescents worldwide, and seems to be of particular health concern for females.
In the current study, we found that sleep loss and SJL were more strongly correlated with negative mood and insomnia-related problems in girls than in boys, but not with self-perceived academic performance. Girls had higher academic performance than boys, as reported previously [36,37]. However, previous studies have not yielded consistent results on the effect of sex on sleep and academic performance [38][39][40], suggesting that the effect might be small or that the girls are studying even if they lose sleep. As shown in Figure 2, girls showed higher irritable/depressed mood, fatigue, and daytime sleepiness than boys, which was consistent with previous studies [41,42]. Regarding the moderating effect of sex on mood and sleep in the current study, five previous studies reached a conclusion similar to the current results. Agathão et al. demonstrated that short sleep duration seemed to be problematic for common mental disorders in girls [42]. Mathew et al. reported that depressive symptoms on the Center for Epidemiologic Studies Depression Scale (CES-D) were more strongly correlated with shorter sleep time and larger SJL in female adolescents than in boys, although there was no mean difference in SJL between the sexes [43]. Conklin reported that chronic shorter sleep (<8 hours) on weekdays was more associated with depressed mood in girls than in boys (age 13-18 years) [44]. An intervention study conducted among 14- to 18-year-olds in Australia found that girls showed greater sensitivity to the effects of sleep deprivation on various moods, including anger, anxiety, and fatigue [45]. In addition, a study conducted in South Korea in 2017 reported that the effect of short sleep duration on suicidal ideation among girls was 2.5 times higher than that of boys, from the first year of junior high school to the third year of high school [46]. Previous studies in adults have shown that the cortisol awakening response (CAR), a biomarker of stress response, is greater with shorter sleep durations [47][48][49]. In addition, CAR has been reported to increase in women, but to decrease in men, with greater depressive symptoms, which may account for women's vulnerability to sleep deprivation and mood [50]. However, studies on sleep deprivation and CAR in adolescents are scarce, and consistent results have not been obtained, suggesting that more detailed studies in adolescents are needed [51]. SJL is a measure of the misalignment between endogenous and exogenous rhythms [7]. Previous studies have shown that disruptions in circadian rhythms have adverse effects on endocrine functions, such as decreased 24-hour melatonin levels and altered cortisol secretion patterns, which have been reported to contribute to the development of depressive symptoms [52,53]. SJL and CAR have also been shown to be correlated [54]. Thus, there are combined effects of sleep loss and SJL on mood, and future studies should address both issues carefully.

[Table 2 note: Multivariable regression analyses adjusted for grade (age). B: partial regression coefficient. Academic performance is omitted because the interaction effect could not be confirmed.]
Limitations
The limitations of our study include misclassification due to self-reporting by the adolescents and unmeasured and uncontrolled confounding factors (e.g. residence area, family income, or family composition). We did not ask whether participants use an alarm clock on free days in this MCTQ questionnaire, which might be a limitation of the current analysis. Additionally, our mood questionnaire was not specifically designed for children and adolescents. A more objective methodology, including actigraphy recordings and observation methods such as sleep diaries, is desired. The cross-sectional study design limits the determination of the causal links among all variables.

[Figure 3. Interaction plot: effects of sex on the associations of SLOSSweek/SJL with academic performance, mood, and sleep. Girls with 0 hours of SLOSSweek/SJL were set as 0 in each graph. The p value in each graph refers to the interaction term between SLOSSweek/SJL and sex (SLOSSweek × sex or SJL × sex) in the multiple regression analysis in Table 2.]
Conclusion
In this study, we observed higher rates of sleep loss and SJL in girls than in boys, and the moderating effect of sex on the association of sleep loss and SJL on negative mood among Japanese elementary, middle, and high school students. The results of this study suggest the importance of sex-dependent sleep care for children and adolescents.
Funding
This work was partially supported by a Grant-in-Aid for Scientific Research (A, 19H01089 for SS; C, 21K11606 for YT) from the Japan Society for the Promotion of Science, the JST-Mirai Program (JMPJM120D5 for SS), and the JST-FOREST Program (JPMJFR205G for YT).
|
2022-09-23T15:23:14.244Z
|
2022-09-21T00:00:00.000
|
{
"year": 2022,
"sha1": "07b957326d2cffc04f66038de28b2b4d0cb59e43",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/sleepadvances/advance-article-pdf/doi/10.1093/sleepadvances/zpac035/45957482/zpac035.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "202acd6aa653240ae38b9f68f62afa5e7e29427d",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
231861756
|
pes2o/s2orc
|
v3-fos-license
|
Hitting Sets and Reconstruction for Dense Orbits in $\text{VP}_e$ and $\Sigma\Pi\Sigma$ Circuits
In this paper we study polynomials in $\text{VP}_e$ (polynomial-sized formulas) and in $\Sigma\Pi\Sigma$ (polynomial-size depth-$3$ circuits) whose orbits, under the action of the affine group $\text{GL}_n^{\text{aff}}(\mathbb{F})$, are $\mathit{dense}$ in their ambient class. We construct hitting sets and interpolating sets for these orbits as well as give reconstruction algorithms. As $\text{VP}=\text{VNC}^2$, our results for $\text{VP}_e$ translate immediately to $\text{VP}$ with a quasipolynomial blow up in parameters. If any of our hitting or interpolating sets could be made $\mathit{robust}$ then this would immediately yield a hitting set for the superclass in which the relevant class is dense, and as a consequence also a lower bound for the superclass. Unfortunately, we also prove that the kind of constructions that we have found (which are defined in terms of $k$-independent polynomial maps) do not necessarily yield robust hitting sets.
1. For $C_n(\ell_1(x), \ldots, \ell_n(x)) \triangleq \mathrm{Trace}\left( \begin{pmatrix} \ell_1(x) & 1 \\ 1 & 0 \end{pmatrix} \cdot \ldots \cdot \begin{pmatrix} \ell_n(x) & 1 \\ 1 & 0 \end{pmatrix} \right)$, where the ℓ i s are linearly independent linear functions, we construct a polynomial-sized interpolating set, and give a polynomial-time reconstruction algorithm. By a result of Bringmann, Ikenmeyer and Zuiddam, the set of all such polynomials is dense in VP e [BIZ18], thus our construction gives the first polynomial-size interpolating set for a dense subclass of VP e .
2. For polynomials of the form $\mathrm{ANF}_{\Delta}(\ell_1(x), \ldots, \ell_{4^{\Delta}}(x))$, where $\mathrm{ANF}_{\Delta}(x)$ is the canonical read-once formula in alternating normal form, of depth 2∆, and the ℓ i s are linearly independent linear functions, we provide a quasipolynomial-size interpolating set. We also observe that the reconstruction algorithm of [GKQ14] works for all polynomials in this class. This class is also dense in VP e .
3. Similarly, we give a quasipolynomial-sized hitting set for read-once formulas (not necessarily in alternating normal form) composed with a set of linearly independent linear functions. This gives another dense class in VP e .
4. We give a quasipolynomial-sized hitting set for polynomials of the form f (ℓ 1 (x), . . . , ℓ m (x)), where f is an m-variate s-sparse polynomial and the ℓ i s are linearly independent linear functions in n ≥ m variables. This class is dense in ΣΠΣ.
5. For polynomials of the form
$\sum_{i=1}^{s} \prod_{j=1}^{d} \ell_{i,j}(x)$,
where the ℓ i,j s are linearly independent linear functions, we construct a polynomial-sized interpolating set. We also observe that the reconstruction algorithm of [KNS19] works for every polynomial in the class. This class is dense in ΣΠΣ.
Geometric Complexity Theory (GCT for short), which was initiated by Mulmuley and Sohoni [MS01,MS08], approaches the lower bound question from a different angle. GCT also looks for an algebraic lower bound proof, but rather than exhibiting an algebraic argument, it aims to prove the existence of a separating polynomial. Specifically, GCT attempts to prove Valiant's hypothesis, that VP≠VNP, over C, via representation theory. Valiant's hypothesis is, more or less, equivalent to showing that the permanent of a symbolic n × n matrix is not a projection of the symbolic m×m determinant for any m = m(n) polynomial in n. 3 Recall that a projection of a polynomial is a restriction of the polynomial to an affine subspace of its inputs. Observe that a restriction of an n-variate polynomial f (x) to a subspace of its inputs, is equivalent to considering the polynomial f (Ax + b), where A is an n × n matrix and b ∈ C n . As any matrix is a limit point of a sequence of invertible matrices, an algebraic proof that the permanent is not a projection of the m × m determinant, over C, is equivalent to an algebraic proof showing that the permanent is not in the closure of the set of polynomials {Det(AX + b) A ∈ GL m (C), b ∈ C m }, where GL m (C) is the group of invertible m × m matrices (this is true for every field of characteristic ≠ 2). The set {Det(AX + b) A ∈ GL m (C), b ∈ C m } is called the orbit of the determinant under the action of the affine group (we denote the affine group over C m with GL aff m (C)). GCT considers the linear space of polynomials that vanish on every coefficient vector in the orbit of the determinant, and similarly the linear space of polynomials that vanish on every coefficient vector in the orbit of the permanent. There is a natural action of GL aff m (C) on those linear spaces, thus defining two representations of GL aff m (C). GCT wishes to find a separating polynomial by showing that some irreducible representation of GL aff m (C) has strictly larger multiplicity when considering the representation corresponding to the determinant. This approach bypasses the barrier given in [FSV18,GKSS17] as it does not exhibit any efficiently computable separating polynomial but rather just proves the existence of one. However, the representation theory questions arising in this program are quite difficult, even when considering the analog questions for restricted classes. For an introduction to GCT see the lecture notes of Bläser and Ikenmeyer [BI19].
Another possible approach for proving lower bounds against a class of polynomials C, is via the construction of a hitting set for C. Recall that a hitting set H for a class C is a set of points such that for any nonzero polynomial f , that can be computed by a circuit from C, there is v ∈ H such that f (v) ≠ 0. In [HS80] Heintz and Schnorr observed that if we have such a hitting set H then any nonzero polynomial g that vanishes on H cannot be computed in C. It is also not hard to see that this way of obtaining lower bounds also bypasses the natural proof barrier of [FSV18,GKSS17]. The problem is that in most cases we obtained a hitting set for a class only after proving a lower bound for it.
In [FS18] Forbes and Shpilka defined the notion of a robust hitting set for a circuit class C. Over fields of characteristic zero, a hitting set H for a class C is c-robust if it also satisfies that for every f ∈ C there is v ∈ H such that f (v) ≥ c ⋅ f , where ⋅ is some fixed norm on C[x] (see Definition 1.9 for a definition over arbitrary fields). It is not hard to see that if H is a robust hitting set for a class C then it also hits the closure of C.
In this work we focus on depth-3 algebraic circuits, known as ΣΠΣ, and on VP e , the class of algebraic formulas, two classes for which we lack strong lower bounds, and in particular we do not have hitting sets for them. For ΣΠΣ circuits the best lower bound is the near cubic lower bound of Kayal, Saha and Tavenas [KST16], and for VP e the best lower bound is the quadratic lower bound of Kalarkoti [Kal85]. Recall that by the result of Valiant et al. [VSBR83], a super-quasipolynomial lower bound against VP e implies a super-polynomial lower bound against VP. Similarly, a hitting set for VP e implies a hitting set for VP. We also note that by a result of Gupta et al. [GKKS16], a strong enough lower bound or a hitting set for ΣΠΣ imply both a lower bound for general circuits and a hitting set for them. This result also implies that a polynomial-time reconstruction algorithm for ΣΠΣ circuits would give rise to a sub-exponential time reconstruction algorithm for general circuits. Recall that a reconstruction algorithm for a class C is an algorithm that, given black-box access to a circuit from C, outputs a circuit in C that computes the same polynomial.
Instead of viewing robust hitting sets as a way to obtain hitting sets for the closure of circuit classes, we suggest to find subclasses of interesting classes,C ⊂ C, such that C is contained in the closure ofC, and aim to construct a robust hitting set for the subclassC. This offers a new approach for constructing hitting sets for known classes and for obtaining lower bounds. Specifically, we consider subclasses of ΣΠΣ and VP e that are dense in their superclasses. Each of these subclasses is the orbit of some simple polynomial under the group of invertible affine transformations.
For VP e , we first consider a subclass that was defined by Bringmann, Ikenmeyer and Zuiddam [BIZ18]-the orbit of the so called continuant polynomial (see Definition 1.16). We give a polynomial-sized interpolating set 4 for this subclass as well as a polynomial-time deterministic reconstruction algorithm that uses as oracle a root-finding algorithm. 5 In particular, this implies a polynomial-time randomized reconstruction algorithm, and, in some cases, a polynomial-time deterministic algorithm.
In addition, we exhibit two other subclasses that are dense in VP e . The first class is defined as the orbit of read-once formulas (ROF for short, see Definition 5.1) and the second as the orbit of read-once formulas in alternating normal form (ROANF for short, see Definition 5.3). We obtain hitting sets for both classes and an interpolating set for the second. We also observe that the reconstruction algorithm of [GKQ14] works for the polynomials in the orbit of ROANFs. Although the results that we obtain for the subclass defined by the continuant polynomial are stronger, we think that every such dense subclass can shed more light on VP e and may eventually be used in order to obtain new lower bounds.
For ΣΠΣ we consider two subclasses. One is based on orbits of sparse polynomials (polynomials having polynomially many monomials) and the other on orbits of diagonal tensors (see Definition 1.29). We give a hitting set for the first, an interpolation set for the second, and we also observe that a slight modification of the randomized reconstruction algorithm of [KNS19] applies for the second class.
In particular, our results give the first dense subclasses inside VP e and ΣΠΣ for which a polynomial-size interpolating set is known as well as a polynomial-time reconstruction algorithm. By [VSBR83] our result immediately translate to VP, giving a dense subclass of for which a quasipolynomial-sized interpolating set is known as well as a quasipolynomial-time reconstruction algorithm.
If we could transform the interpolating sets that we have found to robust hitting sets for the orbits, then this will immediately give hitting sets for the closure of the orbits, i.e. for ΣΠΣ and VP e , which, by [HS80] gives a lower bound for the class. Thus, our work raises an intriguing problem: Problem 1.1. Given an interpolating set for a class C construct a robust hitting set for C.
We stress that by our results, solving this problem would lead to hitting sets, and lower bounds, for VP e and VP.
Another advantage for having small interpolating sets for dense subclasses is the following: One approach for searching for separating polynomials for a class, is by considering the map from circuits in the class to the coefficient vectors of the polynomials that they compute. That is, once we fix a computation graph, an assignment to the constants appearing in the circuit determines the output polynomial. Each coefficient is a polynomial in those constants, and as there are "few" constants (polynomially many for polynomially sized circuits), and there are exponentially many coefficients, there should be many polynomials vanishing on the closure of the image of this map. If we could get a good understanding of this map then perhaps we could use it to construct a polynomial that vanishes on all such coefficient vectors. This polynomial will vanish on all coefficient vectors of the superclass in which the subclass is dense. A different approach is to find a coefficient vector that is not in the closure of the image of this map (this is the approach of Raz in [Raz10]). Now, assume that H is an interpolating set for a dense subclassC ⊂ C. We know that the map f → f H is one-to-one onC. Thus, the list of values f H can be viewed as an efficient encoding that is given in terms of values of the computed polynomial. This provides a different encoding of a circuit -instead of the constants in it, use the evaluations on H. Thus, by studying the closure of this map (i.e. the closure of the set of points on F H that can be obtained as evaluation vectors of polynomials in the subclass) we may be able to find a separating polynomial, or, as in Raz's approach, find an evaluation vector that is not obtained by any polynomial in the superclass. It is clear that one can also try this approach even if H is not an interpolating set, however, as interpolating sets "preserve information" of a dense set, we believe that such sets are better suited for this approach.
To conclude, focusing on dense subclasses and studying their properties could lead to better understanding of their superclasses and perhaps to breakthrough results in algebraic complexity.
To formally state our results we need some definitions that we give next. 2. An algebraic formula (also called arithmetic formula) over a field F, is a rooted tree whose leaves are labeled with either variable or scalars from F, and whose root and internal nodes (called gates) are labeled with either "+" (addition) or "×" (multiplication). An algebraic formula computes a polynomial in the natural way. Each leaf computes the polynomial that labels it, and each gate computes either the sum or product of its children, depending on its label. The output of the formula is the polynomial computed at its root. The size of a formula is the number of wires in it. The depth of a formula is the length of the longest simple leaf-root path in it. The formula size of a polynomial f is defined as the smallest size of a formula that outputs f .
A sequence m(n) of natural numbers is called polynomially bounded if there exists a univariate polynomial q such that m(n) ≤ q(n) for all n.
The complexity class VP e is defined as the set of all families of polynomials (f n ) n , with f n ∈ F[x 1 , . . . , x n ], whose formula size is polynomially bounded.
Given a family of circuits C, we will sometime denote it as C(F) to stress that we allow coefficients to come from the field F. Observe that the definitions of the classes above do not depend on the field and so we can define them over any field of our choice.
Approximate complexity
The following definition gives sense to the notion of approximation over arbitrary fields. In what follows we let ε be a new formal variable. For a field F we denote with F[ε] the ring of polynomial expressions in ε over F, and with F(ε) the fraction field of F[ε], i.e. the field of rational expressions in ε.
Definition 1.5. Let C(F) be a circuit class over a field F. The closure of C, denoted $\overline{C}(F)$, is defined as follows: A family of functions (f n ) n , where f n ∈ F[x 1 , . . . , x n ], is in $\overline{C}(F)$ if there is a polynomially bounded function m ∶ N → N, and a family of functions (g m(n) ) n ∈ C(F(ε)), with g m(n) ∈ F[ε][x 1 , . . . , x m(n) ], such that for all n ∈ N,
g m(n) (x 1 , . . . , x m(n) ) = f n (x 1 , . . . , x n ) + ε ⋅ g n,0 (x 1 , . . . , x m(n) ) ,   (1)
for some polynomial g n,0 ∈ F[ε][x 1 , . . . , x m(n) ]. (Intuitively, one should think of ε as an infinitesimal quantity.) Whenever an equality as in (1) holds we write g m(n) = f n + O(ε). In that case we think of g m(n) as an "approximation" of f n , and we say that the family (g m(n) ) n approximates the family (f n ) n .
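A tiny sympy check of this definition on a classic example (an illustration, not taken from the paper): the polynomial x²y is approximated, in the sense above, by a scaled difference of two cubes of linear forms with coefficients involving ε.

```python
import sympy as sp

x, y, eps = sp.symbols("x y epsilon")

# g is a difference of two cubes of linear forms, scaled by 1/(3*eps);
# expanding shows g = x**2*y + O(eps), i.e. g approximates f = x**2*y.
g = sp.expand(((x + eps * y)**3 - x**3) / (3 * eps))
f = x**2 * y

print(g)                         # x**2*y + epsilon*x*y**2 + epsilon**2*y**3/3
assert g.subs(eps, 0) == f       # setting eps = 0 recovers the approximated polynomial
```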
Alder [Ald84] has shown that over C it holds that (f n ) ∈ $\overline{C}$(C), in the sense of Definition 1.5, if and only if it is in the closure of C(C) in the usual sense. That is, if for every n there exists a sequence of polynomials g n,k ∈ C(C) such that lim k→∞ g n,k = f n , where convergence is taken coefficient wise. This result holds over R as well, see [LL89,Bür04].
Finally, we note that every matrix is approximable (in the sense of Definition 1.5) by a non-singular matrix (which is equivalent to being a limit of a sequence of non-singular matrices, in characteristic zero).
Observation 1.6. For every A ∈ F n×n there exists a non-singular matrix B ∈ F(ε) n×n such that A = B+O(ε).
Hitting and interpolating sets
Definition 1.7. A set of points H ⊆ F n is called a hitting set for a circuit class C (we also say that H hits C) if for every circuit Φ ∈ C, computing a non-zero polynomial, there exists some a ∈ H such that Φ(a) ≠ 0.
We next give the definition of a robust hitting set, a notion first defined in [FS18]. Here we extend the definition for arbitrary characteristic. We start by giving the definition of [FS18], over characteristic zero (and focus on C) and then the more general definition.
Definition 1.8 (Following Definition 5.1 of [FS18]). Let ‖⋅‖ be some norm on C[x]. A hitting set H for a circuit class C ⊆ C[x] is called robust if there exists some constant c > 0 such that, for every 0 ≠ f ∈ C, there exists some a ∈ H such that |f (a)| ≥ c ⋅ ‖f‖.
For arbitrary characteristic we use the same approach as in Definition 1.5.
Definition 1.9. Let F be a field of arbitrary characteristic. A hitting set H ⊂ F n for a circuit class C(F) is called robust if for every circuit Φ ∈ C(F(ε)) computing a polynomial f that is not O(ε), there exists some a ∈ H such that f (a) is not O(ε).
It is not hard to prove using the result of [Ald84] that for F = C, Definitions 1.8 and 1.9 are equivalent.
Observation 1.10. If H is a finite robust hitting set for C(F), then H hits $\overline{C}(F)$ as well.
We next define the notion of an interpolating set.
Definition 1.11. Let C be a class of n-variate polynomials. A set H ⊆ F n is called an interpolating set for C if, for every f ∈ C, the evaluations of f on H uniquely determine f . (We abuse notation and write f ∈ C when f is the output of some circuit from C.)
A common method for designing hitting and interpolating sets is via hitting set generators.
Definition 1.13. A polynomial mapping G ∶ F k → F n is called a hitting set generator (or simply a generator) for a circuit class C(F) if for any non-zero n-variate polynomial f ∈ C, the k-variate polynomial f ○ G is non-zero.
Similarly, we call G ∶ F k → F n an interpolating set generator for a circuit class C(F) if for any two different n-variate polynomials f 1 , f 2 ∈ C, the k-variate polynomial (f 1 − f 2 ) ○ G is non-zero.
Generators immediately give rise to hitting sets.
Observation 1.14. Let G ∶ F k → F n be a generator for C(F) such that the individual degree of each coordinate of G is at most r. Let W ⊂ F be any set of size |W| = d ⋅ r + 1. Let H = G(W k ). Then H hits every n-variate polynomial f ∈ C of degree at most d.
Proof. As G is a generator, the k-variate polynomial f ○ G is nonzero. As its individual degrees are bounded by d ⋅ r, it follows that at least one of the values in (f ○ G)(W k ) = f (H) is not zero.
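A small sketch of this observation over the rationals is given below (using sympy). The generator, the polynomial f, and the degree bounds are toy choices made only for illustration; the point is the mechanics of forming H = G(W^k) with |W| = d·r + 1 and evaluating f on it.

```python
from itertools import product
import sympy as sp

y, z = sp.symbols("y z")
n, r, d = 4, 3, 2        # n-variate target polynomials, individual degree of G, degree bound d

# Toy polynomial map F^2 -> F^4: coordinate i is z times the Lagrange selector at i,
# so each coordinate has individual degree at most r = n - 1 = 3.
pts = list(range(n))
G = [z * sp.prod([(y - a) / sp.Integer(i - a) for a in pts if a != i]) for i in pts]

W = list(range(d * r + 1))                                # |W| = d*r + 1
H = [[g.subs({y: wy, z: wz}) for g in G] for wy, wz in product(W, repeat=2)]

x = sp.symbols("x0:4")
f = x[1] * x[3] - 2 * x[1]                                # a toy nonzero polynomial of degree d = 2
assert any(f.subs(dict(zip(x, point))) != 0 for point in H)
print(len(H), "grid points; f is hit on the grid")
```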
k-independent maps
Our constructions rely on polynomial mappings G k , parameterized by some integer k ≤ n, with the property that the image of f ○ G k contains all projections of f to k variables. We call such a map a k-independent map.
Definition 1.15. We call a polynomial mapping G(y 1 , . . . , y t , z 1 ) ∶ F t+1 → F n a 1-independent polynomial map if for every index i ∈ [n] there exists an assignment a i ∈ F t to y 1 , . . . , y t such that the ith coordinate of G(a i , z 1 ) is z 1 , and the rest of the coordinates are 0. For k > 1, a polynomial mapping G(y 1 , . . . , y tk , z 1 , . . . , z k ) ∶ F k(t+1) → F n is called a k-independent polynomial map (or a k-independent map) if G is a sum of k variabledisjoint 1-independent polynomial maps. We denote k-independent polynomial maps as G(y, z) when k, t are implicit. The y variables are called control variables.
A k-independent polynomial map G is called uniform if all n coordinates of G are homogeneous polynomials of the same degree.
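The following sketch shows one way (an ad hoc construction, assuming |F| ≥ n and ignoring the uniformity requirement) to realize a k-independent map as a sum of k variable-disjoint 1-independent maps, each using a single control variable and Lagrange-style selector polynomials.

```python
import sympy as sp

def one_independent_map(n, y, z):
    # Coordinate i equals z when the control variable y is set to i,
    # and 0 when y is set to any other point of {0, ..., n-1}.
    pts = list(range(n))
    return [z * sp.prod([(y - a) / sp.Integer(i - a) for a in pts if a != i]) for i in pts]

def k_independent_map(n, k):
    # Sum of k variable-disjoint 1-independent maps, as in Definition 1.15.
    ys, zs = sp.symbols(f"y0:{k}"), sp.symbols(f"z0:{k}")
    coords = [sp.Integer(0)] * n
    for j in range(k):
        block = one_independent_map(n, ys[j], zs[j])
        coords = [c + b for c, b in zip(coords, block)]
    return coords, ys, zs

G, ys, zs = k_independent_map(n=4, k=2)
# Setting y0 = 1 and y1 = 3 routes z0 to coordinate 1 and z1 to coordinate 3; the rest vanish.
print([sp.simplify(g.subs({ys[0]: 1, ys[1]: 3})) for g in G])
```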
The linear and affine groups and their actions
Given a matrix A ∈ F n×n and a tuple of variables x = (x 1 , . . . , x n ), we denote by Ax the tuple of linear forms whose ith coordinate is $\sum_{j=1}^{n} A_{i,j} x_j$. Let n ≥ m ∈ N. For an m-variate polynomial f (x 1 , . . . , x m ) ∈ F[x 1 , . . . , x m ], a matrix A = (A i,j ) n i,j=1 ∈ F n×n and a vector b = (b 1 , . . . , b n ) ∈ F n , we define the n-variate polynomial f (Ax + b) to be
$f(Ax+b) \triangleq f\!\left(\textstyle\sum_{j=1}^{n} A_{1,j}x_j + b_1, \; \ldots, \; \sum_{j=1}^{n} A_{m,j}x_j + b_m\right).$   (2)
Note that we ignored the last n − m coordinates of Ax + b.
We denote with GL n (F) the group of invertible n × n matrices over F, and with GL aff n (F) the group of invertible affine transformation, i.e. all the maps x → Ax + b, where A ∈ GL n (F) and b ∈ F n .
For an m-variate polynomial f over F, and n ≥ m, we denote with $f^{\mathrm{GL}^{\mathrm{aff}}_n(F)}$ the orbit of f under the natural action of GL aff n (F):
$f^{\mathrm{GL}^{\mathrm{aff}}_n(F)} \triangleq \{ f(Ax+b) \mid A \in \mathrm{GL}_n(F),\; b \in F^n \}.$
We similarly define $f^{\mathrm{GL}_n(F)}$. More generally, for a class of m-variate polynomials C(F), we denote the orbit of C under GL aff n (F) by
$C^{\mathrm{GL}^{\mathrm{aff}}_n(F)} \triangleq \bigcup_{f \in C} f^{\mathrm{GL}^{\mathrm{aff}}_n(F)}.$
We similarly define C GLn(F) . When we want to speak about orbits of families of polynomials from C(F), with arbitrary number of variables, we use the notation C GL(F) or C GL aff (F) .
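As a concrete illustration of the action in equation (2), the following sympy sketch computes one element of an orbit for a toy m-variate polynomial; the particular matrix A and vector b are arbitrary choices, not taken from the paper.

```python
import sympy as sp

x = sp.Matrix(sp.symbols("x0:3"))                 # n = 3 variables
A = sp.Matrix([[1, 2, 0], [0, 1, 1], [3, 0, 1]])  # must be invertible to define an orbit element
b = sp.Matrix([1, 0, 2])
assert A.det() != 0

f = lambda u, v: u * v + v**2                     # a toy m-variate polynomial, m = 2
lin = A * x + b                                   # the affine substitution Ax + b
g = sp.expand(f(lin[0], lin[1]))                  # f(Ax + b): only the first m coordinates are used
print(g)
```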
Our results
We first give our results for the class VP e and then for the class of depth-3 circuits, for which it may be easier to obtain a robust hitting set, or prove super-polynomial lower bounds.
The continuant polynomial
Bringmann, Ikenmeyer and Zuiddam [BIZ18] defined the following polynomial (in Remark 3.14 of their paper), which they called the continuant polynomial:
Definition 1.16. The continuant polynomial on n variables, C n (x 1 , . . . , x n ), is defined as the trace of the following matrix product:
$C_n(x_1, \ldots, x_n) \triangleq \mathrm{Trace}\left( \begin{pmatrix} x_1 & 1 \\ 1 & 0 \end{pmatrix} \cdot \ldots \cdot \begin{pmatrix} x_n & 1 \\ 1 & 0 \end{pmatrix} \right).$   (3)
We denote with C GL aff (F) the class of families of polynomials (f n ) n such that f n ∈ F[x 1 , . . . , x n ] and, for some m ≤ n, $f_n \in C_m^{\mathrm{GL}^{\mathrm{aff}}_n(F)}$.
A result of Allender and Wang implies that the polynomial x 1 ⋅ y 1 + ⋯ + x 8 ⋅ y 8 is not in C GL aff (F) [AW16]. Thus, as a computational class it is very weak. However, Theorem 3.12 of [BIZ18] states that for every field F of characteristic different than 2, the closure of C GL aff (F) equals the closure of VP e (F). We give a polynomial-size interpolating set for the class C GL aff (F) as well as a polynomial-time reconstruction algorithm for it. We first state a simple result that gives a hitting set for the class.
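The definition is easy to evaluate symbolically; the sketch below (sympy, a plain illustration) builds C_n as the trace of the 2×2 matrix product in equation (3) and prints the first few continuants.

```python
import sympy as sp

def continuant(vars_):
    # Trace of the product of the 2x2 matrices [[x_i, 1], [1, 0]], as in equation (3).
    M = sp.eye(2)
    for v in vars_:
        M = M * sp.Matrix([[v, 1], [1, 0]])
    return sp.expand(M.trace())

x = sp.symbols("x1:6")
for n in range(1, 5):
    print(f"C_{n} =", continuant(x[:n]))
```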
Theorem 1.17. Let f (x 1 , . . . , x n ) ∈ C GL aff n (F) m , for m ≤ n, and arbitrary F. Then, for any uniform 1-independent polynomial map G over F, f ○ G ≠ 0.
As immediate corollary we get a hitting set for the class.
Corollary 1.18. For every field F, there is an explicit hitting set H ⊂ F n , of size |H| = O(n 6 ), that hits $C_m^{\mathrm{GL}^{\mathrm{aff}}_n(F)}$ for every m ≤ n. If |F| < n 2 then H is defined over a polynomial-sized extension field K of F such that |K| ≥ n 2 .
Theorem 1.19. For every field F, there is an explicit interpolating set H ⊂ F n , of size |H| = O(n 10 ), for $C_m^{\mathrm{GL}^{\mathrm{aff}}_n(F)}$, m ≤ n. If |F| < n 2 then H is defined over a polynomial-sized extension field K of F such that |K| ≥ n 2 .
Theorem 1.20. There is a deterministic algorithm that given F, an integer n, oracle access to a root-finding algorithm over F, and black-box access to a polynomial f (x 1 , . . . , x n ) ∈ C GL aff n (F) m (for any m ≤ n), runs in polynomial-time and outputs linear functions (ℓ 1 (x 1 , . . . , x n ), . . . , ℓ m (x 1 , . . . , x n )) such that f = C m (ℓ 1 , . . . , ℓ m ). If |F| < n 3 then the algorithm will make queries from a polynomial-sized extension field K of F, such that |K| ≥ n 3 , and it also requires oracle access to a root-finding algorithm over K.
Orbits of read-once formulas
Roughly, a read-once formula (ROF) is a formula in which every variable labels at most one leaf. However, following [SV15,SV14] we also allow gates of the formula to pass on their output wire a linear function of their polynomial (see Definition 5.1). We denote with ROF GL(F) the class of families of polynomials (f n ) n , such that for every n there exists a ROF Φ, on m ≤ n variables, such that f n (x 1 , . . . , x n ) ∈ Φ GLn(F) .
A ROF is in alternating normal form (ROANF) if it is a full binary tree of depth 2∆ with alternating layers of addition and multiplication gates. In particular, it is a ROF on 4 ∆ many variables (see Definition 5.3).
We denote with ANF ∆ the canonical ROANF of depth 2∆ in which the leaves are labeled with the variables x 1 , . . . , x 4 ∆ according to their order (see Definition 5.4). We denote with ANF GL aff [F] the class of families of polynomials (f n ) n , such that for every n there exists ∆ such that 4 ∆ ≤ n and f n (x 1 , . . . , x n ) ∈ ANF GL aff n (F) ∆ . We first make the following simple observation.
Theorem 1.21. For every field F, it holds that However, when taking closures we get Our main results for ROFs and ROANFs are a construction of a hitting set for the orbit of ROFs, and an interpolating set for the orbit of ROANFs. Both constructions are obtained using independent polynomial maps (Definition 1.15).
Theorem 1.22. Let 0 ≠ f ∈ ROF GL aff n (F) where the underlying ROF depends on $2^t$ variables, for $2^t \le n$. Then, for any (t + 1)-independent polynomial map G, over F, f ○ G ≠ 0.
Corollary 1.23. For every field F, there is a hitting set H ⊂ F n , of size |H| = $n^{O(\log n)}$, that hits every 0 ≠ f ∈ ROF GL aff n (F) . If |F| < n 2 then H is defined over a polynomial-sized extension field K of F such that |K| ≥ n 2 .
Since a hitting set for all polynomials of the form g − h where g, h ∈ C is the same as an interpolating set for C, the following theorem gives an interpolating set for the orbit of ROANFs.
Corollary 1.25. For any field F, the class ANF GL aff n (F) ∆ , for 4 ∆ ≤ n, admits an interpolating set H ⊂ F n , of size |H| = $n^{O(\Delta)}$. If |F| < n 2 then H is defined over a polynomial-sized extension field K of F, such that |K| ≥ n 2 .
Finally, we observe that the randomized algorithm of Gupta, Kayal And Qiao [GKQ14], for reconstructing random algebraic formula (for a natural definition of a random formula), yields a randomized reconstruction algorithm for ANF GL aff (C) . Naturally, the reconstruction is up to the symmetry group of ROANFs.
Theorem 1.26 (A special case of Theorem 1.1 of [GKQ14]). Let T be a finite subset of C. Let n, ∆ ≥ 1 be integers such that s ≜ 4 ∆ ≤ n. Given black-box access to the output f of a circuit Φ ∈ ANF GL aff n (C) , with probability at least 1 − n 2 s O(1) / |T| (on internal randomness), Algorithm 6.9 of [GKQ14] successfully computes a tuple of s linearly independent linear functions L = (ℓ 1 , . . . , ℓ s ) ∈ (C[x]) s such that f = ANF ∆ (ℓ 1 , . . . , ℓ s ), and the ℓ i s are identical to the labels of the leaves of Φ up to TS n (C)-equivalence (see Definition 2.3). Moreover, the running time of the algorithm is poly(n, s, log(|T|)).
Remark 1.27. Theorem 1.1 of [GKQ14] is stated only for characteristic zero fields. However, in Remark 6.10 they explain how to make the algorithm work over any characteristic, for a large enough field. Thus, Theorem 1.26 also holds over large enough fields in arbitrary characteristic. Remark 1.28. As a direct implication of Theorem 1.24, the reconstruction algorithm of Theorem 1.26 can be converted into a zero-error algorithm, with expected quasipolynomial running time: Given black-box access to some f 1 ∈ ANF GL aff (F) , we define f 2 to be the output of the algorithm of Theorem 1.26 on input f 1 , and then verify f 1 = f 2 using Corollary 1.25.
Dense subclasses of ΣΠΣ
We start by defining the canonical diagonal tensor of degree d and rank s, $T_{s,d} \triangleq \sum_{i=1}^{s} \prod_{j=1}^{d} x_{i,j} \in F[x_{1,1}, \ldots, x_{s,d}]$, and the resulting class of polynomials T GL aff (F) , consisting of the families of polynomials lying in orbits of the T s,d under invertible affine transformations.
Definition 1.30. Let ΣΠ GL aff (F) denote the class of families of polynomials that are computed by orbits of depth-2 circuits, of polynomially bounded size, over F. I.e., it is all families (f n ) n , of polynomially bounded degree, such that for some polynomially bounded m(n), there exist Σ m(n) Π deg(f n ) circuits Φ m , in k ≤ n many variables, such that f n ∈ Φ GL aff n (F) m .
As before we first give the basic observation connecting all three classes.
Theorem 1.31. For every field F it holds that and for fields of size |F| ≥ n + 1 ΣΠ GL aff (F) ⊊ ΣΠΣ(F) .
In addition, Our main results for this section are a quasipolynomial-size hitting set for the class ΣΠ GL aff (F) , and a polynomial-size interpolating set for T GL aff (F) .
We next state our result concerning an interpolating set for T GL aff (F) .
Finally we note that the randomized reconstruction algorithm of Kayal and Saha [KS19a], which works for (as it is termed in their paper) "non-degenerate" homogeneous depth-3 circuits, also works for T GL aff (F). This follows from the observation that T GL aff (F) circuits are always non-degenerate.
Theorem 1.35 (special case of Theorem 1 of [KS19a]). Let n, d, s ∈ N, n ≥ (3d)^2 and s ≤ (n/(3d))^{d/3}. Let F be a field of characteristic zero or greater than ds^2. There is a randomized poly(n, d, s) = poly(n, s) time algorithm which takes as input black-box access to a polynomial f that is computable by a T GL aff (F) circuit, and with high probability outputs such a circuit Φ computing f. Furthermore, Φ is unique up to TPS_{s,d}(F)-equivalence (see Definition 2.6).
Remark 1.36. As in remark 1.28, Theorem 1.34 enables us to convert the reconstruction algorithm of Theorem 1.35 to a zero-error algorithm, with expected polynomial running time. Given black-box access to some f 1 ∈ T GL aff (F) , we define f 2 to be the output of the algorithm of Theorem 1.35 on input f 1 , and then verify f 1 ≡ f 2 by applying Theorem 1.34 to f = f 1 − f 2 .
Robust hitting sets?
As we showed in Observation 1.10, if a hitting set H for a circuit class C is robust, then H hits C as well. It is thus natural to ask whether our interpolating sets are already robust. Our next result shows that the property of being a t-independent map, which was sufficient for the constructions in Theorems 1.17, 1.19, 1.22, 1.24, 1.32 and 1.34 (for the appropriate values of t), by itself is not sufficient for obtaining robust hitting sets. We prove this by constructing an independent polynomial map which gives rise to a provably nonrobust hitting set. Our construction is the same as the one given by Forbes et al. [FSTW16] (Construction 6.3 in the full version).
Theorem 1.37. Let F be of characteristic zero. For every t, there exists a uniform t-independent polynomial map G and a nonzero polynomial f such that f ○ G ≡ 0, and f can be computed by a ΣΠΣ formula of size t^{O(√t)}. If F has a positive characteristic then f can be computed by a ΣΠΣ formula of size t^t, or by a general formula of size t^{O(log t)}. Furthermore, for a certain arrangement of the variables in a √n × √n matrix, f can be taken to be the determinant of any (t + 1) × (t + 1) minor.
Polynomial Identity Testing
So far we discussed our work from the perspective of dense subclasses of classes for which no strong lower bounds are known. Here we put our work in the context of the polynomial identity testing problem.
Polynomial Identity Testing (PIT for short) is the problem of designing efficient deterministic algorithms for deciding whether a given arithmetic circuit computes the identically zero polynomial. PIT has many applications, e.g. deciding primality [AKS02], finding a perfect matching in parallel [FGT19,ST17] etc., and strong connection to circuit lower bounds [KI04,DSY09,CKS18,GKSS19]. See [SY10,Sax09,Sax14] for surveys on PIT and [KS19b] for a survey of algebraic hardness-randomness tradeoffs.
PIT is considered both in the white-box model, in which we get access to the graph of computation of the circuit, and in the black-box model in which we only get query access to the polynomial computed by the circuit. Clearly, a deterministic PIT algorithm in the black-box model is equivalent to a hitting set for the circuit class. In this work we only focus on the black-box model.
The continuant polynomial and algebraic branching programs: The continuant polynomial is trivially computed by width-2 Algebraic Branching Programs (ABPs). Recall that an ABP of depth-d and width-w computes polynomials of the form Trace (M 1 (x) ⋅ . . . ⋅ M d (x)), where each M i is a w × w matrix whose entries contain variables or field elements. Ben-Or and Cleve proved that every polynomial in VP e can be computed by a width-3 ABP of polynomial-size [BC92].
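To make the trace form concrete, the following small sympy sketch (our own illustration) multiplies d width-2 matrices and expands the trace; the specific matrix M(x) = [[x, 1], [1, 0]] is an assumption made only for this example and need not match the exact continuant convention used in Section 4.

    import sympy as sp

    def width2_abp_trace(d):
        # A width-2 ABP in the trace form: Trace(M_1(x) * ... * M_d(x)).
        xs = sp.symbols(f"x0:{d}")          # x0, ..., x_{d-1}
        prod = sp.eye(2)
        for xi in xs:
            prod = prod * sp.Matrix([[xi, 1], [1, 0]])
        return sp.expand(prod.trace()), xs

    f, xs = width2_abp_trace(4)
    print(f)   # a multilinear polynomial of degree 4 in x0, ..., x3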
Raz and Shpilka gave the first polynomial-time white-box PIT algorithm for read-once ABPs (ABPs in which every variable can appear in at most one matrix) [RS05]. Forbes, Saptharishi and Shpilka gave the first quasipolynomial-sized hitting set for read-once ABPs (ROABPs) [FSS14]. This result was slightly improved in [GG20] for the case where the width of the ROABP is small. Anderson et al. gave a subexponential hitting set for read-k ABPs [AFS + 18]. We note that none of these models is strong enough to contain the orbit C GL aff (F) . For ABPs that are not constant-read we do not have sub-exponential time PIT algorithms. Thus, the following is an interesting open problem (recall that by the result of Ben-Or and Cleve a PIT algorithm for width-3 ABPs works for VP e as well).
Problem 1.38. Give a sub-exponential time PIT algorithm for ABPs of width-2.
Although we do not have a PIT algorithm for general branching programs, in [KNST18] Kayal et al. gave an average-case reconstruction algorithm for low width ABPs. Kayal, Nair and Saha obtained a significantly better algorithm in [KNS19]. Their algorithm succeeds w.h.p, provided the ABP satisfies four non-degeneracy conditions (these conditions are defined in Section 4.3 of [KNS19]). However, the ABP computing the continuant polynomial does not satisfy the non-degeneracy conditions that are required for their algorithm to work. Thus, Theorem 1.20 does not follow from [KNS19].
To the best of our knowledge, C GL aff (F) is the first natural computational class that is dense in VP e for which a polynomial (or even sub-exponential)-sized interpolating set (or a hitting set) is known.
Read-Once formulas: Hitting sets for read-once formulas were first constructed by Volkovich and Shpilka [SV15], who gave a quasipolynomial-sized hitting set for the model, as well as a deterministic reconstruction algorithm of the same running time (earlier randomized reconstruction algorithms were known [BHH95, BB98]). Minahan and Volkovich obtained a polynomial-sized hitting set for the class, which led to a similar improvement in the running time of the reconstruction algorithm [MV18]. Anderson, van Melkebeek and Volkovich constructed a hitting set of size n^{k^{O(k)} + O(k log n)} for read-k formulas [AvMV15]. All these results work in a slightly stronger model in which we allow to label leaves with univariate polynomials, of polynomial degree, such that every variable appears in at most one polynomial, or with sparse polynomials on disjoint sets of variables.
The read-once models that we consider here, ANF GL aff (F) and ROF GL(F) , can be viewed as read-once formulas composed with a layer of addition gates with the restriction that the bottom layer of additions computes linearly independent linear functions. We note that these models do not fall into any of the previously studied models, as a variable can appear in all the linear functions.
As is the case with C GL aff (F), our hitting sets for ANF GL aff (F) and ROF GL(F) are the first sub-exponential-sized hitting sets for natural dense subclasses of VP e.

Small depth circuits: The class of ΣΠ circuits was considered in many works, see e.g. [BT88, KS01], and polynomial-sized hitting sets were constructed. The class of ΣΠΣ circuits also received a lot of attention but with lesser success. Dvir and Shpilka [DS07] and Karnin and Shpilka [KS08] gave the first quasipolynomial-time white-box and black-box PIT algorithms for Σ^{[k]}Π^{[d]}Σ circuits, respectively. Currently, the best result is by Saxena and Seshadhri who gave a hitting set of size (nd)^{O(k)} for such circuits [SS12]. In [dOSV16] a subexponential-size hitting set for multilinear ΣΠΣ circuits was given. In [ASSS16], Agrawal et al. gave a hitting set whose size depends on r, an upper bound on the algebraic rank of the multiplication gates in the circuit. Thus, quasipolynomial-size hitting sets for subclasses of ΣΠΣ circuits are known when the fan-in of the top gate is poly-logarithmic, or when the algebraic rank of the set of multiplication gates is poly-logarithmic. In contrast, polynomials in T GL aff n (F) and ΣΠ GL aff (F), when viewed as ΣΠΣ circuits, can have polynomially many multiplication gates and their algebraic rank can be n. On the other hand, the corresponding ΣΠΣ circuits are such that the different linear functions that are computed at their bottom layer are linearly independent (when we view linear functions that are a constant multiple of each other as the same function). Thus, our Corollary 1.33 provides a hitting set for a new subclass of ΣΠΣ circuits.
To the best of our knowledge, our results for T GL aff (F) and ΣΠ GL aff (F) give the first sub-exponential size hitting sets for natural subclasses that are dense in ΣΠΣ.
More related work
Approximations in algebraic complexity were first studied by Bini et al. in the context of algorithms for matrix multiplication [BCRL79]. For more on the history of border rank in the context of matrix multiplication see notes of chapter 15 in [BCS13]. More recently, influenced by the GCT program, a lot of research was invested in trying to find polynomials characterizing tensors of small rank. See [Lan17] for a discussion on this approach. More recently, Kumar proved that every polynomial over C can be approximated by a Σ [2] ΠΣ circuit (of exponential degree) [Kum20].
Very little is known about the closure of circuit classes. Forbes observed that the class of ROABPs is closed [For16], i.e., \overline{ROABP} = ROABP. We are not aware of other collapses or separations between general "natural" classes and their closures.
In general, we do not expect the reconstruction problem to be solvable efficiently, as the problem of finding the minimal circuit computing a given polynomial is a notoriously hard problem. A detailed discussion on the hardness of reconstruction can be found in [KNS19].
Proof technique
Our proofs are based on the following simple yet important, and as far as we know novel, observations concerning k-independent polynomial maps. Specifically, they rely on the following two claims: 1. If we have a hitting-set generator H for nonzero polynomials of the form ∂f/∂x_1, for f ∈ C, and if G is a 1-independent map, then H + G hits every nonzero f ∈ C. This is proved in Lemma 3.9.
2. Similarly, we prove that if we have a hitting-set generator H for nonzero polynomials of the form f|_{ℓ=0}(Ax + b), for f ∈ C, a linear function ℓ, and an invertible affine transformation (A, b), and if G is a 1-independent map, then H + G hits every nonzero f ∈ C. This follows from Lemma 3.10.
By applying these claims k + r times we get that composition with a (k + r)-independent map allows us to reduce the problem of hitting a class C to hitting polynomials of the form (∂^k f / ∂x_{i_1} ⋯ ∂x_{i_k})|_{ℓ_1 = ⋯ = ℓ_r = 0}, for f ∈ C. Thus, if we could prove that for a class C, there is such a sequence of derivatives and restrictions that simplifies the polynomials in it to a degree that they can be easily hit by some map H, then we conclude that H + G_{k+r}, for a (k + r)-independent map G_{k+r}, is a hitting set generator for C.
It seems that all that is left to do is prove that for each of the orbits that we consider in Section 1.2 there are such small k and r. However, a potential problem is that a partial derivative of the polynomial f(Ax + b) satisfies, by the chain rule, ∂f(Ax + b)/∂x_i = ∑_j A_{j,i} (∂f/∂x_j)(Ax + b). Thus, it is no longer a derivative composed with an affine transformation but rather a sum of such derivatives, which could lead to polynomials outside of our class. For example, it is not hard to prove that if we compose the ROF y_1 ⋅ y_2 ⋅ y_3 with (x_1, x_1 + x_2, x_1 + x_3) and then take a derivative according to x_1, then the resulting polynomial is not in the orbit of any ROF. The solution to this problem is to take a directional derivative in a direction coming from a dual basis. For example, if ℓ_i(v_j) = δ_{i,j} then ∂g/∂v_1 = (∂f/∂x_1)(Ax + b) (see Lemma 3.8). Now comes another important observation: If H is a hitting-set generator for nonzero polynomials of the form ∂f/∂v, for f ∈ C and a direction v, and if G is a 1-independent map, then H + G hits every nonzero f ∈ C. The point is that if ∂f/∂v ○ H ≠ 0 then for some i, ∂f/∂x_i ○ H ≠ 0, and the claim follows from the first claim above. Thus, composition with (k + r)-independent maps allows us to reduce the problem of hitting a class C to finding a generator for polynomials that are obtained as a restriction to a subspace of co-dimension r of a directional partial derivative of order k of polynomials in C.
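To make the dual-basis fix concrete, here is a worked instance of the example above (our own computation, not taken from the original text). Write g(x) = x_1(x_1 + x_2)(x_1 + x_3), i.e. the ROF f(y) = y_1 y_2 y_3 composed with ℓ_1 = x_1, ℓ_2 = x_1 + x_2, ℓ_3 = x_1 + x_3. The vector v_1 = (1, −1, −1) satisfies ℓ_i(v_1) = δ_{i,1}, and a direct computation gives ∂g/∂v_1 = ∂g/∂x_1 − ∂g/∂x_2 − ∂g/∂x_3 = (x_1 + x_2)(x_1 + x_3) = (∂f/∂y_1)(ℓ_1, ℓ_2, ℓ_3), which is again in the orbit of a ROF, whereas ∂g/∂x_1 = (x_1 + x_2)(x_1 + x_3) + x_1(x_1 + x_3) + x_1(x_1 + x_2) is a sum of such products.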
Let us demonstrate this idea for the case of orbits of sparse polynomials, i.e., for polynomials of the form g = f(Ax + b) where the number of monomials in f is at most 2^t. It is not hard to see that there is a variable x_i such that if we consider f|_{x_i=0} and ∂f/∂x_i then one of these polynomials has at most 2^{t−1} monomials. Thus, after a sequence of at most t partial derivatives and restrictions, we get to a polynomial with only one monomial that we can easily hit. Hence after at most t directional derivatives and restrictions to a subspace, we get that g is a product of linear forms, which we can easily hit. This proves that any (t + 1)-independent map hits such nonzero polynomials g.
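As a small illustration of the halving step (our own example, not from the original text): take f = x_1x_2 + x_2x_3 + x_4, which has three ≤ 2^2 monomials. Choosing x_i = x_2, the restriction f|_{x_2=0} = x_4 already has a single monomial, while the derivative ∂f/∂x_2 = x_1 + x_3 has two; one more derivative or restriction on the latter branch (say ∂/∂x_1, or setting x_1 = 0) again leaves a single monomial, matching the bound of at most t = 2 steps.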
To obtain interpolating sets for our classes (and also a reconstruction algorithm for the orbit of the continuant polynomial), we prove that if two polynomials in the orbit, of any of the classes that we consider, are different, then there is a sequence of a few (directional) partial derivatives and restrictions that makes one of them zero while keeping the other nonzero. Using this and the ideas from above we construct our interpolating sets.
Discussion
As Theorem 1.37 shows, our hitting sets are not necessarily robust. It is thus an outstanding open problem to find a way to convert a hitting set to a robust one (recall Problem 1.1).
The following toy example demonstrates that converting a hitting set for a class C to a robust hitting set for C cannot be done in a black-box manner, and one has to use information about C for that: let C(F) be the class of all polynomials with non-zero free term. A trivial hitting set for C would simply be the singleton set H = {0}. On the other hand, it is clear that \overline{C} = F[x], so making H robust would yield a hitting set for all polynomials. Note, however, that this is not a "computational class." Another potential approach for obtaining robust hitting sets follows from the observation that the set of queries made by a non-adaptive deterministic black-box reconstruction algorithm, A, for C, which is continuous at 0 (i.e. at the identically zero polynomial), is a robust hitting set for C. The reason is that if 0 ≠ f ∈ \overline{C} and {f_k}_{k=1}^∞ ⊆ C converges to f, then for large enough k: ‖f_k‖_2 ≥ (1/2)‖f‖_2 > 0. As the f_k sequence converges and polynomial evaluation is continuous (and their evaluation vectors are bounded), the sequence v_k = f_k|_H ⊆ C|_H must also converge to some vector v = f|_H ∈ \overline{C}|_H. If v = 0 then the continuity of A at 0 implies that the coefficients of the polynomials f_k(x) must also converge to zero, as A(0) = 0. This would contradict ‖f_k‖_2 ≥ (1/2)‖f‖_2 > 0 for large enough k, so v ≠ 0 and thus H hits \overline{C}. Thus, an interesting challenge is to derandomize the reconstruction algorithms given in Theorems 1.20, 1.26 and 1.35, hoping that the resulting algorithms are continuous at 0. We note however, that currently we do not even have efficient deterministic root-finding algorithms over C. It is also known that in general, finding the minimal circuit for a polynomial can be very difficult. E.g., in [Hås90, Swe18] it was shown that the question of computing, or even approximating, tensor rank, for degree 3 tensors, is NP hard, over any field.
Remark 1.39. In Theorem 1.34, we have seen that any uniform O(log(sn))-independent polynomial map G is an interpolating set generator for T GL aff (C); i.e., G induces an interpolating set H for T GL aff (C). On the other hand, in Theorem 1.37, we constructed such a map G, with the additional property that G is not a hitting set generator for ΣΠΣ circuits. In particular, this implies that the induced (non-efficient) reconstruction map A (that takes f|_H and returns a circuit computing f) is not continuous at 0.
We conclude this section with a somewhat vague question.
Problem 1.40. Find a "computational" class of polynomials C with a known hitting set H, such that C ≠ \overline{C}, and convert H to a robust hitting set.
We note that the closure of Σ⋀Σ circuits (i.e. circuits computing polynomials of the form ∑_i ℓ_i(x)^d, for linear functions ℓ_i) is contained in the class of commutative read-once algebraic branching programs (see [FSS14]). Thus, the hitting set for the latter class gives a robust hitting set for the former [FSS14]. However, we seek an example in which there is an "interesting" conversion of a hitting set to a robust one.
Organization
The paper is organized as follows. Section 2 contains some more basic notations and definitions as well as characterization of the groups of symmetries of ANF ∆ and of T s,d . In Section 3 we give properties and constructions of k-independent polynomial maps and prove Theorem 1.37. In Section 4 we study the continuant polynomial and prove Theorems 1.17, 1.19 and 1.20. In Section 5 we study orbits of ROFs and ROANFs and prove Theorems 1.21, 1.22, 1.24 and 1.26. Section 6 contains our results for subclasses of ΣΠΣ circuits (Theorems 1.31, 1.32, 1.34 and 1.35). The appendix contains missing definitions that are required for explaining the reconstruction algorithm of [GKQ14].
Notation
We use boldface lowercase letters to denote tuples of variables or vectors, as in x = (x_1, . . . , x_n), a = (a_1, . . . , a_m), when the dimension is clear from the context. For any two elements i, j coming from some set S (usually i and j will be numbers), δ_{i,j} equals 1 when i = j and 0 otherwise. For every m ∈ N we denote with I_m the m × m identity matrix. When we wish to treat the entries of a matrix A as formal variables, we use boldface A. We will not use capital boldface letters other than to denote such matrices.
For an exponent vector a = (a_1, . . . , a_n) ∈ N^n, we denote x^a ≜ ∏_{i=1}^n x_i^{a_i}. In some cases we shall consider "monomials" with respect to a set of linear functions {ℓ_i}_{i=1}^m: for an exponent vector e = (e_1, . . . , e_m) ∈ N^m we denote ℓ^e = ∏_{i=1}^m ℓ_i^{e_i} and refer to it as an {ℓ_i}-monomial. For a polynomial f(x) we define the monomial support of f, denoted mon(f), as the set of monomials with non-zero coefficient in f. The variable set of f, denoted var(f), is the set of variables that f depends on. I.e., all variables that appear in mon(f).
A polynomial f with deg(f) ≤ 1 is called a linear function, and if f is homogeneous then it is called a linear form. For a polynomial f ∈ F[x] and an integer k ∈ N we denote by f^{[k]} the degree-k homogeneous part of f(x), i.e. the sum of all monomials of f of degree exactly k. In particular, f = ∑_{k=0}^{deg(f)} f^{[k]}. Note that for a linear function f, f^{[1]} is a linear form. We say that a polynomial f is homogeneous of degree k if f = f^{[k]}. Given a polynomial f(x), a subset of variables y ⊆ {x_1, . . . , x_n} and an assignment to those variables a ∈ F^y, we denote by f|_{y=a} ∈ F[x ∖ y] the polynomial resulting from assigning the values of a to the variables of y in f(x). We sometimes abuse notation and write y ⊆ [n] to indicate the indices of the assigned variables instead of the variables themselves.
Given an arithmetic circuit Φ, we frequently denote by Φ(x) or, abusing notation, by Φ, the polynomial computed at the output node of Φ. Given a class of arithmetic circuits C and a polynomial f ∈ F[x], we say f ∈ C if f can be computed by some circuit from C. For a circuit class C(F) we denote by C(F) the closure of C(F), as in Definition 1.5.
Groups of matrices and their action
We first list some simple properties of composition with a linear (or affine) transformation that we shall use implicitly.
Observation 2.1. For any m variate polynomial f (x 1 , . . . , x m ) and n ≥ m: • The set of matrices A for which f (x) = f (Ax) forms a multiplicative subgroup of GL n (F) and a similar claim holds for GL aff n (F).
We next define some special groups that serve as group of symmetries of some of the models that we consider. We first define the group of symmetries of ANF ∆ (x).
Definition 2.2. For m, ∆ ∈ N such that m = 2^∆, the tree-symmetry group TR_m(F) denotes the automorphisms of a rooted complete binary tree of depth ∆. It is defined recursively as follows.
• For m = 1, TR 1 (F) consists only of the identity matrix.
Note that by our definition, x and x + 1 are linearly dependent.
Definition 2.3. For any m = 4^∆, the tree-scale group TS_m(F) is the group generated by elements of TR_m(F) and matrices of the form

The importance of the group TS_m(F) stems from the fact that it is the symmetry group of ANF_∆. To intuitively see why this is the case, notice that in any representation of an ANF one may swap children of any node without changing the output polynomial. We call such symmetries "tree-symmetries" and they are captured by the group TR_n(F). A second source of ambiguity comes from the fact that we can rescale the formula. Recall that the output polynomial is of the form f_1 ⋅ f_2 + f_3 ⋅ f_4 (Definition 5.3). Clearly, the output does not change if we replace f_1 by, say, 2f_1 and f_2 by f_2/2. Such rescaling symmetries are captured by the group TS_n(F). Finally, another source for ambiguity comes from the fact that the quadratic polynomials computed at the bottom two layers of the ANF may have different representations. For example, As there is an infinite number of representations for each quadratic polynomial (over infinite fields), we can expect to characterize the symmetries in terms of the quadratics computed at the bottom two layers of the ANF.
Next, we define the group of symmetries of T s,d (x).
Definition 2.5. For any n ∈ N the permutation-scale group, denoted PS n (F), is the set of all matrices A ∈ GL n (F) which are row-permutations of non-singular diagonal matrices with determinant one.
For example, for s = d = 2 the matrix A =

Intuitively, T_{s,d} admits no symmetries other than the trivial ones: permutations on the product gates, and internal permutation-scale of each product gate such that the product of the scale coefficients is 1. This is exactly captured by the group TPS_{s,d}(F), which is therefore contained in the group of symmetries of T_{s,d}(x).
Proof of Lemma 2.7. Fix linear forms ℓ 1,1 , . . . , ℓ s,d such that the (i, j)th coordinate of Ax (using the indexing . By the discussion above, our goal is to prove that there exists a permutation π ∶ is also reducible. Composition with a non-singular matrix preserves reducibility, are s variable-disjoint, multilinear polynomials, each of which is either (d − 1)-homogeneous or zero. Thus, by Observation 2.8 below, at most one h i,r (A −1 x) can be non-zero. Accordingly, for every variable x i,j there exists a unique i ′ such that For any j > 1, if we take a derivative of (9) by x i,j then the LHS is clearly non-zero. Thus, both x i,1 and , proving variables in the same product gate of T s,d (x) are mapped to the same product gate of T s,d (Ax). A similar argument shows that variables from distinct product gates of T s,d (x) are mapped to different product gates of T s,d (Ax). It follows that product gates of T s,d (Ax) are variable-disjoint and that there exists a permutation π ∶ In particular, there can be no cancellations between different product gates of T s,d (Ax). Therefore, by multilinearity, for every i ∈ [s], the linear forms , this product must be 1, which completes the proof.
Observation 2.8. If f, g are non-constant, variable-disjoint, multilinear polynomials, then for every c ∈ F the polynomial f (x) + g(x) + c is irreducible.
3 k-independent polynomial maps and their properties

All the hitting and interpolating sets that we construct are based on k-independent polynomial maps (Definition 1.15). We next give some simple properties of independent polynomial maps, that follow immediately from the definition.
Observation 3.1. It holds that 1. If G(y, z) is a (k + 1)-independent polynomial map, then there exists a subset of variables S and an assignment α ∈ F^S such that G|_{S=α} is a k-independent polynomial map.
2. For any k ≥ 1, the n coordinates of any k-independent polynomial map are F-linearly independent.
We next give the construction of [SV15] of a k-independent polynomial map (denoted G k in [SV15]).
Definition 3.2. Fix n and a set of n distinct field elements α_1, . . . , α_n ∈ F, and let L_1(y), . . . , L_n(y) be the Lagrange interpolation polynomials satisfying L_i(α_j) = δ_{i,j}. For any k ≥ 1, we define G^{SV}_k : F^{2k} → F^n by letting its i-th coordinate be (G^{SV}_k)_i(y_1, . . . , y_k, z_1, . . . , z_k) = ∑_{j=1}^k L_i(y_j) ⋅ z_j.

Observation 3.3. G^{SV}_k is a k-independent polynomial map, in which each variable has degree at most n − 1.
The generator G^{SV}_k can be converted to a uniform k-independent polynomial map by adding another k control variables y_{k+1}, . . . , y_{2k}, and swapping out the L_i(y_j)s for their homogenizations y_{j+k}^{n−1} L_i(y_j / y_{j+k}):

Definition 3.4. With the notation used in Definition 3.2, define the uniform SV-generator with k independence G^{SV-hom}_k : F^{3k} → F^n by letting its i-th coordinate be (G^{SV-hom}_k)_i = ∑_{j=1}^k y_{j+k}^{n−1} L_i(y_j / y_{j+k}) ⋅ z_j.

Observation 3.5. G^{SV-hom}_k is a uniform k-independent polynomial map, with individual degrees at most n − 1.
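The following small sympy sketch (our own illustration, not part of the original text) implements the generator as reconstructed in Definition 3.2 above, assuming Lagrange interpolation polynomials L_i with L_i(α_j) = δ_{i,j} over the points α_j = 0, 1, . . . , n − 1.

    import sympy as sp

    def sv_generator(n, k):
        # n distinct "field elements"; here simply the integers 0, ..., n-1.
        alphas = list(range(n))
        y = sp.symbols(f"y1:{k + 1}")
        z = sp.symbols(f"z1:{k + 1}")

        def L(i, t):
            # Lagrange polynomial with L_i(alpha_j) = delta_{i,j}.
            num = sp.Mul(*[(t - a) for j, a in enumerate(alphas) if j != i])
            den = sp.Mul(*[(alphas[i] - a) for j, a in enumerate(alphas) if j != i])
            return sp.expand(num / den)

        # i-th coordinate: sum_j L_i(y_j) * z_j.
        return [sum(L(i, y[j]) * z[j] for j in range(k)) for i in range(n)], y, z

    coords, y, z = sv_generator(n=4, k=1)
    # Substituting y_1 = alpha_2 sends the map to z_1 * e_2, the behaviour
    # underlying 1-independence.
    print([c.subs({y[0]: 2}) for c in coords])   # -> [0, 0, z1, 0]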
We next show how we can use k-independent polynomial maps in order to, roughly, simulate a kth order directional derivative or, project a polynomial to a subspace of co-dimension k. We first need to define the notion of a directional derivative.
For a polynomial f ∈ F[x_1, . . . , x_n] and a vector v ∈ F^n, the directional derivative of f in direction v is ∂f/∂v ≜ ∑_{i=1}^n v_i ⋅ ∂f/∂x_i. If F has positive characteristic then by ∂f/∂x_i we refer to the formal derivative (which in the case of fields of characteristic zero is equal to the analytical definition). Observe that we still have the chain rule ∂f(g_1, . . . , g_m)/∂x_i = ∑_{j=1}^m (∂f/∂y_j)(g_1, . . . , g_m) ⋅ ∂g_j/∂x_i, where in the last expression f is an m-variate polynomial, and g_1, . . . , g_m are n-variate polynomials.
We shall often take derivatives according to a dual set to a set of linearly independent linear functions:

Definition 3.7. A dual set for m linearly independent linear functions ℓ_1, . . . , ℓ_m (recall that we say that linear functions are linearly independent if and only if their degree-1 homogeneous parts are linearly independent) in n ≥ m variables is a set of vectors v_1, . . . , v_m ∈ F^n such that ℓ_i^{[1]}(v_j) = δ_{i,j}.

Lemma 3.8. Let ℓ_1, . . . , ℓ_m ∈ F[x_1, . . . , x_n], for n ≥ m, be linearly independent linear functions. Let {v_i} ⊂ F^n be a dual set. Let g ∈ F[y_1, . . . , y_m] be a polynomial. Then, for f(x) = g(ℓ_1(x), . . . , ℓ_m(x)) it holds that ∂f/∂v_i = (∂g/∂y_i)(ℓ_1(x), . . . , ℓ_m(x)) for every i ∈ [m].
Proof. By definition of k-independent polynomial maps, G = G_1(y_1, z_1) + . . . + G_k(y_k, z_k) for some variable-disjoint 1-independent polynomial maps G_1, . . . , G_k. It is therefore enough to prove the lemma for k = 1, as we can replace f with ∂^{k−1}f/(∂v_2 ⋯ ∂v_k), H with H + G_2 + . . . + G_k and G with G_1; by iterative application of the result for k = 1, we will get the general result for an arbitrary k ∈ N.
Proof of Theorem 1.37
We next prove that there are k-independent maps that are provably not robust. The proof is by giving a different construction of such maps that, for an appropriate arrangement of the n variables in a matrix, is guaranteed to output matrices of rank at most k. Thus, a determinant of any (k + 1) × (k + 1) minor, a polynomial that has small formulas for small values of k, vanishes on the output of any such map.
The fact that such a construction exists was already noticed in [FSTW16] (Construction 6.3 of the full version of the paper). For completeness we repeat the construction here.
Proof (of Theorem 1.37). Fix the number of variables n and assume WLOG n is a perfect square, i.e., n = m^2. We index the variables as x_{i,j} for i, j ∈ [m]. We let f = Det_{t+1}. By [GKKS16], over fields of characteristic zero, f has a t^{O(√t)} = O(n) sized ΣΠΣ formula, which is polynomial in n for t = O((log n / log log n)^2). Over fields of positive characteristic the formula size is quasipolynomial in t, and the ΣΠΣ complexity is at most t!, which is polynomial in n for t = O(log n / log log n).
Denote by M the (t + 1) × (t + 1) symbolic matrix of variables M_{i,j} = x_{i,j}. We first construct a uniform 1-independent polynomial map G_1 such that M ○ G_1 is of rank 1, and define G to be a sum of t variable-disjoint copies of G_1. As rank(M ○ G_1) = 1, we have rank(M ○ G) ≤ t so Det_{t+1}(M ○ G) = 0, as required. We now focus on G_1.
Fix n distinct field elements {α_{i,j}}_{i,j=1}^m ⊆ F and let w, y, z be new variables. Define two vectors of polynomials of degree n − 1, R = (R_1, . . . , R_m), C = (C_1, . . . , C_m) ∈ F[y]^m, such that for every k ∈ [m], R_k and C_k satisfy R_k(α_{i,j}) = δ_{i,k} and C_k(α_{i,j}) = δ_{j,k}.
Define G_1(w, y, z) as the m × m matrix z ⋅ (w^{2n−2} R(y/w) ⋅ C(y/w)^T) (the (i, j) entry of G_1 is z ⋅ w^{2n−2} ⋅ R_i(y/w) ⋅ C_j(y/w)). As every coordinate of G_1 is a homogeneous polynomial of degree 2n − 1, G_1 is a uniform polynomial map. For any i, j ∈ [m], the evaluation G_1(1, α_{i,j}, z) has z in entry (i, j) and 0 everywhere else, so G_1 is a uniform 1-independent polynomial map. The resulting matrix M ○ G_1 is of rank 1 since it is a product of vectors R ⋅ C^T, so the variable-disjoint sum of t copies of G_1 yields a matrix of rank at most t, as required.
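The rank argument can be sanity-checked numerically; the following numpy snippet is our own illustration, with random vectors standing in for the evaluations of R and C. It confirms that a sum of t rank-1 outer products has rank at most t, so the determinant of a (t + 1) × (t + 1) minor vanishes.

    import numpy as np

    rng = np.random.default_rng(0)
    m, t = 6, 3
    # Sum of t variable-disjoint rank-1 contributions z_s * r_s c_s^T.
    M = sum(rng.standard_normal() * np.outer(rng.standard_normal(m), rng.standard_normal(m))
            for _ in range(t))
    print(np.linalg.matrix_rank(M))            # at most t (here: 3)
    print(np.linalg.det(M[:t + 1, :t + 1]))    # ~ 0 up to floating-point error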
Interpolation and reconstruction for orbits of the continuant polynomial
We start by proving that any uniform 1-independent map hits C GL aff (F) (Theorem 1.17).
Let G 1 be a uniform 1-independent polynomial map into F n . Let d be the degree of the different components of G 1 . Observation 3.1(2) implies that (∏ m i=1 ℓ i ) ○ G 1 ≠ 0 and hence it is a nonzero homogeneous polynomial of degree m ⋅ d.
and the claim follows.
Corollary 1.18 follows immediately from Theorem 1.17, Observation 1.14 and the construction of a uniform generator in Definition 3.4.
Remark 4.1. A similar argument would show that G(y, z) ≜ (y^{n−1}, y^{n−2}z, . . . , z^{n−1}) is a hitting set generator for C GL aff n (F) m, which leads to a hitting set of size n^4.
We now turn to giving a reconstruction algorithm for C GL aff (F) . We start by proving some simple lemmas that will be used for constructing an interpolating set.
Definition 4.2. We call an ordered triplet (i, j, k) ∈ Z 3 m a consecutive triplet if j = i + 1 and k = i + 2, or j = k + 1 and i = k + 2, where all equalities are taken modulo m. Proof. Observe that a polynomial f (x) has a monomial containing x i and x k but not x j , if and only if this is also the case when we set x j = 0. Assume that (i, j, k) is a consecutive triplet. Then, It immediately follows that no monomial of C m (x 0 , . . . , x i , 0, x i+2 , . . . , x m−1 ) contains both x i and x i+2 .
We now prove the second direction in the claim. Since C m is a trace of a matrix product, by properties of trace we can assume WLOG that i < j < k, by first rotating the order of the matrices until we have i < j < k or k < j < i (where a < b means that the matrix corresponding to a comes before that of b). As both cases are equivalent we can assume that i < j < k. We next handle this case. Assume WLOG that j − i > 1. Set x r = 0 for every i + 2 ≤ r < k, to 0. We get that the new polynomial has the form and a monomial of maximal degree in this polynomial contains both x i and x k (when k − i is even there is a unique monomial of maximal degree, and when k − i is odd there are two such monomials).
For every list of three distinct indices (i, j, k) ∈ [m] 3 0 denote Lemma 4.5. Let n ≥ m ≥ 3 and t be integers. Assume H(w) ∶ F t → F n is a hitting-set generator for , for every list of three distinct indices (i, j, k) ∈ [m] 3 0 . Let G 3 (y, z) be a 3-independent polynomial map (into F n ) that each of its coordinates is a homogeneous linear function in z, over F(y) (for example, G SV k has this property, for every k). Then, for every m 1 , m 2 and n and every two polynomials Roughly, what the lemma claims is that if G 3 is a 3-independent map and H hits C , then H + G 3 is an interpolating-set generator.
Step 1: As in the proof of Theorem 1.17, deg(f i ) = m i and the homogeneous part of degree m i in f i is given by Observe that since f is nonzero (e.g. by Observation 3.1(2)), and its degree, as a polynomial in z, is exactly m i (and every other term in f i ○ (H + G 3 ) has degree strictly smaller as a polynomial in z), it must hold that m 1 = m 2 . To simplify the notation let m = m 1 = m 2 . Again by comparing terms of maximal degree in z we see that As both {ℓ 1,i } and {ℓ 2,i } are linearly independent sets, we get from unique factorization and from Observation 3.1(3), that there exists a permutation π ∶ [m] 0 → [m] 0 and constants {α j } so that ℓ 1,j = α j ℓ 2,π(j) , for every j. This completes the first step.
From Lemma 4.5 we see that all that we have to do in order to construct an interpolating set for C GL aff (F) , is to find a map H as in the statement of the lemma.
Lemma 4.6. Let n ≥ m be integers. Let G 2 (y, z) be a 2-independent polynomial map into F n , that is linear in z. Then, For every list of three distinct indices (i, j, k) ∈ [m] 3 0 and for every m n-variate linearly independent linear functions ℓ 0 (x), . . . , Use G 2 to further restrict the polynomial to the subspace ℓ j−1 = 0 (using Lemma 3.10). Let G ′ 2 denote G 2 after the restriction. Lemma 3.10 guarantees that G ′ 2 is 1-independent. Observe that the homogeneous term of maximal degree in C (i,j,k) m (ℓ 0 , . . . , ℓ m−1 ) t . It follows that the term of maximal degree, as a polynomial in z, in C (i,j,k) m (ℓ 0 , . . . , ℓ m−1 ) 2 , which is nonzero by Observation 3.1(2).
Combining Lemmas 4.5 and 4.6 we get the following corollary: Corollary 4.7. Let G 5 (y, z) ∶ F t → F n be a 5-independent polynomial map that is linear in z. Then, for every m 1 , m 2 ≤ n and every two polynomials f 1 ∈ C GL aff n (F) m 1 and f 2 ∈ C GL aff n (F) m 2 , it holds that f 1 = f 2 if and only if f 1 ○ G 5 = f 2 ○ G 5 .
Theorem 1.19 follows immediately from Corollary 4.7 and Observation 1.14.
Reconstruction algorithm for C GL aff (F)
The reconstruction algorithm is given as Algorithm 1 below.

Algorithm 1: reconstruction algorithm for C GL aff (F). (Pseudocode fragments: compute the free terms; obtain linear functions L_i such that α_i ⋅ L_i = ℓ_{π(i)} for some permutation π and scalars α_i; find all consecutive triplets and recover π, WLOG the identity permutation; find {u_i} such that L_i(u_j) = δ_{i,j}; recover the α_i s, handling the cases m odd and m even separately.)
Step 1 can be executed in polynomial-time.
Proof. Let G_1(y, z) be a 1-independent map. Let w be a new variable and consider G = w ⋅ G_1, i.e., we multiply each coordinate of G_1 with w. Observe that the degree of w and of z in (f ○ G) is exactly deg(f) = m. As in the proof of Theorem 1.17, we see that the m-homogeneous component of (f ○ G), when viewed as a polynomial in w, is ∏_{i=0}^{m−1} ℓ_i^{[1]} ○ G_1 ≠ 0. As we know that m ≤ n, using interpolation (over w) we get black-box access to (f ○ G)^{[k]}, for every 0 ≤ k ≤ n. We look for the first k, starting from n and going down, such that (f ○ G)^{[k]} ≠ 0. This can be done, for example, by interpolation (over y, z).
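The interpolation-over-w step admits a direct illustration. The following sympy sketch is our own example; the polynomial f below is just a stand-in for the black box. Since f(w ⋅ x) = ∑_k w^k f^{[k]}(x), evaluating at deg(f) + 1 distinct values of w and solving the resulting Vandermonde system recovers every homogeneous component.

    import sympy as sp

    x1, x2, w = sp.symbols("x1 x2 w")
    f = 3*x1*x2 + x1 + 5                      # hypothetical black box
    d = 2                                     # an upper bound on deg(f)

    ws = list(range(1, d + 2))                # d+1 distinct interpolation points
    evals = sp.Matrix([f.subs({x1: wi*x1, x2: wi*x2}) for wi in ws])
    V = sp.Matrix([[wi**k for k in range(d + 1)] for wi in ws])
    components = V.LUsolve(evals)             # components[k] == f^[k]
    print([sp.expand(c) for c in components]) # -> [5, x1, 3*x1*x2]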
Step 2 can be done with polynomially many queries to a root-finding algorithm over F (assuming |F| ≥ n^3).
We assume some familiarity with known factoring algorithms. For a good reference see [vzGG03] (the lecture notes of Madhu Sudan are also a great resource on the subject [Sud99]).
Recall that f^{[m]} = ∏_{i=0}^{m−1} ℓ_i^{[1]}, and all its linear factors are linearly independent. Known factoring algorithms require that we reduce the polynomial that we wish to factor to a square-free, bivariate polynomial. This can be easily done using 2-independent maps. Let G_2(y, z_1, z_2) be a 2-independent map that is a linear form in z_1 and z_2 (e.g., G^{SV}_2). Observation 3.1(3) shows that composing f^{[m]} with G_2(y, z) keeps all factors linearly independent, when viewed as linear polynomials in z. Each assignment to y gives a different polynomial whose factors are homogeneous linear functions in z_1, z_2. Observe that there is an assignment to y from the set [n^3]^{|y|} that maintains the property that the factors are linearly independent. Indeed, for every two factors we need the assignment to be a nonzero of the determinant of the coefficient-matrix of the two factors. There is one such determinant for every pair of factors, each of degree 2(n − 1) as a polynomial in y (hence the requirement for a field of size n^3). By going over all such assignments to y, we are guaranteed to find one that maintains this property.
Once we have reduced to the square-free, bivariate case, factoring algorithms proceed by reducing to factoring of univariate polynomials. In our case the univariate polynomial completely splits as a product of linear factors, hence the univariate factorization step only needs oracle access to a root-finding algorithm.
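A toy sympy illustration of this last step (our own example; the hidden factors below are hypothetical): a homogeneous bivariate polynomial that splits into linear forms is recovered, up to scaling, from the roots of the univariate polynomial obtained by setting z_2 = 1.

    import sympy as sp

    z1, z2 = sp.symbols("z1 z2")
    # Hidden product of linear forms, playing the role of f^[m] composed with G_2.
    f = sp.expand((z1 + 2*z2) * (3*z1 - z2) * (z1 + 5*z2))

    univ = sp.Poly(f.subs(z2, 1), z1)
    # Each root r of the univariate polynomial corresponds to the linear form
    # z1 - r*z2, up to a scalar multiple.
    recovered = [z1 - r*z2 for r, mult in sp.roots(univ).items() for _ in range(mult)]
    print(recovered)    # -> [z1 + 2*z2, z1 - z2/3, z1 + 5*z2] (in some order)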
Observe that we have found irreducible linear functions L_i, each of which is a scalar multiple of some ℓ_{π(i)}. Claim 4.10. For every i, the for-loop in Step 6 returns L_i such that α_i ⋅ L_i = ℓ_{π(i)}.
We now note that As i . As ℓ Hence,
It follows that
as claimed.
An important point to notice is that we can check whether deg(g i ) = m − 1 in the same manner in which we computed deg(f ) (thanks to Equation (12)).
Note that
Step 19 can be executed using Corollary 4.4 and Lemma 4.6. Indeed, as ℓ_{π(i)} = α_i L_i, it follows that {v_i / α_i} is a dual set for the ℓ_{π(i)}. That is, ℓ_{π(i)}(v_j / α_j) = δ_{i,j}. Therefore, = 0. Hence, with the help of Lemma 4.6 and interpolation, we can find all consecutive triplets.
Once we have that information, construction of π (up to reversal, which does not change the resulting polynomial) is immediate. Since we know π we can assume WLOG that π is the identity permutation.
Step 21 is possible as the L_i s are linearly independent. Note that ℓ

Claim 4.11. The linear functions ℓ̃_i that were computed in Steps 23-31 satisfy C_m(ℓ̃_0, . . . , ℓ̃_{m−1}) = f.
In this case we get that ℓ̃_i = f(u_i) ⋅ L_i = α_i L_i = ℓ_i. In particular, we recovered the original ℓ_i s.
Next, assume that m is even. Observe that since m is even we can replace each ℓ_{2i} with ℓ_{2i}/α_0 and each ℓ_{2i+1} with ℓ_{2i+1} ⋅ α_0 and still get the same f (recall Equation (11)). Therefore, we may assume WLOG that α_0 = 1.
The claim regarding the running time is also obvious given the analysis above. We thus see that Theorem 1.20 holds.
Remark 4.12. As Theorem 1.37 shows that t-independent maps do not necessarily lead to robust hitting sets, our reconstruction algorithm is not continuous at 0 (recall the discussion in section 1.6): Intuitively, around 0, there is no way to break the tie between the different polynomials C
Orbits of read-once formulas
In this section we discuss the circuit classes ANF GL aff (F) and ROF GL(F) (see Definitions 5.1 and 5.3 below), which are dense in VP e . We construct a hitting set for ROF GL(F) and an interpolating set for ANF GL aff (F) . Finally we observe that the randomized reconstruction algorithm of [GKQ14] works for every polynomial in ANF GL aff (C) .
We start with basic definitions concerning ROFs and ROANFs and prove Theorem 1.21.
Definition 5.1. An arithmetic read-once formula (ROF for short) Φ over a field F in the variables x = (x_1, . . . , x_n) is a binary tree T whose leaves are labeled with input variables and a pair of field elements (α, β) ∈ F^2, and whose internal nodes are labeled with the arithmetic operations {+, ×} and a field element α ∈ F. Each input variable can label at most one leaf. The computation is performed in the following way: A leaf labeled with the variable x_i and with (α, β) computes the polynomial αx_i + β. If a node v is labeled with the operation * ∈ {+, ×} and with α ∈ F, and its children compute the polynomials Φ_{v_1} and Φ_{v_2}, then the polynomial computed at v is (Φ_{v_1} * Φ_{v_2}) + α. A polynomial f(x) is called a read-once polynomial (ROP for short) if f(x) can be computed by a ROF.
We next define formulas in alternating normal form, as was first defined in [GKQ14].
Definition 5.3 (Section 3.2 in [GKQ14]). We say that an arithmetic formula Φ, over F, is in alternating normal form (Φ is called an ANF for short) if: 1. The underlying tree of Φ is a complete rooted binary tree (the root node is called the output node). In particular, size(Φ) = 2^{depth(Φ)+1} − 1, where size(Φ) is the number of nodes in the tree of Φ and depth(Φ) is the maximum distance of a leaf node from the output node of Φ.
2. The internal nodes consist of alternating layers of + and × gates. In particular, the label of an internal node at distance d from the closest leaf node is + if d is even and × otherwise. So if the root node is a + node, its children are all × nodes, its grandchildren are all + etc.
3. The leaves of the tree are labeled with linear functions. That is, each leaf is labeled with a linear function α_0 + α_1 x_1 + ⋯ + α_n x_n for field elements α_0, . . . , α_n ∈ F. The product depth ∆ of Φ is the number of layers of product gates. The number of leaves of Φ is therefore always 4^∆ if the top gate is +, and (1/2) ⋅ 4^∆ if the top gate is ×.
The class ANF GL aff (F) mentioned in Section 1.2.2 is defined in terms of the following canonical read-once ANF formula (ROANF for short): Definition 5.4 (Notation from Fact 3.4 of [GKQ14]). We denote the canonical ROANF polynomial, of product depth ∆ on 4^∆ variables, as ANF_∆(x). It is defined recursively as follows: ANF_0(x_1) = x_1 and, for ∆ ≥ 1, ANF_∆(x_1, . . . , x_{4^∆}) = ANF_{∆−1}(x_1, . . . , x_{4^{∆−1}}) ⋅ ANF_{∆−1}(x_{4^{∆−1}+1}, . . . , x_{2⋅4^{∆−1}}) + ANF_{∆−1}(x_{2⋅4^{∆−1}+1}, . . . , x_{3⋅4^{∆−1}}) ⋅ ANF_{∆−1}(x_{3⋅4^{∆−1}+1}, . . . , x_{4^∆}). Observe that any polynomial in ANF GL aff n (F) ∆ is an ANF according to Definition 5.3, but not vice versa.
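For instance, under this recursion, ANF_1(x_1, . . . , x_4) = x_1x_2 + x_3x_4 and ANF_2(x_1, . . . , x_{16}) = (x_1x_2 + x_3x_4)(x_5x_6 + x_7x_8) + (x_9x_{10} + x_{11}x_{12})(x_{13}x_{14} + x_{15}x_{16}); this worked expansion is our own illustration of the definition as reconstructed above.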
Next we give some basic definitions concerning the underlying tree of a ROF, or of a ROANF.
Definition 5.5. For two nodes v_i, v_j of a formula Φ, their first common gate, denoted fcg(v_i, v_j), is the first gate in Φ common to all the paths from v_i and v_j to the root of the formula.
Definition 5.6. Let T be the computation tree of some ROP polynomial g ∈ F[x]. For a node v ∈ T that is not the root, we denote by sib(v) ∈ T the unique sibling of v in T . When clear from context, sib(v) ∈ F[x] denotes the polynomial computed at node sib(v).
We may characterize mon(ANF_∆(x)) by the first common gates of pairs of variables appearing in the monomials: Observation 5.7. x^e ∈ mon(ANF_∆(x)) if and only if x^e is multilinear of degree 2^∆, and for every x_i ≠ x_j ∈ var(x^e) it holds that fcg(x_i, x_j) is a product gate.
Observation 5.8. Let n = 4 ∆ . Let T be the computation tree of ANF ∆ (x) (from Definition 5.4 above). Fix some variable x i ∈ x and let {v 1 , . . . , v ∆ } ⊆ T be the addition gates on the path from x i to the root of T , where v ∆ is the root. Denote with v 0 ∈ T the leaf labeled x i . Then, recalling Definition 5.6, ANF k (var(sib(v k ))) .
Corollary 5.9. For any set of variables S ⊆ x, ∂ANF ∆ ∂S is either zero, or a product of variable-disjoint ROANFs.
Proof. Denote u = (u_1, . . . , u_n). By Observation 5.8, every monomial of ∂ANF_∆

We first give the simple proof of Theorem 1.21, that separates ANF GL aff (F), ROF GL(F) and VP_e, and that shows that their closures are equal.
Proof of Theorem 1.21. From the definition it is obvious that ANF GL aff (F) ⊆ ROF GL(F). It is also clear that the classes are different as the degree of every polynomial in ANF GL aff (F) is always a power of 2, which is not necessarily the case for polynomials in ROF GL(F). As polynomials in ROF GL(F) are multilinear with respect to some basis, it is also clear that ROF GL(F) ⊊ VP_e, as the example f(x) = x^2 shows. It is also not hard to demonstrate a multilinear polynomial in VP_e that is not in ROF GL(F). The next claim follows Example 3.8 of [SV14].
Proof. Assume for a contradiction that there is some ROF formula containing f in its orbit. As f is irreducible, the top gate of Φ is an addition gate. As there cannot be any cancellations in Φ, the children of the root must compute homogeneous degree 2 polynomials. It is not hard to see that this means that the polynomial computed cannot be written as a ROF in only three linear functions, as one child of the root must compute a linear function.
To show that the closures are equal, we note that Proposition 3.2 of [GKQ14] states that any polynomial that is computed by a size s formula can be computed by an ANF formula of size O(s^4). As the leaves of an ANF formula are labeled with linear functions, we can approximate these linear functions with linearly independent linear functions and thus conclude that VP_e ⊆ \overline{ANF GL aff (F)}. The claim about the closures immediately follows.
A hitting set generator for orbits of read-once formulas
In this section we prove Theorem 1.22 that gives a hitting set for ROF GL aff n (F). Our proof follows the proof of [SV15], who constructed such a generator for ROFs. We note that Minahan and Volkovich significantly improved upon the result of [SV15], namely, they achieved a polynomial-sized hitting set for ROFs. However, we do not know how to adapt their approach to orbits of ROFs and instead use the method of [SV15] that is based on taking partial derivatives, an operation that works well when composing the ROF with a k-independent map (recall Lemma 3.9). We now turn to proving Theorem 1.22.
Proof of Theorem 1.22. The proof of the theorem is by induction on the number of variables in the underlying ROF, which we denote by m. In fact, we claim something stronger: Let Φ be a ROF on m ≤ 2^t many variables that computes a non-constant polynomial. Then, for f ∈ Φ GL aff n (F) and any (t + 1)-independent polynomial map G, over F, f ○ G is a non-constant polynomial.
For m ≤ 2 the claim follows from Observation 3.1.
As in the proof of Lemma 5.1 of [SV15], we split the proof into cases depending on the top gate of Φ. Let G 1 , G t be a 1-independent polynomial map and a t-independent polynomial map, respectively, such that G = G 1 + G t .
As before, Corollary 1.23 follows immediately from Theorem 1.22 and Observation 1.14.
An interpolating set generator for ANF GL aff (F)
In this section, we construct an interpolating set generator for ANF GL aff (F) , thus proving Theorem 1.24. We restate the theorem to ease the reading.
The first step in the proof is a reduction to the case where f 1 and f 2 are "almost the same". Recall that by Fact 2.4, f 1 and f 2 can be equal and still compute different linear functions at their bottom layer. The next lemma (roughly) shows that composing ANF ∆ (x) with an O(∆)-independent map, preserves equivalence of different ANFs while not introducing any new equivalences.
Lemma 5.12. For i ∈ {1, 2} let f_i = ANF_{∆_i}(A_i x) ∈ ANF GL aff n (F) ∆_i, and set f = f_1 − f_2. Denote by h_i = x_0^{2^{∆_i}} f_i(x_1/x_0, . . . , x_n/x_0) the homogenization of f_i, and let Ã_i be an extension of A_i such that Ã_i ∈ GL_{n+1}(F) and h_i = ANF_{∆_i}(Ã_i x). Set k = 2 max{∆_1, ∆_2} + 7 and let G be any uniform k-independent polynomial map. If f ≠ 0 then at least one of the following holds: 1. f ○ G ≠ 0. 2. ∆_1 = ∆_2, and there is a 1-1 map between the quadratic forms of h_2(Ã_1^{−1} x) and those of ANF_{∆_1}(x), such that any two quadratics that were matched have the same monomials, possibly with different coefficients. Furthermore, the map between the quadratics is a TR_{4^{∆_1−1}}(F) symmetry (see Definition 2.2).
Observe that if {ℓ_{i,j}} are linear functions such that f_i = ANF_{∆_i}(ℓ_{i,1}(x), . . . , ℓ_{i,4^{∆_i}}(x)), then the condition "the monomials appearing in the quadratic forms of h_2((Ã_1)^{−1} x) are identical to the monomials of the quadratic forms of ANF_{∆_1}(x), up to TR_{4^{∆_1}}(F) symmetry" is equivalent to saying that there exists a permutation π ∈ TR_{4^{∆_1−1}}(F), matching quadratics in f_2 to those of f_1, such that when we represent the i-th quadratic q_i^{(2)} of f_2 according to the linear functions ℓ_{1,1}, . . . , ℓ_{1,4^{∆_1}}, then q_i^{(2)} has the same set of {ℓ_{1,1}, . . . , ℓ_{1,4^{∆_1}}}-monomials as q_{π(i)}^{(1)}, the π(i)-th quadratic in f_1. In general, whenever we say "up to TR_{4^{∆_1−1}}(F) symmetry" we mean that there exists a permutation π ∈ TR_{4^{∆_1−1}}(F) such that the statement holds when we apply π to the quadratics computed at the bottom layers.
Once we have this in mind we can see that the only "bad" case is when, for every i, ℓ 2,i = α i ⋅ ℓ 1,i , for scalars α i ∈ F (possibly after applying some TR 4 ∆ 1 −1 (F) symmetry). Thus, the proof of Theorem 1.24 would follow from the next lemma.
Lemma 5.14. Let x = (x_1, . . . , x_n) and f ∈ F[x] be a polynomial of degree d. Let g(x_0, x) = x_0^d f(x_1/x_0, . . . , x_n/x_0) be the homogenization of f, and let G : F^t → F^{n+1} be a polynomial map such that the coordinates of G are homogeneous polynomials of identical degree. Let H : F^t → F^n be the restriction of G to the coordinates in [n] (i.e., we ignore the 0th coordinate). If g ○ G ≠ 0 then f ○ H ≠ 0.
, and denote by G 0 the 0th coordinate of G (such that G = (G 0 , H)). We get: Fix i ∈ [d + 1] 0 to be the minimal index such that f [d−i] ○ H ≠ 0. Such an index must exist, because g ○ G ≠ 0. As all coordinates of G are homogeneous and of identical degree, for any
Proof of Lemma 5.12
The high-level strategy for proving Lemma 5.12 is as follows: first, we show that if Case 2 of the lemma is false, then there are v, u ∈ F^n such that ∂²f/∂v∂u = ∂²f_1/∂v∂u ≠ 0. This is proven in Lemma 5.16, based on the structural result of Lemma 5.15. After that, we prove that (k − 2)-independent polynomial maps hit ∂²f_1/∂v∂u, in Lemma 5.18.
Proof. The proof is by induction on ∆.
First, note that var(g) = var(ANF_∆(x)): we already know var(g) ⊆ var(ANF_∆(x)), and g must depend on at least 4^∆ variables, or the 4^∆ linear functions on the leaves cannot be linearly independent.
Next, observe that g 1 g 2 and g 3 g 4 must be variable disjoint: if x i ∈ var(g 1 g 2 ) ∩ var(g 3 g 4 ), then ∂g ∂x i (A −1 x) is a sum of non-constant, variable-disjoint, multilinear polynomials, and ∂g ∂x i (x) is therefore irreducible (recall Observation 2.8). However, if we denote by x j the sibling of x i in ANF ∆ (x), the fact that mon(g) ⊆ mon(ANF ∆ (x)) implies that every monomial of ∂g ∂x i (x) is divisible by x j . As ∆ > 1, we have deg ∂g ∂x i ≥ 3, and therefore ∂g ∂x i must be reducible, in contradiction. Thus, var(g 1 g 2 ) ∩ var(g 3 g 4 ) = ∅, and in particular mon(g 1 g 2 ), mon(g 3 g 4 ) ⊆ mon(ANF ∆ ).
Next, assume, WLOG, there exist some monomial x e ∈ mon(F 1 F 2 ) such that x e ∈ mon(g 1 g 2 ). If g 1 g 2 contains a monomial of F 3 F 4 , then g 1 g 2 can be partitioned into a sum of two variable-disjoint, non-constant, multilinear polynomials; which would contradict reducibility of g 1 g 2 . Thus, mon(g 1 g 2 ) ⊆ mon(F 1 F 2 ). As we showed that var(g) = var(ANF ∆ (x)), the conditions on the monomials implies that there must exist some monomial of F 3 F 4 in g, so we may conclude mon(g 3 g 4 ) ⊆ mon (F 3 F 4 ), and in addition, var(g 1 g 2 ) = var(F 1 F 2 ) and var(g 3 g 4 ) = var(F 3 F 4 ).
To apply induction, it remains to prove that mon(g i ) ⊆ mon(F i ) for i ∈ [4] (up to TR(F)); focus on g 1 g 2 and WLOG assume var(g 1 ) ∩ var(F 1 ) ≠ ∅.
We now show that g 1 cannot contain variables from both F 1 and F 2 . Assume there exist monomials x e 1 , x e 2 ∈ mon(g 1 ) such that x e 1 contains variables from var(F 1 ) and x e 2 contains variables from var(F 2 ) (x e 1 and x e 2 may be the same monomial). WLOG assume var(x e 1 )∩var(p 1 ) ≠ ∅, and likewise var(x e 2 )∩var(p 3 ) ≠ ∅. Let x c ∈ mon(g 2 ), and let x i x c . If x i ∈ var(p 2 ), then x e 1 ⋅ x c ∈ mon(g 1 g 2 ) is a monomial involving variables from both p 1 and p 2 , in contradiction; by a symmetric argument, we cannot have x i ∈ var(p 4 ). Thus, all monomials of g 2 may involve only variables of p 1 and p 3 , i.e., var(g 2 ) ⊆ var(p 1 ) ⊍ var(p 3 ). Therefore, the only way to get monomials involving variables of p 2 or p 4 is via monomials of g 1 , so g 1 must contain monomials x e 1 ′ , x e 2 ′ containing variables of p 2 and p 4 , respectively (here we use the fact that var(g 1 g 2 ) = var (F 1 F 2 )). As before, we get var(g 2 ) ⊆ var(p 2 ) ⊍ var(p 4 ), in contradiction.
The next step is showing that, if Case 2 of Lemma 5.12 does not hold, then we may choose a pair of vectors by which to take a derivative of f = f 1 − f 2 such that ∂ 2 f 1 ∂v 1 v 2 = 0 and ∂ 2 f 2 ∂v 1 v 2 ≠ 0. This is formalized in Lemma 5.16 below, and is proved by applying Lemma 5.15.
• If x e is multilinear, then let x i , x j ∈ var(x e ) be such that fcg(x i , x j ) is an addition gate (all monomials ofg are of degree exactly 2 ∆ , so Observation 5.7 implies the existence of such a pair of variables). Set v ≜ v i , u ≜ v j . Lemma 3.8 again implies that ∂ 2 f ∂v∂u = ∂ 2 ANF ∆ ∂x i ∂x j = 0, because fcg(x i , x j ) is an addition gate in ANF ∆ . As before, it is clear that ∂ 2 g ∂v∂u ≠ 0.
Looking back at Lemma 5.12, Lemma 5.16 allows us to separate f 1 from f 2 , provided Case 2 of Lemma 5.12 does not hold. We still need to provide a hitting set for ∂ 2 f 1 ∂v∂u , where v, u are arbitrary, and satisfy ∂ 2 f 1 ∂v∂u ≠ 0.
To do so, we reduce ∂ 2 f 1 ∂v∂u to a single, non-zero product of variable-disjoint ROPs composed with affine transformations (Lemma 5.18). For simplicity, we first reduce to a product of ROPs in the standard basis in Lemma 5.17, and subsequently extend the result to affine orbits in Lemma 5.18.
Lemma 5.17. Let ∆ ≥ 2, and let f(x) = ∑_{i,j} α_{i,j} ∂²ANF_∆/(∂x_i ∂x_j) be some non-zero linear combination of second derivatives of ANF_∆(x). Then, there exist variables x_i, x_j, sets D, Z ⊆ x such that |D| ≤ 2 and |Z| = 2, and a constant β_{i,j} such that

Proof. First, assume there exist some i, j such that α_{i,j} By Observation 5.8, ∂ANF_∆/(∂x_i ∂x_j ∂D) ≠ 0 and is a product of variable-disjoint ROPs that do not depend on x_i nor on x_j.
Consider any pair {i ′ , j ′ } ≠ {i, j} and set h = = 0. This is true for any {i ′ , j ′ } ≠ {i, j}, and as ∂ 4 ANF ∆ (x) ∂x i ∂x j ∂D does not depend on x i nor on x j we get Next, assume all non-zero summands of f , α i,j As ∆ ≥ 2, q 1 has a sibling quadratic form; denote it by q 2 ≜ sib(q 1 ) = x k x ℓ + x k ′ x ℓ ′ and set D ≜ {x k }. Note that by Observation 5.8, (α i,j + α i ′ ,j ′ ) ∂x i ∂x j ∂D ≠ 0, does not depend on x i , x j , x i ′ , x j ′ , and is a product of variable-disjoint ROPs.
We are now ready to prove Lemma 5.12.
Proof of Lemma 5.12. First, assume ∆ 1 ≠ ∆ 2 . WLOG assume ∆ 1 > ∆ 2 . Let ℓ 1 , . . . , ℓ 4 ∆ 1 be linearly independent linear functions such that f 1 = ANF ∆ 1 (ℓ 1 , . . . , ℓ 4 ∆ 1 ). There must exist some i such that ℓ i is not spanned by the linear functions at the leaves of f 2 . Fix some vector v such that ℓ [1] (v) = 0 for every linear function ℓ labeling a leaf of f 2 , and such that ℓ i (v) = 1. By Lemma 3.8 and Corollary 5.10, ∂f 2 ∂v = 0 and ∂f 1 ∂v ≠ 0; thus, 0 ≠ ∂f ∂v = ∂f 1 ∂v . From Lemma 5.18 it follows that any (2∆ 1 + 5)-independent polynomial map G ′ satisfies ∂f ∂v ○ G ′ ≠ 0; and therefore, using Lemma 3.9, we get f ○ G ≠ 0, so Case 1 of the lemma holds. Next, assume ∆ 1 = ∆ 2 and denote h ≜ h 1 − h 2 (recall that h i is the homogenization of f i ). As G is uniform, Lemma 5.14 implies that it suffices to prove that either h ○ G ≠ 0 (where we extend G to n + 1 coordinates such that G is still a uniform k-independent polynomial map) or that Case 2 of the lemma holds.
Assume that h ○ G = 0. Lemmas 5.16 and 5.18 imply that ANF_∆(x) and h_2(Ã_1^{−1} x) have the same set of monomials. From Lemma 5.15 we conclude that Case 2 holds.
Proof of Lemma 5.13
Finally, we conclude the proof of Theorem 1.24 by proving Lemma 5.13 that gives a hitting set for the difference of two polynomials in ANF GL aff n (F) ∆ that, up to constant factors, have the same linear functions on the leaves.
Proof of Lemma 5.13. First, if f = αg for some α ∈ F, then f − g ∈ ANF GL aff n (F) and the lemma follows from Theorem 1.22. We therefore assume that f is not a multiple of g, and denote that by f ∝̸ g.
For any node u in the complete binary tree of depth 2∆, denote by u_f the polynomial computed at node u in ANF_∆(ℓ_1, . . . , ℓ_n), and by u_g the polynomial computed at node u in ANF_∆(α_1ℓ_1, . . . , α_nℓ_n). Fix a node u satisfying u_f(x) ∝̸ u_g(x), such that u is a deepest node with that property. In particular, each child of u_f is a multiple of the corresponding child of u_g. Note that, as f ∝̸ g, such a node u must exist; and by the premise of the lemma, u_f and u_g are not leaves. In addition, u_f and u_g must be addition gates, otherwise we may choose a child u′ of u such that u′_f(x) ∝̸ u′_g(x). Let {v_1, . . . , v_n} be a dual set to {ℓ_1, . . . , ℓ_n}. Denote u_f = f_1f_2 + f_3f_4 and u_g = g_1g_2 + g_3g_4, where the f_i s are the grandchildren of u_f and the g_i s are the grandchildren of u_g. By choice of u, there exist constants α, β ∈ F such that f_1f_2 = α ⋅ g_1g_2 and f_3f_4 = β ⋅ g_3g_4, and α ≠ β (otherwise u_f = α ⋅ u_g). WLOG, assume f_1, g_1 are ancestors of the leaf labeled ℓ_1 (or α_1ℓ_1), and f_3, g_3 are ancestors of the leaf labeled ℓ_3 (or α_3ℓ_3). By Observation 5.8, there exist polynomials F(x), G(x) such that: Observe that and Let G_1, G_{2∆+1} be a 1-independent polynomial map and a (2∆ + 1)-independent polynomial map, respectively, such that G = G_1 + G_{2∆+1}. Theorem 1.22 and Observation 5.8 imply that ∂v_1 ○ G_{2∆+1} ≠ 0 and thus (f − g) ○ G ≠ 0 (using Lemma 3.9). On the other hand, if α ⋅ F(G_{2∆+1}) = G(G_{2∆+1}), then, since α ≠ β, a similar argument, relying on Equation (15), shows that ∂v_3 ○ G_{2∆+1} ≠ 0 and thus (f − g) ○ G ≠ 0, as claimed.
Reconstruction for ANF GL aff (C)
In this section, we argue that the reconstruction algorithm of Gupta et al. [GKQ14], when given oracle access to a polynomial f ∈ ANF GL aff (C), w.h.p. successfully reconstructs an ANF GL aff (C) formula computing f. We do so by explaining why the different steps of their algorithm succeed w.h.p. on any input f ∈ ANF GL aff (C). To ease the reading we give their algorithm (AFR) and its main subroutine (LDR) in the appendix (Algorithms 2 and 3). We note that their result, with minor changes, can be adapted to any large enough field; see Remark 1.27.
Trivially, for any ANF formula f(x) of product depth ∆ on n variables, there exists an assignment v ∈ C^{(n+1)⋅4^∆} to the y variables of U_{∆,n}(x, y) such that f(x) = U_{∆,n}(x, v). Given the number of variables n, the size s = 2⋅4^∆ − 1 of the ANF we wish to sample, and a finite set of field elements S ⊆ C, we define the distribution D(n, s, S) on ANF formulas by uniformly sampling an assignment v from S^{4^∆(n+1)}. This is the distribution used in the main result of [GKQ14]: Theorem 5.20 (Theorem 1.1 of [GKQ14]). Let F be a field of characteristic 0 and S be a finite subset of F. Assume there is a black box holding an ANF formula Φ of size s sampled from D(n, s, S), and Φ computes a polynomial f ∈ F[x_1, …, x_n]. There is a randomized algorithm that, given this black box, either outputs an ANF formula Φ′ of size ≤ s computing f, or outputs Fail. The algorithm succeeds for a (1 − n²⋅s^{O(1)}/|S|) fraction of the ANF formulas from D(n, s, S). Moreover, the running time of the algorithm is at most (ns)^{O(1)}.
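As a concrete illustration of how one might sample from D(n, s, S), the following is a minimal sketch (ours, not from [GKQ14]; all helper names are hypothetical) that builds ANF_∆ recursively as a + gate over two products of sub-formulas of product depth ∆ − 1, with a uniformly random affine form over S at each of the 4^∆ leaves; the random leaf coefficients correspond exactly to an assignment v ∈ S^{4^∆(n+1)}.

    import random
    import sympy as sp

    def random_affine(xs, S):
        # A generic affine leaf: sum_i a_i*x_i + a_0, each coefficient drawn uniformly from S.
        return sum(random.choice(S) * x for x in xs) + random.choice(S)

    def sample_anf(delta, xs, S):
        # Product depth 0: a single affine leaf.  Otherwise a + gate whose two children are
        # multiplication gates, each multiplying two recursively sampled depth-(delta-1) sub-formulas.
        if delta == 0:
            return random_affine(xs, S)
        a, b, c, d = (sample_anf(delta - 1, xs, S) for _ in range(4))
        return a * b + c * d

    # Example: product depth 2 on 3 variables, coefficients from {-2, ..., 2}.
    xs = sp.symbols("x1 x2 x3")
    f = sp.expand(sample_anf(2, xs, list(range(-2, 3))))
    print(sp.Poly(f, *xs).total_degree())  # at most 2**2 = 4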
We note that, although it is not mentioned in their main theorem, the output formula is unique up to TS_n(C)-equivalence, and this fact is stated when needed in intermediate results of [GKQ14] (recall Fact 2.4). We prove Theorem 1.26 by going over the different steps of Algorithm 2. We do not repeat all the arguments and claims of [GKQ14], but rather give high-level explanations, referring to theorems, algorithms and tools of [GKQ14].
Sketch of proof of Theorem 1.26. We shall use the following notation in the proof. We wish to reconstruct f ∈ ANF GL aff n (C) that is computed by the ANF formula Φ. We define the homogenization of f, f_h, as usual: f_h(x_0, x_1, …, x_n) = x_0^{deg f} ⋅ f(x_1/x_0, …, x_n/x_0). Denote by A an (n + 1) × (n + 1) matrix of formal variables a_{i,j}. For i ≠ j ∈ {r + 1, r + 2, …, n} we denote by A^{i,j}_r the matrix A where all columns except those indexed by {0, 1, 2, …, r} ∪ {i, j} are set to zero (a generic projection matrix to the variables x_0, x_1, …, x_r, x_i, x_j). We denote by A ∈ C^{n×n} an assignment to A, and likewise A^{i,j}_r would be an assignment to the n⋅(r+3) variables of A^{i,j}_r. Looking at Algorithm 2, it is clear that except for Step AFR3, the rest of the algorithm works without any assumptions on the input ANF. Hence, the proof of correctness boils down to proving that Step AFR3 works w.h.p.; and more importantly, proving that the LDR algorithm (Algorithm 3, the subroutine invoked in Step AFR3) succeeds w.h.p. on random projections of any ANF GL aff n (C) instance. Specifically, we need to prove that for any f ∈ ANF GL aff n (C), Step AFR3 succeeds w.h.p. Thus, if the projected polynomials σ_{A^{i,j}_r}(f) that we compute in Step AFR3 satisfy FI and PSI, then the algorithm will correctly reconstruct our ANF GL aff n (C) formula.
To prove that (w.h.p.) σ A i,j r (f ) satisfies FI and PSI, Gupta et al. prove that these conditions are captured by a set of polynomial equations. Intuitively, this is not a surprising result as FI and PSI are algebraic conditions.
Observation 5.21. For every i, j ∈ {r + 1, r + 2, …, n} there exists a set of nonzero polynomials p_1, …, p_k ∈ C[A^{i,j}_r] with the property that ANF_∆(A^{i,j}_r x) satisfies FI and PSI if A^{i,j}_r is not a point on the variety V(p_1, …, p_k) ≜ {A^{i,j}_r : p_1(A^{i,j}_r) = … = p_k(A^{i,j}_r) = 0}. Furthermore, the degree of each p_i is 2^{O(∆)}, which is polynomial in the size of the formula.
This observation is not stated as is in [GKQ14] but it can be immediately deduced from the proofs of Corollaries 5.31 and 5.32 of [GKQ14].
Thus, we wish to show that a random A i,j r does not belong to the variety defined in Observation 5.21. For this we follow the same approach as Gupta et al. We prove that there exist good projections A i,j r that do not belong to the variety, and then using DeMillo-Lipton-Schwartz-Zippel lemma we conclude that such a random projection is not on the variety.
Claim 5.22. Let r ≥ 125 and n ≥ r. For any n-variate f ∈ ANF GL aff n (C) , computed by the ANF formula Φ, and any i, j ∈ {r + 1, r + 2, . . . , n}, there exists some projection A i,j r such that σ A i,j r (f ) satisfies FI and PSI at every internal node of Φ.
Proof. To prove the existence of a "good" projection for an arbitrary f ∈ ANF GL aff n (C), we use an explicit ANF g, on 128 variables, that can be described as a projection of any f ∈ ANF GL aff n (C) (more accurately, of f_h). The definition of g comes from the proof of Lemma 5.30 of [GKQ14]: The exponent e ∈ N is chosen such that the degree of g is 2^∆ for the given ∆, i.e. e = 2^{∆−3}. Gupta et al. prove that g satisfies PSI in Lemma 5.30. In Lemma 5.29, the FI condition is proven to hold for a slightly different polynomial (specifically, they prove g_i as defined in equation (16) satisfies FI), but the proof for formulaic independence of g itself works exactly the same (it relies on the variable-disjointness of g_1, …, g_4), so we get: Fact 5.23. The polynomial g defined in Equation (17) satisfies FI and PSI (and so does g(x_{π(0)}, …, x_{π(127)}), for any permutation π).
Let g(x) be as defined in equation (17) above. Our goal here is, given an unknown f ∈ ANF GL aff n (C) and indices i, j ∈ [n], to prove there exists some projection A i,j r such that σ A i,j r (f ) = g(x) (possibly up to a permutation of the variables); as we only care about projections up to permutations of the variables, we can WLOG assume i = r + 1, j = r + 2. The correctness of Algorithm 3 is proven for a number of variables ≥ 128 and g is a 128-variate polynomial, so for sake of simplicity we may assume r = 125 such that projections of f h have the same number of variables as g.
For an ANF Ψ computing g such that each leaf is labeled by a single variable from {x_1, …, x_128} (times some constant), denote by Ψ̃ a new formula constructed as follows: for every i ∈ [4^∆], if leaf number i in Ψ is labeled α_i⋅x_j, relabel it to α_i⋅x_j + ℓ_i(x), where ℓ_i is some linear form depending on the variables x_129, …, x_n. Choose the coefficients of the ℓ_i's so that all the leaves of Ψ̃ are linearly independent (thus, Ψ̃(x) ∈ ANF GL aff n (C)). As f_h and Ψ̃ are two polynomials in the GL_{n+1}(C)-orbit of ANF_∆, there exists some B ∈ GL_{n+1}(C) such that f_h(Bx) = Ψ̃(x), and by construction Ψ̃|_{x_129=0, x_130=0, …, x_n=0}(x) = Ψ(x) = g(x). By defining A^{i,j}_r to be the matrix B with columns 129, …, n set to zero, we get σ_{A^{i,j}_r}(f) = Ψ̃|_{x_129=0, x_130=0, …, x_n=0}(x) = g(x). Since A^{i,j}_r is a projection, this is what we wanted to prove.
Thus, by applying the DeMillo-Lipton-Schwartz-Zippel lemma, we can conclude that a random projection (sampled from a set T ⊆ C) of the homogenization of any f ∈ ANF GL aff n (C) satisfies FI and PSI with probability at least 1 − 2^{O(∆)}/|T|, thanks to the upper bound on the degree of the p_i's of Observation 5.21. For Step AFR3 to work, we need all n² projections to yield "good" polynomials, and by a simple application of the union bound we deduce that AFR3 succeeds with probability at least 1 − n²⋅2^{O(∆)}/|T|. This completes the proof of Theorem 1.26. Remark 5.24. The original theorem of [GKQ14] uses two sets of field elements: the set S, used to sample random ANFs from the distribution D(n, s, S), and the set T, used to sample random projections A^{i,j}_r of the input ANF. As their algorithm works for any f ∈ ANF GL aff n (C), we do not need the set S. Thus, we only use T, and we add run-time dependence on log(|T|) so we can sample the uniform distribution on T.
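To spell the calculation out, here is a rough sketch of the two probability bounds used above, written under the assumption (from Observation 5.21) that some fixed nonzero p_1 of degree 2^{O(∆)} witnesses a single projection lying off its variety; the exact polynomial hidden in the O(⋅) is not claimed here.

    % Failure probability of a single projection: it suffices that one fixed nonzero p_1,
    % of degree 2^{O(\Delta)}, does not vanish (DeMillo--Lipton--Schwartz--Zippel over T).
    \Pr\big[p_1(A^{i,j}_r) = \dots = p_k(A^{i,j}_r) = 0\big]
        \le \Pr\big[p_1(A^{i,j}_r) = 0\big]
        \le \frac{\deg(p_1)}{|T|} = \frac{2^{O(\Delta)}}{|T|}.
    % Union bound over the at most n^2 pairs (i,j) used in Step AFR3:
    \Pr\big[\exists\,(i,j):\ \sigma_{A^{i,j}_r}(f)\ \text{violates FI or PSI}\big]
        \le n^2 \cdot \frac{2^{O(\Delta)}}{|T|}.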
Dense orbits for ΣΠΣ circuits
In this section we prove our claims regarding dense orbits in ΣΠΣ. We start by proving Theorem 1.31 regarding the relation between T GL aff (F) , ΣΠ GL aff (F) and ΣΠΣ.
Proof of Theorem 1.31. The claim regarding the closures follows immediately from the fact that every matrix can be approximated by invertible matrices and from the simple observation that for any n-variate polynomial
To prove the separation we first note that the polynomial f(x) = x_1² is in ΣΠ, but not in T GL aff (F): if f(x) ∈ T GL aff (F), then there exists (A, b) ∈ GL aff n (F) such that f(Ax + b) = T_{s,d}, for some s and d (as we compose with invertible affine maps). However, f(Ax + b) = (ℓ(x))² for some non-constant linear function ℓ(x), which is obviously not a multilinear polynomial. The second separation will follow from the next simple claim.
Claim 6.1. If f ∈ ΣΠ GL aff (F) is d-homogeneous, then it is in the GL n (F) orbit of some d-homogeneous ΣΠ circuit (i.e. no affine translation is needed).
Proof. Let (A, b) ∈ GL aff n (F) and let Ψ be a ΣΠ circuit such that f(x) = Ψ(Ax + b). Observe that for every i it holds that Ψ(x)
Let σ_d(x) be the d-th elementary symmetric polynomial, i.e. the sum over all degree-d multilinear monomials in n variables. Theorem 0 of [NW97] shows that any homogeneous ΣΠΣ circuit computing σ_d must have size Ω(n/2d)^d. As any homogeneous polynomial in ΣΠ GLn(F) can be computed by a homogeneous ΣΠΣ circuit of the same complexity, we get an exponential lower bound on the sparsity of any ΣΠ GL aff (F) circuit computing σ_d, over any field. To get an upper bound on the ΣΠΣ complexity, note that, over any field of size |F| ≥ n + 1, σ_d has a ΣΠΣ circuit of size O(n²) (see [SW01]), obtained by interpolating the polynomial ∏_{i=1}^{n}(t + x_i) at n + 1 distinct values of t. We devote the rest of this section to proving Theorems 1.32, 1.34 and 1.35.
A hitting-set generator for ΣΠ GL aff (F) circuits
In this section, we prove Theorem 1.32. The main idea is that given some f ∈ ΣΠ GL aff (F) , where f (x) = g(Ax + b) = g(ℓ 1 (x), . . . , ℓ n (x)) for an s-sparse polynomial g, composing f with a 1-independent polynomial map allows us to "halve" the number of monomials appearing in the underlying ΣΠ circuit g(x). Depending on the structure of g, this can be done by either taking a derivative of f at the direction of an appropriately chosen dual vector, or by restricting f to a linear subspace in which some ℓ i (x) = 0 and other linear functions remain linearly independent. By Lemmas 3.9 and 3.10, both tasks can be simulated using a 1-independent generator.
As a reminder, we restate Theorem 1.32 before giving its proof.
Proof. By induction on t. For t = 0, 0 ≠ f (x) is either a non-zero constant, or a product of non-zero linear functions. A non-zero linear function composed with a 1-independent polynomial map G is non-zero because the n entries of G are linearly independent (Observation 3.1(2)), so f ○ G ≠ 0.
First, we note that WLOG we can assume that no variable x_i divides g; otherwise we can take some g̃ ∈ F[x] such that g(x) = x_i^k g̃(x), x_i does not divide g̃ and both g and g̃ have the same sparsity. By the base case (sparsity 1), (ℓ_i(x) + b_i)^k ○ G ≠ 0, so f ○ G ≠ 0 if and only if (g̃(Ax + b)) ○ G ≠ 0. Now that we know g(x) is not divisible by any variable, we consider two cases: Case 1: There exists a variable x_i ∈ var(g) that appears in ≤ 2^{t−1} monomials of g(x). Choose v ∈ F^n such that ℓ_i(v) = 1, and for all j ≠ i, ℓ_j(v) = 0. By Lemma 3.8, ∂f/∂v(x) = ∂g/∂x_i(Ax + b). By choice of x_i, ∂g/∂x_i is non-zero and of sparsity ≤ 2^{t−1}, so by induction: ∂f/∂v(G_t) ≠ 0. Lemma 3.9 implies that f ○ G = f ○ (G_1 + G_t) ≠ 0. Case 2: Every variable x_i ∈ var(g) appears in at least 2^{t−1} monomials of g. Assume, WLOG, that x_1 ∈ var(g), and define g̃(x) ≜ g(0, x_2, x_3, …, x_n). As x_1 does not divide g, g̃ ≠ 0 and is of sparsity ≤ 2^{t−1}. By Lemma 3.10, there exist linearly independent linear functions ℓ̃_2, …, ℓ̃_n, an assignment α ∈ F^{y_1} and some linear function L(x) such that f(x + G_1(α, L(x))) = g̃(ℓ̃_2(x), …, ℓ̃_n(x)) ≠ 0. As g̃ is non-zero and has sparsity ≤ 2^{t−1}, we get from the induction hypothesis that f(x + G_1(α, L(x))) ○ G_t ≠ 0, and therefore f(x + G_1(y_1, z_1
Corollary 1.33 follows immediately from Theorem 1.32 and Observation 1.14.
An interpolating set generator for T GL aff (F)
To construct an interpolating set generator for T GL aff n (F) ≜ T GL aff (F) ∩ F[x_1, …, x_n] we need a generator that hits the difference of two polynomials of T GL aff n (F). As this class is closed under multiplication by scalars, such a generator hits every nonzero sum of two T GL aff n (F) polynomials. The main idea can be described as follows: the tensor T_{s,d} on variables {x_{1,1}, …, x_{s,d}} has the property that for any two variables in distinct product gates, x_{i,j} and x_{i′,j′} (i ≠ i′), it holds that ∂²T_{s,d}/(∂x_{i,j} ∂x_{i′,j′}) = 0. We prove that for a sum of distinct T GL aff n (F) polynomials, there is always a pair of "dual" vectors such that if we take a derivative in their direction then one of the T GL aff n (F) polynomials of the sum vanishes. Once we prove this, all that is left is to hit the remaining polynomial (or actually, its derivative).
If f ∈ T GL aff n (F) and v ∈ F^n is arbitrary, then ∂f/∂v need not be in T GL aff n (F). We thus begin by constructing a hitting set generator for directional derivatives of T GL aff n (F) polynomials.
Lemma 6.2. Let f ∈ T GL aff n (F) , k ∈ N and v 1 , . . . , v k ∈ F n . Then, for any (k + 2)-independent polynomial map G: 1 , G k be a pair of 1-independent polynomial maps and a k-independent polynomial map, respectively, such that G = G (1) Let {ℓ 1,1 , . . . , ℓ s,d } be linearly independent linear functions such i,j . I.e., ℓ such that Q i is non-constant (if no such i exists, then g is a non-zero constant and thus g ○ G ≠ 0). Assume, WLOG, that Q i depends non-trivially on w i,1 and consider the derivative in direction u i,1 . From Lemma 3.8 We get As Q i (ℓ i,1 (x), . . . , ℓ i,d (x)) is a kth order directional derivative of the product ℓ i,1 (x)⋯ℓ i,d (x) we have that for some constants α S ∈ F. Thus, Assume, WLOG, that for T = {2, . . . , k+1}, α T ≠ 0. Observe that except for the term α T ∏ j∈{2,...,d}∖T ℓ i,j (x) , every other term is divisible by one of the functions ℓ i,j , for j ∈ T . Let V = {v ℓ i,j (v) = 0, ∀j ∈ T }. It follows that ∂g ∂u i,1 V = α T ∏ j∈{2,...,d}∖T ℓ i,j (x) V ≠ 0. Lemma 3.10 implies that there exist linear functions L 1 (x), . . . , L k (x) and an assignment β such that for L = (L 1 , . . . , L k ): As the right term is a product of linear functions, we get from Observation 3.1(2) that The claim now follows from Lemma 3.9.
It is not hard to see that the proof above implies the following hitting set generator for T GL aff (F) : If 0 ≠ f ∈ T GL aff n (F) , then for any 2-independent polynomial map G: f ○ G ≠ 0.
We are now prepared to construct a hitting set generator for T GL aff n (F) + T GL aff n (F). We recall the statement of Theorem 1.34.
Proof. Let G_6 be a uniform 6-independent polynomial map and let ℓ_{i,j,k} be linear functions such that
We first prove that if f ○ G_6 = 0 then d_1 = d_2. Assume for a contradiction that d_1 > d_2. Observe that f ○ G_6 ≠ 0, and as G_6 is uniform, It follows that f ○ G_6 ≠ 0, in contradiction. From now on we denote d = d_1 = d_2.
Next, we note that we can assume that f is homogeneous. Let ℓ̃_{i,j,k} = x_0 ⋅ ℓ_{i,j,k}(x/x_0) be the homogenization of ℓ_{i,j,k}. Observe that the homogenization of f is f̃(x_0, x) ≜ x_0^d f(x/x_0) = T_{s_1,d}(ℓ̃_{1,1,1}, …, ℓ̃_{1,s_1,d_1}) + T_{s_2,d}(ℓ̃_{2,1,1}, …, ℓ̃_{2,s_2,d_2}), which is a homogeneous polynomial in T GL n+1 (F) + T GL n+1 (F). By Lemma 5.14, it is enough to prove that f̃ ○ G′_6 ≠ 0, where G′_6 is a uniform 6-independent map into F^{n+1}. Hence, to simplify notation and WLOG, we assume from now on that f is homogeneous and that ℓ̃_{i,j,k} = ℓ_{i,j,k}. Next, we handle the case s_1 ≠ s_2.
As span {ℓ 1,i,j } i,j = span {ℓ 2,i,j } i,j , we can represent f 2 as a polynomial in {ℓ 1,i,j } i,j (recall this notion from Section 2.1). We split the proof into two cases, depending on the {ℓ 1,i,j } i,j -monomials appearing in f 2 : 1. The set of {ℓ 1,i,j } i,j -monomials appearing in f 2 is a subset of the {ℓ 1,i,j } i,j -monomials in f 1 . I.e., , and the theorem follows from Corollary 6.3.
2. There exists an {ℓ_{1,i,j}}_{i,j}-monomial ∏_{i,j} ℓ_{i,j}^{a_{i,j}} in f_2 that is not an {ℓ_{1,i,j}}_{i,j}-monomial of f_1. Let {v_{i,j}} be a dual set to {ℓ_{1,i,j}}. We proceed to show we can choose two vectors u, w ∈ {v_{1,1}, …, v_{s,d}} such that ∂²f_1/∂u∂w = 0 and ∂²f_2/∂u∂w ≠ 0. We again consider two cases: • There exists some a_{i,j} ≥ 2: Let u = w = v_{i,j}. By Lemma 3.8:
(ℓ_{1,1}, …, ℓ_{1,s,d}) = 0 and
• a_{i,j} ≤ 1 for every i, j: In this case, since f_2 is homogeneous, there must be some i ≠ i′ such that for some j and j′, a_{i,j}, a_{i′,j′} ≠ 0. Now choose u = v_{i,j} and w = v_{i′,j′}. As before, it is easy to verify that
Thus, in either case, there exist u, w such that
By Lemma 6.2, any 4-independent polynomial map hits ∂²f/∂u∂w; so by Lemma 3.9, any uniform 6-independent polynomial map hits f.
Reconstruction of T GL aff (C) circuits
In [KS19a], Kayal and Saha gave a polynomial-time, randomized reconstruction algorithm that, given black-box access to a homogeneous ΣΠΣ circuit satisfying a non-degeneracy condition (Definition 6.5), reconstructs the circuit with high probability. To prove Theorem 1.35 all we have to do is show that any homogeneous polynomial f ∈ T GL aff (F) satisfies the non-degeneracy condition of Definition 6.5.
To explain the condition we first need to define the partial derivative space of a polynomial: Definition 6.4. For an n-variate polynomial f(x) ∈ F[x], of degree d, and for any k ∈ [d], the partial derivative space of order k of f (PD_k space for short), denoted ∂^k f, is the F-span of all partial derivatives of f of order k: ∂^k f ≜ span_F{∂^k f / (∂x_{i_1}⋯∂x_{i_k}) : i_1, …, i_k ∈ [n]}. Definition 6.5 (Non-degeneracy condition [KS19a]). Let f(x) = f_1(x) + … + f_s(x), where f_i = ∏_{j=1}^{d} ℓ_{i,j} for some linear forms ℓ_{i,j}, be an n-variate d-homogeneous polynomial, which can be computed by a depth-3 circuit of top fan-in s. Fix k ≜ ⌈log(s)/log(n/(e⋅d))⌉, where e is the base of the natural logarithm. We say f(x) is non-degenerate if dim(∂^k f) = s⋅(d choose k), and for every i ∈ [s] there exist 2k + 1 linear forms ℓ_{i,r_1}, …, ℓ_{i,r_{2k+1}} such that: Theorem 6.6 (Theorem 1 of [KS19a]). Let n, d, s ∈ N, n ≥ (3d)² and s ≤ (n/3d)^{d/3}. Let F be a field of characteristic zero or greater than ds². There is a randomized, poly(n, d, s) = poly(n, s) time algorithm which takes as input black-box access to an n-variate d-homogeneous polynomial f that can be computed by a non-degenerate (Definition 6.5) ΣΠΣ circuit of top fan-in s, and outputs a non-degenerate, n-variate, d-homogeneous ΣΠΣ circuit of top fan-in s computing f.
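As a quick, self-contained illustration of Definition 6.4 and of the first non-degeneracy requirement, the sketch below (ours, not from [KS19a]) computes dim(∂^k f) by collecting the coefficient vectors of all order-k partial derivatives and taking a matrix rank; for T_{s,d} with independent variables the expected value is s⋅(d choose k).

    from itertools import combinations_with_replacement
    import sympy as sp

    def pd_space_dim(f, xs, k):
        # Dimension of the F-span of all order-k partial derivatives of f.
        derivs = [sp.diff(f, *c) for c in combinations_with_replacement(xs, k)]
        polys = [sp.Poly(d, *xs) for d in derivs]
        monos = sorted({m for p in polys for m, c in p.terms() if c != 0})
        rows = []
        for p in polys:
            coeffs = dict(p.terms())
            rows.append([coeffs.get(m, 0) for m in monos])
        return sp.Matrix(rows).rank()

    # T_{2,3}: two products of three independent linear forms (here, distinct variables).
    xs = sp.symbols("x1:7")
    T = xs[0]*xs[1]*xs[2] + xs[3]*xs[4]*xs[5]
    print(pd_space_dim(T, xs, 1))  # expected s*(d choose k) = 2*3 = 6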
For our proof we will need the following simple fact: Fact 6.7. Composing a polynomial with an invertible linear transformation does not change the dimension of its PD_k space.
Proof of Theorem 1.35. As, given a non-homogeneous T GL aff n (F) circuit, we can easily get query access to its homogenization, f_h = x_0^d f(x_1/x_0, …, x_n/x_0), which is a homogeneous polynomial in T GL n+1 (F), we can assume WLOG that the black-box polynomial is homogeneous. It should also be clear that a polynomial satisfies the condition in Definition 6.5 if and only if its homogenization does.
It is clear that dim(∂^k T_{s,d}) = s⋅(d choose k), and since composing with an invertible linear transformation does not affect the dimension of the PD_k space (Fact 6.7), it follows that dim(∂^k f) = s⋅(d choose k) for any d-homogeneous, s-sparse f ∈ T GLn(F). It is also clear that T_{s,d} satisfies the second condition and that this condition too is invariant under invertible linear transformations.
We still need to argue that the output of the algorithm of Theorem 6.6 is a T GL(F) circuit. Theorem 6.6 guarantees that the output circuit Φ = ∑_{i=1}^{s} ∏_{j=1}^{d} ℓ_{i,j} is a non-degenerate, d-homogeneous ΣΠΣ circuit computing f. We claim the linear forms ℓ_{i,j} on the leaves are linearly independent, and conclude that it is indeed a T GL(F) circuit. Indeed, as f(x) is GL_n(F)-equivalent to T_{s,d}(x) and ∂^{d−1} T_{s,d}(x) = span_F{x_{1,1}, …, x_{s,d}}, it follows that ∂^{d−1} Φ has dimension s⋅d. The space ∂^{d−1} Φ is contained in span_F{ℓ_{1,1}, …, ℓ_{s,d}}, so by a dimension argument the set {ℓ_{1,1}, …, ℓ_{s,d}} must be linearly independent.
Finally, we note that by Lemma 2.7 the representation that was found is unique up to TPS s,d (F)-equivalence.
This concludes the proof of Theorem 1.35.
In addition to the notation of [GKQ14] we also use the following notation: given an (n + 1) × (n + 1) matrix A and a polynomial f(x_1, …, x_n) we denote σ_A(f) ≜ f_h(Ax), where f_h is the homogenization of f. Finally, we define the rank of a homogeneous quadratic polynomial q to be the minimal k such that for some linear forms {ℓ_i}_{i=1}^{k}, q = ℓ_1² + … + ℓ_t² − ℓ_{t+1}² − … − ℓ_k².
A.1 Definition of Formulaic Independence and Pairwise Singular Independence
In [GKQ14] Gupta et al. characterize "bad" inputs to their average-case, randomized algorithm in terms of points in a specific variety. As we only stated their algorithm over the complex numbers, we define varieties only over C. However, all definitions can be easily extended to other fields as well.
For any set of n-variate polynomials F ⊆ C[x], we define the zero set of F as V(F) ≜ {v ∈ C^n : f(v) = 0 for every f ∈ F}. Any set V ⊆ C^n that can be defined as a zero set V = V(F) for some set of polynomials F ⊆ C[x] is called a variety, or an algebraic set.
The notions "Formulaic Independence" and "Pairwise Singular Independence" are defined in terms of dimensions of projective varieties, as the polynomials in question are always homogeneous.
Let r ∈ N. The r-dimensional projective space P r is the space C r+1 ∖ {0} with the equivalence relation ∼, where v, u ∈ C r+1 ∖ {0} satisfy v ∼ u if and only if there exists some λ ∈ C such that λv = u.
If V = V(f_1, …, f_k) is a variety where every f_i is an (r + 1)-variate homogeneous polynomial, and if v ∈ C^{r+1} satisfies f_1(v) = … = f_k(v) = 0, then for every λ ∈ C it also holds that f_1(λ⋅v) = … = f_k(λ⋅v) = 0. Thus, the set V ∖ {0} can be viewed as a subset of P^r. In this case we call V a projective variety, and define its dimension as follows: Definition A.2 (Proposition 11.4 in [Har13]). The dimension of a projective variety V ⊆ P^r, denoted dim(V), is the largest integer k such that any linear space of dimension ≥ r − k intersects V nontrivially.
The definition of formulaic independence involves the algebraic set of singularities of a polynomial f, and the Jacobian matrix of a tuple of polynomials: For a polynomial f ∈ C[x], the set of singularities of f is the set of points v ∈ C^n such that f(v) = ∂f/∂x_1(v) = ∂f/∂x_2(v) = … = ∂f/∂x_n(v) = 0. In other words, Sing(f) = V(f, ∂f/∂x_1, …, ∂f/∂x_n).
Given a tuple of polynomials f = (f_1, …, f_m) ∈ C[x]^m, the Jacobian of f is the matrix of partial derivatives J(f, x) whose (i, j) entry is ∂f_i/∂x_j. Definition A.3 (Definition from Section 3.1 of [GKQ14]). Let M(x) ∈ C[x]^{s×r} be a matrix whose entries are polynomials in x, and let t ∈ N. We denote by Minors(M(x), t) ⊆ C[x] the set of determinants of all t × t submatrices of M(x).
Definition A.4 (Definition 5.2 of [GKQ14]). Let g = (g_1(x), …, g_k(x)) ∈ C[x]^k be a k-tuple of homogeneous polynomials. The algebraic set V_J(g_1, …, g_k) (V_J(g) for short) is defined to be the set of common zeroes of the polynomials in Minors(J(g, x), k). In other words, V_J(g) consists of all points v ∈ P^r for which the rank of the Jacobian matrix J(g, v) is less than k.
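For intuition, a small sympy sketch (ours; names are illustrative only) of Definitions A.3 and A.4: it builds the Jacobian of a tuple g and enumerates the k × k minors whose common zeros cut out V_J(g), so a point lies in V_J(g) exactly when all of these minors vanish at it.

    from itertools import combinations
    import sympy as sp

    def jacobian(g, xs):
        # Jacobian matrix J(g, x): row i holds the partial derivatives of g_i.
        return sp.Matrix([[sp.diff(gi, x) for x in xs] for gi in g])

    def minors(M, t):
        # Determinants of all t x t submatrices of M, i.e. Minors(M(x), t).
        return [M[list(r), list(c)].det()
                for r in combinations(range(M.rows), t)
                for c in combinations(range(M.cols), t)]

    xs = sp.symbols("x0:4")
    g = (xs[0]*xs[1], xs[2]*xs[3])        # a 2-tuple of homogeneous quadratics
    J = jacobian(g, xs)
    eqs = minors(J, len(g))               # V_J(g) = common zeros of these 2 x 2 minors
    print(J)
    print(eqs)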
Definition A.5 (Formulaic Independence, Definition 5.3 of [GKQ14]). Let x = (x_0, x_1, …, x_r) and let f = (f_1, f_2, f_3, f_4) be a 4-tuple of homogeneous polynomials in C[x]. We say that f_1, f_2, f_3, f_4 are formulaically independent if dim(V(f)) = r − 4 and dim(Sing(f) ∩ V_J(f)) < r − 4. We say that a homogeneous ANF formula Φ satisfies formulaic independence at node v if v is a + gate, and the four polynomials computed at the grandchildren of v are formulaically independent.
We say that a homogeneous ANF formula Φ satisfies pairwise singular independence at a node v if the node v is a + gate, and (f_{v_1}, f_{v_2}, f_{v_3}, f_{v_4}) are pairwise singularly independent, where v_1, v_2, v_3, v_4 are the grandchildren of v and f_{v_i} is the 4-tuple of polynomials computed at the grandchildren of the node v_i.
|
2021-02-11T02:15:43.078Z
|
2021-02-10T00:00:00.000
|
{
"year": 2021,
"sha1": "11886246b55f1046efcb1ee6323b5f70e23e86a9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "11886246b55f1046efcb1ee6323b5f70e23e86a9",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
258550870
|
pes2o/s2orc
|
v3-fos-license
|
Subjective Improvement in Hearing After Treatment of a Manic Episode in an Elderly Patient
This is a report of an 81-year-old woman presenting with hearing improvement after a reduction of antidepressant medication to manage a manic episode. The patient reported a subjective improvement in her hearing ability, not confirmed by audiometric testing. It was reported to us that she stopped using her hearing aids subsequently. This case highlights the potential for medications to impact hearing and the importance of monitoring for side effects in elderly patients with mood disorders.
Introduction
Hearing loss is a common issue in the elderly population and is often attributed to age-related changes and exposure to noise. However, the impact of medications on hearing has been increasingly recognized as a contributing factor. Hearing impairment is one of the most common disabilities in older adults. According to a recent report by the WHO, more than 1.5 billion people live with hearing loss [1]. Hearing loss is mainly categorized as conductive, sensorineural, and mixed. The most common hearing loss in the older adult population is sensorineural hearing loss (SNHL). In the International Classification of Diseases, Eleventh Revision (ICD-11), this is described as bilateral high-frequency hearing loss associated with difficulty in speech discrimination and central auditory processing of information [2].
In this case report, we present an 81-year-old woman with a history of manic episodes, who experienced a subjective improvement in her hearing ability after the reduction of her antidepressant medication. This case sheds light on the importance of monitoring for potential side effects in elderly patients receiving psychotropic medication and highlights the need for further investigation of the relationship between mood disorders, psychotropic medications, and hearing function.
Case Presentation
An 81-year-old woman with a 26-year history of bipolar disorder presented to the community mental health team with symptoms of mania, including pressure of speech, elated mood, and lack of sleep. The manic episode had lasted for two weeks at the time of the first assessment and was found to have been precipitated by antidepressant medication. We did not find any evidence of delusions at the assessment. The past medical history included bipolar affective disorder, hypothyroidism, and osteoarthritis of the hips. The patient also had a 10-year history of bilateral SNHL. The medications were Lamotrigine, Mirtazapine, Olanzapine, Levothyroxine, Omeprazole, Zopiclone, and Fluoxetine. After the first assessment, Fluoxetine was stopped. Two weeks after the initial assessment, the Mirtazapine dose was gradually reduced from 45 mg to 15 mg over two weeks. After the reduction of Mirtazapine, the patient experienced a significant improvement in her hearing and stopped using her hearing aids, as observed during daily visits. At the follow-up assessments, she was able to communicate with the staff member without using her hearing aids. An audiology test was completed two weeks after the subjective improvement in her hearing, and the result showed that her hearing had remained the same over the last year.
Discussion
In one cohort study, all classes of antidepressants were found to increase the risk of SNHL, and exposure to a greater number of different antidepressant classes was associated with a higher risk of SNHL [3]. In another review, the authors suggest a framework in which decreased auditory input to the brain may cause the brain to compensate and facilitate speech discrimination using cross-modal plasticity [4].
Auditory perception is facilitated by the neural encoding of sound together with the transmission and processing of that information in the cortical and sub-cortical regions [5]. In this review, differences in speech discrimination in older adults with similar pure-tone audiometry results were explained by temporal deficits in the brainstem and midbrain [5].
We propose several hypotheses that may explain our patient's improvement in hearing. These fall into two groups: either there was a physiological impairment that was not detected by the audiometry, or there was no physiological impairment.
There was a physiological impairment not detected by audiometry
Audiometry involves pure tone tests but can also include other tests such as speech discrimination. We did not have further information on the tests that were performed but instead received a summary of the outcome from the patient on which we have based our considerations. If the audiometry involved speech discrimination then the possibilities below should be considered.
Firstly, the effects of antidepressant treatment are likely mediated in part by changes in neural plasticity [6].
The second hypothesis is that depression and mania influence hearing in this case via concentration. This in turn may improve speech discrimination even with the presence of partial hearing loss.
Thirdly, there could be an iatrogenic effect. In our patient, altered serotonergic activity in the brain may change the activity in the auditory cortex or associated areas. In one study, the authors found that increased plasma serotonin levels were found to be a biomarker of sudden sensorineural hearing loss (SSNHL) [7]. Therefore in this case there might be a link between serotonin levels and auditory processing.
Although the audiometry test result did not show any difference from previous years, the patient reported a significant improvement in her hearing level. These perceived changes resulted in her stopping the use of her hearing aids, which she had used for a long period of time.
There was no physiological change
There remains the possibility that the perception of improved hearing occurred without a physiological change. In this case, the perception may represent a reverse somatoform disorder whereby there is a perception of recovered function. This may also be a delusion occurring in the context of mania where there is an overestimation of abilities. The study period and available information were not sufficient to exclude these possibilities.
As hearing impairment is a significant disability in the older adult population, this area needs further investigation. Although there are gaps in the information in our case, there are sufficient questions from the research literature to offer a framework for further considerations in this area.
Conclusions
Hearing loss is very common in the older adult population and can impact quality of life. Hearing impairment is a significant problem globally, and psychiatrists need to be aware of this when treating psychiatric problems. One of the drawbacks of this study was that we did not use the Naranjo algorithm. Psychosomatic causes of hearing loss in the older adult population are an area that would benefit from further research, as is the role, if any, of serotonin in hearing.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
|
2023-05-08T15:02:30.273Z
|
2023-05-01T00:00:00.000
|
{
"year": 2023,
"sha1": "af3a0d642d805491482a4e713571ee2eef706ddb",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7759/cureus.38624",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c5c86776eb11f9dc26397bcc9668569a1de17a78",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
188694006
|
pes2o/s2orc
|
v3-fos-license
|
Infographic Design as Visualization of Geography Learning Media
Infographics are a visualization technique that integrates text and images into a single illustration. Infographic designs are used to convey complex messages or material as engaging visual data. Because geography is a subject that introduces natural phenomena, choosing the right media to visualize the material well is considered very important. This literature study was therefore conducted to introduce infographic design to the educational world as an engaging visualization of learning media, especially for schools whose classes contain diverse learners. Learning media based on infographic design will make it easier for teachers to teach and will motivate students to learn.
Introduction
Learning takes place through the teaching and learning process. Gagne and Briggs define learning as a system that aims to assist the student learning process and that contains a series of events designed and arranged in such a way as to influence and support the occurrence of internal learning processes [1]. Learning is also a conscious effort by teachers to make students learn, that is, to bring about behavioural changes in the students themselves. The change comes from new capabilities that persist for a relatively long time and arise from the interaction of learners with teachers and learning resources in a learning environment [2]. In practice, however, the interaction between educators and learners still looks one-way. The learning process that takes place in the classroom is dominated by lectures from teachers, while students just come, sit and listen without the teacher understanding the character of each student. This situation is bad for students: they can only hear and record the material given, without fully understanding what the teacher is saying. It points both to the diversity of learners and to teachers choosing learning media that are poor or inappropriate. Visualization of instructional media based on infographic design is expected to attract attention, motivate learners, and help address the problem of student diversity. Such media are essentially instructional media in which a collection of information is designed and combined with graphic images into a poster or a book filled with illustrations. For the delivery of information or subject matter, infographic design can support the visualization of complex data in an easy, interesting and simple way. Based on the description above, this literature study was conducted with the aim of introducing and designing infographics as a visualization of geography learning media.
Methods
Data were collected through a literature study in order to understand infographics, the history of infographic development, infographic functions in various fields, the important role of infographics in geography learning, and the steps to design visual infographics as a medium of geography learning. Data were obtained from books, scientific journals and articles, and other sources from the internet.
The understanding of info graphics
The word "infographic" consists of the words "info" and "graphic" and is shortened from the expression "information graphic." Generally, infographics are visual presentations of data, information, and knowledge; by using them, information, numbers, points, laws and knowledge can be presented visually in the form of charts, symbols, images and maps [3]. An infographic (short for information graphic) is a type of picture that blends data with design, helping individuals and organizations concisely communicate messages to their audience. More formally, an infographic is defined as a visualization of data or ideas that tries to convey complex information to an audience in a manner that can be quickly consumed and easily understood. The process of developing and publishing infographics is called data visualization, visualization media, information design, or information architecture [4].
Infographics are an effective means for telling stories about data, as they capture a reader's attention by structuring these data stories with principles of graphic design. Information graphics, or infographics, combine elements of data visualization with design and have become an increasingly popular means for disseminating data. While several studies have suggested that aesthetics in visualization and infographics relate to desirable outcomes such as engagement and memorability, it remains unknown how quickly aesthetic impressions are formed, and what it is that makes an infographic appealing [5].
Moreover, infographics are a specialized form of visualization that combines words and pictures to communicate a particular message, or at least it ought to. That message is crafted to achieve a particular outcome, or at least it ought to be. Infographics may be used to achieve several goals. For instance, they may be used to inform, to persuade, to teach, or to move people to action. To qualify as an infographic, however, by definition they must inform [6].
Thus, the term infographic learning can be interpreted as a form of presentation of data or learning materials through visual concepts, illustrations or images equipped with information or text, resulting in clear and interesting graphics.
History of infographic development
Historically, the infographic is an old design concept that has been in use for a very long time. Early humans, for example, made maps and other visual representations of their lives that are still visible today. As early as 30,000 BC, humans were familiar with infographics and could make them manually with traditional tools and materials.
Today, infographics can be used by a wide variety of individuals and organizations and are published in various media, from traditional media such as newspapers and magazines to digital channels, where social media has helped fuel an explosion in their popularity. To the casual observer, it would appear that infographics are a recent phenomenon that has grown in conjunction with the growth of the Internet. The reality is that we have been using icons, graphics, and pictures throughout history to tell stories, share information, and build knowledge [4]. Some illustrations of infographic relics from earlier figures can be seen in Figures 1 and 2. In 1786, Scottish engineer William Playfair pioneered data visualization. His book "The Commercial and Political Breviary" was the first to explain numeric data through the use of line graphs, pie charts and bar graphs.
In 1857, English nurse Florence Nightingale combined stacked bar and pie charts (the Coxcomb chart) to illustrate the monthly number of casualties and the causes of death during the Crimean War. She used these infographics to help convince Queen Victoria to improve conditions in military hospitals.
Between 1850 and 1870, Charles Joseph Minard, a civil engineer from France, began combining maps with flow charts in order to explain geographical statistics. One of his most famous data visualizations illustrated the causes of Napoleon's failed attempt to invade Russia. He captured a complex data set for the period (map location, direction travelled, decline in troops and temperature) in a single infographic.
Infographic functions in various fields
Presenting data or material in infographic form makes people more interested because information or messages can be conveyed visually almost instantly. There is a wide range of modern uses for infographics, from subway system maps to slides in presentations given at conferences [7]. In addition, infographics can be applied in the delivery of course materials, annual reports, research content, blogs, and newsletters. As readers we want numbers and statistics to support the information we read, but consumers also want the numbers and statistics to be visually appealing and not always text-based. Infographics make this easy for the reader: they allow audiences to notice a considerable amount of data and information, which in written form might fill a long article, in the form of an image, and to keep it in mind for a long time.
Infographics can be beneficial in different fields. Using infographics as a tool for tracking, for resumes or personal backgrounds, reports or worksheets, news and information, advertisements, introductions and presentations, and learning and teaching are only some of their uses. News or research results published through print and electronic mass media require infographics not only for news exposure but also as a visual attraction. The use of color, composition and other visual elements takes into account the uniqueness of each medium. Through the infographics they produce, print and electronic mass media become more easily recognizable.
Infographics are a creative way to communicate information clearly and quickly with graphics. Beyond graphics, some interesting infographics also use diagrams, symbols, and illustrations. Infographics also serve to give the reader pause: after a long series of words, the reader is relieved for a moment when the content can be represented by an image. Infographics are very popular because they help people deliver a message. Illustrated with creative and compelling images, infographics prove to be more attention-grabbing and more easily understood by readers. In a very busy era, fast and effective communication is of course very useful. From a business perspective, one definition of the infographic resonates above all [4].
The British graphic designer, author, and information designer Nigel Holmes simply referred to them as "explanatory graphics". It is important to understand that infographics are not used solely for communication: they are a good medium for delivering marketing messages or insights to consumers and prospects, but they are equally effective when used to improve internal communication. In this case the visual representation (illustration or photography) has the power to attract the attention of the target audience directly and plays a big role in the visual persuasion of an advertisement.
Infographics are basically designed to communicate a subjective narrative; the information displayed is focused on particular themes for the audience. Data visualization, on the other hand, takes a more objective approach: the graphics created must have strict precision. The purpose of data visualization is to arrange a large quantity and variety of information and present it in one place; its focus is the accuracy of the data in its source context, which only computer programs using algorithms can produce, since the density and complexity make it difficult for humans to do.
Important and objectives role of infographics in geography learning
Studies show that people pay more attention to images than to text. Since 90% of transmitted information is visual, only about 20% of a passage can be recalled after reading, and our brains tend to analyze and store information visually, it is better to turn geographic objects and results into images. Explaining subjects can sometimes be uninteresting or even boring, but presenting them as well-designed infographics makes understanding the information appealing and simple and attracts the audience's attention. Teachers can also design an infographic of their information in order to explain it easily in meetings and conferences [3].
Visualization infographic design as learning media
The educational technology and media used in this period are different from those of the past. Today's students are born into an auditory, visual and kinesthetic world, so teaching them with past methods and instructional media will be ineffective or will not have a considerable outcome. Teachers need to have enough information about educational technology and new educational media and to approach them with a positive attitude. In recent years, great attention has been paid to the use of new media in education. New media help to improve the education system considerably by providing good opportunities for recognizing the individual talents and interests of learners. Studies show that the use of modern technologies in classrooms gives learners the opportunity to learn faster, with better performance and with more satisfaction from their class attendance. Colz et al. report in their studies that using media for teaching audiences is very beneficial. Moreover, Hyden and Fife find it valuable and efficient to use media in teaching students [3]. Much research has shown that the different senses do not play equal roles in human learning, and that visual learning media have an important and facilitating role in memorizing and recalling received information, such as visual and verbal information.
Steps to design visual infographics as a media of geography learning
Making a graphic work that everyone can enjoy requires not only strong motivation but also a correct design process. The following are the steps in making infographics. In general, there are several steps in designing an infographic: 1) selecting the topic, 2) surveying and researching, 3) gathering data, 4) analyzing the data, 5) finding the narrative, 6) creating a visual/wireframe sketch of the data, 7) editing the format and composing the data display to be loaded into the visualization, 8) designing, 9) validating the data in the visualization through testing, and 10) completing and improving the design based on the trial [8].
Conclusion
From the results of this study, it can be concluded that well-designed infographics serve as a visualization of learning media. Based on the literature review, visualizing geography learning media can make it easier for students to digest lessons well. With interesting and complex illustrations, infographic design is able to attract the attention of readers, especially students and teachers. Visual infographics are used both by teachers in teaching and to facilitate students' learning.
|
2019-06-13T13:16:06.567Z
|
2018-04-01T00:00:00.000
|
{
"year": 2018,
"sha1": "ffad13e750167974f7079691c6d8b796b148d2c9",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/145/1/012011",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "af2ff8c8f9c971e570854d6cc6697ecb32458ff0",
"s2fieldsofstudy": [
"Geography",
"Education"
],
"extfieldsofstudy": [
"Physics"
]
}
|
264705968
|
pes2o/s2orc
|
v3-fos-license
|
Massive right-sided hemorrhagic pleural effusion due to pancreatitis; a case report
Background Hemorrhagic pleural effusion, especially in the right hemithorax, rarely occurs as the sole presentation of pancreatitis. Case Presentation This article reports massive right-sided hemorrhagic pleural effusion as the sole manifestation of pancreatitis in a 16-year-old Iranian boy. The patient was referred to Nemazee Hospital, the main hospital of southern Iran, with right-sided shoulder and chest pain accompanied by dyspnea. His chest x-ray showed a massive right-sided pleural effusion. The pleural fluid amylase was markedly elevated (8840 U/L), higher than that in the serum (3318 U/L). Abdominal CT scan showed a cystic structure measuring about 5·2 cm in the head of the pancreas, highly suggestive of a pancreatic pseudocyst. The pleural effusion resolved, though not completely, after 3 weeks of chest tube drainage. After this period of conservative therapy, another CT scan showed that the pseudocyst was still present in the head of the pancreas, so external drainage was performed with insertion of a mushroom catheter, and the patient was discharged after 40 days of hospitalization. The cause of the pancreatitis could not be identified. Conclusion Pancreatitis should be taken into consideration when hemorrhagic pleural effusion, especially in the right hemithorax, occurs.
Background
Hemorrhagic pleural effusion, especially in the right hemithorax, rarely occurs as the sole manifestation of pancreatitis [1][2][3][4][5][6]. Most cases of hemorrhagic pleural effusion secondary to pancreatitis occur between the ages of 20 and 55, and the patients are typically alcohol drinkers [2,5]. This article reports massive right-sided hemorrhagic pleural effusion as the sole manifestation of pancreatitis in a 16-year-old Iranian boy.
Case presentation
This 16-year-old boy, from a village in Fars Province, Southern Iran, developed left paraumbilical (sometimes epigastric) abdominal pain of moderate intensity about five months prior to admission. The pain subsided after three months. However, 20 days later, the patient developed right-sided shoulder and chest pain accompanied by dyspnea. The patient's severe shoulder pain was mismanaged by a physician who considered it musculoskeletal pain. The patient was then referred to Nemazee Hospital, the main hospital of southern Iran. His chest X-ray showed a massive right-sided pleural effusion, and thoracentesis revealed a dark red bloody effusion with: amylase 8840 U/L (serum amylase: 3318 U/L), protein 2.5 mg/dL (serum albumin: 4.1 mg/dL), lactate dehydrogenase (LDH) 227 U/L (serum LDH: 335 U/L), and a cell count of 590,000 cells/mm3 with 2,500 WBC/mm3, comprising 73% segmented neutrophils, 2% lymphocytes and 6% mesothelial cells. Acid-fast staining of pleural fluid was negative three times. The results of pleural biopsy and pleural fluid culture for tuberculosis were negative as well.
A chest tube was inserted for three weeks; during this period the clinical symptoms such as dyspnea and chest pain improved, but not completely. The daily drain output was about 1500 cc on the first day of chest tube insertion but decreased gradually. After this period of conservative therapy, another abdominal CT scan showed a mass measuring 3·4 cm in the head of the pancreas, suggestive of a pseudocyst. No evidence of pancreatic duct dilatation or common bile duct dilatation was seen. Therefore, external pseudocyst drainage was performed with insertion of a mushroom catheter. The mushroom catheter was removed after one week, when no further drainage was seen. Finally, the patient was discharged after 40 days of hospitalization. The cause of the pancreatitis could not be identified.
Discussion
Intrathoracic neoplasms, trauma, bleeding diathesis or tuberculosis may cause hemorrhagic pleural effusion as well [1]. Right-sided hemorrhagic pleural effusion as the sole manifestation of pancreatitis is rare [1][2][3][4]6,7] especially when it occurs in the non-alcoholic patient under the age of 20 [2,3]. The postulated pathogenic mechanisms for hemorrhagic effusions include transdiaphragmatic transfer of fluid via lymphatics, diaphragmatic perforation of pseudocyst and mediastinal extension [1]. Several studies demonstrated that a fistula connecting a pancreatic pseudocyst with pleural cavity was the mechanism of pleural effusion [4,7].
Although the cause of pancreatitis could not be identified in our study, other studies have shown that pleural effusion with a very high pancreatic enzymes activity most frequently occurs in patients with alcoholic pancreatitis [5,[8][9][10]. Pleural effusions due to pancreatic diseases are mostly reactive with slightly elevated amylase levels. Very high levels of amylase in the pleural fluid are rare and can only be explained by the rupture of a pancreatic pseudocyst with perforation into the pleural cavity such as by drainage of pancreatic fluid into the pleural cavity [11]. Regarding elevated pleural fluid amylase, perforation of pseudocyst into the pleural cavity seems to be the mechanism of hemorrhagic pleural effusion in this case. The other causes of hemorrhagic effusions with an increased amylase include traumatic esophageal rupture and intrathoracic and other neoplasms [1].
In most cases, the pleural effusion occurs concomitantly with the signs and symptoms of pancreatitis, but it may occur even after the acute abdominal symptoms have subsided. Considerable diagnostic problems may be encountered in cases in which the clinical picture is dominated by the pleuro-pulmonary symptoms, and the pancreatic condition remains completely or partly in the background [12]. An early and rapid diagnosis can be made by the examination of the pleural fluid for elevated amylase [2]. Imaging methods such as computed tomography, ultrasonography and endoscopic retrograde cholangiopancreatography (ERCP) are also useful [5].
Treatment with drainage by a chest tube, with concomitant conservative treatment of the pancreatitis, is usually effective in massive pancreatic pleural effusions. If drainage by a chest tube fails, percutaneous catheter drainage of the abdominal pseudocyst can be considered for treatment [11].
Conclusion
Pancreatitis should be taken into consideration when hemorrhagic pleural effusion occurs, especially when it occurs together with an elevated amylase level in the pleural fluid. In such a condition, imaging methods such as CT scan, ultrasonography and ERCP are very helpful. Treatment with drainage by a chest tube, with concomitant conservative treatment of the pancreatitis, is usually effective in massive pancreatic pleural effusions.
|
2014-10-01T00:00:00.000Z
|
2004-02-17T00:00:00.000
|
{
"year": 2004,
"sha1": "642b6f40e418a145f4a558f99ff4e76fa8e7858d",
"oa_license": "CCBY",
"oa_url": "https://bmcpulmmed.biomedcentral.com/track/pdf/10.1186/1471-2466-4-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "642b6f40e418a145f4a558f99ff4e76fa8e7858d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
270603415
|
pes2o/s2orc
|
v3-fos-license
|
Sociodemographic profile, index of functional autonomy and physical activity level of elderly women
To analyze the sociodemographic profile, functional autonomy index and level of physical activity of elderly women participating in an exercise program. 372 elderly women participating in the MASTERFITTS program completed questionnaires regarding medical history and the adapted Baecke questionnaire to assess the level of physical activity in the elderly. Tests from the GDLAM protocol were performed to evaluate functional autonomy. 51.61% were physically active (X̅ = 2.68±0.49; ∆% = 1.09), and the majority achieved a good classification in the tests and in the overall functional autonomy index. A significant difference was found between the active and sedentary groups in the W10m test (p = 0.041; ∆% = -0.89), RSP (p = 0.024; ∆% = -1.90), and RVDP (p = 0.032; ∆% = -1.20). An increase in the level of physical activity also leads to a decrease in the completion times of the functional autonomy assessment tests, contributing to improved health.
Introduction
With the aging of the population and the increase in global life expectancy, the elderly segment has become the fastest-growing age subgroup in the world. However, this fact raises concerns about the increasing prevalence of frailty and functional dependence, as well as rising healthcare costs. Additionally, there are several age-related changes in the body that contribute to this phase of life (Mendonça et al., 2020).
Aging is a natural process associated with numerous changes in different biological systems, such as reduced muscle strength, decreased lean mass and bone mineral density, and concomitant increases in body fat, which collectively can negatively affect the health and physical fitness functioning of older individuals, regardless of the presence or absence of diseases (Graça et al., 2022;Palencia-Flórez et al., 2021;Sousa et al., 2021).
Neuromuscular, cardiovascular, and metabolic impairments are some examples of changes that can lead to a scenario conducive to the development of diseases, lower quality of life, and increased risk factors for mortality (Graça et al., 2022;Palencia-Flórez et al., 2021;Rodríguez y Barón, 2019).
In this context, physical activity programs have been prioritized and implemented in the Unified Health System, with the aim of expanding and improving primary healthcare, contributing to the promotion of health and quality of life, as well as increasing levels of physical activity and socialization, and being associated with balance control benefits (Vieira et al., 2022).
However, despite the existence of programs that contribute to improving the quality of life of this population, some elderly individuals still face difficulties in performing activities, as other factors, in addition to functional and psychological difficulties, take priority. Factors such as stress and symptoms of chronic diseases are directly associated with these difficulties. It is worth noting that some of these chronic diseases are more prevalent in elderly women and contribute to higher rates of disability (Sousa et al., 2021; Souza et al., 2022; Palencia-Flórez et al., 2021; Rodríguez y Barón, 2019).
Given this, it is considered important to define the health profile of these elderly individuals, including sociodemographic characteristics, activity level, and functional capacity. Therefore, the objective of this research was to analyze the sociodemographic profile, functional autonomy and level of physical activity of elderly women participating in an exercise program, while also comparing functional autonomy between active and sedentary individuals.
Methods
This is a cross-sectional study with a descriptive and quantitative approach (Thomas, Nelson & Silverman, 2012).
The data used in this study were obtained from the diagnostic evaluation of the MASTERFITTS project, which aims to provide a supervised physical exercise program for health and well-being.
The participants were elderly individuals recruited during visits to the Basic Health Units in the following neighborhoods of Aracaju, Sergipe, Brazil: Aeroporto, Atalaia, Coroa do Meio, Farolândia, Inácio Barbosa, and Jardins.
The participants were asked to come to the Laboratory of Human Motor Biosciences with the following documents: 1) Medical certificate authorizing them to engage in physical exercise; 2) Referral from their respective Basic Health Unit; 3) Identification and Individual Taxpayer Registry (CPF) documents.
The selection criteria were: being 60 years old or older, committing to participate in a physical exercise program by signing an informed consent form, having the ability to walk, and not having motor limitations or comorbidities that would prevent participation in the exercises.
After applying these criteria, 372 elderly women participated in the study.
Data collection took place in August 2022. Initially, a medical history was taken to characterize the participants' sociodemographic profile. The physical activity level questionnaire was also administered, followed by tests to assess functional autonomy.
To determine the physical activity level of the participants, the adapted Baecke questionnaire for habitual physical activity in older adults was used. This instrument consists of questions that should be answered considering the past 12 months, in relation to three domains: household activities, sports, and leisure activities (Ueno, 2013).
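As a rough illustration of how a total score can be assembled from the three domains mentioned above, the following minimal Python sketch treats the total as the sum of the household, sports, and leisure domain scores; the numeric values and the exact item weighting of the adapted Baecke instrument are assumptions for illustration only.

```python
# Minimal sketch (assumed structure): total habitual physical activity
# score built from the three Baecke domains described in the text.
# The numbers below are hypothetical and do not reproduce the
# instrument's actual item weights.
from dataclasses import dataclass


@dataclass
class BaeckeDomains:
    household: float
    sports: float
    leisure: float

    def total(self) -> float:
        """Total score as the sum of the three domain scores."""
        return self.household + self.sports + self.leisure


participant = BaeckeDomains(household=1.10, sports=0.56, leisure=0.35)
print(f"total score = {participant.total():.2f}")
```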
To analyze the total score of the physical activity level calculated by the questionnaire, the second quartile, or median, of the sample was used as the dividing point, identified as 2.01 (score). Thus, all scores below this value were classified as sedentary, while scores equal to or above this value were classified as active.
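The median-split classification described above can be expressed compactly in code. The sketch below uses hypothetical scores; only the rule (below the median = sedentary, equal to or above = active) is taken from the text.

```python
# Minimal sketch of the median-split classification described above.
# The scores are hypothetical; only the classification rule comes from the study.
import statistics

baecke_scores = [1.42, 1.59, 1.88, 2.01, 2.35, 2.68, 2.95]

cutoff = statistics.median(baecke_scores)  # the study reports a cutoff of 2.01 for its sample


def classify(score: float, cutoff: float) -> str:
    """Scores below the median are 'sedentary'; equal to or above are 'active'."""
    return "active" if score >= cutoff else "sedentary"


groups = [(score, classify(score, cutoff)) for score in baecke_scores]
print(f"cutoff = {cutoff:.2f}")
print(groups)
```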
To assess functional autonomy, the protocol developed by the Latin American Development Group for Maturity (GDLAM) was used. This protocol evaluates the functional autonomy of older adults through a battery of five tests: walking 10 meters (W10m), rising from the sitting position (RSP), rising from the ventral decubitus position (RVDP), putting on and taking off a t-shirt (PTTs), and sitting and rising from a chair and walking around the house (SRCW). These tests are combined in a formula to calculate the general index of functional autonomy (GI) (Dantas et al., 2014); a commonly published form of this index is reproduced below for reference.

The division of the group based on the total physical activity score indicated that 192 volunteers were active (X̅ = 2.68 ± 0.49) and 180 were sedentary (X̅ = 1.59 ± 0.27), ∆% = 1.09, p = 0.0001.
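For reference, the GDLAM general index of functional autonomy mentioned in the Methods above is commonly published (Dantas et al., 2014) in the form below, with all test times expressed in seconds and lower values indicating better autonomy; this expression is supplied as background and is an assumption, since the article's own formula was not reproduced in the text.

```latex
% Commonly published form of the GDLAM general index (reference only, assumed):
% all test times are in seconds; lower GI values indicate better autonomy.
\[
  GI = \frac{\left(W10m + RSP + RVDP + PTTs\right) \times 2 + SRCW}{4}
\]
```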
Table 5 presents the results for the comparison of functional autonomy between the groups of active and sedentary elderly participants.
Discussion
Table 3, which refers to the data collected in the anamnesis, shows that the highest concentration of participants was in the age group of 60 to 69 years (64.51%). It is common for younger older adults to be the majority in physical activity programs or to be more physically active, as older individuals are more likely to be affected by physical limitations due to chronic pain, for example, which diminishes their possibilities for practice (Ferretti et al., 2019; Sousa et al., 2022). Another variable that showed predominance was self-reported mixed race, with 48.39%, consistent with Brazil's racially mixed population, in which 47.00% of people self-declare as mixed race (Instituto Brasileiro de Geografia e Estatística [IBGE], 2022a).
Regarding marital status, in the present study, the largest proportion of the sample identified as single (45.16%). These data are corroborated by civil registration statistics, which show not only that people, in general, are getting married less, but also that marriages are shorter in duration, indicating an increase in divorce rates (IBGE, 2019).
In terms of the participants' educational attainment, the largest group had completed high school (35.48%), which aligns with data from the National Continuous Household Sample Survey - Education, showing that educational attainment decreases with age. Older adults account for 18% of the illiteracy rate in Brazil, although this percentage has been decreasing since 2016 (IBGE, 2020).
It was found that 41.94% of the evaluated sample had the occupation of taking care of the house and family on a daily basis, a common scenario among older adults, as they are represented in lower percentages in the workforce; in 2021, for example, older adults accounted for only 8.7% of the overall workforce. Additionally, 42.6% of the population outside the workforce in 2021 were older adults (IBGE, 2021).
Therefore, this percentage of older adults of working age who are out of the workforce may indicate that a portion of them is likely engaged in their daily household tasks and family care.
Another point observed in the present study was the monthly family income, reported by the majority (58.06%) as being up to 2 minimum wages, which is consistent with the average monthly household income in Brazil of US$ 283.00 (IBGE, 2022b). This average monthly family income is concerning, as only 14.9% of households have a single resident, meaning that in the remaining households this income is divided among several individuals (IBGE, 2022a). Furthermore, in 2018 it was reported that 20.6% of households had at least 50% of their income coming from older adults, corresponding on average to a 69.8% financial contribution within the household. Of this contribution, 56.3% came from pensions and/or retirement income, meaning that even though older adults are out of the workforce, they still contribute significantly to monthly household income (IBGE, 2018).
Regarding chronic diseases, 77.42% reported having a family history, and 87.10% had pre-existing conditions requiring medication use. This scenario is consistent with national data, as 52% of the population aged 18 years and older have some type of chronic disease, and 74.9% of older adults have at least one chronic disease requiring continuous medication for treatment (Camarano, 2022).
Aging can lead to an increase in stress levels, which can be caused by trauma, threats, difficulty in adaptation, tragedies, or other internal and external factors that can trigger stress and its associated health implications, such as anxiety and depression. In this sense, it is important for older adults to work on their self-control so that they can reduce or mitigate the emotional impact caused by stress-inducing situations (Moura, 2021). In relation to this, in the present study, the largest proportion of older women reported that they considered their level of stress self-control to be regular (45.16%), followed by good/excellent (35.48%). However, there is still room for improvement, as controlling stress, through practices such as regular physical activity, is important for the prevention, promotion, and/or maintenance of various health variables.
Regarding smoking and alcoholism, it was found that 93.55% were non-smokers and 83.87% did not consume alcoholic beverages, data supported by a study (Barbosa et al., 2018) that also identified a low prevalence of these types of consumption among the evaluated older adults, which is relevant considering that the use of these substances, especially when combined, is associated with health problems and a lower quality of life.
Regarding the assessment of functional autonomy, the majority of participants achieved a good classification in the tests (W10m: 54.84%; RSP: 48.39%; RVDP: 51.61%; SRCW: 54.84%; PTTs: 51.61%) and in the GI (48.39%). Participants in another study (Sousa et al., 2022) also underwent an evaluation of functional performance, revealing that 43.7% of older adults showed low functional physical performance, which contrasts with the findings of the present study and may be explained by the more active lifestyle of the population surveyed here.
Although the proportion of physically active older women (51.61%) was higher, it was only slightly above that of sedentary individuals (48.39%). These proportions are a direct consequence of using the median of the calculated scores as the cutoff point. Furthermore, since this evaluation was conducted prior to the start of an exercise program, some volunteers had no prior practice, while others had already participated in the program in the previous semester. In this regard, studies (Sousa et al., 2022; Vieira et al., 2022; Christoph et al., 2017; Grace et al., 2021; L'Gamiz-Matuk et al., 2014) affirm that the level of physical activity is directly related to the level of functional physical performance.
In the comparison of the GDLAM protocol variables between the active and sedentary older women (Table 5), the W10m test (p = 0.041; ∆% = -0.89), RSP test (p = 0.024; ∆% = -1.90), and RVDP test (p = 0.032; ∆% = -1.20) showed statistically significant differences, indicating that more active individuals perform better in activities of daily living that rely on lower-limb strength, the physical capacity most demanded by these tests. These three tests, however, rely less on coordination, balance, and agility. Higher levels of physical activity are also associated with better functional physical performance, highlighting the importance of regular participation of older adults in physical activity programs (Sousa et al., 2022; Vieira et al., 2022; Christoph et al., 2017; Grace et al., 2021; L'Gamiz-Matuk et al., 2014).
Limitations of this study include the lack of inclusion of more health variables to be evaluated, which could contribute to the generalization of the presented data and the characterization of the health profile of older women participating in the exercise program.
Therefore, it is suggested that future research includes the evaluation of additional variables.
Conclusion
The participants in this study were mostly in the age range of 60 to 69 years old, self-reported mixed-race ethnicity, single, with a high school education, engaged in daily household and family care, with a monthly family income of up to 2 minimum wages, having a family history of chronic diseases and pre-existing conditions, using medications, reporting regular levels of stress self-control, and being non-smokers and non-alcohol consumers.
Additionally, the majority had a good classification in all tests and in the functional autonomy index (GI). In the comparison of the GDLAM protocol variables, only the W10m, RSP, and RVDP tests showed statistically significant differences, in favor of the active group.
Overall, the obtained data suggest that with an increase in the level of physical activity, variables such as stress control and the completion times of the GDLAM protocol tests tend to show significant positive changes, converging towards the maintenance and/or improvement of multiple health variables. Therefore, the importance of regular participation of this population in physical activity programs is emphasized.
Table 1 .
Classification of the functional autonomy of elderly people (≥ 60 years).
Subtitle: W10m= walking 10 meters; RSP= rising from the sitting position; RVDP= rising from ventral decubitus position; PTTs= putting on and taking off a t-shirt; SRCW= sitting and rising from a chair and walking around the house; GI= functional autonomy index. Source: Dantas et al. (2014).

The Microsoft Office Excel 2016® software was used for data tabulation, as well as for presenting percentages, means, standard deviations, and maximum and minimum values of the results, and for calculating the body mass index and the GI from the GDLAM protocol. The research also utilized the BioStat 5.3® software, adopting a significance level of p < 0.05 (5% error rate). Descriptive statistics were reported as means, standard deviations, and maximum and minimum values. Normality of the data was assessed using the Kolmogorov-Smirnov test, and the independent samples t-test was used to compare the variables of the GDLAM protocol between the active and sedentary groups.

Each participant voluntarily expressed their consent by signing the Informed Consent Form, which included a thorough explanation of the risks and benefits, as well as the social relevance of the research and its advantages for the study subjects, particularly the elderly individuals.
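The normality check and between-group comparison described above can be sketched as follows. The study itself used BioStat 5.3; the code below is only an illustrative Python equivalent with synthetic data, and the group sizes and time values are placeholders rather than the study's measurements.

```python
# Minimal sketch of the statistical comparison described above, using
# synthetic data: a Kolmogorov-Smirnov normality check followed by an
# independent-samples t-test between the active and sedentary groups,
# with significance set at p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical W10m completion times (seconds) for the two groups
active_w10m = rng.normal(loc=6.5, scale=1.0, size=192)
sedentary_w10m = rng.normal(loc=7.1, scale=1.1, size=180)

# Normality check against a fitted normal distribution (one common KS variant)
for name, sample in [("active", active_w10m), ("sedentary", sedentary_w10m)]:
    ks_stat, ks_p = stats.kstest(
        sample, "norm", args=(sample.mean(), sample.std(ddof=1))
    )
    print(f"{name}: KS statistic = {ks_stat:.3f}, p = {ks_p:.3f}")

# Independent-samples t-test between groups
t_stat, p_value = stats.ttest_ind(active_w10m, sedentary_w10m)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}, significant = {p_value < 0.05}")
```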
Table 2 .
Descriptive data of age, weight, height, body mass index and GI of functional autonomy of the volunteers.
Table 3 displays the sociodemographic data of the participants collected through anamnesis, presented as absolute numbers and corresponding percentages.
Table 3 .
Sociodemographic data of the volunteers.

By analyzing the results of the GDLAM protocol tests for functional autonomy, it was possible to classify the participants for each test and for the overall GI based on the execution time of the tests and the age range. These data can be observed in Table 4, showing absolute numbers and respective percentages.
Table 4 .
Classification of the volunteers' functional autonomy.
Subtitle: W10m= walking 10 meters; RSP= rising from the sitting position; RVDP= rising from ventral decubitus position; PTTs= putting on and taking off a t-shirt; SRCW= sitting and rising from a chair and walking around the house; GI= functional autonomy index. Source: own authorship.
Table 5 .
Comparative assessment of functional autonomy between groups of active and sedentary volunteers.
Subtitle: N = number of participants; SD = standard deviation; s = seconds; W10m= walking 10 meters; RSP= rising from the sitting position; RVDP= rising from ventral decubitus position; PTTs= putting on and taking off a t-shirt; SRCW= sitting and rising from a chair and walking around the house; GI= functional autonomy index. Bold numbers indicate a p-value < 0.05. Source: own authorship.
|
2024-06-20T15:06:01.015Z
|
2024-06-17T00:00:00.000
|
{
"year": 2024,
"sha1": "757c57dfac61b772b5a1582bb0ef53b5ce45a833",
"oa_license": "CCBYNCSA",
"oa_url": "https://revistas.usantotomas.edu.co/index.php/rccm/article/download/9868/8389",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "034758101ca0f920e26ee522860d71b337de2d88",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|